AI View: April 2025

14 April 2025

Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

This edition brings you:

  1. EU releases AI Continent Action Plan

  2. Connecticut proposes new AI governance law

  3. AI faces closer scrutiny from Bank of England

  4. EU AI Office launches survey to expand AI literacy practices repository

  5. Hong Kong PCPD issues guidelines on employees’ use of generative AI

  6. US updates federal AI policy with new guidance memoranda

  7. European Commission publishes Digital Europe Work Programme for 2025-2027 with €1.3 billion investment in AI

1. EU releases AI Continent Action Plan

On 9 April 2025, the EU Commission unveiled the AI Continent Action Plan, aiming to position Europe as a global leader in AI. This comprehensive strategy focuses on five key pillars:

  • Building large-scale AI infrastructure: Establishing a network of “AI Factories” around Europe’s supercomputers to support startups, industry, and researchers. Plans include developing AI gigafactories equipped with approximately 100,000 state-of-the-art AI chips to enhance computing power for complex AI models.
  • Enhancing access to high-quality data: Creating data labs to curate large volumes of data within AI Factories. A “Data Union Strategy” will be launched to foster a unified internal market for data, facilitating scalable AI solutions.
  • Promoting AI adoption in strategic sectors: Introducing the “Apply AI Strategy” to develop tailored AI solutions and boost their adoption in key public and private sectors. European AI innovation infrastructures, including AI factories and digital innovation hubs, will play a significant role.
  • Strengthening AI skills and talent: Facilitating international recruitment of AI experts through initiatives like the Talent Pool and AI fellowship schemes offered by the upcoming “AI Skills Academy”. Educational and training programmes on AI and generative AI will be developed to prepare specialists and support workforce upskilling.
  • Simplifying regulations: Launching the “AI Act Service Desk” to assist businesses in complying with the AI Act, serving as a central hub for information and guidance.

Read more here.

2. Connecticut proposes new AI governance law

On 9 April 2025, the Connecticut General Assembly proposed Senate Bill No. 2 (the Bill), which sets out a wide-ranging regulatory framework for the development and use of AI within the state.

Key features of the Bill include:

  • Scope: The Bill applies to any entity “doing business” in Connecticut, including those based outside the state or the country if they market their services to Connecticut residents.
  • Risk-based regulation: The Bill targets “high-risk” AI systems used in consequential decisions (e.g. hiring, healthcare, credit scoring). Developers and deployers must take reasonable steps to mitigate foreseeable risks, including the risk of algorithmic discrimination, and implement governance and impact assessment measures.
  • Consumer protections and transparency: Entities must notify individuals when AI is involved in decision-making, provide clear explanations of its use, and offer mechanisms for human review or appeal. Synthetic or AI-generated content must be clearly disclosed.
  • Enforcement power: Enforcement would fall exclusively to the Connecticut Attorney General, with a formal notice-to-cure process for addressing non-compliance.

The Bill has passed the Senate Committee on General Law and is now under consideration by the Connecticut House of Representatives before heading to the Governor for final approval. If enacted, most provisions would take effect from 1 October 2025, with core obligations for developers, integrators and deployers beginning on 1 October 2026.

Read the Bill here.

3. AI faces closer scrutiny from Bank of England

On 9 April 2025, the Bank of England’s Financial Policy Committee (FPC) released the “Financial Stability in Focus” report, examining the implications of AI on the UK’s financial system. The report highlights the emerging risks associated with AI adoption in the finance sector.

The key risks include:

  • Reliance on AI for core decision-making: Banks’ and insurers’ increasing reliance on AI for core financial decision-making could introduce financial stability risks if common weaknesses in widely used models cause many firms to misestimate certain risks, leading to mispricing and misallocation of credit.
  • Market manipulation: Autonomous AI systems could identify and exploit market weaknesses, potentially triggering or amplifying financial volatility. The FPC warns that such models might learn that stress events increase profit opportunities, and that periods of extreme volatility are beneficial for the firms they were trained to serve, leading them to take actions that heighten the likelihood of such events.
  • Herding and market concentration: The widespread use of similar AI models may result in firms taking increasingly correlated positions and acting in a similar way during periods of stress, thereby amplifying shocks.
  • Operational dependencies: Financial institutions generally rely on providers outside the financial sector for AI-related services. Reliance on a limited number of AI service providers could introduce systemic risks, as malfunction in a widely used AI system might have cascading effects across the financial sector.
  • Cybersecurity threats: While AI could increase firms’ cyber defensive capabilities, it could also increase malicious actors’ capabilities to carry out cyberattacks against financial institutions.

The FPC stated that a key area of focus for the Bank of England will be working with industry, including through the AI Consortium (a collaborative AI policy group), to understand how AI is being deployed and to share good practice for managing AI-related risks. The FPC also highlighted the potential need for regulators to evolve existing guidance and regulation to support the safe adoption of AI across the industry.

Read the full report here.

4. EU AI Office launches survey to expand AI literacy practices repository

On 2 April 2025, the EU AI Office initiated a survey to gather examples of AI literacy practices from organisations across Europe to support the implementation of Article 4 of the AI Act. This effort aims to enhance the existing living repository of AI literacy initiatives, promoting learning and exchange among AI system providers and deployers.

The repository, introduced during the AI Pact webinar on AI literacy in February 2025, currently features over 20 practices contributed by AI Pact organisations.

Organisations are encouraged to share their AI literacy experiences through the survey. Submissions will undergo verification by the AI Office to ensure they meet criteria of transparency and reliability before inclusion in the public repository. A dedicated website for the living repository will soon be developed.

Read more here and respond to the survey here.

5. Hong Kong PCPD issues guidelines on employees’ use of generative AI

On 31 March 2025, the Office of the Privacy Commissioner for Personal Data (PCPD) in Hong Kong released the “Checklist on Guidelines for the Use of Generative AI by Employees”. This document aims to assist organisations in formulating internal policies governing employees’ use of generative AI tools.

Key recommendations to organisations include:

  • Scope of permissible AI use: Clearly define which generative AI tools are approved, their intended purposes (e.g. drafting, summarising information, creating content), and the applicability of the policies.
  • Personal data protection: Provide explicit instructions on the types and amounts of information that can be input into generative AI tools, permissible uses of output information, storage guidelines, data retention policies, and adherence to related internal policies such as data handling and information security.
  • Lawful and ethical use: Emphasise that employees must not use generative AI tools for unlawful or harmful activities.
  • Bias prevention: Inform employees that they are responsible for verifying the accuracy of AI-generated outputs through proofreading and fact-checking, correcting and reporting biased or discriminatory outputs, and following guidelines on when and how to watermark or label AI-generated content.
  • Data security: Specify the types of devices authorised for accessing generative AI tools (e.g. employer-provided work devices) and the categories of employees permitted to use them (e.g. those with operational needs, relevant training, and prior permission). Require the use of robust user credentials, stringent security settings, and mandate the reporting of AI-related incidents, such as data breaches or the unauthorised input of personal data.
  • Violations of policies or guidelines: Outline the possible consequences for employees who violate these policies or guidelines.

To support organisations in implementing these guidelines, the PCPD has launched an AI Security Hotline for enquiries related to AI usage and to assist in the safe and privacy-compliant adoption of AI technologies.

Read more here.

6. US updates federal AI policy with new guidance memoranda

On 7 April 2025, the US Office of Management and Budget (OMB) published two new memoranda (M-25-21 and M-25-22) to reshape how federal agencies adopt and procure AI. While maintaining the former administration’s core protections for public trust and civil rights, the current administration positions AI as a strategic tool for efficiency, global competitiveness, and national strength.

Key updates include:

  • Chief AI Officers redefined: Agencies must designate or reaffirm Chief AI Officers, now expected to act as “change agents” responsible for accelerating AI deployment, assessing AI maturity, and aligning investment strategies with national priorities.
  • High-impact AI oversight: AI systems that materially affect legal rights, access to public services, or personal safety must undergo ongoing risk assessments, human oversight, and performance monitoring.
  • Continuity with earlier guidance: While the new memos emphasise innovation and operational efficiency, they continue to uphold key tenets from prior policies, such as requiring AI impact assessments and inventories of AI use across federal agencies.
  • Procurement reforms: Agencies must avoid vendor lock-in, favour open standards and American-developed AI systems, and retain IP rights over training data, while performance-based contracts will replace more prescriptive reporting regimes.
  • Public accountability: While agencies remain required to maintain public AI inventories and apply safeguards to high-impact use cases, the new guidance removes an explicit public notice rule, replacing it with more general language around human appeal rights and review access.

The memoranda form part of a broader strategy to scale responsible AI use in the federal government. Implementation will depend on individual agency action, supported by central oversight and inter-agency coordination.

Read more about the memoranda here.

7. European Commission publishes Digital Europe Work Programme for 2025-2027 with €1.3 billion investment in AI

The European Commission has adopted the 2025–2027 Work Programme for the Digital Europe Programme (the Work Programme), allocating €1.3 billion to accelerate the deployment of key digital technologies, with a strong emphasis on AI. The funding aims to support the use of reliable AI across public administrations and businesses, strengthen Europe’s technological sovereignty, and align with the EU’s broader digital goals.

Key AI initiatives under the programme include:

  • Support for AI testing and experimentation facilities to enable real-world validation of AI technologies in sectors such as healthcare and manufacturing.
  • Investment in secure, trustworthy AI systems, including efforts to align deployment with the forthcoming AI Act.
  • Promotion of AI in sectors like law enforcement and crisis response, with safeguards for fundamental rights.
  • Funding to improve the availability and quality of datasets for the development and fine-tuning of AI models.
  • Tailored support to increase AI adoption by small and medium-sized enterprises, including through Digital Innovation Hubs.

The Commission will launch funding calls under the Work Programme from April 2025, open to entities in EU Member States and in countries associated with the Digital Europe Programme.

Read about the Work Programme here and the Commission’s investment here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.