Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.
This edition brings you:
UK House of Lords demands more copyright protection against AI for data bill
US House of Representatives passes 10-year moratorium on US state AI regulation
Switzerland releases guidance on applicability of data protection law to AI
US Department of Commerce rescinds AI Diffusion Rule
EU publishes draft report on the impact of AI on the financial sector
EU Committee on Internal Market and Consumer Protection adopted opinion recommending rejection of the AI Liability Directive
1. UK House of Lords demands more copyright protection against AI for data bill
On 20 May 2025, the House of Lords proposed an amendment to the UK Government’s Data (Use and Access) Bill (the Bill) for the third time, as peers led by Baroness Kidron again backed a copyright-focused amendment aimed at helping rights holders identify if their copyright content has been used by AI developers.
The Bill was first unveiled in Parliament in October 2024 and is a landmark piece of legislation aimed at modernising how data is used and accessed across public and private sectors. Baroness Kidron has proposed copyright/AI-related amendments to the Bill on behalf of the creative industries to require the UK Government to tackle the difficult issues that arise, notwithstanding that the UK Government is still working through the responses it received to its consultation on the issues (which closed in February 2025).
The key concern from the House of Lords has been that the UK does not offer the creative sector protection against its works being used to train AI models without notification, consent or compensation. Baroness Kidron’s current proposed amendment, as backed by the House of Lords, would provide transparency by requiring AI companies to disclose the sources of their training data and how that data was used.
The House of Commons had previously voted down earlier versions of Baroness Kidron’s amendment, arguing it would lead to ‘piecemeal’ legislation and pre-empt the ongoing AI and copyright consultation.
Baroness Kidron’s latest revised amendment would require businesses making AI models available in the UK and targeting UK users to disclose detailed information about the training materials used at all stages of AI model development (e.g. pre-training, training, fine-tuning, and retrieval-augmented generation). This information must be clear, relevant, accurate and accessible to copyright holders.
The Bill is now back with the House of Commons for consideration of the amendment. The Government has previously tabled concessions, including a commitment to publish a report on the recent AI and copyright consultation and to conduct an economic impact assessment of the legislation within the next 12 months.
Read more here.
2. US House of Representatives passes 10-year moratorium on US state AI regulation
On 22 May 2025, the House of Representatives voted to advance a bill that would prohibit states from enforcing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for 10 years (the Moratorium). The proposed Moratorium is part of a budget reconciliation process advanced by the Trump administration.
The Republicans claim that the Moratorium will help the US safeguard its investment in AI and compete with China. They argue that allowing 50 different state-level approaches to AI regulation would hinder progress and suppress innovation, particularly for small startups.
The Moratorium includes some narrow exceptions, including that the ban would not apply to laws intended to remove “impediments” to AI models or streamline AI licensing or procurement.
The Moratorium now goes to the Senate, where it will likely be reviewed for compliance with the Byrd rule to ensure that reconciliation provisions are relevant to federal spending. If passed, the Moratorium would halt new and existing state AI regulations, including in California and Colorado, which have taken significant steps to regulate the use of AI.
Read more here.
3. Switzerland releases guidance on applicability of data protection law to AI
On 8 May 2025, the Swiss Federal Data Protection and Information Commissioner (FDPIC) issued guidance confirming that the existing Swiss Data Protection Act (DPA) applies directly to data processing operations involving AI.
The FDPIC noted that the DPA requires AI systems to be transparent, mandating the need to disclose the AI systems’ purpose, how they operate, and the data sources processed. As provided in the DPA, the right to transparency is closely linked to the right of data subjects to object to automated data processing or to demand that automated individual decisions be controlled by a human being. For example:
- Where a user interacts with an AI system, the user must be informed that they are communicating with a machine and whether their inputs will be used to train or otherwise enhance the system.
- Any AI system that fabricates or alters the faces, images, or voices of identifiable individuals (i.e. deepfakes) must be clearly labelled.
- High-risk AI systems that process data must also undergo data protection impact assessments.
Read the guidance here.
4. US Department of Commerce rescinds AI Diffusion Rule
On 13 May 2025, the US Department of Commerce initiated a rescission of the Biden administration’s AI Diffusion Rule. The AI Diffusion Rule was set to come into effect on 15 May 2025.
The AI Diffusion Rule would have restricted chip exports to jurisdictions such as China, Russia, Iran, and North Korea, while limiting exports to around 120 countries and exempting 18 allies such as Australia, Canada, the UK, and Japan.
The Bureau of Industry and Security also announced measures to tighten export controls on overseas AI chips, including issuing guidance:
- alerting industry of the risks associated with using Chinese advanced computing integrated circuits
- warning the public about the potential consequences of US AI chips being used for training and inference of Chinese AI models
- to US companies on protecting supply chains against diversion tactics
The Bureau initiated a Federal Register notice to formalise the rescission. The US Government stated that it will issue a replacement rule in the future.
Read more here.
5. EU publishes draft report on the impact of AI on the financial sector
On 14 May 2025, the European Parliament’s Committee on Economic and Monetary Affairs published a draft report on the impact of AI on the financial sector.
The report examines the use and impact of AI in the EU’s financial services sector and the regulatory landscape. It concludes that, to date, AI deployment in financial services has been prudent, with only a small number of high-risk use cases and no prospect of the financial system being heavily dependent on autonomous, auto-pilot AI models that could endanger market stability or consumer interests.
The draft highlights overlapping and sometimes unclear interactions between the EU AI Act and existing financial services legislation. It notes a lack of authoritative guidance on how these rules should be interpreted together and highlights that certain GDPR requirements around data minimisation and purpose limitation may constrain AI applications in the sector.
Notably, the draft report:
- Calls for the European Commission to provide clear guidance on the application of current financial services regulations to the use of AI;
- Calls for consistent definitions and a simplification of the regulatory framework to avoid duplicate obligations, including overlapping risk assessment reporting requirements;
- Warns against the adoption of new sectoral legislation to regulate AI in financial services; and
- Encourages European and national supervisory bodies to foster AI adoption by interpreting existing rules consistently and refraining from unduly stringent enforcement.
Read the draft report here.
6. EU Committee on Internal Market and Consumer Protection adopted opinion recommending rejection of the AI Liability Directive
On 20 May 2025, the European Parliament’s Committee on Internal Market and Consumer Protection adopted an opinion recommending the rejection of the AI Liability Directive (the Opinion). The Directive – which would have created a new fault-based liability regime for harm caused by AI, building on the EU AI Act – was already unlikely to progress.
The Opinion states that the proposal is unnecessary in light of the EU AI Act and the revised Product Liability Directive, which already impose stricter duties on AI providers and deployers.
The Opinion also flags concern about the lack of empirical market data, the Commission’s reliance on hypothetical cases in its impact assessment, and unresolved definitional and procedural questions.
Read the Opinion here.