AI View: March 2025

18 March 2025


Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

This edition brings you:

  1. AI Bill reintroduced to the House of Lords

  2. European Commission publishes third draft of general-purpose AI Code of Practice

  3. OECD publishes common framework for AI incidents reporting

  4. European Commission publishes model contractual clauses for procurement of non-high-risk AI systems

  5. Japanese government passes new AI bill

  6. Utah legislature sends AI consumer protection bill to governor for approval

  7. South Korea issues guidelines to prevent user harm from generative AI services

1. AI Bill reintroduced to the House of Lords

On 4 March 2025, Lord Holmes presented the Artificial Intelligence (Regulation) Bill (the Bill) to the House of Lords for the second time. The Bill is currently awaiting its second reading, in what is the latest effort to introduce AI-specific legislation to the UK.

The Bill proposes a range of governance and regulatory frameworks, including the creation of an AI Authority: a regulatory body tasked with ensuring that key regulatory principles are followed in the creation, development and use of AI. Under the Bill, the AI Authority would be responsible for:

  • ensuring the development and utilisation of AI in accordance with key regulatory principles, including safety, transparency, fairness, accountability and contestability

  • ensuring alignment with sector-specific regulators in respect of AI

  • assessing and monitoring risks across the economy arising from AI

  • collaborating with relevant regulators to construct regulatory sandboxes for AI to provide opportunities for companies to test AI technology in real-world conditions

  • implementing a public consultation regarding the risks, opportunities and appropriate frameworks

The proposed legislation also includes a number of design requirements, such as provisions on data protection, privacy and intellectual property law. Under the Bill, regulations must require any person involved in training AI to supply the AI Authority with a record of all third-party data and IP used in that training, and to assure the AI Authority that such data and IP are used with informed consent and in compliance with all applicable IP and copyright obligations. The Bill also provides that any person supplying a product or service involving AI must give users clear health warnings and labelling, and must allow third parties accredited by the AI Authority to audit its processes and systems.

Read the proposed Bill in full here.

2. European Commission publishes third draft of general-purpose AI Code of Practice

On 11 March 2025, the general-purpose AI (GPAI) Code of Practice (the Code) entered its final round of drafting, with a streamlined structure compared to previous drafts and with refined commitments and measures based on stakeholder feedback.

This revised draft is based on a list of broad commitments and provides more detailed measures to implement these commitments. It includes two sections detailing transparency and copyright obligations for all GPAI providers, with certain exemptions for open-source models in line with the AI Act (AIA). It also includes a user-friendly Model Documentation Form allowing signatories to document the required transparency and copyright information relating to the relevant GPAI model in one place.

The third section of the draft applies only to providers of the most advanced GPAI models, those classified as posing systemic risk in accordance with Article 51 of the AIA, although it is a major focus of the Code. This section outlines measures for systemic risk assessment and mitigation, including model evaluations, incident reporting, and cybersecurity obligations.

The revisions are drawn from feedback on the second draft, which was published in December 2024. Alongside this third draft, the authors have introduced an executive summary and interactive website to facilitate stakeholder engagement.

Separately and independently from the new draft of the Code, the AI Office is developing a template for the public summary of GPAI training data required by Article 53(1)(d) of the AIA. It is also providing guidance on AIA rules, clarifying the scope of the rules and exemptions for open-source models.

Stakeholders are invited to submit feedback by 30 March 2025. Workshops will be offered to ensure comprehensive engagement, with the final Code expected by May to aid compliance with the AIA by GPAI providers.

Read the latest draft of the Code here and the interactive website published by its authors here.

3. OECD publishes common framework for AI incidents reporting

The Organisation for Economic Co-operation and Development (OECD) has published a report which outlines a common framework (the Framework) for incident reporting relating to AI.

The Framework aims to create a global standard for reporting AI-related incidents that can be adopted across jurisdictions and sectors. It would enable countries to adopt a common reporting approach that could then be tailored to ensure compliance with local regulation and reporting requirements.

The Framework consists of 29 criteria and is designed to assist policymakers in comprehending AI incidents in various contexts, identifying high-risk systems, assessing both current and emerging risks, and evaluating the impact of AI on individuals and the environment.

Read the Framework here.

4. European Commission publishes model contractual clauses for procurement of non-high-risk AI systems

On 5 March 2025, the European Commission published an updated version of its model contractual clauses, originally released in September 2023. These model contractual clauses aim to support responsible AI procurement.

This updated version includes both more comprehensive clauses for high-risk AI systems and lighter clauses for lower-risk AI systems. The new model contractual clauses are accompanied by practical guidance on implementing procurement practices that comply with AI regulations.

The model contractual clauses are drafted to ensure that parties to a contract concerning the procurement of AI are in compliance with the AIA. With varying levels of onerousness depending on the risk classification of the relevant AI system, the model contractual clauses include measures to contractually require such parties to:

  • establish risk and quality management systems

  • provide the necessary technical documentation for the relevant AI system

  • implement measures to ensure adequate levels of human oversight

  • comply with adequate transparency and data governance obligations

Read model contractual clauses here.

5. Japanese government passes new AI bill

On 28 February 2025, the Japanese government passed a bill on AI, allowing the state to assess cases of misuse of AI, conduct investigations and order firms to take corrective measures.

The bill obliges companies to cooperate with the government's AI measures. It stipulates that businesses that infringe on the rights and interests of the public, such as by spreading false information, will be investigated and ordered to take corrective measures. If companies violate human rights in their use of AI, their names will be disclosed to the public.

However, the bill leaves it to companies themselves to improve the safe and transparent use of AI and does not include penalty provisions. Malicious cases of AI misuse will instead face penalties under existing laws, including the criminal code and copyright law.

The bill states that a task force, led by Prime Minister Shigeru Ishiba and all Cabinet ministers, will be established to shape AI policy and develop a national AI strategy.

The bill also states the government should promote international cooperation, such as by actively participating in the creation of international AI standards based on the Hiroshima AI Process.

Read more here.

6. Utah legislature sends AI consumer protection bill to governor for approval

On 13 March 2025, the Utah legislature sent a generative AI consumer protection bill (Senate Bill 226) to the governor for final approval, after it was signed by the Speaker and the President of the Senate on 8 March 2025.

Senate Bill 226 defines terms such as 'generative artificial intelligence' and 'high-risk artificial intelligence interaction' and outlines disclosure obligations for suppliers using generative AI in the provision of consumer transactions or regulated services.

In particular, Senate Bill 226 requires clear disclosure when individuals are interacting with generative AI, with specific provisions for verbal and written interactions. These disclosures are intended to help consumers understand how their data is being used and the nature of the services they are receiving.

Senate Bill 226 also establishes liability for businesses that violate consumer protection laws related to AI.

Read Senate Bill 226 here.

7. South Korea issues guidelines to prevent user harm from generative AI services

On 28 February 2025, the Korea Communications Commission introduced new guidelines (the Guidelines) to mitigate user harm associated with generative AI services, which will come into effect on 28 March 2025.

The Guidelines set out six implementation methods for protecting users of generative AI services:

  • Protection of user personality rights: Developers and service providers must implement algorithms and monitoring systems to detect and prevent content that infringes on users' personality rights.

  • Efforts to inform users about the decision-making process: Service providers should clearly communicate that outputs are AI-generated and provide accessible information about the AI's decision-making process.

  • Efforts to respect diversity: Developers and service providers should reduce bias in algorithms and provide users with intuitive methods to report and address potentially discriminatory outputs.

  • Management of input data collection and utilisation: Service providers must inform users about data usage for training and allow them to consent or opt out.

  • Responsibility and participation in resolving issues: Service providers should define responsibilities for AI outputs and establish systems to minimise potential harm to users.

  • Efforts to ensure healthy distribution and dissemination of generated content: Service providers should guide users to avoid inappropriate content and review prompts and outputs to ensure they comply with moral and ethical standards.

The Guidelines also offer best practices to enhance user protection, addressing concerns like deepfake crimes and discrimination.

Read the report here and the Guidelines here (only available in Korean).

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.