AI View: July 2025

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

23 July 2025


Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

This edition brings you:

  1. European Commission publishes General-Purpose AI model Code of Practice

  2. European Commission publishes General-Purpose AI model guidelines

  3. Spain’s data protection authority ready to enforce EU AI Act’s bans

  4. European Parliament releases study on training, creation and regulation of generative AI and copyright

  5. UK FCA publishes terms of reference for AI Live Testing Initiative and opens application window

  6. UK Office of Communications publishes discussion paper on attribution measures to combat harmful deepfakes

  7. Singapore launches the Global AI Assurance Sandbox

  8. Germany launches the AI Service Desk

  9. South Korea proposes a presidential decree amendment to strengthen the National AI Committee

1. European Commission publishes General-Purpose AI model Code of Practice

On 10 July 2025, the European Commission published the final version of its General-Purpose AI (GPAI) Code of Practice, a voluntary framework designed to help GPAI model providers demonstrate compliance with their obligations under the EU AI Act.

The Code of Practice is currently under review by Member States and the Commission. Providers can sign up to the Code of Practice to reduce administrative burdens and gain legal certainty - though it does not confer legal immunity.

The Code of Practice is divided into three chapters: Transparency, Copyright, and Safety & Security. The first two apply to all GPAI providers, while the third is limited to providers of the most advanced GPAI models with systemic risk.

  • Transparency: This chapter explains how providers can comply with their obligations to produce technical documentation and information for downstream integrators of their models, providing a Model Documentation Template to facilitate transparency. The AI Office is shortly expected to issue a template allowing providers to publish a summary of their training data, as also required under the EU AI Act.
  • Copyright: This chapter sets out requirements on GPAI model providers relating to their obligation to put in place a policy to comply with EU law on copyright and related rights, and also to respect any reservations of rights in training data that they use for their GPAI models.
  • Safety and Security: This chapter applies only to GPAI models with systemic risk, defined as models with high-impact capabilities that could affect public health, safety, or fundamental rights at scale. Providers must develop a Safety and Security Framework, conduct systemic risk assessments, and implement mitigations before releasing models, as well as produce Model Reports.

Read the European Commission’s press release which contains a link to the Code of Practice here.

2. European Commission publishes General-Purpose AI model guidelines

On 18 July 2025, the European Commission published new guidelines clarifying the scope of the GPAI model category under the EU AI Act (the Guidelines). The Guidelines address the scope of the regime, while the Code of Practice addresses how to comply with the relevant obligations once a model falls within it.

The Guidelines focus on four key topics:

  • Definition of GPAI model: The Guidelines introduce clear technical criteria to determine when an AI model qualifies as “general-purpose”: an AI model is considered a GPAI model if it was trained using computational resources exceeding 10^23 floating point operations and if it can generate language (whether as text or audio), or generate images or video from text.
  • Providers of GPAI models: The Guidelines clarify the concepts of “provider” and “placing on the market” and clarify when an actor modifying a GPAI model is considered to become a provider.
  • Exemptions: The Guidelines clarify under what conditions providers of GPAI models released under a free and open-source licence and satisfying certain transparency conditions may be exempt from certain obligations under the EU AI Act.
  • Enforcement: The Guidelines complement the GPAI Code of Practice and explain the implications for providers of GPAI models that choose to adhere to and implement the GPAI Code of Practice.
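By way of illustration only, the two-limb definition described above can be sketched as a simple check. This is not an official tool: the function, names, and modality labels below are hypothetical, and the legal analysis turns on the Guidelines themselves.

```python
# Illustrative sketch of the GPAI classification criteria described in the
# Guidelines. All names here are hypothetical, not an official Commission tool.

GPAI_COMPUTE_THRESHOLD_FLOP = 1e23  # training compute threshold (10^23 FLOP)

GENERATIVE_MODALITIES = {
    "text",           # language generation as text
    "audio",          # language generation as audio
    "text-to-image",
    "text-to-video",
}

def is_gpai_model(training_compute_flop: float, output_modalities: set[str]) -> bool:
    """Both limbs must be met: training compute above 10^23 FLOP
    AND at least one qualifying generative modality."""
    exceeds_compute = training_compute_flop > GPAI_COMPUTE_THRESHOLD_FLOP
    has_generative_modality = bool(output_modalities & GENERATIVE_MODALITIES)
    return exceeds_compute and has_generative_modality

# Example: a large language model trained with roughly 5 x 10^24 FLOP
print(is_gpai_model(5e24, {"text"}))   # True: both limbs satisfied
print(is_gpai_model(1e22, {"text"}))   # False: below the compute threshold
```

The point of the sketch is that the test is conjunctive: a model below the compute threshold, or one without a qualifying generative modality, falls outside the GPAI category as the Guidelines describe it.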

The Guidelines are accessible here. Read the European Commission’s press release here.

3. Spain’s data protection authority ready to enforce EU AI Act’s bans

On 15 July 2025, Spain’s data protection authority (AEPD) announced that it is ready to act against prohibited AI systems under the EU AI Act that process personal data, even before the Act’s full enforcement regime comes into force.

While Spain has not yet adopted national legislation to implement its enforcement network for the EU AI Act, the agency has made clear that its existing powers under data protection law allow it to intervene where AI systems infringe on fundamental rights. This includes AI systems banned under Article 5 of the EU AI Act, such as those involving social scoring or untargeted scraping of facial images to create a facial recognition database.

As we have previously reported, there is a common misunderstanding about enforcement of the EU AI Act. While it has been reported (including by potential regulators) that enforcement can begin from 2 August 2025, this is fundamentally incorrect. Enforcement can only begin from 2 August 2026.

Read the announcement here (in Spanish only) and Simmons & Simmons’ publication on the enforcement of the EU AI Act here.

4. European Parliament releases study on training, creation and regulation of generative AI and copyright

On 9 July 2025, the European Parliament’s Policy Department for Justice, Civil Liberties and Institutional Affairs published a study examining the challenges that generative AI poses to EU copyright law. The report highlights a growing legal mismatch between the practices of AI model training and the current framework of text and data mining (TDM) exceptions, as well as the uncertain status of AI-generated content under existing law.

  • TDM exceptions: The study finds that current TDM exceptions in the EU are not fit for the scale and nature of generative AI training. The opt-out mechanism for rightsholders is fragmented and difficult to enforce, leaving many creators without real control or compensation.
  • AI-Generated Outputs: Under EU law, only works reflecting meaningful human input and creativity are protected by copyright. The study suggests that fully machine-generated content should not be protected by copyright. However, the line between AI-assisted and AI-generated works is increasingly blurred, creating legal uncertainty, and the study argues that any protection of AI-assisted works should be harmonised.
  • Remuneration and Value Gap: There is currently no mechanism to ensure that creators are paid when their works are used to train AI models. The study recommends introducing a statutory remuneration scheme, managed by collective management organisations, to ensure fair compensation.
  • Policy Recommendations: In order to address immediate coordination gaps, the study recommends establishing a dedicated Working Group on AI and Copyright to ensure political follow-up and structured inter-committee dialogue. It also recommends establishing a six-month High-Level Expert Group to deliver enforceable technical standards and pilot remuneration prototypes, including assessing whether a machine-readable interim opt-out tag is a workable solution. For long-term governance, the study proposes creating a specialised AI & Copyright Unit within the EU AI Office.

The study warns that without reform, the EU risks legal uncertainty, market concentration, and the erosion of incentives for human creativity. It urges policymakers to act swiftly to balance innovation with the protection and fair remuneration of authors.

Read the report here.

5. UK FCA publishes terms of reference for AI Live Testing Initiative and opens application window

On 15 July 2025, the UK Financial Conduct Authority (FCA) launched its AI Lab, a new initiative designed to support the safe and responsible use of AI in UK financial markets. The AI Lab aims to deepen the regulator’s understanding of AI’s risks and opportunities while fostering collaboration with firms and stakeholders.

The AI Lab comprises five key components:

  • Supercharged Sandbox: In partnership with NVIDIA, the FCA is enhancing its Digital Sandbox to provide firms with access to advanced computing power, enriched datasets and tooling. From October 2025, firms will be able to test early-stage proofs of concept in a controlled environment. Applications are now open.
  • AI Live Testing: Open until 20 August 2025, this programme invites firms with material AI use cases to test their solutions in live market conditions. Eligible applicants must demonstrate a developed proof of concept, a clear deployment plan, and a commitment to post-deployment monitoring and collaboration with the FCA.
  • AI Spotlight: This initiative showcases real-world AI use cases in financial services. Projects selected for the AI Spotlight are featured on a dedicated digital platform and were presented at a Showcase Day held at the FCA’s London office in January 2025. Applications remain open.
  • AI Sprint: Held in January 2025, the FCA’s AI Sprint brought together regulators, academics, technologists and consumer representatives to explore how to shape a regulatory environment that supports AI innovation. A summary of the event has been published.
  • AI Input Zone: Between November 2024 and January 2025, the FCA invited stakeholders to share their views on transformative AI use cases, regulatory barriers, and whether current frameworks are fit for purpose. The FCA is now reviewing the feedback received.

Read more about the FCA’s AI Lab here.

6. UK Office of Communications publishes discussion paper on attribution measures to combat harmful deepfakes

On 11 July 2025, the UK Office of Communications (Ofcom) released its second discussion paper on deepfake defences, titled “Deepfake Defences 2 - The Attribution Toolkit”. The paper explores the role of attribution measures, such as watermarking, provenance metadata, AI labels and context annotations.

The paper follows Ofcom’s earlier work on deepfakes and the Online Safety Act, which requires regulated services to assess and mitigate risks posed by illegal and harmful content, including deepfakes. This latest publication draws on literature reviews, expert interviews, user research and technical evaluations to assess the effectiveness of attribution tools.

Ofcom identifies four key attribution measures:

  • Watermarking: Embeds invisible signals into content to indicate it was generated by AI. These can be detected by platforms using algorithms, enabling traceability and moderation at scale.
  • Provenance metadata: Attaches detailed information about the origin and editing history of content, including the tools and models used. This metadata can be cryptographically bound to content and displayed to users via labels.
  • AI labels: Visible icons that indicate content has been generated or edited using AI. These are designed to be simple and user-friendly, helping users engage more critically with content.
  • Context annotations: Provide additional information or alternative viewpoints about content, often through expert or crowdsourced contributions. These annotations can help users identify deepfakes and reduce the spread of misleading content.

The paper highlights that while each measure has its strengths, they also face limitations. For example, watermarking tools can be removed through simple edits like cropping, and AI labels may be misinterpreted or ignored by users. Provenance metadata can be stripped or manipulated, and context annotations may be delayed or gamed by bad actors.

To address these challenges, Ofcom recommends a multi-layered approach. Key takeaways include the need for standardisation across attribution tools, combining multiple measures for greater effectiveness, and ensuring that platforms - not just users - bear responsibility for identifying and mitigating deepfakes. Ofcom also calls for further research, testing, and international collaboration to improve the robustness and usability of these tools.

The paper will inform Ofcom’s upcoming consultation on fraudulent advertising and its broader regulatory strategy under the Online Safety Act.

Read the full report here.

7. Singapore launches the Global AI Assurance Sandbox

On 7 July 2025, Singapore unveiled three major initiatives to strengthen trust in AI and data governance: the Global AI Assurance Sandbox, a new Privacy Enhancing Technologies (PETs) Adoption Guide, and the elevation of the Data Protection Trustmark to a national standard.

  • Global AI Assurance Sandbox: The Infocomm Media Development Authority (IMDA) and AI Verify Foundation have expanded the Global AI Assurance Sandbox following a successful pilot launched at the Paris AI Action Summit. The Sandbox now includes new AI archetypes such as agentic AI and addresses risks like data leakage and prompt injection vulnerabilities. It offers practical tools and testing guidance, including IMDA’s Starter Kit for Safety Testing of LLM-based applications. The Sandbox is open to sector regulators and aims to inform policy guidance and future accreditation for AI testers.
  • PETs Adoption Guide: To support privacy-preserving AI development, IMDA and the Personal Data Protection Commission released a new PETs Adoption Guide. The guide includes a Use Case Evaluation Tool and an Implementation Checklist to help organisations assess, adopt and deploy PETs securely and effectively. It builds on insights from Singapore’s PET Sandbox and is intended as a living document, evolving with industry feedback and new use cases.
  • Data Protection Trustmark: IMDA elevated the Data Protection Trustmark to a new Singapore Standard (SS 714:2025), aligning it with global benchmarks and best practices. This new standard provides clear data protection requirements around critical areas such as third-party management and overseas data transfers. Organisations that demonstrate accountable data protection practices can now apply to be certified with the new Data Protection Trustmark.

Read more about these AI initiatives here.

8. Germany launches the AI Service Desk

On 3 July 2025, Germany’s Federal Ministry for Digital Transformation launched the AI Service Desk at the Bundesnetzagentur (the Federal Network Agency). The platform is designed to help businesses, particularly SMEs and start-ups, navigate the EU AI Act by offering practical guidance and compliance tools.

The AI Service Desk includes an interactive “compliance compass” to assess whether systems fall under the EU AI Act, and provides examples, training resources, and AI literacy guidance. Germany’s digital minister described the initiative as “business- and innovation-friendly”, aiming to position the country as a leader in responsible AI development.

The Bundesnetzagentur, which is expected to play a central role in implementing the EU AI Act in Germany, will continue to expand the AI Service Desk over time.

The AI Service Desk is accessible here (in German). Read Bundesnetzagentur’s press release here.

9. South Korea proposes a presidential decree amendment to strengthen the National AI Committee

South Korea has unveiled a proposed presidential decree amendment aimed at significantly expanding the powers and structure of its National AI Committee (the Committee). The Ministry of Science and ICT announced that the revised decree, now open for public consultation until 28 July 2025, would transform the Committee into a presidential-level “control tower” for national AI policy.

If adopted, the amendment will empower the Committee to do more than just coordinate between ministries. It will be able to deliberate and decide on the national AI vision, mid- and long-term strategies, and key projects, including policies for building and using AI-related data. The number of participating ministries will be expanded, and the Committee will be permitted up to three vice chairs (including at least one full-time), with newly appointed civilian members guaranteed a fixed two-year term.

A new “AI Officer Council” will also be established, bringing together vice-ministerial officials and local deputy governors to support policy coordination and implementation across government. The overhaul is intended to lay the groundwork for a nationwide AI transformation and help position South Korea among the world’s top three AI powers.

Public feedback on the proposal is open until 28 July 2025.

Read more here (in Korean only).

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.