Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.
On 2 August 2025, the provisions of the EU AI Act (the Act) on general-purpose AI (GPAI) models became applicable. Providers of GPAI models placed on the market after this date must comply with requirements such as maintaining up-to-date technical documentation and publishing training data summaries. More onerous obligations apply to providers of GPAI models deemed to pose "systemic risk". Our AI team has been advising extensively on the GPAI model provisions and would be happy to help with any queries on this important regime.
This edition brings you:
European Commission publishes mandatory template for AI training data disclosure
US launches AI Action Plan focusing on global expansion and reducing regulations
US President signs three executive orders on AI export, "woke" AI and data centre infrastructure
US Senator reintroduces the TRAIN Act for AI model data transparency for copyrighted works
China announces global AI governance plan emphasising international coordination
German Federal Office for Information Security publishes white paper on bias in AI
French data protection authority publishes AI recommendations on GDPR applicability, security, and training data annotation
1. European Commission publishes mandatory template for AI training data disclosure
On 24 July 2025, the European Commission's AI Office published a mandatory template for providers of GPAI models to disclose training data, as required under Article 53(1)(d) of the Act. The template for this obligation - which is effective from 2 August 2025 - aims to enhance transparency on the type of data that is used in training GPAI models while safeguarding rights such as trade secrets and data protection.
Key points include:
- Transparency requirements: Providers must publish a summary of data used across all stages of model training, including pre-training, fine-tuning, and alignment, in the form set by the Commission. The summary must then be updated at least every six months.
- Scope of disclosure: The summary provides a comprehensive overview of the data used to train a model, including details of public and private datasets, data scraped from the internet, users' data and synthetic data, as well as any other sources of data used. The level of disclosure varies across these categories, e.g. more granularity is required on scraped data.
- Multiple models: A single summary can cover multiple models or model versions, provided that the content of their respective summaries is the same.
- Timeline for implementation: For GPAI models placed on the market before 2 August 2025, providers should take the necessary steps to publish the summary by no later than 2 August 2027. For GPAI models placed on the market after 2 August 2025, immediate compliance is required.
You can read the mandatory template and explanatory document here.
2. US launches AI Action Plan focusing on global expansion and reducing regulations
On 23 July 2025, the White House published "Winning the AI Race: America's AI Action Plan" (the Plan), aligning with a recent executive order aimed at strengthening US leadership in AI (see update below). The Plan prioritises the global expansion of the US AI industry and reducing regulatory barriers to innovation.
Key policies include:
- Exporting American AI: Partnering with AI firms to deliver secure AI export packages, such as hardware, software, and standards, to nations allied to the US.
- Accelerating infrastructure development: Streamlining permits for data centres and semiconductor facilities, alongside initiatives to address workforce shortages in high-demand roles.
- Encouraging innovation: Removing federal regulations and seeking private sector input to identify further regulatory barriers.
- Evaluating national security risks: Ensuring the US government leads in assessing national security risks posed by advanced AI systems, including threats such as cyberattacks and vulnerabilities in critical infrastructure.
Read the Plan here.
3. US President signs three executive orders on AI export, "woke" AI and data centre infrastructure
On 23 July 2025, the US President signed three executive orders aimed at strengthening the country's leadership in AI, removing "woke" elements in AI systems, and accelerating the development of critical infrastructure.
The executive orders and their key provisions are:
- Promoting the Export of the American AI Technology Stack: Launching a programme to prioritise the global deployment of US-developed AI technologies, including full-stack solutions such as hardware, data pipelines, and sector-specific applications. Industry leaders are invited to submit export proposals, which must comply with US export control laws.
- Preventing "Woke AI" in the Federal Government: Updating federal procurement rules to prevent the government from procuring large language models that promote "diversity, equity, and inclusion" - initiatives which the Trump administration has criticised. Federal contracts will require compliance with the new rules, with non-compliance risking termination.
- Accelerating Federal Permitting of Data Centre Infrastructure: Fast-tracking the development of AI data centres through financial incentives, streamlined environmental reviews, and the use of federally owned lands. Qualifying projects will be listed on the Federal Permitting Dashboard to facilitate greater transparency and expedited approvals.
Read the executive orders here, here, and here.
4. US Senator reintroduces the TRAIN Act for AI model data transparency for copyrighted works
On 24 July 2025, US Senator Peter Welch reintroduced a bill in the Senate for the Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act, which seeks to protect creators by requiring AI companies to maintain and provide access to transparent training records. The TRAIN Act proposes to enable copyright holders to access training records of AI models via subpoena to determine if and how their works were used without authorisation and seek compensation for misuse.
The bill for the TRAIN Act was previously introduced in 2023 and 2024 without significant movement. With Congress divided on AI regulation, its prospects are currently uncertain. The bill has been referred to the Committee on the Judiciary for further review and consideration.
The press release and a link to the TRAIN Act can be found here.
5. China announces global AI governance plan emphasising international coordination
On 26 July 2025, China announced a global AI governance plan (the Plan), calling for international coordination to guide the development and use of AI. Premier Li Qiang proposed the creation of a global AI cooperation organisation and emphasised the need for a balanced approach to AI development and security. Li Qiang suggested that AI should be developed as a public good benefiting all of humanity, with a focus on supporting the Global South and promoting inclusive development.
The plan outlines the following:
- Global AI framework: China advocates for a global framework and rules to address AI risks and maximise its potential for economic and social development.
- AI empowerment: The plan promotes AI applications across industries such as healthcare, education, agriculture, and smart cities, while encouraging cross-border cooperation and best practice sharing.
- Infrastructure development: Emphasis on building global AI infrastructure, including clean energy, next-generation networks, and data centres, with support for developing nations.
- Sustainable AI: Advocacy for green AI technologies, energy efficiency standards, and environmentally friendly development models.
The press release and the Plan can be found here and here (the Plan is only available in Chinese).
6. German Federal Office for Information Security publishes white paper on bias in AI
On 24 July 2025, the German Federal Office for Information Security (BSI) published a white paper highlighting the risks of bias in AI systems. The report provides guidance for developers, providers, and operators of AI systems to identify, mitigate, and manage bias throughout the AI lifecycle.
The report identifies four key areas of focus:
- Types of bias: Descriptions of the types of bias that can arise during the AI lifecycle, including historical bias, representation bias, and algorithmic bias.
- Detection of bias: Methods for identifying bias, such as data analysis techniques and fairness metrics to evaluate model behaviour.
- Mitigation strategies: Strategies to address bias at different stages of development, including pre-processing, in-processing, and post-processing.
- Bias and cybersecurity: Analysis of how bias can exacerbate cybersecurity threats by compromising the confidentiality, integrity, and availability of AI systems - for example, membership inference attacks, in which attackers exploit a model's bias to extract sensitive data from it.
The BSI recommends checking AI projects for possible bias as early as the planning phase, clearly defining responsibility for AI outcomes, compiling diverse datasets that are continuously evaluated, and using technical correction methods such as reweighting or bias mitigation algorithms to actively reduce distortions.
Read the press release and the white paper here (only available in German).
7. French data protection authority publishes AI recommendations on GDPR applicability, security, and training data annotation
On 22 July 2025, the French Data Protection Authority (CNIL) published its latest recommendations on AI, focusing on GDPR applicability, security requirements, and training data annotation. These guidelines are designed to help stakeholders comply with legal requirements and manage risks when developing AI systems that process personal data.
Key highlights include:
- GDPR applicability: The CNIL provides guidance on determining whether AI models trained on personal data fall under GDPR and how to document this analysis.
- Security and annotation: Recommendations on security measures and on training data annotation practices to help develop reliable AI systems and protect individuals' rights.
- Data minimisation and retention: Developers are required to collect only the data necessary for the defined purpose and set clear retention periods to avoid keeping data longer than needed.
The CNIL is also developing practical tools, such as the PANAME project, to assess whether AI models process personal data. Further guidance on AI responsibilities under the GDPR is expected in late 2025.
The press release and the recommendations can be found here and here (the recommendations are only available in French).

















