Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.
This edition brings you:
UK and US Governments sign memorandum of understanding on Technology Prosperity Deal
Italy passes first national framework law on AI
European Commission opens consultation on digital omnibus
India issues AI Governance Framework for 2025-2026
China’s Cyberspace Administration issues draft rules on safeguarding minors
South Korea proposes easing of regulations on AI developers
US National Institute of Standards and Technology opens consultation on standards for documentation of AI datasets and models
Private members’ bill on AI and public sector progresses to UK House of Commons for further debate
1. UK and US Governments sign memorandum of understanding on Technology Prosperity Deal
On 18 September 2025, the UK and US governments signed a memorandum of understanding (MoU) establishing a framework for collaboration in science and technology, including AI, under the “Technology Prosperity Deal”. The MoU outlines key areas of cooperation, such as AI, civil nuclear fusion, quantum technologies, 6G, and advanced telecommunications.
The MoU includes a commitment by the UK and US to collaborate on advancing AI research, development and deployment. This will be furthered by joint initiatives, including the creation of shared datasets, and the establishment of cross-border research programmes. The MoU highlights the importance of regulatory alignment of pro-innovation AI frameworks between the UK and US and knowledge sharing in driving responsible innovation.
The MoU aims to foster international collaboration and could serve as a blueprint for global AI governance.
A ministerial-level working group will be established within six months, which will meet annually to determine the scope and direction of the next phase of collaborative programmes, taking into account emerging opportunities, policy developments, and shared strategic interests.
Read the MoU here.
2. Italy passes first national framework law on AI
On 26 September 2025, Law No. 132 of 23 September 2025 on “Provisions and Delegation to the Government on Artificial Intelligence” (the Law) was published in the Italian Official Gazette. The Law aims to implement the provisions of the AI Act without creating new obligations beyond its scope, but nevertheless introduces new national rules.
The Law is a comprehensive AI framework law which: (a) lays down AI regulation through principles and sector-specific rules, (b) establishes supervisory and enforcement bodies (“market surveillance authorities”), and (c) requires the Italian Government to pass secondary legislation to align Italian law with the AI Act.
The key measures include:
- Servers and Data Localisation: The Law provides that public administration e-procurement platforms should prioritise suppliers of AI systems and models that ensure the localisation and processing of strategic data within data centres located in Italy.
- Consent for Minors: Access to AI systems or models by minors under 14 requires parental consent. Minors aged 14 to 17 may provide their own consent for the processing of personal data related to AI systems, provided the information is clear and easily accessible.
- Copyright Protection: The Law amends copyright laws to ensure protection applies exclusively to works of human intellect, even if created with the assistance of AI tools, provided they represent the result of the author’s intellectual effort. Text and data mining using AI is permitted only with legitimate access and in line with EU copyright exceptions.
- Market surveillance authorities: The Agency for Digital Italy (AgID) is responsible for overseeing the evaluation, accreditation, and monitoring of AI conformity bodies and is appointed as the “notifying authority” under the AI Act. The National Cybersecurity Agency (ACN) is responsible for supervising AI systems specifically in relation to cybersecurity, including inspection and sanctioning activities, and acts as the market surveillance authority under the AI Act.
- Criminal and Civil Liability: The Law introduces new criminal offences, increases penalties for certain offences, and establishes aggravating circumstances for crimes involving the use of AI systems, such as the unlawful dissemination of AI-generated or altered content.
- Research: The Law promotes collaborative research between companies, research organisations, and technology transfer centres and facilitates access to high-quality data for AI development.
The Law will come into force on 10 October 2025.
Read the Law here (in Italian only).
3. European Commission opens consultation on digital omnibus
On 26 September 2025, the European Commission opened a consultation on the digital omnibus, focusing on simplification of digital regulation (the Consultation).
The purpose of the Consultation is to propose amendments, via a directive and a regulation, to reduce compliance costs and simplify current legislation. The digital omnibus focuses on amendments related to data, cookies, cyber security incident reporting and AI Act implementation.
The Consultation is aimed at companies providing digital services or products with a digital component in the EU, including small and medium-sized enterprises and mid-cap data-driven businesses.
Notably, the Consultation covers the following legislation: the Data Governance Act, Free Flow of Non-Personal Data Regulation, Open Data Directive, ePrivacy Directive, cybersecurity incident reporting obligations, the AI Act, and the European Digital Identity Framework. The Consultation aims to minimise cookie consent requirements, streamline cybersecurity reporting obligations, and promote effective application of the AI Act.
The Consultation will last until 14 October 2025.
Read the Consultation here.
4. India issues AI Governance Framework for 2025-2026
On 15 September 2025, India’s National Cyber and AI Center (NCAIC) issued its “AI Governance Framework for India 2025-26” (the Framework). The Framework aims to provide practical guidance on AI governance for ministries, regulatory bodies, the public sector, and organisations deploying AI in India. It is comprehensive and multifaceted, encompassing proposed risk classification, governance blueprints, assurance models, and implementation roadmaps.
The key measures in the Framework include:
- Risk Classification: Recommends classifying AI use into prohibited, high, medium, and low-risk categories and prohibiting biometric emotion recognition in hiring, education, credit, and social scoring.
- Assurance Systems: Recommends implementing comprehensive audits, evaluations, and attestations aligned with international standards such as ISO 42001 and the NIST AI Risk Management Framework.
- Implementation Roadmaps: Provides detailed 100-day, 12-month, and 24-month plans, including templates and checklists, to accelerate AI adoption.
- Governance Roles: Recommends a multi-tiered structure for organisations deploying AI, including AI Risk and Ethics Committees and Chief AI Risk Officers.
- Certification Framework: Recommends implementing a tiered certification system – Basic Compliance, Enhanced Assurance, and Premium Certification – supported by independent testing by the India AI Safety Institute. Enhanced Assurance certifications will undergo biennial audits, while Premium Certifications will require annual audits.
- Incident Reporting: Recommends mandatory reporting of incidents related to safety, security, and rights-related harms within six hours.
- Deep fakes: Calls for a coordinated response across platforms, media organisations, and government entities to limit the spread of malicious synthetic content. Measures include content identification and labelling systems, rapid takedown coordination mechanisms, public communication strategies to address misinformation without amplifying harmful content, collaboration with fact-checking organisations, and media literacy initiatives.
Read the Framework here.
5. China’s Cyberspace Administration issues draft rules on safeguarding minors
On 16 September 2025, the Cyberspace Administration of China (CAC) released draft rules (the Rules) outlining additional duties for online platform service providers with a large base of underage users or significant influence on minors. The Rules build on China’s 2025 regulation on the protection of minors online and aim to ensure that platforms bear primary responsibility for safeguarding minors.
Platform providers are defined broadly to include online product and service providers, manufacturers and sellers of smart devices, and providers of emerging internet technologies, applications, and products, including AI-powered services. Platforms qualify if they specifically target minors and have 10 million registered accounts or 1 million monthly active users; platforms that do not target minors still qualify if their underage users meet the same thresholds. Platforms are deemed to have a “significant influence” based on factors such as user numbers, sales, minors’ login frequency and duration, volume of youth-related content, and past violations involving minors.
The Rules set out that qualifying platforms must: (i) conduct regular impact assessments on minors’ online protection; (ii) offer a “minor mode” or section for underage users; (iii) implement compliance systems and publish annual social responsibility reports; (iv) draft rules outlining their obligations towards minors; and (v) suspend services for non-compliant providers.
The Rules are open for public comment until 15 October 2025.
Read the Rules here (in Mandarin only).
6. South Korea proposes easing of regulations on AI developers
On 15 September 2025, the South Korean Personal Information Protection Commission (PIPC) announced plans to ease regulatory barriers for AI development this year by revising current legislation and issuing new guidelines for copyrighted content (the Initiative). The Initiative reflects the government’s resolve to accelerate AI innovation and remove regulatory obstacles.
Notably, the key measures include:
- Copyright: Adopting fair-use guidelines for copyrighted content and revising laws to facilitate AI development. Establishing a framework for fair transactions and rewards for copyrighted content use by the end of 2025.
- Public Data: Adopting exemption guidelines for public servants handling public data in AI development by the end of 2025. Introducing measures to innovate pseudonymised data management systems.
- Self-driving vehicles: Revising laws such as the Personal Information Protection Act to allow the use of original videos in AI development and expanding the designated pilot zone for self-driving vehicles.
- AI Robots: Streamlining rules for robot installations in car parks and construction sites by the end of 2025.
Read the Initiative here (in Korean only).
7. US National Institute of Standards and Technology opens consultation on standards for documentation of AI datasets and models
On 12 September 2025, the National Institute of Standards and Technology (NIST) opened a public consultation (the Consultation) on the extended outline for the proposed “zero draft” of a standard on the documentation of AI datasets and models under its AI Standards Zero Drafts Pilot Project (the Project).
The Project applies to AI developers, providers, and organisations using datasets and models. The Project aims to develop practical templates for documenting AI datasets and models in a consistent and comparable manner, harmonising documentation practices across the AI lifecycle to promote transparency and trustworthiness in AI datasets and models.
Key features of the Project include:
- Templates: NIST proposes templates for dataset and model documentation that balance detail and flexibility. These templates incorporate testing, evaluation, verification, and validation (TEVV) descriptors, with appendices providing expanded examples and suggestions for specific fields.
- Dataset Documentation Template: This template includes fields such as dataset-identifying descriptors, intended use, usage rights, composition, evaluation, and maintenance.
- Model Documentation Template: This template includes fields such as model-identifying descriptors, intended use, training, evaluation, maintenance, and governance.
- Establishing Guidance: NIST aims to establish standards to improve transparency, comparability, and trustworthiness in AI.
The Consultation will remain open until 17 October 2025.
Read the Consultation here.
8. Private members’ bill on AI and public sector progresses to UK House of Commons for further debate
On 18 September 2025, a private members’ bill seeking to regulate the use of AI systems in decision-making processes in the public sector (the Bill) progressed from the House of Lords to the House of Commons. The Bill was introduced into the House of Lords on 9 September 2024, and has now completed all necessary stages in the House of Lords.
The key features of the Bill, if passed, would include:
- Mandatory Duties: Public authorities must complete and regularly update Algorithmic Impact Assessments and Algorithmic Transparency Records before deploying or procuring AI systems, and publish these within 30 days of completion or results being known.
- Logging: Systems must have logging capabilities, with logs retained for at least five years.
- Prohibited use: Public authorities are prohibited from using AI systems if effective scrutiny is prevented by contractual or technical barriers.
- Dispute Resolution: An independent dispute resolution service must be available in relation to decisions made by AI systems.
The Bill is currently undergoing its first reading in the House of Commons.
Read the Bill here.