AI View - September 2025

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

03 September 2025

Publication


Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

This edition brings you:

  1. United Nations establishes Global Dialogue on AI Governance

  2. UK Data (Use and Access) Act, which mandates a report on the use of copyright works in AI training, enters into force

  3. Colorado delays effective date of Colorado AI Act

  4. China National Cybersecurity Standardisation Technical Committee publishes AI security standard application practice cases

  5. Reserve Bank of India releases report on framework for responsible and ethical enablement of AI in financial sector

  6. South Korea launches reorganised National AI Strategy Committee

1. United Nations establishes Global Dialogue on AI Governance

On 18 August 2025, the United Nations (UN) General Assembly established the Global Dialogue on AI Governance and the Independent International Scientific Panel on AI (the Panel).

The Global Dialogue on AI Governance is an international forum that will convene for two days each year for governments and stakeholders to discuss:

  • The development of safe, secure and trustworthy AI

  • Interoperability and compatibility of AI governance approaches

  • Robust human oversight of AI systems in a manner that complies with international law

  • The development of open-source software, open data and open AI models

The Panel will comprise 40 members drawn from a range of disciplines and countries. The Panel will advise the UN on the opportunities, risks and impacts of AI and will also produce a flagship annual study. One of its key functions is to advise and inform the Global Dialogue on AI Governance.

Read more about this development in the UN press release here.

2. UK Data (Use and Access) Act, which mandates a report on the use of copyright works in AI training, enters into force

On 20 August 2025, the UK Data (Use and Access) Act (the Act) came into force. The Act is a landmark piece of legislation aimed at modernising how data is used and accessed across the private and public sectors. 

Notably, the Act includes a provision requiring an inquiry into the use of copyright works in the development of AI systems. Section 136 of the Act requires the Secretary of State to prepare and publish a report on the use of copyright works in the development of AI systems within nine months of the Act being passed (i.e. by 18 March 2026).

In the report, the Secretary of State must consider the four policy options outlined in the UK Government's "Copyright and AI Consultation Paper", as well as any alternative options the Secretary of State considers appropriate. The report must also consider and make proposals in relation to the following:

  • Technical Measures and Standards: Including measures and standards concerned with metadata that may be used to control: (a) how copyright works are used to develop AI systems; and (b) how AI systems access copyrighted works (e.g. via web crawlers).
  • Impact on Developers: How copyright affects data access and use by AI developers, especially smaller businesses and individuals.
  • Transparency: The disclosure by AI system developers of information regarding what copyrighted material they use and how they accessed it.
  • Licensing: How licences for using copyrighted material can be granted to AI developers of all sizes.
  • Enforcement: Ways to enforce restrictions relating to the access to and use of copyrighted works in the development of AI systems, including through a regulator.

The Secretary of State is also required to have regard to the responses to the UK Government's Consultation Paper, as well as to approaches taken in other jurisdictions.

Read section 136 of the Act here.

3. Colorado delays effective date of Colorado AI Act

On 26 August 2025, Colorado senators passed a special-session bill (SB 004) to postpone the effective date of the Colorado AI Act (CAIA) from 1 February 2026 to 30 June 2026, allowing more time for potential revisions.

Originally passed in 2024 as Senate Bill 24-205, the CAIA was intended to become the first comprehensive law in the United States regulating AI systems involved in critical decisions such as employment, housing, loans, and healthcare. From the effective date, the CAIA will require developers of high-risk AI systems to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.

This extension grants legislators nearly an entire regular session in 2026, which starts in early January, to negotiate amendments before the 30 June 2026 implementation deadline. For businesses using AI systems, the delay provides extra time to develop compliance programmes. They might also benefit from some relief in the form of an amendment if lawmakers and stakeholders can reach a consensus in 2026.

In addition to local disagreement over the CAIA's potential impact on businesses, Colorado lawmakers also face pressure at the federal level, including the proposed 10-year moratorium on state AI regulation (ultimately removed from the One Big Beautiful Bill Act) and the White House's AI Action Plan, which threatens to withhold federal funding from states that implement comprehensive AI laws. It remains to be seen whether the Colorado legislature will reach an agreement on amendments to the CAIA in 2026, especially as stakeholders had originally planned to propose changes during the 2025 regular legislative session to align with the CAIA's initial effective date.

Read the official update here.

4. China National Cybersecurity Standardisation Technical Committee publishes AI security standard application practice cases

On 26 August 2025, China's National Cybersecurity Standardisation Technical Committee launched a public consultation, open until 1 September 2025, on a proposed list of AI security standard application practice cases (the Practice Cases).

The consultation seeks feedback on the proposed list. The Practice Cases are intended to serve as examples of how national cybersecurity standards, particularly those relating to AI security, can be applied in real-world scenarios. The initiative aims to strengthen the implementation of AI security standards, promote best practice, and ensure these standards are applied consistently across industries.

The nine Practice Cases include:

  • Alibaba Cloud: "Identification Methods for AI Generated Synthetic Content" standard under the "Detection, Watermarking, Verification" full-link technical solution (GB 45438-2025)

  • NetEase: "Identification Methods for Generated Synthetic Content" standard in AI service content audit systems (GB 45438-2025)

  • Beijing Xiaoju Technology: Generative AI service security standards in travel service scenarios

  • Beijing Douyin Technology: "Identification Methods for Generated Synthetic Content" standard in short video platform dissemination

  • Migu Culture Technology: Application practice of GB/T 45654-2025 and other generative AI security national standards in security large models

  • China Mobile Communications Group: Application of the "Basic Security Requirements for Generative AI Services" (GB/T 45654-2025) standard in AI security evaluation scenarios

  • iFLYTEK: "Basic Security Requirements for Generative AI Services" standard in multiple fields

  • Lenovo: "Basic Security Requirements for Generative AI Services" standard in the field of AI PC security and compliance

  • Beijing Zero One Everything Technology: AI industry Large Language Model practice and application

Read the Practice Cases here.

5. Reserve Bank of India releases report on framework for responsible and ethical enablement of AI in financial sector

On 13 August 2025, the Reserve Bank of India (the RBI) released the "Framework for Responsible and Ethical Enablement of AI (the FREE-AI Framework)" report, introducing a comprehensive framework for the responsible adoption of AI in India's financial sector.

The FREE-AI Framework is a set of guidelines that is targeted at all RBI-regulated entities, including banks, Non-Banking Financial Companies (i.e. entities that offer financial services but lack a banking licence), and financial technology firms.

The report emphasises a balanced approach that fosters innovation while mitigating risks, anchored in seven core Sutras (principles), including innovation, fairness, accountability, safety, resilience, and sustainability, and supported by 26 targeted recommendations.

The recommendations are structured around two main strategic objectives:

  • Innovation Enablement: Its three pillars - Infrastructure, Policy, and Capacity - include recommendations such as creating a financial sector data infrastructure, establishing an AI innovation sandbox, developing indigenous AI models, integrating AI with India's digital public infrastructure, and promoting capacity-building across regulated entities and regulators.

  • Risk Mitigation: It addresses potential harms through three pillars - Governance, Protection, and Assurance. Key recommendations include requiring regulated entities to implement board-approved AI policies, robust data lifecycle governance, and AI-specific product approval processes. It also emphasises consumer protection measures, enhanced cybersecurity protocols, red teaming exercises, AI-specific business continuity plans, and a structured incident reporting system.

Read the FREE-AI Framework here.

6. South Korea launches reorganised National AI Strategy Committee

On 2 September 2025, South Korea launched the National AI Strategy Committee (the Committee). The Committee has expanded powers and broader membership and becomes the highest body for reviewing, coordinating, and deciding AI policy in South Korea.

The Committee, which replaces the existing National AI Committee, has wider authority across a broader range of areas, including security, governance, and cross-sectoral oversight.

Key changes to the role of the Committee include granting it decision-making power over defining the national AI vision, formulating medium-to-long-term strategies, coordinating inter-ministerial policies, managing implementation and performance evaluations, and promoting data utilisation. This development marks a shift from the previous administration's industry-focused model towards a more comprehensive national strategy.

The Committee's membership is expanded to include five additional ministers representing the defence, health, environment, labour, and SMEs sectors. A new council of Chief AI Officers, comprising deputy ministers and provincial government representatives, will also be established to oversee policy implementation.

Further information on the Committee can be found here and the official announcement can be found here (in Korean only).

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.