AI View - November 2025

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

12 November 2025


Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

A few initial points:

  • For a summary of the recent Getty Images v Stability AI decision from the English High Court, please see our article here.
  • Over 1,000 people registered for our recent AI and Legal Privilege webinar. Please find the recording here. We are also launching our AI / Legal Privilege Guide and Policy which is intended to help organisations navigate the potentially significant risks in this tricky area. Please contact charmaine.mirandilla@simmons-simmons.com for further information, including on early bird pricing.
  • We are aware of press reports around a leak of the European Commission’s proposed changes to the EU AI Act. We will report on these in AI View once confirmed.

This edition brings you:

  1. China updates Cybersecurity Law to reinforce AI oversight and toughen penalties for cybersecurity breaches

  2. US Senate introduces bill restricting children's access to AI chatbots

  3. US and Japan sign memorandum of cooperation regarding Technology Prosperity Deal to accelerate AI adoption and innovation

  4. European Data Protection Supervisor publishes guidance on data protection-compliant use of generative AI systems

  5. Australia announces consultation on updates to copyright laws to address AI challenges

  6. Singapore Cybersecurity Agency opens consultation on guidelines for securing agentic AI systems

1. China updates Cybersecurity Law to reinforce AI oversight and toughen penalties for cybersecurity breaches

On 28 October 2025, the Standing Committee of the 14th National People’s Congress (NPC), the body of approximately 150 members that exercises legislative functions when the full NPC is not in session, concluded its second reading of proposed amendments to China’s Cybersecurity Law. The revised law is scheduled to take effect on 1 January 2026. This second amendment is intended to align China’s core cybersecurity framework more closely with its strategic priorities in AI, and to ensure consistency with related legislation, including the Data Security Law (DSL), in effect since 1 September 2021, and the Personal Information Protection Law (PIPL), in effect since 1 November 2021.

One notable addition to the law will be a new article providing explicit legal support for AI development. This includes measures to promote fundamental research in AI, the construction and management of training data resources and computing infrastructure, the integration of AI into cybersecurity management, and the advancement of AI ethics and safety governance.

The revised law will also introduce more stringent legal liabilities for offences that compromise network and information security. This encompasses violations affecting the security of network operations, products and services, as well as the protection of information and personal data.

These amendments form part of China’s broader legislative initiative to modernise and unify its approach to digital governance.

Read the Decision on Amending the Cybersecurity Law of the People's Republic of China in full here (only available in Mandarin).

2. US Senate introduces bill restricting children's access to AI chatbots

On 28 October 2025, the Guidelines for User Age-verification and Responsible Dialogue Act (GUARD Act / S. 3062) (the Bill) was introduced in the United States Senate, setting out comprehensive requirements for AI chatbot providers regarding user age verification and responsible service provision.

Under the proposed Bill, all AI chatbot providers would be required to verify the age of users before granting access to their services. Minors identified through the verification process would be prohibited from accessing AI chatbot services. Existing accounts would be frozen until users provide verifiable age data, while new users would be subject to an age verification process at the point of account creation. Providers would also be required to conduct regular re-verification of users’ ages and may engage third parties to perform these verification processes. The Bill stipulates that any data collected for age verification must be limited to what is strictly necessary, encrypted, retained only for as long as required, and must not be sold or shared.

In addition, the Bill mandates that AI chatbots must clearly disclose their AI nature at the outset of each conversation and at regular intervals thereafter. Chatbots are expressly prohibited from presenting themselves as professionals or from offering legal, financial, medical, or psychological advice, and must remind users to seek guidance from licensed professionals for such matters. Violations of the Bill can lead to fines of up to $100,000.

The Bill has been referred to the Senate Judiciary Committee for consideration and may be subject to amendment before potentially advancing to the full Senate for debate and a vote.

Read the Bill here.

3. US and Japan sign memorandum of cooperation regarding Technology Prosperity Deal to accelerate AI adoption and innovation

On 28 October 2025, the United States and Japanese Governments signed a Memorandum of Cooperation (MOC) in Tokyo as part of the Technology Prosperity Deal to advance strategic technology collaboration. The MOC is designed to foster collaboration between the two countries in the strategic science and technology sector, with a particular focus on AI frameworks, enforcement of protection measures, export promotion, industry standards, and digital wellbeing.

Through this agreement, the US and Japan have committed to working closely together to promote pro-innovation AI policy frameworks. Key objectives include supporting the development of a US and Japan-led AI technology ecosystem, establishing a partnership to ensure the enforcement and strengthening of existing protection measures, and promoting a mutual understanding of guidelines and frameworks for AI development and adoption to harmonise practices and encourage integration.

The MOC also aims to advance cooperation between the US Center for AI Standards and Innovation and the Japan AI Safety Institute, with the goal of promoting AI innovation and the development of robust industry standards.

Read the MOC here.

4. European Data Protection Supervisor publishes guidance on data protection-compliant use of generative AI systems

On 28 October 2025, the European Data Protection Supervisor (EDPS) published version 2 of its guidance on data protection compliance for the use of generative AI systems (the Guidance). The revised Guidance is addressed to European Union institutions, bodies, offices, and agencies (EUIs) that utilise generative AI and process personal data.

The updated Guidance is intended to assist EUIs in meeting their data protection obligations under Regulation (EU) 2018/1725, taking into account the rapid pace of technological advancement and the specific challenges presented by generative AI. Key updates include a refined definition of generative AI to enhance clarity and consistency, and the introduction of a new, action-oriented compliance checklist to support EUIs in assessing and ensuring the lawfulness of data processing activities involving generative AI.

Further, the Guidance clarifies the allocation of roles and responsibilities, assisting EUIs in determining whether they act as controllers, joint controllers, or processors in the context of generative AI systems. It also offers detailed advice on establishing lawful bases for processing, applying the principle of purpose limitation, and managing data subjects’ rights in relation to generative AI.

It is important to note that this Guidance is issued by the EDPS in its capacity as the independent data protection authority for EUIs, and not as a market surveillance authority under the EU AI Act.

Read the Guidance here.

5. Australia announces consultation on updates to copyright laws to address AI challenges

On 26 October 2025, the Australian Attorney-General announced the commencement of a government consultation on potential reforms to Australia’s copyright laws to address the challenges arising from AI technologies.

As part of this process, the Copyright and AI Reference Group will consider a range of issues, including options to encourage fair and legal access to copyrighted material for AI use. The Group will assess whether to establish a new paid collective licensing framework for AI applications under the Copyright Act, or whether a voluntary licensing framework would be more appropriate. The Government has already confirmed that it will not introduce a text and data mining exception, maintaining the prohibition on AI developers using creators’ works without payment.

In addition, the consultation will seek to clarify the application of copyright law to AI-generated content and explore mechanisms to make enforcement of rights more affordable, such as the potential creation of a small claims forum to efficiently address lower-value copyright infringement matters.

Read the official press release here.

6. Singapore Cybersecurity Agency opens consultation on guidelines for securing agentic AI systems

On 22 October 2025, the Cyber Security Agency of Singapore (CSA) initiated a public consultation on an addendum to its existing Guidelines and Companion Guide on Securing AI Systems, which were published in October 2024.

The Guidelines were developed to protect AI systems against cybersecurity risks including supply chain attacks and Adversarial Machine Learning, and can be accessed here. The Companion Guide was created in collaboration with AI and cybersecurity practitioners to provide practical measures, security controls and best practices from the industry and academia, and can be accessed here.

The addendum focuses on the security of agentic AI systems, i.e. systems capable of independent planning, action, and decision-making. It sets out voluntary, risk-based security measures intended for system owners, AI practitioners, and cybersecurity professionals. Key controls addressed in the guidance include supply chain security, model and system hardening, authorisation, limiting system autonomy, and continuous monitoring.

To support practical implementation, the addendum provides examples for use cases such as coding assistants, client onboarding, and fraud detection. It provides a comprehensive foundation for AI security principles by taking a risk-based approach to AI development, whilst introducing new considerations to help identify potential threats to AI systems. The addendum is intended for informational purposes and is not mandatory, prescriptive, or exhaustive.

The consultation period is open until 31 December 2025.

Read the official announcement of the consultation here, and the draft addendum here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.