AI View: October 2025

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

29 October 2025

Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

This edition brings you:

  1. California enacts legislation on AI transparency and consumer protection (AB 853, SB 243 and AB 316)

  2. G7 Cyber Expert Group publishes statement on cyber security risks arising from AI

  3. Financial Stability Board publishes report on monitoring AI adoption and related vulnerabilities in the financial sector

  4. European Commission publishes scientific reports on GPAI regulation under the EU AI Act

  5. UK announces ‘AI Growth Lab’ to promote real-world AI system testing

  6. Australia’s National AI Centre publishes guidance on responsible AI governance

  7. Croatia proposes criminalisation of endangerment of life or property by an AI system

1. California enacts legislation on AI transparency and consumer protection

On 13 October 2025, California enacted three new bills addressing AI transparency, consumer protection, and the use of AI autonomy as a civil defence.

  • AB 853 delays the entry into force of the California AI Transparency Act. Originally scheduled to take effect on 1 January 2026, the law will now enter into force on 2 August 2026. From 1 January 2027, it will require providers of large online platforms which offer generative AI (GenAI) functionality to provide users with a free AI detection tool to assess whether image, video, or audio content was altered by the provider’s GenAI system. Further transparency provisions are set to be rolled out in 2028.
  • SB 243 places transparency obligations on providers of companion chatbots: where a reasonable user would believe they are interacting with a human, the provider must notify the user that they are interacting with an AI system, with more stringent disclosure requirements for users whom the operator knows to be minors. Providers must also maintain a protocol to ensure that users are not served AI-generated self-harm or suicide-related content, and must report annually to the Office of Suicide Prevention on steps taken to detect and respond to suicidal ideation by users. A person injured as a result of a breach of these provisions may bring a civil claim against the provider.
  • AB 316 prevents a defendant who has developed, modified, or used an AI system, and who is alleged in civil proceedings to have caused harm, from raising the defence that the harm was caused autonomously by that system. The law does not, however, prevent a defendant from relying on other affirmative defences based on issues such as causation, foreseeability, or the comparative fault of another person.

Read the full text of the laws here: Delaying the AI Transparency Act (AB 853); Protection in respect of companion chatbots (SB 243); Civil Defences (AB 316)

2. G7 Cyber Expert Group publishes statement on cyber security risks arising from AI

On 6 October 2025, the G7 Cyber Expert Group (CEG) issued a statement for stakeholders in the financial sector on the AI-related cybersecurity risks to which financial institutions may be particularly exposed, outlining action points for safe and responsible AI adoption.

Key takeaways include:

  • Benefits of AI: the CEG emphasises that AI can have beneficial cybersecurity functions, including detecting fraud, identifying network threats, and anticipating system failures much more quickly and effectively than would be possible through human oversight alone.
  • Increasing existing cyber risk: the CEG also notes that AI uptake among malicious actors has the potential to significantly increase risks, such as AI-powered phishing, or evolutions in malware capable of launching increasingly sophisticated attacks.
  • AI adoption-related vulnerabilities: malicious actors may directly target AI systems themselves, via prompt injection, data poisoning, or engineering data leaks.

The CEG’s statement highlights that financial institutions should proactively consider how to manage cybersecurity risks when adopting AI, addressing key issues such as whether adequate governance safeguards are in place, how data security is ensured, and whether incident response playbooks specifically address the risks of AI-related incidents.

The CEG recommends measures such as investing in secure AI development, identifying opportunities to use AI to defend against risks (both cyber and non-cyber), and ensuring staff are fully AI-literate, as institutions without a sufficient knowledge base may be much more vulnerable.

Read the full statement here.

3. Financial Stability Board publishes report on monitoring AI adoption and related vulnerabilities in the financial sector

On 10 October 2025, the Financial Stability Board (FSB) issued a report on AI adoption in the financial sector and the vulnerabilities it may pose. The FSB promotes international financial stability by coordinating national- and international-level bodies and identifying potential risks and vulnerabilities facing the global financial sector.

The report highlights that many national financial authorities are still at an early stage of developing processes to monitor AI-related vulnerabilities in the financial sector, relying primarily on publicly available data, industry surveys, and outreach. Most authorities, however, plan to develop their data collection practices. The report aims to support this by highlighting key concerns for member authorities in monitoring AI usage and risk, and by providing guidance on areas such as identifying vulnerabilities, improving processes, and ensuring data is timely and representative.

The FSB outlines several possible strategies to assess vulnerabilities:

  • National authorities could develop their monitoring approaches by collaborating with domestic stakeholders, building closer supervisory relationships with financial institutions, and engaging with AI firms and other key entities to build their knowledge of AI risks.
  • National authorities could also encourage data sharing between national financial regulators and non-financial authorities to improve monitoring efforts.
  • The FSB and standard-setting bodies can encourage cross-border cooperation, promote alignment on indicators of vulnerabilities, and address remaining data gaps in relation to particularly challenging vulnerabilities.

Read the full report here.

4. European Commission publishes scientific reports on GPAI regulation under the EU AI Act

On 10 October 2025, the European Commission’s research arm, the Joint Research Centre, published six new reports on general-purpose AI (GPAI) models, including key benchmarks and frameworks with the potential to inform how the EU AI Act is applied to GPAI models.

The reports address key concerns for providers and modifiers of GPAI models, covering key themes such as:

  • GPAI identification: outlining when a model can be considered GPAI (and thus when the relevant obligations will be triggered under the EU AI Act), and providing metrics linked to relevant cognitive domains to assist providers in identifying when their model is at risk of being considered GPAI.
  • Systemic risk: clarifying metrics influencing whether a GPAI model will be categorised as involving ‘systemic risk’ (which is subject to a higher degree of regulatory scrutiny), including compute thresholds, safety benchmarks, and high-impact capabilities.
  • Behavioural change: assessing when a modified GPAI model will be considered a ‘new’ model (and will therefore trigger new compliance obligations).

The six reports are titled:

  • A Framework for General-Purpose AI Model Categorisation
  • A Framework to Categorise Modified General-Purpose AI Models as New Models Based on Behavioural Changes
  • A Proposal to Identify High-Impact Capabilities of General-Purpose AI Models
  • Training Compute Thresholds – Key Considerations for the EU AI Act
  • The Role of AI Safety Benchmarks in Evaluating Systemic Risks in General-Purpose AI Models
  • General-Purpose AI Model Reach as Criterion for Systemic Risk

Read the full reports here.

5. UK announces ‘AI Growth Lab’ to promote real-world AI system testing

On 21 October 2025, the Department for Science, Innovation and Technology announced the launch of the AI Growth Lab, a new scheme for testing AI systems in real-world conditions through regulatory ‘sandboxes’.

Key takeaways include:

  • Cross-sector: the scheme will target key sectors including healthcare, manufacturing, professional services and transport, allowing providers to deploy an AI system in a live situation with regulatory supervision (involving both AI experts and conventional regulators).
  • Dynamic regulation: the statement emphasises the need to refine the UK regulatory environment to promote innovation and growth through responsible AI usage. Sandboxes allow particular regulations to be ‘switched off’ (such as those requiring financial services advice to be explainable, a requirement that ‘black box’ AI systems cannot always meet) to explore how AI tools which would otherwise be prohibited can promote better outcomes, and ultimately to encourage wider AI adoption.
  • Call for evidence: the government is running a public call for evidence, seeking the views of individuals and organisations on the scheme, including whether it should be managed by the government or by existing regulatory bodies, and expert opinions on the technical implementation of sandboxes.

Read more and view the call for evidence here.

6. Australia’s National AI Centre publishes guidance on responsible AI governance

On 21 October 2025, the Australian National AI Centre (NAIC) published two guidance documents on responsible AI governance and adoption.

The first, ‘Foundations’, is a streamlined document targeting organisations with little previous experience of AI. The second, ‘Implementation Practices’, is addressed to governance professionals, technical experts, and organisations developing AI models or systems or operating in high-risk or sensitive fields; it addresses more technical elements and provides a more comprehensive analysis of the NAIC’s expectations.

The guidance is the NAIC’s first update to the Voluntary AI Safety Standard, originally published in September 2024. Drawing on national and international ethical standards, the updated version sets out six essential practices and is now targeted at developers, as well as deployers, of AI systems.

The six practices are:

  • Decide who is accountable
  • Identify and engage stakeholders
  • Measure and manage risks
  • Share essential information
  • Test and monitor
  • Maintain human control

The guidance emphasises that developers and deployers are ultimately accountable for the consequences of AI usage, and advises organisations to proactively define and communicate how responsibility for AI usage is allocated across the organisation.

The NAIC also provides a toolkit of practical compliance steps, including a screening tool for organisations to assess the level of governance required for particular AI systems, and template AI policies and registers to support organisations in monitoring AI usage effectively. Further tools and resources are planned to be rolled out over the next year.

Access the full guidance here.

7. Croatia proposes criminalisation of endangerment of life or property by an AI system

On 16 October 2025, Croatia’s Ministry of Justice, Administration and Digital Affairs announced a proposed amendment to the Croatian Criminal Code (the Bill), which would introduce a new offence of endangering life or property through the development, testing, verification, supervision, management, or use of an AI system (as defined in the EU AI Act).

While discussion around the Bill has focused predominantly on the potential risks of automated vehicles, it has attracted scrutiny for its potentially much broader scope: the offence could cover a wide range of circumstances and extend criminal liability to many of the individuals and organisations involved in the AI development process.

Read the official statement here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.