AI View - December 2025

10 December 2025


Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

Practical Insights for Navigating Crypto and AI Disputes in Asia

On 10 November 2025, we published our latest analysis on emerging dispute resolution trends across Asia, offering guidance on managing challenges and risks arising as the region adapts to rapid developments in cryptocurrencies, digital assets and artificial intelligence.

The analysis highlights sharp regulatory divergence across the region, the increasing use of arbitration for technology-driven disputes, and the practical issues now facing businesses, including interim relief over volatile crypto assets, smart-contract risks and liability questions arising from AI decision-making.

Read the article here for more insight into these developments.

This edition brings you:

  1. Australia launches National AI Plan

  2. European Parliament adopts resolution on impact of AI in the financial sector

  3. WHO publishes report on the use of AI in healthcare

  4. European Banking Authority publishes factsheet on implications of AI Act for banking and payments sector

  5. European Commission launches whistleblower tool for reporting AI Act breaches

  6. Singapore announces consultation on AI risk management guidelines

1. Australia launches National AI Plan

On 2 December 2025, the Australian Government released its National AI Plan (the Plan), setting a national framework for the safe and responsible development and use of AI. The Plan is structured around three goals:

  1. Capturing economic opportunities;

  2. Spreading the benefits of AI across the economy and communities; and

  3. Keeping Australians safe as the technology evolves.

The Plan states that "established laws remain the foundation for addressing and mitigating AI-related risks" and confirms that Australia will take a proportionate, risk-based approach to AI governance, initially relying on those established laws and supplementing them with targeted safety measures where legislative gaps emerge. It also highlights ongoing international engagement to shape global norms and deliver consistent safeguards across jurisdictions.

Regulatory priorities highlighted in the Plan include:

  • Maintaining robust legal, regulatory, and ethical frameworks as the foundation of AI governance.

  • Ensuring regulators retain responsibility for assessing and mitigating AI-related harms within their domains.

  • Continuous review of privacy, copyright, online safety and sector-specific rules to reflect AI developments.

  • Monitoring emerging risks and intervening with regulation where necessary, including where frontier systems create new harms.

  • Strengthening international cooperation on AI standards and safety governance, including bilateral and multilateral partnerships.

Establishment of the Australian AI Safety Institute

Alongside the Plan, the Australian Government has confirmed the establishment of the Australian AI Safety Institute (AISI). The AISI is positioned as the federal hub for AI governance and will act as a technical and regulatory support body by:

  • Assessing emerging AI capabilities;

  • Advising where legislative updates may be required;

  • Assisting regulators to enforce existing consumer, privacy, competition and online safety laws;

  • Ensuring AI companies comply with standards of fairness and transparency; and

  • Collaborating internationally and coordinating cross-government responses to AI risks.

The AISI is expected to become operational in early 2026.

National AI Centre guidance on identifying AI-generated content

In parallel with the broader regulatory direction set by the Plan, the National AI Centre has released new guidance "Being clear about AI-generated content: A guide for business", to support transparency in digital communications.

The guidance emphasises that clearly identifying when content has been created or modified using AI can reduce regulatory and reputational risks and helps build trust among users of digital services. It provides practical steps for organisations, including labelling, watermarking and metadata recording, and encourages businesses to adopt a proportionate level of transparency for their content.

Australia's National AI Plan is available here. Guidance on identifying AI-generated content is here. More information on the AISI can be found here.

2. European Parliament adopts resolution on impact of AI in the financial sector

On 25 November 2025, the European Parliament adopted a resolution on the impact of artificial intelligence on the financial sector (2025/2056(INI)). The resolution calls on the European Commission and supervisory authorities to issue clearer, proportionate guidance on how existing financial services legislation should apply where AI is used. It also sets out high-level recommendations for how the AI Act should operate alongside existing financial services legislation, though it stops short of providing detailed interpretative rules.

The resolution notes the growing use of AI across financial institutions, from traditional machine learning to emerging generative AI and foundation models. It recognises AI's potential to enhance credit risk assessment, fraud detection, AML and sanctions monitoring, ESG analytics, customer support and operational efficiency. At the same time, it identifies significant risks, including data bias, model opacity, cybersecurity vulnerabilities, concentration risks amongst technology providers, and challenges to supervisory oversight.

Key priorities include:

  • Ensuring robust data governance, documentation, testing and human oversight for AI systems, particularly where AI affects consumers.

  • Clarifying interactions between the AI Act and sector-specific rules (including prudential, MiFID, Solvency II, UCITS, AIFMD, PSD2 and the Digital Operational Resilience Act (DORA)) to avoid duplication and reduce legal uncertainty.

  • Improving supervisory coordination to prevent diverging interpretations and fragmentation across the single market.

  • Addressing concentration risk and third-party dependency where financial institutions rely on a small number of technology providers for AI development and cloud infrastructure.

  • Supporting responsible innovation through AI-specific regulatory sandboxes, innovation hubs and cross-border testing environments.

The European Parliament also urges greater investment in AI capabilities, improved AI literacy and reskilling across the financial workforce, and further work on assessing the environmental impact of large-scale AI systems.

Read the adopted resolution in full here.

3. WHO publishes report on the use of AI in healthcare

On 19 November 2025, the World Health Organization's (WHO) European Region published its first region-wide assessment of AI in health systems, entitled "Artificial intelligence is reshaping health systems: state of readiness across the WHO European Region", which notes gaps in legislation on AI use in the health sector, in AI skills development and in AI healthcare investment.

The report draws on a 2024-25 survey of 50 countries and frames AI governance around six pillars:

  • Navigators - leadership, strategy and oversight for AI in health systems.

  • Change-makers - stakeholder engagement and workforce readiness.

  • Guardrails - legal, policy and governance frameworks for safe AI use.

  • Backbone - data-governance foundations for trustworthy health AI.

  • Catalysts - how AI supports core health-system functions.

  • Gatekeepers - barriers to AI adoption and what blocks progress.

The report warns that the rapid deployment of AI is outstripping legal and ethical safeguards and notes that only 8% of the surveyed countries have a health-specific AI strategy, while just four (Belgium, the Russian Federation, Spain and Sweden) have established liability standards tailored to AI in healthcare.

The report notes that many countries rely on cross-sector digital or AI laws that lack clinical specificity. With only 30% of respondent countries issuing guidance on the secondary use of health data, patients and clinicians are left exposed on accountability, transparency and bias, particularly where AI tools blur the line between regulated medical devices and unregulated wellness products.

Read the WHO Report here, the Digital Omnibus on AI Regulation Proposal here, and our Digital Omnibus Update here.

4. European Banking Authority publishes factsheet on implications of AI Act for banking and payments sector

On 21 November 2025, the European Banking Authority (EBA) released a factsheet outlining how the EU AI Act will apply across the banking and payments sector, following a mapping exercise which compared the AI Act's high-risk requirements with existing financial services regulation.

The EBA found no contradictions between the AI Act and existing EU financial services legislation (including CRD/CRR, DORA, the Consumer Credit Directive and PSD2). Instead, most obligations are either already reflected in current frameworks or can be integrated with limited adjustment.

The factsheet also highlights areas of regulatory synergy and interpretation, noting that some obligations are absorbed by existing sectoral rules while others, such as risk management and documentation requirements, can be integrated into current frameworks.

The EBA will support implementation over 2026-27, including contributing to forthcoming European Commission guidelines and coordinating supervisory approaches.

Download the factsheet here.

5. European Commission launches whistleblower tool for reporting AI Act breaches

On 24 November 2025, the European Commission launched a new whistleblower tool allowing individuals to report suspected breaches of the EU AI Act directly to the European AI Office. The tool creates a secure, confidential reporting channel designed to help surface harmful practices by providers of general-purpose AI models and certain AI systems, including those that may endanger fundamental rights, health, safety or public trust.

The tool is designed to enable individuals professionally connected to an AI model provider to submit reports anonymously in any EU language, upload supporting documents, and communicate with the AI Office through an encrypted bidirectional inbox. This inbox allows whistleblowers to receive updates, answer follow-up questions and track the status of their report without disclosing their identity.

The European Commission also explains that confidentiality is the primary safeguard against employer retaliation until 2 August 2026, when reports concerning AI Act infringements will fall within the scope of the EU Whistleblower Directive.

The European AI Office has published FAQs explaining what can be reported, how the secure inbox operates, and how the AI Office assesses and, if needed, redirects reports.

Access the AI Act Whistleblower tool here.

6. Singapore announces consultation on AI risk management guidelines

On 13 November 2025, the Monetary Authority of Singapore (MAS) issued a consultation paper on proposed Guidelines for AI Risk Management (the Guidelines). The Guidelines set out MAS's supervisory expectations for financial institutions using AI, covering governance, lifecycle controls and the capabilities required to manage AI-related risks.

The Guidelines would apply to all financial institutions and outline expectations in three core areas:

  • Board and senior management oversight of AI risk management frameworks and culture;

  • Systems, policies and procedures to identify and inventorise AI use cases, supported by risk materiality assessments; and

  • Lifecycle controls addressing data management, fairness, transparency, explainability, human oversight, third-party risk, evaluation, testing, monitoring and change management.

MAS emphasises proportionality, noting that AI risks vary with an institution's size, business model and the nature and complexity of its AI usage. The Guidelines also cover generative AI and emerging technologies such as AI agents, building on MAS's 2024 thematic review of major banks' AI practices.

The consultation is open until 31 January 2026, and the paper is available on MAS's website here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.