AI View: December 2025

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

23 December 2025


Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

This edition brings you:

  1. US President signs Executive Order on establishing a National Policy Framework for AI

  2. EU publishes first draft of Code of Practice on marking and labelling of AI-generated content

  3. EU and Canada sign memorandum of understanding on AI

  4. EU proposes regulation to simplify treatment of medical devices under the AI Act

  5. UK Medicines and Healthcare products Regulatory Agency launches “call for evidence” regarding proposed AI regulation

  6. New York enacts law requiring advertisers to disclose use of AI avatars

  7. French Competition Authority releases report on antitrust implications of AI

  8. UK confirms plans to ban ‘nudification’ apps and AI models that generate child abuse imagery

1. US President signs Executive Order on establishing a National Policy Framework for AI

On 11 December 2025, President Trump signed an executive order on ‘Ensuring a National Policy Framework for Artificial Intelligence’ (the Executive Order).

The Executive Order identifies the proliferation of state-level AI laws as a key challenge, citing concerns over a fragmented regulatory landscape, increased compliance burdens for start-ups, and the risk of state laws mandating what the Administration perceives as ideological bias or regulating beyond state borders. In response, the Administration commits to working with Congress to develop a minimally burdensome national standard for AI, which would override inconsistent state requirements while safeguarding child protection, free expression, copyright, and community safety.

Key measures include:

  • Establishing an AI Litigation Task Force within the Department of Justice to challenge state AI laws deemed inconsistent with federal policy, particularly those that may unconstitutionally regulate interstate commerce or require AI models to alter truthful outputs.
  • Mandating the Secretary of Commerce to evaluate existing state AI laws and identify those that conflict with the new federal policy, with implications for states’ eligibility for certain federal funding programmes.
  • Directing the Federal Communications Commission to consider a federal reporting and disclosure standard for AI models that would pre-empt conflicting state requirements.
  • Restricting funding allocation to states which enact regulations the Administration deems onerous or otherwise inconsistent with the Executive Order.
  • Requiring the Federal Trade Commission to issue a policy statement declaring that state-mandated bias mitigation in relation to AI model outputs is a deceptive trade practice, on the basis that it makes the AI outputs less “truthful”.
  • Preparing legislative recommendations for a uniform federal AI policy framework, with limited carve-outs for state laws on child safety, infrastructure, procurement, and other specified areas.

The order signals a shift towards greater federal pre-emption in AI regulation, with a focus on promoting innovation and reducing regulatory fragmentation, while maintaining certain safeguards. Implementation will proceed through a combination of litigation, funding conditions, and further legislative proposals.

Read the Executive Order here.

2. EU publishes first draft of Code of Practice on marking and labelling of AI-generated content

On 17 December 2025, the EU published the first draft Code of Practice on Transparency of AI-Generated Content (the Code), developed under Article 50 of the AI Act. The Code sets out high-level commitments and technical measures for providers and deployers of generative AI systems, aiming to enhance transparency, trust and accountability in the use of synthetic content across the EU.

The Code is the result of a multi-stakeholder process involving industry, academia, civil society and Member States, and is intended to serve as a guiding document for demonstrating compliance with the AI Act’s requirements on marking, detection and labelling of AI-generated and manipulated content.

Key features include:

  • Multi-layered marking and detection: Providers of generative AI systems are required to implement a combination of machine-readable marks (such as metadata, watermarks and fingerprinting) to ensure outputs are reliably identifiable as artificially generated or manipulated. Measures are tailored to different content types and modalities, with additional requirements for open-weight models and multimodal outputs.
  • Accessible and user-friendly disclosure: Deployers of AI systems must label deepfakes and AI-generated text intended to inform the public, using a common taxonomy and icon. Disclosure must be clear, distinguishable and accessible at the time of first exposure, with specific provisions for different content formats (e.g. video, audio, images, text) and for creative works.
  • Compliance, training and monitoring: Both providers and deployers are required to maintain internal compliance frameworks, provide training to relevant personnel, and cooperate with market surveillance authorities. Proportionate measures are envisaged for SMEs and smaller entities.
  • Advancing standards and interoperability: The Code encourages investment in research and collaboration on technical standards for marking and detection, with a view to supporting open standards and interoperability across the AI value chain.
  • Safeguards and exceptions: The Code recognises the need for proportionate application of transparency requirements, particularly for artistic, satirical or fictional works, and sets out exceptions for content subject to human editorial control or law enforcement use.

The draft Code remains subject to further consultation and refinement, with stakeholders invited to provide feedback ahead of finalisation. Once adopted, it is expected to play a central role in operationalising the AI Act’s transparency obligations and supporting a trustworthy information ecosystem in the EU.

Read the draft Code here.

3. EU and Canada sign memorandum of understanding on AI

On 8 December 2025, Canada and the EU signed a Memorandum of Understanding (MoU) on AI, formalising their commitment to enhanced bilateral cooperation on the responsible development and deployment of AI technologies. The MoU was adopted alongside the inaugural meeting of the Canada-EU Digital Partnership Council.

The MoU sets out a framework for collaboration on AI standards, regulation, skills development and sectoral adoption, with the aim of fostering innovation, facilitating trade and ensuring the development of trustworthy AI systems that respect fundamental rights.

Key initiatives include:

  • Accelerating sectoral AI adoption: The EU and Canada will organise joint workshops and exchanges to share strategies and best practices for AI adoption in key sectors such as healthcare, manufacturing, energy, science, culture and public services. Particular emphasis is placed on supporting SMEs, addressing barriers to commercialisation, and expanding AI-focused talent exchanges.
  • Mutual recognition of conformity assessments: The parties will launch exploratory talks to facilitate mutual recognition of conformity assessments for high-risk AI systems, in line with the EU AI Act and the CETA Protocol on Conformity Assessment. This includes identifying technical and administrative steps for inclusion of high-risk AI systems and defining accreditation requirements for Conformity Assessment Bodies.
  • Cooperation on AI standards and regulatory sandboxes: The MoU provides for ongoing information exchange on AI standardisation activities, including direct cooperation between the Standards Council of Canada and European standardisation bodies (CEN, CENELEC). The parties will also share experiences on the implementation of AI regulatory sandboxes to support responsible innovation and compliance.
  • Skills development and infrastructure: The EU and Canada will collaborate on addressing the AI skills gap, including through targeted workshops and talent exchanges. Joint efforts will also focus on the development and deployment of large-scale AI infrastructure, with exchanges on the EU’s AI Factories and Gigafactories and Canada’s sovereign public AI infrastructure.
  • Frontier research and AI for public good: The MoU envisages scientific cooperation on next-generation AI architectures and agentic systems, as well as the co-development of innovative AI models for public good applications, including climate adaptation, disaster management and support for low and middle-income regions.

The MoU is non-binding and does not create legal or financial obligations, but provides a flexible framework for voluntary cooperation and the implementation of joint initiatives under the Canada-EU Digital Partnership.

Read the MoU as part of the Joint Statement here.

4. EU proposes regulation to simplify treatment of medical devices under the AI Act

On 16 December 2025, the European Commission announced a major package of measures to modernise the EU health sector, with AI at the heart of its strategy to drive innovation, competitiveness and resilience. The package comprises a new Biotech Act, reforms to medical device regulation, and the launch of the Safe Hearts Plan.

Biotech Act

The Biotech Act (the Act) aims to accelerate the development and deployment of cutting-edge therapies and diagnostics, with a strong emphasis on AI-driven innovation. The Act will support the transition of research from laboratory to market, incentivise biotech companies to conduct research and production within Europe, and fast-track clinical trial authorisations. Notably, the Act introduces regulatory sandboxes and single regulatory pathways for complex products, enabling the rapid development of advanced therapies that leverage AI and data analytics. Targeted investment, including a new health biotech investment pilot with the EIB Group, will further support AI-enabled bio-manufacturing and the commercialisation of innovative solutions.

Safe Hearts Plan

AI and digital solutions are central to the Safe Hearts Plan (the Plan), the EU’s first comprehensive strategy to tackle cardiovascular disease, which is the leading cause of premature death in Europe. The Plan will deploy personalised disease prediction tools and therapies powered by AI, integrate data-driven approaches to prevention and treatment, and launch an Incubator to accelerate the use of AI in cardiovascular care.

Medical Devices Reform

The proposed reforms to medical device regulation place a strong emphasis on digitalisation and the integration of AI applications. The new framework will simplify procedures, introduce faster conformity assessments, and ensure uniform rules for medical devices incorporating AI. Specifically, the Medical Devices Regulation will move from Section A to Section B of Annex I to the AI Act, meaning compliance will be channelled through the sector-specific regime rather than products having to meet the AI Act’s high-risk obligations.

The European Medicines Agency (EMA) will play a strengthened role in providing scientific and regulatory expertise, monitoring shortages, and supporting the safe and efficient deployment of AI-powered medical technologies. These measures are expected to deliver significant cost savings and enable faster access to innovative devices for patients across the EU.

The legislative proposals for the Biotech Act and medical device reforms will be submitted to the European Parliament and Council for adoption, while work will begin with Member States to implement the Safe Hearts Plan and its AI-driven deliverables.

Read the press release here, the proposed Biotech Act here, the Safe Hearts Plan here and the proposed medical devices regulation here.

5. UK Medicines and Healthcare products Regulatory Agency launches “call for evidence” regarding proposed AI regulation

On 18 December 2025, the Medicines and Healthcare products Regulatory Agency (MHRA) launched a “call for evidence” seeking the views of the public, clinicians, industry and healthcare providers on how AI in healthcare should be regulated.

The call for evidence will support the work of the newly formed National Commission into the Regulation of AI in Healthcare, which has been convened to advise the MHRA on the future of health AI regulation.

Key themes include:

  • Modernising the rules for AI in healthcare: Are the current rules for regulating AI in healthcare working, or do they need updating to keep pace with new technology?
  • Keeping patients safe as AI evolves: As AI systems become more advanced and are used in new ways, how can we spot and address any problems quickly, especially with new types of AI that can learn and change over time?
  • Clarifying responsibility: What should the distribution of responsibilities between regulators, companies, healthcare organisations and individuals involved in the use of technology in healthcare look like?

The call for evidence will be open for submissions until 2 February 2026.

Read the call for evidence here.

6. New York enacts law requiring advertisers to disclose use of AI avatars

On 11 December 2025, New York Governor Kathy Hochul signed two bills which aim to protect consumers and boost transparency when AI is used in the film industry (S.8420-A/A.8887-B and S.8391/A.8882, together the Bills).

S.8420-A/A.8887-B requires anyone who produces or creates an advertisement to disclose whether it includes AI-generated synthetic performers. S.8391/A.8882 requires consent from heirs or executors for any use of an individual’s name, image, or likeness for commercial purposes after their death.

The Bills follow months of lobbying against a backdrop of anxiety in the entertainment industry and society at large about the rapid evolution of AI.

Read the Bills here and here.

7. French Competition Authority releases report on antitrust implications of AI

On 17 December 2025, the French Competition Authority (the Authority) published a study examining the competition issues arising from the energy and environmental impact of AI, with a particular focus on generative AI and data centres. The report highlights the rapid growth of AI and data centre infrastructure in France and Europe, noting significant increases in electricity consumption, carbon footprint, and pressure on resources such as water and rare metals.

Key findings and priorities include:

  • Energy Access and Market Dynamics: The expansion of AI is driving unprecedented demand for electricity, with data centres expected to account for up to 4% of France’s national consumption by 2035. The report identifies challenges in accessing the electricity grid and securing competitively priced energy, which may affect market dynamics and create barriers to entry, particularly for smaller players. The Authority notes the importance of public measures to facilitate grid access and calls for vigilance against anti-competitive behaviours, such as capacity hoarding or discriminatory supply agreements.
  • Frugality as a Competitive Parameter: The emergence of “frugal AI” - solutions designed to minimise resource consumption and environmental impact - is highlighted as a new axis of competition. The report finds that frugality can enable smaller firms to compete with larger incumbents, provided it is not hindered by market practices. Regulatory and procurement criteria increasingly value the environmental footprint of AI solutions, and the Authority encourages transparency and robust methodologies to prevent misleading claims (so-called “greenwashing”).
  • Standardisation and Transparency: Ongoing efforts to standardise the measurement of AI’s environmental impact are seen as fundamental to ensuring fair competition. The Authority stresses the need for reliable, transparent data and inclusive standard-setting processes, warning against standards that could be biased or manipulated to favour certain market participants. The proliferation of standards should foster innovation and comparability, but must avoid anti-competitive information sharing or exclusionary practices.

Stakeholders are invited to engage with the Authority on these issues, report suspected anti-competitive practices, and seek informal guidance on the compatibility of sustainability-oriented projects with competition law.

Read the report here.

8. UK confirms plans to ban ‘nudification’ apps and AI models that generate child abuse imagery

On 18 December 2025, the UK Government published its cross-government strategy, “Freedom from Violence and Abuse: a cross-government strategy to build a safer society for women and girls” (the Strategy). Among a wide range of measures, the Strategy sets out decisive new plans to tackle the growing threat of technology-facilitated abuse, including a world-leading ban on nudification apps and AI models that generate child sexual abuse material.

  • Banning Nudification Apps and Synthetic Abuse Tools: The Strategy highlights the proliferation of so-called “nudification” apps, which are AI-powered tools that create non-consensual synthetic intimate images, including child sexual abuse material. Over 290 such tools were identified in 2025 alone. The Government will introduce a ban on nudification apps and other technologies designed to create synthetic non-consensual intimate images, targeting both the firms and the individuals that provide and supply these tools.
  • Criminalising AI Models Generating Child Abuse Material: In response to the evolving threat of AI-generated child sexual abuse material, the Government will introduce a new criminal offence targeting AI models that have been made or adapted to create such content. These optimised models are capable of producing hyper-realistic child sexual abuse material, often using the likenesses of real children. The new offence will close a gap in existing law, ensuring that the development, possession, or use of AI models for this purpose is explicitly illegal in the UK.
  • Broader Digital Safeguarding Measures: These bans form part of a wider package of digital safeguarding reforms, including new powers for law enforcement to target online child sexual abuse, updated laws on “paedophile manuals” to cover AI-generated content, and enhanced border controls to detect child sexual abuse material. The Online Safety Act 2023 and Ofcom’s new guidance further strengthen the UK’s regulatory framework, requiring robust age assurance and proactive measures from online platforms.

The proposed bans on nudification apps and AI child abuse models will be introduced through the forthcoming Crime and Policing Bill. The Government will work with law enforcement, regulators, and technology companies to ensure effective implementation and enforcement.

Read the Strategy here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.