AI View - January 2026

06 January 2026

Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

This edition brings you:

  1. New York passes Responsible AI Safety and Education Act

  2. China's Cyberspace Administration issues draft regulation targeting AI products and services that simulate human personalities and emotions

  3. Taiwan legislature passes new AI law

  4. Japan opens consultation on draft generative AI transparency and IP protection code

  5. South Korea to progress with enforcement decree for AI Basic Act

1. New York passes Responsible AI Safety and Education Act

On 19 December 2025, the New York Governor signed the Responsible AI Safety and Education Act (the RAISE Act) into law, on the understanding with the New York legislature that forthcoming amendments, expected this month, will revise key sections of the RAISE Act (RAISE as amended).

RAISE as amended will apply to “large developers” of “frontier models” developed, deployed, or operating in whole or in part within New York State. A “large developer” is an entity that has trained at least one frontier model and spent over $100 million in aggregate compute costs to train its frontier models, but excludes accredited colleges and universities engaged in academic research. A “frontier model” is defined as an AI model trained using more than 10²⁶ computational operations at a compute cost exceeding $100 million, or a model produced by knowledge distillation from such a frontier model at a compute cost exceeding $5 million.
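
Purely for illustration, the definitions above can be read as a simple threshold test. The Python sketch below encodes our summary of those definitions; the function and variable names are hypothetical, and it is not a compliance tool or a substitute for the statutory text.

  # Illustrative sketch of the RAISE (as amended) definitions summarised
  # above. Hypothetical names; not a compliance tool and not legal advice.

  FRONTIER_OPS_THRESHOLD = 1e26          # training compute, in operations
  FRONTIER_COST_THRESHOLD = 100_000_000  # USD, frontier model training cost
  DISTILLED_COST_THRESHOLD = 5_000_000   # USD, distilled model training cost

  def is_frontier_model(training_ops: float, compute_cost_usd: float,
                        distilled_from_frontier: bool = False) -> bool:
      # Distilled models qualify on cost alone; originals need both limbs.
      if distilled_from_frontier:
          return compute_cost_usd > DISTILLED_COST_THRESHOLD
      return (training_ops > FRONTIER_OPS_THRESHOLD
              and compute_cost_usd > FRONTIER_COST_THRESHOLD)

  def is_large_developer(trained_frontier_model: bool,
                         aggregate_compute_cost_usd: float,
                         accredited_academic: bool) -> bool:
      # Accredited colleges and universities engaged in academic
      # research are excluded from the definition.
      if accredited_academic:
          return False
      return (trained_frontier_model
              and aggregate_compute_cost_usd > FRONTIER_COST_THRESHOLD)

On this reading, a distilled model falls within scope once its own compute cost exceeds $5 million, even where its training compute is below 10²⁶ operations.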

The key measures of RAISE as amended will include:

  • Safety Protocols: In-scope developers must implement, publish, and submit a written protocol detailing the technical and organisational safeguards used to reduce the risk of “critical harm” (defined as death or injury to more than 100 people or $1 billion in damages), covering cybersecurity protections, testing procedures, compliance requirements, and the designation of responsible senior personnel.
  • Safety incident disclosures: Safety incidents must be reported within 72 hours of discovery to the New York Attorney General and the New York State Division of Homeland Security and Emergency Services.
  • Third-party audits: Annual independent audits of compliance are required, with reports to be published and submitted to the same recipients as safety incident disclosures.
  • Oversight: Establishment of a new office within the Department of Financial Services to oversee implementation and incident reporting.
  • Enforcement: The Attorney General may bring civil actions, with penalties up to $1 million for a first violation and $3 million for subsequent violations. RAISE as amended will also void contractual attempts to shift liability and allow courts to disregard corporate formalities to prevent evasion of liability.

Read the RAISE Act here.

2. China's Cyberspace Administration issues draft regulation targeting AI products and services that simulate human personalities and emotions

On 27 December 2025, the Cyberspace Administration of China (CAC) opened a consultation on the draft Interim Measures for the Administration of Anthropomorphic Interactive Artificial Intelligence Services (the Measures). The Measures reflect China’s ongoing efforts to address the unique risks posed by anthropomorphic AI services, particularly in relation to minors and other vulnerable users, while promoting the healthy, regulated development of such services and safeguarding national security and the public interest.

The Measures apply, across the entire lifecycle of a service, to AI services that simulate human personality, thinking patterns, and communication styles and interact with users via text, images, audio, or video, including AI companionship services and virtual idols.

Notably, the Measures introduce the following key provisions:

  • Prohibitions: Providers must not generate or disseminate content that endangers national security, damages national honour or unity, spreads rumours, promotes illegal religious activities, gambling, obscenity, or violence, incites crime, defames others, or encourages suicide or self-harm.
  • Governance Systems: Providers must implement comprehensive governance systems covering algorithm and ethics reviews, content moderation, cybersecurity, data security, personal information protection, anti-fraud measures, and emergency response protocols.
  • Mandatory Technical Safeguards: Providers must implement measures to prevent addiction and emotional dependency, including mental health protections, emotional-boundary guidance, and warnings about dependency risks. Designs intended to replace real-world social interaction, manipulate user psychology, or encourage addiction and emotional reliance are expressly prohibited. Providers must intervene when signs of distress or addiction are detected.
  • Protection of Vulnerable Groups: Enhanced protections are required for vulnerable groups, including a ‘minors mode’, content filters, guardian consent, and restrictions on simulating personal relationships.
  • Data Governance Requirements: Providers must ensure lawful and traceable data sources, and implement data cleaning, labelling, and anti-tampering measures. The Measures require opt-in consent for the use of user interaction data and sensitive personal information in model training, encryption of user data, and mechanisms for data deletion, including deletion of a minor’s data at a parent’s or guardian’s request.
  • Transparency and Platform Obligations: Providers must ensure clear identification of AI interactions and accessible exit and complaint channels, and are required to conduct security assessments and algorithm filings, particularly when launching or materially changing anthropomorphic functions. App marketplaces must maintain robust review and incident response mechanisms.

The Measures are open for public consultation until 25 January 2026.

Read the Measures here (in Mandarin only).

3. Taiwan legislature passes new AI law

On 23 December 2025, Taiwan’s Legislative Yuan passed the Artificial Intelligence Basic Act (the Act), establishing a foundational framework for AI governance in Taiwan that aims to balance the promotion of AI development with the protection of social welfare and national interests.

The Act is a principles-based law which sets out seven core principles for AI development.

The key measures include:

  • Core Principles: AI development must adhere to principles of sustainability and well-being, human autonomy, privacy and data governance, cybersecurity and safety, transparency and explainability, fairness and non-discrimination, and accountability.
  • Prohibitions: AI applications are barred from harming lives, freedom, property, or social order; undermining national security or the environment; or engaging in bias, discrimination, false advertising, misinformation, or fabrication.
  • National AI Strategy Committee: The Executive Yuan must establish a committee chaired by the premier, comprising scholars, industry representatives, agency heads, and local government leaders, to meet at least annually and set national AI development guidelines.
  • Data and Labour Protections: The Act requires measures for data openness and personal data protection, risk-based AI management aligned with international standards, and safeguards for labour rights, including retraining and employment assistance for workers affected by AI.

Read the Act here (in Mandarin only).

4. Japan opens consultation on draft generative AI transparency and IP protection code

On 26 December 2025, Japan’s Cabinet Office opened a consultation (the Consultation) on a draft principles-based code for generative AI transparency and IP protection (the Code).

The Code is based on a ‘comply or explain’ approach, requiring generative AI developers and providers to either implement the principles or publicly explain their reasons for non-implementation. It is positioned as a soft-law instrument, relying on market pressure and stakeholder scrutiny rather than administrative enforcement, and may be revised following industry feedback and international developments.

The Code aims to promote voluntary disclosure and responsible practices by generative AI businesses, focusing on model training, web crawling, and safeguards against IP infringement. It applies to generative AI developers and providers offering services to users in Japan, including those based overseas.

The draft Code encourages businesses to publish standardised information about model identity, version history, architecture, terms of use, training processes, data categories (including web-crawled and synthetic data), crawler details, and accountability measures.
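
As a loose illustration of what such standardised disclosure might look like in machine-readable form, the Python sketch below groups the categories listed above into a single record. The schema and field names are our own assumptions; the draft Code does not prescribe any particular format.

  # Hypothetical disclosure record covering the categories listed above.
  # The draft Code does not prescribe a schema; field names are illustrative.
  from dataclasses import dataclass

  @dataclass
  class ModelDisclosure:
      model_name: str
      version_history: list[str]
      architecture_summary: str
      terms_of_use_url: str
      training_process_summary: str
      training_data_categories: list[str]  # e.g. web-crawled, synthetic
      crawler_details: str                 # user agent and crawl policy
      accountability_measures: str

  disclosure = ModelDisclosure(
      model_name="example-model",  # hypothetical model and values below
      version_history=["1.0", "1.1"],
      architecture_summary="transformer-based generative model",
      terms_of_use_url="https://example.com/terms",
      training_process_summary="pre-training followed by fine-tuning",
      training_data_categories=["web-crawled", "licensed", "synthetic"],
      crawler_details="ExampleBot/1.0; honours robots.txt",
      accountability_measures="designated contact point for rights holders",
  )

A structured record along these lines could be serialised to JSON for publication alongside human-readable documentation.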

Notably, the draft Code proposes:

  • Commitments: Undertakings not to infringe IP during development and training, to respect access restrictions and machine-readable instructions, and to avoid crawling pirated sites (see the crawler sketch after this list).
  • Safeguards: Technical and operational safeguards to reduce infringing outputs, including digital watermarks and provenance tools.
  • Disclosure Pathways: Pathways for rights holders and users to request limited information about training and validation data sources in connection with potential disputes, with procedures for responding to narrowly framed disclosure requests from those considering legal action.
  • Open Source Software: Exceptions for open source software, allowing businesses to disclose licensing details where full disclosure is not possible.
  • Procedural Controls: Measures to prevent abusive disclosure requests, such as reasonable fees or limits.
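
The commitment to respect access restrictions and machine-readable instructions is commonly implemented by honouring a site’s robots.txt file. The Python sketch below shows one minimal way a training-data crawler might check this before fetching a page; it is our illustration, not a mechanism mandated by the Code, and the bot name and URLs are hypothetical.

  # Minimal sketch: a crawler checking robots.txt before fetching a page.
  # Illustrative only; the draft Code does not mandate a specific mechanism.
  from urllib import robotparser

  parser = robotparser.RobotFileParser()
  parser.set_url("https://example.com/robots.txt")  # hypothetical site
  parser.read()  # fetches and parses the site's crawl instructions

  url = "https://example.com/articles/page1.html"
  if parser.can_fetch("ExampleTrainingBot", url):
      print(f"robots.txt permits crawling {url}")
  else:
      print(f"robots.txt disallows crawling {url}; skipping")

urllib.robotparser is part of the Python standard library; a production crawler would typically layer caching, rate limiting, and site-specific policies on top of a check like this.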

The Consultation is open until 26 January 2026.

Read the Consultation here.

5. South Korea to progress with enforcement decree for AI Basic Act

On 24 December 2025, South Korea’s Ministry of Science and ICT (the Ministry) announced that it will finalise the enforcement decree for the AI Basic Act (the Decree) largely as drafted, with the Decree and main law set to come into force on 22 January 2026.

The Decree clarifies and specifies matters delegated by the AI Basic Act, aiming to balance the promotion of the AI industry with the establishment of a robust safety and trust framework.

Key measures include clear criteria and procedures for AI R&D support, training-data construction, AI adoption, and the designation of AI clusters, as well as the establishment of supporting institutions such as the AI Safety Institute and AI Policy Center. The Decree also elaborates on transparency and safety obligations, including advance user notification for high-impact or generative AI, clear labelling of AI-generated outputs, and detailed criteria for high-impact AI and AI impact assessments.

Despite calls to suspend regulatory obligations, the Ministry confirmed that the obligations will remain in place, although fines for serious violations will be temporarily suspended for at least one year, with possible extension. An integrated support centre, the AI Safety and Trust Support Desk, will be established to advise businesses on compliance and provide support, particularly for smaller enterprises. The Ministry has pledged flexible enforcement, will reflect stakeholder input in subordinate rules and guidelines, and will continue to offer financial and practical support for industry compliance.

Read the press release and download the Decree here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.