AI View 30 April 2026

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

We are hosting our AI x Dispute resolution webinar series throughout 2026, exploring how AI-related disputes are already emerging in practice and what organisations should be doing as regulatory enforcement and litigation risk accelerate.

To sign up for future sessions, covering topics such as IP, product liability and professional negligence, please follow the link to register.

This edition brings you:

  1. EU policymakers stall on AI Act "omnibus" amendments amid sectoral disputes

  2. Singapore proposes global standard for generative AI testing

  3. Australian federal court allows generative AI use in court proceedings

  4. UK Treasury Committee publishes responses to its report on AI in financial services

  5. Japan's justice ministry sets up panel to review civil liability for AI deepfakes

  6. European Commission consults on energy efficiency labelling system for GPAI models

  7. China's cybersecurity standards-setting body issues draft AI ethics guidelines

  8. Hong Kong's judiciary to consult legal profession on AI guidelines

1. EU policymakers stall on AI Act "omnibus" amendments amid sectoral disputes

On 28 April 2026, negotiations between the European institutions on proposed amendments to the EU AI Act reached an impasse. The talks, which had been widely expected to conclude the process for the so-called "AI omnibus" proposal, were paused after the parties failed to resolve a key dispute over the interplay between the AI Act and sector-specific regulatory regimes.

The central point of contention concerned how the AI Act should regulate AI systems embedded in products already subject to sectoral product safety laws, such as industrial machinery and toys. Centre-right lawmakers advocated shifting certain sectoral legislation from Annex I Section A, under which AI systems must comply directly with the AI Act, to Section B, which would see AI requirements integrated into sectoral regimes at a later stage.

Because of the deadlock, no formal agreement was reached on the other outstanding issues, although tentative progress was reported on a narrower definition of the so-called "nudifier" ban (the use of generative AI to undress or sexualise individuals), on flexibility around legal deadlines, and on the AI Office's powers.

The postponement of the talks has raised concerns about delays to time-sensitive provisions, particularly those relating to high-risk AI systems, which are due to take effect on 2 August 2026. A new high-level meeting is expected in approximately two weeks, but the timeline for final approval now appears uncertain.

Read press coverage here.

2. Singapore proposes global standard for generative AI testing

On 20 April 2026, Singapore announced a proposal for a new international standard, ISO/IEC 42119-8 (the Standard), aimed at establishing consistent methodologies for testing generative AI systems, with a particular focus on benchmarking and red teaming.

This marks the first international standard of its kind for generative AI testing and was due to be discussed at the 17th ISO/IEC JTC 1/SC 42 plenary meeting, hosted in Singapore from 20 to 24 April 2026. The event, co-organised by the Infocomm Media Development Authority and Enterprise Singapore, brought together over 250 AI experts and representatives from more than 35 national bodies, including the US, UK, China, Japan, Germany, France, and the Republic of Korea.

The Standard seeks to address the rapid proliferation of AI technologies by providing a standardised framework for benchmarking and red teaming, thereby enhancing the reproducibility and comparability of AI testing outcomes. This, in turn, is expected to drive greater assurance and trust in generative AI systems, supporting their safe adoption by both deployers and users. 

Singaporean officials emphasised the importance of accelerating the pace of standards development to keep up with rapid technological change, highlighting the need for robust conformance systems, including certification and accreditation, to ensure effective adoption. They also called for more inclusive participation in standards-setting, particularly from emerging economies, to ensure frameworks are globally relevant and reflect diverse use cases. 

Read the official ISO abstract for the Standard here.

3. Australian federal court allows generative AI use in court proceedings

On 16 April 2026, the Federal Court of Australia issued a practice note (the Note) permitting the use of generative AI in court proceedings, while emphasising the need for responsible and transparent use. The Note highlighted that, although AI has the potential to increase court efficiency, the presentation of inaccurate or misleading information is "unacceptable" and may result in financial or legal consequences.

The Note applies to all participants in court proceedings, including litigants, witnesses, and third parties appearing under subpoena or other court orders. It requires that the use of generative AI, and the manner in which it is used, be disclosed at the start of any document submitted to the court. Judges may request further details regarding the use of AI in any proceeding.

The Note cautions that generative AI tools can produce fictitious or erroneous results, such as non-existent legal citations or misattributed quotes, and warns against uploading confidential or private information to AI platforms due to the risk of unauthorised access. The court expects that documents submitted by witnesses and experts reflect their own opinions and reasoning, rather than those generated by AI.

The Note follows earlier advisories from Australian state courts. Notably, the Supreme Court of New South Wales initially banned the use of AI in November 2024, but reversed this decision in February 2025, permitting use in certain circumstances provided that uploaded information is not used to train AI models.

The Federal Court's AI Project Group will shortly commence consultations with representative bodies and legal professionals to develop further guidelines on the use of AI in legal practice. Legal practitioners have been reminded to use AI responsibly and in accordance with their existing obligations to the court and other parties, and to disclose any use of AI if required by a judge or court registrar.

Read the Note here.

4. UK Treasury Committee publishes responses to its report on AI in financial services

On 16 April 2026, the UK Treasury Committee published the responses of HM Treasury, the Bank of England, and the Financial Conduct Authority (FCA) to its 20 January 2026 report on AI in financial services. Key points include:

  • HM Treasury: Reiterates commitment to the safe adoption of AI, balancing innovation with risk mitigation. Confirms ongoing evidence gathering for the designation of major AI and cloud providers as Critical Third Parties (CTPs), with initial decisions expected in 2026. Highlights the appointment of Financial Services AI Champions to support responsible adoption.

  • Bank of England: Rejects the "wait and see" characterisation, outlining proactive measures including supervisory engagement, model risk management, and the AI Consortium. Monitors financial stability risks from advanced AI and is incorporating AI-specific scenario analysis and stress testing. Notes that CTP designations are in progress.

  • FCA: Emphasises a principles-based, outcomes-focused approach, with innovation initiatives such as the Supercharged Sandbox and AI Live Testing. Commits to further practical guidance and joint scenario testing with the Bank of England once CTPs are designated. Stresses accountability under existing regulatory regimes.

The responses reflect a coordinated approach to supporting innovation while ensuring robust risk management as AI adoption in financial services accelerates.

Read the responses here.

5. Japan's justice ministry sets up panel to review civil liability for AI deepfakes

On 17 April 2026, Japan's Ministry of Justice announced plans to clarify when users of generative AI may face civil liability for copying a person's voice or likeness without consent. The Ministry will convene an expert panel to review the interpretation and application of existing laws and court precedents in cases involving deepfakes and voice cloning, with the aim of publishing the results.

The move marks a shift from previous Government focus on intellectual property and unfair competition frameworks, bringing civil law considerations to the forefront. Japanese courts have previously addressed publicity right disputes under tort law, but the Ministry's review will specifically examine when AI-generated uses of voices and images may infringe publicity or portrait rights, and whether voices themselves can be protected under these rights.

The panel's first meeting was due to take place on 24 April 2026. The Ministry's initiative follows a Government copyright panel's conclusion that voices are not protected under copyright law, leaving affected individuals to rely on tort and publicity rights. The Ministry of Economy, Trade and Industry has also indicated that unauthorised use of images and voices may, in some cases, fall under the Unfair Competition Prevention Act, though assessments remain case-specific.

Further guidance on the application of civil law remedies is expected in due course.

Read press coverage here.

6. European Commission consults on energy efficiency labelling system for GPAI models

On 7 April 2026, the European Commission launched a public consultation on the energy consumption and efficiency of general-purpose AI (GPAI) models (the Consultation), with a view to developing a potential labelling system for their energy use and carbon footprint. While the AI Act does not provide a legal basis for such a labelling scheme, the Commission is studying the issue and developing a measurement framework that could inform future policy.

The Consultation is part of a broader study aiming to identify governance and implementation conditions for a possible AI energy and emission label. The study references existing EU legislation on energy efficiency and labelling, but notes that these frameworks do not currently cover software.

Under the AI Act, the Commission is required to assess the need for further measures on energy efficiency by August 2028, but any new labelling system would require explicit legal authority. The Commission is expected to propose a Cloud and AI Development Act as part of a forthcoming Tech Sovereignty package, in which energy efficiency is likely to feature prominently.

At present, the Commission has not set out firm plans for a GPAI energy-efficiency label but is laying the groundwork for potential future initiatives as part of its broader sustainability agenda for digital technologies.

The Consultation will close on 15 May 2026.

Read the Consultation here.

7. China's cybersecurity standards-setting body issues draft AI ethics guidelines

On 17 April 2026, China's TC260, the nation's cybersecurity standards-setting body, released draft ethical security guidelines (the Draft Guidelines) for technologies such as generative AI and autonomous agents, setting out principles including a people-centred approach, safety, fairness, transparency, collaborative governance, and inclusive access. The Draft Guidelines outline requirements for developers, service providers, and users.

Relatedly, on 2 April 2026, the Ministry of Science and Technology and the Ministry of Industry and Information Technology, with eight other departments, issued the Measures for the Administration of Review and Services of Artificial Intelligence Technology Ethics (Trial) (the Measures). The Measures establish a human-centric, lifecycle-based ethics governance framework for all AI activities in China, requiring responsible entities (such as universities, research institutes, healthcare institutions, and enterprises) to set up internal AI ethics review committees or use qualified third parties.

The review process includes application, assessment, decision, and post-approval monitoring at least every 12 months. High-risk AI activities, such as those influencing human behaviour or public opinion, or those deployed in safety-critical scenarios, require an additional expert review by the authorities. The Measures complement existing AI regulations and do not introduce standalone penalties; violations are handled under existing laws.

The Measures further embed ethics review into China's AI governance and highlight the growing importance of ethics compliance for market access, particularly for higher-risk applications.

Access the Draft Guidelines here and the Measures here.

8. Hong Kong's judiciary to consult legal profession on AI guidelines

On 13 April 2026, the Hong Kong Judiciary's Finance Committee held a meeting at which it discussed several issues, including the use of generative AI by legal practitioners and court users. The Judiciary is in the process of drafting guidelines, with a formal consultation with the legal profession planned for later this year.

This initiative builds on guidelines issued in July 2024, which already permit judges, judicial officers, and support staff to use generative AI technology, provided it is done prudently and responsibly. The Judiciary has emphasised the importance of responsible adoption, noting that it will continue to explore the application of AI in both judicial and non-judicial work, and will keep its policies under review in light of technological developments.

Read the speaking notes from the meeting here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.