AI View - June 2025

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

11 June 2025


Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

Simmons & Simmons is excited to announce its AI Agents partnership with Flank, specialists in autonomous legal agents, to co-develop and deploy a suite of AI-powered tools both for the firm and for clients. We have already developed an NDA Agent and are working on additional agents. Read more about the partnership here.

This edition brings you:

  1. AI Office launches consultation on high-risk AI under EU AI Act

  2. Japan passes new law to promote responsible AI research and development

  3. Texas Senate passes the Responsible AI Governance Act

  4. UK FCA publishes research note on use of AI in consumer-facing financial services

  5. European Commission proposes Council decision to adopt Council of Europe AI Convention

  6. UN Council of Presidents publishes report on the transition to Artificial General Intelligence

  7. German data protection authority publishes AI Questionnaire for AI and GDPR compliance

  8. European Commission opens consultation on using AI in justice systems

  9. China's tightened facial recognition regulations come into force

AI Office launches consultation on high-risk AI under EU AI Act

On 6 June 2025, the European Commission's (EC) AI Office launched a stakeholder consultation to support the development of guidelines on high-risk AI classifications under the EU AI Act (the Act).

The consultation aims to clarify how AI systems will be classified as 'high-risk' under Articles 6(1) and 6(2) of the Act, which cover AI systems that are used as safety components in products or that themselves are products covered by EU legislation listed in Annex I, as well as AI systems that are considered to pose a significant risk to health, safety or fundamental rights. The EC has invited a range of stakeholders, including AI providers, deployers, and public authorities, to contribute to the consultation. Participants are asked to provide input on practical examples and highlight key issues to help shape forthcoming EC guidelines on the classification process, applicable requirements, and obligations of 'high-risk' AI across the AI value chain.

The consultation is open for six weeks, until 18 July 2025, and will inform non-binding guidelines expected from the EC in February 2026.

You can read the full consultation here. Our team has been advising extensively on the high-risk regime under the Act and would be happy to assist with a response to this consultation or more generally.

Japan passes new law to promote responsible AI research and development

On 28 May 2025, Japan enacted the "Act on the Promotion of Research, Development and Utilisation of Artificial Intelligence-Related Technologies" (the Act), its first AI-specific law. The legislation forms part of Japan's strategy to accelerate domestic innovation in AI while addressing risks such as the spread of misinformation generated by AI.

Unlike the EU's risk-based AI Act, Japan's approach is principles-based and non-binding: the Act contains no penalty provisions, in an effort not to discourage technological innovation. Nor does it impose risk classifications or legal obligations. Instead, it sets national objectives and defines roles for government, business and society, emphasising voluntary compliance by AI developers and users with government-issued guidance.

Key features of the Act include:

  • The establishment of an AI Strategy Headquarters, chaired by the Prime Minister, to coordinate national policy.
  • A requirement to publish a national AI Basic Plan to promote the research and development of AI technologies.
  • Government support for AI developers through the provision of AI infrastructure, such as shared computing power, large datasets and testing environments.
  • Promoting voluntary cooperation among stakeholders with government-led AI strategies.
  • A commitment to promoting the research, development and utilisation of AI through international cooperation.

Read more about the AI Promotion Act here (Japanese only).

Texas Senate passes the Responsible AI Governance Act

On 27 May 2025, the Texas Senate passed the Texas Responsible AI Governance Act (TRAIGA), which is set to become one of the most comprehensive state-level AI laws in the US. The bill now awaits approval by the Governor before it can be enacted. If signed into law, it will take effect on 1 January 2026.

TRAIGA adopts a risk-based framework modelled partly on the EU AI Act, and establishes a comprehensive regime for AI transparency and risk management with a broad scope and strict prohibitions. It regulates AI systems used in sectors such as employment, healthcare, housing, finance, and education. TRAIGA will apply to developers and deployers of AI systems, as well as to any person or business organisation doing business in Texas or offering products or services to its residents.

Notable provisions include:

  • Prohibited uses: Harmful uses of AI will be banned, including systems that incite self-harm, infringe constitutional rights, discriminate on protected grounds, or facilitate deepfake abuse.
  • Restrictions on Government use: Government bodies will be prevented from using AI to conduct social scoring or biometric surveillance, unless specific consent is given.
  • Transparency obligations: Public agencies must disclose when AI is used to make consequential decisions about public services.
  • Creation of a state AI council: This body will advise on policy and ethical AI use and development.
  • Introduction of a regulatory sandbox: A program will be established to support responsible AI development in a supervised, lower-risk environment.
  • Enforcement: TRAIGA will be enforced by the Texas Attorney General, with penalties ranging from $10,000 to $200,000 and a 60-day cure period for violations.

Read TRAIGA in full here.

UK FCA publishes research note on use of AI in consumer-facing financial services

On 30 May 2025, the UK Financial Conduct Authority (FCA) published a research note discussing the use of large language models (LLMs), such as GPT-3.5 and GPT-4, in consumer-facing financial services.

In the research note, the FCA tested LLMs in two pilot projects:

  • Simplifying financial terminology, which involved GPT models rewriting complex terms in plain English.

  • Delivering savings guidance via a chatbot compared to guidance given by a standard website Q&A.

The FCA found that while LLMs showed promise in making financial content easier to understand, their effectiveness depended heavily on how the tools were integrated into the customer journey. It emphasised the need to validate LLM outputs through a combination of human oversight and automation.

Alongside the note, the FCA published an engagement paper proposing an AI live testing framework (similar to its regulatory sandbox) to allow firms to trial LLM deployments in monitored environments. The proposal is currently open for stakeholder feedback, with further details on the pilot's implementation expected later this year.

Read the full research note here.

European Commission proposes Council decision to adopt Council of Europe AI Convention

On 3 June 2025, the European Commission (EC) submitted a proposal for a Council Decision to authorise the EU to join the Council of Europe's Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. The Convention, adopted in May 2024, is the first binding international treaty on AI, and establishes safeguards to ensure AI is used in line with democratic values and fundamental rights.

The proposal marks a formal step toward EU accession, which must now be approved by the Council of the EU. If adopted, the EU would join 46 member states and observers in aligning with the Convention's principles, including transparency, accountability, and human oversight of AI systems. The EC has also proposed a Council declaration to clarify how the Convention will apply alongside the EU's own AI Act.

The Council is set to begin deliberations in the coming months.

Read the full proposal here.

UN Council of Presidents publishes report on the transition to Artificial General Intelligence

On 3 June 2025, the UN Council of Presidents of the General Assembly published a landmark report encouraging immediate international action to prepare for the transition to artificial general intelligence (AGI). The report warns that AGI could emerge within this decade and present both transformative opportunities and serious global risks.

Prepared by a high-level expert panel, the report outlines six key risk areas:

  1. An irreversible loss of human control over AGI.

  2. New forms of weapons of mass destruction.

  3. Infrastructure vulnerabilities to cyberattacks launched with the aid of AGI.

  4. Economic destabilisation if AGI use is concentrated in the hands of only a few nations or corporations.

  5. Existential risks posed by fully autonomous AGI.

  6. The loss of shared global benefits, such as advancing medicine or addressing poverty and climate change, if the use of AGI is not properly governed.

Key recommendations of the report include:

  • Convening a dedicated UN General Assembly session on AGI.

  • Establishing a Global AGI Observatory to monitor development and flag risks.

  • Creating a certification system for secure and trustworthy AGI.

  • Proposing a UN Framework Convention on AGI governance.

  • Exploring a dedicated UN agency to coordinate global efforts.

The report has been formally submitted to the President of the UN General Assembly, with initial discussions already underway. Official briefings and potential follow-up action are expected in the coming months as the UN considers how to respond to the panel's recommendations.

Read the full report here.

German data protection authority publishes AI Questionnaire for AI and GDPR compliance

On 20 May 2025, Germany's Federal Data Protection Authority published a detailed questionnaire to help organisations assess their compliance with the General Data Protection Regulation (GDPR) when deploying AI systems.

The 17-page document covers key topics including the initial assessment of data processing, the legal basis for processing, data protection supervision, controller and processor relationships, internal management processes, monitoring and accountability structures, transparency, reliability, security, and data subject rights. It is intended as both a self-assessment and a documentation tool to support GDPR-aligned AI development and transparency, helping organisations identify and address data protection risks early in the AI lifecycle.

While non-binding, the questionnaire reflects growing regulatory expectations around AI accountability and signals that EU data protection authorities are likely to apply increased scrutiny to AI deployments.

Read the questionnaire here (only available in German).

European Commission opens consultation on using AI in justice systems

On 28 May 2025, the European Commission initiated a call for evidence to gather stakeholder input on the use of AI and other digital technologies in EU civil and criminal justice systems. The call for evidence seeks feedback on managing AI systems that may pose unacceptable risks under the EU AI Act, particularly in sensitive judicial contexts.

The initiative aims to inform a forthcoming "Digital Justice 2025-2030" strategy, which will provide guidance to national authorities on adopting AI tools in their justice systems while safeguarding fundamental rights and ensuring compliance with EU law. The strategy is intended to improve the efficiency, resilience, and quality of justice systems while increasing access to justice for businesses and the public. The Commission plans to adopt it in the fourth quarter of 2025.

The call for evidence is open from 26 May to 23 June 2025.

Read the full consultation here.

China's tightened facial recognition regulations come into force

On 1 June 2025, China's new facial recognition regulations titled "Security Management Measures for the Application of Facial Recognition Technology" (the Measures) came into effect. Introduced by the Cyberspace Administration of China and the Ministry of Public Security, the Measures aim to address increasing concerns around privacy, mass surveillance, and the misuse of facial data, whilst promoting more responsible and transparent deployment of AI-driven identification systems.

Key provisions of the Measures include:

  • Transparency: Businesses using facial recognition technology must provide clear and comprehensive information to individuals before collecting their biometric data.

  • Necessity and purpose of use: Use must be justified as essential and limited to defined purposes.

  • Restrictions on data storage and transfers: Data should be stored locally and retained only for the shortest period necessary.

  • High-volume oversight: Businesses processing data from over 100,000 individuals must register with provincial authorities and report on data practices.

  • Protection of minors: Parental consent and enhanced safeguards are required for facial data relating to children under 14.

  • Ban on coercion: Any misleading or mandatory use of facial recognition is prohibited.

Data handlers in China whose stored facial information, processed via facial recognition technology, reaches 100,000 individuals must register with the authorities within 30 working days of reaching that threshold; those who reached it before 1 June 2025 must register by 14 July 2025. Registrations are submitted online through a dedicated portal, and applicants must provide a facial recognition technology application record form and a personal information protection impact assessment report.

Read the Measures here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.