AI View | April 2025

Our fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

30 April 2025


Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

This edition brings you:

  1. European Data Protection Board releases report on LLMs

  2. Texas House of Representatives passes Texas Responsible AI Governance Bill

  3. African Declaration on AI signed at Global AI Summit on Africa

  4. Saudi Arabia launches consultation on Global AI Hub Law

  5. European Commission opens consultation for guidance on general-purpose AI models under EU AI Act

  6. France’s CNIL publishes recommendations on AI use in public services

  7. European Union considers simplification of EU AI Act

1. European Data Protection Board releases report on LLMs

On 10 April 2025, the European Data Protection Board (EDPB) released its AI Privacy Risks and Mitigations Large Language Models (LLMs) Report (the Report). This is the latest instalment of the EDPB’s ‘Support Pool of Experts’ programme, an initiative aimed at collating guidance for data protection authorities (DPAs) from a range of industry-leading experts. The Report is designed to assist DPAs in understanding LLMs, how they function, and the risks associated with their use.

The Report proposes a risk management methodology to identify, assess and mitigate privacy and data protection risks with the use of LLMs. It also provides use cases to explore the application of this risk-management framework in a range of practical scenarios.

As well as serving as a practical tool for developers and users of LLMs, the Report also supports the requirements of GDPR Articles 25 and 32. While it is not intended to replace a Data Protection Impact Assessment (DPIA), as required under Article 35, it can be used to complement the DPIA process.

Read the full Report here.

2. Texas House of Representatives passes Texas Responsible AI Governance Bill

On 23 April 2025, the Texas State House passed the Texas Responsible AI Governance Bill (TRAIGA).

The version of TRAIGA passed by the House is a substantively revised version of the original bill filed in December 2024, which was wider in scope. Notable changes between the original and current versions include:

  • Removal of “high-risk AI systems”: The original version contained provisions for “high-risk AI systems” – AI systems that play a substantive role in consequential life-affecting decisions. The current TRAIGA no longer covers these systems.
  • Disclosure requirements for governments: The original version required all AI developers and deployers to disclose specific information to consumers. This obligation now applies only to government AI developers and deployers.
  • Removal of annual impact assessment and risk mitigation policy: The original version required all AI deployers to complete an impact assessment for every AI system deployed, both annually and within 90 days of any substantial modification. Developers and deployers were also required to implement a risk management policy. These provisions no longer exist in the current TRAIGA.

TRAIGA introduces a number of prohibitions on specific uses of AI. Some of the prohibitions apply only to the government’s use of AI, including:

  • ‘Social scoring’: TRAIGA prohibits the government’s use of AI for ‘social scoring’ – classifying people or classes of people according to their social behaviour.
  • Certain biometric identification: TRAIGA prohibits the government from using AI systems to gather biometric data from the internet or publicly available sources without consent or using biometric data to uniquely identify specific individuals.

Other prohibitions under TRAIGA, applying to all public and private commercial developers and deployers of AI systems, include:

  • Manipulation of human behaviour: AI systems that intentionally aim to encourage a person to commit harm to themselves or another person or to commit crimes.
  • Anti-constitution use: AI systems that are for the sole purpose of infringing rights under the US constitution.
  • Unlawful discrimination: AI systems that are intended to unlawfully discriminate against a protected class of individuals.
  • Certain explicit content: AI systems with the sole purpose of producing or distributing certain illegal sexually explicit content and child pornography.

Possible infringements of TRAIGA will be investigated and enforced by the Office of the Texas Attorney General. Penalties for each violation range from $10,000 to $200,000.

TRAIGA will take effect on 1 January 2026.

Read TRAIGA here.

3. African Declaration on AI signed at Global AI Summit on Africa

On 3 and 4 April 2025, the inaugural Global AI Summit on Africa (the Summit) was hosted in Kigali, Rwanda. The Summit was themed ‘AI and Africa’s Demographic Dividend: Reimagining Economic Opportunities for Africa’s Workforce’.

On 4 April 2025, the African Declaration on AI (the Declaration) was announced. Over 50 countries are signatories to the Declaration which seeks to:

  • leverage the potential of AI to drive innovation and competitiveness to advance Africa’s economies, industries, and societies
  • position Africa as a global leader in ethical, trustworthy, and inclusive AI adoption
  • foster the sustainable and responsible design, development, deployment, use, and governance of AI technologies in Africa

The Declaration sets out guiding principles for AI adoption, as well as outlining the signatories’ objectives, key commitments, and strategies for developing data and compute infrastructure and institutional cooperation.

The Declaration commits that a $60 billion Africa AI Fund will be established, consisting of public, private and philanthropic capital “to create a safe, inclusive, and competitive African AI economy”.

To ensure proper governance, the signatories propose establishing a continent-wide knowledge sharing platform, including policy toolkits and regulatory sandboxes to develop best practices. An AI Council will also be established to ensure “strategic alignment with continental and global digital transformation efforts”.

See the Declaration here.

4. Saudi Arabia launches consultation on Global AI Hub Law

On 14 April 2025, Saudi Arabia’s Communications, Space & Technology Commission (CST) invited government and private entities, investors, interested parties and the general public to contribute to its draft Global AI Hub Law. This forms part of the Kingdom’s wider strategy of creating an attractive regulatory environment for investments in technology, with a particular focus on data centres and AI.

The draft Global AI Hub Law provides a regulatory framework for the establishment and operation of sovereign and foreign-affiliated data centres, or “AI Hubs”, within Saudi Arabia. The draft Global AI Hub Law outlines three types of AI Hubs:

  • Private Hubs: Designed to attract countries to host their data and services in Saudi Arabia.
  • Extended Hubs: Operated by authorised entities serving themselves or others under a guest country’s laws.
  • Virtual Hubs: Managed by local service providers but subject to the jurisdiction of a designated foreign state.

Data embassy initiatives are not new to the Middle East, and the principles underpinning the regional development of independent free zones are well established. The AI Hub concept, however, represents a significant step in the Kingdom’s efforts to attract digital infrastructure and content-heavy industries into Saudi Arabia.

Read about the Global AI Hub Law here. The consultation is open until 14 May 2025 and contributions can be made here.

5. European Commission opens consultation for guidance on general-purpose AI models under EU AI Act

On 22 April 2025, the European Commission (the Commission) launched a consultation relating to the provisions in the EU AI Act on general-purpose AI (GPAI) models. Designed to clarify existing provisions in the EU AI Act, and obtain targeted input from stakeholders, the product of the consultation will be new Commission guidelines on GPAI models.

Based on the outcomes of the consultation, the new guidelines will address a number of key questions, including:

  • the definition of a GPAI model
  • the concepts of placing on the market and fine-tuning
  • the role of the AI Office in facilitating compliance with the EU AI Act
  • how signing the Code of Practice – if approved by the AI Office and AI Board – might serve as a benchmark for regulatory compliance

The Commission is seeking input from all interested stakeholders, including providers of GPAI models, downstream providers of AI systems, public authorities and experts. The deadline for submitting a response is 22 May 2025. The guidelines and final Code of Practice are expected to be published around August 2025.

Contributions can be made here.

6. France’s CNIL publishes recommendations on AI use in public services

On 18 April 2025, the French Data Protection Authority (CNIL) published recommendations arising from its 2023-24 regulatory sandbox dedicated to AI projects in public services. This third edition of the sandbox programme forms part of a wider action plan to support AI innovators. The CNIL has summarised the recommendations it provided to the companies selected for the programme.

The recommendations address legal and technical challenges such as creating databases for AI training, human intervention requirements, and data minimisation in generative AI applications. They are designed to offer practical frameworks for deploying AI in the public sector while ensuring compliance with data protection standards.

The CNIL will shortly announce the projects selected for its next sandbox, dedicated to the silver economy (the sector of the economy serving people aged over 50).

Read the recommendations here.

7. European Union considers simplification of EU AI Act

Having introduced the EU AI Act, the European Commission (the Commission) is now exploring ways to simplify the law to reduce the administrative burden on affected parties.

During the Committee on Legal Affairs Ordinary Meeting on 9 April 2025, the Commission’s Technology Chief Henna Virkkunen explained that the Commission is "committed" to the EU AI Act’s main goals but is now looking into the "administrative burden" and considering "some reporting obligations [that] we could cut."

The Commission will seek industry views "where regulatory uncertainty is hindering the development and the adoption of AI" and feed that into a wider effort to review and potentially roll back a number of digital rulebooks at the end of this year. The move is part of a wider relaxation of the EU AI regulatory framework. Earlier this year, the Commission axed plans to implement a strict liability scheme for harm caused by AI (the AI Liability Directive).

It remains to be seen if or to what extent the provisions of the EU AI Act might be rolled back.

See here for more details.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.