AI View: February 2025

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

18 February 2025

Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

EU AI Act: Articles 4 and 5 now apply

As of 2 February 2025, two key provisions of the EU AI Act (AIA) apply: the prohibition of certain AI practices (Article 5) and the requirement for AI literacy within organisations (Article 4).

The European Commission recently issued guidelines on the prohibited AI practices and on the definition of AI system. We are hosting a webinar discussing the key points from both of these guidelines. Sign up here to watch the webinar on-demand.

This edition brings you:

  1. Key outcomes of Paris AI Summit: 60 countries sign AI declaration, EU commits to spend €200bn on AI and Singapore announces new AI safety initiatives

  2. European Commission publishes guidelines on prohibited AI

  3. European Commission releases guidelines on AI system definition

  4. European Commission publishes repository on AI literacy

  5. European Commission withdraws AI Liability Directive

  6. EDPB creates task force on AI enforcement

  7. High-risk AI bill in Virginia, USA passes key legislative hurdle

  8. CNIL publishes recommendations on responsible use of AI

Key outcomes of Paris AI Summit: 60 countries sign AI declaration, EU commits to spend €200bn on AI and Singapore announces new AI safety initiatives

During the AI Action Summit in Paris (the AI Summit), which took place from 10 to 11 February 2025, 60 countries, including France, China and India, signed the AI Summit's declaration on 'inclusive and sustainable' AI. The declaration emphasised an ethical approach to AI development, with commitments to transparency and safety and to addressing the growing concern over AI's energy consumption. While the UK and US chose not to sign, citing concerns over national security and the lack of practical clarity on global governance, the overall support for the declaration highlighted a broad international commitment to responsible AI development.

Read more on the AI declaration here.

During the AI Summit, the EU also unveiled InvestAI, a new initiative to mobilise €200 billion of investment in AI. The initiative includes the creation of a new European fund worth €20 billion to support the development of AI gigafactories across Europe, which will play a central role in advancing AI by enabling the development of the most complex AI models for fields like medicine and science.

The initiative will fund four AI gigafactories equipped with the latest AI chips, and will play a critical part in Europe's AI strategy, including by creating the conditions required for innovation and the development of trustworthy AI. The European Commission also disclosed its plans to establish a European AI Research Council and an 'Apply AI' initiative to drive AI adoption across industries.

Read more about InvestAI here.

Finally, Singapore announced new AI governance safety initiatives to enhance the safety of AI applications. These include the Global AI Assurance Pilot to develop best practices for testing generative AI, a joint testing report with Japan assessing large language models (LLMs) in multiple languages, and the Singapore AI Safety Red Teaming Challenge Evaluation Report, which tests LLM safeguards for cultural and linguistic biases in the Asia-Pacific region.

Read more here.

European Commission publishes guidelines on prohibited AI

On 4 February 2025, the EC issued draft guidelines clarifying the prohibited AI practices under the AIA (effective since 2 February 2025). The guidelines, which run to around 140 pages, provide guidance on interpreting and applying the AIA prohibitions on subliminal, deceptive and manipulative techniques, social scoring, emotion recognition in workplaces and education, facial recognition databases and sensitive biometric categorisation.

Read more about the prohibited AI guidelines here or watch our webinar to understand the key points arising out of the guidelines.

European Commission releases guidelines on AI system definition

On 6 February 2025, the EC also released draft guidelines on the definition of 'AI system' under the AIA. These guidelines aim to help organisations determine whether their software systems qualify as "AI systems" under the AIA.

The guidelines provide helpful clarity on the key elements of the "AI system" definition and they confirm that certain techniques are likely to fall outside of the definition.

Read more about the AI system guidelines here or watch our webinar to understand the key points arising out of the guidelines.

European Commission publishes repository on AI literacy

On 4 February 2025, the EC launched a new AI literacy repository to support the implementation of Article 4 of the AIA, which requires AI system providers and deployers to ensure sufficient AI literacy for their staff.

The repository, compiled by the EU AI Office, gathers examples of ongoing AI literacy practices from organisations participating in the AI Pact. These practices are categorised by implementation stage (fully implemented, partially rolled out, or planned), and serve as a resource to encourage learning and knowledge-sharing among AI stakeholders. The list will be regularly updated with new practices.

While adopting practices from the repository does not guarantee compliance with Article 4, it aims to foster exchange and collaboration. In addition, the EC clarified that the publication of these practices does not imply endorsement or evaluation.

The repository is part of the EU AI Office's broader effort to support AI literacy and implementation of the AIA.

Access the full EU AI Office Repository of AI Literacy Practices here. Read about our dedicated AI literacy programme here.

European Commission withdraws AI Liability Directive

On 7 February 2025, the EC confirmed the withdrawal of the AI Liability Directive (AILD) from further consideration, as outlined in its finalised 2025 work programme. The proposed AILD sought to harmonise across the EU the liability framework for compensating harm caused by AI-related incidents.

The EC stated that it could not foresee an agreement on the directive and will instead assess whether an alternative proposal or approach should be pursued in the future. This decision, along with the withdrawal of the ePrivacy Regulation, might signal the EC's shift away from more aggressive regulation in the digital space.

In the absence of the AILD, the existing Product Liability Directive will continue to govern AI-related liability matters, alongside national civil law. While non-contractual civil liability for AI could be proposed at a later date, no immediate changes are expected.

Read more here.

EDPB creates task force on AI enforcement

On 12 February 2025, the European Data Protection Board (EDPB) established a new task force on AI enforcement during its plenary meeting. This task force expands the scope of the existing ChatGPT task force to cover AI-related data protection issues under the GDPR. It aims to coordinate the efforts of data protection authorities (DPAs) in addressing urgent AI matters, ensuring that AI technologies comply with data protection principles.

The task force will also set up a quick response team to handle sensitive AI-related issues, and will support DPAs in navigating the complexities of AI while maintaining strong data protection standards.

In addition to the AI enforcement task force, the EDPB also adopted a statement on age assurance, which outlines 10 principles to ensure compliant data processing when determining the age of an individual in order to protect minors. The EDPB is also collaborating with the EC on age verification within the framework of the Digital Services Act.

For more details on the new AI enforcement task force, read here.

High-risk AI bill in Virginia, USA passes key legislative hurdle

On 4 February 2025, the General Assembly of Virginia narrowly passed House Bill 2094, the 'High-Risk Artificial Intelligence Developer and Deployer Act' (the Bill), by a vote of 51-47. The Bill requires developers and deployers of high-risk AI systems to take steps to mitigate algorithmic discrimination, including implementing risk management policies, conducting impact assessments and disclosing key information to consumers interacting with AI systems.

The Bill targets AI systems used in critical sectors like employment, healthcare and housing. Most notably, it imposes civil penalties for non-compliance, including fines ranging from $1,000 to $10,000, and a 45-day cure period for businesses to address violations.

If enacted, the Bill will take effect on 1 July 2026.

Read the Bill in full here.

CNIL publishes recommendations on responsible use of AI

On 7 February 2025, France's data protection authority, the Commission Nationale de l'Informatique et des Libertés (the CNIL), published new guidelines concerning AI and its interaction with data subjects' rights under the GDPR. These guidelines focus on two key areas: (1) providing information to data subjects, and (2) the exercise of their rights.

Key highlights include:

  • Information to data subjects: The CNIL clarified that information should be clear, concise and easily accessible, particularly for complex AI tools. It recommends using visual aids like graphs to explain data usage and AI functionality, and states that information concerning data retention periods and the rights of individuals should be provided.
  • Exercising data subject rights: The CNIL advises that, where a data controller cannot identify a data subject, efforts should be made to allow the subject to provide additional information for identification. For data subject access requests, controllers should perform a case-by-case analysis on the information they provide, including on the sources of data. The CNIL also emphasises that data used to train AI models should be anonymised whenever possible.

Read the full recommendations here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.