Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world. This edition brings you:
- Generative AI Copyright Disclosure Act introduced before US Congress
- CMA highlights competition risks in AI foundation models market
- French Data Protection Authority issues recommendations on the development of AI systems in compliance with GDPR
- China releases second iteration of Model Artificial Intelligence Law
- Update to UK Children's Code imminent, and new code safeguarding children from AI harms in development
- Denmark appoints AI Supervisory Authority
Generative AI Copyright Disclosure Act introduced before US Congress
On 9 April, California Democratic congressman Adam Schiff introduced a bill in the US Congress that would require AI companies to be transparent regarding copyrighted material used to train their generative AI models.
If passed, the Generative AI Copyright Disclosure Act would require AI companies creating or altering training datasets to provide a detailed summary of any "copyrighted works used in building generative AI systems" and the URL for the dataset if publicly available, either:
- no later than 30 days before making the generative AI system relating to the dataset publicly available; or
- in the case of generative AI systems that are already publicly available, no later than 30 days after the date of the Act.
Failure to comply with the notice requirements may result in fines of $5,000 or more.
Read the proposed bill here.
CMA highlights competition risks in AI foundation models market
On 16 April, the UK's Competition and Markets Authority (CMA) published its Technical Update Report on AI Foundation Models.
The report identifies a number of developments across the foundation models landscape and highlights the benefits of foundation models. However, the CMA outlines the following three risks to "fair, open and effective competition":
- Restriction of access to key inputs for developing foundation models, such as data and expertise, by dominant AI firms;
- Reduced consumer choice over which foundation model service they use, because the integration of foundation models into existing digital devices restricts competition in downstream deployment; and
- Reinforced positions of market power resulting from strategic investments and partnerships by firms including Google, Amazon, Microsoft, Meta and Apple.
The report outlines six updated principles to mitigate the identified risks to competition and consumer protection. These build on the seven principles proposed in the CMA's initial report of September 2023 to guide the development and deployment of foundation models, with the original 'choice' and 'flexibility' principles now combined into a single 'choice' principle. The six principles are:
- Access to critical inputs for developing foundation models, such as AI data and expertise;
- Diversity of foundation models for different needs;
- Choice of foundation models and services for consumers;
- Fair dealing, meaning no anti-competitive conduct, including the use of partnerships to restrict competition;
- Transparency regarding information about a foundation model's use and limitations; and
- Accountability of foundation model developers and deployers for outputs.
You can read the updated report here.
French Data Protection Authority issues recommendations on the development of AI systems in compliance with GDPR
On 8 April, the French Data Protection Authority (CNIL) issued its recommendations on the development of AI in compliance with the EU GDPR.
The CNIL's recommended steps include:
- Define a purpose for the AI system, depending on whether the operational use can be identified at the development stage, or whether the AI system is developed for general use.
- Determine the legal qualification and responsibility of the actors involved in the development of an AI system and the processing of personal data, whether as a data controller, joint controller, or subcontractor.
- Define a legal basis, within one of the six legal bases provided for by the EU GDPR, to allow processing of personal data through the AI system.
- Carry out tests and verifications to ensure that the processing is authorised by law in the event of data reuse, whether this involves data originally collected for another purpose or data collected from open sources on the internet.
- Respect the data minimisation principle when making AI system design choices, by selecting only relevant data for training and cleaning non-relevant data from the database.
- Define a retention period for the development phase of the AI system, and also for its maintenance and improvement, in accordance with the EU GDPR.
- Carry out a data protection impact assessment (DPIA) when necessary, considering new and/or specific risks associated with AI systems such as data misuse, data breaches, processing that may lead to discrimination caused by bias in the AI system, and other risk criteria introduced by the AI Act.
Read the recommendations on CNIL's website here (French language).
China releases second iteration of Model Artificial Intelligence Law
On 16 April, academic experts in China released the second iteration of their Model Law of Artificial Intelligence.
The initial draft, published in August 2023, was intended to regulate activities relating to AI research and development, and to the provision and use of AI within China, as well as AI activities conducted outside of China that may affect the national security of China or the interests of individuals or organisations within China.
Not yet in force, the updated draft includes new provisions on:
- protection of intellectual property rights;
- allocation of specific funds to develop AI;
- preferential tax treatments and tax credit incentives for research and development, and for investments made by AI developers; and
- establishment of AI special zones in designated locations to promote AI innovation and development.
The updated draft also adds further detail in relation to AI innovation, open-source AI platforms and the 'Artificial Intelligence Negative List', which subjects certain products and services to a licensing oversight system. The AI Negative List imposes enhanced obligations for AI developers and providers to comply with, such as possessing comprehensive AI quality and network data security management systems.
Read the updated draft Model Artificial Intelligence Law here.
Update to UK Children's Code imminent, and new code safeguarding children from AI harms in development
From 8 to 12 April, the Global Age Assurance Standards Summit hosted leading experts to discuss the latest standards and technologies behind age assurance measures that enable a safer online environment for children.
At the Summit, the Information Commissioner's Office (ICO) indicated that the UK Children's Code, a code of practice setting out standards that online services need to follow, will likely be updated in 2025 or 2026. This is to align with the upcoming overhaul of data protection legislation in the UK brought in by the Data Protection and Digital Information Bill, and could mean new or different measures for online platforms to comply with. Baroness Beeban Kidron, a campaigner for children's rights in the digital environment, suggested that a new code protecting children from AI is in its early stages.
The Summit coincided with the ICO's recent announcement of its 2024-2025 priorities for protecting children's personal information online, which focus on what social media and video-sharing platforms need to improve. The ICO highlighted the potential harms to children presented by geolocation settings being active by default, profiling children for targeted advertising, and algorithm-generated content feeds.
Read the UK Children's Code here and our recent publication on the ICO's priorities here.
Denmark appoints AI Supervisory Authority
On 10 April, Denmark appointed the Danish Agency for Digitisation as the national Supervisory Authority for AI, pursuant to the EU AI Act.
The Agency for Digitisation is distinct from the Danish Data Protection Agency and focuses on six priorities, including the development and operation of the national public digital service infrastructure, and ensuring new legislation can be implemented and administered digitally.
As part of its role, the Agency for Digitisation will be responsible for implementing and supervising compliance with the AI Act and will coordinate with relevant authorities in Denmark and in the EU.
Read the press release here (Danish language).