Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.
This edition brings you:
EU AI Act to come into force on 1 August 2024
Plans for UK legislation for AI unveiled in King's speech
AI law proposed in Turkey
Cyberspace Administration of China issues guidelines for AI
Philippines planning to issue national AI governance framework by year end
EU governments open debate on copyright and generative AI
EDPB launches AI Auditing project
EU AI Act to come into force on 1 August 2024
The EU’s flagship AI Act was published in the Official Journal on 12 July, meaning it will enter into force on 1 August.
We are increasingly advising on this important development. Watch this space for resources to help you navigate the Act.
For a high-level overview of the Act, please see our ‘Quick Guide’ here.
Plans for UK legislation for AI unveiled in King’s speech
On 17 July, King Charles III announced the Government’s plan to introduce AI-specific legislation for the UK in his address to both Houses of Parliament.
Although an AI bill was not expressly mentioned in the King’s Speech (which referred only to establishing “the appropriate legislation”), the new Labour Government is expected to introduce one, in line with Labour’s manifesto pledge to bring forward new legislation governing the development of AI technologies.
A standout commitment in the manifesto is that “Labour will ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models”. The manifesto also pledges to ban “sexually explicit deepfakes”.
This hands-on approach would mark a departure from the former Conservative Government’s stated “Pro-Innovation Approach” and its decision not to introduce AI-specific legislation. It signals an intention to impose binding regulation on the “handful” of companies developing the most “powerful” AI models, although how those models are to be defined has not yet been made clear.
No further details have been provided on the status of a UK AI bill, and its timing remains uncertain. It also remains to be seen whether this targeted approach to AI regulation will persist throughout the Labour Party’s time in government.
Read more here.
AI law proposed in Turkey
On 26 June, an Artificial Intelligence Bill was introduced in the Turkish Parliament and is presently under committee review. The Bill aligns closely with the EU AI Act, classifying AI systems according to their risk levels and imposing specific requirements on each category. It also introduces strict penalties for failure to comply with conformity assessment and registration requirements for AI systems.
If adopted, this will be the first law or regulation in Turkey specifically governing AI, marking a significant legislative step in a rapidly growing AI market. The risk-based approach, adopted from the EU model, highlights Turkey’s intention to foster AI innovation while ensuring that AI systems are closely and safely monitored.
Cyberspace Administration of China issues guidelines for AI
On 3 July, the Cyberspace Administration of China (CAC) issued its Guidelines for the Construction of a National Comprehensive Standardisation System for the AI Industry.
The Guidelines aim to:
strengthen the planning of AI standardisation work systems,
accelerate the construction of a standards system that meets the needs of high-quality development in the AI industry, and
make better use of the supporting role of standards in advancing technological progress, promoting enterprise development, leading industrial upgrading, and ensuring industrial security.
To this end, the Guidelines set targets including the formulation of more than 50 new national and industry standards by 2026, as well as “at least 20” international standards, in order to accelerate the “high-quality development of the AI industry”.
The Guidelines outline basic support standards to be introduced for data services, smart chips, sensors, computing equipment, computing centres, system software, development frameworks, and software-hardware collaboration. They also address key technical standards for machine learning, knowledge graphs, large models, natural language processing, intelligent speech, computer vision, biometric recognition, human-machine hybrid intelligence, intelligent agents, swarm intelligence, cross-media intelligence and embodied intelligence.
See here for the approved version (currently available only in Chinese).
Philippines planning to issue national AI governance framework by year end
In its latest National AI Strategy Roadmap, the National Economic and Development Authority of the Philippines has proposed a national AI governance framework, which is likely to be published by the end of the year.
The AI Strategy Roadmap builds on a previous iteration published in 2021, updating it to address issues specific to generative AI and to expand the discussion of AI ethics and governance. In addition, the Trade and Industry Secretary, Alfredo Pascual, noted that the framework will establish “the approved scope and limitations of what developers and stakeholders can do”.
Read more here.
EU governments open debate on copyright and generative AI
On 3 July, EU governments debated issues including rights reservations, content protection, and copyright and generative AI, and considered whether a new liability regime for copyright infringement should be introduced in relation to AI-generated output.
The debate took the form of a detailed questionnaire presented by the government of Hungary.
More substantive discussions on these issues are due to take place in September and October this year, with governments given until 1 October to submit written comments. The comments will then be compiled into a “stocktaking paper”, to be presented on 11 December.
Read more here.
EDPB launches AI Auditing project
On 27 June, the European Data Protection Board (EDPB) introduced the AI Auditing project, which aims to help parties understand and assess data protection safeguards in the context of the AI Act.
In particular, it may help data protection authorities (DPAs) to inspect AI systems by defining a checklist-based methodology for auditing algorithms and by proposing tools to enhance transparency.
Read more here.