Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.
This edition brings you:
European AI Office releases first draft of the General-Purpose AI Code of Practice
European Commission launches consultation on application of AI systems and prohibited AI practices
UK ICO releases new report on AI tools in recruitment
DSIT publishes new report aimed at futureproofing AI products
Open Source Initiative publishes parameters for open source AI and machine-learning systems
Texas proposes new AI bill
'AI factory' proposals lodged to create EU network of computing power, data and talent
Ofcom warns generative AI providers in UK of responsibilities under Online Safety Act
Canada launches Canadian Artificial Intelligence Safety Institute
European AI Office releases first draft of the General-Purpose AI Code of Practice
On 14 November 2024, the European AI Office published the first draft of the General-Purpose AI Code of Practice (GPAI CoP). This draft, prepared by independent experts from four thematic working groups, marks the first of four drafting rounds in a process that began on 30 September 2024 and will continue until April 2025. The GPAI CoP aims to translate the key GPAI model requirements in the AI Act, such as copyright-related rules, into specific and actionable measures.
The next steps involve discussing the draft during the Code of Practice Plenary with 1,000 stakeholders, including EU Member State representatives and international observers, who will have the opportunity to provide verbal feedback. On 22 November 2024, key insights from these discussions will be shared with the full Plenary. In parallel, stakeholders will have until 28 November 2024 to submit any written feedback, which will be considered for further refinement of the draft.
The draft GPAI CoP is available to download here, and you can view the GPAI Q&A here.
European Commission launches consultation on application of AI systems and prohibited AI practices
On 13 November 2024, the European Commission (Commission) initiated a consultation process aimed at developing guidelines on the definition of AI systems and prohibited practices under the EU AI Act (AIA). The purpose of the consultation is to assist businesses in complying with the provisions in the AIA, ahead of the application of the relevant provisions on 2 February 2025. This includes:
- Addressing prohibited AI practices, such as harmful manipulation, social scoring, and certain biometric uses, with the aim of protecting individual autonomy;
- Facilitating consistent and effective enforcement and application of the AIA across the EU; and
- Helping national competent authorities as well as providers and deployers to comply with the AIA’s rules.
The Commission’s AI Office has invited stakeholders, such as AI system providers, businesses, national authorities, academia, research institutions and civil society, to submit their practical examples and input. These contributions will feed into the Commission's guidelines on the definition of AI systems and prohibited AI practices under the AIA, which are expected to be published in early 2025.
The consultation is open for submissions until 11 December 2024.
Read the announcement and consultation here.
UK ICO releases new report on AI tools in recruitment
On 6 November 2024, the UK Information Commissioner’s Office (ICO) published a report on the use of AI-powered tools in recruitment, specifically examining how these tools comply with UK data protection laws. Through consensual audits with developers and providers of AI sourcing, screening and selection tools, the ICO has highlighted benefits in using AI to enhance the recruitment process, whilst also identifying potential risks to privacy and information rights for individuals.
The audits revealed data protection compliance and privacy risk management as areas requiring the most improvement, alongside key concerns about the lack of accuracy testing, potential discrimination through search functionalities and the unlawful processing of personal data without candidates' knowledge.
Both AI providers and recruiters are encouraged to follow the recommendations in the report, which include emphasising fair processing of personal information, providing clear explanations of processing activities, minimising data collection and conducting privacy impact assessments.
You can read the full report here.
DSIT publishes new report aimed at futureproofing AI products
On 6 November 2024, the UK’s Department for Science, Innovation and Technology (DSIT) published a report titled "Assuring a Responsible Future for AI". The report provides a comprehensive analysis of the UK AI assurance market, identifying opportunities for growth and outlining strategies to drive this emerging industry forward.
The report highlights the UK government's commitment to mitigating the risks associated with AI and driving adoption of safe and responsible AI. The report sets out three targeted actions that will be implemented to increase demand and supply in the UK market:
- Develop a one-stop-shop AI Assurance Platform – this platform will aim to raise awareness of, and demand for, AI assurance tools and services;
- Increase the supply of third-party AI assurance – DSIT aims to do this by (i) working with industry to develop a roadmap towards trusted third-party AI assurance and (ii) collaborating with the AI Safety Institute to advance AI assurance research, development and diffusion; and
- Enable the interoperability of AI assurance – DSIT’s objective is to develop a Terminology Tool for Responsible AI, to help assurance service providers navigate the international governance ecosystem.
The full report can be found here.
In addition, DSIT has launched a consultation on its recently developed AI Management Essentials (AIME) tool, a self-assessment tool aimed at helping organisations assess and implement responsible AI management systems and processes. The consultation states that AIME will provide clarity on practical steps for establishing a baseline of good practice, as well as distilling key principles from existing AI regulations, standards and frameworks into an accessible resource for businesses.
You can find out more about the AIME tool and consultation here.
Open Source Initiative publishes parameters for open source AI and machine-learning systems
The Open Source Initiative (OSI) has published a set of parameters for open source AI in an effort to ensure that AI systems labelled as “open source” can continue to be accessed by developers, deployers and end users.
The key parameters include allowing users to freely:
- Use the AI system for any purpose and without having to ask for permission;
- Study how the system works and inspect its components;
- Modify the system for any purpose, including to change its output; and
- Share the system for others to use with or without modifications, for any purpose.
The announcement also clarifies the preferred form for making modifications to machine-learning systems, namely:
- Making available sufficiently detailed information about the data used to train the system, such that a skilled person could build a substantially equivalent system;
- Providing the complete source code used to train and run the system, including (i) the full specification of how the data was processed and filtered and (ii) how the training was carried out; and
- Making available the model parameters, such as weights or other configuration settings.
Whilst the definition is not currently legally enforceable, the OSI’s aim is to limit the labelling of AI systems as “open” to those that genuinely allow free and open access by users.
Read the full announcement here.
Texas proposes new AI bill
On 8 November 2024, Texas State Representative Giovanni Capriglione announced the proposal of a new AI bill titled “The Texas Responsible AI Governance Act” (TRAIGA). Following the passage of the Colorado AI Act in May 2024, this bill draws inspiration from the EU AI Act and creates a framework for developers, distributors and deployers of high-risk AI systems (HRAIS) by establishing a standard of reasonable care to mitigate risks of algorithmic discrimination.
The key requirements under TRAIGA include:
- Conducting semi-annual HRAIS impact assessments;
- Carrying out detailed record-keeping and reporting;
- Ensuring AI literacy training and educational outreach (for state agencies and local governments);
- Ensuring that intentional and substantial modification to a HRAIS triggers additional responsibilities;
- Disclosing the use of HRAIS to consumers and providing a right to explanation for AI-driven decisions; and
- Developing an AI risk management policy.
TRAIGA also proposes the establishment of a new state agency to govern the ethical and legal use of AI and to provide guidance to state agencies, called the Texas Artificial Intelligence Council. The bill will now be formally introduced for the 2025 legislative session in Texas.
If passed, the bill would come into force from 1 September 2025.
You can view the full bill here.
'AI factory' proposals lodged to create EU network of computing power, data and talent
On 11 November 2024, the European Commission (Commission) announced that it had received seven proposals for the establishment of AI factories, aimed at enhancing AI innovation across the EU.
These AI factories will leverage the EU's network of European High-Performance Computing (HPC) supercomputers to create a robust ecosystem for training advanced AI models and developing AI solutions. By integrating computing power, data and talent, the AI factories are expected to significantly boost the computing resources available for AI in Europe, benefiting startups, industry and researchers.
The proposals, submitted by 15 Member States and two associated states, indicate countries’ strong interest in building AI factories around existing or new supercomputers, led in particular by the following nations:
- Finland (with Czechia, Denmark, Estonia, Norway, and Poland);
- Luxembourg;
- Sweden;
- Germany;
- Italy (with Austria and Slovenia);
- Spain (with Portugal, Romania, and Türkiye); and
- Greece.
The next step is for an independent panel of experts to evaluate the proposals, with the European High-Performance Computing Joint Undertaking expected to announce the selected AI factories in December 2024, aiming for a launch in early 2025.
Read the announcement here.
Ofcom warns generative AI providers in UK of responsibilities under Online Safety Act
On 8 November 2024, Ofcom published an open letter to online service providers operating in the UK on how the UK's Online Safety Act applies to generative AI and chatbots. The Act regulates platforms that allow user interaction with AI-generated content, classifying them as "user-to-user services". It also covers AI tools functioning as "search services" and those generating pornographic material, which must implement age assurance.
The letter recommends that UK providers conduct risk assessments, manage harmful content and enable user reporting. The first duties will begin to take effect from December 2024, once Ofcom has published its final Illegal Harms Risk Assessment Guidance and Codes of Practice, with key assessments due by mid-March 2025. With this in mind, Ofcom warns of enforcement action for non-compliance and encourages providers to prepare by implementing content moderation and safety measures.
You can read the full letter here.
Canada launches Canadian Artificial Intelligence Safety Institute
On 12 November 2024, Canada announced the establishment of the new Canadian Artificial Intelligence Safety Institute (CAISI), tasked with enhancing AI safety and encouraging responsible use. The CAISI, which aims to address AI risks such as disinformation, cybersecurity breaches and election interference, is part of a broader CAD 2.4 billion investment to support responsible AI development and adoption in Canada.
The CAISI will leverage Canada's AI research ecosystem, collaborating with national and international partners, including the National Research Council and the Canadian Institute for Advanced Research. The institute will conduct applied and government-directed research to assess AI risks and develop safety measures, and engage with other national AI safety institutes around the world.
The initiative complements Canada's existing AI strategies, including the proposed Artificial Intelligence and Data Act and the Voluntary Code of Conduct.
You can read the full press release here.
