AI View - January 2025

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

20 January 2025


Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

This special edition focuses on the UK Government's AI Opportunities Action Plan.

It also brings you:

  1. US announces measures to power data centres and impose export controls on AI chips

  2. UK Government launches consultation on reforming copyright law to balance AI innovation with creators' rights

  3. India unveils AI governance framework for public consultation

UK Government launches AI Opportunities Action Plan  

On 13 January 2025, the UK Prime Minister, Sir Keir Starmer, launched a new AI Opportunities Action Plan (the "Action Plan") aimed at harnessing AI to deliver improvements across the UK.

This blueprint for the nationwide development and integration of AI is backed by leading tech firms and includes a £14bn investment, which is expected to create over 13,000 jobs and build the necessary digital infrastructure. The Action Plan also aims to establish AI Growth Zones ("AIGZs") and leverage AI for public sector efficiency, including for road maintenance and healthcare improvements.

Central to the plan is the commitment to implement all 50 recommendations made by Matt Clifford, the author of the Action Plan. These recommendations outline a strategy for AI to drive significant economic and societal benefits. The government's approach marks a departure from previous strategies, focusing on active investment into the AI sector, including the establishment of AIGZs for accelerated infrastructure development, a twenty-fold increase in public compute capacity, and the creation of a National Data Library.

Key initiatives of the plan include:

  • AI Growth Zones: The establishment of designated areas with improved access to the energy grid to support AI projects, with the first zone announced in Culham, Oxfordshire. These zones address the critical need for energy to power AI data centres and aim to attract global investment and innovation in AI.

  • Sovereign Compute Capacity: A commitment to expand the UK's computing capacity - where data is processed and stored in the UK - by 20-fold to support AI development, including the construction of a new supercomputer. Additionally, Nscale, an AI data centre firm, plans to invest £2bn in building sovereign computing facilities to reduce British dependence on US cloud providers.

  • Public Data Utilisation: Plans to grant researchers and AI companies access to anonymised public data sets, including NHS patient information, to stimulate AI innovation and improve public service productivity.

  • Creation of a National Data Library: An initiative to unlock the value of public data in a secure manner, supporting the development and application of AI across various sectors.

  • Integration of AI in Public Services: Plans to use AI to reduce administrative tasks for public sector workers, allowing them to focus more on service delivery. This includes using AI for road inspections to identify potholes and in healthcare for quicker diagnosis of diseases like cancer.

  • AI Energy Council: The formation of a council chaired by the Science and Energy Secretaries, working with energy companies to meet the demands of AI technology, aligning with clean energy goals and including exploring the use of small modular reactors.

  • Appointment of AI Opportunities Advisor: Matt Clifford appointed to lead a team across government departments to capitalise on AI opportunities and embed its usage in public services.

The Action Plan forms a core part of the government's industrial strategy and serves as the initial component of the forthcoming Digital and Technology Sector Plan, set to be released in the coming months.

Read the full announcement here and the AI Opportunities Action Plan here.

US announces measures to power data centres and impose export controls on AI chips

US President, Joe Biden, signed an executive order on 14 January 2025 to support the energy needs of AI data centres by making available federal sites for private sector development of AI infrastructure.

The order sets out, among other things, the following mandates:

  • The Department of Defense and Department of Energy each identifying three sites for private companies to develop AI data centres, with proposals for these sites to be selected by 30 June 2025 and the AI infrastructure operational by the end of 2027.

  • Establishing a reporting system for electric grid connections to support AI data centres, prioritising permit approvals for AI infrastructure on federal sites, and ensuring these sites have access to adequate transmission facilities by the end of 2027.

The order follows an announcement of a new regulatory framework to impose export controls on advanced AI technology and computing chips. The US government has introduced these new restrictions in an effort to protect against national security risks posed by AI and prevent advanced AI technology from reaching "malicious actors".

The framework introduces measures to:

  • Require US companies to obtain authorisation for exporting advanced computing chips, with exceptions for allies and supply chain continuity, and for open models and orders under a specific computing power threshold.

  • Impose security conditions to safeguard the storage of advanced models, aiming to prevent their diversion and misuse, and ensure the secure spread of AI capabilities in alignment with US national security and foreign policy objectives.

Read the full executive order here and the announcement of export controls here.

UK Government launches consultation on reforming copyright law to balance AI innovation with creators' rights

The UK Intellectual Property Office has launched a public consultation on copyright law in relation to AI, comprising 47 questions on various copyright issues, including clarifying the application of UK copyright law to text and data mining ("TDM").

This consultation addresses the need for AI developers to access large and varied datasets, which may contain copyright materials, for training purposes. It proposes a new TDM exception that would allow the use of copyright materials for TDM for any purpose (including AI training), provided rightsholders have not opted out and the materials are freely available. This approach aims to balance AI innovation with creators' rights, broadly aligning with the EU's position, and would introduce transparency requirements for AI developers to identify their training data.

The consultation also explores issues such as copyright protection for AI-generated works, liability for copyright infringement, and the need for labelling AI-generated content. This reflects the Government's efforts to make the UK competitive for AI development and to ensure the UK does not fall behind other nations by maintaining a restrictive stance on copyright law.

This consultation is part of the Government's broader strategy to enhance AI development in the UK, as outlined in its AI Opportunities Action Plan.

Read the full article by Simmons & Simmons here.

India unveils AI governance framework for public consultation

India's Ministry of Electronics and Information Technology ("MeitY") has unveiled a report for public consultation, aiming to establish a comprehensive framework for AI governance in India. Stakeholders are invited to provide feedback before 27 January 2025.

The report identifies gaps in the application of current laws to AI, including the need for stronger enforcement capabilities against AI misuse and for a unified, whole-of-government approach to manage AI's broad impacts, ensuring transparency, responsibility, and non-discrimination across its applications.

Recommendations for the India-specific AI regulatory framework include:

  • AI Governance Mechanism: Forming an Inter-Ministerial AI Coordination Committee or Governance Group to coordinate AI governance efforts across the government.

  • Systems-Level Understanding: Establishing a Technical Secretariat to serve as a technical advisory body, pooling expertise from various sectors to map the AI ecosystem and assess risks.

  • AI Incident Database: Creating an AI incident database to collect and analyse real-world AI-related problems, focusing on harm mitigation.

  • Transparency and Governance: Engaging the industry to promote voluntary commitments towards transparency and responsible governance in the AI ecosystem, including regular disclosures and evaluations of AI systems.

  • Technological Measures: Exploring technological solutions to AI-related risks, such as content provenance tools, evaluating their viability and engaging with global counterparts.

  • Legal and Regulatory Framework: Forming a sub-group to work with MeitY on enhancing the legal and regulatory framework under proposed legislation like the Digital India Act, focusing on grievance redressal and regulatory capacity.

Read the full report here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.