AI View June 2024

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

03 June 2024

Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

This edition brings you:

  1. EU AI Act approved by Council of the EU

  2. Outcomes of the AI Seoul Summit

  3. Automated Vehicles Act becomes law in the UK

  4. Council of Europe adopts first international treaty on AI

  5. California Senate passes California AI Transparency Act and Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

  6. Japan’s ruling party publishes second AI White Paper

EU AI Act approved by Council of the EU

On 21 May, the Council of the EU approved the flagship EU AI Act.

Once signed by the presidents of the European Parliament and the Council, the EU AI Act will be published in the EU’s Official Journal in the coming days and enter into force twenty days after its publication.

As a reminder, some provisions of the EU AI Act will apply six months after it enters into force. In particular, the prohibitions on certain AI practices take effect at that point, and providers and deployers of AI systems will have to ensure a sufficient level of AI literacy among their staff and other persons involved in the operation and use of AI systems on their behalf.

See more here, and please contact us if you would like to discuss the Act; we are heavily involved in advising on it.

Outcomes of the AI Seoul Summit, 21-22 May 2024

South Korea and the UK co-hosted the AI Seoul Summit on 21-22 May, a follow-up to the inaugural AI Safety Summit held at Bletchley Park in late 2023. At the Summit, the leading AI nations discussed a number of topics, including the regulation and innovation of ‘frontier’ AI. ‘Frontier’ AI, defined as highly capable general-purpose AI models or systems that can perform a wide variety of tasks and match or exceed the capabilities present in the most advanced models, was identified as both offering vast opportunities and posing systemic risks.

The Summit resulted in:

  • Seoul Declaration for AI: Promotes safe, secure AI development, addressing global challenges and human rights protection, with nations committed to sharing AI safety information.

  • Seoul Statement of Intent: World leaders' commitment to advance AI safety science through international collaboration.

  • Seoul Ministerial Statement: Participating countries' pledge to advance AI safety, innovation, and inclusivity, agreeing on shared risk thresholds and international cooperation on AI safety science.

  • Frontier AI Safety Commitments: Signed by 16 AI tech companies, pledging responsible AI development, including safety frameworks, threat red-teaming, cybersecurity investment, and public reporting of model capabilities. Signatories will refrain from deploying models if risks cannot be sufficiently mitigated.

Read the declarations here and commitments here.

Automated Vehicles Act becomes law in the UK

On 20 May, the Automated Vehicles (AV) Act became law in the UK, setting the stage for self-driving vehicles on British roads by 2026. The Act removes one of the key roadblocks to the adoption of AVs in the UK by providing a clear legal framework for drivers and developers. It is hoped that the Act will encourage further investment in an industry forecast to be worth £42 billion and to create 38,000 jobs in the UK by 2035.

Of note, the Act sets out a new authorisation and licensing regime for self-driving vehicles. An AV will only be authorised if it passes a ‘self-driving test’. For example, an AV which provides only ‘hands off, eyes on’ driving assistance will not be authorised, as the driver must still monitor the road. Manufacturers who are authorised must ensure ongoing compliance and report significant updates or modifications to the relevant authority.

Liability for accidents caused by an AV is a topic that often captures the public’s attention. The Act clarifies that, whilst an AV is in self-driving mode, the driver will not be held responsible for how the AV drives. This liability will instead be assumed by insurance providers, software developers and automotive manufacturers, as first outlined in the Automated and Electric Vehicles Act 2018.

Read the Act here.

Council of Europe adopts first international treaty on AI

On 17 May, the Council of Europe adopted the first ever international treaty on artificial intelligence (the Convention). The Convention sets out a legal framework that covers activities within the entire lifecycle of AI systems and is aimed at ensuring the respect of human rights, the rule of law and democracy in the use of AI systems.

The Convention:

  • adopts a risk-based approach for the lifecycle of AI systems and applies to both the public and private sectors, focusing on responsible innovation and addressing potential negative impacts;

  • establishes transparency and oversight requirements tailored to specific contexts, including measures to identify AI-generated content and ensure accountability for adverse impacts; and

  • requires parties to adopt measures to ensure that democratic institutions and processes are not undermined by the use of AI systems, including the principle of separation of powers, respect for judicial independence and access to justice.

The Convention covers the use of AI systems in the public and private sectors. Parties to the Convention may opt to be directly obliged by the relevant convention provisions, or take other measures to comply with the treaty’s provisions while fully respecting their international obligations regarding human rights, democracy and the rule of law.

The Convention will be open for signature in Vilnius, Lithuania, on 5 September 2024.

Read the full text here.

California Senate passes California AI Transparency Act and Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

On 21 May, the California Senate passed two key pieces of AI-related legislation:

  1. The California AI Transparency Act, which aims to protect consumers by giving them the ability to determine if certain materials have been generated by AI. The Act requires large generative AI system providers to label AI-generated content with imperceptible (yet machine detectable) embedded disclosures, and to provide an AI detection tool to enable users to query whether content was created by a generative AI system.

  2. The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which aims to regulate the development and use of advanced AI models. The Act mandates developers to make certain safety determinations before training AI models, comply with various safety requirements and report AI safety incidents. It also establishes the Frontier Model Division within the Department of Technology for oversight of these AI models.

Both Acts are now being considered by the Assembly. If approved, they will be sent to the Governor for signature.

Read the full text of the Acts here and here.

Japan’s ruling party publishes AI White Paper on “becoming the world’s most AI-friendly country”

On 21 May, the Liberal Democratic Party (LDP) of Japan published its second AI White Paper, outlining Japan's strategy to “become the world's most AI-friendly country”.

The paper was produced by the AI project team of the LDP, which was established in January 2023 to consider Japan’s AI strategy and provide policy recommendations.

The recommendations proposed in the paper aim to enhance Japan’s competitiveness through the use of AI, while ensuring the safe application of these technologies.

Key recommendations outlined in the paper include:

  • Public-private collaboration: Encourage cooperation to collect, maintain and update data, and to develop and utilise AI in sectors where Japan excels, such as automobiles, robotics, and materials development. This should also apply to crucial areas like medicine, finance and agriculture.

  • Infrastructure development: The government should offer financial and policy support to ensure the construction of data centres and other infrastructure necessary to support AI technologies. The report emphasises that computing infrastructure for processing and storing data needs to be developed domestically to ensure the safe management of critical data and improve processing times. The government should also prioritise energy-efficient infrastructure and consider future energy needs.

  • International coordination on AI safety: Establish a high-level network of AI Safety Institutes (AISIs) in Japan, backed by the government. AISIs should consider safety assessments and standards, and produce educational materials on appropriate AI use. There should also be international coordination with other countries to ensure AI safety, with due attention given to harmonising international standards around audits and third-party certification of AI technologies.

Read the full paper here and an outline of the key recommendations here.


Join us at London International Disputes Week (LIDW)!

Simmons & Simmons is co-hosting (together with TrialView) a panel discussion at LIDW on the subject of AI adoption by law firms and in-house legal teams, and readers of AI View are cordially invited to attend.

The event will take place at our London offices on 6 June at 14:00, and will be followed by afternoon tea.

Experts from the Simmons & Simmons AI Group will be joined by TrialView and in-house legal counsel to discuss how AI is unlocking new ways of working in a disputes context, how law firms and their clients are adapting, and how legal teams can adopt AI effectively, safely and sustainably.

You can register for the event here – registration is free and not restricted to those registered for LIDW.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.