AI View - 30 January 2024

Our fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.


Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

This edition brings you:

  • The major changes in the leaked EU AI Act text
  • European Commission set to launch AI Office
  • FTC launches inquiry into biggest AI investments and partnerships
  • ICO consultation series on generative AI and data protection
  • Singapore proposes Model AI Governance Framework for Generative AI
  • Australian Government publishes interim response to Safe and Responsible AI consultation

EU AI Act Leaked: Key Changes and Implications for High-Risk AI Systems, General Purpose AI, and Deepfakes

In our 11 December 2023 edition of AI View, we reported that a political deal had been reached on the EU AI Act but that the finalised text was yet to be agreed. The final text of the proposed regulation, which is set to become a global standard, was leaked online earlier this week.

Three key changes are:

  1. How high-risk AI systems, which have the most onerous obligations and restrictions applied to them, are classified. The AI systems listed in Annex III will not be considered high risk if they do not pose "a significant risk of harm to the health, safety or fundamental rights of natural persons", including if they do not "materially influence" the outcome of decision-making. A provider seeking to benefit from this exemption will have to document its risk assessment of the AI system before it is placed on the market and register (i) itself and (ii) the AI system in the EU database.

  2. "General Purpose AI" (GPAI) concept introduced. The definition of GPAI models focuses on the generality of the model and its capability to competently perform a wide range of distinct tasks (foundation models, for example). A new chapter provides specific rules for GPAI, including a requirement for providers to keep and provide technical documentation and to establish a copyright policy. The chapter also sets out additional requirements for GPAI models that pose systemic risks. A GPAI model will be deemed to pose a systemic risk if (i) it has high-impact capabilities (i.e. the computation used for its training exceeds 10^25 floating point operations) or (ii) it is designated as such by the Commission. Providers of such models will have additional obligations to comply with.

  3. A hot topic and one of the most controversial uses of AI, deepfakes have come under the scrutiny of the EU AI Act. Deployers of AI systems that generate deepfakes will need to disclose that the content has been artificially created or manipulated by labelling the output accordingly and disclosing its origin.
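The compute threshold in point 2 above can be illustrated with a short sketch. The `6 × parameters × training tokens` estimate used below is a widely cited heuristic for training compute and is an assumption for illustration only; the leaked text itself simply sets a threshold of 10^25 floating point operations.

```python
# The leaked text's systemic-risk threshold: 10^25 floating point
# operations of training compute.
SYSTEMIC_RISK_FLOPS = 1e25


def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate using the common ~6*N*D heuristic
    (about 6 FLOPs per parameter per training token). This heuristic is
    an illustrative assumption, not part of the Act's text."""
    return 6 * parameters * training_tokens


def poses_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if estimated compute meets or exceeds the 1e25 threshold.
    (Separately, the Commission may designate a model as posing a
    systemic risk regardless of compute.)"""
    return estimate_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_FLOPS


# Example: a hypothetical 70-billion-parameter model trained on
# 2 trillion tokens -> 6 * 70e9 * 2e12 = 8.4e23 FLOPs, below the threshold.
```

On this heuristic, only models trained with well over an order of magnitude more compute than today's typical open models would cross the threshold automatically.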

European Commission set to launch AI Office

The European Commission is on the verge of establishing the European Artificial Intelligence Office (AI Office), a crucial step in enforcing the upcoming EU AI Act.

Its main function will be to oversee the enforcement of the rules concerning GPAI, which includes some of the most advanced AI technologies, e.g. foundation models powering generative AI. It will develop methodologies for evaluating these powerful AI models, focusing on those that might pose systemic risks. The Commission has made it clear that the AI Office should not affect the powers of national authorities, which will enforce the remainder of the EU AI Act at Member State level.

The AI Office's other responsibilities include supporting the implementation of the EU AI Act, including the preparation of secondary legislation, standardisation requests, and the establishment of regulatory sandboxes. It will also provide secretariat services for the AI Board and facilitate collaboration with various stakeholders, including the open-source community.

Read the press release here.

FTC launches inquiry into biggest AI investments and partnerships

The Federal Trade Commission (FTC) is launching an inquiry into the deals between: (i) Microsoft and OpenAI, (ii) Amazon and Anthropic, and (iii) Google and Anthropic.

The FTC will seek information about these relationships, including their strategic rationale, governance structures, and competitive impacts, such as market share and potential sales growth. The FTC has previously emphasised the importance of guarding against practices that could undermine fair competition and innovation in the AI industry.

Access the press release here.

ICO consultation series on generative AI and data protection

The Information Commissioner's Office (ICO) has launched the first chapter of a consultation series on generative AI (GenAI), defined as AI models that can create new content, e.g. text, computer code, audio, music, images, and videos. The first chapter focuses on the application of data protection law to the development and use of GenAI models, particularly the lawful basis for processing in this context, and legitimate interests specifically.

The first chapter of this series is open for feedback until 1 March 2024, and stakeholders, including developers, users, legal advisors, and civil society groups, are invited to contribute.

Access the consultation here.

Singapore proposes Model AI Governance Framework for Generative AI

On 16 January 2024, the AI Verify Foundation and Infocomm Media Development Authority of Singapore published their draft Model AI Governance Framework for Generative AI. The framework builds on the foundation of Singapore's existing Governance Framework for Traditional AI and has evolved to deal with the advent of GenAI. This framework introduces nine dimensions to foster a trusted AI ecosystem, with reference to principles including explainability, transparency, and fairness. It is the latest step in Singapore's policy to adopt a practical approach to AI governance.

Singapore is accepting feedback on its draft GenAI governance framework until 15 March 2024.

Read the draft Framework here.

Australian Government publishes interim response to Safe and Responsible AI Consultation

On 17 January 2024, the Australian Government announced its interim response to the 2023 Safe and Responsible AI in Australia consultation, setting out a targeted strategy which focuses specifically on high-risk applications. In contrast to the EU's single-regulation approach, the Australian Government proposes to address potential harms in areas such as the justice system with AI safety guardrails, whilst allowing existing applications of AI in lower-risk sectors to continue largely unimpeded. A temporary expert advisory group has been set up to guide the development of the guardrails.

The strategy reflects the growing international consensus on a risk-tiered regulatory approach to AI systems. Download the Australian Government's response here.

If you have any questions or feedback or would like to discuss any of these updates further, please contact Minesh Tanna, Global AI Lead at Simmons & Simmons.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.