AI View - September 2024

Our fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

12 September 2024

Publication

Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

(NEW!) Our EU AI Act AI Literacy Programme – a first-of-its-kind eLearning solution

One of the key requirements under the EU AI Act (which came into effect on 1 August 2024) is for organisations to ensure, by February 2025, that all staff involved in AI have a sufficient level of AI literacy.

Our AI Group has launched an AI Literacy Programme, comprising a three-tiered approach including an eLearning Foundational AI Literacy Training Course (which can plug into your existing eLearning systems).

Further information and our promotional video can be found here. To learn more, contact our AI Team.

AI View

This edition brings you:

  1. Proposed new AI regulations for Australia

  2. US, UK and EU sign first global AI treaty

  3. Amendments to EU's AI compliance initiative to encourage participation

  4. Compilation of AI resources published by the library of the UK’s House of Commons

  5. Public engagement initiative on AI and copyright law by the Hong Kong Government

Proposed new AI regulations for Australia

The Industry and Science Minister for Australia has released a paper outlining options for AI regulation in Australia.

Broadly, the options are: adapting existing regulatory frameworks, introducing new framework legislation, or enacting a new AI-specific law similar to the EU AI Act.

The paper outlines several possible measures:

  • Mandatory guardrails:

    • The proposal includes ten mandatory guardrails for high-risk AI uses, designed to mitigate risks while allowing for human control and oversight.
    • High-risk AI applications, such as generative AI and AI used in recruitment, are highlighted as areas requiring stringent safeguards.
  • Transparency and accountability:

    • Organisations developing or using high-risk AI systems would need to inform end users about AI usage, be transparent about the data and models involved, and allow individuals to challenge AI decisions.
  • Fines for non-compliance:

    • The possibility of fines or other repercussions for non-compliance is mentioned in the paper but remains subject to further discussion.

The government will consult over the next four weeks to determine the best legislative approach.

Separately, the Australian Government has introduced a Voluntary AI Safety Standard to give practical guidance to all Australian organisations on how to safely and responsibly use and innovate with AI. The Standard sets out 10 voluntary guardrails and guidance on how to apply them.

The Australian Government’s proposal marks a significant step towards responsible AI regulation, aiming to balance innovation with safety and accountability. As consultations proceed, the Government will refine its approach to ensure that AI technologies are developed and used in line with Australian values and expectations.

The paper on AI regulation can be found here. The Voluntary AI Safety Standard can be found here.

US, UK and EU sign first global AI treaty

The US, UK and EU have signed the world’s first binding treaty on AI, the Council of Europe’s Framework Convention on AI and Human Rights, Democracy, and the Rule of Law. This historic agreement aims to ensure AI development and use across both public and private sectors adhere to principles of transparency, human rights, democracy, and the rule of law.

The Convention requires signatories to implement measures to assess and mitigate the adverse impacts of AI, as well as to provide effective remedies for violations of fundamental rights.

Signatories will now therefore need to consider how they comply with this Convention, whether through new legislative measures or determining that existing legal requirements (e.g. human rights law) are sufficient. In the UK, the impact of this Convention is likely to be considered alongside existing discussions around the possibility of new UK AI regulation.

The Convention can be found here.

Amendments to EU's AI compliance initiative to encourage participation

The European Commission has shared a new draft of the voluntary pledges designed to anticipate some of the AI Act’s legal requirements for high-risk AI applications, which will come into force in August 2026.

The initiative, known as the AI Pact, had been delayed due to more urgent matters but is now being expedited.

Key updates include:

  • Softened Language: The new version of the pledges has been adjusted based on feedback from potential signatories. The language has been toned down, making the commitments less prescriptive and more closely aligned with the AI Act. The pledges are divided into “core” commitments, mandatory for all participants, and additional, optional ones. Participants have more flexibility in how they communicate their compliance to the public.

  • Core and Additional Commitments: Core commitments now include caveats such as mapping AI systems in critical use areas “to the extent feasible.” Additional pledges for developers and deployers have been made conditional, focusing on “known and reasonably foreseeable risks.” For instance, the commitment to implement a logging system should be “appropriate for the intended purpose of the system.”

  • Generative AI Systems: Providers of generative AI systems are asked to ensure their outputs are detectable as artificially generated or manipulated, using technical solutions like metadata and watermarks. Deployers must label AI-generated content, with separate pledges for deep fakes and synthetically produced text. They are also required to inform individuals about AI-made decisions that adversely impact their health, safety, or fundamental rights.

The softened language aims to increase the initiative’s uptake. The AI Pact had struggled to attract signatories, who were wary of public scrutiny. The Commission is now pushing to secure the first wave of participants by the end of September.

Further information on the AI Pact can be found here.

Compilation of AI resources published by the library of the UK’s House of Commons

The House of Commons Library has published a comprehensive reading list on AI, covering various aspects of AI technology, its applications, and the associated ethical and regulatory considerations:

  • Machine Learning and Deep Learning: Explains how AI systems develop and improve over time, with applications in voice and image recognition.
  • Generative AI: Discusses advanced AI models like GPT that generate new content based on unstructured data.
  • UK Government Publications: Includes strategy documents, white papers, and guidance for regulators on implementing AI regulatory principles.
  • AI in Different Sectors: Covers the use of AI in supply chains, creative industries, defence, education, energy, financial services, healthcare, retail, and transport.
  • Safety and Ethics: Addresses the ethical implications of AI, including transparency, accountability, and the potential for embedded biases.

Public engagement initiative on AI and copyright law by the Hong Kong Government

The Government of the Hong Kong Special Administrative Region has launched a two-month public consultation on enhancing the Copyright Ordinance (Cap. 528) to address the development of AI technology.

The consultation aims to further develop Hong Kong’s copyright regime to support AI technology, particularly generative AI, ensuring it encourages creativity and innovation while balancing the interests of copyright owners and the public.

This initiative follows the implementation of the Copyright (Amendment) Ordinance 2022, which strengthened copyright protection in the digital environment. The current consultation is part of the broader strategy outlined in the Chief Executive's 2023 Policy Address and aligns with the National 14th Five-Year Plan to position Hong Kong as a regional IP trading centre.

The consultation document addresses several key issues related to generative AI and copyright:

  • Copyright protection of AI-generated works.
  • Copyright infringement liability for AI-generated works.
  • Possible introduction of specific copyright exceptions.
  • Other issues related to generative AI.

If you have any questions (or feedback) or would like to discuss any of these updates further, please contact Minesh Tanna, Global AI Lead at Simmons & Simmons.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.