AI View - October 2024

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

04 October 2024


Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

1. Artificial Intelligence Civil Rights Act of 2024 introduced in the US Senate  

On 24 September, the Artificial Intelligence Civil Rights Act of 2024 was introduced in the United States (US) Senate.

The Act aims to restrict companies' use of algorithms for decision-making, ensure pre- and post-deployment testing of algorithms, and help eliminate and prevent bias. It seeks to regulate AI algorithms used in decisions across various sectors, including employment, banking, healthcare, criminal justice, public accommodations, and government services.

In particular, the Act:

  • prohibits developers and deployers from offering, licensing, or using covered algorithms that discriminate based on protected characteristics;

  • requires developers and deployers of algorithms to complete independently audited pre-deployment evaluations and post-deployment impact assessments to identify, evaluate, and mitigate any potential discriminatory outcomes;

  • requires developers and deployers to mitigate any harms identified by the pre-deployment evaluations and impact assessments, and to ensure that the algorithms perform consistently with their publicly advertised purpose;

  • provides individuals a right to appeal an algorithmic decision to a human decision-maker; and

  • authorises the Federal Trade Commission to enforce the Act.

Read the updates on the Artificial Intelligence Civil Rights Act here.

2. California's Governor vetoes controversial AI safety bill

On 29 September, California Governor Gavin Newsom vetoed a hotly contested AI safety bill, officially known as SB 1047, which targets companies developing generative AI. The tech industry had raised objections to the bill on the basis that it applies only to the biggest and most expensive AI models while leaving others unregulated.

Newsom stated that the bill does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or uses sensitive data. He also criticised the fact that the bill imposes strict standards across the board, affecting even basic functions if implemented by a large system.

Newsom expressed his commitment, however, to working with the legislature on AI legislation during its next session.

Read more here.

3. Over 100 companies join new EU AI Pact ahead of EU AI Act taking effect

On 25 September, the European Commission announced that over 100 companies have signed the EU AI Pact, voluntarily adopting the principles of the EU AI Act before it comes into effect.

The voluntary pledges call on participating companies to undertake at least three core actions:

  • Implementing an AI governance strategy to encourage AI adoption within the organisation and work towards future compliance with the EU AI Act.

  • Mapping high-risk AI systems to identify AI systems likely to be classified as high-risk under the EU AI Act.

  • Promoting AI literacy and awareness among staff to ensure ethical and responsible AI development.

In addition to these core actions, more than half of the signatories committed to additional pledges, including ensuring human oversight, mitigating risks, and labelling certain types of AI-generated content, such as deepfakes.

Read the announcement here.

4. California Bill for generative AI training data signed by Governor

On 28 September, California's Governor signed Assembly Bill No. 2013 requiring transparency of training data for generative AI.

The bill defines generative AI as AI which can create synthetic content such as text, images, video, and audio based on its training data.

The bill requires both original developers and those who make substantial modifications to generative AI systems or services released after 1 January 2022 to publish, by 1 January 2026, documentation on their websites detailing the data used to train these systems.

The documentation would need to include a high-level summary of the datasets, including their sources, purposes, data points, and whether they contain any copyrighted or personal information. The disclosure should also specify whether synthetic data generation was used.

The bill exempts certain AI systems from these requirements, including those designed solely for security, integrity, aircraft operation, or national security purposes.

Read the bill here.

5. Saudi Arabia publishes guidance on deepfakes

On 18 September, the Saudi Data & AI Authority released draft guidance for public feedback on deepfakes in Saudi Arabia, outlining use cases (including malicious and non-malicious), and guidance for key groups like generative AI developers and content creators on areas such as risk assessments, consent, and watermarking artificial content.

The draft guidance also advises consumers on assessing messages, analysis of audio-visual elements, utilising content authentication tools, and reporting technology misuse.

Read the draft guidance here.

6. Bank of England announces establishment of Artificial Intelligence Consortium

On 25 September, the Bank of England announced the establishment of the Artificial Intelligence Consortium. The consortium will provide a platform for public-private engagement to gather input from stakeholders on the capabilities, development, deployment and use of AI in UK financial services.

Read the announcement here.

7. US and UAE cooperation on AI

On 23 September, the US and the UAE issued a statement on their commitment to cooperate on AI, emphasising a shared vision of safe, secure, and trustworthy AI. The statement details intentions to create a memorandum of understanding between the governments to enhance cooperation on AI initiatives.

Read the statement here.

8. Netherlands seeks input on EU AI Act's prohibition of manipulative AI systems

The Dutch Data Protection Authority is soliciting feedback on the EU AI Act's prohibitions against manipulative and exploitative AI systems, which are set to take effect on 2 February 2025. These prohibitions target AI practices that either distort individuals' decision-making capabilities through subliminal or manipulative techniques, or exploit vulnerabilities due to age, disability, or socio-economic status, leading to significant harm.

Public comments may be submitted until 17 November.

Read the call for input here.

9. Singapore issues AI Playbook for Small States

On 22 September, the Singapore Infocomm Media Development Authority (IMDA) issued the AI Playbook for Small States. The Playbook was developed by the IMDA in collaboration with Rwanda's Ministry of ICT and Innovation, with contributions from members of the Digital Forum for Small States. The Playbook is a compilation of best practices shared by small states on AI adoption, governance, and societal impacts.

Read the Playbook here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.