AI View: August 2024

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

23 August 2024


Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

This edition brings you:

  1. California AI bill moves closer to becoming law

  2. South Africa publishes national AI policy framework

  3. UNESCO opens consultation on Guidelines for the Use of AI Systems in Courts and Tribunals

  4. Hong Kong Monetary Authority issues guiding principles for consumer protection regarding the use of generative artificial intelligence

  5. Philippines proposes Deepfake Accountability and Transparency Act

  6. China registers over 190 generative-AI models

California AI bill moves closer to becoming law

On 15 August, California’s proposed Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB-1047) took a significant step towards becoming law when it was passed, with amendments, by the California State Assembly Appropriations Committee. It is currently under review by the full Assembly.

If passed, SB-1047 would require developers of large AI models (those trained using computing power costing more than $100 million, or fine-tuned at a cost of more than $10 million) to comply with various safety standards and requirements that include:

  • implementing the capability to promptly and fully shut down the AI model;

  • establishing safety and security protocols;

  • annually undergoing independent audits assessing the developer’s compliance with the Act, the developer’s internal controls, and recommendations on how compliance may be improved; and

  • submitting an annual certificate of compliance with the requirements of the Act.

Failure to comply with the Act may result in civil penalties sought by the California Attorney General.

The amendments that were made to SB-1047 include:

  • removal of the proposed new body within the Government Operations Agency that would govern compliance with the Act;

  • removal of criminal penalties for perjury; and

  • amending the developer’s standard of compliance from “reasonable assurance” to “reasonable care”.

SB-1047 has faced significant opposition, including from industry and academic experts, due to concerns that its requirements are restrictive and would impede technological progress.

You can read the proposed bill here.

South Africa publishes national AI policy framework

On 14 August, the South African Department of Communications & Digital Technologies published a national AI policy framework that aims to harness AI technologies to drive economic growth and improve societal well-being.

The framework consists of the following 12 strategic pillars to implement the policy’s objectives:

  1. Talent Development / Capacity Development: aims to ensure that South Africa has a robust AI talent pool by integrating AI into education, developing specialised AI training programmes, and fostering collaborations between academia and industry.

  2. Digital Infrastructure: aims to enhance AI innovation by developing supercomputing infrastructure and investing in advanced connectivity technologies.

  3. Research, Development and Innovation: aims to advance technological capabilities and innovation through the establishment of research centres, public-private partnerships and financial support for AI research and startups.

  4. Public Sector Implementation: aims to improve government efficiency through the integration of AI in administration and the development of ethical and effective AI deployment guidelines.

  5. Ethical AI Guidelines Development: aims to ensure responsible and ethical AI use by developing guidelines that adhere to human rights principles and address bias mitigation and transparency.

  6. Privacy and Data Protection: aims to protect personal information through standardised data practices, strengthened data protection laws, and transparency in AI data usage and storage practices.

  7. Safety and Security: aims to protect citizens and infrastructure by implementing cybersecurity measures and risk management frameworks.

  8. Transparency and Explainability: aims to build public trust in AI by promoting systems that are understandable and transparent.

  9. Fairness and Mitigating Bias: aims to ensure equitable AI deployment by developing bias mitigation methods and using inclusive and diverse training data sets.

  10. Human Control of Technology: aims to maintain human oversight over critical AI decisions and develop decision-making frameworks that prioritise human judgment.

  11. Professional Responsibility: aims to foster responsible AI development and use through a code of conduct for AI professionals and ethics training.

  12. Promotion of Cultural and Human Values: aims to align AI development with societal values, focusing on value-based AI to promote well-being, equality, and environmental sustainability, as well as stakeholder engagement in AI policy-making.

You can read the framework here.

UNESCO opens consultation on Guidelines for the Use of AI Systems in Courts and Tribunals

On 2 August, UNESCO opened its recently published draft guidelines for the use of AI systems in courts and tribunals for public consultation. The guidelines are prompted by a 2023 UNESCO survey revealing that while 93% of judicial operators are familiar with AI technologies, only 9% reported that their organisations had AI-related guidelines or training, highlighting the need for guidance on the use of AI systems in courts and tribunals.

The guidelines aim to ensure that AI deployment in courts and tribunals adheres to justice, human rights, and rule of law principles. UNESCO has opened these guidelines for public consultation until 5 September 2024 and is inviting feedback from judicial professionals, legal experts, and the public.

The draft guidelines are available here, and feedback on the guidelines can be provided here.

Hong Kong Monetary Authority issues guiding principles for consumer protection regarding the use of generative artificial intelligence

On 19 August, the Hong Kong Monetary Authority (HKMA) issued guiding principles to authorised institutions (primarily banks) on consumer protection relating to the use of generative AI in customer-facing applications.

The circular outlines principles under four major areas to ensure consumer protection when using generative AI in customer-facing applications:

  1. Governance and Accountability: The board and senior management of authorised institutions should remain accountable for generative AI decisions and processes, and should ensure that the scope of customer-facing generative AI applications is clearly defined, responsible-use policies are in place, and generative AI models are properly validated.

  2. Fairness: Measures should be taken to ensure that generative AI models produce fair outcomes for customers that avoid unfair bias, and that customers have the option to opt out of using generative AI and request human intervention.

  3. Transparency and Disclosure: Authorised institutions should disclose the use of generative AI to customers, explaining its use and purpose as well as the limitations, in order to enhance the customers’ understanding of the generative AI outputs.

  4. Data Privacy and Protection: Authorised institutions should implement effective measures to protect customer data, complying with the Personal Data (Privacy) Ordinance and other relevant recommendations and good practices issued by the Office of the Privacy Commissioner for Personal Data.

You can read the guiding principles here.

Philippines proposes Deepfake Accountability and Transparency Act

On 29 July, the proposed Deepfake Accountability and Transparency Act was referred to the Philippines’ Committee on Information and Communications Technology. The bill aims to regulate deepfakes (technologically created or altered content that falsely depicts a person’s speech or conduct) by requiring clear disclosures in the use of any deepfake content.

The bill seeks to require the following disclosures:

  • For content with audio and visual elements, there must be both a verbal and written statement indicating the content has been altered, along with a description of the alteration and a link or icon indicating AI involvement.

  • Visual-only deepfakes require a clear written statement detailing the alteration and a link or icon indicating AI involvement.

  • Audio-only content must have a verbal statement at the beginning and additional statements every two minutes, explaining the alterations.

Failure to include these disclosures may lead to a fine of 5 million pesos. Removing or altering a disclosure carries a fine of 2 million pesos for a first violation and 3 million pesos for each violation thereafter.

You can read the proposed bill here.

China registers over 190 generative-AI models

On 12 August, Zhuang Rongwen, director of the Cyberspace Administration of the People’s Republic of China, confirmed that more than 190 generative AI models have been registered with the regulator in China and made available to the public, with more than 600 million registered users.

Zhuang also discussed the next steps to safeguard the healthy development of AI in China, which include efforts to:

  • promote domestic AI technology, including the development and construction of independent and controllable computing chips and algorithm frameworks;

  • promote the application of AI across various industries including education and healthcare; and

  • improve the safety standard system in relation to classification, security testing, and emergency response measures.

You can read the interview here (English translation available).


If you have any questions (or feedback) or would like to discuss any of these updates further, please contact Minesh Tanna, Partner and Global AI Lead at Simmons & Simmons.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.