AI View: December 2024

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

18 December 2024


Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

This edition brings you:

1. UK Government launches consultation on AI and copyright

2. New EU Product Liability Directive comes into force

3. Monetary Authority of Singapore publishes review of AI model risk management practices in banks

4. Malaysia’s Ministry of Digital launches National AI Office

5. New AI Standardisation Committee formed in China

6. Brazilian Senate approves updated text of new AI bill

7. UK judiciary comments on potential impacts of AI on the justice system and IP law

UK Government launches consultation on AI and copyright

On 17 December 2024, the UK Government released its consultation on clarifying the relationship between AI and copyright law. The consultation comprises 47 questions and centres on clarifying UK copyright law so as to give rights holders greater control over the use of their content in AI training, while supporting lawful access to high-quality data for AI developers in the UK. The UK Government recognises that some intervention is necessary to provide clarity and legal certainty, but has not settled on the precise nature of that intervention or any new legislation.

The Government's proposals to clarify current copyright law on text and data mining, developed with the support of the Intellectual Property Office, the Department for Science, Innovation and Technology, and the Department for Culture, Media and Sport, include:

  • Doing nothing: the UK Government states, however, that it does not wish to prolong the legal uncertainty associated with the copyright infringement risks of AI model training.
  • Strengthening copyright licensing: This option would mean AI models could only be trained on copyright works if rights holders have granted an express licence. This would give rights holders greater control, but would make it harder for AI developers to train their models in the UK.
  • Introducing a broad data mining exception: This would allow data mining on copyright works, including for AI training and commercial use, without rights holders' permission (as in Singapore). The UK Government describes this option as requiring "radical changes to the UK copyright framework".
  • Introducing a "rights reservation" mechanism: This would allow AI developers to train their models on material to which they have lawful access, but only to the extent that rights holders have not expressly reserved their rights. This option would be coupled with transparency requirements about the works used for training and standards for the reservation of rights. It mirrors the approach followed by the EU and is the UK Government's preferred approach, subject to views expressed in the consultation.

The consultation also seeks views more generally on (1) the need for development of standards for the reservation of rights; (2) (collective) licensing of copyright works for AI training; (3) the need for AI developers to disclose the sources of their training material and at what level of detail; (4) the treatment of models trained outside the UK; (5) copyright protection for AI-generated outputs (including whether protection for computer-generated works in the UK should be retained); (6) whether liability for AI-generated outputs which infringe copyright is sufficiently clear; (7) labels for AI-generated outputs; and (8) whether the current framework is sufficient to tackle deepfakes and digital replicas.

The consultation is open for responses until 25 February 2025.

Read the full text of the consultation here. If you have any questions about the consultation or are interested in submitting a response and shaping UK copyright law in this area, please do get in touch with our Copyright team.

New EU Product Liability Directive comes into force

Just over two years after the European Commission (EC) published its proposal, the new Product Liability Directive came into force on 8 December 2024.

Introduced to update the Product Liability Directive 1985, the new legislation is the product of public consultations, a study on civil liability for AI and a report by experts on 'Liability for AI and other emerging technologies'. One of the EC's key objectives was to make the regime fit for the digital age, specifically with regard to new technologies like AI.

Key developments include:

  • Broadened "product" definition: the new regime's definition of a "product" has broadened to include software, whether sold as an individual product or embedded in hardware. This includes software in applications, operating systems and AI systems, but excludes free and open-source software.

  • Data damage covered: to account for the rise in intangible assets, the destruction or corruption of data (e.g. digital files deleted from a hard drive) is also covered, including the cost of recovering or restoring such data, provided the data is not used for professional purposes.

  • Presumptions as to causation and defect: the new legislation introduces a number of rebuttable presumptions that help claimants more easily establish that a product is defective and/or has caused damage.

The new Directive applies to products placed on the market from 9 December 2026. Member States have until that date to transpose it into their national laws.

Read the EC announcement here, and the S&S update from June 2024 here.

Monetary Authority of Singapore publishes review of AI model risk management practices in banks

After conducting a review of AI model risk management practices in banks earlier this year, the Monetary Authority of Singapore (MAS) published its findings and best practice for AI model risk management on 5 December 2024.

The report concluded that as AI continues to develop, financial institutions should:

  • implement robust governance;

  • conduct comprehensive risk identification and update AI risk management frameworks to align with technological advancements;

  • implement specific controls for managing generative AI risks;

  • perform compensatory testing and enhance legal agreements for third-party AI; and

  • review controls in non-AI specific areas to take AI developments into account, including legal and compliance, technology and cyber risk management policies.

The MAS is considering supervisory guidance for financial institutions next year to build on the findings of its paper. Although the review focused on select banks, it can offer useful guidance for other financial institutions when developing and deploying their own AI.

Read the full report here.

Malaysia's Ministry of Digital launches National AI Office

On 12 December 2024, Malaysia's Ministry of Digital announced the launch of the National AI Office (NAIO) as the Prime Minister called for a shift towards "sustainability, integrity, and equitable wealth distribution", stressing the importance of developing AI with local talent.

As Malaysia seeks to establish itself as a hub for regional AI development, NAIO will provide strategic planning, research, and development, and oversee regulation. Over the next year, it will aim to produce seven deliverables including a code of ethics, an AI regulatory framework, an AI impact study for the Malaysian Government and a five-year AI technology action plan.

For the next 12 months, NAIO will be incubated with MyDIGITAL Corporation, another Ministry of Digital agency, as it works towards the deliverables and develops its future strategy.

Read the announcement here.

New AI Standardisation Committee formed in China

On 12 December 2024, the Chinese Ministry of Industry and Information Technology announced the establishment of its AI Standardisation Technical Committee.

Designed to create and revise industry standards, the Committee's remit includes AI evaluation and testing, operation and maintenance, data sets, hardware, software platforms, large models and managing AI risks.

The first session of the Committee is composed of 41 members and includes representatives from tech companies such as Baidu, Alibaba Group Holding, Tencent Holdings and Huawei Technologies, along with representatives from academic institutions such as Peking University.

The announcement is available here (in Chinese).

Brazilian Senate approves updated text of new AI bill

On 10 December 2024, the Federal Senate of the Brazilian National Congress approved changes to an AI bill, which aims to regulate the use of AI systems in Brazil, following the submission of a technical analysis.

The bill proposes dividing AI systems into risk levels, with different regulation for each category. As with the EU AI Act, 'high-risk' AI systems, such as autonomous vehicles, AI-powered border control systems and AI used in medical diagnosis, would be subject to stricter rules given their possible impact on human life and fundamental rights. AI systems posing 'excessive risk', such as autonomous weapons systems, would be prohibited outright.

The bill would also provide rights for those affected by certain AI systems, including the right to privacy and the protection of personal data. Individuals affected by 'high-risk' systems would be entitled to additional rights, including the right to contest decisions and have these decisions reviewed by humans.

The bill will now progress to the Chamber of Deputies. Read more about the technical analysis here.

UK judiciary comments on potential impacts of AI on the justice system and IP law

On 3 December 2024, the Deputy Head of Civil Justice, the Rt Hon. Lord Justice Birss, delivered a speech on the impact and value of AI for IP and the courts at the Life Sciences Patent Network European Conference.

He highlighted that AI could help democratise justice by:

  1. Summarising information: Large language models could be harnessed to provide all judges with case summaries, which are currently available only in select courts.

  2. Future decision-making: AI advancements may enable more efficient decision-making, especially in small money or similar claims, provided there is a right of appeal to human judges.

He also examined the impact of AI on IP law, noting that the patent system may need to be re-examined to grapple with new concepts as AI shapes scientific methods and inventions.

Although there will be "many interesting developments, challenges and changes", Lord Justice Birss concluded that, if we get AI right, "there is every prospect it will be good for society as a whole".

Read the full speech here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.