AI View - January 2024

Our fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

15 January 2024

Publication

Simmons & Simmons EU AI Act resources

The EU AI Act, which reached a political deal before Christmas, is likely to impact many organisations. Our 'quick guide' to the EU AI Act has been updated to reflect this and our quick-fire webinar summarising the deal is available on-demand here. We are increasingly advising on the Act and would be happy to assist if it's on your radar.

This edition brings you:

  • EU Commission FAQs on EU AI Act;
  • Updated Product Liability Directive brings AI into scope;
  • ISO introduces international standard for AI management systems;
  • Council of Europe's Committee on Artificial Intelligence publishes draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law;
  • UKSC: AI cannot be an 'inventor' under UK patent law; and
  • UN AI Advisory Body publishes an interim report: Governing AI for Humanity.

EU Commission FAQs on EU AI Act

Previous issues of AI View have covered the development of the EU AI Act.

On 12 December 2023, the EU Commission published a helpful list of FAQs providing guidance on the details of the AI Act and on AI regulation more generally.

The FAQs cover topics such as:

  • Why AI needs to be regulated;
  • Which risks the AI Act addresses;
  • How the use of risk categories will work, and how higher-risk systems are identified;
  • How the AI Act seeks to future-proof the regime;
  • How the AI Act aims to protect fundamental rights; and
  • How the AI Act will be enforced.

Read the Commission's FAQs here.

Updated Product Liability Directive brings Artificial Intelligence into scope

On 14 December 2023, the Council of the European Union and the European Parliament reached a consensus on modernising the EU's Product Liability Directive (PLD) to address the challenges posed by the digital age and the circular economy. The PLD is intended to impose strict liability on producers and other economic operators for products that cause harm. Harm under the current regime means personal injury or death, or damage to property. However, the proposed new PLD is likely to extend this to the destruction or irreversible corruption of data not used for professional purposes.

The final text of the revised PLD is yet to be published; however, it is expected to include an expanded definition of the 'products' in relation to which claimants can pursue claims, extending it to digital manufacturing files and software, including AI.

In previous drafts, the revised PLD was said not to impose liability for free and open-source software not used commercially, unless the software was supplied for a price or in exchange for personal data used other than exclusively to improve the software's security or compatibility. It will be interesting to see whether the final draft retains this exclusion, which is intended to encourage innovation for public benefit without fear of liability.

The revised PLD will apply to products placed on the EU market 24 months after its enactment, with EU countries required to incorporate the directive into national law by that time. The agreement awaits formal endorsement by Member States' Council representatives. If approved, the text will go through the formal adoption process.

Read the press release here.

ISO introduces international standard for AI management systems

On 18 December 2023, the International Organisation for Standardisation (ISO) introduced a new management system standard (ISO/IEC 42001:2023 (paywalled)) for organisations involved in developing, providing or using AI-based products or services. The standard underwent a public consultation phase and is designed to be applicable to organisations across all industries, of any size and type.

ISO/IEC 42001 is a management system standard, sharing common features with other ISO standards such as ISO 9001 (quality management) and ISO 27001 (information security management). These standards provide a set of best practices, rules, and guidance to help organisations manage risks and operational aspects, and ISO/IEC 42001 can be integrated with these existing management systems.

The standard aims to ensure the development and use of AI systems that are trustworthy, transparent, and accountable, and stresses the importance of ethical principles like fairness, non-discrimination, and privacy. ISO/IEC 42001 is designed to help organisations identify and mitigate AI-related risks, enhancing efficiency and reducing costs. It is also intended to help maintain regulatory compliance, including with data protection requirements.

Council of Europe's Committee on Artificial Intelligence publishes draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law

On 18 December 2023, the Council of Europe's Committee on Artificial Intelligence (CAI) advanced its Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (AI Convention), which is currently being refined and is expected to undergo a third and final reading.

The AI Convention aims to create a legal framework applicable worldwide, ensuring that AI systems respect human dignity, democracy, and legal principles, with a strong focus on ethical AI practices. It also proposes controlled environments for the testing and development of AI, to encourage safe innovation without compromising human rights or democratic principles. Additionally, the framework sets out mechanisms for effective implementation, such as international cooperation, periodic reviews, and the inclusion of non-state actors in the oversight process.

The need for such a framework is underscored by various high-profile AI initiatives that raise critical questions about AI's influence on democracy, education, and healthcare. For example, AI's enormous transformative potential in fields such as genomics, exemplified by the Human Genome Project, prompts reflection on the ethical and societal implications of its advancement, reinforcing the importance of the AI Convention in guiding ethical AI development and use.

Access the draft AI Convention here.

UKSC: AI cannot be an 'inventor' under UK patent law

On 20 December 2023, the UK Supreme Court ruled that AI cannot be recognised as an 'inventor' under UK patent law, in the case of Thaler v Comptroller-General of Patents, Designs and Trade Marks.

The decision came after Dr Stephen Thaler argued that an AI system named 'DABUS' should be credited as the inventor of two creations relating to food and beverage packaging and light beacons. The Court emphasised that whether AI-generated inventions should be patentable, or whether the definition of 'inventor' should be expanded to include AI, are questions requiring legislative action.

The Court made three key findings:

  1. Under the Patents Act 1977, an 'inventor' must be a natural person, which excludes AI systems like DABUS;

  2. Thaler could not claim patents by virtue of owning DABUS, as the doctrine of accession (a mode of acquiring property that involves the addition of value to the property through labour or the addition of new materials) does not apply to intangible inventions; and

  3. Thaler's failure to name a human inventor meant that his patent applications were considered withdrawn after the prescribed 16-month period.

The ruling aligns with decisions made in the United States and Europe, and underscores the current legal stance that only human beings can be inventors for the purposes of patent law. While this does not affect the patentability of AI-generated inventions per se, it does clarify that patents can only be granted if a human is named as the inventor. This outcome has significant implications for the AI industry, highlighting the potential need for legislative updates to address the evolving role of AI in innovation.

Read the UK Supreme Court's judgment here.

UN AI Advisory Body publishes an interim report: Governing AI for Humanity

In December 2023, the United Nations Secretary-General's AI Advisory Body published its interim report, Governing AI for Humanity, calling for better alignment between international norms and AI technology development and deployment.

The report is structured around discussions of AI governance, detailing opportunities, risks, challenges, and preliminary recommendations. It focuses on the global governance deficit in the AI sector, though questions remain as to how far and how quickly regulators should move in striking the balance between precaution and progress.

The AI Advisory Body and its members will engage with all stakeholders over the coming months. Individuals, groups, and organisations are encouraged to provide feedback via the online submission form.

The deadline for submitting inputs is 1 April 2024, 04:59 GMT. The final report will be released in the summer of 2024, ahead of the Summit of the Future.

Read the interim report here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.