EU Artificial Intelligence Act – Perspectives in healthcare
The European Commission has recently proposed a regulation on artificial intelligence, with key repercussions for the healthcare and life sciences industry.
Background
On 21 April 2021, the European Commission issued a Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (the 'Proposal' or 'proposed regulation'). If adopted, this new piece of EU legislation will implement harmonised rules on artificial intelligence (AI) systems and integrate them into existing EU legislative frameworks.
The Proposal has reached the European Parliament and the Council of the European Union. It will follow the ordinary legislative procedure, which could mean several years of legislative back-and-forth between the main EU institutions before a final regulation is published and enters into force.
Rules established by the Proposal will be relevant for a variety of sectors, but with a significant impact on Healthcare & Life Sciences.
Pharma is going through an unprecedented digitalisation process, in which software is increasingly used for a variety of purposes, from (pre-)clinical research and drug development to patient monitoring and doctor-assisted workflows. Medical devices (including medical device software) and in vitro diagnostic medical devices are explicitly captured and regulated by the Proposal, with great impact on the MedTech industry. Lastly, many stakeholders supplying or using digital health technologies will be concerned, as they may see their inventions regulated as low-risk or high-risk AI systems (see more below) even though they have so far escaped the CE marking requirements applicable to medical devices.
This piece provides an overview of the proposed regulation and its impact on those industries.
Purpose
The Proposal lays down:
- harmonised rules for the placing on the market, the putting into service and the use of AI systems in the EU;
- prohibitions of certain AI practices; specific requirements for high-risk AI systems and obligations for operators of such systems;
- harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content;
- rules on market monitoring and surveillance.
AI systems are software
The Proposal defines AI system as software that meets two cumulative conditions:
- it is developed with one or more of the techniques and approaches listed in Annex I of the proposed regulation; and
- it can, for a given set of human-defined objectives, generate outputs (such as content, predictions, recommendations or decisions) influencing the environment it interacts with.
The following techniques and approaches (used for software development) are covered by Annex I of the Proposal:
- machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
- logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
- statistical approaches, Bayesian estimation, search and optimization methods.
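Read together, the definition works as a two-part cumulative test: software qualifies as an AI system only if it uses at least one Annex I technique and generates outputs towards human-defined objectives. The sketch below is purely a reading aid; the category labels and helper function are ours, not part of the Proposal's text.

```python
# Illustrative sketch of the Proposal's two-part AI-system test.
# The category names loosely mirror Annex I; the function is not
# part of the legal text and is only a reading aid.

ANNEX_I_TECHNIQUES = {
    "machine_learning",        # supervised, unsupervised, reinforcement, deep learning
    "logic_knowledge_based",   # expert systems, inference engines, symbolic reasoning
    "statistical",             # Bayesian estimation, search and optimisation methods
}

def is_ai_system(techniques_used: set, generates_outputs_for_objectives: bool) -> bool:
    """Both conditions are cumulative: an Annex I technique AND output generation."""
    uses_annex_i = bool(techniques_used & ANNEX_I_TECHNIQUES)
    return uses_annex_i and generates_outputs_for_objectives

# A rule-based clinical triage tool built on an expert system would qualify:
print(is_ai_system({"logic_knowledge_based"}, True))   # True
# Software using none of the listed techniques would not:
print(is_ai_system({"plain_procedural_code"}, True))   # False
```

Note how broad the test is: the Annex I categories reach well beyond machine learning, which is why so much ordinary healthcare software may fall within scope.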
AI systems become industrial products
This broad definition of AI systems brings within its scope a very large number of software-based technologies which have so far stayed outside the existing EU New Legislative Framework, the common legal framework for so-called industrial products.
The fact that they have so far escaped both horizontal EU legislation (e.g. Regulation (EC) No 765/2008 setting out the requirements for accreditation and market surveillance relating to the marketing of products or Regulation (EU) 2019/1020 on market surveillance and compliance of products) and vertical EU legislation (e.g. Regulation (EU) 2017/745 on medical devices or Regulation (EU) 2016/425 on personal protective equipment) is about to change.
In short, the reform establishes an interplay between the EU New Legislative Framework and AI systems from a regulatory compliance standpoint, such that all economic operators in any supply chain for AI systems will have legal and regulatory obligations to comply with.
Under the proposed regulation, AI systems may be regulated in two different ways.
First, as components of products that are already covered by the EU New Legislative Framework, such as medical devices and in vitro diagnostic medical devices. This applies to AI systems intended to be used as safety components of products subject to third-party ex-ante conformity assessment, irrespective of whether the AI system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). For this first category and, more specifically, for high-risk AI systems related to products covered by the EU New Legislative Framework, the requirements for AI systems set out in the Proposal will be checked as part of the existing conformity assessment procedures under the relevant legislation (e.g. Regulation (EU) 2017/745 on medical devices). This creates an interplay of legal and regulatory requirements, which the proposed regulation resolves as follows: safety risks specific to AI systems are covered by the requirements of the Proposal, whereas existing NLF legislation aims at ensuring the overall safety of the final product and may therefore contain specific requirements regarding the safe integration of an AI system into the final product.
Second, as products in their own right, for those AI systems designed to operate with varying levels of autonomy and to be used on a stand-alone basis. By acknowledging that an AI system (defined as software) may itself be a product, the Proposal strengthens product liability risks for manufacturers and providers and could lead to a meaningful modernisation of certain national product liability regimes, which have historically struggled to regulate software products.
In both instances, the proposed regulation assigns new, AI-specific legal and regulatory obligations to those manufacturing, importing or distributing the system on the EU market. It further introduces technical and ethical standards for software, which can also offer guidance on the liability of businesses involved either in the AI systems themselves or in their use as (safety) components of products.
Interplay between existing sectoral legislation and new AI legislation
This interrelationship heavily relies on a risk-based classification of AI systems posing an 'unacceptable', 'high' or 'medium-low' risk to the Union's values and public interests.
The first category creates a blacklist of practices ranging from AI systems that deploy subliminal techniques beyond a person's consciousness or that exploit any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, to certain 'real-time' remote biometric identification systems used in publicly accessible spaces.
The second category is represented by high-risk systems. An AI system is considered high-risk where the following cumulative conditions are fulfilled:
- The AI system is intended to be used as a safety component of a product covered by EU harmonised legislation, or is itself a product covered by EU harmonised legislation.
- The product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party assessment with a view to the placing on the market or putting into service of that product pursuant to EU harmonised legislation.
All EU harmonised legislation relevant for the purpose of 'high-risk' categorisation is listed in Annex II to the proposed regulation. The list includes Regulation (EU) 2016/425 on personal protective equipment, Regulation (EU) 2017/745 on medical devices and Regulation (EU) 2017/746 on in vitro diagnostic medical devices, among many others.
In addition, certain AI systems referred to in Annex III of the proposed regulation are also considered high-risk. Annex III covers, for example, AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons, AI systems intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services (including by firefighters and medical aid) and AI systems intended to be used by competent public authorities to assess a health risk posed by a natural person who intends to enter or has entered into the territory of a Member State.
Notably, although it partly relies on the need for a third-party assessment prior to market entry, the risk classification under the Proposal does not override the classification under sectoral law. Thus, an AI system integrated into, for example, a medical device may be classified as high-risk under the AI Regulation while retaining a lower risk category under the EU medical devices legislation.
The third category is composed of medium-low risk (non-high-risk) AI systems. In line with its proportionate approach, the regulation does not envisage as strict a regime for these systems: most of the enhanced compliance requirements do not apply, and the remaining obligations, such as transparency requirements, are minimal. For this category, the proposed regulation encourages codes of conduct to foster voluntary compliance with the requirements that are mandatory for high-risk AI systems.
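The three-tier logic described above can be summarised as a simple decision sequence. The sketch below is an illustrative reading aid only; the flags and tier names are ours, not the Proposal's.

```python
def classify_risk_tier(
    is_prohibited_practice: bool,          # Article 5 blacklist (e.g. subliminal techniques)
    is_safety_component_or_product: bool,  # covered by Annex II harmonised legislation
    requires_third_party_assessment: bool, # ex-ante conformity assessment required
    listed_in_annex_iii: bool,             # e.g. remote biometric identification
) -> str:
    """Illustrative three-tier risk classification under the Proposal."""
    if is_prohibited_practice:
        return "unacceptable"
    # High-risk route 1: both Annex II conditions are cumulative.
    if is_safety_component_or_product and requires_third_party_assessment:
        return "high"
    # High-risk route 2: explicitly listed in Annex III.
    if listed_in_annex_iii:
        return "high"
    return "medium-low"

# AI software embedded in a medical device subject to notified-body review:
print(classify_risk_tier(False, True, True, False))  # high
# A wellness app using machine learning but caught by neither annex:
print(classify_risk_tier(False, False, False, False))  # medium-low
```

The sequence makes the cumulative nature of the Annex II route visible: an AI system that is a safety component but is not subject to third-party assessment falls through to the lighter regime unless Annex III catches it.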
Enhanced compliance standards, including an AI-specific CE marking for high-risk systems
The Proposal also sets enhanced compliance standards across several aspects of the software, irrespective of whether the AI system is embedded in a product. These cover:
Data and data governance: high-risk AI systems making use of techniques involving the training of models with data should be developed on the basis of training, validation and testing data sets that meet the standards established by the proposed regulation.
Technical documentation: the technical documentation should include all the information necessary for notified bodies and authorities to verify compliance with the requirements set out in the proposed regulation. In the specific case of medical devices, a single set of technical documentation should be drawn up, including all the elements required by the Proposal as well as the information required under the applicable EU harmonised legislation (e.g. Regulation (EU) 2017/745 on medical devices).
Record-keeping and post-market monitoring: high-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events while the system is operating and throughout its lifecycle. Particular attention is paid to the occurrence of situations that may result in the AI system presenting a risk, and to facilitating post-market monitoring.
Transparency and provision of information to users: high-risk AI systems are to be designed and developed to ensure sufficient transparency, enabling users to interpret the outputs and use them appropriately. For that purpose, systems falling into this risk category should be accompanied by adequate and comprehensible instructions in a digital format.
Human oversight: all AI systems should be designed and developed to include appropriate human-machine interface tools so as to enable effective oversight by natural persons. Such oversight aims at preventing or minimising risks to health, safety or fundamental rights within the system's intended use or under conditions of reasonably foreseeable misuse. The proposed regulation acknowledges that this is only possible where the natural person fully comprehends the capacities and limitations of the system and is able to monitor its activities so as to detect anomalies, dysfunctions or unexpected performance. In this context, human oversight is also pivotal to prevent or minimise automation bias and to intervene by way of a 'stop' button or similar procedure.
Accuracy, robustness and cybersecurity: high-risk AI systems should be designed and developed to ensure an appropriate level of accuracy, robustness and cybersecurity, and to perform consistently throughout their lifecycle. While accuracy metrics are to be provided in the instructions, the Proposal sets a resilience standard as to errors, faults or inconsistencies that may occur within the system or the environment in which it operates. Resilience extends to attempts by unauthorised third parties to alter the system's use or performance by exploiting potential vulnerabilities.
Adding to an already complex picture, the European Commission is also set to institute and manage an EU-wide database in which all stand-alone high-risk AI systems should be registered. To improve the efficiency of monitoring activities, the Commission has proposed additional, specific obligations as to the documentation required for such registration.
With regard to non-high-risk products, manufacturers are given the opportunity to self-regulate via non-binding codes of conduct and benefit from an overall lighter compliance regime.
Last but certainly not least, to indicate conformity with the proposed regulation, providers of high-risk AI systems should obtain a specific CE marking before placing them on the market. This follows a conformity assessment procedure led by a notified body (designated under the proposed regulation, and only for AI systems intended to be used for the remote biometric identification of persons) or carried out by the provider itself (for all other high-risk AI systems), and the drawing up of an EU declaration of conformity.
Industry should prepare for the fact that development and market entry of high-risk AI systems are going to be significantly impacted - and likely delayed - by this new requirement following the entry into force of the proposed regulation.
In the current configuration, the Proposal comes with a two-year transition period following its entry into force, which leaves little time for producers and notified bodies to get ready.
Regulatory sandboxes to reduce the regulatory burden and to support SMEs and start-ups
Given the potential impact of such a drastic regulatory upgrade, the Commission has envisaged the creation of regulatory sandboxes to foster AI innovation. To ensure that cross-sector technological advances take place in compliance with the regulation and other relevant Union and national legislation, sandboxes establish a controlled experimentation and testing environment during both the development and pre-marketing phases. The proposed regulation should also serve as the legal basis for domestic provisions granting priority access to small and medium-sized enterprises and start-ups, as well as third-party assessment fees proportionate to the size of the company.
In practice, these instruments will be created, managed and regulated by national authorities which are yet to be identified. Nonetheless, the Proposal announces common rules to ensure uniform application and a framework for cooperation between the relevant authorities.
Dissuasive administrative fines
The changes brought by the Proposal are supported by hefty administrative fines, such as:
Up to €30,000,000 or up to 6% of total worldwide annual turnover for the preceding financial year, whichever is higher, for infringements related to (a) non-compliance with the prohibition of the AI practices referred to in Article 5 of the Proposal (Prohibited AI practices); or (b) non-compliance of any AI system with the requirements laid down in Article 10 of the Proposal (Data and data governance).
Up to €20,000,000 or up to 4% of total worldwide annual turnover for the preceding financial year, whichever is higher, for non-compliance with any requirement of the Proposal other than those mentioned above.
Up to €10,000,000 or up to 2% of total worldwide annual turnover for the preceding financial year, whichever is higher, if incorrect, incomplete or misleading information is supplied to notified bodies and national competent authorities in reply to a request.
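Because each tier is expressed as 'whichever is higher' of a fixed amount and a share of worldwide turnover, the applicable cap can be illustrated with a short calculation. The tier table below simply restates the figures above; the tier labels are ours.

```python
# Illustrative calculation of the maximum fine per infringement tier
# under the Proposal: (fixed cap in euro, share of worldwide turnover).
FINE_TIERS = {
    "prohibited_practices_or_data_governance": (30_000_000, 0.06),
    "other_requirements":                      (20_000_000, 0.04),
    "incorrect_information_to_authorities":    (10_000_000, 0.02),
}

def max_fine(tier: str, worldwide_annual_turnover: float) -> float:
    """Return whichever is higher: the fixed amount or the turnover percentage."""
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * worldwide_annual_turnover)

# A company with EUR 2bn turnover breaching the data-governance rules:
print(max_fine("prohibited_practices_or_data_governance", 2_000_000_000))
# → 120000000.0 (6% of turnover exceeds the EUR 30m floor)
```

For large pharmaceutical and MedTech groups, the turnover-based limb will almost always be the binding one, which is what makes these fines genuinely dissuasive.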
Do not hesitate to contact us should you have any further questions.

