The EU AI Act

The regulation of Artificial Intelligence (AI) continues to develop.

09 February 2024

Publication

On 2 February 2024, the pre-final text of the EU AI Act (the AIA) was unanimously endorsed by all 27 EU Member States. The formal adoption vote is provisionally scheduled for 10-11 April 2024. The AIA is then expected to enter into force shortly after publication in the EU Official Journal, following which it will be implemented in stages as set out in the AIA itself. The legislation is likely to have a significant impact on employers and HR businesses that use AI in the EU.

What is the AIA?

The AIA contains a definition of "AI systems" (that is aligned with the OECD definition[1]) and provides details of the obligations for various actors involved in the development, supply and deployment of AI systems.

The AIA categorises AI systems by risk, both in terms of how those systems are used and, in some cases, the nature of the technology involved. Broadly speaking, the greater the risk, the more burdensome the obligations.

The AIA establishes five categories, with increasing levels of obligation:

  • The majority of AI systems should fall into the category of minimal risk. These applications will be essentially unregulated.

  • AI systems that interact with natural persons will be subject to limited transparency requirements. (For example, a chatbot for first-line HR and employee relations support will need to make clear to the user that they are interacting with a machine, not a human.)

  • General Purpose AI ("GPAI") models will be subject to specific governance and transparency obligations (particularly where they present "systemic risk").

  • Certain AI systems will be designated as high-risk and subject to comprehensive compliance obligations, especially for providers.

  • AI systems which present a clear threat to fundamental rights will be prohibited on the basis that they present an unacceptable risk. (Emotion recognition systems in the workplace are an example of a biometric AI system which would be banned outright.)
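The five tiers above can be summarised as a simple mapping from risk category to the broad level of obligation. This is an illustrative sketch for orientation only, not a tool for legally classifying any given system:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative summary of the AIA's five risk tiers, in
    increasing order of obligation."""
    MINIMAL = "essentially unregulated"
    LIMITED = "transparency requirements (e.g. chatbot disclosure)"
    GPAI = "governance and transparency obligations on providers"
    HIGH = "comprehensive compliance obligations"
    UNACCEPTABLE = "prohibited outright"

# A workplace emotion recognition system, for example, falls in the top tier:
print(RiskTier.UNACCEPTABLE.value)  # prohibited outright
```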

Companies that fail to comply with the AIA will potentially be liable for very significant administrative fines, ranging up to the higher of EUR 35m or 7% of global annual turnover.
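The "higher of" formulation means the maximum fine scales with company size once 7% of turnover exceeds EUR 35m. A purely illustrative calculation (the function name and turnover figures are our own, not from the AIA):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Illustrative only: the AIA's top-tier fine cap is the higher
    of EUR 35m or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a company with EUR 1bn turnover, 7% (EUR 70m) exceeds the EUR 35m floor:
print(max_fine_eur(1_000_000_000))  # 70000000.0

# For a company with EUR 100m turnover, the EUR 35m floor applies (7% = EUR 7m):
print(max_fine_eur(100_000_000))  # 35000000.0
```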

Relevance to employment

In the AIA there is a list of AI systems which are explicitly classified as high-risk due to their potential harm to health and safety or fundamental rights. Two categories are relevant to the employment relationship:

  • AI systems intended to be used for recruitment or selection purposes (notably for advertising vacancies, screening or filtering applications and evaluating candidates in the course of interview or tests); and

  • AI intended to be used for making decisions in the workplace (notably for promotion or termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance).

There is, however, an exception for high-risk AI systems where there is "no significant risk of harm to the health, safety or fundamental rights of natural persons, including by not influencing the outcome of decision making". What this means is not yet clear but the European Commission will provide guidelines specifying the practical implementation of this exception (including a list of practical examples) within eighteen months of the AIA's entry into force.

Obligations on deployers and providers

The majority of the obligations for high-risk AI systems apply to 'providers', which are the companies that develop or procure AI systems with a view to placing them on the market or putting them into service under their own name or trademark. In the employment context, this is likely to include comprehensive worker management programs (for example, BambooHR, Workday Human Capital Management or SAP SuccessFactors), to the extent they have integrated AI tools.

The obligations on 'providers' of high-risk AI systems are extensive, including undergoing conformity assessment procedures before supplying the AI system, putting in place enhanced risk management, internal audit and data governance measures and registering high-risk AI systems in an EU-wide database. Compliance with these requirements will require significant time and investment by providers.

'Deployers' of AI systems, on the other hand, are the organisations using an AI system under their authority, such as businesses using HR management software. Deployers are subject to fewer requirements than providers. Nevertheless, employers will need to be aware of, and comply with, the following obligations:

  • Completing a fundamental rights impact assessment ("FRIA") before using a high-risk AI system;

  • Taking appropriate technical and organisational measures to ensure compliance with provider instructions;

  • Allocating competent, properly qualified and resourced human oversight;

  • Ensuring relevant and sufficiently representative input data (to the extent the deployer exercises control over it);

  • Keeping records of logs generated by the high-risk AI system (if under the deployer's control); and

  • Monitoring the operation of the high-risk AI system and reporting incidents to the provider and relevant national supervisory authorities.

Employers using AI tools in the field of recruitment or decision-making in the workplace will need to engage with these obligations, not least the mandatory requirement to prepare an FRIA before using a high-risk AI system. The approach to executing an FRIA will vary based on the nature and scope of the high-risk AI system in question and how it interacts with fundamental rights such as non-discrimination and privacy. Completing an FRIA will be a detailed exercise: it will require employers to identify the categories of individuals impacted by the system and assess its impact on their fundamental rights, its impact on the broader public interest and its accessibility for persons with disabilities.
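One rough way to organise that exercise is to capture the FRIA elements described above in a simple checklist structure. The field names below are our own illustration, not terminology from the AIA:

```python
from dataclasses import dataclass

@dataclass
class FRIAChecklist:
    """Illustrative record of the elements an FRIA must address
    under the AIA; field names are hypothetical."""
    system_name: str
    affected_groups: list            # categories of individuals impacted
    fundamental_rights_impact: str   # e.g. non-discrimination, privacy
    public_interest_impact: str      # impact on the broader public interest
    accessibility_assessed: bool     # accessibility for persons with disabilities

    def is_complete(self) -> bool:
        # Minimal completeness check: every element has been addressed.
        return all([
            self.affected_groups,
            self.fundamental_rights_impact,
            self.public_interest_impact,
        ]) and self.accessibility_assessed

# Example: an FRIA record for a hypothetical CV screening tool.
fria = FRIAChecklist(
    system_name="CV screening tool",
    affected_groups=["job applicants"],
    fundamental_rights_impact="non-discrimination, privacy",
    public_interest_impact="fair access to employment",
    accessibility_assessed=True,
)
print(fria.is_complete())  # True
```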

The AIA also specifies particular obligations for GPAI models. These primarily fall on providers and focus on transparency requirements and the data used to train such models. Whilst not directly relevant to deployers of GPAI models, given the sensitivity surrounding this topic (and the potential liabilities associated with inaccurate training data) employers utilising GPAI should be alive to these additional stricter obligations and ensure compliance by the providers supplying such systems.

Deemed provider provisions - when the deployer is considered a provider

A key concern for employers will be the possibility of being considered a 'provider' if they put a high-risk AI system into service under their own name (or trademark), or if they make a "substantial modification" to a high-risk AI system that has already been put into service by another provider. Employers will need to be aware that, under these deemed provider provisions, they would become subject to the extensive additional obligations on providers.

Extraterritoriality

The AIA establishes obligations on organisations with a link to the EU market. As well as applying to employers located in the EU who deploy AI systems for use in recruitment / HR services, it will also apply to employers who deploy AI systems in non-EU member states if the output produced by the AI system is used in the EU. Therefore, if employers in, for example, the UK or Asia use their AI systems to make decisions regarding their employees in the EU, they will have to comply with the AIA (including completing an FRIA and complying with the monitoring obligations).

Businesses with an international presence will be faced with a decision as to whether to implement consistent AIA standards globally or to slim down their use of AI in the EU in order to avoid falling within the AIA regime.

Timeline

The AIA will become fully effective around 2026. In the lead-up to that date, various guidance, technical standards and supporting legislation will be published to assist with compliance. The two-year timeline is subject to some important exceptions: the prohibition on banned AI systems will take effect after six months and the GPAI obligations after 12 months.

There are also grandfathering provisions in the AIA for AI systems already on the market. The obligations under the legislation will only apply to high-risk AI systems put into service before the AIA's implementation if a significant change is subsequently made to their design. For GPAI models already on the market, the obligations will apply after 24 months.

What action should we take now?

All employers should take steps to engage proactively with the AIA and make preparations to ensure future compliance:

  1. Understand and map how the organisation is using and/or procuring AI tools within the fields of recruitment and employment, and what plans are in place going forward. This will allow the business to conduct a gap analysis of the legal and regulatory risk.

  2. Track implementation across EU member states and keep up to date with further guidance issued by local regulators; diverging national-level interpretations of the AIA may add a level of complexity for multinational companies.

  3. Approach existing and new vendors to request further due diligence information, allowing the company to assess the likely category of each AI system.

  4. Satisfy the organisation that providers comply with the AIA. It may be necessary to cease using certain HR/recruitment systems, which could create an operational issue if not considered early enough.

  5. Discuss the AIA as early as possible with the correct internal stakeholders. For example, talent acquisition teams need to be aware of the forthcoming regulations before procuring recruitment software; unwinding systems will be disruptive and costly.

To help our clients with these and other AI actions, we have developed an AI Toolkit containing resources that legal teams can use to manage AI legal and regulatory risk. Please contact us if you would like to receive the Toolkit.

Simmons & Simmons AI Group comprises over 100 lawyers and non-lawyers (including data scientists and AI engineers) across all of its jurisdictions and practice areas, including Employment. We have AI law expertise and experience across our offices in Europe, the Middle East and Asia.


[1] "A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.