European Commission publishes White Paper on Artificial Intelligence

The European Commission has published a White Paper on Artificial Intelligence, including proposals for the regulation of AI.

20 February 2020


On 19 February 2020, the European Commission published a White Paper on its approach to AI. The White Paper, an early draft of which leaked in January, sets out policy proposals to encourage the development and uptake of AI in the EU and plans for a new regulatory framework to address the risks posed by AI.

The White Paper proposes a package of policies, including measures to improve research and development, develop a skilled workforce and promote the use of AI in the public sector.

The majority of the White Paper covers proposals for a regulatory framework for AI (through updates to existing EU legislation and new AI-specific regulation), designed to achieve the Commission’s ambitions for trustworthy AI.

Updates to existing EU legislation

EU legislation on consumer protection, data protection, anti-discrimination and fundamental rights is in principle fully applicable to uses of AI. However, the White Paper acknowledges that this legislation could be improved to address novel risks posed by AI.

Although the White Paper does not go into detail on what changes to existing legislation can be expected, it does highlight five areas which may require changes to legislation:

  1. Effective application and enforcement. The lack of transparency of
    certain AI applications can make it difficult to identify and prove
    breaches of the law. Existing legislation may need to be amended to
    ensure it can be effectively applied to AI.

  2. Scope of existing legislation. Outside of certain sectors, EU
    product safety legislation currently has limited application to
    software and to services (as opposed to products). The scope of such
    legislation may need to be amended to cover AI.

  3. Changing functionality of AI applications. The changing nature of AI
    applications, whether through regular updates or incremental
    improvement via machine learning, means that new risks can arise
    after a product is placed on the market. Existing legislation, which
    focuses on the risks present when a product is first placed on the
    market, may need to be updated to reflect this.

  4. Uncertain allocation of responsibilities. Existing EU legislation
    allocates responsibility to the producer who places a product on the
    market. This will not always reflect the supply chain for AI
    applications in which, for example, AI may be added to a product
    after it has been placed on the market.

  5. Changing concept of safety. AI applications may give rise to risks
    that existing EU legislation does not address explicitly, such as
    cyber threats and personal security risks. Legislation may need to
    be updated to reflect these new risks.

The points above suggest that there may be changes to product safety legislation to address specific AI risks.

A new regulatory framework for AI

Who will be subject to the new rules? The Commission proposes a risk-based approach, with mandatory rules applying only to companies (whether based in or outside the EU) that use AI in high-risk applications. An application will be considered high-risk where both: (1) the nature of the sector means that significant risks may arise, e.g. in healthcare, transport or energy; and (2) the way AI is used means that a significant risk is likely to arise, e.g. where AI can affect the rights of individuals or companies, or pose a risk of harm or damage. Certain uses, such as facial recognition and employment screening, are likely to be considered high-risk automatically, regardless of the sector.

Although mandatory rules are only expected to apply to high-risk AI applications, the White Paper also suggests a voluntary labelling regime. This would allow firms to make themselves subject to regulatory requirements on a voluntary basis and be awarded a quality label for their AI applications.

What will the new rules cover? The White Paper recommends rules on:

  • Transparency. Organisations using high-risk AI applications will be
    required to provide information about those applications, their
    capabilities and limitations to those who may be affected by them and
    to regulatory authorities. Separately, individuals will need to be
    clearly informed when they are interacting with an AI application and
    not a human being.

  • Robustness and accuracy. High-risk AI applications will need to be
    technically robust and accurate. This is likely to include ensuring
    that the outputs of AI applications are reproducible, that AI
    applications can adequately deal with errors and inconsistencies, and
    that they are resilient to attacks.

  • Human oversight. Human oversight of high-risk AI applications will be
    required. This may involve human review either before or after an AI
    application makes a decision, or real-time monitoring of AI
    applications in operation. The appropriate type and degree of human
    oversight will depend on the AI application in question.

  • Training data. The data used to train high-risk AI applications will
    need to be comprehensive and non-discriminatory.

  • Record keeping. Users of high-risk AI applications will be required
    to keep records of the programming of those applications, the data
    used to train them and, in some cases, the data itself. These records
    will need to be made available on request to regulators and retained
    for a reasonable time period to allow effective enforcement of
    AI-related legislation.

Will facial recognition be banned? A temporary ban was discussed in the leaked draft of the White Paper. The final version suggests there may be further specific requirements for biometric identification (including facial recognition), and the Commission expects to launch a debate on this. Immediate plans for a ban or moratorium on the use of facial recognition technology in public spaces appear to have been shelved.

How will the new regulation be enforced? The White Paper envisages that high-risk AI applications will need to complete a conformity assessment before being made available to consumers. National regulators will then be responsible for ongoing monitoring and enforcement.

What do I need to do now?

The Commission has opened a consultation on the White Paper until 19 May 2020. Companies that use AI applications may wish to comment (we can assist with this if helpful).

The regulation of AI is a key priority for the new Commission President, Ursula von der Leyen, and legislation is expected to be brought before the European Parliament later this year.

Companies should start preparing for increased regulation now, e.g. by:

  • Considering whether their AI applications are transparent, robust
    and accurate, and are subject to sufficient human oversight.

  • Retaining records of the development and training of their AI
    applications, as these may be requested by regulators or form part
    of conformity assessments in future.

  • Creating or adapting internal governance structures for AI use,
    ideally combining technical, legal and compliance functions. This
    should include addressing the use of AI applications in supply chains
    and outsourcing contracts.

Simmons & Simmons’ Artificial Intelligence Group

Our AI Group comprises lawyers across various practice areas and jurisdictions who can assist companies and individuals with legal issues arising in relation to AI.

We would be happy to advise on the White Paper (including on any compliance risks for you or your business), or on any other legal issues relating to AI.

In collaboration with Jacob Turner of Fountain Court Chambers and Best Practice AI, we recently launched the first AI Healthcheck and Compliance Framework service, which can also assist with AI-related compliance.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.