A call to action on facial recognition technology

We consider the Council of Europe’s new guidelines on facial recognition set out under Convention 108+ and the implications for future regulation in the area of FRT.

12 February 2021

Publication

On 28 January the Council of Europe’s Directorate General of Human Rights and Rule of Law issued new guidelines on facial recognition. The guidance comes amid the increasing use of facial recognition technology (FRT) systems and a growing appreciation of the lack of standards governing their deployment.

Analysis of the legal, moral and social impacts of FRT has been mounting over recent years. High profile legal challenges (such as the successful challenge in the Bridges case) and decisions by the likes of IBM to limit their involvement in facial recognition systems are creating momentum towards real regulatory action. Although these guidelines are not binding, they give an indication of what may be to come, and as such they should be considered by those using or planning to use FRT. The principles set out in this Convention are relevant to governments as well as businesses, from those who develop and manufacture FRT to those who buy and use such technology.

What is the Convention addressing?

The new guidance (issued under Convention 108+) defines FRT as the automatic processing of digital images containing individuals’ faces for identification or verification. FRT is considered to be biometric in nature if it is used for unique identification or authentication.

Convention 108+ is the amending protocol to the earlier Convention 108. It contains guidelines for developers, manufacturers and service providers as well as users of FRT. This recognition that each ‘party’ in the supply chain has a role to play is an important factor in the governance of FRT: governance ought not to be the sole responsibility of the purchaser or the developer. Although not a novel idea1, it is nonetheless important that this is recognised clearly.

Use and purpose – lawfulness

Not surprisingly, the first section calls on users of FRT to ensure that they are fully compliant with the relevant legal framework in which they are operating and that they use FRT proportionately. Organisations (including law enforcement bodies) should ensure proportionality when considering the purpose to which the technology is to be put. The first key principle seeks to ensure that there is a detailed explanation of this purpose and addresses issues such as the reliability and accuracy of the technology. The guidance also clearly states that consent should not, as a rule, be the legal ground relied on for facial recognition undertaken by public authorities, because of the imbalance of power between data subjects and public authorities. This recognition of the asymmetry of power will be welcomed by many commentators and reinforces the need for the lawful purpose to be clear and supported by strong explanations.

The guidance draws a distinction between controlled and “uncontrolled” environments, the latter being the more intrusive. The use of live FRT in open places such as shopping centres, hospitals and schools is seen as posing a greater risk of infringing individual rights, suggesting that the threshold for use in such scenarios is higher.

Regulatory interventions

The Convention suggests in a number of places that legislators and regulators should take a more active role. This is more than a nudge and is in keeping with the increasing calls for regulatory action on these issues and on wider AI deployment. It is doubtful that all the regulatory action called for will be taken at national level by many governments, but it is likely to increase the pressure on them to look for ways to develop soft interventions, encourage industry standards, and possibly reach for the rule book on red line issues.

For example, affect recognition tools are increasingly used in recruitment, promotion and other HR/employment processes. Despite this growing popularity, the Convention suggests that affect recognition used to determine access to employment, education or insurance should be prohibited because of the level of risk involved. It is doubtful that prohibition will be adopted by governments, albeit the evidence suggests that such technologies pose a high risk of being misunderstood and poorly used.

One of the more interesting suggestions is that legislators and decision makers should introduce some form of independent certification process for FRT: this would enable developers and manufacturers (as well as service providers and those using the technology) to demonstrate compliance. The suggestion is that different levels of certification could be offered dependent upon the type of AI being used, whether algorithms or more structural technology such as integrating the algorithm with other systems or technology.

Overall, we believe it is likely that the calls for regulatory standards to be created will be welcomed by both commentators and privacy activists. It may also be welcomed by those developers who invest in creating robust and well-designed systems – the removal of the chilling effect of regulatory uncertainty is often a welcome development.

Data sets and Data subjects

Concerns about “bad data in, bad data out” are reflected in the Convention. It raises the risks associated with FRT databases and with using FRT in combination with other technology to extract information. An example given is the use of images from social media, captured for one purpose, which are then used for other purposes. The call is for legislators to put in place strict legal restrictions. Mislabelling, unintended discrimination and poor training datasets are specifically referenced as risks which developers need to take active steps to avoid, in addition to safeguards to prevent the revealing of other sensitive data.

The Convention also envisages a more knowledgeable purchaser of these tools. It suggests approaches such as transparency about the reliability of the data sets, to “facilitate choice of acquisition”. Although a commendable idea, this is likely to be only as successful as the purchaser’s ability to judge the relevance of the answer – is a 70% reliability score good enough? For this to be a useful lever in reducing risk, it will need to be accompanied by an increase in basic AI literacy amongst users.

Individual and Privacy Rights

The Convention offers a clear view on issues related to consent. In relation to public authorities, it states that consent should not, as a rule, be used as the legal basis for the use of FRT, and the same applies to private entities carrying out tasks similar to those of a public authority. Where FRT is used by private organisations, the Convention suggests that although the consent of the individual would be permissible, the quality of that consent is important. For consent to be considered freely given, the Convention suggests that alternatives to FRT should be offered to the individual (we assume in order to ensure there is a real choice to be made rather than a simple yes/no).

The Convention suggests that designers, manufacturers and service providers should ensure their corporate end users are provided with support and guidance. Examples include sample language for privacy policies and easy to understand signage. Both aspects would be supported by the introduction of certification systems (discussed above).

Governance and Audit

In both the public and private sector, consideration should be given to impact assessments and audit trails. Organisations are urged to engage with supervisory authorities (which will differ of course by jurisdiction). This will be even more challenging where global deployment of FRT and FRT related technology is envisaged and is a good reason for regulators and industry bodies to be working cross border to align standards and expectations.

The emphasis is on:

  • accuracy, reliability, and testing,
  • eliminating disparities and discrimination, which includes “using synthetic datasets based on sufficiently diverse photos” – by gender, skin colour, age and morphology; and
  • updating data sets, systems, and security.

There is an extensive section on how to ensure accountability and that organisations have taken appropriate measures to ensure compliance particularly with the data processing requirements. The suggested measures include:

  • publishing transparency reports,
  • training programmes and audit procedures for those in charge of processing FRT,
  • internal review committees to assess and approve any processing which involves FRT data; and
  • data protection impact assessments.

Ethics

Echoing the broader debate, governments and organisations are urged to go beyond their legal obligations and consider the ethical framework in which the technology is being used. Recommendations include having an independent ethics advisory board and a cross section of experts to make better informed decisions on, and assessments of, the governance framework. Although obvious, good corporate governance for business in the digital, AI-enabled world is critical: as some high profile examples have shown, such measures may fail to have the desired impact if they are not thought through fully and embedded into the governance structures of the organisation.

Conclusions

So, the Convention gives food for thought and is a useful addition to the growing call for regulation. It would be as well for business leaders (developers, manufacturers, buyers, and users) to pay heed to the risks it identifies. They could also use the recommendations and principles as measures to test how their current deployment of FRT measures up and how this could be improved.

The growth of FRT use is being seen globally. For instance, in Asia, against the backdrop of increasing private sector adoption of FRT, the Singapore Government has announced that FRT will eventually be used as a means of accessing Singapore’s national digital identity programme. This would allow access to more than 400 digital services, including filing tax returns and applying for public housing. Singapore’s Senior Minister of State for Communications and Information announced in Parliament in March 2020 that the Personal Data Protection Commission and the Government Data Office will publish guidelines on the responsible use of biometric technology. At the time of publication, no official guidance on the use of FRT had been issued in Singapore. Readers are encouraged to watch this space for this specific development and for evidence of the Convention’s recommendations being adopted.


1 See for instance: https://tlsprdsitecore.azureedge.net/-/media/files/topics/research/algorithms-in-criminal-justice-system-report-2019.pdf?rev=ffc06e85e9c244ceaa9f160f27a8b2b3&hash=D1F64FAFF4FBE536DA22B6599C10E5D9;
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/905267;
or http://www3.weforum.org/docs/WEF_Framework_for_action_Facial_recognition_2020.pdf

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.