TechNotes – Top 10 issues for facial recognition technology

The most pressing legal issues for facial recognition technology (FRT)

23 February 2021


“The algorithms of the law must keep pace with new and emerging technologies… this is the first time that any court in the world had considered [Automated Facial Recognition technology].”

South Wales Police case, September 2019

1. Trend towards prohibiting / restricting FRT use: The use of FRT is increasing on a global scale. A MarketsandMarkets report on the facial recognition market published in December 2020 forecast annual growth of 17.2%, reaching a global market size of US$8.5bn by 2025. At the same time, however, there is an increasing trend towards prohibiting or restricting the use of FRT, particularly in the US, where the use of FRT in law enforcement is prohibited in various places (notably, in California). The EU has previously raised the possibility of a blanket ban on FRT use, although forthcoming EU regulation may seek only to restrict / regulate its use. We recommend that organisations involved in the development and use of FRT follow legal developments in their key jurisdictions.

2. Bias and discrimination: It is becoming increasingly clear that, like humans, AI systems can cause and perpetuate bias and discrimination. AI biases may be introduced by data, by developers or by the failure of AI systems to take account of changing circumstances. In each case, such biases can lead to discrimination and unfair outcomes. This is a particularly acute risk with FRT, where characteristics such as gender and ethnicity are intrinsic to the use and accuracy of the technology. There are high-profile instances, including in law enforcement, of FRT causing discrimination. Organisations using FRT should ensure they have taken all reasonable steps to understand and eliminate the risk of bias and discrimination in its use.
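To make the monitoring point concrete, one simple disparity check is to compare false match rates across demographic groups in evaluation data. The Python sketch below is purely illustrative: the records, field names and groups are invented for this example, and a real assessment would use properly governed test data and recognised evaluation methodologies.

```python
# A minimal, illustrative sketch (not a compliance tool) of comparing
# false match rates across demographic groups in FRT evaluation results.
from collections import defaultdict

# Hypothetical per-comparison results: did the system declare a match, was
# the pair genuinely the same person, and the subject's demographic group.
records = [
    {"group": "A", "predicted_match": True,  "true_match": False},
    {"group": "A", "predicted_match": False, "true_match": False},
    {"group": "B", "predicted_match": False, "true_match": False},
    {"group": "B", "predicted_match": False, "true_match": False},
]

def false_match_rate_by_group(records):
    """False match rate per group: incorrect matches / genuinely non-matching pairs."""
    counts = defaultdict(lambda: {"false": 0, "total": 0})
    for r in records:
        if not r["true_match"]:  # only non-matching pairs can false-match
            counts[r["group"]]["total"] += 1
            counts[r["group"]]["false"] += r["predicted_match"]
    return {g: c["false"] / c["total"] for g, c in counts.items() if c["total"]}

print(false_match_rate_by_group(records))  # e.g. {'A': 0.5, 'B': 0.0}
```

A materially higher false match rate for one group would be a signal to investigate the training data and model behaviour before deployment.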

3. GDPR and lawful processing: FRT generally involves the use of personal data, which is regulated by the GDPR in Europe. Under Article 6, there needs to be a lawful basis for processing this data (for example, consent, legitimate interests which are not outweighed by the rights of individuals, or processing which is necessary for performance of a contract or to comply with a legal obligation). Biometric data is also likely to constitute “special category” data under the GDPR, in which case it cannot lawfully be processed at all unless the Article 6 requirements are met and one of the exemptions in Article 9 applies (for example, if there is a “substantial public interest”). Commercial uses of FRT may not meet these GDPR requirements.

4. DPIAs / legitimate interest assessments: Prior to making use of FRT, it is crucial that organisations properly document the basis on which the processing of any biometric data is justified (assuming the GDPR applies). Where they seek to rely on “legitimate interests” as the lawful basis for processing, they should, as a minimum, complete a legitimate interest assessment. Under the GDPR, a data protection impact assessment (“DPIA”) is required for high-risk processing, which is likely to include FRT use. The UK Court of Appeal held last year that the South Wales Police had unlawfully used FRT because, amongst other reasons, its DPIA was deficient.

5. Human rights issues: The use of FRT raises sensitive human rights considerations. For example, FRT can be used to monitor individuals in public spaces, often without their knowledge or consent. This raises potential Article 8 issues (right to respect for private and family life), which were considered in the South Wales Police case noted above (where the court found that the police had breached Article 8 because their use of FRT was not in accordance with the law). More generally, FRT can restrict a person’s liberty and freedoms (for example, when it is used in a law enforcement context), again raising sensitive human rights issues.

6. Public bodies using FRT: Public bodies are increasingly using FRT in connection with their functions, particularly in law enforcement. As well as the risks highlighted in this note, public bodies may be subject to additional obligations (for example, the “public sector equality duty” in the UK) which the use of FRT may put them at risk of breaching. This was evident from the South Wales Police case noted above, in which, for example, the Court of Appeal said that the police should have done more to satisfy themselves that the technology was free of bias.

7. FRT risks in recruitment: FRT is increasingly used in HR / recruitment processes, particularly with the rise of video interviews (which has been accelerated by the Covid-19 pandemic). For example, FRT can be used to evaluate candidates’ responses to questions based on an analysis of various data points (such as facial expressions and use of language). There are, however, clear risks in using FRT in this context. Aside from the obvious downsides where the FRT is inaccurate, any inherent biases in the technology can lead to discrimination and could put the employer in breach of its obligations, including under equalities legislation.

8. Explainability / transparency: FRT involves the use of complex AI systems which may suffer from a lack of transparency or interpretability. For example, FRT often uses deep neural network algorithms which can suffer from the AI “black box” issue in that, as humans, we cannot always explain how the technology has reached a particular decision. The “explainability” of AI systems is already an important component of ethical AI use and we expect that it will be reflected in the future regulation of AI. Organisations using FRT should therefore seek, as far as possible, to understand how it works and to ensure that they have captured that understanding in writing.
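For illustration, one common model-agnostic probing technique is occlusion sensitivity: mask regions of an input image and record how the model’s output changes, revealing which regions drove a decision. The sketch below assumes a hypothetical model_score function standing in for a real FRT system’s match score; it illustrates the general technique, not any particular vendor’s tooling.

```python
# An illustrative occlusion-sensitivity sketch for probing a "black box"
# image model: mask each region and measure the drop in the model's score.
import numpy as np

def model_score(image: np.ndarray) -> float:
    # Hypothetical stand-in: a real FRT system would return a match confidence.
    return float(image.mean())

def occlusion_map(image: np.ndarray, patch: int = 8) -> np.ndarray:
    """Score drop when each patch is masked; a larger drop means that
    region influenced the decision more."""
    base = model_score(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0  # occlude one region
            heat[i // patch, j // patch] = base - model_score(masked)
    return heat

heat = occlusion_map(np.random.rand(32, 32))
print(heat.shape)  # (4, 4) grid of per-region influence scores
```

Outputs of this kind can feed into the written record of how the system works that organisations are encouraged to maintain.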

9. Licensing and liability: In many cases, a party looking to use FRT will seek to license the technology from a provider. The licensee should consider suitable contractual protection, given the risks associated with FRT: for example, representations and warranties regarding the development and functioning of the FRT (eg that it has been appropriately developed and tested, that it is free of bias and that its use does not infringe third party IP) and indemnity protection if, for example, the FRT infringes third party IP, breaches any laws or is not fit for purpose. Equally, the licensor will want to ensure that the licensee remains liable for any unlawful use of the FRT (eg any GDPR breaches). Organisations involved in the development, licensing or use of FRT should ensure that their contracts specifically reflect the key risks relating to FRT.

10. Deepfakes: Deepfakes are synthetic media in which a person’s likeness is convincingly replaced or fabricated, typically created by encoding the real medium and decoding a fake replacement. Deepfakes can arise in connection with FRT; a recent example is Channel 4’s deepfake Queen’s speech (which sparked nearly 400 Ofcom complaints). Deepfakes pose unique reputational risks and may infringe personality rights. Whereas a written false statement may not be accepted as truth by the reader, material purporting to constitute real video footage is unlikely to be questioned in the same way, so any adverse message delivered by a deepfake could prove more damaging. If your organisation is affected by a deepfake, the advice is to act fast: contact the host platform requesting that the deepfake be taken down, and subsequently ensure that all links are removed from search engines. Any damage can then be assessed, but the immediate goal should be to limit views of the deepfake, which can often be achieved without legal action.
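For background on the mechanism, many face-swap deepfakes rest on an autoencoder design: a shared encoder learns a common facial representation and separate per-identity decoders reconstruct it as a particular person, so swapping decoders produces the fake. The skeleton below is a conceptual sketch only (the class, layer sizes and method names are arbitrary illustrations, and the network is untrained), not a working deepfake tool.

```python
# Conceptual sketch of the shared-encoder / per-identity-decoder design
# behind many face-swap deepfakes. All sizes are arbitrary illustrations.
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    def __init__(self, dim: int = 64 * 64, latent: int = 256):
        super().__init__()
        # Shared encoder: maps any face image to a common latent representation.
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(dim, latent), nn.ReLU())
        # One decoder per identity, each trained only on that identity's faces.
        self.decoder_a = nn.Linear(latent, dim)
        self.decoder_b = nn.Linear(latent, dim)

    def swap_to_b(self, face_a: torch.Tensor) -> torch.Tensor:
        # Encode A's face, decode with B's decoder: A's pose and expression
        # rendered with B's appearance (the face-swap step).
        return self.decoder_b(self.encoder(face_a))

model = FaceSwapAutoencoder()
fake = model.swap_to_b(torch.rand(1, 64 * 64))
print(fake.shape)  # torch.Size([1, 4096])
```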

Found this article useful? Read others in our TechNotes series.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.