ICO publishes final guidance on explaining decisions made with AI
On 20 May 2020, building on an earlier consultation, the ICO and The Alan Turing Institute published joint Guidance on explaining decisions made with AI. The Guidance provides practical advice on explaining AI and is the first detailed explanation of the approach to explainability expected by the UK data protection regulator.
The Guidance underlines the growing importance of explaining AI-assisted decisions (a requirement of the GDPR that has also become an increasingly important component of ethical or trustworthy AI). We believe the Guidance - which is impressively detailed - should be considered by all organisations that use or are planning to use AI, particularly where that use involves decision-making about individuals using their personal data.
The Guidance is divided into three parts: (1) the basics of explaining AI; (2) the practicalities of explaining decisions taken using AI; and (3) the steps organisations should take to explain their use of AI.
What decisions need to be explained?
The first part of the Guidance covers the basics of explaining AI and sets out the need for accountability and the legal bases under which explanations are required.
The GDPR does not deal with AI explicitly, but gives individuals a number of rights where their personal data is used for automated decision-making or profiling:
- Right to be informed. Individuals have the right to be informed of the existence of solely automated decision-making that produces legal or similarly significant effects. Organisations must provide meaningful information about the logic involved and the envisaged consequences of such decisions.
- Right of access. Individuals have a right of access to information on the existence of solely automated decision-making, and to meaningful information about the logic involved and the envisaged consequences of the decisions.
- Right to object. Individuals have the right, in certain circumstances, to object to the processing of their personal data, including its use for profiling.
- Right not to be subject to solely automated decisions. Individuals have the right not to be subject to solely automated decisions producing legal or similarly significant effects.
The Guidance emphasises that even where an AI-assisted decision is not part of a solely automated process, the GDPR still requires organisations to be able to explain it to any individuals affected, and encourages organisations to share this information proactively where possible. The Guidance also highlights the importance of ensuring that all processing complies with the Equality Act 2010 (i.e. is not discriminatory).
The Guidance confirms that the requirement to explain AI is broad; it is likely to apply wherever AI or related technology is used to process personal data and assist in decision-making processes.
How do you explain AI in practice?
The second part of the Guidance covers explaining AI in practice and sets out detailed guidance on how organisations should explain their use of AI. This part of the Guidance is aimed primarily at the technical teams tasked with explaining their organisations' uses of AI-assisted decision-making.
The Guidance sets out a suggested approach to explainability:
- Prioritise explanations. Organisations should identify and prioritise explanations of those aspects of their AI-assisted decision-making that are likely to be most important to the individuals affected.
- Collect information. Organisations should gather information and data in a way that allows for straightforward explanations. This includes detailed data labelling and documentation of ongoing risk assessments.
- Build an explainable system. Organisations should consider explainability from the outset and ensure that information can be filtered and extracted easily from the AI system. This can be evidenced by a rationale explanation that provides meaningful information about the underlying logic of the AI system. If black box models are used, organisations should employ technical explanation techniques (see the illustrative sketch after this list).
- Translate rationale into easily understandable reasons. Organisations should be able to convey how the process works to the individuals affected. This involves translating any mathematical rationale into plain language.
- Prepare implementers to deploy the AI system. Organisations should provide appropriate training to human decision-makers involved in AI-assisted decision-making processes. This training should include basic knowledge of machine learning and its limitations.
- Consider presentation. Finally, organisations should consider what medium will be most appropriate to present explanations to the individuals affected. It may be appropriate to explain decisions using a website or app, or in writing or in person.
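The Guidance does not mandate any particular technique for explaining black box models. Purely by way of illustration, the minimal Python sketch below shows one widely used post-hoc approach, permutation feature importance via scikit-learn, that a technical team might use as a starting point for a rationale explanation. The dataset, model and feature names are hypothetical placeholders, not examples drawn from the Guidance.

```python
# Illustrative sketch only: a post-hoc explanation technique for a black box
# model, using scikit-learn's permutation importance. All data, features and
# model choices below are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical dataset standing in for an organisation's own decision data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["income", "tenure", "age", "balance", "prior_defaults"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance estimates how much each feature contributes to the
# model's predictions; ranked importances are one possible input to a
# plain-language rationale explanation for affected individuals.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```

Ranked importances of this kind are only a starting point; as the Guidance notes, any mathematical rationale still needs to be translated into plain language before it is presented to the individuals affected.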
These steps are not binding and do not form a statutory code of practice for explaining AI. However, they are intended to clarify good practice for explaining to individuals the decisions that have been made using AI systems to process their personal data.
What does explaining AI mean for your organisation?
The third part of the Guidance covers what explaining AI means for your organisation and sets out detailed guidance on the roles, policies, procedures and documentation that can be put in place to assist explainability. This part of the Guidance is aimed primarily at senior management and offers an overview of how organisations should adapt to explain their use of AI.
- Organisational roles. Explaining AI-assisted decisions should involve stakeholders from every part of the decision-making pipeline, including product managers, developers, those who use the AI systems, compliance teams and senior management.
- Policies and procedures. Organisations should have policies and procedures in place that cover all the explainability-related considerations and actions required from employees, from concept to full deployment of AI decision-support systems. The rules should make it clear why they are in place and who they apply to.
- Documentation. Every stage of AI-assisted decision-making processes should be documented. This includes documenting both the design and implementation of the system, and the eventual explanation of its outcomes. The key objective is to provide documentation that can be understood by people of varying technical knowledge.
Again, this guidance is not binding and the ICO acknowledges that there can be no one-size-fits-all approach. The Guidance nevertheless provides an insight into the kind of operational changes that organisations will be expected to make in order to explain their use of AI-assisted decision-making.
Next steps
The Guidance provides detailed, practical advice on how organisations can comply with the requirements of the GDPR and respond to the broader trend towards stronger explainability requirements for AI.
The ICO repeatedly emphasises that the Guidance is not binding. However, as with other ICO guidance, the Guidance provides a strong indication of the steps that the UK regulator will expect organisations to take in order to comply with their obligations.
Organisations should actively consider how the Guidance will apply to their use of AI-assisted decision-making.
Simmons & Simmons' Artificial Intelligence Group
Simmons & Simmons' Artificial Intelligence Group comprises lawyers across various practice areas who can assist companies and individuals with any legal issues arising in relation to AI and ML.
We would be happy to advise on the Guidance (including on explaining your organisation's AI-assisted decisions or any related risks for you or your business), or on any other legal issues relating to AI.