On 2 December 2019, the ICO and The Alan Turing Institute launched a joint consultation on Draft Guidance on Explaining decisions made with AI. The Draft Guidance provides practical advice on explaining AI and is the first detailed insight into the approach to explainability expected by the UK data protection regulator.
The Draft Guidance underlines the growing importance of explaining AI-assisted decisions (a requirement of the GDPR that has also become an increasingly important component of ethical or trustworthy AI). We believe the Draft Guidance - which is impressively detailed - should be considered by all organisations that use AI, particularly where that use involves decisions about individuals or their personal data.
The Draft Guidance is divided into three parts: (1) the basics of explaining AI; (2) the practicalities of explaining decisions taken using AI; and (3) the steps organisations should take to explain their use of AI.
What decisions need to be explained?
The first part of the Draft Guidance covers "The basics of explaining AI" and sets out the legal bases under which explanations are required. The GDPR does not deal with AI explicitly, but gives individuals a number of rights where their personal data is used for automated decision-making or profiling:
- Right to be informed. Individuals have the right to be informed of the existence of solely automated decision-making that produces legal or similarly significant effects. This right requires individuals to be provided with meaningful information about the logic involved and the envisaged consequences of such decisions.
- Right of access. Individuals have a right of access to information on the existence of solely automated decision-making, and to meaningful information about the logic involved and the envisaged consequences of the decisions.
- Right to object. Individuals have the right to object to the processing of their personal data, including its use for profiling in certain circumstances.
- Right not to be subject to solely automated decisions. Individuals have the right not to be subject to solely automated decisions producing legal or similarly significant effects.
The Draft Guidance emphasises that even where an AI-assisted decision is not part of a solely automated process, the GDPR still requires organisations to be able to explain it to any individuals affected.
The Draft Guidance confirms that the requirement to explain AI is broad: it is likely to apply wherever AI or related technology is used to process personal data and assist in decision-making processes.
How do you explain AI in practice?
The second part of the Draft Guidance covers "Explaining AI in practice" and sets out detailed guidance on how organisations should explain their use of AI. This part of the guidance is aimed primarily at the technical teams tasked with explaining their organisations' uses of AI-assisted decision-making.
The Draft Guidance sets out a suggested approach to explainability:
- Prioritise explanations. Organisations should identify and prioritise explanations of those aspects of their AI-assisted decision-making that are likely to be most important to the individuals affected.
- Collect information. Organisations should gather the information required to explain both their general decision-making processes and individual instances in which a decision is made.
- Develop a rationale explanation. Organisations should develop a rationale explanation that provides meaningful information about the underlying logic of their AI systems. If black box models are used, then organisations should employ technical explanation techniques.
- Translate rationale into easily understandable reasons. Organisations should consider how to convey their explanation to the individuals affected. This involves translating any mathematical rationale into plain language.
- Prepare implementers to deploy AI systems. Organisations should provide appropriate training to human decision-makers involved in AI-assisted decision-making processes. This training should include basic knowledge of machine learning and its limitations.
- Consider contextual factors. Organisations should consider the context of their decision-making in order to determine how explanations should be delivered. This includes considering the purpose of decisions, their impact on individuals, the types of data used, the time individuals will have to reflect on decisions and what group of people decisions are being made about.
- Consider presentation. Finally, organisations should consider what medium will be most appropriate to present explanations to the individuals affected. It may be appropriate to explain decisions using a website or app, or in writing or in person.
These steps are not binding and do not form a statutory code of practice for explaining AI. However, they are intended to clarify good practice for explaining to individuals decisions that have been made using AI systems to process their personal data.
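To make the "rationale explanation" step more concrete: one model-agnostic technique commonly used to explain black box models is permutation importance, which measures how much a model's accuracy drops when each input feature is shuffled. The sketch below is purely illustrative and is not drawn from the Draft Guidance itself; the dataset, feature names ("income", "debt", "noise") and decision rule are all hypothetical.

```python
# Illustrative sketch only: permutation importance as one possible
# "technical explanation technique" for a black box model.
# All features and the approval rule below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
income = rng.normal(50_000, 15_000, n)   # hypothetical applicant income
debt = rng.normal(10_000, 5_000, n)      # hypothetical applicant debt
noise = rng.normal(0, 1, n)              # irrelevant feature, for contrast
X = np.column_stack([income, debt, noise])
y = (income - 2 * debt > 25_000).astype(int)  # hypothetical approval rule

# A random forest is effectively a black box to the individual affected.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffle one feature at a time and measure the drop in accuracy:
# a larger drop means the model relied more heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt", "noise"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

An output of this kind (income and debt matter; the noise feature does not) is the raw material that would then need to be translated into plain language for the individual affected, per the "translate rationale" step above.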
What does explaining AI mean for your organisation?
The third part of the Draft Guidance covers "What explaining AI means for your organisation" and sets out detailed guidance on the roles, policies, procedures and documentation that can be put in place to assist explainability. This part of the guidance is aimed primarily at senior management and offers an overview of how organisations should adapt to explain their use of AI.
- Organisational roles. Explaining AI-assisted decisions should involve stakeholders from every part of the decision-making pipeline, including product managers, developers, those who use the AI systems, compliance teams and senior management.
- Policies and procedures. Organisations should have policies and procedures in place that cover all the explainability-related considerations and actions required from employees, from concept to full deployment of AI decision-support systems.
- Documentation. Every stage of AI-assisted decision-making processes should be documented. This includes documenting both the design and implementation of the system, and the eventual explanation of its outcomes.
Again, this guidance is not binding and the ICO acknowledges that there can be no one-size-fits-all approach. Nevertheless, the Draft Guidance provides an insight into the kind of operational changes that organisations will be expected to make in order to explain their use of AI-assisted decision-making.
Next steps
The Draft Guidance provides detailed, practical advice on how organisations can comply with the requirements of the GDPR and respond to the trend towards greater explainability of AI.
The ICO repeatedly emphasises that the Draft Guidance will not be binding, even when finalised. However, as with other ICO guidance, the Draft Guidance provides a strong indication of the steps that the UK regulator will expect organisations to take in order to comply with their obligations.
Organisations should consider how the Draft Guidance will apply to their use of AI-assisted decision making.
The Draft Guidance is now subject to a consultation, which will close on 24 January 2020. Organisations that wish to contribute to the consultation can do so via the ICO's website.
Simmons & Simmons’ Artificial Intelligence Group
Simmons & Simmons’ Artificial Intelligence Group comprises lawyers across various practice areas who can assist companies and individuals with any legal issues arising in relation to AI and ML.
We would be happy to advise on the Draft Guidance (including on explaining your organisation's AI-assisted decisions or any related risks for you or your business), or on any other legal issues relating to AI.