AI Explainability Statements: what are they and why are they important?

Watch our webinar on demand to learn about AI Explainability Statements. We were delighted to be joined by the ICO to discuss this important topic.

Simmons & Simmons, Best Practice AI and Jacob Turner of Fountain Court Chambers recently advised Healthily on the world’s first AI Explainability Statement to receive input from a data regulator, the ICO.

Click here for further information and a link to Healthily’s AI Explainability Statement.

> View on demand here

AI is a complex and often opaque technology, with the capacity to act in unforeseeable ways and cause harm. There is a growing consensus that organisations need to explain their use of AI, and there have even been recent legal challenges against companies for deploying AI without sufficient explainability. Explainability also forms an important part of the EU's draft AI Regulation.

We were delighted that Alister Pearson and Abigail Hackston, both Senior Policy Officers at the ICO, joined us for this webinar to discuss:

  • The importance of AI explainability and AI Explainability Statements
  • The process of producing an AI Explainability Statement
  • Legal and ethical risks around AI explainability, including a summary of recent legal challenges in the Dutch courts and by the Italian data protection authority
  • The future of AI explainability, including the UK Government’s consultation on the GDPR

We were also joined by Jonathon Carr-Brown, Healthily’s COO, who was involved in producing Healthily’s AI Explainability Statement.

The webinar also included a panel discussion on AI explainability and an audience Q&A.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.