FCA article on transparent AI in financial services

The FCA has issued an article emphasising transparency in the use of AI.

24 February 2020

Publication

As part of its collaboration with The Alan Turing Institute, the FCA has published an article emphasising the benefits of transparent AI in the financial services sector: AI Transparency in financial services – why, what, who and when?

The blog refers to the recent survey by the BoE and FCA on machine learning in the FS sector (which we summarised here) and follows previous articles by the FCA on responsible AI (Artificial Intelligence in the boardroom and Explaining why the computer says “no”).

Summary

The salient points of the FCA’s latest article are:

  • Why transparency is important: The FCA reiterates that transparent AI leads to more trustworthy AI, which ultimately fosters greater public acceptance of AI. More specifically, the FCA says transparency is important because (1) it can address concerns about an AI system, eg its reliability and any discrimination or bias, and (2) it allows customers to understand (and potentially challenge) AI-based decisions.
  • What transparency means: We are often asked “what exactly do I need to explain about my AI system?” Helpfully, the FCA suggests that an organisation should explain not just (1) how its AI model operates (eg the algorithm, the decision-making process etc.), but also (2) the wider process in which the AI system operates (eg the problem it was designed to address, data procurement, validation and testing, monitoring of the system, how the system is used etc.).
  • Context is key: As the ICO has also said, transparency will mean different things for different stakeholders. The FCA distinguishes between (1) internal stakeholders, (2) customers, (3) regulators, and (4) third parties, eg shareholders and the wider public. An AI system that evaluates loan applications from consumers, for example, is likely to need a higher degree of transparency than one that assists with due diligence in the onboarding of new institutional investors.
  • “Transparency matrix”: Given that FIs are using AI for different functions and given the different stakeholders involved, the FCA suggests that organisations develop a “transparency matrix” which maps out what information about each AI system should be made available to each stakeholder.

What should you do next?

AI regulation is coming (see eg our recent article on the EU’s AI White Paper) and it is likely to address transparency. We are advising clients – particularly in the FS sector – to get ahead and ensure they are able to explain their use of AI eg through explainability statements, AI principles, internal training and board briefings. Regulators will start to expect more transparency from FIs on their use of AI, as the FCA’s comments demonstrate.

We can assist you with this through our AI Healthcheck and Compliance Framework service. Please feel free to get in touch with us about this.

Simmons & Simmons’ AI Group

Our dedicated AI Group comprises lawyers across various practice areas and jurisdictions who can assist companies and individuals with legal issues relating to AI and ML.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.