Bank of England and FCA report: machine learning in financial services

The Bank of England (BoE) and the Financial Conduct Authority (FCA) have published a report on the use of machine learning in the UK financial services sector.

21 October 2019

On 16 October 2019, following a survey of the use of machine learning by more than 100 UK banks, asset managers, insurers and other firms, the BoE and the FCA published a joint report, Machine learning in UK financial services (the Report). The Report assesses the adoption of machine learning technology by UK financial services firms and considers the risks and attitudes surrounding its deployment.

The Report represents an attempt by the BoE and the FCA to establish a dialogue with financial services firms, academics and other regulators about the use of machine learning in financial services. Although the Report is not intended to lead to any specific regulation, the BoE and the FCA acknowledge that it may help to provide a platform for identifying where regulation is appropriate in future.

What is machine learning (ML)?

ML is the most common application of AI. Conventional software is pre-programmed by humans with explicit rules and has no discretion to make its own decisions. By contrast, ML harnesses a computer’s processing power to review (often very large) datasets, detect patterns in them and, ultimately, make decisions based on those patterns autonomously. This capacity for autonomous decision-making is the most distinctive feature of AI, particularly from a legal perspective. The sketch below illustrates the contrast.
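
To make this contrast concrete, the sketch below compares a hand-written rule with a model that learns its own decision boundary, using Python and the scikit-learn library. The fraud-flagging scenario, features and figures are illustrative assumptions of ours, not taken from the Report.

    # A minimal sketch contrasting a pre-programmed rule with a learned
    # model. The fraud-flagging scenario, features and figures are
    # illustrative assumptions only.
    from sklearn.linear_model import LogisticRegression

    # Conventional software: a human writes the decision rule in advance.
    def rule_based_flag(amount: float) -> bool:
        """Flag any transaction above a fixed, human-chosen threshold."""
        return amount > 10_000

    # Machine learning: the decision boundary is inferred from data.
    # X holds past transactions as [amount, hour_of_day]; y marks fraud.
    X = [[12_500, 3], [80, 14], [9_900, 2], [45, 11], [15_000, 4], [60, 16]]
    y = [1, 0, 1, 0, 1, 0]
    model = LogisticRegression().fit(X, y)

    # The model flags new transactions based on patterns it detected
    # itself; no human wrote an explicit threshold into the code.
    print(model.predict([[11_000, 3]]))  # e.g. [1] -> flagged

The point of the contrast is that no human wrote the second decision rule: the dividing line between flagged and unflagged transactions is inferred from the data, which is what gives ML both its power and the explainability challenges discussed below.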

What did the Report find?

The Report considered a range of questions regarding the current adoption of ML in the financial services sector, along with firms’ perceptions of its potential benefits, risks and constraints. Its key findings were:

  • Extensive adoption of ML. ML is already being used extensively in UK
    financial services. Two-thirds of survey respondents reported that
    they already have live ML applications in place, with deployment most
    advanced in the banking and insurance sectors. ML is increasingly
    being used for front-office as well as back-office functions.

  • ML risks. Firms generally consider ML to be an amplifier of existing
    risks, rather than a creator of new risks. Data quality, lack of
    model explainability and poor model performance in novel situations
    were seen as key risks. The Report highlights that model validation
    and governance must keep pace with ML development.

  • Risk management frameworks. The majority of firms (57%) govern their
    use of ML through existing risk management frameworks, while a
    smaller proportion (12%) have established specialist committees for
    ML. Other firms have adopted ML principles or are in the process of
    setting up ML ethics functions. The Report highlights that risk
    management frameworks will need to evolve as increasingly complex ML
    technology is deployed.

  • Limited explainability. Firms found that regulatory requirements to
    explain decision-making when using ML present a significant
    challenge. The Report notes that, at present, explainability
    techniques are used as part of model validation frameworks in less
    than half of cases, a proportion that will need to increase as ML
    deployment grows (a minimal sketch of one such technique appears
    after this list).

  • Use of human oversight. Firms use a variety of safeguards to control
    the risks associated with ML. The most widespread safeguards involve
    human oversight of systems, typically through ‘alert systems’ or
    ‘human-in-the-loop’ mechanisms (see the second sketch after this
    list). The Report acknowledges that such measures can be a helpful
    tool for identifying when models are not working as intended.

  • Regulation not a barrier to deployment. Most firms (75%) do not
    consider regulation to be an unjustified barrier to the use of ML at
    present. However, the Report notes that regulation may need to be
    updated to account for developments in ML.

  • Need for clarity on existing regulation. Some respondents noted that
    additional guidance on the application of existing regulation could
    serve as an enabler for further deployment of ML. The Report accepts
    that, as with any new technology, it may not be obvious how existing
    norms and rules apply to ML, and indicates that regulators will seek
    to ensure that firms are able to apply the existing regulatory
    framework to ML.
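
As noted under ‘Limited explainability’ above, explainability techniques aim to make a model’s decision-making interpretable. The sketch below, a minimal illustration in Python using scikit-learn, shows one widely used technique, permutation importance; the credit-scoring framing and feature names are illustrative assumptions of ours, not drawn from the Report.

    # A minimal sketch of one common explainability technique:
    # permutation importance, as implemented in scikit-learn. The
    # credit-scoring framing and feature names are illustrative
    # assumptions, not drawn from the Report.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Synthetic stand-in for a lending dataset with three features.
    X, y = make_classification(n_samples=500, n_features=3,
                               n_informative=2, n_redundant=1,
                               random_state=0)
    feature_names = ["income", "outstanding_debt", "account_age"]

    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffle each feature in turn and measure how much accuracy drops:
    # a large drop suggests the model relies heavily on that feature.
    result = permutation_importance(model, X, y, n_repeats=10,
                                    random_state=0)
    for name, importance in zip(feature_names, result.importances_mean):
        print(f"{name}: {importance:.3f}")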
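
As for the ‘human-in-the-loop’ mechanisms noted above, a safeguard of this kind can be as simple as a confidence gate: decisions the model is unsure about trigger an alert and are routed to a person rather than actioned automatically. The sketch below is again a minimal illustration; the synthetic data, model choice and 0.8 confidence threshold are illustrative assumptions.

    # A minimal sketch of a 'human-in-the-loop' safeguard: outputs the
    # model is unsure about are alerted for human review rather than
    # actioned automatically. The data, model and 0.8 threshold are
    # illustrative assumptions.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, random_state=1)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    CONFIDENCE_THRESHOLD = 0.8  # below this, a person reviews the case

    def triage(case):
        """Automate confident decisions; alert and defer the rest."""
        proba = model.predict_proba([case])[0]
        if proba.max() >= CONFIDENCE_THRESHOLD:
            return {"decision": int(proba.argmax()), "route": "automated"}
        return {"decision": None, "route": "human_review"}

    print(triage(X[0]))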

What do I need to do now?

The Report is not intended to set out any new requirements or guidance for firms.

However, it does highlight the importance of considering how existing regulation applies to ML (and AI in general).

For example, the Report acknowledges that the use of ML could pose a meaningful prudential or conduct risk for financial institutions. More specifically, the Report highlights the need for regulators to consider the use of ML as a factor affecting the nature, scale and complexity of information technology activities subject to an ICT Risk Assessment carried out in accordance with the EBA SREP Guidelines. The EBA SREP Guidelines set out a common methodology for EU regulators to conduct supervisory review and evaluation of financial institutions’ operational risk under the CRD IV Directive, and ICT Risk Assessments form part of that process.

The FCA has previously made it clear that firms must take responsibility for their use of ML and AI at board level. Explainability has become a common theme in discussions about the regulation of AI, and the FCA has emphasised that decisions taken by machines must have “sufficient interpretability” to enable firms to understand and explain them. The FCA has also stressed that firms should be transparent with customers about the use of ML in decision-making, whether decisions are being taken about customers or on their behalf. Where personal data is used in ML systems, firms should also consider the requirements of the GDPR.

You should consider whether your organisation is using ML/AI in accordance with existing regulation and/or regulatory and ethical guidance.

Next steps

We expect that reports such as this will lead to further dialogue about how AI should be used responsibly in the financial services sector. Trustworthiness is an important feature of AI, and regulators will be keen to ensure that regulatory and ethical guidance, and eventually regulation itself, promotes it.

As part of this dialogue, the BoE and the FCA have announced the establishment of a public-private working group on AI and ML to explore possible approaches to the issues raised by the Report.

Simmons & Simmons’ Artificial Intelligence Group

Simmons & Simmons’ Artificial Intelligence Group comprises lawyers across various practice areas who can assist companies and individuals with any legal issues arising in relation to AI and ML.

We would be happy to advise on the Report (including any risks for you or your business), or on any other legal issues relating to AI.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.