In September 2021, the International Organization of Securities Commissions (IOSCO) published its Final Report on the use of Artificial Intelligence (AI) and Machine Learning (ML) by market intermediaries and asset managers. The Report includes proposed guidance for local regulators in IOSCO member jurisdictions to address conduct risks associated with the development, testing and deployment of AI and ML. Although the guidance is not binding on local regulators, it is indicative of the best practices that any financial services firm that is using AI and ML should consider adopting.
Background to the Final Report
AI and ML are increasingly used by market intermediaries and asset managers, for example, to support advisory and support services, risk management, the selection of trading algorithms, and portfolio management. However, the use of this technology can create or amplify certain risks, which could potentially affect the efficiency of financial markets and result in consumer harm. The use of, and controls surrounding, AI and ML within financial markets are a current focus for regulators across the globe, and IOSCO has identified the use of these technologies by market intermediaries and asset managers as a key priority.
In preparing its Report, IOSCO examined best practices arising from the supervision of AI and ML, surveyed and held roundtable discussions with market intermediaries, and conducted outreach to asset managers to identify how AI and ML are being used and the associated risks. The results of this review were published in the Consultation Report released in June 2020, which identified the following principal areas of AI/ML risks and harms:
- governance and oversight;
- algorithm development, testing and ongoing monitoring;
- data quality and bias;
- transparency and explainability;
- outsourcing; and
- ethical concerns.
Final Report – 6 Measures
Based on its review, IOSCO’s Final Report sets out guidance to assist member jurisdictions in supervising market intermediaries and asset managers that use AI and ML. The guidance consists of 6 measures setting out expected standards of conduct. In the remainder of this update, we explain each of the 6 measures, starting with the key wording from the Report, followed by our suggested key actions for firms.
Measure 1 – Governance and responsibilities
"Regulators should consider requiring firms to have designated senior management responsible for the oversight of the development, testing, deployment, monitoring and controls of AI and ML. This includes a documented internal governance framework, with clear lines of accountability. Firms should designate an appropriately senior individual (or groups of individuals), with the relevant and current skill set and knowledge to sign off on initial deployment and substantial updates of the technology, which might also be combined with an already existing person overseeing overall technology and/or data."
Key actions for firms:
- Designate a member of senior management to be responsible for the oversight of the development, testing, deployment, monitoring and controls of AI and ML: The accountability of senior management for the overall performance of the firm extends to the actions and outcomes of AI and ML models, including externally sourced models. Senior management need the appropriate technical knowledge (whether personally or through designated personnel in the firm) required to effectively oversee the use of AI and ML techniques by the firm.
- Establish a documented internal governance framework, reflecting clear lines of accountability, that includes:
- procedures to approve the development, deployment and updates of algorithms and to resolve problems identified when monitoring algorithms;
- defined roles for the legal, compliance and risk management functions;
- appropriate controls and governance arrangements to oversee and challenge outcomes from AI and ML models and their underlying data;
- an AI and ML methodology document and an audit trail of the firm’s use of AI and ML across the life cycle of its AI and ML models; and
- an assessment of whether the technology is applied consistently with the firm’s risk appetite and clients’ risk tolerance, and in an ethical manner.
- Designate a senior individual or group with appropriate skills and knowledge to sign off on deployment and updates of AI and ML technology
Measure 2 – The development, testing, and ongoing monitoring of AI and ML techniques
"Regulators should require firms to adequately test and monitor the algorithms to validate the results of an AI and ML technique on a continuous basis. The testing should be conducted in an environment that is segregated from the live environment prior to deployment to ensure that AI and ML:
(a) behave as expected in stressed and unstressed market conditions; and
(b) operate in a way that complies with regulatory obligations."
Key actions for firms:
- Establish frameworks for testing and monitoring algorithms to validate the results of an AI and ML technique on a continuous basis: The behaviour of AI and ML may change in an unforeseen manner as more data is processed over time. AI and ML techniques should be monitored continuously to ensure that, as the algorithms adjust and transform, they do not behave in unintended ways owing to a subtle shift in operating conditions or new data. Firms should also ensure, where appropriate, that adequate kill-switch functionality is built into their control framework.
- Conduct tests in an environment segregated from the live environment prior to deployment
- Test AI and ML to ensure techniques:
- behave as expected in stressed and unstressed market conditions; and
- operate in a way that complies with regulatory obligations.
- Comply with regulatory obligations: The use of AI and ML should be properly assessed and tested in light of their compliance risks, which may include market abuse, data privacy, risk management, and cybersecurity. Risk and compliance functions should be involved in the development and testing of AI and ML and in monitoring the AI/ML post-deployment.
Measure 3 – Knowledge and skills required by firms’ staff
"Regulators should require firms to have the adequate skills, expertise and experience to develop, test, deploy, monitor and oversee the controls over the AI and ML that the firm utilises. Compliance and risk management functions should be able to understand and challenge the algorithms that are produced and conduct due diligence on any third-party provider, including on the level of knowledge, expertise and experience present."
Key actions for firms:
- Assess whether the firm has adequate skills, expertise and experience to develop, test, deploy, monitor and oversee the controls over the AI and ML that the firm uses
- Assess whether compliance and risk management functions are able to:
- understand and challenge the algorithms produced by the firm; and
- conduct due diligence on any third-party provider.
- Establish multi-disciplinary teams involving the business line users of the technology, data scientists, IT and database administration staff, and risk and compliance functions
- Establish processes and documentation to ensure continuity of AI and ML solutions in the event of departures of key staff
Measure 4 – Outsourcing
"Regulators should require firms to understand their reliance on and manage their relationship with third-party providers of AI and ML, including monitoring providers’ performance and conducting oversight. To ensure adequate accountability, firms should have a clear service level agreement and contract in place clarifying the scope of the outsourced functions and the responsibility of the service provider. Where appropriate, this agreement should contain clear performance indicators and should also clearly determine rights and remedies for poor performance."
Key actions for firms:
- Assess reliance on and manage relationships with third-party providers of AI and ML, including by monitoring performance and conducting oversight
- Put in place clear service level agreements and contracts with third-party providers:
- clarifying the scope of the outsourced functions;
- clarifying the responsibility of the service provider;
- setting out clear performance indicators; and
- determining rights and remedies for poor performance.
Measure 5 – Transparency and disclosure
"Regulators should consider what level of disclosure of the use of AI and ML is required by firms, including:
(a) Regulators should consider requiring firms to disclose meaningful information to customers and clients around their use of AI and ML that impact client outcomes.
(b) Regulators should consider what disclosures they may require from firms using AI and ML to ensure they can have appropriate oversight of those firms."
Key actions for firms:
- Determine what information customers and clients need about the firm’s use of AI and ML to understand the nature and key characteristics of the products and services that they are receiving, and how they are impacted by the technology
- Consider how to disclose the information in a way that is non-discriminatory and is easily comprehensible, including the level of detail required by customers and clients (for example, in the form of an AI Explainability Statement)
Measure 6 – Data quality and bias
"Regulators should consider requiring firms to have appropriate controls in place to ensure that the data that the performance of the ML and AI is dependent on is of sufficient quality to minimise biases and sufficiently broad for a well-founded application of AI and ML."
Key actions for firms:
- Establish controls to ensure that the data used in AI and ML technology is:
- of sufficient quality to minimise biases by checking the quality of sources used and relevance and completeness of data;
- representative of the target population, so that it does not lead to exclusion phenomena; and
- sufficiently broad to be appropriate for use in AI and ML solutions.
- Establish processes and controls to identify and remove unjustifiable biases from data sets
- Analyse the outputs of the AI and ML for the risk of discrimination
- Run specific training courses for technical and non-technical staff involved in AI and ML to raise awareness of potential data biases