Download the full visual bulletin here, text below.
IOSCO report on the use of AI and ML by market intermediaries and asset managers
7 September 2021
Who is affected?
Financial services firms using AI and ML, particularly market intermediaries and asset managers.
What is it?
The International Organization of Securities Commissions (IOSCO) published a report to provide guidance on supervising market intermediaries and asset managers using AI and ML.
The guidance consists of six measures reflecting the expected standards of conduct relating to:
governance and responsibilities
the development, testing and ongoing monitoring of AI and ML techniques
knowledge and skills required by firms’ staff
outsourcing
transparency and disclosure
data quality and bias
Read more in our Insights article here and you can find the full IOSCO report here.
What should I do?
Although the guidance is not binding on local regulators, it indicates the best practices that firms using AI and ML should consider adopting.
You should review the IOSCO report and read our Insights article that sets out key actions for firms.
Our recommended actions include establishing internal governance and control frameworks for developing and monitoring algorithms and multidisciplinary teams with an adequate mix of skills and experience to implement these frameworks.
Potential amendments to UK data protection law
10 September 2021
Who is affected?
Firms which use AI in decision making processes.
What is it?
The UK government is considering removing a clause in its data protection laws which allows people to seek human review of AI decisions, with the aim of developing a pro-growth and trusted data regime for the UK.
Part of the consultation focuses on the responsible development of AI and invites responses to inform the government's next steps. One key area examines the potential removal of Article 22 GDPR, which currently allows people to seek a human review of AI-based decisions (e.g. an AI decision to award a loan, or aptitude tests used to filter job candidates).
Removal of this article would contrast with data regimes proposed by other authorities around the world which aim to give citizens more control over AI-based decisions. Read the consultation here.
What should I do?
UK legislation is likely to be shaped by the outcome of the consultation. Should the legal obligation be removed, UK companies may wish to adjust their AI workflows to take advantage of the new freedoms.
The results of this 10-week consultation will also be closely followed by the European Commission to see if the reforms breach EU standards of data privacy. Watch this space for potential legal action on behalf of EU citizens.
World’s first AI Explainability Statement reviewed by the ICO
17 September 2021
Who is affected?
All companies interested in or involved in the use of AI.
What is it?
Healthily, one of the world’s leading AI smart symptom checkers, has published the first AI Explainability Statement to be reviewed by the UK Information Commissioner’s Office (ICO), supported by Simmons & Simmons, Best Practice AI and Jacob Turner of Fountain Court Chambers.
Healthily’s AI Explainability Statement provides an explanation of how it uses AI in its app. It includes information on why the AI is being used, how the AI system was designed and how it operates.
There has been an increased regulatory focus on disclosure and transparency, especially when dealing with AI-based processes. Although not yet legally mandated, AI Explainability Statements such as Healthily’s demonstrate that a company is proactively aligning with global best practice and regulatory guidance.
The statement provides a non-technical explanation to customers, regulators and the wider-public, aiding an increased public understanding and confidence in the use of AI technology.
Read Healthily’s AI Explainability Statement here and Simmons’ press release.
What should I do?
Firms which currently implement AI as part of their core activities should consider preparing an AI Explainability Statement, both to align with industry best practice and to prepare for legislation that may make such disclosure binding.
Get in touch if we can help with your AI Explainability Statement.
The UK Court of Appeal rejects appeal, confirming an AI system cannot be an inventor
21 September 2021
Who is affected?
IP lawyers, developers and users of AI.
What is it?
The UK Court of Appeal held that an AI system cannot be named as an inventor on UK patent applications. Dr Stephen Thaler had applied for two patents which named his AI system, DABUS, as the inventor. However, the Court of Appeal found that DABUS could not qualify as a patent inventor because it was not a 'person'.
Dr Thaler's claim is one of several parallel applications around the world and has seen some success in the Australian and South African courts. His case has raised significant questions as to whether the current state of patent law is sufficient to deal with AI. Notably, the UK government's national AI strategy has set out plans to consult on how copyright and patents should protect AI-generated inventions and creative works.
Read the judgment here.
What should I do?
This decision confirms the state of the law in relation to AI inventors. However, it is expected to be appealed to the Supreme Court.
The UK government's national AI strategy has set out plans to consult on how copyright and patents should protect AI-generated inventions and creative works (read the strategy here).
Watch this space!
The UK released its National AI Strategy
22 September 2021
Who is affected?
UK firms developing or working with AI systems.
What is it?
The 10-year strategy is part of the UK Government’s ambition to become an AI superpower and sets out three main aims:
invest and plan for the long-term needs of the AI ecosystem
support the transition to an AI-enabled economy
ensure the UK gets the national and international governance of AI technologies right
To achieve these aims, the strategy sets out three pillars for growth: (1) long-term investment into the AI ecosystem, (2) ensuring AI benefits all sectors and regions, and (3) effective AI governance.
The plan will include the launch of a National AI Research and Innovation Programme, the trial of an AI Standards Hub involved in global AI standardisation and the launch of a consultation on copyright and patents for AI.
Read the press release here and the full UK National AI Strategy here.
What should I do?
The UK aims to position itself as a nation with clear rules, applied ethical principles and a pro-innovation regulatory environment for AI.
You should continue to monitor the government’s follow-on publications and consultations and in particular any potential changes to the AI regulatory environment.
OECD report on AI in business and finance
24 September 2021
Who is affected?
Financial services firms using AI.
What is it?
The report from the Organisation for Economic Co-operation and Development (OECD) finds that the increasing use of AI in the financial services sector offers many opportunities for firms and consumers, but also raises policy issues around security, fairness, explainability and market stability. It combines detailed market analysis and case studies of AI systems used by firms with frameworks that firms and regulators can use to design and implement policies addressing the risks identified.
Find the full OECD report here.
What should I do?
Review the OECD report and consider whether you can take any actions now, for example in relation to communications with consumers, designing systems for transparency and explainability, and systems and controls for algorithmic and high-frequency trading (HFT) strategies.
China published its first set of guidelines on AI ethics
26 September 2021
Who is affected?
Firms in China which use or are interested in AI.
What is it?
As part of its ambitions to become a global AI leader by 2030, the Chinese government has published a set of AI ethical guidelines - the first of their kind in China.
The guidelines, titled "New Generation Artificial Intelligence Ethics Specifications", put forward six basic ethical requirements:
enhancing human well-being
promoting fairness and justice
protecting privacy and safety
ensuring controllability and credibility
strengthening accountability
improving ethical literacy
In particular, the guidelines specify that humans should maintain "full autonomous decision-making power" and that AI is forbidden to "harm public interests".
The emphasis on protecting and empowering users reflects China’s ongoing efforts to exercise greater control over an expanding tech sector. Read the guidelines here.
What should I do?
The guidelines are a work in progress and will evolve over time to reflect changing societal and economic needs and the rapid development of AI.
The ethical requirements themselves should not be unfamiliar. This publication should prompt companies to review their AI-related policies and activities to ensure compatibility with global expectations of fairness, transparency and accountability.
FCA and Bank of England AI Public-Private Forum meeting on AI governance
1 October 2021
Who is affected?
Financial services companies using AI and ML.
What is it?
The AI Public-Private Forum (AIPPF) met to discuss the role of governance in the safe adoption of AI and ML in UK financial services.
The focus of the meeting was to identify and discuss key issues relating to roles and responsibilities, governance structures, transparency and communications, standards, auditing, and the regulatory framework.
Key takeaway points from the discussion include recommendations that:
- Business areas be responsible for the governance/outputs of AI models, which would involve appointing an accountable executive for each relevant business area.
- Firms establish governance mechanisms that ensure people at every stage of the AI lifecycle sign off on their respective aspects.
Whilst the discussions at AIPPF meetings may not be indicative of future Bank of England or FCA policies, they should inform the development of best practices.
Read our Simmons Insights article here and find the meeting minutes here.
What should I do?
You should review the minutes of the AIPPF meeting and consider taking steps to ensure that your firm’s AI governance, systems and controls, and risk management frameworks reflect best practice.
Read our guide here for our recommended actions for companies arising from the meeting.
The European Parliament called for a ban on facial recognition in public spaces
5 October 2021
Who is affected?
Those who live, work or travel in the EU.
What is it?
The European Parliament adopted a resolution which calls on the European Commission to ban the processing of biometric data for law enforcement purposes.
The European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE) produced a report on AI in criminal law, which highlights the risk of mass surveillance of a population given the risks of bias. Based on the report, EU lawmakers argued that:
- citizens should only be monitored if suspected of a crime
- social scoring systems which rate citizens based on behaviour and/or personality should be outlawed
- human operators must always make the final decisions
- public authorities should use open source software
Read the report here and the resolution here.
What should I do?
Monitor the progress of EU institutions’ discussions and law-making. If your business operates in the EU, these laws will affect you.
White House science advisors are calling for an AI ‘bill of rights’
22 October 2021
Who is affected?
Those who live, work or travel in the USA.
What is it?
The White House Office of Science and Technology Policy is in the process of consulting the public to develop AI policy goals which are representative of the entire population, limit bias and increase transparency.
Suggestions for what the so-called bill of rights might include:
- a right to know when and how AI is influencing a decision that affects your civil rights and civil liberties
- freedom from being subjected to AI that hasn’t been carefully audited to ensure that it’s accurate, unbiased, and has been trained on sufficiently representative data sets
- freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace
- a right to meaningful recourse if the use of an algorithm harms you.
Read the White House announcement here.
What should I do?
The White House held several events in relation to the use of AI in democratic participation: find links to watch the events here.
IMF paper “Powering the Digital Economy: Opportunities and Risks of Artificial Intelligence in Finance”
22 October 2021
Who is affected?
Financial services firms using AI and ML.
What is it?
This International Monetary Fund (IMF) paper discusses the impact of AI and ML on the financial sector. It highlights the benefits these technologies bring, but also the risks they could pose to the integrity and safety of the financial system.
The risks and policy considerations discussed in the paper are:
- embedded bias
- explainability and complexity of AI/ML systems
- new cybersecurity risks
- data privacy
- robustness of AI/ML algorithms
- potential impact on financial stability
The paper suggests that the financial sector needs a proper framework and strategy for managing explainability risks. It also identifies a need to develop clear minimum standards and guidelines, coupled with improvements in monitoring frameworks, to keep pace with the evolving nature of AI/ML technology.
The paper calls on regulators to welcome the advancements of AI/ML in finance and to undertake the necessary preparations to capture its potential benefits and mitigate its risks.
Read the discussion paper here.
What should I do?
The IMF’s guidance is likely to inform the approach by global regulators on the use of AI in financial services.
You should be aware of the IMF’s views and use them to ensure that the use, controls, and oversight of AI in your firm reflect best practice.
EBA discussion paper on ML for IRB models
11 November 2021
Who is affected?
EU financial institutions that use IRB models to calculate regulatory capital requirements for credit risk.
What is it?
The discussion paper from the European Banking Authority (EBA) seeks industry feedback on practical aspects of using ML in internal ratings-based (IRB) models. Its aim is to set supervisory expectations on how ML models can comply with the Capital Requirements Regulation (CRR) when used in the context of the IRB framework.
The EBA recognises that the use of ML has so far been limited, due to concerns that certain characteristics of ML may make it challenging to comply with CRR requirements. These concerns relate primarily to the interpretability of results, insufficient understanding of ML by users and management functions, and the difficulty of evaluating the generalisation capacity of models, i.e. overfitting.
Addressing these concerns, the discussion paper provides a set of principle-based recommendations to ensure ML models adhere to CRR requirements in the context of the IRB framework. These recommendations include avoiding unnecessary complexity, ensuring models can be documented properly, and focusing on validation.
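The "generalisation capacity" concern above has a simple quantitative illustration. The following sketch is purely a toy example (polynomial curve fitting on synthetic data, not an IRB credit risk model): it compares a model's error on the sample it was fitted to against its error on a hold-out sample, the standard way to detect overfitting.

```python
import numpy as np

# Illustrative toy example only: a hold-out check of a model's
# "generalisation capacity". The data and models are hypothetical,
# not an IRB credit risk model.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

# Split observations into a development (training) sample and a hold-out sample.
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

def fit_and_score(degree):
    """Fit a polynomial on the training sample; report MSE on both samples."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

# A richer model always fits the training sample at least as closely, but its
# hold-out error can deteriorate. The gap between training and hold-out error
# is a basic overfitting signal, echoing the EBA's "avoid unnecessary
# complexity" recommendation.
train_simple, test_simple = fit_and_score(2)
train_complex, test_complex = fit_and_score(9)
```

A large gap between `train_complex` and `test_complex`, relative to the simpler model's gap, is the kind of evidence a validation function would weigh when assessing model complexity.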
Read the discussion paper here and slides from the public hearing here.
What should I do?
You should review the discussion paper and, if you would like to engage in the consultation process, coordinate internally to prepare responses to the EBA’s particular questions. The deadline for submission of responses is 11 February 2022. The consultation form can be accessed here.
Additionally, if you are considering using ML models in the context of IRB models or you are seeking approval to apply ML for regulatory purposes, you should consider the recommendations set out in Section 4 of the discussion paper.
UK Government launches an Algorithmic Transparency Standard
29 November 2021
Who is affected?
UK based companies using or interested in AI.
What is it?
As part of the UK Government’s National Data Strategy, it launched one of the world’s first Algorithmic Transparency Standards. The Cabinet Office’s Central Digital and Data Office (CDDO) worked with the Centre for Data Ethics and Innovation (CDEI) along with experts and the public to design the standard.
The standard involves two tiers: (1) a description of the algorithmic tool, including how and why it is being used, and (2) detailed information about the training data and how the tool works.
The UK Government hopes that increased transparency will promote innovation by providing better visibility of the use of algorithms and allowing unintended consequences to be mitigated at an earlier stage. The standard is currently being piloted by several public sector organisations and will be developed further based on the feedback from the pilot scheme.
Algorithmic tools will be in scope if they meet criteria in each of the three following areas: (1) technical specifications (i.e. involving complex statistical analysis, complex data analytics or ML), (2) potential public impact and (3) involved in decision making.
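The two tiers and the scoping criteria described above lend themselves to a structured record. The sketch below is a hypothetical illustration of what such a record might contain; the field names and example tool are assumptions for illustration, not the official schema published by the CDDO.

```python
# Hypothetical sketch of a two-tier record under the Algorithmic Transparency
# Standard. All field names and values are illustrative assumptions, not the
# official schema.
record = {
    "tier_1": {  # short, plain-English description for the general public
        "name": "Application triage tool",
        "how_it_is_used": "Ranks incoming applications for manual review",
        "why_it_is_used": "To prioritise caseworker time",
    },
    "tier_2": {  # detailed information for specialists
        "training_data": "Anonymised historical applications, 2015-2020",
        "model_type": "Gradient-boosted decision trees",
        "human_oversight": "All final decisions are made by a caseworker",
        "in_scope_criteria": {  # the three scoping tests described above
            "technical_specification": "machine learning",
            "potential_public_impact": True,
            "involved_in_decision_making": True,
        },
    },
}
```

Note how the scoping criteria sit alongside the disclosure itself: a tool is in scope only if it satisfies a criterion in each of the three areas.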
Read the press release here and the Algorithmic Transparency Standard here.
What should I do?
Whilst the pilot scheme only involves government and public sector employees, it indicates the level of disclosure which may be mandated in the future.
Companies should consider whether they would be able to meet these standards as a matter of best practice.
ICO announces provisional intention to fine Clearview AI c.£17m
29 November 2021
Who is affected?
Developers and users of FRT and personal data processing software.
What is it?
The Information Commissioner's Office (ICO) has announced its provisional intention to fine Clearview AI Inc just over £17m. The ICO also issued a provisional notice requiring Clearview to stop the further processing of personal data in the UK and delete the data currently held.
This follows a joint investigation by the ICO and the Australian data regulator (OAIC) into Clearview's use of images and personal data scraped from the internet, and of biometric data, for facial recognition. The ICO's preliminary view is that Clearview breached UK data protection laws including by failing to:
- process information fairly
- have a lawful reason to collect data
- meet higher data protection standards under the GDPR
- inform data subjects about what is happening to their data
Clearview has the opportunity to make representations in respect of the alleged breaches, but the ICO expects to make its final decision by mid-2022.
Read the ICO press release here.
What should I do?
This announcement shows the ICO is prepared to take enforcement action against breaches of UK data protection regulations.
You should review your personal data processing workflow to ensure that it complies with the UK GDPR and other UK data protection regulations.
Get in touch if we can help with your review.