AI: stay smart - Summer 2021

Stay up-to-date on key AI regulatory updates around the world with our bulletin.

03 September 2021

Publication

Download the full visual bulletin here; the text is set out below.

“AI in Financial Services” report published by The Alan Turing Institute and commissioned by the FCA

11 June 2021

Who is affected?

Financial services providers in the UK.

What is it?

The Alan Turing Institute has published a report on “AI in Financial Services”, which was commissioned by the FCA.

This report is the latest publication resulting from the public policy collaboration between The Alan Turing Institute and the FCA. The report offers detailed definitions of AI, ML and automation, and explores their potential benefits and risks in the financial services sector, including:

  • Understanding and managing data quality
  • Opacity of AI models
  • Structural changes to supply chains
  • Unwarranted exclusions and denials of services

The report dedicates an entire chapter to the importance of internal and external transparency when utilising AI systems and repeatedly emphasises the importance of “responsible AI”.

Although the principles and ideas discussed are not new, the report is clear that its contents will support the FCA’s future work in this sector.

Read the report here.

What should I do?

The report’s focus on transparency suggests that it will be a key pillar of any future FCA regulation of AI. Financial services companies should ensure that they are prepared to be transparent about their use of AI with both users and regulators.

Facial Recognition and Biometric Technology Moratorium Act reintroduced to the US Senate

15 June 2021

Who is affected?

All those interested in the regulation of AI.

What is it?

A Bill dealing with facial recognition technology (FRT) and biometric technology has been reintroduced to the US Senate.

If passed, the Bill would, amongst other things, prohibit federal agencies from using FRT, voice recognition and other AI technologies that interpret biometric data. The Bill would also provide a private right of action for individuals whose data had been used in violation of the proposed restrictions.

The Bill has been endorsed by a wide range of civil liberties organisations, including the ACLU and the LGBT Technology Partnership and Institute.

Read the Bill here.

What should I do?

It is unlikely that this Bill will become law, as similar proposals have previously failed to progress beyond the committee stage. However, continued pressure from senators may indicate that some form of federal regulation of FRT is inevitable.

Developers and users of FRT in the US should monitor the progress of the Bill.

Guidance on the use of LFR in public places published by the ICO

18 June 2021

Who is affected?

Developers and users of LFR in the UK.

What is it?

The ICO has published detailed guidance on the use of live facial recognition technology (LFR) in public places.

The guidance confirms that data protection law applies to LFR when it processes personal data, biometric data, special category data, or criminal offence data – which is likely in almost all cases.

When using LFR, the ICO recommends:

  • A "data protection by design" approach (i.e. incorporating data protection at every stage of the LFR lifecycle)
  • Taking steps to identify and mitigate the risks of bias and discrimination inherent to LFR
  • Increased transparency on the effectiveness of LFR
  • The introduction of industry-wide standards to assess and describe accuracy
  • Enhanced education of controllers on how LFR works and their data protection obligations

The Commissioner also pledges, amongst other things, proactive audits of deployed LFR systems and continued engagement with Parliament on the application of data protection law to LFR.

Read the guidance here.

What should I do?

The ICO has emphasised that it stands ready to act on complaints regarding breaches of data protection rights, so developers and users of LFR should ensure their systems comply with the ICO’s guidance.

Report on “A Proposal for Identifying and Managing Bias in Artificial Intelligence” released by NIST

21 June 2021

Who is affected?

Developers and users of AI in the US.

What is it?

The National Institute of Standards and Technology (NIST) has released a draft AI report, which “is intended as a step towards consensus standards and a risk-based framework for trustworthy and responsible AI”.

The paper proposes a detailed approach for identifying and managing AI bias throughout the three stages of the AI lifecycle: (1) Pre-Design, (2) Design and Development, and (3) Deployment. For example, the report encourages researchers to include statements identifying potential societal impact when submitting their work to journals and suggests that developers actively critique their own techniques to help root out biases.

The report also includes a glossary offering definitions for a range of biases, and it will be interesting to see whether these are eventually codified into legislation.

Read the draft report here.

What should I do?

Developers and users of AI systems should consider submitting feedback via email (ai-bias@list.nist.gov) by 10 September 2021. NIST have requested that feedback be submitted using this template where possible.

Please note that all information is subject to release under a Freedom of Information Act (FOIA) request.

"Ethics & Governance of Artificial Intelligence for Health” guidance published by the WHO

28 June 2021

Who is affected?

All those interested in the use of AI in healthcare.

What is it?

The WHO (World Health Organization) has published guidance which identifies the ethical challenges and risks associated with the use of AI in the health sector.

The WHO endorses six key ethical principles:

  1. Protecting human autonomy
  2. Promoting human well-being and safety and the public interest
  3. Ensuring transparency, explainability and intelligibility
  4. Fostering responsibility and accountability
  5. Ensuring inclusiveness and equity
  6. Promoting AI that is responsive and sustainable

The relatively novel concept of responsiveness is similar to accuracy, a common principle in other AI guidelines.

The report also proposes a large number of recommendations to ensure the proper governance of AI, including, for example, suggesting that international agencies (such as the Council of Europe and OECD) collaborate to develop a plan to address the ethical challenges of AI in healthcare, and emphasising the importance of all countries having clear and effective data protection laws and independent authorities to enforce them.

The WHO have committed to considering more specific guidance for additional tools, as well as monitoring developments in AI and updating the guidance accordingly.

Read the report here.

What should I do?

The WHO’s guidance is likely to influence future legislation in UN member states.

It is therefore important that those developing or using AI in the health sector consider the application of the WHO’s principles and recommendations to their projects to mitigate future compliance risks.

GPDP fines company for GDPR breaches relating to algorithmic processing of data

05 July 2021

Who is affected?

Developers and users of AI in the EU and particularly compliance teams and data protection officers.

What is it?

For the first time, a data protection authority - Italy's Garante per la protezione dei dati personali (GPDP) - has fined a company for GDPR breaches relating to its algorithmic processing of personal data.

Amongst other things, the GPDP found that the Italian company:

  • Failed to notify workers that it was using their personal data in automated decision-making (in breach of Art 13(2)(f))
  • Failed to provide "meaningful information about the logic involved" in this automated decision-making (also in breach of Art 13(2)(f))
  • Failed to safeguard the workers' rights and freedoms in the context of the automated processing of their personal data (in breach of Art 22(3))

The GPDP ordered the company to bring its processing operations into compliance with the GDPR and fined it EUR 2.6 million.

Read the press release in Italian here and an extract in English here.

What should I do?

Explainability is a legal obligation under the GDPR, and we expect to see similar challenges to the use of algorithmic systems in the future. Developers and users of AI systems should ensure that their systems are GDPR compliant.

Beta version of AI and data protection toolkit released by ICO

20 July 2021

Who is affected?

Developers and users of AI systems in the UK and particularly compliance teams and data protection officers.

What is it?

The ICO has released the beta version of its AI and data protection toolkit, which contains useful information on the data protection risks created by AI and suggestions for practical steps organisations can take to mitigate these risks.

The toolkit includes a risk statement to help organisations understand the data protection implications of using AI and suggestions on best practice.

The toolkit also provides a framework and methodology for auditing AI systems to ensure compliance with data protection laws.

See the toolkit here.

What should I do?

The ICO is welcoming suggestions and feedback on the toolkit, as well as volunteers to test it on live AI applications.

If you’re interested, you can email AI@ico.org.uk. Please get in touch with us if you would like advice on doing so.

World's first patent for a device invented by an AI system granted in South Africa

28 July 2021

Who is affected?

IP lawyers, and developers and users of AI.

What is it?

South Africa has granted the world’s first patent to a device invented by an AI system. The innovation concerns interlocking food containers, designed to make it easier for robots to grasp and stack them, and an emergency warning light system. The so-called inventor is DABUS (Device for the Autonomous Bootstrapping of Unified Sentience), an ‘AI creativity’ machine which simulates human brainstorming and seeks to replicate the human innovation process.

Patent applications for inventions by DABUS have previously been unsuccessful at the US, UK and EU patent offices, and the decision in South Africa has led to widespread backlash from IP experts.

The team behind DABUS are appealing in the UK, Europe and the US, so the true impact of the recent decisions in South Africa and Australia (below) remains to be seen.

See the patent here.

What should I do?

South Africa’s patent examination process is considered less stringent than those of other jurisdictions, meaning that the potential knock-on effect of this decision may be limited.

Report on Lawtech and ethical principles published by the Law Society

28 July 2021

Who is affected?

Developers or users of legal technology in the UK.

What is it?

The Law Society has published a report on Lawtech and related ethical principles.

The report provides an overview of the main ethical considerations and concerns surrounding the use of Lawtech, and proposes a framework for small firms and sole practitioners wishing to procure Lawtech.

The report also details five Lawtech Principles, all of which are to be viewed through the overarching duty to act in the best interests of clients:

  1. Compliance
  2. Lawfulness
  3. Capability
  4. Transparency
  5. Accountability

These principles are familiar from other ethical guidance on AI and show the increasing proliferation of guidance on ethical or responsible AI.

Read the report here.

What should I do?

The Law Society intends to conduct a review of small firms and sole practitioners’ experiences of using the principles when purchasing Lawtech – watch this space!

Australian court rules that an AI system can be named as an inventor on a patent

30 July 2021

Who is affected?

IP lawyers, and developers and users of AI.

What is it?

The Federal Court of Australia has ruled that an AI system can be named as an inventor for patent purposes.

Courts in other jurisdictions have previously declined to recognise AI systems as inventors, in most cases due to strict definitions of “inventor”, meaning that patents have only been granted to natural persons.

This development has been extremely controversial. Many are concerned that machine inventors could stifle human creativity, and even incentivise ‘patent generating machines’ that churn out inventions with little to no human intervention.

Read the judgment here.

What should I do?

As with the recent South African patent grant, this decision is not expected to have wide-reaching consequences for the IP community, but its true impact remains to be seen.

FRT deployed across Indian Railways

26 August 2021

Who is affected?

Developers and users of FRT, especially those operating in Asia.

What is it?

Almost 500 facial recognition cameras have been deployed across 30 railway stations in India.

This project was initially announced in January (here), but a recent FT report highlighted its wider deployment. The system, developed by start-up NtechLab, appears to be a further escalation of the government’s use of video surveillance across India.

Other examples include the Indian Police Service’s recent use of FRT to identify and arrest over 1,000 individuals involved in a rally, and Delhi law enforcement’s FRT system for identifying missing children.

The country’s Personal Data Protection Bill (similar to the EU’s GDPR) has also recently stalled, causing some concern regarding the protections available to individuals wishing to control their personal data.

What should I do?

This expanded deployment has been very controversial, so it will be interesting to see whether there is any legal or regulatory backlash.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.