AI View - December 2024

02 December 2024


Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

This edition brings you:

1. Bank of England and FCA publish findings from third AI and machine learning survey in UK financial sector

2. Australian Senate Committee releases report with recommendations for AI regulation

3. Hong Kong publishes circular to licensed corporations on the use of generative AI language models

4. Germany creates DSK Artificial Intelligence Working Group

5. FSB releases report on the financial stability implications of AI

6. techUK publishes paper on the implementation of ethical AI principles

7. G20 leaders issue declaration on addressing global challenges in AI

Bank of England and FCA publish findings from third AI and machine learning survey in UK financial sector

On 21 November 2024, the Bank of England and the Financial Conduct Authority (FCA) released the findings from their third survey on AI and machine learning in the UK financial services market. The survey, conducted earlier in the year, builds on the previous surveys from 2019 and 2022 and aims to provide a comprehensive overview of AI adoption across the sector.

The report highlights that:

  • 75% of firms are currently utilising AI technologies;
  • An additional 10% of firms plan to implement AI solutions within the next three years;
  • 17% of AI applications involve foundation models, highlighting the integration of advanced AI systems in financial services; and
  • One-third of AI use cases are implemented through external partnerships with third-party providers.

Firms identified key challenges, including ensuring the safety, security and robustness of AI models, as well as addressing talent shortages and skills gaps. On governance, 84% of firms reported having an accountable individual overseeing their AI framework, although many noted that accountability structures are complex, with multiple responsible parties often designated.

Read the full survey here.

Australian Senate Committee releases report with recommendations for AI regulation

The Australian Senate Committee has released a report with 13 key recommendations for AI regulation and strategy in Australia, after considering 245 submissions and holding six hearings.

Central to the report is the recommendation for a new “AI Act” to manage high-risk AI applications, drawing inspiration from the EU and Canada, and suggesting a move towards domain-specific regulations to better fit within Australia's legal system, similar to the approaches of the UK and Singapore.

The report also addresses the concept of AI sovereignty (the ability of a country’s government, industry and society to use technologies productively and effectively to meet their needs without relying on inputs sourced from other countries) and cautions against AI nationalism and its impact on global supply chains. It proposes market incentives and the use of contract law, in particular the unfair contract terms regime, to encourage fair copyright practices and to protect data and IP in AI training.

Overall, it advocates for a balanced regulatory framework that combines broad and sector-specific regulations to effectively manage the evolving AI landscape in Australia.

Read the full report here.

Hong Kong publishes circular to licensed corporations on the use of generative AI language models

On 12 November 2024, the Securities and Futures Commission (SFC) of Hong Kong issued a circular on the use of generative AI language models in financial institutions. While recognising their potential to improve efficiency in tasks such as client interactions and research, the SFC highlighted risks including inaccurate outputs, data leakage and over-reliance on AI-generated responses.

The circular outlines key principles for managing these risks, covering areas such as senior management accountability, model validation, cybersecurity and third-party provider oversight. Licensed corporations are urged to implement robust controls and adopt a risk-based approach tailored to their specific use cases.

Read the full circular here.

Germany creates DSK Artificial Intelligence Working Group

On 15 November 2024, the Conference of Independent Data Protection Authorities in Germany (DSK) announced the creation of a specialised working group to address data protection requirements for AI systems. The initiative aims to develop clear guidelines to ensure AI technologies comply with existing data protection laws and uphold privacy standards.

The working group will focus on assessing the risks posed by AI and providing actionable recommendations for developers and users of AI systems. This step reflects Germany’s commitment to proactively managing the challenges of AI in line with privacy and data protection principles.

Read the full announcement here (in German).

FSB releases report on the financial stability implications of AI

On 14 November 2024, the Financial Stability Board (FSB) published a report on the financial stability implications of AI, highlighting its rapid adoption across financial services and the emerging risks associated with its use.

Three of the key action points considered in the report to address potential financial stability risks from AI adoption are:

  • Identifying data gaps for monitoring developments in AI use: Authorities are encouraged to track the impact of AI on financial stability through regular and ad hoc surveys, reports from regulated entities and public disclosures, and to increase dialogue with the private sector, including financial institutions and AI developers.
  • Assessing regulatory and supervisory frameworks: Standard setting bodies (SSBs) and national authorities, including the FSB itself, could consider the financial stability implications of sector-specific regulatory and supervisory frameworks on market fairness, particularly relating to the integration of fintech firms.
  • Enhancing regulatory and supervisory capabilities: The FSB, co-ordinating with the relevant SSBs, could consider facilitating information sharing and the adoption of best practices across jurisdictions, involving non-financial authorities where necessary.

Read the full report here.

techUK publishes paper on the implementation of ethical AI principles

On 11 November 2024, techUK released a paper on implementing the ethical principles set out in the UK’s 2023 AI Regulation White Paper, ‘A Pro-Innovation Approach to AI Regulation’ (namely safety, transparency, fairness, accountability and contestability), through practical tools such as risk assessments, audits and bias detection.

The paper highlights international standards (such as the ISO/IEC standards for technological products, services and processes), transparency tools (e.g. model cards) and governance mechanisms such as ethics boards and human-in-the-loop systems. It also includes case studies showcasing successful implementations of these principles, including bias monitoring in healthcare and accountability frameworks for safeguarding AI.

Read the full paper here.

G20 leaders issue declaration on addressing global challenges in AI

On 18-19 November 2024, the G20 leaders met in Rio de Janeiro, Brazil to address global challenges under the theme of “Building a Just World and a Sustainable Planet”. The G20 leaders acknowledged the transformative potential of AI while emphasising the need for responsible and inclusive governance frameworks.

Key commitments included:

  • Safe and ethical AI: Promoting the development and deployment of AI in a human-centric, transparent and ethical manner, and addressing challenges such as fairness, accountability, data protection and privacy.
  • Pro-innovation regulation: Advocating for governance approaches that balance innovation with risk management, enabling AI to drive global progress responsibly.
  • Global co-operation: Strengthening international collaboration on AI, ensuring developing countries benefit through capacity building and reduced digital divides.
  • Labour market impact: Addressing AI’s implications for workers’ rights, encouraging fair integration of AI technologies into workplaces, and promoting digital literacy and inclusion.
  • Future initiatives: Welcoming the continuation of AI-focused discussions under South Africa’s G20 presidency, including a high-level task force on AI and innovation.

The declaration underscores the importance of leveraging AI responsibly to foster sustainable development and global equity.

Read the full declaration here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.