Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.
This edition brings you:
FCA, PRA and BoE issue updates on UK AI regulation
Japan issues generative AI guidelines for industrial developers, providers, and users
UK DRCF announces the launch of its AI and Digital Hub
JAMS publishes rules for AI disputes
Italy’s Council of Ministers adopts a draft Bill on provisions and delegation to the Government on AI
Four NIST draft publications enhancing the safety, security, and trustworthiness of AI systems
Indonesia announces draft AI Bill
FCA, PRA and BoE issue updates on UK AI regulation
Previous issues of AI View have covered the feedback received from the Bank of England (BoE), Prudential Regulation Authority (PRA) and Financial Conduct Authority (FCA) on the regulation of AI in the financial services sector.
On 22 April 2024, in response to a request from the UK Government, the BoE and the PRA published an update on their strategic approach to AI. The update outlines their ongoing exploration of the adoption and implications of AI technologies by financial services firms. On the same day, the FCA published its updated AI strategy, following the Government's publication of its pro-innovation strategy in February 2024.
Headlines include:
- The FCA maintained its stance as a technology-agnostic, principles-based, outcomes-focussed regulator, a position echoed by the PRA.
- The PRA emphasised that being technology-agnostic does not mean being technology-blind, indicating a focus on understanding and addressing technology-related risks.
- Neither the FCA nor the PRA has introduced new AI-specific rules, but they do not rule out potential future regulatory adaptations.
- The FCA provides examples of how it will regulate AI within its existing framework, referencing the Threshold Conditions, Principles (2, 3, 6, 7, 8, 9), SYSC, the SMCR, and the Consumer Duty.
- The FCA shows a clear focus on operational resilience, critical third parties, and outsourcing, stating a growing urgency to proactively address risks in these areas.
- The FCA also highlights the Consumer Duty, suggesting that AI usage might be incorporated into annual Consumer Duty reports, which could impact retail consumer outcomes or assist in monitoring and evaluating those outcomes.
- Although of limited direct relevance to firms outside the scope of the Consumer Duty, the FCA's suggestion indicates a potential trend towards regulatory reporting referencing the use of AI.
- The FCA is actively monitoring the area of quantum computing.
Read the joint letter from the Bank and PRA here. Read the FCA's updated AI strategy here.
Japan issues generative AI guidelines for industrial developers, providers, and users
On 19 April 2024, Japan's Ministry of Internal Affairs and Communications and Ministry of Economy, Trade and Industry jointly published their AI Guidelines for Business.
In line with the current trajectory of AI governance, the Guidelines outline key principles for AI business actors, including maintaining human dignity, ensuring safety, fairness, privacy protection, ensuring security, transparency, and accountability. They also stress the importance of agile AI governance, which involves continuous and rapid cycles of environment and risk analysis.
Highlights include:
- AI developers are advised to ensure proper data training, consider bias in algorithms, deploy mechanisms for security measures, ensure verifiability, and explain conformity to common guiding principles.
- AI providers are expected to take actions against risks, consider bias in configurations and data of AI systems and services, deploy mechanisms and measures for privacy protection, and provide relevant stakeholders with information.
- AI business users are advised to use AI systems and services properly, consider bias in input data or prompts, implement security measures, and provide relevant stakeholders with information.
- For AI business actors involved in advanced AI systems, additional guiding principles are provided, including mitigating societal, safety, and security risks, ensuring fair competition, and promoting innovation.
Read a provisional translation of the guidelines (from Japanese into English) here.
UK DRCF announces the launch of its AI and Digital Hub
On 22 April 2024, the UK Digital Regulation Cooperation Forum (DRCF) announced the launch of its AI and Digital Hub.
Key takeaways from the announcement include:
Purpose: The DRCF's AI and Digital Hub aims to assist innovators in navigating complex regulatory requirements, boosting confidence in launching new products, services, and business models.
Free Advice: The Hub provides a single source of free and informal advice from four regulators, helping to foster innovation and support UK economic growth.
Eligibility: Innovators developing a new AI or digital product, service, or business model can submit a query to the Hub if their proposal is innovative, largely digital or uses AI, benefits consumers, businesses and/or the UK economy, and falls within the scope of at least two of the four DRCF members' regulatory remits (CMA, Ofcom, ICO, FCA).
Case Studies: The Hub will publish the outcomes of the queries it addresses as case studies on its website, allowing a wider range of innovators to access previous informal advice. Innovators can request to anonymise the case study and the Hub will consult with them to ensure confidentiality concerns are addressed.
Application Process: Innovators can apply by filling in the Hub application form, providing details about their AI or digital product, service or business model, their query, and how they meet the eligibility criteria.
Informal Advice: The advice provided by the Hub is informal and not legally binding. It does not replace the need for independent legal advice and does not provide public or commercial endorsement or certification of compliance with the law.
View the DRCF's announcement here.
JAMS publishes rules for AI disputes
On 15 April 2024, the Judicial Arbitration and Mediation Services (JAMS) published its rules for AI disputes (AI Rules), together with an updated model dispute resolution clause and a protective order.
The AI Rules provide for the designation of certain information as "Confidential" or "Highly Confidential" and set out procedures for the protection of such information. They also allow for the possibility of settlement and consent awards, as well as the imposition of sanctions for failure to comply with obligations under the rules. Of note, the JAMS AI Rules do not address the use of AI in arbitration proceedings.
Access the JAMS AI Rules here.
Italy's Council of Ministers adopts a draft Bill on AI regulation
On 23 April 2024, the Italian Council of Ministers approved a Bill on AI that aims to balance the opportunities offered by new technologies and the risks related to their misuse.
The Bill introduces rules that promote the use of new technologies to improve citizens' living conditions and social cohesion, while providing risk management solutions based on a human-centric vision.
In summary, the Bill:
- promotes the use of AI in various sectors to improve productivity and launch new economic activities for social welfare;
- ensures that people with disabilities have full access to AI systems without discrimination;
- regulates the use of AI in public administration, judicial activity, and national cybersecurity;
- introduces the need for cybersecurity compliance throughout the life cycle of AI systems and models;
- provides for the protection of users and copyright, with specific provisions for the use of AI in the workplace, the health and disability sector, and the intellectual professions; and
- establishes a monitoring body for the adoption of AI systems in the world of work and provides for investments in the sectors of AI, cybersecurity, quantum computing, and telecommunications.
Access the Council's (Italian language) press release here.
Four NIST draft publications enhancing the safety, security, and trustworthiness of AI systems
On 29 April 2024, the U.S. Department of Commerce announced several initiatives following President Biden's Executive Order on the Safe, Secure and Trustworthy Development of AI. The U.S. National Institute of Standards and Technology (NIST) released four draft publications aimed at enhancing the safety, security, and trustworthiness of AI systems. The draft publications cover various aspects of AI technology and are open for public feedback until 2 June 2024:
AI RMF Generative AI Profile (NIST AI 600-1): based on the AI Risk Management Framework (RMF), the guidance presents the idea of "profiles" to allow organisations to tailor the framework to their specific use cases, sectors, or applications according to their unique needs, risk tolerances, and resources. The draft "GenAI Profile" explains how to manage risks throughout the Generative AI lifecycle and addresses common risks across different sectors. The guidance also offers a set of "actions" to assist organisations in governing, mapping, measuring, and managing Generative AI risks, organised by AI RMF subcategories, each linked to specific risks, and varying based on their relevance to different AI actors.
Secure Software Development Practices for Generative AI (NIST SP 800-218A): focuses on secure software development practices for Generative AI and dual-use foundation models. It introduces the concept of "profiles" to tailor practices to specific use cases, sectors, and risk tolerances, and outlines the importance of secure software development, risk management, and the protection of software from unauthorised access, emphasising the need for continuous monitoring of software execution performance and behaviour in software development environments.
Reducing Risks Posed by Synthetic Content (NIST AI 100-4): lays out methods for detecting, authenticating, and labelling synthetic content, such as digital watermarking and metadata recording. These techniques can be used to embed data identifying the origin or history of audio-visual content to help verify its authenticity.
A Plan for Global Engagement on AI Standards (NIST AI 100-5): presents a plan for global engagement on AI standards, emphasising the importance of scientifically sound AI standards that are accessible and amenable to adoption, reflecting the needs and inputs of diverse global stakeholders, and developed in a process that is open, transparent, and driven by consensus.
Indonesia announces draft AI Bill
On 23 April 2024, the Ministry of Communication and Informatics of Indonesia announced the commencement of the drafting of regulations for the governance of AI technology.
The regulation is being developed based on best practices from various countries, including the application of the Readiness Assessment Methodology (RAM) recommended by UNESCO.
The Ministry is using both horizontal and vertical approaches to formulate AI-related regulations. The horizontal approach involves regulation through the Electronic Information and Transaction Law, Personal Data Protection Law, and a circular letter from the Ministry of Communication and Information on AI Ethics. The vertical approach is sectoral, such as in the financial and health sectors.
Access the Ministry's (Indonesian language) press release here.
If you have any questions (or feedback) or would like to discuss any of these updates further, please contact Minesh Tanna, Global AI Lead at Simmons & Simmons.