AI View: June 2024

26 June 2024

Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

This edition brings you:

  1. EU Commission consults on the use of AI in finance

  2. EDPS publishes guidelines on generative AI for EU institutions and bodies

  3. UK Labour Party commits to AI regulation in election manifesto

  4. EU AI Office and AI Board commence operations

  5. Singapore publishes Model AI Governance Framework for Generative AI

  6. Hong Kong data protection authority publishes Model AI Personal Data Protection Framework

EU Commission consults on the use of AI in finance

On 18 June, the EU Commission published a consultation to gather insights from stakeholders on the application of AI in financial services, and its interplay with existing financial services legislation. Stakeholders have until 13 September 2024 to respond to the consultation.

The consultation aims to identify the primary use cases, advantages, obstacles, and risks associated with the development of AI applications within the financial sector.

Read more here. If you have any questions or would like assistance with responding to the consultation, please reach out to us.

EDPS publishes guidelines on generative AI for EU institutions and bodies

On 3 June, the European Data Protection Supervisor (EDPS) issued new guidelines for EU institutions on the processing of personal data when using generative AI systems. The guidance is intended to aid EU institutions, bodies, offices and agencies in their compliance with relevant requirements when processing personal data and formulating new policies, but is likely to be relevant for generative AI more widely.

Key points include:

  • The EDPS notes that web scraping for data collection may (though will not necessarily) violate data protection principles, even where the data is publicly available.
  • Data Protection Officers (DPOs) should have a clear understanding of the AI system's lifecycle, ensure proper documentation is in place, and review compliance in data sharing.
  • Data minimisation in AI systems can be achieved by limiting data collection and processing, using quality datasets and implementing robust data governance procedures.
  • Accuracy in AI systems can be supported by verifying datasets, controlling output data and carrying out regular monitoring.
  • Security risks in AI systems can be mitigated by implementing specific controls, using trusted data sources and regularly updating risk assessments.

Read the full guidelines here.

UK Labour Party commits to AI regulation in election manifesto

The UK Labour Party has committed, in its election manifesto, to introducing AI regulation if it wins the upcoming parliamentary election. According to the manifesto, developers of certain AI models can expect a shift from voluntary codes of conduct to formal legislation.

The manifesto sets out that Labour will retain the AI Safety Institute, with its rules strengthened and placed on a legislative footing. The aim is to provide a clear legislative foundation for new market entrants.

Labour also indicates plans to maintain the current government setup with the Department for Science, Innovation and Technology, promising to streamline bidding processes for UK start-ups and improve data access for researchers and innovators. Additionally, Labour plans to introduce a Regulatory Innovation Office (RIO) to foster innovation and hold regulators accountable for delays in regulatory decision-making.

Read Labour’s manifesto here.

EU AI Office and AI Board commence operations

The AI Office became operational on 16 June. The Office will guide the implementation of the EU AI Act across the EU's 27 Member States, and regulate and oversee safety evaluations for "general-purpose" AI models.

The EU AI Board, formed by Member State representatives, has also commenced operations, with the first meeting of the Board having taken place on 19 June to discuss the forthcoming implementation of the AI Act.

Read more about (i) the AI Office here and (ii) the AI Board here.

Singapore publishes Model AI Governance Framework for Generative AI

The Infocomm Media Development Authority (IMDA) of Singapore has released a Model AI Governance Framework for Generative AI.

The updated Framework expands on the previous model for "traditional" AI, offering a more extensive approach to managing emerging risks associated with the use of generative AI.

The Framework identifies nine dimensions intended to foster a reliable and holistic AI ecosystem. It offers actionable recommendations that model developers and policymakers can implement as initial measures, including in particular:

  • Accountability: Incentivising responsible behaviour in AI development.
  • Data: Ensuring data quality and addressing contentious training data.
  • Testing and Assurance: Encouraging third-party testing and common AI testing standards.
  • Security: Addressing new threats from generative AI models.
  • AI for Public Good: Democratising access, improving public sector adoption, upskilling workers, and sustainable AI development.

Read more here.

Hong Kong data protection authority publishes Model AI Personal Data Protection Framework

The Office of the Privacy Commissioner for Personal Data (PCPD) has issued the "Artificial Intelligence: Model Personal Data Protection Framework" to address the challenges AI poses to data privacy. The Framework provides recommendations and best practices for organisations using AI. These include:

  • Establishing AI strategy and governance: Formulating an AI strategy and governance considerations, establishing an AI governance committee and providing AI training to employees.
  • Conducting risk assessments and human oversight: Performing comprehensive risk assessments, formulating a risk management system, adopting a risk-based management approach and deciding on the level of human oversight based on the risks posed by AI.
  • Customising AI models and implementing and managing AI systems: Preparing and managing data for AI systems, testing and validating AI models, ensuring system and data security and managing and continuously monitoring AI systems.
  • Communication and engagement with stakeholders: Regular and effective communication and engagement with stakeholders to enhance transparency and build trust.

Read more here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.