AI View: July 2024

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

29 July 2024

Publication

Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

EU AI Act update

The EU AI Act will enter into force on 1 August 2024, marking a significant milestone in the regulation of AI.

To mark this occasion, we will be publishing a special edition of AI View on the same day. This focused edition will provide a brief overview of the Act, along with a selection of our useful resources.

This edition brings you:

  1. UK, EU and US competition authorities sign Joint Statement on competition in AI foundation models and products

  2. European Commission requests feedback from AI companies as part of AI Pact

  3. European Data Protection Board advocates for DPAs to take key role in AI Act enforcement

  4. Dutch Data Protection Authority publishes third report on tackling AI risk

  5. Monetary Authority of Singapore publishes annual report and outlines planned AI guidance for financial sector

  6. Singapore issues new AI guide on synthetic-data generation

UK, EU and US competition authorities sign Joint Statement on competition in AI foundation models and products

On 23 July, the competition authorities of the EU, the UK and the US issued a landmark joint statement on the regulation of generative AI. The UK Competition and Markets Authority (CMA) joined the European Commission, the US Department of Justice and the Federal Trade Commission in highlighting the transformative potential of AI, whilst warning of the significant risks it poses to fair competition in the market.

The statement identifies key concerns, such as the concentration of critical AI inputs, the potential entrenchment of market power by tech giants and the risks associated with industry partnerships. To foster innovation and protect consumers, the regulators emphasise principles of fair dealing, interoperability, and preserving choice in AI markets.

The regulators have committed to monitoring the AI landscape in order to address emerging risks, such as algorithmic collusion and consumer deception. This coordinated approach might signal a new era of international cooperation in tech regulation.

Read the Joint Statement here.

European Commission requests feedback from AI companies as part of AI Pact

The European Commission has shared a set of draft voluntary commitments as part of the ‘AI Pact’, an early compliance initiative aimed at anticipating the EU AI Act’s strict regime for providers of high-risk AI applications.

Companies have until 21 August to provide their comments on the commitments. Final versions of the pledges will then be disclosed and discussed during a workshop scheduled for 4 September, with the aim of collecting the official signatures by the second half of September.

The AI Pact contains three core commitments:

  1. adopting an AI governance strategy to support the uptake of AI,
  2. complying with the EU rulebook, and
  3. promoting AI literacy amongst those deploying high-risk AI applications.

Additional pledges for AI developers and deployers focus on risk assessment, data quality, transparency, and human oversight.

The AI Pact aims to mitigate risks to health, safety, and fundamental rights during the two-year transition period before the AI Act's full application on 1 August 2026.

Read the Commission’s press release on the AI Pact here.

European Data Protection Board advocates for DPAs to take key role in AI Act enforcement

During its most recent plenary, the European Data Protection Board (EDPB) adopted a statement emphasising the importance of the role of Data Protection Authorities (DPAs) in the upcoming EU AI Act framework.

In the statement, the EDPB recommends that DPAs be designated as Market Surveillance Authorities (MSAs) for high-risk AI systems in law enforcement, border management, justice and democratic processes. This is on the basis that DPAs have the requisite experience and expertise in dealing with the impact of AI on fundamental rights, in particular the right to the protection of personal data, which can be leveraged to contribute to a trustworthy AI ecosystem.

According to the statement, benefits of designating DPAs as MSAs include:

  • better coordination between regulatory authorities,
  • enhanced legal certainty, and
  • greater supervision and enforcement of data protection and AI law.

The EDPB has also urged EU Member States to consider DPAs as MSAs for other high-risk AI systems, and has (i) proposed that DPAs act as single points of contact for the public (at both the EU and local levels), and (ii) called for clear lines of cooperation between MSAs and other regulatory authorities.

Finally, the statement requests that Member States provide additional human and financial resources to DPAs to ensure that they can manage their new responsibilities.

Read the EDPB’s Statement here.

Dutch Data Protection Authority publishes third report on tackling AI risk

The Dutch Data Protection Authority (AP) has issued its Spring 2024 report on the risks associated with AI systems and the design requirements needed to mitigate them. The AP prepared the report as part of its role as coordinator for the supervision of algorithms and AI, a role that aims to address the rapid development of AI technologies and the management of their risks.

The AP has identified certain areas in which the use of AI systems stands out, including genAI, the detection of housing fraud and HR management. The report also outlines notable risks: the rapid expansion of AI is putting quality standards under pressure, the abuse of genAI poses a threat to cybersecurity, and governments are not currently equipped to use AI systems in the public sector.

Additionally, the report recommends updating the national AI strategy to address modern AI challenges, increasing coordination among stakeholders and taking timely action on emerging risks for responsible AI development and implementation.

Read the report here, and the press release here (both in Dutch).

Monetary Authority of Singapore publishes annual report and outlines planned AI guidance for financial sector

On 18 July, the Monetary Authority of Singapore (MAS) released its Annual Report for the 2023/2024 financial year. At the accompanying press conference, Managing Director Mr Chia Der Jiun delivered a speech covering key areas across MAS’s central banking, developmental and regulatory work.

With respect to AI, Mr Chia identified a number of items that MAS aims to publish in 2025, including:

  1. a detailed study on how genAI will affect jobs in the financial sector, with recommendations by MAS on the new skillsets needed to perform changing or new job roles,
  2. an industry-led AI Governance Handbook to aid the development of good AI governance practices, and
  3. a set of good practices for addressing AI model, technology and cyber risks.

MAS is also considering the release of supervisory guidance next year.

In addition, Mr Chia announced that MAS will commit a further USD 100 million to support financial institutions in building capabilities in quantum and AI technologies.

Read the Annual Report here and watch Mr Chia’s speech here.

Singapore issues new AI guide on synthetic-data generation

Singapore’s Personal Data Protection Commission has recently published a guide on generating synthetic data, particularly for use in the training of AI models.

The guide defines synthetic data as ‘artificial’ data generated using machine learning models or algorithms: a model is trained on a source dataset so that it mimics the characteristics and structure of the original data. Synthetic data has an array of use cases, from generating training datasets for AI models to complex data analysis. However, there is a risk that confidential information from the source data can be leaked, or re-identified, because of the similarity between the synthetic data and the source data.
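As a purely illustrative sketch of the idea described above (not the PDPC guide’s methodology), a simple generative approach is to estimate the statistical parameters of a numeric source dataset and sample new records from the fitted distribution, then measure how close each synthetic record sits to a real one as a crude proxy for re-identification risk. All names and the Gaussian model here are hypothetical assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "source" data: 200 records with two correlated numeric attributes.
# (Hypothetical data, used only to illustrate the concept.)
source = rng.multivariate_normal(mean=[50.0, 30.0],
                                 cov=[[9.0, 4.0], [4.0, 6.0]],
                                 size=200)

# "Train" a simple generative model: estimate the source distribution's
# mean and covariance, so samples mimic its characteristics and structure.
mean = source.mean(axis=0)
cov = np.cov(source, rowvar=False)

# Generate synthetic records from the fitted model.
synthetic = rng.multivariate_normal(mean, cov, size=200)

# Crude re-identification check: distance from each synthetic record to its
# nearest source record. Very small distances would suggest the synthetic
# data is effectively memorising (and could leak) real records.
dists = np.linalg.norm(synthetic[:, None, :] - source[None, :, :], axis=2)
nearest = dists.min(axis=1)
print("mean nearest-neighbour distance:", round(float(nearest.mean()), 3))
```

The nearest-neighbour distance used here is only one of many possible similarity checks; real deployments would apply more robust privacy metrics.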

The guide sets out processes for organisations to generate and deploy synthetic data responsibly, in order to balance the potential value of using synthetic data with potential risks to data privacy.

The guide will be offered as a resource within the Singaporean Government’s Privacy Enhancing Technology Sandbox, which also includes a checklist of good practices for generating synthetic data, in order to guard against any possible risk of re-identification.

Access the guide here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.