AI View: January 2025


07 January 2025


Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

EU AI Act: less than 4 weeks for Prohibitions and AI Literacy

There are now fewer than four weeks until the provisions prohibiting certain AI practices (Article 5) and requiring AI literacy (Article 4) apply to organisations.

We are advising extensively on both of these aspects, including the interpretation and application of the prohibitions (which are not straightforward to apply, and for which official guidance is not yet available) and the implementation of AI literacy measures (for which we have launched a dedicated 3-tiered AI Literacy Programme to ensure compliance with the Act; see here for further details).

Please get in touch if you'd like our help with the EU AI Act.

AI View

This edition brings you:

  1. South Korea passes comprehensive AI regulation

  2. EDPB publishes opinion on using personal data in AI model development and deployment

  3. European Commission releases second draft of General Purpose AI Models Code of Practice

  4. US Treasury publishes report on AI in the financial services sector

  5. Philippine privacy commission releases AI guidelines

  6. FINMA publishes guidance on AI use in financial institutions

  7. Japan issues updated AI guidelines for developers, providers and users

South Korea passes comprehensive AI regulation

On 26 December 2024, South Korea's National Assembly passed the Act on the Development of Artificial Intelligence and Establishment of Trust (AI Basic Act) to promote AI development and establish a trust-based framework.

The Act sets out obligations for businesses, including the following:

  • Confirmation of impact: Businesses must assess whether their AI systems are considered high-impact and may seek confirmation from the Minister of Science and ICT.

  • Transparency requirement: Businesses must inform users when their products or services use high-impact or generative AI systems, including indicating when outputs are AI-generated, such as deepfakes.

  • Safety requirement: Providers of large-scale AI systems are required to identify, assess and mitigate risks throughout the AI lifecycle and report their findings.

  • Safety and reliability requirements: For high-impact AI, businesses are required to develop risk management plans, explain the AI system, implement user protection plans, ensure human oversight, maintain safety documentation and perform tasks set by the National AI Committee.

  • Impact assessment: Businesses must assess the impact of high-impact AI systems on fundamental rights.

  • Local representative designation: Foreign businesses that meet certain sales/size criteria must designate a local representative to submit safety measures, request ministry confirmation for high-impact AI systems and support the implementation of safety and reliability measures.

The AI Basic Act will take effect in January 2026, and the Government is aiming to implement supporting rules and guidelines quickly so that the law is integrated effectively into the market.

Read more here.

EDPB publishes opinion on using personal data in AI model development and deployment

On 17 December 2024, the European Data Protection Board (EDPB) issued Opinion 28/2024 (Opinion) providing guidance on the processing of personal data in relation to AI models, emphasising the role of the General Data Protection Regulation (GDPR) in protecting data protection rights and promoting responsible innovation. The Opinion covers the following topics:

  1. Anonymity of AI Models: AI models trained with personal data cannot always be considered anonymous. An AI model is considered anonymous only if the likelihood is insignificant both of directly extracting personal data used in the model's development and of obtaining such data through queries. Supervisory authorities will assess these claims based on the controller's documentation and the methods used to ensure data anonymity.

  2. Legitimate interest as a legal basis: The EDPB proposes a three-step test for using "legitimate interest" as a legal basis for AI data processing, highlighting the need for the interest to be:

    • legitimate, meaning that the interest must be lawful, clearly articulated and not speculative;

    • necessary for the purpose pursued; and

    • not overridden by the interests or fundamental rights and freedoms of the data subjects. The balancing of the controller's interests against those of the data subjects is therefore crucial.

  3. Consequences of unlawful data processing: The EDPB discusses the impact of unlawfully processed data during AI model development on the subsequent processing of personal data during the deployment of the AI tool. It stresses case-by-case assessment and the potential need for controllers to perform appropriate assessments to demonstrate compliance with the GDPR, especially in cases of previous unlawful data processing.

Read the full Opinion here.

European Commission releases second draft of General Purpose AI Models Code of Practice

On 19 December 2024, the European Commission published a detailed second draft of the General Purpose AI Models Code of Practice (GPAI CoP), following a comprehensive feedback process involving around 1,000 stakeholders.

The GPAI CoP is designed to help GPAI model providers meet their obligations under the EU AI Act across the full life cycle of the models, including by elaborating on the methodology for assessing and mitigating the risks posed by GPAI models with systemic risk.

According to the press release, the second draft has been designed to be both adaptable and forward-looking. The Commission also highlights that the second draft is a work in progress, with the focus being primarily on providing clarifications, incorporating vital details, and ensuring alignment with the principle of proportionality.

To further enhance the document's clarity and practicality, the European AI Office has outlined plans for additional refinements, with this iterative process including more feedback sessions. The release of a third draft is scheduled for February 2025.

The second draft of the GPAI CoP is available to download here.

US Treasury publishes report on AI in the financial services sector

On 19 December 2024, the US Department of the Treasury (Treasury) released a report following the issuance of a public request for information on the uses, opportunities and risks of AI in the financial services sector. The report highlights the increasing use of AI throughout the financial sector and the potential for AI to broaden opportunities while also amplifying certain risks, such as those related to data privacy, bias and third-party providers.

Specifically, the report recommends:

  • Cross-party collaboration: Continued co-operation between governments, regulators and the financial services sector to promote consistent and robust standards for uses of AI.

  • Gap-filling: Further analysis and stakeholder engagement to explore solutions for gaps in the existing regulatory frameworks.

  • Enhanced risk management: Continued co-ordination between financial regulators to improve risk management frameworks and clarify supervisory expectations on the application of frameworks and standards.

  • Information exchange: Further facilitation of financial services-specific AI information sharing, to develop data standards, share risk management best practices and improve the understanding of emerging AI technologies.

Read the full report here.

Philippine privacy commission releases AI guidelines

On 19 December 2024, the Philippine privacy regulator, the National Privacy Commission (NPC), published guidelines on how the country's Data Privacy Act 2012 (DPA) applies to AI systems (NPC Guidelines). They outline the need to clearly inform data subjects about the use of their personal data for AI training and testing, as well as the governance mechanisms required, such as human intervention in decision-making and review of the systems' outputs.

On the principle of fairness, the NPC Guidelines state that personal information controllers (PICs) should also consider systemic bias, human bias and statistical bias when assessing or auditing their AI systems, and implement mechanisms to identify, monitor and limit them. Finally, there is clarification that publicly available personal data are still protected by the DPA even if they are publicly accessible, and that a lawful basis for PICs to process them is still needed.

Read the full NPC Guidelines here.

FINMA publishes guidance on AI use in financial institutions

On 18 December 2024, FINMA, the Swiss Financial Market Supervisory Authority, released Guidance 08/2024 (Guidance) on governance and risk management when using AI. The Guidance aims to help financial institutions (FIs) protect their business models and understand the risks associated with the use of AI, including those related to IT, cyber and data security, quality and availability.

In the course of its supervisory activities, FINMA notes that most banks and other supervised FIs are still in the early stages of AI development and that the corresponding AI governance and risk management structures are still being established. With this in mind, FINMA highlights the following primary risks of using AI tools:

  • Quality, accuracy and bias: The now widely recognised risk that AI tools may generate responses that are inaccurate, incomplete or even fabricated.

  • Explainability: Users generally do not know the underlying logic or mechanisms behind AI-generated answers, creating challenges when, for example, FIs need to explain responses to third parties such as clients, auditors or FINMA itself.

  • Data protection: Users may be inputting confidential information into tools not designed for the secure handling of such data.

Read the full Guidance here.

Japan issues updated AI guidelines for developers, providers and users

On 25 December 2024, Japan's Ministries of Internal Affairs and Communications and of Economy, Trade and Industry jointly updated the AI Guidelines for Business (Guidelines), which were originally published in April 2024.

The Guidelines place an emphasis on combating AI-generated disinformation and misinformation, highlighting the critical need for businesses to:

  • Be aware of the potential for AI to generate false information in media.
  • Develop skills to discern such misinformation and craft effective prompts for accurate generative AI responses.
  • Access clear guidance and reliable information from public institutions.
  • Establish protocols for detecting and removing false information.
  • Collaborate with fact-checking organisations to ensure information accuracy.

These measures aim to promote the responsible use of AI-generated content and enhance the integrity of information shared between businesses.

Read a provisional translation of the Guidelines (from Japanese into English) here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.