Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.
This edition brings you:
European Commission opens consultation on transparency requirements under EU AI Act
UK government announces AI Assurance Innovation Fund and publishes policy paper on third-party AI assurance market
UK Financial Conduct Authority publishes update on its approach to AI
Singapore Ministry of Law opens consultation on use of generative AI in the legal sector
Dutch Ministry of Economic Affairs publishes EU AI Act guide
South Korea Ministry of Science and ICT releases draft Enforcement Decree of the AI Framework Act
Swiss Federal Council proposes federal AI coordination office
South Korea PIPC adopts amendments to guidelines on AI personal data impact assessments
1. European Commission opens consultation on transparency requirements under EU AI Act
On 4 September 2025, the European Commission opened a consultation on the development of guidelines on transparency requirements for certain AI systems under Article 50 of the EU AI Act.
The purpose of the consultation is to collect input from a wide range of stakeholders to inform the Commission guidelines and a Code of Practice on the detection and labelling of artificially generated or manipulated content.
The consultation is aimed at a variety of stakeholders, including providers and deployers of interactive and generative AI models and systems, and of biometric categorisation and emotion recognition systems. Target stakeholders also include private and public sector institutions using such systems, academic and research institutions, governments and the general public.
The guidelines will assist providers of AI systems in fulfilling their transparency obligations under the AI Act, which requires providers to notify users when they are interacting with an AI system and to label AI-generated content.
The transparency requirements aim to enable natural persons to recognise interaction with, and content generated or manipulated by, AI systems, thus reducing the risks of impersonation, deception or anthropomorphisation and fostering trust and integrity in the information ecosystem.
The consultation will last until 2 October 2025.
Read the press release from the European Commission on the consultation here.
2. UK government announces AI Assurance Innovation Fund and publishes policy paper on third-party AI assurance market
On 3 September 2025, the UK Department for Science, Innovation & Technology published a policy paper announcing the launch of a new UK AI Assurance Innovation Fund worth £11 million, with applications due to open in Spring 2026. The paper also confirms that regulators will receive £2.7 million of government funding to support the development of their regulatory capability.
The policy paper also addresses the third-party AI assurance market, which aims to build trust and confidence in AI systems, promoting investment in AI technologies. The UK plans to grow its third-party AI assurance market over the next ten years from £1 billion to nearly £19 billion.
However, the policy paper identifies four key market barriers that must be overcome to achieve this growth:
Quality - the infrastructure to ensure that AI assurance providers are supplying high-quality goods and services is still developing.
Skills - UK AI assurance providers are struggling to find employees with the requisite combination of knowledge and skills to provide effective assurance.
Information access - AI assurance providers have noted a lack of access to information on AI systems.
Innovation - there are limited forums that promote collaborative research and development on AI assurance in the UK.
Read the policy paper here.
3. UK Financial Conduct Authority publishes update on its approach to AI
On 9 September 2025, the UK Financial Conduct Authority (FCA) published an update setting out its approach to AI, which focuses on the safe and responsible adoption of AI in UK financial markets.
The update states that the FCA does not plan to introduce further regulations on AI, instead relying on existing frameworks which are deemed to address many of the risks posed by AI and its adoption.
The update discusses how the FCA’s approach adheres to the five key principles identified by the UK government in relation to AI regulation: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
The update also includes examples of how the FCA’s existing frameworks, including the Consumer Duty and the Senior Managers and Certification Regime, are relevant for using AI safely, identifying obligations for firms to:
design products and services that meet the needs of their target customers and provide fair value;
communicate in a way that meets the information needs of customers; and
provide support that meets the needs of customers.
Read the FCA's AI update here.
4. Singapore Ministry of Law opens consultation on use of generative AI in the legal sector
On 1 September 2025, the Singaporean Ministry of Law opened a public consultation on its Guide for Using Generative AI in the Legal Sector (the Guide).
This consultation forms part of the Ministry of Law’s efforts to support and promote the digitalisation and overall transformation of the country’s legal sector. The Ministry of Law developed the Guide to ensure that legal professionals are informed users and purchasers of generative AI tools while remaining cognisant of their professional commitments.
The Guide outlines three key principles that legal professionals should adhere to in order to use generative AI tools responsibly, ethically and effectively: professional ethics, confidentiality, and transparency. The Guide also provides practical guidance on how to:
develop an AI framework;
analyse needs and identify potential applications for AI;
identify and evaluate generative AI tools;
implement and train generative AI tools; and
conduct review and improvement processes.
The consultation will last until 30 September 2025.
Read the Guide here.
5. Dutch Ministry of Economic Affairs publishes EU AI Act guide
On 4 September 2025, the Dutch government published a guide to help organisations understand and prepare for the EU AI Act, which introduces comprehensive regulations for the development and use of AI systems across the EU. The guide provides an overview of the key provisions, risk categories, and compliance obligations under the AI Act, which is being implemented in phases that began in February 2025, with most requirements taking effect by mid-2026. The guide sets out a four-step explanation of what the AI Act means for organisations:
Step 1 (Risk): Is our (AI) system covered by one of the risk categories? The guide outlines the categories of systems which include prohibited and high-risk AI systems, as well as general purpose AI models and systems.
Step 2 (AI): Is our system classified as 'AI' under the AI Act?
Step 3 (Role): Are we the provider or deployer of the AI system? The guide sets out the difference between providers, which develop and place AI systems on the market, and deployers, which use the AI systems.
Step 4 (Obligations): What obligations must we comply with? The guide discusses the various obligations that must be complied with by providers and deployers of high-risk AI systems.
Read the guide here.
6. South Korea Ministry of Science and ICT releases draft Enforcement Decree of the AI Framework Act
On 8 September 2025, South Korea’s Ministry of Science and ICT released the draft Enforcement Decree of the AI Framework Act (enacted in December 2024). The Enforcement Decree, once finalised, will be issued by South Korea’s President in order to implement the AI Framework Act.
The draft Enforcement Decree sets out design requirements, including the obligations for notifying users of generative and high-impact AI, as well as compulsory labelling of deepfake results. The draft also requires safety assurance for high-performance AI, which includes risk identification, risk assessment, risk mitigation and emergency response planning. The key provisions of the draft Enforcement Decree include:
Promotion of the AI industry - support for training data construction projects, as well as for the introduction and utilisation of AI technologies.
Obligations on transparency and safety - focusing particularly on obligations around the use of high-impact AI.
Grace period for administrative fines - although the AI Framework Act imposes fines of up to KRW 30 million for breaches and non-compliance, the draft Enforcement Decree introduces a grace period for fines, with its duration to be determined following consultation with stakeholders.
The AI Framework Act, together with the Enforcement Decree, is set to enter into force in January 2026.
Read the press release here (Korean-only).
7. Swiss Federal Council proposes federal AI coordination office
On 8 September 2025, the Swiss Federal Council outlined its priorities for 2026, with a focus on prosperity, cohesion, security and sustainability. Among the key initiatives is a plan to address rapid advancements in AI by evaluating the expansion of a federal coordination office. This office would aim to establish a unified strategic framework and enhance coordination across federal agencies, fostering innovation, trust and synergies in the use of AI systems. The initiative reflects Switzerland’s commitment to ensuring responsible and effective AI governance amidst technological progress.
The Council’s digitalisation agenda also includes consultations on a framework law for the secondary use of data, which seeks to establish legal conditions for reusing data in strategically significant areas. This could have implications for AI applications that rely on large datasets for training and development. Additionally, the Council plans to revise the Telecommunications Act to strengthen the security of critical infrastructures, a move that may intersect with AI-driven systems in sectors such as telecommunications and cybersecurity.
Read the Federal Council’s announcement here (German-only).
8. South Korea PIPC adopts amendments to guidelines on AI personal data impact assessments
On 5 September 2025, amendments adopted by South Korea’s Personal Information Protection Commission (PIPC) to its guidelines on personal data impact assessments came into force.
The guidelines apply to public institutions utilising AI, and can also be a point of reference for private institutions that are using or that plan to use AI. The guidelines cover:
limits on sensitive data;
legality of data use;
retention of training datasets;
accountability between developers and operators;
safeguards against vulnerabilities;
acceptable use policies for generative AI; and
reporting of inappropriate data leaks.
Read more about the amendments to the guidelines here (Korean-only).