AI View | August 2025


20 August 2025


Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

This edition brings you:

  1. UK Law Commission publishes discussion paper on AI and law

  2. South Korea Personal Information Protection Commission releases guidelines on personal information processing for development and use of generative AI

  3. Saudi Data and AI Authority releases report on agentic AI

  4. European Commission opens public consultation on digitalisation and AI in the energy sector

  5. Indonesia opens public consultation on AI national roadmap and ethics guidelines

1. UK Law Commission publishes discussion paper on AI and law

On 31 July 2025, the UK Law Commission published a discussion paper titled “AI and the Law” (the Paper). The Paper examines legal issues raised by AI in England and Wales and aims to foster discussion on areas most in need of legislative reform.

The Paper outlines the nature of AI, its operational mechanisms, and the potential challenges it presents across private, public, and criminal law. Although comprehensive, the Paper addresses these issues under three main themes:

  • AI autonomy and adaptiveness: AI systems can operate independently and evolve, raising questions about liability when harm occurs.
  • Interaction with and reliance on AI: Over-reliance on AI outputs, such as hallucinations in legal research, can lead to errors and professional misconduct.
  • AI training and data: Copyright infringement and data protection issues arise from AI training on large datasets, including biased or discriminatory outcomes.

The Paper considers granting AI systems legal personality to address liability gaps, though this raises ethical and practical concerns. It notes that the criteria for eligibility (e.g. autonomy, awareness) and the scope of rights and obligations would need careful definition.

The Paper does not contain specific proposals for legislative reform.

Read the Paper here.

2. South Korea Personal Information Protection Commission releases guidelines on personal information processing for development and use of generative AI

On 6 August 2025, the Korean Personal Information Protection Commission (PIPC) issued comprehensive guidelines titled “Personal Information Processing Guide for the Development and Use of Generative Artificial Intelligence” (the Guidelines).

The Guidelines aim to provide stage-specific best practices for handling personal data in the context of generative AI technologies (such as large language models), minimise legal ambiguity, and facilitate compliance with the Personal Information Protection Act (PIPA).

The Guidelines outline how to legally and safely process personal data during the lifecycle of generative AI, from development to deployment. They target AI developers, service providers, and organisations using generative AI.

The Guidelines break down generative AI development into five stages, each with privacy considerations:

  • Goal setting: Define clear, lawful purposes for data processing.
  • Strategy planning: Conduct privacy impact assessments and adopt privacy by design principles.
  • AI training and development: Implement safeguards like anonymisation, differential privacy, and adversarial attack prevention.
  • System deployment and management: Test for risks, create acceptable use policies, and ensure transparency.
  • Privacy governance: Establish internal oversight led by a Chief Privacy Officer.

The Guidelines also recommend specific measures to address risks associated with personal data processing in generative AI. These include ensuring the accuracy and security of data, preventing unauthorised access, and implementing robust data governance frameworks. Organisations must also consider the implications of using publicly available data and ensure compliance with applicable laws.

Read the Guidelines here (in Korean only).

3. Saudi Data and AI Authority releases report on agentic AI

On 3 August 2025, the Saudi Data and Artificial Intelligence Authority published a report examining the dimensions, technological advancements, and applications of agentic AI at global and national levels.

The report outlines agentic AI’s core capabilities and its integration into sectors including healthcare, retail, and transportation. It highlights the transformative potential of AI agents in enhancing operational efficiency and user experiences. However, it also emphasises the need for sector-specific regulations to address unique challenges, such as data sensitivity in healthcare or safety concerns in transportation.

The report also identifies a range of technical, organisational, and ethical challenges, for example:

  • Limitations on causal reasoning capabilities: Agentic AI systems face challenges in understanding and reasoning about cause-and-effect relationships, which limits their ability to make contextually accurate decisions.
  • Transparency concerns: The lack of clear and explainable decision-making processes in agentic AI systems raises significant transparency issues.
  • Shortages in skilled human resources: Organisations struggle with a shortage of skilled professionals capable of developing, deploying, and managing agentic AI systems effectively.
  • Cybersecurity risks: Agentic AI systems are vulnerable to cybersecurity threats, including unauthorised access and malicious exploitation.
  • Impact on culture: The integration of agentic AI systems can disrupt organisational culture, creating resistance to change.

To address these issues, the report proposes a governance framework integrating data governance, AI ethics, and human oversight. The framework includes:

  • Comprehensive threat modelling: Identify the types of risk associated with the development and deployment of specialised agents
  • Robust architectural controls: Enforce least-privilege principles to prevent executional capacity beyond the defined functional scope and apply context-based restrictions to reduce the risk of exploitation, misuse, or unauthorised behaviours
  • Continuous monitoring: Implement real-time oversight of agentic AI, with security audits and red-teaming to uncover vulnerabilities
  • Accurate documentation: Maintain accurate records of all development processes, operational processes and monitoring reports

Read the report here (in Arabic only).

4. European Commission opens public consultation on digitalisation and AI in the energy sector

On 5 August 2025, the European Commission launched a public consultation and call for evidence for its upcoming “Strategic Roadmap for digitalisation and AI in the Energy Sector” (the Roadmap). The Roadmap is due for publication in Q1 2026.

The Roadmap aims to outline policy actions that can drive faster innovation and adoption of AI technologies across the energy sector. The Roadmap will build on the 2022 EU Action Plan on digitalising the energy system and related sectoral initiatives and aligns with broader EU digital and AI strategies.

The Roadmap seeks to address four core challenges:

  • Access to quality data: A major barrier to the uptake of innovative energy services and AI solutions is the lack of consistent, high-quality and interoperable energy data. This hinders the training and deployment of AI models, delays innovation and reduces the ability to optimise operations across the energy value chain.
  • Slow adoption and fragmentation: Slow and uneven uptake of digital solutions caused by legacy systems, resistance to change, and fragmented national approaches undermines the EU’s ability to build an integrated smart energy system.
  • Rising energy demand of digital technologies: Growing energy demand from digital technologies, particularly from data centres running AI, may strain grids and increase emissions without efficiency measures.
  • Intrinsic risks related to large-scale deployment of digital and AI tools: Adoption of digital and AI tools in the energy sector involves multiple challenges, particularly when they are integrated into critical energy infrastructures. Promoting transparency and explainability is essential to ensuring public trust.

The Roadmap sets out five main objectives:

  • Accelerating deployment of digital and AI tools in the energy system
  • Fostering research, innovation, and co-ordination in the energy sector
  • Sustainably integrating data centres’ electricity demand into the energy system
  • Enhancing transparency and risk oversight
  • Establishing a coordination and governance framework

The call for evidence will close on 5 November 2025.

Read the public consultation and call for evidence here.

5. Indonesia opens public consultation on AI national roadmap and ethics guidelines

On 8 August 2025, Indonesia’s Ministry of Communication and Digital Affairs announced the launch of a public consultation on two policy documents aimed at shaping the country’s approach to AI. The policy documents consist of the White Paper on the National AI Roadmap (the White Paper) and the Conceptual Framework for AI Ethics Guidelines (the AI Ethics Guidelines).

The White Paper was developed by the Indonesian AI Roadmap Task Force, a body comprising government members, academics and civil society representatives. It is intended to serve as the strategic basis for future regulatory and policy actions relating to the development and use of AI in Indonesia.

The AI Ethics Guidelines draw on a 2023 circular on AI ethics and seek to promote the inclusive, sustainable and responsible use of AI technologies.

The consultation remains open until 22 August 2025.

Read the public consultation, the White Paper and the AI Ethics Guidelines here (in Indonesian only).

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.