Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.
This edition brings you:
UK Information Commissioner's Office issues strategy on using AI and biometrics
Ofcom outlines its AI governance approach for online platforms, broadcasters and telecom companies
UK Financial Conduct Authority to launch Supercharged Sandbox for firms to experiment with AI
House of Lords adopts Data (Use and Access) Bill including provisions for copyright protection in AI development
New York moves closer to introducing AI Training Data Transparency Act and Responsible AI Safety and Education Act
European Data Protection Board publishes guidelines on law and compliance in AI security and data protection
Vietnam passes Law on Digital Technology Industry that encourages AI development
UK Government likely to delay AI regulation amid plans to develop a more comprehensive AI bill
UK Information Commissioner's Office issues strategy on using AI and biometrics
On 5 June 2025, the UK Information Commissioner's Office (ICO) unveiled its new AI and biometrics strategy. The strategy aims to ensure that organisations develop and deploy new technologies in line with the law, while supporting their efforts to innovate and grow and protecting the public.
The ICO highlights that a lack of transparency about how organisations use personal information can cause the public to lose trust in AI and AI-powered biometric technologies. To tackle this, the strategy outlines how the ICO will:
set clear instructions for the responsible use of AI and automated decision-making (ADM) through a statutory code of practice for organisations that are developing or deploying AI;
build public trust in generative AI foundation models by working with developers to ensure personal information is used responsibly and lawfully in the training of their models;
ensure that ADM systems are governed and used fairly, particularly focusing on recruitment and public services; and
promote the fair and proportionate use of facial recognition technology by working with law enforcement to ensure that individual rights are protected.
Read more about the ICO's new strategy here.
Ofcom outlines its AI governance approach for online platforms, broadcasters and telecom companies
On 6 June 2025, the Office of Communications (Ofcom) outlined its approach to support the safe innovation and use of AI across the sectors that it regulates, such as online platforms, broadcasters and telecom companies.
Ofcom notes that these sectors are experiencing the emergence of new opportunities, as technologies such as AI continue to evolve. For example:
Online platforms rely on automated content moderation to detect harmful content more quickly and at scale, enhancing safety for their users.
Broadcasters employ AI to produce real-time captions, translate content into a variety of languages, and provide automated dubbing and audio descriptions, thereby increasing accessibility.
Telecom companies use AI to keep their networks secure, and have the potential to use AI in the future for better network management.
Ofcom says that it is working on a number of initiatives to support AI innovation, such as:
creating safe spaces to experiment with AI;
providing large data sets to help train and develop AI models; and
aligning with other regulators through the Digital Regulation Cooperation Forum to understand newer AI applications.
Read the full report on Ofcom's approach here.
UK Financial Conduct Authority to launch Supercharged Sandbox for firms to experiment with AI
On 9 June 2025, the UK's Financial Conduct Authority (FCA) announced that it will launch a Supercharged Sandbox to support innovation through enabling financial services firms to safely experiment with AI.
By collaborating with NVIDIA, the Supercharged Sandbox will provide access to data, technical advice and regulatory support, and is open to any financial services firm that is seeking to innovate and experiment with AI.
Firms can apply to use the Supercharged Sandbox now through the FCA's website, with successful applicants able to begin experimenting with AI from October 2025.
Read the full press release from the FCA here.
House of Lords adopts Data (Use and Access) Bill including provisions for copyright protection in AI development
On 11 June 2025, the Data (Use and Access) Bill (the Bill) was passed following final agreement in the House of Lords.
To recap, the Bill was first unveiled in the UK Parliament in October 2024, and is a landmark piece of legislation aimed at modernising how data is used and accessed across the private and public sectors. Since last October, several amendments to the Bill have been proposed by the House of Lords. The amendments that the House of Commons ultimately accepted include:
reducing the timeframe for economic impact assessments and AI copyright reports from 12 months to 9 months;
widening the scope to include AI systems developed outside of the UK;
introducing enforcement provisions for copyright protection in AI development with regulatory involvement; and
requiring a six-month progress statement on the economic impact assessment and AI copyright report.
Read more on the Bill here.
New York moves closer to introducing AI Training Data Transparency Act and Responsible AI Safety and Education Act (RAISE)
On 10 June 2025, a bill for the AI Training Data Transparency Act -- which would require generative AI developers to disclose the training data for their models -- passed the New York State Assembly and has been delivered to the Senate for consideration. If enacted, developers would have to disclose information including a description of the training data, information on ownership, whether the data was purchased or licensed, and whether it comprises personal data. The Act closely mirrors the California AI Transparency Act, which takes effect in January 2026 with similar requirements.
On 12 June 2025, another AI-related bill, the Responsible Artificial Intelligence Safety and Education (RAISE) Act, passed the New York State Assembly and Senate, and is now pending the Governor's approval. The RAISE Act would require developers of frontier AI models to create a safety plan to mitigate risks such as automated crime and bioweapons, and to disclose serious incidents relating to their models. The RAISE Act closely mirrors California's vetoed Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, and would take effect ninety days after the Governor's approval.
Read the bill on the AI Training Data Transparency Act here, and the bill on the RAISE Act here.
European Data Protection Board publishes guidelines on law and compliance in AI security and data protection
On 5 June 2025, the European Data Protection Board published guidelines on "Law and Compliance in AI Security and Data Protection". The guidelines are intended to provide data protection officers with comprehensive training to address skill gaps in AI security and managing personal data.
The guidelines also include practical case studies focused on the GDPR, the EU AI Act and other relevant EU regulations. They highlight the need to understand the lifecycle of AI within organisations and its implications for data protection compliance.
You can access and read the guidelines here.
Vietnam passes Law on Digital Technology Industry that encourages AI development
On 14 June 2025, Vietnam's National Assembly passed the Law on Digital Technology Industry, a landmark piece of legislation that introduces a range of incentives to transform the country's digital economy, including incentives to promote AI development. This makes Vietnam one of the first countries to enact a standalone law dedicated solely to the digital technology industry.
While the law largely focuses on crypto, it also affects AI development, adopting a forward-looking, risk-based approach. AI systems will be categorised as high-risk or non-high-risk, with the former subject to compliance obligations. Products will also need to be clearly labelled as being powered by AI. The law also outlines prohibited uses of AI, which include AI systems that indiscriminately produce facial recognition databases, infer emotions in workplace and education settings other than for medical or safety purposes, and classify individuals using biometric data to deduce sensitive information.
The Law on Digital Technology Industry is set to take effect on 1 January 2026.
Read more about the Law on Digital Technology Industry here.
UK Government likely to delay AI regulation amid plans to develop a more comprehensive AI bill
It has been reported that the UK Government's proposals to regulate AI will likely be delayed by at least a year, as UK ministers plan to produce a more comprehensive AI bill addressing safety concerns and AI's use of copyrighted materials.
While the Government had initially planned to introduce a narrowly drafted AI bill to focus on frontier models and safety testing by the UK's AI Security Institute, this proposal has not progressed, reportedly following concerns that such a bill may dissuade AI companies from investing in the UK.
Read the relevant press release here.