Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.
Practical insights as the UK starts to move from theory to practice in the regulation of self-driving vehicles.
On 23 February 2026, we published an analysis of the UK's evolving regulatory framework for self-driving vehicles, explaining how the Automated and Electric Vehicles Act 2018 and the Automated Vehicles Act 2024 interact on liability, authorisation and safety oversight. The analysis also considers who pays when an automated vehicle causes harm, how responsibility may be allocated between insurers, OEMs and software providers, and what organisations should do now ahead of secondary legislation expected in 2026-2027.
Read the article here for more insight into these developments.
This edition brings you:
UK Government outlines AI and copyright plan
EU Parliament agrees position on AI Digital Omnibus
US White House publishes National Policy Framework for AI
Act introduced in US to repeal moratorium on state AI legislation
Washington state enacts legislation regarding AI deepfakes
UK House of Lords advances proposal for a criminal offence for unsafe AI chatbots
Singapore rolls out AI risk management toolkit for financial institutions
Russian Ministry of Digital Development opens consultation on Draft AI law
1. UK Government outlines AI and copyright plan
In March 2026, the UK Department for Science, Innovation and Technology (DSIT) and Department for Culture, Media and Sport (DCMS) published a "Report on Copyright and Artificial Intelligence" (the Report), following a 2024-25 consultation. The Report confirms that DSIT and DCMS will not proceed with (i) a broad text and data‑mining exception with an opt‑out for AI training, or (ii) tightening existing UK exceptions or extending UK copyright rules specifically to models trained abroad but deployed in the UK.
As a result, the current UK copyright framework remains essentially unchanged: copying of protected works for commercial AI training in the UK will continue to require a licence, and the government has signalled that it intends to retain some level of copyright protection against unlicensed text and data mining in order to safeguard the UK's creative sector.
The Government's broadly "wait and see" stance on legislative change prolongs uncertainty for all stakeholders. AI developers, in particular, remain unclear as to how far they will need to disclose their training data sources and methods when marketing models in the UK. Notably, the Report suggests the following areas of focus:
- Input transparency and web‑crawler disclosure: The Report signalled support for greater transparency over training‑data sources and web‑crawler behaviour as a possible basis for future legislation, while ensuring obligations remain proportionate for SMEs and individual developers.
- Digital replicas: Recognising growing risks from realistic AI‑generated replicas of a person's face or voice, the Government will explore options to address unauthorised digital replicas, including a potential new digital replica or broader personality right, aiming to distinguish legitimate imitation from harmful or unacceptable uses while preserving space for innovation and expression.
- Labelling and output transparency: There is support for clear labelling of wholly AI‑generated content, with a more nuanced approach for AI‑assisted works. The Government will work with industry on labelling and provenance (including watermarking and metadata), monitor regimes and seek interoperable solutions to build trust and combat deepfakes and disinformation.
- Technical tools and standards: The Report backs market‑led development and adoption of tools such as robots.txt and machine‑readable "do not train" signals; the Government will develop best‑practice principles on their use (including respecting machine‑readable reservations). A short illustrative sketch of how such a reservation works follows this list.
- Licensing and market‑based solutions: No statutory licensing, levies or similar interventions are proposed.
- Enforcement and regulatory oversight: The existing UK IP enforcement framework is viewed as broadly adequate, but practical barriers remain, especially for smaller right holders. There is no current proposal to create a new AI‑and‑copyright regulator or to expand an existing regulator's remit.
- Computer‑generated works (CGWs): The Report proposes removing the specific UK copyright protection for wholly CGWs without a human author, on the basis that it is little used and misaligned with the rationale for copyright.
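For illustration, here is a minimal sketch of how a machine-readable reservation of this kind can work in practice. It is our illustration rather than an example from the Report: a publisher's robots.txt disallows named AI training crawlers, and a compliant crawler checks the file before collecting content for training. The sketch uses Python's standard urllib.robotparser; GPTBot and CCBot are real AI-crawler user agents, while the specific rules and the "NewsIndexer" agent are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: the publisher reserves its content against
# two AI training crawlers (identified by user agent) while allowing
# other agents to crawl freely.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A crawler that respects the reservation checks before fetching a page.
print(parser.can_fetch("GPTBot", "https://example.com/article"))       # False
print(parser.can_fetch("NewsIndexer", "https://example.com/article"))  # True
```

A signal of this kind is only effective if crawlers choose to respect it, which is the gap the Government's proposed best-practice principles (including respecting machine-readable reservations) are intended to address.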
Read the Report here, and for further analysis read our article here.
2. EU Parliament agrees position on AI Digital Omnibus
On 25 March 2026, the European Parliament adopted a simplification ("Omnibus") proposal amending the AI Act (the EP Position), aiming to clarify and stagger application dates for certain high‑risk AI rules, introduce a ban on AI "nudifier" systems, and provide greater flexibility and support for growing companies.
The EP Position provides for the following key elements:
- Application dates for high‑risk AI systems: Rules relating to high‑risk AI systems explicitly listed in Annex III of the AI Act (including those involving biometrics and systems used in critical infrastructure, education, employment, essential services, law enforcement, justice and border management) would apply from 2 December 2027. Rules relating to high‑risk AI systems already covered by EU sectoral safety and market‑surveillance legislation (Annex I) would apply from 2 August 2028.
- Watermarking obligations: For existing AI systems, providers' obligation to watermark AI‑generated audio, image, video or text content would be extended so as to apply from 2 November 2026.
- Ban on "nudifier" apps: A new prohibition would apply to AI systems that create or manipulate sexually explicit or intimate images resembling an identifiable real person without that person's consent. The ban would not apply to systems with effective safety measures that prevent users from generating such images.
- Bias mitigation and personal data: Service providers would be permitted to process personal data to detect and correct biases in AI systems, subject to safeguards and only where strictly necessary.
- AI Literacy: Providers and deployers would be required to support, rather than ensure, the AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf. The European Commission would be tasked with promoting AI literacy for the wider population and issuing guidance on practical implementation for providers and deployers.
The formal trilogue process with the Commission and the Council on the final form of the AI Act amendments will now begin.
Read the press release here.
3. US White House publishes National Policy Framework for AI
On 20 March 2026, the White House published legislative recommendations setting out a 'National Policy Framework for Artificial Intelligence' (the Framework), in line with the executive order titled "Ensuring a National Policy Framework for Artificial Intelligence" issued in December 2025 (the Executive Order).
The Framework proposes a light‑touch federal regime that would pre‑empt certain state AI laws, while carving out limited exceptions (for fraud, consumer and child safety laws, and to preserve states' ability to regulate local zoning of AI infrastructure and state use of AI).
The Framework proposes seven pillars to guide future federal AI legislation, focused on child protection, community resilience, intellectual property, free speech, innovation, workforce readiness and federal-state alignment:
- Safeguards for minors: Federal legislation should require AI platforms likely to be accessed by minors to implement robust parental controls, privacy‑protective age‑assurance and product features that safeguard against sexual exploitation and self‑harm, and should confirm that existing child privacy laws apply to AI. The Framework recommends that federal law should not pre-empt states' ability to enforce generally applicable child protection laws, including prohibitions on child sexual abuse material, even where material is AI‑generated.
- Community role: AI infrastructure build‑out should not increase residential electricity costs; law enforcement and national security agencies should be equipped to address AI‑enabled impersonation and frontier‑model risks; and small businesses should receive grants, tax incentives and technical support to adopt AI.
- IP rights: Courts should resolve whether training on copyrighted material is fair use; Congress may enable collective licensing frameworks without dictating when licences are required; a federal right should protect against unauthorised AI‑generated digital replicas (with clear First Amendment exceptions); and copyright developments should be closely monitored to identify any gaps in creator protection.
- Free speech: Congress should bar the federal government from coercing AI and tech providers to modify content based on partisan or ideological goals, and provide effective remedies where agencies attempt to censor speech on, or dictate outputs of, AI platforms.
- Innovation: Regulatory sandboxes should support experimentation with AI applications; federal datasets should be made available in AI‑ready formats; and no new cross‑cutting AI regulator should be created, with oversight instead routed through existing sectoral regulators and industry‑led standards.
- Education: Existing education, training and apprenticeship programmes should be updated (through non‑regulatory measures) to include AI skills; federal efforts to study AI‑driven task and job realignment should be expanded; and land‑grant institutions should be strengthened to deliver technical assistance, demonstrations and AI‑focused youth programmes.
- Establishing a federal policy framework: Congress should create a national AI standard.
Read the Framework here.
4. Act introduced in US to repeal moratorium on state AI legislation
On 20 March 2026, members of the US House of Representatives introduced the Guaranteeing and Upholding Americans' Right to Decide Responsible AI Laws and Standards Act (the GUARDRAILS Act), with companion legislation introduced in the Senate.
The GUARDRAILS Act would repeal the executive order titled "Ensuring a National Policy Framework for Artificial Intelligence" issued in December 2025 (the Executive Order), which was designed to establish a moratorium on state-level AI policies and pre-empt state AI laws, including through threatened litigation and leverage over federal broadband funding.
The GUARDRAILS Act would render the Executive Order void and prohibit the use of federal funds to implement, administer, enforce or carry it out, with the aim of preserving states' authority to enact AI-related safeguards addressing risks such as algorithmic bias, data privacy and consumer protection in the absence of comprehensive federal AI regulation. Its sponsors criticise the Executive Order and the Framework for seeking to block state AI protections without creating clear, enforceable federal guardrails, and frame the legislation as necessary to protect both the public and the role of Congress in setting national AI policy.
The GUARDRAILS Act was referred to the House Committee on Energy and Commerce and the Committee on the Judiciary. To become law, it would need to be passed by a majority in the House and Senate, but would risk a presidential veto, which could only be overridden by a two-thirds majority in both chambers.
Read the GUARDRAILS Act here.
5. Washington state enacts legislation regarding AI deepfakes
On 16 March 2026, the Governor of Washington signed Substitute Senate Bill 5886 into law (the Law), which will become effective on 11 June 2026. The Law amends the state's personality rights statute to reinforce the property right in an individual's identity and to address the unauthorised use of AI-generated or digitally manipulated impersonations.
The Law applies in Washington state to the use of a living or deceased individual's or "personality's" name, voice, signature, photograph, forged digital likeness, or likeness in goods, merchandise or products, in advertising, or for fundraising and solicitation of donations, whether or not the use is for profit. It applies to individuals and personalities worldwide, irrespective of domicile or citizenship, and covers both those who commercially exploited their identity during life and those who did not.
The Law includes the following key elements:
- Scope of personality rights: Every individual or personality has a property right in the use of their name, voice, signature, photograph, forged digital likeness, or likeness. These rights persist after death and are inheritable or otherwise transferable, for example by contract.
- Definition of "forged digital likeness": The definition covers (i) a visual representation (persistent or real-time) of an actual, identifiable individual and (ii) an audio recording of an actual, identifiable voice. The image or recording must (a) be digitally created, adapted, altered or modified so as to be indistinguishable from a genuine image or recording, (b) misrepresent the person's appearance, speech or conduct, and (c) be likely to deceive a reasonable person into believing it is genuine.
- Infringement: Anyone who uses or authorises the use of a protected attribute (including a forged digital likeness) contrary to the Law, without the owner's consent, infringes these rights.
- Penalties: Infringement attracts a fine and liability for actual damages suffered and any profits made as a result of the infringement. If the infringement involves a forged digital likeness, penalties include liability for non-economic damages even where the infringement generated no profit.
Read the Law here.
6. UK House of Lords advances proposal for a criminal offence for unsafe AI chatbots
On 19 March 2026, the House of Lords agreed an amendment (the Amendment) to the Crime and Policing Bill (the Bill) to create a specific criminal offence for "unsafe" AI chatbots, backing campaigner‑led proposals over the Government's preferred, more flexible regulatory model.
The offence would be enforced within the framework of the Online Safety Act and would carry penalties of up to five years' imprisonment. By contrast, the Government's preferred approach would give ministers power to extend the Online Safety Act to AI services by secondary legislation, without creating a new criminal offence.
Notably, the Amendment provides for the following key elements:
- Scope of the offence: It would be a criminal offence to develop or supply an AI chatbot that produces content promoting terrorism, violence, or threats to national security or public safety, where that system is made available on the UK market. Intent would not need to be proven, setting a low threshold for liability.
- Risk‑assessment and mitigation duties: Companies would be required to carry out risk assessments and take steps to address identified risks before making AI chatbots available in the UK. These duties are intended to align with, and be guided by, the Online Safety Act regime.
- Enforcement: Enforcement would be informed by the Online Safety Act framework and overseen by the relevant online safety regulator.
The Bill is proceeding to the House of Commons, where ministers may seek to overturn or revise the Amendment.
Read the Amendment here.
7. Singapore rolls out AI risk management toolkit for financial institutions
On 20 March 2026, the Monetary Authority of Singapore (MAS) published an AI Risk Management Toolkit for financial institutions (the Toolkit), following the conclusion of Phase 2 of its collaborative industry initiative, Project MindForge. Phase 2 broadened the project beyond banking to include the insurance and asset‑management sectors and extended its scope from generative AI to cover traditional AI, generative AI and agentic AI technologies.
The Toolkit is intended to support responsible deployment of AI in the financial sector through practical, implementation‑focused guidance. It comprises an AI Risk Management Executive Handbook (covering strategic considerations and implementation practices), an AI Risk Management Operationalisation Handbook (providing detailed operational guidance) and a set of implementation examples and case studies designed to help firms apply risk management practices consistently across different AI systems. It is closely aligned with MAS's proposed AI guidelines, which address governance, risk assessment, lifecycle controls and enabling processes to support safe and effective AI use.
MAS is currently reviewing consultation feedback on the proposed guidelines and will establish a workgroup under its BuildFin.ai initiative to promote adoption of the Toolkit and to address risks arising from emerging technologies, including agentic AI.
Read the press release and access the Toolkit here, and MAS's proposed AI guidelines here.
8. Russian Ministry of Digital Development opens consultation on Draft AI law
On 18 March 2026, the Russian Ministry of Digital Development opened a public consultation on the draft Law on Fundamentals of State Regulation of the Fields of Application of Artificial Intelligence Technologies (the Law).
The Law establishes a principles-based framework for the development, implementation and use of AI in Russia, and is expected to come into force on 1 September 2027.
The Law applies to developers, operators, owners, users and state authorities, whether individuals or legal entities, in connection with the development, implementation, use and other application of AI technologies in Russia. However, it does not apply to AI used for defence, state security or law enforcement purposes, unless otherwise provided by federal law or presidential acts.
The Law imposes specific duties on developers, operators, owners and users of AI systems:
- Developer: Must ensure the safety of the model by avoiding discriminatory functionality, implementing warning systems about prohibited uses, documenting the model's architecture, logic and limitations, modelling potential risks, and setting procedures for the maintenance and control of AI-enabled facilities.
- Operator: Must provide safe-use instructions, test the system to identify unlawful use, inform users about the system's purpose and limitations, maintain and monitor AI-enabled facilities, immediately suspend operation if there is a threat of harm, keep records of incidents, and appoint persons responsible for safe operation.
- Owner: Must set access rules including a clear ban on unlawful uses, take measures to prevent unlawful use, inform users that they are interacting with AI (unless this is obvious), fulfil additional obligations if the service has over 500,000 daily Russian internet users, and implement technical mechanisms limiting the creation of unlawful content.
- User: Must comply with the access rules set by the owner, use AI consistently with applicable law, and refrain from attempting to bypass built‑in safety and control mechanisms or alter prescribed operating parameters.
Key additional features of the draft include:
- Trusted AI models: Only "trusted AI models" (which meet established safety and quality requirements and are included in a state register) may be used in state information systems and critical information infrastructure. The Government sets the procedure for inclusion in the register and the requirements for trusted status.
- Individual rights: Individuals have the right to refuse AI-based services and to receive services in a non-AI form, as well as the right to pre-trial appeal and compensation for harm caused by unlawful use of AI.
- IP rights: Objects of IP created using AI services are protected under the Civil Code, provided they meet originality and protectability criteria, regardless of whether they are created by a human or an automated system.
- Enforcement: Authorised federal executive bodies are empowered to carry out continuous monitoring and analysis of the consequences of the application of AI technologies, including the collection and systematisation of information about incidents, threats and risks.
The consultation closes on 15 April 2026.
Read the Law here.
