Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.
Practical Insights on AI and Legal Advice Privilege
As AI becomes embedded in business and legal workflows, protecting legal advice privilege has never been more critical. On 4 March 2026, we published the AI & Legal Privilege Guide and Policy Framework to help lawyers and clients navigate the use of AI without compromising privilege. This resource explains where and how privilege can be lost when AI is used, and provides organisations with a practical, policy-ready framework that can be adopted and tailored to their own risk profile.
This edition brings you:
New York adopts Responsible AI Safety and Education Act
Colorado proposes revised AI Bill
UK Digital Regulation Cooperation Forum publishes paper on agentic AI
Bank of England publishes letter on AI in financial services
Egypt publishes AI governance framework and guidelines
China issues draft guidance on OpenClaw AI agents
1. New York adopts Responsible AI Safety and Education Act
On 27 March 2026, the Governor of New York signed the Responsible AI Safety and Education (RAISE) Act (the Act). The Act applies to frontier AI developers, particularly large developers with annual revenues exceeding USD 500 million, engaged in developing, deploying, or operating high-compute foundation models in New York.
The Act imposes transparency and safety obligations on these developers, including:
- Publication of frontier AI frameworks detailing risk assessment and mitigation processes.
- Mandatory transparency reports before deployment.
- Regular updates to safety frameworks.
- Prohibitions on misleading statements regarding risks.
- Reporting of critical safety incidents within 72 hours, or within 24 hours where imminent harm is identified.
- Periodic submission of internal risk assessments.
The Act also establishes reporting mechanisms, annual public safety summaries from 2028, and designates the Department of Financial Services (DFS) as responsible for setting out the detail of obligations.
The DFS will oversee compliance with disclosure and registration requirements, with enforcement through civil penalties of up to USD 1 million for initial violations and USD 3 million for subsequent violations.
The Act is set to come into force on 1 January 2027.
Read the Act here.
2. Colorado proposes revised AI Bill
On 17 March 2026, the Governor of Colorado endorsed the Colorado AI Policy Working Group’s proposal for a revised AI Bill (the Bill) to amend the state’s current AI Act, SB 24-205 (the Act).
The Act regulates high-risk AI systems – AI systems used to make consequential decisions in areas such as employment, education, housing, and insurance – whereas the Bill’s scope is narrower, focusing on automated decision-making technology.
The Bill also imposes less onerous transparency and monitoring obligations than the Act. If enacted, the Bill would remove many of the requirements under the Act, including:
- Active protection of consumers from algorithmic discrimination.
- Implementation of risk management programs and policies.
- Conducting of annual impact assessments.
- Publication of specific information about AI being used.
The Bill has yet to be formally introduced. The Act is due to take effect on 30 June 2026; if passed, the Bill would come into force on 1 January 2027, superseding that deadline.
Read the Bill here.
3. UK Digital Regulation Cooperation Forum publishes paper on agentic AI
On 31 March 2026, the Digital Regulation Cooperation Forum (DRCF) published a paper exploring agentic AI in the context of existing UK law and regulatory frameworks. The DRCF comprises four regulators – the Competition and Markets Authority, Financial Conduct Authority, Information Commissioner’s Office, and Ofcom – and the paper considers the implications of agentic AI across four areas: governance; data protection and cybersecurity; consumer rights and interests; and market dynamics and competition.
All four regulators agree that AI agents fall within existing UK legal regimes and that obligations of transparency, fairness, safety, consumer protection, and competition continue to apply as agentic AI develops.
The paper identifies opportunities for consumers, businesses, and regulators in relation to agentic AI, while highlighting risks including lack of transparency due to bundling of AI actions, collusion between agents, dependence on providers of AI infrastructure, and hyper-personalisation of AI-generated content.
Going forward, the DRCF intends to continue shaping a joined-up regulatory approach to agentic AI, with plans for further horizon-scanning work in 2026-27.
Read the paper here.
4. Bank of England publishes letter on AI in financial services
On 1 April 2026, Bank of England (BOE) Deputy Governor Sarah Breeden and Prudential Regulation Authority (PRA) CEO Sam Woods published a letter on AI in financial services, responding to the Government’s joint ministerial letter of January 2026, which requested that regulators publish a plan for AI innovation and report annually on their regulatory approach.
The letter sets out the BOE and PRA’s plans to enable safe AI innovation. It outlines ongoing engagements with practitioners and other regulatory bodies, such as the BOE and FCA’s upcoming biennial survey of AI adoption in the financial sector, and an AI Consortium to provide a platform for stakeholder input on the capabilities, development, and deployment of AI.
The BOE is keeping under review its technology-agnostic approach to regulation and considering whether further action or guardrails might be needed for the responsible adoption of AI in the financial sector.
Read the letter here.
5. Egypt publishes AI governance framework and guidelines
On 15 March 2026, Egypt’s National Council for AI published its guide (the Guide) to Egypt’s National AI Governance Framework (the Framework). The Framework is a suite of AI governance and policy instruments, including the National AI Strategy and National Guidelines for Trustworthy and Responsible AI, which set out Egypt’s envisaged approach to AI regulation. The Framework also encompasses existing sector-specific regulations such as Egypt’s personal data protection law and fintech law.
The Guide outlines the overarching principles and structure of Egypt’s AI governance. It explains the Framework’s risk-based regulatory approach, with AI systems categorised into four tiers:
- Tier 1: Prohibited AI Systems (Unacceptable Risk)
- Tier 2: High Risk AI Systems
- Tier 3: Limited Risk AI Systems
- Tier 4: Minimal or No Risk AI Systems
Each tier will have specific compliance obligations, which are to be detailed in further guidance.
The Guide states that the successful implementation of the Framework is intended to directly inform the drafting of Egypt’s future AI legislation.
Read the Guide here.
6. China issues draft guidance on OpenClaw AI agents
On 31 March 2026, China’s cybersecurity standards body, TC260, issued draft guidance aimed at mitigating security risks in the deployment and use of OpenClaw AI agents, the free and open-source AI agents developed by OpenClaw and capable of carrying out multi-step tasks without human intervention.
The guidance is directed at individual users of OpenClaw agents and organisations managing internal use. It recommends measures at the installation, configuration, use, and removal stages, advising cloud-based deployment of agents where possible rather than installation on everyday terminals, to increase data security and operational stability.
Individuals are advised to dedicate separate devices for use of OpenClaw agents, limit inputs and access to sensitive data, and back up key data on a regular basis. Organisations should establish policies for AI governance, maintain a register of approved agents, log and audit agent activities, and provide targeted training for staff. The guidance flags the need for organisations to monitor and detect shadow agents deployed by employees without approval.
The guidance is open for public comment until 15 April 2026.
Read the guidance here (available in Chinese only).

