Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.
This edition brings you:
- UK Government delays plans to introduce AI regulation
- Swiss Federal Council publishes AI regulatory approach
- UK Government issues Playbook for public sector AI use
- Illinois introduces AI Safety and Security Protocol Bill
- Virginia high-risk AI bill passes through the legislature
- European Parliament Research Service publishes briefing on AI Act and GDPR
1. UK Government delays plans to introduce AI regulation
The UK Government has reportedly postponed its plans to introduce an AI regulation bill (AI Bill), possibly in light of wider geopolitical developments and shifting approaches to AI regulation elsewhere (for example, in the new US administration). The long-awaited AI Bill is now unlikely to be presented to Parliament before the summer, according to sources within the Labour Party.
The AI Bill is expected to address concerns about the potential risks posed by advanced AI models by, for example, requiring developers to submit these models for testing by the UK's AI Security Institute.
Despite the delay, the UK Government has stated that it remains committed to developing legislation that balances the benefits and risks of AI, and has plans to launch a public consultation in due course.
Read more here.
2. Swiss Federal Council publishes AI regulatory approach
On 12 February 2025, the Swiss Federal Council presented its regulatory approach to AI, focusing on innovation, the protection of fundamental rights and increasing public trust in AI. This approach was based on a report commissioned by the Federal Council and prepared by the Federal Department of the Environment, Transport, Energy and Communications (DETEC) and the Federal Department of Foreign Affairs (FDFA).
Key points of the Federal Council’s AI regulatory approach include:
- Incorporation of the Council of Europe's AI Convention: Switzerland will integrate this convention into its national law to align with international standards.
- Sector-specific legislative amendments: Necessary legislative changes will be made on a sector-specific basis, with general regulations limited to areas central to fundamental rights, such as data protection.
- Development of non-binding measures: In addition to legislation, measures such as self-declaration agreements or industry solutions will be developed to implement the convention.
The Federal Department of Justice and Police (FDJP) will prepare a consultation draft by the end of 2026. This draft will implement the Council of Europe's AI Convention by defining the necessary legal measures, particularly in areas such as transparency, data protection, non-discrimination and supervision.
DETEC, together with the FDJP, the FDFA and the Federal Department of Economic Affairs, Education and Research, will develop an implementation plan for other measures not covered by legislation by the end of 2026. This plan will aim to ensure compatibility with the regulatory approaches of Switzerland's main trading partners and involve both internal and external federal stakeholders.
Read more about the regulatory approach here.
3. UK Government issues Playbook for public sector AI use
On 10 February 2025, the UK Government published an AI playbook (the Playbook) which provides comprehensive guidance on the safe, responsible and effective use of AI technologies within Government.
The Playbook includes 10 principles that civil servants should uphold when using AI. It explains what AI is, its capabilities, limitations and risks, and how to select, buy and deploy AI solutions in government.
The Playbook outlines several areas of legal considerations that are crucial for the development and deployment of AI solutions:
- Compliance with data protection laws: AI systems must comply with the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018. This includes ensuring that personal data is processed lawfully, fairly and transparently.
- Data protection impact assessments (DPIAs): DPIAs must be conducted where AI systems involve high-risk processing of personal data, in order to identify and mitigate privacy risks.
- Accountability: Organisations must have clear ownership of risk and responsibility for compliance with data protection laws.
- Liability: Contracts for technology services may need to incorporate procedures for system errors and outages.
- Ownership and rights of intellectual property (IP): Contracts should clearly define the ownership and usage rights of any IP generated during AI projects.
- Equality Act 2010: AI systems must comply with the Equality Act 2010 and assessments should be undertaken to ensure that AI systems do not discriminate against individuals based on protected characteristics.
The Playbook includes an appendix with real-life use cases of AI adoption in various Government departments and public sector organisations, which illustrate the capabilities and risks of AI use in the public sector.
Read the Playbook here.
4. Illinois introduces AI Safety and Security Protocol Bill
On 18 February 2025, the Artificial Intelligence Safety and Security Protocol Bill (HB 3506) (the AI Safety Bill) was introduced in the Illinois General Assembly and is now under review by the Rules Committee, which will determine its next steps. The AI Safety Bill aims to establish comprehensive safety and security measures for the development and deployment of AI models. Under the AI Safety Bill, AI developers would be required to produce, implement and make publicly available detailed safety and security protocols. The protocols must include specific information on managing critical risks, testing procedures and security protections.
Key provisions of the AI Safety Bill include:
- Safety and security protocols: Developers must create and follow a documented protocol detailing how they will manage critical risks, including thresholds for intolerable risks, testing procedures and security measures.
- Risk assessment reports: Developers are required to publish a risk assessment report every 90 days, covering recent risk assessments and any new or modified AI models that pose higher levels of critical risk.
- Third-party audits: At least once a year, developers must hire a reputable third-party auditor to assess compliance with the safety and security protocols. The audit report must be published within 90 days of completion.
- Redaction and whistleblower protections: The AI Safety Bill allows for the redaction of sensitive information in published documents to protect trade secrets, public safety and national security. It also includes whistleblower protections, enabling employees to report critical risks anonymously.
- Enforcement and penalties: The Attorney General can bring civil actions against developers for non-compliance, with penalties up to $1,000,000. The AI Safety Bill also allows for injunctive relief if a developer's activities pose an imminent threat to public safety.
Interestingly, the AI Safety Bill closely resembles the provisions of the latest draft of the Code of Practice for general-purpose AI (GPAI) models under the EU AI Act, which is still being finalised.
The AI Safety Bill will now be reviewed in detail by the Rules Committee. This committee will decide whether the AI Safety Bill should be assigned to a substantive committee for further consideration. If assigned, the substantive committee will hold hearings, possibly amend the bill, and vote on whether to advance it to the next stages. If it passes the committee, it will go through a second and third reading in the chamber, where a final vote will be taken.
Read the AI Safety Bill here.
5. Virginia high-risk AI bill passes through the legislature
On 20 February 2025, the High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094) (the AI DDA) was passed by the Virginia General Assembly. Having been approved by the legislature, the AI DDA now awaits the Governor's signature to become law. The AI DDA applies to developers and deployers of high-risk AI systems that conduct business in Virginia.
Key aspects of the AI DDA include:
- Developer obligations: Developers must prevent algorithmic discrimination, disclose AI system limitations, and provide detailed documentation on evaluation, intended uses and performance of the AI system.
- Deployer obligations: Deployers are required to implement risk management policies, conduct pre-deployment impact assessments, and disclose information about the AI system’s purpose, nature and data used, especially for consequential decisions.
- Documentation and disclosure: Substantial modifications to AI systems must be documented and disclosed within 90 days for developers and 30 days for deployers, with exemptions for trade secrets, security risks or legal compliance.
- Enforcement and penalties: The Attorney General has exclusive authority to enforce the provisions of the bill, with civil penalties for violations ranging from $1,000 to $10,000.
The AI DDA will now go to the Governor for approval. If signed, the law will take effect on 1 July 2026, and will be enforced by the Virginia Attorney General.
Read more here.
6. European Parliament Research Service publishes briefing on AI Act and GDPR
On 26 February 2025, the European Parliament published a briefing on the interplay between the AI Act and the GDPR, focusing on how the AI Act prohibits algorithmic discrimination and how the GDPR places strict controls on the processing of special categories of personal data.
The report noted that Article 10(5) of the AI Act permits the processing of special categories of personal data, but only to the extent strictly necessary for bias monitoring, detection, and correction in high-risk AI systems, with appropriate safeguards to protect fundamental rights. It further emphasised that AI systems classified as high-risk under the AI Act require compliance with GDPR principles, including obtaining explicit consent for processing special categories of data.
For AI systems processing special category data under Article 10(5) of the AI Act, the report outlined several conditions that must be met to comply with the GDPR:
- Robust cybersecurity measures must be implemented to prevent data leaks.
- Adherence to GDPR principles including data minimisation, purpose and storage limitation, integrity and confidentiality, and privacy by design.
- Special categories of data should be processed only when strictly necessary to protect fundamental rights, such as non-discrimination.
- A valid ground under Article 9 GDPR for the processing of sensitive data must exist, such as the explicit consent of the data subject, and the processing of such data must be strictly necessary.
The briefing concluded that the GDPR's requirements on data processing may prove restrictive given the extensive use of AI across various sectors and the mass processing of personal and non-personal data, and that a reform of the GDPR or further guidance might be necessary to address these issues.
Read the report here.
