Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.
This edition brings you:
California enacts SB 53, establishing rules for AI safety and accountability
EU Commission launches AI Act Service Desk & unified guidance portal
EU Commission seeks feedback on draft guidance and reporting template on serious AI incidents under the EU AI Act
Canada launches AI Strategy Task Force
Vietnam opens public consultation on draft AI law
Competition Commission of India releases report on AI and competition
EU unveils strategy for ‘sovereign’ AI ecosystem
European Ombudswoman launches inquiry into AI standards development
1. California enacts SB 53, establishing rules for AI safety and accountability
On 29 September 2025, Governor Gavin Newsom signed SB 53, the Transparency in Frontier Artificial Intelligence Act (the Law), which aims to bolster California’s leadership in trustworthy, cutting-edge AI. The Law takes effect on 1 January 2026 and introduces new transparency requirements, safety mechanisms, and protections to balance innovation with public accountability.
Key features of the Law include:
- Transparency of safety practices: Large “frontier” AI developers must publicly disclose a frontier AI framework detailing how they integrate national, international, and industry standards for safety and risk mitigation.
- Critical incident reporting: The Law mandates reporting of critical safety incidents and catastrophic risk assessments, notifying California’s Office of Emergency Services and making oversight more visible.
- Whistleblower protections: Employees at in-scope AI labs are protected when disclosing internal concerns about safety risks or violations, provided they have “reasonable cause”.
- Public compute infrastructure (“CalCompute”): The Law establishes a public computing consortium to democratise access to AI infrastructure for startups, researchers, and innovators.
Read the Law here.
2. EU Commission launches AI Act Service Desk & unified guidance portal
On 8 October 2025, the EU Commission rolled out an AI Act Service Desk alongside the Single Information Platform, aiming to streamline and support the consistent implementation of the EU AI Act across all member states. These initiatives centralise resources, tools, and expert assistance to guide stakeholders through their legal obligations under the regulation.
Key features include:
- AI Act Explorer: A user-friendly tool that allows browsing of the full text of the AI Act, its chapters, recitals, and annexes with intuitive navigation.
- Compliance Checker: An interactive diagnostic tool designed to help organisations assess whether their AI systems or general-purpose AI models are subject to obligations under the AI Act, and to clarify next steps towards compliance.
- National Resources & FAQs: The platform aggregates member state materials, FAQs and clarification documents to provide locally relevant guidance.
- Service Desk inquiry support: Stakeholders can submit questions via an online form to the AI Act Service Desk, where a team of experts working closely with the EU AI Office can respond to clarify uncertainties.
- Accessible and current interface: The Single Information Platform is maintained as a central, evolving hub to keep stakeholders informed about updates, new guidance, and implementation timelines.
Explore the full platform offering and its tools here.
3. EU Commission seeks feedback on draft guidance and reporting template on serious AI incidents under the EU AI Act
On 26 September 2025, the European Commission published draft guidance and a reporting template under Article 73 of the EU AI Act, inviting public feedback until 7 November 2025. These documents aim to clarify how providers of high-risk AI systems should identify, assess, and report “serious incidents” in compliance with forthcoming obligations.
The key features of the documents include:
- Clarifying definitions and scope: The draft sets out what constitutes a “serious incident” (direct or indirect harms to life, health, infrastructure, rights, property or environment) and provides practical examples to help interpretation.
- Tiered reporting deadlines: Depending on severity, providers will be required to notify national authorities within 2 to 15 days of becoming aware of the incident.
- Interplay with other EU regimes: The guidance addresses overlaps with existing reporting obligations such as those under NIS2, DORA, and CER, clarifying when AI Act rules apply in addition to, or instead of, those regimes.
Although these obligations will not take effect until August 2026, the draft is intended to help providers prepare ahead of time, and the feedback received will inform the final guidance.
Access the downloadable draft guidance and template here.
4. Canada launches AI Strategy Task Force
On 26 September 2025, Canada announced the creation of an AI Strategy Task Force and opened a 30-day national consultation, aiming to shape the next phase of Canada’s AI strategy.
Canada has already invested an estimated CA$742 million in AI since 2017 (through its Pan-Canadian AI Strategy) and launched a CA$2 billion sovereign AI compute strategy in 2024. This initiative is intended to position Canada as an AI leader by drawing on diverse voices from academia, industry, civil society and the public.
Key elements include:
- Broad thematic advice sought: The consultation will cover research & talent, AI adoption in public and private sectors, commercialisation, scaling Canadian AI firms, safe and trustworthy AI, education & skills, infrastructure, and security.
- Task Force composition & consultation networks: The Task Force includes experts from universities, AI firms, innovation bodies and civil society, who will actively consult their networks to submit recommendations.
- “National sprint” for public input: From 1 to 31 October 2025, the public and stakeholders can submit views via the Consulting Canadians portal.
- Timeline & deliverables: In November 2025, the Task Force will publish key, actionable ideas and proposals to feed into Canada’s renewed AI strategy.
Read the official news release here.
5. Vietnam opens public consultation on draft AI law
On 29 September 2025, Vietnam’s Ministry of Science and Technology released the draft Law on AI (the Draft Law) for public consultation. The Draft Law seeks to create a unified governance regime for AI activities, supporting innovation while ensuring transparency, accountability, and national security.
Key features of the Draft Law include:
- Comprehensive scope and definitions: The Draft Law applies to all AI activities within Vietnam or those affecting Vietnamese citizens and markets, defining key terms such as “AI system”, “AI agent”, and “national AI infrastructure”.
- Human-centred and ethical principles: All AI systems must adhere to principles of fairness, transparency, explainability, accountability, and respect for human dignity, alongside national sovereignty and safety.
- Risk-based classification: AI systems are divided into unacceptable, high, medium, and low-risk categories, with stricter requirements for high-risk uses, including registration and safety assessment.
- Transparency and labelling obligations: Users must be notified clearly when engaging with AI systems, and AI-generated content such as deepfakes must be labelled appropriately.
- Prohibited AI practices: The Draft Law lists nine categories of banned AI uses, including those that threaten public order, security, or fundamental rights.
- Governance and oversight structure: A National AI Committee, chaired by the Prime Minister, will coordinate policy, standardisation, and supervision across sectors.
- Support and incentives for innovation: The Draft Law proposes an AI Development Fund, tax incentives, regulatory sandboxes, and open-source development support to foster domestic AI capacity.
- Penalties and enforcement: Serious violations may result in revenue-linked fines or suspension of AI systems in cases of significant risk or harm.
The consultation is set to close on 20 October 2025.
Read the Draft Law (in Vietnamese) here.
6. Competition Commission of India releases report on AI and competition
On 6 October 2025, the Competition Commission of India (CCI) released its Market Study on AI and Competition (the Report), conducted via the Management Development Institute Society (MDIS). The Report explores how AI adoption is reshaping competition in India, highlights emerging risks, and offers policy and industry recommendations.
Key features of the Report include:
- Rapid AI uptake & market impact: The Report finds that AI technologies are being adopted quickly across diverse sectors, transforming business models, competitive dynamics and regulatory responses.
- Competition challenges ahead: The Report flags issues such as information asymmetries, dominance in AI infrastructure, and potential anti-competitive strategies that could stifle innovation if left unchecked.
- Policy & enforcement proposals: To address these risks, the CCI will host a national conference on AI regulation, run workshops on competition compliance, strengthen its technical capacity, and coordinate with other regulators.
- Practical guidance to businesses: Enterprises are encouraged to carry out self-audits of their AI systems, improve transparency in algorithmic use, and put checks in place to detect distortive practices.
- Focus on enabling a level playing field: The Report also calls for enhanced access to compute infrastructure, regulatory support for AI innovators, and efforts to remove barriers for new entrants.
Read the Report here.
7. EU unveils strategy for ‘sovereign’ AI ecosystem
On 8 October 2025, the European Commission unveiled its Apply AI Strategy (the Strategy), backed by a €1 billion investment to accelerate AI deployment across key European industries. The Strategy aims to strengthen technological sovereignty and harness practical innovation under Europe’s regulatory framework.
Key features of the Strategy include:
- Sectoral focus: The Strategy targets key industries including healthcare, energy, mobility, manufacturing, defence, agriculture, biotech and digital services.
- Regulatory easing & support: The Strategy builds on the AI Continent Action Plan by offering simplified regulatory pathways, compliance tools, and funding support for startups and SMEs facing burdens under the AI Act.
- Digital deployment centres: Plans include deploying AI screening centres in healthcare and pushing agentic AI models in manufacturing, climate science, and pharmaceuticals.
- Funding & collaboration: The €1 billion will be drawn primarily from EU programmes such as Horizon Europe and Digital Europe, with possible co-funding from member states and private investors.
- Wider role of Apply AI: The Strategy aims to bridge the gap in AI integration, as currently only 13.5% of EU firms utilise AI. It seeks to embed AI into public services, innovation infrastructure, and industrial systems.
Read the Strategy here.
8. European Ombudswoman launches inquiry into AI standards development
On 26 September 2025, the European Ombudswoman formally opened an inquiry into the European Commission’s process for developing harmonised AI standards under the EU AI Act, following concerns over transparency and accountability.
The inquiry centres on how private standards bodies such as the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC) have been engaged, notably checking whether their internal meetings, membership, and decision-making are sufficiently open and balanced, and how the Commission monitors and oversees their work.
As part of this process, the Ombudswoman has requested detailed documentation and explanations from the Commission about governance, stakeholder representation, and procedural safeguards.
Read the official announcement of the inquiry here.