Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.
On the EU AI Act, the provisions around prohibited AI and AI literacy came into force on 2 February 2025. Organisations in scope of the EU AI Act should ensure that they are not using prohibited AI and should take steps to implement AI literacy across their organisation.
This edition brings you:
US Government rescinds AI Executive Order
ASEAN Digital Ministers adopt Expanded Guide on AI Governance and Ethics
CNIL publishes its 2025-2028 strategic plan, including legal framework on AI
German Federal Office for Information Security updates guide on generative AI models
Competition Bureau of Canada issues report regarding feedback on AI and competition
US Government rescinds AI Executive Order
On 20 January 2025, President Donald Trump rescinded a 2023 Executive Order of former President Joe Biden that established a regulatory framework for AI in the US.
The 2023 Biden-era Order aimed to regulate AI by, for example, requiring companies to disclose key safety data and test results for powerful AI models in advance of their public release. The Order also led to the establishment of the US AI Safety Institute to manage AI risks.
With the rescission of Biden's Order, the US currently has no comprehensive regulatory framework in place for AI, leaving a governance gap - at least at federal level - as compared with the comprehensive regulatory measures instituted by Europe. AI laws and proposals continue to develop at State level, however.
President Trump's rescission may signal a "lighter touch" federal regime, which is likely to find support from proponents of accelerated AI development in the US.
Read more here.
ASEAN Digital Ministers adopt Expanded Guide on AI Governance and Ethics
During the fifth ASEAN Digital Ministers' Meeting, held on 17 January 2025, the Digital Ministers of the Association of Southeast Asian Nations (ASEAN) adopted the Expanded Guide on AI Governance and Ethics (the 2025 Guide). This builds upon the 2024 Guide, adding further policy recommendations to target six key risks posed by generative AI:
mistakes and anthropomorphism;
factually inaccurate responses and disinformation;
deepfakes and malicious activities;
infringement of IP rights;
privacy and confidentiality issues; and
propagation of embedded biases.
To combat these key risks, the 2025 Guide proposes policy recommendations, including:
Accountability: establishing shared responsibility across the generative AI (Gen AI) ecosystem to ensure ethical use and transparency.
Trusted Development and Deployment: applying best practices for safety, ethics and functionality in Gen AI governance, emphasising safety and model evaluation.
Content Provenance: developing solutions to identify AI-generated content, addressing deepfakes and misinformation risks.
Testing and Assurance: encouraging research on Gen AI's social and technical impacts, focusing on safety and societal benefits.
AI for Public Good: promoting Gen AI's potential for public sector improvement and economic development, including creating a collection of use cases and enhancing public awareness and education.
Read the full expanded guide here.
CNIL publishes its 2025-2028 strategic plan, including legal framework on AI
On 16 January 2025, the Commission nationale de l'informatique et des libertés (CNIL), France's data protection authority, published its 2025-2028 strategic plan (the Plan).
The Plan seeks to protect personal data and to safeguard individual rights amid the rise of new technologies, most importantly AI.
Key highlights of the Plan include:
AI as Axis 1: The CNIL notes that the popularisation of Gen AI has been accompanied by an increase in malicious or misleading content, and aims to combat this by raising public awareness of the challenges posed by AI and by championing individual rights.
Collaboration and Harmonisation: The CNIL commits to collaborating with European and other international authorities to better harmonise AI governance to:
provide guidance to stakeholders;
clarify the applicable rules; and
ensure AI systems comply with these applicable rules by conducting monitoring throughout the lifecycle of an AI system.
Read the announcement here (in French).
German Federal Office for Information Security updates guide on generative AI models
The German Federal Office for Information Security has published an updated guide on Gen AI models covering the capabilities and risks of large language models (LLMs), image generators and video generators (the Gen AI Guide).
The Gen AI Guide advises German companies and authorities to conduct risk analyses before they integrate Gen AI into their workflows, with special attention to be paid to the following:
User education and data management: educating users on the risks and opportunities of Gen AI, and carefully selecting and securing training data.
Sensitive data: conducting extensive pre-deployment testing, and treating models trained on sensitive data with extra caution.
Transparency and auditing: clearly communicating risks, countermeasures and limitations to enhance transparency, and implementing filters to monitor inputs and outputs to prevent unintended actions.
Vigilance and expertise development: staying alert to input manipulations and developing expertise through non-critical experimentation.
Read the Gen AI Guide here.
Competition Bureau of Canada issues report regarding feedback on AI and competition
On 20 March 2024, the Competition Bureau of Canada (CBC) published a discussion paper on the impact of AI and its potential effect on competition in Canada. On 27 January 2025, the CBC followed this by issuing a report summarising the feedback received, which included:
AI market dynamics:
AI markets should be distinguished from other digital markets, given their complexity and application to a wide range of sectors, including banking and finance, healthcare, and retail and e-commerce.
Startups are emerging in the AI sector, despite the continuing presence of a few large companies.
New and smaller firms face multiple barriers to entry, including access to data, computational power and human expertise.
Enforcing competition in the AI market:
AI can facilitate anti-competitive conduct, which may require new enforcement mechanisms.
Vertical integration, partnerships and investments can provide both benefits and risks.
There is a risk that algorithmic pricing and AI-driven collusion may lead to anti-competitive conduct.
Promoting competition in the AI market:
Canadian legislation should be tech-neutral and apply broadly regardless of the technology involved.
The CBC should be encouraged to conduct more thorough studies into the AI market.
International collaboration should be encouraged to improve AI policy standardisation.
Read the full report here.