Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.
This edition brings you:
UK ICO publishes report on Agentic AI
EU Commission launches consultation on building open digital ecosystems
New York unveils proposals to protect children online and restrict harmful AI chatbots
Japan to amend data privacy law to facilitate AI training
NIST opens consultation on security risks in autonomous AI agent systems
California adopts AB 489, extending false licensure rules to AI in healthcare
US House of Representatives passes bill to limit foreign adversaries' remote access to critical technology
1. ICO publishes report on Agentic AI
On 8 January 2026, the Information Commissioner's Office (ICO) published its 'Tech Futures: Agentic AI' report (the Report), exploring how autonomous, goal-directed AI "agents" may evolve and the data protection implications for UK organisations and consumers. The Report signals support for innovation while setting expectations for safe, lawful deployment.
Key features of the Report include:
Definition of "agentic AI": The Report defines agentic AI as systems that can plan, act and interact with digital environments (and other agents) to complete tasks. For example, shopping AI agents that compare prices, manage budgets and place orders with user permission.
Key data protection risks: Organisations must address issues such as lawful basis, purpose limitation, data minimisation, accuracy, and robust accountability. This includes establishing clear roles, maintaining audit trails, and providing redress mechanisms when agents act autonomously.
Human oversight and safety controls: Meaningful human review should be incorporated, guardrails for high-risk actions (such as payments, identity verification, and handling sensitive data) should be implemented, and strong security measures should be put in place to prevent misuse or escalation.
Transparency to users: Organisations should inform individuals when an agent is acting on their behalf, explain what data it uses and its limitations, and make clear when decisions can be contested or appealed.
The ICO will continue to monitor developments throughout 2026 and will engage with developers and deployers to clarify legal expectations as agentic AI use cases scale.
Read the report here.
2. EU Commission launches consultation on building open digital ecosystems
On 8 January 2026, the European Commission opened a call for evidence to shape its forthcoming European Open Digital Ecosystem Strategy. The initiative aims to enhance technological sovereignty and competitiveness by scaling the use of open-source solutions across the EU.
The call for evidence seeks input on barriers, incentives, and governance relating to open-source software and hardware, particularly in critical sectors and public services. This comes as the Commission prepares to set a strategic approach for the open-source sector, including a review of the 2020-2023 Open Source Software Strategy.
The call for evidence is open to developers, businesses, public administrations, academia, and civil society across Member States.
Feedback received will inform a Commission Communication on European open digital ecosystems, expected in the first quarter of 2026.
Read the call for evidence here.
3. New York unveils proposals to protect children online and restrict harmful AI chatbots
On 5 January 2026, New York Governor Kathy Hochul unveiled a legislative package aimed at protecting children online, with a particular focus on restricting AI chatbot features for minors using social media and gaming platforms. The proposals are part of a broader set of online safety and youth mental health initiatives.
Key measures include:
Disabling AI chatbot features for children: Platforms would be required to disable certain AI chatbot features for users under 18, in order to reduce risks from manipulative or unsafe interactions.
Privacy by default: The highest privacy settings must be applied for minors by default, such as blocking messages from non-connections and disabling location sharing.
Age verification and parental controls: The package expands age-verification requirements (including for online games) and introduces enhanced parental controls to help parents limit children's in-app purchases.
The measures build on previous New York actions, including safeguards for AI companions, social media warning labels, and restrictions on addictive feeds.
Read the proposals here.
4. Japan to amend data privacy law to facilitate AI training
On 9 January 2026, the Japanese government announced plans to revise the Personal Information Protection Law, aiming to enable AI developers to train models on certain categories of sensitive personal data without prior consent. The proposed changes are intended to enhance Japan's AI capability and global competitiveness.
The revisions would permit model training on sensitive personal data, such as criminal records, medical histories, and race, without the need to obtain individual consent.
The government contends that large-scale data learning is essential to improve AI accuracy, and that current consent requirements are impeding progress. The changes will introduce fines and penalties for malicious operators, including those trading large volumes of personal data.
The proposed amendments are scheduled for submission to Japan's ordinary Diet session, which is the main annual sitting of the national parliament, beginning on 23 January 2026.
Read press coverage of the proposal here.
5. NIST opens consultation on security risks in autonomous AI agent systems
On 9 January 2026, the US National Institute of Standards and Technology (NIST) launched a consultation on the security of AI agent systems. The initiative seeks input from industry stakeholders and researchers to support the secure development and deployment of autonomous AI agents.
Key areas of focus include:
- Security risks under review: The consultation addresses specific risks that arise when AI model outputs interact with software systems, such as indirect prompt injection, data poisoning, and specification gaming.
- Purpose of the consultation: NIST aims to gather insights that will inform the creation of voluntary guidelines and best practices for managing security challenges associated with AI agents.
- Impact: The resulting guidance will be designed to help mitigate risks that could affect public safety and national security.
The consultation is open until 9 March 2026.
Read the consultation here.
6. California adopts AB 489, extending false-licensure rules to AI in healthcare
On 1 January 2026, California enacted AB 489 (the Bill), a law extending false-licensure rules to AI in healthcare. The Bill aims to prevent AI systems from implying they are licensed health professionals. It applies existing "title protection" requirements to both the developers and deployers of AI used in healthcare settings, as well as to the AI's advertising and functionality.
Key provisions of the Bill include:
Prohibited conduct: AI or GenAI may not use terms, letters or phrases that indicate or imply a healthcare licence or certificate (for example, suggesting the system is a "doctor" or "M.D.") when no such licence exists.
Liability: Developers and deployers of such AI are responsible if their tools or marketing misrepresent licensure status.
Scope of application: The prohibition covers both the functionality of the AI (how it presents itself in use) and its advertising or promotional materials.
Enforcement: The law extends California's existing Business & Professions Code protections and aligns with California's unfair competition and false advertising regimes.
Read the Bill here.
7. US House of Representatives passes bill to limit foreign adversaries' remote access to critical technology
On 12 January 2026, the US House of Representatives passed the Remote Access Security Act (the Bill), a bipartisan measure that updates the Export Control Reform Act to confirm that US export controls cover remote and cloud-based access to controlled technologies, including advanced AI chips, by entities in foreign adversary countries.
The Bill is intended to close the "cloud loophole" by allowing controls on providing or enabling remote compute access that could otherwise help adversaries bypass restrictions on physical exports.
Read the Bill here.