AI View - May 2026

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

14 May 2026

Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

Webinar: EU AI Act Transparency Guidelines

On 8 May 2026, the European Commission published its draft Guidelines on the four Article 50 transparency obligations for AI systems under the EU AI Act. The obligations apply from 2 August 2026, and the Guidelines provide critical guidance for organisations providing or deploying AI systems subject to the AI Act.

Join members of our AI Group for a short webinar to hear about the key points from these draft Guidelines and how they may impact your organisation.

Register for the webinar here.

This edition brings you:

  1. EU lawmakers reach agreement on AI Act amendments

  2. EU Commission shares first draft of Transparency Guidelines for the EU AI Act

  3. US Senate Judiciary Committee advances bill restricting children’s access to AI chatbots

  4. US lawmakers introduce sweeping federal AI legislation

  5. Connecticut lawmakers approve legislation to restrict addictive social media features for minors and introduce guardrails for AI tools

  6. China cyber regulator expands AI enforcement campaign to emerging security risks

1. EU lawmakers reach agreement on AI Act amendments

The European Parliament and the Council have reached a provisional agreement on amendments to the EU AI Act as part of the EU’s Digital Omnibus.

For our analysis of the provisional agreement - including its key points, areas of uncertainty, and what organisations should be doing now - please see our Insights piece here.

2. EU Commission shares first draft of Transparency Guidelines for the EU AI Act

On 8 May 2026, the European Commission published draft guidelines clarifying the transparency obligations for AI systems under Articles 50(1)-(4) of the AI Act (the Draft Guidelines). The Draft Guidelines sit alongside the Code of Practice on transparency of AI-generated content (the TCoP), a voluntary code that providers and deployers can use to demonstrate compliance with the marking and labelling obligations in Articles 50(2) and 50(4).

The Draft Guidelines, among other things:

  • Confirm that entities merely disseminating AI-generated content are not considered ‘deployers’.
  • Introduce an exemption from the machine‑readable marking requirement in Article 50(2) for certain business‑to‑business systems.
  • Clarify that content generated and made available before 2 August 2026 will not require labelling.

However, there are some potentially unhelpful points:

  • Prescriptive requirements for machine-readable marking and detection: Providers must adopt a multi-layered approach to marking and provide detection mechanisms for AI-generated or manipulated content.
  • Broad definition of ‘deep fake’: The definition covers persons, events and other subject matter that merely resemble something that “can exist or could have existed” (not just specific real people or things).
  • Prescriptive requirements for ‘clear and distinguishable disclosures’: Disclosures hidden under menu options will not be compliant, and disclosures must be easily understood by a broad audience (including, for example, children), which may in practice require obtrusive disclosures.
  • Broad requirement to mark public interest text content: Labelling applies broadly to content “accessible by an indeterminate, fairly large number of unrelated, potential readers”.
  • Narrow exemption for human review: Superficial or procedural checks will not suffice, nor will the mere existence of an editorial policy.
  • Negative implications for not signing the TCoP: Non-signatories can expect more requests for information (RFIs) and will need to provide more detailed explanations of their compliance measures.

The Draft Guidelines are subject to a consultation, which is open for comment until 3 June 2026.

Read the Draft Guidelines here.

3. US Senate Judiciary Committee advances bill restricting children’s access to AI chatbots

On 30 April 2026, the US Senate Judiciary Committee unanimously approved S.3062, the Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act (the Bill), advancing bipartisan legislation aimed at restricting children’s access to AI “companion” chatbots. Despite the 22-0 vote in favour, senators acknowledged that the Bill faces significant hurdles on the Senate floor.

The Bill:

  • requires age verification to prevent under‑18s from accessing AI “companion” chatbots;
  • introduces criminal penalties for designing chatbots that generate sexual content in conversations with minors or promote violence or self-harm;
  • requires regular disclosures that chatbots are not human; and
  • prohibits chatbots from purporting to be professionals such as therapists or lawyers.

Senators from both parties frame the Bill as a response to perceived industry failures to protect children. However, technology and civil liberties groups criticise the proposal as overly broad, warning that it could unduly restrict protected speech and create privacy and security risks through mandatory age verification.

Within the committee, some members supported advancing the Bill while signalling that changes will be needed, including narrowing the scope of the access restrictions. A companion bill mirroring the Bill’s provisions has also been introduced in the House of Representatives by Representatives Blake Moore and Valerie Foushee, indicating support for restrictions across the US political spectrum.

Read the Bill here.

4. US lawmakers introduce sweeping federal AI legislation

On 27 April 2026, California Representatives Ted Lieu and Jay Obernolte introduced the American Leadership in AI Act (the Bill), a federal bill that consolidates more than 20 prior bipartisan proposals and the recommendations of the Bipartisan Artificial Intelligence Task Force that they co‑chaired.

The Bill covers:

  • AI standards, testing and evaluation;
  • research infrastructure and support for cutting‑edge AI research;
  • modernisation of federal AI governance, procurement and security;
  • protection for workers and support for small businesses;
  • safeguards against harmful deepfakes; and
  • AI education, literacy and inclusion.

The Bill would task the National Institute of Standards and Technology with helping to shape AI standards so that US AI systems can operate and compete globally. It would also launch multiple pilot programmes – including one focused on standards‑setting – as well as challenges to spur AI innovation within federal agencies.

To modernise public‑sector AI use, the Bill proposes measures to support agency implementation and procurement of AI, alongside an “accountability” provision addressing flawed, inaccurate or biased AI‑driven decisions affecting individuals.

The Bill would still need to be passed in the House of Representatives before moving on to the Senate.

Read the Bill here.

5. Connecticut lawmakers approve legislation to restrict addictive social media features for minors and introduce guardrails for AI tools

On 1 May 2026, Connecticut lawmakers gave final approval to state legislation that targets youth social media addiction and introduces new guardrails for AI tools, including chatbots and AI‑driven employment systems. The legislation now awaits signature by Connecticut Governor Ned Lamont, who has expressed support for the initiative.

The social media provisions, proposed by Attorney General William Tong and Governor Lamont, prohibit platforms from exposing minors to harmful and addictive algorithms and notifications without parental consent. They establish strong default settings on account privacy, usage time and notifications, including a ban on notifications between 9 pm and 8 am, with parental consent required to alter those defaults. Social media companies must also report annually on the number of minors using their services, how many have parental consent to use addictive algorithms, and minors’ average daily time on the platform, broken down by age and time of day. When a minor opens a social media app, a warning label pop‑up must inform them of the mental health risks associated with social media use.

The AI provisions require chatbot operators to implement protocols to detect self‑harm, clearly notify users that they are communicating with an AI companion, and bar minors from using chatbots that can encourage self‑harm or purport to offer mental health services.

The legislation also (i) introduces disclosure requirements where automated decision technologies are used in employment decisions; (ii) mandates that notifications of mass layoffs and plant closures specify whether they are related to AI or other technologies; and (iii) requires the state to develop a plan to address AI‑related employment impacts.

Read the legislation here.

6. China cyber regulator expands AI enforcement campaign to emerging security risks

On 30 April 2026, the Cyberspace Administration of China (CAC) launched a new four‑month “cleanup” campaign targeting what it describes as “chaos” in AI applications, broadening last year’s crackdown to cover emerging technical and content‑related risks. This is the second consecutive year of a dedicated enforcement push against AI‑related abuses, signalling a shift from rulemaking to stepped‑up supervision as generative AI tools proliferate across sectors.

As in 2025, the campaign will roll out in two phases. The first focuses on AI services themselves, targeting providers that:

  • fail to complete mandatory LLM filings;
  • have inadequate safety capabilities or weak review and filtering mechanisms; or
  • have poor oversight of training data or fail to properly label AI‑generated content.

The CAC has also introduced more detailed labelling obligations, including requirements to ensure cross‑platform recognition of embedded metadata markers. New priorities include tackling:

  • AI data poisoning;
  • abuse of “generative‑engine optimisation” techniques to manipulate AI‑generated search and recommendation results;
  • security loopholes in open‑source AI communities; and
  • risks linked to AI agents and “digital humans”, such as misuse of likenesses for livestreaming or professional services.

The second phase targets AI‑generated content and online behaviour, building on previous efforts against fabricated information, impersonation, astroturfing (misrepresentation of a funded campaign as grassroots support) and violent or vulgar content. New areas of concern include:

  • low‑quality, repetitive or culturally distortive “digital garbage”;
  • AI‑generated spoofs and “malicious remakes” of classic works and historical figures; and
  • harmful, sexualised or bullying content.

The CAC has urged local regulators to intensify oversight, require platform self‑inspections, close security loopholes and enhance technical capabilities to detect and prevent AI‑related risks.

Read press coverage here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.