Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.
This edition brings you:
China mandates labelling of AI-generated content in new regulation
Governor of Virginia vetoes high-risk AI law
Italian Senate approves bill amending copyright protections for AI works
Japan’s Financial Services Agency publishes discussion paper on responsible use of AI in the finance sector
UK Copyright Licensing Agency expands licences to support generative AI use in the workplace
US NIST finalises guidelines to mitigate cyberattacks on AI systems
1. China mandates labelling of AI-generated content in new regulation
On 14 March 2025, the Cyberspace Administration of China released the final ‘Measures for Labelling Artificial Intelligence-Generated Content’ and the mandatory national standard ‘Cybersecurity Technology – Labelling Method for Content Generated by Artificial Intelligence’ (together the Labelling Rules) which aim to enhance transparency by imposing labelling requirements for AI-generated content. The Labelling Rules are due to come into force in September 2025.
The Labelling Rules aim to cover the whole lifecycle of AI-generated content, from its production to dissemination and use, and contain provisions for both implicit and explicit labelling. For content which might confuse the public, such as generated text or voice and face swapping, explicit labels must be placed prominently to show that the content is AI-generated. The Labelling Rules also provide for implicit labels to be embedded in the metadata of AI-generated content, although this is ‘encouraged’ rather than mandatory given the technical challenges it may present to businesses.
Service providers which disseminate information online are also required to verify whether metadata contains implicit labels and, if so, to disclose the presence of AI-generated content to the public.
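Purely by way of illustration (the Labelling Rules and the accompanying national standard prescribe their own technical format, and the field names below are hypothetical), the two-sided mechanism described above, a producer embedding an implicit label in metadata and a dissemination platform checking for it, can be sketched as follows:

```python
import json


def add_implicit_label(metadata: dict, provider: str, content_id: str) -> dict:
    """Embed a hypothetical implicit AI-generation label in content metadata."""
    labelled = dict(metadata)
    labelled["AIGC"] = {  # hypothetical field name, not taken from the Rules
        "Label": "AI-generated",
        "ContentProducer": provider,
        "ContentID": content_id,
    }
    return labelled


def contains_implicit_label(metadata: dict) -> bool:
    """Check metadata for the label, as a dissemination platform might."""
    return metadata.get("AIGC", {}).get("Label") == "AI-generated"


meta = add_implicit_label({"title": "Example clip"}, "ExampleAI Co.", "abc-123")
print(json.dumps(meta, indent=2))
print(contains_implicit_label(meta))                    # True
print(contains_implicit_label({"title": "Home video"}))  # False
```

The sketch assumes a simple key-value metadata model; in practice implicit labels would be carried in the file formats' own metadata containers.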
Read the Labelling Rules here and here (only available in Chinese).
2. Governor of Virginia vetoes high-risk AI law
On 24 March 2025, HB 2094, a proposed law on the development, deployment, and use of high-risk AI, was vetoed by the Governor of Virginia. HB 2094 would have required developers of high-risk AI systems to provide detailed documentation outlining the limitations and performance evaluations of their systems and detailing measures taken to mitigate algorithmic discrimination risks.
HB 2094 would also have required deployers to disclose AI usage to consumers, particularly in cases of adverse decisions, and to offer consumers opportunities for appeal in such cases.
The Attorney General would have had enforcement authority and potential fines for breach would have ranged from $1,000 to $10,000.
In his veto statement, the Governor described HB 2094 as a law that ‘stifles progress and places onerous burdens’ on businesses and warned that it could have harmed ‘the creation of new jobs, the attraction of new business investment, and the availability of innovative technology’.
It remains to be seen whether the Democratic-controlled Virginia General Assembly will seek to override the (Republican) Governor’s veto, which would require a two-thirds vote in both houses. It will also be interesting to see if this impacts the progress of other state AI bills in e.g. Texas, Illinois and New York.
Read more here.
3. Italian Senate approves bill amending copyright protections for AI works
On 20 March 2025, the Italian Senate approved a bill on Provisions and Delegation to the Government on AI (the Bill). The Bill amends Law No. 633 on the Protection of Copyright and Neighbouring Rights by limiting copyright protections to works of human intellect. Under the Bill, works generated using AI tools are afforded copyright protection only if they represent the result of the human author’s intellectual effort.
The Bill also provides for access by AI models (including generative AI models) to certain information online. Under the Bill, AI models can reproduce and extract works or materials which are available online or in databases for the purpose of text and data mining, provided that such access is legitimate.
In relation to the EU AI Act, the Bill appoints two institutions to oversee compliance and implementation. The Agency for Digital Italy (AgID) is tasked with promoting AI innovation and overseeing the accreditation and monitoring of AI conformity bodies. The National Cybersecurity Agency (ACN), meanwhile, has responsibility for supervising AI systems and monitoring cybersecurity risks. In addition, the Bill appoints AgID as the notifying authority and ACN as the market surveillance authority under the EU AI Act.
The Bill is now subject to approval by the Chamber of Deputies, which is the final stage of approval before it can come into force.
Access the Bill here.
4. Japan’s Financial Services Agency publishes discussion paper on responsible use of AI in the finance sector
On 4 March 2025, Japan’s Financial Services Agency (FSA) published an AI discussion paper on the use of AI in the financial sector.
The paper is based on a survey of 130 financial institutions, as well as interviews with selected firms and international policy developments. The paper suggests use cases for both traditional and generative AI, including automation of business processes, customer interaction, risk management and market analysis.
The paper identifies challenges such as explainability, bias, cybersecurity and the potential for AI to be misused in financial crime. It also gives examples of effective AI governance measures adopted by financial institutions, including internal policies and staff training programmes, which are suggested as models to be adopted by others in the sector.
Read the full paper (only available in Japanese) here and read an English summary here.
5. UK Copyright Licensing Agency expands licences to support generative AI use in the workplace
On 18 March 2025, the UK Copyright Licensing Agency (CLA) announced that new permissions would be added to its commercial and public sector licences, allowing businesses to use copyright-protected content to prompt generative AI tools.
The new permissions will come into effect on 1 May 2025. Businesses will be able to upload, or copy and paste, third party materials into generative AI tools at work without infringing any copyright in the materials, subject to payment of a licence fee.
This follows research by the CLA involving 4,000 organisations, which found that 61% of professionals are already using generative AI at work, many of whom upload third party materials into their prompts.
Read more here.
6. US NIST finalises guidelines to mitigate cyberattacks on AI systems
On 20 March 2025, the US National Institute of Standards and Technology (NIST) issued final guidelines for AI developers and deployers to identify and mitigate cyberattacks on their systems (the Guidelines).
The Guidelines differentiate between predictive and generative AI systems, and set out the potential attacks relevant to each. The attacks are further classified with respect to the type of AI system, the stage of the machine learning process in which the attack is mounted, and the objectives and capabilities of the attacker.
By listing and analysing the various possible attacks that could be launched, the Guidelines aim to assist organisations in understanding the risk and severity of the attacks and to employ mitigation techniques accordingly. For each different class of attack, mitigation techniques are listed and technical detail is given on how they can be employed and the challenges they face. For example, the Guidelines set out three main classes of defences against adversarial evasion attacks: adversarial training, randomised smoothing and formal verification.
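As a purely illustrative sketch (not drawn from the Guidelines themselves), one of the defence classes mentioned above, randomised smoothing, can be thought of as classifying many noise-perturbed copies of an input and taking a majority vote, so that a small adversarial perturbation is less likely to flip the overall prediction. The toy classifier below is hypothetical:

```python
import random
from collections import Counter


def base_classifier(x: float) -> str:
    """Toy base model: a simple threshold on a 1-D input (hypothetical)."""
    return "positive" if x > 0.0 else "negative"


def smoothed_classifier(x: float, sigma: float = 0.5, n_samples: int = 500) -> str:
    """Randomised smoothing: majority vote over Gaussian-perturbed copies of x."""
    votes = Counter(
        base_classifier(x + random.gauss(0.0, sigma)) for _ in range(n_samples)
    )
    return votes.most_common(1)[0][0]


random.seed(0)
# Near the decision boundary, a tiny nudge flips the base classifier...
print(base_classifier(-0.01))    # "negative"
# ...but for an input well inside a class, the smoothed vote is stable.
print(smoothed_classifier(1.0))  # "positive"
```

The same idea scales to image classifiers, where the noise is added per pixel and the vote is taken over the model's predicted classes.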
Read the Guidelines here.
