A masterclass in can-kicking: UK Government views on AI and copyright

UK Government’s 2026 AI and copyright report backs no reform, keeps the status quo, favours licensing and transparency, and extends uncertainty for creators.

26 March 2026

  • Despite five years of consultation, the UK Government has declined to back any of the four main options for AI-related UK copyright reform on which it consulted, shelving the previously preferred opt-out text and data mining (TDM) exception (Option 3) and ruling out a broad TDM exception (Option 2), so the current legal position remains essentially unchanged for now.
  • This means copying of protected works for commercial AI training in the UK will continue to require a licence. It would be surprising if this position changed in the near future because the UK Government appears committed to retaining some level of copyright protection against TDM to safeguard the UK’s creative sector.
  • The Government’s “wait and see” stance prolongs uncertainty for all stakeholders; while rights holders will welcome the decision not to push through a change to the law that would have allowed model developers to use their content more freely, their ability to enforce their rights remains unclear and is yet to be tested in the courts.
  • Meanwhile, model developers have no certainty about the extent to which they will have to disclose their training data sources and methods if they market their models in the UK. However, the mood music from Government suggests that any transparency requirements that are introduced in the UK will be at least equivalent to the training data disclosure requirements under Article 53 of the EU AI Act.
  • This uncertainty could discourage developers both inside and outside the UK from focusing on the UK market, which would be hugely damaging to the UK’s AI agenda.
  • One of the few areas in which the Government has given a clearer indication of its position is the labelling of AI-generated outputs, where it currently prefers industry-driven tools, standards and a targeted labelling working group over immediate legislation.
  • The Government also expressed support for the abolition of the computer-generated works right under s.9(3) of the Copyright, Designs and Patents Act 1988 and is actively considering a new digital replica/personality right to address deepfakes and unauthorised commercial exploitation.

The Report and impact assessment

On 18 March 2026 the UK Government published two long-awaited documents on AI and copyright:

  • a substantive Report on Copyright and Artificial Intelligence (the “Report”) running to 125 pages; and
  • an accompanying Copyright and AI Impact Assessment (the “Impact Assessment”) running to 52 pages.

Both documents were produced pursuant to the Data (Use and Access) Act 2025 following the Government’s 2024/25 consultation on copyright and AI. Together, they were intended to set out the Government’s current thinking on how copyright should apply to AI training and use of AI, and how different reform options could affect the UK economy.

However, despite the Government emphasising the importance of “protecting the UK’s position as a creative powerhouse, while unlocking the extraordinary potential of AI to grow the economy and improve lives”, its approach has been to further delay providing clear policy direction that would achieve this aim in the interests of both the UK’s creative industries and AI developers.

This article summarises the key points from both publications and highlights what they mean for rights holders and AI developers.

Background

The Report fulfils the obligation in section 136 of the Data (Use and Access) Act 2025 to report on the use of copyright works in AI training and on specific themes listed in the Act (access to data, transparency, technical tools, licensing and enforcement), plus two additional topics considered as part of the consultation, namely computer generated works and digital replicas.

The consultation ran from 17 December 2024 to 25 February 2025 and attracted 11,520 responses, the majority of which reflected the views of the creative industries (including many individuals), with a significant, but smaller, number from AI and tech companies, researchers and cultural institutions.

The Impact Assessment, required by section 135 of the same Act, provides an economic analysis of the four copyright policy options consulted upon, and considers the effects on copyright owners, AI developers and users, including individuals and SMEs. The Impact Assessment deliberately does not name a preferred option for the UK, but instead sets out the illustrative costs and benefits of each and identifies key uncertainties which the Government says necessitate further research and consultation before policy decisions can be made.

Assessment of the Four Policy Options

As reported previously, the consultation tested four main options for copyright and AI training:
1. Option 0 – Do nothing (status quo).
2. Option 1 – Strengthen copyright so licensing is required for AI development in all cases, potentially backed by transparency and market access measures.
3. Option 2 – A broad data mining exception, allowing text and data mining (TDM) for any purpose, with no opt-out.
4. Option 3 – A data mining exception which allows rights holders to reserve their rights, underpinned by supporting measures on transparency (the Government’s preference at consultation stage).

While Option 3 was cited as the Government’s preferred option at the start of the consultation, the Government has now shelved this proposal, as well as Option 2, and states that it no longer has a preferred option. It cites the as-yet untested nature of new copyright exceptions and their impact on AI investment, as well as uncertainty as to the effect that copyright reform may have on the market for copyright licensing.

Option 3 would have created a TDM exception similar to Article 4 of the EU’s DSM Directive, permitting TDM for any purpose on lawfully accessed works, but only where rights holders have not expressly reserved their rights. This option was widely opposed (supported by only 3% of respondents), particularly among rights holders, due to the lack of effective rights reservation mechanisms available to support opt-outs, as well as the regulatory burden this option would place on rights holders, with individual and SME rights holders likely to be most negatively affected because of their limited resources. Developers also expressed concerns that this option would require them to continuously review training data and potentially relicense content they had already accessed in response to newly reserved rights, and warned that it would encourage high take-up of opt-out mechanisms and make the UK a less attractive location for AI training.

The Impact Assessment emphasises that the effects of Option 3 are highly sensitive to opt-out rates and the effectiveness of technical tools. If few works are reserved, the outcome resembles Option 2; whereas if many are reserved, the effect tends towards the status quo. In all cases, rights holders would face implementation, monitoring and enforcement costs, which would weigh more heavily on SMEs and individuals.

Option 1 would maintain the basic structure of current UK copyright but “strengthen” it in three ways:

  • Licensing in all cases: affirming that uses of copyright works in AI training always require permission, with existing exceptions interpreted narrowly.
  • Transparency obligations: detailed disclosure of training data sources and web crawlers, and labelling of AI outputs, potentially overseen by a regulator or public body.
  • Market access measures: applying UK copyright rules to models trained overseas, but placed on the UK market (e.g. through secondary infringement / importation concepts or EU style AI regulation), so that developers could not rely on more permissive overseas regimes when serving UK users.

The creative industries strongly favour this option in principle, with licensing viewed as the primary mechanism for fair remuneration, and transparency and market access proposed as ways to prevent “forum shopping” by AI developers.

While the Impact Assessment notes that this option could increase licensing activity and revenues for rights holders, particularly large ones with extensive catalogues and bargaining power, it may also cause overseas AI providers to withdraw or delay deployment of models in the UK if the cost and risk of adapting their global training practices to UK standards outweighs the value of the UK market. This could dampen the growth of the UK AI sector, reducing AI adoption and limiting the productivity gains that could be achieved across all UK sectors. Accordingly, the Government is reluctant to revise UK copyright law to apply it to AI systems developed abroad, and will instead allow the UK courts to clarify the scope of secondary copyright infringement (see Getty Images v Stability AI – Getty has been granted permission to appeal and Stability AI is currently seeking leave to appeal) and monitor international legal developments in this area.

Option 2 (overwhelmingly opposed by the creative industries) would have introduced a broad, general-purpose TDM exception allowing copyright works to be mined for any purpose, including commercial AI training, where the user has lawful access. The Impact Assessment cites this option as the most attractive for AI development in the UK, by facilitating access to large data sets, while also having the most negative implications for the creative industries: by reducing incentives for licensing, it presents the greatest risk to rights holders’ revenue generation. Ultimately, in light of strong stakeholder opposition, technical uncertainty and the fast-moving international context, the Government has ruled out adopting such a broad exception at this stage.

In the absence of any policy reform proposed by the Government, the status quo (Option 0) will continue, at least for now. Under the existing system, it is generally accepted that copying works for AI training in the UK requires a licence unless one of the existing exceptions applies (e.g. temporary copies, or TDM for non-commercial research). While the absence of reform (for now) may be welcomed by some in the creative industries, the lack of transparency obligations on AI developers is perceived by many as a barrier to rights holders and creators enforcing their rights, since the absence of disclosure of training data sources makes it near-impossible to prove which works have been used for AI training.

Lack of clarity about the level of transparency requirements that will apply when marketing models in the UK is also bad news for model developers because it frustrates their ability to plan their business strategy for the UK. This uncertainty could end up discouraging developers both inside and outside the UK from focusing on the UK market. That would be hugely damaging to the UK’s AI agenda.

As the Government has concluded that the consultation has not produced a clear “winning” option, it is now exploring alternative, more targeted reforms to UK copyright law that were proposed by consultation respondents. Examples cited in the Report include:

  • an exception for data mining for science and research (including commercial research), allowing AI-driven analysis of lawfully accessed material where outputs do not compete with, or act as substitutes for, the original works;
  • an exception for public interest uses, e.g. online safety, content moderation, security, and detection of harmful or illegal content; and
  • a broad exception along the lines of Option 2 but accompanied by a statutory licence or levy akin to an option being explored by the Government of India.

The Government intends to gather further evidence on these alternative approaches in order to determine their potential for supporting the Government’s objectives.

Input transparency

There is strong consensus among rights holders in favour of greater transparency about the sources of training data and how they have been collected (e.g. by web crawlers), with over 90% of survey respondents indicating support for some form of disclosure.

Transparency is cited by rights holders as essential to their ability to enforce their rights effectively against misuse. Some rights holders are even calling for UK copyright law to be applied to AI models which are developed overseas but made available in the UK. AI developers, by contrast, caution that detailed work-by-work disclosure may be technically challenging and costly, particularly for start-up developers, and risks exposing valuable trade secrets.

While UK copyright law makes it an infringement to deliberately remove or alter rights-management information on copyright works, there is at present no requirement under UK law for AI developers to disclose the details of copyright works used to train their models.

The position taken by the Government is to work with industry stakeholders and monitor the approaches taken by the EU and California, in order to develop an approach that is effective yet proportionate, particularly for UK-based SME model developers, and which addresses AI models developed both inside and outside of the UK.

Output transparency and labelling

Most stakeholders support some form of labelling of wholly AI-generated content, particularly to combat deepfakes and disinformation. However, given the benefits of AI in supporting creative processes, there are calls for a more nuanced approach to AI-assisted works (such as photographs and CGI in films), where human creativity remains central to the output generated.

The Report acknowledges that some developers and service providers are already offering labelling services and tools in the UK, and reference is made to other jurisdictions (including the EU, California, China and South Korea) that have already legislated in this area.

The Government has acknowledged the public interest in the labelling of AI outputs by establishing a targeted working group to address these issues. The group intends to continue collaborating with industry and monitoring the development of new identification and provenance tools internationally, but is not yet proposing a statutory labelling regime for the UK.

Technical tools and standards

The Report devotes significant attention to technical tools and standards used to manage access to and the use of online content, including:

  • Site-based controls, such as robots.txt under the Robots Exclusion Protocol, and forthcoming IETF standards designed to allow more control over broad categories of web crawler.
  • Unit-based controls, such as embedded metadata in file headers to restrict access (e.g. the TDMRep Protocol) and tools like Glaze and Nightshade to resist style extraction.
  • Registries and notifications, such as the EU’s proposed centralised registry for opt-outs, and AI firms’ own direct opt-out notification processes.

In the Report, the Government recognises that these tools are primarily market-driven and that any legislative or regulatory intervention must not inhibit ongoing innovation in this area. Nevertheless, the Report also acknowledges the need for widespread adoption and buy-in from all stakeholders, including intermediaries, creators and AI developers, in order for any agreed standards to be effective. This includes ensuring that the available rights reservation methods are not unduly burdensome on rights holders and that, where such reservations are employed, they are respected by developers undertaking web crawling activities.

The Government’s policy is to work with experts and industry to develop best practice, while keeping the need for regulation under review and continuing to monitor the effectiveness of approaches being taken in other jurisdictions.

Licensing

Under UK copyright law, use of copyright works requires a licence from the copyright owner, unless one of the statutory exceptions applies.

The market for the licensing of copyright works for AI training is rapidly evolving, particularly for restricted or paywalled content, such as news, images and academic publications. The Impact Assessment notes that at present, licensing is market-driven and most of the reported deals are direct licences between large rights holders and large AI developers. However, collective licences for AI training are also starting to emerge (for example in publishing), which the Report acknowledges could complement the direct licensing already taking place.

Due to concern expressed during the consultation that licensing may not always benefit SME or individual rights holders and that it could restrict knowledge access, the Government is hesitant to intervene directly in licensing markets at this stage – for example by setting rates, imposing compulsory licences or introducing a statutory levy. Rather, the Government sees its role being to “enable” licensing by improving transparency, technical standards and access to public sector datasets (e.g. via the proposed Creative Content Exchange).

The Government proposes to monitor global developments and alternative licensing approaches such as that contemplated by the Government of India, before proposing changes to the current market-led approach.

Enforcement and regulators

The UK’s existing copyright enforcement framework is characterised as flexible, technology neutral and internationally highly regarded. The main practical difficulty for rights holders seeking to enforce their rights in cases of unauthorised use of copyright works is the cost of discovering and proving infringing use. Accordingly, representatives from the UK’s creative industries have called on the UK Government to establish stronger transparency rules to facilitate the detection of unauthorised use of copyright works for AI training.

Despite calls by some Parliamentarians for the Government to follow the EU’s regulatory approach to copyright and AI, the Report expressly states that the Government will not create a new dedicated AI/copyright regulator, nor will it give existing regulators (e.g. ICO, Ofcom, CMA) a general mandate over AI and copyright. Rather, the Government intends to keep under review the possibility of market access based regulation similar to the EU AI Act, but recognises the risk of deterring model availability in the UK. The Government also states that it will work with the judiciary and law enforcement to ensure enforcement remains effective and accessible, particularly for individuals and SMEs.

Computer-generated works

The consultation also sought views on the UK’s protection of computer-generated works. Section 9(3) of the Copyright, Designs and Patents Act 1988 grants a specific form of copyright protection to “computer-generated” literary, dramatic, musical and artistic works where there is no human author and where such works are original. The author in such cases is deemed to be the person by whom the arrangements necessary for the creation of the work are undertaken.

The Report concludes that this provision is increasingly at odds with the underlying rationale of copyright (which is to reward human creativity) and that it sits uneasily with the modern originality test (author’s own intellectual creation reflecting personal choices), which presupposes a human author.

Prior to the consultation, the Government’s preferred approach was to abolish this right unless the consultation produced strong evidence in support of it. However, despite most respondents expressing clear dissatisfaction with the right as drafted, the Government has decided to continue monitoring the use and impact of s.9(3) before pushing forward with legislative reforms to remove it.

A new UK personality right?

The Report also addresses “digital replicas” – AI-generated imitations of a person’s voice or likeness, often in highly realistic “deepfake” form. It recognises both the legitimate uses (e.g. dubbing, accessibility, de-aging in film) and the growing risks (commercial exploitation without consent, reputational damage, deception, and technology-facilitated sexual abuse), and identifies a gap in the law.

At present, UK law offers only partial, piecemeal protection against “deepfakes”, primarily via performers’ rights, the law of passing off, defamation law, data protection rules and online safety regulations; and the consultation reflected a general sentiment that the current legal framework is inadequate for addressing the risks posed by unauthorised digital replicas. The Report notes that:

  • many performers and creators feel they do not have a reliable legal route to prevent, or obtain redress for, unauthorised digital replicas, particularly when they are not high profile figures; and
  • contractual protection alone is considered inadequate and does not provide artists with sufficient control over their image and likeness.

Several other jurisdictions (including Denmark and the United States) are developing new digital replica rights or updating existing “right of publicity” laws to deal specifically with AI-generated likeness and voice. The Government therefore proposes to explore options for stronger protection against unauthorised digital replicas, including whether it would be beneficial to introduce a new digital replica or personality right in UK law, and how this would interact with existing IP, privacy and criminal frameworks.

Concluding thoughts

Evidently, the current legal framework in the UK is inadequate for achieving the Government’s goal of encouraging investment in AI while bolstering the UK’s creative industries; but finding a solution which achieves this, and which appropriately balances the interests of both the AI developers and creative industries, is proving to be a monumental challenge.

For now, the Government’s approach is to “wait and see”, leaving both sides of the debate with no choice but to do the same. The Government’s reluctance to provide clarity and direction in this area has been criticised for perpetuating the uncertainty which both creatives and AI developers cite as a barrier to progress. Lord Holmes of Richmond MBE (one of the UK’s leading authorities on digital regulation) has summarised the position for us as follows:

“It’s difficult to imagine that creatives or developers will be delighted with no decision and more consultations? We never expected necessarily much from the Government update report but perhaps they could have committed to more than this. It seems that, as is the case across the Government’s AI ‘strategy’, it’s yet more wait and see. Wait and see – the perspective of the spectator, not the player. Whether creative or citizen, developer or deployer, clarity is required. The UK needs a cross-economy, cross-sector AI Bill and the time for this is well overdue.”

In the meantime, we recommend that rightsholders and model developers prepare for a world in which:

  • licences are required for the large scale copying and commercial use of protected works for AI training in the UK (unless works are used for non-commercial research purposes); and
  • model developers are required to disclose information about the type of data they have used to train their models and the jurisdictions in which the training took place if they wish to market their models in the UK. We think it is likely that, at a minimum, these transparency requirements will be equivalent to the training data disclosure requirements in Article 53 of the EU AI Act. However, it is quite possible that UK training data transparency rules could end up being more onerous than those under the EU AI Act.

Simmons & Simmons will continue to monitor developments in this area and to report on the same. For more AI and intellectual property insights, visit our Insights page.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.