AI and Copyright – what is to come in 2026?

AI and copyright law are rapidly evolving across the UK, EU, and US, with major cases, policy updates, and new regulations set to shape the landscape in 2026

14 January 2026

Publication


The intersection of artificial intelligence (AI) and copyright law is rapidly evolving, with significant legal, regulatory, and policy developments expected in the coming year. As AI technologies become increasingly sophisticated and integrated into creative and commercial processes, courts, governments and industry stakeholders are grappling with how best to balance innovation with the protection of intellectual property rights. This article provides an overview of the most significant recent developments and outlines what to expect in the next 12 months, with a particular focus on the UK, EU and US.

Getty Images v Stability AI [2025] EWHC 3343 (Ch)

Following her long-awaited judgment in the High Court case of Getty Images v Stability AI [2025] EWHC 3343 (Ch) (which we reported on here), Mrs Justice Joanna Smith was asked to consider applications by both parties for permission to appeal aspects of the case. The claimants (Getty Images and others) sought permission to appeal the dismissal of their claim for secondary infringement of copyright. Mrs Justice Smith granted permission to appeal on this issue, noting that it raised a pure question of law—specifically, the statutory construction of the term “infringing copy” under the Copyright, Designs and Patents Act 1988 (CDPA) in the context of AI models. She emphasised that this was a novel and important point, not previously considered by the courts, with potentially far-reaching implications for AI and software more generally. Accordingly, she has allowed Getty to pursue all their proposed grounds of appeal on this issue before the Court of Appeal.

However, a factual finding made by Mrs Justice Smith at first instance complicates this appeal. While she found that electronic copies stored in an intangible medium (such as an AI model) could constitute “infringing copies” under the CDPA, she also found that AI models such as Stable Diffusion do not store copies of training data in the model weights. According to existing case law, if an article does not constitute a reproduction of a copyright work, it cannot be an “infringing copy” under the CDPA. This raises the question of whether the first instance decision on this point can in fact be reversed if the factual findings made by Mrs Justice Smith are not overturned.

Conversely, the Judge refused the defendant Stability AI’s application for permission to appeal her findings of trade mark infringement. She found that the grounds advanced by the defendant had no real prospect of success, as they largely sought to challenge her factual findings and her identification of the average consumer in the context of watermarks. Mrs Justice Smith concluded that her findings were well-supported by the evidence and that Stability AI’s arguments did not meet the threshold for permission to appeal. Accordingly, should Stability AI still wish to appeal, it will need to seek permission directly from the Court of Appeal.

Interestingly, on 5 January 2026, the Judge ordered Getty Images to pay about 70% of Stability AI’s costs (despite Getty Images being a narrow winner on the trade mark claim). Further, Getty Images has been ordered to make an interim payment to Stability AI of about £4.4m (plus interest) in legal costs. These costs orders may be revisited upon appeal. The Judge has also ordered an inquiry as to damages on Stability AI’s (limited) acts of trade mark infringement.

UK Government consultation on copyright and AI

In late 2024, the UK Government launched a consultation on copyright and artificial intelligence, which attracted substantial public interest, garnering over 11,500 responses before closing in February 2025. The consultation addressed a range of issues concerning generative AI and copyright law, and set out three broad options for intervention in this area, in addition to an option to make no changes to the current system. By way of reminder, the options proposed were as follows:

  • Option 0: Do nothing; copyright and related laws remain as they are
  • Option 1: Strengthen copyright by requiring licensing in all cases
  • Option 2: A broad data mining exception
  • Option 3: A data mining exception which allows right holders to reserve their rights, underpinned by supporting measures on transparency (the Government’s preferred option)

The most controversial of these was Option 3, which sparked strong opposition, particularly from major stakeholders in the creative industries, and led to extensive debate in Parliament. The controversy culminated in a legislative compromise during the passage of the Data (Use and Access) Act (DUAA). As a result, the Secretary of State for Science, Innovation and Technology is now required to publish two key documents by 18 March 2026: (1) an economic impact assessment of the options proposed by the UK Government’s consultation on copyright and AI; and (2) a report on the use of copyright works in AI development. In the interim, and as mandated by the DUAA, the Government has released a progress statement on these matters, ahead of the final report’s publication.

On 15 December 2025, the UK Government published its “Copyright and artificial intelligence statement of progress under Section 137 Data (Use and Access) Act”. The progress statement details the government’s extensive engagement with stakeholders as part of the consultation process; and provides a statistical overview of the consultation responses.

The consultation invited feedback not only on the main policy options for AI training, but also on specific elements such as transparency, technical standards, and licensing arrangements. Beyond the training of AI models, the consultation considered broader issues, including the application of copyright to computer-generated works, the labelling of outputs produced by generative AI, and digital replicas.

The UK IPO was keen to ensure that all responses were reviewed and analysed by human eyes, without the use of AI, and so it has created a taskforce of approximately 80 existing intellectual property policy officials and analysts to conduct this review.

The consultation results revealed that a significant majority—88% of respondents—favoured the requirement for licences in all cases where copyright works are used to train AI models (Option 1). The other options received considerably less support: 7% preferred maintaining the current legal framework (Option 0), only 3% supported the government’s preferred option of introducing a copyright exception for text and data mining (TDM) with rights reservation, and only 0.5% backed a broad exception without rights reservation (Option 2). A small proportion (1.5%) did not indicate a preference.

There was strong backing from the creative industries for statutory transparency requirements to underpin licensing arrangements for AI training. Conversely, most technology sector respondents, including AI developers, supported either an exception with rights reservation (Option 3) or a broad exception (Option 2). Additionally, several respondents proposed alternative options to those set out by the government, such as exceptions specific to research activities.

The progress statement also explains that the UK IPO has convened expert technical groups on four key themes, namely:

  • Control and technical standards
  • Information and transparency
  • Licensing
  • Wider support for creatives

These working groups, comprising over 50 experts from diverse fields including music, publishing, film, visual arts, videogames, collective management, research, academia, and technology, provide a forum for in-depth policy discussion and the development of practical solutions.

Alongside these efforts, a cross-party Parliamentary working group has been convened to engage MPs and Peers on key policy issues such as transparency, licensing, creator remuneration, and the need for workable technical solutions. The findings and perspectives from these groups will inform the government’s forthcoming report on the use of copyright works in the development of AI systems.

The UK IPO continues to emphasise that copyright laws must protect creative works whilst also ensuring that the UK reaps the benefits of AI and remains a leading innovator on the world stage.

The Department for Science, Innovation and Technology (DSIT), working jointly with the Intellectual Property Office (IPO) and the Department for Culture, Media and Sport (DCMS), will lay both the economic impact assessment and the report on the use of copyright works in the development of AI systems before Parliament by 18 March 2026.

EU Code of Practice on Marking and Labelling of AI-Generated Content

The EU published the first draft of its Code for Transparency and AI-Generated Content on 17 December 2025 (accessible here), with feedback on the draft due this month (January 2026). The purpose of the Code is to assist the providers and deployers of AI models to meet their obligations under Article 50 of the EU AI Act, which entered into force on 1 August 2024. Article 50 obliges such organisations to ensure that AI-generated or manipulated content is marked in a machine-readable format and that it is detectable as artificially generated or manipulated.

The draft Code is split into two sections: the first covers rules for marking and detecting AI content (aimed at providers), while the second covers the labelling of deepfakes and certain AI-generated or manipulated text relating to matters of public interest (aimed at deployers).

The first section requires providers of generative AI systems to mark outputs (including synthetic audio, image, video, and text) in a machine-readable and detectable manner. Providers must use a multi-layered approach to marking, combining techniques such as metadata embedding, imperceptible watermarks, and fingerprinting or logging. Signatories of the Code are also required to facilitate the detection of AI-generated content by users and third parties, including by providing publicly available detectors to enable users to verify the origin of certain content, supporting AI literacy, and maintaining compliance frameworks.

Under the second section, deployers of AI systems must disclose the artificial origin of deepfakes and AI-generated or manipulated text in a clear and distinguishable manner, using a common taxonomy and icon. The Code provides detailed measures for different content modalities (e.g., video, audio, images) and requires accessible disclosure for all users, including those with disabilities. It also outlines compliance, training, and monitoring requirements, and encourages cooperation with market surveillance authorities and other stakeholders.

A second draft of the Code, integrating the feedback received, is expected to be published in March 2026, with publication of the final Code of Practice expected by June 2026.

New AI Litigation – A Global Roundup

Meanwhile, several high-profile lawsuits are progressing across the US and Europe, with the outcomes likely to influence the global legal landscape for AI and copyright. We expect further developments in these proceedings during the course of 2026.

US

Attack the Sound v Kunlun

On 17 December 2025, independent musicians and songwriters filed a class action against Skywork AI and its parent company Kunlun Tech in the US District Court for the Northern District of Illinois. The plaintiffs allege that their copyrighted music and voices were misappropriated to train the AI-powered music generation programme, Mureka. Attack the Sound seeks damages under federal copyright law, state privacy statutes, and the Illinois Biometric Information Privacy Act. The defendants’ responses to the allegations are pending.

Lyon v Adobe

At the end of last year, Adobe faced its first AI copyright infringement claim. On 16 December 2025, the author Elizabeth Lyon filed a class action against the tech company on behalf of a proposed class of affected copyright holders. Ms Lyon claims that Adobe trained its SlimLM AI models using the "SlimPajama" (also known as "RedPajama") dataset, which allegedly contained pirated copies of copyrighted books.

U.S. News and World Report v OpenAI

In November 2025, U.S. News and World Report filed a lawsuit against OpenAI. U.S. News alleges that OpenAI used its copyrighted articles and rankings to train ChatGPT without authorisation or payment, allegedly resulting in direct harm to its business and enabling the AI to reproduce its content without credit. The complaint also asserts that ChatGPT generates inaccurate information attributed to U.S. News, which could negatively impact its reputation.

The case is part of a larger, multi-district litigation and has been consolidated with similar lawsuits for consideration by the court. The core legal issue being considered across the consolidated cases is whether using publicly available copyrighted materials for AI training constitutes “fair use” under US copyright law.

California Newspapers Partnership v Microsoft and OpenAI

California Newspapers Partnership seeks damages from Microsoft and OpenAI for allegedly memorising and using copyrighted news articles to train their large language models (LLMs) without consent. It is also alleged that the use complained of has caused the LLMs to generate verbatim copies of the protected works in response to prompts, which California Newspapers Partnership claims undermines its subscription models for access to its works.

The case has been consolidated with a wave of similar lawsuits and is currently progressing before the U.S. District Court for the Southern District of New York as part of a multi-district litigation.

Ziff Davis v OpenAI

Ziff Davis (a digital media publisher and owner of publications such as PCMag, IGN, and Mashable) alleges that OpenAI used its copyrighted material without permission to train its AI models and that ChatGPT outputs sometimes provide incomplete or false summaries of its content.

The complaint was brought in April 2025, but several elements of the complaint were dismissed by the Judge in an order issued on 16 December 2025, following a motion to dismiss filed by OpenAI. Nevertheless, the claims of contributory copyright infringement and removal of copyright management information have been allowed to proceed and the case continues before the U.S. District Court for the Southern District of New York.

Europe

Robert Kneschke v LAION

Photographer Robert Kneschke sued LAION, a non-profit organisation that creates AI research datasets such as LAION-5B, after his watermarked image appeared in one of its datasets. Kneschke’s image had been downloaded from Shutterstock, where it was subject to terms and conditions prohibiting scraping without consent. On 27 September 2024, the Hamburg District Court found that there had been infringement in the form of reproduction of Kneschke’s work in the dataset. However, LAION was permitted to rely on the TDM exception for non-commercial scientific research purposes, as it was deemed a research organisation providing a dataset for non-commercial use.

This first instance decision was upheld by the Higher Regional Court of Hamburg on 10 December 2025, but the Hamburg Court went further in also finding that the general TDM exception in Article 4 of the Directive on Copyright in the Digital Single Market (Directive (EU) 2019/790) (DSM Directive) applied because Kneschke had not provided an express, machine-readable reservation of rights when the image was downloaded in late 2021.

The case continues, as the Hamburg Court has allowed a further appeal to the Federal Court of Justice.

GEMA v OpenAI

In November 2025, the Munich Regional Court found that OpenAI’s use of copyrighted song lyrics in AI training and model outputs constituted copyright infringement.

In this case, GEMA, the German music rights collecting society, alleged that OpenAI used lyrics from nine well-known German songs to train its AI models and that these models reproduced the lyrics almost verbatim in the model outputs. The court ruled that there had been a degree of memorisation of the song lyrics within the AI models and that both this and the reproduction of the lyrics in the model outputs constituted infringement of copyright under German law.

The court also determined that the TDM exception did not apply because the permanent memorisation and reproduction of substantial parts of the copyrighted works fell outside the scope of the exception.

The judgment is not yet final and OpenAI has announced its intention to appeal the decision to the Munich Higher Regional Court. A referral to the Court of Justice of the European Union (CJEU) is also possible.

Like Company v Google Ireland Limited

On 3 April 2025, a claim by the Hungarian publisher Like Company against Google’s Gemini AI, for allegedly reproducing and summarising its press articles without permission, was referred to the Court of Justice of the European Union (CJEU). This referral represents the first opportunity for the CJEU to assess the application of EU copyright law to generative AI.

The CJEU is being asked to consider:

  • Whether the display of text from web pages and press publications in LLM-based chatbot responses constitutes a communication to the public under EU law;
  • Whether the process of training an LLM-based AI chatbot on copyright protected works constitutes an instance of reproduction;
  • If there is reproduction, whether the TDM exception under Article 4 of the DSM Directive applies; and
  • Who is liable for AI chatbot outputs that include copyright protected content in response to a user’s prompt.

However, the judgment is not expected until at least 2027.

Summary

The next 12 months could be pivotal for the future regulation and adjudication of AI in the context of copyright law. In the UK, it is hoped that the outcome of the Getty Images appeal and the UK Government’s forthcoming recommendations will shape the future legal framework in this area and indicate how the Government and the Courts will ensure an appropriate balance between the rights and interests of rightsholders and AI developers. The EU’s Code of Practice will set new standards for transparency and the labelling of AI-generated content, while ongoing litigation in the US and Europe is expected to clarify the boundaries of copyright protection in the context of AI training and outputs.

For more AI and intellectual property insights, visit our Insights page.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.