On 14 May, Simmons & Simmons gathered legal experts, industry leaders, rightsholder representatives and policy advisors to discuss the evolving landscape of AI and its implications for copyright law. The event was led by partners Joel Smith and Priya Nagpal from our copyright team in London. We have set out a brief report of the event below.
Introduction
With the UK AI and copyright consultation closed and the UK Government currently reviewing its options and the 11,500 responses received, we kicked off the event with an anonymous poll (via menti.com) of our attendees to find out which policy option from the consultation they favoured - a copy of this report showing the poll results can be downloaded on the right hand side of this page. Most attendees favoured either the Government's published preferred option of introducing a text and data mining exception for commercial use with a rights reservation (opt-out) for copyright owners, or, alternatively, strengthening copyright with no wider exception. This split highlights the range of views in an emotive debate.
When we asked our audience which issue they deemed most important in the event that the UK Government introduces an EU-style TDM exception, the majority considered transparency from AI developers as to the sources of data and works used for training to be top of the list.
AI Fundamentals
Our LLM Programme Manager, David Huston (who leads a team of data scientists and AI developers at Simmons), then provided an overview of the 'AI Fundamentals' which summarised the key technical issues arising from training AI models, including what happens with training data, prompting and outputs.
Panel 1: "The current landscape with growing AI adoption"
Our first panel, led by Priya Nagpal, discussed the state of copyright law in the UK around the use of copyright material for the training and development of AI models, as well as potential infringement issues and the status of outputs. For the panel, we were joined by Priyanka Raswant (Founder & CEO, Morar.ai), Sam Sharps (Executive Director for Policy & Politics, The Tony Blair Institute) and Abbas Lightwalla (Director of Global Legal Policy, IFPI).
The panel debated the thorny issue of whether the use of third-party copyright works to train AI models does in fact involve acts of copyright infringement, or whether these activities are akin to human learning, for example when people read books.
Unsurprisingly, our panellists had very different views on this issue. Abbas Lightwalla from IFPI provided the perspective that copyright is implicated when developing a generative AI model, concluding that: "the UK's existing framework which requires licensing is the best way forward compared to an opt-out mechanism, which does not provide sufficient legal clarity for either party."
At the time, reports were beginning to emerge that the UK Government was potentially changing its preferred option (as mentioned above) and Ministers were looking more broadly at different proposals (including promoting licensing of training data and some form of transparency obligation for AI developers).
This was in response to lobbying from the creative industries (and organisations such as IFPI and the BPI) as well as Baroness Kidron's proposals in the House of Lords to amend the Data (Use and Access) Bill. At the time, the amendments centred around a transparency obligation for AI developers (requiring them to provide copyright owners with information about whether their works had been used as part of pre-training, training, fine-tuning and retrieval-augmented generation in the AI model, or any other data input to the AI model). Our panel debated how practical it would be for AI developers to implement any required transparency obligations.
Since then, the amendments to the Data (Use and Access) Bill have been the subject of Parliamentary "ping pong" between the House of Commons and the House of Lords. The Government resisted the amendments as premature given the ongoing consultation, and the Bill has now passed without a copyright amendment. However, the Government has confirmed that the decision on AI and copyright has been deferred for a year, likely until after judgment in the Getty Images v Stability AI trial, so that it can be dealt with in a new AI Regulation Bill to be put forward in July 2026. In the meantime, the Government has committed to publish a full impact assessment of the proposed legislation, as well as a comprehensive report looking at six areas, including transparency, technical standards and licensing options.
Sam Sharps from The Tony Blair Institute noted that, whilst many agreed at a policy level about adopting transparency obligations, "there might be some reluctance to comply with such obligations due to concerns over trade secrets and the specifics of what data is used."
Our poll on the potential measures the Government could adopt showed that the highest-ranking concern (at 28%) was that any opt-out would need to be in a machine-readable form, and how that would work in practice for both rights holders and AI deployers. The importance of an effective opt-out mechanism for rights holders was also acknowledged by the EUIPO in its recent study of generative AI developments from the perspective of EU copyright law. The EUIPO concluded that the capacity of rights holders to reserve their rights effectively is a pre-requisite for the development of an effective licensing market for the use of copyright material as training data.
But as Priyanka Raswant pointed out, should the UK Government over-regulate in this area, the UK could miss out on investment, with AI developers choosing to relocate their operations to jurisdictions with a more attractive regulatory environment. In any event, a significant amount of AI model training has already been carried out, and it may be difficult to "course correct" now.
Keynote speech: Andrea Appella (Associate General Counsel EMEA, OpenAI and Visiting Professor at King's College London)
We were next joined by our keynote speaker Andrea Appella (Associate General Counsel EMEA, OpenAI and Visiting Professor at King's College London), who set out the landscape for the adoption of AI, the potential opportunities from further innovation and the backdrop of copyright protection. Andrea commented that in discussions around AI: "the debate should focus more on opportunities than threats."
Panel 2: "The future in an AI world"
Our second panel, led by Joel Smith, followed up on potential developments after the UK Government's consultation, and discussed licensing opportunities and the current litigation landscape in UK courts. For the panel we were joined by Louise Dreadon (Director of Legal, Content, Sky), Serena Dederding (General Counsel and Company Secretary, Copyright Licensing Agency), and Jaani Riordan (Barrister, 8 New Square).
So far, most AI and copyright-related litigation has taken place in the United States, but in the UK the trial between Getty Images and Stability AI started on 9 June 2025, and it is likely to produce one of the first full trial judgments internationally to shed light on the complex questions in issue. Early reports suggest that the case will be hard fought, with the trial judge making a case management decision on the first day, around a key point on the pleadings, and a trade mark infringement argument based upon alleged damage to Getty Images' brand reputation through association with specific images already being referred to the Court of Appeal for review.
Jaani Riordan from 8 New Square highlighted some of the issues of principle that may arise in copyright litigation, which range from what sources of training data have been used, where training has occurred, and who is the relevant person who performs the acts leading to output being generated. Jaani also identified some of the potential policy options for legislative reform. Insofar as disputes over the reasonableness of licence terms may in future arise, Jaani reminded us that the UK already has a Copyright Tribunal that can handle such disputes, but noted that, "The Tribunal's jurisdiction is statutory and is subject to territorial limitations. For example, it does not have jurisdiction under section 126 of the 1988 Act to set the terms of licences of foreign copyrights. If Britain would like to become a forum of choice for licensing disputes, consideration should be given to enlarging the powers of the Tribunal to make decisions in relation to multi-territorial licences."
Louise Dreadon considered that "a direct licensing model is probably the correct direction to go in" as a way of resolving disputes over AI training using copyright-protected materials. This approach would protect and fairly remunerate content creators, which is paramount to ensuring that the UK creative industry, which generates billions for the UK economy every year, is not adversely impacted. Dreadon also commented that a robust legal framework to govern such arrangements is needed to protect technology companies in future.
This was echoed by Serena Dederding from the Copyright Licensing Agency who added that licensing is already occurring in the AI sector, including in the UK, and stated that: "Licensing, together with transparency, builds trust in AI."
We will be keeping our clients informed about further developments as they happen in this fast-moving area of policy, litigation and evolving commercial offerings.
Explore our website for more information about Simmons & Simmons' copyright and AI practices.