Legal professionals using AI: English courts warn about serious risks

The English courts have issued a stark reminder of the serious risks of legal professionals using AI in court documents, including risks to privilege.

05 March 2026

Publication


In UK v Secretary of State for the Home Department [2026] UKUT 81, the Upper Tribunal (Immigration and Asylum Chamber) was asked to determine, in two separate instances, whether the lawyers interacting with the court had conducted themselves in accordance with proper professional standards (the so-called ‘Hamid’ jurisdiction).

In this case, the legal professionals had either themselves used AI, or their junior staff had used AI, in the context of immigration judicial review applications. This resulted in false case citations being included in the documents put before the court.

The Tribunal decided that the legal professionals had not conducted themselves properly and that a referral to the relevant regulator was warranted, though in one case the solicitor had already self-referred. In doing so, the Tribunal made various remarks about the serious risks of legal professionals using AI:

  • First, the Tribunal issued an important reminder that “any legal practitioner who commences or pursues proceedings in a court or tribunal owes certain obligations to it” (para 36). This case was “principally about supervision and the obligation to ensure that the tribunal is not misled” (para 37).

    It didn’t matter in this case how the false case citations came about: “the point is that the qualified legal professional with conduct of the matter is expected to ensure that such documents are checked, that errors are identified, and that only accurate documents are sent to the tribunal. To fail to conduct such checks is wasteful of the tribunal’s time.” (para 37).

  • Second, on the issue of supervising junior practitioners who may use AI, the Tribunal said:

    “A solicitor or other legal professional who delegates their work to another fee-earner remains responsible for the supervision of their work and for ensuring its accuracy. Such supervisors must ensure that fee-earners under their supervision are aware of the dangers of using non-specialist AI for legal research and drafting. Failures to do so, or to undertake appropriate checks on the drafting of fee-earners is likely to result in a referral to the Solicitors Regulation Authority or other regulatory body. A supervisor who fails to ensure that the work of a more junior fee-earner does not contain false cases or citations is likely to be more culpable than a lawyer who fails to ensure that his own work is free from such “hallucinations”.” (para 58)

  • Third, the Tribunal issued an important reminder about the perils of using open AI tools (e.g. the public version of ChatGPT), as opposed to closed AI tools (e.g. private instances of Microsoft Copilot):

    “Uploading confidential documents into an open-source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege, and any such conduct might itself warrant referral to the regulatory body and should, in any event, be referred to the Information Commissioner’s Office.” (para 60)

The Tribunal stressed that it was not seeking to dampen the use of AI:

“We do not suggest for a moment that the use of legal AI programmes by properly trained professionals is anything other than a step forward in legal practice. The software which is currently available is of enormous benefit in properly focused legal research, as it is in other contexts such as large disclosure exercises.” (para 18)

But the lessons from this case are clear: where lawyers use open, non-confidential AI tools, resulting in a breach of confidentiality and a loss of privilege, or where lawyers are involved (directly or as a supervisor) in the creation of court documents which include ‘hallucinated’ case citations, they run a serious risk of referral to the SRA (or equivalent regulator).

As the impact of AI use in court proceedings has become clear, various bodies have responded. The Law Society has issued Guidance on Generative AI, to which the Upper Tribunal referred in this case. In February 2026, the Academy of Experts issued Guidance for Expert Witnesses on the use of AI. The Civil Justice Council has also launched a consultation on the Use of AI in Preparing Court Documents, which closes on 14 April 2026.

On 04 March 2026, Simmons & Simmons published the AI & Legal Privilege Guide and Policy Framework to assist lawyers and clients in navigating the use of AI without compromising privilege.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.