New Guidance for Expert Witnesses on AI

The Academy of Experts has published Guidance for Expert Witnesses on the use of Artificial Intelligence (AI), a framework to assist experts in using AI in their work.

24 February 2026


AI in Expert Evidence

AI technologies are being utilised in a variety of expert witness areas, from forensic analysis and medical diagnostics to financial modelling and digital forensics. The potential advantages - speed, accuracy, and objectivity - are significant, but so too are the risks, including hallucinations, bias, and the challenge of validating complex algorithms. Recognising these issues and the lack of existing guidance, the Academy of Experts, led by Membership Committee Chair and Simmons Partner and Global AI Lead, Minesh Tanna, has created a framework to help expert witnesses navigate the use of AI. The framework is intended to apply generally (rather than in any particular jurisdiction). 

The risk profiles of AI use

Importantly, the guidance differentiates between Prohibited, High Risk and Low Risk AI use, suggesting different approaches for each. It emphasises that experts remain ultimately responsible for their evidence and their duties to the court or tribunal.

An expert may be prohibited from using AI if the client engagement prohibits it, if it breaches any regulatory prohibition, or if it would breach the expert's duty of independence because it effectively outsources their opinion to the AI.

Examples of High Risk use are AI systems that analyse data, create counter-factual scenarios, generate content for the expert's report or are used to form the expert's opinion. Uses of this kind are likely to compromise the expert's duties to the court and would require a high level of human oversight and transparency in order to be justified.

Examples of Low Risk use include using research tools which may use AI, administrative use for document organisation, or spelling and grammar checking tools. These clearly pose a low risk of the expert's evidence misleading the court and may not need to be disclosed by the expert.

Helpfully, the guidance includes a section setting out a range of examples of AI use by experts and categorising these, in places adding a level of detail beyond the three main categories used.

Key Principles

The new guidance for expert witnesses emphasises several core principles, which apply particularly to High Risk uses:

  • Transparency: Experts should check whether they are permitted to use AI and clearly disclose to those instructing them when AI has been used in their analysis, including the nature and scope of the technology applied. They should explain why the particular AI tool was chosen and exactly how it was deployed, keeping a written record of key steps in the process.

  • Explainability: The expert must be able to explain how the AI system operates, its limitations, and the steps taken to validate its outputs.

  • Reliability: Evidence generated or assisted by AI must be robust, reproducible, and subjected to appropriate scrutiny.

  • Bias and Fairness: Experts should assess and disclose any known biases in the AI system or data set and discuss the potential impact on the findings.

  • Competence: Only those with sufficient knowledge and expertise in both their primary field and the relevant AI technology should use AI while acting as an expert witness. AI tools must be selected carefully for their purpose; experts should not simply default to those with which they are most familiar.

  • Vigilance and oversight: Experts should continually reflect on their use of AI and its appropriateness and ensure adequate human oversight to identify any potential errors or hallucinations.

What this means for you

The introductory wording to the guidance, from the former President of the Supreme Court Lord Neuberger, contemplates that a time will come when a professional may be criticised for not using AI to increase efficiency and accuracy. There is clearly no prohibition on many uses of AI by an expert, but great care and transparency are required.

The guidance encourages expert witnesses to consider disclosing their use of AI to the court or tribunal and the other side. Given the overriding duty to the court and the risk of it emerging under cross-examination that AI has been used, we would suggest that it will usually be appropriate for an expert report to set out how and why AI has been used where this is anything other than very low risk.

Experts and those instructing them will need to stay abreast of evolving legal and technical standards in AI, as well as new tools and capabilities. Open discussions about the potential use of AI before it is undertaken will be essential if parties are to avoid their expert's evidence being undermined by questions on their reliance on AI.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.