Financial Markets Disputes View: December 2025

This December edition covers AI risks in courts, compliance updates, crypto disputes, FCA/ICO actions, AMF rulings, and key financial cases.

19 December 2025


UK Disputes & Investigations Predictions for 2026

2026 is set to be defined by fragmentation across regulation, technology, geopolitics and financial markets, creating pressure points for businesses through intensified scrutiny of asset valuations, heightened corporate accountability, escalating AI and cyber risks, and increasing cross-border complexity. Our 2026 UK disputes predictions examine how these challenges may develop, where disputes risks and regulatory issues are likely to emerge, and the practical implications for boards and legal teams. Read our predictions here. You can also listen to our corresponding podcast here.

Use of AI in court

The English courts’ attitude to AI is developing fast. Emphasising their capacity to adapt to new developments, the judicial leadership have made a point of embracing AI in numerous speeches. However, more recently we have seen a growing concern over pleadings and submissions relying upon authorities which either don’t exist (the hallucinations of Large Language Models (LLMs)) or are irrelevant to the specific legal context. A number of barristers and solicitors have already been referred to their regulators for unintentionally misleading the court with AI-generated submissions.

A couple of very recent cases serve to emphasise the point. In the first case, a Litigant in Person had submitted a skeleton that referred to "bogus" authorities no doubt "falsely created by AI". The court was at pains to emphasise that "reliance upon false citations" is just as unsatisfactory for LIPs but warned that "where (as seems at least possible here) the citation was included in a document authored or reviewed by a lawyer, without attribution, whether for reward or pro-bono, for use by the litigant in person, that lawyer may, upon identification, be subject to a reference for misconduct or potential contempt".

In the second case, a wasted costs order was made against a firm of solicitors which cited two fictitious authorities in an application to amend Particulars of Claim. Placing false material before the court was improper, unreasonable and negligent conduct, and justified a wasted costs order under CPR Part 46.8. Referring to a "significant and growing problem", HHJ Charman noted that lawyers who cite fictitious cases "must face serious consequences". Apparently, the draft witness statement that included the two fake cases had been generated using "LEAP legal software", which includes "a built-in research function that automatically suggests case law". The software appears to be in wide use and is approved by the Law Society.

Speaking at a recent conference, Mr Justice Waksman also warned about the dangers of AI in the context of experts' reports. He suggested that a solicitor who insisted on an AI-generated draft expert report would be in serious breach of their duty. Likewise, an expert who accepted instructions on that basis would be in breach of theirs.

We expect to see a consultation in 2026 on new court rules governing the use of AI in proceedings.

Do LLMs actually store data?

From a legal perspective the answer at present would appear to be maybe, or maybe not, depending on where you are. Two recent cases demonstrate the problem.

In Getty Images v Stability AI, a case involving copyrighted images, the High Court concluded that they do not. "While it is true that the model weights are altered during training by exposure to Copyright Works, by the end of that process the Model itself does not store any of those Copyright Works; the model weights are not themselves an infringing copy and they do not store an infringing copy. They are purely the product of the patterns and features which they have learnt over time during the training process."

In a case involving song lyrics, the Munich Regional Court I ruled that they do (Case No. 42 O 14139/24). The song lyrics were "contained in the training data of the LLM" and were output "after entering simple prompts". Although the court broadly described the functioning of an LLM, the judgment rests primarily on the fact that the song lyrics were used in model training and, in combination with a specific prompt, were partially output. This misjudges how an LLM works: output is not the same as input. An output is instead the product of the interplay between training and model architecture (on the LLM side) and prompting via an AI system (on the user side). Given the importance of the question whether LLMs store data, we expect that, for the EU, it will ultimately be clarified by the ECJ. Read more on the case here.

Practical Insights for Navigating Crypto and AI Disputes

The business landscape is being reshaped by the rise of cryptocurrencies, digital assets, and artificial intelligence. These technologies promise efficiency and innovation, but they also bring new types of disputes and legal uncertainties. This note from Lijun Chui, a partner in our Singapore office, distils the latest thinking and practical tips for navigating disputes in this fast-moving space. Read more here.

SFO Compliance Programme Guide

The SFO has issued refreshed guidance on corporate compliance programmes, examining how it assesses their effectiveness and how its findings influence decisions on prosecution and resolution. The guidance clarifies the SFO's expectations under the Bribery Act 2010 and the Economic Crime and Corporate Transparency Act (ECCTA) and signals greater alignment with international regulators. The updated guidance does not introduce new legal obligations or concepts but provides a consolidated and transparent framework for corporates to benchmark and evidence the effectiveness of their compliance arrangements, particularly in light of the new ECCTA failure-to-prevent fraud offence. Read more here.

Financial crime risks – systems and controls

The FCA's Strategy for 2025 to 2030 identifies the fight against financial crime as one of its four priorities, and its activities over the last year have demonstrated a clear intention to follow through. Recent developments suggest that there will be no let-up.

The FCA has just published the findings from a multi-firm review focused on firms' business-wide risk assessment (BWRA) and customer risk assessment (CRA) processes. The key findings centre on how firms identify, understand, and manage risk. Firms' controls were measured against the Money Laundering Regulations 2017, the FCA's own Financial Crime Guide and SYSC, as well as guidance provided by the JMLSG and FATF. Most firms reviewed had a BWRA, but few were identifying relevant risks and tailoring the BWRA to their specific business. The FCA also expressed concern that some firms could not sufficiently explain how they are managing and mitigating identified risks. In terms of next steps, the FCA expects firms to consider its findings and examples of good and bad practice within the context of their own businesses, to continue to review their risk-based approach to systems and controls, and to put in place "robust" financial crime controls to manage and mitigate risks. It will continue to monitor progress through supervisory work.

Another FCA survey has found that two-thirds of corporate finance firms, which help businesses raise money by connecting them with investors or lenders, are at risk of non-compliance with AML obligations. Key shortcomings included the absence of a documented business-wide risk assessment, missing evidence of customer due diligence, and gaps in the oversight of appointed representatives. The FCA emphasised again that firms must urgently "address any gaps in their financial crime control frameworks". Read more here.

Finally, we understand that the FCA has sent a survey to asset managers and alternatives firms asking for more information on their anti-money-laundering, counter-terrorist-financing and counter-proliferation-financing processes. The FCA appears to be examining financial crime risks inherent in their businesses as well as their systems and controls arrangements and sanctions screening processes. Expect more on this soon.

New ICO enforcement guidance

The ICO's decision not to investigate the data breach by the Ministry of Defence which exposed details of some 19,000 Afghans who had assisted UK forces has come under intense scrutiny in recent months. In particular, it brought into stark focus the regulator's approach to weighing cost against benefit when deciding whether or not to open a major investigation. In his evidence to the House of Commons Science, Innovation and Technology Committee, John Edwards, the Information Commissioner, had this to say: "We made a decision about where we deploy our resources. We are always making trade-offs. What would we have been wanting to find out — that a spreadsheet was sent? We knew that. That it contained concealed information? We knew that. That it was sent from the third party? Everything that could have been found was already known. In those circumstances, I think my staff quite properly made the decision that our resources are better deployed not with the single tool of investigation but with other means of influence."

The ICO is now consulting on new guidance about the process it follows when taking enforcement action using its powers under data protection legislation. This contains some detailed provisions explaining the approach to opening an investigation. The factors listed include the risk of harm to people caused by the processing, the scale of the actual or potential impact of the processing, the extent to which opening an investigation would support economic growth and improve compliance, whether it would support the ICO’s strategic objectives and the resource implications and risk involved in opening an investigation. The new transparency around this process is welcome. However, looking at these factors, it is still quite difficult to see how the Afghan leak decision was reached without significant weight being given to the fact that the source of the breach was a government department.

The consultation closes on 23 January 2026.

AMF decisions on distributor commissions

Recent rulings by the French financial markets regulator (AMF) have tightened the rules regarding the payment of commissions to fund distributors. According to the Eternam decision, the management company must inform investors in advance, clearly and explicitly in the fund documentation, about any distributor commissions, including their nature and the method of calculation. Any lack of clarity or omission in this disclosure constitutes a breach, even if the information is corrected at a later stage.

The Altaroc decision adds that such payments are permitted only if the management company can demonstrate, with concrete records, that these commissions enhance the service provided to clients; this requirement applies to closed-ended funds as well. In summary, ex-ante transparency and documented service enhancement are both mandatory. Without both elements, distributor commissions are in breach of Article 24 of Regulation (EU) 231/2013.

Round-up of cases

Cases catching our eye this month included a Court of Appeal decision which examined, amongst other issues, client classification in the context of an appeal from a FOS decision.

Linear Investments Ltd v Financial Ombudsman Service Ltd [2025] EWCA Civ 1369 concerned the regulatory framework for categorising clients under COBS 3 in the FCA Handbook. The court upheld the Ombudsman's findings that Linear had failed to conduct an adequate assessment of Professor Willcocks' expertise under COBS 3.5.3R when accepting him as an "elective professional client". Professor Willcocks had ticked several boxes in his account opening form indicating that he had experience of trading equities and CFDs, but he had not provided any further evidence as envisaged by the form. Linear had argued that the ticked boxes were more than mere "self-certification" and were unambiguous representations of fact as to his trading experience in CFDs. Snowden LJ indicated that he would have been inclined to accept this had the documentation been properly completed and accompanied by evidence. If a representation is unclear or incomplete, or if there are circumstances which suggest that it may not be accurate and call for further clarification or inquiry, the firm will not be able to hold the client to it.

This analysis is consistent with existing financial services case law arising out of disputes over the classification of clients (eg Wilson v MF Global UK [2011] EWHC 138 (QB)). Such cases have generally accepted that although the starting point for a firm's assessment of a client's knowledge and expertise may be the representations made by its client, the firm will not be able to rely upon those representations if there is a reason to make further inquiries. The case was remitted to the Ombudsman to determine an appropriate reduction for Professor Willcocks' contributory negligence.

In URE Energy Ltd v Notting Hill Genesis [2025] EWCA Civ 1407 it was held that a party with an express contractual right to terminate a contract in certain events, who continues to perform for a period of months after such an event has occurred, is entitled to say that its conduct does not amount to an election to affirm the contract because it did not know that the contract entitled it to terminate. There is no rule of law that, for the purpose of the principle of waiver by election, a party is deemed to know the terms of its contract. Whether it has the relevant knowledge is a question of fact. Only where “blind-eye knowledge” could be shown would a party be deemed to have knowledge.

In case you missed it

Contract Masterclass Webinar Series – enhance your understanding of English contract law with concise 30-minute sessions. Covering recent legal developments, drafting tips, and dispute risks, these webinars are designed to keep you ahead. Watch on demand here.

Privilege & AI: Pitfalls and Tips for Legal Protection – as generative AI tools are increasingly used in the context of legal advice, important questions arise around the availability of legal privilege. View our webinar on demand.

Travel Agent(ic): Everything legal you need to know as you plot your journey to implementing Agentic AI – during this webinar, our experts discussed topics ranging from contracting challenges, the state of AI regulation and its impact, data privacy approaches, managing IP challenges, and more. Please click here to find the key takeaways from the webinar or here to watch on demand.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.