What do we mean by AI disputes?
- AI disputes arise from the deployment and use of artificial intelligence systems across organisations and sectors.
- Key risks stem from AI autonomy, technical complexity, and opacity, which can make outcomes difficult to diagnose or explain.
- These characteristics create challenges around attributing liability and identifying bias.
Why AI disputes – and why now?
- Organisations across sectors are rapidly adopting AI technologies.
- Disputes are already emerging, but there remains time to prepare.
- 2026 will be a pivotal year, with enforcement of the EU AI Act beginning in August.
- AI disputes are not confined to technology companies; any organisation using AI may be affected.
How AI risks are manifesting
Regulatory risk
AI-specific regulatory regimes (such as the EU AI Act and deepfake legislation) sit alongside adjacent regimes, including privacy, consumer protection, and competition law.
Civil liability risk
Civil disputes may arise from contract claims, negligence and product liability, intellectual property infringement, and consumer or collective actions.
What we are already seeing
Copyright and IP: Claims relating to training AI models on protected content and outputs that resemble original works.
Data protection and privacy: Use of personal data in training sets and risks of models leaking information.
Consumer class actions: Allegations of misleading or biased outputs, often attractive to litigation funders.
Professional negligence: Over-reliance on AI leading to errors, and disputes over what amounts to reasonable reliance.
What we expect to see next
- EU AI Act enforcement from August 2026, involving over 100 national regulators and the EU AI Office.
- Product liability claims under the amended Product Liability Directive, effective from December 2026.
- Increased AI incident reporting obligations under the AI Act and cyber-security regimes.
- Follow-on civil claims arising from regulatory investigations and findings.
Practical steps for organisations
Be incident-ready: Have a clear internal response plan for AI incidents or pre-action correspondence.
Know your AI: Be able to explain how your AI systems reach decisions so that those decisions can be defended if challenged.
Map your evidence: Understand what logs, data, and documentation exist and how they can be preserved (an illustrative sketch follows this list).
Protect privilege: Avoid using public AI tools for confidential or privileged material.
Audit contracts: Ensure indemnities, warranties, and dispute resolution clauses address AI-specific risks.
Build governance structures: Connect legal, risk, IT, and business teams with clear escalation routes.
Conduct regular AI audits: Check that AI systems operate as intended and capture evolving use cases.
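The evidence-mapping and audit points above can be supported by relatively simple internal tooling. The sketch below is a minimal, illustrative example in Python; the function name, record fields, and file format are assumptions for the purpose of illustration rather than a prescribed standard. It shows one way an organisation might record AI-assisted decisions in an append-only log so that inputs, outputs, model versions, and any human sign-off can be preserved and checked for later alteration if a dispute arises.

```python
# Illustrative sketch only: names, fields, and file format are assumptions,
# not a prescribed or regulator-endorsed logging standard.
import hashlib
import json
from datetime import datetime, timezone


def log_ai_decision(model_id: str, model_version: str, inputs: dict, output: str,
                    reviewer: str | None = None,
                    log_path: str = "ai_decision_log.jsonl") -> dict:
    """Append one AI decision record to a JSON Lines log with a content hash."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,  # record any human-in-the-loop sign-off
    }
    # Hash the record so later alteration of the entry can be detected.
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record


if __name__ == "__main__":
    # Hypothetical example: logging a credit-screening recommendation.
    log_ai_decision(
        model_id="credit-risk-screener",
        model_version="2.3.1",
        inputs={"application_id": "APP-001"},
        output="refer for manual review",
        reviewer="j.smith",
    )
```

The same record structure can serve both routine audits and incident response, since each entry ties a specific output to the model version and inputs that produced it.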
This webinar series
This session forms part one of an eight-part webinar series running from February to December 2026, covering:
- Regulatory enforcement
- Class actions
- Negligence
- Intellectual property
- Product liability
- AI incident response
- Evidentiary issues
- Future-proofing against AI risk
Register to join us for the rest of the webinar series here.
Key contacts
- Minesh Tanna – Partner, Disputes & Investigations, Global AI Lead (minesh.tanna@simmons-simmons.com)
- Jonathan Schuman – Managing Associate, Disputes & Investigations (jonathan.schuman@simmons-simmons.com)
- William Dunning – Managing Associate, Disputes & Investigations (william.dunning@simmons-simmons.com)
- Hannah Sherriff – Supervising Associate, Disputes & Investigations (hannah.sherriff@simmons-simmons.com)
