The EU ambition was always to develop and deploy ethical and trustworthy artificial intelligence (AI). “Though Europe’s AI systems might not be as fast as those in other regions of the world, trust in our AI would stimulate demand and deliver commercial value,” explains Kai Zenner. “And part of that vision was safety and liability. If something went wrong, or caused harm or damage, the route to redress would be transparent.”
Kai’s boss is Axel Voss, a Member of the European Parliament and rapporteur for the AI Liability Directive (AILD). Voss has long backed liability as a core component of the EU’s digital strategy. Yet, in May this year, the European Commission (EC) proposed withdrawing the AILD from its legislative agenda, citing “no foreseeable agreement”.
The AI liability imperative
Though it’s on hold for now, Kai argues that the AILD, or an AI Liability Regulation (AILR), would create opportunities and protections without being overly onerous:
No regulatory burden: As an ex-post liability framework, the AILD does not introduce new legal obligations or require changes to AI-system design or production. Yes, it will likely mean that companies need insurance, but 85 per cent of all products and services sold on the internal market are already insured, according to the EC.
Covers AI-specific risks: The AILD would cover AI-specific risks, including bias, discrimination, opacity and hallucinations. Neither the existing ex-post Product Liability Directive nor the ex-ante product safety laws adequately address the unique liability characteristics of AI systems.
Full harmonisation of EU liability laws: An ecosystem of harmonised EU liability laws is integral to the EU’s AI strategy. It would increase trust, reduce legal uncertainty, cut litigation costs and court overload, and give investors confidence. In short, harmonisation would support both consumer safety and growth in Europe’s AI market.
Protects SMEs from disproportionate liability and market imbalance: Small and medium-sized enterprises (SMEs) face disproportionate liability claims because risk shifts downstream from large tech companies.
A joint-and-several-liability regime along the AI value chain would ensure liability risk is fairly shared, requiring the entities responsible for harm or damage to contribute proportionately to redress. Protections for AI developers and deployers, such as presumptions of equal shares, support clauses and a ban on unfair contractual terms, would further support SMEs.
Promotes innovation and fair competition: A study by the European Parliament Research Service found that an overhaul of the EU civil liability regime could add up to €498.3bn in value to the EU economy by 2030.
Why axing AILD could undermine Europe’s digital future
With the EU now unlikely to proceed with the AILD or an AILR, all roads point to the Draghi Report on European competitiveness. Authored in 2024 by the Italian economist Mario Draghi, it stresses the importance of strengthening the EU’s digital capabilities and urges greater investment in digital infrastructure, AI and data systems. It also calls for coordinated action to close the EU’s digital gap with the US and China.
Though Kai broadly agrees with Draghi’s findings, he says four challenges and risks persist.
First, in the wake of the AI Act, EU companies are hesitant to invest in and develop AI. Many find it difficult to work out whether their AI product counts as high-risk or prohibited under the Act, so they source their AI from US tech corporations instead, harming EU competitiveness.
Meanwhile, the New Legislative Framework (NLF) was not built with AI or digital systems in mind. Adopted in 2008 for tangible goods that don’t change after sale, the NLF is a poor fit for self-learning, dynamic and constantly evolving AI systems, which often serve more than one intended purpose. Because such systems are difficult to contain within traditional product frameworks, companies will have to check them regularly to make sure they continue to conform with the NLF’s high-risk rules.
This lack of certainty is driving companies to pay for third-party conformity assessments, even though they aren’t necessary for most high-risk AI systems. As a consequence, an AI-specific auditing industry is emerging, which was never the EU’s intention.
And, finally, the complexity of the EU governance system discourages both investment and talent. Developers, investors and employees will take their resources and skills elsewhere.
In Kai’s view, these four ticking time bombs could undermine the European way on AI. He hopes it is not too late for the Commission to rethink its position on AI liability and, in doing so, secure for the trading bloc a competitive edge built on a commitment to transparent, ethical and trustworthy AI.

