In our previous blog posts (EY Law BE | Artificial intelligence and (product) liability, EY Law BE | Prepare your company: the EU’s revised Product Liability…), we delved into two significant proposals of the European Commission: the Product Liability Directive (“PLD”) and the AI Liability Directive (“AILD”). The two proposals establish different regimes and can therefore complement each other as well as the AI Act.
The goal of the AILD was to give victims of AI-caused harm the same possibilities for redress as victims of any other type of harm.
Recently, the European Commission has decided not to renew discussions on the proposed AILD.
Background: the need for AI liability rules
The AILD was first introduced in 2022, two years prior to the finalization of the AI Act.
The AILD was designed to introduce specific rules for damages caused by AI, holding providers of AI systems accountable. It aimed to function as a mechanism for addressing harm after it occurs, in contrast to the AI Act, which focuses on preventing such harms in the first place.
One of the key features of the AILD was the establishment of a rebuttable presumption of causality, which would ease the burden of proof for victims seeking to establish that an AI system caused them harm. Additionally, the AILD sought to empower national courts to order the disclosure of evidence related to high-risk AI systems suspected of causing damage, thereby enhancing accountability.
The shift away from the AILD
Despite this initial proposal, the European Commission has recently announced that it will not renew discussions on the draft legislation. This decision, noted in the Commission's 2025 work programme adopted on 11 February 2025, comes amid a lack of consensus among stakeholders and increasing pressure from the technology industry for simpler regulation.
Implications for the future of AI regulation
The decision of the European Commission highlights the complexities and challenges of regulating emerging technologies such as AI. While the initial proposal aimed to create a uniform framework for addressing damage caused by AI systems, the lack of consensus among stakeholders underscores the difficulty of balancing innovation with accountability.
AI legislation?
The AILD might not have made it through the legislative process, but in the meantime the AI Act has entered into force (on 2 August 2024). While it does not govern the liability of AI systems as such, it does impose obligations on various actors in the AI value chain, based on the risk an AI system poses.
The AI Act distinguishes between prohibited AI practices, high-risk AI systems, limited-risk AI systems and minimal-risk AI systems, with an additional layer of rules for general-purpose AI models and systems.
The prohibition of certain AI practices became applicable on 2 February 2025, with further obligations entering into force on 2 August 2025, 2 August 2026 and 2 August 2027.
Conclusion
The European Commission's proposal for the AILD represented a significant step towards addressing the unique challenges posed by AI. However, recent developments indicate that this initiative will not be taken further. The path forward will require a careful balance between fostering technological advancement and ensuring accountability in the deployment of AI systems.
As the Commission continues to refine its approach, stakeholders will be watching closely to see how it addresses the critical issues of liability, safety, and innovation in the AI landscape.
Curious to learn more about AI and liability? Read our previous blog posts linked above.
Talk to the authors
Ine Smisdom
Senior Associate
Digital Law | ICT