AI Liability: EU Commission withdraws Proposal for Directive
No harmonization of AI liability
On February 12, 2025, the EU Commission withdrew its proposal for an AI Liability Directive. The Commission justified the decision by stating that no agreement on the directive was foreseeable.
Proposed AI Liability Directive
The proposed directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) was intended to establish certain rules for asserting non-contractual civil claims for damage caused by artificial intelligence. In a survey conducted in 2020, liability risk was cited as the most important obstacle preventing companies from using AI.
The AI Liability Directive was part of a package of measures to support the introduction of AI in Europe, alongside the AI Act and the revision of EU product safety law. The AI Act is in force; Chapters I and II (General Provisions and Prohibited AI Practices) have applied since February 2, 2025. The Product Safety Regulation is also in force and has applied since December 13, 2024.
Strict liability?
The proposed AI Liability Directive did not provide for strict liability for AI systems.
However, it should be noted that the new Product Liability Directive now expressly clarifies in Art. 4 No. 1 that software is a product within the meaning of the Directive, regardless of whether it is embodied on a physical medium. This means that strict product liability also applies to AI systems, since every AI system is based on software. Until now, it had been disputed whether software constitutes a product within the meaning of product liability law.
Obligation to disclose evidence
The proposed AI Liability Directive was intended to significantly ease the burden of proof for those injured by an AI system. The provider or user of a high-risk AI system would have been obliged to disclose relevant evidence, and the civil courts would have been given the power to order such disclosure in damages proceedings.
Reversal of the burden of proof
A refusal to disclose would then have given rise to a rebuttable presumption of non-compliance with a relevant duty of care on the part of the provider or user.
Furthermore, even for AI systems that are not high-risk, a reversal of the burden of proof for the causal link between the defendant's fault and the damage would have applied under certain conditions: in those cases, the defendant would have had to prove that it was not responsible for the damage.
These rules will not be enacted for the time being. It remains to be seen whether the Commission will take up the plan for an AI Liability Directive again at a later date.
Significance of the Decision
The Commission’s decision has been welcomed by the digital industry in particular.
No disclosure of evidence
From the point of view of the administration of justice, the decision is also to be welcomed.
Obliging the Member States to empower their courts to order the disclosure of evidence would have been a significant encroachment on the Member States' civil procedural law. This applies in particular to Member States such as Germany, whose civil procedure follows the principle of party presentation of evidence: the parties must produce the evidence, and the court neither investigates the facts of the case nor obtains evidence of its own motion. This principle is an outgrowth of the party autonomy that governs civil law and is therefore ultimately one of the fundamental civil liberties.
No Reversal of the Burden of Proof?
However, with regard to the rules on the reversal of the burden of proof, which will likewise not be enacted for the time being, the significance of the Commission's decision should not be overestimated.
This is because, as early as 1968, the German Federal Court of Justice established a reversal of the burden of proof with regard to fault within the framework of the generally fault-based non-contractual producer liability under Section 823 (1) of the German Civil Code. In practice, there are therefore hardly any remaining differences in Germany between liability under Section 823 (1) of the Civil Code and strict liability under the Product Liability Act, which came into force in 1990.
German law also provides for a reversal of the burden of proof for fault in contractual liability (Section 280 (1) sentence 2 of the Civil Code).
These rules on the reversal of the burden of proof naturally also apply to claims for damages in which an AI system was involved in causing the damage.
Conclusion
The Commission's decision not to harmonize this area for the time being, leaving the regulation of AI liability to the Member States, is a step in the right direction. After all, given the near-total dominance of US companies in this field, it is questionable whether the sometimes excessive EU-level regulation of IT law really helps the European digital economy.