The world is becoming increasingly entwined with digital and interconnected technology. The revised EU product liability directive aims to ensure that liability laws evolve to protect consumers while encouraging technological innovation and sustainability.

The present article explores the key aspects of the revised EU product liability directive, how it addresses AI and software, its applicability to the Indian regulatory landscape, and how other jurisdictions are dealing with similar challenges.

The revised EU product liability directive officially came into force on December 8, 2024. However, its application will be gradual. EU member states must incorporate the provisions into their national laws by December 9, 2026.

This gives governments ample time to adapt existing frameworks and introduce new regulations compatible with the directive’s requirements. From that date, the new rules will govern products placed on the market within the EU, meaning that businesses must comply with the revised liability standards for products launched after that deadline.

The revised directive significantly broadens the scope of what constitutes a ‘product’. It incorporates risks presented by modern technologies and sustainability concerns:

Digital products: The directive now explicitly includes software, AI systems and digital files as products subject to liability rules. This is a critical step as software and AI are central components in many modern devices, from autonomous vehicles to home automation systems. For instance, if an AI-driven car’s algorithm causes a crash due to a defect, the manufacturer could be held liable under this directive.

Sustainability and circular economy: In line with the EU’s commitment to sustainability, the directive covers not only new but also refurbished, remanufactured and reused products. This aligns with the growing emphasis on the circular economy, where product lifecycle management becomes crucial. Products reused or refurbished must meet the same safety and liability standards as new ones, ensuring that consumers are protected regardless of a product’s age or origin.

IoT and connected systems: One of the most significant updates is the inclusion of interconnected systems, such as IoT devices. The directive recognises that interconnected devices, ranging from home appliances to industrial machines, can present complex risks. If a defect in one component of an interconnected system causes harm, the manufacturer of the faulty component can be held liable.

A primary objective of the revised directive is to strengthen consumer protections while simplifying the process of filing liability claims. The following changes have been made to address the challenges that consumers face when seeking compensation:

Simplified claims process: In the past, consumers were often required to provide complex technical evidence to prove a product was defective. Under the new rules, courts can presume a defect or a causal link where the technical or scientific complexity of the case makes proof excessively difficult, or where a manufacturer fails to disclose relevant evidence. This will simplify claims, especially in cases involving AI and IoT systems where the technical details can be beyond the average consumer’s understanding.

Wider coverage of damages: The revised directive extends liability coverage to include personal injuries, property damage and data loss resulting from defective products. This is particularly relevant in the digital age, where the failure of a software system or IoT device can lead to significant data breaches or personal harm.

Accountability for systemic failures: The directive holds manufacturers accountable for product defects, software bugs, cybersecurity breaches and failures in interconnected product ecosystems. For example, if a security vulnerability in an IoT device leads to a widespread data breach, the manufacturer could be held liable for the damage caused.

AI and digital products have presented new challenges for liability laws, especially when these systems evolve and learn autonomously. The revised directive explicitly addresses these challenges, providing clarity on the responsibilities of developers, manufacturers and businesses.

Ongoing risks and lifecycle: One of the key challenges with AI systems is that they can evolve or learn after they are placed on the market. This creates ongoing risks, as developers may lose control over a system’s operation once it starts self-learning. The new directive clarifies that developers are responsible for ensuring the safety of AI products throughout their lifecycle, even after launch. For instance, if an AI system in a healthcare application leads to an incorrect diagnosis due to an evolving algorithm, the developer remains liable for the defect.

Black box nature of AI: Many AI systems operate as ‘black boxes’, meaning their decision-making processes are not transparent or easily understandable. This can make it challenging to trace defects or determine faults. However, the directive helps to address this by allowing courts to infer liability based on circumstantial evidence. This will make it easier to hold developers accountable, even when the exact cause of failure is difficult to determine.

Algorithmic risks: The directive explicitly covers the risks posed by biased algorithms, faulty data inputs and cybersecurity flaws in AI systems. For example, if a financial AI system makes discriminatory lending decisions based on biased data, the system’s creators can be held liable for the harm caused to affected individuals.

Developers: The directive places a significant burden on AI developers to ensure their systems are rigorously tested, validated and monitored. Developers must maintain clear documentation about how their systems work, the potential risks, and the safeguards in place to mitigate them. Failure to comply could result in liability for harm caused by defects or malfunctions.

Businesses using AI: Organisations implementing AI systems must ensure that these systems are safely integrated into their operations. For instance, a company that adopts an AI-based facial recognition system must be responsible for ensuring that the system functions as intended and does not cause harm, such as wrongly identifying individuals.

Shared accountability: In cases where multiple entities contribute to the development of an AI system, such as developers, data providers and hardware manufacturers, the directive holds all parties accountable for any defects that arise. This shared liability ensures that no one party can evade responsibility for harm caused by AI failures.

As India continues to develop its technology sector, it faces a growing need to modernise its product liability laws to keep pace with digital and AI-driven products. There are several lessons that India can draw from the EU’s revised product liability directive:

India’s current legal framework, which focuses on tangible goods and physical defects, must expand to cover digital and AI-based products. For instance, the Consumer Protection Act, 2019 must be updated to address the unique risks posed by software, AI and IoT products. India’s legal system should ensure that developers and manufacturers of digital and connected products are held to clear standards of safety and accountability.

India should adopt provisions similar to those in the EU directive that simplify the claims process. Consumers often face challenges when dealing with complex technical products, particularly in AI and IoT cases. Simplifying the evidence requirements for claims and allowing courts to infer defects would make the process more accessible to ordinary consumers.

While India encourages innovation, particularly in AI and IoT, it is equally essential to ensure consumer safety. India can draw inspiration from the EU’s balanced approach, which encourages innovation while setting clear standards for accountability. Developers and companies should be incentivised to create new technologies that are effective, safe and ethical.

The US handles product liability through tort law, relying heavily on strict liability, negligence and warranty claims. However, the rise of AI and digital products has created gaps in the legal framework. As in the EU, the US needs to adapt its laws to address the risks posed by AI and IoT products. Some states, such as California, have taken steps toward regulating AI through data protection and transparency laws, but there is still a long way to go.

The UK has taken a proactive approach to AI accountability, mandating risk assessments and monitoring for AI systems. In its AI strategy, the UK government has emphasised the need for clear regulations on AI safety, which will likely influence future product liability frameworks. India can adopt similar guidelines to ensure the safe deployment of AI in both private and public sectors.

Japan has long been a leader in technology and innovation, and its legal framework for product liability reflects this. Japan focuses on continuous monitoring and risk assessment of AI and IoT products, ensuring that products are tested and certified before entering the market. India can benefit from this approach by establishing a robust system of pre-market and post-market assessments for AI-driven products.

The revised EU product liability directive represents a significant step forward in ensuring that consumers are protected in the digital age, where the risks associated with AI, software and IoT devices are constantly evolving. The EU’s approach balances consumer safety with fostering innovation and sustainability. However, as technology evolves, the challenge will be maintaining this balance while accounting for new risks and promoting continued growth in the technological sector.

Adopting similar measures is crucial for India, aligning with global standards and ensuring the country can harness its technological potential while safeguarding consumers. The lessons from the EU’s revised directive offer a solid foundation for India to develop a legal framework that embraces innovation without compromising safety, accountability and fairness. With the proper legal updates, India can navigate the complexities of the digital age while protecting its citizens and promoting technological advancement. (The Leaflet — IPA Service)