Assessing liability of AI-driven devices

Law and technology often seem to be at odds, mainly because law has always played second fiddle to technology. As technological development continues at an unparalleled pace, it is becoming nearly impossible for the law to respond in an appropriate and iterative way. This is largely because the law-making process is slow compared to technological change, particularly in the face of the rapid rise of Machine Learning ("ML").

Artificial Intelligence ("A.I."), the umbrella term under which machine learning falls, has become an all-pervading phenomenon, from self-driven cars to automated homes. A.I. is here to stay, and it brings with it questions of great legal anxiety, principally: who should be liable when an A.I.-driven device causes harm? These questions strike at the foundation of our legal system, which is predominantly based on the principle of assigning liability, be it civil or criminal.

In questions of civil liability, it is comparatively easier to answer who will be liable in the case of A.I. devices, since civil liability is largely based on compensatory jurisprudence: the victim of a tort (an infringement of a legal right) must be compensated for the wrong committed against them.

There are primarily three principles applied to civil liability in cases involving A.I. The first is the liability of the manufacturer who has not trained the device in question appropriately, commonly termed product liability. This is dealt with elaborately in the Consumer Protection Act, 2019, under which the question turns on the mechanism of the A.I. device itself: whether it was built correctly (following standard practice) and whether the data-set on which it was trained was accurate and inclusive.

The modus operandi of A.I. is based on the principle of finding a correlation between two variables; it does not work like a human brain, which tends to look for the causation behind an effect.

A.I. only finds a pattern between two variables. For example, if I feed a model data containing shoe sizes and salaries, two variables with no real relationship between them, machine learning will still seek out a correlation. If shoe size 9 maps to a salary of Rs 30,000, the model may conclude that shoe size 7 maps to about Rs 20,000. It therefore becomes important to assess the data-set used to train the A.I.-driven device in question.
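To make this concrete, here is a minimal sketch in Python, using scikit-learn and entirely made-up figures, of a model learning the spurious shoe-size/salary "relationship" described above. The data, numbers, and variable names are hypothetical illustrations, not drawn from any real system:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: shoe size (feature) vs. salary in Rs (target).
# There is no causal link here, only a pattern planted in the numbers.
shoe_size = np.array([[6], [7], [8], [9], [10]])
salary = np.array([15000, 20000, 25000, 30000, 35000])

model = LinearRegression().fit(shoe_size, salary)

# The model dutifully reproduces the spurious pattern it was fed:
print(model.predict(np.array([[9], [7]])))  # approx. [30000, 20000]
```

The model is not "wrong" in any mechanical sense; it has faithfully learned a correlation that happens to mean nothing, which is precisely why the quality of the training data matters.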

If the data-set is inaccurate, of poor quality, or carries an inherent discriminatory undertone, it will lead to flawed responses. That is why product liability is gaining traction across the globe, shifting liability to the manufacturer where a device does not function up to standard. However, predictive analysis is only a small slice of machine learning; more advanced A.I. can also contextualise speech and the sentiment underlying it.
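As a rough sketch of how a skewed data-set produces a discriminatory model, the following Python example trains a classifier on synthetic "approval" records in which one group was never approved, whatever its merits. The scenario, feature names, and data are all hypothetical, chosen only to illustrate the point:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
score = rng.normal(50, 10, size=n)   # a genuinely relevant feature
group = rng.integers(0, 2, size=n)   # a protected attribute (0 or 1)

# Biased historical labels: group 1 was never approved, regardless of score.
approved = ((score > 45) & (group == 0)).astype(int)

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, approved)

# Two applicants with an identical score receive different outcomes,
# because the model has absorbed the discrimination baked into the data.
print(model.predict([[60.0, 0.0], [60.0, 1.0]]))  # e.g. [1 0]
```

Nothing in the algorithm itself is malicious; the discrimination enters through the training data, which is why product liability analysis asks whether the data-set was accurate and inclusive.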

The second basis of civil liability in A.I. cases is the liability of the user. In this scenario, A.I. devices are treated as assistive tools, meaning that if the user does not take sufficient care in operating the device, liability shifts to the user. This scenario is commonly discussed in the legal fraternity, especially in cases of medical negligence.

Hospitals and doctors use advanced A.I.-driven medical devices to assist in conducting precise operations. In such cases, if the human user has predominant control over the device when a mishap occurs, liability shifts to the user. The third scenario of liability rests on the principle of 'Collective Risk Management', the idea being to create a corpus (a pool of funds) from which victims of a tort can be compensated.

This is based on the idea of creating space for innovation: whenever a new technology with a potentially great impact on society is introduced, a corpus may be created, much like an insurance scheme, to compensate victims of a tort. The criminal liability model for A.I. rests on a premise similar to the civil one: liability falls on either the user or the manufacturer.

However, it is pertinent to mention that the main principles underlying criminal liability are 'Mens Rea' (a guilty mind, or intention) and 'Actus Reus' (a guilty act); both must be satisfied before someone can be held criminally liable. Yet when Prof. Gabriel Hallevy writes about the direct liability of A.I. (The Criminal Liability of Artificial Intelligence Entities), he discusses cases in which these principles can be relaxed, for example offences committed by an infant, or by someone under the influence of alcohol or of unsound mind.

In all these cases the standards may be relaxed. All these scenarios bring to the fore a larger flaw in our legal system, which is predominantly built on the principle of imposing liability on a person. The fact remains that A.I.-driven technologies are breaking this matrix, so it is time to devise a new model of liability to address the questions raised by such novel technologies.
