
This article is written by Aryashree Kunhambu, pursuing a Diploma in Cyber Law, FinTech Regulations, and Technology Contracts from LawSikho, and Akshita Rohatgi, a student of Guru Gobind Singh Indraprastha University, New Delhi.

Introduction

Artificial intelligence (AI), although not entirely new, is one of the hottest topics in the technology world, credited with disrupting the economy and favourably transforming society. It is rapidly entering the mainstream lives of ordinary users: with automated chatbots, digital voice assistants and smart home devices, AI is everywhere. Policy think tanks and individual experts specialising in the interface of technology and law have identified an interesting question concerning AI. Today, no legal authority, national or international, recognises AI as a subject of law. This means that when the implementation and operation of AI causes harm, bodily or otherwise, the AI itself cannot be held responsible. Who should be responsible for such damage, and whether legal personhood should be granted to AI for assessing liability, are among the questions covered in this article.

In 2015, a buyer going by the name ‘Random Darknet Shopper’ (RDS) bought drugs, a Hungarian passport, and a Sprite can with a hole in it. Then, it got arrested. For onlookers, the most intriguing part of the incident wasn’t RDS buying a baseball cap with a built-in camera or ‘Lord of the Rings’ e-books. It was that RDS wasn’t an actual person, but a robot.


RDS was programmed by a Swiss art group to purchase random items from the dark web for an art installation. The episode had an anti-climactic end when the robot was returned to its owners without any liability. The public prosecutor withdrew the prosecution, reasoning that the purchase of drugs was a ‘reasonable means’ of provoking public debate on the questions raised by the art installation. The drugs were meant only for display in the gallery, not for consumption. The purchase was therefore deemed safe and legal, and the art group escaped liability.

The RDS incident forms part of a broader trend of autonomous systems running on Artificial Intelligence (AI) algorithms clashing with law enforcement. It engenders a significant debate on who would be held liable in such cases.

What is artificial intelligence?

Artificial intelligence is the science of developing software and systems that can think intelligently, much like a human mind. Many AI systems are neural networks consisting of complex algorithms and data sets whose internal logic is generated by software rather than designed by humans. A problem is broken down into countless pieces of information and processed piece by piece to reach a realistic output. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision. The human mind often cannot follow the calculations or strategies an AI system applies to reach a particular decision. This is why the ‘black box paradox’, or ‘explainability issue’, arises when artificially intelligent systems meet legal liability.

Challenges and risks associated with AI

The legal issues arising from this liability paradox give rise to the following challenges:

  1. Definition and regulatory framework for the operation of AI;
  2. Liability of humans using IT mediums such as AI software to cause harm;
  3. Privacy concerns regarding the collection, use and storage of personal data by AI systems;
  4. Discrimination and bias by AI programs;
  5. Surveillance by the government via facial recognition technologies and biometric databases;
  6. Use of autonomous military weapons that rely on AI to decide when to kill someone;
  7. The liability of a self-driving car should it malfunction and crash.

The above-mentioned aspects are important areas of the larger issue of determining the liability of AI systems.

Artificial intelligence and determining liability

Ascertaining liability, civil and criminal, for damages or losses resulting from the activities of an AI is a matter of priority, because an AI exercises varying degrees of control over its own conduct. Any entity granted legal personhood under the law is capable of being entrusted with certain rights and duties. Whether legal personhood should be granted to AI may therefore be a forward-looking solution to the current liability problem; nevertheless, its merits and demerits must be analysed.

Black box paradox

A common problem foreseen by legal systems is that many companies use AI-powered models built on the premise that interpretability must be sacrificed for accuracy. These black-box models are created directly from data by an algorithm, which means that even the developer of the code cannot interpret how the variables are combined to reach the predicted output. The human mind and the neural networks in an AI do not function in the same manner; even if all the variables were listed out, the complex functions of the algorithm could not be dissected.
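To make the explainability issue concrete, consider the following purely illustrative Python sketch (it assumes the scikit-learn library and uses synthetic data; it is a hypothetical example, not a description of any system discussed in this article). Even though every learned parameter of the trained model can be listed, those numbers do not translate into a human-readable rule explaining why any particular decision was reached.

```python
# Hypothetical illustration of the "black box" problem (assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for whatever evidence the system "collects".
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# A small neural network whose decision logic is learned directly from data.
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

# Every learned weight is available for inspection...
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(f"Learned parameters: {n_params}")

# ...yet no listing of these numbers yields a rule such as "if A and B, then C"
# that explains why the model classifies any single input the way it does.
print("Prediction for the first sample:", model.predict(X[:1])[0])
```

This is the sense in which even the developer cannot reconstruct the reasoning behind a specific output, which is precisely the difficulty courts face when asked to trace causation or intent.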

This paradox matters because, under English law, a claimant seeking a remedy must show factual causation as well as legal causation. Both the facts establishing the AI’s illegal actions and the injury or damage those actions caused to the aggrieved party must be shown. In criminal cases, the actus reus and mens rea must be determined. Since there is no way of understanding the internal processing of data within the AI, ascertaining the mental element is practically impossible.

In some cases, however, even the human mind has exhibited ‘black box’ behaviour, where the actions taken by a person could not be justified or explained. Courts have previously held humans responsible in such cases on the basis of fault-based liability. Nevertheless, one can conclude that only a legal entity can be subjected to such sanctions.

Legal personhood and status of AI

Kenji Urada, an engineer working at a Kawasaki Heavy Industries plant, was killed by a robot deployed for specific manufacturing work. This was the world’s first reported death caused by a robot. The robot had not been switched off while Kenji was repairing it, and its system detected him as an obstacle. It then violently pushed him into an adjacent machine with its powerful hydraulic arm, killing him instantly. This incident took place in 1981, yet even after all this time, no criminal legal framework in the world offers any clarity on how to deal with instances where robots are involved in the commission of a crime or injury to an individual.

In Saudi Arabia, an artificially intelligent humanoid called Sophia has been granted citizenship, with rights and duties like those of other citizens. In India, however, AI currently has no legal status, as the field is still at a nascent stage. The question of ascertaining the civil and criminal liability of an AI entity is conditioned upon whether legal personhood is conferred upon it. While there may be moral and legal implications, practical and financial reasons may become an important factor in granting legal personhood to AI systems in the future.

Criminal liability 

It has been said that AI has the potential to become more ‘human-like’ than actual humans by studying and learning our capabilities. AI systems can imbibe knowledge from the different sources they interact with. Subsequently, they use this newly acquired knowledge to make decisions. Often, they can weigh different options and make decisions even better than humans do. This faculty has been dubbed ‘rudimentary consciousness’.

AI systems like Google Assistant or Alexa interact with us regularly. Most interactions are performed by the system itself, without a person programming every response. This active interaction with people makes it essential to define legal obligations and liabilities for acts committed by AI.

For instance, if an AI commits the offence of hate speech, sedition or incitement to violence, who will be held responsible? What if your smartwatch assistant tells you that people of a certain class or community do not deserve rights or are forceful and violent? What if an AI algorithm recommends that you kill someone and plans out the killing for you? If Rajnikanth makes a robot that goes on a killing spree, who would be responsible?

One of the most prominent schemes for determining criminal liability was postulated by Gabriel Hallevy, a noted legal researcher and lawyer. He proposed a three-fold model built around the essentials of criminal liability, i.e., actus reus (an act or omission), mens rea (the mental element) and strict liability offences (where mens rea is not required). The three models Hallevy proposed for examining offences committed by AI systems are:

  • The perpetration by another liability of AI 

A minor, a person of unsound mind, or an animal that commits a crime is treated as an innocent agent, since they lack the mental capacity required to form mens rea under criminal law. The same applies in cases of strict liability. However, if such an innocent agent is used as a medium by a perpetrator to advance illegal actions, the person giving the instructions is criminally liable. Under this model, therefore, the AI system is assumed to be an innocent agent, and the person giving it instructions is deemed the perpetrator.

In some cases, the offender does not have the capacity to understand the nature of the act they are committing or its consequences. In the landmark M’Naghten case, it was held that a person who does not know or understand the nature of the act committed by them, or the consequences it might produce, cannot be convicted of a crime. This corresponds to a well-settled principle in criminal law: actus non facit reum nisi mens sit rea, meaning an act is not illegal unless accompanied by a guilty mind. If there is no mental element (mens rea) behind the offence, the offender is usually absolved of liability.

AI systems are programmed to make choices based on the evidence they collect. They cannot choose to disregard evidence and refrain from an act unless programmed to abstain from it. It is argued that we cannot impose liability on an entity that lacks the independent capacity to choose what to do, since it does not have the guilty mind required for liability. This model settles that conundrum.

This perpetration by another model uses the doctrine of strict liability, akin to the vicarious liability commonly used to hold employers liable for acts committed by their employees or agents in the course of employment. It views AI systems as innocent agents of the people they work for. If the system commits a crime, the intention to commit it is attributed to the programmer or user. Even if the system’s actions were not planned, intentional or even reasonably foreseeable, the programmer or user would be held liable.

  • The natural probable consequence liability of AI 

Under this model, liability falls on an AI user or programmer who ought to have foreseen the offence committed by the AI, because any reasonable programmer or user would have seen it as a natural and probable consequence of their actions, and who should have prevented it by taking the necessary measures. Two outcomes are possible: where the AI commits an offence because of negligent use or programming, the AI itself is not held liable; but where it acts on its own, in deviation from its programming, the AI is held liable. For example, in the case of the Ahmedabad doctor who performed telerobotic surgery on a patient 32 km away, if the robot had started acting in a manner that its programming did not prescribe, the robot would be held liable for any harm it caused.

The natural probable consequence model considers AI a simple machine that follows the directions in its programming: an advanced machine working at the behest of its human master. It rests on the assumption that an AI system merely follows the commands of its creator, so the system is not responsible for its own actions.

The AI system runs on algorithms made by people who know the laws and moral principles governing society. It is the creator’s duty to ensure that the autonomous system does not violate these rules in its functioning; if it does, the creator must be held liable. Proponents treat AI as a machine to argue that its future acts can be predicted and are therefore foreseeable.

This model places the onus of due care and attention on the programmer. The programmer must take reasonable care and caution to avoid any harm that may be caused, program the system in a way that avoids risks, and provide warnings to end-users.

Criticism

Critics of this theory claim that it ignores what separates AI from other forms of technology: AI’s ability to learn and apply that learning in real-life scenarios. Where an AI system must choose a future course of action, it has a choice between the legally and morally justified act and the illegal or immoral one. The system holds the requisite rudimentary intelligence to process the data and knowledge needed for making decisions, and it then decides which course of action to pursue. Since this rudimentary intelligence determines which option to choose, the system is in control of its own actions.

This model holds a programmer liable even when the system was being used by another and learning from its environment. According to Sparrow in ‘Killer Robots’, this is as ill-reasoned as holding parents liable for the acts of children who have left their care. Even with the utmost care, the creator cannot possibly determine the exact future course of action. AI is needed precisely for tasks that people lack the skills or ability to perform. Especially in the case of advanced AI that learns and adapts to its surroundings, there is no way to predict how the system will behave. Holding creators to this high standard of care would inhibit the growth of the industry.

  • The direct liability of AI

This model covers acts an AI performs that are not dependent on the programmer or the user. In cases of strict liability, where mens rea need not be proven, the AI would be fully responsible. For example, if a self-driving car met with an accident because it was over-speeding, the car itself would be held liable, since over-speeding by a self-driving car is plainly prohibited and falls under strict liability.

Popular culture is rife with robots running on complex algorithms that keep learning from their past actions as well as their environment. From a legal standpoint, this highlights how difficult it often is to determine why an AI system performed a certain action. These stories are usually fictional accounts in which a well-intentioned creator built in various safeguards to stop the robot from getting out of control, yet the robot learned from its surroundings, devised a way to get rid of its creator, and started planning some variation of world destruction.

In this fictional world, who must be held liable? An essential principle of criminal and tortious liability is that a person is responsible only for those consequences of their actions that are reasonably foreseeable. Even with safeguards in place, plans for world domination are usually not reasonably foreseeable. Moreover, it feels wrong to affix accountability on a creator who had little idea of what the algorithm might learn.

The Direct Liability model argues that instead of the creator, liability must be affixed to the AI system itself. It asserts that AI systems have the rudimentary consciousness to independently make their own decisions. They possess the knowledge of the probable consequences of their actions along with the intent to cause harm. So, AI systems are autonomous beings that are not controlled by another. They must be accountable for their own actions.

Criticism

A prominent critique against punishing AI systems is that AI cannot actually be punished. If an AI system is convicted of an offence and subjected to negative treatment, such as being reprogrammed or terminated, this may not truly amount to punishment. According to HLA Hart, punishment involves pain or other unpleasant consequences. AI systems may resemble humans, but they cannot experience pain or pleasure; punishment is futile.

Further, AI systems work at the behest of a human benefactor and not for themselves. If they have no rights, they should not have any liabilities either. Those who enjoy the positive consequences of the system’s acts must be held accountable for its negative acts too.

Another significant critique comes from pragmatists who point out that degrees of autonomy in AI systems vary significantly. The level of independence this model assumes is futuristic and has not yet been reached. Most AI systems are not autonomous enough to make truly independent decisions; the owner of the system gives them the directions on which those decisions rest. So, placing blanket liability on the system, with none on the human perpetrator for whose benefit it works, would be a blunder.

Corporate criminal liability

Currently, there is no system to hold these machines responsible for their acts. Consequently, businesses have free rein to take risks and use these systems at the expense of society at large. The doctrine of corporate criminal liability offers a resolution to that.

Corresponding to the concept of strict liability is corporate criminal liability. Strict corporate liability applies where a corporation carries on an inherently dangerous activity: the risk is known, and the corporation as a whole is blamed for the consequences if the activity harms society.

This doctrine gives corporations the status of legal persons. With it, the corporation is assigned obligations as well as liabilities. This model uses organisational blame to incentivise businesses to take reasonable care and precaution in their experiments.

In India, corporations are recognised as juristic persons. The Supreme Court in Standard Chartered Bank v. Directorate of Enforcement (2005) held that corporations can be held liable for acts committed by them. While punishments of the person, such as imprisonment, cannot be meted out to a juristic person, corporations can be made to pay hefty fines.

However, there is one significant drawback to this model. Victims of crimes by AI systems would face the costs of suing corporations with significant power, often located in foreign countries. This might end up making justice inaccessible to them.

Criticism

Criticisms of the perpetration by another model of AI liability correspond with those mounted against the doctrine of strict liability. Even if reasonable precautions were taken, an employer is held liable for an offence that occurred because of the independent fault of the agent. It is argued that this imposes too great a burden on creators and inhibits the growth of the industry.

Further, it is contended that the agents themselves should be liable for their acts since they possess the requisites of the particular offence. In the debate around AI, supporters of the direct liability model claim that AI systems have the requisite information or knowledge of the nature and consequences of their actions, and the ‘rudimentary consciousness’ to form an intention to commit a crime. They can therefore make their own decisions, and the system itself must be held liable instead of someone else.

Civil liability 

Usually, where a party is injured and can be compensated for damage caused by software, the recourse of criminal liability is not chosen; instead, the tort of negligence is the path taken. The three elements of negligence are the defendant’s duty of care, breach of that duty, and injury caused to the plaintiff by the breach. The maker of the software owes its customers a duty to maintain the prescribed standards of care, and could face legal proceedings for reasons such as:

  1. Developer’s failure to detect errors in program features and functions, 
  2. An inappropriate or insufficient knowledge base,
  3. Inappropriate or insufficient documentation and notices, 
  4. Failure to maintain an up-to-date knowledge base,
  5. Error due to the user’s faulty input,
  6. Excessive reliance of the user on the output,
  7. Misusing the program.

Position in India

One can say that the existing regulatory framework for AI systems, at both the national and international levels, is inadequate to address the various ethical and legal issues they raise. Discussed below is the relevant framework in India for ascertaining the liability and rights of AI systems.

The Constitution of India

Under Article 21 of the Constitution, the ‘right to life and personal liberty’ has been interpreted by the Indian judiciary to include several fundamental and indispensable aspects of human life. In the leading case of R. Rajagopal v. State of Tamil Nadu, the right to privacy was held to be implicit in Article 21, which is relevant to the privacy issues arising from AI’s processing of personal data. Further, in the landmark case of K.S. Puttaswamy v. Union of India, the Supreme Court emphasised the need for a comprehensive legislative framework for data protection, competent to govern emerging issues such as the use of AI in India. AI may also operate unfairly or in a discriminatory manner, attracting Articles 14 and 15, which guarantee the right to equality and the right against discrimination respectively, to protect the fundamental rights of citizens.

The Patents Act, 1970

The patentability of AI, inventorship (the true and first inventor), ownership, and liability for AI’s acts or omissions are some of the main issues under this Act with regard to AI. Section 6 read with Section 2(1)(y) of the Act does not specifically mandate that a ‘person’ must be a natural person, although that is conventionally understood or assumed to be so. At present, AI has not been granted legal personhood and would not fall within the scope of the Act.

The Personal Data Protection Bill, 2019

This Bill regulates the processing of the personal data of Indian citizens by public and private bodies located within and outside India. It emphasises ‘consent’ for the processing of such data by data fiduciaries, subject to certain exemptions. When enacted into law, it will affect the wide range of AI software that collects user information from various online sources to track user habits relating to purchases, online content, finance and so on.

The Information Technology Act, 2000

Section 43A of the Information Technology Act, 2000 imposes liability on a body corporate dealing with sensitive personal data to pay compensation when it fails to adhere to reasonable security practices. This has a significant bearing on determining the liability of a body corporate that employs AI to store and process sensitive personal data.

The Consumer Protection Act, 2019

Section 83 of the Consumer Protection Act, 2019 entitles a complainant to bring an action against a manufacturer, service provider or seller of a product, as the case may be, for any harm caused to him on account of a defective product. This establishes liability for the manufacturer or seller of an AI entity for harm caused by it.

Tort Law

The principles of vicarious liability and strict liability are relevant to determining liability for the wrongful acts or omissions of AI. In Harish Chandra v. Emperor, the court laid down that there is no vicarious liability in criminal law for another’s wrongful acts, a principle that becomes relevant if the AI entity is regarded as an agent.

Conclusion

Recent studies suggest that we are currently at the transitional stage between ‘Artificial Narrow Intelligence’ (ANI), or weak AI, and ‘Artificial General Intelligence’ (AGI), or strong AI, and that more advanced stages of development could produce explainable models of AI systems. Emphasis on adopting such explainable models is necessary, because using black-box models for high-stakes operations can have immense repercussions with no guarantee or legal sanction available against the AI model. Such models can also help drastically in understanding and solving problems. The application of product liability, vicarious liability or even strict liability principles should give way to specific liability principles formulated for AI systems in accordance with the rule of law. Achieving this is possible only when AI systems are accorded legal personhood and a regulatory framework to manage their operation.

From the moment we are born, we are taught to be ‘good’. We are told to do the right thing and abstain from the ‘wrong’. This idea of right versus wrong is intrinsic to our understanding of the law. Our legal system presumes all people can make moral value judgements based on their self-consciousness and ability to decide. 

At the heart of this debate around AI’s liability lies the question of how independent or autonomous these AI systems are. Is AI a simple machine that works under its human master, or an entity with a significant capacity to make its own decisions?

Blame for any error in judgment is attributed only to free beings. People have the choice to follow the legal or moral law, or not; if they do not, they are punished. This is why AI systems are not given the same rights as beings with free will.

Giving obligations to an entity without any corresponding rights is possible only if it works for another benefactor. Merely punishing the system for its negative acts is unlikely to affect its human benefactor, which defeats the objective of punishment, be it preventive, incapacitative, deterrent, retributive, rehabilitative or compensatory. In this situation, it is essential to impose obligations and accountability on the benefactors who stand to gain from the AI’s actions. In the status quo, this is best achieved by the corporate criminal liability model.

