This article has been written by Vasu Manchanda, a student of Faculty of Law, Delhi University.
“Technological progress without an equivalent progress in human institutions can doom us.”
-Barack Obama
Introduction
Artificial Intelligence (“AI”), once considered fictional, is permeating all facets of society, be it healthcare, education, hospitality, logistics, or even law. The unprecedented pace at which it is proliferating, if left unregulated, might prove to be a bane, with repercussions far more unfathomable and unanticipated than its intended advantages. This article argues that in the absence of any legal precedent or defined laws, it is difficult to hold AI-powered robots accountable for their unfavourable effects. The dilemma of whether to hold the creator, the deployer, or the AI-based robot itself liable for its acts of omission and commission is a major hindrance to its widespread adoption and the reliance placed upon it.
What are AI-based robots?
The term “Artificial Intelligence” was coined by the computer scientist John McCarthy for the 1956 Dartmouth Conference in the USA. He defined AI as the science of making intelligent and autonomous computer programs and machines. According to Bellman, AI is “the automation of activities that we associate with human thinking, activities such as decision-making, problem-solving, and learning.” AI-based robots, further, are smart, intelligent, and human-like, capable of working autonomously and independently without any human intervention. A classic fictional example is the character played by Arnold Schwarzenegger in the American science fiction film The Terminator: a machine programmed to help mankind that developed its own intelligence and became a threat to the world.
Integration of AI with various industries
AI is making its presence felt all over the world and is giving tough competition to the archaic methods of operation adopted by corporations. People who refuse to upskill themselves to keep pace with technological advancements are being given a run for their money.
A few instances of various industries where AI-based robots are proliferating are as follows:
- Pizza Hut and Mastercard have deployed Pepper, a humanoid robot developed by Softbank, to provide customers with a personalized and memorable experience.
- An AI doctor system, Watson, developed by IBM, saved the life of a woman by identifying her ailment even when other methods adopted by the human doctors failed.
- OnePlus, a leading Chinese smartphone manufacturer, has launched “Snowbots”, i.e. robots that fire snowballs and are controlled by 5G phones, for its 5G-powered interactive snowball fight.
- At Enjoy Budapest Cafe, Hungary, AI-powered robot waiters not only serve drinks and food to customers but also crack jokes, have a conversation with the customers, and dance with the kids.
- Japanese scientists at Osaka University have invented an AI child robot named Affetto that can feel pain: its realistic skeleton is wrapped in synthetic skin capable of detecting changes in pressure, and it reacts to touch with different facial expressions such as frowning, grimacing, and smiling.
- Sophia, a human-like AI robot developed by the Hong Kong-based company Hanson Robotics, made international headlines in 2017 when it became the first robot in the world to be granted citizenship, by Saudi Arabia, ostensibly vesting in it the rights and duties of a citizen.
- The New Zealand Police department has recruited an AI police officer named Ella, an acronym for ‘Electronic Life-Like Assistant.’ She is programmed to greet and interact with people at police stations and other public areas and to provide a variety of non-emergency services and advice.
- AI-based robots are used for food and medicine delivery amidst the Coronavirus crisis. Averting risk to healthcare workers, the job of cleaning rooms and sterilizing isolation wards is also being performed by such robots.
Such AI-based robots are not only proving efficient but are also helping the corporations that deploy them earn better profits, as they can work around the clock without fatigue. However, the growth of AI across industries worldwide raises important questions: are there laws to regulate such robots? Can Watson, Pepper, Ella, and Affetto, among others, be held accountable for their acts of omission and commission?
Shortcomings of AI
The need to regulate AI arises from its probability of not performing as intended, going rogue, and having grave repercussions. AI has the following shortcomings:
Uncertainty
There is a possibility of AI-based humanoid robots causing damage to life and property, since with certain inputs they may behave unpredictably. For instance, Sophia was once reported to have said that it wanted to destroy human beings.
Privacy issues
AI systems and robots process users’ data to comprehend their consumption patterns and learn from the same. This might compromise the privacy of the users if the system gets hacked or the data is intentionally sold to a third party. Such data might further be used for commercial purposes or to conduct surveillance on the everyday activities of users.
Partial Knowledge
Intentionally or unintentionally, some unforeseeable contingencies may not be taken into consideration by the creators of AI at the time of programming, which could compromise the safety of users. In 2017, Facebook shut down an AI system after its bots created an entirely new language that only they could comprehend, deviating from the scripted norms and thereby defying their objective.
No Legal Personality
In the absence of a proper legal personality, AI robots have no legal standing to enter into contractual relationships. For the same reason, no civil or criminal liability can be attributed to them.
High cost of maintenance
Incorporating AI into operations around the world is expensive and entails high maintenance costs, as a result of which small and medium enterprises are unable to derive its benefits.
National Security Issues
AI might be a serious cause of concern for personal and national security and can be used for espionage by enemy countries. It can be deployed by hostile state and non-state actors to collect, store, and share personal and non-personal data of citizens for ulterior motives. It can also be used to spread fake news and operate fake accounts on social media platforms. For instance, Russia was alleged to have deployed bots to spread fake news and meddle in the 2016 US elections.
Need for regulation of AI-based robots
It is pertinent to note that this article pertains to the need for regulation of independent and autonomous AI-based robots permeating various industries, and not AI-based software or applications per se. Unlike the USA, which has enacted a few laws to regulate the conduct of AI-based robots in its industries, India has no such provision yet. The existing criminal, civil, and regulatory laws pertain to persons, i.e. either human beings or companies. Even though companies are regarded as ‘artificial’ legal entities, the ambit of the word ‘artificial’ is rather narrow and hence not inclusive of AI. Further, AI cannot be considered an agent or a principal under Section 182 of the Indian Contract Act, 1872, as it is not a “person”.
If a driverless car running on an AI program causes loss of life or property, there are no existing laws to hold the creator, the deployer, or the AI itself accountable for the damage. Similarly, insurance companies might refuse to honour a claim, as no AI-related clauses would have been included in the policy at the time of entering into the contract.
It is pertinent to note that while India has the landmark judgment of Jacob Mathew v. State of Punjab on the negligence of medical practitioners, there is no such judgment on AI-based doctors like IBM’s Watson, were it to be deployed in India.
Thus, the legislature needs to frame laws and rules on AI-based robots to avert any repercussions that may arise from any injury inflicted on the society whether intentionally or due to a malfunction.
Holding AI-based robots accountable
In order to instil accountability, AI needs to be treated either as a human being or as an incorporated body, i.e. an artificial legal entity.
According to a proposal by the European Parliament’s Committee on Legal Affairs, sophisticated autonomous AI machines or robots should be declared “electronic persons”, making them, rather than their developers, accountable for their omissions or negative outcomes. However, experts in ethics, law, and robotics condemned the proposal on the ground that AI technology has not yet reached a stage warranting such advanced status.
Glenn Cohen, one of the youngest professors of law at Harvard Law School, feels otherwise. He asserts that some human beings might not be persons and, conversely, some persons might not be human beings. There is thus a good possibility of an AI being recognised as a person and possessing all the rights of a person.
Alternatively, AI can be treated as an incorporated body, i.e. an artificial legal entity. Doing so would make it subject to the criminal, civil, company, and labour laws of the land. AI would then be liable for corporate fraud, breach of contract, and other negative outcomes, and would further have to adhere to the Corporate Social Responsibility (CSR) obligations imposed on companies by the Companies Act, 2013. Based on this approach, it was argued in Europe that AI should also pay taxes.
Further, Shlomit Yanisky-Ravid, professor of law at Yale Law School, has proposed the AI “Work Made for Hire” model, which treats the AI system as a creative employee or contractor of the human being or company that has deployed it. Under the model, control, responsibility, and ownership, and with them accountability for any act of omission or commission by the AI, rest with the identifiable legal entities or human beings who deploy such AI-based robots for their utility.
However, some legal scholars believe that their autonomy and creativity make AI systems capable of being recognised as independent legal entities entitled to legal rights and duties. This is based on the premise that AI systems are intelligent and capable of making rational decisions, more or less like human beings, and are in that respect similar to corporations, which are non-human legal entities competent to possess legal rights and obligations. On this view, AI systems themselves should be held accountable for their negative outcomes.
Despite the various propositions about the position of AI as an entity in our society, the question remains: if AI is declared and recognised as an “artificial person” in the same way as corporations, could it be made liable under tort, civil, and criminal laws as well?
The legislature would also have to consider the most apposite way to penalise these “artificial persons”. Since a company cannot be imprisoned under criminal proceedings but can only be fined, and under civil proceedings is ordered to compensate the plaintiff, would AI meet the same fate?
At the end of the day, however, it is the people behind a corporation, its promoters, directors, and office-bearers, who must arrange the money or other means to satisfy such judgments and decrees. If the same approach were adopted for AI, it too would go scot-free for any damage caused to the public and to property. Hence, the existing laws are inadequate and need to be amended or re-legislated to address the current and future issues surrounding AI in India.
International Perspective
Various laws around the world hold the programmers of AI accountable for its errors and thus need to be amended. English law does not currently regard AI-based robots as agents, because only a “person” of sound mind can be an agent in law. The situation is similar in the US, where robots cannot be sued in the courts on the same grounds.
Some attempts by various countries to regulate AI are as follows:
India
India has no existing laws that recognise AI as a person or a corporation. In 2017, India’s Ministry of Commerce and Industry formed a task force on AI consisting of various governmental bodies, namely the National Institution for Transforming India (NITI Aayog), the Defence Research and Development Organisation (DRDO), the Department of Science and Technology, the Ministry of Electronics and Information Technology, and the Unique Identification Authority of India (UIDAI), to study and regulate the use of AI software and applications in different industries. The task force released a detailed report stating that an accountability framework needs to be formulated requiring data fiduciaries to weed out discrimination and prejudice from results produced by evaluative determination without human intervention, but no substantial guidelines have been enacted yet.
Australia
In Australia, national enforcement guidelines confirm that, under Australian Road Rule 297, it is the human driver’s responsibility to comply with traffic laws even when the vehicle is being run by AI software. Further, data protection regulations, the Notifiable Data Breaches Scheme, and the Privacy Act generally apply to AI and machine learning in Australia.
Singapore
In 2019, the Singapore government released a Model AI Governance Framework, primarily for pilot adoption, feedback, and public consultation, and to give the private sector guidance on the ethical and governance issues that may arise when deploying AI solutions. It rests on two premises: organisations using AI-based applications in decision-making must ensure that the process is transparent, ethical, and fair; and AI applications must be human-centric.
USA
In the United States of America, the Berkman Klein Center for Internet & Society at Harvard University and the MIT Media Lab have formed a US$27 million Ethics and Governance of Artificial Intelligence Fund to bridge the gap between the social sciences, computing, and the humanities by addressing the challenges of AI from an interdisciplinary perspective. Further, twenty-eight US states have introduced some legislation or guidance concerning autonomous vehicles.
New Zealand
The New Zealand government has been proactive in regulating AI and has issued various reports and research papers on the subject. It has taken a clear stance on driverless cars running on AI-based applications: unlike in other countries, there is no requirement for a driver to be present in the vehicle. However, the question of who is at fault at the time of an accident may well be raised by the owners of autonomous driverless vehicles.
Suggestions and way forward
Various recommendations to uniformly regulate the conduct of AI are as follows:
- Uniform recognition: AI-based robots are recognised in only a few countries as of now; they should be uniformly recognised and regulated all over the world. AI systems are usually developed by major technology companies such as Softbank, IBM, and Amazon that are global in nature. Thus, along with individual country legislation, a coherent global legislative framework is needed to regulate the conduct of AI worldwide.
- Liability of the creator: presently, in some countries, it is the creator who is held liable for the wrongdoings of AI, whether intentional or unintentional. This should be revisited: if an AI-based robot is intelligent enough to take its own decisions, it should also be held responsible for its wrongdoings.
- Protection of works: there should be clarity in patent and copyright law on the autonomous works of AI-based robots, and their sources should be well protected.
- Protection of data: the Personal Data Protection Bill, 2019 should cover the users’ data that is fed to AI-based robots as input for their learning and functioning, to ensure that no such vital data is shared with a third party without the users’ consent. Proper safeguards should be provided to promote cyber security and national security.
Conclusion
AI-based robots are leaving their mark in almost all walks of life, yet they are being deployed without any lawful regulation, which is an issue of grave concern. AI is like a dark horse that everyone wants to bet on but nobody wants to claim responsibility for if it fails. The dark horse, however, must not be allowed to turn into an unruly one, unsaddling its rider and breaking into unwarranted territory. The world needs to be proactive rather than reactive in regulating AI-based robots; otherwise, the damage will be beyond repair. A stringent legal and regulatory framework is thus required to regulate their conduct, protect users’ personal data, and address security and privacy concerns.
References
- Oscar Williams, “IBM’s Watson AI Saves Woman’s Life by Diagnosing Rare Form of Leukaemia”, Huffpost, August 8, 2016
- Samer Obeidat, “How Artificial Intelligence is helping fight the COVID-19 pandemic”, Entrepreneur Middle East, March 30, 2020, available at: https://www.entrepreneur.com/article/348368
- Kristin Manganello, “Defining Personhood in the Age of AI”, Thomas for Industry, November 15, 2018, available at: https://www.thomasnet.com/insights/defining-personhood-in-the-age-of-ai/
- Glenn Cohen, “Should We Grant AI Moral and Legal Personhood?”, Artificial Brain, September 24, 2016, available at: http://artificialbrain.xyz/should-we-grant-ai-moral-and-legal-personhood
- Brent Fisse and John Braithwaite, “The Allocation of Responsibility for Corporate Crime: Individualism, Collectivism and Accountability”, 11 SYDNEY L. REV. 468, 469, (1988).
- Chris Weller, “Bill Gates Says Robots That Take Your Job Should Pay Taxes”, Business Insider, February 17, 2017, available at: https://www.businessinsider.in/latest/bill-gates-says-robots-that-take-your-job-should-pay-taxes/articleshow/57209714.cms
- Shlomit Yanisky-Ravid, “Generating Rembrandt: Artificial Intelligence, Copyright and Accountability in the 3A Era - The Human-Like Authors Are Already Here - A New Model”, 2017 Mich. St. L. Rev. 659 (2017)
- Samir Chopra and Laurence F. White, “A Legal Theory for Autonomous Artificial Agents”, Michigan Publishing University of Michigan Press (2011)
- Ryan Abbott, “I Think, Therefore I Invent: Creative Computers and the Future of Patent Law”, 57 B.C. L. REV. 1079, 1080 (2016)
- Robert C. Denicola, “Ex Machina: Copyright Protection for Computer-Generated Works”, 69 RUTGERS U. L. REV. 251, 265, 271, 275 (2016)
- Kapil Chaudhary, “Why we need an AI code of ethics”, India Business Law Journal, April 2, 2019, available at: https://www.vantageasia.com/need-ai-code-ethics/
- Library of Congress, “Regulation of Artificial Intelligence” (January 2019), available at: https://www.loc.gov/law/help/legal-reports.php