Artificial Intelligence

This article is written by Pranav Sethi, from SVKM NMIMS School of Law, Navi Mumbai. Attempting to establish an interface between AI and law, the article discusses whether artificial intelligence could be held liable under criminal law and civil law, and examines the controversies surrounding AI technology.


Introduction 

“We have seen AI providing conversation and comfort to the lonely; we have also seen AI engaging in racial discrimination. Yet the biggest harm that AI is likely to do to individuals in the short term is job displacement, as the amount of work we can automate with AI is vastly larger than before. As leaders, it is incumbent on all of us to make sure we are building a world in which every individual has an opportunity to thrive.” ~ Andrew Ng, co-founder and lead of Google Brain

Artificial Intelligence (AI) will almost certainly change how we live and work. Its implementation has been referred to as the fourth industrial revolution because of its enormous potential. As with any major technological innovation, it brings both opportunities and challenges. On the one hand, various applications have been produced, or are in the works, that can dramatically enhance people’s standard of living; according to one study, AI could double the annual economic growth rates of 12 wealthy countries by 2035. On the other hand, there is a risk that jobs will be lost.


A few years ago, the online edition of a reputed editorial monthly noted that the question of which rules would apply if a self-driving car killed a pedestrian was trending on the web. This article discusses the issue of legal liability for artificial intelligence: whether criminal liability could ever apply, to whom it might apply, and under what circumstances. It also discusses whether an AI program is a product subject to product liability under civil law, and the surrounding policy framework under UK law and Indian legislation.

A common fear of job losses

According to existing estimates, job losses over the next 10-20 years are expected to reach 47 percent in the US, 35 percent in the UK, 49 percent in Japan, 40 percent in Australia, and 54 percent in the EU. No country can escape the effects of technological advancement in the age of globalization. However, by putting the required infrastructure and policies in place, the advantages can be maximized and the losses reduced.

Defining artificial intelligence 

Artificial Intelligence (AI) is the study of the nature of human intelligence and the creation of intelligent artifacts that can perform jobs requiring intellect when executed by humans. Any significant technological innovation brings with it a variety of problems and possibilities. While AI is expected to drive significant economic growth, it is also expected to result in the loss of employment. As a result, the requisite policy and infrastructure must be in place.

Even though AI has been a subject of research since the concept was created in 1956, it has only lately led to the widespread implementation of intelligent applications across disciplines and jobs. Work in the late 1950s and early 1960s focused on the creation of broad procedures that could be used in a variety of fields. The findings were not reassuring, prompting the field’s first “AI winter”, which began in the late 1960s and lasted into the late 1970s.

Artificial intelligence is a man-made innovation

AI might alternatively be defined as any man-made organism capable of performing cognitive activities such as abstract reasoning and logical deduction, as well as learning new information on its own. Using these cognitive powers, such an organism would also be able to develop long-term plans for the future. Of course, until we reach the point where the programs we construct have true intelligence, this definition will not fully describe AI.

Current AI falls well below this criterion; most algorithms can work independently only in a very limited domain, which severely limits their utility. Nevertheless, artificial intelligence platforms have acquired tremendous traction over the last decade, with highly sophisticated technology being applied to construct inventive and intelligent AI systems. The day may not be far away when these intelligent bots begin to create useful and remarkable inventions without the assistance of human brains.

Researchers’ viewpoints

Some AI researchers believe that anything that replicates human intellect in any manner is “artificial intelligence”, while others believe that the only “artificially intelligent” systems are those that replicate how humans think. Many in the field classify most “artificially intelligent” algorithms as merely complicated information networks, reserving the label of “real” artificial intelligence for meta-level strategic thinking of the kind frequently referred to as “wisdom”.

Artificial intelligence (AI) has advanced and become more widely used in recent years. Just about every industry is hurrying to take advantage of AI’s capabilities, investing vast sums of money in the process. The potential for the technology to boost productivity and creativity across a business is enormous. However, as the use of this technology grows, so do its drawbacks. Programmers are often unaware of how an AI learns, adapts to new situations, and makes decisions, which makes it difficult to decide who should be held responsible if something goes wrong.

Self-driving system innovation in AI

As AI develops at a quicker rate, human decision-making will inevitably recede into the background. In this setting, it is unavoidable that specific AI systems will sometimes fail to complete their tasks, and we will see an escalation in disputes as a result of such AI shortcomings. An automated vehicle has already struck and killed a woman on an Arizona roadway. When the first autonomous vehicles appeared on public roads in 2013, the main objective of automakers was to develop a self-driving system that is plainly and demonstrably safer than a typical human-controlled vehicle.

Artificial intelligence and legal liability

We derive our legal rights and responsibilities from the law: following the law entails carrying out responsibilities and receiving benefits. The legal conception of AI therefore raises the question of whether AI should have legal rights and responsibilities of its own. Although the answer may seem progressive and advanced, a thorough examination must consider AI’s legal personality, as this is what would hold AI responsible for its actions.

Criminal liability 

Criminal liability for AI would necessitate legal personhood for the AI and would be analogous to the corporate criminal liability that some legal regimes recognize. Corporate criminal liability is a legal fiction: a derivative form of liability in which the company is held liable for the actions of its individuals. Unlike businesses, however, AI would be held responsible for its own actions rather than for the actions of others. While this appears to be a straightforward answer that does not violate the rule of law, it demands a more thorough examination.

Why do criminal laws carry a different approach for artificial intelligence 

When a person commits a crime against another individual, he or she will undoubtedly be subject to the criminal laws established in that country. Where artificial intelligence is concerned, however, a criminal act perpetrated against a human may not be classified as a traditional crime, because it is committed by software or a robot acting independently of the person who created the software, program, or machine.

Consequently, to identify criminal liability for crimes perpetrated by artificial intelligence, we must first evaluate whether artificial intelligence is a legal entity in and of itself, and then confront the major obstacle of establishing the actus reus and the mens rea, i.e., the act and the mental (intention) components, respectively.

Defining the black box problem

Computer and smartphone users depend on complicated problem-solving algorithms to execute even the simplest activities effectively and swiftly, and artificial intelligence has become a major component of everyday life. It is important not only that such algorithms run efficiently and without errors, but also that they reveal how they reach their results, as this aids any subsequent algorithm development. Nevertheless, when we try to figure out how the AI works, we hit a brick wall: it is difficult to describe what is happening within.

It is a big problem, though currently confined to massive deep learning systems and neural networks. Because such systems are made up of complicated algorithms and data representations produced by software rather than by humans, these neural networks separate a problem into millions of pieces and then systematically analyze them piece by piece to produce a result. Because the human mind does not work in the same way, we have no way of knowing what calculations the network is performing or what methods are being used.

This phenomenon is therefore referred to as the “black box” or “explainability” dilemma, because there is no way to look inside the neural network and observe its activity during the problem-solving operation. This not only prevents us from getting the deep information needed to update the algorithm and its subsequent calculations, but also creates a host of security difficulties with AI and neural networks. It is consequently claimed that a human may never be able to explain why an AI system using such self-generated methodologies or data sets arrived at a given response or made a specified “choice”.
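To make the dilemma concrete, here is a minimal sketch, in Python, of why even full access to a trained network does not yield an explanation. It is a hypothetical toy example (the data, architecture, and learning rate are all invented for illustration): every weight of the network can be printed and inspected, yet no individual number corresponds to a human-readable reason for any particular decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy classification data: 4 numeric features, binary label.
X = rng.normal(size=(200, 4))
y = ((X @ np.array([0.5, -1.2, 0.8, 0.3])) > 0).astype(float)

# A single hidden layer, trained by plain gradient descent on log-loss.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)              # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # predicted probability
    return h, p

for _ in range(2000):
    h, p = forward(X)
    g = (p - y[:, None]) / len(X)         # gradient of log-loss w.r.t. logit
    dh = (g @ W2.T) * (1 - h ** 2)        # backpropagate through tanh
    W2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum(axis=0)
    W1 -= 0.5 * (X.T @ dh); b1 -= 0.5 * dh.sum(axis=0)

# Every parameter is fully visible...
print("W1 weights:\n", W1.round(2))

# ...yet none of them "explains" why this particular input is classified
# as it is: the decision is smeared across all the weights at once.
_, p = forward(np.array([[0.2, 1.5, -0.3, 0.9]]))
print("predicted probability:", round(float(p[0, 0]), 3))
```

The point of the sketch is that transparency of parameters is not the same as explainability of decisions, which is precisely the gap the “black box” label describes.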

Queries related to the AI framework’s choices and decisions

It is critical to remember that any examination of culpability in a civil or criminal action boils down to whether the relevant defendant’s acts or omissions were unlawful as a result of the applicable AI framework’s choices and conclusions: did those acts or omissions amount to contract violations, negligence, or criminal offences? It is equally important to emphasize that in every case the defendant will be a legal person, not the AI framework itself.

To answer such questions, the court will not necessarily need to know why the relevant AI mechanism made the decision that resulted in the defendant’s allegedly unlawful act or omission. We may never be able to figure out why the AI framework made a particular decision; nevertheless, in reality, the AI did arrive at the choice that produced or contributed to the defendant’s act or omission.

Consider, for instance, a plaintiff who lost money after following poor investment advice generated by the defendant’s AI algorithm. The plaintiff may argue that, under the implied terms of the service contract, the defendant failed to use reasonable care and skill in discharging his responsibility to provide investment advice. The plaintiff could be able to prove a breach of that duty without any explanation of why the AI provided such bad advice, and courts may accordingly find the defendant liable depending on the nature and strength of the plaintiff’s case. Whether the counsel was provided by a robo-adviser or a human adviser, the court might reach the same conclusion. It is worth remembering that the human brain also has “black box” characteristics (we cannot always explain human behaviour), yet this has not stopped courts from finding defendants liable in the past.

Artificial intelligence: Is it a legal entity

The first death from a robot

Kenji Urada, an engineer at a Kawasaki Heavy Industries plant where a robot was stationed to execute a specialized manufacturing job, is widely cited as the first person to be killed by a robot. While Kenji was fixing the robot, he neglected to shut it down; the robot identified him as an obstacle, and its powerful hydraulic arm drove him into an adjacent machine, killing him almost instantly.

At present, the laws of most nations are incapable of establishing a definite criminal-law foundation for dealing with situations where robots are implicated in the commission of an offence or injury to a person. The development of AI has introduced several new realities to the world, along with the need to cope with such a rapid rate of change. It is therefore critical for states to enact legislation that clarifies the status of incidents and crimes involving robots and artificial intelligence software.

Artificial intelligence’s lawful grounds

There is no formal regulation or statute in the Indian legislative framework that addresses the unborn child and its rights. Some statutes, on the other hand, not only acknowledge an unborn child but also identify such a child as a legal person who gains rights only after birth. The legislature is silent on the protections offered to, and the duties owed to, the unborn, which remains a troublesome grey area in the legal arena. Similarly, AI systems are currently in their infancy, and the Indian legal system has yet to recognize them at all, let alone impose any rights, responsibilities, or liabilities on them, which is a concerning situation.

Since a person or corporation’s legal status is intrinsically related to its autonomy, such status is bestowed not only on natural persons but also on cooperatives, businesses, and organizations. However, no legal system has yet recognized artificial intelligence as a legal person, with the exception of Saudi Arabia, which has recognized Sophia, an artificially intelligent humanoid robot, as a citizen of the country with rights and duties akin to those of a human being living within the state. The problem of establishing legal personhood for artificial intelligence robots or software turns on whether they can be assigned the specific rights and responsibilities that would normally be given to a living human.

AI responsibility for independent operations

Although a living person is fully independent and can make his or her own judgments, an artificial intelligence framework is created by humans and operates according to the instructions of the programs introduced into it to accomplish particular duties in particular situations; yet it is also capable of operating independently. By analogy, although corporate entities have the legal status of separate legal persons, they are nonetheless answerable to their stakeholders for any liability arising from the dealings they have entered into.

Even though individuals created artificial intelligence, it can act autonomously, and a breakdown or incorrect programming may result in the commission of a criminal act even where the author of the AI program had no intention of bringing one about. Under the law of virtually every country, the criminal liability of artificially intelligent machines remains unclear.

As a result, judicial decisions serve as the primary source of guidance in instances where artificial intelligence is responsible for executing a particular crime (whether or not pursuant to the orders of the creator who produced such artificial intelligence robots, software, or algorithms).

Elaborating role of AI in criminal liability

According to Gabriel Hallevy, a distinguished legal scholar and attorney, various AI systems can satisfy the key elements of criminal liability:

  1. An act or omission that constitutes actus reus.
  2. Mens rea, which necessitates knowledge or intent.
  3. Strict liability offences, which do not require mens rea.

The actus reus criterion in showing criminal liability

The criminal liability of artificial intelligence machines and software rests on the requirement of an act as its basic foundation: without an actus reus, criminal liability cannot be founded. In the specific instance of artificial intelligence, the actus reus can be established only if the crime perpetrated by such a framework can be attributed to a human being, so that the very commission of the act is sufficient to demonstrate, and to penalize, an individual’s criminal liability.

The components of mens rea

As for mens rea, the prosecution must show that an act performed by an AI was done deliberately by its user against the other person. The highest level of mens rea is knowledge, which may or may not be accompanied by the purpose of the person under whose direction or management the artificial intelligence robot performed the particular act. The lowest level of mens rea is criminal negligence on the part of the AI system’s user, where the risk should have been obvious to a reasonable human; below this lie strict liability offences.

Hallevy presented three legal paradigms to consider when investigating AI-related offences:

  1. Perpetration-by-another liability.
  2. Natural-probable-consequence liability.
  3. Direct liability.

Perpetrator by another person

Where a crime is committed by a mentally incompetent person, a minor, or an animal, the offender is considered an innocent agent, lacking the cognitive ability to form mens rea; this also applies to strict liability offences. However, if that innocent agent acted on somebody else’s orders, the person delivering the instructions would be held criminally accountable, such as a dog trainer who trains his dog to attack outsiders upon a certain trigger. Under this model, then, while the AI platform or program is treated as an innocent agent, the user or system developer can be labelled a perpetrator-by-another.

Natural-probable-consequence

Under this model, a component of AI software designed for good purposes is wrongly activated and commits a criminal act. Hallevy cites the instance of Kenji Urada, the Kawasaki Heavy Industries engineer whose plant used a robot to perform certain production tasks. When Kenji was fixing the robot, he failed to shut it down, which caused the robot to recognize him as a threat to its assigned tasks and to determine that the most efficient strategy to neutralize this threat was to push him into a neighbouring machine. The robot’s powerful hydraulic arm shoved him into the adjacent machine, killing him almost instantly, before the robot returned to its job.

This model maps onto “natural or probable consequence” liability, also known as abetment, under Chapter V of the Indian Penal Code, 1860 (hereinafter referred to as the IPC), which governs the culpability of those who aid and abet the commission of an offence. Hallevy outlines how an accomplice can be held accountable for an act even if no conspiracy is proven, as long as the act was a natural or foreseeable result of conduct that the accomplice promoted or supported with knowledge that a criminal plan was underway.

Relevance of Section 111 of Indian Penal Code

In Indian criminal law, Section 111 of the IPC, under Chapter V, establishes the principle of probable consequence: when one act is abetted and a different act is committed, the abettor is liable for the act committed in the same manner and to the same extent as if he had directly abetted it, provided the act done was a probable consequence of the abetment. The broad consensus on abetment is that there can be no sentence for abetment unless an act is committed. In some cases, however, if the evidence is inadequate to convict the perpetrator but adequate to convict the abettor, the perpetrator may be acquitted while the abettor is found guilty on the evidence and circumstances.

As a result, the programmers and operators of an AI platform may be held accountable for the AI software’s actions if they knew the action was a natural or probable consequence of their system’s use. Nevertheless, in applying this principle a distinction must be drawn between AI systems intentionally built for criminal objectives and those built for legitimate purposes, i.e., between cases where those behind the AI system know of the criminal intentions and cases where they do not. The former category is covered by this model; the latter may not be prosecutable for lack of knowledge (though strict liability could still apply).

Direct liability

This model attributes both actus reus and mens rea to the AI system itself. An AI program’s actus reus is quite simple to assign: if the result of any operation made by an AI system turns out to be criminal conduct, or a failure to act where there was a duty to act, the actus reus of the offence has occurred. Assigning mens rea is harder, and this is where the three-level system of mens rea comes into play. In the instance of strict liability offences, where intention does not need to be established, an AI system may be held guilty of the illegal act. An example is an autonomous self-driving automobile and speeding, where speeding is a strict liability offence: under Hallevy’s approach, the law governing criminal liability for over-speeding could be applied to a car driven by AI software in the same way it would be applied to a human driver.
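As a rough illustration of why strict liability fits this scenario, consider the following hypothetical sketch (the telemetry format and numbers are invented): establishing the offence requires only the recorded fact of exceeding the limit, with no inquiry into any intent inside the driving software.

```python
from dataclasses import dataclass

@dataclass
class TelemetryRecord:
    timestamp_s: float   # seconds since the trip started
    speed_kmh: float     # measured vehicle speed
    limit_kmh: float     # posted speed limit at that location

def over_speed_events(log):
    # Strict liability: the offence is made out by the recorded fact
    # alone; no mens rea inside the driving software need be proven.
    return [r for r in log if r.speed_kmh > r.limit_kmh]

trip = [
    TelemetryRecord(0.0, 48.0, 50.0),
    TelemetryRecord(1.0, 57.5, 50.0),  # exceeds the posted limit
    TelemetryRecord(2.0, 49.0, 50.0),
]

for e in over_speed_events(trip):
    print(f"t={e.timestamp_s}s: {e.speed_kmh} km/h in a {e.limit_kmh} km/h zone")
```

On this view, the open legal question is not whether the actus reus occurred, which the log settles, but who, among the manufacturer, the operator, or the AI itself, should answer for it.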

Elaborating role of AI in civil liability

When software is flawed, or a person is harmed as a result of using it, legal actions usually allege negligence rather than criminal culpability. Gerstner addresses the three elements that must be proven in most negligence cases:

  • The defendant owed a duty of care to the plaintiff.
  • The defendant failed to fulfill that obligation.
  • The plaintiff was harmed as a result of the breach.

On the defendant’s duty of care, Gerstner points out that the seller of software or a system owes a duty of care to the consumer; determining the requisite standard of care, however, is difficult. If the system in question is an “expert system”, the standard of care would be at least that of a competent professional, if not a specialist.

On breach of duty, Gerstner suggests several scenarios in which an AI system could breach its duty of care, including:

  1. Failure of the programmer to discover faults in the program’s features and functionalities.
  2. An insufficient or incorrect knowledge base.
  3. Inadequate or incorrect documentation and notices.
  4. A failure to keep the knowledge base current.
  5. Error due to erroneous input from the user.
  6. The user’s undue reliance on the output.
  7. Misuse of the software.

Finally, whether AI systems can cause, or be presumed to cause, an injury to a plaintiff as a result of a breach is debatable. The crucial question is whether the AI system merely recommends a course of action in a given scenario, as most expert systems do, or whether it makes the decision and acts on it, as an autonomous automobile does. The former scenario involves at least one external agent, making causation harder to prove; the latter involves no external agent, making causation relatively straightforward to prove.

Policy framework

In 1996, leading researchers Tom Allen and Robin Widdison observed: “Soon our autonomous computers will be programmed to roam the Internet, seeking out new trading partners, whether human or machine; at this point, we must ask if, and if so, how, existing contract law doctrine can cope with new technology.” They noted that “neither American nor English law, as it is now, would give legal standing on all computer-generated agreements.” In other words, the legal concepts in place at the time could not cope with the harm caused by technology, which raised the further problem of determining how the law should be amended. Although more than two decades have passed since Allen and Widdison published their paper, and contracts formed through interactive voice response systems (IVRS) are now recognized and contractually enforceable, the simple question remains: can existing legal doctrines deal with new, emerging, and advanced technologies, as well as the damage caused by AI, and if so, how?

According to UNCITRAL’s explanatory note, the person (natural person or legal entity) on whose behalf a computer or machine is programmed should be accountable for any communication generated by that machine. Paragraph 213 of the explanatory note, dealing with Article 12 of the Electronic Communications Convention, states that Article 12 is an essential clause that is not intended to enable the creation of rights and obligations for an automated message system or a computer itself. Electronic communications generated automatically by message systems or computers without direct human participation should be considered as “originating” from the legal entity on whose behalf the message system or computer is operated. Questions of agency that may arise in that context are to be resolved under rules outside the Convention.

Strict liability comparison to UK criminal law

No particular or special provision in the Indian legislative regime addresses liability for acts performed by a user, operator, or developer using artificial intelligence software or systems. Although India is a common law system, its strict liability doctrine, which corresponds to Hallevy’s direct liability model, is not as developed as English law’s. The strict liability doctrine in UK criminal law has evolved over time through the aggregation of existing English law with amended and updated provisions, authoritative judicial decisions, and legislation passed by Parliament from time to time. The comprehensive codification of Indian penal law, by contrast, leaves the judiciary little room to go much beyond the current statutes.

Role of Information Technology Act, 2000 in AI

To assess the offence and the penalty for abettors, the IPC’s principle of probable consequence liability, or abetment, is adequate. The Information Technology (Amendment) Act, 2008 broadened the concept of abetment to include acts or omissions carried out through encryption or any electronic method, to keep up with the pace of technological advancement. The Information Technology Act, 2000 (hereinafter referred to as the IT Act), which attempts to govern all elements of modern-day technology, defines the computer and related terms such as software. However, the Internet of Things, data analytics, and AI are not covered by the IT Act, nor are the liabilities that may arise from humans using these IT systems (specifically AI software). Given that the primary goal of the Act was to give legal character to digital signatures and electronic records, the Indian legislature did not place a strong emphasis on the scope of liability deriving from AI acts, or on countermeasures.

Liability for damages

There is no particular Indian legislation dealing with the criminal or civil liability of acts perpetrated with AI. It is worth noting that India is among the countries moving towards policies that will integrate AI into the entire government system; at the same time, the legal system is neglecting the potential negative consequences of future cybercrimes committed using these highly advanced AI systems.

Judiciary’s role in determining responsibility

Given the legislative vacuum, the Indian judiciary is the only remaining hope in the Indian legal system for dealing with such instances, for formulating final penalties, and for determining the criminal or civil liability of acts perpetrated by AI systems, software, and robots against humans. There has yet to be a substantial landmark ruling laying down guidelines for the use of artificial intelligence software or robots to prevent the commission of criminal or civil violations. With the rapid advancement of artificial intelligence, however, the judiciary is expected to craft such regulations and judicial precedents, identifying the criminal and civil liability of AI systems that may cause harm or damage to individuals through unethical practices such as phishing, hacking, and data theft.

These new opportunities have brought with them several new ethical issues, including but not limited to:

  1. The legal concerns raised by the liability issue.
  2. Worries about intellectual property rights raised by advanced AI programmes capable of self-generating content.
  3. Issues about the use of personal data, as well as privacy concerns.
  4. Discrimination by AI programmes that play a part in deciding who gets hired.
  5. Facial recognition.
  6. Self-driving military weaponry that allows AI to make its own judgments when it comes to killing someone.
  7. Self-driving vehicles that, if faulty, will have to decide on their own what to crash into.

Artificial intelligence controversies

“Every coin has two sides”: the adoption of AI in the workplace has caused certain problems that rank among the biggest scandals involving the technology. Thanks to the Internet and its data storage potential, anyone can now obtain complete information about these controversies.

Tesla’s lawsuits with the U.S. Securities and Exchange Commission (SEC)

Tesla, led by CEO Elon Musk, is among the best-known hi-tech businesses in the AI and Big Data fields. After his whimsical tweet of August 2018 suggesting taking Tesla private at $420 per share, Elon Musk faced a lawsuit from the US Securities and Exchange Commission (SEC). It became one of the most widely publicised controversies, followed by millions of people throughout the world. Musk claimed that a shareholder vote would be conducted before Tesla became a private corporation and that the decision was driven by the strain of being a public firm, which caused numerous distractions detrimental to future profitability.

Actions brought by the US Securities and Exchange Commission

The SEC brought a lawsuit against Musk, charging that he made false and deceptive representations in the tweet without first checking with stakeholders. The lawsuit was settled in September 2018; while Elon Musk neither accepted nor denied the charges, he was required to step down as chairman of Tesla’s board of directors for three years, though he remained CEO.

Fines and complaints filed by the US Securities and Exchange Commission

Musk was hit with a $20 million fine, while Tesla received a separate $20 million fine, though without any charge of fraud. In December 2018, Musk quipped that the SEC was doing an excellent job as the “Shortseller Enrichment Commission”. In February 2019, the SEC filed a complaint alleging that he was in contempt of federal court for blatantly violating the conditions of his Tesla agreements. That lawsuit is still ongoing, and the ultimate decision is still pending.

Facebook-Cambridge Analytica data scandal

The Facebook-Cambridge Analytica controversy is the biggest AI-related scandal of recent years, affecting the personal information of millions of Facebook users. In March 2018, Facebook was hit with a series of problems arising from a huge data breach: Cambridge Analytica, a strategic consultancy firm, had obtained over 87 million personal data records of active Facebook users without their permission. This large-scale real-time data was reportedly exploited in the 2016 presidential election to help Donald Trump, then a US presidential candidate. The very same dataset was reportedly used to sway the outcome of the Brexit referendum in favour of the Vote Leave campaign.

Mark Zuckerberg’s response to US Congress and actions taken to avoid further data breach

Mark Zuckerberg, the founder and CEO of Facebook, did not take action regarding Cambridge Analytica for months after learning of the data leak, and Facebook did not follow up with the political consultancy firm to verify that all of the personal information taken had been deleted. Facebook was sued over the violation of user privacy protections and the hate campaigns that followed. Before the US Congress, Zuckerberg promised to implement changes and rules at Facebook to prevent future data leaks.

Uber v/s Waymo trade secret lawsuit over self-driving cars

Waymo, the Google-owned self-driving car firm, sued Uber in February 2017 for allegedly stealing AI-based self-driving car technology. Much of the case revolved around Anthony Levandowski, who had worked on AI-based machine learning for self-driving cars before joining Uber. According to the complaint, before leaving for Uber, Levandowski transferred 14,000 files containing crucial engineering information from Google folders to his laptop. The case led to the Uber v. Waymo civil settlement of February 2018, in which Uber agreed to a settlement valued at about $245 million.

Decision taken by the court

Uber had already been prompted to take its self-driving cars off the road in December 2016 after the vehicles attracted traffic fines and mounting warnings. When Levandowski failed to comply with a judicial order to hand over the information, Waymo pursued the trade secret case against him and Uber fired him. Levandowski was sentenced to 18 months in prison, to be served after the pandemic, and, as part of a plea deal, agreed to pay $757,000 in restitution to Google, as well as a $95,000 fine, with a further $179 million owed to Waymo.

Google Nightingale data controversy

Google, the Silicon Valley company, teamed up with one of the country’s top hospital systems on Project Nightingale, a massive data operation. The primary goal was to gather and analyze extensive real-time healthcare data from millions of patients in 21 states. The raw data included lab results, diagnoses, medical records, and private patient information. This real-time data harvest took place without the consent of doctors or patients, even as about 150 Google employees had complete access to the data of millions of patients, including their emergency contacts.

Unethical data harvesting practices

Google maintained that the approach complied with federal health regulations and provided strong protections for patients’ private information, but the debate intensified as the unethical data harvesting practices became public. Google was using AI and Big Data analytics to collect the data in order to design new software, relying on the Health Insurance Portability and Accountability Act of 1996, which permits hospitals to share data with business partners without patient consent only for the purpose of assisting with healthcare functions. Google stayed in the news for stretching beyond legal standards in deciding not to inform people about the use of their real-time data.

Photo-scraping scandal of IBM

IBM is one of the best-known American multinational hi-tech corporations using AI and Big Data to build world-changing technologies. In 2019, IBM was involved in a photo-scraping incident centred on one million photographs of human faces: to improve its AI-based facial recognition system, IBM had compiled a real-time dataset of roughly one million photographs. NBC discovered the direct source of the images to be Flickr, an online photo-hosting site.

The debate triggered over AI-based platforms

AI-based algorithms constantly demand real-time data to produce reliable knowledge through advanced machine learning, and a debate arose about AI-based systems created expressly for photo-scraping from publicly accessible sources with direct access to real-time data. Because of the rapid expansion of the internet sector, this controversy raised awareness among social media users about how their data is being utilized. Silicon Valley’s data monopoly and monetization have driven companies to take personal information without legal consent, sparking a wave of debate about the ethics of using active users’ real-time data on social media networks.

Conclusion 

A future filled with capable AI-powered robots and technologies may appear frightening at first, but some believe our future is bright and that the possibilities are far greater than we can now grasp. Rather than discussing the potential benefits of AI and how we might use it to better ourselves, create an ideal society, and even explore other worlds, I have recently seen experts dwelling on the perils of AI and painting a Terminator-style doomsday picture. This negative perspective is counterproductive and should not be allowed to stifle AI’s growth. Today, AI is not recognized as a separate entity under either national or international law, which means it cannot be held liable for the devastation it causes. The principle guaranteed by Article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts, which states that the person at whose behest the system was configured must ultimately be held liable for any act done or message generated by that system, may therefore be extended to AI liability.
