
This article is written by Oruj Aashna, from the University of Calcutta. The article addresses how artificial intelligence works, analyses AI’s legal issues and whether AI performs in an ethical manner, and sets out the regulatory steps taken by the government to mitigate the risks involved in artificial intelligence.

Introduction 

Artificial intelligence (AI), as the name suggests, is machine-based intelligence created by humans to deploy services in a more personalized way. One may picture AI as humanoid robots with cognitive skills, capable of thinking at the level of human intelligence, which in reality is not the case even for advanced AI: systems that use machine learning techniques and algorithms. Today we are very much in touch with AI, be it while surfing through Netflix to watch our favourite show or simply using Google Maps for a road trip. AI is pretty much everywhere.

AI is now widely deployed, and many organizations have installed AI in their management systems. The underlying idea of embedding AI in an organization is to learn customers’ behaviour and predict their future actions through the massive data collected from end-users. Serving customers is far easier with such systems. From the viewpoint of customers, it is a most helpful invention, impacting lives positively by simplifying and managing tasks according to our needs. For example, it directs us to an exact location, helps extract key elements from a legal contract, assists in research, shows recommendations as per our taste, etc.


We may think of AI as a significant achievement in the development of technology. But little do we know about the beneficiary (the person who gains) of artificial intelligence. Of course, the ultimate beneficiary is not the programmer; it is the owner of an organization, or even a government or its agency, who gains the advantage. AI is handled and controlled by a few people, and we know very little about how it operates. The AI process has created a “black box” not just for customers but also for its creators: AI is called a black box because no one knows exactly how it reaches decisions from these data patterns (algorithms).

The question of ethics and legality in AI comes to light when artificial intelligence software and machines do things that would be ethically or legally wrong if done by humans, such as invading privacy, violating human rights, or causing damage to property and health.

What is artificial intelligence 

The term ‘artificial intelligence’ was first coined by the American computer scientist and AI researcher John McCarthy. He made exceptional contributions to computer science and mathematics that resulted in the invention of artificial intelligence and interactive computing systems. He invented LISP, a high-level programming language that is still relevant in the area of artificial intelligence.

According to the Dictionary of Computing, AI is defined as:

“…trying to solve by computers any problem that a human can solve, faster.” 

The above definition reflects the exact image people have of AI, i.e., a machine that is able to act and think like humans, or even better. Natural questions arise: can a mere machine surpass human intelligence? Is AI capable of philosophical and abstract reasoning? The answer to these questions is no.

Artificial intelligence cannot think or introspect on reasoning anywhere close to human intelligence. It does have intellect, but that intellect is infused by the human mind and cannot surpass, or even reach, the level of human intelligence. Moreover, its intellect relies on what is fed to it by humans. It learns to make decisions based on the masses of data stored in it. Therefore, data is what fuels artificial intelligence.

Artificial Intelligence and ethics

There is a continuous debate around ethical conflicts in AI. The foremost public concern about AI is its inscrutability and lack of transparency. The lack of transparency in AI is not just because it is a new technology but because of its complexity, which makes it impossible for a layperson to understand AI’s learning process. We do not actually know how AI acts and arrives at an unambiguous decision. AI operates behind a veil, in the hands of a few developers, into which no one can see. This has made AI a mystery, captured by what we call the “black box” theory.

Let’s take the example of Gmail spam. The AI algorithm used in Google’s Gmail identifies ‘spam’ and relocates such mail to the spam folder. But few people know anything about the identification process that leads to the classification of spam. While this is just one example, if we look deeply into the involvement of AI in our daily life, we will see its decisions everywhere.
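To make this concrete, below is a minimal sketch in Python of how a learned spam filter works, using scikit-learn. This is purely illustrative: Gmail’s real pipeline is proprietary and far more complex, and the tiny dataset here is invented. The point is that the model’s rule is learned from data rather than written by a programmer, which is exactly why its individual decisions are hard to explain.

```python
# Minimal illustrative spam filter -- NOT Gmail's actual system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

mails = [
    "win a free prize now",
    "limited offer click here to win",
    "meeting rescheduled to monday",
    "please review the attached contract",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam (invented training data)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(mails)      # bag-of-words counts per mail

model = MultinomialNB().fit(X, labels)   # learns word-vs-spam statistics

new_mail = vectorizer.transform(["free offer, click now to win a prize"])
print(model.predict(new_mail))           # -> [1], i.e. classified as spam
```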

Suppose a company says it follows all data protection regulations and provides adequate transparency, but cannot explain the process behind its AI’s algorithmic decisions. In that case, it is not transparent enough and is not ethically sound, even if it is legally compliant. The lack of transparency about how AI makes decisions contributes to distrust among the general public.

This leads me to my second question: is AI neutral while making decisions? The answer is no. AI shares our workload by simplifying tasks and making quick decisions, but it can also make biased decisions. Since AI is man-made, the machine learning algorithms and data supplied by humans can carry a human’s biases into the machine.

Cognitive biases can have life-altering consequences. It is, however, unclear how much, or to what degree, human biases can creep into artificial intelligence systems. But cognitive bias dwells in AI algorithms in various forms. An AI system learns and trains itself according to human instructions and data, which can reflect biases of any form, such as historical or social inequality.

For example, Amazon stopped using the ‘hiring algorithm’ after discovering that the algorithm was biased against women. The company found that the algorithm favored applicants who used words like “executed” or “captured” (words most commonly found in men’s resumes).
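Amazon never published its model, so the following Python sketch is only a hypothetical illustration of the mechanism: a classifier trained on biased historical hiring labels ends up rewarding words that merely correlate with the favoured group.

```python
# Hypothetical illustration (not Amazon's system): a model trained on
# historically biased hiring decisions absorbs the bias into its weights.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed project captured market share",      # historically hired
    "executed strategy captured key accounts",     # historically hired
    "led women's chess club organised outreach",   # historically rejected
    "organised women's coding society",            # historically rejected
]
hired = [1, 1, 0, 0]  # biased historical labels (invented)

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect learned weights: 'executed'/'captured' get positive weight,
# 'women' gets negative weight -- the bias is now baked into the model.
for word, coef in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{word:10s} {coef:+.2f}")
```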

In the context of AI and ethics, one cannot overlook the automated public sphere it has created in our environment. Rather than reading newspapers, we now use Facebook, Twitter, and Instagram to gather information, and these platforms are more efficient at disseminating information, and in a personalized manner at that.

As users of online media, we know that these platforms collect a large amount of personal data, be it a name, location, preferences, or opinions, and show results or news as per people’s preferences. Their algorithms surface information that can attract maximum public views, irrespective of whether it is accurate or decent. Through this data, these organizations make online virality their metric of success and promote material that has received a great deal of attention or seems to match a user’s personalization profile.
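A hypothetical sketch of such an engagement-driven ranking, in plain Python, shows how easily ‘success’ becomes a function of attention rather than accuracy (the posts and scoring formula below are invented for illustration):

```python
# Hypothetical engagement-driven ranking (all values invented):
# posts are scored by predicted attention, not by accuracy or public value.
posts = [
    {"title": "Sober policy analysis", "clicks": 120, "shares": 10},
    {"title": "Outrage-bait rumour", "clicks": 9000, "shares": 800},
]

def virality_score(post, user_affinity=1.0):
    # 'success' = raw engagement, optionally boosted by how well the post
    # matches the user's personalization profile (user_affinity).
    return (post["clicks"] + 5 * post["shares"]) * user_affinity

feed = sorted(posts, key=virality_score, reverse=True)
print([p["title"] for p in feed])  # the rumour ranks first
```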

Consequently, this process reduces pluralism by elevating profit considerations over the democratizing function of public discourse, effectively automating the public sphere. Decisions made by the public are now controlled and influenced by these platforms for profit-maximization purposes.

One main legal issue regarding AI is liability in the event of a failure of AI technology: who will be responsible if a failure occurs while using these systems? Most of the time, companies utilizing AI tend to evade responsibility. For instance, in Google’s “right to be forgotten” case, the company argued that it is not responsible for the results its search engine gives, because it is the algorithm that produces them.

The “black box” theory 

The term black box, as used for artificial intelligence, signifies the opacity it involves. In simple terms, a black box is a system that acts like a veil, concealing its operations and inner workings from the user. The technologies most commonly affected by the black box phenomenon are those that use artificial intelligence and/or machine learning techniques.

Why black boxes exist in AI

Modern AI is often built with deep learning models, which typically behave as black boxes. Artificial neural networks consist of hidden layers of nodes. Each node processes its input and transfers the output to the next layer of nodes. Deep learning builds on artificial neural networks that learn on their own from the patterns formed across these nodes.

The algorithm takes in millions of data points as input and connects specific data features to produce an output. Since this feature-learning is self-directed, the results produced by the algorithm are difficult to interpret. Even a data scientist often cannot explain the result the AI will produce.
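The following bare-bones NumPy sketch shows the layered structure described above: each layer’s nodes take a weighted sum of the previous layer’s outputs and pass the result on. The weights here are random stand-ins; in a trained network they are millions of tuned numbers with no individual human meaning, which is why inspecting them explains so little.

```python
# A bare-bones feed-forward network, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input -> hidden layer 1
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)   # hidden 1 -> hidden 2
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden 2 -> output

def forward(x):
    h1 = np.maximum(0, x @ W1 + b1)   # each node: weighted sum + ReLU
    h2 = np.maximum(0, h1 @ W2 + b2)  # one layer's output feeds the next
    return 1 / (1 + np.exp(-(h2 @ W3 + b3)))  # sigmoid score at the end

x = rng.normal(size=(1, 4))  # one input with 4 features
print(forward(x))            # a number, with no human-readable 'why'
```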

The business issue of AI inscrutability arises from this black box. When such software is used for any critical operation, the employers and employees associated with that operation have no insight into the process. This opacity can cause massive damage within the organization if an error occurs and goes unnoticed. Sometimes such damage is expensive, or even impossible, to repair.

If such a circumstance arises from black-box AI, it may continue long enough for the company to suffer damage to its reputation and, potentially, face legal action.

Regulations on AI in India 

There is no regulation or law in India that specifically regulates artificial intelligence, machine learning, or big data. But the government has felt the need to look at the development and implications of artificial intelligence. The government, as of now, intends to expand the application of artificial intelligence in the Indian environment.

Several ministries and government bodies have stepped forward to take initiatives towards AI regulation. These include the Ministry of Electronics and Information Technology (MeitY), the Ministry of Commerce and Industry, the Department of Telecommunications (under the Ministry of Communications), and the NITI Aayog.

According to NITI Aayog’s report “National Strategy for Artificial Intelligence”, the government intends to maximize the ‘late mover’s advantage’ in the AI sector by ‘consistently delivering homegrown pioneering technology solutions’ in AI, tailored to India’s needs, to help the country leapfrog and catch up with the rest of the world.

NITI Aayog also released a draft discussion paper for stakeholders on the area of responsible AI.

NITI Aayog plans to tackle the issues related to artificial intelligence through suggestions such as:

  • Setting up an IP regime for AI innovations, with a task force comprising the Ministry of Corporate Affairs and the Department of Industrial Policy and Promotion to examine and issue modifications to intellectual property law;
  • Developing a data privacy legal framework to protect human rights and privacy; and
  • Creating sectoral regulatory guidelines encompassing privacy, security, and ethics.

MeitY constituted four committees for the development of the regulatory framework for artificial intelligence. The four committees are:

(a) The first committee, on platforms and data for artificial intelligence;

(b) The second committee, on leveraging AI to identify national missions in critical sectors;

(c) The third committee, on mapping technology capabilities, key policy enablers required across sectors, and skilling and reskilling; and

(d) The fourth committee, on cybersecurity, safety, and legal and ethical issues.

The four committees of MeitY, as mentioned above, laid down the following recommendations:

  • The development of an open National Artificial Intelligence Resource Platform (NAIRP) for knowledge integration and awareness for AI and ML;
  • Establishment of a committee of stakeholders to dissect the area of AI in a multidisciplinary way. The committee will review the existing laws to make the amendments or modifications to align with AI development;
  • The stakeholders shall deliberate whether AI should be considered a legal person and establish a scheme or compensation fund to compensate for damages in civil liability claims;
  • Use of government procurement contracts to focus on best practices relating to security and privacy issues;
  • AI frameworks should set out broad principles, and companies should be allowed to build their internal programs in compliance with the framework, providing the flexibility to adapt to technological development;
  • The government should propose the development of safety parameters and safety thresholds to ensure that human interaction with AI does not harm people and property in any way; and
  • Standards should be made to address the AI development cycle.

Fairness, accountability, and transparency in AI regulation in India

AI regulation includes three elements, i.e., fairness, accountability, and transparency, also known as F-A-T. The FAT (fairness, accountability, transparency) or FATE (fairness, accountability, transparency, and ethics) principles ensure that any AI-based solution or application contains these elements for the safe, responsible, ethical, and accountable deployment of AI tools.

Fairness

Fairness in AI means that it should not be biased against any group or segment. As discussed above, AI relies on huge amounts of data, usually collected manually, and provides results based on that data. As observed in several cases, the data collected is rarely neutral in nature.

For example, algorithms used by US courts predict the likelihood of a defendant reoffending. But it was observed that the algorithm’s reoffending predictions were highly inaccurate, showing black defendants as more likely to reoffend. The predictions were apparently based on historical criminal statistics in the US, which over-represented black defendants as reoffenders. The results were seen as heavily biased, lacking the fairness that is the essence of justice.
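One way auditors quantified this kind of bias was by comparing error rates across groups, for instance the false positive rate: how often people who did not reoffend were nevertheless flagged as high risk. The sketch below uses invented numbers purely to illustrate the calculation.

```python
# Comparing false positive rates across two groups (numbers invented).
import numpy as np

# 1 = predicted to reoffend; in 'actual', 1 = did reoffend.
pred_a   = np.array([1, 1, 1, 0, 1, 0])   # group A
actual_a = np.array([1, 0, 0, 0, 1, 0])
pred_b   = np.array([0, 1, 0, 0, 1, 0])   # group B
actual_b = np.array([0, 1, 0, 0, 1, 0])

def false_positive_rate(pred, actual):
    negatives = actual == 0                # people who did not reoffend
    return (pred[negatives] == 1).mean()   # ...but were flagged anyway

print(false_positive_rate(pred_a, actual_a))  # 0.5 -> group A over-flagged
print(false_positive_rate(pred_b, actual_b))  # 0.0
```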

The regulatory system in India has stepped forward to look into this issue and ensure fairness. In this context, NITI Aayog proposed AI data-training solutions that will help guide and develop unbiased AI.

Its draft “AI for All” put forward the idea of utilizing technical solutions to ensure fairness in the data fed to AI. One such tool is IBM’s ‘AI Fairness 360’, an open-source toolkit used to detect bias in AI; it checks datasets and machine learning models using state-of-the-art bias-detection algorithms. Another tool for the same purpose is Google’s ‘What-If Tool’ (WIT), a user-friendly interface for probing the behaviour of black-box classification and machine learning (ML) models without writing any code.

Some other tools are Fairlearn, which assists data scientists and developers in improving the fairness of AI, and FairML, an open-source toolkit for auditing machine learning models.
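As a hedged illustration (assuming the fairlearn package is installed), the snippet below shows the kind of one-line fairness check Fairlearn enables: the demographic parity difference is 0 when a model selects both groups at the same rate.

```python
# Illustrative Fairlearn check on invented predictions.
import numpy as np
from fairlearn.metrics import demographic_parity_difference

y_true = np.array([1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0])              # model's decisions
sex    = np.array(["F", "F", "M", "M", "M", "F"])  # sensitive feature

gap = demographic_parity_difference(y_true, y_pred, sensitive_features=sex)
print(gap)  # selection-rate gap between groups; larger = less fair
```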

MeitY, on the other hand, aligns with the government’s support for self-regulation of AI in India. The ministry also proposed self-regulatory bodies for stakeholders, so that they can test their technology solutions and formalize their best practices. This proposal would avoid government intervention and the imposition of hard regulation on the algorithms behind AI solutions.

Accountability  

Accountability in AI is the determination and attribution of liability in the event of loss or harm arising from the use of AI solutions. Since AI solutions learn algorithms from big data and display results according to that data, questions of accountability and responsibility arise most prominently with these self-learning AIs. Moreover, inadequate consequences reduce the incentive for responsible AI development and hinder grievance redressal.

Proponents put forward the idea of distributed responsibility for AI solutions. Distributed responsibility seems a good conceptual solution because decisions based on AI are the result of interactions among several actors, such as developers, designers, users, software, and hardware, and it is important to distribute responsibility across each role.

However, split responsibility does not solve the problem in its entirety, as in practice it is not possible to pinpoint the exactly responsible actor, given the number of interactions and other challenges. As the name suggests, ‘distributed responsibility’ distributes responsibility equally, but there are cases where one party is more responsible than another. Responsibility attribution may also be difficult because one or more parties may misrepresent their contribution in order to evade responsibility.

According to NITI Aayog, there should be an attainable and practical solution to this issue. For instance, a shift from ascertaining fixed liability towards objectively identifying the fault and preventing such fault in the future would prove more practical. The think tank also proposed several other solutions, some of which are:

  • Introduction of measures protecting AI solutions from liability where appropriate steps were taken to monitor, test, and improve the AI product;
  • Introduction of an actual-harm policy to prevent lawsuits for speculative damages;
  • Creating a framework to proportionally distribute liability among stakeholders; and
  • Moving away from strict liability and adopting a standard of ‘negligence’ to ascertain liability.

MeitY came up with a framework supporting the idea of proportionate liability between deployers and designers. The framework also acknowledges the need for a flexible draft to direct and guide self-regulation by stakeholders. It further raised the question of whether AI should be considered a legal person, since its decisions are automated.

Contrary to the above, the Indian Society of Artificial Intelligence and Law, a think tank devoted to AI and law, proposed fixing liability on developers for damage caused by AI/ML models.

Transparency

Transparency means that the stakeholders of an organization should be in a position to disclose how an AI produces an output from a given input. In short, stakeholders should provide a window into how AI solutions work. However, as discussed earlier in this article, it is difficult to unpack the inner workings of these algorithms; even developers are not able to fully understand an AI’s results and workings. This takes us back to the black box phenomenon (discussed earlier), where the inputs and functioning of AI are hidden.

A central concern that emerges from the black box phenomenon is the failure to explain how an AI has reached a result. NITI Aayog’s and MeitY’s takes on this issue are similar: enhance the explainability of the algorithm.

This push for ‘explainability in AI’ is now called ‘Explainable AI’ (‘XAI’). XAI is an emerging area in machine learning that addresses how the black-box decisions of AI systems are made and the steps involved in making them. It answers questions like: why did the AI come to a specific conclusion? Why did the AI system not come to some other solution? When do AI systems fail and succeed? How can AI systems correct errors?
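The article does not prescribe a particular XAI method; permutation importance is one common, model-agnostic technique, sketched here as an example: shuffle one input feature at a time and measure how much the black box’s accuracy drops, so the features that matter most drop it furthest.

```python
# Explaining a black-box model via permutation importance (scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Shuffle each feature 5 times and record the mean drop in accuracy.
result = permutation_importance(model, data.data, data.target,
                                n_repeats=5, random_state=0)

# Rank features by how much shuffling them hurts the model.
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda t: t[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name:25s} {score:.3f}")
```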

NITI Aayog has put forward some other proposals in this area as well.

On the other hand, MeitY recommended that the government designate particular AI applications that require an explainable AI model, to mitigate potential harm and discrimination.

Analysis

Ethics in AI means the application of principles and values to a machine so as to restrain the harm it could potentially create. A whole body of ethical guidelines on right and wrong has been developed in recent years to make technology meet ethical expectations as far as possible. But the applicability of morals to AI and machine learning is still in question.

However, the initiative towards safety-critical AI is one of the most promising ideas for safe, ethical, and responsible AI. Safety-critical AI is a step towards building AI technologies that are safe, predictable, and trustworthy, and that align with individuals’ ethical and normative expectations.

India’s initiatives towards AI regulation and responsible AI will become a backbone not just for safe AI but also for the development of technology in India. Regulation will avert the threats to privacy and security involved in interacting with artificial intelligence.

One should not neglect the application of law to AI, as ethical AI will be unsustainable in the absence of regulation. Ethics and law should go hand in hand, because if ethics are not enforceable, no one will follow them.

The government, developers, and companies should come forward and bring about policies that help build safe, responsible AI. Since these three groups utilize most of the data extracted by AI, the expectation of formulating reliable AI rests on them.

Conclusion 

Heated arguments about whether artificial intelligence is a threat to society persist, and will continue as long as AI and ethics are not aligned. According to the Ministry of Electronics and Information Technology, artificial intelligence is currently applied only for limited purposes. It states that even if a machine is created with intelligence surpassing human intelligence, there is no reason to believe it would dominate the world. It adds that the development of artificial intelligence will also accelerate the need to control such machines.

AI is a lucrative area where innovation will take shape and produce a sustainable environment. Banning AI or applying complex rules in this area will hinder technological development and innovation. In response to the proposed personal data and non-personal data regulations, the tech sector has said that onerous regulation would significantly impact innovation. Governments, organizations, and developers should come together to solve the issues of data security and opacity in AI. The involvement of the private sector is essential, as industry can ensure efficient and neutral solutions in the area of artificial intelligence. In India, however, regulation of artificial intelligence is in its infancy, as the government and policymakers are still trying to understand AI and its negative and positive effects on society. But the development and popularity of AI are certainly not going to fade. Hence, legal regulation of AI solutions is important.
