This article has been written by Rushikesh Mahajan, pursuing a Diploma in International Contract Negotiation, Drafting and Enforcement, and has been edited by Oishika Banerji (Team Lawsikho). 

It has been published by Rachit Garg.

Introduction

What we know so far is that the two discussion papers on Artificial Intelligence released in February 2021 and August 2021, as Parts 1 and 2 respectively, were a follow-up to the June 2018 document released by NITI Aayog, the Government of India's public policy think tank, which has been responsible for highlighting the utility of responsible AI. The 2018 document provided a comprehensive overview of artificial intelligence, including its definition, potential applications, challenges to implementation in India, strategies for integrating AI into the economy, goals for enhancing efficiency, and recommendations for governmental action. The 2021 documents placed emphasis on two key areas, namely, the Principles of Responsible AI, discussed in Part 1, and the Operationalising Principles for Responsible AI, addressed in Part 2. We will use the information given in these documents and narrow it down to examine whether the present technologies that use AI are actually being implemented rightly. Part 1 of this article focuses on understanding the definition of artificial intelligence from a global perspective. Part 2 explains how AI is used as an integrated part of CCTV surveillance. Part 3 analyses the shortcomings of the AI integration process in facial recognition.

Understanding the definition of artificial intelligence

Professor Gary E. Marchant, in his paper published by the International Association of Defense Counsel, defined AI in its simplest form as “the development and use of computer programs that perform tasks that normally require human intelligence.”

Section 3(3) of the National Artificial Intelligence Initiative Act [NIAA], 2020, defines the term Artificial Intelligence as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to:

  1. perceive real and virtual environments;
  2. abstract such perceptions into models through analysis in an automated manner; and
  3. use model inference to formulate options for information or action.

Article 3(1) of the proposed Artificial Intelligence Act, 2021 states that the term “artificial intelligence system” means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

From an Indian perspective, Appendix I of the discussion paper published in June 2018 explains that “AI has been achieved when we apply machine learning to large data sets. Machine learning algorithms detect patterns and learn how to make predictions and recommendations by processing data and experiences, rather than by receiving explicit programming instruction. The algorithms also adapt in response to new data and experiences to improve efficacy over time.”
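
This description is essentially a description of supervised machine learning. The following is a minimal sketch of that idea in Python, assuming the scikit-learn library is available; the data, features and labels are invented purely for illustration and do not come from any real system.

```python
# A minimal sketch of the "learning from data" idea quoted above:
# the rule separating the two classes is never written by hand; the
# model infers it from labelled examples. All data here is synthetic
# and purely illustrative.
from sklearn.linear_model import LogisticRegression

# Toy training data: [hours observed, number of prior incidents]
X_train = [[1, 0], [2, 0], [3, 1], [7, 3], [8, 3], [9, 4]]
y_train = [0, 0, 0, 1, 1, 1]  # 0 = routine, 1 = flagged for review

model = LogisticRegression()
model.fit(X_train, y_train)  # patterns are inferred, not programmed

# Retraining on fresh data changes the learned behaviour over time,
# which is the "adapts in response to new data" property quoted above.
print(model.predict([[6, 2]]))        # predicted class for a new case
print(model.predict_proba([[6, 2]]))  # class probabilities
```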

2019 Kumbh Mela: an example of implementation of AI in CCTV surveillance

The Kumbh Mela is considered one of the largest religious gatherings on Earth, and it has shown how useful artificial intelligence can be at every step of the way, especially in the camera as a legal tool. The best example of AI implementation could be seen at the 2019 Kumbh Mela, where more than 1,000 cameras were deployed across a whopping 3,200 hectares. As explained in a 2021 article by Biru Rajak, Sharabani Mallick and Kumar Gaurav, the key capabilities of the system included CCTV security surveillance, facial recognition, automatic number plate recognition, and red light violation detection. The 2019 Kumbh Mela also best demonstrated the use of AI-assisted cameras that could identify suspected “trouble-makers”. This shows that AI’s usefulness has effects on both individuals and society as a whole.

The AI used here is tasked with identifying a person through a facial detection algorithm; in the next step, it builds a set of behavioural patterns that fit the profile of a potential criminal, and on the basis of this information it tries to prevent a crime. It thus effectively acts as a risk assessment tool. In a detailed article, Emaneulla Halfeld has thoroughly explained the need for ethics in assessing incarcerated individuals, criticising the risk assessment tool COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) for the way it assigns danger scores through an unfortunate application of machine bias. Her account in fact showed, with detailed illustrations, the discriminatory tendency of AI, rendering it untrustworthy. The problem is that the same behavioural pattern will prevail in India if laws to regulate AI are not implemented.
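
To make the detect-profile-score-decide pipeline described above concrete, here is a deliberately simplified, hypothetical sketch in Python. Every feature, weight and threshold is invented for illustration; real tools such as COMPAS are proprietary and do not disclose their internals, so this should not be read as how any actual system works.

```python
# Hypothetical sketch of the detect -> profile -> score -> decide
# pipeline described above. Every feature, weight and threshold is
# invented; proprietary tools such as COMPAS do not disclose theirs.
from dataclasses import dataclass

@dataclass
class Detection:
    person_id: str          # identity returned by a face-matching step (assumed)
    prior_flags: int        # how many times this person was flagged before
    loitering_minutes: int  # one of many possible behavioural features

def risk_score(d: Detection) -> float:
    # Toy linear score standing in for an opaque model.
    return 0.5 * d.prior_flags + 0.1 * d.loitering_minutes

def assess(d: Detection, threshold: float = 1.0) -> str:
    return "trouble-maker" if risk_score(d) >= threshold else "no action"

print(assess(Detection("A", prior_flags=2, loitering_minutes=5)))  # trouble-maker
```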

Consequences of AI driven CCTV on individuals

Let us assess the situation through a few examples and questions. First, we can focus on the personal impact of AI at the individual level, specifically in cases where the technology might erroneously identify an individual or falsely implicate an innocent person.

For example, if an AI decides that a person ‘A’ is a so-called ‘trouble-maker’, the question that would be extremely difficult to resolve in a court of law is: how did that AI program arrive at the conclusion that the behaviour was suspicious?

What factors did the AI cameras take into account while making that claim? How easily could a court understand the process by which the AI reached its conclusion, and would a court of law hold the AI’s claim of suspicion to be legitimate? The situation in which it is not possible to understand the decisions made by an AI and its machine learning capability is known as the ‘black box’ problem. In such instances, it remains unclear who bears responsibility for an error made by the AI. It appears that the government has unintentionally acquired an excessive amount of authority to intrude upon the personal lives of citizens.
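
A small sketch can show why such a decision is hard to explain in court. The example below assumes scikit-learn and NumPy; the data is random and the network is tiny, yet its “reasoning” already lives in over a thousand numeric weights, none of which maps to a human-readable justification.

```python
# Sketch of the "black box" problem: after training, the decision is
# encoded in thousands of numeric weights, none of which maps to a
# human-readable reason. Synthetic data, illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # 200 samples, 10 features
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # the true rule, hidden from us

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict(X[:1]))               # a bare yes/no answer...
print(sum(w.size for w in model.coefs_))  # ...backed by 1376 learned weights
```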

The responsibility for the decision cannot be attributed to the police or the administrative system, as it was made by an artificial intelligence system, thereby creating a gap in accountability. Ordinarily, a wrongly accused person can file a defamation suit for damages against the person who accused them; here, however, responsibility cannot be attributed to the AI, because its lack of personhood places it outside the scope of the criminal system, which pertains exclusively to human entities. In this case, the wrongly accused person will be left without recourse. Furthermore, for the loss caused to individuals at a personal level, the question arises as to who will compensate the wrongly accused. Should we hold the developers of that AI system responsible? Or the government agency? Or the private commercial company, entity or think tank engaged by the government as a consultant? Or the government employee who decided to call the person in for questioning? Before putting AI to work in the Indian economy, these questions will come up and need to be answered.

Consequences of AI driven CCTV on society

At the societal level, there are substantial implications of deploying AI into society as a solution. The problem is that the entities investing in AI technologies will have the opportunity to design AI to cater to their own needs, which may influence how it is applied on the ground. In an article on this theme, Adam Schwartz criticised the government’s use of surveillance cameras and tools in Chicago’s surveillance system as a troubling step towards the world depicted in the dystopian novel 1984. His suggestion was to add privacy safeguards to ensure the protection of fundamental rights. One thing to note here is that the COMPAS tool discussed previously was developed by a private company, Equivant (formerly Northpointe).

Since AI works on probability and accuracy, there is a high possibility that AI surveillance cameras will tag the same person ‘A’ as a ‘trouble-maker’ again, creating what is known as machine bias: because person ‘A’ was flagged before, the AI will, through machine learning, unconsciously decide that this person may show the same behavioural pattern and will inadvertently tag them again, which is a grave violation of several fundamental rights under the Constitution.
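
The feedback loop described here can be shown with a toy calculation; all the numbers below are invented. Because yesterday’s flag feeds back into today’s score, the score for ‘A’ keeps rising and the system keeps re-flagging them, even though the observed behaviour never changes.

```python
# Toy illustration of the machine-bias feedback loop described above.
# All numbers are invented. 'A' starts with one prior flag; since the
# flag itself feeds back into the next score, 'A' keeps getting
# re-flagged even though the observed behaviour never changes.
def score(prior_flags: int, observed_risk: float) -> float:
    # Hypothetical weighting in which prior flags dominate the outcome.
    return observed_risk + 0.6 * prior_flags

flags = 1  # 'A' was flagged once before
for day in range(1, 5):
    s = score(flags, observed_risk=0.5)  # identical behaviour every day
    if s >= 1.0:                         # arbitrary flagging threshold
        flags += 1                       # today's flag is tomorrow's input
    print(f"day {day}: score={s:.1f}, total flags={flags}")
```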

On the other hand, there is the possibility of bias built into AI by design, also known as discriminating artificial intelligence, where the individuals, entities, companies, think tanks or research centres who design the AI algorithms create a machine bias, designing the system to cater to their own personal needs: ‘if I am a developer, I might feel like designing an AI that helps my own community, or worse, one that favours people from my hometown’. Such a design would structurally violate Article 15.

Suggestions

Artificial intelligence in cameras was enforced without first passing a suitable regulatory law, along the lines of the European Union’s Artificial Intelligence Act or the National Artificial Intelligence Initiative Act [NIAA], 2020. As the experiences of both the United States and the European Union show, implementing artificial intelligence before establishing regulatory laws can result in significant disruption: in both jurisdictions, the deployment of AI technology preceded the governing and regulatory laws. We need to fill these gaps by first implementing laws that answer the questions raised above, and only then applying these technologies.

For example, according to the discussion paper published in June 2018 and the summarised article written by Professor V P. Gupta, facial recognition cameras fall under artificial narrow intelligence systems. On that basis, what we need is to move from artificial narrow intelligence systems towards artificial general intelligence systems, so that the inquiry can address the justification behind naming the person in question a “trouble-maker”. Such a measure would not only carry legal validity but would also be more likely to furnish rational justification.

Article 5(1) of the Artificial Intelligence Act, 2021 outlines in great detail the activities that cannot be carried out using artificial intelligence (AI), with the goal of preventing these issues from occurring. It should not be too difficult for our government to create new laws if it looks to existing laws from other countries for guidance and uses them as a starting point. The legislature should pass a law, or at the very least a strategy, to police AI, so that it may be governed within clear boundaries.

Article 9 of the same Act explains the detailed requirements for formulating risk assessment tools. It is important to acknowledge the potential for employees tasked with overseeing AI risk assessment tools to commit privacy violations for personal gain. A mechanism should therefore be established within the government infrastructure, similar to the Right to Information Act, 2005, that would enable individuals to scrutinise, or solicit an audit from the government regarding, any potential misuse of the CCTV surveillance system. The government must ensure complete transparency in elucidating any potential misuse of the closed-circuit television (CCTV) system.

In addition to the currently enforced CCTV system, it is recommended that the government establish and enforce a compulsory insurance scheme, such that in the event of an error by the AI, compensation is provided to the affected party on behalf of the AI by the insurance company, thereby ensuring accountability on its part.

Conclusion

It can be inferred that in light of the government’s implementation of AI tools for welfare, it is imperative that corresponding safety measures be enacted to safeguard citizens against potential misuse of the technology, particularly in the absence of regulatory frameworks to govern its operation.
