This article has been written by Parul Chaudhary pursuing the Diploma in International Data Protection and Privacy Laws from LawSikho.
Artificial Intelligence is becoming smarter every day. Huge amounts of data are fed into AI tools to make them more “human-like”. Initially, the idea was to relieve humans of repetitive and mundane everyday tasks by leaving them to the AI, which would perform them in the most “human-like” way a machine could. As time progressed, the big players in the technology world and their research departments have tried to surpass human capabilities altogether: a machine can process information faster, get work done more efficiently, and minimize errors to a large extent. However, in the process of making machines smarter, we have opened up a world of privacy externalities that need to be addressed. AI has intrusive effects not only on your personal data but also on that of others. There is therefore an urgent need for policy and law to keep abreast of such changes and expand their scope in order to better protect the privacy of all individuals.
This article aims to highlight how the deployment of AI-centric technology may expose the world to vulnerabilities that threaten security, privacy and data protection principles, and the need to bridge the gap between technological advancement and policy-making.
On one hand, AI tools are being used to enhance cyber security services, while on the other, the very same tool can be used to infringe the privacy of its users.
- Several AI-powered tools have been alleged to collect data even when the user is not actively using them or deliberately feeding data into them. Applications like Apple’s Siri, Google Assistant, and Amazon’s Alexa have been suspected of listening to users’ conversations and recording personal or sensitive data through such eavesdropping. This can also lead to function creep, where the data collected is used for purposes other than those for which it was intended.
- Another way in which AI infringes the privacy of individuals is through facial recognition systems that store personally identifiable information without the explicit consent of the individuals concerned. Such digital surveillance runs contrary to the concept of consent and compromises the right to privacy. Even when your data is part of a much larger data set, it may still be possible for an AI to de-anonymize you because of the recombinant nature of data, thereby breaching your right to privacy as guaranteed by the Indian Supreme Court in the landmark K.S. Puttaswamy v. Union of India judgement.
- Yet another concern linked with AI collecting and storing big data about its users is that it might be used for behavioural manipulation. AI technologies combined with machine learning can identify human foibles and turn them against users. Such behavioural control can shade into outright manipulation, and a day may not be far off when machines effectively govern human decision-making, because we will have fed them almost all of our storable data over time.
- Additionally, machine bias may lead to social inequality and discrimination. Courts in the United States have used an AI-powered tool called ‘Correctional Offender Management Profiling for Alternative Sanctions’ (COMPAS) to assess defendants in bail and related matters. A report by ProPublica found that the software was biased against African American defendants, producing unjust and prejudiced assessments of a particular group of people without a reasonable basis. In another reported instance of machine bias, an AI-driven facial recognition technology misidentified several Black women. Such defects in AI technology can lead to individuals being targeted on the basis of their race.
- In recent times, AI has also become a go-to tool for bad actors, from sexual predators to unscrupulous politicians. ‘Deepfakes’ have already started making the rounds in these circles.
These are machine-learning-manipulated audio/video recordings that can be morphed and distorted in any fashion to achieve all sorts of nefarious ends. Such warped content can be used for blackmail, political stunts, defamation, and the production of compromising videos or audio. Video deepfakes have already proven to be a menace in political campaigns and the lives of celebrities, and it is speculated that audio deepfakes will soon be rampant in distorting public opinion. Such audio could easily be used to paint any individual as sexist, racist, or anti-national. Our age-old tools of verification, our own eyes and ears, will fail us in such scenarios.
Increased vulnerability to cyber attacks
Deepfakes are not the only concern experts have about the malicious exploitation of AI technology. Computer scientists believe that hackers and cybercriminals can use AI as both tool and accomplice: machine learning algorithms could teach an AI to adopt social engineering as its means of cyber attack. Such systems, effectively ‘master manipulators’, could cause massive breaches of personal security and evolve over time to digitally attack military strongholds and giant corporations. This is not far-off dystopian paranoia; it is a strong probability that may already be a reality behind closed doors. Speaking of dystopia, AI has also been put to use in state surveillance. China is a nation-sized example, using ‘Orwellian’ facial recognition technology and well-placed CCTV cameras across the country to monitor every citizen.
Other potential risks
Since the line between the real and the unreal is becoming virtually blurred, the possibility of an influencer or social media personality being fabricated out of thin air using AI and deepfakes is not far-fetched. Another incident where social media data was used to invade privacy involved the website www.pleaserobme.com, which used users’ public tweets and location check-ins to show whether they were at home, information that burglars could readily exploit.
Even if we set aside the malicious uses and look at AI in an entirely positive light, it still fails to be as flawless as it is claimed to be. Targeted ads and misinformation campaigns appear to be two unrelated things, yet both rest on the same careful network of machine learning and data analytics.
Even sensitive personal information, such as a pregnancy, has been accurately predicted by AI; unfortunately, this too has been used for targeted marketing. These ‘miracles’ can also fall short through machine or data bias, which creeps in when AI systems forecast events from outdated or skewed data. The unfairness cannot be blamed on these non-sentient systems themselves, but their exploitation could cause destruction to life, mental health, and property. Peaceful coexistence could be at stake if our institutions and authorities were portrayed as evil tyrants, with fabricated evidence to back it up. Millions of people could be agitated without any actual human participation. Many of the potential dark sides of this technology are still unknown, and job automation is the very least of our fears.
Now let us understand how the existing position of the law attempts to regulate AI-based technologies and ensure compliance with the data privacy laws.
Ensuring data privacy compliance by AI
Under the Charter of Fundamental Rights of the European Union, Article 8 recognizes the right to the protection of one’s personal data. The EU General Data Protection Regulation (GDPR) does not speak of AI specifically, but it does regulate automated processing and decision-making. Through Article 22, the Regulation prohibits important decisions affecting individual liberty and privacy from being made by wholly automated data processing and profiling. This is a step towards maintaining fairness and transparency in decision-making and towards not allowing machines to dominate important life decisions or decisions of societal or national interest; a ‘right to explanation’ has also been read into Article 22. Another check on the development of AI is the right to erasure under Article 17 of the GDPR. Without data, an AI tool cannot be fueled to perform activities or make decisions; AI technologies depend entirely on data, so a deletion request constrains their algorithms and performance.
Moreover, AI is not immune to attacks by hackers and data thieves. Attacks on AI systems can create chaos in an otherwise smooth-running system and may lead to major losses. For instance, if an AI-driven car is hacked, it can wreak havoc on the roads and endanger life and property. If an AI-powered device like Alexa or Google Home malfunctions because its algorithm or code has been tampered with, it can make life difficult for the people who rely on it. All such scenarios should be regulated as far as possible through well-formulated policies. Privacy-preserving AI should be adopted to promote privacy by design, and feature hashing schemes can serve as an effective tool for strengthening cyber security and defence mechanisms while using AI.
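To make the feature hashing idea concrete, here is a minimal sketch of the “hashing trick”: arbitrary feature strings (for instance, tokens from a suspicious email in a phishing classifier) are mapped into a fixed-size numeric vector, so the system never needs to store or expose the raw feature names. The example data and the vector size are illustrative assumptions, not a production defence mechanism.

```python
# Minimal sketch of the hashing trick using only the Python standard library.
# Tokens are hashed to a fixed-size vector; a sign bit reduces collision bias.
import hashlib

def hash_features(tokens, dim=16):
    """Map a list of feature strings into a signed, fixed-size count vector."""
    vec = [0] * dim
    for tok in tokens:
        h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16)
        index = h % dim                      # which bucket the token falls into
        sign = 1 if (h >> 1) % 2 == 0 else -1  # signed hashing: +1 or -1
        vec[index] += sign
    return vec

# Hypothetical tokens from a suspicious email (illustrative only).
email_tokens = ["urgent", "verify", "account", "click", "urgent"]
print(hash_features(email_tokens))
```

Because hashing is one-way and the vector size is fixed, the model can learn from the data without retaining a dictionary of raw features, which is one reason the technique is discussed as privacy-friendly.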
The Canadian Supreme Court, in R v. Spencer (2014), recognized informational privacy in the online space and identified anonymity as one of its elements. Thus, if AI technologies are used to de-anonymize and re-identify individuals by combining data from multiple sources, this will breach an individual’s informational privacy.
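The kind of re-identification described above can be illustrated with a small, entirely hypothetical sketch: an “anonymized” dataset with names removed is joined against a public record on shared quasi-identifiers (pin code, birth year, gender), re-attaching identities to sensitive attributes. All records and names below are invented for illustration.

```python
# Hypothetical linkage attack: names are invented; no real data is used.
anonymized_health_records = [
    {"pin": "110001", "birth_year": 1985, "gender": "F", "diagnosis": "diabetes"},
    {"pin": "560034", "birth_year": 1990, "gender": "M", "diagnosis": "asthma"},
]

public_voter_roll = [
    {"name": "A. Sharma", "pin": "110001", "birth_year": 1985, "gender": "F"},
    {"name": "R. Iyer", "pin": "560034", "birth_year": 1990, "gender": "M"},
]

def link(records, roll):
    """Re-identify records by matching on shared quasi-identifiers."""
    matches = []
    for rec in records:
        for person in roll:
            if (rec["pin"], rec["birth_year"], rec["gender"]) == \
               (person["pin"], person["birth_year"], person["gender"]):
                matches.append({"name": person["name"],
                                "diagnosis": rec["diagnosis"]})
    return matches

print(link(anonymized_health_records, public_voter_roll))
```

Even though no name appears in the health records, the combination of a few innocuous attributes is enough to single people out, which is exactly the recombinant-data risk the courts are grappling with.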
Under the Indian Constitution, encroachment on privacy violates the fundamental right to privacy, which has been read into the right to life and personal liberty under Article 21. In 2012, a report by a group of experts constituted by the Government of India discussed technological neutrality and conformity with data protection and international privacy policies, but it did not propose any policy changes with specific reference to AI. More recently, however, the Ministry of Electronics and Information Technology has proposed an anonymization infrastructure for processing big data in order to protect the privacy of individuals.
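The report does not prescribe how such an anonymization infrastructure would be built; one common building block in this area is checking k-anonymity, i.e. verifying that every combination of quasi-identifiers in a released dataset is shared by at least k records. The sketch below, with invented field names and data, shows the idea.

```python
# Hedged sketch of a k-anonymity check; rows and column names are illustrative.
from collections import Counter

def k_anonymity(rows, quasi_ids):
    """Return the smallest group size over the chosen quasi-identifier columns.
    A dataset is k-anonymous if this value is at least k."""
    groups = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
    return min(groups.values())

rows = [
    {"age_band": "30-40", "pin": "1100**", "disease": "flu"},
    {"age_band": "30-40", "pin": "1100**", "disease": "diabetes"},
    {"age_band": "20-30", "pin": "5600**", "disease": "asthma"},
]

print(k_anonymity(rows, ["age_band", "pin"]))  # 1: the last row is unique
```

Here the result is 1 because the third row forms a group of its own, so this toy dataset would fail even 2-anonymity and the lone record could be singled out despite the generalized pin code.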
While the deployment of AI-powered tools raises concerns about technological dependency and job replacement, the intrusion into privacy cannot be overlooked. Restricting AI through privacy compliance may mean that AI cannot develop to its full potential, but that is no ground for bypassing the need to protect the privacy of individuals.
India has not yet passed the Personal Data Protection Bill, 2019, which leaves it lagging behind the progressive approach of the European Union. The General Data Protection Regulation is a pioneering document in the world of privacy and data protection. Lessons should be drawn from it, and a suitable component regulating AI and addressing its vulnerabilities should be incorporated into the new data protection policy in order to keep pace with the demands of a web-driven world.