This article is written by Advocate Nupur Mitra, LL.B (Jamia Millia Islamia), LL.M (London School of Economics), Fellow (Columbia University), who is pursuing a Diploma in Cyber Law, FinTech Regulations and Technology Contracts from LawSikho. The article has been edited by Smriti Katiyar (Associate, LawSikho).
In this growing world of technology, business enterprises must incorporate human rights standards into all activities encompassing their businesses. Technology is the next generation of human operations, and technology corporations are emerging in large numbers to introduce, develop, and advance such innovations. These changes are perceived to carry the human race towards a world that may seem fictional today but could become reality in very little time. Consequently, their impact on human security and human development cannot be ignored, making it imperative to ensure human rights due diligence and human rights impact assessment in this evolving corporate world of technology. The present paper aims to set forth a discourse on the need to ensure human rights within the realm of technology businesses, where humans play a pivotal role in developing these complex machines and algorithms 'for' humans, 'of' humans, and 'by' humans. The conduct of technology businesses with respect to human rights commitments is an essential component of the analysis in this paper, before marking the way forward for the divergent stakeholders ranging from governments to consumers.
International law and the need for stakeholders of the technology industry to follow human rights principles
Due diligence and impact assessment have the potential not only to track and mitigate areas of possible vulnerability and violation, but also to foster a human-rights-sensitized environment in the field of technology, which is crucial to the healthy development of the technology industry as a whole. Against this backdrop, it is pertinent to examine international human rights law in the light of technology businesses.
The United Nations Human Rights Committee reemphasized in 2011 that Article 19(2) of the International Covenant on Civil and Political Rights (ICCPR) includes, within the ambit of the traditional freedoms of expression and information, the freedom to receive and communicate information, ideas, and opinions through the internet.
The Human Rights Council affirmed in 2015 that the rights people hold offline must be equally protected online.
The United Nations has time and again held that international humanitarian law and international human rights law, both branches of international law, apply in cyberspace.
The UN Guiding Principles on Business and Human Rights (UNGPs) emphasize the responsibility of the private sector to embed human rights due diligence in business enterprises globally: assessing and mitigating potential risks to human rights along the way, and taking measures to remedy any adverse impact on human rights.
The UN Group of Governmental Experts on Developments in the Field of Information and Telecommunications re-emphasized in 2015 the importance of governments respecting human rights and fundamental freedoms and abiding by the UN resolutions on human rights on the internet and the right to privacy in the digital era.
Present conduct of technology vis-à-vis rights
The present paper focuses on one area of the technology industry that has utility in cyberspace, namely artificial intelligence (AI), and highlights the urgent need for such technologies to be regulated before they evolve as autonomous decision-making bodies.
A global void exists in the law, regulation, and development/implementation of AI technologies. This involves risks that may outweigh the opportunities if they remain disregarded. Understandably, the indeterminate contours of this currently undeciphered area of technology make it challenging to anticipate and articulate a rigid set of laws or regulations, yet it is a necessary obligation upon stakeholders to devise innovative strategies of checks and balances through effective policies and frameworks for regulation, before a new form of capitalism that strives for profit at the cost of rights, freedoms, and viable sources of income takes shape.
The UN High Commissioner for Human Rights, Michelle Bachelet, warned on 15 September 2021 against the use of artificial intelligence as a tool for automated decision-making and profiling, owing to its unwarranted impacts on the rights to equality, employment, privacy, fair trial, life, and health, and on the freedoms from arbitrary arrest or detention and of movement, expression, peaceful assembly, and association. She emphasized stricter legal enforcement around the use of AI technology, as it poses high risks to human rights.
The International Committee of the Red Cross flagged partly autonomous weapons as a matter of concern as early as 1987, and this concern holds all the more for the fully autonomous models of the present era.
It is commendable that social media platforms are adopting technology to filter the dissemination of objectionable and wrongful material and even to combat terrorism, but the lack of transparency surrounding content-moderation protocols raises concerns: such filtering may amount to an unwarranted restriction on legitimate free speech and thereby an unlawful encroachment on the freedom of expression.
Automatic filtering of user-generated content at the point of upload also implicates intellectual property rights. The misuse of automated technologies has a significant impact on the right to freedom of expression and the right to work when bots, troll armies, and targeted spam are engaged, in addition to algorithms, to determine how content is displayed and disseminated. Lack of data security contravenes the right to privacy, a basic human right in the era of transforming big technology.
AI algorithms and face-recognition systems pose a potential risk to the guarantee of equality. One advanced image-recognition system labelled people of colour as gorillas. Search engines have returned sexually explicit material for the query 'black girls'. One AI algorithm systematically undervalued and ignored the need for additional medical care for Black patients. According to a report from the Carnegie Endowment for International Peace, more than 40% of countries actively use AI to manage border security. Racially marginalized groups, among others, run the risk of being labelled high-risk offenders through the predictive policing enabled by face-recognition technology, which is increasingly adopted in criminal justice systems. This surveillance is seen to have a disparate impact on populations that are already vulnerable at the hands of the criminal justice system, namely refugees and irregular migrants. AI may thereby become a tool, or a weapon, that entrenches systemic bias and 'dirty data'. If a system is fed human biases, whether intentional or not, the result will inevitably be biased, reinforcing the need for domestic and international legal systems, and the corporations involved, to formulate guidelines for the ethical use of algorithms.
The technology being widely welcomed by nations, and the technology giants poised for high annual turnovers, may be on the brink of leading humanity into disproportionate vulnerability through the exacerbation of discriminatory practices, thereby violating Article 2 of the Universal Declaration of Human Rights and Article 2 of the International Covenant on Civil and Political Rights, both instruments being foundational to the entitlement of human rights and freedoms free from any form of discrimination.
Though it is recognized that AI is well on its path towards transforming existing businesses and human lives for the better through more efficient machinery and services, this endeavour is also resulting in huge displacement of human labour by machines. It has recently been reported that algorithms have effectively 'fired' company employees. Researchers and academicians have estimated that 35% of UK jobs are at high risk of digitalization in the next two decades and that 47% of US jobs risk falling prey to future automation in the AI-driven industry. There is no doubt that the precision of AI has drastically raised the efficiency of manufacture and the quality of products, but such precision and efficiency are costing the human workforce. Corporations and factories are now set on replacing up to 90% of their human workforce to earn higher profits through bulk productivity with the fewest errors and defects. Nor is this limited to manufacturing; it extends to technology businesses that sell AI-based software to replace personal assistants, translators, customer-service providers, phone operators, and so on. Article 23 of the UDHR, Article 6 of the ICESCR, and Article 1(2) of the ILO Convention stand undermined by the unfettered emergence of AI and similar technologies, putting at risk the sustainability of the human rights guaranteed to the population.
Philip Alston, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, unambiguously emphasized in his 2010 report that technological shifts in humanitarian sectors through AI algorithms and modules have caused severe violations of international humanitarian law. Beyond the humanitarian concern, fully autonomous weapons may threaten civilian lives, causing civilian deaths and injuries. Innumerable unintended, non-targeted cross-border drone strikes, compounded by the absence of ethical judgment and human empathy and by machines' incapability to distinguish friend from foe, are a real threat to the world. The danger is far graver in war and armed conflict, where such accidental deaths of civilians may aggravate conflicts among countries, putting humanity at large at risk. If this autonomous technology is not regulated at its nascent stage and obligated to respect human rights within war zones and outside them, it will soon grow into a fully autonomous industry of autonomous weapons built to destroy human lives.
The way forward
Jon Bing, a writer and law professor, observed that technology itself has practically become a subject of law, which makes legislating and implementing the rules and policies circumscribing it a real challenge, as opposed to the traditional situation in which humans alone are the subjects of law.
Technology industries such as AI have the potential to change the world into a better place to live and grow, with the essential caveat that they deliver ethical, value-enhancing applications of the technology and operate with fairness, privacy, and security. Since AI is perceived as a tool for the modernization of human society, the lack of stringent data protection policies offers the technology industry and its giants a society ready to be digitally exploited and consequently deprived of the enjoyment of rights inherent in the very being of a human.
From fostering discrimination to threatening privacy, life, and the freedom of association, and from invasive surveillance practices to the wrongful criminalization of innocents, AI has proven to be a threat to an array of human rights and basic freedoms. Reversing these trends requires stakeholders in our digitally transforming societies to implement proper standards urgently: greater transparency in AI decision-making processes, stronger accountability for technology-driven industries, and the ability of civil society to challenge unlawful and immoral implementations.
The Government of India formulated a few guiding policies in 2020 that accentuate the conscious development of 'Explainable AI', which provides user-friendly, evidence-backed explanations of the AI process, and a model called 'Federated Learning', in which algorithms are built across multiple decentralized collaborating devices without the secured data ever leaving the local device.
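The core idea of federated learning can be illustrated with a minimal sketch of federated averaging: each device trains on its own data and shares only model parameters, which a coordinator averages into a global model, so raw data never leaves the device. The function name and the weight values below are illustrative assumptions, not drawn from any official implementation.

```python
def federated_average(client_weights):
    """Average model parameters contributed by clients.

    Each client trains locally and shares only its parameter
    vector; the raw training data stays on the device.
    (Illustrative sketch, not an official implementation.)
    """
    n_clients = len(client_weights)
    n_params = len(client_weights[0])
    return [
        sum(w[i] for w in client_weights) / n_clients
        for i in range(n_params)
    ]


# Hypothetical parameter vectors from three devices
clients = [[0.2, 0.4], [0.6, 0.8], [0.1, 0.3]]
global_weights = federated_average(clients)  # one averaged global model
```

In a real deployment the averaging would typically be weighted by each client's dataset size and repeated over many communication rounds, but the privacy property the policy highlights, that only parameters travel, is already visible in this sketch.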
The guiding policy prescribes technology best practices based on three broad principles: (i) 'explainability', using pre-hoc statistical analysis and post-hoc experimental data-analysis techniques; (ii) privacy and data protection, using 'federated learning' (training across collaborating devices while data remains secured on the local device), 'differential privacy' (a framework providing mathematical guarantees of the confidentiality of user information), 'homomorphic encryption' (advanced cryptography that allows computations to be performed on encrypted data), or 'zero-knowledge protocols' (encryption under which only the account holder has access to the data); and (iii) 'eliminating bias and encouraging fairness', using open-source tools that help users assess bias and improve fairness in machine-learning models throughout the AI application lifecycle, by analysing data features, testing performance, and visualising model behaviour across subsets of input data, such as 'AI Fairness 360' by IBM, the 'What-If Tool' by Google, 'FairML', and 'Fairlearn' by Microsoft.
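Of the privacy techniques listed, differential privacy is the most readily illustrated. A minimal sketch of the standard Laplace mechanism follows: calibrated random noise is added to a query answer (here, a count) so that the contribution of any single individual is mathematically masked. The function names and the choice of epsilon are illustrative assumptions, not prescribed by the policy.

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample Laplace-distributed noise via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Laplace mechanism: adding noise with scale = sensitivity / epsilon
    yields an epsilon-differentially-private answer to a counting query.
    Smaller epsilon means more noise and stronger privacy."""
    return true_count + laplace_noise(sensitivity / epsilon)


random.seed(42)
noisy_answer = dp_count(1000, epsilon=0.5)  # true count hidden behind noise
```

The "mathematical guarantee" the policy refers to is exactly this calibration: for a counting query (sensitivity 1), an adversary seeing the noisy answer cannot confidently tell whether any one person's record was included in the data.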
Unless reliable measures are efficiently enforced to safeguard the interests of humanity, the future of human rights in this era of technology remains uncertain. Security, dependability, equality, inclusivity and non-discrimination, privacy, transparency, and accountability are the key interests that technology corporations ought to internalize in their policies for programming, delivery, functioning, and operations. Responsible AI corporations ought to design algorithms on the basis of problem scoping, effective data collection, data labelling that tracks human variability, secure data processing, testing and retesting for errors, user-centrality, and the uniqueness of big technology; leaving out any of these essentials would forfeit the higher goal being sought. The technology industry must also address the displacement of human jobs by automated machines, and offer effective schemes and jobs to compensate workers adequately in such formidable situations. Vulnerable members of society, customers, and employees, indeed all those affected by the outcomes of automated algorithms, ought to be protected; and technology corporations, their management, their authorized staff, and their supply chains ought to commit to fulfilling all human rights standards in favour of these sections of the population when signing agreements, from bottom to top, for the deployment of such technologies, in part or in whole, as applicable.