Data Privacy

This article is written by Raslin Saluja, from KIIT School of Law, Bhubaneswar. This article analyses the recognition of the principle of non-discrimination in data protection laws and its overall impact.

Introduction

Data protection laws today face the struggle of balancing the protection of privacy with ensuring that there is no harmful impact on the innovation and growth of industries. Countries need a legal framework that addresses the impact on privacy and the related potential harms in their respective data economies. Among their other objectives, data protection laws are also expected to protect the fundamental rights of the individuals involved. One of these objectives is the non-discrimination principle, which requires that there be no unfair bias in the application of laws and regulations, and that those processing and managing personal data implement mechanisms to uphold it. Thus, this article further explores the existing issues and potential solutions involving these elements.

Brief history

The right to privacy in the form of data protection and the non-discrimination principle both embody fundamental rights. The right to private life has been recognized by the European Convention on Human Rights, 1950, and the principle of data protection stems from that Convention’s right to private life. This corollary has led the European Court of Human Rights to give specific data protection principles the status of human rights. Since all members of the European Union (EU) are also members of the Council of Europe and are thus subject to the European Convention on Human Rights, they have to abide by it as well.

Further, the EU has its own set of regulations in the form of the Charter of Fundamental Rights of the European Union, which grants its people a right to private life and a distinct right to the protection of personal data. Specifically, Articles 7, 8, and 21 lay down the rights to privacy, protection of personal data, and non-discrimination, respectively. More comprehensive and articulated versions of these rights can be found in the Council of Europe’s Data Protection Convention 108 and the European Union’s General Data Protection Regulation (GDPR), which form the legal backbone of European data protection law. The EU Treaties themselves provide a structure for a general guarantee of the protection of these rights.

As for other countries, the Organisation for Economic Co-operation and Development (OECD) provides its Guidelines on the Protection of Privacy and Transborder Flows of Personal Data. Nations that implement these in their domestic regimes have to ensure that there is no unfair discrimination against individuals and data subjects, and must proactively avoid such practices. For instance, the guidelines mention that no unfair discrimination should take place on such bases as nationality, domicile, sex, race, creed, or trade union affiliation.

The fundamental rights vis-a-vis human rights aspect

For our understanding, we use fundamental rights and human rights as synonyms in the present context. Though the GDPR as such does not deal exclusively with these rights, its aim as stated in Article 1(2) hints at their applicability when implementing Union law in light of the rights recognized by the Charter. Data protection law invariably purports to defend the pillars of fairness and human rights with regard to the usage of personal data by various institutions and organizations. These laws empower the data subjects (the people whose data is in use) and require the data controllers (the organizations) to adhere to the imposed obligations.

The foundation and core of these principles lie in the eight major principles, which require lawful, fair, and transparent processing of data. The non-discrimination principle finds its space in data dissemination, the allaying of information asymmetry, and the data protection impact assessment, all of which serve to analyze, identify, and mitigate the related risks of unfair bias and illegal discrimination. The GDPR contains several provisions that have the potential to limit digital discrimination against the poor while enhancing their economic stability and mobility.

The non-discrimination principle

Discrimination essentially involves distinctions between cases of similar status, that is, cases that do not vary from each other on the points of significance that matter. This, however, often still begs the question of what counts as equal. Many non-discrimination statutes apply only to certain protected classes (characteristics), such as ethnicity, gender, or sexual orientation. In the field of EU anti-discrimination law, a similar interpretation is given: the legislation addresses discrimination on specific protected grounds (e.g., race, gender, disability, or age) as well as the general principle of equal treatment. This general principle is rooted in the idea that similar situations must not be treated differently and different situations must not be treated in the same way unless objectively justified. This is assessed only by a marginal test: as long as a difference in treatment has some rationality to it and is not completely arbitrary, the quality of the underlying reasoning is not further assessed.

Future ahead

With the advent of technology, the development of Artificial Intelligence (AI) systems is on the rise. While the benefits are immense, they bring along their own set of problems. Though these systems are inevitably designed to help, creativity, ingenuity, and human nature are all things that machines cannot replicate at this moment. AI cannot function on its own as it is supposed to without human intervention, as such systems lack a mind of their own.

The deployment of these systems involves personal data processing in various processes for different purposes. It entails automated decision-making based on algorithmic calculations, which requires us to ensure that the system is well equipped to provide accurate and non-discriminatory results fulfilling an individual’s expectations. It thus requires a great deal of transparency and fairness to provide satisfactory outcomes. The need to ensure compliance becomes even more significant when AI is used to process special categories of data, such as sensitive personal information. In those cases, the GDPR comes in handy, as it upholds such rights in the form of Articles 7, 9, and 10. It even recognizes automated decision-making separately under Article 22. Under this provision, it seeks to prevent algorithmic discrimination and may thus be deemed to be grounded on the prohibition of discrimination, which is recognized as a fundamental right of EU law under Article 21(1) of the Charter. The reasons why the GDPR seems suitable are as follows:

  • It contains general rules which apply not only to the processing of personal data done by humans but also by automated means;
  • It provides co-regulatory rules that require data controllers to analyze and mitigate the risks of the means used for processing on their own, thus giving them the discretion to self-regulate within the bounds of the general protection standards laid down by the GDPR.

The impact and structure of Article 22 of GDPR

This provision applies to all automated individual decision-making and profiling. It seeks to limit the occurrence of algorithmic decisions which do not include any substantive human intervention; it does so by narrowing the instances in which this kind of personal data processing might be lawful. As such, this norm is not directly concerned with avoiding biased or unfair decisions, and the same is true of Article 22(3). However, the wording of Article 22(4) with regard to special categories of data overlaps with the protected grounds mentioned in EU anti-discrimination law, as reflected in Article 21(1) of the Charter. This makes it apparent that Article 22(4) aims at preventing algorithmic discrimination, as it prevents important algorithmic decisions from being based on data that reveals an individual’s belonging to a protected ground under anti-discrimination law, except where the narrow exceptions outlined in Article 9(2)(a) or 9(2)(g) apply.

Sensitive personal data are accorded the status of a special category as they directly indicate an individual’s intimate details. Under this interpretation, Article 22(4) seems to rest on the assumption that, by removing such information from the dataset used by the algorithm, discrimination does not occur; a minimal sketch of this purging step follows below. However, it has been correctly observed that this data-purging mechanism falls short of effectively protecting the right to non-discrimination.
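To make that assumption concrete, the literal, data-purging reading of Article 22(4) amounts to dropping any directly revealing columns before a model is trained. The following is a minimal, hypothetical sketch in Python; the column names, the `purge_special_categories` helper, and the use of pandas are illustrative assumptions, not anything the GDPR prescribes.

```python
# Minimal sketch of the literal "data-purging" reading of Article 22(4):
# columns that directly encode special categories are removed before the
# data ever reaches the model. All column names here are hypothetical.
import pandas as pd

# Hypothetical set of directly revealing, special-category columns
SPECIAL_CATEGORIES = {"ethnicity", "religion", "health_status",
                      "sexual_orientation", "trade_union_membership"}

def purge_special_categories(df: pd.DataFrame) -> pd.DataFrame:
    """Drop columns that directly encode a special category of data.

    Note: proxy variables (e.g. a postcode) survive this step, which is
    precisely the weakness discussed in the next section.
    """
    return df.drop(columns=[c for c in df.columns if c in SPECIAL_CATEGORIES])
```

As the comment notes, this removes only fields that directly name a special category; anything merely correlated with one passes through untouched.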

Effectiveness of Article 22(4) and its limitations

In circumstances where a predictive or machine-learning system that works on an algorithm is left to decide an outcome based on variables, it might learn how to discriminate. Oftentimes, even though developers do not intend to produce this biased response, the data used to train the algorithm may be intrinsically biased against one or more of the protected categories. This in turn results in unfair treatment of individuals belonging to protected groups during the automated decision-making phase. The design of the data used to train and test AI systems might lead them to treat certain groups less favourably without any reasonable justification, for instance where links develop between protected categories (e.g. sexual orientation or religious beliefs) and target variables (such as job performance); a simple audit for such disparate outcomes is sketched below.
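One way to make "less favourable treatment" measurable is to compare the rate of favourable automated decisions across groups, in the spirit of a demographic-parity check. The sketch below is purely illustrative; the function name, the toy data, and the pandas-based setup are assumptions.

```python
# Illustrative audit: compare the rate of favourable automated decisions
# across groups defined by a protected attribute. A large gap suggests a
# model that treats one group less favourably. Names/data are hypothetical.
import pandas as pd

def selection_rate_gap(decisions: pd.Series, group: pd.Series) -> float:
    """Gap between the highest and lowest favourable-decision rates
    across groups; 0.0 means all groups are selected at equal rates."""
    rates = decisions.groupby(group).mean()
    return float(rates.max() - rates.min())

# Toy example: 1 = favourable decision, grouped by a protected attribute
decisions = pd.Series([1, 1, 0, 1, 0, 0, 0, 0])
groups = pd.Series(["a", "a", "a", "a", "b", "b", "b", "b"])
print(selection_rate_gap(decisions, groups))  # 0.75: group "a" favoured
```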

There can also be instances where proxy data becomes attached to protected categories: for example, where an individual’s address lies in an area densely populated by people of a specific origin, the automated system may associate the person with that origin and create bias and discrimination on that basis. This could be due to imbalanced training data, data that reflects past discrimination, the cultural assumptions of the developers, prejudiced labelling of variables, or even inappropriate definitions. As a result, certain groups end up being treated differently from others. Therefore, a certain group might consistently suffer discrimination even though the prohibition set forth by Art. 22(4) is, arguably, respected; an illustrative audit for such proxies is sketched after this paragraph. However, at this point, we need to realize that the GDPR does recognize this possibility and therefore requires appropriate technical and organizational measures to prevent it.
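Assuming a tabular dataset, one hedged way to test for such proxies is to check how well the supposedly purged features still predict the protected attribute: accuracy well above chance suggests proxy variables such as the address example above. The model choice and scikit-learn usage here are assumptions for illustration, not a prescribed method.

```python
# Illustrative proxy audit: if the purged feature set can still predict
# the protected attribute well above chance, proxies (e.g. a postcode
# standing in for origin) are likely present. Sketch only; assumes
# numeric features and enough samples per class for 5-fold CV.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_audit(features: pd.DataFrame, protected: pd.Series) -> float:
    """Mean cross-validated accuracy of predicting the protected
    attribute from the purged feature set."""
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, features, protected, cv=5).mean()
```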

Besides these concerns, there is a wide enforcement and compliance deficit. At this point, many data protection authorities are heavily burdened in addition to lacking the power to impose meaningful sanctions. Data protection law essentially applies to personal data, which in various respects places automated decision-making beyond its scope. These laws also rely on abstract norms in order to remain suitable across varied situations.

A proposed remedy – a rights-based interpretation of Article 22(4) GDPR

As mentioned earlier, the provision attempts to prevent discriminatory algorithmic decisions; a literal interpretation of this prohibition, however, would deny it any concrete anti-discriminatory impact. It is therefore suggested, from the point of view of fundamental rights, that Article 22(4) should be interpreted as covering both data that immediately reveals belonging to a special category and data that indirectly reveals such belonging in the context of the algorithmic decision-making process. Under this wider interpretation, to avoid or prevent such discriminatory practice, the latter type of data should either be excluded from the dataset or processed in a way that reduces its chances of any correlation with special categories; a sketch of this broader filtering follows below.
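Under this wider reading, purging would extend to features that merely correlate with a special category. The sketch below drops numeric features whose correlation with a protected attribute exceeds a cut-off; the 0.5 threshold, the helper name, and the pandas-based encoding are arbitrary assumptions for illustration.

```python
# Sketch of the wider, rights-based reading of Article 22(4): remove not
# only directly revealing columns but also numeric features so strongly
# correlated with a protected attribute that they indirectly reveal it.
# The 0.5 cut-off is an arbitrary illustrative choice.
import pandas as pd

def drop_indirect_revealers(features: pd.DataFrame,
                            protected: pd.Series,
                            threshold: float = 0.5) -> pd.DataFrame:
    """Drop numeric features strongly correlated with the protected
    attribute, reducing the model's ability to reconstruct it."""
    protected_codes = protected.astype("category").cat.codes
    corr = features.corrwith(protected_codes).abs()
    return features.drop(columns=list(corr[corr > threshold].index))
```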

This particular understanding also appears to follow the decision in Google Spain SL v. Agencia Española de Protección de Datos (2014) of the EU’s Court of Justice, which has consistently held that data protection provisions should be interpreted so as to achieve their effectiveness in upholding fundamental rights. The rights-based reasoning of the Court in applying data protection law has also featured frequently since the adoption of the Lisbon Treaty, which gave the Charter “the same legal value as the Treaties”, thereby strengthening the fundamental rights aim of the GDPR.

Thus, the best way to prevent Article 22(4) from being devoid of any substantial protective effect for data subjects is to read the definition of “special categories of personal data”, in any exercise of algorithmic decision-making, in a context-based, teleological, and rights-based way, taking into account the anti-discriminatory aim of the provision.

Status in India

India’s data protection regime is chiefly addressed by a newly proposed piece of legislation, the Personal Data Protection Bill, 2019 (PDPB). The judgment in Justice K.S. Puttaswamy v. Union of India (2017) changed the landscape of privacy jurisprudence in India. So far as discrimination is concerned, the Constitution of India recognizes it under Articles 14 and 15, on specific grounds and in aspects of public life such as education, public spaces, and employment; the Bill, however, does not reflect this balance.

The regulations that the PDPB seeks to impose are based on the notion of harm, balanced against business requirements. However, the definition of harm can have various implications and cause misdirection. So far there is no fixed interpretation attached to the term, though discriminatory treatment falls within the scope of harm under Section 3(20)(vi) of the Bill. It remains ambiguous, however, exactly what form and type of discrimination are sought to be prevented.

So far as automated decision-making is concerned, the PDPB does not address the harm that might occur from automated decision-making procedures, and it provides no stringent, specific provisions dealing with them; it merely requires large-scale profiling to undergo a thorough assessment under the PDPB. It also fails to empower data subjects by not giving them any right to object to automated profiling, the only exception being children. Significant issues thus remain to be answered.

Conclusion

As for India, it still largely remains at a developing stage. At present, India does not provide significant protection to personal data in any sector with respect to all or most of the common privacy principles, nor does it meet international standards. For other countries, however, recent developments do show a promising future. It is also for us to understand that the issues of discrimination are yet to come to the fore. It is, therefore, crucial to have a good understanding of how both data protection and anti-discrimination law operate, to grant the best possible protection to the individual.
