This article is written by Parmeet Singh from Amity Law School, Delhi. It explains how Algorithmic Decision Systems analyse large amounts of personal data to find correlations and, more generally, to derive information useful for decision-making, and how they are now widely used in employment screening, insurance eligibility, and marketing.
Introduction
An Algorithmic Decision System, or 'ADS', analyses large amounts of personal data to find correlations and, more generally, to derive information useful for making decisions. ADS have been used from the very beginning for credit decisions, and today they are routinely used for employment screening, insurance eligibility, and marketing. Algorithmic decision-making is also used in the public sector, for example in delivering government services and in sentencing and probation decisions in criminal justice. Many decisions are now made by computer algorithms because computers have advanced analytical capabilities and access to huge stores of data. Human beings could make many of these decisions themselves, but most organisations prefer ADS precisely because of those capabilities.
The advanced statistical techniques behind an ADS seek to find patterns in data without requiring the analyst to specify in advance which factors to use. They often find new, unexpected connections that might not be obvious to the analyst or follow from common sense or a theoretical understanding of the subject matter. As a result, they can help discover new factors that improve the accuracy of eligibility predictions and of the decisions based upon them. In some cases they can even improve the fairness of those decisions, for example by expanding the pool of qualified job applicants and thereby improving the diversity of a company's workforce.
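A minimal sketch can show what "finding patterns without pre-specified factors" means in practice: score every available feature by how strongly it correlates with a past outcome and keep the strongest. The feature names and numbers below are invented for illustration, not drawn from any real system.

```python
# Score each candidate feature by its correlation with a past outcome,
# rather than having the analyst pick factors in advance.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical applicant records: two features vs. a past "hired" outcome.
features = {
    "years_experience": [1, 3, 5, 7, 9, 11],
    "commute_km":       [30, 5, 22, 8, 15, 2],
}
hired = [0, 0, 1, 1, 1, 1]

scores = {name: abs(pearson(vals, hired)) for name, vals in features.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))  # years_experience 0.83
```

A real ADS applies far more sophisticated models over thousands of features, but the principle is the same: the system, not the analyst, surfaces which factors predict the outcome.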
Algorithmic Decision Making
Under the GDPR (General Data Protection Regulation), algorithmic decision-making refers to decisions taken without human intervention, that is, decisions based solely on automated processing. Such decision-making is of low impact if it has no effect on data subjects and does not deprive people of their legitimate rights. Even where an automated decision is binding on individuals and could affect their rights, the law provides safeguards for their protection. Algorithmic decision-making is not a unitary concept: rather than a single type of decision, it is broad, multifaceted, and can be divided into several sub-categories. Before analysing the provisions of the GDPR and the Directive on Data Protection in Criminal Matters, it is essential to distinguish between the procedural and substantive forms of automated decision-making.
Algorithmic decision-making is an automated process that reaches a decision with the help of an algorithm. There is no common definition of it across the literature. In the current era, the majority of decisions are taken with the help of algorithms; as the volume of data grows, decision-making becomes more complex and algorithmic intervention becomes almost indispensable. Wherever automated decision-making is allowed, the data subject must be provided with appropriate safeguards. The purpose of these safeguards is to protect the individual from wrong or discriminatory decisions, or from decisions that violate the data subject's rights and interests.
Working
At present it is very difficult to create ADS that are safe, secure, privacy-preserving, fair and explainable, and much more research is needed to meet these conditions. When relying on algorithmic decision-making, it should always be kept in mind that even well-engineered computer systems can produce unexpected errors and unexplained outcomes for several reasons, and considerable effort must go into designing them so that such errors are avoided.
Regarding "safety", an ADS can cause unexpected and negative consequences in its environment. This typically happens when the training environment and the operational environment do not match, and governments and researchers are working to narrow the gap between the two so that better safety can be provided.
Regarding "security", an ADS is a highly complex system and is subject to many different types of attack. The integrity and availability of an ADS can be threatened by polluting its training dataset (data poisoning), by attacking its underlying algorithm, or by exploiting the generated model at run-time. Governments are also working to protect individuals' data and to make these systems more secure, so that personal information cannot be obtained by unauthorised parties.
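The data-poisoning attack mentioned above can be illustrated with a deliberately trivial model (all data and labels here are invented for the example): the "model" learns a decision threshold halfway between the class means, and injecting a few mislabelled records into the training set visibly shifts that threshold.

```python
# Sketch of training-set poisoning against a trivial threshold classifier.
from statistics import mean

def train_threshold(samples):
    """samples: list of (score, label) pairs; returns the decision threshold
    halfway between the mean positive score and the mean negative score."""
    pos = [s for s, y in samples if y == 1]
    neg = [s for s, y in samples if y == 0]
    return (mean(pos) + mean(neg)) / 2

clean = [(2, 0), (3, 0), (4, 0), (8, 1), (9, 1), (10, 1)]
threshold_clean = train_threshold(clean)       # 6.0

# Attacker injects low scores falsely labelled as positive.
poisoned = clean + [(0, 1), (0, 1), (0, 1)]
threshold_poisoned = train_threshold(poisoned)  # 3.75

print(threshold_clean, threshold_poisoned)
```

After poisoning, applicants with scores as low as 4 are classified as positive, even though the clean model would have rejected them; real attacks against machine-learning models follow the same logic at scale.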
It can therefore be concluded that ADS are complex systems that are difficult to understand. ADS that rely on machine learning are especially challenging to understand and explain, since their models are generated automatically from training data.
Governments and researchers are putting considerable effort into making ADS more secure, safe, private, fair and widely available. Regarding "privacy", ADS are typically trained on personal data. ADS models have been attacked several times to retrieve information from them, and these attacks have raised serious privacy concerns. Much more research is needed to propose privacy-preserving ADS that achieve acceptable trade-offs between performance and privacy, so that no individual's privacy is violated.
Major issues
Risks of ADS for Individual
ADS can undermine the fundamental principles of equality, dignity, privacy, autonomy and free will, and can pose risks to health, quality of life and physical integrity. Discrimination caused by ADS has been extensively documented in many areas, such as judicial systems, credit scoring, targeted advertising and employment. This discrimination, which affects daily life, arises from different types of bias in the training data, from technical constraints, or from societal or individual biases. A fair assessment should compare the risk of discrimination when ADS are used with the risk of discrimination when they are not.
Risks for the public sector
ADS also carry risks of misuse by states, for example to identify political opponents. Some organisations or states use these technologies to control and influence citizen behaviour. Such technologies are not inherently safe: they could be used by any actor to distort information and damage the integrity of democratic discourse or the reputation of governments and political leaders.
Risks for the private sector
Various risks also exist in the private sector. Tasks that are repetitive, pressured by time, or that could benefit from the analysis of high volumes of data are prime targets for ADS.
Safety Issue
Many states and organisations have reported safety issues with ADS models. Safety is an important consideration because the failure of a physical system controlled by an ADS may cause fatal damage. Several ADS failures have so far been addressed only with ad-hoc solutions, and an ADS can also cause harm through the mishandling of sensitive information. It is essential to define a unified approach to prevent ADS from causing unintended harm.
Integrity and availability Issue
An ADS should not jeopardise integrity and availability. It is essential to guarantee that both are secured against malicious adversaries, and these security properties should be given first priority in the design of such algorithms. Existing protection mechanisms are not sufficient, and more research is required.
Confidentiality and privacy Issue
Confidentiality and privacy are also at issue in ADS: an attacker can try to extract information about the training data, or even try to retrieve the model itself. Such attacks raise privacy concerns because training data is likely to contain personal data. Some proposals rely on distributing the learning phase so that the training data never leaves the device that collects it, but this alone is not sufficient to secure the system, and further work on privacy solutions is necessary.
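A toy illustration (with invented data) shows why a model that memorises its training set leaks information about who was in it. Here the "model" is a 1-nearest-neighbour lookup: a record that was in the training data matches itself at distance zero, so an attacker querying the model can tell members from non-members. This is a simplified stand-in for the membership-inference attacks studied in the literature.

```python
# Membership-inference sketch against a memorising 1-nearest-neighbour model.
def nearest_distance(model_data, query):
    """Distance from the query to the closest training record."""
    return min(abs(x - query) for x in model_data)

training_data = [12.5, 47.0, 63.2]   # stand-in for personal records

def membership_inference(query, leak_threshold=1e-9):
    """Attacker guesses the query was a training record if the model
    responds with a suspiciously perfect (zero-distance) match."""
    return nearest_distance(training_data, query) < leak_threshold

print(membership_inference(47.0))   # True: was in the training set
print(membership_inference(48.1))   # False: was not
```

Real attacks use confidence scores from complex models rather than exact matches, but the underlying leak is the same: the model behaves measurably differently on the data it was trained on.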
Fairness (absence of undesirable bias)
ADS are based on machine-learning algorithms trained on collected data, and this process includes multiple potential sources of unfairness. Research shows that many formal definitions of fairness are in fact mutually conflicting. Several research groups and organisations have started working on the design of fair ADS.
Conclusion
To conclude, automation introduces more than just automated parts: it transforms the nature of the interaction between human and machine in profound ways. We need to keep an eye on these systems and see what they are actually doing, so as to make them more secure and better for every individual. As Louis Brandeis said, "Sunlight is the best disinfectant". First, we must examine discriminatory effects and take steps to remedy potential bias in these systems. It is essential to measure the extent to which they create disparate impacts, because it is not possible to manage what you do not measure. Requirements for explainability also vary from one ADS to another, depending on the potential impact of the decisions made and on whether the decision-making process is fully automated.
Transparency and explainability are essential to reduce ADS risks, and accountability is perhaps the most important requirement for the protection of the individual. Transparency and explainability allow deficiencies to be discovered, but they do not provide absolute guarantees of the reliability, security or fairness of an ADS. Accountability does not provide such guarantees either, but if certification is rigorous and audits are conducted regularly, potential issues can be discovered and corrective measures taken.
Algorithmic decision-making is increasingly used to mediate access to information and services in e-commerce, employment, health, justice, policing, banking, insurance and recommendation systems. ADS can bring significant benefits to individuals and to organisations in both the public and private sectors. But we should be aware that ADS also raise a variety of risks: discrimination, unfairness, manipulation and privacy breaches. The aim should be to assess the actual and potential extent of algorithmic decision-making and its risks and opportunities, for current as well as future uses. While technical aspects deserve priority, the legal, ethical and social dimensions must also be considered to broaden the discussion.