This article has been written by Shubham Chavan pursuing a Diploma in Corporate Law & Practice: Transactions, Governance and Disputes course from Skill Arbitrage.

 This article has been edited and published by Shashwat Kaushik.

Introduction

Each passing decade brings a wave of technological advancement that tests the adaptability of the working world, be it the “.com” boom of the ’90s, the flood of applications in the 2000s, or the cloud computing and hyper-accessible internet of the 2010s. Considering that trend and where the horses are currently headed, it would be safe to place our bets on either artificial intelligence (AI) or quantum computing for this decade’s revolution. Given that the world has already jumped on the AI bandwagon and begun integrating it into cognitive and non-cognitive functions, it is hard to overlook its prominence for the future and the possibilities it will unfold.


The scope and serviceability of AI are well known to most, so this article will not attempt a deep dive into what AI is. Rather, it will zone in on the reasons why there is a need to regulate it, the current advances and government positions on the technology, and how it should be perceived going forward. It must also be considered that industry experts are still trying to forecast the potential of the technology, so it would be harsh to censure legislators for their inability to fully legislate the AI ecosystem. AI’s clientele and its integration into world-moving mechanisms are on the rise, and the organisations providing these services hold a dominant position over the general public, much like the conquerors of past centuries, so there is a need to secure the masses from the snags of the technology. There are also antithetical ideas about how development should progress, and the fear, fictional for the time being, of an AGI takeover has alarmed the public even more.

Bearing in mind all the above-mentioned possibilities, it appears that regulations and laws in the industry are imminent, and a legal professional should get versed in them at their inception rather than catch up with them later.

Why is there a need to legislate AI?

The regulation of any sector or activity is imperative because of its perilous implications for ordinary people, who bear the primary exposure and who have often staged tumultuous protests across the globe against the harms inflicted upon them. Hence, it is indispensable to enact stringent laws to govern any potential advancement that can have society-altering effects.

Some of the prominent reasons why there is a need to regulate AI are:

Autonomous weapons

The International Committee of the Red Cross defines autonomous weapons as weapons that can attack their set targets without any human intervention or control. The integration of AI into the development of this kind of advanced arsenal is ongoing. While the US Department of Defense has assured that development will correspond with recognised AI ethical principles, that assurance is not reassuring enough to neutralise fears of a catastrophe. The confidentiality of the technology also remains in question, as incendiary organisations will be waiting to lay their hands on these weapons.

Data collection and privacy

The development of advanced chatbots, generative AI, and AI-based search engines has generated concerns over users’ private data. One of the main concerns over data being collected and applied is “data life”, i.e., the data about a subject lasting longer than the subject itself.

“Data recycling” refers to data being used for something other than the original purpose for which it was collected. “Supplementary data” refers to data collected and stored about people who were not the original subjects of collection, or ancillary data accruing from a person that was never intended to be collected.

Data plays a crucial role in large language models (LLMs), whose algorithms act upon past users’ data to generate responses. This is an example of how sensitive user data is collected and processed, and it raises concerns about proprietary code and personally identifiable information. There is also a real probability of proprietary information leaking onto such AI platforms. For instance, after an employee at Samsung leaked confidential company code on ChatGPT, the tech giant banned its employees from using generative AI in May 2023.

Transparency and explainability

Even the industry experts working day in and day out on the technology sometimes struggle to acknowledge and communicate the progress in AI development; it is therefore rational to conclude that getting familiar with AI mechanisms is not straightforward. Internal organisational misconduct and the unethical progression of the technology are the main reasons why transparency and explainability are necessary. “Black box” AI systems are another challenge that must be overcome for better transparency. Complications caused by decision biases, unintended consequences, and reliance on past data in crucial situations can be mitigated by tracing the data points behind an AI system’s decisions.

Biases and discrimination

This is another reason why there is a need to regulate AI. The past datasets on which these AI models have been trained, or, for example, the user responses a model uses to generate further responses, could be corrupted; worse, an algorithm could be set up in such a way that it generates biased or discriminatory responses. For instance, the infamous 2019 “ImageNet Roulette” project, trained on datasets created at Princeton in 2009, produced misogynistic, racist, and sexist results. Another incident in the UK, in October 2023, cemented the case for legislating AI, when the use of AI tools by the Department for Work and Pensions and the Home Office produced biased outputs.

Misuse of generative AI

Generative AI is a new phenomenon that has taken over the world in recent times; everything from simple language models to computer-generated video models falls under the generative AI umbrella. Generative AI has caused a lot of stir, as it is being misused to cause harm and litigated against for copyright infringement. AI mammoths such as Microsoft, GitHub, and OpenAI are facing a class action suit for training their products on publicly available code without crediting its authors. Other incidents, such as school bullying through deepfake technology, have also come to light, which shows why new legislation covering such incidents is required.

AI development policies and regulations around the world

Attempts to regulate AI can be dated back to 2016, when the National Artificial Intelligence Research and Development Strategic Plan was released in the United States, setting out the federal research agenda for the technology and providing some ethical considerations. Though there has been no full-fledged legislation since, other related acts have broadly covered potential AI threats and have addressed points of transparency and bias.

USA

In the short term, the focus is more on how to apply the country’s existing laws to AI organisations and their general usage; no AI-specific laws have been enacted. In August 2019, the National Institute of Standards and Technology (NIST) released a report on U.S. leadership in AI, identifying areas of focus for AI standards and recommending actions to advance national AI standards development. In January 2023, NIST released its Artificial Intelligence Risk Management Framework, which provides voluntary, non-sector-specific guidelines for the private sector to consider while developing AI services. In April 2023, the FTC, EEOC, CFPB, and DOJ issued a joint statement on AI enforcement, clarifying that their existing legal authorities apply to the use of automated systems and innovative new technologies, including AI, and expressing their commitment to ensuring compliance with federal laws. Also in 2023, the NTIA issued an AI accountability policy request for public comments on the development of audit structures and algorithm explainability. Finally, on October 30, 2023, President Biden issued an executive order to guide responsible AI development and use across government, industry, and society; it covers policy areas such as safety and security, innovation and competition, worker support, AI bias and civil rights, consumer protection, privacy, federal use of AI, and international leadership.

Japan

Japan’s first attempt to regulate AI was the publication of the Social Principles of Human-Centric AI in 2019. There is no legislated law covering AI in Japan as of now; the field is instead governed by contracts and agreements among the parties. In 2023, as host of the G7 summit, Japan approached the regulation of AI from a human-centric perspective.

China

China’s government has been active in its attempts to regulate AI. Among the major milestones were the 13th and 14th Five-Year Plans (2016-2020 and 2021-2025), wherein the government identified AI as key to achieving its economic growth targets and committed further state investment in AI. Later, in July 2023, the Cyberspace Administration of China laid out rules to regulate generative AI. China’s AI regulations are designed to instil the values of the CCP and are thus not in step with the rest of the world’s attempts to regulate the technology.

India

In 2018, NITI Aayog released its national strategy on AI, a document that focused on responsible AI development and explored ethical considerations. The strategy also sets out a vision for the future of AI in India: it calls for the creation of a vibrant AI ecosystem that will foster innovation and ensure that India is at the forefront of the global AI race. To achieve this, the strategy recommends a number of initiatives, including:

  • Investing in research and development in AI
  • Promoting collaboration between academia, industry, and government
  • Creating a regulatory framework for AI
  • Educating the public about AI

In 2022, as a follow-up to the strategy, the government of India announced the establishment of a Centre of Excellence in AI. This centre will serve as a hub for research, development, and innovation in AI. It will also provide training and support to startups and businesses that are working on AI technologies.

Also in 2022, the government launched the IndiaAI portal. This portal is a one-stop shop for information on AI in India. It provides access to resources on AI research, development, and policy.

Finally, in 2022, India joined the Global Partnership on Artificial Intelligence (GPAI), a multi-stakeholder initiative that aims to promote responsible AI development and use. India’s membership in the GPAI signals its commitment to working with other countries to shape the future of AI. India, too, has neither a proposed nor an enacted law to govern the AI ecosystem, but in 2023 it passed the Digital Personal Data Protection Act, which covers the data acquired and processed by organisations and would thus also cover the workings of AI organisations. India is still deciding which approach to apply while legislating AI-specific laws: the risk-based approach of the EU’s AI Act or the U.K.’s pro-innovation approach.

The Digital Personal Data Protection Act of 2023

The Digital Personal Data Protection Act (DPDP Act) establishes a comprehensive framework for the protection of personal data, addressing important issues such as consent, data minimisation, and cross-border data transfers. It aims to safeguard individuals’ rights and privacy while fostering responsible and ethical use of data in the digital age.

While the DPDP Act does not explicitly address AI, its provisions are relevant to AI systems, since such systems rely on data to function and make decisions. The act imposes obligations on organisations, including AI companies, to ensure that personal data is collected, processed, and stored in a lawful and ethical manner.

Here are some key aspects of the DPDP Act that apply to AI organisations:

  • Consent: The DPDP Act requires organisations to obtain individuals’ explicit consent before collecting and processing their personal data. This includes data that may be used to train or develop AI models.
  • Data minimisation: The act mandates organisations to collect only the minimum amount of data necessary for the intended purpose. This principle is particularly important in the context of AI, where large datasets are often used for training and experimentation.
  • Transparency: Organisations must provide individuals with clear and concise information about how their personal data will be used, including any potential AI applications.
  • Cross-border data transfers: The DPDP Act regulates the transfer of personal data outside India, ensuring that adequate safeguards are in place to protect individuals’ privacy rights. This is significant for AI organisations that may operate globally or collaborate with international partners.
  • Enforcement: The act establishes the Data Protection Board of India to monitor and enforce compliance with its provisions. This body can investigate complaints, impose penalties, and issue directions to organisations.

While the DPDP Act does not specifically target AI, its provisions provide a foundation for addressing ethical and legal considerations related to the use of AI systems. By ensuring that AI organisations operate in a responsible and transparent manner, the act contributes to building trust and fostering innovation in the AI industry in India.

U.K.

The UK’s major AI push can be dated back to November 2017, when the government identified AI as one of the four grand challenges in its Industrial Strategy. Later, following its exit from the EU, the U.K. published its National AI Strategy in September 2021, signalling a pro-innovation approach to regulating AI and a commitment to minimising regulation. In March 2023, the Department for Science, Innovation and Technology and the Office for Artificial Intelligence published a white paper detailing plans for implementing this pro-innovation approach. As for an act, the U.K. government does not seem willing to introduce AI-specific legislation and has instead suggested that its industry regulators adapt existing laws to cover AI. The U.K. also hosted the first global AI Safety Summit on November 1 and 2, 2023, at Bletchley Park, which included discussions on inclusive measures for frontier AI systems.

EU

Perhaps the EU has been the centre point of AI regulation around the globe with its recent decision on the AI Act. The act was first proposed on April 21, 2021, by the European Commission. In June 2023, lawmakers agreed to changes in the draft rules that banned the use of AI in biometric surveillance and required generative systems to disclose that their content was AI-generated. On December 9, 2023, after extensive negotiations, the Council and the Parliament reached a provisional agreement on the AI Act. The act categorises AI systems into four categories based on the risks they pose, which determine their permitted operation and compliance obligations. Minimal-risk systems, such as mail-segregating algorithms, face hardly any compliance requirements, while generative AI systems such as chatbots are categorised as limited-risk systems subject to transparency obligations. Systems used for employee assessment are classified as high risk, while social scoring and biometric identification systems, with some narrow exceptions for governmental and law-enforcement use, are prohibited. The act applies to organisations much as the GDPR does: to any entity operating in European territory, whether foreign or European. It also establishes penalties for non-compliance of up to 35 million euros or 7% of an organisation’s worldwide annual turnover. The act is expected to be enacted within the next fifteen months, which has raised worries for many organisations operating in European Union territories.

Conclusion

In today’s dynamic world, the race to cash in on AI technology is tremendous and largely unregulated. We have seen pleas from the industry’s own leaders, such as Sam Altman and Elon Musk, requesting that US federal bodies consider regulating AI before an event causes public uproar. The business potential of AI is vast, and every government will want to focus on its development while also weighing the downside of a major AI leak or outrage. Governments will have to side either with the accelerationists or with the doomers, both of whom consider their perspectives justified for their own reasons. Intergovernmental alliances of the kind seen before for development and regulation need to be multiplied so that there is a more global approach and better business prospects.

In the Indian context, the regulation and development of these technologies are still at least five years behind the rest of the world, which gives us legal professionals a head start to predict and prepare for the regulatory challenges our government will face.
