This article has been written by Yash Vardhan Singh, who is pursuing a Diploma in Technology Law, Fintech Regulations and Technology Contracts from LawSikho.

 This article has been edited and published by Shashwat Kaushik.

Introduction

Artificial intelligence (AI) has become a pivotal focus in the contemporary global tech landscape, particularly since the introduction of OpenAI’s ChatGPT a little over a year ago. Its arrival has spurred a competitive rush among nations and corporations to shape regulations and standards, with policymakers and leaders emphasising the need to avoid past errors, such as the belated regulation of the internet and social media.

Recent developments underscore this urgency. The EU is on the cusp of bringing the world’s first comprehensive AI legislation into existence, the U.S. unveiled a presidential executive order on AI, China presented an AI governance framework, India convened its Global India AI Summit, and the UK hosted an AI Safety Summit that led to the adoption of the “Bletchley Declaration.” Though global AI regulation remains complex due to varied national perspectives, a nascent framework for AI diplomacy is emerging.

Why is it important to regulate AI

AI regulation should aim to instil trust among citizens in the capabilities of AI. While many AI systems present minimal risk and contribute positively to solving societal problems, certain systems carry potential risks that must be addressed to prevent unfavourable consequences.

One notable concern is the lack of transparency in understanding the rationale behind decisions or predictions made by AI systems and the subsequent actions taken. This opacity poses challenges in assessing whether individuals have been subjected to unfair disadvantages, such as in hiring processes or when applying for public benefit programmes, often referred to as “bias” in the AI ecosystem.

While current legislation offers some safeguards, it falls short of adequately addressing the unique challenges posed by AI systems, and additional measures are necessary to address these challenges comprehensively. Hallucinations, ethical lapses, and automated decision-making can affect our lives in ways we cannot yet imagine, and worse, the harm can go undetected, as we are still scratching the surface of this technology.

What should the global AI regulatory framework look like

The global framework should, ideally:

  1. Suggest a compilation of high-risk use cases.
  2. Establish explicit criteria for AI systems intended for high-risk applications.
  3. Outline precise responsibilities for users and providers of AI systems in high-risk scenarios.
  4. Recommend a thorough assessment of compliance before the deployment or market entry of AI systems.
  5. Advocate for enforcement measures post-deployment of such AI systems in the market.
  6. Put forward a framework for governance at both global and national levels. 

Trends in AI regulation at the national level

EU AI Act

In April 2021, the European Commission introduced the Artificial Intelligence Act (AIA), which proposes a risk-based framework for overseeing AI applications in both the public and private domains. This approach categorises AI use into three risk levels: applications posing unacceptable risk, high-risk applications, and applications not explicitly prohibited. The regulation prohibits AI deployment in critical services where it could pose threats or encourage harmful behaviour, while permitting its use in sensitive sectors like health, subject to rigorous safety and efficacy evaluations by regulators. Negotiators for the European Parliament and the Council reached a political agreement on the text in December 2023, and the Act is expected to come into force this year.
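
To make the risk-based logic concrete, here is a minimal sketch in Python. The tier names, use cases, and obligations are hypothetical simplifications invented for illustration; the Act itself enumerates prohibited and high-risk categories, and the corresponding duties, in far greater detail.

    from enum import Enum

    class RiskTier(Enum):
        """Simplified, illustrative version of the Act's risk tiers."""
        UNACCEPTABLE = "prohibited outright"
        HIGH = "permitted subject to strict requirements"
        MINIMAL = "permitted with few extra obligations"

    # Hypothetical mapping of use cases to tiers, for illustration only.
    USE_CASE_TIERS = {
        "social scoring of citizens": RiskTier.UNACCEPTABLE,
        "AI-assisted medical diagnosis": RiskTier.HIGH,
        "CV screening for recruitment": RiskTier.HIGH,
        "spam filtering": RiskTier.MINIMAL,
    }

    def compliance_obligation(use_case: str) -> str:
        """Return the illustrative obligation attached to a use case."""
        tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
        if tier is RiskTier.UNACCEPTABLE:
            return "Do not deploy: this application is banned."
        if tier is RiskTier.HIGH:
            return "Complete a conformity assessment before market entry."
        return "Deploy with standard transparency and monitoring."

    print(compliance_obligation("CV screening for recruitment"))
    # -> Complete a conformity assessment before market entry.

The point of the sketch is the proportionality at the heart of the Act: the obligation attached to a system follows from its assigned tier, not from the technology it happens to use.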

The AI Act represents a form of legislation that governs all forms of automated technology, rather than addressing specific concerns within distinct domains. It broadly defines AI systems to encompass a diverse array of automated decision-making tools, including algorithms, machine learning tools, and logic tools, despite the fact that certain technologies covered may not strictly fall under the traditional definition of AI.

The United States’ Executive Order

The United States has not yet enacted comprehensive federal legislation specifically targeting AI applications. Instead, the Biden Administration, in collaboration with the National Institute of Standards and Technology (NIST), has released expansive guidelines aimed at ensuring the safe deployment of AI. Concurrently, various state and municipal governments are crafting their own AI regulations and establishing dedicated task forces. Unlike the EU, the U.S. has focused on regulating specific AI applications rather than attempting to oversee AI technology as a whole.

On the federal front, the Biden Administration unveiled the Blueprint for an AI Bill of Rights, addressing concerns related to potential AI misuse and offering guidelines for the responsible use of AI across the public and private sectors. Notably, this AI strategy does not carry legal mandates. Instead, the Blueprint emphasises essential safety measures, including enhanced data privacy, safeguards against algorithmic bias, and recommendations for ensuring the safe and effective implementation of AI tools. Although not legally enforceable, this framework serves as a foundational reference for policymakers at various government levels contemplating AI-related regulations.

China’s Global AI Governance Initiative

It’s worth highlighting that just prior to the announcement of the U.S. executive order, China revealed its “Global AI Governance Initiative” at the BRI Forum in Beijing. China has also set a goal for its private AI industry to generate roughly $154 billion annually by 2030.

However, in contrast to the extensive and detailed U.S. executive order, China’s initiative was concise, comprising approximately 1,500 characters. It primarily emphasised overarching principles such as:

  • a people-centric approach to AI development
  • promoting beneficial AI advancements
  • ensuring fairness and non-discrimination
  • advocating for broad participation and consensus-driven decisions
  • emphasising AI’s role in mitigating associated risks

However, certain nuances within the initiative shed light on China’s underlying objectives. The document underscores the importance of “respecting the national sovereignty of other nations and adhering strictly to their laws.” It explicitly opposes leveraging AI for “manipulating public opinion, disseminating disinformation, interfering in another country’s internal affairs, or compromising its sovereignty.” Furthermore, China’s Global AI Governance Initiative emphasises the need for inclusive AI development that benefits all sectors of society. This includes initiatives to ensure that AI technologies are accessible to small and medium-sized enterprises (SMEs) and that they contribute to sustainable economic growth and social development.

China’s Global AI Governance Initiative is significant for several reasons. First, it demonstrates China’s ambition to play a leading role in shaping the global discourse on AI governance. Second, it highlights China’s commitment to responsible and inclusive AI development, which could have a positive impact on the global AI landscape. Third, it signals China’s willingness to engage in international cooperation on AI, which is crucial for addressing the challenges and opportunities posed by AI.

Overall, China’s Global AI Governance Initiative is a significant step towards establishing a more robust and inclusive global framework for AI governance. It is likely to have a major impact on the development and use of AI technologies worldwide.

UK’s AI Diplomacy

Considering the dominant roles played by the United States, China, and the EU in shaping AI regulations, there was a notable twist when the UK government declared in June 2023 that it would organise the inaugural global summit on AI safety. This move came as a surprise, particularly since the United Kingdom had been perceived as lagging behind in AI regulation. Prime Minister Rishi Sunak had explicitly expressed a cautious approach, emphasising that he would not hastily impose regulations on AI.

Significant strides were achieved during the summit, culminating in the signing of the Bletchley Declaration by 28 nations, notably including both China and the United States, along with the European Union. The declaration emphasises addressing the challenges posed by advanced AI technologies. Its core objectives are to recognise shared AI safety concerns, foster a collective understanding grounded in scientific evidence, and establish risk-aligned policies to ensure safety across nations.

It’s crucial to highlight that the U.S. government’s AI executive order also underscored its commitment to shaping an international AI framework by collaborating with 20 nations and the EU. This strategic engagement ensured that the United States played a pivotal role in shaping the discussions at the AI Safety Summit, all while fostering collaboration with China. This collaboration lays a foundational blueprint for future global AI diplomacy efforts.

Furthermore, there’s a growing consensus among academics and industry leaders advocating for the establishment of an international regulatory body for AI, drawing parallels to entities like the International Atomic Energy Agency. The AI Safety Summit hosted by the UK could mark an initial step towards realising this vision.

Canada’s AIDA: Artificial Intelligence and Data Act

In 2022, the Canadian Parliament introduced a preliminary regulatory framework for artificial intelligence, employing a tailored risk-based strategy. Canada’s objective with these AI regulations is to standardise the design and development practices of private companies working with AI across its provinces and territories.

Diverging from the European Union’s methodology, the modified risk-based approach in Canada does not outright prohibit the use of automated decision-making tools, even in critical sectors. Instead, under the AIDA regulation, developers are required to formulate a mitigation plan aimed at reducing risks and enhancing transparency when employing AI in high-risk systems. The regulatory framework is designed to ensure that AI technologies are developed and used in a responsible and ethical manner. It sets out a number of principles that AI companies must adhere to, including:

  1. Transparency: AI companies must be transparent about how their AI systems work.
  2. Accountability: AI companies are accountable for the decisions made by their AI systems.
  3. Fairness: AI systems must be fair and unbiased.
  4. Safety and security: AI systems must be designed and used in a way that ensures the safety and security of individuals.
  5. Respect for privacy: AI companies must respect the privacy of individuals.

The regulatory framework also includes a number of specific requirements for AI companies, such as the following (illustrated in the sketch after this list):

  • Conducting risk assessments to identify and mitigate potential risks associated with their AI systems.
  • Developing and implementing policies and procedures to ensure compliance with the regulatory framework.
  • Providing training to employees on the regulatory framework and their responsibilities under it.
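
By way of illustration only, the hypothetical Python sketch below shows how a developer might record the kind of risk assessment and mitigation plan that AIDA contemplates. The field names and the completeness check are invented for this example and are not drawn from the Act’s text.

    from dataclasses import dataclass, field

    @dataclass
    class MitigationPlan:
        """Illustrative record of an AIDA-style risk assessment."""
        system_name: str
        intended_use: str
        identified_risks: list[str] = field(default_factory=list)
        mitigations: list[str] = field(default_factory=list)
        transparency_notes: str = ""

        def is_complete(self) -> bool:
            # Simplistic illustrative check: risks are listed and there
            # are at least as many documented mitigations as risks.
            return bool(self.identified_risks) and (
                len(self.mitigations) >= len(self.identified_risks)
            )

    plan = MitigationPlan(
        system_name="LoanScreener",
        intended_use="automated credit pre-screening",
        identified_risks=["demographic bias in training data"],
        mitigations=["bias audit before each model release"],
        transparency_notes="Applicants are told a model assists the decision.",
    )
    print(plan.is_complete())  # -> True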

The Canadian government is committed to working with stakeholders to develop a comprehensive regulatory framework for AI that will protect individuals and society from the potential risks of AI while also promoting innovation and economic growth.

Core principles of these diverse regulations

Recognising the diverse regulatory approaches across different jurisdictions due to varied cultural norms and legislative contexts, five overarching areas of consensus emerge. These areas collectively emphasise the need to harness AI’s potential benefits while mitigating associated risks, aiming for the collective good of citizens. These shared principles serve as foundational pillars for crafting more specific regulations:

  1. Regulations and guidelines regarding AI align with foundational principles outlined by the OECD and backed by the G20. These encompass upholding human rights, promoting sustainability, ensuring transparency, and implementing robust risk management.
  2. Emphasis on a risk-centric strategy for AI regulation. This entails customising regulatory measures based on perceived AI-related risks and ensuring alignment with core values such as privacy, transparency, non-discrimination, and security. The guiding principle is that regulatory requirements should be commensurate with the associated risk level — minimal obligations for low-risk scenarios and rigorous requirements for high-risk situations.
  3. Recognising the multifaceted applications of AI, some jurisdictions advocate for a blend of sector-specific regulations alongside overarching, sector-neutral rules to address distinct industry needs.
  4. AI regulatory efforts are integrated with broader digital policy agendas, encompassing domains like cybersecurity, data privacy, and intellectual property rights. The EU, in particular, adopts a holistic approach by harmonising AI regulations with comprehensive digital policy frameworks.
  5. Various jurisdictions leverage regulatory sandboxes, fostering collaboration between the private sector and policymakers. These collaborative platforms aim to formulate rules that champion safe and ethical AI practices while addressing potential risks associated with innovative, high-stakes AI applications that warrant closer scrutiny.

Conclusion 

Given the extensive global reach of AI technology, encompassing data utilisation for training, research and development, computing infrastructure, and applications that transcend national borders, it becomes evident that no single government can comprehensively address AI policy and regulation in isolation. International cooperation is imperative to guarantee that individuals and societies worldwide can confidently rely on AI that is both trustworthy and accountable. Collaboration at the international level is also essential for continually assessing and mitigating the new risks associated with AI.

To facilitate this collaboration, there is a need for a dedicated international forum that brings together governments and various stakeholders to work cooperatively on AI policy. Such a forum would provide a platform for sharing insights, experiences, and best practices, fostering a collective approach to addressing the challenges posed by AI on a global scale.

Moreover, international cooperation should aim to promote the interoperability of AI policies and regulations. As in the context of data protection, such interoperability encourages the responsible provision of services across borders. It has the potential to enhance accessibility, reduce compliance costs, increase legal certainty, and ensure consistent protection of the rights and interests of individuals in the AI ecosystem.
