This article has been written by Adv. Sanjay Pandey, who practises in the District & Sessions Court, Varanasi, and has more than 10 years of experience teaching the Law of Contract. He also completed a Diploma in Advanced Contract Drafting, Negotiation and Dispute Resolution from LawSikho in November 2023.

This article has been edited and published by Shashwat Kaushik.

Introduction

Just as the discovery of atoms and subatomic particles unveiled a new realm of understanding in science, the rise of images and videos on the internet has reshaped our perception of the world around us. This time, experimentation with pixels and frequencies is ushering in a transformative era in the digital realm.


However, with great power comes great peril. The proliferation of deep fakes has raised concerns about authenticity, copyright infringement, identity theft, misinformation, and the impact of visual content on mental health. These hyper-realistic manipulations, fuelled by advances in artificial intelligence, pose a formidable challenge to our social fabric and to legal accountability.

In today’s interconnected world, where information flows effortlessly across screens, the new deep fake technology puts a big question mark over the very fabric of creativity and authorship. In this article, we will examine whether copyright law in India can deal with deep fakes.

Many legal developments are evolving around deep fakes today, but they are not yet complete; they remain a work in progress.

Brief history of AI, machine learning and copyright

Although computers were performing well, carrying out the tasks they were given commands for, no one called them intelligent until the early fifties. Rather, it would be better to say that, until then, no one had even thought that a machine could be as intelligent as a human.

The term ‘Artificial Intelligence’ was heard for the first time in 1956. Computer scientists Allen Newell, Cliff Shaw and Herbert A. Simon developed a programme known as the ‘General Problem Solver’. This programme was based on the ‘Physical Symbol System Hypothesis’ and is considered to be the first artificial intelligence programme. Their argument was that if a machine could be programmed to interpret and connect with symbols, it would exhibit intelligence. The programme was showcased during the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), organised by John McCarthy and Marvin Minsky in the same year, where the term ‘Artificial Intelligence’ was coined by John McCarthy, who is regarded as the father of artificial intelligence.

The term ‘Machine Learning’ was brought into vogue by the American computer scientist Arthur Samuel, who is also considered the father of machine learning. He developed the ‘Game of Checkers’ programme in 1952, designed so that the machine could learn by playing against itself. The machine monitored its own moves and planned a strategy, gradually acquiring the ability to win; the more it played, the more it learned.

The development of artificial intelligence and machine learning proceeded at a slow pace until the end of the eighties, despite the invention of the first computer neural network by Frank Rosenblatt in 1957, the first chatbot, ELIZA, in 1964, and the first robot, Shakey, in 1969.

With the advent of the internet in the 1990s, people around the world started creating huge amounts of data, resulting in the explosive growth of AI and machine learning systems.

Copyright legislation in independent India, more or less, emerged concurrently with the advent of artificial intelligence and machine learning technologies. The Indian Copyright Act was enacted in 1957 and, unlike the Act of 1914 passed by the British Government, incorporated the principles of the Berne Convention of 1886.

Since its enactment, the legislation has undergone several amendments to bring itself in line with the international copyright conventions and treaties, the most recent being in 2012. Even then, there was no global uproar about deep fakes.

Artificial intelligence, machine learning, deepfake, copyright, infringement

Artificial intelligence

Artificial intelligence, or AI, is a branch of computer science. It can be defined as a system that reflects human behaviour and human intelligence, such as reasoning, logical thinking, judging and making its own decisions to solve complex problems. Artificial Intelligence is a broader term which encompasses various applications within it, like machine learning, deep learning, robotics, neural networks, natural language processing, etc.

Machine learning

Unlike old-fashioned AI, which was explicitly programmed, a machine learning system does not work from hand-written instructions; rather, it uses data, both labelled and unlabelled, to learn something new by identifying patterns. Thus, machine learning is a branch of AI that employs algorithms capable of learning from data in order to make predictions. Data is the oil of a machine learning system; the more data it is fed, the faster it learns to identify patterns.
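To make this concrete, here is a minimal, hypothetical sketch of the idea using the scikit-learn library in Python: the model is shown a handful of labelled examples, identifies a pattern in them, and then predicts a label for an example it has never seen. The features, labels and numbers are invented purely for illustration and are not drawn from this article.

# A minimal sketch of learning patterns from labelled data (illustrative only).
from sklearn.linear_model import LogisticRegression

# Hypothetical labelled data: each row is [hours of study, hours of sleep],
# and each label records whether the student passed (1) or failed (0).
X = [[1, 4], [2, 5], [8, 7], [9, 6], [3, 5], [10, 8]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)                     # the model learns a pattern from the data

print(model.predict([[7, 7]]))      # predicts a label for an unseen example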

Deep fake

Deep fake is an umbrella term for AI-generated fake audios, videos and images created with the help of advanced machine learning programmes, i.e., deep learning. These morphed images, videos and audio recordings are so realistic that it becomes impossible for the viewers or listeners to differentiate between the real and the fake. Such fake videos, audios and images are also called synthetic media.

No universal definition of deep fake has been accepted yet. ‘Deep fake’ is a portmanteau of the words “deep learning” and “fake”.

Deep fakes are made using powerful deep learning algorithms known as ‘Generative Adversarial Networks’ or GANs. In a GAN model, existing data is fed in as input in order to generate new output data of the same kind.

A GAN model has two elements: a generator and a discriminator. The generator tries to recreate existing data (an image, audio clip, or video), and the discriminator tries to spot the difference between the real data and the generator’s recreation. The two parts compete repeatedly until the generator produces output so realistic that the discriminator can no longer tell the difference. In this way, the generator’s ability to create realistic data gradually improves, resulting in deep fake media.
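For readers curious about what this adversarial competition looks like in code, the following is a highly simplified, illustrative sketch in Python using the PyTorch library. The network sizes, random stand-in data and training loop are toy assumptions for demonstration; real deepfake systems are far larger and work on images, audio or video rather than random vectors.

# An illustrative GAN training loop (a sketch, not a production system).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64   # assumed sizes for this toy example

# Generator: maps random noise to fake data of the same shape as the real data.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))

# Discriminator: outputs the probability that its input is real rather than generated.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, data_dim)       # stand-in for a batch of real samples
    fake = G(torch.randn(32, latent_dim))  # the generator's attempt at realistic data

    # Train the discriminator to label real data as 1 and generated data as 0.
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Train the generator to fool the discriminator into labelling its output as real.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()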

Today, deep fakes have become a matter of concern as they have gone far beyond mere face swaps. They have not only led the public to believe in false events that never took place, or in messages that were never given, but have also raised serious legal, moral and ethical issues.

Copyright

Copyright refers to the legal ownership granted to creators over their literary and artistic creations, encompassing a wide array of works including books, music, paintings, films, computer programmes, and architecture.

Copyright is granted to creators for a limited period of time with a view to protecting their work from theft or misuse and to allowing them a monopoly over the reproduction, distribution, display, performance and alteration of their works. Only the original creators, and anyone else they authorise, have the exclusive right to reproduce the work.

Infringement

Infringement may take any form, including a violation of any provision of law or a breach of any terms of an agreement or any other unauthorised act, like aiding, abetting, or preparing for illegal activities or criminal offences.

From the ongoing discourse, it’s evident that deepfake creation involves two distinct types of data: input data and output data, which mirror the input data. In the realm of copyright law, two immediate questions arise:

  • Is the copyright of the input data’s owners violated when their copyrighted material is used without their consent to generate a deep fake? If so, what legal measures safeguard their copyright?
  • Who retains copyright ownership of the resultant output data: the individual who inputs the data into the network, the machine responsible for generating the output, or the company owning the machine?

Deep fake ramifications around copyright and authorship

In the previous section of this article, we saw that the process of creating deep fakes raises two issues: copyright infringement and authorship. Before delving into India’s stance on these two emerging issues posed by deepfake technology, let us first explore the other related concerns surrounding deepfakes.

Privacy issues: data protection and the right to privacy are correlated

In this digital age, concerns about privacy have intensified due to the proliferation of technology and the collection, storage and analysis of vast amounts of personal data. Challenges to the right to privacy become more severe due to surveillance programmes, corporate data mining, and emerging technologies like facial recognition and biometric identification. While data theft is the concern of data protection laws, the right to privacy has emerged as a fundamental human right in the interconnected world we live in.

Every individual has the right to control their personal information and activities. Stealing someone’s face or mimicking their voice without their consent is a blatant violation of their right to privacy.

The indiscriminate rise of deep fakes threatens authors’ ability to maintain the integrity of their creative process and disincentivises them from taking full benefit of their intellectual property. However, the data used for generating deep fakes may or may not be copyrighted.

Ethical issues

 Violating an individual’s right to control how their likeness and their intellectual property are used raises ethical concerns about privacy and an individual’s autonomy. Deepfakes can be used to create false or misleading content that misrepresents individuals, leading to reputational harm or fraud. This misuse of copyrighted material can have serious ethical implications, especially when used to manipulate public opinion or deceive others.

Deepfakes can be used to create harmful and malicious content, such as revenge porn or fake news, which can cause emotional distress, damage reputations, and incite violence. The unauthorised use of copyrighted material in such contexts exacerbates the ethical issues involved.

Intermediaries’ responsibility

Another discussion around deep fakes is whether intermediaries like social media platforms can be held responsible for copyright infringement. If yes, to what extent should they be held liable? What is the law for this?

The existing technology for detecting deep fakes has an accuracy rate of around 65%, making it difficult for intermediaries to comply with takedown rules efficiently.

In 2019, Facebook, Microsoft, and other partners organised the Deepfake Detection Challenge (DFDC), an online event in which participants from around the world were invited to develop algorithms capable of detecting deepfake videos. The algorithm that won the contest achieved a detection rate of 65.18% when tested against a black-box dataset.

Copyright in AI-generated works

Deepfake technology, vigorously debated these days, is controversial; it is a double-edged sword. Before examining copyright law in India as regards deepfakes, let us first examine the question: can a machine or AI claim copyright in a work created by it? The answer lies in the legal frameworks of different territorial jurisdictions.

The case of Kristina Kashtanova and her comic book ‘Zarya of the Dawn’ 

This case brings to light the intricate legalities surrounding AI-generated content and copyright protection in the United States. The US Copyright Office (USCO) refused protection for the AI-generated portions of her work on the ground that AI-generated works lack human authorship. It held that a work created with the aid of AI can be copyrighted, but a work wholly created by AI is not protectable.

While the US Supreme Court limits copyright protection to works grounded in human creativity, what if Kashtanova had created her work in the UK? The UK law acknowledges computer-generated works, including deep fakes, and considers the person who facilitated their creation to be the author.

Section 178 of the Copyright, Designs and Patents Act, 1988, defines a computer-generated work as a creation that is produced entirely by a computer without any human endeavour involved in the process. Section 9(3) of the Act further stipulates that when it comes to computer-generated works, the individual who sets up the necessary arrangements for their creation is considered the author. 

Similar to the UK law, the Indian Copyright Act of 1957 also acknowledges, albeit to a restricted degree, the person responsible for creating a computer-generated work as its author. However, it does not grant authorship to the non-human entity, the algorithm or the AI system. Under Section 2(d)(vi) of the Copyright Act of 1957, as amended, the person who causes a computer-generated work to be created is recognised as its author.

The question now is: if a copyrighted dataset is used, without the consent of the lawful copyright owner, to train an AI that may then be used to create deep fakes, is the owner’s copyright infringed? It certainly is, and the output data generated from such copyrighted work, including deep fakes, cannot be said to be the original work of the AI; its claim for copyright protection would fail.

The Indian Copyright Office grants copyright protection to derived works by acknowledging the original work. This could serve as a precedent to safeguard AI-generated works from claims of copyright infringement by placing them in the category of ‘derivative works’. Such a practice of the Indian legal system receives legal force from the Berne Convention of 1886, which, in its Article 2(3), states that derivative works are creative works based on or derived from one or more existing works.

Deep fakes and copyright laws in India

While it is true that there is no specific law to deal with deep fakes in India, it is also true that copyright law alone is not a solution to the critical problem of deep fakes.

Section 79 of the Information Technology Act, 2000, provides safe harbour protection to social media intermediaries by not making them liable for third-party content. However, under Section 79(3)(b), they are obliged to remove or disable access to infringing content upon being notified by the government, its agency or any individual, failing which they lose this protection.

In the US, the removal of copyrighted content is mandated under the Digital Millennium Copyright Act, 1998 (DMCA), but removal is tempered by the doctrine of fair use, assessed through the four-factor test outlined in Section 107 of the US Copyright Act. Fair use is a legal principle in the US that allows the use of copyrighted content without a licence in the light of the following factors:

  • Purpose and character of the use.
  • Nature of the copyrighted work.
  • Amount and substantiality of the portion used.
  • Effect of the use upon the potential market.

Under the doctrine of ‘fair use’, it is likely that some deep fakes may qualify for protection on the principle of transformative use.

In India, the principle corresponding to the US doctrine of ‘fair use’ is the doctrine of fair dealing under Section 52 of the Copyright Act, 1957. This section lays down an exhaustive list of acts that are not deemed infringing, and deep fake content does not fall within this list. This may be convenient for tackling deepfake technology created with malicious intent, but it still fails to protect the use of deepfake technology for authentic purposes, although the term ‘review’ under Section 52(1)(a)(ii) of the ICA gives the courts flexibility to adopt the concept of transformative use and protect those works that seem beneficial to society at large.

Under Section 57 of the ICA, authors are provided with special moral rights to claim authorship of their work and the right to claim damages in respect of any distortion, mutilation, modification or any other unauthorised act in relation to the said work.

Under Section 14 of the Copyright Act, 1957, the authors of a copyrighted work enjoy the exclusive right to do, or authorise others to do, certain acts in relation to their literary, dramatic or musical works, artistic works, cinematograph films and sound recordings.

Section 55 and Section 63 of the ICA provide remedies for civil and criminal liability, respectively. On proving an infringement, the copyright owner is entitled to remedy by way of injunctions and orders for seizure and destruction of infringing articles.

Furthermore, Article 21 of the Indian Constitution guarantees the fundamental right to life and personal liberty, which includes the right to privacy and dignity. The author of a copyrighted work also has the right to control the use and dissemination of their creative works. This includes the right to decide who can access, copy, distribute, or modify their work. 

In Puttaswamy vs. Union of India (2017), the Supreme Court reaffirmed that “the right to privacy is protected as an intrinsic part of the right to life and personal liberty under Article 21.”

Conclusion

The copyright of deepfakes can be a complex issue, as deepfakes often involve the use of copyrighted material from various sources. The threat the world faces from deep fakes is due more to the inherent nature of the technology, which is dynamic, ever-changing and often outpaces the capacity of legislation to anticipate its implications, than to the limitations of copyright laws or the lack of uniformity in laws around the world.

Throughout history, the law has perpetually found itself in a position of trailing behind the rapid advancements of technology. As society evolves and innovations emerge, legal frameworks struggle to keep pace, often lagging behind the complexities and nuances of modern technological developments. 

India is also not untouched by the ongoing turmoil in legal systems around the world regarding deep fakes. It may seem that when policymakers take one step forward in this regard, the next moment they have to take two steps back. Though the legal scenario is not yet clear, they have buckled down to reach a concrete solution.

