This article has been written by Subarna Dutta, pursuing a Remote Freelancing and Profile Building Program course from Skill Arbitrage.

This article has been edited and published by Shashwat Kaushik.

What is the right to internet privacy

Internet access is now so easy, especially for children, that it has understandably become a cause for grave concern. While the internet is home to a wealth of information, it also carries harmful material such as pornography and explicit sexual content, which is easily accessible to people of all age groups because users are entitled to privacy online. Internet privacy is a legal right to confidentiality: it protects an individual's personal, financial, communicative and preferential data while they are connected to the internet.

We are in an age of rapid technological advancement, and one of the major issues with internet usage is the abundance of pornographic content, which raises many questions about privacy rights. The challenge is finding common ground where platforms, internet service providers and the government respect individual online freedom while also accounting for the potentially detrimental effects of explicit, obscene content.


Ethical controversies: deepfake pornography

Recently, a video surfaced on X of Rashmika Mandanna, an Indian actress, entering an elevator in a black swimsuit, and it spread widely across social media. When she took to social media to strongly deny the video's authenticity, it emerged that the clip was a deepfake: her face had been superimposed on the body of another influencer. It's scary, isn't it?

A deepfake is a fake image, video or audio clip created using artificial intelligence and machine learning. As the world adopts AI in almost every sphere, it is deeply concerning to witness how believable the images and videos it produces can be, and how horrendously pervasive they have become.

And guess what? Women are the worst affected by this rising menace. A report by Sensity AI, a company known for monitoring deepfakes, found that 96% of deepfakes were non-consensual and that 99% of those featured were women. “A creator offered on Discord to make a five-minute deep fake of a ‘personal girl,’ meaning anyone with fewer than 2 million Instagram followers, for $65,” NBC reports.

However, the harm is not restricted to the bullying of women; deepfakes can place anyone in dangerous or compromising scenarios, whether at school or in the workplace, leading to extreme forms of trauma and harassment.

How are deepfakes created

A deepfake is built with machine learning: a neural network is trained, often for hours, on real video of the target to build a realistic model of the person from various angles and in different lighting. AI speeds the process up, but a simple network alone is rarely convincing enough to produce an entirely fictitious image. A more capable architecture is the GAN, or generative adversarial network, which consists of a generator and a discriminator. The generator creates fake samples, while the discriminator scores them against real ones, and its feedback pushes the synthetic images to resemble the real ones ever more closely. Faces created by GANs can be almost indistinguishable from real faces, which makes them ideal for producing convincing fakes.
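The generator-versus-discriminator loop described above can be sketched in miniature. The toy below is a hypothetical, heavily simplified illustration, not real deepfake code: the "generator" is a two-parameter affine map of noise and the "discriminator" is logistic regression, learning a one-dimensional "real" distribution. Real GANs replace both with deep networks over images, but the adversarial update structure is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the GAN should learn to imitate: samples from N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: the simplest possible model, an affine map of random noise.
g_a, g_b = 1.0, 0.0      # the generator's two "weights"
# Discriminator: logistic regression, sigmoid(w*x + c).
d_w, d_c = 0.1, 0.0

def generate(n):
    z = rng.normal(0.0, 1.0, n)
    return g_a * z + g_b

lr = 0.05
for step in range(2000):
    real, fake = real_batch(64), generate(64)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    s_real = sigmoid(d_w * real + d_c)
    s_fake = sigmoid(d_w * fake + d_c)
    d_w += lr * np.mean((1 - s_real) * real - s_fake * fake)
    d_c += lr * np.mean((1 - s_real) - s_fake)

    # Generator step: use the discriminator's feedback (its gradient)
    # to make the fake samples score closer to "real".
    z = rng.normal(0.0, 1.0, 64)
    s = sigmoid(d_w * (g_a * z + g_b) + d_c)
    g_a += lr * np.mean((1 - s) * d_w * z)
    g_b += lr * np.mean((1 - s) * d_w)

# The generator's output mean should have drifted toward the real mean of 4.
print(float(np.mean(generate(1000))))
```

The key point is that the generator never sees the real data directly; it improves only through the discriminator's feedback, which is what makes the arms race converge on convincing fakes.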

Misuses and issues arising from deepfakes

Apparently, deepfakes were initially created for fun, but with their growing popularity over the years and the natural human propensity to indulge in unethical things, the technology is now widely used for a number of image-maligning activities: pornography, blackmail and extortion, celebrity baiting, various cybercrimes and frauds, and fabricated clips of politicians saying things they never said in the run-up to crucial elections.

A pornographic deepfake is a virtual sexual assault, made without the victim's consent to the image being shared, or even their knowledge. But the harm runs deeper than the sexually explicit content itself.

This AI-driven technology empowers people to create visuals that are almost real and absolutely convincing. For women, it reduces them to mere sexual objects, inflicting psychological harm and emotional distress, often with collateral damage such as losing a job or suffering financial losses. Although the technology is not aimed only at women, it affects them disproportionately: because of bias in the underlying AI, the face-swap process produces more convincing results when a woman's picture is used. Deepfakes targeting men primarily depict celebrities and politicians making derogatory speeches or engaging in anti-state actions, sowing distrust among ordinary people. Even if a deepfake is eventually debunked, it can still be extremely hurtful, and the damage cannot be mitigated so easily.

International laws surrounding deepfakes

By now it is well understood that deepfakes are a growing menace, and they are garnering attention worldwide accordingly. In response, countries are drafting the required legislation. This comes with many challenges: detecting deepfakes is hard, and the AI behind them gets better with each passing second. And because the internet reaches almost every part of the world, the perpetrator can be located anywhere, making it extremely difficult for one jurisdiction's enforcement agencies to apply their laws to a perpetrator in another.

In 2023, India, along with 28 other major countries, including the US, Canada, Germany, Australia and China, as well as the EU, signed the ‘Bletchley Declaration’, a collective effort acknowledging the risks of AI and its potential misuse.

Several jurisdictions have come up with preventative measures to deal with this menace:

The United Kingdom

The Online Safety Act, which received royal assent on October 26, 2023, is one of the laws controlling a wide range of online speech and media deemed harmful. The Act addresses specific harms, including anonymous trolling, misleading advertisements, underage access to online pornography, the non-consensual sharing of deepfakes and the spread of child sexual abuse material. Online platforms will also be required to produce transparency reports; failure to comply with the Act's rules could land companies with fines of up to £18 million (around $22 million) or ten percent of their annual global turnover, whichever is higher, and their bosses could even face jail.

The European Union

The EU has enacted the Digital Services Act (DSA), which came into effect for the largest platforms on August 25, 2023. It is one of the most important regulations protecting the digital space against the spread of illegal content and safeguarding the fundamental rights of users. Billed as the most ambitious regulation of its kind in the world, the DSA obliges social media platforms and online marketplaces to enhance transparency and to accept accountability for the dissemination of harmful content.

The United States of America

Though there is currently no federal law in the US addressing deepfakes, a bill, the DEEP FAKES Accountability Act of 2019, was introduced; it directs the Department of Homeland Security to establish a task force to address deepfakes and to penalise related violations, but it has yet to be passed. A few states, such as Texas, Virginia and California, have enacted legislation criminalising deepfake pornography, but these laws help only if the perpetrator is a resident of one of those jurisdictions.


China

On January 10, 2023, the Provisions on the Administration of Deep Synthesis of Internet Information Services came into force in China. This joint initiative by the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology (MIIT) and the Ministry of Public Security (MPS) comprehensively addresses the risks of ‘deep synthesis’ technologies that produce doctored content, such as virtual human beings, online. The regulations prohibit the production of misinformation banned by laws and administrative regulations, and mandate measures for personal data protection, data security and other technical safeguards.

South Korea

South Korea, too, has taken steps to curb this deceptive use of AI by passing the Act on Special Cases Concerning the Punishment of Sexual Crimes, which makes the creation and distribution of sexual deepfakes illegal, with offenders facing imprisonment of up to five years or fines of up to 50 million won (approximately USD 43,000).

Laws in India

While there are no laws dealing specifically with deepfakes, some provisions of the Indian Penal Code, 1860, and the Information Technology Act, 2000, can be applied. Section 500 of the Indian Penal Code, 1860, prescribes the punishment for defamation. On November 7, 2023, the Ministry of Electronics and Information Technology issued an advisory with guidelines directing intermediaries to:

  • Exercise due diligence and make reasonable efforts to identify misinformation and deepfakes, in particular information that violates the provisions of rules and regulations and/or user agreements;
  • Act on such cases expeditiously, well within the timeframes stipulated under the IT Rules, 2021;
  • Ensure that users do not host such information/content/deepfakes;
  • Remove any such content within 36 hours of it being reported; and
  • Ensure expeditious action within the timeframes stipulated under the IT Rules, 2021, and disable access to the content/information.

The Indian Penal Code of 1860

The IPC, which serves as India's primary criminal code, contains several provisions that can address the menace of deepfakes and impose punishments on those who engage in such activities.

Section 469 – Forgery

Under Section 469 of the IPC, creating a deepfake with the intent to harm the reputation of another person or organisation amounts to forgery. This provision criminalises manipulating or fabricating digital content to make it appear authentic and genuine. Individuals found guilty under Section 469 face imprisonment of up to three years and are also liable to a fine.

Section 500 – Defamation

Posting deepfakes that damage the reputation or tarnish the image of an individual or organisation can be prosecuted under Section 500 of the IPC, which prescribes the punishment for defamation. The provision punishes anyone who makes or publishes an imputation intending to harm the reputation of another person or entity. Offenders convicted under Section 500 can face imprisonment for up to two years, a fine, or both.

Section 66A – Sending offensive messages through communication services

Section 66A, often cited in this context, in fact belonged to the Information Technology Act, 2000, not the IPC. It penalised the dissemination of offensive or menacing information via electronic communication services, including social media platforms, which would have covered deepfakes that are sexually explicit, violent, or contain hate speech or discriminatory content. However, the Supreme Court struck down Section 66A as unconstitutional in Shreya Singhal v. Union of India (2015), so it can no longer be invoked.

Section 67 – Publication or transmission of obscene material in electronic form

Deepfakes that depict explicit sexual acts or nudity may be considered obscene material under Section 67 of the Information Technology Act, 2000 (not the IPC). This provision prohibits the publication or transmission of obscene or lascivious content in electronic form, including deepfakes. Individuals found guilty under Section 67 can face imprisonment of up to three years and a fine.

The Information Technology Act of 2000

Under this Act, intermediaries that fail to act in accordance with its provisions and rules lose the ‘safe harbour’ protection of Section 79(1) of the IT Act.

Section 66D of the IT Act prescribes punishment for cheating by impersonation using electronic means, with imprisonment of up to three years and a fine of up to one lakh rupees.

Section 66E of the IT Act prescribes punishment for violation of privacy through capturing, publishing or transmitting the image of a private area of any person without consent, with imprisonment of up to three years, a fine of up to two lakh rupees, or both.

Section 67A of the IT Act prescribes punishment for publishing or transmitting sexually explicit material in electronic form: a convicted person can face imprisonment of up to five years and a fine of up to ten lakh rupees. Section 67B prescribes punishment for publishing or transmitting material depicting child sexual abuse in electronic form, likewise with imprisonment of up to five years and a fine of up to ten lakh rupees.

Some other provisions that can be invoked are Section 292 and Section 294 of the Indian Penal Code, 1860, which also deal with punishment for the publication and display of obscene material.

Challenges and potential solutions to deepfakes

With the rapid advancement of technology, detecting deepfakes is genuinely challenging. The current challenges are:

  • The absence of a workforce, legal or forensic, trained specifically in visual verification techniques.
  • The increasing realism of the technology and its wide availability.

Responding to deepfakes should be treated as a serious priority and placed at the centre of the discussion. Some ways to combat them are:

  • Use of blockchain technology: registering a cryptographic fingerprint of authentic media on a tamper-evident ledger means any unauthorised edit or tampering can be detected by comparing fingerprints.
  • Visual analysis: AI often struggles to imitate tiny facial cues such as eye movements and blinking, paving the way for detecting inconsistencies left by deepfake-generation algorithms.
  • Multi-factor authentication: multi-factor authentication provides layered verification by adding biometric checks such as facial or voice recognition, creating difficulties for fraudsters.
  • Use of machine learning: while machine learning is one of the main techniques for creating deepfakes, it can also detect them. Models can analyse large amounts of audio and video data at great speed and recognise the telltale patterns of deepfake algorithms. Over time, these models must be retrained and adapted to detect new types of deepfakes.
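The blockchain idea above rests on a simple primitive: a cryptographic fingerprint of the original file. As a minimal sketch (the on-chain registration step is out of scope here, and the byte strings are placeholders for real video data), a published clip's SHA-256 hash can be registered once and any circulating copy checked against it:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 fingerprint of a media file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def is_untampered(data: bytes, registered: str) -> bool:
    """Compare a copy's fingerprint with the one registered at
    publication time (e.g. on a blockchain or other trusted ledger)."""
    return fingerprint(data) == registered

# At publication, the creator registers the original clip's fingerprint.
original = b"...raw video bytes (placeholder)..."
registered = fingerprint(original)

# Later, anyone can check whether a circulating copy matches it.
tampered = original + b" one swapped frame"
print(is_untampered(original, registered))   # True
print(is_untampered(tampered, registered))   # False
```

Note that this only proves a copy differs from the registered original; it cannot say anything about media whose fingerprint was never registered, which is why it complements rather than replaces ML-based detection.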

Social media platforms should be held accountable and responsible for individuals' sensitive and personal data; a breach of such data violates the individual's right to privacy. Platforms like YouTube have made it mandatory for creators to disclose whether content is AI-generated. People should also be trained to recognise the types of fraud involved and the ways to identify them.

Preventative measures are a way forward, but despite the threats that deepfakes pose, there is currently no legislation prohibiting their creation. A strict legal regulation prohibiting the creation and sharing of deepfakes is the immediate need of the hour.


