Fake profiles

This article has been written by Sukanya Das, pursuing the Training Program on How to Use AI to Grow Your Legal Practice from LawSikho.

This article has been edited and published by Shashwat Kaushik.

Introduction

In this digital era, deepfake technology has brought both innovative opportunities and significant challenges, especially in the realm of marketing. While deepfakes open new pathways in the entertainment industry, making certain kinds of filmmaking far more affordable for production houses, the technology has unfortunately also become infamous for its use in pornographic videos, fake news, bullying, hate speech, child abuse, and financial fraud. Deepfake technology, built on generative adversarial networks (GANs), is increasingly used to create fake content and manipulated videos and images that cannot be distinguished from the real thing. For example, a deepfake video featuring the actress Rashmika Mandanna recently went viral across social media platforms, causing widespread public concern; in 2018, another viral video depicted a deepfake-generated statement by former US President Barack Obama. This article explores the deepfake landscape, focusing on its types, benefits, international legislation, marketing, the role of law in AI, and impacts on finance and reputation.

What are deepfakes and their types

Deepfakes are a type of synthetic media that uses artificial intelligence (AI) to create realistic images, videos, or audio. They are often used to create fake news or to impersonate someone else. Deepfakes can be created using a variety of techniques, but the most common is to use a deep learning model to learn how to generate realistic faces or voices. The model can then be used to create a fake video or audio clip that appears real.
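As a loose illustration of how such a model learns, the adversarial training idea behind GANs can be sketched at toy scale: a generator learns to produce samples that a discriminator cannot tell apart from real data. Everything below is illustrative only; the 1-D Gaussian data, the linear generator, and the learning rate are stand-ins, not a real face or voice generator.

```python
import numpy as np

# Toy sketch of the adversarial (GAN) training loop, assuming 1-D data:
# the generator G maps noise z to a*z + b, and the discriminator D is a
# logistic classifier sigmoid(w*x + c) that tries to separate real
# samples from generated ones.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60.0, 60.0)))

mu, sigma = 4.0, 0.5     # "real" data distribution the generator imitates
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr = 0.02

for step in range(2000):
    real = rng.normal(mu, sigma)
    z = rng.normal()
    fake = a * z + b

    # Discriminator ascent: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: adjust a, b to push D(fake) toward 1.
    d_fake = sigmoid(w * fake + c)
    grad = (1 - d_fake) * w          # gradient of log D(fake) w.r.t. fake
    a += lr * grad * z
    b += lr * grad

fakes = a * rng.normal(size=1000) + b
print(float(fakes.mean()))   # mean of the generated samples
```

On this toy problem the generator's output distribution should drift toward the real data's mean; real deepfake generators apply the same adversarial idea to high-dimensional images and audio.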

Deepfakes are a relatively new technology, but they have already had a significant impact on the world. They have been used to spread misinformation, to blackmail people, and even to interfere in elections. As deepfakes become more sophisticated, it is likely that they will be used for even more malicious purposes.

There are a number of concerns about deepfakes. One concern is that they could be used to create fake news or to spread misinformation. For example, a deepfake video could be created to make it appear that a politician said something they did not actually say. This could be used to influence an election or to damage someone’s reputation.

Another concern is that deepfakes could be used to blackmail people. For example, a deepfake video could be created to make it appear that someone is doing something they did not actually do. This could be used to extort money from the person or to blackmail them into doing something they do not want to do.

Deepfakes could also be used to interfere in elections. For example, a deepfake video could be created to make it appear that a candidate is unfit for office. This could be used to discourage people from voting for the candidate or to damage their chances of winning the election.

There are a number of ways to combat deepfakes. One way is to educate the public about deepfakes and how they can be used to spread misinformation. Another way is to develop tools to detect deepfakes. Finally, it is important to support laws and regulations that will help to prevent the misuse of deepfakes.

The legal implications of deepfakes in marketing are complex and far-reaching. Deepfakes are realistic digital images or videos created using artificial intelligence (AI) to make it appear that someone is doing or saying something that they did not. While deepfakes can be used for entertainment purposes, they can also be used to spread misinformation, manipulate elections, or damage reputations.

One of the biggest legal concerns about deepfakes is that they can be used to create false endorsements or advertisements. For example, a deepfake could be used to make it appear that a celebrity is endorsing a product when they actually have no affiliation with it. This could mislead consumers into making purchases that they would not otherwise have made.

Another legal concern about deepfakes is that they can be used to harass or defame individuals. For example, a deepfake could be used to create a video that appears to show someone doing something embarrassing or illegal. This could damage the reputation of the individual and make it difficult for them to get a job or maintain relationships.

In addition to these specific legal concerns, deepfakes also raise broader questions about privacy and freedom of expression. For example, it is unclear whether deepfakes are protected by the First Amendment. If deepfakes are considered to be speech, then they may be protected under the First Amendment, even if they are used to spread misinformation or harass individuals.

The legal implications of deepfakes in marketing are still being debated. However, it is clear that deepfakes have the potential to be used for harmful purposes. As a result, it is important for lawmakers, regulators, and marketers to work together to develop laws and regulations that can address the risks posed by deepfakes.

Here are some specific recommendations for addressing the legal implications of deepfakes in marketing:

  • Develop clear laws and regulations that prohibit the use of deepfakes for false endorsements or advertisements.
  • Create a public awareness campaign about the dangers of deepfakes.
  • Encourage marketers to use ethical guidelines when creating and using deepfakes.
  • Support research into ways to detect and prevent deepfakes.

By taking these steps, we can help to ensure that deepfakes are not used to harm consumers or undermine our democracy.

Types

Audio-visual manipulation: modifies audio/video recordings by overlaying one person’s voice and face on top of another’s (morphing). Lip syncing is another technique, making a person appear, visually or vocally, to say something that was not in the original video or audio content.

Synthetic media: This type of deepfake generates entirely fabricated content, such as videos and images that are far removed from reality. Face swapping, for example, replaces one person’s face with another’s in a photo or video. In late 2017, a user began posting celebrity porn deepfakes, and many hobbyists now concentrate solely on deepfake pornography.

Brief discussion of the above points

A report surfaced that students at Westfield High School in New Jersey were using AI to manipulate original photos of their own classmates into fake nude or pornographic images, and the photos had apparently been circulating in group chats over the summer. AI technology has become so advanced that teenagers can now alter pornographic images taken from online sources into nude photographs of their own underage classmates. According to NBC, when the school administration informed parents about including ChatGPT in their children’s curriculum, the parents’ first worry was that it would affect their children’s education by making it easier to write school essays; after learning about the incident involving the fake nude photos, the situation became far more worrying.

In India, publishing or transmitting sexually explicit content in electronic form is punishable under Section 67A of the IT Act. A first conviction carries imprisonment for a term that may extend to five years, along with a fine that may extend to ten lakh rupees; a second conviction may extend the imprisonment to seven years, again with a fine of up to ten lakh rupees. Separately, publishing or transmitting content that causes sexual arousal without being explicitly pornographic is deemed obscene and punishable under Section 67 of the IT Act: a first conviction carries imprisonment of up to three years and a fine of up to five lakh rupees, and a subsequent conviction up to five years’ imprisonment and a fine of up to ten lakh rupees. In the recent Sachin Tendulkar case, Section 500 of the IPC (punishment for defamation) was invoked against the owner of a gaming website responsible for spreading a deepfake video of Tendulkar. It should be noted that Section 465 (punishment for forgery) and Section 469 (forgery for the purpose of harming reputation) of the IPC, together with Sections 66C and 66E of the IT Act, address offences related to computer resources and information: Section 66C deals specifically with identity theft, while Section 66E punishes violations of privacy. These sections can support legal action against anyone who misuses AI-based deepfakes, for instance to make defamatory statements or harm another person’s reputation. Section 67B of the IT Act, 2000, punishes the depiction of children in sexually explicit acts or pornography in electronic form. Sections 13, 14, and 15 of the POCSO Act, 2012, can also be invoked to protect the rights of women and children and to prosecute these crimes, and Sections 292 and 294 of the Penal Code, 1860, punish obscene material.

The 2024 Lok Sabha elections presented a challenging situation, with deepfakes posing a genuine threat. Deepfake videos and audio portraying political candidates as making remarks against a particular community can inflame communal tension and even provoke riots. Such disruption of the electoral process threatens social harmony and public order, and could constitute a deliberate offence under Section 153A (promoting enmity between different groups on grounds of religion, race, place of birth, and residence) and Section 505 (statements conducing to public mischief) of the IPC. The Indian legal system is still lacking in many respects, although MeitY advisories are attempting to resolve the issues and provide temporary support in the meantime. Ultimately, the legislature needs to take AI seriously and move fast.

Deepfakes are versatile in many other ways: deepfake technologies can reproduce events and alter public opinion or perception, and a wide range of such techniques is available in the market. Deepfake creators can be segregated into four groups:

  • public deepfake hobbyists;
  • malicious swindlers;
  • governmental actors, such as politicians; or
  • legitimate players, such as television companies.

Benefits and issues of deepfake technology

Media entertainment industries

Deepfake technology can enhance visual effects in films and fantasy web series, making storytelling more engaging. In other ways, though, it can work against actors and artists. A recent episode of Netflix’s Black Mirror illustrated that depicting characters and actors through AI generation is not far from reality. Actors went on strike over the issue and, in December 2023, reached a deal requiring companies to obtain consent before creating any digital replica of them, to disclose the purpose for which a replica will be used, and to compensate actors for that use. The deal also includes guidelines for synthetic performers based on actors’ images that can be used to train generative AI.

Another type of incident has recently caused a huge stir in India: influential and celebrity figures such as Katrina Kaif, Alia Bhatt, Rashmika Mandanna, and Sachin Tendulkar fell prey to deepfake misuse such as morphing. Unfortunately, these incidents have become common, raising alarm about the illegal and unethical use of deepfakes. On December 23, 2023, the Ministry of Electronics and Information Technology (MeitY) issued an advisory directing all online platforms (intermediaries) to follow certain rules. Intermediaries must clearly inform users of their obligations under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, including Rule 3(1)(b) on prohibited content, and must include these obligations in key documents such as the terms of service, privacy policy, and user agreements. Misleading or incorrect information is also forbidden under Rule 3(1)(b)(v), so intermediaries must notify users that hosting, uploading, displaying, sharing, or modifying content that is harmful, obscene, pornographic, defamatory, or paedophilic, or that belongs to another person, is unlawful; intermediaries must take responsibility for the content shared on their services and educate their users about what is and is not allowed. A further advisory of March 15, 2024, suggested that companies add a special label or unique code to each piece of AI-created text or media, making it easier to trace where content came from and to prevent misuse. The main goal is to identify whether a deepfake was made using a particular company’s tools, which in turn helps identify who made it. Although these guidelines carry no legal weight, meaning companies are not required by law to follow them, they could still influence future deepfake laws.
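The advisory’s “unique code” idea can be sketched as a keyed label attached to generated media. The tool identifier, secret key, and helper functions below are purely hypothetical illustrations; real provenance schemes (such as signed C2PA-style manifests) are considerably more elaborate.

```python
import hmac
import hashlib

# Hypothetical sketch: an AI tool's provider attaches a keyed tag to each
# piece of generated content, so the content can later be traced back to
# the tool and checked for tampering. The key and tool id are invented.
SECRET_KEY = b"tool-provider-secret"   # held only by the tool's provider

def label_content(content: bytes, tool_id: str) -> dict:
    """Wrap content with a tool identifier and a keyed integrity tag."""
    tag = hmac.new(SECRET_KEY, content + tool_id.encode(),
                   hashlib.sha256).hexdigest()
    return {"tool_id": tool_id, "tag": tag}

def verify_label(content: bytes, label: dict) -> bool:
    """Check whether this content was labelled by the key holder."""
    expected = hmac.new(SECRET_KEY, content + label["tool_id"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, label["tag"])

media = b"...synthetic image bytes..."
lbl = label_content(media, "example-genai-tool-v1")
print(verify_label(media, lbl))        # True: label matches the content
print(verify_label(b"tampered", lbl))  # False: the content was altered
```

A label like this only proves origin to the key holder; public verifiability, as the advisory envisions, would require signatures or watermarks embedded in the media itself.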

In India, the legal framework lacks dedicated measures to address deepfake threats; however, existing laws such as the Information Technology Act, 2000 (IT Act), the Indian Penal Code, 1860 (IPC), and the IT Rules offer partial remedies.

Education and training

Education and training play a pivotal role in combating deepfake malpractice, and are a crucial step in safeguarding the accuracy and integrity of creative projects. The public needs to be educated about the potential risks and consequences of misusing AI so that individuals can make informed decisions and avoid engaging in unethical practices.

Schools should incorporate AI education into their curriculum, starting at a young age. Students should be taught the fundamentals of AI, including its capabilities, limitations, and ethical implications. By providing children with a solid foundation in AI, schools can help them develop critical thinking skills and the ability to use AI creatively and responsibly.

One approach to teaching AI in schools is to focus on project-based learning. Students can be assigned projects that require them to use AI to solve real-world problems or create innovative solutions. This hands-on approach allows students to apply their knowledge and gain practical experience with AI.

In addition to teaching students about AI, it is also important to educate them about the dangers of deepfake malpractice. Students should be made aware of the various ways that AI can be misused, such as creating fake news or spreading disinformation. They should also be taught how to identify fake content and how to report it to the appropriate authorities.

By educating the public about AI and the importance of avoiding deepfake malpractice, we can create a more informed and responsible society. This will help to ensure that AI is used for the betterment of humanity and not for malicious purposes.

The public also needs to be educated about deepfake detection techniques, because deepfake analysis can exploit small but unique clues: when a video is recorded, the camera leaves behind faint traces such as lens distortion and sensor noise. Because each camera’s pattern is unique, these traces act as identifying evidence and can survive even after a deepfake video has been generated.
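A minimal sketch of this camera-trace idea, using synthetic stand-ins for the sensor fingerprint and the images: subtracting a smoothed copy of a frame leaves a high-frequency residual, which can be correlated against a known camera fingerprint to test whether the frame came from that camera.

```python
import numpy as np

# Illustrative only: the "fingerprint" and "scene" are synthetic arrays,
# not real forensic data, and the denoiser is a crude 3x3 box blur.
rng = np.random.default_rng(42)

def residual(img: np.ndarray) -> np.ndarray:
    """Image minus a 3x3 box blur: keeps the high-frequency noise."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    blur = sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    return img - blur

def corr(a: np.ndarray, b: np.ndarray) -> float:
    """Normalised correlation between two arrays."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

fingerprint = rng.normal(0, 1, (64, 64))   # this camera's unique pattern
# A smooth scene (a brightness ramp), so its own residual is tiny.
scene = np.add.outer(np.linspace(0, 255, 64), np.linspace(0, 255, 64)) / 2

genuine = scene + 2.0 * fingerprint                # shot on this camera
other = scene + 2.0 * rng.normal(0, 1, (64, 64))   # shot on another camera

match = corr(residual(genuine), fingerprint)
mismatch = corr(residual(other), fingerprint)
print(match > mismatch)   # True: the residual still carries the trace
```

Real forensic pipelines estimate the fingerprint (photo-response non-uniformity) from many reference frames and use far better denoisers, but the principle is the same: the device-unique residual either matches the claimed camera or it does not.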

International legislation 

Criminal statute and defamation

The United States: There is currently no federal legislation addressing the threats of deepfake technology. Some states, however, have passed targeted legislation: Texas approved S.B. 751 and California passed AB 730, both in 2019. Both laws ban deepfake content intended to influence an election or a candidacy.

China: China is one of the very few countries to have established strict regulation of deepfakes, most notably the Deep Synthesis Provisions, which prevent an AI user from creating deepfake content without the subject’s consent or knowledge. The provisions took effect in January 2023 and have two main purposes:

  • strengthening online censorship; and
  • keeping pace with rapidly advancing technologies.

Deepfake creators may also be liable for defamation under the private law of tort. However, the tort of defamation differs from country to country; Australian law, for example, was designed mainly to counter written and spoken material, such as a newspaper.

IP infringement and copyright

Unlawfully exploiting something like a trademark or label is a breach of IP rights. The list of dangers posed by deepfakes is long and includes severe issues such as human rights, personal data protection and privacy rights, and copyright infringement. At present, no law in Australia gives citizens IP rights in their own faces or voices; only an author can own copyright in a work depicting a face or in a recording of a voice.

Role and challenges of law in AI

Owing to the heavy use of deepfakes and related malpractice, the question of banning deepfakes of political officials and candidates has become a significant challenge. Legislation needs to be reworked to balance protection of the public with freedom of expression within the jurisdiction. India also lags on this deepfake misconduct issue; however, it is trying to apply laws that can collectively help the government fight the problem to some extent. Section 66E of the IT Act penalises infringement of an individual’s privacy, such as transmitting or publishing images without consent, with imprisonment of up to three years and a fine of up to INR 2 lakh imposed on the perpetrator. Still, a dilemma remains over this section: whether it applies when the images are entirely fake and generated with the help of AI.

Damage to finance and reputation

Unfortunately, the threat of malicious deepfakes looms over the finance industry, posing a significant risk of market manipulation, financial losses, and instability. These deepfakes, created with the intent to deceive and harm, can have far-reaching consequences for businesses and individuals alike.

One of the primary concerns is the potential for deepfakes to be used to manipulate stock prices. By creating fake videos or audio recordings of business leaders making false statements or engaging in unethical behaviour, malicious actors could spread misinformation and cause investors to make decisions based on inaccurate information. This could lead to sudden market fluctuations, panic selling, and significant financial losses for unsuspecting investors.

Another major risk associated with deepfakes is the harm they can cause to the reputation of businesses and individuals. Misleading or defamatory deepfake content can quickly spread online, tarnishing the credibility of businesses and damaging the trust that customers and partners have in them. This can lead to lost revenue, damaged brand image, and difficulty attracting new customers.

The misuse of deepfakes can also have a chilling effect on free speech and expression. Fear of being targeted by deepfakes could discourage businesses and individuals from speaking out on important issues or sharing their opinions publicly. This could stifle innovation, creativity, and progress across various industries and sectors.

To address the growing threat of deepfakes, regulatory bodies, technology companies, and law enforcement agencies are working together to develop solutions to detect and combat these malicious creations. This includes implementing stricter laws and regulations, investing in advanced technology to identify deepfakes, and raising awareness among the public about the dangers of deepfakes.

The fight against deepfakes is ongoing, and it is crucial for businesses and individuals to stay informed and vigilant. By working together, we can mitigate the risks posed by malicious deepfakes and protect the integrity of the finance industry and the wider economy.

Conclusion

Although deepfake technologies offer many innovative possibilities for the finance and marketing industries, the media and entertainment industry, and many more fields in productive ways, the misuse of deepfakes poses substantial risks to trust and jeopardises the stability of the economy. These challenges need to be addressed on multiple fronts, with approaches from legislation, IT, education, and international collaboration. Implementing such robust strategies can foster responsibility and help us navigate the evolving landscape of deepfakes while preserving market integrity, authenticity, and creativity.
