July 16, 2024

Fake News Meets Deepfakes: The Technology Behind Modern Misinformation

Do you believe everything you see on the Internet and on social media? What if I told you that much of the information you consume is misinformation, and that the real context is often entirely different? In this article, you will learn about the modern technologies and methods used to spread fake news.

A deepfake is a type of media content generated by a particular kind of machine learning known as "deep learning." Because the content is generated by A.I., it is by definition not real, and combining "deep learning" with "fake" gives us the name "deepfake." Pretty easy, right? Let's explore the technical side of deepfakes.

Deep learning is a branch of machine learning built around "hidden layers." It is usually carried out with a family of algorithms known as neural networks, which are loosely modeled on the way the human brain learns. The hidden layers are groups of nodes that perform mathematical operations on input signals to produce output signals, and in the case of deepfakes, to produce convincing fakes of real images. The more layers of interconnected neurons a network has, the "deeper" it is. This is why deepfakes are created with neural networks, and more specifically with convolutional neural networks, which are known to perform well on image tasks.
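
To make the idea of hidden layers concrete, here is a minimal sketch in Python (plain NumPy, nothing deepfake-specific). The layer sizes and random weights are arbitrary illustrations; a real network would learn its weights from data.

```python
# Toy forward pass through a network with two hidden layers (NumPy only).
# Each layer applies a weighted sum of its inputs followed by a nonlinearity;
# stacking several such layers is what makes the network "deep".
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)  # an input signal with 4 features

def layer(inp, out_dim):
    """One fully connected layer with random (untrained) weights and ReLU."""
    W = rng.standard_normal((out_dim, inp.shape[0]))
    b = rng.standard_normal(out_dim)
    return np.maximum(0.0, W @ inp + b)

h1 = layer(x, 8)       # hidden layer 1
h2 = layer(h1, 8)      # hidden layer 2
output = layer(h2, 1)  # output signal
print(output)
```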

I hope that was not too technical. Well, now that we have understood Deepfake a little better than in my previous article, let us understand why Deepfake has become more complex and advanced, and why people create Deepfakes.

In this digital era, most people get their information from the internet, which remains the fastest way to spread it. Content on the internet is often unverified, and that is where cybercriminals take advantage. Sophisticated deepfakes are created by pitting two algorithms against each other, an architecture known as a generative adversarial network (GAN). One model, the generator, creates the most convincing fake replicas of real images it can. The other, the discriminator, tries to predict whether an image is real or fake. The two models keep feeding off each other, and each becomes better at its own task. By training one model to compete against the other, you end up with a generator so good at producing fake images that the human eye cannot tell real from fake.
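
Here is a hedged, minimal sketch of that adversarial loop in Python with PyTorch. To keep it short, the generator learns a toy 2-D point distribution rather than images; the layer sizes, learning rates, and step count are arbitrary choices, not taken from any real deepfake system.

```python
# Minimal GAN sketch: a generator learns to mimic a toy 2-D distribution while
# a discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

def real_data(n):
    # "Real" samples: a Gaussian blob centered at (2, 2)
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the discriminator to classify real vs. generated samples.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()  # freeze G for this step
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))  # G wants D to output "real"
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```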

Nowadays, people find it quite easy to make deepfake content, with numerous deepfake tools freely available online. Some years ago there was an internet trend of face-swapping and fake-aging mobile apps, where we could upload a photo and see how we might look in old age, or swap our face with that of a celebrity. These apps were mostly harmless fun, but in recent years cybercriminals have been found using deepfakes to create misinformation and sexually explicit content.

Let's talk about the threat posed by sexually explicit deepfake content. Deepfake porn, or AI-generated sexual abuse material, involves using A.I. to create sexual images or videos of people doing or saying things they never did in real life. An app called Perky AI, advertised on Instagram, used artificial intelligence to remove clothes from pictures of women. A commercial on Instagram showed how Perky could change American actress Jenna Ortega's appearance in a picture depending on the typed text, and it claimed to be able to undress any picture.

The app said that its AI could create ‘Adult only’ images, or ‘NSFW’ (an acronym for ‘not safe for work’), meaning nudity or explicit images inappropriate for most workplaces.

The developer of the Perky AI application is listed as RichAds, which describes itself on its website as a "global self-serve ad network" specializing in tools that let companies design push notifications, pop-ups, and pop-under ads.

The app is currently unavailable, but it is a reminder of the dangers of A.I. Fake images and videos depicting people naked or engaged in sexual activity have circulated for many years. With artificial intelligence tools, however, this kind of material has become far more realistic and far easier to produce and share, and when AI is the dominant tool used to create it, it is termed a deepfake.

Most non-consensual sexually explicit deepfakes have women and girls as their subjects. Adult victims remain in a legal gray zone, while existing provisions against AI-generated sexual material involving children are often not enforced in practice.

According to a report by the Daily Mail, Meta ran ads for the Perky AI app, which charged $7.99 per week and produced nude images of American actress Jenna Ortega.

Fan-Topia, a large subscription-based platform, allows its members to view non-consensual sexually explicit deepfakes of celebrities, naturally after payment via Visa/Mastercard credit cards or cryptocurrency.

An independent researcher who shared their analysis with Wired found that at least 244,625 videos had been uploaded over the previous seven years to the 35 most popular sites established entirely or partially to distribute deepfake porn. In the first nine months of 2023 alone, 113,000 videos were uploaded to these sites, a 54% increase over the 73,000 videos uploaded in all of 2022. At that pace, more videos will be produced in 2024 than in all previous years combined.

Social media has played a major role in making these dangerous tools easily accessible, and cybercriminals use them to damage individuals' reputations through sexually explicit content or misinformation. As the threat grows, how are A.I. companies working to mitigate it? Let's discuss.

A company named Deep Media, now a team of about 20 people, has set out to tackle the detection of synthetic audio and video. It is doing so through collaborations, including one with the Air Force Research Laboratory, the scientific research and development arm of the U.S. Air Force and part of the Department of Defense. First announced in April 2022, the grant covers the creation of tools that can detect deepfakes in categories such as faces, voices, and aerial imagery. Deep Media trains these detectors by continuously building datasets of sophisticated deepfakes with the company's own generation tool.

Meta, Facebook's parent company, has said it will make major changes to its approach to synthetic and manipulated media ahead of the upcoming U.S. elections, testing its capacity to tackle fake news produced by new kinds of artificial intelligence tools. From May of this year, the social media giant will apply "Made with AI" labels to AI-generated videos, images, and audio posted on its apps, expanding a policy that previously covered only a narrow set of altered videos, said Monika Bickert, Vice President of Content Policy. Meta's board had recommended that the policy also cover non-AI content, which can be just as misleading, as well as audio-only content and videos depicting people doing things they never actually did.

As the deepfake detection race progresses globally, Microsoft has developed detection software that analyzes photos and videos to determine whether they were likely made artificially, providing a confidence score. One expert warns, however, that it could quickly become outdated given how fast deepfake technology evolves. To address this, Microsoft has also introduced a system that lets content producers embed hidden code in their footage so that subsequent changes can be detected.

Imagine watching a video online and wondering whether what you are seeing is real or has been manipulated with advanced technology. Microsoft's Video Authenticator tackles this problem with algorithms that detect subtle signs of manipulation that are difficult for our eyes to catch, such as slight fading or gray areas where a computer-generated face blends into the original figure. It produces a confidence score, expressed as a percentage, indicating how likely the clip is to be a deepfake. Microsoft trained the tool with its own machine-learning methods on a public dataset of about 1,000 deepfake videos, then tested it on Facebook's larger deepfake detection dataset to ensure it could spot false content in varied situations with high accuracy.
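
Video Authenticator's internals are not public, so the following Python sketch only illustrates the general shape of such a tool: score each frame for manipulation, then aggregate the scores into a percentage. The frame scorer here is a deliberately naive stand-in that flags unusually soft (blurry) frames via Laplacian variance; a real system would use a trained classifier, and clip.mp4 is a hypothetical file name.

```python
# Toy sketch (not Microsoft's method): per-frame scores -> video-level percentage.
import cv2

def frame_score(frame) -> float:
    """Crude 0..1 'suspicious softness' score; placeholder for a trained model."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return max(0.0, min(1.0, 1.0 - sharpness / 500.0))  # 500.0: arbitrary scale

def manipulation_confidence(video_path) -> float:
    """Average the per-frame scores into a single 0-100 confidence value."""
    cap = cv2.VideoCapture(video_path)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        scores.append(frame_score(frame))
    cap.release()
    return 100.0 * sum(scores) / max(len(scores), 1)

print(manipulation_confidence("clip.mp4"))  # hypothetical file
```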

Intel's new FakeCatcher technology can detect fake videos in real time, returning an authenticity verdict in milliseconds with 96% accuracy, making Intel the first company to do so. FakeCatcher takes a different approach: instead of hunting for artifacts of fakery, it looks for something very human, the flow of blood. It uses a technique called photoplethysmography (PPG), which uses light to detect changes in blood flow in the skin. As your heart beats, the blood produces tiny changes in the color of your skin that PPG can pick up. FakeCatcher captures PPG signals from 32 locations on the face, then analyzes them and converts them into detailed spatio-temporal maps of how blood flow changes over time across the face. By comparing these maps against what is expected of genuine video, FakeCatcher can determine whether a video is authentic or manipulated. This approach goes deeper by focusing on a fundamental aspect of human biology, blood circulation, rather than just looking for artificial changes.
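
Intel's full pipeline is proprietary, but the core PPG idea can be sketched in a few lines of Python with OpenCV: track how the average green-channel intensity of a skin patch changes frame to frame. The fixed region of interest below is a simplifying assumption, since FakeCatcher tracks 32 facial regions; in practice you would locate the patch with a face detector.

```python
# PPG sketch: mean green intensity of a skin patch, frame by frame.
# A real face shows a faint periodic pulse in this signal; many fakes do not.
import cv2
import numpy as np

def raw_ppg_signal(video_path, roi=(100, 100, 60, 60)):
    """Return the mean green intensity inside a fixed ROI for every frame."""
    x, y, w, h = roi  # assumed skin patch; use a face detector in practice
    cap = cv2.VideoCapture(video_path)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        patch = frame[y:y + h, x:x + w]
        signal.append(patch[:, :, 1].mean())  # channel 1 = green in BGR
    cap.release()
    return np.array(signal)

# A genuine pulse shows up as a dominant frequency around 0.7-4 Hz
# (42-240 bpm) in the Fourier spectrum of this signal; its absence,
# or inconsistency across facial regions, is a red flag.
```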

It's worth noting how our own country, India, approaches this new threat. India currently has no regulations specific to deepfakes or AI-related crimes, but several existing statutes provide both civil and criminal remedies. For example, under Section 66E of the Information Technology Act, individuals can be prosecuted for capturing, publishing, or transmitting images of a person without consent, in breach of privacy; the offense carries a jail term of up to three years or a fine of up to ₹2 lakh. Similarly, Section 66D of the Act punishes cheating by personation using a computer resource or communication device, with imprisonment of up to three years and/or a fine of up to ₹1 lakh.

Additionally, Sections 67, 67A, and 67B of the Information Technology Act allow prosecution for publishing or transmitting obscene or sexually explicit deepfakes. The IT Rules also require social media platforms to remove "altered images" promptly upon receiving notice; failure to do so can cost a platform its "safe harbor" protection, the provision that shields platforms from liability for user content.

The Indian Penal Code, 1860, offers further remedies. Sections such as 509 (insulting the modesty of a woman), 499 (defamation), and 153A (promoting enmity between groups) can be applied to offenses involving deepfakes. And if deepfake material incorporates copyrighted work, the Copyright Act, 1957 may apply: Section 51 prohibits the unauthorized use of copyrighted works and protects the rights of original creators.

On November 23, 2023, Union Minister of Electronics and Information Technology Ashwini Vaishnaw held a key meeting with social media, AI, and tech companies, where he discussed the emerging deepfake crisis and the lack of an effective verification system.

The government plans to release a draft regulation for public consultation within days to address deepfake material, aiming to fix accountability on content creators and social media intermediaries. Rajeev Chandrasekhar, Minister of State for Electronics and Information Technology, maintained that the existing rules are sufficient to deal with deepfakes if strictly enforced. He also confirmed the appointment of a special officer (the Rule 7 officer) to monitor compliance and to help users and citizens report deepfake offenses through FIRs under Section 66D of the IT Act and Rule 3(1)(b) of the IT Rules, and he reminded social media platforms of their duty to remove such material promptly in accordance with statutory orders.

This technology does not affect just one country; its impact is global. To deal with deepfakes, U.S. President Joe Biden signed a comprehensive executive order on artificial intelligence on October 30, 2023, aimed at reducing risks to national security, privacy, and more. The Commerce Department is tasked with developing labeling and watermarking standards for AI-generated content to make detection easier. States such as California and Texas have enacted laws criminalizing the broadcast and publication of fake videos intended to influence election results, while Virginia has imposed criminal penalties for distributing non-consensual pornography. The DEEPFAKES Accountability Act, introduced in Congress in 2023, would require creators on online platforms to clearly label AI-generated content and notify users of any alterations to videos or other material, with criminal penalties for those who fail to do so.

The European Union (EU) has also tightened its rules on disinformation, requiring major platforms such as Google, Meta, and X to flag deepfake content or face heavy fines. The EU's proposed AI Act likewise imposes transparency and disclosure obligations on providers of deepfake technology.

In the recent Lok Sabha elections in India, we saw a marked rise in politics-related deepfake material. A Times of India article reported that Divyendra Singh Jadoun, founder of The Indian Deepfaker, was working on four projects for different political parties and individuals. "We received nearly 200 queries in just a month" ahead of the Lok Sabha elections, he said. The article also quotes a political consultant working with politicians in the North-East, who asked not to be named: "Deepfakes will shift the 2024 Lok Sabha campaign." It is all but certain that this technology was used in political campaigns, and there is no telling how much of the false news we saw and believed on the internet was made with its help.

Recent events, including Slovakia's elections, have shown the real impact deepfakes can have on electoral outcomes. Two days before the vote, AI-generated audio recordings impersonating a liberal candidate were released, sowing confusion and undermining voter confidence. The candidate ultimately lost, even though fact-checkers worked quickly to debunk the fake audio. Although the precise effect of the deepfake was never measured, such content clearly has the potential to swing the outcome of a tightly contested race.

The United States Department of Homeland Security has published a primer on deepfakes and the threat they pose, and the race between AI creation and AI detection has become a global competition. The question that arises here is: are people actually aware of these technologies? On this point, I came across a research study finding that roughly one in three people has shared deepfake content, knowingly or unknowingly, despite it being A.I.-generated.

The researchers surveyed 1,231 Singaporeans and found that 54% were aware of deepfakes, but many still struggled to identify them. About a third of respondents admitted to sharing content they later learned was AI-generated, and about 20% said they frequently encountered deepfake videos. In the U.S., a higher share of respondents were aware of deepfake technology than in Singapore (61% vs. 54%); Americans also reported greater concern and more frequent encounters with deepfakes, and more of them admitted to sharing deepfake material they later discovered was fake (39% vs. 33%).

Now that we have covered the threats deepfakes pose and the risks they carry, let's look at what we can do to keep ourselves safe and, most importantly, how to detect deepfake content on the internet. The single best protection is to share less information about ourselves online. As for detecting deepfake material, I will divide it into three parts: video, audio, and images.

For video, we can look for signs such as:

  • A blur effect on the face but not on other parts of the video (or vice versa)
  • Different skin tones near the edges of the face
  • Double chins, double eyebrows, or doubled facial features
  • Blurring when the face is partially covered by a hand or another object
  • Inconsistent video quality within the same clip
  • Box-like or cropping artifacts around the face, eyes, and neck
  • Unnatural movement or blinking, or mismatched lip sync
  • Inconsistent changes in background and lighting

Now, for audio, we can look at the following:

  • Choppy or oddly structured sentences
  • Glitches or abrupt changes in sound mid-sentence
  • Phrases that do not fit the speaker's normal speech patterns
  • Whether the message fits the current conversation and whether the speaker can answer relevant questions
  • Whether the background sound is consistent with the speaker's presumed location

For images, we can look for signs like:

  • Check that lighting and shadows are consistent throughout the photo; discrepancies may indicate manipulation.
  • Note asymmetries in facial features, abnormal skin textures, or unusual eye shapes and positions.
  • Edges around the face, hair, or accessories may be blurred or misaligned.
  • Watch for odd artifacts, such as pixelation or distortion, especially around the face, eyes, and hair.
  • The background may not match the subject, with different focus, grain, or lighting.
  • Look for unusual or missing information in the image metadata, which can hint at manipulation (see the sketch after this list).
  • Deepfakes often struggle to render natural, consistent eye movement and gaze.
  • Reflections and other light sources may not match the rest of the image.
  • Different parts of the image may vary in sharpness and clarity.
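
As a small illustration of the metadata check above, here is a hedged Python sketch using Pillow to dump an image's EXIF data (suspect.jpg is a hypothetical file name). Missing or stripped metadata is not proof of manipulation, but AI generators typically emit images with no camera metadata at all.

```python
# Dump EXIF metadata from an image with Pillow; absence of camera fields
# (make, model, timestamps) is one weak signal worth noting.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path):
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found (common for AI-generated or re-saved images).")
        return
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

dump_exif("suspect.jpg")  # hypothetical file name
```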

Finally, to help you feel safer: if you are targeted in a deepfake scam, make sure you follow these points:

  1. Don’t Panic
  2. Block, report and distance yourself (only if it’s safe to do so)
  3. Seek help from a trusted person
  4. Report it to legal authorities
  5. Protect your online safety by using strong passwords, removing unknown contacts, and enabling two-factor authentication (2FA)

So, we have come to the end of the article. Thank you for reading; I hope you have learned something new. I have tried to keep it as simple as possible so that readers from both technical and non-technical backgrounds can understand the threat, both the good and the bad sides of this technology. If anything is unclear, you know how to reach me.

Author Name: Kunal Das
