How AI Detection Can Help Combat Deep Fakes and Online Misinformation
Have you ever seen or heard something online that made you think, “No…that can’t be right?” Chances are you’ve just encountered a deep fake.
A deep fake is a form of manipulated media that convincingly alters a legitimate video or audio recording, or fabricates a new one entirely.
As this collection proves, deep fakes are rarely complimentary. While some are created as cruel pranks or to show off the manipulator’s skills, others are created to deceive a target audience, such as the followers of a political rival.
Read on to learn why and how deep fakes can be so convincingly accurate.
What’s a Deep Fake?
(Graphic showing AI-generated image prompted from “President George Bush riding on a horse in space”)
Deep fakes take their name from deep learning technology, which is used to create fake videos.
Deep learning technology is a branch of artificial intelligence (AI) and machine learning that uses neural networks to teach computers to learn from examples and process data in a way similar to the human brain.
As AI learning and algorithms become more sophisticated, increasingly realistic deep fakes continue to be created and distributed.
Software and mobile phone applications that enable users to create their own deep fakes are available. However, some (ReFace and Avatarify, for example) aren’t considered adequate for producing truly convincing deep fake video content.
While amateur deep fakes can be harmless and entertaining, this technology can also be used dishonestly or disruptively. As a result, concerns are growing among the public and AI experts alike.
As Technology Learns, So Do Criminals
According to sosafe-awareness.com, AI-generated cybercrime is on the rise. One example is “Disinformation-as-a-Service” (DaaS), in which AI is used to spread false information that manipulates public opinion, damages reputations, or influences business and political sectors.
Traditionally altered, aka “photoshopped,” images have largely given way to deep fake images and videos.
This shift is contributing to online users’ growing distrust of information. Even worse, it is a global phenomenon, reaching countries where less media-literate citizens may not recognize a deep fake when they see one.
One example of deep fakes alarming the public was documented in the months before India’s general elections in 2024, when a variety of deep fakes were produced and distributed, reaching an estimated 75% of the country’s millions of voters.
These included the use of:
Celebrities, actors, and actresses disparaging political candidates;
Dead politicians and famous figures animated by AI technology; and
Voice cloning to create accurate copies of well-known voices.
Another recent example involved alleged interference in Slovakia’s elections using AI-generated audio.
As AI continues to evolve, however, new applications have emerged. One of these, AI analysis, has helped create AI detection technology that doubles as an effective weapon in the battle against deep fakes and misinformation.
How AI Analysis Works
In the era of rapidly advancing technology, the threat of deep fakes and online misinformation has become increasingly prevalent.
As AI evolves, it’s become both a tool for creating deceptive content and a crucial ally in detecting and combating it.
Let’s explore the critical role of AI detection in identifying deep fakes and maintaining a trustworthy online information environment.
The Power of AI Detection and Analysis
AI detection technology has emerged as a potent weapon in the battle against deep fakes and misinformation.
AI detectors such as Undetectable AI check whether a piece of text is AI-generated, and other kinds of detection tools, such as image detectors, are being developed.
These new algorithms and machine learning systems are designed to analyze video and audio content with remarkable precision.
AI detection systems are trained to identify subtle inconsistencies and anomalies that may be imperceptible to human viewers.
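For example, many text detectors start from statistical signals such as how predictable a passage looks to a language model. The snippet below is a minimal sketch of that idea, assuming the Hugging Face transformers library and a small GPT-2 model; the perplexity threshold is an arbitrary placeholder, and this illustrates the general approach rather than how Undetectable AI or any other commercial detector actually works.

```python
# Toy sketch: flag text whose perplexity under GPT-2 is unusually low,
# a rough proxy for the "machine-like" predictability some detectors look for.
# Assumes: pip install torch transformers. The threshold is illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Passing input_ids as labels makes the model return its cross-entropy loss.
        loss = model(enc["input_ids"], labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def looks_ai_generated(text: str, threshold: float = 30.0) -> bool:
    # Lower perplexity means more predictable text, which *may* suggest AI authorship.
    return perplexity(text) < threshold

if __name__ == "__main__":
    sample = "The quick brown fox jumps over the lazy dog near the old riverbank."
    print(f"perplexity={perplexity(sample):.1f}  flagged={looks_ai_generated(sample)}")
```

Real detectors combine many such signals, and, as the section on false positives below makes clear, even then they remain error-prone.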
Let’s take a deeper look at AI’s ability to play the “good guy” in today’s information wars. In other words, can an AI detector help identify a deep fake? Here’s some good news.
Database Content Comparison
One main strength of AI detection, including deep fake detection, is its ability to scrutinize digital content at a granular level.
This includes examining characteristics such as:
Pixel patterns;
Compression artifacts; and
Audio frequencies.
This enables the detection system to compare the characteristics of a suspected deep fake against a vast database of genuine and manipulated content and to flag any suspected tampering for further review.
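To make that comparison step concrete, here is a minimal sketch assuming a hypothetical database of precomputed feature vectors (for example, statistics describing pixel patterns, compression artifacts, and audio frequencies) labeled as genuine or manipulated. A suspect clip’s features are compared with cosine similarity, and the clip is flagged for human review if its nearest references are mostly manipulated; real systems use learned features and far larger databases.

```python
# Toy sketch: compare a suspect clip's feature vector against a small labeled
# reference database and flag it when its nearest neighbors are mostly "manipulated".
# All feature vectors and labels below are invented placeholders.
import numpy as np

# Hypothetical reference database: each row is a feature vector summarizing
# pixel-pattern, compression-artifact, and audio-frequency statistics.
reference_features = np.array([
    [0.12, 0.80, 0.33],   # genuine
    [0.10, 0.78, 0.30],   # genuine
    [0.55, 0.20, 0.91],   # manipulated
    [0.60, 0.25, 0.88],   # manipulated
])
reference_labels = np.array(["genuine", "genuine", "manipulated", "manipulated"])

def cosine_similarity(vec: np.ndarray, matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between `vec` and each row of `matrix`."""
    return (matrix @ vec) / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(vec))

def flag_for_review(suspect: np.ndarray, k: int = 3) -> bool:
    """Flag the clip if most of its k nearest references are labeled manipulated."""
    sims = cosine_similarity(suspect, reference_features)
    nearest_labels = reference_labels[np.argsort(sims)[::-1][:k]]
    return int(np.sum(nearest_labels == "manipulated")) > k // 2

if __name__ == "__main__":
    suspect_clip = np.array([0.58, 0.22, 0.90])  # features extracted from a suspect video
    print("Flag for human review:", flag_for_review(suspect_clip))
```

Note that the sketch only flags content for review; the judgment about whether something is actually a deep fake is left to a human, which matters for the reasons discussed below.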
While all of this may make it appear that the AI application is doing the work on its own, remember that, like any other system, AI performs only as well as the algorithms, deep learning models, and neural networks its creators built.
The Intelligence Behind the AI
Researchers and developers around the world are racing to improve the capabilities of AI detection techniques.
These advancements are vital when battling deep fake creators and their own ever-evolving AI methods.
The Dark Side Of AI Detection
According to a recent article published by Gizmodo, AI detection systems that falsely flagged human writers’ work as AI-generated have directly caused job losses.
Kimberly Gasuras, a journalist with 24 years of experience, told Gizmodo that her long tenure and reputation weren’t enough to stop her employer from firing her after an AI detector flagged her writing as “AI-generated content.”
Despite the harm AI-generated content can cause, we need to keep our fears in check so that we don’t harm or punish innocent people.
If an AI detection system spots AI-generated content 80% of the time but is wrong the other 20%, it should not be relied on as the final decision-making factor; rather, it should be one part of the total discovery and investigative process.
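A quick back-of-the-envelope calculation shows why. The numbers below are assumptions chosen purely for illustration: a detector that is correct 80% of the time on both human and AI text, and a pool of documents of which only 10% are actually AI-generated.

```python
# Illustrative arithmetic only: the accuracy and base rate are assumptions,
# not measurements of any real detector.
total_docs = 1000
ai_rate = 0.10      # assume 10% of the documents are actually AI-generated
accuracy = 0.80     # assume the detector is right 80% of the time

ai_docs = total_docs * ai_rate            # 100 AI-generated documents
human_docs = total_docs - ai_docs         # 900 human-written documents

true_flags = ai_docs * accuracy           # 80 AI documents correctly flagged
false_flags = human_docs * (1 - accuracy) # 180 human documents wrongly flagged

precision = true_flags / (true_flags + false_flags)
print(f"Documents flagged as AI-generated: {true_flags + false_flags:.0f}")
print(f"Share of flags that are correct:   {precision:.0%}")  # roughly 31%
```

Under those assumptions, fewer than a third of flagged documents are actually AI-generated, which is why a flag should trigger further investigation rather than an immediate penalty.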
Vigilance Builds Trust
AI detection not only helps protect individuals from being misled by deceptive content but also contributes to building trust in legitimate news sources.
As the battle between deep fake creators and the ethical AI companies building detection tools continues to evolve, constant advances in detection technology are key to maintaining the integrity of online information and building trust in news sources.
Still, because AI technologies are new and rapidly evolving, we have to be conscious of their limitations and approach any situation involving the detection or use of AI with caution and reason.