Fighting Deepfakes May Not Be a Technology Problem

When a deepfake livestream of NVIDIA CEO Jensen Huang promoting cryptocurrency pulled in 100,000 viewers, eight times NVIDIA’s actual GPU Technology Conference audience, it exposed a failure that was more social than technical.

A scam channel labelled “NVIDIA Live” used a QR code to funnel victims into sending cryptocurrency.

The render didn’t have to be flawless, only convincing enough to bypass scepticism.

“In information security, there is no such thing as being one step ahead of the cybercriminals. That sounds very beautiful in some studies — but it’s simply wishful thinking,” Miguel Fornés, security governance and compliance manager at Surfshark, a cybersecurity firm, said in an interaction with AIM.

Surfshark’s analysis places deepfake-related financial losses at $1.5 billion in 2025 alone. Fornés expects that number to grow as generative models improve.

The more critical admission is that defenders must be active at all times, while attackers need only one opportunity.

Two years ago, viral deepfakes were visibly flawed. “Just 24 months later, everything changed dramatically,” Fornés says.

Text-to-video models, from OpenAI’s Sora 2 and Google’s Veo 3.1 to open-weight releases from Chinese AI labs such as Tencent, have advanced rapidly.

“There are hundreds of millions of dollars right now, every day, being thrown at the most important AI companies to reach dominance, to reach superiority,” said Fornés.

But once these companies achieve that superiority, the capability is packaged into products and offered at nominal prices.
“Previously, depending on quality, producing a one-minute deepfake video was estimated to cost between $300 and $20,000 and required professional skills,” stated Surfshark’s study.

But these costs have drastically reduced, widening the defender’s disadvantage.

Meanwhile, traditional technical tells are eroding quickly. Unusually clean audio, recycled footage, physics errors and visual artefacts may give a fake away today, but these signs won’t be reliable for long.

Can We Even Identify Deepfakes?

Talking about the limits of visual detection, Fornés said, “Whatever I tell you now, in one month, it won’t be valid. It’s as simple as that.”

Even so, he outlined a few signals users have traditionally relied on that are already eroding.
One of the clearest tells, he explained, used to be audio quality.

Real recordings carry the messiness of everyday life: background chatter, a TV humming, appliances whirring, someone walking past an office desk. Deepfakes often strip all this away, resulting in unnaturally clean audio that should prompt suspicion.
Another pattern he pointed out was the reuse of existing footage. Many malicious clips aren’t fully generated from scratch.
Attackers “take some existing video” and simply modify the face or key characteristics. That means reverse-image searches sometimes catch the original material beneath the manipulation.

But visual inconsistencies, he warned, are disappearing just as quickly.

He recalled a deepfake video he saw circulating on Facebook — supposedly showing baggage handlers aggressively throwing suitcases.

What caught his eye wasn’t the outrage but the physics. One bag is thrown into a compartment and then inexplicably bounces back out despite a door blocking the path.

“This is what is called illogical kinetics,” he said, the kind of impossible movement that signals fakery.

The problem is that these tells are living on borrowed time. These illogical kinetics “will be fixed” in a few months, Fornés said.
The jerky motions, the uncanny rebounds, the unnatural steadiness in a synthetic face, all of it is being ironed out by rapidly improving models. The list of practical, user-visible indicators is evaporating.

That’s why he rejects the idea that this is a purely technical problem. The core vulnerability isn’t the pixels but the human mind.

“What happens with cybercrime is that it’s a psychological issue… they are pulling emotional triggers, and they are trying to elicit an emotional response,” said Fornés.

He recommends behavioural defences: when a piece of media elicits a strong emotional reaction, especially around politics, unrest, or requests for money or urgent action, pause and verify before acting.
Experts now recommend that families agree on a secret code. When a voice that sounds exactly like a relative’s calls requesting urgent financial help, you ask for the code.

It’s a profoundly dystopian adaptation, one where verifying loved ones requires prearranged passwords, and trust itself has become a liability.

The C2PA Hope

Fornés highlights content provenance as the strongest systemic mitigation available today, pointing to the C2PA standard.
C2PA embeds “Content Credentials” directly into media—cryptographically sealed metadata that records when the asset was created, by whom or what system, and how it has been altered over time.
It is a joint effort that merges Adobe’s Content Authenticity Initiative with Project Origin, the BBC–CBC–New York Times–Microsoft consortium, and now includes major stakeholders across tech and media such as Adobe, Amazon, BBC, Google, Intel, Meta, Microsoft, OpenAI, Publicis Groupe, Sony and Truepic.

Each update produces a new signed entry, and the manifest is bound to the file’s pixel or byte structure. If the media is changed without updating the manifest, verification breaks.
In effect, the file carries a tamper-evident trail of its origin and transformations, allowing any newly generated image or video to be explicitly identifiable.
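To make the mechanism concrete, here is a minimal sketch of the tamper-evident idea, not the actual C2PA format: real Content Credentials use COSE signatures, X.509 certificate chains and JUMBF boxes embedded in the file, whereas this toy version simply hashes the asset’s bytes and signs the result with an Ed25519 key.

```python
# Simplified illustration of the tamper-evident principle behind C2PA manifests.
# NOTE: this is NOT the C2PA wire format; it only shows the core idea of binding
# a signed provenance claim to the asset's bytes.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def create_manifest(asset_bytes: bytes, claim: dict, private_key: Ed25519PrivateKey) -> dict:
    """Bind a provenance claim to the asset by hashing its bytes and signing the result."""
    payload = {"claim": claim, "asset_sha256": hashlib.sha256(asset_bytes).hexdigest()}
    encoded = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": private_key.sign(encoded).hex()}


def verify_manifest(asset_bytes: bytes, manifest: dict, public_key) -> bool:
    """Verification fails if either the asset bytes or the manifest were altered."""
    if hashlib.sha256(asset_bytes).hexdigest() != manifest["payload"]["asset_sha256"]:
        return False  # pixels/bytes changed without updating the manifest
    encoded = json.dumps(manifest["payload"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), encoded)
        return True
    except InvalidSignature:
        return False  # the manifest itself was tampered with


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    photo = b"raw image bytes"
    manifest = create_manifest(photo, {"creator": "camera-model-x", "action": "captured"}, key)
    print(verify_manifest(photo, manifest, key.public_key()))             # True
    print(verify_manifest(photo + b"edit", manifest, key.public_key()))   # False
```

Any change to the bytes or to the manifest payload makes verification fail, which is exactly the property the standard relies on.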

Only the other way around is viable: Attach cryptographic proof that the image I am looking at has been taken with a real camera and not altered.
This is what @C2PA_org is solving, already implemented and supported by many.
Think "source provenance", but for media.

— Steren (@steren) December 1, 2025

Fighting Slop

Recently, researchers at Mila, Québec’s AI institute, released OpenFake, a new dataset for training the next generation of deepfake detection tools.

The problem, they argue, is not that detection tools are failing in isolation, but that the benchmarks they were trained on reflect older generators whose visible flaws have since been engineered away.

“Most of these issues were resolved in the latest models,” the researchers noted, leaving detectors trained on legacy data “almost incapable” of flagging modern deepfakes in the wild.

The OpenFake dataset pairs three million real images with one million synthetic counterparts generated using current state-of-the-art models, including OpenAI’s GPT-Image-1, Google’s Imagen-4, and Flux 1.1 Pro.

“While humans can easily get fooled by visual realism, our method can spot tiny artefacts produced when an AI tool generates or upscales images,” the paper explains.

When Mila researchers retrained a standard SwinV2-based detector on OpenFake and tested it against real social-media content, models trained on older benchmarks failed almost entirely, but the OpenFake-trained detector did not.
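For readers curious what such retraining looks like in practice, the sketch below fine-tunes a SwinV2 backbone as a binary real-versus-fake classifier using the timm library. The model variant, folder layout and hyperparameters here are assumptions for illustration, not the Mila team’s published recipe.

```python
# Hypothetical sketch of fine-tuning a SwinV2 classifier as a real-vs-fake detector.
# The variant, data layout ("data/train/real", "data/train/fake") and hyperparameters
# are illustrative assumptions, not the OpenFake training setup.
import timm
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# SwinV2 backbone with a 2-way head: class 0 = real, class 1 = synthetic.
model = timm.create_model("swinv2_tiny_window8_256", pretrained=True, num_classes=2).to(device)

transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Expects an ImageFolder layout: data/train/real/*.jpg and data/train/fake/*.jpg
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```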

OpenFake Arena, a public platform, allows users to try to fool a live detector. Any image that succeeds is folded back into the training data, thus each failure becomes new supervision.
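In effect, the Arena works as a hard-example mining loop. The snippet below is a hypothetical sketch of that loop; the object and method names are illustrative, not the platform’s actual API.

```python
# Illustrative sketch of the Arena's feedback loop: any submitted synthetic image
# that the live detector misclassifies as real is added back to the training pool
# as a labelled hard example. Names here are hypothetical, not the real API.
def arena_round(detector, submitted_image, training_pool):
    prediction = detector.predict(submitted_image)        # "real" or "fake"
    if prediction == "real":                               # the detector was fooled
        training_pool.append((submitted_image, "fake"))    # failure becomes supervision
    return prediction
```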

The Dilemma

While the capacity for harm is undeniable, the same technology opens up legitimate creative possibilities for millions.
Content creator Prateek Arora, who uses AI to produce animated narratives, anticipates richer storytelling and novel collaboration models.

Talking about the cutting-edge tools being released today, he told AIM, “I think that’ll really open up the creative ecosystem to many kinds of new ideas, new voices and new talent.” This dramatically reduces production barriers for genres that once demanded prohibitive budgets.

The technology, he argued, allows storytelling to detach from the physical presence of the creator, shifting emphasis toward ideas, structure, and narrative ambition rather than on-camera charisma.

Arora, however, cautioned that using this technology comes with a genuine personal responsibility.

“Don’t use it for bad engagement farming. Do it from a place of authenticity. Even if it’s something quick, that’s okay.”
