At 19, Gregory Tarr’s new techniques for identifying deepfakes won him BT Young Scientist of the Year 2021. In our latest video in the Young Bright Minds series, Tarr explains how he’s overcome some of the challenges of spotting this AI-created media at scale.

What are deepfakes?

A deepfake is any media (usually video) in which one person's voice or face is mapped onto another's using AI-based software. They're often meant to be funny or satirical, like placing Donald Trump in the criminal underworld TV series Breaking Bad, or critiquing Facebook's data collection in a message that appears to come from the top of the company itself.

But some deepfakes are less obvious. They can spread fake news or otherwise fool people into thinking someone said or did something they didn’t.

Finding deepfakes in a heartbeat

Tarr radically improved existing deepfake-detection processes. "I was able to speed things up ten times," he says.

The deepfake detection method is fascinating. Tarr explains: "Photoplethysmography means graphing the light of the blood. Every time your face receives a pulse of blood, green and red hues change slightly. You can track that over time in a video." Because AI-generated faces typically fail to reproduce this subtle pulse signal faithfully, its absence or irregularity can help flag a video as fake.
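To make the idea concrete, here is a minimal sketch, not Tarr's actual pipeline, of how a photoplethysmography-style signal might be pulled from a face video and checked for a plausible heartbeat. It assumes OpenCV and NumPy are available, uses a crude central crop in place of real face tracking, and the function names (extract_ppg_signal, has_plausible_pulse) are illustrative only.

```python
# Sketch: extract a pulse-like signal from a face video and check its frequency.
# Assumes the face stays roughly centred; a real system would track facial
# landmarks and sample specific skin regions instead.
import cv2
import numpy as np

def extract_ppg_signal(video_path: str) -> np.ndarray:
    """Return the mean green-channel intensity of a central patch, per frame."""
    cap = cv2.VideoCapture(video_path)
    signal = []
    while True:
        ok, frame = cap.read()  # frame is BGR, shape (H, W, 3)
        if not ok:
            break
        h, w = frame.shape[:2]
        # Crude "face" region: the central quarter of the frame.
        patch = frame[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
        signal.append(patch[:, :, 1].mean())  # green channel (index 1 in BGR)
    cap.release()
    return np.asarray(signal)

def has_plausible_pulse(signal: np.ndarray, fps: float = 30.0) -> bool:
    """Check whether the dominant frequency sits in a human heart-rate band (0.7-4 Hz)."""
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return 0.7 <= dominant <= 4.0
```

A production system would do far more, such as tracking the face frame by frame, sampling several skin regions, and feeding the resulting signals into a trained classifier, but the core observation is the same: real skin pulses at a heart rate, and many fakes don't.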

Scaling is the hardest part

“Many companies trying to detect these deepfakes have built models that work in lab environments,” says Tarr. “But because of the sheer size of the problem – hundreds of millions of videos – having the infrastructure and the computing power is a harder problem.”

Tarr is the founder and CEO of Inferex. His company aims to take other organisations' deepfake-detection models and deploy them across thousands of computers.

Tech no substitute for awareness

Tarr warns that technological solutions will only go so far in fighting fakes – we need to change how we think about what we see and read. “The only solution is that people wisen up. We need to be more aware that things we’re seeing or reading may or may not be true.”

For more videos in the Young Bright Minds series, subscribe to Tomorrow Unlocked on YouTube or follow us on Instagram.

Could you be fooled by a deepfake?