
Google Releases An Open-Source Deepfake Database

Garuna Liu / Shutterstock.com

On Tuesday, Google released a deepfake database – 3,000 AI-generated videos – to accelerate the development of deepfake detection tools.

Generative algorithms have advanced rapidly in the last few years, so it’s unsurprising that the resulting videos are becoming indistinguishable from reality.

Understandably, this has triggered a race to develop better deepfake detection methods, and Google has just made its contribution to this effort.

The search engine giant collaborated with Jigsaw, its technology incubator, to release a large dataset of visual deepfakes.

Google recorded 28 actors performing common expressions, completing mundane tasks, and speaking. It then used a publicly available deepfake algorithm to alter the actors’ faces.

The tech giant then incorporated the AI-generated videos into the new FaceForensics benchmark from the Technical University of Munich and the University of Naples Federico II.

In their recent blog post, Nick Dufour from Google Research and Andrew Gully from Jigsaw wrote:

“The resulting videos, real and fake, comprise our contribution, which we created to directly support deepfake detection efforts. As part of the FaceForensics benchmark, this dataset is now available, free to the research community, for use in developing synthetic video detection methods.”

Deepfake Databases as a Way to Accelerate Detection Tools

Like Google, Facebook also intends to release a deepfake database. However, the social media giant announced that it would arrive at the end of the year.

Similarly, an academic team from the Technical University of Munich performed four standard face manipulation methods on about 1,000 YouTube videos. The result is another database which they’re calling FaceForensics++.

All the datasets outlined above share a similar goal: to create an extensive collection that could help train and test automated detection. In other words, they’re all trying to accelerate and improve the development of deepfake detection tools.

There’s just one big problem.

When developers successfully create a detection method that exploits a flaw in a specific generation algorithm, that algorithm can easily be updated to correct for it. Then, we’ll end up right back at the beginning.

Read More: Researchers Develop an AI-Watermarking Technique to Spot Deepfakes



Sumbo Bello

Sumbo Bello is a creative writer who enjoys creating data-driven content for news sites. In his spare time, he plays basketball and listens to Coldplay.
