On Tuesday, Google released a deepfake database – 3,000 AI-generated videos – to accelerate the development of deepfake detection tools.
Generative algorithms have advanced so rapidly in the last few years that the resulting videos are becoming indistinguishable from reality.
Predictably, this has triggered a race to develop better deepfake detection methods, and Google has just made its contribution to the effort.
The search giant teamed up with Jigsaw, its own technology incubator, to release a large dataset of visual deepfakes.
Google recorded 28 actors performing common expressions, carrying out mundane tasks, and speaking. It then used a publicly available deepfake algorithm to alter the actors’ faces.
The tech giant then incorporated the AI-generated videos into the new FaceForensics benchmark, run by the Technical University of Munich and the University Federico II of Naples.
In their recent blog post, Nick Dufour from Google Research and Andrew Gully from Jigsaw wrote:
“The resulting videos, real and fake, comprise our contribution, which we created to directly support deepfake detection efforts. As part of the FaceForensics benchmark, this dataset is now available, free to the research community, for use in developing synthetic video detection methods.”
Deepfake Database As A Way to Accelerate Detection Tools
Like Google, Facebook also intends to release a deepfake database. However, the social media giant has said its dataset won’t arrive until the end of the year.
Similarly, an academic team from the Technical University of Munich performed four standard face manipulation methods on about 1,000 YouTube videos. The result is another database which they’re calling FaceForensics++.
All the datasets outlined above share a similar goal: to create an extensive collection that could help train and test automated detection. In other words, they’re all trying to accelerate and improve the development of deepfake detection tools.
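In practice, “automated detection” here usually means training a binary classifier to label footage as real or fake, then measuring it on held-out examples. The toy sketch below is purely illustrative: the single “artifact score” feature, the synthetic data, and the threshold detector are invented for demonstration and are not part of Google’s or FaceForensics’ actual releases. It only shows the train-then-test workflow these datasets are meant to enable:

```python
import numpy as np

# Illustrative sketch only: the feature, distributions, and threshold below
# are hypothetical, not anything from the real FaceForensics dataset.
rng = np.random.default_rng(0)

# Pretend each video is summarized by a single "artifact score":
# real footage tends to score low, deepfakes tend to score high.
real_train = rng.normal(0.3, 0.1, 500)
fake_train = rng.normal(0.7, 0.1, 500)

# "Train" the simplest possible detector: a threshold halfway between
# the class means observed in the training set.
threshold = (real_train.mean() + fake_train.mean()) / 2

# Evaluate on held-out samples drawn the same way.
real_test = rng.normal(0.3, 0.1, 200)
fake_test = rng.normal(0.7, 0.1, 200)
correct = (real_test < threshold).sum() + (fake_test >= threshold).sum()
accuracy = correct / 400
print(f"held-out accuracy: {accuracy:.2f}")
```

Real detectors replace the single hand-picked score with features learned by a neural network, but the train/test split against a labeled real-vs-fake corpus is exactly what these public datasets provide.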
There’s just one big problem.
When developers successfully create a detection method that exploits a flaw in a specific generation algorithm, the generative algorithm can easily be updated to correct for it. Then, we’ll end up right back at the beginning.