Facebook, Microsoft launch Deepfake Detection Challenge

Facebook is putting $10 million towards the "Deepfake Detection Challenge," in which commissioned researchers will be asked to produce realistic deepfakes and create a data set for testing detection tools.

Social media behemoth Facebook Inc., along with Microsoft, the Massachusetts Institute of Technology (MIT) and other institutions, has teamed up to fight 'deepfakes', the company said in a blog post on Thursday.

Deepfakes are realistic AI-generated videos of real people doing and saying fictional things, like the forged videos of Facebook CEO Mark Zuckerberg and US House Speaker Nancy Pelosi that went viral recently.

“They (deepfakes) lower the bar for an adversary that wants to create manipulated media,” said Matt Turek, who runs DARPA’s Media Forensics program.

“The goal of this competition is to build AI systems that can detect the slight imperfections in a doctored image and expose its fraudulent representation of reality,” added Antonio Torralba, Director of the MIT Quest for Intelligence.

The company said the data set will be released this December, will feature paid actors and will not draw on any user data. The new contest builds on Facebook's academic ties and will involve researchers from Cornell Tech, MIT, the University of Oxford, UC Berkeley, the University of Maryland, College Park, and the University at Albany-SUNY.

“No Facebook user data will be used in this data set. We are also funding research collaborations and prizes for the challenge to help encourage more participation. In total, we are dedicating more than $10 million to fund this industry-wide effort,” Facebook Chief Technology Officer Mike Schroepfer said in a statement.

Professor Rama Chellappa of the University of Maryland commented, “Given the recent developments in being able to generate manipulated information (text, images, videos, and audio) at scale, we need the full involvement of the research community in an open environment to develop methods and systems that can detect and mitigate the ill effects of manipulated multimedia.”

(With input from agencies)