This AI Algorithm Can Detect Deepfake Videos with 98% Accuracy
Researchers have developed an AI algorithm that can spot a deepfake video with 98% accuracy.
In a paper presented earlier this month, researchers from the Multimedia and Information Security Lab (MISL) in Drexel’s College of Engineering explained that they had created the “MISLnet” algorithm, which can detect the telltale signs of deepfakes and other manipulated media with remarkable accuracy.
The team trained the machine learning algorithm to extract and recognize the digital “fingerprints” left behind by many different video generators, such as Stable Video Diffusion, VideoCrafter, and CogVideo.
Additionally, the researchers showed that the algorithm can learn to detect a new AI generator after studying just a few examples of its output.
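The article doesn’t describe how that adaptation works, but a common way to achieve this kind of few-shot transfer is to freeze a pretrained detector’s feature extractor and retrain only its small classification head on the handful of new clips. The PyTorch sketch below illustrates that general recipe under stated assumptions; the attribute name `detector.classifier` and the tensor shapes are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

def adapt_to_new_generator(detector: nn.Module,
                           clips: torch.Tensor,    # (N, C, H, W) -- a few labeled examples
                           labels: torch.Tensor,   # (N,) long tensor: 1 = new generator, 0 = real
                           epochs: int = 20) -> nn.Module:
    """Few-shot adaptation sketch: keep the forensic features learned from
    known generators fixed and retrain only the final classifier on a
    handful of examples from an unseen generator. Hypothetical recipe --
    not the procedure described in the MISL paper."""
    for p in detector.parameters():
        p.requires_grad = False                    # freeze learned "fingerprint" features
    for p in detector.classifier.parameters():     # assumes the model exposes a final head
        p.requires_grad = True
    optimizer = torch.optim.Adam(detector.classifier.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    detector.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(detector(clips), labels)    # logits over {real, new generator}
        loss.backward()
        optimizer.step()
    return detector
```

Because only the small head is updated, a handful of labeled examples can be enough to separate a new generator’s traces from real footage, which is consistent with the few-example learning the researchers describe.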
The Difficulty of Detecting a Deepfake
According to LiveScience, the MISLnet algorithm marks a significant milestone in the detection of fake images and video content. That’s because many of the “digital breadcrumbs” that existing systems look for in conventionally edited media are simply not present in entirely AI-generated media.
“When you make an image, the physical and algorithmic processing in your camera introduces relationships between various pixel values that are very different than the pixel values if you photoshop or AI-generate an image,” Matthew Stamm, PhD, an associate professor in Drexel’s College of Engineering and director of the MISL, says in a statement.
“But recently we’ve seen text-to-video generators, like Sora, that can make some pretty impressive videos. And those pose a completely new challenge because they have not been produced by a camera or Photoshopped.”
“Until now, forensic detection programs have been effective against edited videos by simply treating them as a series of images and applying the same detection process,” Stamm adds.
“But with AI-generated video, there is no evidence of image manipulation frame-to-frame, so for a detection program to be effective it will need to be able to identify new traces left behind by the way generative AI programs construct their videos.”
Because AI-generated videos aren’t produced by a camera capturing a real scene, they don’t contain those telltale disparities between neighboring pixel values.
However, LiveScience reports that the team’s new MISLnet algorithm was trained using a method called a constrained neural network, which can differentiate between normal and anomalous values at the sub-pixel level of an image or video clip rather than searching for the common indicators of image manipulation.
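Stamm’s lab has previously published work on constrained convolutional layers for media forensics, and the core idea can be shown compactly: the network’s first layer is forced to act as a set of prediction-error filters that suppress scene content and keep only the relationships between a pixel and its neighbors. The PyTorch sketch below is a minimal illustration of that constraint; the kernel size, channel counts, and everything outside this layer are assumptions, not the paper’s architecture.

```python
import torch
import torch.nn as nn

class ConstrainedConv2d(nn.Conv2d):
    """Convolutional layer re-projected after every weight update so each
    filter behaves as a prediction-error filter: the center tap is fixed at
    -1 and the remaining taps sum to 1. Its output is the difference between
    each pixel and a learned prediction from its neighbors -- the sub-pixel
    residual traces described above. (Minimal sketch of the constrained-
    convolution idea; not the full MISLnet architecture.)"""

    def constrain(self):
        with torch.no_grad():
            w = self.weight                               # (out_ch, in_ch, k, k)
            c = w.shape[-1] // 2                          # index of the center tap
            w[:, :, c, c] = 0.0                           # exclude center from the sum
            w /= w.sum(dim=(2, 3), keepdim=True) + 1e-8   # neighbor taps sum to 1
            w[:, :, c, c] = -1.0                          # pin the center tap to -1


# Usage: apply the constraint at construction and again after every optimizer step.
layer = ConstrainedConv2d(3, 3, kernel_size=5, padding=2, bias=False)
layer.constrain()
residual_map = layer(torch.randn(1, 3, 64, 64))  # prediction-error feature map
```

Filters of this form strip away the visible scene and pass along only the residual, letting the rest of the network learn the statistical differences between camera-captured pixels and AI-generated ones.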
The MISLnet algorithm correctly detected AI-generated videos 98.3% of the time, beating eight other systems built by the research team, each of which scored at least 93%.
The research team has worked on flagging digitally manipulated images and videos for over a decade, but the group has been particularly busy in the last year as editing technology is increasingly used to spread political misinformation.
“It’s more than a bit unnerving that [AI-generated video] could be released before there is a good system for detecting fakes created by bad actors,” Stamm says.