U.S. Defense Department developing tools to catch deepfakes


The U.S. Defense Department is already preparing for the fight against deepfakes: fake audio and video created by artificial intelligence that burst into the mainstream last year thanks to sites like Reddit.

According to MIT Technology Review, the development of technology to catch deepfakes is already underway. Through the Media Forensics program run by the U.S. Defense Advanced Research Projects Agency (DARPA), researchers have built some of the first tools to expose these fake AI creations. The Media Forensics program was originally set up to automate existing forensic tools, but its mission shifted as concern grew over the rise of deepfakes. The program's deepfake mission was announced earlier this year.

In 2017, Reddit users began using what amounts to extremely convincing face-swap technology to insert actor Nicolas Cage into movies he was never in. The technology was also used to insert female Hollywood celebrities into pornographic video clips. After the technology found its way into the daily news cycle and outrage grew online, some websites banned deepfakes from their platforms.


However, deepfake creators kept refining the technology, making the AI-generated imagery ever more realistic. Earlier this year, an app called FakeApp was released, making deepfakes even easier to produce. Concern over the technology quickly turned to its possible use in domestic abuse cases, such as fabricating revenge porn, and in creating fake news. In April, BuzzFeed produced an Obama deepfake voiced by Jordan Peele that showed just how realistic these fake videos were becoming.

Fast forward to today, when the Defense Department and others are developing tools to combat deepfakes. One such tool comes from Professor Siwei Lyu of SUNY Albany and his students. Because face-swapping models are trained largely on still images, which rarely show a person with their eyes closed, Lyu noticed that the faces in deepfake videos hardly ever blink. That opens an avenue of detection, at least for now.
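To give a rough sense of how a blink-based check could work, here is a minimal sketch, not Lyu's or DARPA's actual tool. It assumes some face-landmark detector (dlib and MediaPipe are common choices) has already produced six landmarks per eye for each frame; the eye-aspect-ratio formula is the standard one from the blink-detection literature, and the thresholds are purely illustrative.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR): eye height relative to eye width.
    `eye` is a (6, 2) array of landmarks ordered around the eye;
    the value drops sharply toward zero when the eye closes."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_rate(ear_per_frame, fps, closed_thresh=0.2, min_closed_frames=2):
    """Count a blink for each run of consecutive frames where the EAR
    stays below `closed_thresh`, then return blinks per minute."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    if run >= min_closed_frames:
        blinks += 1
    minutes = len(ear_per_frame) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

def looks_suspicious(ear_per_frame, fps, min_blinks_per_min=10):
    """Flag a clip whose subject blinks far less often than people
    normally do (roughly 15 to 20 times a minute); the cutoff here
    is an illustrative assumption, not a published threshold."""
    return blink_rate(ear_per_frame, fps) < min_blinks_per_min
```

In practice, a real detector would combine a signal like this with many others, since blink frequency alone is easy to fake once creators know it is being measured.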

Additional tools being developed as part of the DARPA program aim to catch other deepfake tells, such as strange or unnatural head and body movements. And while Lyu admits that an experienced deepfake creator or video editor can get around a blink-based detector, more sophisticated detection techniques are in the works.

With artificial intelligence becoming more and more advanced in general, it’s clear the deepfake battle will be an arms race between the fake video makers and those looking to unmask the face-swapped truth.

