Facebook Researchers Say They Can Detect Deepfakes And Where They Came From

This image, made from a fake video featuring former President Barack Obama, shows elements of the facial mapping used for deepfakes, which let anyone make videos of real people appearing to say things they've never said.
AP

Facebook researchers say they've developed artificial intelligence that can identify so-called "deepfakes" and trace their origin using reverse engineering.

Deepfakes are altered photos and videos that use artificial intelligence to appear like the real thing. They've become increasingly realistic in recent years, making it harder to distinguish the real from the fake with the naked eye.

Advances in deepfake production technology have concerned experts, who warn that these fake images can be used by malicious actors to spread misinformation.

Deepfake videos using the likenesses of Tom Cruise, former President Barack Obama, and House Speaker Nancy Pelosi have gone viral, showing how far the technology has developed over time.

"Our method will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information detectors have to work with," research scientists for Facebook Xi Yin and Tal Hassner wrote Wednesday.

The work was done in conjunction with Michigan State University.

Facebook's new software runs deepfake images through its network. The AI program looks for cracks left behind in the manufacturing process that change an image's digital "fingerprint."

"In digital photography, fingerprints are used to identify the digital camera used to produce an image," the researchers explained. Those fingerprints are also unique patterns "that can equally be used to identify the generative model that the image came from."

The researchers see this program as having real-world applications. Their work will give others "tools to better investigate incidents of coordinated disinformation using deepfakes, as well as open up new directions for future research."

Copyright 2021 NPR. To see more, visit https://www.npr.org.
