Gfycat Uses Artificial Intelligence to Fight Deepfake Porn

Credit to Author: Louise Matsakis | Date: Wed, 14 Feb 2018 21:46:26 +0000

Facial recognition and machine learning programs have officially been democratized, and of course the internet is using the tech to make porn. As first reported by Motherboard, people are now creating AI-assisted face-swap porn, often featuring a celebrity's face mapped onto a porn star's body, like Gal Gadot's likeness in a clip where she's supposedly sleeping with her stepbrother. While stopping these so-called deepfakes has challenged Reddit, Pornhub, and other communities, GIF-hosting company Gfycat thinks it's found a better answer.

While most platforms that police deepfakes rely on keyword bans and manual user flagging, Gfycat says it has figured out how to train an artificial intelligence to spot fraudulent videos. The approach builds on a number of tools Gfycat already uses to index the GIFs on its platform, and it demonstrates how technology platforms might fight fake visual content in the future. That battle will likely grow more important as platforms like Snapchat aim to bring crowdsourced video to journalism.

Gfycat, which has at least 200 million daily active users, hopes to take a more comprehensive approach to kicking deepfakes off a platform than Reddit, Pornhub, and Discord have managed so far. Mashable reported on Monday that Pornhub had failed to remove a number of deepfake videos from its site, including some with millions of views. (The videos were deleted after the article was published.) Reddit banned a number of deepfake communities earlier this month, but a handful of related subreddits, like r/DeepFakesRequests and r/deepfaux, remained until WIRED brought them to Reddit's attention in the course of reporting this story.

Those efforts shouldn't be discounted. But they also show how hard it is to moderate a sprawling internet platform manually, especially when it turns out computers might be able to spot deepfakes themselves, no humans required.

Gfycat's AI approach leverages two tools it already developed, both (of course) named after felines: Project Angora and Project Maru. When a user uploads a low-quality GIF of, say, Taylor Swift to Gfycat, Project Angora can search the web for a higher-res version to replace it with. In other words, it can find the same clip of Swift singing "Shake It Off" and upload a nicer version.
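Gfycat hasn't published Angora's internals, but reverse clip lookup like this is commonly built on perceptual hashing, where visually similar frames hash to nearby values even after re-encoding. Here's a minimal sketch of that general technique in Python using the open-source imagehash library; the index format and function names are illustrative assumptions, not Gfycat's actual code.

```python
# A minimal sketch of perceptual-hash clip lookup in the spirit of
# Project Angora. Assumes the open-source `imagehash` and `Pillow`
# libraries; the index here is a hypothetical stand-in.
from PIL import Image
import imagehash

def frame_fingerprints(frame_paths):
    # Perceptual hashes survive re-encoding, rescaling, and compression
    # artifacts, so a low-quality upload hashes close to its hi-res source.
    return [imagehash.phash(Image.open(p)) for p in frame_paths]

def find_higher_res_source(upload_frames, index, max_distance=8):
    # `index` is a hypothetical list of (source_id, frame_hash) pairs
    # covering known high-quality footage. Each uploaded frame "votes"
    # for sources whose hashes sit within a small Hamming distance.
    votes = {}
    for h in frame_fingerprints(upload_frames):
        for source_id, source_hash in index:
            if h - source_hash <= max_distance:  # Hamming distance
                votes[source_id] = votes.get(source_id, 0) + 1
    return max(votes, key=votes.get) if votes else None
```

At Gfycat's scale, a lookup like this would presumably hit an approximate-nearest-neighbor index rather than a linear scan, but the matching principle is the same.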

Now let’s say you don’t tag your clip “Taylor Swift.” Not a problem. Project Maru can purportedly differentiate between individual faces and will automatically tag the GIF with Swift’s name. This makes sense from Gfycat’s perspective—it wants to index the millions of clips users upload to the platform monthly.
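Gfycat hasn't described Maru's recognizer, but face tagging of this kind is typically done by comparing face embeddings against a gallery of known people. A rough sketch of that standard approach, using the open-source face_recognition library; the gallery and threshold are assumptions for illustration.

```python
# Sketch of embedding-based face tagging in the spirit of Project Maru,
# using the open-source `face_recognition` library. The celebrity
# gallery (a dict of name -> encoding) and threshold are illustrative.
import face_recognition

def tag_face(frame_path, celebrity_gallery, threshold=0.6):
    # Return the best-matching celebrity name for the first face found
    # in a frame, or None if nothing in the gallery is close enough.
    image = face_recognition.load_image_file(frame_path)
    encodings = face_recognition.face_encodings(image)  # 128-d vectors
    if not encodings:
        return None
    names = list(celebrity_gallery)
    distances = face_recognition.face_distance(
        [celebrity_gallery[n] for n in names], encodings[0])
    best = int(distances.argmin())
    return names[best] if distances[best] <= threshold else None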

Here's where deepfakes come in. Created by amateurs, most deepfakes aren't entirely believable. If you look closely, the frames don't quite match up; in one widely shared clip, Donald Trump's face doesn't completely cover Angela Merkel's throughout. Your brain does some of the work, filling in the gaps where the technology failed to turn one person's face into another.

Project Maru is not nearly as forgiving as the human brain. When Gfycat's engineers ran deepfakes through the tool, it would register that a clip resembled, say, Nicolas Cage, but not confidently enough to issue a positive match, because the face wasn't rendered perfectly in every frame. That partial resemblance is one way Gfycat can spot a deepfake: Maru smells a rat when a GIF only partly resembles a celebrity.
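That "almost but not quite" signal can be made concrete by extending the sketch above: a genuine clip of a celebrity should match tightly in every frame, while a face swap drifts in and out of recognizability. The thresholds below are invented for illustration and are not Gfycat's.

```python
# Hypothetical sketch of the partial-resemblance signal: flag a clip
# that looks like the celebrity overall but never matches cleanly, or
# whose match quality wobbles from frame to frame.
import numpy as np
import face_recognition

def deepfake_suspicion(frame_paths, celebrity_encoding):
    distances = []
    for path in frame_paths:
        faces = face_recognition.face_encodings(
            face_recognition.load_image_file(path))
        if faces:
            distances.append(float(face_recognition.face_distance(
                [celebrity_encoding], faces[0])[0]))
    if not distances:
        return False
    d = np.array(distances)
    near_miss = 0.45 < d.mean() < 0.75   # resembles, but no clean match
    inconsistent = d.std() > 0.10        # quality varies across frames
    return near_miss or inconsistent
```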

Maru likely can't stop all deepfakes alone, and it might have even more trouble as they become more sophisticated. And sometimes a deepfake features not a celebrity's face but a civilian's, even someone the creator personally knows. To combat that variety, Gfycat developed a masking technology that works similarly to Project Angora.

If Gfycat suspects that a video has been altered to feature someone else's face (say, if Maru failed to return a confident match for Taylor Swift), the company can "mask" the victim's mug and then search to see whether the body and background footage exist somewhere else. For a video that places someone else's face on Trump's body, for example, the AI could search the internet and turn up the original State of the Union footage it borrowed from. If the faces don't match between the new GIF and the source, the AI can conclude that the video has been altered.
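Again, Gfycat hasn't published details, but the masking idea can be sketched by combining the two earlier techniques: black out the face, fingerprint what remains to find candidate source footage, then compare faces between the upload and the source. Everything below is an illustrative assumption, not the company's implementation.

```python
# Sketch of the masking idea: hash frames with the face blacked out so
# the lookup keys on body and background, then compare the faces of the
# upload and its apparent source. All names here are hypothetical.
from PIL import Image, ImageDraw
import imagehash
import face_recognition

def masked_fingerprint(frame_path):
    # Black out every detected face region before hashing.
    image = face_recognition.load_image_file(frame_path)
    frame = Image.open(frame_path).convert("RGB")
    draw = ImageDraw.Draw(frame)
    for top, right, bottom, left in face_recognition.face_locations(image):
        draw.rectangle([left, top, right, bottom], fill="black")
    return imagehash.phash(frame)

def looks_altered(upload_frame, source_frame):
    # Same body and background, different face: likely a swap.
    backgrounds_match = (masked_fingerprint(upload_frame)
                         - masked_fingerprint(source_frame)) <= 8
    up = face_recognition.face_encodings(
        face_recognition.load_image_file(upload_frame))
    src = face_recognition.face_encodings(
        face_recognition.load_image_file(source_frame))
    faces_match = (bool(up) and bool(src) and
                   face_recognition.face_distance([src[0]], up[0])[0] <= 0.6)
    return backgrounds_match and not faces_match
```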

Gfycat plans to use its masking tech to block out more than just faces in an effort to detect different types of fake content, like fraudulent weather or science videos. “Gfycat has always relied heavily on AI for categorizing, managing, and moderating content. The accelerating pace of innovation in AI has the potential to dramatically change our world, and we'll continue to adapt our technology to these new developments,” Gfycat CEO Richard Rabbat said in a statement.

Gfycat’s technology won’t work in at least one deepfake scenario: a face and body that don't exist elsewhere online. For example, someone could film a sex tape with two people, and then swap in someone else's face. If no one involved is famous and the footage isn't available elsewhere online, it would be impossible for Maru or Angora to find out whether the content had been altered.

For now that seems like a fairly unlikely scenario, since making a deepfake requires access to a large corpus of videos and photos of someone. But it's also not hard to imagine a former romantic partner drawing on private videos of a victim, stored on their phone and never made public.

And even for deepfakes that feature a porn star or celebrity, sometimes the AI isn't sure what's happening, which is why Gfycat employs human moderators to help. The company also uses other metadata—like where it was shared or who uploaded it—to determine whether a clip is a deepfake.

Also, not all deepfakes are malicious. As the Electronic Frontier Foundation pointed out in a blog post, examples like the Merkel/Trump mashup mentioned above are merely political commentary or satire. There are also other legitimate uses for the tech, like anonymizing someone who needs identity protection or creating consensually altered pornography.

Still, it's easy to see why so many people find deepfakes distressing. They represent the beginning of a future where it's impossible to tell whether a video is real or fake, which could have wide-ranging implications for propaganda and more. Russia flooded Twitter with bots during the 2016 presidential election campaign; come the 2020 election, perhaps it will do the same with fraudulent videos of the candidates themselves.

While Gfycat offers a potential solution for now, it may be only a matter of time until deepfake creators learn how to circumvent its safeguards. The ensuing arms race could take years to play out.

"We're decades away from having forensic technology that you can unleash on a Pornhub or a Reddit and conclusively tell a real from a fake," says Hany Farid, a computer science professor at Dartmouth College who specializes in digital forensics, image analysis, and human perception. "If you really want to fool the system you will start building into the deepfake ways to break the forensic system."

The trick is to install a number of different protocols designed to detect fraudulent imagery, so that it becomes extremely difficult to create a deepfake that can trip up all the safeguards in place. "I can't stop you from creating fakes, but I can make it really hard and really time-consuming," Farid says.
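In code terms, Farid's defense-in-depth amounts to something like the sketch below: run every independent detector available and flag a clip if any of them trips, so that a convincing fake has to beat all of the checks at once. The detector names are hypothetical placeholders.

```python
# Illustrative defense-in-depth: a clip is flagged if any independent
# check trips, raising the cost of a fake that evades them all.
from typing import Callable, Iterable

def flag_clip(clip_path: str,
              detectors: Iterable[Callable[[str], bool]]) -> bool:
    # Each detector is a specialized check: per-frame face consistency,
    # masked source lookup, upload-metadata anomalies, and so on.
    return any(detector(clip_path) for detector in detectors)

# Usage sketch (hypothetical checks):
# flag_clip("clip.gif", [face_consistency_check, source_lookup_check,
#                        metadata_anomaly_check])
```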

For now, among the platforms that have banned deepfakes, Gfycat appears to be the only one using artificial intelligence to enforce that ban. Both Pornhub and Discord told me they weren't using AI to spot deepfakes. Reddit declined to say whether it was; a spokesperson said the company didn't want to disclose exactly how it moderates its platform, because doing so could embolden bad actors to try to thwart those efforts. Twitter didn't immediately respond to a request for comment.

Millions of videos are uploaded to the web each day; by one widely cited estimate, some 300 hours of video are published to YouTube every minute. We're going to need more than just people pointing out when something isn't real; we'll likely need computers, too.
