A new face-swap app ran provocative ads featuring celebrity faces, illustrating the dangers of non-consensual deepfakes and the struggle to moderate them

Earlier this week, an app for creating “deepfake face-swap” videos rolled out more than 230 ads on Meta’s platforms, including Facebook, Instagram, and Messenger—127 of which showed Emma Watson’s face transposed onto provocative videos, and another 74 featuring the likeness of fellow actor Scarlett Johansson. None were created with the subjects’ consent.

Following an investigation by NBC, the ads were removed from Meta’s platforms. But the app, FaceMega, is still available to download on Google Play, and as one of many players in the rapidly expanding market for deepfake porn, its ability to stage a massive ad campaign on mainstream social media indicates that non-consensual content is likely to keep slipping through the cracks.

The controversy is one among many: Last month, Twitch streamer Brandon “Atrioc” Ewing came under fire for watching sexually explicit deepfakes of his female streaming peers. (Yesterday, Twitch finally took a stand against this sort of content, stating that deepfakes aren’t welcome on the platform—albeit nearly a month after the story broke.) Many of the women featured on that particular deepfake porn site only learned they were the subjects of graphic videos after Atrioc, caught red-handed during a livestream, issued a tearful public apology for watching them. But rather than mending the situation, his viral apology inadvertently directed more people to the site, triggering a traffic explosion and further proliferating non-consensual deepfakes of female streamers, including Pokimane, QTCinderella, and Sweet Anita.

Although the technology has advanced rapidly since its introduction, few laws protect victims of non-consensual deepfakes, in part because legislative measures to hold platforms accountable could conflict with free-speech protections. For instance, limiting the protections provided by Section 230 of the Communications Decency Act—the law that shields online platforms from liability for user-created content—would likely push platforms toward more aggressive content moderation policies to avoid potential liability, stifling free speech online. “Twitter, Instagram, Tumblr, TikTok—they don’t have the ability to effectively moderate content at that scale,” says sex worker and author Liara Roux, explaining that the resulting false positives would have the downstream effect of curbing the right to speak freely about sex and politics on social media.

While some would encourage us to accept those false positives in the name of preventing harassment, the consequences of such a change would likely fall disproportionately on sex workers and other marginalized groups, as they did after the passage of the 2018 bill FOSTA-SESTA—which, by holding platforms accountable for the sexual content they host, prompted a broader crackdown and removed much-needed resources that consensual sex workers used to advertise their services online. Years later, it’s unclear whether the bill has done much to curb the sex trafficking it purported to target—but it has certainly affected the safety of precariously employed workers in the sex industry, many of whom relied on those platforms to safely vet their clients.

Sex workers and civilians alike are now finding themselves between a rock and a hard place, with everyday people being passed off as pornstars against their will, and pornstars’ images being used non-consensually to harass everyday people. The rapid development of deepfake technology—and widening public access to it—is making these videos harder and harder to combat, in part because the same methods used to detect deepfakes can be turned around to train better generators, leading to a never-ending arms race between the two. For example, when a researcher developed an algorithm that flagged deepfakes by their inconsistent blinking, the published findings were swiftly repurposed to build a new algorithm that simulates realistic blinking—making deepfake generation even more sophisticated, and effectively rendering the detection method obsolete.
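
That feedback loop is essentially the adversarial setup used to train GANs. The sketch below, in PyTorch, is a hypothetical illustration rather than any real face-swap system: a frozen stand-in detector is folded into a generator’s loss, so the generator learns to produce frames the detector no longer flags. The architectures, sizes, and hyperparameters are assumptions made for the example.

```python
# Toy illustration of the detection/generation arms race (PyTorch).
# A frozen detector stands in for any published deepfake detector; its
# "fake" score becomes the generator's loss, so the generator is trained
# to evade it. Nothing here reflects a real detector or face-swap model.
import torch
import torch.nn as nn

class Detector(nn.Module):
    """Stand-in for a published detector (e.g., one scoring blink artifacts)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))

    def forward(self, frames):
        return torch.sigmoid(self.net(frames))  # probability that a frame is fake

class Generator(nn.Module):
    """Stand-in for a face-swap generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(128, 3 * 64 * 64)

    def forward(self, z):
        return torch.tanh(self.net(z)).view(-1, 3, 64, 64)

detector = Detector().eval()
for p in detector.parameters():          # freeze the detector: it is only a signal
    p.requires_grad_(False)

generator = Generator()
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)

for step in range(100):
    z = torch.randn(16, 128)
    fakes = generator(z)
    loss = detector(fakes).mean()        # detector's "fake" score on generated frames
    optimizer.zero_grad()
    loss.backward()                      # gradients flow through the frozen detector...
    optimizer.step()                     # ...and teach the generator to evade it
```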

Faced with this problem, companies like Microsoft and Adobe have taken a different approach to fighting misinformation: Instead of detecting what’s false, they’re creating new features that can verify what’s true by revealing where a photograph or video was taken, by whom, and which edits were made. They hope these Content Credentials will someday appear on every photo and video; some 900 companies have already agreed to integrate the feature. Though it may not prevent the production of deepfakes, its creators hope the venture could curb some of their negative effects.
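
To make the idea concrete, here is a toy sketch in the spirit of Content Credentials, written in Python with the cryptography library: a capture or editing tool signs a record of who made the file and which edits were applied, and anyone holding the public key can later check that the record still matches the pixels. The field names and helper functions are invented for illustration; this is not the actual C2PA format or any vendor’s API.

```python
# Toy provenance credential: sign a record of author, edits, and image hash,
# then verify that the record and the pixels still agree. Illustrative only.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_credential(image_bytes, author, edits, private_key):
    record = {
        "author": author,
        "edits": edits,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": private_key.sign(payload).hex()}

def verify_credential(image_bytes, credential, public_key):
    record = credential["record"]
    if record["image_sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False                      # pixels no longer match the signed record
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except InvalidSignature:
        return False                      # record was forged or altered after signing

key = Ed25519PrivateKey.generate()
image = b"example image bytes"
cred = make_credential(image, "Example Photographer", ["crop", "exposure +0.3"], key)
print(verify_credential(image, cred, key.public_key()))              # True
print(verify_credential(image + b"tamper", cred, key.public_key()))  # False
```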

Others believe that blockchain could serve as a valuable tool in the fight against disinformation, providing greater transparency into the lifecycle of a given image or video: a decentralized digital ledger tracks, records, and verifies information, making it difficult to alter digital content without leaving a trace. The New York Times, for example, is currently exploring blockchain’s potential to track the sources and edits of imagery, while arts-oriented tools like Glaze and Spawning are working to create new infrastructure around consent and intellectual property rights, giving creators the opportunity to opt out of having their images used to train popular AI generators.
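
The tamper-evidence such a ledger offers comes from hash chaining: each entry commits to the hash of the previous one, so quietly rewriting any past edit breaks every later hash. The toy Python version below shows only that property; a real blockchain adds distribution and consensus on top, and nothing here reflects the Times’ actual system.

```python
# Minimal hash-chained edit log for an image. Each entry stores the hash of
# the previous entry, so retroactive changes are detectable. Illustrative only.
import hashlib
import json

def add_entry(chain, action, image_bytes):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "action": action,                 # e.g., "captured", "cropped"
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return chain + [entry]

def verify_chain(chain):
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

chain = add_entry([], "captured", b"original pixels")
chain = add_entry(chain, "cropped", b"cropped pixels")
print(verify_chain(chain))         # True
chain[0]["action"] = "generated"   # try to rewrite history...
print(verify_chain(chain))         # False: the tampering is detectable
```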

According to research conducted by livestreaming analyst Genevieve Oh, February 2023 saw more uploads of deepfake porn videos than any previous month—suggesting that controversy around the medium may only boost interest in it, much as Atrioc’s apology did. With public access to generative AI continuing to grow, it’s clear that there’s a need for new tools to protect everyday people from non-consensual usage of their likenesses—and, with the number of deepfakes doubling roughly every six months, they can’t come fast enough.
