The platform has come under fire for banning everything from educational content to videos about racial injustice. Now, the company is putting a new “strike” system in place

What do “accountants,” influencers, and sex educators have in common?

They’re all worried about being banned from TikTok. That’s why, following complaints from creators about its hard-to-navigate rules, the company is introducing a revamped account enforcement system: a strike policy intended to remove repeat offenders who have a “disproportionate” effect on the platform by frequently posting content that violates its policies. But if you ask the everyday TikTok users whose content has been flagged for removal, it’s not just egregious violations that can get a video taken down—so can normal-sounding words, phrases, and even hand gestures forbidden by the platform’s content moderation algorithm.

TikTok’s new strike policy is similar to those used by platforms like YouTube and Meta, and is being marketed as a bid to increase transparency around why users’ content gets removed—but without changing the underlying censorship policies, will this new account enforcement system really create a more equitable environment, or just give users the tools they need to dodge its content moderation algorithm?

The app’s famously scattershot moderation has already led users to devise myriad workarounds. Social media censorship has given rise to a new online lexicon called “algospeak,” in which creators swap censored words for often-nonsensical substitutes—the same trend that landed Julia Fox in hot water earlier this week, when she took one TikToker’s use of the word “mascara” at face value; he was actually referencing sexual assault in terms that wouldn’t be flagged by the app. Communities have also taken to renaming themselves to avoid algorithmic suppression, such as the strippers who call themselves “skrippers” or, alternatively, “accountants”—a reference to the boring nine-to-five jobs that adult industry professionals sometimes claim to hold to get prying relatives off their backs.

The policing of sexual content has been steadily worsening since the passage of FOSTA-SESTA, a 2018 law that poked a hole in Section 230—the rule that previously ensured internet service providers would not be treated as the “publisher” of content their users post. Though framed as an effort to stop sex trafficking, FOSTA-SESTA wound up endangering countless sex workers: because companies could now be held responsible for any content related to prostitution or other illicit activity hosted on their platforms, it removed opportunities for sex workers to advertise their services and vet clients online, and increased the risk that companies would boot them off altogether to avoid liability.

While TikTok’s guidelines ban nudity and sexually explicit content, they also name educational content as an exception to that rule—but this has not been the experience of many sex (“seggs”) educators on the platform, who frequently complain that their content is suppressed or removed despite following the guidelines. Marginalized communities are further affected by other users wrongfully reporting their content—even when they’ve broken no rule beyond saying something someone doesn’t want to hear. According to designer and artist Michaela Stark, photos of plus-size models in modest attire frequently get removed, while content featuring more conventional body types is allowed to stay. Similarly, the independent porn director Erika Lust claims that it’s “diverse and dissenting voices” being suppressed—those of the LGBTQIA+ and BIPOC communities, and of users who depict the functions of a woman’s body, such as breastfeeding and menstruation. All the while, images and advertisements sexualizing thin, white, conventionally attractive cis women continue to proliferate—because, according to Lust, they serve the patriarchal gaze.

“We’re losing the places where things like sex work can be more humanized—the kind of spaces where we’re allowed to exist with some degree of complexity online,” says erotic filmmaker Vex Ashley, describing the death of blogging platforms and social media networks where people could talk explicitly about the kind of work they do. “All of these spaces are now very ‘othered’ from one another. There are websites specifically for porn, but sex can only exist away from social media.”

Creators of sex-related content aren’t the only ones disproportionately affected by content moderation. Many queer creators say they face algorithmic discrimination for using words that, while once considered slurs, have been reclaimed by their community. TikTok has already come under fire for banning creators who use the words “Black” and “BLM,” often removing videos about racial injustice without warning or explanation—not to mention the countless Black creators who have called out the platform for deprioritizing their voices, even though they forged many of the viral dances, trends, and challenges that made the app so popular.

TikTok itself is no stranger to bans; the app’s CEO, Shou Zi Chew, is slated to testify before a congressional committee in less than two months, where he’s likely to be grilled by legislators about the app’s alleged ties to China’s ruling party. This follows the announcement that the US House of Representatives, along with 28 US states, has banned the app on government devices. And just last week, Republican lawmakers Josh Hawley and Ken Buck introduced legislation that, if passed, would ban the app on all devices nationwide.

It’s clear that TikTok is feeling the pressure, from both its audience and US legislators. And though letting users check whether their content has been blocked from the app’s recommendations may bring more clarity about their supposed infractions, transparency isn’t all that’s needed to reform the app’s overzealous approach to censorship. One can’t help but wonder: will this updated account enforcement system really create a more equitable environment on TikTok, or is the company just running out the clock?
