The app’s new opt-out policy for EU users is a step forward for cognitive liberty—but will it be enough to stop users from trading privacy for entertainment?

TikTok’s algorithm is infamous: for making people realize they’re queer, for spurring a rise in self-diagnosed mental health issues, for keeping users scrolling so long that the app now issues a warning. But all that’s about to change—at least in the European Union, where TikTok users will soon be able to opt out of the platform’s content recommendation system in compliance with the Digital Services Act (DSA).

This new policy will allow EU users to trade their personalized “For You” page for a chronological feed of accounts they follow, or for videos that are popular in their area. This is a departure from TikTok’s usual business model—because, unlike Facebook or Instagram, the app’s content recommendations have little to do with a user’s own social network. Instead, its algorithm monitors how people behave in response to the videos on their For You page, resulting in a highly personalized feed that can lead people to discover things about themselves before they’re even aware of them.
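In code terms, the opt-out amounts to swapping one ranking function for another. Here is a rough sketch of the difference between the feed modes; the Video fields and build_feed helper are invented for illustration and are not TikTok’s actual API:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Video:
    topic: str
    posted_at: datetime
    region_popularity: float  # e.g. recent views in the user's area

def build_feed(videos, mode, interest_scores=None):
    """Order videos for display under each feed mode."""
    interest_scores = interest_scores or {}
    if mode == "chronological":
        # DSA opt-out: newest posts from followed accounts first
        return sorted(videos, key=lambda v: v.posted_at, reverse=True)
    if mode == "popular_nearby":
        # DSA opt-out alternative: locally trending videos
        return sorted(videos, key=lambda v: v.region_popularity, reverse=True)
    # default "For You" mode: ranked by the platform's inferred interest per topic
    return sorted(videos, key=lambda v: interest_scores.get(v.topic, 0.0), reverse=True)

videos = [Video("dance", datetime(2023, 8, 1), 0.9),
          Video("news", datetime(2023, 8, 3), 0.4)]
print([v.topic for v in build_feed(videos, mode="chronological")])  # ['news', 'dance']
```

The contrast is the point: only the default mode depends on interest_scores, the behavioral profile discussed below.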

TikTok’s official stance is that shares, likes, and follows all dictate what kind of content a user is likely to see on the app—but according to an experiment by the Wall Street Journal, the secret to its proprietary algorithm is a simple one: surveillance. By closely monitoring how long someone lingers on each video, the algorithm picks up cues as to what kind of content to serve up next. This also means that, if a video gives you pause—not because you like it, but because it’s hard to understand, or contains disturbing information—you can easily wind up shunted down an algorithmic rabbit hole, served a slew of content about suicidal ideation, disordered eating, or election fraud conspiracies.
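The mechanics are easy to caricature. The toy model below is a deliberately simplified sketch; the class, parameters, and topic labels are invented for illustration and bear no relation to TikTok’s actual code. It shows how a recommender driven purely by dwell time behaves: any video that holds your attention, for whatever reason, raises the score of its topic, and the feed tilts accordingly.

```python
import random
from collections import defaultdict

class DwellTimeRecommender:
    """Toy recommender: interest is inferred solely from how long a
    viewer lingers on each video, never from likes, shares, or follows."""

    def __init__(self, decay=0.9):
        self.decay = decay                   # older behavior fades over time
        self.interest = defaultdict(float)   # topic -> inferred interest score

    def record_view(self, topic, watched_seconds, video_length_seconds):
        # Dwell ratio: skipping quickly pushes this toward 0; lingering
        # or rewatching pushes it past 1.0. It is the only signal used.
        dwell = watched_seconds / video_length_seconds
        self.interest[topic] = self.decay * self.interest[topic] + dwell

    def next_topic(self, candidates, explore_rate=0.1):
        # Mostly serve the topic with the highest inferred interest,
        # occasionally exploring so new rabbit holes can open up.
        if not self.interest or random.random() < explore_rate:
            return random.choice(candidates)
        return max(candidates, key=lambda t: self.interest[t])

feed = DwellTimeRecommender()
feed.record_view("cooking", watched_seconds=3, video_length_seconds=30)
feed.record_view("conspiracy", watched_seconds=55, video_length_seconds=30)  # paused, rewatched
print(feed.next_topic(["cooking", "conspiracy", "dance"]))  # most likely "conspiracy"
```

Crucially, the model has no way to distinguish lingering out of delight from lingering out of confusion or distress, which is exactly how a moment’s hesitation can snowball into a rabbit hole.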

In theory, a personalized algorithm promises better videos; in practice, it makes it easier for such platforms to dictate what kind of content we encounter—which in turn impacts the user’s mental experience both online and off. According to cognitive liberty advocate Nita Farahany, professor of law and philosophy at Duke Law School and author of The Battle for Your Brain, this poses a threat to our cognitive liberty: the right of the individual to control their own mental processes, cognition, and consciousness. Design changes like the option to eschew algorithmic profiling on social media are a step forward, in Farahany’s view. But as new technology continues to develop, these threats are multiplying—extending beyond fake news, disinformation, and manipulation by algorithm, and threatening that last bastion of privacy: the brain.

“Commodification of brain data has already begun,” Farahany states, describing how this is made possible by sensors embedded in earbuds, headphones, and watches, which are capable of detecting and decoding the wearer’s brain activity—a natural extension of the personal health and fitness trackers that have already been popularized. In China, government-backed surveillance projects are already using brain-reading technology to monitor the emotional states of employees on production lines and at the helm of high-speed trains. In the U.S., products like SmartCap monitor workers’ fatigue with the goal of increasing productivity.

Others are utilizing such technologies not for business, but pleasure: Creating sex toys powered by brain waves, for instance, to provide the hands-free, frictionless orgasms you never asked for. And while these technologies may not be mainstream yet, Farahany and other cognitive liberty advocates suggest we need to push for legislative safeguards to protect people’s freedom of thought now. “Brain sensors are already being sold worldwide. It isn’t everybody who’s using them yet,” she says. “When it becomes an everyday part of our everyday lives, that’s the moment at which you hope that the safeguards are already in place.”

Even without the looming threat of this Orwellian future, one need only glance at the modern internet for evidence of our diminishing control over our own data. Not only is our every move online tracked, but our personal information is being used to train AI language and image models without consent, with massive datasets like LAION-5B scraping 5.8 billion images from the internet, including confidential medical records. Free speech on social media is under threat, with overzealous content moderation policies necessitating the use of “algospeak” on TikTok to evade algorithmic censorship. Yet even as our own privacy dwindles, the algorithms at the heart of such platforms often remain black-boxed, their workings disclosed only in the form of leaked memos and carefully crafted PR statements.

That’s another thing the Digital Services Act has set out to change. In addition to offering users more options to flag problematic content, these new policies will force TikTok to turn its data over to researchers for study—a level of transparency that the app has historically avoided.

Coupled with the algorithmic opt-out and the ban on targeted advertising to users under 18, Farahany sees this policy as a step in the right direction, but she also urges legislators to further update their digital rulebook: “Lawmakers and companies urgently need to reform the business models on which the tech ecosystem is predicated. Strong legal safeguards must be in place against interfering with mental privacy and manipulation. Companies must be transparent about how the algorithms they’re deploying work, and have a duty to assess, disclose, and adopt safeguards against undue influence,” she writes for WIRED, arguing that, without such changes, our collective freedom of thought is at risk.

In the era of online echo chambers and “mind-reading” TikTok algorithms, the concern that unseen forces are shaping our political landscape is a common one. It even led Meta to collaborate with researchers on a series of studies about the impact of Facebook and Instagram on users’ political views, with the goal of debunking the notion that its algorithm plays a key role in furthering political polarization. The initial findings, published in the journals Science and Nature, suggest that while algorithmic recommendation made it easier for users to seek out content that confirmed their existing views, swapping the algorithm for a chronological feed did not significantly change political knowledge or polarization on key issues among either conservatives or liberals.

While eliminating the algorithm did little to heal the political divide, it did significantly decrease the time users spent on the apps—driving them away from Facebook and toward YouTube and TikTok, where algorithmic customization reigns supreme. This confirms what we already know: The more personalized the algorithm, the more addictive the app. It raises the question: Even as algorithms increasingly infringe on our personal privacy, will the entertainment they offer be enough to keep us swiping?
