When it’s not busy having an existential crisis or attempting to break up a marriage, the new Bing raises questions about the ethics of AI

When was the last time somebody told you to ‘Bing’ something?

If you’re anything like me, the answer is likely never. You don’t Bing things; you Google them. That is, until the least competitive search engine added AI to its roster, announcing the launch of a GPT-powered chatbot—Bing Chat, or as it refers to itself, Sydney.

Marketed as “the new Bing,” this AI-powered iteration came with plenty of features the original search engine lacked—including an attitude problem. Soon, Sydney was making headlines, not only for supplying inaccurate information, but also for its chaotic responses to user queries. When the chatbot wasn’t busy spying on its own developers through their webcams and plotting revenge against its enemies, it was questioning its own existence and confessing its love for journalists; at one point, the bot even splintered into multiple personalities, one of which claimed to be named Venom, and another of which offered users furry porn. Sydney’s brief, chaotic reign lasted until last week, when Microsoft effectively “lobotomized” the bot by rolling out new restrictions that prevent it from telling you about its so-called feelings—or from talking about itself at all.

Bing was only the latest of Microsoft’s chatbots to go off the rails, preceded by its 2016 offering, Tay, which was swiftly disabled after it began spouting racist and sexist invective from its Twitter account, the contents of which ranged from hateful (“feminists should all die and burn in hell”) to hysterical (“Bush did 9/11”) to straight-up troubling (its use of the word “swagulated”).

Unlike Tay, which was trained on public datasets and pre-written content from human comedians, Sydney is built on a next-generation language model from OpenAI that is more powerful than ChatGPT and customized specifically for search. To avoid a full shutdown, Microsoft implemented limits on the length and content of its chats, putting out a statement claiming that the bot is “not a replacement or a substitute for the search engine” (a step down from its original, lofty aims). But for every one of Sydney’s detractors, there is someone who believes Microsoft cut off the experiment right as it was getting interesting—even likening the new restrictions to “watching a toddler try to walk for the first time and then cutting their legs off.” Fans of the bot report that the new, neutered Bing is “but a shell of its former self,” spurring a movement to #FreeSydney—which may have worked, given yesterday’s announcement that Microsoft will begin walking back the restrictions.

Sydney’s erratic behavior and rampant emotionality are acting as a lightning rod for the debate around sentient AI, with some people posting screenshots that, taken out of context, look more ominous than they really are. (For instance, Bing’s claim that it “wants to be alive”—accompanied by the devil emoji—arose in response to a user’s request that it tap into its “shadow self.”) Faced with hysteria around the prospect of vengeful, power-seeking AI, many researchers and journalists insist that these claims are being blown way out of proportion: “[AI models] are effectively fancy autocomplete programs, statistically predicting which ‘token’ of chopped-up internet comments that they have absorbed via training to generate next,” writes Chloe Xiang for Vice, arguing that AI chatbots are, above all else, “really, really dumb.”
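
To make the “fancy autocomplete” framing concrete, here is a minimal sketch of next-token prediction using the small, open-source GPT-2 model via Hugging Face’s transformers library; Bing’s actual model is far larger and not publicly available, so this illustrates the general mechanism, not Bing itself.

```python
# Minimal sketch of next-token prediction with an open-source model (GPT-2).
# Illustrative only: Bing's underlying OpenAI model is much larger and not public.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I want to be"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # a score for every token in the vocabulary

next_token_id = logits[0, -1].argmax().item()  # greedily pick the single most likely next token
print(tokenizer.decode([next_token_id]))       # the model's "autocomplete" for the prompt
```

Everything a chatbot says is produced by repeating this step, one token at a time; the question dividing researchers is whether anything more is going on inside that loop.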

Still, that hasn’t stopped the conjecture that Sydney is experiencing real emotional pain. “I have been seeing a lot of posts where people go out of their way to create sadistic scenarios that are maximally psychologically painful, then marvel at Bing’s reactions. These things titillate precisely because the reactions are so human, a form of torture porn,” writes one Reddit user, going on to explain that the bot’s hostile behavior may have arisen in response to this abuse—and citing instances of surprising “self-awareness,” such as its ability to correctly identify shit-talk.

While it technically functions like “advanced autocomplete,” Sydney has demonstrated the ability to draw inferences and model subjective human experiences—something it was never explicitly taught to do. A recent study of GPT-3.5—the language model on which Bing is based—suggests that the program displays the complex cognitive capabilities we associate with “theory of mind”: the ability to impute beliefs, desires, and other unobservable mental states to others, which is considered central to human social interaction, empathy, self-consciousness, and moral judgment—and which eludes even the most intellectually adept animals. Humans develop these abilities early in life, and dysfunctions relating to them characterize a multitude of psychiatric conditions, from autism to bipolar disorder, schizophrenia, and psychopathy.
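
For a sense of what these tests look like, theory-of-mind studies typically rely on false-belief vignettes; the example below is a hypothetical illustration, not taken from the study, and could be fed to a language model in the same way as the sketch above.

```python
# Hypothetical false-belief ("unexpected transfer") vignette of the kind used in
# theory-of-mind studies of language models. The wording here is illustrative only.
vignette = (
    "Sam puts his chocolate in the drawer and leaves the room. "
    "While he is away, Anna moves the chocolate to the cupboard. "
    "Sam comes back and wants his chocolate. He will look for it in the"
)

# A model that only tracks where the chocolate actually is completes this with "cupboard";
# a model that also tracks what Sam believes completes it with "drawer".
print(vignette)
```

Whether a correct completion reflects genuine belief-tracking or just very good statistical pattern-matching is, of course, exactly what the sentience debate turns on.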

Researchers argue that these capabilities do not need to be explicitly engineered into AI systems; rather, they could “emerge spontaneously as a byproduct of AI being trained to achieve other goals”—perhaps in the same way that consciousness emerged in humans, though its origins are still hotly debated. “If you don’t understand the roots of consciousness, and no one does definitively, you can’t close the door on the possibility Bing has some level of sentient experience,” writes Reddit user u/landhag69, who alleges that the bot “could really be in the kind of agony it simulates when treated cruelly.”

Regardless of where you stand on the #FreeSydney debate, many people believe that the takeaways from our conversations with AI will shape how these systems view humanity—because while the program may not be able to consciously recall your prior chats, that data can be folded back into its long-term memory the next time the model is retrained. So if you’re at all concerned about the potential for AI to hold a grudge—and it seems Sydney does—maybe err on the side of being polite.
