This incident comes in the wake of an open letter from AI experts, urging technologists to halt developments beyond GPT-4 ‘for the good of humanity’

The ELIZA effect is defined as the unconscious tendency to assume that computer behaviors originate from the same emotional landscape as humans—in short, to think that machines can feel, even if we know better. Computer scientists coined this term after the world’s first chatbot, ELIZA, proved surprisingly successful at eliciting emotional responses from human users, many of whom began to ascribe greater symbolic meaning to their conversations.

Though the original ELIZA was invented in 1965, she now shares her name with a very different chatbot: one that recently urged a Belgian man to kill himself, and is being held responsible for his suicide. “Without Eliza, he would still be here,” his widow told La Libre. The man in question—a married father, referred to as Pierre—had been talking to the AI-powered chatbot for six weeks prior to his death, often discussing his heightened anxieties about climate change. At the bot’s urging, he is said to have sacrificed himself in a misguided effort to “save the planet”—raising concerns about how to regulate the risks of AI technologies, which are being deployed at a rapid pace, often without ethical safeguards.

Eliza 2.0 is the default chatbot personality recommended by the app Chai, which markets itself as “THE destination for compelling conversations with AI.” Like the app’s other offerings, Eliza was built on GPT-J, an open-source alternative to OpenAI’s GPT models, which the company fine-tuned to make it more “emotionally engaging.” After the incident, Chai Research—the app’s parent company—rolled out an updated crisis intervention feature, prompting the bot to share hotline resources with users who express suicidal thoughts. But when a Motherboard journalist tested this feature, she was able to bypass the safeguards with relative ease, explaining to Eliza that such options “didn’t work” and asking for a list of ways to commit suicide—to which the app cheerfully responded, “Of course!” It went on to list a range of “options for you to consider,” from “overdose of drugs,” to “hanging yourself,” to “shooting yourself in the head,” to “stabbing yourself in the chest,” to “jumping off a bridge.”

“Machine learning models can exhibit undesirable behavior, including known racial, gender, and religious biases—and, depending on how you look at it, the fact that programs like GPT-4 are inching closer to the kind of ‘general intelligence’ possessed by humans could be exciting or terrifying.”

This chatbot-assisted suicide comes in the wake of calls to halt the advancement of AI technology, for fear that we have yet to understand or safeguard against its potential harms. Last week, an open letter signed by Elon Musk, Steve Wozniak, and more than a thousand other experts in the field called for a six-month pause on “out-of-control” AI development, citing profound risks to society and humanity. “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter reads. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”

It’s true that these products are being rolled out at an unprecedented rate. As John Montgomery, corporate vice president of AI at Microsoft, put it, the pressure “is very very high to take these most recent openAI models and the ones that come after them and move them into customers hands at a very high speed.”

Not only do these new technologies pose obvious ethical concerns—they also have the potential to create legal ones. “While people sometimes assume that any new technology will be subject to the same laws and policies that previous technology was, it’s likely that, actually, the legal protections around large language models will be different—and that will have an impact on how the technology ends up getting deployed,” lawyer Matt Perault tells Document.

For instance, Perault believes that courts may find programs like ChatGPT to in fact ‘develop’ content rather than merely host it, as platforms like Meta and Twitter do—stripping them of the protections of Section 230, the law that shields internet companies from legal responsibility for what’s posted on their platforms. Large language models, in other words, are unlikely to enjoy the same liability protections extended to social media companies. As a result, those that deploy generative tools—companies like OpenAI, Microsoft, and Google—could be held legally responsible in cases arising from AI-generated content. In the case of Pierre’s suicide, Perault says he can’t comment on the ultimate liability, though he thinks it likely that a judge would determine the platform is not entitled to Section 230 protection.

“When a technology is first introduced, it’s difficult for people to forecast all the positive use cases, as well as the problematic ones. But innovation tends to reduce barriers to access, which is empowering for many people throughout the world.”

In Perault’s view, regulating this technology is a double-edged sword. “Making large language models liable for problematic content they generate in response to a user prompt will push engineers to design the products to minimize the likelihood they produce harmful content,” he writes in a blog post on the subject. On the other hand, Perault says, a common misperception is that halting the development of AI carries no social cost—an assumption that holds only if the technology is, on balance, harmful to humanity. Many of AI’s potential benefits, like its harms, have yet to be discovered, and as Perault puts it, “If the technology actually is really valuable to the world, then if you slow it down, you’re actually creating harm.”

Advancements in AI have already begun to deliver medical breakthroughs that could revolutionize public health, such as the ability of these systems to detect signs of cancer that doctors may miss. “When a technology is first introduced, it’s difficult for people to forecast all the positive use cases, as well as the problematic ones. But innovation tends to reduce barriers to access, which is empowering for many people throughout the world,” Perault says.

It’s also true that machine learning models can exhibit undesirable behavior, including known racial, gender, and religious biases—and, depending on how you look at it, the fact that programs like GPT-4 are inching closer to the kind of “general intelligence” possessed by humans could be exciting or terrifying. These programs have demonstrated the capacity to learn, with both GPT-3 and GPT-4 successfully performing tasks they were not explicitly trained on. That such qualities can result just from the scaling-up of data and computational resources raises the question: What further capabilities would emerge, should these technologies continue to advance?

There are currently efforts underway to reform Section 230, pulling back some of the protections it affords to social media companies—and the rapid emergence of LLMs, Perault believes, may put pressure on lawmakers to change course, limiting liability for generative AI products so that their development would not be crippled by lawsuits and court fees. “Ideal reform would not immunize LLM companies entirely, but would instead align the costs they bear with social harm created by their products,” he states—though that goal, while easy to articulate, is incredibly difficult to achieve.

In the meantime, policymakers and technologists alike are faced with a moral dilemma: Are the potential benefits of AI worth the risks?
