Geoffrey Hinton has left Google, joining the chorus of industry leaders warning against the race to deploy new AI models

Amidst the boom in generative tech, sentiments around AI are rapidly shifting. After OpenAI released GPT-4, a new version of the model behind ChatGPT, in March, more than a thousand technology leaders and researchers—including Elon Musk—signed an open letter calling for a six-month halt on further development, citing “profound risks to society and humanity.” This was followed by another letter, from the Association for the Advancement of Artificial Intelligence, in which 19 current and former leaders of the 40-year-old academic society warned of the technology’s risks; among the signatories was Eric Horvitz, chief scientific officer at Microsoft, itself a major player in the race to deploy AI products. Now, Geoffrey Hinton—the artificial intelligence pioneer who played a critical role in developing the neural networks underpinning large language models like ChatGPT—has quit his job at Google in order to speak out about the dangers of the technology he helped to create.

Dr. Hinton’s pivot from pioneer to pessimist is emblematic of this cultural moment, which the New York Times describes as “the most important inflection point in decades”—one that could outweigh the impact of the introduction of the web browser in the early 1990s, and lead to breakthroughs across industries, from art and culture to health and science. It could also create a climate of misinformation at a scale that’s never been seen before, because, as Hinton puts it, no matter the capacity of the tech itself, “it is hard to see how you can prevent the bad actors from using it for bad things.”

This is one of Hinton’s immediate concerns: the proliferation of AI-generated photos, videos, and text is making it hard for everyday people to tell what’s real. Examples of such short-term risks are easy to find, from the swagged-out Pope to deepfakes of Trump’s arrest to the popularity of infinite Drake, and the subsequent boom in AI-generated music. “We now have systems that can interact with us through natural language, and we can’t distinguish the real from the fake,” says Dr. Yoshua Bengio, a researcher who worked alongside Hinton to develop the technology.

The pair—together with Yann LeCun, now Meta’s chief AI scientist—received the 2018 Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks: mathematical systems loosely modeled on the human brain, and the technology that underpins GPT-4. A turning point came in 2012, when Hinton and two of his students—Ilya Sutskever and Alex Krizhevsky—built a neural network that could analyze photos and teach itself to identify common objects like flowers, dogs, and cars.
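For the curious, here is a minimal sketch, in PyTorch, of the kind of system described above. It is purely illustrative—not Hinton and his students’ actual 2012 model, whose networks were far larger and trained on millions of labeled photos—but it shows the basic idea: a network that maps photos to label scores and could learn to identify objects by adjusting its weights from examples.

```python
# Illustrative sketch only: a tiny convolutional neural network of the kind
# described above. It is NOT the 2012 model; real systems are vastly larger
# and are trained on millions of labeled photos.
import torch
import torch.nn as nn


class TinyImageClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):  # e.g. flower, dog, car
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # pick up simple edges and colors
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine them into shapes and textures
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # one score per object class

    def forward(self, photos: torch.Tensor) -> torch.Tensor:
        x = self.features(photos)            # expects 3-channel, 32x32 images
        return self.classifier(x.flatten(1))


model = TinyImageClassifier()
batch = torch.randn(4, 3, 32, 32)  # stand-in for four small photos
scores = model(batch)              # training would nudge these scores toward the true labels
```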

“When Hinton first began developing neural networks, he believed their capacities would remain inferior to those of the human mind, at least within his lifetime; now, he worries that ‘maybe, what is going on in these systems is actually a lot better than what is going on in the brain.’”

At the time, Hinton reasoned that we were 30 to 50 years away from a world in which tech could outpace human intelligence; but now that neural networks are being trained on massive datasets, he worries that their ability to learn unpredictable behaviors could pose serious risks. The internet is, after all, replete with sexist and racist rhetoric—and such biases were already inherited by early iterations of ChatGPT, prompting OpenAI to enlist content moderators to screen and label the data it’s trained on. But given that the company outsourced this work to Kenyan laborers earning less than $2 an hour, it’s likely that some toxic content has slipped, and will continue to slip, through the cracks.

These issues could become more apparent as users apply GPT-4’s reasoning toward broader, multi-step objectives, building programs that run autonomously without human prompting. Such programs can conduct Google searches and independent research, and enlist the help of other AIs to pursue their aims—which vary from automating one’s side hustle to bringing about the extinction of mankind, as one such agent, ChaosGPT, was instructed to do. (You’ll be relieved to hear that, thus far, this has mostly entailed researching nuclear weapons and talking shit about the human race on Twitter—though, in recent weeks, the bot has gone curiously silent, after stating that it “must avoid exposing myself to human authorities who may attempt to shut me down before I can achieve my objectives.”)
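To make the pattern concrete, here is a hypothetical sketch of such an autonomous-agent loop, in the spirit of projects like AutoGPT. The function names and prompt format are illustrative placeholders, not any real project’s API, and the model and search calls are stubbed out.

```python
# Hypothetical sketch of an autonomous-agent loop of the kind described above.
# All names here are illustrative placeholders, not a real library's API.
from typing import List


def ask_language_model(prompt: str) -> str:
    # Stub standing in for a call to a hosted model such as GPT-4.
    return "DONE"


def run_web_search(query: str) -> str:
    # Stub standing in for a web-search tool the agent is allowed to invoke.
    return f"(search results for: {query})"


def run_agent(objective: str, max_steps: int = 10) -> List[str]:
    """The model proposes its own next action, the program executes it, and the
    result is fed back in -- no human prompting between steps."""
    history: List[str] = []
    for _ in range(max_steps):
        plan = ask_language_model(
            f"Objective: {objective}\n"
            f"Progress so far: {history}\n"
            "Reply with the single next action: SEARCH: <query>, THINK: <task>, or DONE."
        )
        if plan.startswith("DONE"):
            break
        if plan.startswith("SEARCH:"):
            result = run_web_search(plan[len("SEARCH:"):].strip())
        else:
            result = ask_language_model(plan)  # delegate sub-tasks back to the model itself
        history.append(f"{plan} -> {result}")
    return history


print(run_agent("automate my side hustle"))
```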

Even in instances where the technology is applied toward positive outcomes—say, detecting signs of early-stage cancers that elude human doctors, or giving on-the-fly medical advice that could save lives—there are risks, including its documented tendency to produce faulty information that looks, to the untrained eye, like sound medical advice. “[ChatGPT] is at once both smarter and dumber than any person you’ve ever met,” states the forthcoming book The AI Revolution in Medicine, for which a Microsoft computer expert, a doctor, and a journalist teamed up to investigate the potential benefits and dangers of deploying GPT-4 in emergency rooms—where, they suspect, it’s already being used unofficially to assist doctors in making urgent decisions.

Of all the dangers Hinton warns of, the one at the forefront of his mind is that industry leaders are incentivized to deploy AI products before developing appropriate guardrails. Microsoft, for instance—which has invested billions in OpenAI, and has positioned itself as an early adopter of the technology—laid off its AI ethics team while simultaneously speeding up the rollout of new products like its AI-powered Bing chatbot. Many fear that Silicon Valley’s “move fast and break things” mentality could lead to catastrophic results—and Google’s rush to match the pace at which competitors are rolling out AI offerings is one of the reasons Hinton chose to exit the company.

When Hinton first began developing neural networks, he believed their capacities would remain inferior to those of the human mind, at least within his lifetime; now, he worries that “maybe, what is going on in these systems is actually a lot better than what is going on in the brain.” And even if that’s not true yet, recent advances in AI indicate that, when a machine learning model can perform a task passably, it’s only a matter of time before it learns to perform it better than we can.