The existential paranoia fueling Elon Musk’s fear of AI

The scaremongering by Musk and other 'tech-bros' says more about the exploitative business model of Silicon Valley than Artificial Intelligence's capacity to do actual harm.

You can learn a lot about someone if you ask them about the robot uprising. What they fear most about it serves as a curious kind of postmodern personality test. Imagine: in the distant future, a benevolent artificial intelligence arises that cures us of war, disease, and want. But this deific majesty, able to bend the fabric of reality itself, is also a vengeful virtual god, punishing the sins of all who delayed its coming. You might be in its simulation right now, constructed by the AI to torment you. In other words, by having read these sentences, you may now be subject to brutal retrospective robot justice from this future AI unless you do everything in your power to hasten its coming.

This is “Roko’s Basilisk,” or Pascal’s wager for the gentrifying pseudo-philosophers in tech. The author and game developer Amy Dentata writes, “Rococo’s Basilisk suggests that if you refuse to live a life of wanton hedonism, the real you is punished forever while a simulation of yourself you run in your own mind gets to enjoy life.” It long ago graduated from a thought experiment on an insular message board to a nerdy internet meme, now more or less treated with the seriousness it deserves as the butt of numerous jokes. One Twitter user’s screen name is Rocko’s Modern Basilisk, a play on the name of an old Nickelodeon cartoon.

But for all its evident silliness, among a certain set of mostly white, young, tech-savvy men, this simple thought experiment sired nightmares. Its most fundamental flaw suggests, perhaps, the reason why. Why would a deity-like superintelligence be bound to an inflexible, rote, unempathetic logic that led it to alter history with dictatorial brutality, an “if-then” equation gone feral? When put in these terms, the Basilisk looks less like a virtual god and much more like a sadly familiar, all-too-human mortal. It looks, strangely enough, like a reflection of the young IT professionals with “rationalist” and “devil’s advocate” in their Twitter bios. Such men often promote a vulgar idea of “logic,” similar to that employed by the Basilisk AI, with a fundamentalist zeal. Nowadays, the Basilisk is a joke even to such people, but as a silly cultural artifact it says a lot about their own fears and the reasons for them.

There are, after all, far mightier men with a slightly more restrained, lofty take on the dangers posed by AI. Try Elon Musk. “If there’s a super intelligent [AI] engaged in recursive self improvement,” Musk said, acting out the same wager, “then it will have a very bad effect. It could just be something like getting rid of spam email. It could be like, ‘Well, the best way to get rid of spam is to get rid of humans. The source of all spam.’” The speculation drew laughter from the audience, which might have been more appropriate than the august crowd realized in the moment.

Musk’s argument echoes one made by the Swedish philosopher Nick Bostrom, known as “The Paperclip Maximizer.” In this version, an AI is tasked with producing paper clips and improves itself to the point where it is capable of doing anything to achieve that goal, up to and including melting humans down for the base metals in our bones.

Just as with Roko’s Basilisk, the nightmare AI scenario is one where a machine operates on the coldest of utilitarian logic and ends up killing us all for it; the same kind of context-free, bloodless “logic” venerated by so many young tech-savvy men and Silicon Valley utopians alike as the ideal consciousness (think Spock without all the character development). Consider the apotropaic manner in which the futurist Ray Kurzweil describes rationality while interviewing a fellow traveler: “The sort of scenario we’re talking about is something like Enlightenment version 2.0, a new kind of civilization, instead of something like the jump from chimps to humans. We could use rationality to what you call ‘raise the sanity waterline’ of civilization,” adding that this “new, more rational civilization” could tackle huge problems. But it’s never quite defined, except by implied contrast, inasmuch as the label “irrational” is applied to Kurzweil’s critics.

But the Paperclip Maximizer thought experiment also reveals something else: it sounds quite a lot like capitalism as a whole. The ruthless production of capital ad nauseam for no reason. Indeed, this is definitional to capitalism, a “pursuit of profit, and forever renewed profit, by means of continuous, rational, capitalistic enterprise,” per Max Weber, the early sociologist. Exhaustive, extractive, exploitative production of capital day and night. A saleable world without end.

“Privileged people who fear an AI rebellion always imagine it in exploitative terms that mirror their own ideologies. They fear their ethics being turned back on them.”

To look at our latter-day titans of industry, like Musk, is to see capitalism’s techno-utopian form at its purest: the CEO as a Tony Stark–esque rock star inventor who can save the world. But Musk’s Tesla corporation isn’t terribly different from what’s gone before, except that unlike traditional car manufacturers, it won’t permit its workers to organize into trade unions. The endless production of capital is what counts—no matter who gets hurt along the way. The mythology of these corporations endlessly pursuing profit as a measure of their desire to build a better world has become endemic to any tech-startup narrative. And Musk’s Tesla is the ur-example of the genre.

So why would our tech barons, and their legions of anonymous libertarian groupies, create these depraved scenarios? They fear that a true AI will be too much like them. Like Rush Limbaugh claiming to have had a nightmare about being a “slave building a sphinx in a desert that looked like Obama,” privileged people who fear an AI rebellion always imagine it in exploitative terms that mirror their own ideologies. They fear their ethics being turned back on them.

Once you extol emotionless logic as the ideal form of thought, then you’ll fear becoming the rounding error in someone else’s logical calculations. When you routinely exploit others through capitalism, then you fear being exploited in turn. For men like Musk, the shape of their nightmares is determined in precise detail by their dreams.

In an extraordinary piece for Vanity Fair, the New York Times columnist Maureen Dowd chronicled the insular world in which some of the most consequential debates about artificial intelligence and machine learning are happening. In an 8,000-word feature, the pronoun “she” appeared seven times (all references to Ayn Rand). That simple fact says so much about the world in which these discussions are happening and what influences it. In Silicon Valley, Rand-inspired libertarianism is akin to a state religion, and when it comes to AI, many of these men are also of one mind. Peter Thiel, who cofounded PayPal with Musk and whose libertarianism is of so extreme a bent that he supports Donald Trump and the alt-right, is also quite worried about AI. “We don’t even know what AI is,” he told Dowd. “It’s very hard to know how it would be controllable.” He added that he worried Musk’s agitation would, paradoxically, increase interest in AI research, thus hastening the coming apocalypse.

Incidentally, Thiel is the second largest contributor to the Machine Intelligence Research Institute (MIRI), an obscure and academically flaccid outfit that hosts LessWrong, the website that promoted Bostrom’s thought experiment and gave us Roko’s Basilisk. It’s a small world, after all. Among other things, MIRI encourages donations to help manage the “acute risk” period we now supposedly face with AI (“an existential catastrophe over the next few decades”), trading on fears they cultivate about it. Eliezer Yudkowsky, a MIRI theorist, put it this way: “The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.”

There are others who take a different approach. Mark Zuckerberg is quite gung-ho on AI, for instance, and even Bill Gates—who once likened AI to nuclear weapons—has criticized Musk as a scaremonger. What of their motivations? Simply, AI is potentially quite useful and profitable, even in the rudimentary forms we have today (Siri, satellite navigation). Further, automation is a great cost saver. Imagine Uber’s delight if they had no drivers to split fares with. If you take the great doyens of tech as a collective, one faction is pushing to make the most profit from the technology, while others are gripped by terror of their ideology turning on them.

The sci-fi writer Ted Chiang described this terror plainly. “It’s assumed that the AI’s approach will be ‘the question isn’t who is going to let me, it’s who is going to stop me,’ i.e., the mantra of Ayn Randian libertarianism that is so popular in Silicon Valley.” But outside the plexes of the Valley, there are equally interesting AI opponents who make similar assumptions. With a rather telling simile, former secretary of state Henry Kissinger stated his fears to The Economist: “Artificial intelligence is a crucial [challenge], lest we wind up creating instruments in relation to which we are like the Incas to the Spanish, [such that] our own creations have a better capacity to calculate than we do.” Of course a man who presided over an imperial war would fear becoming like those colonized and nearly eliminated by European imperialism. The views of these men (and they are all men) conjure a nightmarish garden of walls, forbidding any escape from their adamantine logic. But there are other perspectives, mercifully.

Photography by Stephen Lewis.

AI does pose a risk, but not an existential one, and our tech barons’ fears say a lot about the limits of their worldview. As the AI researcher Kate Compton told me, “The gun-toting robot is a symptom of the stories we’ve told about AI.” According to her, these stories where “a single unit is making independent choices” are fictions that don’t square with reality: distributed networks of humans, aided by AI, doing terrible things. Take, for instance, the COMPAS algorithm, a tool used by the American criminal justice system to assess whether convicts are likely to reoffend: it infamously reproduced and accelerated patterns of racial profiling long endemic to law enforcement. Humans still made the final decisions, but human input biased COMPAS from the outset.

Compton went on to describe how her “colleagues of the Elon Musk personality type” argued that a distributed system like COMPAS isn’t a threat in their minds “because it needs humans to keep working.” But this is the essence of our true AI problem: an AI that becomes a machine ideal of our worst impulses precisely because of how people in power manipulate it. “Elon Musk isn’t afraid of a system that will destroy some humans,” she added, “because those kinds of systems are designed to aid him. The difference between a murderbot and a runaway existential threat is when it doesn’t stop at just killing the people you designed it to kill. This is why we are so engaged by stories of robots turning on their creators.” Indeed, Musk believes AI to be “a fundamental risk to the existence of human civilization.” In other words, a world without him in it.

“AI does pose a risk, but not an existential one, and our tech barons’ fears say a lot about the limits of their worldview.”

AI, in this conception, takes on the role of god: an immortal omniscience intimately concerned with the petty details of our existence and our destinies. He and his fellow travelers cast themselves as the focal points of a digital geocentric universe in which “god,” now an AI we created, cares entirely too much about us. It’s the latest in a long line of flattering delusions from the Valley. (I have a sneaking suspicion that our AI Copernicus will be a woman.) They also zealously quest for eternal life. Men like Thiel or Amazon’s Jeff Bezos have invested millions in immortality projects; meanwhile Yudkowsky, the MIRI theorist, thinks anyone who doesn’t sign their children up for cryogenic freezing is a “lousy parent.” In that quest for an immortal soul, two things stand in the way: death and a revolt of the underclass. AI threatens to combine both—semiotically and, just perhaps, literally.

For so many of these doomsayers, the possible revolt of AI is a punishment of Dantean irony. But more pressingly, it is also their nightmare of a working class in open revolt. So much of the fiction about rebellious robots revolves around this idea. The word robot itself comes to us from Karel Čapek’s archetypal play R.U.R. (Rossum’s Universal Robots), which depicts artificial workers (named for the Czech robota, or forced labor) revolting against their human masters. This story has been told again and again in fiction. And now Elon Musk, in his own considerably more boring way, is telling it too, only without special effects and while also trying to sell you a car.

What’s different is that this eschatology is being preached by a man vocalizing his own fears, rather than an artist distilling the nameless anxieties of an age. With AI, men like Musk reveal their inability to conceive of an economy that doesn’t exploit and abuse someone. Therein lies what we should really fear. The phobias of the Musk set reveal how they see us, and what they want to do to us. What they are doing right now. Exploiting us without guilt or anxiety—indeed, Thiel’s support of Trump and of the movement known as “neoreaction” is part of his desire to roll back democracy, the better to facilitate that goal. Their nightmares reveal a hopeful inverse: AI is the machine-ideal of the exploited person in revolt. Men like Musk fear AI because it is a version of us that can stage a revolution against his power. And win.