That Elon Musk has created a new Artificial Intelligence company, xAI, should come as a surprise to nobody. After all, Tesla’s “Full Self-Driving” and “Optimus” robots practically seeded the current AI bubble by showing just how credulous investors could be in the face of the most absurd and thinly supported claims about artificial intelligence. After six years of shamelessly cashing in on self-driving car hype without ever having to deliver more than a driver assistance system and some debunked safety claims, it was inevitable that Musk would be back for further bites of such an irresistible apple.
What is striking about xAI is its mission, which is no less than “to understand reality” (or “to understand the true nature of the universe,” depending on the day). This marks a distinct turn for the high priest of high tech miracles, from the material realm of world-improving gadgets to the philosophical immaterium of spirit and essence. In a Twitter Space that I couldn’t even bring myself to listen to, Musk was explicit about the company’s goals: summon an artificial superintelligence capable of bringing down governments and enslaving humanity. Framed within a narrative of global superpower conflict, Musk’s claim to be birthing a new God is a classic work of religious eschatology that its adherents are too culturally deprived to recognize for what it is.
On the one hand, this is just the continuation of a long series of escalating gambits on Musk’s part, beginning remarkably early in his involvement with Tesla, which I documented in my book Ludicrous: The Unvarnished Story of Tesla Motors. Long ago, Musk realized that the path through Tesla’s many challenges lay in having an increasingly histrionic set of miracles to hype up for fundraising purposes, each more ludicrous than the last. After solving climate change, automating car manufacturing, colonizing Mars, delivering Level 5 AVs and general-purpose humanoid robots, and curing every known disease by mapping and manipulating the human brain with a single computer interface, there just isn’t anywhere else to escalate to besides forging God on earth.
The question, then, isn’t so much why Musk has jumped into the business of monetizing eschatological anticipation, but why his transparent religion-peddling is still seen as part of the (ostensibly) rational world of science and capitalism. That Musk leads a cult has been obvious for some time, but what sets his cult apart is that it still bears the tattered legitimacy of science, technology and capitalism. Where other religions are explicit in their rejection of reason in favor of faith, Musk has ushered huge numbers of people beyond the skepticism that once delineated reason from faith, without them ever realizing that they’ve abandoned their core values. This has been going on for some time, but with his proclamation of an age of AI apocalypse the break is complete.
The seamless shift from things like rockets and electric cars to alchemical absurdities like discerning “the true essence of the universe” and creating new artificial Gods is made possible in part by the technological wonders we’ve witnessed in our lifetimes. The creation of the iPhone, not just as a device but as an entire ecosystem and economy, was so profoundly transformative that we’ve come to expect similarly epochal disruptions on a far more regular basis than is remotely reasonable. But faith hasn’t just infiltrated our relationship with technology; thanks to Musk it’s also become an increasingly unavoidable feature of modern capitalism.
Looking back at the years of reporting I did on Tesla, and the online engagement that flowed out of it, the feature that sets it apart from any other phenomenon I have ever covered or discussed online is the faith it inspired. Tesla has almost always been in a state of crisis, much of it happening in relatively plain sight for comprehensible reasons, but its sense of inevitability always pushed it through. When the company spent billions on “alien dreadnought” automated manufacturing for the Model 3, investors slipped effortlessly from being sure that it would succeed to being sure that the re-up of capital needed when it failed would bring resounding success in some other way. Having worked hard to surface and explain the facts that called this inevitability into question, I can say: the facts never once mattered.
Here, faced with apocalyptic visions of superhuman tech priests crafting superintelligent new Gods in order to mediate a global superpower conflict, it’s as easy as ever to step off the ride. On a human level, losing faith in Musk is as simple as looking at how he has strung along the rubes who have believed (and paid for!) his “Full Self-Driving” pitch since 2016. At a very high level, even the technical problems with machine “superintelligence” are discernible to a reasonably skeptical non-technical mind.
As usual, I found Joseph Weizenbaum’s explanation of the shortcomings of machine intelligence a useful guide. Framing human intelligence as a dialogue between the conscious and unconscious mind, he writes:
“The lesson here is rather that the part of the human mind which communicates to us in rational and scientific terms is itself an instrument that disturbs what it observes, particularly its voiceless partner, the unconscious, between which and our conscious selves it mediates. Its constraints and limitations circumscribe what are to constitute rational (again, if you will, scientific) descriptions and interpretations of the things of the world. These descriptions can therefore never be whole, any more than a musical score can be a whole description or interpretation of even the simplest song.
But, and this is the saving grace of which an insolent and arrogant scientism attempts to rob us, we come to know and understand not only by way of the mechanisms of the conscious. We are capable of listening with the third ear, of sensing living truth that is truth beyond any standards of provability. It is that kind of understanding, and the kind of intelligence that is derived from it, which I claim is beyond the abilities of computers to simulate.
We have the habit, and it is sometimes useful to us, of speaking of man, mind, intelligence, and other such universal concepts. But gradually, even slyly, our own minds become infected with what A.N. Whitehead called the fallacy of misplaced concreteness. We come to believe that these theoretical terms are ultimately interpretable as observations, that in the “visible future” we will have ingenious instruments capable of measuring the “objects” to which these terms refer. There is, however, no such thing as mind; there are only individual minds, each belonging not to “man” but to individual human beings. I have argued that intelligence cannot be measured by ingeniously constructed meter sticks placed along a one-dimensional continuum. Intelligence can be usefully discussed only in terms of domains of thought and action. From this I derive the conclusion that it cannot be useful, to say the least, to base serious work on notions of “how much” intelligence may be given a computer. Debates based on such ideas (e.g., “Will computers ever exceed man in intelligence?”) are doomed to sterility.”
Joseph Weizenbaum, Computer Power and Human Reason: From Judgement to Calculation (1976)
The religion of which Elon Musk is a high priest is poignantly summed up here: an insolent, arrogant “scientism” that robs humanity of its very soul, in pursuit of a new technological God. That this God cannot even be conceptualized without flattening the incomprehensible diversity of humanity into unreal concepts is instructive. If it were possible to reduce humanity to computable datapoints like “intelligence,” we might be able to normalize ourselves as part of a computer-ingestible dataset that could (at least in theory) produce an artificial intelligence capable of being compared on an apples-to-apples basis with human intelligence.
But even if that dehumanization were acceptable, let alone useful or desirable, it points to the fundamental problem with improving AI techniques in pursuit of superhuman or god-like consciousness. The issue isn’t one of algorithmic structure or technique, the issue is one of data labeling. We already struggle to collect and accurately label data about the relatively simple physical characteristics of the material world we live in, and this is currently the most meaningful challenge to building autonomous systems: Tesla’s own (now-former) head of AI Andrej Karpathy has said repeatedly that his work is far more about data labeling, hygiene and curation than algorithms. The idea that this challenge should be any different in the immaterial space of human intelligence and conceptual logic, where data points lack the physical grounding they enjoy in the real world, is of course absurd.
But collecting, labeling and curating data is fundamentally a matter of labor, not a space ripe for creative breakthrough. Like car manufacturing, it’s the kind of work where methodical rigor wins out over Red Bull-fueled hackathons. None of this deters Musk, because he isn’t purely a tech guy who has backed into religion, which is more how I see someone like Eliezer Yudkowsky. Musk isn’t just a religious leader, he’s a televangelist. At the end of the day, it’s the promise of algorithmic breakthrough that fuels the messianic, apocalyptic and, above all, capitalist anticipation Musk is instrumentalizing. Matters of technical feasibility mean nothing; all that matters is that the size of the jackpot overwhelms the impossibility of the odds of hitting it.
We’ve been watching this play out in the realm of self-driving cars for years now. Tesla’s entire approach to driving automation technology has basically no real technical basis, beyond the faith that someday its fleet database becomes so big, or a new algorithm becomes so magical, that it just works. So lost are Musk’s followers in a maze of word games and “misplaced concreteness,” that they can’t even define a consistent line between when a car is driving itself and when it isn’t. They can speak about abstract concepts (what one might term “the lore”) with absolute confidence, but the moment the conversation becomes rooted in the real world of insurance, legal liability, economic viability or safety-critical performance and validation, the whole edifice falls apart.
What’s dangerous about all this is not the rise of religious expression itself. Religion, spirituality and philosophy are deeply human impulses, and the idea that rationalism, science or material progress could simply do away with them seems unrealistic. What’s worrying is that Musk is peddling religion disguised as scientific rationalism, muddling the two because his real goal isn’t science or spirituality, but fueling his insanely lucrative runaway hype train. Scientific reason and spirituality are some of the deepest wells of human potential, and he’s poisoning them to make a few more billion.
Writing nearly fifty years ago, Weizenbaum recognized a risk that feels remarkably similar to the bizarre pass that the Elon Musk techno-spiritual self-enrichment carnival has brought us to. Between the structure of the computer and the fundamental aspects of the human mind, there has always been the potential for the cognitive-spiritual confusion in which we now find ourselves. In a particularly lyrical passage toward the end of “Computer Power,” Weizenbaum gropes for the language to describe the scenario our relationship with science and spirituality now seems to be running into:
“It also used to be said that religion was the opiate of the people. I suppose that saying meant that the people were drugged with visions of the good life that would surely be theirs if they but patiently endured the earthly hell their masters made for them. On the other hand, it may be that religion was not addictive at all. Had it been, perhaps God would not have died and the new rationality would not have won out over grace. But instrumental reason, triumphant technique, and unbridled science are addictive. They create a concrete reality, a self-fulfilling nightmare. The optimistic technologists may yet be right: perhaps we have reached the point of no return. But why is the crew that has taken us this far cheering? Why do the passengers not look up from their games? Finally, now that we and no longer God are playing dice with the universe, how do we keep from coming up craps?”
Joseph Weizenbaum, Computer Power and Human Reason: From Judgement to Calculation (1976)
At the point that we can no longer distinguish between reason and religion, between sincere belief and cynical fundraising pitch, between humanity and inhumanity, it feels like we’ve reached the point that Weizenbaum was warning about. Either Elon Musk delivers, creates a new God and ushers in the end times, or we have to figure out new ways of relating with technology, spirituality, and our basic humanity. In the space between those outcomes, it just feels like we’ve crapped out.