An early and influential thought experiment in the artificial intelligence doom literature is coming true—the paperclip maximizer scenario, except the world isn’t filling up with superfluous, unwanted paperclips. It’s filling up with predictions of AI doom.

For the same reason Hollywood doesn’t make nondisaster movies in which planes aren’t hijacked and cruise ships don’t overturn and sink, an AI expert who doesn’t emphasize the alleged “existential” risk is condemned to invisibility. This is why you barely hear anything about Meta’s AI guru, Yann LeCun.

Asked for a definition, ChatGPT says: “Journalism is the production and distribution of reports on the interaction of events, facts, ideas, and people that are the ‘news of the day’ and that informs society to at least some degree” (emphasis added).

I love the last part. The paperclip maximizers push “to at least some degree” to its bare minimum, like a Big Mac’s serving up empty calories. Yet the press alarums about artificial intelligence serve a purpose—perhaps the only purpose press coverage can supply—by signaling to a half-attentive audience at least the magnitude of the moment. Never mind that 90% of the words are really about something else: displaced resentment of successful young people, antibusiness stereotypes, the simple, click-baity pleasure of the words “AI Apocalypse” in a headline.

Whatever was behind the firing and rehiring of Sam Altman at OpenAI, the maker of ChatGPT, artificial intelligence is out of the bag globally; it’s out of the hands of any particular class of Silicon Valley nerd; it’s irrelevant whether one company, OpenAI, is officially motivated by profit or by a stated credo to save humanity from the harms of artificial intelligence.

Mistakes will be made, but before AI is given power to launch nuclear weapons, engineer new life forms or manufacture paperclips to the exclusion of all other human wants, it will be tested over and over on its ability to make reliable, useful judgments on nonextinction-level questions. These include what TV show you might like, possible side effects of a new drug, how a shopper might respond to a personalized offer, how a self-driving car might react to an odd traffic situation.

An electric switch must prove itself millions of times in noncritical applications before being used in a nuclear power plant. The same will be true with AI.

Anyway, humanity is already doomed. We know this. It may doom itself through its own technology, according to one proposed answer to the Fermi paradox, the puzzle of why alien civilizations aren’t evident. Humanity (so far) has survived the atom bomb, selective breeding, genetic engineering, the environmental effects of our own pollution, etc.

If artificial intelligence is a threat to our survival, it would likely be in conjunction with one of these other threats. The interesting exception: If AI dooms us, it might do so through inducement, by offering us a kind of existence that is no longer human.

This worry reportedly led to a falling-out between Elon Musk and Google founder Larry Page. Artificial intelligence’s most attractive potential is also its most disturbing—the potential to ward off or indefinitely delay the extinction of human memory and cultural accomplishment if not our actual physical species.

Nearly four billion years have elapsed since life appeared on Earth, 600 million since life became multicellular, 520 million since the first brain evolved. As far as we know, the possibility of preserving knowledge through writing has been around only 5,000 years, birthing a technical civilization. Now comes the possibility of feeding that knowledge into ChatGPT to birth artificial intelligence.

Humanity is on a wild ride, and in some ways accelerating. So much of modern anxiety is the anxiety of not knowing when and how this ride might end. If faster-than-light travel isn’t possible, we can say one thing: The only likely chance of human survival outside our solar system, which has its own terminal date, is with the help of artificial intelligence, taking our genetic material or at least our thoughts to new homes and cultivating them there.

Even the guy in line with me last month at the DMV who doesn’t own a computer or smartphone (making it hard to register a car) was well versed in the risks and hopes of artificial intelligence. “AI is coming and will transform society” is seven words. One study finds the most-read articles in the New York Times average just over 1,000 words. Sometimes adding empty calories is a way to make the news stick.