Running Toward Extinction

If you didn’t know better, you’d mistake it for satire: Industry leaders warn of extinction risk from a technology they’re rapidly developing and deploying.

For the second time in as many months, A.I. heavyweights issued an almost comically dire warning on a possible human apocalypse.

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the Center for A.I. Safety said Tuesday, in a one-sentence statement signed by dozens of notable figures including OpenAI CEO Sam Altman and executives from Google DeepMind and Anthropic, which raised $450 million last week to bring the startup’s cash pile to almost $1.5 billion.

The statement came two months, almost to the day, after a March letter endorsed by Elon Musk, Yuval Noah Harari and a veritable who’s who of… I don’t know, visionaries, I guess. Dan Hendrycks, the Center for A.I. Safety’s executive director, told The New York Times that Tuesday’s warning was an opportunity for more industry veterans and leaders to sound the alarm — or to sound alarmist, depending on how seriously you take this risk.

“There’s a very common misconception, even in the A.I. community, that there only are a handful of doomers,” Hendrycks said. “But, in fact, many people privately would express concerns about these things.”

If that’s true, you’d be forgiven for asking why these people — all of them — don’t simply stop building the technology until they think it’s safe. This isn’t World War II. Yes, there’s an A.I. arms race, and it’s certainly true that autocratic governments are keen to beat their democratic counterparts to the proverbial punch, but the sense of urgency isn’t the same as the one that drove the US to create and use a nuclear bomb.

Let’s face it, most of today’s A.I. work is done for commercial purposes. If there were even a minuscule chance that the sprint to create highly advanced chatbots and applications that can seamlessly replace Mel Gibson with Sylvester Stallone in Mad Max might lead to an “extinction” event for humanity (or put us all in Mad Max), the gamble surely wouldn’t be worth it.

Altman has presented himself (including to Congress) as a champion for responsible A.I. development and regulation. I don’t doubt his sincerity, nor do I doubt that he believes the technology has the potential to solve many of humanity’s most pressing problems. But, again, when the downside is defined as “extinction,” there’s an argument to be made that no amount of upside justifies the gamble, unless of course that upside is “no extinction” or, put differently, survival.

In other words, if you believe there’s a non-trivial chance that something you’re pursuing could end human civilization, the only way to justify the pursuit is by arguing that not moving forward could likewise mean the end of the world. I don’t think you can make that case here.

Meanwhile, Nvidia did it: Jensen Huang got his $1 trillion valuation on Tuesday, when the shares touched $414 at the highs, up an astounding 180% for 2023.

Just nine companies have ever managed the milestone. Nvidia is the first chipmaker to boast a 13-figure valuation.

As most readers are probably aware, Huang wowed the crowd at a trade show in Taiwan on Monday with a presentation touting several new A.I.-related products and services. “It’s too much,” he bragged, toward the end. “I know it’s too much.”

Actually, it’s not too much. Markets have a fever, to quote the famous Saturday Night Live sketch, and the only prescription is more cowbell. Or, in this case, more A.I.

Although Huang has acknowledged the necessity of ensuring that A.I. is safe, he’s unapologetically brash about Nvidia’s approach. At a separate event in Taiwan over the weekend, he summed up the company’s philosophy. “Either you are running for food, or you are running from becoming food,” he said.

Maybe it’s risk extinction or go extinct after all.

Read more:

Prospectors, Gamblers And Golden Tickets

Don’t Short A Bubble



8 thoughts on “Running Toward Extinction”

    1. Exactly. I don’t understand the AI extinction fear. Perhaps I’m not paying attention, but has anyone filled in the details or sketched out a plausible scenario? Maybe Hollywood could make a movie in the vein of The China Syndrome. That certainly worked to scare everyone off nuclear power plants. Experience says humans don’t pay much attention to extinction events if they are predicted to happen more than a week into the future. Climate change extinction prophecies do have some meat on the bone, but most humans don’t seem to be impressed.

    2. Assuming some measured progress on reduced carbon emissions, climate change isn’t a species-ending event. It’s not going to be pleasant, but a couple of bush wars and a few million refugees aren’t going to be the end of the human race.

  1. The AI law that lawmakers should and can pass right now:

    Anti-deepfake law. All photorealistic images (still and video) generated by AI must be conspicuously so labeled whenever published.

    Even MSFT is asking for this.

  2. On the comparison between now and WWII – I could push back. By 1943, it was clear that the US + the USSR were fully capable of containing and eventually destroying Germany + Japan with strictly conventional means. Nuclear weapons were not needed. Indeed, the use of such weapons against Japan has become controversial over time as it has become clear that, from a military standpoint, it was pretty gratuitous.

    Today, you can dream up scenarios where, if China gets a subservient AGI or ASI first, they’ll get to be the boot stamping on all human faces, forever, that they so patently want to be. No one can doubt their desire and resolve. For a threat like that, only Nazi Germany at the height of its power (in 1941, say) feels like an apt comparison.

  3. The reason to which I attribute humanity’s “meh” reaction to extinction events is pure selfishness. No one person is willing to give up any of what they have to save us all.

    When it still existed (it was later bought by Discovery), Omni magazine ran a brave real-time experiment on the “Prisoner’s Dilemma” problem. One month, they put a postcard in every issue of the magazine with a tempting offer: Put your name and address on the card and mail it in with a check in one of two boxes, “Send me $100” or “Send me $10.” The deal was that if more than 10% of the cards returned asked for the hundy, no one would receive anything. However, if fewer than 10% requested the big money, everyone would get the amount they requested, including those asking for the $100. Nearly a million replies were sent in (good for the mag), with 11% asking for the big money, so there were no payouts.

    People will risk an accident to grab a stupid parking place ahead of you. When I was still teaching, I was an avid reader of the Chronicle of Higher Education. One day there was an article about a prof at the University of FL who was about to be late for his class, eyeing the last spot in the faculty parking lot and about to pull in, when a student entered from the other side. He got out and told the student to park in his designated lot. The student flipped him the bird and just walked away. The prof became so enraged that he pulled back as far as he could, floored his car and rammed the student’s car, three times in all. Then he pulled around to the other side and did the same thing to the rear of the student’s car. He managed to drive away but still got arrested.

    We mostly aren’t selfless; we want what we want and think we’re entitled to it. During COVID, hundreds of thousands of people died because they stubbornly refused to get vaccinated. Last until the end of the century? I don’t think so.
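For what it’s worth, the payout rule described in that last comment is easy to formalize. Below is a minimal Python sketch, assuming the rule and figures exactly as the commenter reports them (a 10% threshold, roughly a million replies, 11% asking for $100); the function name and the numbers are illustrative, not verified against the original experiment.

    # Minimal sketch of the payout rule as described in the comment above.
    # The threshold, amounts, and reply counts are the commenter's figures,
    # used here purely for illustration.

    def total_payout(big_requests: int, small_requests: int,
                     big_amount: int = 100, small_amount: int = 10,
                     threshold: float = 0.10) -> int:
        """Return what the magazine would owe under the stated rule:
        if more than `threshold` of returned cards ask for the big amount,
        nobody gets anything; otherwise everyone gets what they asked for."""
        total = big_requests + small_requests
        if total == 0 or big_requests / total > threshold:
            return 0
        return big_requests * big_amount + small_requests * small_amount

    # Roughly a million replies with 11% asking for $100 crosses the 10% cap,
    # so the payout collapses to zero, which is what reportedly happened.
    print(total_payout(big_requests=110_000, small_requests=890_000))  # 0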
