If you didn’t know better, you’d mistake it for satire: industry leaders warn of extinction risk from a technology they’re rapidly developing and deploying.
For the second time in as many months, A.I. heavyweights issued an almost comically dire warning about a possible human apocalypse.
“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads a one-sentence statement released Tuesday by the Center for A.I. Safety and signed by dozens of notable figures, including OpenAI CEO Sam Altman and executives from Google DeepMind and Anthropic (which raised $450 million last week, bringing the startup’s cash pile to almost $1.5 billion).
The statement came two months, almost to the day, after a March letter endorsed by Elon Musk, Yuval Noah Harari and a veritable who’s who of… I don’t know, visionaries, I guess. Dan Hendrycks, the Center for A.I. Safety’s executive director, told The New York Times that Tuesday’s warning was an opportunity for more industry veterans and leaders to sound the alarm — or to sound alarmist, depending on how seriously you take this risk.
“There’s a very common misconception, even in the A.I. community, that there only are a handful of doomers,” Hendrycks said. “But, in fact, many people privately would express concerns about these things.”
If that’s true, you’d be forgiven for asking why these people, all of them, don’t simply stop building the technology until they think it’s safe. This isn’t World War II. Yes, there’s an A.I. arms race, and autocratic governments are certainly keen to beat their democratic counterparts to the proverbial punch, but the urgency doesn’t compare to what drove the US to build and use the atomic bomb.
Let’s face it: most of today’s A.I. work is done for commercial purposes. If there were even a minuscule chance that the sprint to create highly advanced chatbots, and applications that can seamlessly replace Mel Gibson with Sylvester Stallone in Mad Max, could lead to an “extinction” event for humanity (or put us all in Mad Max), it surely wouldn’t be worth it.
Altman has presented himself (including to Congress) as a champion for responsible A.I. development and regulation. I don’t doubt his sincerity, nor do I doubt that he believes the technology has the potential to solve many of humanity’s most pressing problems. But, again, when the downside is defined as “extinction,” there’s an argument to be made that no amount of upside justifies the gamble, unless of course that upside is “no extinction” or, put differently, survival.
In other words, if you believe there’s a non-trivial chance that something you’re pursuing could end human civilization, the only way to justify the pursuit is by arguing that not moving forward could likewise mean the end of the world. I don’t think you can make that case here.
Meanwhile, Nvidia did it: Jensen Huang got his $1 trillion valuation on Tuesday, when the shares touched $414 at the highs, up an astounding 180% for 2023.
Just nine companies have ever managed the milestone. Nvidia is the first chipmaker to boast a 13-figure valuation.
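For readers who like to see the arithmetic, here’s a quick back-of-envelope sketch using only the figures above; the share count and start-of-2023 price it prints are implied values derived from those figures, not reported data.

```python
# Back-of-envelope check on the Nvidia figures above.
# Uses only the article's numbers; everything derived is an implication,
# not a reported statistic.

price_at_high = 414.0            # intraday high, per the article
market_cap = 1_000_000_000_000   # the $1 trillion milestone
ytd_gain = 1.80                  # "up an astounding 180% for 2023"

# Implied shares outstanding at the moment the market cap crossed $1T.
implied_shares = market_cap / price_at_high
print(f"Implied shares outstanding: {implied_shares / 1e9:.2f} billion")

# Implied price at the start of 2023, if $414 represents a 180% gain.
implied_start_price = price_at_high / (1 + ytd_gain)
print(f"Implied start-of-2023 price: ${implied_start_price:.2f}")
```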
As most readers are probably aware, Huang wowed a trade-show crowd in Taiwan on Monday with a presentation touting several new A.I.-related products and services. “It’s too much,” he bragged toward the end. “I know it’s too much.”
Actually, it’s “more cowbell.” To quote the famous Saturday Night Live sketch: markets have a fever, and the only prescription is more A.I.
Although Huang has acknowledged the necessity of ensuring that A.I. is safe, he’s unapologetically brash about Nvidia’s approach. At a separate event in Taiwan over the weekend, he summed up the company’s philosophy. “Either you are running for food, or you are running from becoming food,” he said.
Maybe it’s risk extinction or go extinct after all.