
How existential risk became the biggest meme in AI


The starkest assertion, signed by all those figures and many more, is a 22-word statement put out two weeks ago by the Center for AI Safety (CAIS), an agenda-pushing research organization based in San Francisco. It proclaims: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The wording is deliberate. “If we wanted a Rorschach-test kind of statement, we might have said ‘existential risk,’ because that can mean a lot of things to a lot of different people,” says CAIS director Dan Hendrycks. But they wanted to be clear: this was not about tanking the economy. “That’s why we went with ‘risk of extinction,’ even though a lot of us are concerned with various other risks as well,” says Hendrycks.

We have been here before: AI doom follows AI hype. But this time feels different. The Overton window has shifted. What were once extreme views are now mainstream talking points, grabbing not only headlines but also the attention of world leaders. “The chorus of voices raising concerns about AI has simply gotten too loud to be ignored,” says Jenna Burrell, director of research at Data and Society, an organization that studies the social impact of technology.

What’s going on? Has AI really become (more) dangerous? And why are the people who ushered in this tech now the ones raising the alarm?

It’s true that these views split the field. Last week, Yann LeCun, chief scientist at Meta and joint recipient with Hinton and Bengio of the 2018 Turing Award, called the doomerism “preposterously ridiculous.” Aidan Gomez, CEO of the AI firm Cohere, said it was “an absurd use of our time.”

Others scoff too. “There’s no more evidence now than there was in 1950 that AI is going to pose these existential risks,” says Signal president Meredith Whittaker, who is cofounder and former director of the AI Now Institute, a research lab that studies the policy implications of artificial intelligence. “Ghost stories are contagious—it’s really exciting and stimulating to be afraid.”

“It’s also a way to skim over everything that’s happening in the present day,” says Burrell. “It suggests that we haven’t seen real or serious harm yet.”

An old fear

Concerns about runaway, self-improving machines have been around since Alan Turing. Futurists like Vernor Vinge and Ray Kurzweil popularized these ideas with talk of the so-called Singularity, a hypothetical date at which artificial intelligence outstrips human intelligence and machines take over. 
