
4 Reasons Why I Won’t Sign the “Existential Risk” New Statement


Opinion

Fueling fear is a dangerous game

Photo by Cash Macanaya on Unsplash

A few weeks ago, I published my pro and con arguments for signing the well-known open letter by the Future of Life Institute; in the end, I signed it, though with some caveats. A few radio and TV hosts then interviewed me to explain what all the fuss was about.

More recently, I got another email from the Future of Life Institute (FLI from here on) asking me to sign a new declaration: this time, a brief statement by the Center for AI Safety (CAIS) focused on the existential threats posed by recent AI developments.

The statement goes as follows:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Very concise indeed; how could anyone have a problem with this?

If the previous FLI statement had weaknesses, this one doubles down on them instead of correcting them, making it impossible for me to support it.

In particular, I have the following four objections, which, of course, are going to be a bit longer than the statement itself:

First, the new statement is essentially a call to panic about AI, and not simply about some tangible consequences of it that we can see right now, but about hypothetical risks that have been raised by random individuals who give very vague risk estimations like a “10 percent risk of human extinction.”

Really? A 10% risk of human extinction? Based on what? The survey respondents weren’t asked to justify or explain their estimates, but I suspect many were thinking about “Terminator-like” scenarios. You know, horror movies are intended to scare you; that’s why you go see them. But translating that message to reality is not sound reasoning.

The supposed threat to humanity assumes both a capability to destroy us that hasn’t been explained and an agency, a willingness to erase humankind. Why would a machine want to kill us when machines have no feelings, good or bad? Machines don’t “want” this or that.

The actual dangers of AI we see playing out right now are very different. One of them is the ability of generative AI to fabricate fake voices, pictures, and videos. Can you imagine what you would do if you received a phone call with your daughter’s voice (impersonated with a fake voice) asking you to rescue her?

Another is public misinformation with fake evidence, like counterfeit videos. The one with the fake Pope was relatively innocent, but soon Twitter will be flooded with false declarations, images of events that never occurred, and so on. By the way, have you considered that the US elections are approaching?

Then there is the exploitation of human-made content, which AI algorithms mine all over the web to produce their “original” images and text: human work is taken without any financial compensation. In some cases, the reference to human work is explicit, as in “make this image in the style of X.”

Second, if the FLI letter of a month ago hinted at a “man vs. machine” mindset, this time it is made fully explicit. “Extinction from AI,” they call it, nothing less.

In the real world where we live, not in apocalyptic Hollywood movies, it is not the machines that harm us or threaten our existence: it is rather that some humans (coincidentally, the powerful and wealthy ones, the owners of big companies) leverage powerful new technology to increase their fortunes, often at the expense of the powerless. We have already seen how the availability of computer-generated graphics has shrunk the small businesses of graphic artists in places like Fiverr.

Further, the assumption that an advanced machine intelligence would try to dethrone humans should be questioned; as Steven Pinker wrote:

“AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world.”

Yann LeCun, the famous head of AI research at Meta, declared:

“Humans have all kinds of drives that make them do bad things to each other, like the self-preservation instinct… Those drives are programmed into our brain, but there is absolutely no reason to build robots that have the same kind of drives.”

No, machines gone rogue will not become our overlords or exterminate us: other humans, who are currently our overlords, will increase their domination by leveraging the economic and technological means at their disposal, including AI if it fits their purposes.

Third, I get that the FLI mentioned pandemics to relate their statement to something we just lived through (and that left an emotional scar on many of us), but it’s not a valid comparison. Leaving aside some conspiracy theories, the pandemic we emerged from was not technology; the vaccines were. How does the FLI assume catastrophic AI would spread? By contagion?

Of course, nuclear bombs are a technological development, but in the case of a nuclear war, we know precisely how and why the bomb would destroy us: it is not speculation, as it is in the case of “rogue AI.”

Finally, one last thing that drew my attention was the list of people signing the statement, starting with Sam Altman. He is the head of OpenAI, the company that, with ChatGPT in November 2022, set in motion the frantic AI race we are living through. Even the mighty Google struggled to keep pace in this race; didn’t Microsoft’s Satya Nadella say he wanted to “make Google dance”? He got his wish, at the cost of accelerating the AI race.

It doesn’t make sense to me that people at the helm of the very companies fueling this AI race are also signing this statement. Altman may say that he is very worried by AI developments, but when we see his company charging ahead at full speed, his concern looks meaningless and incongruous. I don’t intend to moralize about Altman’s declarations, but accepting his support at face value undermines the statement’s validity, even more so when we consider that leading the race is essential to his company’s financial bottom line.

It’s not that machines are going rogue. It’s the use that capitalistic monopolies and despotic governments make of AI tools that could harm us. And not in a dystopian Hollywood future, but in the real world where we are today.

I won’t endorse a fear-fueled vision of machines that is ultimately hypocritical because it is put forward by the very companies trying to distract us from their profit-seeking ways of operating. That’s why I’m not signing this new statement endorsed by the FLI.

Further, I think that wealthy and influential leaders can afford to worry about imaginary threats because they don’t have to worry about more “mundane” real threats like the shrinking income of a freelance graphic artist: they know very well that they will never struggle to make ends meet at the end of the month, and neither will their children or grandchildren.
