Bill Gates isn’t too scared about AI

The billionaire business magnate and philanthropist made his case in a post on his personal blog, GatesNotes, today. “I want to acknowledge the concerns I hear and read most often, many of which I share, and explain how I think about them,” he writes.

According to Gates, AI is “the most transformative technology any of us will see in our lifetimes.” That puts it above the internet, smartphones, and personal computers, the technology he did more than most to bring into the world. (It also suggests that nothing to rival it will be invented in the next few decades.)

Gates was one of dozens of high-profile figures to sign a statement put out by the San Francisco–based Center for AI Safety a few weeks ago, which reads, in full: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

But there’s no fearmongering in today’s blog post. In fact, existential risk doesn’t get a look in. Instead, Gates frames the debate as one pitting “longer-term” against “immediate” risk, and chooses to focus on “the risks that are already here, or soon will be.”

“Gates has been plucking the same string for quite some time,” says David Leslie, director of ethics and responsible innovation research at the Alan Turing Institute in the UK. Gates was one of several public figures who talked about the existential risk of AI a decade ago, when deep learning first took off, says Leslie: “He used to be more concerned about superintelligence way back when. It seems as if that might have been watered down a bit.”

Gates doesn’t dismiss existential risk entirely. He wonders what may happen “when”—not if—“we develop an AI that can learn any subject or task,” also known as artificial general intelligence, or AGI.

He writes: “Whether we reach that point in a decade or a century, society will need to reckon with profound questions. What if a super AI establishes its own goals? What if they conflict with humanity’s? Should we even make a super AI at all? But thinking about these longer-term risks should not come at the expense of the more immediate ones.”

Gates has staked out a kind of middle ground between deep-learning pioneer Geoffrey Hinton, who quit Google and went public with his fears about AI in May, and others like Yann LeCun and Joelle Pineau at Meta AI (who think talk of existential risk is “preposterously ridiculous” and “unhinged”) or Meredith Whittaker at Signal (who thinks the fears shared by Hinton and others are “ghost stories”).
