No, the AI Sky Isn’t Falling

As if AI alarmism weren’t already running far ahead of reality, a group of AI researchers and tech personalities, including Elon Musk, Steve Wozniak, and deep-learning pioneer Yoshua Bengio, has now written an open letter fanning the flames. The letter calls for a six-month moratorium on training powerful AI systems because of the supposedly imminent danger they pose. MIT professor Max Tegmark, one of the letter’s organizers, says that the ongoing competition to improve AI is “more of a suicide race”. The only problem is that the alleged risks are unrealistic, the assumed state of the art in AI laughably off-base, and the proposed measures quite harmful.

Let’s calm down for a moment and take a look at what AI really is, where it could be headed, and what (if anything) to do about it.

The least of the letter writers’ fears is that AI will “flood our information channels with propaganda and untruth”. But those channels are already flooded with both, courtesy of our fellow humans, and slowing down AI development would above all hinder our ability to automatically detect and stop misinformation, which is the only scalable option. The fear of AI-generated falsehoods also rests on the pernicious assumption that humans are dumb and will naively keep taking AI-produced untruths at face value. But anyone who has played with ChatGPT already knows better, and this will only improve as people gain more experience interacting with AI.

The letter writers also fear that AI will “automate away all of the jobs”, as if that were remotely realistic within the foreseeable future. It also ignores the experience of the last 200 years, in which automation has systematically created more jobs than it has destroyed. For many occupations, AI will automate some tasks but not others, making workers more productive and the work less routine. Lowering the cost of what AI can do will increase the demand for its complements and leave more money in consumers’ pockets, which they will then spend on other things. AI will create many entirely new occupations, as previous waves of automation have (e.g., app developer). All of this increases the demand for labor rather than lowering it. The AI revolution is already well underway, yet the unemployment rate is the lowest in recent memory. We need more AI, not less, to improve productivity and grow the economy.

The jobs AIpocalypse is only the beginning, however. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” asks the letter. “Should we risk loss of control of our civilization?” This falls into the fundamental fallacy of confusing AI with human intelligence. (AI researchers, of all people, should know better than this.) AIs have no desire or ability to take control of our civilization. They are just very complex optimization systems, capable only of trying to reach the goals we set for them in computer code. Achieving those goals typically requires exponential computing power, but checking the results is easy. (Curing cancer is hard. Checking whether a treatment worked isn’t.) Most AI systems look nothing like humans, and have no desires, emotions, consciousness, or will. They just do useful things, and the better they do them, the better. Why hinder their development? The best way to make them safer is to make them more intelligent.
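To make that optimization-versus-checking point concrete, here is a minimal sketch in Python, using subset-sum as a stand-in problem of my own choosing (neither the example nor the function names come from the letter): finding a combination of numbers that hits a goal can take time exponential in the input size, while verifying a proposed answer takes one cheap pass.

```python
from itertools import combinations

def solve_subset_sum(numbers, target):
    """Brute-force search over every subset: O(2^n) in the worst case."""
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return subset
    return None

def check_subset_sum(subset, numbers, target):
    """Verify a candidate in a single pass: O(n). Checking stays cheap
    even when finding the answer is expensive."""
    pool = list(numbers)
    for x in subset:
        if x not in pool:  # candidate uses a number we don't have
            return False
        pool.remove(x)
    return sum(subset) == target

numbers = [3, 34, 4, 12, 5, 2]
answer = solve_subset_sum(numbers, 9)        # the hard part: search
print(answer)                                # (4, 5)
print(check_subset_sum(answer, numbers, 9))  # the easy part: True
```

The cancer example in the text has the same shape: searching for a cure is the expensive optimization, while evaluating whether a given treatment worked is the cheap check.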

But the AI alarmists’ solution to all these hypothetical problems is, you guessed it, extensive new regulation. Not only of AI but, for good measure, of “large pools of computational capability” (presumably the entire cloud). Governments should intervene in AI and direct its development. Why all this would do more good than harm is left completely unaddressed by my esteemed colleagues. They cite past moratoria in support of theirs, all of which were in biology and medicine, where the stakes are entirely different. They refer to a “widely-endorsed” set of AI principles, most of whose signatories are in fact AI researchers. They back their claim that AI’s “profound risks” have been “shown by extensive research” with a short list of controversial books and ideologically driven articles rather than serious scientific studies. And they ignore that even if a near-term worldwide moratorium on some kinds of AI research were a good idea, it would be a completely impractical one, leading many to wonder what the real purpose of the letter might be.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” claims the letter. Good thing we didn’t apply the same standard to fire, the wheel, the printing press, steam engines, electricity, cars, computers, and countless other technologies, because if we had, we’d still be living in caves. AI leaders like Yann LeCun and Andrew Ng have publicly opposed the idea of an AI moratorium, and I’d like to add my voice to theirs. Before we are told that “science says AI must be regulated”, the public deserves to know, at a minimum, that there are two sides to this debate. And before we start panicking about AI’s hypothetical dangers, perhaps we should consider the damage that such a panic would itself do.
