There ought to be no regulation of artificial intelligence. Here’s why.

For a start, regulating AI would mean regulating an industry that doesn’t exist yet. For all the internet noise, what we have, at the time of writing, is a powerful prototype in GPT-4, plus another, Google Bard, which doesn’t work, and an endless stream of online accounts promising the art of the possible. There’s also Midjourney, a forum that generates gamer art when the server isn’t crashing. There’s no industry to regulate. How do you regulate something that doesn’t exist?

Cast your mind back to the dawn of social media, circa 2003. Recall the first version of Facebook. It offered a profile page, the ability to befriend someone, the ability to “poke” others, and that’s about it. There was no news feed, no instant messaging, no notifications, no ads, no video, no streams, no shorts, nothing.

Back then, nobody knew how social media products would evolve. And that’s where we are now with AI technology. Nobody knows what AI products will look like in ten years, and since none of us can foretell the future, none of us can say which regulations will ultimately be useful.

Image courtesy of marketoonist.com

Regulation is a function of society that, at its best, stops people from swindling and harming each other. But, although society cannot function well without some sort of regulation of industry, this isn’t the complete picture.

At its worst, regulation shields politically favoured corporations from competition. In other words, those corporations acquire a regulatory moat. The protected corporations are almost always large ones. In such cases, regulatory barriers to entry are erected, and smaller enterprises struggle to enter the market, even when they have competitive solutions.

This is how large banks, energy corporations, telcos and pharmas survive decade after decade, despite bloated operations, the inefficiencies of centralisation, and profit capture by senior management and shareholders.

Think about it: banking, for instance, isn’t a resource-intensive industry. It’s essentially an information management service. So why do so few new banks join the playing field each decade?

With leaked Google emails lamenting missing “moats”, one has to wonder why corporations like Microsoft and OpenAI, usually averse to regulation, appear to be pushing aggressively for it with AI.


There are some very intelligent people leading the digital industries. But introverted people are sometimes seduced by what could happen, to the detriment of what is likely to happen. This seeds a sort of paranoid, sci-fi thinking: an acceptance of unlikely tales as if they were probable, or even inevitable. No technology has had more of this treatment than AI.

For example, several years ago, the tech community had a collective breakdown over a thought experiment known as Roko’s Basilisk. Without going into details, Roko’s Basilisk makes a logical case for a future AI that will torture virtual copies of ourselves for eternity.

This paranoid style is on display among top public figures in the industry. I recently heard Max Tegmark, and a few days later Sam Altman (CEO of OpenAI), speaking anxiously about Moloch, a satanic entity from the Hebrew Bible, and how its mythology relates to artificial intelligence technology.

The founder of this brand of elite sci-fi paranoia may be Nick Bostrom, a philosopher at Oxford University. In his book Superintelligence, he wrote about the paperclip maximiser. In this scenario, a superintelligent AI is tasked with the sole aim of manufacturing paperclips. It is so focused on the task, and so effective, that it transforms the entire surface of the earth into paperclips, ending humanity and all life on earth.

I propose that there’s a flaw in this sort of reasoning. Let’s call it the plausible-equals-inevitable fallacy. You can generalise it like this:

  1. Imagine any improbable scenario. Let’s call it X.
  2. Work backwards to construct a plausible narrative in which X could happen.
  3. Now that you’ve imagined a way that X could happen, conclude that it will happen.

Let’s give it a try. I’m writing this from my kitchen at the moment. Many objects surround me. The one that catches my eye is the electric toaster. Is there a scenario in which electric toasters could lead to catastrophe for humanity, perhaps for all life on earth?

Well, perhaps not as they are now, but electric toasters could evolve. Indeed, it’s not out of the question that future toasters could be given more control over breakfast. It isn’t hard to imagine a toaster that, for instance, could slice the sourdough itself. And, for that matter, a toaster you could program to produce toast at a set time in the morning. After all, it would be pretty cool to have your toast ready when you enter the kitchen after waking.

But what if some of the cheaper models had flaws? What if they began spitting smouldering toast into the kitchen in the middle of the night? These sorts of accidents could lead to house fires. In densely populated developments, they could lead to large loss of life. In fact, it isn’t out of the question that entire cities could burn to the ground…

Of all the concerns surrounding AI, two are raised most often: mass dissemination of harmful propaganda, and chaotic rises in unemployment. I suggest that these, too, are instances of the plausible-equals-inevitable fallacy.

At the end of the day, almost anything can plausibly happen. But most things we imagine could happen are extremely unlikely to happen. The future almost never looks the way we expected.

Tech leaders have been debating the role of Moloch in AGI. Image courtesy of allthatsinteresting.com

Finally, and crucially, there is no present reason to come down hard on AI with regulation. Take a look around you. Nothing has happened. No lives have been destroyed, no countries torn down, no jobs lost. At this moment, all we have is a successful chatbot. This may change in future, and we should certainly be vigilant. But it’s also wise to be vigilant against hysteria.
