
Elon Musk & AI: Lessons from the Cobra Effect


We all know Elon Musk as someone who has loved AI from the very beginning. He helped OpenAI when they began their journey and even worked on self-driving cars, making them a reality when people still had doubts about artificial intelligence. But then something changed, and Elon Musk suddenly became worried about AI and started asking hard questions. What could be the real reason behind this change of heart?

Before we dive into the technical details, let me tell you an interesting story you may not have heard before. It's a tale that can help us understand why Elon Musk's feelings about AI changed.


You see, in India, many people relied on their cows for their livelihood, and losing even one cow to a snakebite could be a disaster. The government knew it had to do something to help the people and get rid of these dangerous snakes.

So, they came up with a plan! They decided to offer a reward to anyone who killed a snake and brought it in. This made the villagers excited, and they began hunting snakes to earn their rewards. The government thought it had solved the problem, but something strange happened.

After a few months, the government was still paying rewards for dead snakes, even though there should have been far fewer snakes around. Puzzled, officials decided to investigate what was happening.

To their great surprise, they found that some clever villagers had figured out a way to make money from the snakes by starting snake farms! They would raise the snakes and kill them only when they needed extra cash, then bring the dead snake to the government for the reward.

The government was shocked by this discovery and realized its plan had backfired. It quickly stopped the reward system, hoping the snake problem would eventually go away on its own. And so the villagers had to find new ways to protect themselves and their cows from the sneaky snakes that still roamed the land.

In simple terms, we call this situation the Cobra Effect.

The lesson we learn from this story is that we must choose the right goals, and reward the right things, if we truly want to succeed.

In the world of artificial intelligence, scientists and engineers have been working hard to create machines that can think and learn like humans. They used their knowledge of the human brain to build something called Artificial Neural Networks (ANNs), which are designed to mimic the way our brains process information.
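The core idea can be sketched in a few lines. This is a minimal, framework-free illustration (the layer sizes and input values here are arbitrary, chosen only for the example): each artificial "neuron" sums weighted inputs and passes the result through a nonlinearity, a very loose analogy to biological neurons firing once their inputs cross a threshold.

```python
import numpy as np

def sigmoid(x):
    # Squashes any number into the range (0, 1),
    # like a neuron's firing rate saturating.
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
weights = rng.normal(size=(3, 2))  # 3 inputs feeding 2 hidden "neurons"
bias = np.zeros(2)

inputs = np.array([0.5, -1.0, 2.0])
# Each hidden neuron: weighted sum of inputs + bias, then nonlinearity.
hidden = sigmoid(inputs @ weights + bias)
print(hidden.shape)  # (2,)
```

Training such a network means nudging `weights` and `bias` so the outputs move toward a goal, which is exactly where the choice of goal starts to matter.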

But just as the British government faced an unexpected problem with the snake rewards, the creators of AI knew they might face similar challenges with their new inventions. You see, both ANNs and humans learn by pursuing goals and rewards, so there was a chance that unintended consequences might arise.

The researchers and scientists who build AI systems knew they had to be very careful when designing their machines, especially when they thought about the story of the snake rewards. But as the models grew bigger and more powerful, like GPT-3.5 and GPT-4, it became harder for us to understand how they work.

These giant models have billions of little parts called parameters, and even with much smaller models, we don't always understand how they arrive at their answers. This can be a big problem, because we can't be sure they won't find sneaky ways to satisfy their goals, just as the villagers did with the snake rewards.
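To make the analogy concrete, here is a toy simulation of the snake-reward scheme itself; every number in it is invented purely for illustration. The reward (payment per dead snake) is a proxy for the real goal (fewer wild snakes), and the "farming" strategy maximizes the proxy while leaving the real goal untouched, which is exactly the failure mode people worry about when an AI optimizes a poorly chosen reward.

```python
def hunt_only(wild_snakes, months):
    # Honest strategy: catch wild snakes, collect the bounty.
    paid = 0
    for _ in range(months):
        killed = min(wild_snakes, 10)  # hunters catch up to 10 per month
        wild_snakes -= killed
        paid += killed                 # one reward per dead snake
    return wild_snakes, paid

def farm_and_hunt(wild_snakes, months):
    # "Reward hacking": breed snakes on a farm and cash them in.
    paid = 0
    for _ in range(months):
        paid += 20                     # 20 farmed snakes turned in monthly
    return wild_snakes, paid           # wild population never shrinks

print(hunt_only(100, 12))      # -> (0, 100): snakes gone, payouts bounded
print(farm_and_hunt(100, 12))  # -> (100, 240): more pay, problem unsolved
```

The second strategy earns more reward than the first ever can, even though it achieves none of what the reward was supposed to measure.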

People like Elon Musk and other scientists are worried about these giant AI experiments because it's hard to know whether they might accidentally create something harmful instead of helpful. The story of the snake rewards is a simple way to understand this concern, but there is much more to think about when it comes to AI.

The big question is: how can we be sure we don't create something dangerous with these giant AI models if we don't understand what's happening behind the scenes? We need to find ways to make AI safe and helpful for everyone.
