
Geoffrey Hinton tells us why he's now scared of the tech he helped build

It took until the 2010s for the power of neural networks trained via backpropagation to truly make an impact. Working with a few graduate students, Hinton showed that his technique was better than any others at getting a computer to identify objects in images. They also trained a neural network to predict the next letters in a sentence, a precursor to today's large language models.

One of these graduate students was Ilya Sutskever, who went on to cofound OpenAI and lead the development of ChatGPT. "We got the first inklings that these things could be amazing," says Hinton. "But it's taken a long time to sink in that it has to be done at a huge scale to be good."

Back in the 1980s, neural networks were a joke. The dominant idea at the time, known as symbolic AI, was that intelligence involved processing symbols, such as words or numbers.

But Hinton wasn't convinced. He worked on neural networks, software abstractions of brains in which neurons and the connections between them are represented by code. By changing how those neurons are connected (that is, by changing the numbers used to represent them), the neural network can be rewired on the fly. In other words, it can be made to learn.
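
To make that concrete, here is a minimal, purely illustrative sketch in Python (not anything from Hinton's own work): a single artificial "neuron" whose connection strengths are ordinary numbers, nudged a little whenever its output is wrong until it learns the logical AND of two inputs.

```python
# Toy single-neuron network: the "connection strengths" (weights) are just
# numbers, and learning means adjusting those numbers when the output is wrong.
import random

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]          # desired outputs: logical AND

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
lr = 0.1                        # learning rate

for _ in range(100):            # repeated passes over the data
    for (x1, x2), t in zip(inputs, targets):
        output = 1 if (x1 * weights[0] + x2 * weights[1] + bias) > 0 else 0
        error = t - output
        # "Rewiring" the network: change the numbers that represent
        # the connections, in proportion to the error.
        weights[0] += lr * error * x1
        weights[1] += lr * error * x2
        bias       += lr * error

print("learned connection strengths:", weights, "bias:", bias)
```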

"My father was a biologist, so I was thinking in biological terms," says Hinton. "And symbolic reasoning is clearly not at the core of biological intelligence.

"Crows can solve puzzles, and they don't have language. They're not doing it by storing strings of symbols and manipulating them. They're doing it by changing the strengths of connections between neurons in their brains. And so it has to be possible to learn complicated things by changing the strengths of connections in an artificial neural network."

A new intelligence

For 40 years, Hinton has seen artificial neural networks as a poor attempt to mimic biological ones. Now he thinks that's changed: in attempting to mimic what biological brains do, he thinks, we've come up with something better. "It's scary when you see that," he says. "It's a sudden flip."

Hinton's fears will strike many as the stuff of science fiction. But here's his case.

As their name suggests, large language models are made of massive neural networks with vast numbers of connections. But they're tiny compared with the brain. "Our brains have 100 trillion connections," says Hinton. "Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So perhaps it's actually got a much better learning algorithm than us."
