This is Geoff Hinton. Because of a back condition, he hasn't been able to sit down for more than 12 years. I hate standing. I'd much rather sit down, but if I sit down, I have a disc that comes out.
Okay. At least now standing desks are fashionable. Yeah, but I was ahead.
I was standing when they weren't fashionable. Since you can't sit in a car or on a bus, Hinton walks everywhere. The walk says a lot about Hinton and his resolve. For nearly 40 years, Hinton has been trying to get computers to learn like people do.
A quest almost everyone thought was crazy or at least hopeless. Right up until the moment it revolutionized the field. Google thinks this is the future of the company.
Amazon thinks it's the future of the company. Apple thinks it's the future of the company. My own department thinks this stuff's probably nonsense and we shouldn't be doing any more of it. So I talked everybody into it except my own department. You obviously grew up in the UK and you had this very prestigious family full of famous mathematicians and economists.
And I was curious what it was like for you. Yeah, there was a lot of pressure. I think by the time I was about seven, I realized I was going to have to get a PhD.
Did you rebel against that, or did you go along with it? I dropped out every so often. Yeah. I became a carpenter for a while.
Geoff Hinton, pretty early on, became obsessed with this idea of figuring out how the mind works. He started off getting into physiology, the anatomy of how the brain works. Then he got into psychology. And then finally, he settled on more of a computer science approach to modeling the brain and got into artificial intelligence.
My feeling is, if you want to understand a really complicated device, like a brain, you should build one. I mean, you could look at cars and you could think you could understand cars. When you try and build a car, you suddenly discover that there's this stuff that has to go under the hood, otherwise it doesn't work.
Yeah. As Jeff was starting to think about these ideas, he got inspired by some AI researchers across the pond. Specifically this guy, Frank Rosenblatt.
Rosenblatt, in the late 1950s, developed what he called a perceptron, and it was a neural network, a computing system that would mimic the brain. The basic idea is a collection of small units called neurons.
These are little computing units but they're actually modeled on the way that the human brain does its computation. They take incoming data like we do from our senses and they actually learn so the neural net can learn to make decisions over time. Rosenblatt's hope was that you could feed a neural network a bunch of data, like pictures of men and women, and it would eventually learn how to tell them apart, just like humans do.
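The learning idea described above can be sketched in a few lines of code. This is a toy, modern rendering of a Rosenblatt-style perceptron, not the original hardware or data: a single neuron takes weighted inputs, fires past a threshold, and nudges its weights whenever it gets an answer wrong. Here it learns a simple linearly separable rule (logical OR) from labeled examples.

```python
# Toy sketch of a single-neuron perceptron (illustrative, not Rosenblatt's
# original Mark I). It fires (outputs 1) when the weighted sum of its
# inputs plus a bias crosses zero.
def predict(weights, bias, x):
    s = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Perceptron rule: shift weights in the direction that
            # reduces the error on this example.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Linearly separable toy task: logical OR of two binary inputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 1, 1, 1]
```

Because OR is linearly separable, the weights settle after a few passes and the neuron classifies every example correctly, the behavior Rosenblatt hoped would scale up to telling men from women in photographs.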
There was just one problem. It didn't work very well. Rosenblatt's neural network was a single layer of neurons, and it was limited in what it could do, extremely limited. And a colleague of his wrote a book in the late 60s that showed these limitations.
And it kind of put the whole area of research into a deep freeze for a good 10 years. No one wanted to work in this area. They were sure it would never work.
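The limitation that book made famous can be demonstrated directly (a toy sketch, not the book's actual analysis): XOR, "exactly one of the two inputs is on," is not linearly separable, so no single threshold neuron can ever learn it, however long it trains.

```python
# A single-layer perceptron trying (and provably failing) to learn XOR.
def predict(w, b, x):
    return 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

xor_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
w, b = [0.0, 0.0], 0.0
for _ in range(1000):  # far more passes than the OR task needed
    for x, t in xor_data:
        err = t - predict(w, b, x)
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        b += 0.1 * err

errors = sum(predict(w, b, x) != t for x, t in xor_data)
print(errors)  # always >= 1: no line separates XOR's classes
```

No matter how many epochs you run, the weights just cycle; any single linear threshold must misclassify at least one of the four XOR cases, which is the kind of hard ceiling that froze the field.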
Well, almost no one. It was just obvious to me that it was the right way to go. The brain's a big neural network, and so it has to be that stuff like this can work, because it works in our brains.
There's just never any doubt about that. What do you think it was inside of you that kept you wanting to pursue this when everyone else was giving up, just that you thought it was the right direction to go? No, that everyone else was wrong. OK.
Hinton decides he's got an idea of how these neural nets might work, and he's going to pursue it no matter what. For a little while, he's bouncing around research institutions in the U.S. He kind of gets fed up that most of them are funded by the Defense Department, and he starts looking for somewhere else he can go. I didn't want to take Defense Department money. I sort of didn't like the idea that this stuff was going to be used for purposes that I didn't think were good. Then he suddenly hears that Canada might be interested in funding artificial intelligence. And that was very attractive, that I could go off to this civilized town and just get on with it.
So I came to the University of Toronto. And then in the mid-80s, we discovered how to make more complicated neural nets so they could solve those problems that the simple ones couldn't solve. He and his collaborators developed a multi-layered neural network, a deep neural network.
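The multi-layer idea can be sketched concretely (a minimal illustrative example, not the actual mid-80s code): add a hidden layer of neurons between input and output, and train the whole stack with backpropagation. That combination cracks exactly the problems the single layer couldn't, such as XOR.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR: the task a single-layer perceptron provably cannot learn.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# A hidden layer of 4 sigmoid units feeding one sigmoid output unit.
H = 4
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(b1[j] + w1[j][0] * x[0] + w1[j][1] * x[1]) for j in range(H)]
    y = sigmoid(b2 + sum(w2[j] * h[j] for j in range(H)))
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial_loss = loss()
lr = 0.5
for _ in range(10000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)  # error signal at the output unit
        # Backpropagation: pass the error backward through the weights
        # to compute each hidden unit's share of the blame.
        dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(H)]
        for j in range(H):
            w2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh[j]
            w1[j][0] -= lr * dh[j] * x[0]
            w1[j][1] -= lr * dh[j] * x[1]
        b2 -= lr * dy
final_loss = loss()
print(round(initial_loss, 3), round(final_loss, 3))
```

The hidden units learn intermediate features (roughly, "at least one input on" and "both inputs on") whose combination is linearly separable, which is exactly what a single layer could never construct for itself.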
And this started to work in a lot of ways. Using a neural network, a guy named Dean Pomerleau built a self-driving car in the late 80s, and it drove on public roads. Yann LeCun in the 90s built a system that could recognize handwritten digits, and this ended up being used commercially.
But again, they hit a ceiling. It didn't work quite well enough because we didn't have enough data, we didn't have enough compute power. And people in AI and computer science decided neural networks were wishful thinking, basically. So it was a big disappointment. Through the 90s into the 2000s, Geoff was one of only a handful of people on the planet who were still pursuing this technology.
He would show up at academic conferences and be banished to the back rooms. He was treated really like a pariah. Was there a time when you thought this just wasn't going to work, when you did have some self-doubt? I mean, there were many times when I thought, I'm not going to make this work.
But Geoff was consumed by this and couldn't stop. He just kept pursuing the idea that computers could learn until about 2006, when the world catches up to Hinton's ideas. Computers were now a lot faster. And now it's behaving like I thought it would behave in the mid-80s.
It's solving everything. The arrival of super-fast chips and the massive amounts of data produced on the Internet gave Hinton's algorithms a magical boost. Suddenly, computers could identify what was in an image. Then, they could recognize speech and translate from one language to another. By 2012, words like neural nets and machine learning were popping up on the front page of the New York Times.
You have to go all these years, and then all of a sudden, you know, in the span of a few months, it just takes off. Did it finally feel like, aha, you know, the world has finally come to my vision? It was sort of a relief that people finally came to their senses. For Hinton, this was clearly a redemptive moment after decades of toil.
And for Canada, it meant something even bigger. Hinton and his students put the country on the map as an AI superpower. Something no one and no computer could ever have predicted.
Thanks for watching, and if you want to see more Hello World, click on the link to subscribe.