Warm welcome to the Royal Palace and the Bernadotte Library. Here you will find over 100,000 books, a collection that used to belong to previous kings and queens of the House of Bernadotte, offering a glimpse into their history and interests. However, today we're here to listen to our esteemed Nobel laureates, to their insights, their expertise and their invaluable contributions to science and economics. Once again, a very warm welcome to the Royal Palace. In this program, we'll be looking at the potential and pitfalls of artificial intelligence, why some countries are richer than others, and what a worm tells us about the origins of life. Your Royal Highness, thank you for that very warm welcome to your palace here in Stockholm. And Nobel laureates, this is the first time that some of you have been brought together in discussion on television, and we're also joined by some of your family and friends, as well as students from here in Stockholm.
Before we start, let's just give them a really big round of applause, renewed congratulations to all of you. I guess you're all getting very used to the sound of applause now, aren't you? So tell me, how has winning the Nobel Prize changed your life? Who shall I start with? Gary?
Well, the level of attention is something that's a thousand X whatever it was for other awards. You know, the Nobel is a brand, and it's 120-something years of history. It's completely mesmerizing. Daron, one of the economists, what about you? How has it changed your life?
I mean, I'm here. Being in Stockholm for one week in December, that's a life-changing event. I am amazingly grateful, happy and honored, and I'll take it as it comes.
Your diary is going to be super full from now on. You're going to be running around from lecture to lecture and guest appearances. So Professor Geoffrey Hinton, what about you? Yeah, it makes an amazing change. I get huge amounts of email asking me to do things.
I luckily have an assistant who deals with most of it. I get stopped for selfies in the street, which is...
It's very annoying, but if it went away, I'd be disappointed. But also, you've been teaching for many years at the University of Toronto, and you said after you won the Nobel Prize, they at last gave you an office. Yes, they didn't think I was worth an office before that.
James? I've noticed that people take what I say much more seriously. I've always proceeded on the assumption that no one was ever actually listening to anything I said. So now I have to...
really choose my words carefully. And does that extend to your family members as well? Do they listen to what you say now? I'd have to think about that one. David Baker.
Well, actually, a highlight has really been this week, having all my family members and colleagues here. It's been a great celebration. And yeah, I've had to give up email, which has been positive.
And I've learned to completely avoid selfies. But on the whole, it's been very exciting. And you don't travel light, do you?
If I can put it that way. Just remind me, how many people have you come with to Stockholm? 185. I think that must be a record. I'm going to have to check, but I'm pretty sure that must be a record.
Well, it's quite a party you're going to have. And Sir Demis Hassabis?
Well, of course, it's been an honour of a lifetime. And to tell you the truth, it hasn't really sunk in yet. So maybe I'll do that over the Christmas holidays. But it's also, you know, an amazing platform to talk about your subject more widely and have to think about that responsibility in the coming years. Okay, let's turn now to the awards that were made this year.
And let's start with the Physics Prize. And here's a brief summary of the research behind that prize. This year's Physics Prize rewards research that laid the foundations for the development of AI, enabling machine learning with artificial neural networks. John Hopfield created a structure that can store and reconstruct information. Geoffrey Hinton built on his ideas and made it possible to create completely new content with the help of AI, so-called generative AI. This opens up numerous potential areas of use, for instance by providing techniques for calculating and predicting the properties of molecules and materials. Their research has also prompted extensive discussion of the ethics around how the technology is developed and used.
So, Geoffrey Hinton, you actually wanted to find out how the human brain works. So, how does it work? We still don't know. We've made lots of efforts to figure out how the brain figures out how to change the strength of connections between two neurons.
We've learned a lot from these big systems that we've built, which is: if you could find any way to know whether you should increase or decrease the strength, and then you just did that for all of the connections, all 100 trillion connections, and you just kept doing that with lots of examples, slightly increasing or decreasing the strength, then you would get fantastic systems like GPT-4.
These big chatbots learn thousands of times more than any one person. So they can compress all of human knowledge into only a trillion connections. And we have 100 trillion connections and none of us know much.
But that's interesting. He says speak for yourself, but anyway, he does know a lot, actually. So you make it sound, though, as if this is the best...
and it's never been bettered. We don't quite know how it works. And yet you also say that artificial intelligence, artificial neural networks could outsmart humans.
Oh, I think we've been bettered already. If you look at GPT-4... It knows much more than any one person. It's like a not very good expert at everything. So it's got much more knowledge and far fewer connections, and we've been bettered in that sense.
Do you agree with that, Demis? Well, look, I think so. I mean, just going back to your initial question, originally with the field of AI, there was a lot of inspiration taken from architectures of the brain, including neural networks and an algorithm called reinforcement learning.
Then we've gone into a kind of engineering phase now where we're scaling these systems up to massive size, all of these large... foundation models or language models. And there's many leading models now. And I think we'll end up in the next phase where we'll start using these AI models to analyze our own brains and to help with neuroscience as one of the sciences that AI helps with. So actually, I think it's going to come sort of full circle.
Neuroscience sort of inspired modern AI. And then AI will come back and help us, I think, understand what's special about the brain. Will machine intelligence outsmart humans? I mean, what kind of time frame are you talking about? Are you saying it's already happened?
So in terms of the amount of knowledge you can have in one system, it's clearly already happened. GPT-4 knows much more than any human. And it does make stuff up, but it still knows a lot. In terms of the timing, I think all the leading experts I know, people like Demis, they believe it's going to happen. They believe these machines are going to get smarter than people at general intelligence.
And they just differ in how long they think that's going to take. Well, we're going to start being bossed around by machines and robots. Is that what you're suggesting? Well, that's the question.
Can you have things more intelligent than you and still stay in control? Once they're more intelligent than us, will they be the bosses or will we still be the bosses? And what do you think? I think we need to do a lot of research right now on how we remain the bosses. You didn't actually answer that question, Demis Hassabis.
Do you think that machine intelligence could outsmart, outwit us, to the extent that they actually start ruling the roost? No, well, look, I think for now, I disagree with Geoff, in that today's systems are still not that good.
They're impressive. They can talk to us and other things. They have quite a lot of knowledge, but they're pretty weak at a lot of things.
They're not very good at planning yet or reasoning or imagining and creativity, those kinds of things. But they are going to get better rapidly. So it depends now on how we design those systems and how we decide to sort of, as a society, deploy those systems and build those systems. All right.
So we'll look at what we do about it. But gentlemen, this is a very big fundamental question. Gary, and then you.
I think you're overrating humans in this. We make up a lot of untruths as well, and there are so many examples of false ideas that get propagated. And it's getting worse, of course, with social networks. So the standard for AI to do well is pretty low. Humanity is way overrated. All right.
Okay. I'll take a contrarian view here. Humans, since really the beginning of civilization, have created things that are better than them in almost every domain.
Cars can go... infinitely faster. Planes can fly.
Humans can't. You know, for a long time, we've had computers that can do calculations that humans can't do. Demis has developed, you know, programs that solve Go and chess.
So we're very comfortable, I think, with machines being able to do things that we can't do. GPT-4 has much more knowledge than a human being. I think we just take this kind of thing in stride. I don't think we worry about losing control.
That's the key issue. We know that computers can do a lot that we can't, but it's this question of control. Because planes fly, but it's the human pilot who's in the cockpit, assisted by technology, obviously, and we still...
drive cars. What about you two, the economists, where do you stand on this question? I'll take the opposite position to Gary.
I think humans are incredibly underrated right now. Human adaptability, fluidity... creativity, but also community. I think humans are just amazing social animals.
We learn as collectives, and as collectives we are able to do a huge number of things in very quick succession. So I would worry about the people controlling AI before AI itself turning on us. Humankind's greatest enemy is humankind. The sort of Dr. Evils that we see in popular science fiction. Or Dr. Do-Goods, who think they are doing good. I wouldn't put it past them to do huge damage.
I would agree, I mean, as the tools get more powerful. I think the worry is not the machines themselves, but people using the tools, misinformation, autonomous military weapons, all kinds of things. Humans have a great track record of inventing things, you know, that jeopardize the human race, such as nuclear weapons. I mean, just think about.
how close we've been to obliterating the planet with the Cuban missile crisis. So we've done it already. We can do it again in a different form. So I guess I would like to ask Demis: everyone's saying, yes, we need to regulate, but who has the incentive to do that?
It's one thing to say that, but I suspect the politicians and the governments are just playing catch-up, that the thing is moving faster than they can get their hands on it. And in the private sector, they just want to make money and get this stuff out there. So where are the incentives to actually do something about that? Yeah, well, look, I mean, obviously the reason that many of us are working on AI is because we want to bring to bear all of the incredible benefits that can happen with AI in medicine, but also productivity and so on. But I agree with you, there is going to be a kind of coordination problem, where I think there has to be some form of international cooperation on these issues. I think we've got a few years to get our act together on that.
And I think leading researchers and leading labs in industry and academia need to come together to kind of demand that sort of cooperation as we get closer to artificial general intelligence and have more information about what that might look like. But I'm... I'm a big believer in human ingenuity, and as David says, you know, we're unbelievably adaptive as a species. Look at the modern technology we already use today, which the younger generation just seamlessly adapts to and takes as a given. And I think that's also happened with these chatbots, which...
You know, 25 years ago, those of us in the area of AI would have been amazed if you were to transport the technologies we have today back then. And yet society seems to have sort of seamlessly adapted to that as well. Geoffrey Hinton, do you see that happening? You've raised the alarm about humans becoming subservient, in a way, to machines. Do you think that there's enough of a debate at an international level?
Do we need more ethics in science to... debate these kinds of issues? Do you see that happening? So I want to distinguish two kinds of risks from AI.
One is relatively short term, and that's to do with bad actors, and that's much more urgent. That's going to be obvious with lethal autonomous weapons, which all the big defence departments are developing, and they have no intention of not doing it. The European regulations on AI say none of these regulations apply to military uses of AI.
So they clearly intend to go ahead with all that. And there's many other short-term risks like cybercrime, generating bad pathogens, fake videos, surveillance. All of those short-term risks are very serious and we need to take them seriously.
And it's going to be very hard to get collaboration on those. Then there's the long-term risk that these things will get more intelligent than us, and there'll be agents: they'll act in the world, and they'll decide that they can achieve the goals we gave them better if they just brush us aside and get on with it. That particular risk, the existential threat, is a place where people will cooperate. And that's because we're all in the same boat. Nobody wants these AIs to take over from people.
And so the Chinese... Communist Party doesn't want AIs to be in control, it wants the Chinese Communist Party to be in control. You know, for somebody who's described as the godfather of AI, you sound quite a bit down on it in so many ways.
Well, it's potentially very dangerous. It's potentially very good and potentially very dangerous. And I think we should be making a huge effort now into making sure we can get the good aspects of it without the bad possibilities. And it's not going to happen automatically, like he says.
Well, we've got some students in the audience here, and I know that some of them want to pose a question to you. It's Prashan Yadava from the KTH AI Society. Your question, please. I'd like to know in what ways AI can be put to use in bringing truly democratic values and economic equality to the world.
So, in what way can AI promote democracy and equality in the world? Who's going to answer that? I can start off. I mean, I think, as we've discussed for most of the conversation, powerful technologies in and of themselves are kind of neutral; they could go good or bad, depending on what we as a society decide to do with them.
And I think AI is just the latest example of that. In that case, maybe it's going to be the most powerful one, and the most important that we get right. But also, on the optimistic end, it's the one challenge I can think of that, if we get it right, could help us address the other challenges.
So that's the key. I don't know, you know, democracy and other things, it's a bit out of scope; maybe it's for the economists to talk about. Well, I'll just say, I think AI is an informational tool, and it will be most useful and most enriching for us in every respect if it provides useful, reliable and enabling information for everybody.
Not just for somebody sitting at the top to manipulate others, but enabling for citizens, for example, enabling for workers of different skills to do their tasks. All of those are aspects of democratisation, but we still have a long way to go for that sort of tool to be available in a widespread way and not be manipulable. Let's go for another question now from our audience.
Al-Katerini Papathanassou from the Stockholm School of Economics. Your question, please. Hello. Thank you. My question is: how do you think philosophy and science coexist?
We have a very deep need for more philosophy and perhaps there's an opportunity for some new great philosophers to appear to help us through the next phase of technological development. In my view, that is going to require, depending on your definition of philosophy, some deep thinking and wider thinking beyond the technology itself. Yeah, absolutely. I think actually... one of the things with the advances in AI, we will need to understand much better what makes us conscious, what makes us human.
There might be some stumbling blocks that will make us delve deeper into some of these questions. But even if advances in AI are very fast, we will need to question our own existence and what makes that more meaningful. Certainly we need ethics.
The kind of philosophy we don't need, I think, is philosophers talking about consciousness and sentience and subjective experience. I think understanding those is a scientific problem and we'll be better off without philosophers. Anybody else on this?
No? All right, thank you very much. But let's turn now to some of the work that has contributed to the award for the Chemistry Prize this year for Demis Hassabis, David Baker, along with John Jumper.
And let's just get a brief idea of the research that led to the Chemistry Nobel Prize award. The ability to figure out quickly what proteins look like, and to create proteins of your own, has fundamentally changed the development of chemistry, biology and medical science. By creating the AI program AlphaFold2, this year's chemistry laureates Demis Hassabis and John Jumper have made it possible to calculate the shape of proteins and thereby understand how the building blocks of life work. The second half of this year's award goes to David Baker for what's been described as the almost impossible feat of building entirely new kinds of proteins.
Useful, not least, for producing what could block the SARS-CoV-2 virus. Making new proteins can simply open up whole... new worlds.
So let's start with you, David Baker. You've been applauded for creating these new proteins. And actually, you didn't even want to become a scientist in the first place.
So it's quite amazing that you've now got this Nobel Prize. But just tell us what kind of applications, implications do you think your work has led to or could lead to? Yeah, following up on our previous discussion, I think I can really talk about the real power of AI to do good.
So the proteins in nature solved the problems that came up during evolution. And we face all kinds of new problems in the world today. You know, we live longer, so neurodegenerative diseases are important.
We're heating up and polluting the planet. And these are really existential problems. And now, you know, maybe with evolution, another 100 million years, proteins would evolve that would help address these. But with protein design, we can now design proteins to try and deal with these today.
And so we're designing completely new proteins to do things ranging from breaking down plastic that's been released into the environment to combating neurodegenerative disease and cancer. And Demis Hassabis, of course, you're well known for being a co-founder of DeepMind, the machine learning company. And I mean, you're a chess champion.
You were a child prodigy, really, you know, making video games when you were only in your teens. So here you are, you've got a Nobel Prize under your belt as well. But you've already actually started using the research for which you were awarded the prize along with John Jumper. That's right, so we already are, through our own collaborations.
We've been working with institutes like the Drugs for Neglected Diseases initiative, part of the WHO. And indeed, because if you reduce the cost of understanding what these proteins do, you can go straight to drug design. That can help with a lot of the diseases that affect the poorer countries of the world, where big pharma won't go because there isn't a return to be made.
But in fact, these affect a larger part of the world's population. So I think these technologies, actually going back to our earlier conversation, will help a lot of the poorer parts of the world by making the cost of discovery so much lower, you know, that it's within the scope then of NGOs and non-profits. Anybody else want to chip in on this? I mean, obviously, I think this is just an amazing opportunity for science.
Anything we can use to improve the scientific process can have, not necessarily will have, great benefits. But that doesn't change some of the tenor of the earlier conversation. Great tools also still create great risks.
Fritz Haber, you know, a Nobel Prize winner for work on which we depend every day with synthetic fertilizers, you know, also made chemical weapons for the German army in World War I, directly causing the deaths of hundreds of thousands of people. So the responsibility of scientists with powerful tools is no less.
We're seeing scepticism in all sorts of positions of power now, aren't we, all over the world. Is that something that worries you, that policymakers don't perhaps understand the full complexity of science, be it climate science or other difficult issues? I would say it's also part of our responsibility: we have to work harder at getting people to trust science. I think there is much greater skepticism about science, and I don't think anybody knows exactly why. It is part of the general polarization, but it's also probably that we are not properly communicating the uncertainties in science, the disagreements in science, what we are sure of and what we are not sure of.
So I think we do have a lot more responsibility in building the public's trust in the knowledge that's usable, in order for that knowledge to be seamlessly applicable to good things. Demis and then maybe Gary.
Yeah, I think I agree with that. And just in the realm of AI, I feel like one of the benefits of the sort of chatbot era is that, although AI is much more than just chatbots (it's scientific tools and other things), it has brought AI to the public's consciousness, and also made governments more aware of it, and sort of brought it out of the realm of science fiction. And I think that's good, because in the last couple of years I've seen a lot more convening of governments, civil society and academic institutes to discuss the broader issues beyond the technologies, which I totally agree with, by the way, including things like: what new institutes do we need?
How do we distribute the benefits of this widely? That's a societal problem. It's not a technological problem. And we need to have a broad debate about that.
And we've started seeing that. We've had a couple of global safety summits about AI, one in the UK, one in South Korea, and the next one's in France. And I think we need actually a higher intensity and more rapid discussion around those issues. Gary, do you want to come in here? Yeah, the engine of Western economies, in terms of the revolution of the last 50 years, has been technology and science and Silicon Valley and that sort of thing...
And if you're an enemy of the West, you want to destabilize that. And so this whole social network thing, I don't trust it. I don't trust the technology, I don't trust any of the enterprises. I don't think that's evolved naturally.
I think that's been manipulated by bad agents. And we have to be aware of that. Which bad agents? I think it's Russia and Iran. I don't think it's...
stupid to say that. Politics. Yeah. They're not looking out for our best interests. I think there are other bad agents, too.
Sure. Probably the energy industry would like you not to believe in climate change. Just like the tobacco industry, which knew very well that cigarettes cause cancer but hid that fact for a long time.
You know, if we cannot trust the energy companies, we cannot trust pharmaceutical companies, tobacco companies, can we trust the tech companies, which are extremely concentrated? And if AI is so important, what about the power of tech companies? I don't know why you're asking me. I don't work for a tech company.
So you have an objective opinion. No, no, but that's one aspect of the risks of AI that we didn't talk about. OK, well, just to take a more positive point of view again: despite the skepticism about science, and certainly you don't have to look far in the US, it should be pointed out that the response to COVID with the mRNA vaccines was truly miraculous. It was a technology that really had not been proven at all.
And in very little time, because it was this thing about having a common enemy and a threat, you know, we were able to mobilize very quickly, try something completely new and bring it to the point where it did a huge amount of good.
So there are reasons to be optimistic that, were other threats to appear, a lot of the silliness would sort of filter out and the correct actions would be taken. And the sceptics died. Okay. Well, on that positive note, let's just pause there for a moment and turn to the Economics Nobel Prize, and let's see why the award was made this year. This year's Prize in Economics touches on historical injustices and cruelties, as well as current events.
The question of how economic development is connected to individual rights, equality and decent political leaders. When large parts of the world were colonised by European powers, their approaches varied. Daron Acemoglu, Simon Johnson and James Robinson have shown that prosperity rose in places where the colonial authorities built functioning social institutions rather than simply exploiting the locals and their resources.
But no growth or improvements in lifestyle were created in societies where democracy and legal certainties were lacking. The laureates' research also helps us understand why this is the case and could contribute to the development of a more sustainable society and to reducing income gaps between nations. So, Daron, when you're both talking about the importance of democratic institutions, what kind of institutions are you talking about? The label that we, Simon, Jim and I, use is inclusive.
institutions, meaning institutions that distribute political power and economic power and opportunity broadly in society. And that requires certain political institutions that provide voice to people, so that they can participate and their views are expressed, and also constraints on the exercise of that power.
So you're just talking about really the checks and balances we see, you know, set down in constitutions: an independent legislature, a free judiciary, freedom of speech, with the media being able to operate as it wishes. Absolutely, but that's not enough, partly because what you write in a constitution is not going to get enforced unless there is a general empowerment of the people. So constitutions are sometimes changed just like shirts, and it doesn't mean anything unless it becomes enforced.
But you have also seen countries prosper economically that have been governed by fairly authoritarian governments, haven't you? I mean, often we talk about Lee Kuan Yew in Singapore, Mahathir Mohamad in Malaysia, for instance. Yeah, I think that's not the general pattern.
I mean, so there are examples like that, of course, but for every example like that, there are far more examples of autocratic societies that have not flourished economically. You know, if you can create inclusive economic institutions, even under a politically kind of autocratic society, you can flourish economically, at least transitorily. You know, that's what happened in China, starting in the late 1970s.
It was the movement towards a much more inclusive economy, giving people... the right to make decisions, making them residual claimants on their own efforts. So that's what generated economic growth.
But our view is that you can't sustain an economy like that under an autocratic political system. It can be there for a transitory period, but it's not sustainable. A lot of your research is based on countries which have been colonised, and there's been a lot of debate, of course, particularly in the United Kingdom because of the British Empire, about whether it was good or bad for the countries that were colonised, practically all of Africa. But you say that colonisation often brought about a reversal in the economic fortunes of the colonised people. So just unpack for us why you say that, because it sounds like you're saying colonisation was bad for the people. I think colonization was a disaster, absolutely. But of course, it did create prosperous societies in parts of the world, in North America and Australasia.
for the people. I think colonization was a disaster, absolutely. But of course, it did create prosperous societies in parts of the world, in North America and Australasia.
But for the indigenous people, it was a catastrophe. You know, diseases wiped out 90% of the population of the Americas. People were exploited.
They had their lands and livelihoods destroyed, their communities destroyed. I mean, absolutely, yes. So I don't think there's much debate about that, in my view. I think this notion of reversal is very clear in the Americas. You know, go back 500 years: where were the prosperous parts of the Americas?
Central America, the Central Valley of Mexico, the Andes, the Inca Empire, you know, the Mexica, the Valley of Oaxaca. There you had writing, you had political complexity, you had economic organisation and sophistication. The southern cone of Latin America and North America were far behind, you know, and then this gets completely reversed during the colonial period. And the places that were relatively poor then become relatively prosperous. So there you see the reversal in a very clear way.
Right. I want to bring you in, Demis, because your mother is Singaporean, or Singapore-born. You were brought up in Britain, of course, but what do you think when you hear about this kind of thing, about democracy and prosperity and institutions? Well, it's very interesting.
Obviously, I've heard from my mother the sort of economic miracle that Lee Kuan Yew brought to Singapore, and he's rightly revered for that. I don't know, obviously, this is not my area, but how do you try and, you know, how are these institutions going to be built in the places where they aren't? Is there going to be external encouragement or it has to happen internally?
Or you just have to be lucky with finding the right leader, like a Lee Kuan Yew? I think the success stories, they all come from within. People build the institutions in their own context.
I think Lee Kuan Yew is a fascinating person. He's not the only person in the world like that. You had Seretse Khama in Botswana.
You have other outstanding... But I think, on average, you know, the evidence suggests autocratic regimes don't do as well as democratic ones.
And sure, you know, people matter. Individuals matter. Having good leaders matters. But where do you find a Lee Kuan Yew? Well, that's what I was going to ask you.
So then if it has to come from within, you know, what... So you're pointing out with your great work what the issues are. But how... Other than wait for the right, you know, Mandela or Lee Kuan Yew to come along, which is very rare, as you say, what else can be...
done to encourage those institutions to be built? Yeah, but there are lots of institutions that are built without famous leaders. I think the track record of external imposition of institutions is not very good.
There are a few cases you can point to, but generally institutions are built organically, though there are influences out there. So one of the cases Jim already hinted at, Seretse Khama's Botswana, an amazingly successful democracy in sub-Saharan Africa, an amazingly successful country in terms of economic growth, very rapid growth on the whole. And it was actually pre-existing, pre-colonial institutions that were the basis of the more democratic system, but leadership there mattered too.
So you need to... combination. So I think facilitating institution building domestically, providing tools for them and getting rid of our hindrances, often, you know, Western and Russian powers or sometimes Chinese powers interfering in other countries' domestic affairs is not conducive to better institution building.
But at the end of the day, institutions are going to be built bottom up. Okay. So look, a major theme of this year's Nobel Prizes has been artificial intelligence. So James, let me ask you then, if you think technology...
AI could help Africa develop. But Africa has not been benefiting from all this technology. But could it, I'm saying?
It could, but to do that, many things have to change. Many things have to change. Institutions have to change. Politics has to change.
You know, people's trust, all sorts of things have to change. And what about the impact of technology, AI for instance, on democracy really? I am talking about the impact on jobs, to what extent there'll be displacement of human activity and jobs by machines. Yeah, I mean, I think that's a huge risk, I believe.
That humans would have a very difficult time building their social systems and communities if they become majorly sidelined and they feel they don't have dignity or use or a way to contribute to the social good. From your perspective, I mean, there have been a lot of advances in technology over the last hundred years. Have any of them really caused massive displacement of jobs?
I mean, already, you know, there's... A lot of these technologies are out there, but have they reduced the number of jobs? Yeah, it has happened. It has happened.
I mean, in the early phase of the Industrial Revolution, when it was all about automation, there were huge displacements, huge wage losses. People's wages, in real terms, fell within 20 years to, for some people, one third of what they had been. That's just a tremendous change. Yes, but in the end, it became better. It became better, but the technology changed.
Yes, so in my view, there will be a lot of disruption, like the Industrial Revolution was. Also, look, you know, 90 years, it took 90 years. I don't think that's what we want to put up with. But there could be new classes of jobs.
Yes, exactly. But those new classes of jobs, they're not automatic. So there are like two ways of thinking on this, beyond the artificial general intelligence. One is that... You introduce these disruptive technologies and the system automatically adjusts.
Nobody needs to do anything. No policymaker, no scientist, no technologist. The system will adjust.
I think that just is contradicted by history. The way it works is that we all have to work in order to make things better, including technologists, so that we actually use the scientific knowledge to create new tasks, more capabilities for humans, rather than just sidelining them. I mean, we've seen you talk about the lessons of history, but look at what we saw with the printing press revolution.
People who were writing books were put out of business, but then lots of new jobs were created through publishing and so on. But look at the last 40 years. The U.S. is an extreme case. But roughly speaking, I'm exaggerating a little bit, but about half of the U.S. population, those who don't have college degrees, had almost no growth in their real incomes from 1980 until about 2015. So no new jobs were created for them.
There were a lot of new jobs in the 1990s and 2000s, but they were all for people with postgraduate degrees and specialised knowledge. Geoffrey Hinton, do you think that this increase in productivity, essentially, that will come with automation and so on and so forth, is a good thing for society? Well, it ought to be, right? I mean, it's crazy.
We're talking about having a huge increase in productivity. So there's going to be more goods and services for everybody, so everybody ought to be better off. But actually it's going to be the other way around. And it's because we live in a capitalist society.
And so what's going to happen is, this huge increase in productivity is going to make much more money for the big companies and the rich. And it's going to increase the gap between the rich and the people who lose their jobs. And as soon as you increase that gap, you get fertile ground for fascism.
And so it's very scary that... We may be at a point where we're just making things worse and worse. And it's crazy because we're doing something that should help everybody. And obviously it will help in healthcare, it will help in education. But if the profits just go to the rich, that's going to make society worse.
So, OK, let's look at the last award. And that's the Nobel Prize for Medicine or Physiology. And this is why it was awarded this year.
Our organs and tissues are made up of many varied types of cells. They all have identical genetic material, but different characteristics. This year's Medicine Laureates, Gary Ruvkun and Victor Ambros, have shown how a new form of gene regulation, microRNA, is crucial in ensuring that the different cells of organisms, such as muscle or nerve cells, get the functions they need. It's already known that abnormal levels of microRNA increase the risk of cancer, but the laureates' research could lead to the development of new diagnostics and treatments. For example, it could map how microRNA varies in different diseases, helping unlock prognoses for the development of diseases.
So Gary, your research was based on... looking at mutant strains of the roundworm. Actually, it should have its own Nobel Prize, shouldn't it? It's featured so much in research that's led to Nobel Prizes. But just tell us, what does your work with roundworms tell us about genetic mutations in humans?
Doing genetics is a form of doing what evolution has been doing for four billion years. Our planet... is a genetic experiment that has been generating diverse life from primitive life over four billion years by inducing variation to give you the tree of life that goes, you know, to bats and to plants and to bacteria.
And we do that on one organism. And the reason it works so well is that evolution has evolved a way to generate diversity by mutating. That's what's all around us. You know, when you see a green tree, it's because photosynthesis was developed two billion years ago. The reason we can breathe oxygen is because photosynthesis evolved, and it wasn't there beforehand.
And so what we're doing is harnessing that process, and that's why it works so well. So that's the reason the worm has gotten four Nobel Prizes. And it's the worm that got it. We're just the operators.
You should be here with us. Yeah, it's very tiny. It's a millimeter long. It has 959 cells.
That's different from us, right? Every one of our cells does not have a name. But every cell in a worm has a name.
And that attracted a cohort of people who like names. Names are important. And we were thinking about, well, we can learn a lot about how biology works by sort of following cells. What's their history?
What do they become? How much do they talk to each other? But we figure it out by breaking it. I mean, it's extraordinary that a human has about 20,000 genes and a worm has 20,000 genes. Yes, that's why we really are too self-important.
We're just not, you know, humans are just not that great. You know, we're fine. I mean, I'm... I'm happy to be a human.
I don't want to be a worm. But, you know, a bacterium has 4,000 genes. That's not very different from 20,000.
I'm sorry. You know, people say, oh, geez, if you look for life on other planets, it's bacteria. How boring. You got it all wrong, folks.
Bacteria are totally awesome. But so what you're saying essentially is that mutations obviously can be bad, because they can lead to all sorts of genetic illnesses and so on, but they're not always bad, and some are actually relatively insignificant, like you're colorblind, aren't you, for instance?
I mean, that's a genetic mutation, isn't it? It is, and it's a debilitating mutation for me. In the days of black and white publishing, I was king.
Things were fine, and then everything became color. You know, I have to say, so like our little worm, I'll go to a seminar and people are presenting graphs with red and green. And I come out going, geez, that was just complete horror.
And people said, oh, it was fantastic. You didn't see it. I wrote to Google Maps and said, you guys, you show traffic in red, and green means things are fine. I can't see it. And you're losing 4% of the users.
And it's the best 4%. Oh, well. But, I mean, you actually wanted...
And they did not respond. You wanted to be an electrical engineer originally, didn't you? Well, yes, I did electronics as a kid.
Yeah. Because I loved electronics, and I built kits. I built a shortwave radio with a $39 kit made with vacuum tubes. This is before, you know, transistors. But the resistors have a color code.
Right, and it tells you how many ohms it is, how much resistance it has. And I didn't know I was colorblind at the time, so I put it together, and the test for whether it's going to work is you turn it on, and if it doesn't smoke, that's good. And my electronic assembly didn't pass the smoke test. So, let's go
for another question now from our audience. Jasmina Kocanowicz, what do you want to ask Professor Ruvkun? Was microRNA an unexpected find, or was it part of your hypothesis while conducting your research?
Oh, no hypothesis on that. No, no, no, no, no. It was a complete surprise and I love surprises. And really, you know, that's the beauty of doing genetics is that...
What comes out is what teaches you, right? You know, you do a mutagenesis, you get an animal that looks like what you were looking for. Part of the search is saying, what am I going to look for? That's the art.
How did your research, along with Victor Ambros's, go down when you first published it in the early 1990s? It was in a little corner of biology, this worm. And there was a sense, when you would go to give a talk about it, that, well, it's a worm. Who cares?
You know, and it's a weird little animal. Until we discovered that it was in the human genome, and then many other genomes. And it's been embraced. And what was especially sort of empowering was that it intersected with RNA interference, which is an antiviral response, and people really care now, of course, about antiviral responses.
Of course. I mean, how did you all find doing your research? I mean, just listening to what Gary's saying, did you encounter setbacks? Can you pinpoint particular moments when you really felt you were on a winning streak? Did people discourage you from what you were doing?
All of the above. Really? I mean, you know, academia is really hard at some level. You know, you work sometimes three years on a project and then somebody anonymously destroys it.
So that's very, very difficult to get used to. So I do a lot of coaching with my graduate students to get them ready for that. But on the other hand, I've found academia to be quite open-minded as well.
You know, when Jim, Simon Johnson and I, for example, started doing our work, you know, I think there was not much of this sort in economics. And people could have said, no, this is not economics.
And some people did. And people could have said, this is crazy. And some people did.
But there were a lot of people who were open-minded, especially young researchers. You know, they're hungry for new angles. So I found academia to be quite open-minded as well, but a tough place. So, Demis and
David, I think you've both said that research in proteins was kind of seen, at the time, as being on the lunatic fringe of science. I mean, how did you cope with that kind of perception, that you were doing something that was a bit out there? Well, I think when we started trying to design proteins, everyone thought it was a crazy way to try and solve hard problems. The only proteins we knew at the time were the ones that came down through, you know, through evolution, the ones in us and in all living things.
So the idea that you could make completely new ones and that they could do new things was really seen as, you know, lunatic fringe, as you said. But I think the way you deal with that is you work on the problem and you make progress. And, you know, now it's gotten to the point where every other day there's another company saying they're joining the protein design revolution and they're going to be solving it. So you can go from the lunatic fringe to the mainstream faster than you might expect. Utterly vindicated, weren't you? Yeah, it's very, very similar. You know, I think if you're fascinated enough and passionate enough about the area, you're going to... you know, I was going to do it no matter what. And actually, I can't think of anything more interesting to work on than the nature of intelligence and the computational principles underpinning it. And when we started DeepMind in 2010, nobody was working on AI, pretty much. Yes, except for a very few people, a very few foresighted people in academia.
And then now, fast forward 15 years, which is not very much time, and obviously the whole world's talking about it. And certainly in industry, no one was doing that in 2010. But we already foresaw... building on the great work of people like Professor Hinton, that this would be one of the most consequential transformative technologies in the world if it could be done.
And if you see something like that, then it's worth doing in and of itself. I mean, you, Professor Hinton, along with your co-recipient of the Physics Nobel Prize, Professor John Hopfield, who is 91, a real pioneer in this field of technology also. I mean, does this, what you've just heard here, resonate with you, that work that...
at one stage was seen as being on the lunatic fringe and then here you are, years later, vindicated? Yes. Students would apply to my department to work with me and other professors in my department would say, oh, if you work with Hinton, that's the end of your career.
This stuff is rubbish. How did that make you feel? I mean, did you still...
You were so sure? Luckily, at the time, I didn't know about it. Time, I think, for another question...
from our audience. Manoj Dinakaran from the Karolinska Institute wants to ask this. What's your question? Thank you, Laureates. Science is all about being motivated when things don't go the way we expect them to.
So what kept you all motivated when things didn't go the way you expected in times of hardship and helped you adapt? Who'd like to pick it up there? That's the best bit. When what you expected to happen doesn't happen, then that's when you really learn something.
I mean... So that's the best bit. It's really crushing for the first couple of days.
And then you're like, oh, now I learned something. It's like I didn't understand that. There is ascertainment bias here because you have the people who've gotten winning hands in the poker game of life. You know, gentlemen, thanks to all of you and renewed congratulations on your Nobel Prizes. That's all from this year's.
Nobel Minds from the Royal Palace in Stockholm. It's been a privilege having this discussion with you. Thank you to Their Royal Highnesses, Crown Princess Victoria and Prince Daniel, for being with us. Of course, everybody else in the audience, and you also at home for watching.
From me, Zainab Badawi, and the rest of the Nobel Minds team, goodbye.