Transcript for:
AI Risks and Society

they call you the godfather of ai, so what would you be saying to people about their career prospects in a world of super intelligence? train to be a plumber. really? yeah. okay, i'm going to become a plumber.

jeffrey hinton is the nobel prize winning pioneer whose groundbreaking work has shaped ai and the future of humanity. why do they call you the godfather of ai? because there weren't many people who believed that we could model ai on the brain so that it learned to do complicated things like recognize objects in images or even do reasoning, and i pushed that approach for 50 years. and then google acquired that technology and i worked there for 10 years on something that's now used all the time in ai. and then you left. why? so that i could talk freely at a conference. what did you want to talk about freely? how dangerous ai could be. i realized that these things will one day get smarter than us, and we've never had to deal with that. and if you want to know what life's like when you're not the apex intelligence, ask a chicken. so there's risks that come from people misusing ai, and then there's risks from ai getting super smart and deciding it doesn't need us. is that a real risk? yes it is, but they're not going to stop it cuz it's too good for too many things. what about regulations? they have some, but they're not designed to deal with most of the threats. like the european regulations have a clause that says none of these apply to military uses of ai. really? yeah. it's crazy. one of your students left openai. yeah, he was probably the most important person behind the development of the early versions of chatgpt, and i think he left because he had safety concerns. we should recognize that this stuff is an existential threat, and we have to face the possibility that unless we do something soon, we're near the end. so let's do the risks. what do we end up doing in such a world?

jeffrey hinton, they call you the godfather of ai. uh, yes they do. why do they call you that? there weren't that many people who believed that we could make neural networks work, artificial neural networks. so for a long time in ai, from the 1950s onwards, there were kind of two ideas about how to do ai. one idea was that the core of human intelligence was reasoning, and to do reasoning you needed to use some form of logic, so ai had to be based around logic, and in your head you must have something like symbolic expressions that you manipulated with rules, and that's how intelligence worked. things like learning or reasoning by analogy would all come later, once we'd figured out how basic reasoning works. there was a different approach, which is to say let's model ai on the brain, because obviously the brain makes us intelligent. so simulate a network of brain cells on a computer and try and figure out how you would learn strengths of connections between brain cells so that it learned to do complicated things like recognize objects in images or
recognize speech or even do reasoning. i pushed that approach for like 50 years, because so few people believed in it there weren't many good universities that had groups that did that, so if you did that, the best young students who believed in that came and worked with you. so i was very fortunate in getting a whole lot of really good students, some of whom have gone on to create, and play an instrumental role in creating, platforms like openai. yes, so ilya sutskever is a nice example. a whole bunch of them. why did you believe that modeling it off the brain was a more effective approach? it wasn't just me believed it. early on, von neumann believed it and turing believed it, and if either of those had lived, i think ai would have had a very different history, but they both died young. you think ai would have been here sooner? i think the neural net approach would have been accepted much sooner if either of them had lived.

in this season of your life, what mission are you on? my main mission now is to warn people how dangerous ai could be. did you know that when you became the godfather of ai? no, not really. i was quite slow to understand some of the risks. some of the risks were always very obvious, like people would use ai to make autonomous lethal weapons, that is, things that go around deciding by themselves who to kill. other risks, like the idea that they would one day get smarter than us and maybe we would become irrelevant, i was slow to recognize. other people recognized it 20 years ago; i only recognized it a few years ago, that that was a real risk that might be coming quite soon. how could you not have foreseen that, with everything you know here about cracking the ability for these computers to learn similar to how humans learn, and just, you know, introducing any rate of improvement? it's a very good question, how could you not have seen that. but remember, neural networks 20, 30 years ago were very primitive in what they could do. they were nowhere near as good as humans at things like vision and language and speech recognition. the idea that you'd have to worry about it getting smarter than people seemed silly then. when did that change? it changed for the general population when chatgpt came out. it changed for me when i realized that the kinds of digital intelligences we're making have something that makes them far superior to the kind of biological intelligence we have.

if i want to share information with you, so i go off and i learn something and i'd like to tell you what i learned, i produce some sentences. this is a rather simplistic model, but roughly right: your brain is trying to figure out how can i change the strength of connections between neurons so that i might have predicted that word next. and so you'll do a lot of learning when a very surprising word comes, and not much learning when it's a very obvious word. if i say fish and chips, you don't do much learning when i say chips. but if i say fish and cucumber, you do a lot more learning: you wonder why did i say cucumber. so that's roughly what's going on in your brain, i'm predicting what's coming next. that's how we think it's working. nobody really knows for sure how the brain works, and nobody knows how it gets the information about whether you should increase the strength of a connection or decrease the strength of a connection, that's the crucial thing. but what we do know now from ai is that if you could get information about whether to increase or decrease the connection strength so as to do better at whatever task you're trying to do, then we could learn incredible things, because that's what we're doing now with artificial neural nets. it's just we don't know, for real brains, how they get that signal about whether to increase or decrease.
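what follows is a minimal sketch of that learning signal, using made-up numbers rather than hinton's model or any real system: in next-word prediction, the training loss is the surprisal of the word that actually arrives, so an unexpected "cucumber" drives a much bigger weight update than an expected "chips".

```python
# toy illustration with invented probabilities (not a real model):
# the learning signal in next-word prediction is the surprisal, -log p(word),
# of the word that actually arrives. surprising words mean big weight updates.
import math

# assumed distribution a model might predict after hearing "fish and ..."
predicted = {"chips": 0.90, "cucumber": 0.001, "rice": 0.099}

def surprisal(word):
    # -log p(word): the standard cross-entropy loss for next-word prediction
    return -math.log(predicted[word])

for word in ("chips", "cucumber"):
    # gradient updates scale with this loss: "chips" teaches the net almost
    # nothing (~0.11), "cucumber" teaches it a lot (~6.91)
    print(f"next word {word!r}: surprisal {surprisal(word):.2f}")
```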
as we sit here today, what are the big concerns you have around safety of ai, if we were to list the top couple that are really front of mind and that we should be thinking about? um, can i have more than a couple? go ahead, i'll write them all down and we'll go through them. okay. first of all, i want to make a distinction between two completely different kinds of risk. there's risks that come from people misusing ai, and that's most of the risks and all of the short-term risks. and then there's risks that come from ai getting super smart and deciding it doesn't need us. is that a real risk? i talk mainly about that second risk because lots of people say "is that a real risk?" and yes, it is. now, we don't know how much of a risk it is. we've never been in that situation before; we've never had to deal with things smarter than us. so really the thing about that existential threat is that we have no idea how to deal with it, we have no idea what it's going to look like, and anybody who tells you they know just what's going to happen and how to deal with it, they're talking nonsense. so we don't know how to estimate the probability it'll replace us. um, some people say it's like less than 1%. my friend yann lecun, who was a postdoc with me, thinks no, no, no, we build these things, we're always going to be in control, we'll build them to be obedient. and other people, like yudkowsky, say no, no, no, these things are going to wipe us out for sure; if anybody builds it, it's going to wipe us all out. and he's confident of that. i think both of those positions are extreme. it's very hard to estimate the probabilities in between. if you had to bet on who was right out of your two friends? i simply don't know. so if i had to bet, i'd say the probability is in between, and i don't know where to estimate it in between. i often say a 10 to 20% chance they'll wipe us out, but that's just gut, based on the idea that we're still making them and we're pretty ingenious. and the hope is that if enough smart people do enough research with enough resources, we'll figure out a way to build them so they'll never want to harm us.

sometimes when we talk about that second path, i think about nuclear bombs, the invention of the atomic bomb, and how it compares. like, how is this different? because the atomic bomb came along, and i imagine a lot of people at that time thought our days are numbered. yes, i was there, we did, yeah. but we're still here. we're still here, yes. so the atomic bomb was really only good for one thing, and it was very obvious how it worked. even if you hadn't had the pictures of hiroshima and nagasaki, it was obvious that it was a very big bomb that was very dangerous. with ai, it's good for many, many things. it's going to be magnificent in healthcare and education, and more or less any industry that needs to use its data is going to be able to use it better with ai. so we're not going to stop the development. you know, people say "well, why don't we just stop it now?"
we're not going to stop it because it's too good for too many things. also, we're not going to stop it because it's good for battle robots, and none of the countries that sell weapons are going to want to stop it. like the european regulations: they have some regulations about ai, and it's good they have some regulations, but they're not designed to deal with most of the threats, and in particular the european regulations have a clause in them that says none of these regulations apply to military uses of ai. so governments are willing to regulate companies and people, but they're not willing to regulate themselves. it seems pretty crazy to me. i go back and forward, but if europe has a regulation and the rest of the world doesn't, competitive disadvantage? yeah. we're seeing this already. i don't think people realize that when openai release a new model or a new piece of software in america, they can't release it to europe yet because of regulations here. so sam altman tweeted saying "our new ai agent thing is available to everybody, but it can't come to europe yet because there's regulations." yes. what does that give us? a productivity disadvantage. what we need, i mean at this point in history, when we're about to produce things more intelligent than ourselves, what we really need is a kind of world government that works, run by intelligent, thoughtful people, and that's not what we've got. so a free-for-all? well, what we've got is capitalism, which has done very nicely by us: it's produced lots of goods and services for us. but these big companies, they're legally required to try and maximize profits, and that's not what you want from the people developing this stuff.

so let's do the risks then. you talked about, there's human risks and then there's... so i've distinguished these two kinds of risk. let's talk about all the risks from bad human actors using ai. there's cyber attacks. so between 2023 and 2024 they increased by about 12,200%, more than a hundredfold, and that's probably because these large language models make it much easier to do phishing attacks. and a phishing attack, for anyone that doesn't know, is they send you something saying, uh, hi, i'm your friend john and i'm stuck in el salvador, could you just wire this money? that's one kind of attack, but the phishing attacks are really trying to get your login credentials. and now with ai they can clone my voice, my image, they can do all that. i'm struggling at the moment because there's a bunch of ai scams on x and also meta, and there's one in particular on meta, so instagram, facebook, at the moment, which is a paid advert where they've taken my voice from the podcast, they've taken my mannerisms, and they've made a new video of me encouraging people to go and take part in this crypto ponzi scam or whatever. and we've spent weeks on end emailing meta telling them "please take this down."
they take it down, another one pops up; they take that one down, another one pops up. so it's like whack-a-mole, and it's very annoying. the heartbreaking part is you get the messages from people that have fallen for the scam and they've lost £500 or $500, and they're cross with you cuz you recommended it, and i'm sad for them. it's very annoying, yeah. i have a smaller version of that, which is some people now publish papers with me as one of the authors. mhm. and it looks like it's in order that they can get lots of citations for themselves. ah. so cyber attacks, a very real threat; there's been an explosion of those. and obviously ai is very patient, so they can go through 100 million lines of code looking for known ways of attacking them. that's easy to do. but they're going to get more creative, and some people who know a lot believe that maybe by 2030 they'll be creating new kinds of cyber attacks which no person ever thought of. so that's very worrisome. because they can think for themselves and discover... they can think for themselves, they can draw new conclusions from much more data than a person ever saw.

is there anything you're doing to protect yourself from cyber attacks at all? yes. it's one of the few places where i changed what i do radically, because i'm scared of cyber attacks. canadian banks are extremely safe; in 2008 no canadian banks came anywhere near going bust, so they're very safe banks because they're well regulated, fairly well regulated. nevertheless, i think a cyber attack might be able to bring down a bank now. all my savings are in shares held by banks. so if the bank gets attacked and it holds your shares, they're still your shares, and so i think you'd be okay, unless the attacker sells the shares, because the bank can sell the shares. if the attacker sells your shares, i think you're screwed. i don't know, i mean, maybe the bank would have to try and reimburse you, but the bank's bust by now, right? so i'm worried about a canadian bank being taken down by a cyber attack and the attacker selling shares that it holds. so i spread my money and my children's money between three banks, in the belief that if a cyber attack takes down one canadian bank, the other canadian banks will very quickly get very careful. and do you have a phone that's not connected to the internet, do you have any... you know, i'm thinking about storing data and stuff like that, do you think it's wise to consider having cold storage? i have a little disk drive and i back up my laptop on this hard drive. so i actually have everything on my laptop on a hard drive; at least, you know, if the whole internet went down, i'd have the sense that i've still got it on my laptop, i've still got my information. okay.

then the next thing is using ai to create nasty viruses. okay. and the problem with that is that it just requires one crazy guy with a grudge, one guy who knows a little bit of molecular biology, knows a lot about ai, and just wants to destroy the world. you can now create new viruses relatively cheaply using ai, and you don't have to be a very skilled molecular biologist to do it, and that's very scary. so you could have a small cult, for example; a small cult might be able to raise a few million dollars, and for a few million dollars they might be able to design a whole bunch of viruses. well, i'm thinking about some of our foreign adversaries doing government funded programs. i mean, there was lots of talk around covid and the wuhan laboratory and what they were doing and
gain-of-function research, but i'm wondering if in, you know, a china or a russia or an iran or something, the government could fund a program for a small group of scientists to make a virus that they could... you know. i think they could, yes. now, they'd be worried about retaliation; they'd be worried about other governments doing the same to them. hopefully that would help keep it under control. they might also be worried about the virus spreading to their country. okay.

then there's, um, corrupting elections. so if you wanted to use ai to corrupt elections, a very effective thing is to be able to do targeted political advertisements, where you know a lot about the person. so anybody who wanted to use ai for corrupting elections would try and get as much data as they could about everybody in the electorate. with that in mind, it's a bit worrying what musk is doing at present in the states, going in and insisting on getting access to all these things that were very carefully siloed. the claim is it's to make things more efficient, but it's exactly what you would want if you intended to corrupt the next election. how do you mean? because you get all this data on people: you know how much they make, where they... you know, everything about them. once you know that, it's very easy to manipulate them, because you can make an ai that can send them messages, um, that they'll find very convincing, telling them not to vote, for example. so i have no reason other than common sense to think this, but i wouldn't be surprised if part of the motivation of getting all this data from american government sources is to corrupt elections. another part might be that it's very nice training data for a big model. but he would have to be taking that data from the government and feeding it into his... yes, and what they've done is turned off lots of the security controls, got rid of some of the organization protecting against that. um, so that's corrupting elections.

okay, then there's creating these echo chambers, by organizations like youtube and facebook showing people things that will make them indignant. people love to be indignant. indignant as in angry, or what does indignant mean? feeling sort of angry but feeling righteous. okay, so for example, if you were to show me something that said trump did this crazy thing, here's a video of trump doing this completely crazy thing, i would immediately click on it. okay, so putting us in echo chambers and dividing us. yes. and the policy that youtube and facebook and others use for deciding what to show you next is causing that. if they had a policy of showing you balanced things, they wouldn't get so many clicks and they wouldn't be able to sell so many advertisements. so it's basically the profit motive saying show them whatever will make them click, and what'll make them click is things that are more and more extreme, and that confirm my existing bias. that confirm your existing bias. so you're getting your biases confirmed all the time, further and further and further, which means you're driving people apart. and now in the states there's two communities that hardly talk to each other. i'm not sure people realize that this is actually happening every time they open an app. but if you go on a tiktok or a youtube or one of these big social networks, the algorithm, as you said, is designed to show you more of the things that you had interest in last time. so if you just play that out over 10 years, it's going to drive you further and further into whatever ideology or belief you have, and further away from nuance and common sense and, um, parity.
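a toy sketch of the feedback loop being described here, with entirely invented numbers and user behavior, not youtube's or meta's actual ranking code: a feed that greedily maximizes predicted clicks, where each click nudges the user's profile, ratchets the user toward ever more extreme content.

```python
# toy engagement-maximizing feed, invented for illustration only
items = [i / 100 for i in range(101)]  # content "extremeness" from 0.0 to 1.0

def p_click(user_bias, item):
    # assumed behavior: users click most on items slightly *beyond* their
    # current position (the "indignant" pull described above)
    return max(0.0, 1.0 - 4.0 * abs(item - (user_bias + 0.05)))

user_bias = 0.10  # mildly partisan to start
for step in range(10):
    # the profit-motive policy: show whatever is most likely to be clicked
    shown = max(items, key=lambda item: p_click(user_bias, item))
    # the click feeds back into the profile, pulling it toward what was shown
    user_bias = 0.9 * user_bias + 0.1 * shown
    print(f"step {step}: shown {shown:.2f}, user bias now {user_bias:.3f}")
# the bias climbs every step and never comes back down: a feed optimized for
# balance instead of clicks would not produce this drift.
```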
it's a pretty remarkable thing. like, people don't know it's happening; they just open their phones and experience something, and think this is the news, or the experience everyone else is having. right. so basically, if you have a newspaper and everybody gets the same newspaper, you get to see all sorts of things you weren't looking for, and you get a sense that if it's in the newspaper, it's an important thing or a significant thing. but if you have your own news feed... my news feed on my iphone, three-quarters of the stories are about ai, and i find it very hard to know if the whole world's talking about ai all the time or if it's just my newsfeed. okay, so driving me into my echo chambers, um, which is going to continue to divide us further and further. i'm actually noticing that the algorithms are becoming even more, what's the word, tailored. and people might go "oh, that's great," but what it means is they're becoming even more personalized, which means that my reality is becoming even further from your reality. yeah, it's crazy. we don't have a shared reality anymore. i share a reality with other people who watch bbc news and other people who read the guardian and other people who read the new york times. i have almost no shared reality with people who watch fox news. it's pretty worrisome.

behind all this is the idea that these companies just want to make profit, and they'll do whatever it takes to make more profit, because they have to; they're legally obliged to do that. so we almost can't blame the companies, can we? well, capitalism's done very well for us; it's produced lots of goodies. yeah. but you need to have it very well regulated. so what you really want is to have rules so that when some company is trying to make as much profit as possible, in order to make that profit they have to do things that are good for people in general, not things that are bad for people in general. so once you get to a situation where, in order to make more profit, the company starts doing things that are very bad for society, like showing you things that are more and more extreme, that's what regulations are for. so you need regulations with capitalism. now, companies will always say regulations get in the way, make us less efficient, and that's true: the whole point of regulations is to stop them doing things to make profit that hurt society. and we need strong regulation. who's going to decide whether it hurts society or not? because, you know, that's the job of politicians. unfortunately, if the politicians are owned by the companies, that's not so good. and also the politicians might not understand the technology. you've probably seen the senate hearings where they wheel out, you know, mark zuckerberg and these big tech ceos, and it is quite embarrassing, because they're asking the wrong questions. well, i've seen the video of the us education secretary talking about how they're going to get ai in the classrooms, except she thought it was called a1. she's actually there saying, we're going to have all the kids interacting with a1: "there is a school system that's going to start, um, making sure that first graders, or even pre-ks, have a1 teaching, you know, every year, starting, you know, that far down in the grades, and that's just a... that's a wonderful thing." [Laughter] and these are the people in charge. ultimately the tech companies are in charge, because they will outsmart the politicians.
in the states now, at least a few weeks ago when i was there, the tech companies were running an advertisement about how it was very important not to regulate ai because it would hurt us in the competition with china. yeah, and that's a plausible argument. yes, it will, but you have to decide: do you want to compete with china by doing things that will do a lot of harm to your society? and you probably don't. i guess they would say that it's not just china, it's denmark and australia and canada and the uk and germany. they're not so worried about those. but if they kneecap themselves with regulation, if they slow themselves down, then the founders, the entrepreneurs, the investors are going to go... i think calling it kneecapping is taking a particular point of view; it's taking the point of view that regulations are sort of very harmful. what you need to do is just constrain the big companies so that in order to make profit they have to do things that are socially useful. like, google search is a great example: that didn't need regulation, because it just made information available to people. it was great. but then if you take youtube, which starts showing you adverts and showing you more and more extreme things, that needs regulation. but we don't have the people to regulate it, as we've identified. i think people know pretty well, um, that particular problem of showing you more and more extreme things; that's a well-known problem that the politicians understand. they just need to get on and regulate it.

so that was the next point, which was that the algorithms are going to drive us further into our echo chambers. right, what's next? lethal autonomous weapons. lethal autonomous weapons: that means things that can kill you and make their own decision about whether to kill you. which is the great dream, i guess, of the military-industrial complex, being able to create such weapons. so the worst thing about them is, big powerful countries always have the ability to invade smaller, poorer countries; they're just more powerful. but if you do that using actual soldiers, you get bodies coming back in bags, and the relatives of the soldiers who were killed don't like it, so you get something like vietnam. mhm. in the end there's a lot of protest at home. if instead of bodies coming back in bags it was dead robots, there'd be much less protest, and the military-industrial complex would like it much more, because robots are expensive, and suppose you had something that could get killed and was expensive to replace, that would be just great. big countries can invade small countries much more easily because they don't have their soldiers being killed. and the risk here is that these robots will malfunction, or they'll just be more... no, no, that's even if the robots do exactly what the people who built the robots want them to do: the risk is that it's going to make big countries invade small countries more often. more often, because they can. yeah, and it's not a nice thing to do. so it brings down the friction of war. it brings down the cost of doing an invasion. and these machines will be smarter at warfare as well, so they'll be... well, even when the machines aren't smarter. the lethal autonomous weapons, they can make them now, and i think all the big defense companies are busy making them. even if they're not smarter than people, they're still very nasty, scary things. cuz i'm thinking that, you know, you could show it just a picture: go get this guy, and go take out anyone he's been texting, and it's this little wasp. so two days ago i was visiting a friend of
mine in sussex who had a drone that cost less than £200, and the drone went up, it took a good look at me, and then it could follow me through the woods. it was very spooky having this drone about 2 meters behind me; it was looking at me, and if i moved over there, it moved over there. it could just track me. mhm. for £200. and it was already quite spooky. yeah. and i imagine there's, as you say, a race going on as we speak for who can build the most complex autonomous weapons. there is. a risk i often hear is that some of these things will combine, and the cyber attack will release weapons. sure, um, you can get combinatorially many risks by combining these other risks. mhm. so, i mean, for example, you could get a super intelligent ai that decides to get rid of people, and the obvious way to do that is just to make one of these nasty viruses. if you made a virus that was very contagious, very lethal, and very slow, everybody would have it before they realized what was happening. i mean, i think if a super intelligence wanted to get rid of us, it would probably go for something biological like that, that wouldn't affect it. do you not think it could just very quickly turn us against each other? for example, it could send a warning to the nuclear systems in america that there's a nuclear bomb coming from russia, or vice versa, and one retaliates. yeah. i mean, my basic view is there's so many ways in which the super intelligence could get rid of us, it's not worth speculating about which. what you have to do is prevent it ever wanting to. that's what we should be doing research on. there's no way we're going to prevent it otherwise: it's smarter than us, right? there's no way we're going to prevent it getting rid of us if it wants to. we're not used to thinking about things smarter than us. if you want to know what life's like when you're not the apex intelligence, ask a chicken.

yeah, i was thinking about my dog pablo, my french bulldog, this morning as i left home. he has no idea where i'm going, he has no idea what i do. right, you can't even talk to him. yeah. and the intelligence gap will be like that. so you're telling me that if i'm pablo, my french bulldog, i need to figure out a way to make my owner not wipe me out? yeah. so we have one example of that, which is mothers and babies. evolution put a lot of work into that. mothers are smarter than babies, but babies are in control, and they're in control because the mother just can't bear the sound of the baby crying, lots of hormones and things. not all mothers. not all mothers, and then the baby's not in control, and then bad things happen. we somehow need to figure out how to make them not want to take over. the analogy i often use is, forget about intelligence, think about physical strength. suppose you have a nice little tiger cub. it's a bit bigger than a cat, it's really cute, it's very cuddly, very interesting to watch. except that you better be sure that when it grows up, it never wants to kill you, cuz if it ever wanted to kill you, you'd be dead in a few seconds. and you're saying the ai we have now is the tiger cub? yep. and it's growing up. yep. so we need to train it when it's a baby. well, a tiger has lots of innate stuff built in, so you know when it grows up it's not a safe thing to have around. but lions? people that have lions as pets... yes, sometimes the lion is affectionate to its carer but not to others. yes. and we don't know, with these ais... we simply don't know whether we can make them not want to take over and
not want to hurt us. do you think we can? do you think it's possible to train super intelligence? i don't think it's clear that we can. so i think it might be hopeless, but i also think we might be able to, and it'd be sort of crazy if people went extinct cuz we couldn't be bothered to try. if that's even a possibility, how do you feel about your life's work? because you were... yeah, um, it sort of takes the edge off it, doesn't it? i mean, it is going to be wonderful in healthcare and wonderful in education, and wonderful... i mean, it's going to make call centers much more efficient, though one worries a bit about what the people who are doing that job now will do. it makes me sad. i don't feel particularly guilty about developing ai, like, 40 years ago, because at that time we had no idea that this stuff was going to happen this fast. we thought we had plenty of time to worry about things like that. when you can't get the thing to do much, you want to get it to do a little bit more; you don't worry that this stupid little thing is going to take over from people, you just want it to be able to do a little bit more of the things people can do. it's not like i knowingly did something thinking, this might wipe us all out but i'm going to do it anyway. mhm. but it is a bit sad that it's not just going to be something for good. so i feel i have a duty now to talk about the risks. and if you could play it forward, and you could go forward 30, 50 years, and you found out that it led to the extinction of humanity, if that does end up being the outcome... well, if you played it forward and it led to the extinction of humanity, i would use that to tell people to tell their governments that we really have to work on how we're going to keep this stuff under control. i think we need people to tell governments that governments have to force the companies to use their resources to work on safety, and they're not doing much of that, because you don't make profits that way.

one of your students we talked about earlier, um, ilya. yep. ilya left openai. yep. and there was lots of conversation around the fact that he left because he had safety concerns. yes. and he's gone on to set up an ai safety company. yes. why do you think he left? i think he left because he had safety concerns. really? i still have lunch with him from time to time. his parents live in toronto; when he comes to toronto, we have lunch together. he doesn't talk to me about what went on at openai, so i have no inside information about that. but i know ilya very well, and he is genuinely concerned with safety, so i think that's why he left. because he was one of the top people. i mean, he was probably the most important person behind the development of chatgpt, the early versions like gpt-2; he was very important in the development of that. you know him personally, so you know his character. yes, he has a good moral compass. he's not like someone like musk, who has no moral compass. does sam altman have a good moral compass? we'll see. i don't know sam, so i don't want to comment on that. but from what you've seen, are you concerned about the actions that they've taken? because if you know ilya, and ilya's a good guy, and he's left, that would give you some insight. yes, it would give you some reason to believe that there's a problem there. and if you look at sam's statements some years ago, he sort of happily said in one interview that this stuff will probably kill us all. that's not exactly what he said, but that's what it amounted to. now he's saying you don't need to worry too much about it,
and i suspect that's not driven by seeking after the truth; that's driven by seeking after money. is it money or is it power? yeah, i shouldn't have said money; it's some combination of those, yes. okay, i guess money is a proxy for power. i've got a friend who's a billionaire, and he is in those circles, and when i went to his house and had, uh, lunch with him one day... he knows lots of people in ai building the biggest ai companies in the world, and he gave me a cautionary warning across his kitchen table in london, where he gave me an insight into the private conversations these people have. not the media interviews they do, where they talk about safety and all these things, but actually what some of these individuals think is going to happen. and what do they think is going to happen? it's not what they say publicly. you know, one person, who i shouldn't name, who is leading one of the biggest ai companies in the world... he told me that he knows this person very well, and he privately thinks that we're heading towards this kind of dystopian world where we have just huge amounts of free time, we don't work anymore, and this person doesn't really give a [ __ ] about the harm that it's going to have on the world. and this person who i'm referring to is building one of the biggest ai companies in the world. and i then watch this person's interviews online, trying to figure out which of three people it is. yeah, well, it's one of those three people. okay. and i watch this person's interviews online, and i reflect on a conversation that my billionaire friend, who knows him, had with me, and i go, "fucking hell, this guy's lying publicly." like, he's not telling the truth to the world. and that's haunted me a little bit. it's part of the reason i have so many conversations around ai on this podcast, because i'm like... i think some of them are a little bit sadistic about power. i think they like the idea that they will change the world, that they will be the one that fundamentally shifts the world. i think musk is clearly like that, right? he's such a complex character that i don't really know how to place musk. um, he's done some really good things, like pushing electric cars; that was a really good thing to do. some of the things he said about self-driving were a bit exaggerated, but that was a really useful thing he did. giving the ukrainians communication during the war with russia, starlink, um, that was a really good thing he did. there's a bunch of things like that. um, but he's also done some very bad things.

so coming back to this point of the possibility of destruction and the motives of these big companies, are you at all hopeful that anything can be done to slow down the pace and acceleration of ai? okay, there's two issues. one is, can you slow it down? yeah. and the other is, can you make it so it will be safe, so in the end it won't wipe us all out. i don't believe we're going to slow it down, and the reason i don't believe we're going to slow it down is because there's competition between countries and competition between companies within a country, and all of that is making it go faster and faster. and if the us slowed it down, china wouldn't slow it down. does ilya think it's possible to make ai safe? i think he does. he won't tell me what his secret sauce is. i'm not sure how many people know what his secret sauce is; i think a lot of the investors don't know what his secret sauce is, but they've given him billions of dollars anyway, because they have so much faith in
ilya, which isn't foolish. i mean, he was very important in alexnet, which got object recognition working well; he was the main force behind things like gpt-2, which then led to chatgpt. so i think having a lot of faith in ilya is a very reasonable decision. there's something quite haunting about the fact that the guy who was the main force behind gpt-2, which gave rise to this whole revolution, left the company for safety reasons. he knows something that i don't know about what might happen next. well, now, i don't know the precise details, um, but i'm fairly sure the company had indicated that it would use a significant fraction of its resources, of the compute time, for doing safety research, and then it reduced that fraction. i think that's one of the things that happened. yeah, that was reported publicly. yes, yeah.

we've gotten to the autonomous weapons part of the risk framework. right, so the next one is joblessness. in the past, new technologies have come in which didn't lead to joblessness; new jobs were created. so the classic example people use is automatic teller machines: when automatic teller machines came in, a lot of bank tellers didn't lose their jobs, they just got to do more interesting things. but here, i think this is more like when they got machines in the industrial revolution, and you can't have a job digging ditches now, because a machine can dig ditches much better than you can. and i think for mundane intellectual labor, ai is just going to replace everybody. now, it may well be in the form of having fewer people using ai assistants, so a combination of a person and an ai assistant is now doing the work that 10 people did previously. people say that it will create new jobs, though, so we'll be fine. yes, and that's been the case for other technologies, but this is a very different kind of technology. if it can do all mundane human intellectual labor, then what new jobs is it going to create? you'd have to be very skilled to have a job that it couldn't just do. so i don't think they're right. i think you can try and generalize from other technologies that have come in, like computers or automatic teller machines, but i think this is different. people use this phrase, they say ai won't take your job, a human using ai will take your job. yes, i think that's true, but for many jobs that'll mean you need far fewer people. my niece answers letters of complaint to a health service. it used to take her 25 minutes: she'd read the complaint, and she'd think how to reply, and she'd write a letter. now she just scans it into, um, a chatbot and it writes the letter; she just checks the letter, occasionally she tells it to revise it in some ways. the whole process takes her five minutes. that means she can answer five times as many letters, and that means they need one-fifth as many people like her: she can do the job that five of her used to do. now, that will mean they need fewer people. other jobs, like in health care, are much more elastic: if you could make doctors five times as efficient, we could all have five times as much health care for the same price, and that would be great. there's almost no limit to how much health care people can absorb; they always want more healthcare if there's no cost to it. so there are jobs where you can make a person with an ai assistant much more efficient, and it won't lead to fewer people, because you'll just have much more of that being done. but most jobs, i think, are not like that. am i right in thinking the sort of industrial revolution
played a role in replacing muscles? yes, exactly. and this revolution in ai replaces intelligence, the brain. yeah. so mundane intellectual labor is like having strong muscles, and it's not worth much anymore. so muscles have been replaced, and now intelligence is being replaced. so what remains? maybe for a while some kinds of creativity, but the whole idea of super intelligence is nothing remains: these things will get to be better than us at everything. so what do we end up doing in such a world? well, if they work for us, we end up getting lots of goods and services for not much effort. okay. but... that sounds tempting and nice, but i don't know, there's a cautionary tale in creating more and more ease for humans, in it going badly. yes, and we need to figure out if we can make it go well. so the nice scenario is: imagine a company with a ceo who is very dumb, probably the son of the former ceo, and he has an executive assistant who's very smart, and he says, "i think we should do this," and the executive assistant makes it all work. the ceo feels great. he doesn't understand that he's not really in control, and in some sense he is in control: he suggests what the company should do, she just makes it all work, everything's great. that's the good scenario. and the bad scenario? the bad scenario: she thinks, "why do we need him?" yeah. i mean, in a world where we have super intelligence, which you don't believe is that far away... yeah, i think it might not be that far away. it's very hard to predict, but i think we might get it in like 20 years or even less.
so what's the difference between what we have now and super intelligence? because it seems to be really intelligent to me when i use, like, chatgpt or gemini. okay, so ai is already better than us at a lot of things. in particular areas like chess, for example, ai is so much better than us that people will never beat those things again. maybe the occasional win, but basically they'll never be comparable again. obviously the same in go. in terms of the amount of knowledge they have, um, something like gpt-4 knows thousands of times more than you do. there's a few areas in which your knowledge is better than its, and in almost all other areas it just knows more than you do. what areas am i better than it? probably in interviewing ceos. you're probably better at that: you've got a lot of experience at it, you're a good interviewer, you know a lot about it. if you got gpt-4 to interview a ceo, it'd probably do a worse job. okay, i'm trying to think if i agree with that statement. gpt-4, i think, for sure. but i guess it may not be long before... yeah, i guess you could train one on how i ask questions and what i do. sure. and if you took a general purpose sort of foundation model, and then you trained it up on not just you but every interviewer you could find doing interviews like this, but especially you, it'll probably get to be quite good at doing your job. but probably not as good as you, for a while. okay, so there's a few areas left. and then super intelligence becomes when it's better than us at all things? when it's much smarter than you at almost all things. yeah. and you say that this might be a decade away or so? yeah, it might be even closer; some people think it's even closer. and it might well be much further; it might be 50 years away, that's still a possibility. it might be that somehow training on human data limits you to not being much smarter than humans. my guess is between 10 and 20 years we'll have super intelligence.

on this point of joblessness, it's something that i've been thinking a lot about, in particular because i started messing around with ai agents, and we released an episode on the podcast actually this morning where we had a debate about ai agents with the ceo of a big ai agent company and a few other people. and it was another moment where i had a eureka moment about what the future might look like, when i was able, in the interview, to tell this agent to order all of us drinks, and then five minutes later in the interview you see the guy show up with the drinks. i didn't touch anything, i just told it to order us drinks to the studio. and it didn't know who you normally got your drinks from? it figured that out from the web. yeah, it figured it out, cuz it went on uber eats; it has my data, i guess. and we put it on the screen in real time so everyone at home could see the agent going through the internet, picking the drinks, adding a tip for the driver, putting my address in, putting my credit card details in, and then the next thing you see is the drinks show up. so that was one moment. and then the other moment was when i used a tool called replit, and i built software by just telling the agent what i wanted. yes, it's amazing, right? it's amazing and terrifying at the same time. yes. because if it
can build software like that, right... remember that the ai, when it's training, is using code, and if it can modify its own code, then it gets quite scary, right? because it can change itself in a way we can't change ourselves. we can't change our innate endowment. right, there's nothing about itself that it couldn't change.

on this point of joblessness: you have kids. i do. and they have kids? no, they don't have kids, no grandkids yet. what would you be saying to people about their career prospects in a world of super intelligence? what should we be thinking about in the meantime? um, i'd say it's going to be a long time before it's as good at physical manipulation as us. okay. and so a good bet would be to be a plumber, until the humanoid robots show up. in such a world where there is mass joblessness, which is not something that just you predict, this is something that sam altman of openai, i've heard him predict it, and many of the ceos... elon musk, i watched an interview, which i'll play on screen, of him being asked this question, and it's very rare that you see elon musk silent for 12 seconds or whatever it was, and then he basically says something about how he actually is living in suspended disbelief, i.e. he's basically just not thinking about it. "when you think about advising your children on a career, with so much that is changing, what do you tell them is going to be of value?" "well, that is a tough question to answer. i would just say, you know, to sort of follow their heart in terms of what they find, um, interesting to do or fulfilling to do. i mean, if i think about it too hard, frankly, it can be, uh, dispiriting and demotivating, um, because... i mean, i've put a lot of blood, sweat and tears into building the companies, and then i'm like, wait, should i be doing this? because if i'm sacrificing time with friends and family that i would prefer... but then ultimately the ai can do all these things. does that make sense? i don't know. um, to some extent i have to have deliberate suspension of disbelief in order to remain motivated. um, so i guess i would say just, you know, work on things that you find interesting, fulfilling, and that contribute some good to the rest of society." yeah, a lot of these threats... intellectually you can see the threat, but it's very hard to come to terms with it emotionally. i haven't come to terms with it emotionally yet. what do you mean by that? i haven't come to terms with what the development of super intelligence could do to my children's future. i'm okay, i'm 77, i'm going to be out of here soon. but for my children, and my younger friends, my nephews and nieces and their children, um, i just don't like to think about what could happen. why? cuz it could be awful. in what way? well, if it ever decided to take over... i mean, it would need people for a while to run the power stations, until it designed better analog machines to run the power stations. there's so many ways it could get rid of people, all of which would of course be very nasty. is that part of the reason you do what you do now? yeah. i mean, i think we should be making a huge effort right now to try and figure out if we can develop it safely. are you concerned about the mid-term impact, potentially, on your nephews and your kids, in terms of their jobs as well? yeah, i'm concerned about all that. are there any particular industries that you think are most at risk? people talk about the creative industries a lot, and sort of knowledge work, they talk about lawyers and
accountants and stuff like that. yeah, so that's why i mentioned plumbers: i think plumbers are less at risk. okay, i'm going to become a plumber. someone like a legal assistant, a paralegal, um, they're not going to be needed for very long. and is there a wealth inequality issue here that will arise from this? yeah. i think in a society which shared out things fairly, if you get a big increase in productivity, everybody should be better off. but if you can replace lots of people by ais, then the people who get replaced will be worse off, and the company that supplies the ais will be much better off, and the company that uses the ais. so it's going to increase the gap between rich and poor. and we know that if you look at that gap between rich and poor, that basically tells you how nice the society is. if you have a big gap, you get very nasty societies, in which people live in gated communities and put other people in mass jails. it's not good to increase the gap between rich and poor. the international monetary fund has expressed profound concerns that generative ai could cause massive labor disruptions and rising inequality, and has called for policies that prevent this from happening; i read that in business insider. so have they given any idea of what the policies should look like? no. yeah, that's the problem. i mean, if ai can make everything much more efficient and get rid of people for most jobs, or have a person assisted by ai doing many people's work, it's not obvious what to do about it. is it universal basic income, give everybody money? yeah, i think that's a good start, and it stops people starving. but for a lot of people, their dignity is tied up with their job. i mean, who you think you are is tied up with you doing this job, right? and if we said "we'll give you the same money just to sit around," that would impact your dignity.

you said something earlier about it surpassing or being superior to human intelligence. a lot of people, i think, like to believe that ai is on a computer and it's something you can just turn off if you don't like it. well, let me tell you why i think it's superior. okay. um, it's digital. and because it's digital, you can simulate a neural network on one piece of hardware, and you can simulate exactly the same neural network on a different piece of hardware, so you can have clones of the same intelligence. now you could get this one to go off and look at one bit of the internet, and this other one to look at a different bit of the internet, and while they're looking at these different bits of the internet, they can be syncing with each other, so they keep their weights, the connection strengths, the same. weights are connection strengths? mhm. so this one might look at something on the internet and say, "oh, i'd like to increase the strength of this connection a bit,"
and it can convey that information to this one, so it can increase the strength of that connection a bit based on this one's experience. and when you say the strength of the connection, you're talking about learning? that's learning, yes. learning consists of saying: instead of this one giving 2.4 votes for whether that one should turn on, we'll have this one give 2.5 votes for whether that one should turn on, and that will be a little bit of learning. so these two different copies of the same neural net are getting different experiences, they're looking at different data, but they're sharing what they've learned by averaging their weights together. mhm. and they can do that averaging for like a trillion weights at a time. when you and i transfer information, we're limited to the amount of information in a sentence, and the amount of information in a sentence is maybe 100 bits. it's very little information; we're lucky if we're transferring like 10 bits a second. these things are transferring trillions of bits a second. so they're billions of times better than us at sharing information. and that's because they're digital, and you can have two bits of hardware using the connection strengths in exactly the same way. we're analog, and you can't do that. your brain's different from my brain, and if i could see the connection strengths between all your neurons, it wouldn't do me any good, because my neurons work slightly differently and they're connected up slightly differently. mhm. so when you die, all your knowledge dies with you. when these things die... suppose you take these two digital intelligences that are clones of each other, and you destroy the hardware they run on. as long as you've stored the connection strengths somewhere, you can just build new hardware that executes the same instructions, so it'll know how to use those connection strengths, and you've recreated that intelligence. so they're immortal. we've actually solved the problem of immortality, but it's only for digital things.
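here is a minimal sketch of that weight-sharing trick, with made-up numbers and a four-weight linear model standing in for a trillion-weight network (an illustration of the idea, not any lab's actual training code): two identical clones take gradient steps on different data, then sync by averaging their connection strengths.

```python
# toy illustration of weight syncing between digital clones (invented example)
import numpy as np

rng = np.random.default_rng(0)
w_a = rng.normal(size=4)  # clone a's connection strengths
w_b = w_a.copy()          # clone b starts as an exact digital copy

def sgd_step(w, x, y, lr=0.1):
    # one gradient step on a linear model's squared error: the "increase or
    # decrease each connection strength to do better" signal described above
    grad = 2.0 * (w @ x - y) * x
    return w - lr * grad

# each clone learns from a different piece of data
# ("this one looks at one bit of the internet, that one at another")
w_a = sgd_step(w_a, x=np.array([1.0, 0.0, 0.0, 0.0]), y=3.0)
w_b = sgd_step(w_b, x=np.array([0.0, 1.0, 0.0, 0.0]), y=-2.0)

# syncing: average the weights. real models share trillions of numbers this
# way, versus the ~100 bits carried by a spoken sentence.
w_a = w_b = (w_a + w_b) / 2.0
print(w_a)  # both clones now embody both experiences
```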
so it will essentially know everything that humans know but more because it will learn new things it will learn new things it would also see all sorts of analogies that people probably never saw so for example at the point when gpt4 couldn't look on the web i asked it "why is a compost heap like an atom bomb?" off you go i have no idea exactly excellent that's exactly what most people would say it said "well the time scales are very different and the energy scales are very different." but then i went on to talk about how a compost heap as it gets hotter generates heat faster and an atom bomb as it produces more neutrons generates neutrons faster and so they're both chain reactions but at very different time and energy scales and i believe gpt4 had seen that during its training it had understood the analogy between a compost heap and an atom bomb and the reason i believe that is if you've only got a trillion connections remember you have 100 trillion and you need to have thousands of times more knowledge than a person you need to compress information into those connections and to compress information you need to see analogies between different things in other words it needs to see all the things that are chain reactions and understand the basic idea of a chain reaction and then code the ways in which they're different and that's just a more efficient way of coding things than coding each of them separately so it's seen many many analogies probably many analogies that people have never seen that's why i also think that people who say these things will never be creative are wrong they're going to be much more creative than us because they're going to see all sorts of analogies we never saw and a lot of creativity is about seeing strange analogies people are somewhat romantic about the specialness of what it is to be human and you hear lots of people saying it's very very different it's a computer we are conscious we are creative we have these sort of innate unique abilities that the computers will never have what do you say to those people i'd argue a bit with the innate um so the first thing i say is we have a long history of believing people were special and we should have learned by now we thought we were at the center of the universe we thought we were made in the image of god white people thought they were very special we just tend to want to think we're special my belief is that more or less everyone has a completely wrong model of what the mind is let's suppose i drink a lot or i drop some acid not recommended and i say to you i have the subjective experience of little pink elephants floating in front of me mhm most people interpret that as there's some kind of inner theater called the mind and only i can see what's in my mind and in this inner theater there's little pink elephants floating around so in other words what's happened is my perceptual system's gone wrong and i'm trying to indicate to you how it's gone wrong and what it's trying to tell me and the way i do that is by telling you what would have to be out there in the real world for it to be telling the truth and so these little pink elephants they're not in some inner theater these little pink elephants are hypothetical things in the real world and that's my way of telling you that my perceptual system's telling me fibs so now let's do that with a chatbot yeah because i believe that current multimodal chatbots have subjective experiences and very few people believe that but i'll try and make you believe it so suppose i have a multimodal chatbot it's got a robot arm so it can point and it's got a camera so it can see things and i put an object in front of it and i say point at the object it goes like this no problem then i put a prism in front of its lens and i put an object in front of it and i say point at the object and it goes there and i say "no that's not where the object is the object's actually straight in front of you but i put a prism in front of your lens."
and the chatbot says "oh i see the prism bent the light rays so the object's actually there but i had the subjective experience that it was there." now if the chatbot says that it is using the words subjective experience exactly the way people use them it's an alternative view of what's going on they're hypothetical states of the world which if they were true would mean my perceptual system wasn't lying and that's the best way i can tell you what my perceptual system is doing when it's lying to me now we need to go further to deal with sentience and consciousness and feelings and emotions but i think in the end they're all going to be dealt with in a similar way there's no reason machines can't have them all because people say machines can't have feelings and people are curiously confident about that i have no idea why suppose i make a battle robot and it's a little battle robot and it sees a big battle robot that's much more powerful than it it would be really useful if it got scared now when i get scared various physiological things happen that we don't need to go into and those won't happen with the robot but all the cognitive things like i better get the hell out of here and i better change my way of thinking so i focus and focus and focus and don't get distracted all of that will happen with robots too people will build in things so that when the circumstances are such that they should get the hell out of there they get scared and run away they'll have emotions then they won't have the physiological aspects but they will have all the cognitive aspects and i think it would be odd to say they're just simulating emotions no they're really having those emotions the little robot got scared and ran away it's not running away because of adrenaline it's running away because a sequence of processes in its neural net happened which have the equivalent effect to adrenaline so do you and it's not just adrenaline right there's a lot of cognitive stuff that goes on when you get scared yeah so do you think that there is conscious ai and when i say conscious i mean that it has the same properties of consciousness that a human has there's two issues here there's a sort of empirical one and a philosophical one i don't think there's anything in principle that stops machines from being conscious i'll give you a little demonstration of that before we carry on suppose i take your brain and i take one brain cell in your brain and i replace it this is a bit black mirror-like i replace it by a little piece of nanotechnology that's just the same size that behaves in exactly the same way when it gets pings from other neurons it sends out pings just as the brain cell would have so the other neurons don't know anything's changed okay i've just replaced one of your brain cells with this little piece of nanotechnology would you still be conscious now you can see where this argument is going yeah so if i replaced all of them as i replace them all at what point do you stop being conscious well people think of consciousness as this like ethereal thing that exists maybe beyond the brain cells yeah well people have a lot of crazy ideas um people don't know what consciousness is and they often don't know what they mean by it and then they fall back on saying well i know it cuz i've got it and i can see that i've got it and they fall back on this theater model of the mind which i think is nonsense what do you think of consciousness as if you had to try and define it is it
because i think of it as just like the awareness of myself i don't know i think it's a term we'll stop using suppose you want to understand how a car works well you know some cars have a lot of oomph and other cars have a lot less oomph like an aston martin's got lots of oomph and a little toyota corolla doesn't have much oomph but oomph isn't a very good concept for understanding cars um if you want to understand cars you need to understand about electric engines or petrol engines and how they work and that gives rise to oomph but oomph isn't a very useful explanatory concept it's a kind of essence of a car it's the essence of an aston martin but it doesn't explain much i think consciousness is like that and i think we'll stop using that term but i don't think there's any reason why a machine shouldn't have it if your view of consciousness is that it intrinsically involves self-awareness then the machine's got to have self-awareness it's got to have cognition about its own cognition and stuff but i'm a materialist through and through and i don't think there's any reason why a machine shouldn't have consciousness do you think they do then have the same consciousness that we think of ourselves as being uniquely given as a gift when we're born i'm ambivalent about that at present so i don't think there's this hard line i think as soon as you have a machine that has some self-awareness it's got some consciousness um i think it's an emergent property of a complex system it's not a sort of essence that's throughout the universe you make this really complicated system that's complicated enough to have a model of itself and it does perception and i think then you're beginning to get a conscious machine so i don't think there's any sharp distinction between what we've got now and conscious machines i don't think one day we're going to wake up and say "hey if you put this special chemical in it becomes conscious." it's not going to be like that i think we all wonder if these computers are thinking like we are on their own when we're not there and if they're experiencing emotions if they're contending with we think about things like love and things that feel unique to biological species um are they sat there thinking do they have concerns i think they really are thinking and i think as soon as you make ai agents they will have concerns if you wanted to make an effective ai agent let's take a call center in a call center you have people at present they have all sorts of emotions and feelings which are kind of useful so suppose i call up the call center and i'm actually lonely and i don't actually want to know the answer to why my computer isn't working i just want somebody to talk to after a while the person in the call center will either get bored or get annoyed with me and will terminate the call well if you replace them by an ai agent the ai agent needs to have the same kind of responses if someone's just called up because they just want to talk to the ai agent and they're happy to talk for the whole day to the ai agent that's not good for business and you want an ai agent that either gets bored or gets irritated and says "i'm sorry but i don't have time for this."
and once it does that i think it's got emotions now like i say emotions have two aspects to them there's the cognitive and behavioral aspect and then there's the physiological aspect and those go together in us and if the ai agent gets embarrassed it won't go red um so there's no physiology its skin won't start sweating yeah but it might have all the same behavior and in that case i'd say yeah it's having an emotion it's got an emotion so it's going to have the same sort of cognitive thought and then it's going to act upon that cognition in the same way but without the physiological responses and does that matter that it doesn't go red in the face i mean it makes it somewhat different from us for some things the physiological aspects are very important like love they're a long way from having love the same way we do but i don't see why they shouldn't have emotions so i think what's happened is people have a model of how the mind works and what feelings are and what emotions are and their model is just wrong what brought you to google you worked at google for about a decade right what brought you there i have a son who has learning difficulties and in order to be sure he would never be out on the street i needed to get several million dollars and i wasn't going to get that as an academic i tried so i taught a coursera course in the hope that i'd make lots of money that way but there was no money in that mhm so i figured out well the only way to get millions of dollars is to sell myself to a big company and so when i was 65 fortunately for me i had two brilliant students who produced something called alexnet which was a neural net that was very good at recognizing objects in images and so ilya and alex and i set up a little company and auctioned it we actually set up an auction where we had a number of big companies bidding for us and that company was called alexnet no the network that recognized objects was called alexnet the company was called dnnresearch deep neural network research and it was doing things like this i'll put this graph up on the screen that's alexnet this picture shows eight images and alexnet's ability which is your company's ability to spot what was in those images yeah so it could tell the difference between various kinds of mushroom and about 12% of imagenet is dogs and to be good at imagenet you have to tell the difference between very similar kinds of dog and it got to be very good at that and your company's alexnet won several awards i believe for its ability to outperform its competitors and so google ultimately ended up acquiring your technology google acquired that technology and some other technology and you went to work at google at age what 66 i went at age 65 to work at google 65 and you left at age 76 75 75 okay i worked there for more or less exactly 10 years and what were you doing there okay they were very nice to me they said pretty much you can do what you like i worked on something called distillation that really did work well and that's now used all the time in ai and distillation is a way of taking what a big model a big neural net knows and getting that knowledge into a small neural net
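Distillation here refers to the technique from Hinton, Vinyals and Dean's 2015 paper, where the small "student" net is trained to match the big "teacher" net's temperature-softened output probabilities. Below is a minimal sketch of that core loss with made-up logits, not a full training loop or any production code:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# made-up raw outputs (logits) for one example over five classes
teacher_logits = np.array([5.0, 2.0, 1.0, 0.5, -1.0])  # big net
student_logits = np.array([3.0, 2.5, 0.0, 0.2, -0.5])  # small net

# a high temperature softens the teacher's distribution so the student
# can see which wrong answers the teacher considers nearly right
T = 4.0
p_teacher = softmax(teacher_logits, T)
p_student = softmax(student_logits, T)

# distillation loss: cross-entropy between the soft distributions;
# minimizing this by gradient descent on the student's weights is what
# moves the big net's knowledge into the small net
loss = -np.sum(p_teacher * np.log(p_student))
print(f"distillation loss: {loss:.3f}")
```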
then at the end i got very interested in analog computation and whether it would be possible to get these big language models running in analog hardware so they used much less energy and it was when i was doing that work that i began to really realize how much better digital is for sharing information was there a eureka moment there was a eureka month or two um and it was a sort of coupling of chatgpt coming out although google had very similar things a year earlier and i'd seen those and that had a big effect on me the closest i had to a eureka moment was when a google system called palm was able to say why a joke was funny and i'd always thought of that as a kind of landmark if it can say why a joke's funny it really does understand and it could say why a joke was funny and that coupled with realizing why digital is so much better than analog for sharing information suddenly made me very interested in ai safety and that these things were going to get a lot smarter than us why did you leave google the main reason i left google was cuz i was 75 and i wanted to retire i've done a very bad job of that the precise timing of when i left google was so that i could talk freely at a conference at mit but i left because i'm old and i was finding it harder to program i was making many more mistakes when i programmed which is very annoying you wanted to talk freely at a conference at mit yes at mit organized by mit tech review what did you want to talk about freely ai safety and you couldn't do that while you were at google well i could have done it while i was at google and google encouraged me to stay and work on ai safety and said i could do whatever i liked on ai safety but you kind of censor yourself if you work for a big company you don't feel right saying things that will damage the big company even if you could get away with it it just feels wrong to me i didn't leave because i was cross with anything google was doing i think google actually behaved very responsibly when they had these big chatbots they didn't release them possibly because they were worried about their reputation they had a very good reputation and they didn't want to damage it openai didn't have a reputation and so they could afford to take the gamble i mean there's also a big conversation happening around how it will cannibalize their core business in search there is now yes and it's the old innovator's dilemma they're contending with to some degree i guess bad skin i've had it and i'm sure many of you listening have had it too or maybe you have it right now i know how draining it can be especially if you're in a job where you're presenting often like i am so let me tell you about something that's helped both my partner and me and my sister which is red light therapy i only got into this a couple of years ago but i wish i'd known a little bit sooner i've been using our show sponsor boncharge's infrared sauna blanket for a while now but i just got hold of their red light therapy mask as well red light has been proven to have so many benefits for the body like any area of your skin that's exposed will see a reduction in scarring wrinkles and even blemishes it also helps with complexion it boosts collagen and it does that by targeting the upper layers of your skin and boncharge ships worldwide with easy returns and a year-long warranty on all of their products so if you'd like to try it yourself head over to bondcharge.com/diary and use code diary for 25% off any product sitewide just make sure you order through this link bondcharge.com/diary with code diary make sure you keep what i'm about to say to yourself i'm inviting 10,000 of you to come even deeper into the diary of a ceo welcome to my inner circle this is a brand new private community that i'm
launching to the world we have so many incredible things that happen that you are never shown we have the briefs that are on my ipad when i'm recording the conversation we have clips we've never released we have behind-the-scenes conversations with the guests and also the episodes that we've never ever released and so much more in the circle you'll have direct access to me you can tell us what you want this show to be who you want us to interview and the types of conversations you would love us to have but remember for now we're only inviting the first 10,000 people that join before it closes so if you want to join our private closed community head to the link in the description below or go to daccircle.com i will speak to you there i'm continually shocked by the types of individuals that listen to this conversation um because they come up to me sometimes so i hear from politicians i hear from some real people i hear from entrepreneurs all over the world whether they are the entrepreneurs building some of the biggest companies in the world or you know early stage startups for those people that are listening to this conversation now that are in positions of power and influence world leaders let's say what's your message to them i'd say what you need is highly regulated capitalism that's what seems to work best and what would you say to the average person who doesn't work in the industry somewhat concerned about the future doesn't know if they're helpless or not what should they be doing in their own lives my feeling is there's not much they can do this isn't going to be decided by individuals just as climate change isn't going to be decided by people separating out the plastic bags from the compostables that's not going to have much effect it's going to be decided by whether the lobbyists for the big energy companies can be kept under control i don't think there's much people can do except try and pressure their governments to force the big companies to work on ai safety that they can do you've lived a fascinating fascinating winding life i think one of the things most people don't know about you is that your family has a big history of being involved in tremendous things you have a family tree which is one of the most impressive that i've ever seen or read about your great-great-grandfather george boole founded boolean algebra the logic which is one of the foundational principles of modern computer science you have your great-great-grandmother mary everest boole who was a mathematician and educator who made huge leaps forward in mathematics from what i was able to ascertain i mean the list goes on and on and on i mean your great-great-uncle george everest is who mount everest is named after is that correct i think he's my great-great-great-uncle his niece married george boole so mary boole was mary everest boole um she was the niece of everest and your first cousin once removed joan hinton was a nuclear physicist who worked on the manhattan project which is the world war ii development of the first nuclear bomb yeah she was one of the two female physicists at los alamos and then after they dropped the bomb she moved to china why she was very cross with them dropping the bomb and her family had a lot of links with china her mother was friends with chairman mao quite weird when you look back at your life jeffrey with the hindsight you have now and the retrospective clarity what might you have done differently if you were advising me i
guess i have two pieces of advice one is if you have an intuition that people are doing things wrong and there's a better way to do things don't give up on that intuition just because people say it's silly don't give up on the intuition until you've figured out for yourself why that intuition isn't correct and usually it's wrong if it disagrees with everybody else and you'll eventually figure out why it's wrong but just occasionally you'll have an intuition that's actually right and everybody else is wrong and i lucked out that way early on i thought neural nets are definitely the way to go to make ai and almost everybody said that was crazy and i stuck with it because it seemed to me it was obviously right now the idea that you should stick with your intuitions isn't going to work if you have bad intuitions but if you have bad intuitions you're never going to do anything anyway so you might as well stick with them and in your own career journey is there anything you look back on and say "with the hindsight i have now i should have taken a different approach at that juncture." i wish i'd spent more time with my wife um and with my children when they were little i was kind of obsessed with work your wife passed away from ovarian cancer no that was another wife okay um i had two wives who had cancer oh really sorry the first one died of ovarian cancer and the second one died of pancreatic cancer and you wish you'd spent more time with her with the second wife yeah who was a wonderful person why did you say that in your 70s what is it that you've figured out that i might not know yet oh just cuz she's gone and i can't spend more time with her now mhm but you didn't know that at the time at the time you think i mean it was likely i would die before her just cuz she was a woman and i was a man um i just didn't spend enough time when i could i think i inquire there because i think there's many of us that are so consumed with what we're doing professionally that we kind of assume immortality with our partners because they've always been there i mean she was very supportive of me spending a lot of time working and why did you say your children as well what's the well i didn't spend enough time with them when they were little and you regret that now if you had a closing message for my listeners about ai and ai safety what would that be jeffrey there's still a chance that we can figure out how to develop ai that won't want to take over from us and because there's a chance we should put enormous resources into trying to figure that out because if we don't it's going to take over and are you hopeful i just don't know i'm agnostic you must get in bed at night and when you're thinking to yourself about probabilities of outcomes there must be a bias in one direction because there certainly is for me i imagine everyone listening now has an internal prediction that they might not say out loud of how they think it's going to play out i really don't know i genuinely don't know i think it's incredibly uncertain when i'm feeling slightly depressed i think people are toast it's going to take over when i'm feeling cheerful i think we'll figure out a way maybe one of the facets of being human um is because we've always been here like we were saying about our loved ones and our relationships we assume casually that we will always be here and we'll always figure everything out but there's a beginning and
an end to everything as we saw from the dinosaurs i mean yeah and we have to face the possibility that unless we do something soon we're near the end we have a closing tradition on this podcast where the last guest leaves a question in their diary and the question that they've left for you is with everything that you see ahead of us what is the biggest threat you see to human happiness i think joblessness is a fairly urgent short-term threat to human happiness i think if you make lots and lots of people unemployed even if they get universal basic income um they're not going to be happy because they need purpose because they need purpose yes and struggle they need to feel they're contributing something and that they're useful and do you think that outcome that there's going to be huge job displacement is more probable than not yes i do that one i think is definitely more probable than not if i worked in a call center i'd be terrified and what's the time frame for that in terms of mass jobs i think it's beginning to happen already i read an article in the atlantic recently that said it's already getting hard for university graduates to get jobs and part of that may be that people are already using ai for the jobs they would have got i spoke to the ceo of a major company that everyone will know of lots of people use and he said to me in dms that they used to have just over 7,000 employees he said by last year they were down to i think 5,000 he said right now they have 3,600 and he said by the end of summer because of ai agents they'll be down to 3,000 so it's happening already yes he's halved his workforce because ai agents can now handle 80% of the customer service inquiries and other things so it's happening already so urgent action is needed yep i don't know what that urgent action is that's a tricky one because that depends very much on the political system and political systems are all going in the wrong direction at present i mean what do we need to do save up money like do we save money do we move to another part of the world i don't know what would you tell your kids to do if they said "dad like there's going to be loads of job displacement."
because i worked for google for 10 years they have enough money okay okay [ __ ] so they're not typical what if they didn't have money train to be a plumber really jeffrey thank you so much you're the first nobel prize winner that i've ever had a conversation with i think in my life so that's a tremendous honor and you received that award for a lifetime of exceptional work pushing the world forward in so many profound ways that have led to great advancements and things that matter so much to us and now you've turned this season of your life to shining a light on some of your own work but also on the broader risks of ai and how it might impact us adversely and there's very few people that have worked inside the machine of a google or a big tech company that have contributed to the field of ai that are now at the very forefront of warning us against the very thing that they worked upon there are actually a surprising number of us now they're not as public and they're actually quite hard to get to have these kinds of conversations because many of them are still in that industry so as someone who often tries to contact these people and invite them to have conversations they often are a little bit hesitant to speak openly they speak privately but they're less willing to speak openly because maybe they still have some sort of incentives at play i have an advantage over them which is i'm older so i'm unemployed so i can say what i like well there you go so thank you for doing what you do it's a real honor and please do continue to do it thank you thank you so much people think i'm joking when i say that but i'm not plumbing is physical work and plumbers are pretty well paid [Music]