You know, the main thing I want people to take away from this is to feel like, even if they're not somebody with deep technical expertise when it comes to AI, it doesn't mean that their opinion about AI doesn't matter and isn't valid. Actually, it's incredibly important, and all of society will shape how AI ultimately emerges: how we use it as much as how we build it. And so that, I think, encourages us all to think quite carefully about what type of society we want to live in in future.

Hi Verity, thank you for joining me on the show.

Hi Richie, thanks for having me.

Excellent. So just to begin with, why is history important for people who are working in AI?

Well, I think history is important for anyone, really, that's contemplating building the future. It's really important that we understand first where we come from, and in that respect, obviously anybody that's working in AI, who is attempting to build the future, should be taking a look back to see what's come before them. I think it's particularly important if you're trying to think carefully about how AI is integrated into society, how it's regulated, how it's governed, how societies will react to it. I think you can learn a huge amount from looking at historical examples of transformative technology and how societies reacted to and worked with those. That brings a lot of insight, and helps us maybe avoid past mistakes, and maybe emulate some things that the people who have come before us have done well.

Okay, yeah, certainly learning from past mistakes seems like a very important thing to do, and your book of course is filled with examples of historical occasions where there have been technological breakthroughs. So one of the things you talk about in your book is the space race, and I think this is sort of remembered as this big occasion where lots of different people and teams from across the US came together to get the first man on the Moon, and it was sort of all about the goal of
advancing science. So is that a rose-tinted view of what happened, or is it accurate?

It's perhaps slightly rose-tinted, but I think that's part of what's so interesting about the story. It really is an incredible technological achievement, and in that way there's no rose-tinting about it; it's an amazing feat to have set this ambition and then been able to fulfil it, which was by no means certain. But it's also, and this is what I write about particularly in my chapter on space in the book, an incredible feat of diplomatic and legal innovation as well as technical innovation. And that's what I'm super interested in: the politics behind these stories. I think that technology is deeply political, and we don't often think about it that way, but if you think about the choices that are made, the trade-offs that are made, what gets funded, what doesn't, who works on what, those are all pretty political questions. In the case of the space race, Kennedy wasn't actually that interested in space in and of itself; he just wanted to find a platform, a sort of competitive space, where he thought the United States could beat the Soviet Union, and that was for geopolitical goals. He felt that if he could show off the might and the talent and the ability of free science in the United States, that might encourage other non-aligned countries, and people considering the two models, to come around more towards the US's way of thinking. And he was also thinking about war. This was the Cold War; it was not very long since the Second World War, and it was a very, very dangerous time for the entire planet, because of these two nuclear powers and this very real threat of nuclear war. So the capabilities put into space technology were often about spy satellites, intercontinental ballistic missiles, and these kinds of things. What's interesting, then, is that we now look back on space as this incredible moment
of unity and scientific achievement, but it's based in these very political, and maybe sometimes cynical, nationally interested decisions. But I think what that teaches us for AI is that you can have something you're concerned about from your own self-interest, yet still make decisions that try to encourage people and inspire people and uplift people, rather than use that competition as something to divide people and create antagonism and tension. People say now that, well, you can't possibly have cooperation, it's too tense and uncertain a time geopolitically. But of course, you couldn't get anything more tense than the height of the Cold War, and yet against that backdrop we were able to see the United Nations Outer Space Treaty of 1967, which, as I write in the book, legally determined that when they did finally set foot on the Moon a couple of years later, they did so first and foremost as representatives of humankind, and of their nation state second. That UN treaty determined that space was the province of all mankind, and not just something belonging to whoever got to the Moon first: they may have planted a flag on it, but they didn't own the Moon, and they certainly didn't plant nuclear weapons on it pointing down at their adversaries. These are decisions that politicians can make, that builders can make, that society can make. The space race was actually quite unpopular for a while, and that caused some real tension over the funding for it. So there are just these fascinating aspects of these stories that are, if not under-researched, certainly under-reported, and that people are less aware of. I think that leads us, perhaps not necessarily to rose-tinted glasses, but to miss some really key parts of the puzzle.

Okay. Yeah, it's kind of fascinating, the idea that some people didn't really care about space itself; it was all about the
geopolitics and the military aspects of this, and the science was just a sort of side effect.

When it came to science, he was most interested in desalination. If he could get drinking water from sea water, that was actually his scientific passion, what he was interested in. But you couldn't afford to do both. Not necessarily just in terms of funding, although that was a big part of it; as I say, it cost a lot of money to go to the Moon, and Congress often weren't happy, and citizens often weren't happy, with that. But also, political time and attention is a scarce resource, and there's only so much effort he could put into corralling people, using his dynamism and his magnetic leadership powers to encourage them towards a goal. So in the end, for geopolitical goals, he picks space. In the book I quote these amazing recordings we have of him talking in the Oval Office to his head of NASA, where he says: I'm not that interested in space. Why are we spending all this money on space rather than, maybe, cancer research, for example? The reason we're doing it is to beat the Soviets. Which really shows, I think, why this was done. But that doesn't mean that it wasn't also a really inspiring, incredible technology. I think you can do both, if you think carefully about it.

And for people who are involved in either creating AI, or involved in maybe creating policy decisions about AI, how should they try to replicate those sorts of positive effects, where you have that inspiring moment while avoiding some of the thornier issues around military uses and things like that?

Well, look, AI is going to be used in the military; there's no question, it already is. So I think the people building AI need to think carefully about just what the purpose is of what they're building, why they're doing it. And it may be that that's what they want to
work on. But if they're trying to aim for an inspiring moment that gets people excited, and attempts to uplift and support the greatest number of people they can around the planet, then they should be thinking carefully about what the purpose is of what they're building. That really is a decision that should be made at the start, I think, and I write about this in the book: what's your purpose, why are you doing this? My former employer, for example, DeepMind: the founder and CEO there feels very strongly about AI for science, AI to help scientists and to turbocharge science, which might help the planet with some really tricky problems. I think that's a really noble way of looking at AI. Can we use AI to help with the climate crisis? Can we use it to do advanced diagnostics and catch diseases sooner? These are incredible things. So it really comes down to intentionality: do you know why you're building what you're building?

Okay, having a reason for why you're doing something, that sounds like an excellent idea. Maybe often overlooked?

I think sometimes it is. Or sometimes the reason is just to make money, and there's nothing wrong with making money, but can that be accompanied by something more? If you want to build something inspiring, then start from: what society do you want to live in? What society do you want to see in the future? What are you trying to build that will really help?

You mentioned that the whole space race took place against the backdrop of the Cold War, and I think with AI there have been a few people claiming that AI could cause huge problems, right the way up to things like extinction. So is there a parallel between this sort of fear of AI and the fear of nuclear war that happened back in the 60s?

I think it's comparable in terms of the level of discussion and hype
around it. I'm not sure it's comparable in reality. Obviously, in reality, during the Cold War there were huge nuclear weapons arsenals, and during the Cuban Missile Crisis we were very close to nuclear war. That's part of what chastened and humbled Kennedy and led him, although not many people know this, to go to the UN and actually suggest that maybe there should be a joint Moon mission, and not just something that the US did on its own. I write in the book about why that's a really incredible moment, and it leads to this United Nations treaty that we get later, which determines that today space is the province of all mankind; we have things like the International Space Station, and nobody owns the Moon. So we were a lot closer to nuclear war then than I think we are to some of the more extreme suggestions at the moment that AI might cause human extinction. Now, to be the most generous possible to those arguments: I think some of the people advocating that are concerned about bad actors using AI in some way to interfere, e.g. with critical infrastructure and things like that. And of course I think cybersecurity, and making sure that we're resilient and considering those issues, are really, really important. But I don't think it's comparable to having weapons that can potentially wipe out all of humanity. I don't subscribe to the view that AI is going to get smarter than us and overtake us and somehow kill us all; that's just not something that I'm particularly focused on.

Okay, that's good. So Terminator is not happening in the near future, then?

No, I don't think so.

Good, good. Okay, so another story from your book is around in vitro fertilisation, and this seems like a fairly uncontroversial technology now, but when it was first introduced in the 1970s, I believe there were a lot of worries around it. So can
you talk me through what people were worried about back then?

Yeah, it's a fascinating example that we don't think of, but it's something much more relevant to AI than the atomic bomb analogy. This is a technology which emerged with a bang in 1978, when the first baby was born using IVF techniques in the UK. At first, people were really excited: this was a really cool and exciting new scientific capability and technological marvel, and it was especially exciting to people in the UK because it was this UK scientific achievement. But quite soon afterwards, you see that people were getting concerned, not just because of IVF itself, although that was one of the concerns, but also because of some of the techniques that allowed IVF, such as human embryology research, plus some concerns about the growing ability to edit genetic sequences. People started questioning: what does this mean? Is this natural? What does it mean to be human? What does this say about the family and our future? It's hard to believe now that something so normal, such a totally accepted and standard part of our society, was ever controversial, which I think really gives us pause for thought when it comes to AI. At the moment we're having exactly the kinds of discussions you saw around the biotech explosion in the 70s and 80s, with people questioning what it means to be human, these deep philosophical questions. What I take great heart from is that if we're thoughtful, and we regulate appropriately, and listen to and trust people's concerns, hopefully AI will also just become a normal, accepted part of life. It already is in many ways, right? Of course, we have AI all around us all the time, recommending us something to watch, or a song to listen to, or filtering spam out of our inbox, or whatever it may be. But these more advanced systems that people are so
scared of, and so frightened of, right now: I think they probably will just gradually, over time, become something much more normal and accepted. The key lesson from the IVF debate, and the human embryology discussion that happened during this time, and this is certainly what I write and say in the book, is that people's concerns were respected; they were listened to, and they were acted upon. So the government said: look, we can't regulate this technology. It's so new, we don't know yet what it's going to be and what it's going to mean. But what we can do is ask a group of experts to look at it. And so they set up this thing called the Warnock Commission, which was a kind of interdisciplinary commission, independent of government but funded by government, to look at all the issues that had been thrown up by this new technology. It was led not by a biologist but by a philosopher, and she had some public policy experience, having led a commission on educational issues before. She built this very unique group of different disciplines: there were religious scholars on there, legal scholars, social workers, biologists of course. And they did a huge consultative process, up and down the country, meeting people, hearing from expert witnesses. They looked at this and wrote a report that said: look, people are concerned about these things, particularly the issue of what it meant to do research on embryos. And so they introduced, or rather suggested, something called the 14-day rule. This was them saying: look, we think people need to feel like there's a limit; they just don't want to see this stuff get out of control. And so we're going to introduce this limit: you can do research on human embryos, but only up to 14 days, and after that, no more. When it was suggested, this was actually really controversial in the scientific community, who said: well, there's no scientific reason for this. I
mean, why not 15 days? Why not 16 days? Which, by the way, is exactly the type of conversation I can imagine happening now whenever anybody suggests any regulation around technology, and I have indeed heard lots of those very rationalist pushbacks. But what Baroness Mary Warnock, who led the commission, said was that people in a democracy have the right to feel that their voices are heard, and if we put a limit on this, then that will help innovation flourish. And indeed it did: this was eventually adopted in the UK, and the scientists who opposed it soon became supportive when they realized that the alternative might be human embryology research being banned altogether, which was a very real threat. So they agreed to limits to see off a ban, and indeed the UK has a flourishing multi-billion-pound life sciences sector now. So I think what this example shows us, amongst many other things, including how to run a process that involves a wider group of diverse voices and participation in the discussions, is that some limited guardrails can in fact allow innovation to flourish. Not only can allow, but do allow, in a way that, if you don't put any guardrails on and people really worry, they might overreact, over-regulate, or even, just through complete distrust and sort of horror, recoil from using these new technologies at all.

That's absolutely fascinating, because I think a lot of people, if they're building something, their gut reaction is going to be: okay, we don't want any regulation at all, it's going to kill innovation. But actually, having those kinds of guardrails in place, and that consultative process, meant that it actually increased what was possible.

Yes, yeah. I think the example I talk about in the book is around live facial recognition. Are we comfortable
with things like live facial recognition? Are there areas where we actually think maybe there should be limits on AI, where we should say: maybe not here, actually, we don't want AI in this particular arena, perhaps because it's very sensitive? It doesn't mean it has to be forever. You've got this happening in the European Union with the AI Act: they're saying that there are certain areas that are just so risky they're kind of unacceptable; I think they call it unacceptable risk. Does that help wider society look and think: well, the people who are supposed to be holding this stuff accountable and scrutinizing it and regulating it seem to be on top of this, so I feel like I can trust it, and therefore lean into it more? And it gives businesses greater clarity about the sort of field on which they can operate.

Yeah, this is really interesting stuff. Related to this, the process that went through the Warnock Commission lasted, I think, months or maybe years, going from "hey, the public has a concern about this" to things appearing in legislation. It sounds like a really long bureaucratic process, but it had a good outcome. Can you talk me through: is a long bureaucratic process a good idea?

Yeah. I think it's funny, now we have this obsession with everything happening so quickly, and people say, oh, the politicians can't possibly keep up with AI. But sometimes it's going to be necessary, and indeed a good thing, to have a slow process where we don't overreact too quickly, and maybe we respond later. In the case of the Warnock Commission: the first IVF baby is born in 1978; the Warnock Commission is not set up until 1984; it reports in 1985. And because of huge pushback to start with, and political decisions, and things getting in the way, like elections and people feeling it's too controversial, the
actual legislation, which sets up the independent regulatory body we have now in the UK, called the Human Fertilisation and Embryology Authority, or Human Embryology and Fertilisation Authority, I can't remember exactly which way round the E and the F are now, wasn't passed into law until 1990. But I don't think any of us, looking back, think: gosh, I can't believe they didn't catch up and do that quicker. So sometimes, understandably perhaps, and this is again back to your first question of why it's important to look at history, we feel that if everything isn't done right away, right now, it means it's going to be useless. But that's really not the case at all. The arc of history is long, and technology sometimes takes a long time to fully flourish. Just look at the internet: we're still in the early, early days of the internet, really, in terms of the whole of history and where I'm sure it will go. So I think there's time, and sometimes taking that time to be consultative and deliberative can actually be a better thing.

Okay, yeah. I'd rather politicians took their time than rushed through these decisions, I think.

Right, right. They get it in the neck either way, politicians. They have a hard time if they don't regulate, and they get a hard time if they do. My background is in politics and in technology, so I kind of speak both those languages, and I have great sympathy for the fact that the two kind of misunderstand each other. But one thing that often annoys me, and I've been guilty of this too, is technology people saying: politicians don't understand the technology. Well, technology people need to understand politics too, and often don't understand it very well at all, from my experience. So sometimes I feel politicians
get criticized because they haven't regulated quickly enough, and then they get criticized because they are considering regulation. It can be quite a difficult balance to get right, when it comes to how to ensure that we can grow and innovation can flourish, but also curb societal harms and ensure societal trust in the process.

Okay, so since there are a lot of technology people listening to this, what do you think technology people should know about politics?

Well, that it's very difficult, and that it's often a bunch of people also trying their best. I've been working in technology for the best part of the last decade or more now, and I know that a lot of technologists are people trying hard, doing their best to build something that they think is going to really make a difference and improve people's lives. I'd love for technologists to understand that that's true of a lot of politicians too. There are definitely some bad ones; I'm sure we can all think of some people that maybe aren't in it for the right reasons and don't do a lot of good. But behind the scenes, in the UK at least, which is what I can speak to most eloquently, though of course it's true beyond just the UK, there are a lot of people trying their hardest to build something, whether it be policy, regulation, or just the type of future that they think will make the world better. They're hugely well intentioned, and when they don't understand something like technology, they want to. I wish that the two communities... this is why I left politics to go into tech, because I felt like the two communities weren't talking to each other, and it was really important that they did. I wish the communities would talk more, with more understanding of where each other is coming from. I remember Obama saying in 2016, he guest-edited an edition of Wired magazine, and either in that
interview or at the event that he did around it, he said he gets so many technologists coming to him and saying: why aren't you using this cool new technology to deliver better public services, and embracing tech more in government? And his point to them was: well, I'd love to, but you have to remember that my users are sometimes very vulnerable people. I'm administering benefits to people, and introducing legislation and products that really deeply affect people's lives, in a way they don't have much recourse against; it's not like they can just choose another service. So I have to be really, really careful. And so I think a bit more patience and understanding from the technology community towards politicians would be a good thing. That's not to say, by the way, that the politics community doesn't have a lot that it has to understand and learn about technologists as well, but that was not the question.

Sure, yeah. Patience is a good virtue for a lot of people in a lot of situations, so that seems useful. All right, so I'd also like to talk about the internet. This started off as a military technology back in the late 60s, and it's gone through a lot of evolutions since; it's now basically used for everything. Do you want to talk me through the history of this a little bit, and what you think the different phases of the internet are?

Yes. So, as we've discussed, the first example in my book is the space race, and the second one is IVF and human embryology, and then both the third and fourth chapters are the internet, but I've split it into pre- and post-9/11. Because if you need an example of how political technology is, or how much politics affects how a technology develops, the internet is a really fascinating one. I knew the story pretty well, as I'm sure most of your listeners do, about the internet
emerging from ARPA, which is now DARPA, the Advanced Research Projects Agency that Eisenhower set up in response to Sputnik, to discover the weapons of the future. Amongst many other things, they come up with the internet, or certainly come up with the funding that helps produce the early internet. But what I think is less understood and discussed is the process by which that public network infrastructure became privatized. That happens across the 80s, which coincides, of course, with a time when politics, both in the US and in the UK and elsewhere, was embracing this deregulation-and-privatization political approach that was quite in fashion at the time. It was not consulted on widely; it was not a decision that was made by Congress; it was something that sort of happened, and at the time it was extremely controversial for how little people were consulted about the process. To fast forward a little bit: we get to a situation where, because of this privatization, there's quite a lot of controversy around different aspects of internet governance, one of those being the domain name system, which is run basically by one man, at Stanford... no, not Stanford, I think the University of California; I may be misremembering. But he runs the whole of the domain name system, including deciding, for example, who gets the country-level domains, and who he decides to work with to make those kinds of huge, formative decisions. By this point, business is quite heavily involved in the internet, and since the Clinton and Gore government, who are now in post after the 1992 election, are very keen that science and business unite to project American power outwards, the government steps in to try to bring some order to the chaos, essentially. At the same time, they don't want to step on the toes of the kind of organic internet
community that has built this thing, and has built a lot of interesting rules and regulations around it themselves; it has been self-governing to a large degree. And they come to this compromise position: to introduce ICANN, the Internet Corporation for Assigned Names and Numbers, which we have today. I think this is such a fascinating body. It's fascinating because it's multi-stakeholder, which means it's not just technologists that are involved; it's lots of different groups of people, and in some cases just anybody that's interested can turn up to these meetings and have an input. It's multi-stakeholder, it's voluntary, it's a non-profit corporation, and it oversees some of the most critical infrastructure that our entire society is built upon today. The government's role is incredibly interesting: this light-touch but critical guiding hand in terms of getting to that place. When it's finally set up in 1998, the government of the time, the Clinton-Gore government, and of course Al Gore had been very interested and involved as a politician in the internet since the early days, I actually tell the story of the internet through his life story in the book, he has a fascinating journey himself, they say: we'll set this thing up, and then within two years there'll be no more role for the US government. By the way, at this point they have moved the internet into the Department of Commerce rather than the Department of Defense, so you can see how these political decisions are affecting how the internet starts to be shaped, and who it starts to be shaped by. It's now no longer seen as this military tool, if it ever was seen like that, but as a tool of economic power. But in the year 2000, Gore loses the election to Bush, and very quickly afterwards we have 9/11, and the George W. Bush administration says: oh no, no, no, we're not
doing this transition out of American control that was promised in the year 2000; no way are we giving up the role that we have here in internet governance. And it takes a further 16 years: it's not until 2016 that this change is finally made, and by then it has to be done quite quickly, and very sensitively, to avoid a potential breakup and balkanization of the internet entirely, which was spurred on in large part by some of the actions of that Bush government after 9/11, including the, some may say excessive, certainly controversial, uses of internet surveillance. So the internet, and the politics of the internet, is fascinating, and again with lots of lessons for us with AI today: in terms of the importance of a multi-stakeholder model, the important role of government, how these political decisions will affect things, and also, I think, in terms of really ensuring that we are leading by example in our use of technology, so as not to give any potential adversaries any weapons to use against us through our own behavior.

It's of course a fascinating story, and something that affects everyone. I do find it amazing that all the domain name registration, the stuff that ICANN does now, was originally a single person just controlling everything.

Yeah, at the very beginning it was just a text file.

Just astonishing. That is absolutely fascinating, and it seems like this has been a common theme with the internet: it was originally all about decentralizing systems, and then you have this centralization, a single person being the point of failure for the domain name system. So are there any lessons for AI in terms of this sort of back and forth between centralizing and decentralizing power?

Look, the internet is this open network architecture, and the decentralized nature of that makes it this incredible opportunity for people to build upon, including, of course,
Tim Berners-Lee building the World Wide Web on top of the open protocols that Vint Cerf and others came up with. So I think that's definitely bleeding across into AI at the moment, where there's a big discussion around open-source models versus closed, proprietary models. I know people very well on both sides of that debate, and I think they're often coming at it from a very genuinely held, authentic belief that theirs is the right approach. I wouldn't want to draw too many lessons from the uniqueness of that open architecture across to AI, but of course you can argue that if you make your foundation model open source and people can build upon it, you might get more exciting and interesting innovation. And I suppose there is an argument that that openness breeds a level of scrutiny and accountability that ultimately ends up strengthening the technology itself, in a way that certainly leads to more trust, because people can get their hands on it and take a look at it themselves. So I think there may be some lessons there, in terms of that decentralization, for people considering open versus closed models.

You mentioned the idea of surveillance, and I think this is one of the cases where, and I like a good conspiracy theory, which are always silly and over the top, even the conspiracy theorists didn't realize how bad the surveillance going on actually was. There was the case of Edward Snowden revealing how much the US government was spying on people, back in the early 2010s. Do you want to talk me through how this came about, and whether there are any lessons here for AI?

Well, the reason I write about that in the book is because it's so relevant to what happens with the internet. The fourth chapter, which is the internet post-9/11, opens with this meeting that happened in
Dubai in 2012, when there was a proposal from a number of countries essentially to disband ICANN and move to a completely different model, which, as I say in the book, would have effectively broken the internet. I interview Larry Strickling, who was an American official at the time and was responsible for trying to save ICANN and the multi-stakeholder model, and keep the internet open.

Why that is linked is because of the shock and horror at quite how deep and broad US and UK (to be clear) surveillance had gotten by that point. People weren't shocked that they were spying; all nations spy. But the details of it became public, and, as I say, the breadth and depth of it was surprising to people. And that didn't just upset adversaries, the nations you might expect the UK and the US to be spying on, like Russia; it infuriated allies, like the then German Chancellor Angela Merkel, who I think said something like "friends shouldn't spy on each other." So it became something that was very difficult for the US and the UK in their geopolitical and diplomatic conversations. It put them on the back foot, and they lost some of that moral high ground. It also meant there was potentially more support for disbanding ICANN, which still hadn't gone through the transition that was promised in 1998. People were suspicious: well, why are the US government hanging on to it now? If you ask people like Larry, he'll say that the role of the US was actually not that significant; it was a unique leadership role, but it wasn't control over anything. But that's of course not how it was seen. You get the benefit of the doubt until people find out that you have maybe been abusing that position.

In the end it was the Obama administration who were trying to repair relations, and that change in political leadership was helpful in bringing other nations around to keeping ICANN, and in leading the US through that transition process. The end of that chapter is the Obama administration, Larry primarily, trying to get this transition through, and people like Donald Trump and Ted Cruz saying, "This is anti-American, what you're doing." You could imagine that if the Obama government hadn't managed to make that happen before the election, the Trump administration surely wouldn't have let it happen, and what that might have meant for the internet we have today.

So the reason I discuss the surveillance issues there is just to remind us, as democratic nations that hold ourselves to a set of values, that we need to think about that when it comes to AI. If other nations see us using AI primarily to increase the level of surveillance, as I said, live facial recognition or more intrusive methods, then why wouldn't they do that too, and why wouldn't it lead them to distrust our motives and what we're doing? Whereas if we can show an AI that's the best of us, show that AI can do these incredibly positive things around health and wellbeing and climate and so on, as I've said, then that might encourage people a bit more towards thinking that maybe the democratic model for AI is more appealing. And I think that benefits us in terms of realpolitik, but also in terms of our own societies and the type of society that I think we all want to live in.

Okay, so it sounds like a lot of cooperation is going to be needed across borders just to make sure these good things happen. So, just to wrap up: since the whole point of this is about shaping the future of AI, if people want to get involved in this, what can they do?

Well, there are a bunch of different things, depending on your level of interest and time and how
involved you want to be, and what you're already doing. If you're already building AI, then really learn from the past, read the book, and understand how this technology is not built in a vacuum but is built in a context: a context of human history and politics and societal norms and values. What you're building will be shaped by those, and is already being shaped by those. That's really empowering, because if you think that these are human decisions that we're making that guide the future of technology, then that gives you a lot of potential power: what decisions are you making, who is this technology for, who are you building for, what are you building for, and why are you doing it?

If you're outside of AI but maybe concerned about certain aspects of it, there are lots of different things you can do. You can get involved with your union, or you can write to your democratically elected representative to say: these are my concerns, what are you doing about it, or what are you going to do about it? You can put your hand up inside your company to say that you want to get involved in the AI transition, if the company is thinking about how it's consulting people. There are all sorts of different ways. But primarily, the main thing I want people to take away from this is to feel that even if they're not somebody with deep technical expertise when it comes to AI, it doesn't mean that their opinion about AI doesn't matter or isn't valid. Actually, it is incredibly important. All of society will shape how AI ultimately emerges; how you use it matters as much as how you build it. And so I think that encourages us all to think quite carefully about what type of society we want to live in in future, and how AI is going to contribute to us getting there more quickly, hopefully, or in other ways maybe prevent us. I think we want to guard against the latter and really encourage the former, and that has to be something that everybody's involved in.

That's a very positive message: everybody should have some kind of say in how AI is used, and there are lots of ways to get involved. All right, thank you very much for your time, Verity.

Thanks for having me. Thanks so much. [Music]