Transcript for:
AI Industry and Society Impact

I went to school with a lot of the people that now build these technologies. I went to school with some of the executives at OpenAI. I don't find these figures to be towering or magical; I remember when we were walking around dorm rooms together in our pajamas, and it instilled in me this understanding that technology is always a product of human choices, and different humans will have different blind spots. If you give a small group of those people too much power to develop technologies that will affect billions of people's lives, inevitably that is structurally unsound. Artificial intelligence is the backbone of some of the biggest companies in the world right now; multi-trillion-dollar companies can talk about nothing else but AI. And of course, whenever it's discussed in the media, by politicians and civil society, it's compared invariably to the steam engine: it is going to be the backbone for a new machine age. Some people are really optimistic about the possibilities it will bring. They are the boosters, the techno-optimists, the techno-utopians. Others are doomers; they're down on AI. AGI, artificial general intelligence? It's never going to happen, and if it does, well, it's going to look like The Matrix, or maybe even the Terminator and Skynet. We don't want that, do we? Today's guest, however, is not speculating about the future. Instead, they're very much immersed in the present, and indeed the recent past, of the artificial intelligence industry. Karen Hao went to MIT. She studied mechanical engineering. She knows the STEM game inside out, but she made a choice to go into journalism and media, to talk about these issues with a fluency and a knowledge that very few people have. Rather than speculate, what Karen has done with this book is talk to people in the field: 300 interviews with 260 people in the industry, 150 interviews with 90 employees of OpenAI, both past and present. She has access to emails, company Slack channels, the works. This is the inside account of OpenAI and the
perils of artificial intelligence: Big Tech and big money coming after, well, pretty much everything. It's an amazing story, told incredibly well. I hope you enjoy this interview. Karen Hao, welcome to Downstream. Thank you so much for having me, Aaron. It's a real pleasure to have you on. Right, we say that to everybody, every guest, but I have to say, and this has had rave reviews, even though I've lost the dust jacket: Empire of AI, this huge tome you've written, 421 pages, I think, not including the acknowledgements. A really, really interesting book. It's about AI, this burgeoning industry in the United States around artificial intelligence. That word has been in circulation since the 1950s, I believe. Yeah. Before we drill down into your book, what is AI, and what do people mean by AI when they talk about it in 2025 in Silicon Valley? You would think this is the easiest question, but this is always the hardest question that I get, because artificial intelligence is quite poorly defined. We'll go back first to 1956, because I feel like it helps understand a little bit about why it's so poorly defined today. The term was originally coined in 1956 by this Dartmouth assistant professor, John McCarthy, and he coined it to draw more attention and more money to research that he was originally doing under a different name. And that is something he has explicitly said: a few decades later, he said, "I invented the term artificial intelligence to get money for a summer study."
And that marketing route to the phrase is part of why it's really difficult to pin down a specific definition today. The other reason is that generally people say AI refers to the concept of recreating human intelligence in computers, but we also don't have a scientific consensus around what human intelligence is. So quite literally, when people say AI, they're referring to an umbrella of all these different types of technologies that appear to simulate different human behaviors or human tasks. But it really ranges from something like Siri on your iPhone all the way to ChatGPT, which behind the scenes are actually really, really different ways of operating. They're totally different scales in terms of the consumption of the technologies, and of course they often have different use cases as well. So right now, when OpenAI or Meta use the word AI in regards to their products, what specifically are they talking about? Most often they are now talking about what are called deep learning systems. These are systems that train on loads of data: you have software that can statistically compute the patterns in that data, and then that model is used to make decisions, or generate text, or make predictions. So most modern-day AI systems built by companies like Meta, by OpenAI, by Google are now these deep learning systems. So deep learning is the same as machine learning is the same as neural networks? These are synonyms? Deep learning is a subcategory of machine learning. Machine learning refers to a specific branch of AI where you build software that calculates patterns in data; deep learning is when you're specifically using neural networks to calculate those patterns. One of the founding fathers of AI used to call AI a suitcase word, because you can put whatever you want in the suitcase and suddenly AI means something different. So we have this suitcase word of AI, and then
under that, any data-driven AI techniques are called machine learning, and then any neural-network data-driven techniques are called deep learning. So it's the smallest circle within this broader suitcase word. So deep learning and neural networks are kind of interchangeable? Not exactly, in the sense that neural networks are referring to a piece of software, and deep learning is referring to the process that the software is doing. Right. Do you get upset when politicians, so in this country we have a prime minister called Starmer, say "we think the NHS can save 20% by using AI applications"? Do you sort of think, my God, these people have no idea what they're talking about, because that is such an expansive term? Its political convenience is precisely that it doesn't mean anything. It does frustrate me a little bit. I often use the analogy that AI is like the word transportation. Transportation can refer to bicycles or rockets or self-driving cars or gas-guzzling trucks; they're all different modes of transportation that serve different purposes, with different cost-benefit analyses. And you would never have a politician say, "we need more transportation to mitigate climate change." You would be like, but what kind of transportation? What are you talking about? Or, "we need more transportation to stimulate the economy." There is a vagueness around the AI discussion that is really unproductive, and I think a lot of it leads to confusion, where people think AI equals one thing, and AI equals progress, and so we should just have all of it. But actually, to use the transportation analogy: having more bicycles, having more public transit sounds great, but if someone were actually referring to using rockets to commute from, you know, Dublin to London, and we were like, everyone should get a rocket now, like that's going to bring us
more progress, you'd be like, what are you talking about? And that's effectively what these companies are doing with general intelligence. When you're giving people tools for free, with regards to generative AI, to just generate stupid images of nonsense, that's kind of what we're doing, right? I presume you would take the analogy to that level; it's like saying, "Let's use a rocket to get from Dublin to London to Paris." Yeah, exactly, it's not fit for the task. And the extraordinary environmental cost of flying that rocket, when you could have flown a much more efficient plane to do the same thing, is like, what are you doing? That's one of the things that people don't really realize about generative AI: the resource consumption required to develop these models, and also to use these models, is quite extraordinary, and oftentimes people are using them for tasks that could be achieved with highly efficient, different AI techniques. But because we use the sweeping term AI to mean anything, people just think, "Oh yeah, I'm just going to use ChatGPT as my one-stop-shop solution for anything AI-related."
So right now, data centers globally, I think, are about 3-3.5% of CO2 emissions, and I think the data centers for AI are a tiny fraction of that, but obviously they're growing at an extraordinary pace. Yeah. Are there any numbers out there with regards to projected CO2 emissions of data centers globally 5, 10, 15 years from now, or is it so recent that we can't really speculate about the numbers involved? There are numbers around the energy consumption, which you could then use to try and project carbon emissions. There was a McKinsey report that recently projected that, based on the current pace of data center and supercomputer expansion for the development and deployment of AI technologies, we would need to add around half to 1.2 times the amount of energy consumed in the UK annually to the global grid in the next 5 years. Wow. Yeah, and most of that will be serviced by fossil fuels. This is something that Sam Altman actually said in front of the Senate a couple of weeks ago; he said it will most probably be natural gas. So he picked the nicest fossil fuel, but we're already seeing reports of coal plants having their lives extended; they were meant to be retired, but they're no longer being retired, explicitly to power data center development. We're seeing reports of Elon Musk's xAI, the giant supercomputer that he built called Colossus in Memphis, Tennessee, being powered with around 35 unlicensed methane gas turbines that are pumping thousands of toxic air pollutants into that community. So this data center acceleration is not just accelerating the climate crisis; it is also accelerating a public health crisis in people's ability to access clean air, as well as clean water. One of the aspects that's really under-discussed with this kind of AI development, the OpenAI version of AI development, is that these data centers need fresh water to cool, because if they used any other kind of water, it would corrode the
equipment and lead to bacterial growth. So most often these data centers actually use public drinking water, because when they enter into a community, that is the infrastructure that's already laid to deliver fresh water to companies, to businesses, to residents. And one of the things that I highlight in my book is that there are many, many communities that already do not have sufficient drinking water even for people. I went to Montevideo, Uruguay, to speak with people about a historic level of drought that they were experiencing, where the Montevideo government literally did not have enough water to put into the public drinking water supply, so they were mixing toxic wastewater in just so people could have something come out of their taps when they opened them. And for people that were too poor to buy bottled water, that is what they were drinking, and women were having higher rates of miscarriages, the elderly were having an exacerbation or inflammation of their chronic diseases. And in the middle of that, Google proposed to build a data center that would use more drinking water. This is called potable water, right? This is potable water, yeah, exactly. You can't use seawater because of the saline aspect. Exactly, exactly. And Bloomberg recently had a story that said two-thirds of the data centers now being built for AI development are in fact going into water-scarce areas. You said a moment ago, about xAI's unlicensed energy generation using methane gas: when you say unlicensed, what do you mean? As in, the company just decided to completely ignore existing environmental regulations when they installed those methane gas turbines. And this is actually one of the things that I concluded by the end of my reporting: not only are these companies really corporate empires, but also, if we allow them to be unfettered in their access to resources and unfettered in their expansion, they will ultimately erode democracy. That is the greatest
threat of their behaviors, and what xAI is doing is a perfect example. At the smallest level, these companies are entering into communities and completely hijacking existing laws, existing regulations, existing democratic processes to build the infrastructure for their expansion. And we're seeing this hijacking of the democratic process at every level, from the smallest local levels all the way to the international level. It's that orthodoxy of seek permission after you do something; this is business as usual for those companies. That's part of their expansion strategy, which we'll talk about, and we're going to talk about the global colonial aspect as well, with regards to resource consumption and resource use. Just to bring it back to the US again, because at the top of this conversation I want to offer a bit of a primer to people out there who maybe know what AI is, maybe have used ChatGPT: what are the major companies we're now talking about in this space, particularly in the United States of America over the last 5 years? Who are the people in this race to AGI? Mhm, allegedly. Artificial general intelligence: something which, you know, either might be sentient, probably not, or capable of augmenting its own intelligence, more plausible. Who are the major players in that field right now? One caveat on AGI is that it's as ill-defined as the term AI, so I like to think of it as just a rebranding. The entire history of AI has just been about rebranding; the term deep learning was also a rebranding. But anyway, the players: first, OpenAI, of course; they were the ones that fired the first shot with ChatGPT. Anthropic, a major competitor. Google, Meta, Microsoft: they're the older internet giants that are now also racing to deploy these technologies. Safe Superintelligence, an OpenAI splinter (there are many OpenAI splinters), was founded very recently
by the former chief scientist of OpenAI; and Thinking Machines Lab, founded very recently by a former chief technology officer of OpenAI. Amazon is now trying to get into the game as well, and Apple is also trying to get in. So basically, all the older-generation tech giants, as well as a new crop of AI players, are all jostling in this space. And that's just the US, right? And that's just the US, right. The Chinese ecosystem is interesting because they don't really use the term AGI. That is a very unique thing about the US ecosystem: there's a quasi-religious fervor that underpins the construction of AI products and services, whereas in China it's much more like, these are businesses, we're building products that users are going to use. So if you're just looking at companies that are building chatbots sort of akin to ChatGPT, then we're talking about ByteDance, owner of TikTok; Alibaba, the equivalent of Amazon; Baidu, the equivalent of Google; Huawei, the equivalent of Apple; and Tencent. What is the equivalent of Tencent? I guess Meta is the equivalent of Tencent. So they're also building these things, and there's similarly a crop of startups moving into the generative AI space. And in Europe, we've got the little tiddlers, like Mistral in France; really not at the races, because we're Europe. What's the business case for all this? Because obviously you've got massive companies, often driven by maximizing shareholder value, multi-trillion-dollar valuations; you do these things, you invest money to make money, as a capitalist society. So what is the business case made by, say, Microsoft, when they have their shareholder meetings and say, we're going to allocate 40, 50 billion dollars towards building data centers and so on? It's interesting that you mention Microsoft, because Microsoft has recently been pulling back their investments in data centers. They
went all in, and now they're really rapidly starting to abandon data center projects. So, to answer your question, it is really unclear what the business case is, and Microsoft has been one of the first companies to start acknowledging that. Satya Nadella has come onto some podcasts recently where he actually stunned some people in the industry by being quite skeptical of whether or not this race to AGI was productive. But one of the things that I really felt after reporting is that you can't actually fully understand what is driving the fervor as just a story about money. It has to also be understood as a story of ideology, because in the absence of a business case, you ask, why are people still doing this? And the answer is, there are people who genuinely, fervently believe, and they talk about it as a belief, in this idea that we can fundamentally recreate human intelligence, and that if we can do that, there is no other more important thing in the world. Because how else should you be dedicating your time, other than to bring about this civilizationally transformative technology? That's part of what drives OpenAI, what drives Anthropic, what drives Safe Superintelligence, these other smaller startups. And then the bigger giants, which are more business-focused, more classic companies that actually care about their bottom lines, end up getting pressured, because shareholders are seeing the enormous amounts of investment by these startups, and they're seeing users start shifting from Google Search to using ChatGPT as search. ChatGPT should not be used as search, but consumers think that it is. And then shareholders ask in Google's shareholder meetings, what are you doing with AI? What is your AI strategy? Why aren't you investing in this technology? And so then all of the other giants end up racing in the same direction. What does Warren Buffett make of it? That's what I want to know. Is he sort of like, you guys are
wasting your money? He's probably right. I have no idea. Has he invested in AI? No, I don't think so. He just sticks to Coke and these sorts of things, doesn't he? I mean, there are two rationales. I think one is, like you say, a quasi-religious fervor has inflected the investment decisions of some of the world's most valuable companies, which is just an extraordinary thing to even think about. I suppose the other one is that a lot of people in this space, as we'll talk about in a moment, are heavily influenced by people like Peter Thiel, and Peter Thiel's orthodoxy is that competition is for idiots, right? If you're going to start a business, it has to be a monopoly. And I can only presume that companies like Microsoft, although maybe that's not the best example now, given recent events, but xAI, OpenAI, Meta: the only reason you would ultimately invest hundreds of billions, trillions of dollars into this is because first-mover advantage gives you a monopoly on the most transformational technology since the steam engine. That's the only way I can make sense of it, right? Has anybody in that space said that: we want the monopoly on AGI, we want to be the Facebook of AGI? Well, what OpenAI often says to investors is, if you make this seemingly fantastical bid on our technology, you could get the biggest returns you've ever seen in your life, because we will then be able to use your funding to get to AGI first. So it's still riding on this concept that there might be an AGI, which is not rooted in scientific evidence. And, even if we fail, we will successfully be able to automate a lot of human tasks, to the point where we can convince a lot of executives to hire our software instead of a labor force, so that in and of itself could potentially end up generating returns for you, more than you've ever seen before. That's usually the pitch that they make. But it is a huge risky
bargain that these investors are actually pitching into. And a lot of investors have a bandwagon mentality: they aren't necessarily doing their own analysis to say, let me do this investment. They're just seeing everyone glom onto this thing, and they're like, well, I don't want to miss out, why don't we glom on as well? But there are some investors that have recently reached out to me to say: one of the most under-reported stories right now is the amount of risk that is not just being taken on by these VCs but is actually being taken on by the entire economy, because the money that these investors are investing comes from university endowments and things like that. So if the bubble pops, it doesn't just pop for Silicon Valley; it will have ripple effects across the global economy. I mean, when you look at the e-commerce bubble in the late 90s, okay, it was a bubble, you know, Pets.com or whatever had these crazy valuations, but buying and selling goods and services offline, and then taking that online, that makes sense; that's a plausible commercial model. But like you say, nobody's really done that with artificial intelligence. You read these stories about Tulip Mania in 17th-century Holland, and it does feel very similar. You mentioned OpenAI, and we've talked about it many times, and of course OpenAI is the central organization in this book. What's the big idea behind OpenAI when it starts, and when does it start? End of 2015. 2015, so it's 10 years old. What are the animating values that give birth to OpenAI? So OpenAI started as a nonprofit, which many people don't realize, given that it's one of the most capitalistic, if not the most capitalistic, organizations in Silicon Valley today. But it was co-founded by Elon Musk and Sam Altman as a bid to try and create a fundamental AI research lab that
could develop this transformative technology without any kind of commercial pressures. So they positioned themselves as the anti-Silicon Valley, the anti-Google, because Google at the time was the main driver of AI development; they had developed a monopoly on some of the top AI research scientists. And Musk in particular had this really great fear of not just Google but Google's acquisition of DeepMind, where he was very worried that this consolidation of some of the brightest minds would lead to the development of AI that would go very badly wrong. What he meant by very badly wrong was: it could one day develop sentience, consciousness, go rogue, and kill all humans on the planet. And because of that fear, Altman and Musk then thought, we need to do a nonprofit, not have these profit-driven incentives. We're going to focus on being completely open, transparent, and also collaborative, to the point of self-sacrificing if necessary: if another lab starts making faster progress than us on the quest to AGI, we will actually just join up with them; we will dissolve our own organization and join up with them. And, uh, that didn't hold for very long. So what's their theory behind that? Because at that point, maybe 2015, Google is about the world's most valuable company, I don't know, certainly up there, and this is a nonprofit. So how are they going to achieve AGI before Google? Initially, the bottleneck that they saw was talent. Right: Google has this monopoly on talent; we need to chip away at that monopoly and get some of those Google researchers to come to us, and also start acquiring PhD students that are just coming out of uni. And because of that, I have come to speculate, and this is not based on any documents that I read, that part of the reason why they started as a nonprofit in the first place is because it was a great recruitment tool for getting at that bottleneck. They could not compete on
salaries with Google, but they could compete on a sense of mission. In fact, when Altman was recruiting the chief scientist Ilya Sutskever, who was the critical first acquisition of talent that then led to many other scientists being really interested in working for OpenAI, he appealed to Sutskever's sense of purpose: do you want a big salary, just working for a for-profit company, or do you want to take a pay cut and do something big with your life? And it was actually for that reason that Sutskever said, you know what, you're right, I do want to work for a nonprofit. So that's how they initially conceived of competing with Google: we're starting a little bit late to the game; how do we first get a bunch of really, really smart people to join us? Let's create this really big sense of mission. And I open the book with two quotes in the epigraph, and one of them is from Sam Altman, writing a blog post in 2013, where he quotes someone else that says: successful people build companies; more successful people build countries; the most successful people build religions. And then he reflects on this and says: it seems to me that the most successful founders in the world don't actually set off to build a company; they set off to build a religion, and it turns out building a company is the easiest way to do so. And so, you know, it's 2013, and then in 2015 he creates OpenAI as a nonprofit. It's important to say as well, Sam Altman is not some sort of idealistic pauper. He's working at Y Combinator; he is very much ensconced within the Silicon Valley elite. I suppose there's also tax, right? If you're a nonprofit, you've got the mission, but you've also got a bunch of tax breaks which you don't have as a for-profit, so maybe there's a very cynical genesis there. But I suppose, just reading your book and becoming more familiar with the arguments over time, clearly the amount of compute you have was always going to be critical, and if you
believe in the neural network model, the deep learning model, the amount of compute you have is always going to be critical, and it just seems implausible that a nonprofit could ever have been able to compete with Google, for instance, ever, because you have to spend, as we now see, tens of billions, hundreds of billions of dollars on compute. Did nobody say that? Did nobody say, "Hey, the bottleneck isn't just talent, actually; it's being able to spend hundreds of billions of dollars on these Nvidia GPUs"? It's so interesting, because at the time, the idea that you needed a lot of compute was actually neither very popular nor seen as that scientifically rigorous. There were many different ideas of how to advance AI. One was: we already actually have all the techniques that we need, and we just need to scale them; but that was considered a very extreme opinion. And then on the other extreme it was: we don't even have the techniques yet. Interestingly, there's a recent New York Times story by Cade Metz on why we likely won't get to AGI anytime soon, and he cites this stat that 75% of the longest-standing, most respected AI researchers actually still think, to this day, that we don't have the techniques to get to AGI, if we ever will. So we're kind of coming full circle now, and it is starting to become unpopular again, this idea that you can just scale your way to so-called intelligence. But that was the research vibe when OpenAI started: we can actually maybe just innovate on techniques. And then, very quickly, because Ilya Sutskever in particular was a scientist who anomalously did think that scaling was possible, and because Altman loved the idea of adding zeros to things, from his career in Silicon Valley, and because Greg Brockman, the chief technology officer, also a very Silicon Valley entrepreneur, liked that idea as well, they identified: why don't we go for scale, because that is going
to be the fastest way to see whether we can beat Google. And once they made that decision, roughly less than a year in is when they started actually talking about it: that's when they decided, we actually need to convert into a for-profit, because the bottleneck has shifted now from acquiring talent to acquiring capital. And that is also why Elon Musk and Sam Altman ended up having a falling out: when they started discussing a for-profit conversion, Elon Musk and Sam Altman each wanted to be the CEO of that for-profit, and so they couldn't agree. Originally, Ilya Sutskever and Greg Brockman chose Musk; they thought that Musk would be the better leader of OpenAI. But then Altman, and this is a very classic pattern in his career, became very persuasive to Brockman, who he had had a long-term relationship with, about why it could actually be dangerous to go with Musk, and "I would definitely be the more responsible leader," so on and so forth. And then Brockman convinces Sutskever, and the two, chief scientist and chief technology officer, pivot their decision and go with Altman. And then Musk leaves in a huff and says, I don't want to be part of this anymore. Which has become rather typical of the man subsequently, hasn't it? But that is incredible, really. So by 2016, there's a recognition that, in terms of capital investment, they're going to have to go toe-to-toe with maybe, at that point, the world's biggest company, and they're a nonprofit. Yeah. I just find it weird that lots of people bought the propaganda that OpenAI was in some way open. What did the open stand for, by the way? The open originally stood for open source, and in the first year of OpenAI they really did open-source things: they did research, and then they would put all their code online. So they did what they said. And then the moment they realized we've got to go for scale, everything shifted. It's such an amazing story, and
so emblematic of the 2010s, that you have this organization which presents itself as effectively an extension of activism, and it ends up becoming, today, some people value OpenAI at $300 billion, and it's doing all these terrible things, which we're going to talk about. Sam Altman specifically: who is he? What's his background? How does this guy, who nobody's heard of, become the CEO of a company which today is almost more valuable than any company in Europe, for instance? Altman has spent his entire career in Silicon Valley. He was first a startup founder himself, and he was part of the first batch of companies that joined Y Combinator, today one of the most prestigious startup accelerators in Silicon Valley; but at the time, he was in the very first class, and no one really knew what YC was. He did that for seven years. He was running a company called Loopt, which was a mobile-based social media platform, effectively a Foursquare competitor, but which actually started earlier than Foursquare. It didn't do very well; it was sold off for parts. But what he did do very well during that time was ingratiate himself with very powerful networks in Silicon Valley. One of the first and longest mentors that he had throughout his career is Paul Graham, the founder of Y Combinator, who then plucked Sam Altman to be his successor, and Sam Altman, at a very young age, became president of YC. He ended up doing that for around 5 years, and during his tenure at YC he dramatically expanded YC's portfolio of companies. He started investing not just in software companies but also pushing into quantum, into self-driving cars, into fusion, really going for those hard-tech engineering challenges. And if you look at how he ended up as CEO of OpenAI, I think that he basically was trying to figure out: what is going to be the next big technology wave? Let me test out all of these different things, position myself as involved in
all of these different things. So in addition to all his investments, he started cultivating this idea: AI also seems like maybe it'll be big, let me start working on an idea for a fundamental AI research lab. That becomes OpenAI. And once OpenAI started being the fastest one taking off, Altman hops over and becomes CEO. He hops over? How does that happen? Because, like you say, originally, who's there first, him or Ilya Sutskever? Technically Altman recruited Sutskever, but Altman was only a chairman; he didn't take an executive role at OpenAI even though he founded the company. And similarly with Musk: Musk didn't have an executive role, he was just a co-chairman. So it was just the two of them as chairmen of the board, and Ilya Sutskever and Greg Brockman were the main executives actually running the company day-to-day in the beginning. I mean, I have to say, reading the book, Sam Altman comes across as a masterful manipulator and understander of human psychology. There's this great quote, let me get it up, which you have, I think it's from Paul Graham: "You could parachute him into an island full of cannibals and come back in five years and he'd be the king." And: "If you're Sam Altman, you don't have to be profitable to convey to investors that you will succeed with or without them." He's also described, by the way, as a once-in-a-generation fundraising talent; I think that's by you. How is he able to basically come out of nowhere and compete with people like Elon Musk and Zuckerberg as this kind of intellectual heavyweight in Silicon Valley, in regards to one of the major growth technologies of our decade? So from the public's perspective he came out of nowhere, but within the tech industry everyone knew Sam Altman. I, as someone who worked in tech, knew of Sam Altman ages ago, because Y Combinator was just so
important. As a CEO of a potential company that valuable, was it always something that he might be? No, I don't think people ever thought he would jump to become the CEO of a company, because he has such an investor mindset, and his approach has always been to be involved in many, many companies. I mean, he invested in hundreds of startups, both as the president of YC and through some personal investment funds as well. But he was well respected within the Valley; he was seen as a critical linchpin of the entire startup ecosystem, and not just by people within the industry but by policymakers, which is key. He started cultivating relationships with politicians very early on in his tenure as president of YC. For example, I talk in my book about how Ash Carter, the Secretary of Defense under the Obama administration, came to Altman asking, "How can we get more young tech entrepreneurs to partner with the US government?" So he was seen as a gateway into the Valley. And obviously the Valley isn't just made of startups; there are also the tech giants. But back then, starting a startup was way cooler than working at a tech giant, because Google and Microsoft were considered the older, safer options if you really wanted job security. If you wanted to be an innovator, if you wanted to do breathtaking things, you would build a startup, and your number one goal as a startup founder was to get into YC. So Altman was emblematic of the pinnacle of success in the Valley, and even if his net worth wasn't the same as other people's, in terms of his social capital, his networking, he understood early on that's where the real value lies. Exactly. So interesting. I mean, some notes that I wrote down, because there are points where I'm thinking, why on earth is this gentleman the CEO of such a valuable company, he seems kind of useless, and the notes I had down were: people pleaser, yes, liar, conflict
averse. How do you become the CEO of such a successful company? Maybe you think that, or don't think that, I don't know. I mean, at points it comes across as almost psychotic, the capacity to lie. Here's an interesting question for me, and I don't know how comfortable you are with answering it. In writing this book, there's another alternative timeline where you basically write a hagiography of Sam and you leave all of that out, right? There are other writers out there, I won't name them, they sell a ton of books, and they write very positive, affirming biographies of these visionary leaders, whether it's Elon Musk or Steve Jobs, etc. Why didn't you just write that book about Sam Altman? You would have made a ton more money, right? And I'm reading this stuff and I'm thinking, my God, and it's so deft and nuanced, your portrait of Sam Altman. I just think this is going to really hurt him when he reads it, I imagine. Why didn't you take the easy route? I don't know that that would have been the easy route. I mean, I just wrote the facts, and the facts come out that way. I interviewed over 260 people across 300 different interviews, and over 150 of those interviews were with people who either worked at the company or were close to Sam Altman, and that's just what they presented, all of the details that I ended up putting in. And two things came through again and again. No matter how long someone worked with him, or how closely, they would always say to me: at the end of the day, I don't know what Sam believes. So that's interesting. And the other thing that came through: I would ask them, well, what did he say to you he believed, in this meeting, at this point in time, about why the company needed to do this XYZ thing? And the answer was, he always said he believed what that person believed. Except, because
I interviewed so many people who have very divergent beliefs, I was like, wait a minute, he's saying that he believes what this person believes and then what that person believes, and they're literally diametrically opposite. So yeah, I just ended up documenting all of those different details to illustrate how people feel about him. I mean, he's a polarizing figure, extreme in both the positive and the negative direction. Some people feel he is the greatest tech leader of our generation, but they don't say that he is honest when they say that; they just say that he's one of the most phenomenal assets for achieving a vision of the future that they really agree with. And then there are other people who hate his guts and say that he is the greatest threat ever, and it really also comes down to whether or not they agree with his vision. They don't, and so then his persuasive powers suddenly become manipulative tactics. I mean, if you compare him to somebody like Elon Musk as a CEO, who is obviously far from perfect, but Elon Musk makes big bets, he has gut instincts, he's very happy to alienate people if he thinks he's right about something, and obviously I don't agree with him on many, many things, but there's an archetype of a business leader that looks like that. And then you've got somebody like Sam Altman, doing all of these things, like I say, the people-pleasing, the conflict aversion, and yet he's managed to lead this company to essentially a third-of-a-trillion valuation. He must obviously be doing something right as well. So what are his comparative advantages as a business leader? Because on paper I read all that stuff and I think the guy wouldn't be able to get up in the morning and make breakfast, and yet he's accomplished some extraordinary things. Yeah, I think it really comes down to: he does understand human psychology very well, which is helpful not only in getting people to join in on
his quest, so he's great at acquiring talent, and he's said himself, "I'm a visionary leader, I'm not an operational leader, and my best skill is to acquire the best people that then operationalize the thing." So he's good at persuading people into joining his quest; he's good at persuading whoever has access to whatever resource he needs, whether it's capital, land, energy, water, laws, to then give him that resource. And then, people have said that he instills a very powerful sense of belief in his vision and in their ability to then deliver it. He's good at, in English football we'd say he's a good man-manager: he can inspire people, he inspires people to do things that they didn't think they would be able to do. But this is why there's so much controversy, why he is such a polarizing figure: everyone has a very personalized relationship with him, because he does his best work in one-on-one meetings, when he can say whatever he needs to say to get you to do, believe, achieve whatever it is that he needs you to do. And that's also part of the reason why there are so many diverging accounts, people saying, "Oh, I think he believes this," "I think he believes that."
And they're totally diverging, because he's having these very personalized conversations with people. And so some people come out of those meetings feeling totally transformed in the positive direction: I feel superhuman, I can now do all these things, and it's in the direction that I want to go, I'm building the future that he sees and I see, and we're aligned. And then other people come out of these meetings feeling like, was I played? Was he just telling me all these things to try and get me to do something that's actually fundamentally against my values? You said you spoke to 150 people who were connected with OpenAI. Over 150 interviews, yeah. Sorry, 150 interviews with people connected to OpenAI, 300 interviews altogether, 260 people altogether. The numbers are absolutely incredible. I should have said this right at the start, really: what's your personal bio on all this? Because of course, when people out of journalism and media cover technology, and the intersection of that with politics, we go, well, they don't really know what they're talking about, they're generalists, because they come out of journalism. What's your background? Because it's quite particular. I studied mechanical engineering at MIT for undergrad, and I went and worked in Silicon Valley because that's what I thought I wanted to do. I lasted a year before I realized it was absolutely not what I wanted to do, and then I went into journalism. The reason why I had such a visceral reaction against Silicon Valley is because I was quite interested in sustainability and how to mitigate climate change, and the reason I went to study engineering in the first place was I thought that technology could be a great tool for social change and shaping consumer behaviors, to prevent us from planetary disaster. And I realized that Silicon Valley's incentive structures for producing technology were
not actually leading us to develop technologies in the public interest; in fact, most often they were leading to technologies that were eroding the public interest. And the problems I was interested in, like mitigating climate change, were not profitable problems, but that is ultimately what Silicon Valley builds: they want to build profitable technologies. So it just seemed to me that it didn't really make sense to try to continue doing what I wanted to do within a structure that didn't reward it. Yeah. And then I thought, well, I've always liked writing, maybe I can use writing as a tool for social change, so I switched to journalism. You went to MIT Technology Review, right? I went to a few publications, then eventually MIT Technology Review to cover AI, and then the Wall Street Journal. I mean, these are big, just so people know: there's real credibility behind this, all these interviews, this CV. And it's interesting as well, you say, "I wouldn't write a hagiography, I just wrote what was there." Maybe that's partly an extension of your STEM background, rather than writing propaganda or a puff piece, which, let's be honest, is most coverage of the sector. But it's true, right? Well, people often ask me how much my engineering degree helped me in reporting on this, and I think it helps me in ways that are not what people would typically assume. I went to school with a lot of the people that now build these technologies; I went to school with some of the executives at OpenAI. And so for me, there is no magic. I don't find these figures to be towering or magical. I remember when we were walking around dorm rooms together in our pajamas, and it instilled in me this understanding that technology is always a product of human choices, and different humans will have different blind spots, and if you give a small group of those people too much power to
develop technologies that will affect billions of people's lives, inevitably that is structurally unsound. We should not be allowing small groups of individuals to concentrate such profound influence on society, because you cannot expect any individual to have such great visibility into everything that's happening in the world and perfectly understand how to craft a one-size-fits-all technology that ends up being profoundly beneficial for everyone. It just doesn't make sense at all. And I think the other thing it really helps me with: Silicon Valley is an extremely elitist place, and it allows me to have an honest conversation with people faster, because if they start stonewalling me, or trying to pretend that these technologies are capable of certain things they're not actually capable of, I will just slap my MIT degree down and be like, "Cut the bull crap. Tell me what's actually happening." It is a shortcut to getting them to speak more honestly to me. But it's not actually because of what I studied; it's more that it signals to them that they need to speed up their throat clearing. That's really interesting, though, because I do feel like lots of coverage of this sector, and I can only speak in regards to the UK, and we're a tiddler compared to you guys, but at the intersection of politics and technology, the coverage by political journalists at Westminster: Keir Starmer and Rachel Reeves say, "We're going to build more data centers, isn't that fantastic?" Actually, not necessarily. They're not going to create that many jobs once they're built; they can use a ton of energy, a ton of water. What's the upside for the UK taxpayer? There is very little interrogation beyond the press releases. And it's really interesting to me that you've come out of MIT and then taken this trajectory. This stuff you just talked about, knowing these people, this tiny group of people
whose decisions now affect billions already: on the present trajectory, is this an existential challenge to democracy? And "challenge" is speculative; is it going to end democracy? I think it is greatly threatening and increasing the likelihood of democracy's demise, but I never make predictions that this outcome will happen, because it makes it sound inevitable, and one of the reasons I wrote the book is that I very much believe we can change that, and people can act now to shape the future so that we don't lose democracy. But on this trajectory, right, if the next 20 years look like the last 20 years? On this trajectory, for sure, I think it will end democracy. Yeah. How quickly. We've really screwed up in the last 20 years, right? I wonder, you know. Gosh. Yeah, I'll give it maybe 20 years. Twenty years, yeah. We used to have this thing called privacy, high streets, childhood, all gone. You've said that what OpenAI did in the last few years is they started blowing up the amount of data and the size of the computers needed to do this training, in regards to deep learning. Give me a sense of the scale. We've talked a little bit about the data centers, but how much energy, land, and water is being used to power OpenAI, just specifically as one company? To power OpenAI? That's really hard, because they don't actually tell us this, so we only have figures for the industry at large and the amount of data centers. So it's not in their annual reports, for instance? Well, they don't have annual reports, because they're not a public company. Of course. So that's one of the ways. And actually it doesn't matter if they're a public company, because Google and Microsoft do have annual reports where they say how much capital they've spent on data center construction; they do not break down how much of those data centers are being used for AI. They also have sustainability reports, where they talk about the water and carbon
and things like that, but they do not break down how much of that is coming from AI either, and they also massage that data a lot to make it seem better than it actually is. But even with the massaging, there was that story, sorry, last year, 2024, where both Google and Microsoft reported, I think it was a 30% and 50% jump in their carbon emissions, largely driven by this data center development. Yeah. And the context here: one of the good news stories of the last 10 to 15 years was that CO2 emissions per capita in the US had kind of plateaued, right? Across the West they'd kind of plateaued, and actually in the UK energy consumption dropped. I mean, we stopped making things, everything's made in East Asia now, but still, it was kind of a good story, and I kind of bought it. I thought we'd kind of plateaued; obviously the Global South would consume more energy, but we are as well now. Should we look at these companies as kind of analogous to the East India Company of the 19th century? That is the analogy that I have increasingly started using, especially with the Trump administration in power, because the British East India Company very much was a corporate empire. It started off not very imperial at all; it started off as a very small company based in London, and through economic trade agreements with India it gained significant economic power, political power, and eventually became the apex predator in that ecosystem, and that's when it started being very imperial in nature. And it was the entire time abetted by the British Empire, the nation-state empire. So you have a corporate empire and you have a nation-state empire, and I literally see that dynamic playing out now, where the US government is also in its empire era. The Trump administration has quite literally used words to suggest that he wants to expand and fortify the American empire, and he sees these corporate
empires like OpenAI as his empire-building assets. So I think he is probably seeing it in the same way the British Crown saw the British East India Company: let's just let this company acquire all these resources, do all these things, and then eventually we'll nationalize the company, and then India formally becomes a colony of the British Empire. So for Trump, whatever the modern-day equivalent would be of nationalizing these companies is his endgame. He is helping them strike all these deals and installing all this American hardware and software all around the world, with the hope that those then become national assets. There was actually just a recent op-ed in the Financial Times from Marietje Schaake, a former EU parliamentarian, who pointed out: isn't it so convenient for the US to get all of this American infrastructure installed everywhere around the world, so that the US government could literally turn it off at any time? I mean, if you want to talk about empire building, there's that. But at the same time, these corporate empires are also trying to use the American empire as an asset to their own empire-building ambitions. So there's a very tenuous alliance between Silicon Valley and Washington right now, in that each one is trying to use the other, and ultimately trying to dominate the other. And there's a growing popularity in Silicon Valley of this idea of a politics of exit: this idea that democracy doesn't work anymore, that we need to find other ways of organizing ourselves in society, and maybe the best way of organizing ourselves is actually a series of networked companies with CEOs at the top. So I don't ultimately know who's going to win, the nation-state empire or the corporate empire, but either version is bad, because all of the people in power now, both the business executives and the politicians, do not actually care at all about preserving democracy. I mean, the analogy of India is really interesting. So, I think I might
have my dates wrong, but the East India Company is running things until 1857; you have the Indian Mutiny, basically an uprising against the East India Company, and then of course that commercial endeavor has to be underpinned by the organized violence of the British imperial state. And it does feel like that could be the next step of what happens with regards to US interests overseas. I suppose one retort would be, well, hold on, it sounds kind of good. I'm a socialist; I kind of like the idea of SpaceX being nationalized. I kind of like the idea of the federal government having a 51% stake in OpenAI and Tesla and Meta. What would you say to that? I don't necessarily know if my critique is of the nationalization of the companies so much as of why they're nationalizing these companies. Because this endgame mentality of "let's just let these companies run rampant around the world so that ultimately whatever their assets are become our assets" is leading the Trump administration to have a completely hands-off approach to AI regulation. They quite literally proposed the big beautiful bill, which passed the House and is now going up to the Senate, with a clause that would, if implemented, put a 10-year moratorium on AI regulation at the state level, and the state level is usually where sensible regulation happens in the US. So they're taking all of these actions now, with wide-ranging repercussions that will be very difficult to unwind, in the name of this idea that maybe, if they just allow these companies to act with total impunity, it will ultimately benefit the nation-state. How do people like Sam Altman look at the rest of the world outside the US, these kind of tech leaders? How do they look at little Britain and Italy? How do they look at us, what do they think about us? You've been inside their minds. Yeah, I mean, they see them as resources. They see different
territories as different types of resources, which is what older empires did: they would look at a map and just draw out the resources they could acquire in each geography. We're going to go here and acquire the labor, we're going to go here and acquire the land, we're going to go here and acquire the minerals. I mean, that's literally how they talk. When I was talking with some OpenAI researchers about their data center expansion, there was this one OpenAI employee who said, "We're running out of land and water. We're just trying to look at the whole world and see where else we can place these things, what other geographies have all the conditions we need to build more data centers: land without earthquakes, without floods, without tornadoes, hurricanes, all these natural disasters, that can deliver massive amounts of energy to a single point and can cool the systems." They're looking, at that level of abstraction, at what different pieces of territory and resources they need to acquire. And that includes other parts of the West, not just the Global South? It includes other parts of the West as well, yeah. There has been rapid data center expansion in rural communities in both the US and the UK, and it always ends up in economically vulnerable communities, because those are the communities that often actually opt in to the data center development initially, because they are not informed about what it will ultimately cost them, and for how long. I spoke with one Arizona legislator who said, "I didn't know it had to use fresh water," and, for the UK audience, Arizona is desert territory; there is a very, very stringent budget on freshwater. After that legislator found out, she said, "I would never have voted for having this data center in." But the problem is that there are so few independent experts for
these legislators and city council members to consult that the only people they rely on for information about what the impact is going to be are the companies, and all the companies ever say is: we're going to invest millions of dollars, we're going to create a bunch of construction jobs up front, and it's going to be great for your economy. Yeah, I mean, that's all we hear about data centers in this country, and it's a great top line for the Chancellor and the Prime Minister, because they can say "tens of billions of pounds' worth of investment." OK, but in terms of long-term jobs, how many? And also, by the way, for that rural community in God knows where, the northeast of England or wherever, you're not telling them that actually they can't use their hosepipes for three months a year because all the water is going to that local data center. Exactly. And it's quite extraordinary. And the most scary thing about all of it is, in the UK at least, the politicians don't know any of that. I sincerely don't think the Chancellor knows any of that. And there's no real, I mean, even if you use the prism of colonialism, imperialism, with regards to exploitative economic relations between the United States and other parts of the world, they think you're a Trotskyist, right? That's the crazy thing: they can't even look after their own people, because looking after your own people boils down to being too left-wing. Well, I think part of it is also that they don't really realize that it's literally happening in the UK. So, to connect it to the UK: data center development along the M4 corridor has literally already led to a ban on construction of new housing in certain communities that desperately need more affordable housing, because you cannot build new housing when you cannot guarantee deliveries of fresh water or electricity to that housing, and it was the massive electricity consumption of the data centers being built in that corridor that
led to that ban. That's nuts. I mean, that's the most valuable real estate for housing in the country, the M4. And do you think UK politicians are aware of that contradiction? I don't know if they are aware; maybe they don't have awareness, or maybe they are aware and they're also weighing other trade-offs. I mean, now in the UK, and in the EU at large, there's this huge conversation around data sovereignty and of course technology sovereignty. There's this whole concept of developing the EU stack: why is it that we don't have any of our own tech giants, why don't we have any of this infrastructure? And Starmer just said this week, during London Tech Week, "We want to be AI creators, not AI consumers." So I think in their minds maybe this is a viable trade-off: we skimp a little bit on housing for the ability to have more indigenous innovation. But the thing that is often left out of that conversation is that this is a false trade-off. People think that you need colossal data centers to build AI systems. You actually do not. This is specifically the approach that OpenAI decided to take. Before OpenAI started building large language models and generative AI systems at these colossal scales, the trend within the AI research community was going in the opposite direction, toward tiny AI systems, and there was all this really interesting research looking into how small your datasets could be to create powerful AI models, and how little computational resource you needed to create powerful AI models. There were interesting papers I wrote about where you could use a couple hundred images to create highly performant AI systems, or you could have AI systems trained on your mobile device, not even a single computer chip, running on your mobile device. And OpenAI took an approach that is now using hundreds of thousands of computer chips to train a single system, and
those hundreds of thousands of computer chips are now consuming, you know, cities' worth of energy. And so if we divorced the concept of AI progress from this scaling paradigm, you would realize that you can have housing and you can have AI innovation. But once again, there are not a lot of independent experts actually saying these things. Most AI experts today are employed by these companies, and this is basically the equivalent of most climate scientists being bankrolled by oil and gas companies: they would tell you things that are not in any sense of the word scientifically grounded, but just good for the company. I interviewed a great guy, twice actually now, a guy called Angus Hanton, who's really just on it with regards to the increasingly exploitative nature of the United States' economic relations with the UK. Just a fascinating book and man, and I just don't think it's cut through to our politicians here how bad it's getting. And you're saying about AI consumers or creators: I mean, ultimately, you're talking about Meta, you're talking about Alphabet, you're talking about xAI, you're talking about OpenAI. We are consumers, we are dependent. It's a colonial, exploitative relationship with regards to big tech, and has been for a really long time. Our smartest people, which the taxpayer trains here, go to the US. I think one of the top people at Slack is a UK national; Demis, you know, DeepMind, now working under the umbrella of Alphabet. It just doesn't make sense to me, with regards to that formulation; they simply don't get it. I came here using my Mastercard; millions of Brits use Apple Pay and Google Pay and Mastercard and Visa, and every time we do, 0.1, 0.2, 2 percent crosses the Atlantic, and it just goes over the heads of our political class, which is very unnerving. In regards to the efficiency of these smaller systems,
where does DeepSeek fit in all of this? Because of course the scaling laws at the heart of OpenAI, the idea that you get to AGI through more compute, more parameters, more data, are untethered a bit by the arrival of DeepSeek. Yes. DeepSeek is such an interesting and complicated case, because it's a Chinese AI model created by the company High-Flyer, and they were able to create a model that essentially matched, and on some performance metrics even exceeded, the American models being developed by OpenAI and Anthropic, with orders of magnitude less computational resources and less money. That said, it's not necessarily perfect. I don't think the world should suddenly start using DeepSeek and say DeepSeek solves all these problems, because it's still engaged in a lot of data privacy problems, copyright exploitation, things like that. And some people argue that ultimately they were distilling from models that were first developed through the scaling paradigm: you first develop some of these colossal scaled models, and then you end up making them smaller and more efficient. So some people argue that you actually have to do that scaling first before you get the efficiency. But anyway, what it did show is that you can get these capabilities with significantly less compute. And it also showed a complete unwillingness on the part of American companies: now that they know they can use these techniques to make their models more efficient, they're still not really doing it. Why? Do they like giving their money to Nvidia? Because if you continue to pursue a scaling approach and you're the only one with all the AI experts in the world, you persuade people into believing this is the only path, and therefore you continue to monopolize this technology, because it locks out anyone else from playing that game. And also because of path dependence: these companies are actually not that nimble. The way that they
they organize themselves, it's not so easy for them to just immediately swap to a different approach. They end up putting in motion all the resources, all of the training runs, and so on and so forth over the course of months, and then they just have to run with it. So DeepSeek actually wasn't the first time that this happened. The first time that this happened was with image generators and Stable Diffusion. And Stable Diffusion was specifically developed by an academic in Europe who was really pissed that AI companies like OpenAI were taking a scaling approach to image generation. He was like, "This is literally wholly unnecessary." They were spending thousands of chips, all of this energy, to produce DALL-E, and ultimately he ended up producing Stable Diffusion with a couple hundred chips, using a new technique called latent diffusion, hence the name Stable Diffusion. And you know, arguably it was actually an even better model than DALL-E, because users were saying that Stable Diffusion had even better image quality, better image generation, better ability to actually control the images than DALL-E. But even knowing that latent diffusion existed, OpenAI continued to develop DALL-E with these massive scaling approaches, and it wasn't until later that they then adopted the cheaper version. It was just significantly delayed, and I was asking OpenAI researchers, like, why? That doesn't make any sense, why did you do that? And they were like, well, once you set off on a path it's kind of hard to pivot. Also Jensen Huang, the CEO of Nvidia, is really charismatic, right? I mean, it's quite funny, because I'm a Marxist, I'm going to make that confession, you have these big structural understandings of how history happens, and then you sort of realize, actually, this guy's really charismatic and this person's really manipulative, and all of a sudden the world's hyperpower is, you know, making these technological decisions. Okay, quite strange. We talked
about data centers, we talked about earth, water, energy. I want to talk also about some of the more exploitative practices with regards to workers in the global south. You use one really grueling example, actually, in Kenya. Can you talk about some of the research around that, some of the people you met? Yeah, so I ended up interviewing workers in Kenya who were contracted by OpenAI to build a content moderation filter for the company. At that point in the company's history it was starting to think about commercialization, after coming from its nonprofit fundamental AI research roots, and they realized, if we're going to put a text generation model in the hands of millions of users, it is going to be a PR crisis if it starts spewing racist, toxic, hateful speech. In fact, in 2016 Microsoft infamously did exactly this. They developed a chatbot named Tay, they put it online without any content moderation, and within hours it started saying awful things, and then they had to take it offline. And to this day, as evidenced by me bringing it up, it's still brought up as a horrible case study in corporate mismanagement. And so OpenAI thought, we don't want to do that, we're going to create a filter that wraps around our models, so that even if the models start generating this stuff, it never reaches the user, because the filter then blocks it. In order to build that filter, what the Kenyan workers had to do was wade through reams of the worst text on the internet, as well as AI-generated text, where OpenAI was prompting its models to imagine the worst text on the internet. And the workers then had to go through all of this and put it into a detailed taxonomy: is this hate speech, is this harassment, is this violent content, is this sexual content, and the degree of hate speech, of violence, of sexual content. So they were asking workers to say, does it involve sexual abuse, does it involve sexual abuse of children, so on and so forth. And to this day, I believe, if you look at OpenAI's
content moderation filter documentation, it actually lists all of those categories, and this is one of the things that it offers to business clients of their models: you can toggle on and off each of these filters. So that's why they had to put this into that taxonomy. The workers ended up suffering many of the same symptoms as content moderators of the social media era, absolutely traumatized by the work. It completely changed their personalities, left them with PTSD. And I highlight the story of this man Mophat, who is one of the workers that I interviewed, who showed to me that it's not just individuals that break down, it's their families and communities, because there are people who rely on these individuals. And so Mophat was on the sexual content team. His personality totally changed as he was reading child sexual abuse every day, and when he came home he stopped playing with his stepdaughter, he stopped being intimate with his wife. And he also couldn't explain to them why he was changing, because he didn't know how to say to them, "I read sex content all day." That doesn't sound like a real job, that sounds like a very shameful job. ChatGPT hadn't come out yet, so there was no conception of what that would even mean. And so one day his wife asks him for fish for dinner. He goes out, buys three fish, one for him, one for her, one for the stepdaughter, and by the time he comes home all of their bags are packed and they're completely gone. And she texts him, "I don't know the man you've become anymore, and I'm never coming back."
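The design described here, a taxonomy of categories that wraps the model so flagged output never reaches the user, with each category toggleable per client, can be sketched roughly as follows. This is a minimal illustration only: the classifier, category names, and function names are hypothetical stand-ins, not OpenAI's actual system.

```python
# Hypothetical sketch of a category-toggle content filter wrapping a
# text-generation model. All names here are illustrative assumptions.

CATEGORIES = ["hate", "harassment", "violence", "sexual", "sexual/minors"]

def classify(text):
    """Stand-in for a trained moderation classifier built from the
    labeled taxonomy: returns a score in [0, 1] per category.
    Here it is just a toy keyword check for demonstration."""
    toy_keywords = {"hate": ["slur"], "violence": ["attack"]}
    lowered = text.lower()
    return {
        cat: 1.0 if any(w in lowered for w in toy_keywords.get(cat, [])) else 0.0
        for cat in CATEGORIES
    }

def filtered_generate(generate, prompt, enabled, threshold=0.5):
    """Run the model, then block the output if any client-enabled
    category scores above the threshold, so even if the model
    generates harmful text, it never reaches the user."""
    output = generate(prompt)
    scores = classify(output)
    for cat in enabled:
        if scores[cat] >= threshold:
            return "[blocked: {}]".format(cat)
    return output

# A fake "model" for demonstration; clients toggle categories per deployment.
fake_model = lambda prompt: "we should attack them"
print(filtered_generate(fake_model, "hi", enabled=["violence"]))  # [blocked: violence]
print(filtered_generate(fake_model, "hi", enabled=["hate"]))      # passes through
```

A real system would use trained classifiers with graded severity scores per category, as the taxonomy above describes, rather than keyword matching; the point of the sketch is the architecture: filter after generation, with per-category toggles.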
You say that's the case with regards to text. Are people also having to engage with images as well? I mean, that was more of a social media thing, is that here too? Yeah, so after this, that contract with the Kenyan workers was actually cancelled, because there was a bunch of scrutiny on the third-party company that they were contracting the workers through, and a huge scandal. This is Sama, right? Sama, yeah, there was a huge scandal around Sama, and then OpenAI ended up shifting to other contractors, who were then involved in moderating images. And were they remunerated for the kind of work they were doing quite well? For the Kenyan workers, they were paid a few dollars an hour. Right, and then on the other side of the Atlantic, you talk about people in South America doing, effectively, you know, Mechanical Turk piece work for these companies as well. Can you talk about that a little bit? Yeah, so generative AI is not the only thing that leads to data annotation. This has actually been part of the AI industry for a very long time, and so I ended up, years ago, interviewing this woman in Colombia, who was a Venezuelan refugee, about the specific thing that happened to her country in the global AI supply chain. In 2016, when the AI industry first started actually looking into the development of self-driving cars, there was a surge in demand for highly educated workers to do data annotation labeling, to help self-driving cars navigate the road. You have to show self-driving cars: this is a car, this is a tree, this is a bike, this is a pedestrian, this is how you avoid all of them, these are the lane markings, this is what the lane markings mean. And there are humans that do that. And it just so happened in 2016, when this demand was rising, that Venezuela as a country was dealing with the worst peacetime economic crisis in 50 years. The economy bottomed out, and a huge population of highly educated
workers with great access to internet suddenly were desperate to work at any price, and these became the three conditions that I call the crisis playbook in my book, that companies started using to then scout out more workers that were extremely cheap for the AI industry. And so the woman that I met in Colombia, she was working in a level of exploitation that was not based on the content that she was looking at. She was labeling self-driving cars and labeling, you know, retail platforms and things like that. The exploitation was structural to her job, in that she was logging into a platform every day and looking at a queue that automatically populated with tasks that were being sent to her from Global North companies, and most of the time the tasks didn't appear. And when they did, she had to compete with other workers to claim the task first in order to do it at all. And because there were so many Venezuelans in crisis, and so many of them were finding out about data annotation platforms, in the end there were more and more workers competing for smaller and smaller volumes of tasks, and so these tasks would come online and then disappear within seconds. And so one day she was out on a walk when a task appeared in her queue, and she sprinted to her apartment to try and claim the task before it went away, but by the time she got back it was too late. And after that she was like, "I never went on a walk during the weekday again." And on the weekends, which she discovered companies were less likely to post tasks on, she would only allow herself a 30-minute walk break, because she was too afraid of that happening again. And did she detail how that gave her anxiety, or insomnia, or mental health overheads? That sounds like an insane way to live. It completely controlled her life. She didn't tell me about whether or not it gave her insomnia, but it completely controlled the rhythms of her life, in that she
had this plugin that she downloaded that would sound an alarm every time a task appeared, so that she could, you know, cook or clean or whatever without literally just looking at the laptop the whole day. And she would turn it on to max volume in the middle of the night, because sometimes tasks would arrive in the middle of the night, and if the alarm rang she would wake up, sprint to her computer, claim the task, and then start tasking at like 3:00 a.m. And she had chronic illness. One of the reasons why she was tethered to her apartment doing this online work in the first place was not just because she was a refugee, but also because she had severe diabetes, and it got to the point where she ended up in the hospital and was completely blind for a period of time, and the doctor said that if you had not come to the hospital when you did, you would have died. And so she was tethered to her home because she had to inject herself with insulin like five times a day, and it was this really complicated regime that didn't allow her to commute to a regular office, have a regular job. So she was doing all this extremely disruptive, dysregulating work on top of just trying to manage extreme, severe diabetes. I mean, it's extraordinary that you've managed to unveil those stories. I think that's why the book is so interesting, so fascinating for me, that's why it's got the plaudits it's got: you're speaking to people who are on first-name terms with Sam Altman, and then you're talking to Venezuelan refugees in Colombia. And it's really important to say that this work is being done for multi-trillion-dollar companies. Yes, that's the other side of it, right? You're seeing Elon Musk worth 300-billion-plus dollars, and then there are people... that's where the value is being generated. Yeah, exactly, and that's the reason why I really wanted to highlight those stories, because that's where you really see the logic of empire. There is no moral justification for why those
workers, whose contribution is critical to the functioning of these technologies and critical to the popularity of products like ChatGPT, are paid pennies, when the people working within the companies can easily get million-dollar compensation packages. The only justification is an ideological one, which is that there are some people born into this world superior and others who are inferior, and the superior people have a right to subjugate the inferior ones. My last question: what does the US public do about big tech if it wants to take on some of these issues, income inequality, regional inequality, global imperial overreach, etc.? A few proposals that, you know, somebody can execute on, what would you suggest? Yeah, I wouldn't even say it's just the US public. I mean, anyone in the world can do something about it, and one of the remarkable things for me in reporting these stories is that people who felt like they had the least amount of agency in the world were actually the ones that put up the most aggressive fights and actually started gaining ground on these companies, in taking resources from them. So I talk about Chilean water activists who pushed back against a Google data center project for so long that they've stalled that project now for five years, and they forced Google to come to the table, and the Chilean government to come to the table, and now these residents are invited to comment every time there's a data center development proposal. Which, they said, is not the end of the fight, like they still have to be vigilant, and at any moment if they blink something could happen. But anyone in the world, I think, has an active role to play in shaping the AI development trajectory, and the way that I think about it is as the full supply chain of AI development. You have a bunch of resources that these companies need to develop their technologies: data, land, energy, water. And then you have a bunch of spaces that these companies need access to to deploy their technologies: schools,
hospitals, offices, government agencies. All these resources and all these spaces are actually places of democratic contestation. They're collectively owned, they're publicly owned. So we're already seeing artists and writers that are suing these companies, saying, "No, you cannot take our intellectual property." And that is them reclaiming ownership over a critical resource that these companies need. We're seeing people start exercising their data privacy rights. I mean, one of my favorite things about visiting the UK and EU, as an American that has no federal data privacy law to protect me, is to reject those cookies on every single web page that I encounter. That is me reclaiming ownership over my data and not allowing those companies to then feed that into their models. We're seeing, just like the Chilean water activists, hundreds of communities now rising up and pushing back against data center development. We're seeing teachers and students escalate a public debate around, do we actually want AI in our schools, and if so, under what terms? And many schools are now setting up governance committees to determine what their AI policy is, so that ultimately AI can facilitate more curiosity and more critical thinking instead of just eroding it all away. The same thing wherever your audience is sitting right now: if they work for a company, that company is for sure discussing their AI policy. Put yourself on that committee for drafting that policy. Make sure that all the stakeholders in that office are at that table, actively discussing when and under what conditions you would accept AI, and from which vendors as well, because again, not all AI models are created equal. So do your research on which AI technologies you want to use and which companies are providing them. And I think everyone can actually actively play a role in every single part of the supply chain that they interface with, which is quite a lot. Most people interface with the data part. Many people will now have data
centers popping up in a community near them. Everyone goes to school at some point. Everyone works in some kind of office or community at some point. If we do all of this pushback a hundred-thousand-fold, and democratically contest every stage of this AI development and deployment pipeline, I am very optimistic that we will reverse the imperial conquest of these companies and move towards a much more broadly beneficial trajectory for AI development. Yeah, we've had big tech social media for the last 15, 20 years, and I suppose the question is, is the same set of patterns going to apply to this stuff? And I think when you speak to someone like Jonathan Haidt, when he talks about young people and their consumption now of social media and mobile telephones, etc., his real worry is AI. And if there is this laissez-faire attitude from policymakers, and also, let's be honest, from civil society, that there was over the last 15, 20 years, I mean, he's terrified about the implications. So it's interesting to see that there's congruence between what you're saying and what Jonathan Haidt is saying. Can I ask you one more question? Have you ever read Dune by Frank Herbert? I've watched the movie, and it's sitting on my bedside table to actually read the original, and I'm so glad that you asked me this, because this is an analogy that I use all the time now to describe the AI world. Yeah, the Butlerian Jihad. So yeah, one of the things that was so shocking to me, because we already talked about this quasi-religious fervor within the AI community, and I was interviewing people whose voices were quivering when they were telling me about the profound, cataclysmic changes on the horizon. Like, these are very visceral reactions. These are true believers. And Dune strikes me as a really good analogy for understanding this ecosystem, because Paul Atreides' mom in the story, she creates this myth to help position Paul as a supreme leader and to ultimately control the
population, and the people who encounter this myth, they don't know that it's a creation, so they're just true believers. And at some point Paul gets so wrapped up in his own mythology that he starts to forget that it was originally a creation. And this is essentially what I felt like I was seeing with my interviews of people in the AI world, because I had the opportunity to start interviewing people all the way back in 2019. You know, I interviewed some people both back then and for the book, to just map out their trajectory, and there were non-believers back then that are true believers now. Like, if they were able to stay long enough at that company, they all in the end become true believers in this AGI religion. And so there's this vortex, it's like an ideological black hole, I don't know how to explain it, but when people swim too long in the water, it just becomes them. So what you're saying is Sam Altman is the Lisan al-Gaib, that's the character, and Paul Graham maybe was, you know... it would seem like that would be the most appropriate character to assign to him. Yeah. Wow, this has been fabulous, and I have to say, honestly, the book is really, really exceptional: Empire of AI. I read it so much that the dust jacket, I think my daughter actually ripped it off, but anyway, it is a sensational book, sensational journalism, a fantastic journalist. We don't have enough of those in the world. Thank you, a real pleasure to meet you, Karen. Thanks so much for joining us. It was great to meet you. [Music]