Transcript for:
GPT-5 Launch and Industry Impact

a huge amount of expectation on GPT-5. The anticipation of this launch was up there with the top three product launches of all time. This is when you see real big things happening, either a productivity boom or the inverse. You can see you're just reaching that level now across just about everything. They cut the cost of AI at least in half, if not more, and they caught up to everybody else in coding. That's a big, big deal. When we talk about abundance in all of its many facets, taking 700 million people and suddenly giving them access to state-of-the-art AI, I think, becomes transformative. Just go all in and start turning your business into an AI-native business. This leads to Peter's abundance state. I think what we're starting to see here is... Now that's a moonshot, ladies and gentlemen. Hey everybody, welcome to another episode of WTF Just Happened in Technology. I'm here with my Moonshot mates, Salim and Dave, and two special guests, geniuses. You've met them before on WTF if you're a listener. And Dave, would you introduce AWG? I think that'd be important. Alex, yes, I'd love to introduce Alex. So, yeah, genius is probably a good word. Math, physics, and computer science degrees from MIT, a true polymath, understands everything, and we're going to talk about a lot of it today. A PhD from Harvard, in addition to that, in physics, and he reads literally every document, every research document, every breakthrough in AI and many other fields. So always incredibly informative to have him. Welcome, Alex. And Salim, would you do the honors with Emad? Sure. So, Emad is one of those folks where every time he says something, you have to take twice the time to parse what he just said and make sense of it. More intelligence per word density than most people you've ever met. A founder of Stable Diffusion and Stability AI, former hedge fund quant, brain the size of several planets, and building, I think, a systemic layer for the next version of the internet with crypto built in, which I think is really powerful. So, welcome, Emad. So, first of all, I literally just landed from a week in Portugal, so my head is still spinning after a 12-hour flight. But hey, what could possibly go wrong? Today, we are speaking about two or three special events this past week, in particular the announcement and launch of GPT-5 and the continuation of the AI wars. But before we get there, Salim, I think you've recently gone through surgery, or is that just a... Yeah, I had shoulder arthroscopy, where they drill three holes in your shoulder and do kind of an oil, lube, and filter on it. I had a bone spur impinging on the tendon, etc. What's incredible with the advances in technology today, I was in and out in like two hours. It's unbelievable that they go that deep into your body and then you're just out again. It's amazing. So, don't forget the exosomes. I was going to ask for tennis this weekend. I guess we're not playing, huh? No, not for a little bit. And my right hand is... Yeah, we'll leave that for another time. And this is a special episode because I'm filming in the new Moonshot podcast studio. So, check out the background. Hope you like it. It's a real background. We'll be doing a lot of episodes from here in the future. Emad, you're in London and it's midnight or something like that. Yeah, it's just time for the brain to get going. You're amazing, buddy.
And Alex, at that time, maybe his brain slows down a little bit so we can understand everything. That's my hope. We'll find out. And Alex, you're in Boston. Where are you today? That's right. Cambridge, Massachusetts. Yes, center of the known universe. At least for us MIT alums. All right. Certainly the center of Cambridge. All right, let's dive into this episode. I'm going to start with this note. This is Sam Altman two days ago. He made the announcement of GPT-5, and in particular this is the quote that stuck out: "GPT-5 is a significant step on our way to AGI, which also means it isn't AGI yet." So I have a question for you guys. We also saw, the day before this announcement, Sam put up this tweet showing the Death Star. And now I have to ask, I don't get it. Why would you put this up? What is he trying to do, get views or get people really worried? You know, a lot of this launch was pretty uncoordinated, but there were... Kevin Weil also posted something with Elmo with a fire behind him, saying, you know, it's coming. So there was a lot of pre-event tweeting and buzzing, or X-ing and buzzing, about something huge is coming. And I don't know why a Death Star, you know, but a lot of people talked about it already. It feels not a great look. You know, I mean, you're trying to get people accepting and happy about the future and you show that imagery. It's kind of like, okay. Well, one of the Google people posted the Millennium Falcon and he said, "No, we're meant to be the rebels." So, he said this is meant to be from the point of view of the rebels. Oh, okay. There you go. Makes a lot more sense. And everyone's like, "Nah, nobody's buying that. That's way too subtle." So, here's my question. We'll go around the horn. You know, a huge amount of expectation on GPT-5. And I would love to ask each of you, what do you think of it? What do you think of the announcement? It was a little over an hour. Let's start with Emad. Emad, what do you think? Yeah, so it was kind of in line with what I expected, because when you're doing an AI for like 700 million people, it's very difficult to do like a mega AI. And so we'd been guided that it would be a multi-routing type of thing, from mini up to pro. And that's kind of what we saw. It's basically o4 but with one front layer. So I thought the announcement was okay. It's just that the expectations are so high now, particularly when you build it up, that you have to keep on beating them every time by more than a little bit. I think we all thought it would beat, but the question was by how much? And it was like, okay, wasn't it? Alex, how about you, buddy? I tend to think the real net impact of a launch like this tends to be more about lifting hundreds of millions of users up from a model like GPT-4o to a frontier model. And I think the changing economics of a radical cost reduction of frontier models, these are going to be, I think, the long-term impacts. To the extent there were expectations that there would be an ontologically shocking moment, when new qualitative capabilities would come online, I tend to think that ultimately lifting hundreds of millions of new users to frontier level and getting them to interact at scale with a frontier model, over the long term, is going to be just as impactful and just as economically relevant as introducing some jaw-dropping new qualitative capability. Yeah, I hear you, and that is true.
I mean, that's what Sam's mission was: deliver a single user interface that enabled you to do quick answers or do long, detailed research and coding. You know, Salim, do you remember, you and I were together up in the Bay Area, with Dave in Boston, when Google I/O came out, and there were so many "holy, holy, holy" moments when Google I/O was showing their capabilities. What did you think about this launch? I had the same reaction as Emad, which was, eh, it's not 10x better than what was there before. I'll concur with Alex, though, in terms of I think the real power will come in the cost drop, which will make it much more accessible to a lot of people, and I think downstream, in a couple of months, as people start building applications and GPTs and special agents on top of this, then we're going to see some really big surprises, which I'm looking forward to. Let's close it out with you, Dave. Dave, you've been thinking about this and watching all the telltale signs for a while. Were you excited, impressed, depressed? What was it? Well, I mean, you called it right, Peter. Compared to Google I/O, which had incredible showbiz value and a ton of video, a ton of computer-generated video, for whatever reason OpenAI decided to go folksy, make it look like a high school presentation, you know, and feel startupy. And I don't know if they'll stick with that. You know, Steve Jobs did the best showbiz in the history of the world. And the anticipation of this launch was up there with the top three product launches of all time. It really was. Yeah. So, you have an opportunity to really blow people's minds. Either they didn't have time to really work on it, or they don't have that staff built up yet, or they just don't care. Maybe... I don't think that's the case, but they really did not put a huge amount of effort into this event. And it came through, and you'll see some data that supports it; it's pretty obvious that it did come through. Let's look at that. This is the review on Polymarket, and what we see here is answering the question: which company has the best AI model by the end of August? And coming into this, you know, OpenAI was riding high, with Google coming in second and then Anthropic in third, and then we see there the timestamp for when this release went live. Any commentary, Dave? Yeah. Well, I mean, it's great that Polymarket exists, because of the feeling that I think we got. We all watched it live here in the office. We had a little... actually, Alex suggested it. It was phenomenally cool, actually. But watching the ticker in real time, you know, there's a dip when they did their first coding demo and then a huge plunge when they did their second coding demo. And literally the betting markets went from an 80% chance they'll have the best AI in the world, not just at the end of this month but also at the end of the year, to completely inverting and saying, no, Google's going to have the best AI at the end of the month and at the end of the year. And I think they actually showed some incredible capabilities and rolled them out at a ridiculously great price point, but the market reaction to it is, wow, I think Google's going to eat your lunch. So, yeah, you can't deny it. It's right there. People are putting money behind this prediction.
Every week, my team and I study the top 10 technology metatrends that will transform industries over the decade ahead. I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more. There's no fluff, only the most important stuff that matters, that impacts our lives, our companies, and our careers. If you want me to share these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, this report's for you. Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs building the world's most disruptive tech. It's not for you if you don't want to be informed about what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com/metatrends to gain access to the trends 10 years before anyone else. All right, now back to this episode. You know, one thing I just want to point out for folks listening, and I think it's true, is that when you have this huge expectation of GPT-5's launch, or any of these new models — you know, when Grok 4 came out — at the end of the day I sort of feel a sense of underwhelm. And I think it's not because it's not impressive. It's because we've become so desensitized to extraordinary progress, right? Right. I think there's something else here, though, that I'm really enjoying, which is that, given the closeness of the different models, it's likely that we won't have one runaway success, and that means you have a very competitive market, which is just good for consumers overall for the time being, and all the models will do incrementally better over time. So I'm excited by the fact that there's not one breakout. Sure. But I do think it's important for folks... let's talk about the desensitization for a second, because I think folks who are listening to this have to realize that our expectations are getting so high, and every time there's a new rollout that has additional capability, it's like, oh, eh, that's not so impressive. But, you know, compared to what existed a year ago or two years ago, it's extraordinary. Emad, do you agree with that? What are your thoughts? Yeah, I mean, it's hedonic adaptation, right? Like, when you get into a Waymo for the first time, it's great. Second time, yeah. And now it's just a whole experience around this. I think part of it was just the communication, though, because, as you've noted, 4o was a good model, but we see people getting wireheaded and hallucinations and all sorts. Lifting that up to a better base level should have been the communication, with practical examples, but they didn't really show that. Again, I think the communication was a bit off in showing that lifting of the floor. The other thing is that, for the first time, I think what we saw was that there's a big gap between what the consumer gets and what the lab has. We actually saw a few OpenAI people say that, before this came out, we had Horizon and Zenith as the two secret models on LMArena, where you compare models against each other. They chose to release Horizon, but Zenith was better. And OpenAI have admitted they have better models internally as well, even before the next cluster build-out. So, they're pulling their punches. Yeah.
Because it makes more sense, as you head towards AGI, to actually not release the best model to everyone, particularly because it's more expensive to inference. GPT-4.5 was so expensive, and that was their frontier model at the time, but it was too expensive for anyone to use for normal tasks, for the 700-million-people tasks. For the genius tasks, you don't want to give someone else that AI; you just use it for yourself to out-compete everyone else. So I think we'll see that bifurcation: decent models for everyone, for everyday tasks, for the 700 million, and then you make $700 million using the other model, because it's the only logical thing to do. Yeah. One of the reasons I was so disappointed by the lack of really compelling demos and showmanship yesterday is because I'm constantly trying to make more people aware of how much change is coming and how insanely important and imminent it is, and how much they need to rethink what they're doing tomorrow. And I was hoping to get some ammunition that I could actually just forward and use. And they managed to take one of the biggest turning points in history, the history of humanity, and make it kind of boring. And, I mean, maybe it was deliberate, because they had charts that were completely wrong as well. Like, maybe it's just all deliberate: look, you don't have to worry too much about this, right? No, that is a viable theory, actually, because, you know, all the accelerationists, including me and Alex, we know that a lot of this is being used internally for self-improvement — a lot of the compute, a lot of the capabilities — and it could be that it was intentional: don't scare the world. Well, I mean, like, yesterday when GPT-5 came out — so GPT-5 is a router model, so your prompt goes in and it routes it to thinking or mini or nano depending on something — they said it was actually broken for like 24 hours. And you're like, really? You released it and then you just left it broken, in that the routing was off? But it being broken is also a great way to actually gather data to do the model improvement, and they discussed this flywheel of data improvement. So again, I think we see this bifurcation now, where most of the announcements by OpenAI are likely to be very consumer-driven, very floor-raising, and I think we'll see less and less of the big massive stuff, apart from the outputs, like "we've had a breakthrough in something or other," but not generalizing that. I'm still waiting to see what an AGI or ASI demo would look like or feel like. And I don't know, but we're going to find out. Don't get me started, and move right along. All right, let's turn for a bit to benchmarks. And when I was having the conversation before this podcast began about, you know, should we talk about the benchmarks? Will it get old? Alex, what was your comment about the benchmarks? Riveting. Some of these benchmarks, Peter, are absolutely riveting. We are so spoiled. We're lifting hundreds of millions of people to the frontier level of these models. We're collapsing costs. The economics are collapsing by an order of magnitude. And here we are complaining, oh, it didn't demonstrate any ontologically shocking new capabilities. How spoiled we all are. We have gotten spoiled. And let's jump into the riveting benchmarks. So, Alex, since you've got the floor, let's begin here. GPT-5 debuts number one in LM Arena. So, first off, what is LM Arena?
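Before Alex answers that, a quick aside on the "router model" architecture Emad just described. The sketch below is purely illustrative: OpenAI has not published how GPT-5's actual router decides, so the tier names, prices, and difficulty heuristic here are hypothetical. It only shows the general shape of the idea — a cheap front layer classifies the request and forwards it to a small, medium, or heavy model.

```python
# Hypothetical sketch of a "router" front layer: a cheap classifier picks which
# backend model answers, trading cost against capability. Tier names, prices,
# thresholds, and the scoring rule are illustrative only, not OpenAI's logic.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    cost_per_million_tokens: float  # illustrative prices, not real ones

TIERS = [
    Tier("nano", 0.05),      # cheapest: short factual or chit-chat queries
    Tier("mini", 0.25),      # mid-tier: everyday drafting and summarizing
    Tier("thinking", 10.0),  # most capable: math, code, multi-step reasoning
]

def estimate_difficulty(prompt: str) -> float:
    """Crude stand-in for a learned difficulty classifier (0 = easy, 1 = hard)."""
    hard_markers = ("prove", "debug", "derive", "step by step", "optimize")
    hits = sum(marker in prompt.lower() for marker in hard_markers)
    return min(0.2 + 0.8 * hits / len(hard_markers), 1.0)

def route(prompt: str) -> Tier:
    """Send easy prompts to cheap tiers, hard ones to the expensive reasoning tier."""
    difficulty = estimate_difficulty(prompt)
    if difficulty < 0.3:
        return TIERS[0]
    if difficulty < 0.6:
        return TIERS[1]
    return TIERS[2]

if __name__ == "__main__":
    for p in ["What's the capital of France?",
              "Prove this lemma step by step and debug my proof script."]:
        print(p, "->", route(p).name)
```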
So, LM Arena — and I think we discussed this in the last episode a bit — is a crowdsourced benchmark wherein the community, the internet at large, is able to interact with competing frontier models in a variety of ways. The ranking that we're seeing here is focused on text-based interaction, so conversations. There are other scores that deal with web development and other modalities. And what we're seeing here is GPT-5 leapfrogging over the rest of the leaderboard to number one in text-based interaction. There's another parallel benchmark for web development where you see an even larger margin, a larger difference in Elo scores, between GPT-5 and the next strongest competitor. And this is remarkable. Again, we're so spoiled to see these leapfrogging capabilities every three months or so — it could get even faster. But this is going to be transformative in terms of the everyday conversations that hundreds of millions of people have, software development, and a number of other domains. So, can I ask you, Alex, how do you reconcile this chart with the Polymarket chart? Does that mean Google will again leapfrog this before the end of the year? I would say, to the extent that Polymarket is indicating a prediction, a rational prediction about the market — and I think that was set for the end of August — I would interpret that market movement as a prediction that Google will launch a new frontier model by the end of this month. Every expectation... and we're going to see in a little bit how much Google has done. I mean, they've been extraordinary under Demis's leadership. Here's the next one, and I'm going to turn to you, Emad: ARC-AGI-1, and we'll see ARC-AGI-2 in a moment, the leaderboard here. You want to give us a dissection of what we're seeing here? Yeah, these are kind of very, very hard tasks that are meant to indicate progress towards AGI. So Grok kind of led the way there, as you can see. Which one is that? It's Grok 4 Thinking, right? And so there's this kind of Pareto frontier of solving these very, very complicated tasks versus the cost. And so o3 was actually really, really good, but it's way out there, in that it's far more expensive. GPT-5 has different levels: the high, the medium, and the low. And it doesn't quite beat Grok, which is also the case for other ones like Humanity's Last Exam. And so I think this was part of it, in that we see better performance from GPT-5 for everyday stuff, and it just ekes out a lead on some of these or is up there. I don't think they wanted to blow everyone's socks off, because remember, they also have models that scored gold medals at the IMO. Mhm. You know, and Gemini, for example — recently they had Deep Think, a new version of their model, that scored a gold medal. So I think, apart from xAI, who are trying to do the best they can on all these benchmarks and release the best they can, we're starting to see some punches being pulled at the top of these benchmarks, on the AGI side, on the super-genius side. So I think we'll see a bit more clustering up there. Alex, you would agree? I think there are two ways to look at this chart. One, as Emad said, is which point in the scatter plot — which is plotting cost versus score — is at the top of the chart. That's one way to look at it. The other way is: what is the cost frontier? What's the Pareto-optimal frontier, where you get the best score or the best performance at a given cost?
And there, if you look just a bit to the left, you see the GPT-5 Mini series, and to the lower left of that, the GPT-5 Nano series, have defined a new frontier for cost performance. So I think the buried headline here is the hyperdeflation we're seeing in the cost of intelligence, which ultimately, I think, ends up being even more transformative than narrow capabilities at ultra-high cost. You could run the thought experiment: what would happen if we could build superintelligent computers so unaffordable that human civilization can't afford them? Compare that with what happens when intelligence is too cheap to meter, so that everyone can afford it. I think that's the central discussion. I think you'll see Google and OpenAI compete on that left-hand curve, effectively. And by the way, we're going to make all these charts available to our subscribers. You just go to diamandis.com/wtf and you can get all the charts downloaded for you. We'll be doing this for all of our WTF episodes going forward. Just make sure you have this so you can share it with your friends and your family. All right. Here we see the ARC-AGI-2 leaderboard, and, Emad, why don't you lead us off on this one? Yeah, this is just a more complicated version of ARC-AGI-1, because they were worried, with o3, that it might saturate. So again, I think, as Alex said, you see the same thing with the GPT on the left-hand side, kind of keeping that — just a more complicated version of the previous one. All right, moving along. So here's one that I think is... we discussed this, Emad, on one of our previous episodes, or I think it was with Alex, that, you know, how we benchmark these frontier models is going to start to saturate, and understanding how these frontier models actually become economically useful, how they're able to solve grand challenges, matters. So here we go. This is a look at economically important tasks. Emad, want to take a shot? Yeah, I think that, you know, this is the year where you break through that line, effectively, or you reach that level of performance. Especially — there's another chart, I think we have, from METR, which shows the length of tasks these can do, and GPT-5 is right at the top of that. Like, it can do tasks in law, logistics, sales really well, for a long time, without supervision, and with lower hallucinations, which is the other big news that they had around this. And so they actually become genuinely useful. They released ChatGPT Agent, which you could just set off, and it will look up things on the internet and do all sorts of stuff. A little while ago, it wasn't quite good enough, but soon it will be. And once that happens, this is when you see real big things happening: either a productivity boom or the inverse — you know, people getting laid off — and we're not sure which of those two futures is going to happen. But again, you can see you're just reaching that level now across just about everything. Who wants to plug in on this one? Salim? Well, this is where I think... what I mentioned. So there are two or three really big things here, right? To Alex's point, the cost drops of running these models, and we can do a ton. And to Emad's point, they're kind of taking out the hallucinations and cleaning it up. Even though the top line is not amazing, it's a lot more rock solid. Therefore, the kind of agents and applications that build off these things will be very, very solid and stable going forward.
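A brief aside for readers on the "Pareto frontier" idea that runs through the chart discussion above. Here is a small illustrative computation of how such a frontier is picked out of a scatter of (cost, score) points. The model names and numbers below are invented for illustration; only the selection rule — keep a point if nothing else is both cheaper and at least as good — reflects the actual concept.

```python
# Illustrative Pareto-frontier computation for (cost, score) points.
# A point is on the frontier if no other point has lower-or-equal cost AND
# a higher-or-equal score. Names and numbers are invented for illustration.

points = {
    "model-nano":   (0.05, 18.0),   # (cost per task in $, benchmark score in %)
    "model-mini":   (0.20, 35.0),
    "model-high":   (2.50, 52.0),
    "model-legacy": (4.00, 48.0),   # dominated: pricier and weaker than model-high
}

def pareto_frontier(pts: dict[str, tuple[float, float]]) -> list[str]:
    frontier = []
    for name, (cost, score) in pts.items():
        dominated = any(
            other_cost <= cost and other_score >= score
            for other_name, (other_cost, other_score) in pts.items()
            if other_name != name
        )
        if not dominated:
            frontier.append(name)
    # Sort the frontier left-to-right by cost, as on the ARC-AGI charts.
    return sorted(frontier, key=lambda n: pts[n][0])

print(pareto_frontier(points))  # ['model-nano', 'model-mini', 'model-high']
```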
And I think that's where we'll see some amazing use cases coming out, where we apply them in industry. And how should our listeners be thinking about this? What point of view should they take? Well, if you're running a business, this is the time to really build, to dig in, right? Before, you didn't know quite what you were going to get. What you're going to see now, going forward, is that it's pretty reliable, pretty solid. Go all in. If you haven't, you should be doing that anyway. Just go all in and start turning your business into an AI-native business. Yeah, the problem I run into all the time is that as the AI gets better and better and better, the benchmarks get harder to interpret. And also, in the early days, you know, it was all just pre-training. Oh, this is 100 billion parameters, this is 500 billion, this is a trillion. It's getting bigger. As it gets bigger, it gets smarter. And then the benchmarks are nice and simple. Then post-training became very important, but now the chain-of-thought reasoning is dominating. It's just such a huge factor. It makes it much harder to track what's working and what's not working. And the danger there is that people get paralyzed when they should be getting motivated, just like Salim said. And that's a challenge, actually. A benchmark like this is vague, and it's a little bit difficult for people to take this benchmark and then translate it into: should I start an AI law firm? Should I, you know, use it to work on discovering fundamental physical properties? Is it going to be good at materials science? So it's getting harder to make those predictions. But of course the answer to all of those is yes, you should. Yeah. I like to think, you know, in jobs we teach people to be like machines, so obviously the machines are going to do it better. Like, if you look at the HealthBench scores, for example, on the hallucinations — and hallucinations in general — I think something like 6 to 12% of all diagnoses are incorrect. AI has just kind of dropped below that level now. Close to 30% if you go to a primary care doctor. Yeah, it varies, but it's a lot. AI now makes fewer errors than humans — I think just now, over the last month — and again, that's going to be the most errors it ever makes. Yeah. And we'll go into this a little bit later, but, you know, there was an interesting study that said physicians by themselves do like 80%, physicians with AI models together do like 90%, but AI models by themselves were doing like 93%, which means that the human pulls back and enters lots of bias into the answers. You know, when I was chatting with doctors about who's going to do my surgery, I came across a guy and I said, "How many of these shoulder arthroscopies have you done?" And he said, "About 10,000." I said, "Okay, you're more like a robot than anybody. We'll go with you, because I want that consistency." By the way, that is the number one question you should ask a surgeon when you're interviewing them: how many times have you done this surgery — this morning? Right? Because you're basically training the neural net of the surgeon by seeing every possible case. And of course, we're going to end up with robotic surgeons that can see in every part of the spectrum and have had not just 10,000 but millions of cases. You just don't want to be the 50th one that morning. That's all.
All right, here's our next benchmark. GPT-5 sets a new record in FrontierMath. Alex, I'm going to you on this one, buddy. Yeah, I think this is perhaps the most exciting benchmark to come out of GPT-5 in the past 24 to 48 hours. So what's exciting here? Look at the performance of GPT-5 High in the lower right-hand corner, on FrontierMath Tier 4. FrontierMath Tier 4 is a benchmark that measures the ability of AIs to solve problems that would take professional mathematicians sometimes weeks to solve, but nonetheless problems for which there are known answers. We're starting to see increments on FrontierMath Tier 4 that, if you extrapolate them, suggest — and I've gone through this exercise, and it's a running discussion between me and the folks at Epoch AI — if you project this forward, you find, by the law of straight lines, that by the end of this year frontier AI is starting to reach 15 to 20% of hard math problems being solvable by AI. Project that forward another year, so by the end of 2026, you get to 35 to 40% of hard math being solved. Project it forward to the end of 2027, and you get to 70%. So what I think we're staring at is a slow-motion solution to math — all math, or at least math as it's currently understood in the summer of 2025. And that's one of the reasons why I think this is just riveting. Isn't that amazing? And I completely agree. And it does play into Emad's theory that maybe they slow-played it intentionally, because if you were to ask me, hey, what happened yesterday? They're crushing this benchmark relative to any other model. They cut the cost of AI at least in half, if not more. And they caught up to everybody else in coding. Like, if they had just said that in two minutes, that would have been, you know, the Death Star moment. Yeah. Just do that. Wait, can I drill into that just for a second? Alex, when you say it can solve math, right, can you give a specific example of what that looks like? Because I struggle with that, even though, you know, I did better than 800 on my SATs, I guess. What's a specific problem or class of area where you could say that it's done something interesting? Yeah. So you can look at the Epoch AI website for FrontierMath Tier 4 and see lists of example problems that have been published. These are hard problems in number theory, in analysis, in algebraic geometry that would require a professional mathematician weeks to solve, and that are being solved over the course of a short benchmark run by GPT-5. Okay. It also raises the question: what does this look like in practice? Say the dog catches the car and we actually get AI that achieves superhuman performance in math. I think it's a profoundly different world. It is, and it's hard — not everybody's a mathematician, not everybody's an engineer — but the way a lot of things get designed and built and created in the world is that you run into problems and you immediately look up in these massive books and tables: has anyone ever solved this before? And so if the AI is continually solving and archiving all of these mathematical capabilities and just making them available, then the engineering algorithms can just find it and use it, plug it in and go. And it's the same in coding, you know: huge libraries of solved problems, solved modules that can be assembled to create things very, very quickly.
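To make Alex's "law of straight lines" extrapolation concrete, here is the back-of-envelope projection he describes, written out. The figures are rough readings of his numbers from this conversation, not published Epoch AI forecasts.

```python
# Back-of-envelope projection of FrontierMath Tier 4 solve rates, using the
# rough figures quoted in this conversation (not official Epoch AI data).
projection = {
    2025: (15, 20),   # % of hard problems solvable by frontier AI, end of year
    2026: (35, 40),
    2027: (70, 70),
}

for year, (low, high) in projection.items():
    midpoint = (low + high) / 2
    print(f"end of {year}: ~{low}-{high}% of Tier 4 problems solved "
          f"(midpoint {midpoint:.0f}%)")

# Note the pattern: the midpoint roughly doubles each year (17.5 -> 37.5 -> 70),
# which is why the hosts describe this as a "slow-motion solution to math."
```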
I want to close on Emad here before we move on, just because we have a lot to cover still. Emad, closing thoughts on this one? Yeah, I mean, it's kind of an improvement over o4-mini. Again, we had the IMO gold medal from OpenAI, whereby they had a verifier on the other side of their model, and they said that just by extending the RL of GPT-5, they got a gold medal. So this model can get a gold medal. It can go even higher if you push it. From the last few days of doing some pretty advanced math, I can say that GPT-5 High is probably the best math model out there. But the really crazy thing is, I think it's getting to the point now whereby the solutions to math won't be complicated. They'll be really elegant. And that's how we typically see breakthroughs. People are thinking giant supercomputers, lots of work, but most of the advances we've had in science, in math, have actually been very elegant. And if you can do a million different things at once, then you can maybe find some of that elegant theory under all of this. And that's what's going to be a big leap. And if more and more people can do that now — because the mini, the medium, and the high are actually at the same level, which is crazy — then you might have a lot more mathematicians, and the humans and the AIs can figure out what that elegant theory is. And now it's time for probably the most important segment, the health tech segment of Moonshots. It was about a decade ago when a dear friend of mine, who was in incredible health, goes to the hospital with a pain in his side, only to find out he's got stage 4 cancer. A few years later, a fraternity brother of mine dies in his sleep. He was young. He dies in his sleep from a heart attack. And that's when I realized people truly have no idea what's going on inside their bodies unless they look. We're all optimists about our health, but did you know that 70% of heart attacks happen without any precedent — no shortness of breath, no pain? Most cancers are detected way too late, at stage 3 or stage 4. And the sad fact is that we have all the technology we need to detect and prevent these diseases at scale. And that's when I knew I had to do something. I figured everyone should have access to this tech to find and prevent disease before it's too late. So I partnered with a group of incredible entrepreneurs and friends — Tony Robbins, Bob Hariri, Bill Kapp — to pull together all the key tech and the best physicians and scientists to start something called Fountain Life. Annually, I go to Fountain Life to get a digital upload: 200 gigabytes of data about my body, head to toe, collected in four hours, to understand what's going on. All that data is fed to our AIs and our medical team. Every year it's a non-negotiable for me. I have nothing to ask of you other than please become the CEO of your own health. Understand how good your body is at hiding disease, and have an understanding of what's going on. You can go to fountainlife.com to talk to one of my team members there. That's fountainlife.com. All right, I'm going to dive into a bit of video here. This is labeled "Let the vibe coding begin." "GPT-5 is clearly our best coding model yet. It will help everyone, even those who do not know how to write code, to bring their ideas to life. So I will try to show you that. I will actually try to build something that I would find useful, which is building a web app for my partner to learn how to speak French so that she can better communicate with my family. So here I have a prompt. I will execute it."
"It asks exactly what I just said: please build a web app for my partner to learn French. So I can simply press 'run code.' So I'll do that and cross my fingers. Whoa. Oh, nice. So, we have a nice website. The name is 'Midnight in Paris Together.' Super romantic. We also see a few tabs: flashcards, quiz, and mouse-and-cheese. Exactly like I asked for. I will play that. So, this says Luca." All right, I'm going to pause it there. Commentary. Dave, what do you think about this? So, this is exactly when Polymarket plummeted. So I'm so glad you captured that clip, because the audience is looking at this and acting like, wow, didn't this blow your mind? There are only two types of people in the world: people who don't give a crap about this, and people who already do it — and they've been doing exactly this, probably with Claude Opus or Claude Sonnet 4 on Max, for like four months. And so it completely missed the mark, even though it was the best-presented part of the presentation, purely because it didn't show off any new capability or new abilities. But the ability to do this in ChatGPT — in other words, a single model that allows you to do everything... So, yeah, I mean, if I'm an investor in the upcoming round, this is really big news, because Anthropic, you know, generally claims to be the leader in coding. Most of the people who do heavy-duty coding lean on Anthropic, and they completely caught up in this release, and that's a very, very big deal, because not only are you good at everything else, but you're actually as good as Anthropic in their wheelhouse. Yeah. The question that was coming out was: is this an Anthropic killer? Right. Yeah. Here's my question about this. So you generate this web page, this web app, right? But let's say I'm a language startup and I want to launch that actual product. There's a huge amount of backend work I have to do to make it systems-integrated, integrated with Stripe, etc., etc. And we're finding that that's where all of the work is going. And therefore, this generated a front end, and it looks good, like a front-end prototype. Is it actually doing that much behind the scenes, or is there still a lot of work left? And that's the question I have for the folks on the pod that we have here. Here's my question for you guys. How should someone listening to this who hasn't played with ChatGPT-5, let's call it that, play with their own vibe coding on this? What's their first step? What do they do? How do they play? I would encourage everyone who has ChatGPT-5 Thinking access in particular to create a game. I think this is one of the simplest exercises. You've always wanted to create a long-tail application, a game or an interactive app of some sort, and you don't have coding experience: go and ask ChatGPT-5 Thinking to implement a new app for you, a new game, a new something, and let it rip. And do it right in the canvas. There's a canvas button right down on the little navbar, the search bar at the bottom. So click the canvas button. Do it right there locally. It's much more convenient. They've added a lot of capability inside the canvas. So you can build an entire game for yourself right there inside the, you know... just go to chatgpt.com and do it right there. Amazing. Yeah. Yeah.
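For listeners who would rather try this from code than from the ChatGPT canvas, a roughly equivalent exercise is a single call with the official openai Python client. This is a minimal sketch, not the demo shown on stage: the model name "gpt-5" is taken from the launch discussion and should be checked against OpenAI's current model list, and the prompt is just an example.

```python
# Minimal "vibe coding" sketch using the official openai Python client.
# Assumes OPENAI_API_KEY is set in the environment; the model name "gpt-5"
# is an assumption based on the launch discussion and should be verified
# against the current model list before running.

from openai import OpenAI

client = OpenAI()

prompt = (
    "Build a single-file HTML/JavaScript web app that helps my partner learn "
    "French: flashcards, a short quiz, and a simple vocabulary game. "
    "Return only the code."
)

response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": prompt}],
)

# Save the generated app so it can be opened directly in a browser.
with open("french_app.html", "w") as f:
    f.write(response.choices[0].message.content)
```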
I think the performance isn't quite there yet versus Replit, Lovable, or Bolt, which do everything, every other integration, but again, these things all verticalize very quickly. Let's move on here. We saw the co-founder of Cursor come on stage and spend time with Greg Brockman, the OpenAI president. Dave, what do you think about this? How important was this? Well, incredibly important. It was not just a little time; it was a huge amount of stage time in one of the biggest, you know, live streams in history. So it was very important. And of course, what happened is, you know, OpenAI was going to buy Windsurf and essentially attack Cursor with an incredibly powerful competing product that's virtually identical in functionality. Actually, people here in the office — about half use Cursor, about half use Windsurf; they look virtually identical. And so that deal fell apart. Microsoft torpedoed it because of the intellectual property rights that Microsoft would have had. So they torpedoed the deal. And here we are, just a couple weeks later, and OpenAI is now saying, you know what, we're going to work very closely with Cursor. We're going to give them a lot of stage time. And I think what we're starting to see here is the alignment between the coding companies and their LLM partners, because previously everything connected to everything — any LLM was available through any coding platform. I think going forward, it's very likely that Cursor works closely with OpenAI. You know, Windsurf is now part of Google, or sort of part of Google — half in, half out. And then, of course, Microsoft has VS Code and wants to build its own thing. And so you're going to see this vertical alignment. And also, already, people all over Twitter, or X, are saying, hey, when I use it through its kind of native platform, through the canvas, it works much, much better than if I try to use it through something like Lovable or Replit. And so everyone's speculating that they're doing kind of what Microsoft always used to do: they're hampering the people that aren't playing by their rules in very subtle kinds of ways. And there's no way to prove it, but it's certainly all over the internet. Yeah. I mean, I think if you look at this, OpenAI and Anthropic both have about $3 billion in API revenue. $1.4 billion of Anthropic's API revenue — so about half — is Cursor and Microsoft Copilot. And they priced GPT-5 about 40% lower than Sonnet. So they're coming after Anthropic, basically. And they will undercut them on price. And now the performance is roughly equivalent. They're just basically trying to kill Anthropic's revenue. All right, the AI wars continue. Here's another one. This is an important part of the story from the GPT-5 announcement, and we're going to hear Sam Altman speaking about AI saving lives. "One of the top use cases of ChatGPT is health. People use it a lot. You've all seen examples of people getting day-to-day care advice or sometimes even a life-saving diagnosis. GPT-5 is the best model ever for health, and it empowers you to be more in control of your healthcare journey. We really prioritized improving this for GPT-5, and it scores higher than any previous model on HealthBench, an evaluation that we created with 250 physicians on real-world tasks." So I think a lot about this, right? And the AI models are at this point, I think, better than most physicians, but they're only as good as the data you feed them. And that's the biggest challenge.
Can you get access to the data that truly tells your story, right? So, comments? So, Sam did a very brief introduction to kick off the event yesterday, and then he did a much longer segment with a woman who was a cancer survivor, who had really done her own self-diagnosis and completely changed the course of her own treatment by talking to ChatGPT and getting very, very good advice from ChatGPT. And I think Sam chose to do that segment himself largely because, one, it's a very emotional, human segment — I thought pretty well done, too — but also because it's going to prevent the regulators from ever saying slow down or stop. Like, if you're going to save lives that imminently — lives that, you know, would otherwise have ended — you cannot slow down. You have to keep moving. And I think that's very important as a mission for OpenAI, to keep the throttle going. You know, there are two drivers to keep the throttle going. One is the incredible healthcare benefits. The other is the threat from China. And so both of those are, you know, right front and center. Salim? This felt more like PR to me than anything else, because I think you could do this with many of the models; this is maybe incrementally better than the others. I think integrated broadly into somebody's healthcare regime is where we'll see the real value of something like this, rather than this immediate thing. I do take Dave's point. I think that's exactly right. I think they're pushing hard to show they're trying to add a lot of value. Yeah. Let me throw in something on the personal front here. One thing that I do — and I've talked about it openly on this show — is I'm chairman of an organization called Fountain Life. And when folks come in for what we call an upload, we fully digitize them, right? We get 200 gigabytes of data about you, a full-body MRI. Didn't you just do yours? I did. I did it a few weeks ago. I just got my results back, right? So, 200 gigabytes of data. So for me, what was important was that I reduced my non-calcified soft plaque, which is the dangerous plaque that can give you a heart attack in the middle of the night, down by 20%, the lowest it's ever been. I got my liver fat from 6% down to 1%, which is fantastic. But what happens is, in my Fountain Life app — we're running this on Anthropic's models right now, but, you know, maybe we'll go to GPT-5; we're running much of the other programming on Gemini — but here's the point: I can query all my data. The Fountain Life system pulls in all my wearables, right? My Apple Watch, or my glucose monitor, and I can ask a question. I just asked a question the other day. I said, "Listen, there's a point at which my deep sleep increased significantly. What was I taking? What was the supplement or medicine that increased it?" And being able to explore stuff like that is amazing, right? So the best AI healthcare models in the world are great, but they're directly a function of: do you have enough deep data about your physiology over time to understand what's going on? I mean, ultimately that's critical. And Emad, you've been thinking a lot about this. I think my liver fat went from 1 to 6% last week. You're heading towards foie gras. Well, I mean, have you come through Fountain Life yet? I haven't. I need to find the time to do it. You're the godfather to my kids. You've got to come.
I have done the heart test, where they check if you have soft plaque in your arteries, and Lily was like, with your diet, you must be on the verge of a thing — go get it done. And we got it done, and they said you're whistle clean. We got nothing. That's great. You know, but the challenge is... So then you had a steak and you're just... I went to town. One quick point, right? Your body is incredibly good at hiding disease, and you don't feel a cancer until stage three or stage four. 70% of heart attacks have no significant precedent. So you have to look; you need to get the data. I have a schedule. I mean, first I wanted to get this shoulder sorted out — now done. Now I can go and do other stuff. All right. Well, Fountain for sure. And Emad, you think... you came through, didn't you? I haven't been through yet. No, I nearly got there. I will. I will. We will have to get healthy, and we all need the data. I think this will be really interesting, though, because the models themselves are getting good — and, again, better than any doctor. Like, they mentioned the HealthBench benchmark that they have. Doctors scored 20%, and the latest models they have score 60, 70% on that, so they're better than any doctor. But the really exciting thing is that we built a healthcare model called II-Medical, which we released open source — and we've got a much better version coming — which outperformed every single model except for GPT-5 and o3, and it works on a Raspberry Pi. It works on anything. So by next year, I think we'll be at the point where the key thing is you get the right data — especially, like, Fountain Life has so much — and then you just have AI going constantly, because what you want is for it to figure out stuff proactively as you feed it the data. Yeah. And now we're seeing these models being able to detect breast cancer five years in advance, and other things like that. Wouldn't it be nice if that happened? And now you have the capability of doing that, which I think will save so many lives, before the AI even makes a diagnosis. And I love having a very deep bench of data on me over the course of eight years. And, you know, right now I go from annual uploads to quarterly updates, and ultimately it is all about the data. Well, I'll just say one quick thing: right now, everyone on this should be trying to get as much data as possible, because the models are coming, and the more data you give these models about yourself, the longer you will live and the better you will live. Before now, we didn't have the right models. Now we have the right models, and again, they'll be available via OpenAI and then also open source. Yeah. I mean, so for all the folks building stuff around this, here's my desired end state. I want to get it to a point where you're about to drink a coffee and it says, hold on, wait 10 minutes, I'm still metabolizing the donut. Give it time so I can optimize your digestion. I think that's when things get really fun. Well, I want the AI to say: warning, pull up and don't eat the donut. Separate problem. All right, let's continue on here. Here you see another demo that came out of the GPT-5 announcement: an executive assistant for all of us. And, you know, I use Outlook right now from Microsoft, and this got me thinking about moving to Google Calendar. So, let's play the demo. "And we're giving ChatGPT access to Gmail and Google Calendar. Let me show you how I've been using it."
"I've already given ChatGPT access to my Gmail and Google Calendar, so it just works and it's easy here. But if you hadn't, ChatGPT would be asking you to connect right now. Let's see what ChatGPT is doing. Okay, that was pretty quick. Okay, so ChatGPT has pulled in my schedule for tomorrow, and oh, without even asking, ChatGPT found time for my run. I don't think I was invited to the launch celebration. We'll get you on there. We'll get you on there. ChatGPT has found an email that I didn't respond to two days ago. I will get on that right after this. And it even pulled together a packing list for my red-eye tomorrow night, based on what it knows I like to have with me. It's been amazing to see that as GPT-5 is getting more capable, ChatGPT is getting more useful and more personal." Right. I found that impressive. You know, I have an amazing chief of staff, Esther, whom many of you know. And she's incredible, but I think she could use this, and I could use this. Thoughts? I'm really coming around to Emad's theory that they deliberately undersold it, because this is coolish, but, you know, finding an email that you didn't open two days ago — you don't need AI for that. But we are using this stuff for business planning internally. I'm the chairman of a couple of companies that have hundreds of employees, and knowing what everybody's doing and why they're doing it is immensely challenging. And we're having a field day with this in very high-level strategic planning, in understanding, you know, performance, and understanding everything going on. It's an incredible unlock at the executive management level. Sure. And again, for me as a watcher, it was kind of frustrating to see it planning out her run when I know it can actually plan entire business units. But still, the point is it's very, very capable. So I was frustrated, but I get it. All right. You know, the opportunity to now enable... Erik Brynjolfsson calls this white-collar drudgery, right? There's a lot of cruft that we do just to get through. I think this solves for a lot of that. And I think this will amplify the capability of a lot of people. I think chiefs of staff rise up a whole level, because they could use this effectively and do a lot more. Oh my god. Well, the standard behavior in corporate environments is that individual people desperately want to help move the company forward. They want to contribute. They want to have maximum impact, and they want to know that the executive team knows they're doing that. And it goes horribly wrong when either they don't know exactly what they should be doing, or they do something amazing and nobody notices. And this completely unlocks and solves those problems. Love that, Dave. So when Donna and Nick ping me and go, here, what dates are you available for the next WTF episode, and here are 14 that intersect — figure it out with your calendar — now I'll be able to get help with that. You will. In fact, it'll get scheduled without your permission. Well, we're kind of dancing monkeys anyway, right? You're just being told, okay, be here at this time. Interesting, right? I do what's on my calendar. It's very funny — when I'd gotten to know Larry Page and Sergey Brin very well — Larry was on my board at XPRIZE in the early days — there was a point at which they said, "By the way, we fired our executive assistant." I mean, what do you mean you fired your EA?
Well, we learned that if we don't have an EA, no one can put anything on our calendar without our permission. And then, like a decade later, I was scheduling a podcast with Elon, and I said, "Elon, who should I schedule with?" And he goes, "Me." I said, "Don't you have an EA?" He goes, "Nope." So maybe that's the mistake we're making. All right, let's go on to the next topic here. This one just reads "AI revenue models." So GPT-5 is available now for free, including its most advanced models. At the same time, Gemini's advanced models are at $249 a month, and Grok Heavy is at 300 bucks a month. How do you think about the pricing situation here? ChatGPT is at 700 million weekly active users, on their way to a billion, probably within the next six months. Dave, thoughts? Yeah. No, they really slashed the price. It shows up for the user, but also in the APIs, which I think we have on the next slide. We do. Let me go ahead to that slide here. Yeah, here you go. Yeah, this is what Emad was talking about earlier. The cost per unit of intelligence just came way, way down. I think you said 40%, and I had it at about half of where we were a week ago. That's a big, big deal, and it's more than you would expect on the curve. And again, they didn't really sell it yesterday in any big way, but it is a big step on that Pareto frontier. GPT-4.5 was $75 for input and $150 for output. Wow. Yeah, as compared to a buck 25 on input and $10 on output. Yeah, per million tokens. Yeah, I would say GPT-4.5 was never quite on the cost frontier anyway. What I see with this almost order-of-magnitude reduction in the cost frontier is the unlocking of new use cases, and those I would expect to be qualitatively different. So, for example — favorite use case — if tokens for LLMs are suddenly an order of magnitude cheaper, that means that, for example, scientific discoveries or mathematical discoveries that require searching lots of possible completions of sentences, of theorems, etc. — you can do 10 times more searching, and that makes a qualitative difference. You can brute-force it, in that sense. Yes, exactly. Exactly. I've got to give a shout-out — there are, you know, probably thousands and thousands of engineers out there who listen to this podcast. And if you've tried writing code through any of these really great models — through either Anthropic or the new GPT-5 or Gemini 2.5 Pro — if you tried a month ago, you have to try again today. It's just night-and-day different in terms of being able to build something without even looking at the code, in terms of getting exactly what you asked for. I'm using mostly Gemini 2.5 Pro Deep Think to do the planning, but then I'm putting it into either GPT-5 or Claude Sonnet 4 Max to do the coding. It's working like you would not believe, and night-and-day better than just a month ago. How many of the frontier models do you have open at a time, and are you trying the same thing on each of them, Dave? Yeah, I keep them all open, actually. So, you know, look, it's 250 bucks a month. It's not going to kill you, and you can turn it off anytime. But I keep them all open, and I don't usually try Grok for code. I do everything else. Not sure why. Maybe I should. Alex or Emad, how are they getting these cost reductions?
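Before Alex and Emad get into how those cost reductions happen, it's worth putting the prices Dave just quoted side by side: GPT-4.5 at roughly $75 per million input tokens and $150 per million output tokens, versus GPT-5 at about $1.25 and $10. The example workload below is invented purely for illustration.

```python
# Cost comparison using the per-million-token prices quoted in this conversation.
# The example workload (10M input tokens, 2M output tokens) is illustrative only.

PRICES = {                       # ($ per 1M input tokens, $ per 1M output tokens)
    "GPT-4.5": (75.00, 150.00),
    "GPT-5":   (1.25, 10.00),
}

input_millions, output_millions = 10, 2   # hypothetical monthly workload

for model, (p_in, p_out) in PRICES.items():
    cost = input_millions * p_in + output_millions * p_out
    print(f"{model}: ${cost:,.2f}")

# GPT-4.5: $1,050.00 vs GPT-5: $32.50 on this mix -- roughly a 30x drop, which is
# why the hosts describe it as an order-of-magnitude shift in the cost frontier.
```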
I think a lot of it — and this is based on public information shared by the frontier labs — comes from optimizing the inference stack. So, moving to faster Blackwell GPUs, I think, is one factor. Low-level optimizations in the tech stack at inference time. Distillation of smaller models with fewer parameters, based on higher-quality data. Algorithmic innovations, architectural innovations. These all compound. Some of them are 50% improvements, some of them are 2 to 3x improvements, but collectively, as is now the norm in the industry, we're seeing order-of-magnitude-per-year cost reductions. But how much money are they losing on this per transaction? It's difficult to know from the outside, but I would also say that the matter is somewhat confounded by the enormous capital expenditures that are going into this space. So it's not necessarily even a reasonable question to ask how much is being lost. You have to sort of factor out the capital expenditures, as we've discussed previously. We're in the process of tiling the Earth's surface with data centers. This is an enormous capital expenditure. So it's a little bit difficult to separate out the amortization of capex from the opex of just day-to-day inference and electricity. I can definitely tell you they're definitely not incinerating money. There's a lot of FUD on the internet about them: oh, they're incinerating money, they're losing a huge amount. They're not. They're operating at about break-even or better. And, you know, in the context of what Alex just said, the order-of-magnitude improvement in cost per unit of compute that just came from the GB200s from Nvidia would put this way over the top. In fact, I was talking to Gemini earlier today about, you know, what do you think they spent training GPT-5? And it came back with this insane number: a billion dollars on H100s. I said, "Well, I don't think they used H100s." It said, "Oh, okay. Well, if they used GB200s, it would be more like 60 million." Wow. Okay. But yeah, it's about a factor of 10 reduction in the cost of the compute, and they're passing some of that through. And, you know, the GB200s are just coming off the line and starting to get into production, so it'll be a little while. So, when I saw the pricing, I thought they were doing this for competitive advantage and taking a huge loss. What I'm hearing you guys say is that's not the case. They're really running at maybe about break-even, but passing on massive savings to the consumer. Yeah. Salim, this is what hyperdeflation looks like. And it's an interesting thought experiment to ask — assuming this is sustainable, and I have no reason to think that it isn't sustainable — what does hyperdeflation, right now at inference time for frontier models, look like once it starts to spread to the rest of the economy? This leads to Peter's abundance state, I think. You know, I just want to say something for those listening. I feel smarter during these episodes, getting a chance to speak with Dave and Salim and Alex and Emad. And I hope you do, too. I mean, that's the reason I do this. We put about 20 hours of deep research in per week, trying to find the most relevant content to share with you, and then figuring out how to make it actually understandable, connect the dots, and deliver a sort of distilled CliffsNotes to help you stay ahead. You know, selfishly, I do this because it's a blast. Salim? And Dave, what about you?
Well, I think it's the curation that goes into this, right? Where you're looking across the spectrum and picking out the most relevant things. Dave talks about the actionability of it, but I think the fact that we can curate the very important bits for our viewers is the most important part and the most fun — and we get to see it first. Yeah. I always have my kids in the back of my mind when we're doing these podcasts, because they're going to live their entire lives in the post-AGI world. One of my kids was talking to one of the guys here in the office and said, "All your dad ever talks about is AI." Yeah, but the whole time you were growing up, did I ever talk about AI once? I never mentioned it — until suddenly it's going to change your life. You must get on top of this right now. You must have a plan. It's for their own good. And so I'm always thinking, in the back of my mind: how many listeners out there need this information in order to remap what they're doing and to be inspired? Our goal here is to inspire everyone to be in the thick of this and to find their own moonshots. So, if we just connect the dots on one thing: the fact that GPT-5 is now free, and has built into it the best doctor in the world — one that can diagnose anything, on a much better basis, instantly, for you — is a profound uplift. And this, I think, Alex, is exactly the point you were making earlier: when we talk about abundance in all of its many facets, taking 700 million people and suddenly giving them access to state-of-the-art AI becomes transformative. It'll be interesting. Here's the thing I want to watch: how does OpenAI's user growth go from here, given that they've made it free? I thought you were going to go in a different direction. I agree that that's interesting, but another angle is that in some sense this is the greatest A/B experiment — macroeconomists should be all over it — because prior to yesterday most of the world didn't have access to frontier AI, and starting yesterday a fraction of the world, call it a tenth, does. What does the before and after look like? Do we see dramatically different outcomes along different dimensions? You know, Sam didn't have a huge part in the event yesterday, but he did a lot of postgame interviews, which I watched, and in one of them an interviewer said, "Imagine college and education for me in, say, 2035," and he said, "2035? Is there even college in 2035? If it exists..." We need to coin a term for this. Maybe it's an intelligence shock that's hitting the world. Oh, I hope so. An intelligence inversion. Just intelligence. My son is 13, and I'm hoping the university system implodes in the next 5 years before he — your boy and mine. And by the way, I'll make a call-out here. I was talking to my son, saying I'm going to go do WTF with my Moonshot mates, and he goes, "Have you reached a million subscribers yet?" And I go, "Why?" "Because then you'll actually have a great podcast, Dad." So for those of you who haven't subscribed yet, please help me get this to a million subscribers. Subscribe. Share it with your friends. It's not about view count — I think it's more about quality, and if a smaller set of people gets much more value out of it, I think that's better.
All right. I mean, a huge fraction of the people I bump into have actually watched the pod, so we've got a quality audience for sure. Yeah. I think, to close out on that: there's a cap on human intelligence, but there isn't one on artificial intelligence. So everyone will have abundant intelligence, and you can expect that next year a zero drops off here, and the year after another zero drops off. It's going to be crazy. That's insane. So I want to hit a couple of things. OpenAI is eyeing a half-a-trillion-dollar valuation, which is pretty extraordinary. It's one of the highest-valued private companies, along with ByteDance, SpaceX, and Ant Group. I wouldn't say much more here other than: how will they go public, when will they go public, and will this be the largest IPO ever? We've seen OpenAI's GPT-5, but they also unveiled their open models. I don't want to go into this in too much detail, but Alex, do you want to lead us on this one? Actually, Immad, you're the open-model champion around the world. Wait, wait — hold on one second. Can we just go back to the previous slide for a second? Okay. So OpenAI is making about 10 billion a year. Microsoft is making about 300 billion a year in revenue. And OpenAI is valued at half of Microsoft — I just want everybody to sit with that ratio. Three trillion? No, no, I mean revenues. It's 10 billion versus 300 billion in revenues. Okay. So it's very lofty, and that feels overpriced to me. Anyway, Sam's projection is 100 to 150 billion in revenue in — what is it, two years from today? — which I don't doubt is entirely possible. There are only two versions of the world, actually: there's a version where OpenAI easily hits that target, and there's a version where Google destroys them and wipes them off the face of the earth. Those are the two possible outcomes. I mean, talk about capitalism at its finest, right? Look, SpaceX is 13 billion in revenue, and what's its valuation — almost a trillion or something like that? Half a trillion. It's right there on the slide. Oh, it's right there. Okay. But when they own Mars, it'll go up a little bit. So, next slide. All right. There was an interesting note that Elon pushed out on X, which was: when is OpenAI going to buy Microsoft? Fascinating. All right, continue on with the open models here. I think it's pretty significant. A lot of people worried about Chinese open models going everywhere, and OpenAI has released a really solid one. It's a bit weird, it has to be said. But the main thing is that this model cost about $4 million to train, and it's better than any model we had this time last year. That's extraordinary. Next year it will cost $400,000 to train a model like that. How did you know it was 4 million to train? Did they release that? They said 2 million H100 hours, and the 20-billion-parameter model that runs on your laptop was 10 times cheaper. It was 2 million hours at about $2 an hour. And was that from scratch, or was there distilling from a big model? No, it's from scratch. It's 80 trillion tokens — 80 trillion words. Dave, to your point, I think the footnote there is: where do those tokens come from?
And I think it's reasonable to assume, in the style of, say, Microsoft's Phi models, that these are tokens generated through some synthetic process from a much larger, much more expensive (in fixed-cost terms) model — in which case you might call the quoted pre-training cost just the marginal cost of training on the back of a much larger parent or teacher model. I think that's the key distinction. We should do a whole podcast on just that topic, because in the broader sense, AI that helps create the next AI is an incredible force multiplier for humanity. And it's a good example, because when you distill the training data and create synthetic data using the prior model, you knock 90 to 99% off the cost of creating the next iteration. It's just crazy economics, how it feeds back like no technology previously — other than maybe robots building robots someday. Yeah. There's nothing else that feeds back like that. We need a new term that supersedes Moore's law here, because the speed of this is extraordinary, and we're witnessing the evolution of something that — I can tell Alex is about to say something brilliant. I know that look. Let me point out a couple of things here. One: we do have this already. It's called education. Distillation is what humans use to take the years and years of knowledge that researchers and teachers spend accumulating and convey it in a concise lesson to a student. So we as humans do distillation as well. It's very efficient, very economical. And so it's perhaps not that surprising to see distillation give us radical economic efficiencies in these open-weight models. That's the first point. Second point — just to go back, Peter, to your earlier comment — having these supply-chain-safe, if you want to call them that, open-weight models is transformative for so many applications that are highly regulated or very sensitive to supply chain risks: in finance, in healthcare, in government. Now we have American-trained models that can be embedded in all sorts of mission-critical, internet-disconnected systems, and that is going to be transformative. Insane. Yeah. Just one final thing on this: this model only has about 5 billion active parameters, so it runs faster than you can read, even on a MacBook. And I think the big thing is, everyone's talking about billion-dollar training runs. I actually don't think that's true at all. I think you will have a GPT-5-level model within two years, max, that will cost under a million dollars to train end to end. And nobody's got that in their numbers. The expensive part was the journey to get there. I completely agree that at some point we're going to discover — I've made this point previously — the perfect architecture, the perfect sort of micro-kernel version of a foundation model, with a relatively small parameter count, that's fully multimodal; and if we knew what that were today, we could radically collapse training costs. I think what this actually shows is that we don't even need that. We need a trillion good tokens. And if we've got a trillion good tokens, then you can train a frontier model for less than a million dollars next year. And so do you then embed that into all sorts of devices, humanoid robots, moving cars, anything? Yeah — everything, everywhere. Everything you build comes with intelligence built in.
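As a quick sanity check on the training-cost figures just walked through, here is a minimal sketch in Python. The 2 million H100-hours and the roughly $2-per-hour rate are the numbers quoted in the conversation; treating distillation or synthetic data as a flat 90-99% discount on the next run is a simplifying assumption for illustration, not a measured figure.

```python
# Rough arithmetic behind the training-cost figures discussed above.
# The H100-hour count and $/hour rate are the numbers quoted on the pod;
# the distillation discount range is the 90-99% figure mentioned by Dave.

H100_HOURS = 2_000_000          # reported compute for the larger open-weight model
DOLLARS_PER_H100_HOUR = 2.0     # quoted market rate

from_scratch_cost = H100_HOURS * DOLLARS_PER_H100_HOUR
print(f"From-scratch training cost: ${from_scratch_cost:,.0f}")        # ~$4,000,000

smaller_model_cost = from_scratch_cost / 10                            # the "10x cheaper" 20B model
print(f"20B-parameter model: ~${smaller_model_cost:,.0f}")             # ~$400,000

# If synthetic / distilled training data from a teacher model removes
# 90-99% of the cost of the next iteration (hypothetical, per the discussion):
for discount in (0.90, 0.99):
    print(f"Next-generation run at {discount:.0%} discount: "
          f"~${from_scratch_cost * (1 - discount):,.0f}")
```

On those assumptions, the from-scratch figure lands at about $4 million, the smaller 20B model at roughly $400,000, and a heavily distilled follow-on run somewhere in the tens to hundreds of thousands of dollars — which is the intuition behind the "frontier model for under a million dollars" claim.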
I think that's where this goes. Yeah, and your question exactly defines the future entrepreneur: what am I going to do with all that? If for a million dollars — which is seed money — I can build a GPT-5-level model, what else can I build? This is going to be the age of abundance, where the limit is people's imagination. If you can imagine something genuinely useful that people want, the cost of creating it is near zero. Well, you don't even have to create the model. One entity needs to create that model, open source, once, and the economies of scope mean it can be used anywhere. Maybe I can stop rearranging the room for the damn Roomba. There you go. That would be a great starting point. Let's not go there. Hey everybody, there's not a week that goes by when I don't get the strangest of compliments. Someone will stop me and say, "Peter, you've got such nice skin." Honestly, I never thought, especially at age 64, I'd be hearing anyone say that I have great skin. And honestly, I can't take any credit. I use an amazing product called OneSkin OS-01 twice a day, every day. The company was built by four brilliant PhD women who have identified a 10-amino-acid peptide that effectively reverses the age of your skin. I love it and, like I say, I use it every day, twice a day. There you have it. That's my secret. Go to oneskin.co and use the code PETER at checkout for a discount on the same product I use. Okay, now back to the episode. All right. We have the back end of this WTF episode, which is to look at all the other companies as the AI wars continue: Grok, Gemini, Meta, Nvidia, Apple. I'm going to try to move us through this. There is some important data we need to share with everybody — this is what we're watching and keeping in tune with, and hopefully you are too. Let's jump in. First is a quick look at the Humanity's Last Exam (HLE) benchmarks. Alex? Yeah, I think what we're really seeing here — so we see two models, Grok with extensions and derivatives of various sorts, and GPT-5 and its derivatives, leading the pack. If you pull back that headline, what you're actually seeing is the power of tool use and the power of parallelism: GPT-5 leaning heavily on search and other tools, and Grok 4 Heavy leaning on the power of having multiple parallel agents collaborating. Zooming out to the 10,000-meter perspective, I think what this points to is what we were just discussing: a world in which it's not just the core foundation model but the arrangements — not even necessarily scaffolding, but the ability to integrate these micro-kernel-type foundation models with each other in teams of agents, and to integrate them with powerful tools in their environment. That's going to turn out to be one of the next big shocks in terms of how we're able to push the frontier on HLE and other hard benchmarks. By the way, for our subscribers listening: if you've got a second and want to do something fun, just get onto ChatGPT or Grok or wherever and ask it to give you 10 example questions from Humanity's Last Exam. I'm going to share a couple of them here that I asked for. Here's one in the classics category: here's a representation of a Roman inscription, originally found on a tombstone. Provide a translation for the Palmyrene script. A transliteration of the text is the following.
And then you have to translate that. Here's another one: what is the rarest noble gas on Earth, as a percentage of all terrestrial matter, in 2002? Okay. All right, here's one I'm going to ask our geniuses here. In physics: a point mass is attached to a spring with spring constant k and oscillates on a frictionless surface. If its amplitude of motion is doubled, what happens to its total mechanical energy? A, it doubles; B, it quadruples; C, it triples; or D, it remains the same. I'm not going to ask you to answer that. It should quadruple. I would expect it quadruples — the energy is E = ½kA², so doubling the amplitude quadruples it. Yes. Correct. Okay, there we go. At least I'm flashing back to my physics courses. Please, for God's sakes, let's not do that. All right, last one: consider a balanced binary search tree, like a red-black tree, with N nodes. What is the worst-case time complexity for searching a given key? And the answer is O(log N). Order log N. Thank you. There you go. I'd like to point out that on this one, the open-source models OpenAI just released scored 19% and 17% — and the 17% is the 20-billion-parameter model that will run on anyone's laptop. Crazy. Amazing. Bring that into your college exams, everybody. All right, we had Elon pipe up. He said, "Great work." And here was the tweet he's referring to: very proud of us at xAI — after seeing the GPT-5 release, with a much smaller team we are ahead in many ways; Grok 4 was the world's first unified model and is crushing GPT-5 in benchmarks like ARC-AGI. So we're going to have this continuous — I don't know if it's an ego battle, a financial battle, whatever it might be — where everybody is just trying to one-up each other. And of course, his next tweet was, "Grok 5 will be out before the end of the year, and it will be crushingly good." So, comments on Grok? He just tweeted saying, "Grok 4.2 before the end of this month." 4.2. I think, Peter, one of the take-homes here is that whoever defines the benchmarks wins. You create the evals, and humanity wins. It's amazing how starved the research community is for compelling new evals, as discussed previously, and so to the extent that we can create more evals — and I think your community has also chimed in historically with some wonderful ideas for abundance-oriented benchmarks — the frontier labs will, I think, race to achieve them. All right. Most of the Polymarket predictions have Google winning by the end of the year, and for good reason. What we've seen is extraordinary, and here's the title of the slide: Demis, in a word, relentless. In only two weeks they've shipped or achieved — and I'll read the list here — Genie 3 (we'll see an example of that), Gemini 2.5 Pro Deep Think, Gemini Pro free for university students, AlphaEarth (amazing — we'll see a demo of that), Aeneas for deciphering ancient text, Gemini winning the gold medal at the International Math Olympiad, Storybook, the Kaggle Game Arena, Jules, NotebookLM video overviews, and Gemma, which passed 200 million downloads — that's Google's lightweight open-weights model. I mean, really impressive work. Well, Demis has 6,000 people in AI R&D. OpenAI is up to a little under 2,000 now, but these guys at Google have been working on it for years. So they've got maybe a factor of 10 more person-hours put into it so far, and they're all operating on things in parallel.
So they're now unleashing it all. It was all just kind of sitting there in the lab until OpenAI put the competitive pressure on them. Yeah. Something has shifted in a big way at Google, on a couple of fronts. One, they're unleashing all the things they've been working on. The other is that they proactively reached out to a bunch of our companies, including Blitzy — which is a particularly hot company, but I don't know how they found it, probably through all their big data — and the Gemini people came over to our office proactively and said, we need to meet with you. So they're really reaching out, trying to get businesses to move over to using Gemini. And that was also really evident in the GPT-5 rollout yesterday: the call to companies saying we're here, we're open, we want to partner with you, we're cutting the price point to make it easier, and we're open for business. So I think that's a new thing — I hadn't seen anyone proactively reach out to our companies until this week. Amazing. All right, let's take a look at a few of these examples coming out of Google. This is Google's Genie 3 — world models for gaming. Let's play the video. I was blown away by this; I found it probably one of the most impressive things I've seen in the last week. What you're seeing are not games or videos, they're worlds. Each one of these is an interactive environment generated by Genie 3, a new frontier for world models. With Genie 3, you can use natural language to generate a variety of worlds and explore them interactively, all with a single text prompt. Let's see what it's like to spend some time in a world. Genie 3 has real-time interactivity, meaning that the environment reacts to your movements and actions. You're not walking through a pre-built simulation; everything you see here is being generated live as you explore it. And Genie 3 has world memory. That's why environments like this one stay consistent. World memory even carries over into your actions. For example, when I'm painting on this wall, my actions persist. I can look away and generate other parts of the world, but when I look back, the actions I took are still there. And Genie 3 enables promptable events, so you can add new events into your world on the fly — something like another person, or transportation, or even something totally unexpected. You can use Genie to explore real-world physics and movement in all kinds of unique environments. You can generate worlds with distinct geographies, historical settings, fictional environments, and even other characters. We're excited to see how Genie 3 can be used for next-generation gaming and entertainment. And that's just the beginning. Worlds could help with embodied research, training robotic agents before working in the real world, or simulating dangerous scenarios for disaster preparedness and emergency training. All right, I'm going to pause there, but holy cow. I mean, first of all, the simulation theory just took a huge jump forward. Boom. This blew my mind. I actually showed this to a friend who spent the last two or three years building metaverses, and his jaw literally dropped. He said, "I don't even know where to start." The fact that you can have a responsive environment that adapts depending on where you look, and all of it is generated on the fly in real time — he couldn't cope.
I've just never seen his mind broken like that. Yeah, it's a diffusion-transformer-style model, similar to a lot of the video models like Veo and others. And again, we're seeing the breakthroughs come here especially because Google has such an amazing data set. I think you'll see a video model like this from xAI as well — this is what Elon's going to be putting those 10,000 Blackwells toward with his video model. But the fact that it's real time now gives you a real sense of where this goes. Similarly, we've now seen real-time video generation from Wan and others. So every pixel will be generated in a few years, which is going to be cool. And Meta — Zuck has wanted the metaverse forever, and of course this is delivering the metaverse on the one hand. On the other hand, this is billions of dollars of capex that's been allocated to video gaming, or to metaverse software, that is suddenly in danger of having been rendered irrelevant if this can all be the output of a single model. A thousand voices in the video gaming industry just cried out in anguish, if this is all just a prompt away. That's the response I got. Yeah. I mean, with Veo 3 potentially crushing Hollywood, and this potentially crushing the video game industry — or reinventing it, accelerating it, making it possible for anybody to create magically compelling video games — this is the Star Trek holodeck, this is the Matrix, this is potentially the key node in the tech tree of our civilization that unlocks general-purpose robotics and general-purpose autonomous vehicles. Yeah, because they can train inside of it. That's right. Extraordinary. Absolutely extraordinary. All right, here's another extraordinary gift from Google, and this is Google's AlphaEarth: maps in real time. It turns massive satellite data into unified global maps — views of land and coastal areas at 10x10-meter precision, tracking deforestation, crop health, water use, and urban growth. Take a quick look at this video. This is how our new AI model, AlphaEarth Foundations, interprets the planet. Different colors in this map show how different parts of the world are similar in their surface conditions. So similar colors mean similar things, like two deserts, two forests. The model understands the unique patterns that distinguish any ecosystem, so it's able to use those learned patterns and quickly find matching patterns in other places in the world. This allows it to tell the difference between, say, a sandy dune on a beach and the deserts of the Sahara. It used to take months to years for scientists to accurately map the world; with our data set, they can do it in minutes. Much like Google Search has indexed the web, with AlphaEarth Foundations we've indexed the surface of the planet. And we're making this available through Google Earth Engine, for the years 2017 onward. All I can say is: just in time. Thoughts? What's interesting here, I think, is that this is what's called an encoder-only model. It takes 10-meter-by-10-meter patches of Earth's surface and converts them to high-dimensional vector representations. Encoder-only models were very popular in natural language processing prior to the advent of so-called decoder-only models like the GPT series. I think the elephant in the room is that once we have encoder-only models that cover the Earth's surface, we're about to get decoder-only models.
And here's what that'll enable in practice. Right now, with these encoder models, you can convert arbitrary land masses or ocean masses to vectors and do a bit of regression on them, and maybe a bit of light prediction. With decoder-only models, you'll be able to take a few square kilometers of land and extrapolate out, visually, what the future of that land looks like. And you'll be able to do searches over interventions: if I put a parking lot here, or a hospital here, what's going to happen, in all likelihood, to development in the area? You'll be able to do urban planning as a matter of tree search, in the same way that AlphaGo or AlphaZero or MuZero are able to play chess. That, I think, is going to be the real amazing unlock. Amazing. See, this is the kind of application where I think all these things start to really shine — where you can take all that capability and apply it to something like this. It'll completely transform how we look at the world. My mind is kind of blown by this one. Yeah. I'm really glad you said that, Alex, because I really did not get the implications of this until you explained it just now. I do appreciate that. You know, on the prior slide too — I had a meeting earlier today with Satya Mahajan, the CEO of Dataline here, and all these companies are thinking: what's my moat, what's defensible, what's going to give me recurring revenue for the next 20 years? I'm like, that's just not a way to think anymore. If you look at the rate of change, it's all about small, nimble teams and great team dynamics. Overall, there will be far more company successes than ever before, but you can't expect to sit still. You have to reinvent yourself all the time. Amen. And amen. I think agility and passion-driven building, and understanding the root-cause problems that you want to go solve — those are the fuel for the future. It's not setting up regulatory blockage. All right. Next up is a video of Zuck on Meta Superintelligence Labs — superintelligence for everyone. Let's take a look. I want to talk about our new effort, Meta Superintelligence Labs, and our vision to build personal superintelligence for everyone. I think an even more meaningful impact in our lives is going to come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, be a better friend, and grow to become the person you aspire to be. This vision is different from others in the industry who want to direct AI at automating all of the valuable work. This is going to be a new era in some ways, but in others, it's just a continuation of historical trends. About 200 years ago, 90% of people were farmers growing food to survive. Today, fewer than 2% grow all of our food. Advances in technology have freed much of humanity to focus less on subsistence and more on the pursuits we choose. And at each step along the way, most people have decided to use their newfound productivity to spend more time on — All right. So, he's out pitching hard. He wants to get to superintelligence first. What could possibly go wrong? And the poaching continues. I love this: Zuck contacted over 100 OpenAI employees, and 90% of them turned him down. Why? Because they think OpenAI is closer to AGI than Meta. That's got to sting. Immad, what do you think about that? I think he has a very different definition than Sam Altman does.
I think one of the reports was that he was talking about how AI could make Reels a better product. So I think it's a very different view from the type of ASI we're talking about. They should just call it meta intelligence. But I think it shows something: you see billion-dollar offers there, and people still don't move. I think everyone feels that we're getting close to that AGI point, and you want to be where it's going to happen — because what even is money after that? We're going to find out soon. Well, that's a very important point. In this post-abundance world, we're living in a post-scarcity world as well; money has very little meaning. Immad, Alex, you and I have spoken about that at length, right? And I think, Peter, as the cost of talent is increasing — and it would appear that it certainly is — that's going to force the frontier labs to start competing based on algorithmic insights and ideas, and I think that's a net positive for the economy and the world. Amazing. All right, I love this next one: the Zuck poaching effect. Sam just announced $1.5 million bonuses for every employee, paid over two years. He's now officially made every employee at OpenAI a millionaire by giving them over a million dollars. That compares to 78% of Nvidia employees who are also millionaires. David asked if that included the baristas. Did we answer that? I don't know, but we'll be at OpenAI in a couple of weeks and we'll ask. Okay. It'll affect your tipping at the coffee counter, I guess. Oh my god. Yeah, Peter, I would expect this to create a bloom of seed funding for startups in the next year or two. It's going to be absolutely enormous. I'm already starting to see, with some of the startups I advise, the beginnings of an absolutely enormous — That is such an important point, right? This is something that America does so well. We create these deca-billion and centi-billion dollar companies, and trillion-dollar companies, and because of stock options and stock distribution we make all the employees super wealthy, and they turn around and invest in other individuals — and that doesn't exist in a lot of countries. Dave, you've spoken about this. Well, there are two things that are different this time, but you're right, that is the engine of America and it works really, really well. This time around, it's so fast and the teams are so young. That's unprecedented, and I could see some things going wrong with that. But it's a field day right now, so we might as well savor it. Also, it's much clearer now how you're supposed to work with OpenAI, Anthropic, or Google. How is that? Well, they've made it very, very clear that they want partners in all these categories, especially complicated, regulated categories or categories with proprietary data. They say: here's the API, here's how we want to work with you, the pricing is going to be super low, we want you. For those three companies, it's really clear. It's not as clear with Grok yet, and I don't think anyone knows how to work with Meta, if there is any way. But for those three big guys, it's just a field day: here's how we want to partner — please just bring in the revenue, change the world, we're all happy.
So I really am cheering for Sam in this battle, too, because Marc Andreessen built Netscape — coolest company ever — and got absolutely obliterated when Microsoft woke up. They just annihilated him and changed the course of his life. He did well in the end anyway, but it was a complete life change. So now Sam is that guy. He woke up Google. He's got a slightly shaky relationship with Microsoft. I think he pushed Google over the edge — I think they were awake already, in that regard. Yeah. Well, so now he's got them all coming after him concurrently, and he's got to outrun them, and it'd be a great American success story if he can stay ahead of that and survive. Can't wait for the Hollywood movies that will come out on all these subjects. Yeah. I think one of the really interesting things is that crypto has basically been legalized in America, almost fully, in the last week. And so I think next year crypto x AI is going to be the most ridiculous thing you've ever seen, because these startups will go with a few smart people, they'll get massive traction by leveraging these models, and then anyone will be able to buy into them pretty much instantly. And so we're just at the start of the bubble, I think, versus what we're going to see. It's going to be the biggest bubble of all time. Well, bubble has a negative connotation to it. Immad — of course, but we're just at the start now. This is the final hurrah of the current financial system, or societal system as well. I really think, though — just take a step back and try to visualize Sam's life for real. The biggest companies in the world are offering your direct reports $1 billion to walk out the door. You have to fight. At the same time, Mira Murati and Ilya Sutskever, two of your founders, are trying to raise 10 or 20 billion dollars to compete directly with the thing they built at OpenAI. They did raise it. They did. Does it get any harder for an entrepreneur than where Sam is right now? And he's, like, bulletproof. He's just fighting his way through it. The movie will be really cool. I think there's so much to work on. This is a great testament to the fact that if you keep pushing product, keep launching new things, and keep innovating, you can stay ahead. Facebook showed us that, Yahoo showed us that, Google showed us that — all in their era, they just kept breaking boundaries. And so the only question now is: can you break those boundaries, break the status quo, relentlessly keep doing that, and differentiate yourself from the competition? Yeah, I think it's sort of an interesting economic experiment. In the past, I've compared the AI buildout happening in the US to 1939 and the prelude to the Manhattan Project. It's an interesting thought experiment to ask what would have happened if nuclearization and the Manhattan Project hadn't been a nationalized effort, but instead a private-sector effort where blue-chip companies were all competing with each other to see who could build the first atomic weapon. How much would they be spending to poach the top scientists from each other to build that first atomic bomb, with its strategic import over the future light cone? I think we're living, in some sense, a civilian version of that thought experiment. Mhm. Amazing.
Actually, the really interesting thing is that it's not hard to build the models if you know how. The "if you know how" is really, really rare. And so that's why they're willing to splash these billions on top of it. And it'll be interesting to see what they come up with now as these things get commoditized. All right. Staying on the OpenAI train: Nvidia and OpenAI announced their first European data center, in Norway. This is a $2 billion OpenAI data center with 100,000 Nvidia GB300 superchips. It'll host 230 megawatts of capacity, expandable to 520 megawatts — so half a gigawatt — powered 100% by renewable energy out of Norway. Let's take a quick look at this video. The launch of Stargate Norway marks a new chapter for AI infrastructure in Europe. We're entering a new industrial era. Just as electricity and the internet became foundational to modern life, AI will become essential infrastructure. Every country will build it. Every industry will depend on it. AI is no longer hand-coded; it is trained. It is refined with massive compute. It is deployed into factories, research labs, and digital services. Stargate Norway will be powered by GB300 superchips and connected with NVLink. It is designed to scale to hundreds of thousands of GPUs and support the most advanced models in training, reasoning, and real-time inference. All right, there you have it. Immad, analysis, please. Yeah, I think this is part of the big sovereign AI strategy, because your comparative advantage as a country will be how many chips you've got and how much intelligence you have when most of your workers are digital. We've seen OpenAI go very aggressively on this front — in fact, this week they announced they're rolling out ChatGPT to all federal workers in the US at the cost of $1 per agency per year. So I think the land grab has really begun. They couldn't have just said free? I would add, Peter, there's a less obvious angle here. Pulling back the details of the announcement, this new data center is planned to be powered with hydropower, which is intrinsically scarce. You either have access to it or you don't — it's not that economical, as a nation-state, to create a lot more hydropower. So there is, very literally, a land grab here, and this is Stargate planting its flag in that hydropower. To the extent that Europe has a policy of bounding power to certain energy sources, there's only a finite amount available to be reprogrammed toward AI. So a real land grab — and we'll see geothermal energy as a land grab, and we'll see other areas. I want to move us forward here. We saw a couple of interesting announcements coming out of the White House. Apple announced a hundred-billion-dollar US investment, increasing their total investment to $600 billion. Finally, we're seeing Apple come back to the US. How much of Apple's product is manufactured overseas right now? Anybody have an idea? It's got to be the overwhelming majority — like 90-plus percent. Huge. Comments, Dave? Well, look, most of the countries that have, like, a Samsung — in Korea, the government-industry integration is very, very tight. The US has never really had that before; this is the first time. But it obviously works really, really well. It got Japan on the map, then it got Korea on the map, and now it's gotten China on the map and beyond.
And so, you know, Trump is the first president to really take this to its limit. He's a business guy, so he knows how to do it, and it's obviously going to work really, really well. It's not super hard to figure out — you just need to do it. I would also add, going back to this idea of a tech tree existing for civilization: it seems clear that there's an innermost loop to the tech tree at the intersection of fabs, electricity sources, drones, and rare earths. And to the extent that it's possible to collocate, at as high a density as possible, the talent and infrastructure for building all of these, I think that has the potential to lead to an economic explosion for the US and for the world. Amazing. One more article coming out of the White House on AI, and that is: Trump demands the Intel CEO's resignation over China ties. Trump labeled the CEO "highly conflicted" over $200 million plus in past investments in Chinese tech firms and a relationship with the Defense Department. For me, this has echoes of J. Edgar Hoover's anti-communist campaigns at the FBI. Immad, do you have any opinion on this, being a non-American? Yeah. Well, look, it's just posturing, right? I think this whole US-versus-China AI thing is completely overblown, because everything gets commoditized soon anyway. Actually, to be honest, the push for open source should have been framed around China wanting to get into all our systems — then they would have actually put proper money behind it. But I think it's completely wrong, because again, the correct view is that this is abundant and it's going to come to everyone, everywhere. You can't keep a lid on it at all. How do you keep a lid on math? I threw this into the deck just to spark the conversation: right now, the chips driving this entire AI revolution — two-thirds, a 66% market share, through TSMC, a single manufacturer — that is utterly insane and not sustainable. So my guess is that the White House is thinking about this and talking about Intel every day. It's not a coincidence that Trump decided to tweet about one CEO. The China thing — I don't know what he's thinking there, but Lip-Bu is a 65-year-old guy. Intel must succeed; it's just an incredible national priority. An incredible asset, right? I mean, it defined the last 50 years. Mhm. So anyway, the point is the White House is talking about it. We desperately need balance in chip manufacturing, and we need a lot more volume of chip manufacturing. So if you're at Intel, what you should be thinking about is: how do you leapfrog? Well, their tech — the 18 angstrom, 1.8 nanometer node — is absolutely fine. They need to get the yields up, and then they need to build more fabs, which means federal help. And so I just think that if someone running that company can get friendly with the current administration, then it all gets unlocked and it'll explode and succeed wildly, which is what America needs. And I don't know — I hope they figure out the relationship between Lip-Bu Tan and Donald Trump quickly. Yeah. Actually, part of this was because Intel was trying to sell their fabs to TSMC. Yeah. So again, it gets complicated. I mean, that would be devastating for the world, really — there's no way that can get through. But I get it, right? Because all the losses at Intel come from the fabs.
They would immediately monetize a huge asset, and the remaining Intel would be hugely profitable the next day. That's the allure of that transaction. But then you'd have one company controlling the entire industry's destiny. Yeah, there's no way that makes sense. All right, I want to close out with this slide, which I find telling, especially on the heels of the Intel conversation: we're still early in terms of buildout. Here we see infrastructure capex as a percent of US GDP. The railroads were 6% of GDP back in the 1880s, telecom was about 1% back around 2000, and AI data centers in 2025 are at 1.2%. We're still early. And Alex, Immad — we've talked about how we're about to turn the planet into computronium. We're building data centers everywhere, and maybe across the solar system — we'll see. Salem, you've got an event coming up soon. Talk to me about it. On August 20th, we have our next monthly ExO workshop. The last two or three have sold out, and people are absolutely loving them. It's a hundred bucks to come; bring your company and we'll teach you how to build an ExO. We actually have a great ad, which we'll get a link to and post here — they created an ad where an AI reads out a real review by a real person. Super funny. Okay, it's fun. And for those interested in the Abundance Summit in March: applications are closed at this moment. They'll reopen in September, but you can get on the wait list by going to www.abundance360.com and letting us know that you're interested. We'll have all of our Moonshot mates at the Abundance Summit as well. Let's take a quick look around the horn. Dave, what's happening for you in the next few weeks? Well, the biggest thing by far is that we'll be together at OpenAI in what, 11 or 12 days? I'll be there the whole week, actually. And, God, there's so much going on in that building. So I'm really looking forward to that. We're having a fun podcast with Kevin Weil, the chief product officer at OpenAI — looking forward to that conversation. Immad, it's 2 a.m. Do you know where your children are? You're a nuclear power source, buddy. Thank you for sticking with us through the UK hours. It's too much fun to sleep. It is. And what's on your plate over the next month? We've got some big releases coming up. In particular, I've been looking at the economics of the AI age, so it's going to be wild — I'll be releasing a bunch of stuff around that. I've seen what you're going to release, and it is stunning. Dare I say, earth-shattering. Alex, how about yourself, pal? Oh my goodness. Well, I think we're in a time where — although on an exponential curve every point looks like the knee of the curve or the inflection point, so one has to be careful of such anthropic bias — I spend most of my time advising tech startups and making sure that the benefits of AI are evenly distributed throughout the economy. Every day is an adventure and an opportunity to smooth out the singularity, as it were. All right. Well, everybody, thank you for joining us on this episode of WTF and the GPT-5 announcement. We'll be coming back to you with another episode next week. Please tell your friends about what we do.
Our mission here is to help you understand how fast the world is moving, to inspire you, to give you the motivation to create your own moonshots, and to make all of this understandable — and actually, what was the word you used, Alex? Riveting. Riveting — I'm at the edge of my seat. An amazing time. The most amazing time ever to be alive. All right, to all of you, thank you for a fantastic conversation. Every week, my team and I study the top 10 technology metatrends that will transform industries over the decade ahead. I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more. There's no fluff — only the most important stuff that matters, that impacts our lives, our companies, and our careers. If you want me to share these metatrends with you, I write a newsletter twice a week, sent as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, this report is for you. Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs building the world's most disruptive tech. It's not for you if you don't want to be informed about what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com/metatrends to gain access to the trends 10 years before anyone else. All right, now back to this episode.