All right, everybody. We have an amazing crowd here today. We're going to be live streaming this, so let's hear you. Make some noise so everybody can hear that you're here. Let's go. Not bad.

I'm Alex Kantrowitz, the host of the Big Technology podcast, and I'm here to speak with you about the frontiers of AI with two amazing guests. Demis Hassabis, the CEO of Google DeepMind, is here. Good to see you, Demis. Good to see you, too. And we have a special guest: Sergey Brin, the co-founder of Google, is also here.

All right, so this is going to be fun. Let's start with the frontier models. Demis, this is for you. With what we know today about frontier models, how much improvement is there left to be unlocked, and why do you think so many smart people are saying that the gains are about to level off?

I think we're seeing incredible progress. You've all seen it today, all the amazing stuff we showed in the keynote. So I think we're seeing incredible gains with the existing techniques, pushing them to the limit, but we're also inventing new things all the time as well. To get all the way to something like AGI may require one or two more new breakthroughs, and we have lots of promising ideas that we're cooking up and hope to bring into the main Gemini branch.

All right. There's been this discussion about scale: does scale solve all problems, or does it not? So I want to ask you, in terms of the improvement that's available today, is scale still the star, or is it a supporting actor?

I've always been of the opinion that you need both. You want to scale the techniques you know about to the maximum, to exploit them to the limit, whether that's data or compute scale. And at the same time, you want to spend a bunch of effort on what's coming next, maybe six months or a year down the line, so you have the next innovation that might deliver a 10x leap in some way and intersect with the scale. So you want both, in my opinion. I don't know, Sergey, what do you think?

I agree it takes both. You can have algorithmic improvements and simply compute improvements: better chips, more chips, more power, bigger data centers. But historically, if you look at things like the N-body problem, simulating gravitational bodies and things like that, as you plot it out, the algorithmic advances have actually beaten the computational advances, even with Moore's law. If I had to guess, I would say the algorithmic advances are probably going to be even more significant than the computational advances. But both of them are coming now.
So we're kind of getting the benefits of both.

And Demis, do you think the majority of your improvement is coming from building bigger data centers and using more chips? There's talk about how the world will be wallpapered with data centers. Is that your vision?

Well, no. Look, we're definitely going to need a lot more data centers. It still amazes me from a scientific point of view that we turn sand into thinking machines; it's pretty incredible. But it's not just for the training. We now have these models that everyone wants to use. We're seeing incredible demand for 2.5 Pro, and with Flash we're really excited about how performant it is for the incredibly low cost. I think the whole world is going to want to use these things, so we're going to need a lot of data centers for serving, and also for inference-time compute. You saw Deep Think today, 2.5 Pro Deep Think: the more time you give it, the better it will be, and for certain very high-value, very difficult tasks it will be worth letting it think for a very long time. We're thinking about how to push that even further, and again, that's going to require a lot of chips at runtime.

Okay, so you brought up test-time compute. We're about a year into this reasoning paradigm, and you and I have spoken about it twice in the past as something you might be able to add on top of traditional LLMs to get gains. So this seems like a pretty good time to ask: what's happening? Can you help us contextualize the magnitude of improvement we're seeing from reasoning?

Well, we've always been big believers in what we're now calling this thinking paradigm. If you go back to our very early work on things like AlphaGo and AlphaZero, our agent work on playing games, they all had this attribute of a thinking system on top of a model. And you can actually quantify how much difference that makes if you look at a game like chess or Go. We had versions of AlphaGo and AlphaZero with the thinking turned off, so it was just the model telling you its first idea, and it's not bad, maybe master level, something like that. But if you turn the thinking on, it's way beyond world champion level; it's something like a 600-plus Elo difference between the two versions. So you can see that in games, let alone the real world, which is way more complicated, and I think the gains will potentially be even bigger from adding this thinking paradigm on top. Of course, the challenge, and I talked about this earlier in the talk, is that your model needs to be a kind of world model, and that's much harder than building a model of a simple game. It has errors in it, and those can compound over longer-term plans. But I think we're making really good progress on all those fronts.

Yeah, look, as Demis said, DeepMind really pioneered a lot of this reinforcement learning work with what they did with AlphaGo and AlphaZero. As I recall, it showed something like this: you would need on the order of 5,000 times as much training to match what you were able to do with still a lot of training plus the inference-time compute you were applying to Go.
So it's obviously a huge advantage. And obviously, like most of us, we get some benefit from thinking before we speak, although not always; I always get reminded to do that. But the AIs are obviously much stronger once you add that capability, and I think we're just at the tip of the iceberg right now. These models have really been around for less than a year. Especially when you consider that during its thinking process an AI can also use a bunch of tools, or even other AIs, to improve the final output. So I think it's going to be an incredibly powerful paradigm.

Deep Think is very interesting. Let me try to describe it right: it's basically a bunch of parallel reasoning processes working and then checking each other, like reasoning on steroids. Now Demis, you mentioned that the industry needs a couple more advances to get to AGI. Where would you put this type of mechanism? Is this one of those that might get the industry closer?

I think so. I think it's maybe part of one, shall we say. And there are others too that we need. Maybe this can be part of improving reasoning, but where does true invention come from, where you're not just solving a math conjecture but actually proposing one, or hypothesizing a new theory in physics? I don't think we have systems yet that can do that type of creativity. I think they're coming, and these kinds of paradigms, things like thinking, might be helpful for that, along with probably many other things. I also think we need a lot of advances in the accuracy of the world models we're building. You saw that with Veo, the potential of Veo 3, and it amazes me how it can intuit the physics of the light and the gravity. I used to work on computer games in my early career, not just the AI but also graphics engines, and I remember having to do all of that by hand, programming the lighting and the shaders and all of these things, incredibly complicated stuff we used to do in early games. And now the model is just intuiting it. It's pretty astounding.

I saw you shared an image of a frying pan with some onions and some oil. Hope you all liked that. There was no subliminal messaging about that. No, not really. Just maybe a subtle message.

Okay. So we've said the acronym AGI a couple of times. There's a movement within the AI world right now to say let's not say AGI anymore; the term is so overused as to be meaningless. But Demis, it seems like you think it's important. Why?

Yeah, I think it's very important, but maybe I need to write something about this, also with Shane Legg, our chief scientist, who was one of the people who invented the term 25 years back. I think there are two things getting a little bit conflated. One is: what can a typical person, an individual, do? We're all very capable, but however capable you are, there's only a certain slice of things that one is expert in, right? Or you could ask what 90% of humans can do. That's obviously going to be economically very important, and I think from a product perspective also very important.
So it's a very important milestone, and maybe we should call that something like typical human intelligence. But what I'm interested in, and what I would call AGI, is really a more theoretical construct, which is: what is the human brain, as an architecture, able to do? The human brain is an important reference point because it's the only evidence we have, maybe in the universe, that general intelligence is possible. And there, you would have to show that your system was capable of doing the range of things even the best humans in history were able to do with the same brain architecture. Not one brain, but the same brain architecture: what Einstein did, what Mozart was able to do, what Marie Curie did, and so on. It's clear to me that today's systems don't have that.

The other reason I think the hype around AGI today is overblown is that our systems are not consistent enough to be considered fully general yet. They're quite general, so they can do thousands of things, and you've seen many impressive things today, but every one of us has experience with today's chatbots and assistants: within a few minutes you can easily find some obvious flaw, some high school math problem it doesn't solve, some basic game it can't play. It's not very difficult to find those holes in the system. For something to be called AGI, it would need to be much more consistent across the board than it is today. It should take a team of experts a couple of months to find an obvious hole in it, whereas today it takes an individual minutes.

Sergey, this is a good one for you. Do you think AGI is going to be reached by one company and it's game over, or could you see Google having AGI, OpenAI having AGI, Anthropic having AGI, China having AGI?

Wow, that's a great question. I guess I would suppose that one company or country or entity will reach AGI first. Now, it is a bit of a spectrum; it's not a completely precise thing, so it's conceivable there will be more than one roughly in that range at the same time. After that, what happens is very hard to foresee. But you could certainly imagine there being multiple entities that come through. In our AI space, we've seen that when we make a certain kind of advance, other companies are quick to follow, and vice versa; it's a kind of constant leapfrog. So I do think there's an inspiration element, and that would probably encourage more and more entities to cross that threshold.

Demis, what do you think?

Well, I do think it's important for the field to agree on a definition of AGI, so maybe we should try to help that coalesce. Assuming there is one, there probably will be some organizations that get there first, and I think it's important that those first systems are built reliably and safely. And if that's the case, we can imagine using them to spin off many systems that have safe architectures provably built underneath them.
And then you could have personal AGIs and all sorts of things happening. But as Sergey says, it's pretty difficult to see beyond the event horizon and predict what that's going to be like.

Right. So we've talked a little bit about the definition of AGI, and a lot of people have said AGI must be knowledge, the intelligence of the brain. What about the intelligence of the heart? Demis, briefly: does AI have to have emotion to be considered AGI? Can it have emotion?

I think it will need to understand emotion. Whether we want it to mimic emotions will be almost a design decision. I don't see any reason why it couldn't in theory, but it might be different, or it might not be necessary, or in fact not desirable, for these systems to have the sort of emotional reactions that we do as humans. So it's a bit of an open question as we get closer to this AGI timeframe, which I think is more on a five-to-ten-year timescale. So we have a bit of time, not much time, but some time to research those kinds of questions.

When I think about how that timeframe might be shrunk, I wonder if it's going to be the creation of self-improving systems. Last week I almost fell out of my chair reading a headline about something called AlphaEvolve, which is an AI that helps design better algorithms and even improve the way LLMs train. So, Demis, are you trying to cause an intelligence explosion?

No. Not an uncontrolled one. Look, I think it's an interesting first experiment. It's an amazing system, and a great team is working on it. It's interesting now to start pairing other types of techniques, in this case evolutionary programming, with the latest foundation models, which are getting increasingly powerful. In our exploratory work I actually want to see a lot more of these kinds of combinatorial systems, pairing different approaches together. And you're right, that is one of the things: someone discovering a kind of self-improvement loop would be one way things might accelerate even further than they're going today. We've seen it before with our own work, with things like AlphaZero learning chess and Go and any two-player game from scratch in less than 24 hours, starting from random, with self-improving processes. So we know it's possible, but those are quite limited game domains that are very well described. The real world is far messier and far more complex, so it remains to be seen whether that type of approach can work in a more general way.

Sergey, we've talked about some very powerful systems, and it's a race to develop them. Is that why you came back to Google?

I mean, as a computer scientist, it's a very unique time in history. Honestly, anybody who's a computer scientist should not be retiring right now; they should be working on AI. That's what I would say. There's just never been a greater problem and opportunity, a greater cusp of technology. So I wouldn't say it's because of the race, although we fully intend that Gemini will be the very first AGI. Clarify that. But it's to be immersed in this incredible technological revolution. I went through the web 1.0 thing.
It was very exciting and whatever. We had mobile, we had this, we had that. But I think this is scientifically far more exciting, and I think ultimately the impact on the world is going to be even greater. As much impact as the web and mobile phones have had, I think AI is going to be vastly more transformative.

So what do you do day-to-day?

I torture people like Demis, who's amazing, by the way. He tolerated me crashing this fireside. I'm across the street pretty much every day, with the people who are working on the key Gemini text models, on the pre-training and the post-training, mostly those. I periodically delve into some of the multimodal work, Veo 3 as you've all seen, but I tend to be pretty deep in the technical details. That's a luxury I really enjoy, fortunately, because guys like Demis are minding the shop. That's just where my scientific interest is: deep in the algorithms and how they can evolve.

Okay, let's talk about the products a little bit, some that were introduced recently. I want to ask you a broad question about agents, Demis. When I look at other tech companies building agents, what we see in the demos is usually something that's contextually aware, has a disembodied voice, and you often interact with it on a screen. When I see DeepMind and Google demos, it's often through the camera; it's very visual. There was an announcement about smart glasses today. So, if that's the right read, talk a little bit about why Google is so interested in having an assistant or a companion that sees the world as you see it.

Well, it's for several reasons; several threads come together. As we talked about earlier, we've always been interested in agents. That's actually the heritage of DeepMind: we started with agent-based systems in games. We're trying to build AGI, a full general intelligence, and clearly that would have to understand the physical environment, the physical world around you. Two of the massive use cases for that, in my opinion: first, a truly useful assistant that can come around with you in your daily life, not just stuck on your computer or one device. We want it to be useful in your everyday life for everything, so it needs to come around with you and understand your physical context. The other big thing is robotics. I've always felt that for robotics to work, you basically want what you saw with Astra, on a robot. And I've always felt that the bottleneck in robotics isn't so much the hardware, although there are many companies working on fantastic hardware and we partner with a lot of them, but the software intelligence. That's what has always held robotics back. I think we're in a really exciting moment now where finally, with these latest versions, especially Gemini 2.5, plus more things we're going to bring in, this kind of Veo technology and others, we're going to have really exciting algorithms that make robotics finally work and realize its potential, which could be enormous. And in the end, AGI needs to be able to do all of those things. So for us, you can see we always had this in mind.
That's why Gemini was built from the beginning, even the earliest versions, to be multimodal. That made it harder at the start, because it's harder to make things multimodal than text only, but in the end I think we're reaping the benefits of those decisions now. I see many of the Gemini team here in the front row. They were the harder decisions, but we made the right ones, and now you can see the fruits of that in everything you've seen today.

Actually, Sergey, I'd been debating whether to ask you a Google Glass question. What did you learn from Glass that Google might be able to apply today, now that smart glasses have made a reappearance?

Wow, great question. I learned a lot. I definitely feel like I made a lot of mistakes with Google Glass, I'll be honest. I am still a big believer in the form factor, so I'm glad we have it now, and now it looks like normal glasses and doesn't have the thing in front. I think there was a technology gap, honestly. Now, in the AI world, the things these glasses can do to help you out without constantly distracting you, that capability is much higher. I also just didn't know anything about consumer electronics supply chains and how hard it would be to build that and have it at a reasonable price point, managing all the manufacturing and so forth. This time we have great partners helping us build this, so that's another step forward. What else can I say? I do have to say I miss the airship with the wingsuiting skydivers for the demo. Honestly, it would have been even cooler here at Shoreline Amphitheatre than it was up in Moscone back in the day. But we should probably polish the product first this time, get it ready and available, and then we'll do a really cool demo. That's probably a smart move.

Yeah. What I will say is, look, we obviously have an incredible history of glass devices and smart devices, so we can bring all those learnings to today, and I'm very excited about our new glasses, as you saw. But what I've always been telling our team, and Shahram and his team, and I don't know if Sergey would agree, is that I feel the universal assistant is the killer app for smart glasses. I think that's what's going to make it work, apart from the fact that the hardware technology has also moved on and improved a lot. This is the natural killer app for it.

Okay. Briefly, on video generation: I sat in the audience at the keynote today and was fairly blown away by the level of improvement we've seen from these models, and you had filmmakers talking about it in the presentation. I want to ask you, Demis, specifically about model quality. If the internet fills with video that's been made with artificial intelligence, does that then go back into the training and lead to a lower-quality model than if you were training only on human-generated content?

Yeah. Well, look, there are a lot of worries about this so-called model collapse. Video is just one modality; the same question comes up for text as well. There are a few things to say about it. First of all, we're very rigorous with our data quality management and curation. We also, at least for all of our generative models, attach SynthID to them.
So there's this invisible, AI-made watermark that is very robust and has held up now for a year, 18 months, since we released it. All of our images and videos are embedded with this watermark, so we can detect it, and we're releasing tools to allow anyone to detect these watermarks and know that something was an AI-generated image or video. Of course that's important to combat deepfakes and misinformation, but you could also use it to filter that material out of your training data if you wanted to. So I don't actually see that as a big problem.

Eventually we may have video models that are so good you could put them back into the loop as a source of additional data, synthetic data it's called. There you just have to be very careful that you're actually creating from the same distribution you're going to model, that you're not distorting that distribution somehow, and that the quality is high enough. We have some experience of this in a completely different domain with things like AlphaFold, where there wasn't enough real experimental data to build the final AlphaFold. So we had to build an earlier version that predicted about a million protein structures, and then, using a confidence level on those predictions, we selected the top 300,000 to 400,000 and put them back into the training data. Mixing synthetic data with real data is very cutting-edge research, so there are ways of doing that. But in terms of the video generation material, you can just exclude it if you want to, at least with our own work, and hopefully other generative media companies follow suit and put robust watermarks in as well, first and foremost to combat deepfakes and misinformation.

Okay, we have four minutes and I've got four questions left, so we now move to the miscellaneous part of my questions. Let's see how many we can get through, as fast as we can. Let's go to Sergey with this one. What does the web look like in 10 years?

What does the web look like in 10 years? I mean, go, one minute. Boy, because the rate of progress in AI is so far beyond anything we can see, and not just for the web, I don't think we really know what the world looks like in 10 years.

Okay, Demis?

Well, I think that's a good answer. I do think that in the nearer term the web is going to change quite a lot. If you think about an agent-first web, it doesn't necessarily need to see renders and things the way we do as humans using the web. So I think things will be pretty different in a few years.

Okay. This is kind of an over/under question: AGI before 2030 or after 2030?

2030? Boy, you really put it right on that fine line. I'm going to say before.

Before. Yeah. Demis? Just after, for me. Just after. Okay. No pressure, Demis. Exactly. Well, I'll have to go back and get working harder. That's all I can ask for. He needs to deliver it. Exactly. Stop sandbagging. We need it next week. That's true. I'll come to the review.

All right. So, would you hire someone who used AI in their interview? Demis?

Oh, in their interview. It depends how they used it. Using today's models and tools, probably not, but, well, it really depends how they would use it. I think that's probably the answer.

Sergey? I mean, I never interviewed at all, so I don't know.
I feel it would be hypocritical for me to judge people on exactly how they interview. Yeah, I haven't either, actually. Snap on that: I've never done a job interview.

Okay. So, Demis, I've been reading your tweets. You put up a very interesting one where a prompt created some sort of natural scene. Oh, yeah. Here was the tweet: "nature to simulation at the press of a button... does make you wonder," with a couple of emojis. People ran with that and wrote headlines saying Demis thinks we're in a simulation. Are we in a simulation?

Not in the way that Nick Bostrom and others talk about. I don't think this is some kind of game, even though I wrote a lot of games. I do think, though, that ultimately the underlying physics is information theory, so I do think we're in a computational universe, but it's not just a straightforward simulation. I can't answer you in one minute, but I think the fact that these systems are able to model real structures in nature is quite interesting and telling. I've been thinking a lot about the work we've done with AlphaGo and AlphaFold and these types of systems; I've spoken a little about it, and maybe at some point I'll write up a scientific paper on what I think that really means in terms of what's actually going on here in reality.

Sergey, you want to make a headline?

Well, I think that argument applies recursively, right? If we're in a simulation, then by the same argument, whatever beings are making the simulation are themselves in a simulation, for roughly the same reasons, and so on and so forth. So you're going to have to either accept that we're in an infinite stack of simulations, or that there has to be some stopping criterion.

And what's your best guess?

I think we're taking a very anthropocentric view when we say simulation, in the sense that some kind of conscious being is running a simulation that we are then in, and that they have some semblance of desire and consciousness similar to ours. That's where it kind of breaks down for me. I just don't think we're really equipped to reason about one level up in the hierarchy.

Okay. Well, Demis, Sergey, thank you so much. This has been such a fascinating conversation, and thank you all. All right. Thanks, Alex. Thank you, Sergey. Pleasure. Thank you. Thanks, everybody.

Hello everyone. My name is Joana Carrasqueira, and I lead developer relations at Google DeepMind.
Hi everyone, I'm Josh, and we're very excited to welcome you to our session, Google's AI stack for developers. We'll start by giving you a quick overview of Google's AI stack. Who's at I/O for the first time? Can I see some hands up? Okay, welcome to Google I/O. It's a pleasure to have you with us today.

So, we'll start by giving you an overview of Google's end-to-end AI ecosystem. As you know, we've been leading the way in AI for decades: from open sourcing TensorFlow in 2015, to publishing our field-defining research on Transformers in 2017, to Gemini, and we are now in the Gemini era. We've been releasing a lot, relentlessly, as it's been called today. We've been shipping many features and many new products, and in our talk we're going to give you an overview of everything that's new for developers throughout the AI stack. Our mission is to empower every developer and organization to harness the power of AI. Google's stack is so good and flexible because it combines very robust infrastructure with state-of-the-art research, and all of this enables real-world applications to come to life that change entire fields, industries, and companies.

We'll start by discussing foundation models, touching on Gemini, Gemma, and some of our domain-specific models. After foundation models, we'll take a look at the AI frameworks we use to build them: JAX, which is really great for researchers, and Keras, which is really amazing for applied AI. Later on, we'll even talk a little about the work we're doing with PyTorch. We'll also touch on some developer tools for all levels of experience, from beginners to advanced. Then we'll talk a little about infrastructure. This talk is about software, not hardware; our hardware infrastructure is TPUs, which you've probably heard a lot about. But in this talk I'll briefly cover XLA, which is a machine learning compiler, and some of the work we're doing for inference, making it possible to serve models at scale super efficiently, with really cool new things in XLA for JAX and PyTorch. One more thing to mention, since I went through that too fast: a lot of this talk is about huge foundation models, but towards the end I'll talk about Google AI Edge and deploying small models on device, which is also super important for many reasons.

Awesome. Okay, let's start by exploring the core intelligence within our stack. We'll start with our Gemini models, our most capable and versatile model family. Our core philosophy here at Google is to provide developers with state-of-the-art models and tools you can use to build powerful applications. Our Gemini models are known for being multimodal, having a long context window, and having very powerful reasoning. But we've built a variety of models for different use cases, so depending on what you're trying to build, Google will have a model tailored for your use case. I'd like to give you a quick walkthrough of these models. I know you heard it during the keynote, but very quickly: Gemini 2.5 Pro is our most advanced model yet, especially for highly complex tasks that benefit from deep reasoning. It's really good at coding and more complex prompts.
It leads coding benchmarks, including the WebDev Arena leaderboard, and it's really our most powerful model. Then there's Gemini 2.5 Flash, which developers love because of its efficiency and speed, and it's now even better along almost every dimension; we improved the benchmarks across reasoning, coding, multimodality, and long context. We also have Gemini 2.0 Flash, which is fast and cheap, and Gemini Nano, which is optimized for on-device tasks.

As you've heard, we've been shipping relentlessly, and I'd like to give you a quick highlight of everything we've been shipping in AI Studio and the Gemini API. There's a talk tomorrow that I'd like to invite you to attend, by Shrestha Basu Mallick, a group product manager on the Gemini API, and Luciano Martins, our technical lead for the Gemini API from DevRel. They're going to do a deep dive into everything that's new within the Gemini API, so you definitely don't want to miss that session tomorrow. But for now, a glimpse to get you excited about what's new in AI Studio. We've built a new tab called Build that instantly generates web apps, and it's really cool because it enables developers and builders alike to prototype very quickly with natural language. We also have a new generative media experience in AI Studio, and I'm going to demo all of this so you can see how it actually works. We're always listening to the community; we listen to your feedback and we always build with developers in mind. Some of these features were actually requested by the community, and that's what happened with the new built-in dashboard: you requested it, we built it. We also have new native audio and TTS support in AI Studio.

On the Gemini API side, we also have new capabilities for text-to-speech, allowing you to control emotion and style for more expressive and dynamic audio. It's available both on the Live API and in the standard API for generating audio. Some of the use cases we had in mind when we built this were more dynamic audiobooks, more engaging podcasts, or, for those of you in customer support, producing more natural voices in your workflows. We also have enhanced tooling, which is really cool because now you can use grounding with Google Search together with code execution in just one API call, plus URL context, which you heard about during the keynote and which provides the model with content from web pages; since you can chain it with other tools, it's actually really powerful for building search agents. Lastly, we now offer Gemini SDK support for MCP, which reduces a lot of developer friction and simplifies building agent capabilities. So you don't want to miss tomorrow's talk to learn more about this.

And Google AI Studio: who here uses Google AI Studio? Can I see some hands? Okay, awesome, we'll have a lot to chat about after this session then. Google AI Studio is the perfect place for anyone to start developing with AI. It's the simplest way to test the latest models; we typically bring them to Google AI Studio first, so you can start prototyping and playing with them, and you don't need Google Cloud knowledge to set up your environment. It's free of charge, you can create, test, and save your prompts, and there are also starter apps that will inspire you.
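Going back to the enhanced-tooling point for a second, here is a minimal sketch of what combining grounding with Google Search and code execution in a single request can look like with the google-genai Python SDK. It's an illustration rather than the session's exact code: the model name and prompt are placeholders, and the exact config fields may vary between SDK versions.

```python
from google import genai
from google.genai import types

# Assumes GEMINI_API_KEY is set in the environment (the key from Google AI Studio).
client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",  # placeholder; pick any current Gemini model
    contents="Look up the current population of Portugal and compute its share of the EU total.",
    config=types.GenerateContentConfig(
        tools=[
            types.Tool(google_search=types.GoogleSearch()),        # grounding with Google Search
            types.Tool(code_execution=types.ToolCodeExecution()),  # let the model run code for the math
        ],
    ),
)

print(response.text)
```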
And AI Studio is exactly what I'm going to demo here today for you. I'm going to show you some of the work the team has been doing. If we go to Google AI Studio and into Build, Mumble Jumble is something we've literally just created, and it's one of my favorite apps. Mumble Jumble is one of those applications that lets you interact with the model using just natural language, so you can build more dynamic and interactive audio experiences. Let's have a look. First of all, it uses the 2.5 preview native audio dialog model. You can choose whether it's easy to interrupt or hard to interrupt, and then you can come and pick your voice, high pitch and so on; there are multiple things you can play with. And then you can customize. I really like the fox, so let's go ahead and pick the fox. Let's get the cowboy style. Happy, because we're at Google I/O and we're all super happy to be here. And let's get a microphone, because I have one.

"Howdy, partner. Finley Flicker Fox here, ready to charm the boots off you." Hello. What's the weather like today in Mountain View? "Well now, it's a fine day in Mountain View, sunny and quite pleasant. Perfect for moseying on out and about." I agree. Are there any hiking trails near the Google campus? "Why, sure as shooting, there are plenty of trails near the Google campus, perfect for a grand adventure and some fresh air. The best way to get to those trails is to head towards the foothills; following the marked paths, you'll find them right along the way. You betcha. There are a few cozy cafes nearby, perfect for a spell of rest and a good cup of coffee after your hike." Thank you so much. "It was my pleasure, partner. Happy trails, and enjoy your coffee." Thank you.

So, as you could see, there are some really cool experiences we're bringing into AI Studio. Audio is getting better, with more natural voice experiences. And in case you didn't notice, I even changed the language in which I interacted with the model: I spoke in Portuguese, my mother tongue, and it replied with very good information. Josh is going to show you exactly what's happening on the API side of things in just one second, but I have one prompt I want to show you very quickly first: "Roll the dice twice. What's the probability of the result being seven?" Let's run this quickly, because I just want to show you one thing before I hand it over to Josh. As you can see: thought summaries. The model is showing how it thinks, and you can see the summaries here. We have the result, and basically what is available in the UI in AI Studio is also available in the API, and Josh is going to show you that right now.

Okay, great. So, very briefly, we have something called the Gemini Developer API, which is really great: it's the easiest possible way to develop with Google's foundation models. The best place to get started is ai.google.dev. There's a whole lot of capability in the API: it has code execution, it has function calling. I remember sitting down with the team to build this from a blank piece of paper, starting about two years ago. Back then you could basically prompt it with text; now we have image understanding and video understanding, and we can also generate images and videos, which Joana will show you later. Very briefly, ai.google.dev has all of our developer documentation. There are lots of really great guides, there's information about the models, everything you need to get started. We also have the Gemini API cookbook.
We have a link to this at the end; it's basically the Gemini cookbook, goo.gle/cookbook. That will take you to a whole slew of notebooks the team has put together. All of these notebooks are end-to-end examples that show you one thing you might be interested in, like the best way to do code execution or the best way to do function calling. You'll find that in the cookbook.

I also want to show you very quickly how easy it is to get started with the API. In Google AI Studio, you don't need a credit card or anything like that. In about a minute, you can just click Get API key and create your key. If you're doing this for the first time, behind the scenes this will automatically create a cloud project for you, but that detail isn't important. Basically, now I have an API key and I'm ready to install the SDK and call the model. If you open up any of the notebooks in the cookbook, let's say this one in the quickstarts directory, it shows you exactly what Joana showed: how to get the thinking summaries. You can add your API key in Google Colab: if you zoom in, you can hit Add new secret, and in this particular notebook the secret is called GOOGLE_API_KEY, but you could call it whatever you like. You add GOOGLE_API_KEY there, paste your key, and you're ready to run it. If you do Runtime and Run all, you're calling the API and running all the examples. You can also grab an API key directly inside Google Colab, so it's really quick and easy to do.

Okay, we can go back to the slides. Very quickly, as a recap: the Gemini Developer API is the easiest way to get started. It's super lightweight, it's fast to install, and you can honestly get up and running in about a minute. Okay, I'll use the clicker. This is the flow to get started with Google AI Studio: go to Google AI Studio, get your key, and try one of the code examples on ai.google.dev or in the cookbook. I see people taking pictures; that makes me happy. Please try this. We spent so much time on making it easy, and I hope it works for you. If not, please file an issue and we'll get on it.

This is the GenAI SDK for the Gemini API, which we've been rolling out gradually over the course of the last six months. It's our latest SDK, it's super user-friendly, and it's really easy to use. Really, the only point I want to make here, because I don't want to read the code examples or the documentation to you, is that you can call the API in a few lines of code: add your key, select a model, write a prompt, and call it. You can also get access to advanced functionality in about one line of code. If you'd like the thinking summaries that Joana showed you, you can just add a thinking config that says include the thoughts, and now you've got the thinking summaries. A good use case for this is any time you need to explain the model's reasoning; imagine you're building an education app or a tutoring app, and you want to surface the thinking summaries.
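Here's a minimal sketch of what that looks like with the google-genai Python SDK. It's illustrative rather than the exact code on the slide: the model name is a placeholder, and the thinking-config field names may differ slightly between SDK versions.

```python
from google import genai
from google.genai import types

# Reads GEMINI_API_KEY from the environment (the key you created in AI Studio).
client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",  # placeholder model name
    contents="Roll two dice. What's the probability that the total is seven?",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(include_thoughts=True),  # ask for thought summaries
    ),
)

# Thought summaries come back as parts flagged as thoughts; the rest is the answer.
for part in response.candidates[0].content.parts:
    label = "THOUGHT" if part.thought else "ANSWER"
    print(f"[{label}] {part.text}")
```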
In addition to the really cool things you can do with a single line of code, there's some more advanced stuff you can do with the SDK as well. I know there's a lot of code on this slide, but we've talked a lot about building agents and agentic experiences. In this example, imagine you have a Python function on your laptop called, say, weather_function, and maybe it calls your own weather server to get the weather. What you can do is pass the definition of that function to the Gemini API in JSON, including the function name and the parameters it takes. Then you write a prompt; here the prompt happens to be "what's the temperature in London?". When you send the prompt and the function declaration to the model, the model assesses whether it makes sense to call that function based on your prompt. If so, it won't actually call it, but in the function_call.name and function_call.args it returns, you get the name of the function and the arguments to pass to it. So, if you want, you're ready to call that function on your laptop, and we have code you can copy and paste to do that. What's really cool, too, is that this works with multiple functions at the same time. So you can imagine having another function like schedule_a_meeting, and fairly easily, well, with some work, you can build an agent that actually does that. So function calling is super important, and it works extremely well.
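As a rough sketch of that flow with the google-genai Python SDK (the weather function, its schema, and the model name here are made up for illustration; they aren't the code from the slide):

```python
from google import genai
from google.genai import types

client = genai.Client()  # expects GEMINI_API_KEY in the environment

# Hypothetical local function the model can ask us to call.
def get_weather(city: str) -> dict:
    return {"city": city, "temperature_c": 14}  # stand-in for a real weather lookup

# Describe the function to the model: a name plus JSON-schema style parameters.
weather_decl = types.FunctionDeclaration(
    name="get_weather",
    description="Get the current weather for a city.",
    parameters=types.Schema(
        type=types.Type.OBJECT,
        properties={"city": types.Schema(type=types.Type.STRING)},
        required=["city"],
    ),
)

response = client.models.generate_content(
    model="gemini-2.5-flash",  # placeholder
    contents="What's the temperature in London?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(function_declarations=[weather_decl])],
    ),
)

# The model does not run the function; it returns which function to call and with what args.
call = response.candidates[0].content.parts[0].function_call
if call:
    print(call.name, dict(call.args))  # e.g. get_weather {'city': 'London'}
    print(get_weather(**call.args))    # we execute it locally
```

In a real app you would typically send the function's result back to the model in a follow-up request so it can compose the final answer.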
This also allows everyone to interact to create and to perform generative music in real time. It's really it's really cool. You might remember that in the show before the first keynote you might have seen this console. That's exactly why I wanted to show you this particular app in this session. But there's a lot more that that you can that you can try afterwards. And shifting the gears towards Gemma, early this year we released Gemma 3 which is our most advanced model and it comes in four sizes 1 4 12 and 27B and offers developers the flexibility to optimize performing performance for diverse applications from efficient ones ondevice in inference to also scalable cloud deployment and in particular 42 and 27BS multimodel multilingual and has a long context window up to 128,000 tokens. And the fact that is available in more than 140 language is really cool because 80% of our users are actually outside the United States. And you heard during the keynote as well that Medma is our most capable collection of open models for multimodel medical text and image comprehension. It's a really good starting point for building medical application and it's available in 4B and 27B. You can download the model and adapted to your use case via prompting, fine-tuning or agentic workflows. And we also announced Gemma 3N. It's optimized for ondevice uh operation on phones, tablets, and laptops. And as you can see the gem versse is booming with all these new variants coming and being developed all the time. Chill Gemma, dolphin gemma, now medge gemma, sign gemma, so many different capabilities and and option that it's truly exciting to see. And one last thing that we are really excited about is the fact that we now we brought to to AI studio the possibility to deploy the Gemma models directly from AI studio into cloud run with one click. So you can use the Gen AI SDK to call it and just requires a twoline change. Change API key, change base URL and you're set. That's the easiest deployment. And now Josh is going to tell you all about frameworks. Thanks. Okay, so we've talked a lot about foundation models, Gemini and Gemma. Now I'll talk a little bit about the frameworks that Google and the community use to build them. So a lot of cool stuff to cover. Uh let's start with the easiest possible way to get started to fine-tune a model. So in the developer keynote, Gus showed a version of Gemma that speaks emoji. And this is a language that he came up with his uh daughter. One way to do that is you could just prompt the model to speak emoji. And in a lot of cases, you can get away with the prompt. But if you have a very large amount of data or maybe you're building a really serious application like something in healthcare or medicine, what you can do is you can fine-tune the model to work even better with your data. And this is a really really great thing about this is the truth is it sounds complicated, but it's not in practice. All you really need is a two column CSV file. And here what you're looking at is something with a prompt and a response. And if you've got a couple thousand rows using our framework, Keras, and Keras is my favorite way by far of doing just applied AI, that means using AI in practice. You can tell I care a lot about both of us care a lot about healthcare and medicine. So there's a lot of wonderful, like more than you could ever count um opportunities to do good in the world in those fields using technologies like this. You can train the model to do something really useful. 
So Keras is great for applied AI. If you're doing research, we have a really wonderful framework called JAX. JAX is a Python machine learning library, and I have two things to say about it. One is that at the highest scales, JAX is the best place to go. It scales really easily to tens of thousands of accelerators; it's super powerful. We use it to build Gemini and Gemma, and the community uses it to build a bunch of really large, awesome foundation models as well. But here's one thing I like about JAX, because I'm operating at a much simpler level: at its core, JAX is a Python machine learning library with a NumPy API. When a new model or a new paper comes out, it takes me a long time to understand it, and what I like to do is basically implement it line by line in NumPy, very carefully working through the inputs, the outputs, and the shapes, and debugging it just in NumPy. What's really wonderful is that if you use JAX, you can do exactly that in NumPy. There are transforms you can read about: add a line of code, grad, to get the gradients; add a line of code, jit, to JIT-compile your model; and now, without changing anything else, you can run it on GPUs and TPUs. So JAX core gives you this really good way to think very carefully through different techniques in machine learning, and then, when you're ready, scale them up without really changing your code, and that's really awesome.
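As a small illustration of that workflow (the function here is just a toy loss, not anything from the talk):

```python
import jax
import jax.numpy as jnp

# A toy "model": mean squared error of a linear fit, written exactly like NumPy.
def loss(w, x, y):
    pred = x @ w
    return jnp.mean((pred - y) ** 2)

x = jnp.ones((8, 3))
y = jnp.zeros(8)
w = jnp.array([0.5, -0.2, 0.1])

# One line each: gradients via grad, compilation via jit.
# The same code then runs unchanged on CPU, GPU, or TPU.
grad_fn = jax.jit(jax.grad(loss))

print(loss(w, x, y))     # scalar loss
print(grad_fn(w, x, y))  # gradient with respect to w
```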
On top of JAX, which is out of scope for this talk, there's a huge ecosystem of libraries. There are great libraries from Google and the community for things like optimizers, checkpointing, and implementing neural networks, so you don't have to do any of that from scratch if you don't want to. Although, as I'm learning things, doing it totally from scratch once really helps me get my head around it, even if it takes a little while. If you want to skip that part and go straight to a super-optimized large language model implemented in JAX that's ready to scale to hundreds or even thousands of accelerators, there are two really cool GitHub libraries I'd point you to: MaxText, which, as you might guess, has reference implementations of large language models, and MaxDiffusion, which has reference implementations of models you can use to generate beautiful images and things like that. Those can take some work to scale, but they're great, and we're working on making them more user-friendly.

Also using JAX, and this just came out yesterday, I wanted to point you to really amazing new work from the community. We've been talking about Google's foundation models; this is a new foundation model that Stanford University just released, called Marin. It happens to be built with JAX and TPUs, which is great, but what's really special is that Marin is a fully open model. In addition to sharing the weights and the architecture, they've shared the datasets they used to train it, the code they used to filter those datasets, the experiments that worked, and the experiments that didn't. So it's a really great foundation for open science and for building these really cool models in the open. They trained this model using Google's TPU Research Cloud, a collection of TPUs that you can apply for access to if you're a researcher; it's basically a free-of-charge cluster of TPUs you can use to do really cool research like this.

Very briefly, we talked about doing LoRA post-training in Keras; now I'll show you a little of what we're working on for tuning in JAX. We're working on a new library called Tunix. It's very early stage, and we're building it with the community, working with researchers from some great universities. The vision is to make it a really easy-to-use library for developers, but also a really good framework for researchers to implement the latest post-training algorithms in JAX. We're working on a bunch of them now; I think it's going to be really good, so stay tuned. That's Tunix.

In addition to the libraries, I just want to talk very briefly about infrastructure. TPUs, the hardware, are out of scope, but there's a really cool software package I wanted to mention called XLA. XLA is basically a compiler for your machine learning code. The way this works is that when you use a library like JAX or Keras or TensorFlow or even PyTorch, you're writing code in Python, and then somehow it gets compiled, optimized, and run on GPUs and TPUs; XLA is the compiler we use at Google to do that. It powers our entire production stack and is used by some of the largest large language model builders in the world. It takes your Python code, does a whole bunch of optimizations, and gets it ready to run on accelerators. One thing that's really cool about XLA is that it's portable: if you run on XLA, you're never locked into TPUs; you can use your exact same code on GPUs and other types of accelerators. So it's really great for that, and we like it a lot.

The important thing here is that PyTorch now also works with XLA. So if you're a PyTorch developer, and PyTorch has a wonderful ecosystem and really great libraries, you can use PyTorch/XLA to train your models on TPUs and get all the really good price-performance benefits that come with that. In addition to training models, we've done great work with the vLLM community, so now you can also serve your PyTorch models using vLLM on TPUs; vLLM is a super popular inference engine, and we've added TPU support, so that's available to PyTorch developers now as well. We're also working on adding JAX support to vLLM. And here's some more really great work happening with the community: a new partnership between Red Hat, NVIDIA, and Google working on a project called llm-d. This is for distributed serving, and the vision is to bring the very best of serving into open source, make it available to everybody, and have it work with both JAX and PyTorch. It's a really cool new project with some more sophisticated capabilities you can check out, so stay tuned; it's going to be really good.
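To ground the PyTorch-on-TPU point, here's a tiny hedged sketch of a training step with torch_xla, using a toy linear model and random data. It assumes the torch_xla package is installed and a TPU (or another XLA device) is attached; it's a sketch of the general pattern, not code from the talk.

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # the attached XLA device, e.g. a TPU core

model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(3):
    x = torch.randn(32, 128, device=device)
    y = torch.randint(0, 10, (32,), device=device)

    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    xm.mark_step()  # tell XLA to compile and execute the accumulated graph
    print(step, loss.item())
```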
Okay, so at warp speed we've covered Google's foundation models, the different frameworks we use to train them, and different ways you can serve them on the cloud. Now let's briefly look at how you can deploy them on mobile devices. The way you'd do this is with Google AI Edge, which is basically a framework for deploying machine learning models on Android, on iOS, in the browser, and on embedded devices. I know it's Google I/O and a lot of you are mobile developers, so a lot of this is probably intuitive to you, but if you're coming from where I am — a Python machine learning developer working on the back end — these are all really compelling points. There are many good reasons why you might want to deploy on mobile. One is latency: imagine you're doing something like sign-language recognition, where the user is holding up their hand and signing — you don't want to drop frames. If you're sending those frames to a server in the cloud, unless you happen to have the world's fastest internet connection, you probably will drop frames; but if the gesture recognition model is running locally, you won't. That's one huge advantage. Another, of course, is privacy: data doesn't need to leave the device. There's also offline use — obvious to mobile folks, but if you're working on an airplane, maybe you still want to run your model there. And cost savings is a really important one too: if you're serving a model to lots of users in the cloud, you're paying for the compute to serve it, but if it's running on the phone, the compute happens locally, so you don't need to bother with serving infrastructure.

There's a lot of really cool new stuff in Google AI Edge. On our side, we've added support for things like the latest Gemma models. And by the way, this covers both classical machine learning — well, deep learning, which is suddenly becoming classical: things like gesture recognition that were state-of-the-art four years ago now count as classical ML because we're all talking about large language models and generative AI — and generative models, so you can run small large language models on device. We have a really awesome new community effort with Hugging Face, with a lot of smart people putting together models that are pre-optimized and ready to run on device. And we have a private preview, coming soon, for AI Edge Portal, which is basically a testing service: you submit your model to a cloud service and it runs it on a fleet of real devices of different sizes, just to verify that it works well. So if you're interested in mobile development, check out Google AI Edge — it's really cool.
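To give a flavor of the gesture-recognition use case Josh mentions, here's a minimal sketch using the MediaPipe Tasks Python API, which is part of the Google AI Edge stack. The .task model bundle and the image path are placeholders you'd supply yourself, and on Android or iOS you'd use the corresponding Kotlin or Swift APIs rather than Python:

```python
# pip install mediapipe
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# "gesture_recognizer.task" is a downloadable pre-trained model bundle;
# the path here is a placeholder.
base_options = python.BaseOptions(model_asset_path="gesture_recognizer.task")
options = vision.GestureRecognizerOptions(base_options=base_options)
recognizer = vision.GestureRecognizer.create_from_options(options)

# Run recognition on a single image (video and live-stream modes also exist).
image = mp.Image.create_from_file("hand.jpg")  # placeholder image
result = recognizer.recognize(image)

if result.gestures:
    top = result.gestures[0][0]  # best gesture for the first detected hand
    print(top.category_name, top.score)
```

Because the model runs entirely on the local device, this is the kind of pipeline that avoids the dropped-frames problem described above.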
And with that, I'll hand it over to Joanna to talk about what's next. Awesome — thank you, Josh. You've heard it in the keynotes and in the previous session with Demis and Sergey: we're pushing the boundaries of what's possible to build with AI here at Google and Google DeepMind, and we're really excited to bring all this innovation and put it in the hands of developers and the community. There's never been a better time to build and co-create together. We really believe in a future where AI changes fields across scientific discovery, healthcare, and so many more, and we're going to achieve this radical abundance in a safe and responsible way. And we want to get there with you, with the community. So let's have a look at some of the domains that we believe have huge potential for developers and for humanity at scale.

First, AlphaEvolve: a Gemini-powered coding agent for designing advanced algorithms — a self-improving coding agent. We all know that large language models can summarize documents, they can generate code, you can even brainstorm with them. But with AlphaEvolve we're really expanding those capabilities and targeting fundamental, highly complex problems in mathematics and coding. AlphaEvolve leverages Gemini Flash and Pro, and it's one of the big promises for the future.

Another one I'm really excited about is AI co-scientist. It's another scientific breakthrough we're seeing, especially in medicine and research, and the goal is to accelerate the speed of discovery and drug development. With AI co-scientist, a scientist can literally give a research goal to the agent in natural language, and the co-scientist is designed to come back with an overview, a hypothesis, and a methodology. To do that it uses a coalition of different agents working together — generation, review, ranking, evolution, proximity, and meta-review agents — all inspired by and driven from the scientific method itself. So it's another huge breakthrough, and another domain you'll continue to see evolving here at Google DeepMind.

And lastly, an area where we're seeing tremendous progress, and where we expect more breakthroughs, is domain-specific models. The Gemini Robotics models, currently in private early access, are advanced vision-language-action models that add physical actions as a new output modality, specifically for controlling robots. These models are robot-agnostic and use multi-embodiment, a technique that lets them work on anything from humanoids to large-scale industrial machinery. This is really exciting, and Gemini Robotics has been fine-tuned to be dexterous, which is why you can see so many different cool use cases and applications here on stage, from folding origami, which is a bit more complex, to just holding a sandwich bag. So many new innovations are coming to life, and we'll continue pushing the boundaries of what's possible across all these different domains.

Now, if you want to learn more, there are many ways to keep engaging with us and giving us feedback. We're active on social media, and we have a developer forum where you can interact directly with Googlers. So, to learn more, Josh, what do our developers have to do? We have just a few links for you, so no problem. We talked about a lot of different tools in the stack, and I don't want to read the slide, but let me point you to a couple of highlights. ai.google.dev is the best place to go to get started with Gemini and Gemma — we have a cookbook for Gemini and a cookbook for Gemma. Google AI Studio is at aistudio.google.com. If you're interested in JAX and Keras, those links are there. If you happen to be interested in XLA, please check it out at openxla.org. And Google AI Edge is at the very bottom, if you're a mobile developer interested in mobile deployment.
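If you do start from those links, a first call to the Gemini API from Python looks roughly like the sketch below. It uses the google-genai SDK; the environment-variable name and the model id are placeholder choices rather than anything specified in this talk, so check the Gemini cookbook for current recommendations:

```python
# pip install google-genai
import os
from google import genai

# Reads an API key created in Google AI Studio; the variable name here
# is just a common convention, not something mandated by the talk.
client = genai.Client(api_key=os.environ["GOOGLE_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.0-flash",  # illustrative model id; pick from the current list
    contents="Give me one fun fact about TPUs.",
)
print(response.text)
```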
And just to be clear, there are so many amazing things in the Google AI stack we didn't have time to talk about today — Vertex AI, for example, has really amazing tools for enterprise developers — but please start here, have fun, and we're around after the talk. Yes, absolutely. The developer relations team is just outside, and we have some really cool demo stations you can experience, so engage with the team. Check out the sessions tomorrow, especially on the Gemini API, the Gemmaverse, and robotics. We have a lot of cool stuff we want to put in the hands of developers, and many early access programs as well. Stay in touch, stay engaged, and let's co-create the future of AI together. Thank you so much. Thanks a lot. Thanks. [Applause] [Music]