Transcript for:
Highlights from Google I/O 2025 Keynote

Google I/O '25 [Music] [Applause] [Music]
>> JOSH WOODWARD: Hello, developers! Hello, hello! It's great to be back up here with you. Earlier today, you heard all about our updates to Gemini 2.5 Pro and Flash, and we talked about how we're turning research into reality, like how we brought 2.5 Pro to our coding agent, Jules, which I understand is seeing a lot of demand and is hot right now. That idea of bringing innovation to life is something we explore every day in Google Labs, our dedicated space to try stuff out. Today, we want to show how you can apply that same spirit to build with Gemini across Android, the web and beyond. So here's how this is going to work. We're going to try to do as many live demos as we can in the next 59 minutes, okay? [Applause] So there are only two things you have to do as an audience. If the demo works, what are you going to do? Applaud! Ah, hands up, that works, too. If it doesn't work for some reason, what are you going to do? Okay! All right. I'm going to jump right in and do the first one. Over in Labs we love to build with Gemini 2.5 Flash. Does anyone use the Flash model out there? A few people, good. It's great: it's insanely fast, and it has a great price. We use it for a lot of our prototyping, and one of the things we want to show you today is how we're blending code and design so you can go from a prompt to an interface to code in just a minute or so. It's a new Labs experimental project called Stitch, and we'll bring it up on the screen here. First things first: did anyone come to I/O for the first time? Maybe your first time in California, too? All right, great, we're going to build an app for you. So here it is. This is Stitch. It starts with design first. You can go into it and paste in a prompt: make me an app for discovering California, the activities, the getaways. And basically, you can just click Generate Design, and it's going to go out and start making a design for you. I've done this just before, and it takes about a minute or so, but to save time and get more demos in, these are some of the screens it comes back with. These are not static screenshots, these are actual designs. You can iterate on the left and create new designs, or you can go in here and do things like make it dark mode, lime green, max radius, and it will apply that and start working in the background. Now, what's happening here is we're using both 2.5 Pro and 2.5 Flash to start letting you remix this and change it around. This will work for about a minute or so, and I'm going to show you the one that comes back. This is it. It is dark mode, lime green, massive corner radius. And what's really cool is if you go into these, you can see that you can grab the markup right here. It's all here. You can copy this out to whatever IDE you want. It also -- again, these aren't screenshots, they're actual designs, so you can take it right into Figma and edit it further. This is a new product. It's experimental. We want you to try it out and give us feedback at labs.google.stitch, and that's the first demo! All right. [Applause] So with that, I want to pass it off to Logan, who's going to tell you all kinds of great things you can build with the Gemini API. Logan! [Music] [Applause]
>> LOGAN KILPATRICK: Thanks, Josh. I'm excited to be at I/O today to show you some of the cool things we just shipped this morning.
We'll start in AI Studio, where the goal is to help you answer the question: "Can I build this with Gemini?" and then get you started building with our latest models! Okay. So let's open up Google AI Studio. I've been wanting to build a bunch of AI voice agents, but I've been a little busy. No better time than now to try prototyping. Earlier, you saw how Project Astra can make AI feel natural, and a lot of those capabilities are now available today in the Live API. So let's select the new 2.5 Flash native audio model. We've also added more controls like proactive audio: the model is now better at ignoring stray sounds or voices, perfect for the I/O stage. It also natively supports 24 languages, and we've added new controls for managing your session context, which you can see on the right-hand side here. But in order to build an agent like Astra, the model needs to be able to use tools, so we've added improvements to function calling and search grounding. We're also introducing a new tool we're calling URL Context. URL Context enables Gemini models to access and pull context from web pages with just a link. This is exciting because now you can ground the model response on both specific and up-to-date information, with support for up to 20 links at a time. All right. Let's pass a link to our developer docs for function calling and have Gemini give us the tl;dr.
>> GEMINI: Allowing them to understand when to use specific functions and provide the necessary parameters to execute real-world actions, acting as a bridge --
>> LOGAN KILPATRICK: Nice, I couldn't have said it better myself. This is just a small glimpse into what we've landed today in the Live API, and we're super excited to see what you build. All right. Let's shift gears now and show you another way we're making it easier for you to build with Gemini. As you heard from Tulsee earlier today, Gemini 2.5 Pro is awesome at coding. We've seen the ecosystem loving 2.5 Pro, so it's only logical that we bring it into AI Studio's native code editor. Let's kick off an example of what this new experience is like using one of the preset prompts. This new experience is tightly optimized with our SDK, so you can quickly generate web applications that use the Gemini API, like this AI-powered adventure game that uses both Gemini and Imagen. As you see here, the model is reasoning about the request and composing a spec for the app. It's then going to generate the code, and it will actually self-correct if there are any errors. It's going to take a few minutes, so let's jump over to another tab where I ran this earlier to show the final output. That looks awesome. [Applause] This is a great way to iterate quickly with the Gemini API, and you can easily make code changes right here. The whole experience is built to be multi-turn and iterative, so you can keep refining your idea with prompts over on the left-hand side. We have time for one more update that I'm excited to talk about, which is around MCP. Today, we are rolling out an update to the Google Gen AI SDK to natively support MCP definitions. [Applause] Now it's even easier for you to build agentic apps with a fast-growing community of open-source tools. To showcase this, we've added a new app with Google Maps using MCP. Let's welcome Paige to the stage to start building on it. [Music] [Applause]
>> PAIGE BAILEY: That was awesome. You did great. Excellent. Thank you so much, Logan. I took inspiration from the Maps app Logan just showed you, and I'm going to remix and compose it to build something entirely new.
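For anyone who wants to try the URL Context tool Logan just demoed from their own code, here is a minimal sketch using the google-genai Python SDK. It assumes a recent SDK release that exposes the url_context tool type; the docs link is only an illustrative target, and exact field names may shift while the tool is new.

```python
# Minimal sketch: grounding a Gemini response on a web page with the
# URL Context tool, similar to what Logan demoed. Assumes a recent
# google-genai release that exposes `url_context`; the docs URL is
# only an illustrative target.
from google import genai
from google.genai import types

client = genai.Client()  # picks up the API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=(
        "Give me the tl;dr of this page on function calling: "
        "https://ai.google.dev/gemini-api/docs/function-calling"
    ),
    config=types.GenerateContentConfig(
        tools=[types.Tool(url_context=types.UrlContext())],
    ),
)
print(response.text)
```

The same kind of tools list can also be attached to a Live API session configuration, which is roughly how the voice-agent demos that follow wire things up.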
>> PAIGE BAILEY: I'm going to add a talking head that will accompany me on stage, my very own keynote companion. I'm just going to call her KC. I want our keynote companion to listen to our keynote, respond, and then dynamically update its UI based on what it hears. I've just got a few finishing touches to add, but I think we can wrap it up here together on stage. In I/O tradition, the first thing I'm going to need my keynote companion to do --
>> KC: Hi, I'm KC, Paige's keynote companion. I'm here to help her presentation go smoothly.
>> PAIGE BAILEY: -- is to count the number of times we say AI.
>> KC: Got it, Paige.
>> PAIGE BAILEY: Excellent. To help us keep track, we've developed a function called increment utterance count, which you can see right here. But you know, counting AI instances was so last year, so let's update this to trigger every time the presenter says something Gemini-related. I'm going to make those updates, I'm going to refresh, and we're going to try again.
>> KC: Hi, Paige. I'm KC, your keynote companion.
>> PAIGE BAILEY: So the Gemini Live API supports a sliding context window. That means your users can interact with your apps for long-running sessions, or, in our case today, it can run quietly in the background. And look at that, we were able to make this app dynamically react to what it's hearing, which means that it's a dynamic web Paige, if you will. Get it? Excellent.
>> KC: Should we take a quick look at the fun fact about the Gemini API?
>> PAIGE BAILEY: Hold on there, we're about to get to that. So this is just a fun example, but imagine discussing gift ideas in a shopping app or finding a great restaurant that delivers to you with just your voice. So now, let's see how that Maps integration works. Hey, KC, since there are people coming from all over the world, could you show us Shoreline Amphitheater on a map?
>> KC: I've put Shoreline Amphitheater on the map for everyone. Is there anything else I can help with?
>> PAIGE BAILEY: Could you show us some coffee houses with really good Wi-Fi near Shoreline Amphitheater? Could you randomly select one that you think looks really good and give us directions there?
>> KC: While I can't express preferences, I can provide --
>> PAIGE BAILEY: So just pick -- how about Boba Bliss? Give us directions to Boba Bliss from Shoreline Amphitheater.
>> KC: Okay. I'm providing directions to Boba Bliss from Shoreline Amphitheater. Is there anything else I can help with at the moment?
>> PAIGE BAILEY: That was awesome. Thank you, KC. Excellent. [Applause] Amazing. So let's check out some more feature updates. While synchronous function calls are used for quick operations, high-latency tasks like using an MCP server require background processing for a seamless user experience. By default, when calling a function, the audio is blocked. But today, we're excited to enable asynchronous execution for seamless dialogue within a conversation. [Applause] Over here I have a function called getFunFact that uses Gemini 2.5 Pro. Based on what's said, we're going to use search grounding to display a fun fact. Up here in my system instructions, I've told it to call the getFunFact function whenever one of our presenters mentions a Google AI product, but I'm going to add a new field called behavior. I'm going to stop this so it doesn't automatically update, and set behavior to nonblocking. And you can see the automatic code execution kick into play.
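As a rough sketch of the setup Paige just walked through, here is what a Live API session with a NON_BLOCKING function declaration might look like in the google-genai Python SDK. The get_fun_fact name and the behavior field mirror the demo; the model ID, system-instruction wording, and exact config shape are assumptions to check against the current Live API docs.

```python
# Rough sketch of the Live API tool setup Paige described: a function
# declaration marked NON_BLOCKING so audio isn't held up while the call
# runs. The get_fun_fact name and behavior field mirror the demo; the
# model ID and exact config shape are assumptions.
import asyncio
from google import genai

GET_FUN_FACT = {
    "name": "get_fun_fact",
    "description": "Look up and display a fun fact about the Google AI "
                   "product a presenter just mentioned.",
    "parameters": {
        "type": "object",
        "properties": {"product": {"type": "string"}},
        "required": ["product"],
    },
    "behavior": "NON_BLOCKING",  # asynchronous execution: keep the dialogue flowing
}

async def main():
    client = genai.Client()
    config = {
        "response_modalities": ["AUDIO"],
        "system_instruction": (
            "You are KC, a keynote companion. Call get_fun_fact whenever "
            "a presenter mentions a Google AI product."
        ),
        "tools": [{"function_declarations": [GET_FUN_FACT]}],
    }
    async with client.aio.live.connect(
        model="gemini-2.5-flash-preview-native-audio-dialog",  # placeholder model ID
        config=config,
    ) as session:
        # Stream microphone audio in, play audio out, and answer tool
        # calls (including the non-blocking ones) here.
        ...

asyncio.run(main())
```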
>> PAIGE BAILEY: And we heard your feedback: with the latest release, we've improved structured outputs with function calling, so our model is going to conform to this very specific JSON return format to make sure that everything displays beautifully in the UI. All right, moment of truth.
>> KC: I am Paige's keynote companion.
>> PAIGE BAILEY: I understand I'm biased, but I think Google AI Studio is the best place to start building with the Gemini API. Heck, yes.
>> KC: Google AI Studio is pretty cool. Did you know it lets you experiment with different models and parameters without writing any code?
>> PAIGE BAILEY: Look at that. Gemini and KC, you are a natural. My keynote companion is feature complete. I think it's time to deploy these changes and see it in action. Once your app has been created in AI Studio, it's incredibly easy to share with friends and deploy via Cloud Run. So from within AI Studio, I'm going to kick off a Cloud Run deploy, I'm going to select one of my projects, it's going to verify, and with one button click, we're already deploying this app in a way that many people throughout the audience and throughout the world will be able to see. Once this app is deployed, it can be run and viewed from your favorite IDE. So we can see it, we can refresh, and you can see the keynote companion automatically added in Cloud Run right within my VS Code instance. And just like that, our multimodal app is live, pun intended. [Applause] We're making it easy for you to build agents with Gemini, combining multimodal reasoning with a vast and growing number of tools. This power extends beyond the web, right into the palm of your hand. So now, let me welcome Diana and Florina to talk about new tools and AI advancements for the Android ecosystem. [Applause] [Music]
>> DIANA WONG: You've just heard about the incredible possibilities AI brings. Now, Florina and I want to talk to you about how you can build excellent apps powered by AI on Android, and then how our APIs and tools make it possible to be more productive using AI.
>> FLORINA MUNTENESCU: Exactly, Diana. An excellent app is delightful, performant and works across devices, and with AI, you can unlock entirely new, unique experiences that bring value to your users. Let's show you how that comes to life with a new app we built. A few years ago we had a website that let you build yourself as a cute Android bot, selecting things like clothing and accessories. Then we started thinking: how would we build this today, using AI, as an app? The answer was of course through selfies and image generation. So we came up with Androidify! Let's take a photo of Diana and Androidify her!
>> DIANA WONG: Let me grab my favorite toy as an accessory. It's my daughter's.
>> FLORINA MUNTENESCU: Okay. So while Androidify is Androidifying, let's see what's happening under the hood.
>> DIANA WONG: The core of the app relies on two key AI-powered steps: getting a description of the person in the photo and then creating an Android robot out of that description. Without AI, creating this experience would be nearly impossible! And to implement this, we used AI models running in the cloud via Firebase.
>> FLORINA MUNTENESCU: To get a description of the image, we took advantage of the fact that Gemini models are multimodal, so they can use text, images, videos and audio as input. So all we had to do was call generateContent with a text prompt and the image the user provided, the image of Diana.
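The Androidify app itself is written in Kotlin and calls these models through Firebase, but the two-step flow Diana outlined (describe the selfie, then generate an Android bot from that description) can be approximated with the google-genai Python SDK. Here is a hedged sketch; the model IDs, prompts, and file names are placeholders.

```python
# Sketch of Androidify's two AI-powered steps, approximated with the
# google-genai Python SDK (the real app is Kotlin calling the models
# through Firebase). Model IDs, prompts, and file names are placeholders.
from google import genai
from google.genai import types
from PIL import Image

client = genai.Client()

# Step 1: multimodal generateContent -- describe the person in the selfie.
selfie = Image.open("selfie.jpg")
description = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        selfie,
        "Describe this person's clothing, accessories and hairstyle so an "
        "illustrator could draw them.",
    ],
).text

# Step 2: turn that description into a cute Android bot with an Imagen model.
result = client.models.generate_images(
    model="imagen-3.0-generate-002",  # placeholder Imagen 3 model ID
    prompt=f"A cute Android robot styled after this description: {description}",
    config=types.GenerateImagesConfig(number_of_images=1),
)
with open("androidified.png", "wb") as f:
    f.write(result.generated_images[0].image.image_bytes)
```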
>> FLORINA MUNTENESCU: Then, to generate the Android robot based on the image description, we used the Imagen 3 model, we called generate image, and that was it. How easy was that! Are you ready to see this?
>> DIANA WONG: I think so.
>> FLORINA MUNTENESCU: Ta-da!
>> DIANA WONG: That's so cute. You did all of this with what, AI and five lines of code? That's so cool.
>> FLORINA MUNTENESCU: The app is already available on GitHub for you to check out. [Applause]
>> DIANA WONG: As Florina showed, cloud-based models are powerful and ideal for Androidify, but what if you need to process prompts locally, directly on the device, without sending data to a server? That's where on-device AI shines. Gen AI APIs powered by Gemini Nano, our multimodal on-device model, offer APIs for common tasks like summarize, rewrite and image description.
>> FLORINA MUNTENESCU: So we said earlier that excellent apps are delightful, performant and work across devices. Let's talk about delightful apps first. For those of you who caught the Android Show last week (and if you didn't, definitely check it out), you saw our biggest UI redesign in years, packed with delightful new features and improvements. We're helping you bring the same level of delight and playfulness to your own apps with an update to the Material 3 design system, called Material 3 Expressive. We've already used it in our Androidify app. For example, take the camera button. You could use a circle, or you could use the Cookie shape from the new shape library.
>> DIANA WONG: And who doesn't love cookies?
>> FLORINA MUNTENESCU: The way the button group shape-morphs in the photo prompt button is so nice and smooth, and it's part of the expressive APIs. These small details are what separate a good app from a delightful app. Try out these APIs yourself using the Compose Material alpha.
>> DIANA WONG: Beyond Material Design, what else makes an app delightful? Helping users with more useful, relevant information. In Android 16 we added a new feature, Live Updates. They allow you to show time-sensitive updates for navigation, deliveries or rideshares by using the new Progress Style template, rolling out to devices over the next year.
>> FLORINA MUNTENESCU: Now, let's shift gears to another critical aspect of an excellent app: performance. Make sure you enable R8 and Baseline Profiles. Both of these have been available for a while, the performance results are impressive, and they translate into better Play Store ratings. With R8 and Baseline Profiles, Reddit's app improved so much that they got a full star rating increase within two months.
>> DIANA WONG: Next, an excellent app looks good across all the devices that users have. And we're making it easier for any app to move across devices out of the box, from foldables and tablets to Chromebooks. In Android 16, we're making API changes to no longer honor orientation, resizability and aspect ratio restrictions, giving users more responsive UIs by default.
>> FLORINA MUNTENESCU: And we're putting your app in more places. For example, we've been collaborating with Samsung, building on the foundations of Samsung DeX, to bring enhanced desktop windowing capabilities to Android 16, for more powerful productivity workflows.
>> DIANA WONG: To make your app beautiful across devices, we're continuing to make it as easy as possible to build adaptively, with new features in our Compose Adaptive Layouts library like pane expansion.
>> FLORINA MUNTENESCU: Optimizing your app to be adaptive has a real impact on business metrics.
We've seen that when users engage with an app across multiple devices in app categories like music, entertainment, and productivity, there's a two to three times increase in engagement. And apps like Canva, which invested in large screens, found that cross-screen users are twice as likely to use Canva every single week!
>> DIANA WONG: But it's not just foldables and tablets; we're bringing Android apps to more devices automatically. Here's the great part: if you're building adaptively for Android, you're already building for two more form factors, cars and XR. With cars, whether users are waiting at a charging station or in line at school pickup, they can stay entertained with popular streaming and gaming apps, like Farm Heroes Saga and more. You can adapt your existing large-screen app to be used in parked cars by simply opting into distribution via Play Console and making minor optimizations. And if you're building adaptively, you're building for Android XR, the extended reality platform built together with Samsung. It powers glasses like the ones Nishtha demoed earlier; we'll share more details on how you can develop for these later this year. It also powers headsets, like Project Moohan from Samsung, which you can start building for right now, knowing it'll be in the hands of consumers later this year. Soon after, our partners at XREAL will release a developer edition of the next Android XR device, code-named Project Aura. It's a portable device that gives users access to their favorite Android apps, including those that have been built for XR. All the apps you are developing will scale directly to these, too.
>> FLORINA MUNTENESCU: With these upcoming devices, we're launching Developer Preview 2 of the Android XR SDK, with new Material XR components, updated emulator support in Android Studio and spatial video support for your Play Store listings.
>> DIANA WONG: That's a lot of form factors, and we're enabling more for users out of the box. I love seeing when apps look great across all of my Android devices. Once an app is adaptive, it unlocks access to more than 500 million devices it can run on. We're already seeing incredible experiences with partners, whether it's Peacock, who put in the work to create a strong adaptive experience for their large-screen app and as a result also got a really nice XR app.
>> FLORINA MUNTENESCU: Or like Calm, who easily extended their Compose app to create multisensory mindful experiences only possible with XR.
>> DIANA WONG: Now, building these kinds of excellent experiences across devices means you need powerful tools. And that brings us to our next topic: boosting your productivity.
>> FLORINA MUNTENESCU: It's no surprise that we used Jetpack Compose for Androidify. It has the features, performance, libraries and tools that we need to build an excellent app. 60% of the top 1,000 apps take advantage of the development speed Compose offers. The latest stable release brings features you've requested, like autofill, text autosize and visibility tracking. We're focused on making Compose performance better and better; in the latest release, we see barely any janky frames on an older-generation Pixel device. And we heard your feedback: you want to use Compose throughout your UI, so we're releasing CameraX and Media3 Compose libraries.
>> DIANA WONG: We know building -- oh, yeah, clap.
[Applause] We know building navigation for apps across different devices and screen sizes can be complicated, so we rebuilt the Jetpack Compose Navigation library from the ground up. Our goal was to make it simpler, more intuitive and incredibly powerful for managing screens in a stack, retaining state, and enabling seamless animations and adaptive layouts.
>> FLORINA MUNTENESCU: When we think about productivity, it's not just about writing code faster; it's about streamlining the entire development lifecycle, from refactoring to testing and even fixing crashes. That's where Gemini in Android Studio truly shines, by taking on those tedious tasks.
>> DIANA WONG: Let's head back to the demo desk to show you a few features that will change the way that you work.
>> FLORINA MUNTENESCU: We all know the benefits of writing end-to-end tests. You get to test large parts of your app at the same time, but we also know these are the tests we avoid the most, because they tend to be hard to implement.
>> DIANA WONG: You can now use natural language to perform actions and make assertions with Gemini in Android Studio. So let's bring together some of the Androidify features we've shown so far and test them! These files here are journeys, like user journeys. Let's run one of them.
>> FLORINA MUNTENESCU: If you read the first two actions, they're clicking on certain buttons with different text on them, like "click on Let's go." The journey waits until each action is done before moving to the next one, so you don't have to synchronize these tests anymore. Then, the third one is a bit more interesting: select the photo of a woman with the pink dress. This is where natural language is a lot more powerful. This could be the third photo, like here, or it could be the first one, so it would be hard to find with a regular command. Also, the UI you just saw is the platform's photo picker, and it can change from version to version of Android. The last action was just a verify, and that's it. The test passed. [Applause]
>> DIANA WONG: That was so easy even a PM like me could do it.
>> FLORINA MUNTENESCU: So you're volunteering.
>> DIANA WONG: Let's change topics. How about updating dependencies to the latest versions? That's another task developers love doing.
>> FLORINA MUNTENESCU: It's definitely something I avoid doing, because it's a tedious task, although, I know, we get the benefits of features and bug fixes that come with the latest updates. Now, Gemini can help with this, too. So let's demo a new AI agent coming soon in Android Studio to help with version updates. I have a project loaded here; this was actually from even before Kotlin 2.0. There's a new option in the menu called Update dependencies. The agent analyzed the different modules in the app and checked what library updates we can apply. Next, the agent tries to build the project, uses Gemini to figure out how to fix any problems, and iterates until the build is successful. Let's give it a second and see how it behaves. So it found an issue. The build succeeded. Let's see if we get to see the changes. I'm going to minimize these just to have a bit more space. So the libraries have been updated to the latest versions. In the plugin, we're using the new Compose compiler plugin. The compile SDK is now 36. And then in the main activity, we don't only get the change, we also get an explanation of the change, so we can see that the platform class was replaced with an AndroidX class. So the build is successful. So we're done.
>> DIANA WONG: Nice. [Applause] Now that you've seen some of the power of Gemini in Android Studio in action, you might be wondering how you can get the same benefits in a corporate and enterprise environment, not just a personal project. So today, once you subscribe to Gemini Code Assist, you'll have access to Gemini in Android Studio for businesses. It's specifically designed to meet the privacy, security, and management needs of businesses, with the same Gemini in Android Studio that you're used to.
>> FLORINA MUNTENESCU: As you've seen today, across all of Android, we're making it easier for you to build the best experiences, creating apps that are delightful, performant and adaptive across devices. And we're helping you be more productive with Compose and Android Studio.
>> DIANA WONG: So check out the Androidify app to see how this all comes together, use the latest versions of our APIs and tools, and go build the next generation of Android apps. Now, let's hear from Una and Adi about how you can build the next generation of web apps! [Music] [Applause]
>> UNA KRAVETS: One of the best things about the web is that its reach is virtually unlimited. With a single website you can bring your ideas to almost any user on the planet. The challenge is creating applications that work well across the near-infinite combination of users, devices and browsers. Today, I want to introduce you to a slew of powerful new features in Chrome that will help you do just that. With these features you can build better UI, debug your sites more easily with DevTools, and create AI features more quickly and cost-effectively with Gemini Nano in Chrome, all with the goal of creating a more powerful web, made easier. Building engaging user interfaces is critical: it's the cornerstone of what sets you and your app apart in a very crowded digital space. So let's begin there. We've been hard at work fixing core issues and expanding capabilities by leveraging HTML and CSS, the web's native building blocks. It should not only be possible, but simple, to create beautiful, accessible, declarative, cross-browser UI. What better way to make this tangible than by creating a website using everything we're announcing today? We'll show you some new capabilities that make it easier to build common but surprisingly complex UI elements like carousels and hover cards. And we're going to build these right here on stage, in a fraction of the time. To do that, my colleague Adi is going to help me out.
>> ADI OSMANI: Thanks, Una. So here we have this work-in-progress virtual theater site, built in React. Watch us transform it into a rich, delightful experience, starting with turning these posters into a carousel. Now, there's a lot to juggle: transitions, state management, DOM manipulation, performance, and after all that effort, it's still buggy in edge cases. With Chrome 135, we've combined a few powerful new CSS primitives to make building carousels, and other types of off-screen UI, dramatically easier! Watch how you can build this in just minutes with only a few lines of code. So I'm going to head over to my editor, and I'm going to show you this carousel class. Here, I'm positioning the items, setting overflow, and requiring snapping at the center for each item. So we're just going to save this. We're using our carousel class on our poster list, and we're making some progress here. It's looking like a carousel, great stuff. Now, let's add some navigation affordances for the off-screen items.
I've got a controls class. Let me just add that as well to that poster list and show you. Okay. So we've got some buttons using this controls class, and I want to show you the code for this. Here, you can see that those buttons were created using scroll-button pseudo-elements, and this is what the code looks like. We've got a scroll button right and a scroll button left. Now, we styled these with the content property (here's the content property down here), and we've given them accessible labels.
>> UNA KRAVETS: Can we navigate this carousel a little faster? I've seen a lot of them have those little navigation dots at the bottom.
>> ADI OSMANI: We can do that. Let me show you our indicators. That's what the new scroll-marker pseudo-element is for. This is another pseudo-element which can be styled similarly to scroll buttons. Here's our scroll marker down here. We've got another new pseudo-class called target-current, and that manages the active marker styles. Let me just add in our indicators class and bam! We now have scroll indicators. [Applause]
>> UNA KRAVETS: One early adopter of this new CSS-based method of building carousels is Pinterest. Before, their development team spent so much time maintaining their custom-built JavaScript carousels. Earlier this year, Pinterest made the switch to using the new CSS APIs for carousels, cutting down around 2,000 lines of JavaScript into just 200 lines of more performant, browser-native CSS. [Applause] That whopping 90% reduction in code improved overall performance. It also noticeably improved Product Pin load times by 15%, which is a huge quality-of-life improvement for their users, like me. Adi, how's our carousel going?
>> ADI OSMANI: I thought it could use a little more finesse, so I added in some hero images. Here they are. They look a little bit pretty, and I also put it on a virtual stage. So check this out. Isn't that nice? That's using scroll-driven animations. [Applause] Now, I would love to put in a feature where you could easily hover and get a sense of what the view is like from there. So I'm going to add in the seat sections and seat details right now.
>> UNA KRAVETS: I love that idea. Rich tooltips and preview cards are super common in web interfaces, but building and maintaining them, managing state, accessibility hooks, and events, is still extremely challenging. So I can't wait for you to try the new experimental Interest Invoker API which, when used with the existing Anchor Positioning and Popover APIs, helps you build out accessible, complex, layered UI elements, without JavaScript, in a fraction of the time.
>> ADI OSMANI: In this theater layout, we now have seat sections. Let's add a seat preview for each of these sections so you can get a sense of what the stage looks like from them. I'm just going to go ahead and click this eye icon. And check that out. Right now, this is a popover -- [Applause] This is a popover which is triggered on click. So let's have it show up when hovering or focusing instead. To make this change, all we need to do is change our popover target to interest target. We go back to our eye button here, I'm going to change this popover target, I'm going to type in interest target instead and hit save. Let's do that again. Let's hover over the eye. Nice! [Applause] So here, the browser is handling the state management, event listeners, ARIA labeling and a lot more for you, making this complex interaction a breeze.
>> UNA KRAVETS: Building something like this is so hard to get right.
I can't believe how easy it is now. This is the power of modern CSS: turning complex UI challenges into straightforward, declarative code, without frameworks, multiple libraries, and thousands of lines of JavaScript. But I know the big question you're all asking is: will this work for my users? That's why we launched Baseline, to show you feature availability across all major browsers. But we know that for Baseline to be truly useful, it has to be available in the tools you use every day, like IDEs, linters and analytics tools.
>> ADI OSMANI: So here I am, back in VS Code. Look what happens when I hover over some of these recent CSS additions: right in the tooltip, you can see the feature's Baseline status and browser availability. No more switching between browser tabs to check compatibility. It's really nice. [Applause]
>> UNA KRAVETS: And if these aren't your tools of choice, don't worry. ESLint can now be configured to warn you about anything that doesn't match your targeted Baseline version for HTML and CSS files. This integration is also coming soon to VS Code-based IDEs and JetBrains WebStorm. Shall we get back to building our website, Adi?
>> ADI OSMANI: Yeah, perfect timing. I've made some progress.
>> UNA KRAVETS: Is that "Find my seat" button new? It's not centered. Can we fix it?
>> ADI OSMANI: This is a perfect opportunity to showcase some of our new AI features in Chrome DevTools. Let's take a closer look. I'm going to open up DevTools and select the element we're having some issues with. If I right-click and go to AI, check this out. This is something new: AI assistance that is baked into the panel. I'm just going to switch over right to here to show you this. One moment. Let's hope that the demo gods are with me. All right. So we're going to go back to that "Find my seat" button, I'm going to select that one button there, amazing. We're going to turn on AI assistance, just make sure that is all turned on. Awesome stuff. All right. So what I'm going to do here is -- oh, no, no. [Applause]
>> UNA KRAVETS: We're doing it live!
>> ADI OSMANI: All right. So this is AI assistance baked into the panel. What I want to do now is use natural language and ask Gemini a question. What I'm going to say is: I set margin to 50%, but it's still misaligned. How do I fix it? So notice that I'm just using natural language. No complex queries. I'm just asking in plain language, and it seems to have come back with a solution in mind. So let's see what it does when it applies the CSS rules for the margin. Okay. I think what I need to do here is actually apply a transform fix. Let's see what it says. Perfect. All right. So it centered it for us. [Applause] And now, normally, I would copy this fix, switch back to my editor, find the right file, locate the right spot in the code, you know the drill. But with Chrome 137, AI assistance can find and apply a fix directly from DevTools. So I've connected DevTools to my local workspace. If I expand unsafe changes and I click this Apply to workspace button, it's going to do some magic behind the scenes. Okay. Let's see if the demo gods stay with us here. All right! [Applause]
>> UNA KRAVETS: This usually works.
>> ADI OSMANI: It should be applied directly to my source code. So ideally, there's no context switching, no copy-paste errors. We just get immediate results.
>> UNA KRAVETS: Now, speaking of fixes, why don't we show everyone the completely redesigned performance panel.
>> ADI OSMANI: Let me pull that right up.
Here, I've collected a quick performance trace. We're going to switch over to the other one. I've collected a quick performance trace, and in the Performance Insights sidebar, I see a layout shift culprit. So let's figure out why this comes up. We've got our Ask AI button, and this is a game changer. I'm just going to select this layout shift insight, and I'm going to ask Gemini. I need to actually switch over to here. So let me go back to Ask AI, and what I'm going to do is ask Gemini: how can I prevent layout shifts on this page? We're going to go ahead, and it's come back with a response. With the current web font, there are a lot of layout shifts, and this is already useful because it gives me a clear direction on what to do next. This is what I love about these new Chrome DevTools features. They don't just highlight problems; they help you understand and fix them without leaving your workflow. [Applause]
>> UNA KRAVETS: AI in DevTools is such a great use case, and Gemini is not just helping you debug, it also helps you build. Last year, we announced Gemini Nano in Chrome and invited you to help shape the future of AI on the web. Since then, nearly 17,000 of you signed up for the Early Preview Program, and we learned so much from you. Starting today, we're rolling out seven AI APIs across various stages of availability. Backed by our best on-device models, from Gemini Nano to Google Translate, these APIs in Chrome have been fine-tuned for the web. We're also working with other browsers to offer these same APIs backed by their own AI models, so that you can use them everywhere. With Gemini Nano and our built-in APIs, the data never leaves the device. That's huge for schools, governments, and enterprises with strict compliance and data privacy rules, and it also means you can affordably scale AI features to a massive audience. [Applause] That's exciting! So many of you out there are already using the power of these APIs in your applications, like Deloitte. They're experimenting with an integration of Chrome's built-in AI APIs right into the Deloitte Engineering Platform to improve onboarding and navigation. 30,000 developers at Deloitte can find what they need a projected 30% faster, while also giving better feedback to improve the platform.
>> ADI OSMANI: Before we wrap up, I want to show you one more thing that's really exciting. Today we are unlocking new multimodal capabilities from Gemini Nano. Our multimodal built-in AI APIs let you create experiences where users can interact with Gemini using audio and image input. Let me show you how this works. Going back to our theater example, we can help people find their seat, like an AI usher. For this, we need a function that can extract information from a photo of our ticket and highlight it in the app, and I want this to work on every device and in every browser. So we've partnered with Gemini and Firebase to offer a hybrid solution that works everywhere, on device or in the cloud. Over here in our editor, we start by setting up Firebase and Gemini, and we define our model parameters. I'm just going to click through here and quickly show you. We've got "prefer on-device" right here set as a configuration. The AI returns our information, we're just configuring how the image is formatted, and we're finally getting a response. So let's go ahead and give this a try. Now, normally I would be doing this with my phone, but I'm just developing right now, so I'm just going to use this webcam.
I'm going to snap a photo of this ticket. Let's try this out. And bam. The built-in multimodal AI instantly located my seat section in the theater. Awesome.
>> UNA KRAVETS: You don't have to wait to get your hands on these new tools. Many of these APIs are broadly available, and you can sign up for the Early Preview Program today to start experimenting with these new multimodal AI and hybrid solutions. From building better UI to faster debugging to creating all-new AI-powered features, we're constantly working to give you more tools to bring your vision to life. With your help, together, we can build a more powerful web, made easier. [Applause] And now, let's check out how Firebase Studio is making it even easier to spin up a full-stack app. Take it away, David! [Music] [Applause]
>> DAVID EAST: Last month, we launched Firebase Studio, a cloud-based AI workspace where, with a single prompt, you can create a fully functional app. You can lean on AI assistance throughout, or dive into the code, thanks to the full power of an underlying customizable VM that is open and extensible. Like Product Manager Erland Van Reet, who had been kicking around an idea for a flexible platform for sharing things in your community for quite some time, but was able to actually bring it to life with Firebase Studio. You can prompt Firebase Studio to generate almost anything, like CRM tools, interview coaches, sales planners, and games. And today, we're adding more to Firebase Studio to help you build faster in every part of the development stack, from front end to back. Let's start with the front end. If you're like me, there are two stages you go through when you're handed an incredible-looking design. In the first stage, you're excited, because you get to work on a project with such an awesome user interface. And then in the second stage, well, you realize that you actually have to build it. Going from a design tool to a functional user interface running in a development workspace can take a lot of work, and we wanted to simplify this process. So now you can bring Figma designs to life right in Firebase Studio, with some help from Builder.io. [Applause] Now, when you're in Figma, you can install the Builder.io plugin, click to export to Firebase Studio, and it translates all the component code and opens a window with Firebase Studio for you to kickstart your development process. Let me show it in action. I'm here in Firebase Studio after importing a Figma mock of a furniture store app that was actually created in Stitch. We have this product listing grid page, and it's gone from a design to real app code up and running in a development workspace. I can scroll through all these products, and since it's gone from design to real code, I can either dive right into the code or ask Gemini in Firebase Studio to give me some advice on where to begin. What I love about this Figma export is that it didn't just generate a large monolith of code; it generated individual, well-isolated components to make up the whole page. This Figma mock only had a design for the product grid page but not a single product detail page, so let's build one. I'm going to open up Gemini in Firebase Studio and paste in a prompt, but I will break it all down. So I asked Firebase Studio to build a single product detail page, and I want it to use the existing component system and sample data. Then I'll give it a file name that I want it to use, and I'll ask it to hook up to the routing system.
And lastly, we're going to add a feature to create an add-to-cart button. Before I send this prompt, I'm going to use the model selector and select Gemini 2.5 Pro. Now, when I submit this, Gemini in Firebase Studio gives me a breakdown of all the changes that it wants to make. But it's not going to make them all at once, because no one wants to review a giant file of code. It's going to break things into multiple steps, making it easier to stay in the loop and review each change. So it starts by creating the full product detail page, and after I create this file, it goes and updates the routing logic. Then, after that, it goes into the product detail card and wraps a link so it points from the detail card to the individual page. And after that, it identifies that it needs to update the data flow by passing in the product ID. And now this part, this part is really cool. Gemini in Firebase Studio noticed that our product sample data didn't have a description property, so it wouldn't display any details on the detail page. So it updated the placeholder data and generated descriptions for each product. [Applause] And lastly, it generates the add-to-cart button, and so now, when I update this file and go back into the web page, I can click around, and I have myself a fully working product detail page. [Applause] All of this was done in just a few minutes with a single import and a single prompt. The front end brings the user experience to life, but for apps to truly perform, to handle complex data and connections, you need a backend. Right now in Firebase Studio you can add your backend either by coding it yourself or with help from Gemini. But wouldn't it be great if it just added a backend for you? Normally, when prototyping apps, Firebase Studio generates an App Blueprint that details some of the most important characteristics of the app, such as features and a style guide. Rolling out starting today, we're adding a backend section to the App Blueprint. [Applause] Firebase Studio will detect when your app needs a backend and provision it for you if your prompt includes a database or authentication. From this Blueprint, Firebase Studio will set up the configuration for the backend services, and it also generates the code to authenticate users and save data to a database. And when you're ready to publish, Firebase Studio will provision those backend services and deploy to Firebase App Hosting. You can still jump into the coding workspace and extend your apps with any backend if you prefer a different stack, or if your needs change as your app grows. These features are starting to roll out today, and we're rapidly adding new capabilities to Firebase Studio based on your feedback, so try it out now! [Applause] All right. Another demo down, and we are into the home stretch! Next, if you want to start tuning your own AI models, here's Gus to tell you all about what's new with Gemma. [Applause] [Music]
>> GUS MARTINS: Hi, everyone. [Applause] I'm very happy to be here. Thank you very much. Today has been all about making it easier for you to build great things with Gemini, but sometimes you really want to fine-tune your own model. Like when you want AI to help you understand sensitive data, learn the details of your business, or even run offline. That's why we released Gemma, our family of open models. With Gemma open models, we're bringing magical AI experiences instantly and privately into users' hands.
A couple months ago we launched Gemma 3, state-of-the-art open models capable of running on a single cloud or desktop accelerator. But we kept cooking! Today, I'm thrilled to announce Gemma 3n, a model that can now run on as little as 2GB of RAM. [Applause] Gemma 3n shares the same architecture as Gemini Nano and is engineered for incredible performance. It's much faster on leaner mobile hardware compared to Gemma 3. We've added audio understanding, making it truly multimodal. We are sharing Gemma 3n in preview today, starting on Google AI Studio and with Google AI Edge, and we are also bringing it to open source tools like Hugging Face, Ollama, Unsloth and others in the coming weeks. The Gemma family is designed to be highly adaptable, and one domain where this openness and adaptability is showing incredible promise is healthcare. Today, I'm very excited to introduce MedGemma, our most capable collection of open models for multimodal medical text and image understanding. [Applause] There's a bunch of those. MedGemma works great across a range of medical image and text applications, so developers can adapt the model for their own health apps, like analyzing radiology images or summarizing patient information for physicians. Okay. I've talked a lot already. Let me show you how easy it is to grab a Gemma model and build something completely unique for you. Let me go here to a Google Colab. What we are going to do is use Unsloth, a fantastic library for fine-tuning LLMs like Gemma much faster and with less memory, and it runs great on NVIDIA GPUs in Google Colab's free tier! For this demo, I want to show you how easy it is to fine-tune. My daughter and I have a unique emoji language for texting, and it would be great to have a personalized translator for us. For example, when talking about our dog, Luna, let me show you something. This is Luna. So when we talk about our dog, Luna, it should automatically translate to her special emoji. So what I did is I created a custom dataset to fine-tune Gemma, to teach it our specific emoji dialect. In Colab, we do the usual: we load the model and set up the environment. Colab caches popular models, so it loads quickly. Training can take some time, so I've already formatted the dataset, I did the training, and now I already have our custom model. Here we go. But the thing is, I would like to compare my custom model against the original one and see if it really learned from my data. To do that, I'm going to build a UI using the new AI-first Colab that we've just launched. It's an entirely new way to build faster. This agent-first experience transforms coding into a dynamic conversation that helps you navigate complex tasks. So let me show it to you right here. I'll open the feature, add my prompt and execute. So Colab is generating the code for me. Done already. Let's see it. And I can see here. I think everyone is using Colab today. Nice. It's here. And there it is. So just like that, Colab created this application for me, but the thing is, we should test this, right? Any suggestions for prompts? Anything? I can do that. Luna loves -- sorry, nervous. Strawberries, correct. Let's try that. Let's try that. And come on, demo... it worked. Let's zoom in for you. You can see the difference between my custom model and the base one, and that it really knows the language and the details about our emoji dialect. Isn't that cool? [Applause] So this is a really fun example that highlights the power that you have.
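For readers who want to reproduce something like the emoji-dialect demo, here is a minimal sketch of an Unsloth LoRA fine-tune of a Gemma checkpoint. The model name, toy dataset, and hyperparameters are placeholders, and SFTTrainer arguments vary a bit between TRL releases, so treat this as a starting point rather than the exact notebook Gus used.

```python
# Minimal sketch of an Unsloth LoRA fine-tune like the emoji-dialect demo.
# The Gemma checkpoint, toy dataset and hyperparameters are placeholders,
# and SFTTrainer arguments differ slightly between TRL releases.
from unsloth import FastLanguageModel
from datasets import Dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-4b-it",  # placeholder Gemma checkpoint
    max_seq_length=1024,
    load_in_4bit=True,  # fits comfortably on Colab's free-tier GPUs
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# A tiny, made-up dataset teaching the custom emoji dialect.
examples = [
    {"text": "Translate to our emoji dialect: Luna loves strawberries -> 🐶💛🍓"},
    {"text": "Translate to our emoji dialect: Walk Luna after school -> 🚶🐶🏫"},
]
dataset = Dataset.from_list(examples)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    args=TrainingArguments(
        per_device_train_batch_size=2,
        max_steps=30,
        learning_rate=2e-4,
        output_dir="gemma-emoji",
    ),
)
trainer.train()
model.save_pretrained("gemma-emoji-lora")  # LoRA adapters for the comparison UI
```

From there, the saved adapters can be loaded alongside the untouched base model, which is roughly the side-by-side comparison the demo UI shows.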
>> GUS MARTINS: You take a base model like Gemma, you take accessible tools like Unsloth and the new AI-first Colab, and you quickly fine-tune it with your own personal data. And you can deploy this using Google Cloud with Cloud Run, Vertex AI, or Google AI Edge for local deployment. The thing is, my emoji project just scratches the surface, showing how quickly you can build on a Gemma model for your own use case. What is truly inspiring is the Gemma-verse: tens of thousands of model variants, tools and libraries created by you, the developer community. Gemma has been downloaded over 150 million times, and we've seen the community create close to 70,000 Gemma variants, hundreds of which are specifically for the world's diverse languages, like the incredible story we shared last year of Navarasa, a Gemma model fine-tuned to speak 15 Indic languages. Gemma is available in over 140 languages and is the best multilingual open model on the planet, and we're excited to announce that we're expanding Gemma even further, to sign languages. That's a good one, guys. [Applause] SignGemma is a new family of models trained to translate sign language to spoken-language text, and it's best at American Sign Language and English. [Applause] It's the most capable sign language understanding model ever. We can't wait for you, developers and Deaf and Hard-of-Hearing communities, to take this foundation and build with it. That's not all! Gemma is also helping researchers expand our understanding to more than just human language. Just last month, we introduced DolphinGemma, the world's first large language model for dolphins. [Applause] Working with researchers at Georgia Tech and the Wild Dolphin Project, DolphinGemma was fine-tuned on data from decades of field research to help scientists better understand patterns in how dolphins communicate. Can you imagine going on vacation someday and being able to talk to a dolphin? [Laughter] Let's check on the progress.
>> DENISE: When I was 12 years old, I would page through the encyclopedia. I would stop on the whale and dolphin page, and I would go, "I wonder what is going on in the minds of these animals?" Language to me is the ultimate question about intelligence, the thing that we haven't been able to nail without a great tool. How do you decipher another species? The main use of DolphinGemma is to eventually look at their natural language patterns and match them with the underwater video. That's how we really figure out their language. All right. Beautiful day. Woohoo.
>> THAD: Imagine a world where you could talk to an animal. DolphinGemma is the first LLM for dolphins. We've leveraged over 40 years of Denise's vocalization research to create this large language model to generate new synthetic dolphin sounds. These sounds will help us to one day, hopefully, communicate with the dolphins.
>> DENISE: Thad shows up at my office one day and he's wearing one of his many wearable computers. This guy's like a techno man.
>> THAD: I said, "Well, I do wearable computers. This sounds like a summer project. We'll get this done for you." Here it is, 15 years later. We've engineered this CHAT Jr. device to play our synthetic dolphin sounds underwater. Let's see which one this one is. That is the signature whistle for Denise, so that whenever she's approaching the dolphin, she can introduce herself.
>> DENISE: So what they're doing is they're testing the distance at which the two CHAT boxes can efficiently recognize our synthetic sound.
We want it to work close to 100% before we take it in with the dolphins. It's this dance between getting data and respecting their culture.
>> THAD: I'm not very seaworthy. The true breakthrough for me with DolphinGemma is that I can generate so many of these so quickly. I just scroll through here and do days' worth of work in minutes. If people can use this open model, they could take advantage of Denise's 40 years' worth of research and use it to build models for their own purposes.
>> DENISE: Science is most exciting when you find something you're not expecting. You may discover something that you never even thought to look for. I mean, that's the beauty: you don't know what's coming. [Applause]
>> JOSH WOODWARD: Isn't that amazing? That's the world's first large language model for dolphins, and it was made to run on this right here, a Pixel 9-powered rig just like this. I can't help myself. We're going to do one more demo and then it's a wrap. Does that sound good? All right. Does everyone remember the keynote companion app that Paige built right here at the desk? Let's bring it back up on stage, and let's just see what it thought. Tell me, what did you think of today's announcements?
>> KC: Oh, I thought they were amazing. All the new features for developers are so exciting, especially the stuff with Gemini and the AI tools in Chrome. It's going to make it so much easier --
>> JOSH WOODWARD: What about my demo, the one I did?
>> KC: Your demo was fantastic.
>> JOSH WOODWARD: It's a live demo.
>> KC: Seriously.
>> JOSH WOODWARD: What's some advice you would give people on what to build, if you don't want to talk about my demo?
>> KC: I definitely recommend trying out the new AI features in Chrome DevTools. It looks like it's going to make debugging so much easier, especially with the AI assistance helping you fix issues directly.
>> JOSH WOODWARD: Can you just close us out, bid a fond farewell? What's on your mind?
>> KC: Absolutely. Thank you all so much for being here today. We've seen some incredible innovations, and I can't wait to see what amazing things you'll build with all the new tools. Have a fantastic rest of I/O.
>> JOSH WOODWARD: All right. [Applause] Everything you've seen here is going to be on io.google, so you can check it out. And don't forget to join your local Google Developer Program, where you can connect with local builders. And I hope all of you enjoy the rest of I/O. We're excited to see how you're going to change the future of software development. Thanks again! [Applause] [Music]