Transcript for:
Summary of Nano Banana Gemini 2.5

For centuries, art required brushes, then software. But today a clear idea is enough. Until a few days ago, creating or editing a professional image meant having experience with programs like Photoshop. That changed with the arrival of Nano Banana, the new image model in the Gemini 2.5 family, and now anyone can create and edit images with precision. It doesn't stop there, because there's an opportunity to build solutions with this model that no one is talking about. So in this video we'll look at what this model is, how to use it, and how to get the most out of it like a pro to build things like an ad-generating system, interior design applications, or our own simplified version of Photoshop. Best of all, we can try it ourselves for free. Sounds good? Then let's get to know the model. Let's start by getting to know this model, previously known as Nano Banana. This AI was ranked number one by a wide margin against leading image tools like ChatGPT or Midjourney. At first the company behind it was unknown, but a little later it was renamed Gemini 2.5 Flash Image, a model that lets us very easily transform an image, like this astronaut with a helmet, into this other one where we specified that the helmet should be removed. It also lets us mix elements, such as uploading an image of bubbles and one of a couple having dinner to get this result, or uploading a photograph of a fork, then a spaghetti texture, and recreating this new image from them. We can also keep the original image almost intact while making small changes, like this dancer who, when we tell her to change her pose by raising her arm a little, becomes this other image, or this room with an abandoned piano that, when asked to look new, ends up looking like this. In the same way, we can work with consistent characters, like this example where these two people are added and, from there, many scenes can be generated while keeping them fully consistent. The truth is that the model we're testing today does something never seen before, and a reflection of this is the benchmark data where Gemini 2.5 Flash Image sits at number one; if we look specifically at image editing, it far exceeds everything we had until now, so whatever test we run, it stands out at practically anything. Now we're going to put it to the test, and I'll show you different personal use cases that could be useful. There's also a great opportunity here, since we can use this model as the basis for any tool we build. That's because, beyond using it from Google's own tools, we can call it through an API which, in addition to being competitively priced, also lets us start testing it for free. But before we build anything, let's jump to the next block to see its capabilities. We can use this new Gemini model from different Google platforms. One of them is Gemini itself: open a new chat, click on the image section, and we're already using this model.
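For readers who want to jump straight to that API, here is a minimal sketch of what a call to this model could look like. It assumes the google-genai Python SDK and the gemini-2.5-flash-image-preview model id (check the current documentation for the exact model name available to your account); the key placeholder and file name are illustrative, not something shown in the video.

```python
# Minimal sketch: text-to-image with the Gemini API.
# Assumes the google-genai SDK and the "gemini-2.5-flash-image-preview" model id.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents="A photorealistic astronaut on the Moon, helmet removed",
)

# The response can mix text and image parts; save any image bytes returned.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("astronaut.png", "wb") as f:
            f.write(part.inline_data.data)
```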
So now I can click on the plus, select upload files, and choose an image like this one of me together with a sweatshirt like this one here, priced at over €4,000. When I copy that image and paste it into Gemini, I can tell it to swap the blue shirt for this red sweatshirt. If I click send, it starts processing, and seconds later we have this result, which honestly came out great: it's that exact sweatshirt with this design, a real test given how many elements it has, and it has perfectly respected both the person and the design of the sweatshirt they are now wearing. With this, it's easier than ever to request changes to any image we have or to create a completely new one with precision. Also, back in the chat, keep in mind that we can make sequential changes from here. That is, if I now tell it to turn me into a superhero based on this image and send it, a second later I have this other image which, as you can see, is still me, and it has also tried to mix the previous result with the Louis Vuitton sweatshirt, with those red elements and tones, only now with the superhero effect. So we're no longer just talking about making a specific change to an image; we can build a complete narrative on top of the successive results we're constructing. I can also iterate on this result, because if I now write in Gemini that it should put me driving a superhero car and click send, then again, on the first try we get something spectacular: this image with a supercar whose screens are full of information. In fact, if we look at this part, there's even what could be a criminal. In short, despite all the changes and speed effects in this car, I still appear wearing exactly the clothes we put on at the beginning, now with that superhero cape, just as in the previous results. Seeing this example can really open up many opportunities, such as creating stories with consistency or generating different scenes where the context of the previous material matters. The truth is that, testing it over the past few days, the results were so good that I started doing more professional tests. To do so, taking advantage of the fact that I run an academy with a very strong brand image, and since more and more people are joining the courses we have available, I thought: why not create merchandising for the students? So, from Gemini, I upload one of my logos, like this one, tell it that I run an artificial intelligence academy and want to create products for students, and ask whether it can generate a cap, t-shirt, sweatshirt and other products. If I select the image section and click send, a little later look at everything we have here: instead of producing a single image with everything mixed together, which would be very visual but not very functional, it has generated separate images, from a single prompt, of all the products I specified, such as the academy cap, a t-shirt and sweatshirt, the backpack, a few stickers and a pen. And with this, we're not just talking about having fun or making a few changes to an image; we can tackle genuinely professional applications like generating merchandising or creating content for companies.
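Edits like the ones in this block, whether swapping the sweatshirt or generating product mockups from a logo, come down to passing one or more input images plus an instruction. Here is a rough sketch of the clothing swap through the API, under the same assumptions as before (google-genai SDK, illustrative model id and file names); it is not the exact workflow from the video.

```python
# Sketch: multi-image editing, swapping the blue shirt for the red sweatshirt.
from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")

person = Image.open("me.jpg")                  # photo of the person
sweatshirt = Image.open("red_sweatshirt.jpg")  # product photo

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[
        person,
        sweatshirt,
        "Replace the blue shirt the person is wearing with this red sweatshirt, "
        "keeping the face, pose and background unchanged.",
    ],
)

for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("edited.png", "wb") as f:
            f.write(part.inline_data.data)
```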
Also, keep in mind that you can iterate on the results you get. Here, since all the products had the lettering of the AI part in black, it doesn't contrast when placed on a black product, so now I can ask it whether it can change that lettering on the products to white so it contrasts better. If we send it, a little later it gives us these other results, which really do look great. What we just saw would let us create our own project, but when it grows we need to automate our relationships with customers and sales. That's why I'm introducing Chatfuel, a social media automation platform that simplifies your messaging and engagement. Best of all, it unifies messages from Instagram, WhatsApp, or your web chatbot in just a couple of clicks, without code, and in just a few minutes. This way, you can automate your messages consistently, regardless of the channel. You can respond to messages, whether direct messages, ad replies, post comments, or reels. It can also handle your organization by moving chats, updating profiles, or even setting appointment reminders. And although Chatfuel manages your messages, you can take control of them whenever you need. This way, you centralize attention, provide quick responses, and leave no message unanswered, which results in more satisfied customers and more sales. Thanks to Chatfuel for sponsoring this video; they've left me a special link below in the description where you can apply the Alex Javi coupon to get a free month. With that covered, let's move on to the next case. This new image model can be used not only from Gemini but also from other products like Google Whisk, another platform I'll leave linked below in the description. From there, if we click on open tool, we can use it as well. In this tool, if we click on the subject section, which would be a character, and choose to upload an image, such as this photograph of me again, the AI will analyze it so that I appear in the new image. I can also click on the plus sign and upload an image of someone else, like Mark Zuckerberg, so I upload that too. Now, in the scene section, I can specify where I want these two characters to appear, for example in this plane here. If I open it and send it, a few seconds later I find an image like this one, where I'm the pilot and Mark is looking at me from the front, or this photograph here with a very similar pose. I can also add changes, such as telling it that the characters are piloting the plane, and when generating it I get these two results where we're both inside the plane flying it, plus another alternative like this one here. Honestly, the results we can achieve without any technical knowledge, with the right approach and in just a few seconds, are impressive. In addition, we can do a few more things from Google Whisk, because beyond asking for edits, we can generate videos using artificial intelligence. To do that, we simply click on animate, and a little later it has generated this video here. So you can see it better, I'll leave it on screen for you. The two methods we just saw are official, but limited in use.
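Before moving on: the kind of follow-up edit we made on the merchandising images (changing the lettering to white) maps naturally onto a multi-turn chat session when using the model programmatically, since the earlier images stay in context. A small sketch of that pattern, again assuming the google-genai SDK, with illustrative file names and prompts rather than the exact ones from the video.

```python
# Sketch: iterating on merchandising images in one chat session so
# follow-up edits keep the earlier results in context.
from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")
chat = client.chats.create(model="gemini-2.5-flash-image-preview")

def save_images(response, prefix):
    # Write every image part returned in a turn to disk.
    for i, part in enumerate(response.candidates[0].content.parts):
        if part.inline_data is not None:
            with open(f"{prefix}_{i}.png", "wb") as f:
                f.write(part.inline_data.data)

logo = Image.open("academy_logo.png")
first = chat.send_message(
    [logo, "Generate mockups of a cap, t-shirt, sweatshirt, backpack and stickers with this logo."]
)
save_images(first, "merch")

# Follow-up turn: only describe the change; the previous images remain in context.
second = chat.send_message("Change the black lettering to white so it contrasts on dark products.")
save_images(second, "merch_white")
```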
However, if we jump to this other platform, LMArena, which I'll also leave in the description, where it says battle you select direct chat, click on this image icon, and select this Gemini model, which is Nano Banana; from there you can in theory use it without limits, although I honestly don't know how long that will last. Let's now move on to the next block, where we'll truly take advantage of this through a developer platform that allows us to do many more things. To see it, we jump to Google AI Studio, another platform you'll find in the description, and here, if we select the chat section, we have all the Gemini models unlocked, including this Gemini 2.5 Flash Image. From here, if we click the plus button and upload an image, like this painting, Guernica, which is in black and white, this being its original look, we can ask it to color the painting and click Run; in just a few seconds we have this other result, reinterpreted by the artificial intelligence, and it's really great. We can see details like this person lying face up, who we might think is dead or badly injured, rendered in red tones and other shades; the yellow light; the candle with a similar shape; as well as the horn in one color and the bull in another, differences in the walls, and the way the light picks out the specific areas it would be illuminating. It really looks as though the image could have been like this, yet it's still a reinterpretation that the AI produced in about 15 seconds. Now, if we stick to using this model only in the Google AI Studio chat, we'd be using just 20% of its full potential, because to really take advantage of it we have to go to Build, where we find customized applications based on this model, and that's what lets us get the most out of it. Something very similar happens with the artificial intelligence course, which by the way will be closing enrollment in the next few weeks: there you'll learn more than 60 platforms in just 6 weeks, with live classes up close with me, and when you finish, all the content and resources remain available to you 24/7; once you complete everything, you get a certificate from us and, optionally, another from an official Spanish university worth two credits. Well, knowing how we can get the most out of AI in general, let's go back to Google AI Studio to look at some of its applications built on this model, such as Gemini Co-Drawing, a tool that lets us start drawing, for example this poorly drawn building that looks a bit out of perspective. I can now tell it to continue drawing another 10 buildings, and if I click send, Gemini starts processing and a little later, look what we have here: these 10 buildings drawn while trying to follow the piece of art I had left it. And with this, we're not just talking about Gemini being able to edit images; we can work collaboratively with it to create a final product. Another application I want to highlight is this one called Home Canvas: we upload a photo of a scene we want to change, such as this empty house, search for any piece of furniture we want to try, such as this sofa, click on upload product, select that image, and click where we want to place it.
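The Co-Drawing behaviour, where the model keeps extending your sketch, essentially boils down to feeding the last returned image back in as the next input. This is a hedged sketch of that loop under the same SDK and model-name assumptions as the earlier examples; the file names and number of rounds are arbitrary.

```python
# Sketch: collaborative drawing loop; the model's output becomes the next input.
import io

from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")

canvas = Image.open("my_sketch.png")  # the rough hand-drawn building
for _ in range(3):  # a few rounds of back-and-forth
    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",
        contents=[canvas, "Continue this drawing by adding more buildings in the same style."],
    )
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            # Use the returned image as the canvas for the next round.
            canvas = Image.open(io.BytesIO(part.inline_data.data))

canvas.save("co_drawing_result.png")
```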
By the way, I made a mistake and placed it up here, so let's see what it does with that. A little later it has integrated it into this other scene, where we can see it fits in perfectly, even accounting for details such as the shadows it should cast, since there's a window providing the lighting. Now, with this model, we can really start building products that solve real problems for more professional use. In fact, if you'd like to know how I could build this type of application and add logic behind it, I recommend this video here, where I already talked about all of that. I'm going to go back, and I don't want to give you any spoilers, but I do want to finish with this one called Past Forward, a very fun tool where, if we upload an image of ourselves, like this one here that I've already used quite a bit, I can click on generate and the artificial intelligence recreates me in different decades, here in the 50s, here in the 80s, and so on through different eras so I can see what all that might have looked like. It's actually a lot of fun, but what if I told you that you could come up with an idea for an app you want to create and have it working in just a few seconds? Well, look: if we go back to the Build section, instead of selecting any of these applications we can generate our own, for example by telling it to create an app that generates comics based on the Gemini 2.5 image generation model. Once we specify this, it creates the application for us automatically, without us having to get into code or anything technical, so if we click send, the most powerful Gemini model, in this case 2.5 Pro, starts to analyze what we want to build, and a little later we find a tool like this one, a Gemini-based comic generator. Now, if I write any story, like one about a man named Alex Javi who saves the Earth with a secret quantum chip, and click on generate comic, it says it's beginning to analyze that epic story and starts creating the different panels; seconds later it has generated that story of a man seeing what could be the end of the world, then information about this chip, and here some panels, among which we find this final image that tells us something like: how do we achieve this? The planet is safe. And now, beyond using it in Gemini, Google Whisk, or a tool like the ones we've seen previously, it's easier than ever to decide which application would be ideal for generating our images or content, have one of our own, or even create a new project that we can later scale and, in the future, even monetize. That's why I want to show you this other application that I also built previously: by writing this exact prompt, asking it to create an ad generator using artificial intelligence and specifying at the end that it use the Gemini 2.5 Flash Image model, it generated this application that any business could actually use. From here, if we upload a photo of any product, such as this can of Coca-Cola, I can specify that I want it to appear on an advertising screen at a bus stop. If I click on generate ad, Gemini starts processing and, a few seconds later, has integrated this product, whether this can of Coca-Cola or any real product we sell, into that bus stop screen. From here we can download it and use it however we want.
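Behind an ad-generator app like this there is essentially one call: a product photo plus a templated placement prompt. Here is a possible sketch of that building block, with a hypothetical generate_ad helper and the same SDK assumptions as before; it is not the code of the app generated in the video.

```python
# Sketch: the core call an ad-generator app could wrap around.
from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")

def generate_ad(product_path: str, placement: str, out_path: str) -> None:
    """Render the product into the requested advertising placement."""
    product = Image.open(product_path)
    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",
        contents=[
            product,
            f"Create a realistic advertising photo showing this exact product on {placement}. "
            "Keep the product's label and proportions unchanged.",
        ],
    )
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            with open(out_path, "wb") as f:
                f.write(part.inline_data.data)

# Example placement, matching the option described above; others are just new strings.
generate_ad("coke_can.jpg", "a digital advertising screen at a bus stop", "ad_bus_stop.png")
```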
In the same way, I could change this format, to a luxury magazine for example. If I switch to that other option and click generate, a little later we find this other image with a model, with the can down here, so we can use it in those formats as well. In addition to creating applications for specific use cases, we could also create complementary tools for our workflow. Imagine you want to create animations to make AI-generated films or something similar. To do this, again in the chat section, you specify the application you want to create. In this case, I want to generate images to use in a video, so I need them to be consistent. Here I specify that it create an application that generates five images for making a movie, based on the Gemini 2.5 Image model, along with a few considerations I'd like it to include, such as letting the user specify the characters they want to appear or even the styles they'd like. When I send it, we find this application, which asks me what the story is about; in this case I've already written beforehand that the character is a frog. We could also upload a photo of ourselves, although I'm going to leave that blank. We can also select a visual style, such as anime. If I click on create storyboard, a little later it has generated five different scenes from this story: here a little frog who would love to reach the moon; then another scene of how he trains to achieve it; in the third we see several attempts; in the fourth he has finally achieved it; and at the end it gives us a moral about the success he achieved. With all this, we can now download each image, add a tool that converts images to videos using artificial intelligence to bring them to life, and so have a tool, created by ourselves, that complements our existing workflow. Let's finish with the last use case. Here I wondered: will I be able to create my own simplified Photoshop? And the answer is yes, because when I give it a prompt saying that I want a simplified, improved alternative to Photoshop that lets me make professional edits easily using this Gemini 2.5 Flash Image model, and I click send, a little later I find this application here, into which I can upload an image like the Coca-Cola can and start iterating. In fact, I was playing around with it earlier, and on this image, when I selected vintage, it generated this other version which, as you can see, has an older style, although the change isn't drastic. It also gives me the option of drawing a mask, or of saying through a prompt what we want to appear, such as a hand now holding the product. When it processes it, look at this image here: the product is perfectly integrated into that hand, with all the shadows, contrasts, and details that weren't even in the original image. The truth is that we can now make tangible any image we have in our minds, and now, more than ever, whatever you can imagine can also be made real.
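The consistency behind that storyboard app, and behind a simplified Photoshop that applies edit after edit, comes from keeping all the panels or edits in one session so the model sees its earlier outputs. A rough sketch of the storyboard version, with illustrative scene prompts and the same google-genai SDK assumptions as above; the actual generated app may work differently.

```python
# Sketch: generating a five-panel storyboard in one chat session so the
# character stays consistent across scenes.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")
chat = client.chats.create(model="gemini-2.5-flash-image-preview")

scenes = [
    "Panel 1: a small frog gazing up at the moon, anime style",
    "Panel 2: the frog training hard to jump higher",
    "Panel 3: several failed attempts to reach the moon",
    "Panel 4: the frog finally landing on the moon",
    "Panel 5: a closing shot with a short moral about perseverance",
]

for i, scene in enumerate(scenes, start=1):
    # Each turn sees the previous panels, which helps keep the frog's design consistent.
    response = chat.send_message(f"{scene}. Keep the same frog design as in the earlier panels.")
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            with open(f"panel_{i}.png", "wb") as f:
                f.write(part.inline_data.data)
```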