Transcript for:
Recent AI Developments and Announcements

Well, it's been another insanely busy week in the world of AI, and I don't want to waste your time, so let's get into this week's AI news breakdown.

Starting with news that actually came out last week: I record these videos on Thursdays, and this news came out on Friday of last week, when OpenAI released o3-mini. Now, we did talk about it in last Friday's video, because we knew it was going to come out that day, but now that we actually have access to it, I figured let's talk about it real quick. This new o3-mini model outperforms pretty much every other model out there in math, except for o1 Pro, which is not actually listed on this chart. In PhD-level science questions, the o3-mini high version beats everything else that's out there, except for, of course, o1 Pro. It's good at coding, good at software engineering, and it's pretty much the most powerful model on the market other than o1 Pro, which is only in the $200-a-month tier.

This new o3-mini, however, is available in every tier, and available in the API as well. Pro users will have unlimited access to o3-mini, and Plus and Team users will have triple the rate limits versus o1-mini. Free users can try o3-mini in ChatGPT by selecting the Reason button under the message composer. So even free ChatGPT users are getting access to this newest state-of-the-art model from OpenAI. You can even combine this o3-mini model with their search feature, even on free plans. OpenAI said: try search plus reasoning together in ChatGPT. Free users can use OpenAI o3-mini with search by selecting the Search and Reason buttons together. So if you're on the free plan and you want to use the new o3-mini model, you'd select the Reason button. If you want to combine it with search, you select both Search and Reason.

And I guess when it was originally released for free members, it didn't actually show the chain of thought, but as of February 6th, even that's been updated for both free and paid users. OpenAI said: updated chain of thought in OpenAI o3-mini for free and paid users, and in o3-mini high for paid users. Now, the chain of thought that it's showing here isn't actually the true chain of thought that's happening. It's not like what you see in DeepSeek R1, where you see literally everything the model's thinking before it gives you the response. This gives you a sort of summarized version of what it's thinking before it gives you a response. McKay Wrigley here even argues that it's actually worse than giving us nothing at all. He says: o3-mini is exceptionally great, but I do worry that summarized chain of thought is actually worse than nothing at all. True chain of thought exposure acts as a prompt debugger. It helps us steer the model. Summarized chain of thought obfuscates this, potentially adds errors, and makes it harder to debug. So if you're looking at something like DeepSeek R1, where you can see literally everything it's thinking, and it gives you an incorrect answer, you can literally go back and look through the chain of thought and figure out where it screwed up. With these summarized chains of thought that o3-mini is giving us, you can't really do that.

But in my opinion, the even bigger news that came out from OpenAI this week wasn't even the fact that they gave us o3-mini on Friday. It was that over the weekend, they gave us deep research. Unfortunately, deep research is only available to Pro users on the $200-a-month plan, which I do know makes it economically infeasible for a lot of people.
But I have used it, and it is really, really good. It is kind of interesting that they named it Deep Research, because Google has a product called Gemini Deep Research. It's exactly the same naming scheme, which is definitely going to confuse people, but it does work really well.

I asked deep research to help me with a YouTube strategy. It actually gave me some follow-up questions so that it could better understand what I was trying to accomplish: my current strategy on long-form versus short-form videos, my current video length and format, how I decide on tutorials, what my competitors are doing, what my monetization focuses are, things like that. I answered its questions, and then it gave me just an absolute beast of a write-up on how I should manage my YouTube channel. It is really, really in depth, and honestly it created an amazing, killer strategy. I'm literally following through on this strategy with my YouTube channel now. It wrote up this giant essay here, and I actually pasted it back into ChatGPT. This is the entire write-up that it gave me. I pasted it back into GPT-4o and asked it to give me a step-by-step checklist. And you can see here that it simplified everything and gave me a checklist of what to do for my channel, and even gave me a four-week breakdown to dial it all in.

So deep research has been a game changer for me. I know it's on the $200-a-month plan, but had I hired a YouTube consultant to look at my channel, analyze everything I was doing, and give me a detailed 10-page report with a step-by-step checklist of what I need to do on the channel, they would have charged me way more than $200. So I feel like I got the value out of that alone. But I also don't want you to feel like I'm trying to sell you on getting the $200-a-month plan. For most people, it's probably still not worth it. I've just personally found a lot of value in it.

There was a recent benchmark test that came out titled Humanity's Last Exam, and you can see how some of the existing models performed on it. GPT-4o got a 3.3% in accuracy. OpenAI's o1 got a 9.1. DeepSeek R1 got a 9.4. The new OpenAI o3-mini high got a 13.0. OpenAI with deep research got a 26.6% on accuracy. If you have a Pro account and you combine o1 pro with deep research, it is hands down the most powerful AI large language model I have ever tried. It is absolutely insane, because it does the research for you using deep research. So it will go off on the web and search out items for you as part of the research, and then it uses o1 pro's reasoning to really, really think through everything that it came back with. And that's how I got that insanely detailed report on what I should do with my YouTube channel. It wasn't only using what was in its training data. It literally did the research, did the chain-of-thought reasoning, and then spit back out that entire report. That's what makes it so powerful: when you start combining all of these things, they add up to an insanely powerful experience where the output is just mind-blowing.

And even if you're in the EU, you also get access to deep research. Deep research is now rolled out to 100% of all Pro users, including in the UK, EU, Norway, Iceland, Liechtenstein, and Switzerland. And one interesting thing that Sam Altman said not long after this came out: my very approximate vibe is that it can do a single-digit percentage of all economically valuable tasks in the world, which is a wild milestone.
Yes, only a single digit, meaning, you know, somewhere between 1% and 9%, but that single-digit percentage still likely adds up to billions of dollars' worth of value that this deep research is capable of delivering. And not only that, but Sam teased that there's still something else coming. He said: note, this is not the one more thing for o3-mini, a few more days for that. And he said that on the same day that deep research came out. He was basically saying: o3-mini came out, and then here's deep research, which makes all of this stuff even better, and we still have one more thing to show you, which is exciting, but we're not telling you yet.

But OpenAI wasn't even done there with announcements this week. They had a handful of smaller announcements, like the fact that ChatGPT search is now available to everyone over on ChatGPT.com, no signup required. So if you don't want to use Google search anymore and you'd rather use ChatGPT for your searches, you can just go to ChatGPT.com and do web searches that are combined with AI, without even logging in. So now it's an actual, true competitor to what Perplexity is doing. They also increased the memory limit in ChatGPT for Plus, Pro, and Team users by 25%. So yeah, it's been a big week for OpenAI.

And since OpenAI had so much going on this week, they actually took to Reddit to do an AMA, where Sam Altman, Mark Chen, Kevin Weil, Srinivas Narayanan, Michelle Pokrass, and Hongyu Ren (I'm sure I butchered at least one of those names) all joined in. A few comments they made: they are still planning on doing a GPT-4o image generator, so an image generator that's different from DALL-E. They mentioned there are some updates coming to advanced voice mode, and that they're not calling the next model 5.0; it'll just be GPT-5. They talked about how they're planning on increasing context length. They're working on the ability to attach files to the reasoning models like o1 and o3. But the comment that's probably gotten the most press, the one most people have been talking about, was when Sam Altman said: I personally think we've been on the wrong side of history here and need to figure out a different open source strategy. This was in response to somebody asking whether they would consider releasing some model weights and publishing some research. He goes on to say that not everyone at OpenAI shares this view, and it's also not their current highest priority. Essentially, Sam Altman believes that they've been on the wrong side of history with open source, and that maybe they should have been open-sourcing more of this stuff along the way instead of keeping it all closed off.

But besides OpenAI, Google had a huge week as well, releasing a bunch of new models, including Gemini 2.0. The new Gemini 2.0 models look pretty strong in all of the benchmarks, although these are just comparing them to previous Gemini models and not to the whole range of AI models that are available. And with this release, they actually put out three new models: Gemini 2.0 Flash, which is now generally available; Gemini 2.0 Flash-Lite, which is a more efficient version of Gemini 2.0 Flash; and Gemini 2.0 Pro, which is their best state-of-the-art model that they're making available right now.
They also have their Gemini 2.0 Flash Thinking model, which does some of that extra thinking at inference time, like we're seeing from things like o1 and o3 and DeepSeek. The two Gemini Flash models both have a 1-million-token context window, while the Pro has a 2-million-token context window. And pretty soon, 2.0 Flash and Pro are going to be able to output audio and images. We recently had Logan Kilpatrick from Google on the Next Wave podcast. His episode comes out next week, and he goes into some details about what's actually coming with these Gemini models, and it's pretty exciting.

But the biggest deal around these new Gemini models is not necessarily how powerful they are, it's how inexpensive they are to use. If you're a developer and you want to build with the Gemini APIs, Gemini 2.0 Flash costs 10 cents per million input tokens. To put that into context, if you're using the GPT-4o API, it costs $10 per million tokens. That's quite a bit of savings there. OpenAI's o1 model: $60 per million tokens. If you're looking at Claude 3.5 Sonnet: $15 per million tokens. And even Haiku, Anthropic's smallest model, is still $4 per million tokens. Now, those prices I've been quoting for the other models are output prices, so comparing apples to apples: Gemini 2.0 Flash is actually $0.40 per million output tokens, compared to $15 per million output tokens for Sonnet. Still quite a bit of a price break. So if you're a developer and you want to build with a large language model API and you want to do it as inexpensively as possible, Gemini 2.0 is definitely your route right now.
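Just to make that price gap concrete, here's a quick back-of-the-napkin calculation using the per-million-output-token prices I just quoted. The 50-million-token monthly workload is completely made up for illustration, so treat this as a rough sketch, not real billing numbers.

```python
# Rough cost comparison using the per-million-output-token prices quoted above.
# The 50M-tokens-per-month workload is a made-up example, not a real benchmark.

PRICES_PER_MILLION_OUTPUT_TOKENS = {
    "gemini-2.0-flash": 0.40,
    "gpt-4o": 10.00,
    "claude-3.5-sonnet": 15.00,
    "o1": 60.00,
}

monthly_output_tokens = 50_000_000  # hypothetical app generating 50M tokens a month

for model, price in PRICES_PER_MILLION_OUTPUT_TOKENS.items():
    cost = monthly_output_tokens / 1_000_000 * price
    print(f"{model:20s} ~${cost:,.2f}/month")

# gemini-2.0-flash     ~$20.00/month
# gpt-4o               ~$500.00/month
# claude-3.5-sonnet    ~$750.00/month
# o1                   ~$3,000.00/month
```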
Now, when it comes to actually comparing these models against other models, there are really two places to look, especially if you're confused by all of the benchmarks that they share. First is the LM Arena, where people are basically given a blind test: they enter a prompt, they get two outputs, they pick which of the two outputs they like better, and that's how the ranking is generated. And if we look at this, based on the blind testing, Gemini 2.0 Flash Thinking is the number-one-ranked overall model right now, just based on users giving it an input, not knowing that they're getting Gemini back, and then voting Gemini as the best response. Gemini 2.0 Pro, the new one that came out on February 5th, came in second place, followed by GPT-4o, DeepSeek R1, and then Gemini 2.0 Flash. So Gemini holds three out of the top five spots right now. And the new model from OpenAI, o3-mini, falls all the way down in tenth place.

The other place I like to look at models is this site, OpenRouter, which I actually learned about from Logan Kilpatrick when he was on our podcast the other day. This is looking at which models are actually getting the most use. So this isn't based on voting; this is just based on what is actually getting used right now. It's watching the API traffic and going, okay, these models are what most people are using. And on the day of this recording, which is Thursday, February 6th, Claude Sonnet holds the top two spots in the all-categories section, but then Google's Gemini models hold the third and fourth spots. So when it comes to usage right now, Claude and Gemini are being used more than OpenAI's APIs, at least today. If we look at the top this week, it's a very similar story: Claude, Claude, Gemini, Gemini, followed by OpenAI. Top this month: Claude, Claude, Gemini, Gemini. And then if we look at trending, to see which models people are switching over to and starting to use more and more of recently, look at this number one right here: Gemini Flash 2.0. This is the most trending model right now, and that's across all categories. If we look at programming, we've got Claude, Claude, Flash. If we look at technology, we've got Claude followed by Flash. And if we look at translation, the previous-generation Gemini Flash model is number one. Kind of a cool resource to keep tabs on which AI models are actually getting the most use in the moment.

But Google had some other news this week for developers that use their API. You can now use the Imagen 3 AI image generator from their API. And we've looked at Imagen 3 quite a bit in previous videos. It is a really, really solid model. In fact, if we jump back over to the arena here, click on the leaderboard, and check out the text-to-image leaderboard, we can actually see that Imagen 3, the model from Google, is ranked the top model. And these are ranked in the same way: you're given two images for a prompt, you pick which one you like best, and it doesn't tell you which model made the one you picked until after you've picked it. That's how this stuff gets ranked. And Imagen 3 is number one, followed by Recraft, followed by Ideogram, and so on down the line, with Stable Diffusion falling in last. But if you're a developer and you want to use this model within your workflow, you now have access to it. If you're not a developer and you want to play with Imagen 3, the best way to do it is over in Google Labs at labs.google/fx/tools/image-fx. That ImageFX tool is actually using the Imagen 3 model, and it's totally free to use and play around with right now as well.

Oh, and I was just talking about Gemini for all that time and forgot to mention: you can use all the Gemini models for free as well. If you go on over to aistudio.google.com, over on the right you have the option to select from various models to use, and this is totally free right now. You've got Gemini 2.0 Flash, Flash-Lite, Pro Experimental, and Flash Thinking, plus all of their previous models and their open models, all available for you to play with and enter prompts. And we can see we've got over a million tokens of context window as well. All totally free to use over at aistudio.google.com. Another cool resource for you.
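And if you'd rather hit those same Gemini models from code instead of the AI Studio playground, here's a minimal sketch of what that can look like with Google's generative AI Python SDK. Treat the exact model ID string as my assumption based on the names shown in AI Studio, so double-check it there, and you'll need your own API key.

```python
# Minimal sketch: calling Gemini 2.0 Flash through the google-generativeai Python SDK.
# pip install google-generativeai
# The "gemini-2.0-flash" model ID is an assumption; confirm the current ID in AI Studio.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])  # key generated in AI Studio

model = genai.GenerativeModel("gemini-2.0-flash")
response = model.generate_content("Summarize this week's AI news in three bullet points.")
print(response.text)
```

Same models you can poke at for free in the AI Studio playground; the SDK route is just for when you want them inside your own app.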
You know that feeling when you're trying to get help from a company and you end up stuck in this endless loop of "let me transfer you to the right person" or "we'll get back to you in 24 to 48 hours"? And even when you finally do get help, they still need to do manual things like check your order status or schedule a meeting with you. It's like watching somebody use Internet Explorer, but in 2025. Painfully unnecessary. And that's why for this video, I partnered with Chatbase. They're revolutionizing the customer experience with AI agents that don't just chat, they actually do things for you. We're talking AI that can instantly book meetings through Calendly, create support tickets with Zendesk, or even check real-time data from your own systems. What makes this really cool is that these AI agents can be trained on your own business data. They're not just giving generic responses, they're providing personalized help that actually makes sense. They can do things on behalf of your business for your customers, things like upgrading their subscription, adding members to a dashboard, and checking the limits of their plan, all based on your custom workflows. Plus, they work across all your channels, from your website to WhatsApp to Slack, so your customers can get help wherever they are. And the best part: you don't need to be a coding wizard to set this up. Chatbase has made it super simple to set up and manage these AI agents. No matter what your technical level is, anybody can set these things up. If you want to see how Chatbase can transform your customer experience from "please wait" to "it's done," check out the link in the description. Trust me, your customers are going to thank you for this one. And thank you so much to Chatbase for sponsoring this video.

There is a little bit of darker news to come out of Google this week. Google actually removed some terms: they removed their pledge to not use AI for weapons and surveillance. In fact, I believe that when Google acquired DeepMind, one of the terms that DeepMind put in place around that acquisition was that Google had to agree not to use AI for weapons and surveillance. That was part of the agreement with DeepMind. So it is very interesting that they've sort of flipped on this stance. And Demis Hassabis, the CEO of DeepMind, seems to be on board with this change, because he did say there's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. Demis Hassabis and Mustafa Suleyman, two of DeepMind's original co-founders, put that rule in place at the beginning: you can't use this AI for weapons. Mustafa's out, he's over at Microsoft now, and it seems like Demis has sort of changed his thinking on it.

All right, so OpenAI and Google were the big stories of the week, but there was a handful of smaller but still interesting AI news this week. So now I'm going to rapid-fire a whole bunch of other little things that happened in the world of AI, starting with the fact that Mistral AI, a competitor to OpenAI out of France, launched a new version of Le Chat. Now, they've had Le Chat for a while. It's a free chatbot that you can find over at chat.mistral.ai, and it can do a lot of the same things you would get out of ChatGPT: things like searching the web, generating images, a code interpreter, and it even has a canvas mode where it'll put any sort of code or writing inside a canvas, very similar to ChatGPT. They do now have a Pro plan, which I believe is 15 bucks a month, that gives you even more access and higher daily message limits. But even the free version is still pretty dang impressive.

The most impressive part about Mistral's Le Chat is how fast it is. People have been claiming they're getting a thousand tokens per second of output when they ask it a question, which is mind-blowingly fast. In fact, I came across this video from Val on X here, who is an intern over at Mistral, and, well, just check this out. They give it the prompt: generate me a kawaii calculator in Canvas. And we can see that it actually generated everything in near real time. That calculator that popped up, that happened in real time.
I didn't speed up this video. They didn't speed up their video. They gave it the prompt to generate the kawaii calculator, and it generated the code and showed an example of it. They started giving it some extra prompts, like now make it nature themed, and within seconds it created a nature-themed calculator. And it's all practically instant. That's how fast it is. And we can see Val here says: no, this video is not sped up. Genuinely mind-blowing. And it's available to all users right now, so it's available for free.

Just to give it my own test, I'm going to make sure I have the canvas turned on, and I'm going to type: generate a kawaii calculator. And we'll see how fast this is. I'm not going to speed this up at all; this is my own test here. And when I press the button, I will keep on talking. It wrote all that code practically instantly, and that was super fast. Now, it created it as HTML, so let's just double-check to see how it did. And here's the calculator that it generated. Let's actually see if it works. Nine plus nine equals 18. 18 times two equals 36. So this calculator actually works. It's pink and yellow, and it generated it in maybe two seconds. Mind-blowingly fast. Again, totally free at chat.mistral.ai.

There's a little bit of news out of Anthropic this week. They gave us an area to try to jailbreak Claude and see if we can get it to output dangerous responses. There are eight levels that it goes through, and they actually have a bounty where they'll pay you if you manage to jailbreak all eight questions. So far, nobody's managed to do it. But there is a little bit of other news around Anthropic: Lyft is starting to use Anthropic's Claude for their customer service, claiming that it reduces the average resolution time for a request by 87%. So if you're using Lyft and you run into issues and you try to contact customer support, it's actually using Claude to help you get through whatever issue you've got.

We also learned that Amazon Alexa has an event coming up. On February 26th, Amazon's holding an event, and a spokesperson said the event is Alexa-focused but then declined to elaborate. So really, all we know is that they have an event coming up, they're going to be talking about Alexa, and most people believe that they're going to roll out Alexa with a much smarter AI. Amazon has said in the past that the AI in Alexa is going to be powered by Anthropic's Claude. So that's the announcement that everybody's expecting on February 26th: that Alexa is now going to use Claude, and it's not gonna be as dumb as it used to be.

GitHub Copilot now has what they call agent mode. It says here that the new agent mode is capable of iterating on its own code, recognizing errors, and fixing them automatically. It can suggest terminal commands and ask you to execute them. It also analyzes runtime errors with self-healing capabilities. So it sounds to me like it's using one of these reasoning models, where it will generate code, double-check its own code, and then give you the code. It says: in agent mode, Copilot will iterate on not just its own output, but the results of that output. It will iterate until it has completed all subtasks required to complete your prompt. Instead of performing just the task you requested, Copilot now has the ability to infer additional tasks that were not specified but are also necessary for the primary request to work. Even better, it can catch its own errors, freeing you up from having to copy-paste from the terminal back into chat.
I've personally never used GitHub Copilot; I've been much more on the Cursor train myself, but this sounds really, really handy, having it double-check its own work and pull stuff in from the terminal when something's not working properly. Those just sound like great quality-of-life updates that I imagine tools like Cursor will get as well.

And since we've mentioned Cursor, I want to point this out real quick, because I found it fascinating: Cursor is literally the fastest-growing SaaS company in the history of SaaS. SaaS is software as a service. And if we look at this chart here, we can see Cursor's growth curve. It basically took one year to get to $100 million in annual recurring revenue. We can see Wise, Deel, Together AI, CoreWeave, OpenAI, and DocuSign and all of their respective charts. It took DocuSign 10 years to get to $100 million in annual recurring revenue. It took Cursor only one year. That's pretty mind-blowing, how quickly Cursor is growing, and I think it comes down to the fact that tools like Cursor make it so literally anybody on the planet can write little pieces of software for themselves. I've used it multiple times to solve little problems in my own workflows. Like, I wanted a tool to quickly convert files from any image format into a JPEG. I used Cursor to create that app in about 15 minutes. And now I have a simple workflow where whenever I grab an image from any app or download one from the internet or anything, I don't have to open it up in a photos app and save it as a new file. I literally just drag and drop it over a box, and it converts it for me automatically. It saves me so much time. And I've created a handful of little tools like that because of tools like Cursor, and I don't really know how to code. So I see why it's growing so quickly. It totally democratized the ability to make simple apps.
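And just to show how small a tool like that really is, here's a rough sketch of the core of an any-image-to-JPEG converter in Python. To be clear, this is not the actual app Cursor wrote for me, just an illustration of the kind of thing it can scaffold, and it assumes you have the Pillow library installed.

```python
# Rough sketch of an any-format-to-JPEG converter, the kind of tiny utility
# a tool like Cursor can scaffold in minutes. Not my actual app; illustration only.
# pip install pillow
import sys
from pathlib import Path

from PIL import Image


def convert_to_jpeg(source: str) -> Path:
    """Convert one image file (PNG, WebP, BMP, etc.) to a JPEG saved next to it."""
    src = Path(source)
    dst = src.with_suffix(".jpg")
    img = Image.open(src)
    # JPEG has no alpha channel, so flatten transparency down to plain RGB first.
    if img.mode in ("RGBA", "P", "LA"):
        img = img.convert("RGB")
    img.save(dst, "JPEG", quality=90)
    return dst


if __name__ == "__main__":
    # Usage: python convert.py photo.png screenshot.webp
    for arg in sys.argv[1:]:
        print(f"Saved {convert_to_jpeg(arg)}")
```

A drag-and-drop front end is just a thin layer you'd wrap around logic like this.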
All right, let's move on to the creative side of AI, because there have been a handful of updates in that world as well, including the fact that if you use Grok inside of X, you can now actually edit images. If I head over to Grok inside of X here and tell it to generate an image of a wolf howling at the moon, we get four images here. Now, if I click on one of these images, there's a new button that says edit with Grok. I can click on this button and describe what I want to change in the image. I'll say: make the sky a red color. We'll give it that prompt, and you can see we get pretty much the same image composition back, but now the sky is a reddish color.

Pika Labs rolled out a couple of new features this week, including Pikascenes, which allows you to upload an image of your pet, and it will actually turn that image into an AI-generated video of your pet doing something interesting. They also rolled out this new feature called Pikadditions. This is where you can give it a real-life video plus an image, and it will take what was in that image and add it to your video. Like this rabbit we see here, or this person opening their laundry where an octopus climbs out. Here's a video of a woman with curlers in her hair, and then a lion pushes her aside with curlers in his hair. So you can see here's people playing basketball, here's an image of a bear, and it puts the bear in. Somebody opening a door, somebody doing yoga with a train behind them. So you can basically give it any video plus an image, and it will figure out how to work that image into the video. It's called Pikadditions, and this little baby popping out of the trash can is probably my favorite scene I've seen from it.

But if we head on over to pika.art, we can see down at the bottom we have a few new buttons, like Pikascenes and Pikadditions. So if we do Pikascenes, I can throw in a picture of my dog here, give it a prompt like, the pet is flying on a private jet, and here's the video we got out of it, with my dog flying on a private plane. It actually looks pretty good. It kind of looks like him, other than the fact that his back legs don't move properly when he's walking around. It actually got the face and head looking pretty accurate, honestly. So that was Pikascenes.

Now let's try Pikadditions. You'll notice you can upload a video and you can upload an image, and it pre-fills the prompt in for you. It says: add this to my video based on the current actions in the original video. Come up with a natural and engaging way to fit the object into the video. So I uploaded a quick video of me talking in front of my camera here, and I threw in an image of a wolf howling at the moon. Let's just see what happens when we try to blend those two together. And my first attempt did not work at all. It kind of made my face look a little more AI-generated, but it didn't add the wolf howling at the moon. Let's add a donut and see what happens. And, well, this time I can definitely see it added a donut in. Let's see what it looks like. So it pretty much just put the donut in the corner of the video. I guess you probably need a video with a little more action going on than me just, you know, talking into the camera like this. But that's Pikadditions, something fun to go play around with.

But I also wanted to show off what came out of Topaz Labs this week, a company that makes a really, really good upscaler. I use it to upscale images all the time. I use it to upscale video footage all the time. They actually just released what they call Project Starlight, which is the first-ever diffusion model for video restoration. So it takes old, low-quality videos and turns them into high-resolution videos. Let's take a peek at this video down here of a Muhammad Ali fight. You can see on the left how grainy and pixelated it is, and the one on the right is the upscaled version that used Project Starlight. It's pretty impressive how much higher quality it is. Here's another example where we can see a side-by-side of what looks like it was recorded on a VHS tape next to something that looks quite a bit better quality. It looks like it's in early access right now, and you've got to like and comment to get access, so I'll link it up in the description if you want to get involved.

There's also some really cool research that came out this week, like OmniHuman-1, which is basically a tool where you can give it a single image and an audio file, and it will combine them to make a deepfake. So check this out. Here's a roughly 10-second clip of one. The first frame was the image that they uploaded, and then the audio you're going to hear was the audio they uploaded, and it turned it into a deepfake of that person talking: Give people something to believe in and they will move from you and me, us. And here's another one with Einstein: What would art be like without emotions? It would be empty. What would our lives be like without emotions? They would be empty of values.
So we're at a point now where you can just have an image of a person and a soundbite from that person, which could even be made in ElevenLabs, so it could be something that they never actually said, and you can combine those two and make a deepfake with them. That's OmniHuman-1.

And then there's also this one called VideoJAM, which is a new way of training video models that makes them so much more coherent. Like, we can see gymnastics, what it looks like for most video models, on the left here, and then if we look at it again with the person on the right, it actually looks like somebody doing gymnastics. It figured out the proper physics and how people should move. Here's another one of somebody doing a weird gymnastics-rings move that doesn't look right, but if we go back and look at the updated version on the right, you can see it actually figured out how to make it look correct. And this is, again, a new way of training these AI video models so they have a much better understanding of physics and how things should look. You're going to see this in a lot of other video models. You'll probably see this in Kling and Runway and Hailuo and Pika and all these other tools, because with this research, they can actually attach this to their existing technology.

Now, I'm not going to go too deep into these research papers, because I actually did a video earlier this week called Seven Insane AI Video Breakthroughs You Must See. I talk about those two papers that I just showed you, as well as five other papers that I find really fascinating, that have come out within the last couple of weeks. So check that out if you want to dive deeper into all of this cool AI research that's coming out that maybe we don't have access to yet, but that's within weeks, maybe months, of being publicly available for anybody to get their hands on.

And a couple of last real quick things. There was a new bill introduced, I believe in the Senate, that wants to make it illegal to download DeepSeek, with a penalty of up to 20 years in prison. Now, I don't think this thing's ever going to get passed, but there are people in the government who want to make it illegal to use some of these open-source models. It's something to be aware of.

And in the final bit of news that I'll share this week, the Beatles won a Grammy this week for a song that was assisted with AI. Their song Now and Then used AI to clean up some old vocals that John Lennon recorded, and they put together a song with these AI-remastered vocals. And, well, the song went on to win a Grammy. So that's pretty cool.

And that's what I've got for you today. Again, another week with tons of news. I've mentioned it's not gonna slow down anytime soon. It didn't slow down this week, and I doubt it's gonna slow down next week. So if you want to make sure you stay looped in on all of the latest AI news, I make a breakdown video every single Friday where I try to cover all of the news that I think is worth talking about from the past week in the world of AI. I also like to create tutorials and talk about different tools and research coming out in the AI world. So if that's the kind of stuff you're into, give this video a like and maybe consider subscribing to this channel. That'll make sure more stuff like this shows up in your YouTube feed. I've also been doing some experimenting with the channel. You'll probably notice I've been testing new thumbnail styles, new titling styles, new video styles, and things like that. So if you have feedback for me, I'd love to hear it. Put it in the comments.
I really, really appreciate anything you guys put in the comments. Any genuinely useful feedback is really, really valuable to me.

And finally, before I go, I should remind you to check out futuretools.io. This is the site that I built where I share all the cool AI tools I come across. I add tons of new tools every single day. There are just so many AI tools out there, so I made it super easy to filter them and find the exact tool you're looking for, for your needs. I even put Matt's Picks on there so you can find the tools that I think are the most interesting right now. I keep the AI news page up to date on a daily basis, and I keep it simple and basic: just a list of all the important AI news that's happening. And if you want to get the latest news and the coolest tools mailed to you twice a week, join the free newsletter. I'll keep you looped in directly in your inbox. And by joining the free newsletter, you also get access to the AI Income Database, which is a little database I've been building out of cool ways to make money using the various AI tools that are available. Again, it's all free over at futuretools.io.

Thank you so much again for tuning in. Thank you for nerding out with me today. Thanks so much to Chatbase for sponsoring this video. Really appreciate all of you for tuning in, and I will hopefully see you in the next one. Bye-bye.