Transcript for:

We have so many topics to talk about today. There have been a number of announcements and innovations just in the last few days, so let's get right into it. First, Mark Zuckerberg, during an interview, talked about the future of Meta AI glasses, and if you watch my channel you already know I love my Meta Ray-Ban glasses. So where are we headed? Full interactive holograms? Let's watch this clip.

"There are basically going to be three different products. There's going to be the display-less glasses that just do AI and can capture content, and you can listen to audiobooks and music and take phone calls and all that. Then I think there's going to be another step up. It's not going to be a full holographic display, in the sense that it's not going to be your full field of view as a hologram, but I think you'll get maybe a little bit of a heads-up display, and I think that's going to create a bunch of interesting use cases too. You'll be able to get notifications, you'll be able to message with people, you'll be able to message with AI, answer any questions, and not just have it speak to you but also be able to see, which is higher bandwidth. So that's going to be exciting. And then I think there's going to be the most premium version, which is the full field of view. We're having this conversation in the future, and I'm a hologram sitting on your living room couch, or you're here with me. It's not just a video call; it's not just a screen where you're there as a hologram. We'd be able to interact. So if you want to play cards, okay, here's a deck of cards, it's a hologram, and we're interacting. I think that's going to be pretty wild, and that is still where I think it's all going."

And it seems like Meta is not alone in this thinking. Apple was recently reported to have given up on Apple Vision Pro 2 and is instead focusing on a much cheaper,
much smaller, and obviously less capable, version. I bought the Apple Vision Pro and, to be honest, I barely use it now, but maybe I'll do a follow-up review of the Apple Vision Pro soon and explain why.

All right, sticking with Meta: apparently the biggest version of Llama 3 might be right around the corner. In fact, some WhatsApp beta users on Android already have access to it. This is a screenshot, and you can see right here: Llama 3 70B (default) and Llama 3 405B (preview), "better for more complex prompts." Apparently a number of people have posted this kind of screenshot showing that they've had at least limited access to Llama 3 405B. When it drops I'm going to be testing it extensively, and I really think Llama 3 405B is going to get open source really close to frontier models. But with 405 billion parameters it might not fit on any local machine even with heavy quantization (at 4 bits per parameter, 405 billion parameters is still roughly 200 GB of weights alone), but we shall see.

Next, the text-to-video platform Gen-3 by Runway ML is now publicly available to anyone who wants to use it. It is Sora-quality text-to-video. Except, where is Sora? But now we don't have to wait anymore; we can use Gen-3. Check out some of these awesome demo videos. [Music]

I'm really happy to see Runway ML release Gen-3 Alpha to the public. We're going to see some incredible creations, and it really is only a matter of time until we have entire episodes of television, entire movies, completely created by AI. And what I'm most excited about, which I just talked about in a previous news video: AI video games.

The next topic I want to talk about, and something that is a bit foreign to me, is AI role-playing. Apparently there are a ton of people who are completely addicted to Character.ai. I saw a few people post about it on Twitter, and I'm trying to find news articles, but simply searching character.
ai addiction Reddit turns up a bunch of different threads. This one is from a whole year ago: "Character AI is tons of fun but can be scarily addictive." "This app is causing addictions to people." "I'm addicted to Character AI, I feel terrible." "Why do so many people genuinely get addicted to Character.ai?"

AI role-playing is just not something that has piqued my interest in the past. Maybe it's just not for me; maybe I'm not in that demographic. It tends to be younger people, maybe teenagers, who are getting into this, but it does seem to be a real problem. So is this social media 2.0? Is this another technological innovation that is going to harm younger people's mental health? It sure seems to be heading in that direction. Plus, and this is something I've talked about in the past, we have declining birth rates and loneliness epidemics throughout the entire world. AI as a solution to loneliness probably makes sense in the short term and will probably help a lot of people, but in the long run, if people continue to get disconnected from other real human beings, this is only going to exacerbate the issue. So I'm going to keep my eye on this trend. I want to know what you think: have you used Character AI? If so, what do you use it for? Do you find it addictive? What do you think about this in general? Drop your comments below.

Next, Beff Jezos posted a clip where he says models training on each other's data is like a "human centipede effect." I think what he's saying is that you basically get derivative data, and the models don't really become higher quality by training on the outputs of other, similar-quality models. That makes sense, and I've heard Sam Altman say that rather than using more and more data, these models should be able to do more and more with the existing data. Then Elon Musk hinted at Grok 2: "Sadly, quite true. It takes a lot of work to purge LLMs from the internet training data. Grok 2," which comes out in August (so that's a reveal), "will be a
giant improvement in this regard." Now, obviously xAI's Grok has a ton of unique data that nobody else has through the X platform (formerly Twitter), but it takes a tremendous effort to curate all of the garbage on X into something that is actually usable and high quality enough for an AI training set. One thing we do know is that it's now coming in August, and yes, Elon Musk hasn't always delivered on his timelines, but I'm excited nonetheless.

All right, next, and potentially the creepiest video I've seen on the internet in a long time: Brett Adcock posted that University of Tokyo researchers have developed a new technique to bind living human skin to robotic surfaces. The technique involves using perforation-type anchors inspired by human skin ligaments: "another step closer to more adaptable humanoids." This is gross. It is absolutely gross to me, I don't know why. If this is what humanoid robots will look like in the future, I say no thank you. But this is maybe a baby step towards humanoid robots actually having human skin, kind of like the Terminator. With the eyeballs, and it being forced to smile, this is the stuff of nightmares.

Thank you to OnDemand for sponsoring this portion of the video. OnDemand is the fast track to deploying real-world AI applications; let me tell you about it. First, they have Bring Your Own Model: with OnDemand you can easily deploy your own custom models, like Llama 3, directly from Hugging Face. This allows you to easily integrate the models you know and love into your applications. They also offer Bring Your Own Inference: if you have models hosted on an external service like Amazon SageMaker, you can easily integrate those into your applications as well, allowing you to use your preferred inference service. They also have a plugin marketplace where you can explore a vast array of AI plugins to enhance your projects or share your own with the community. You can create, distribute, and monetize your own plugins
through OnDemand, fostering collaboration and innovation. Next, I want to talk about Playground. This is your testing ground, where you can configure, debug, and see live outputs from your plugins and models. It's an invaluable tool for fine-tuning your AI applications before deploying them to production, and you can export your generative AI applications into any programming language. And finally, Automations combine AI agents and plugins to manage complex workflows effortlessly, which means you can automate repetitive tasks and focus on what matters most: innovating and developing AI applications. So check out OnDemand; they have a suite of powerful features for AI development. Whether you're a developer, a researcher, or an AI enthusiast like myself, OnDemand has something for you. Thanks again to OnDemand. Now back to the video.

Next, Brett Adcock also gave an update about Figure: "News: Figure + BMW Group's Spartanburg plant. Fully autonomous, AI-driven, vision-model neural networks for all grasps. More details below." I'm going to play this video, sped up and with some parts cut out, but it really shows some pretty cool progress for Figure's robot.

Next, Andrej Karpathy, one of the leading minds in artificial intelligence, posted this over the weekend: "100% fully Software 2.0 computer. Just a single neural net and no classical software at all. Device inputs (audio, video, touch, etc.) directly feed into a neural net, and its outputs directly display as audio/video on the speaker/screen. That's it." This is something I've been talking about for a while now. I've described it as input going to a large language model, direct to compute, and I really do think that is the future of the computing model. There is no need for software at that point; there is no need for applications at that point. And what I've also been saying is that there's likely not going to be a need for developers at that point, because if everything is done with large language models, the only thing we really need
is engineers who can control and guide the large language models. But at a certain point, if you read Situational Awareness, we're going to have AGI that can actually do the research and guide the AI itself. So I really do believe this is the future of software development and deployment, and apparently Andrej thinks so as well. Someone commented that it would be even nicer if the neural net could self-improve, and Andrej responded: "In-context learning is learning. Then you bunch up things, and the next time your computer goes to sleep, it fine-tunes on it. So cool." Somebody asked, "So it can't run Doom?" and Andrej said it could probably come very close to simulating Doom if you ask nicely. And that's the point: you can ask it to do anything, and it will dynamically create the "software," or at least a simulation of the software, in that moment. That does seem like the inevitable conclusion of software in general.

All right, next: LMSYS.org released RouteLLM, which seems really cool and akin to agents with an orchestration layer on top. This is an approach I really like. Not all prompts require the highest-level frontier model to answer; in fact, that's kind of what we saw with mixture-of-agents, and a lot of prompts can actually be handled by much smaller, much more efficient, and much less expensive models. Let's read what this says: "Not all questions need GPT-4. We introduce RouteLLM, a routing framework based on human preference data that directs simple queries to a cheaper model." This is fantastic. "With data augmentation techniques, RouteLLM achieves cost reductions of over 85% on MT Bench and 45% on MMLU while maintaining 95% of GPT-4 performance." Fantastic. A theme I continue to hear from the biggest chip makers in the world is that they really want to push the majority of AI compute down to edge devices, so your phone or your computer. And of course I love that. I love local models, and I love having the privacy, the security, and the low latency that
you can achieve by running everything locally. And now, with things like RouteLLM, you can do even more of the compute on-device, cheaply and super efficiently, while still achieving GPT-4-level performance, and only when you absolutely need a large frontier cloud-based model do you send the query off to one. "Comparing against commercial offerings (Martian and Unify AI) on MT Bench, we achieve the same performance while being over 40% cheaper." "Our model, datasets, and code for serving and evaluating LLM routers are all open source. We are excited to see what the community will build on top." They released the code and a bunch of other information about it, and I plan on making a full video on this topic specifically.

So that's it for today. If you enjoyed this video, please consider giving a like and subscribing, and I'll see you in the next one.
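The routing idea behind RouteLLM can be sketched in a few lines: score each incoming query for difficulty, and only escalate to the expensive frontier model when the score crosses a threshold. Note this is a toy illustration, not RouteLLM's actual implementation: the real project trains its router on human preference data, while the scorer, model names, and threshold below are all hypothetical placeholders I made up for the sketch.

```python
# Minimal sketch of an LLM router, in the spirit of RouteLLM.
# The heuristic scorer and the model-tier names are illustrative
# assumptions, not part of the actual RouteLLM project.

def complexity_score(query: str) -> float:
    """Crude proxy for query difficulty, roughly in [0, 1]."""
    hard_markers = ("prove", "derive", "step by step", "optimize", "debug")
    length_part = min(len(query.split()) / 100.0, 0.5)  # longer query -> harder
    marker_part = 0.5 if any(m in query.lower() for m in hard_markers) else 0.0
    return length_part + marker_part

def route(query: str, threshold: float = 0.4) -> str:
    """Pick a model tier: cheap local model by default, frontier model when hard."""
    if complexity_score(query) >= threshold:
        return "strong-frontier-model"
    return "cheap-local-model"

print(route("What is the capital of France?"))
# -> cheap-local-model
print(route("Prove that the algorithm runs in O(n log n) and optimize it."))
# -> strong-frontier-model
```

In a real deployment the heuristic would be replaced by a learned classifier, and the threshold becomes the knob that trades cost against quality, which is exactly the 85%-cheaper-at-95%-of-GPT-4 trade-off the RouteLLM announcement describes.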