Transcript for:
AI News Update

We have so much AI news to go over today: Apple, Tesla, Elon Musk, even Nintendo has updates. Let's get right into it.

Our first story: if you remember, just last week I reported that Apple was joining OpenAI's board as an observer, and now it turns out they're not. Not only that, Microsoft and Apple are both leaving the OpenAI board because of increased scrutiny. Here's the Bloomberg article: "Microsoft, Apple Drop OpenAI Board Plans as Scrutiny Grows." Apple is also not joining as expected after the positions were scrapped; global watchdogs are looking into Big Tech's clout in AI. What they're thinking is that if things get too incestuous between all of the big tech companies and OpenAI, which is the direction it seems to be heading, it's going to be seen as a monopoly, and yeah, that is going to happen if they continue down this path. So I'm actually kind of glad this is happening. We need more competition, we need more companies able to compete. I want Microsoft building their own AI, I want Apple building their own AI, and I want OpenAI, of course, building their own. So this, I think, might actually be a good thing.

Now, as a reminder, Microsoft already owns 49% of OpenAI, and with that investment they were able to get a board observer seat, but now they're leaving. Apple, on the other hand, as I mentioned last week, isn't paying OpenAI anything, and they were still supposed to get that board seat, so the dynamics at play there are kind of interesting. As it says in the article, regulators in the US and Europe had expressed concerns about Microsoft's sway over OpenAI, applying pressure on one of the world's most valuable companies to show that it's keeping the relationship at arm's length; Microsoft has already integrated OpenAI services into its Windows and Copilot AI platforms. The article goes on: still, the board resignation is unlikely to resolve the US Federal Trade Commission's concerns about Microsoft's partnership with OpenAI, a source familiar with the agency's thinking said.

Now, just a few months ago, Microsoft pretty slyly did what amounted to an acquisition of Inflection AI, which was run by Mustafa Suleyman, one of the absolute leaders in the AI industry. Here's why it's so interesting: rather than doing an acquisition and likely having it blocked by different regulatory authorities, Microsoft basically just hired him, paid off the investors, and gutted the entity that was Inflection AI while leaving it there running. So they didn't actually do an acquisition, and they're probably thinking that's enough to fly under the radar, but I don't think anybody is stupid enough not to see what actually happened. And now Mustafa is CEO of Microsoft's AI division. So Microsoft probably already sensed the FTC and the EU starting to creep up behind them and figured, okay, we'll just leave the board of OpenAI. Maybe this will help them, maybe it won't, but I think it's another smart move from Microsoft.

And it seems Microsoft isn't being singled out: according to the article, the UK is also looking into Amazon.com's $4 billion collaboration with AI company Anthropic. Anthropic is the company behind Claude, which is in my opinion the best AI model out there; Claude 3.5 Sonnet is absolutely stunning. I'm typically one to say the government can keep their hands off of private industry, but in this case I want to see more competition, and I think if everybody's jumping into the same company, or a couple of companies, it's going to reduce competition greatly.

And speaking of Anthropic and Claude, they seem to be releasing multiple new features every single week, so let's talk about two new features they released just in the last week. First, you can now fine-tune Claude 3 Haiku, something you've been able to do with GPT-3.5 Turbo and GPT-4 for quite a while now, so this is kind of them just catching up. Here they say their fastest and most cost-effective model is now fine-tunable in Amazon Bedrock. Fine-tuning is something we've covered a lot on this channel; it's basically the ability to guide the AI into responding the way you want it to respond. Here they say: in testing, we tuned Haiku to act as a moderator on an example data set; fine-tuning improved classification accuracy from 81% to 99% and reduced prompt tokens per query by 89%. It is available today in preview.
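If you want to try that yourself, here's a minimal sketch of what kicking off a Haiku customization job on Amazon Bedrock looks like with boto3. To be clear, this is my own illustration, not Anthropic's published code: the S3 URIs, IAM role ARN, and hyperparameter values are placeholders, and you should check the Bedrock docs for the exact base-model identifier and the JSONL training-data format it expects.

```python
# Minimal sketch (my own illustration): starting a Claude 3 Haiku fine-tuning
# ("model customization") job on Amazon Bedrock with boto3. The S3 URIs, IAM
# role ARN, and hyperparameter values are placeholders; check the Bedrock docs
# for the exact base-model identifier and the JSONL training-data format.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

job = bedrock.create_model_customization_job(
    jobName="haiku-moderation-tune",
    customModelName="haiku-moderator-v1",
    roleArn="arn:aws:iam::123456789012:role/BedrockFineTuneRole",  # placeholder role
    baseModelIdentifier="anthropic.claude-3-haiku-20240307-v1:0",  # verify in the docs
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://my-bucket/moderation-train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/haiku-tune-output/"},
    # Hyperparameter names and allowed values vary by base model -- check the docs.
    hyperParameters={"epochCount": "2", "batchSize": "8", "learningRateMultiplier": "1.0"},
)

# Poll until the job finishes; the resulting custom model is then invoked via
# provisioned throughput, like any other Bedrock model.
status = bedrock.get_model_customization_job(jobIdentifier=job["jobArn"])["status"]
print(job["jobArn"], status)
```

Once the job completes, the custom model shows up in your Bedrock account and, as I understand it, is served through provisioned throughput rather than the on-demand endpoint.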
Next from Anthropic: Claude's Artifacts feature, which we touched on last week. You are now able to publish Artifacts online. As a quick reminder, Artifacts are an incredible UI element: as you're asking Claude for anything lengthy, like a piece of code, an SVG image, or a graph, it'll actually open up a separate window within that same tab and output everything there, which keeps the conversation portion of the window nice and clean. And now you can publish and share your Artifacts.

Next, a seemingly small update from Amazon, but cool nonetheless: they are launching a new Echo. From their Twitter post: "Say hello to the all-new Echo Spot, the latest addition to our lineup of Alexa devices." I think it looks really cool, it comes in multiple colors, and it is just $44.99, and that's something I like about all of Amazon's devices: they tend to be really cheap compared to Apple or Google. It has better visuals and improved audio quality. It would be much nicer if they incorporated large language models into Alexa, but I don't think they've really done that yet; more on that in a future video.

Next, Ollama has released Ollama 0.2, and it includes concurrency, now enabled by default. A few notes about it: Ollama can now serve multiple requests at the same time, using only a little bit of additional memory for each request. This enables use cases such as handling multiple chat sessions, hosting code-completion LLMs for your team, processing different parts of a document simultaneously, and running multiple agents at the same time; that last one is the best, in my opinion. And not only that, you can run multiple agents with different models. If you've watched this channel at all over the last year, you know that is something I am extremely bullish on: having multiple agents working together, powered by different, smaller expert models aimed at specific use cases. Ollama now supports loading different models at the same time, which improves RAG, agents, and running large and small models side by side. I've been thinking a ton about open-source models, smaller models, more narrowly focused models, and how to get the most out of them, whether you're using techniques like RouteLLM or mixture-of-agents, and this just helps with those use cases. So congrats to Ollama on their recent release.
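To make the concurrency point concrete, here's a small sketch using the official ollama Python client and asyncio to hit two different local models at the same time. The model names are just examples I picked; whatever models you use need to be pulled locally first, and the Ollama 0.2 server handles the parallel requests and keeps both models loaded.

```python
# Small sketch of Ollama 0.2's concurrency: one local server answering requests
# for two different models at the same time. Model names are examples only and
# must be pulled first (e.g. `ollama pull llama3`, `ollama pull phi3`).
import asyncio
from ollama import AsyncClient

async def ask(model: str, prompt: str) -> str:
    client = AsyncClient()  # defaults to the local server at http://localhost:11434
    response = await client.chat(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return f"[{model}] {response['message']['content']}"

async def main() -> None:
    # Two models, two prompts, dispatched concurrently -- e.g. a small utility
    # model working alongside a larger generalist, as in a multi-agent setup.
    results = await asyncio.gather(
        ask("llama3", "In one sentence, why does concurrency matter for a local LLM server?"),
        ask("phi3", "Write a one-line docstring for a function that merges two sorted lists."),
    )
    for line in results:
        print(line)

asyncio.run(main())
```

The same pattern scales to the multi-agent setups I keep talking about: each agent gets its own smaller model, and one local server juggles all of them.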
Next, it seems Elon Musk is going all-in on building his own server farms. Elon Musk's xAI and Oracle have ended talks on expanding their server rental agreement; talks broke down due to disagreements over timelines and power supply concerns. xAI is now building its own AI data center in Memphis, Tennessee. The potential deal was estimated to be worth $10 billion, and the company is purchasing Nvidia chips from Dell and Supermicro for this project. xAI already rents about 16,000 Nvidia chips from Oracle, and Elon plans to build a supercomputer with 100,000 Nvidia GPUs for training Grok 3. Oracle, meanwhile, has signed a deal with Microsoft to provide Nvidia-powered servers for OpenAI, and so on.

Then Elon Musk actually commented on it: xAI contracted for 24k H100s from Oracle, and Grok 2 trained on those. Grok 2 is going through fine-tuning and bug fixes; he already said that, and he just doubled down that it's probably ready to release next month, although we can all agree Elon Musk has not been the best at timelines. But that's okay, he's doing great work anyway. xAI is building the 100,000-H100 system itself for fastest time to completion, aiming to begin training later this month; it will be the most powerful training cluster in the world by a large margin. By the way, for those of you who don't know, xAI has not been around very long: OpenAI has been around for nearly a decade, while xAI has been around for about a year, so the fact that they're competing so closely with the frontier models is amazing, and hopefully they continue to open-source their models. Continuing the quote: the reason we decided to do the 100,000 H100s, and the next major system internally, was that our fundamental competitiveness depends on being faster than any other AI company; this is the only way to catch up. Oracle is a great company, and there is another company that shows promise that is also involved in that OpenAI GB200 cluster, but when our fate depends on being the fastest by far, we must have our own hands on the steering wheel rather than be a backseat driver. That seems to be Elon Musk's strategy across the board: it's what he did at Tesla, which is completely vertically integrated, and it's what they did at SpaceX, which is almost completely vertically integrated. That is his business strategy, and it has obviously worked extremely well for him over the years. So I'm excited for Grok 2, even more excited for Grok 3, and you know I'm going to be testing it.

Next, it seems that even VCs are trying to get their hands on GPUs nowadays, for the simple reason of being more competitive in AI startup investing deals: venture capital firm a16z is stashing GPUs, including Nvidia's, to win AI deals. Apparently they have purchased a ton of H100s and are basically just storing them, and whoever they invest in, they will give those H100s to that company. You might be thinking, well, why doesn't the company just buy them? It turns out it's not actually that easy to get your hands on vast amounts of computing power, so the fact that they have that ammo ready to go, ready to hand to whatever AI company they invest in, is going to make them even more competitive in these VC deals. The venture capital firm, which has $42 billion in assets under management, has rented out the GPUs to many of its portfolio companies, and it also plans to expand the venture, which it describes as "oxygen" (and of course, oxygen is what you need to grow a fire), to more than 200,000 GPUs.
Next, from OpenAI: they have announced a partnership with Los Alamos National Laboratory on bioscience research. They are developing evaluations to understand how multimodal AI models can be used safely by scientists in laboratory settings. So OpenAI and Los Alamos National Laboratory, one of the United States' leading national laboratories, are working together to study how artificial intelligence can be used safely by scientists in laboratory settings to advance bioscientific research. They are working on an evaluation study to assess how frontier models like GPT-4o can assist humans with performing tasks in a physical laboratory setting through multimodal capabilities like vision and voice. This includes biological safety evaluations for GPT-4o and its currently unreleased real-time voice systems, to understand how they could be used to support research in bioscience. Unreleased! When are you going to release that, OpenAI? Please let us know. You can read more about it in the blog post, which goes quite deep, so I'll link that in the description below if you want to learn more.

And not all AI news is good news this week. It turns out Nintendo is drawing a line in the sand and saying they're not using generative AI in their video games, which is counter to what a lot of other video game makers are saying; they want to incorporate AI, and in fact most companies want to incorporate AI in some sense. But according to PC Gamer, Nintendo becomes the biggest company in the games industry, and maybe the world, to say "no thank you" to generative AI. I'm not too surprised: they create incredible IP, they have characters that have been developed over years and years and are cultural icons, and they don't want AI to just generate some character they're going to be using in games; they want everything to be their own. Now, if you've watched this channel at all, you know how I feel about AI and video games. I truly believe video games in the future are going to be dynamically created in real time for an audience of one, and that is going to be powered by AI. A few weeks ago I showed off a couple of examples of that: something that looked like the video game Call of Duty, but it turned out it was simply AI creating it, and it looked pretty good. Of course, it was only about 20 to 30 seconds long, but we're in the first inning of what is possible. So imagine 5 to 10 years from now. I truly believe, and Nvidia's Jensen Huang also says this, that video games, and really all content, are going to be generated in the moment based on exactly what an audience of one wants to see or play. In an investor call, the CEO of Nintendo said: in the game industry, AI-like technology has long been used to control enemy character movement, so I believe that game development and AI technology have always been closely related. Generative AI, which has been a hot topic recently, can be more creative in its use, but I also recognize that it has issues with intellectual property rights. Our company has had the know-how to create optimal gaming experiences for our customers for decades; while we are flexible in responding to technological developments, we would like to continue to deliver value that is unique to us and cannot be created simply by technology alone. I kind of love that, and you know what, good for them. Let's have multiple different approaches to creating the best of anything.

Next, a video was released by Google DeepMind: Gemini 1.5 Pro's long context window is now able to help robots navigate the world. Let's take a look. These clips are a thread of their latest experiments. The first one shows a robot navigating around; it's a very short clip, but the navigation is being driven by Gemini 1.5. They go on to say that a limited context length makes it a challenge for many AI models to recall environments: basically, as a robot navigates around an environment, the model can't keep adding everything it sees into the prompt, but with a million-token context window it can, and that's what we're seeing here. "We took the robots on a tour of specific areas in a real-world setting, highlighting key places to recall, such as Lewis's desk or a temporary desk area, and then we asked it to lead us to those locations," and it seems that it was able to do that; I'm showing you a sped-up clip of that right now. Next: "We provided more multimodal instruction, such as map sketches on a whiteboard, audio requests referencing places from the tour, and visual cues like a box of toys. With these acting as inputs, the robot could carry out various actions for different people," and that is what we're seeing right here; again, I sped it up. You can read all about it in this paper, and I'll drop that in the description below.
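DeepMind hasn't released the robot stack itself, but the core long-context idea is easy to sketch against the public Gemini API. Here's a rough, hypothetical example using the google-generativeai Python package: stuff an entire "tour" (narration plus frames) into one prompt and then ask a navigation question. The frame filenames and place descriptions are made up, and a real system would feed far more frames and wire the answer into the robot's actual controls.

```python
# Rough sketch of the long-context idea behind the demo (not DeepMind's actual
# robot stack): put an entire "tour" of an environment -- narration plus many
# frames -- into a single Gemini 1.5 Pro prompt, then ask a navigation question.
# The frame filenames and place descriptions below are hypothetical.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-pro")

# With a ~1M-token context window, hundreds of tour frames can fit into one
# request, which is what lets the model "remember" the whole environment
# instead of forgetting earlier rooms as the prompt grows.
narration = [
    "This is the kitchen area, near the entrance.",
    "This is Lewis's desk, by the window.",
    "This is the temporary desk area, next to the whiteboard.",
]
tour = []
for i, caption in enumerate(narration):
    tour.append(caption)
    tour.append(Image.open(f"tour_frame_{i}.jpg"))  # hypothetical tour frames

question = (
    "Based on the tour above, give step-by-step directions from the entrance "
    "to the temporary desk area."
)
response = model.generate_content(tour + [question])
print(response.text)
```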
And last, from Tesla: Full Self-Driving 12.4.2. This is a really cool video of it actually anticipating a pedestrian crossing the street. It did not wait for the pedestrian to start crossing; it slowed down as it saw that the pedestrian looked likely to cross suddenly. And as you can see, it wasn't as if the pedestrian was leaning into the street, obviously about to cross; he was just moving around, and just from those motions the Tesla slowed down. So, a very cool update from Tesla.

That's it for today. If you enjoyed this video, please consider giving it a like and subscribing, and I'll see you in the next one.