Transcript for: Concerns Over AI Investment Viability

All right. Investors are suddenly getting very concerned that AI isn't making any serious money. Oh man, I really hope this is a good one. I really hope this is a good article. Oh my gosh, this is exceptionally, exceptionally bright. My eyeballs. Apparently Futurism is not on my Dark Reader. Oh, that's why. Okay. Oh, nope. Here we go. Investors are suddenly getting very concerned that AI isn't making any serious money. We sense that Wall Street is growing increasingly skeptical. Do you think that might have anything to do with the fact that OpenAI might go broke? There's a bunch of articles that are saying that it's going bankrupt because it's projecting a $5 billion loss. That's a lot of billions. Imagine spending $5 billion in a year on losses while making money. Negative $5 billion. That's a hell of a miss, man. Oh, look at... Is this sad, Google, man? I lost 5 billion. More, please. Is this why Sam Altman tried to raise $7 trillion? He's just like, I need $7 million now. All right. An increasing number of Silicon Valley investors and Wall Street analysts are starting to ring the alarm bells over the countless billions of dollars being invested in AI, an overconfidence they warn could result in a massive bubble. Could? Is. I believe the keyword you're looking for is the word is. Wow. You don't say. A bubble, you say? Ain't no way. Gollin' bros. Dude, it is actually wild that it's taken this long for this article to come out. I'm surprised that anybody that's used the AIs hasn't kind of figured that out, that all it's gonna do is destroy customer service. Like, that's what AI is gonna do. It's gonna destroy customer service in the worst possible way. I just need one, just bro, just one more ad, bro. Bro, just, just bro. Bro, just one more ad. Just one, bro, just one more ad. The Washington Post reports that investment bankers are singing a dramatically different tune than last year, a period marked by tremendous hype surrounding AI, and are instead starting to become wary of big tech's ability to actually turn the tech into a profitable business. Yeah, that's good. That's good. At least, hey, I actually really do hope this happens. And what I mean by that is I really do hope that we can get to the point where investors realize that AI isn't the greatest thing ever, and then AI can actually assume its proper role, meaning that there's something nice about AI. And I think that there's potentially a future where things are very, very nice due to it. I think there is a reasonable future in which coding could get a lot different due to AI. But I think that there's also a very reasonable future where we've already hit about as good as it can get, and coding will never get that much better due to AI. And so I'm just simply happy that it could get better. And I'm happy to see investors finally, I'm hoping at least, you know, again, mainstream media, can't trust mainstream media, I'm just hoping that this is actually real and that investors are doing that. And hopefully some cold water is poured on the industry right now, because it is out of control right now. It is absolutely out of control. Despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful, Goldman Sachs' most senior stock analyst Jim Covello wrote in a report last month. Oh my gosh. Is somebody actually using, whoa, whoa, whoa, whoa, whoa, hold on. We actually do, we might just have cold water hitting the hype train. Let's go.
Overbuilding things the world doesn't have a use for, or is not ready for, typically ends badly. Let's go. Typically, let's go. Let's go, Goldman Sachs. That's great. Earlier this week, Google released its second quarter earnings, failing to impress investors with razor-thin profit margins and surging costs related to training AI models. Capital expenditures are surging far past where the company had previously been spending, as the Wall Street Journal reports. Remember, for those that don't remember, there's been all these reports about Google kind of going into this, like, wartime mode when it comes to AI, and effectively focusing the entire company on making these large LLMs, because it's kind of been this, you know, this utopia that everyone's been promising, and so now all these companies are just going completely into it. And obviously, Google's LLM, not very good, for those that can't remember. It does ban C++ for below-18-year-olds because, of course, C++ is not-safe-for-work content. I'm talking about that malloc and free, boys. Little unsafe. Earlier this week, Google released its second quarter earnings, failing to impress investors with razor-thin profit margins and surging costs related to training AI models. Capital expenditures. I can imagine that training AI models has to be just insanely expensive, because my little bit of work on MLPs and RBFs already ate hours upon hours of training just for the smallest little tiny network. Capital expenditures are surging far past what the company had been spending previously, as the Wall Street Journal reports. This year's total spend is expected to surpass $49 billion, or 84% higher than what the company has averaged over the last five years. I mean, it makes sense. The thing is, this is going to totally pay off if Google is right. If Google... $49 billion is an insane amount of money. It is an insane amount of money, but at the same time, it's also Google, which makes billions upon billions of dollars on their one thing that has ever made money, which is ads. And you've got to remember that Google's whole company rides on this one single thing. And ChatGipity, which is also losing billions per year, is creating their own search engine, right? You must remember, like, how important those things are to Google. If Google loses search, Google goes buh-bye. Therefore, their only attempt is to go into this next future of so-called guided search with LLMs and all that, right? However, Google CEO Sundar Pichai is holding onto his guns, arguing that the risk of under-investing is dramatically greater than the risk of over-investing for us here. I generally agree with him. I think that he's completely right, because if this guy over-invests and it turns out LLMs are just a big pile of crap and they don't actually end up being anything more than, you know, customer service replacements, then yeah, they're fine. They get to hold onto their search. But if it is true that in the future search goes away, Google would cease to exist, which is kind of a weird world to be in, right? It's a weird world to be in, thinking that we could exist without Google. Please let Google die. You know, as much as I dislike Google and all the things they do, including traffic shaping and all that, like, I get that what they do is pretty terrible. I still remember that one video. I forgot who it was.
I want to say it was Steven Crowder who did this video years back, and I tested it and verified it and actually saw, yes, it did work: you could not find this content in the US, but if you went and searched the same thing with, like, a Brazilian VPN, it would show up as the first result. And I remember seeing that, realizing that Google truly is playing God with our search. And because I always heard those things, I didn't really believe those things actually existed. And so I was like, ain't no way that that is real. And then it was real. And I was like, holy cow, they're actually doing this, like they're actually doing this. Like, whether or not I'm going to watch this guy's content doesn't matter. I actually did not want Google taking that choice away from me. And that's what I thought was really kind of terrifying about the whole thing. Right? I thought that was kind of terrifying about the whole thing. Google is in control. Yeah. But it's also weird. Like, remember, I was born in '86. I grew up in the 90s. I used the Yahoo search engine, in which you would search up anything and you'd get the worst possible results. And so then you went and started using, what's it called, Google when it came out, and it actually was, like, amazing. Crowder is a nut. Yes. No, yeah. Crowder is a nut. This is not a defense of Crowder, but what he was saying is 100% right, which is the dangerous part, which is what makes me, like, dislike Google. But at the same time, it'd be weird to actually, like, live in a world without Google, because it's been such a thing in my life forever. Like, you've got to be able to separate out how you feel about something versus just, like, your entire life having something, right? You can be a nut job and still be right about stuff. Yeah. Broken clocks are right twice a day. Exactly. Uh, I'm a, you know, just be a normal person here. I feel like you're not paying the, let's see, hold on. I feel like we're not paying the real price of AI. I don't think authors of art have been paid if their work has been used to train AI. The cost of, uh, the cost of hosting and research, uh, is probably insanely high. I wonder what will happen. Yeah, there's still an entire world that exists where AI is gonna go through its next phase, which is gonna be, I mean, how many of these places are gonna foresee lawsuits and problems and all that as things actually take off, as jobs actually are getting lost, right? There's, like, a whole bunch of stuff that they haven't really talked about yet, which is gonna be very interesting. Not investing to be, oh my goodness, why can't I scroll? Did I actually... frozen. I actually got too many ads on that page. I couldn't even scroll. Not investing to be at the front here has much more significant downsides, Pichai told investors on Tuesday. Sure. The tech giant has a lot of cash to burn, but seeing any returns on those $49 billion will likely prove far more difficult. With the AI market clogged with products that are still mostly free, well, I mean, they're not free. They're there to capture you, to make you pay at some point, right? Like, that's the whole point. We all should know this by now. How many people have to be bamboozled before you realize this is how it works every single time? Things are free, you start using them, you build your life, your company, your whatever around free, and you are either the product or they're about to rug pull you. There's no other possibility, okay? Just look at PlanetScale.
Free, free, free, free, free, goodbye, right? This happens every single time. The tech costs a lot to run, but isn't bringing in much cash. As such, Google is facing similar challenges to Microsoft and Meta, which are committing vast swaths of their available resources to AI without a clear monetization plan. I feel like OpenAI has a pretty clear monetization plan and Microsoft's invested into it. What do you mean, Microsoft? Microsoft is trying to make the greatest operating system ever to take pictures of what you're doing every 30 seconds and then provide you search, which is going to make your wife pretty surprised when she uses her search. According to Barclays analysts, investors are expected to pour $60 billion a year into developing AI models, enough to develop 12,000 products roughly the same size as OpenAI's ChatGipity. But whether the world needs 12,000 ChatGipity chatbots remains dubious at best. Now, this is a great thing. This is a great call-out. Do we want more ChatGipities? Do you really want to go to Wendy's and get it to program a Python script for you? Like, do we need this? Do we need more of it? More bots, please. We do expect lots of new services, but probably not 12,000 of them, Barclays analysts wrote in a note, as quoted by the WaPo. We sense that Wall Street is growing increasingly skeptical. I wonder what that's gonna do to NVIDIA, right? Because if NVIDIA is really looked at as kind of, like, the bottom of the AI pyramid, then what's gonna happen right here? Wendy's nuts, I know, that was the joke. For quite some time now, experts have voiced concerns over growing AI bubbles, comparing it to the dot-com crisis of the late 90s. This wouldn't be a shocker to me. I don't think this would be a shocker to me. NVIDIA gonna crawl back to gamers? They're gonna just crawl back to gamers. Let's go. This actually wouldn't be surprising to me, honestly, if this happened, if we had a huge crash. Because for those that don't remember, back then there was Qualcomm and Books-A-Million. If you don't remember Books-A-Million: Books-A-Million, BAM, right? Books-A-Million. Like, this was a real thing online. This thing was valued at, like, billions upon billions at one point or whatever. It was some outrageous valuation back in the day. And back in the day, in 1999 slash 2000, this right here had a PE ratio that was outrageous. And back in 1999 slash 2000, PE ratios meant a lot more, right? You didn't invest into a company that had a 35x PE ratio. It was considered, you know, value investing was kind of the way the market worked. Now value investing doesn't quite work that way, right? CrowdStrike can literally disable the world for a moment and still have a PE ratio of 495, right? You know, it's just, we live in a different world. And so is it possible that we're just valuing all these AI companies outrageously and now it's going to just kind of, like, explode? Possible. Yeah. Capital continues to pour into the AI sector with very little attention being paid to company fundamentals, tech analyst Richard Windsor wrote in a March research note, I'm sure, let's see, in a sure sign that when the music stops, there will not be many chairs available. I think the thing that's missing here is that I think that the hope of AI is being poured into this. I think there's this belief that AI is going to be able to be everything that we think it could be. And so the extrapolation is that in five years, AI is gonna be just outrageously amazing. There's no proof that it is.
But there's a belief that it is. And I think that's why all this money is being poured into it. Because if it does end up that way, holy cow, these companies that invested into it have an impassable moat. Right? Like, you will have Microsoft. You'll have Google. You'll have Meta. And they'll have moats which no company can ever, like, take down. Like, they'll literally own the AI space. And nobody will be able to compete with them. And they will be this, like, forever group of kings who are impossible to take down. It's just FOMO. Of course it's just FOMO. But the problem is, there is a possibility of missing out. There does exist at least one future in which that can happen. You think that, okay, so the reason why AI hype is going away is because of censored AI models. It is actually something really big and not a bubble. Prove me wrong. I think it's the other way: you have to prove why that's the reason the hype is going away. I don't think AI censorship is the reason AI models are going away. It leads to techno-feudalism. Yeah, I think that this is the possibility. If AI really is what it is, we'll be in techno-feudalism. If AI isn't what it is, we're going to see some companies kind of go under due to it. Like, OpenAI might not exist in a few years. Big companies love censorship. They do. Big tech companies love censorship, for sure. Absolutely. No one argues that one. I think anybody, anybody that is objective will say that that is true. All right. This is precisely what happened with the internet in 1999, autonomous driving in 2017, and now generative AI in 2024. In a blog post last month, Sequoia Capital partner David Cahn argued that the entire tech industry would need to generate $600 billion a year to remain viable. It's a lot of billions, man. While speculative frenzies are a part of technology and so they are not something to be afraid of, he argued, AI tech is anything but a get-rich-quick scheme. It requires you to be rich to get richer. Right? Like, that is it. You can only get richer by, or you can only be rich to create this. There is no possibility that you create this as somebody who doesn't have the money. Insert Devin here. The wild thing is that Devin still is attempting to exist, despite, despite NeetCode literally taking it out back and shooting that thing like Old Yeller. A super interesting article my boss sent me could describe Facebook open sourcing all their shit. So much competition in the space, I can't see big moats standing. You don't think so? You don't think big moats? To me, the big moat is the money, right? That's, that's my assumption, is that... oh my gosh, I can't scroll on that page. All right. Dude, this is so wild. All right. But whether AI chatbots like ChatGipity will ever turn into cash printing machines to recoup these enormous investments remains to be seen. As of right now, the cost of training these AI models and keeping them running is massively outpacing revenue. I think the thing is, ever since Google happened, and Meta and Amazon, there's this notion that there is this next big thing that's going to create the next trillion dollar company. And everybody wants to be a part of the trillion dollar company. And so this is why this happens, because there is a possibility it will happen, right? How much time does the tech industry have to stop bleeding cash as it pours money into tech? Well, again, there's such massive money-making per employee as it is that they're able to do this.
If recent reports are to be believed, OpenAI may lose $5 billion this year and run out of cash within the next 12 months, barring future cash injections. An early warning sign that smaller companies already struggling to compete with big tech may be snuffed out before too long. It's a cool article. I really think this is very, very interesting. This is a super, super interesting article. Like, it is super, super interesting that we actually are facing this potential future. Remember, because I've always said, in the early 80s there was like an uptick in AI, or in the early 70s and 60s, there was a big uptick in AI. Then from the 80s through the 90s, there was just, like, nothing. AI effectively wintered super hard. And then there were some changes in models and things, how they're going. And by the time I started getting my master's in AI in 2010, AI was pretty interesting, but it was still considered to be in the winter time. And then... What, 2024? All of a sudden it was, or 2022, when Copilot came out. I still remember my first time using Copilot. And I did something really, really simple. It was something like this. It looked something like what? I did, if player one wins, then I did, like, console.log, you know, P1 wins. And then as I did the next one, oh, sorry, there's no formatting or autocomplete here, I did else if, and then it actually autocompleted player two wins. And I was like, wow, that's pretty cool. That's pretty cool. Look at that. It autocompleted that. And then it autocompleted this. And I remember being like, whoa, it autocompleted that. And then it autocompleted this entire else statement. Else. And I remember seeing that and thinking, that is wild. I cannot believe it just did that. I cannot believe I just watched an AI auto-complete a tie portion. And it was so cool in 2022. And I think all of us saw that. Copilot was amazing when it auto-completed API keys. Those were the good days. But I remember seeing that, and just seeing that, like, my fascination and everyone's fascination in AI went up at pretty much a vertical slope. But the real question is, does that vertical slope continue? Or does that vertical slope go right back and we're done? And so this is where we're at. We're at one of two points. Vertical slope continues, it dies, or it grows fast enough that it will be relevant within the next couple years. Hey, hate it now. Said goodbye to Copilot last fall. Yeah, see, I've disliked Copilot lately as well. I've said goodbye to Copilot in the last three months because it no longer, it just doesn't, it's not quite the same. And so anyway, that's kind of, like, where we're at. I'd say one of the most interesting points in the tech curve of all time. We're at a point in which we don't know what's going to happen to some of these really big bets of tech. And we're either going to see just a gigantic economic implosion around AI, or the positive explosion where it actually does get better. And I think when ChatGipity 5 comes out, if ChatGipity 5 comes out and it isn't amazingly better, I think that's the end of OpenAI. I think ChatGipity 5 has to be, it has to be something that is just incredible. It explodes or it implodes. There's only one, one of the two is going to happen. When will it come out? Yeah, I don't know.
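For anyone who wants to picture that first Copilot moment, here's a minimal sketch of roughly what the described snippet would look like; the variable names, scores, and strings are assumptions, since only the shape of the code was described on stream.

```typescript
// A rough reconstruction of the autocomplete moment described above.
// Only the first branch would have been typed by hand; Copilot suggested the
// "else if" and the final else (the "tie portion"). All names here are made up.
const p1Score = 3;
const p2Score = 3;
const playerOneWins = p1Score > p2Score;
const playerTwoWins = p2Score > p1Score;

if (playerOneWins) {
  console.log("P1 wins");   // typed by hand
} else if (playerTwoWins) { // suggested by Copilot
  console.log("P2 wins");   // suggested by Copilot
} else {                    // suggested by Copilot: the tie case
  console.log("Tie");
}
```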
Hype cycles describe this process pretty well. Yes, they do. I was watching a podcast between Andrej Karpathy and Lex Fridman. Karpathy mentioned that he hopes to see us start to use LLMs as a second brain to help us break down some unsolved problem barriers. Yeah, I know. But again, that's just not how these things work currently. The problem is that with ChatGipity, remember, you have input that goes in, a gigantic cloud of an LLM, and out comes the predicted result, right? And this is based on things that it's seen, right? It just guesses what comes next. It would be amazing. It's just probability magic, right? If A goes in and the probability of B coming out is the most likely, that's the one we're choosing. It's not doing anything else. How I like to think about it is, here's the problem, here's how I think about it in my head. Here's the set of possible problems that I'm working on. Here's what ChatGPT has been trained on and trained on. If I look for something right here, which, I've just successfully drawn a dick. If I look for something right here at the bottom of the shaft, the problem is, the bottom of the shaft does not have training data. Therefore, ChatGipity cannot seem to solve this problem with any sort of goodness, right? It just starts guessing stuff and there's nothing you can do about it. It just sucks, right? And so bottom-of-the-shaft guessing seems to be where it falls apart. And so when people say we're gonna use these LLMs to, like, solve these novel problems, I just keep thinking of the bottom-of-the-shaft problem, right? Like, this just doesn't work, right? Yeah, that's it. Uh, the bottom of the shaft, not as good as the top. Well, the top is not even in the problem, right? Okay. We're doing, instead of just the tip, it's just the base. Anyways, there you go.
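To make that "probability magic" point a bit more concrete, here's a minimal sketch of greedy next-token selection; the tiny lookup table stands in for the model, and every token and probability in it is invented purely for illustration.

```typescript
// Toy illustration of "if A goes in, pick the most likely B" next-token guessing.
// The "model" is just a hand-made table of next-token probabilities; a real LLM
// computes a distribution like this with a neural network, at every position.
const nextTokenProbs: Record<string, Record<string, number>> = {
  "player one": { wins: 0.7, loses: 0.2, ties: 0.1 },
  "the answer": { is: 0.9, was: 0.08, being: 0.02 },
};

function predictNext(context: string): string {
  const dist = nextTokenProbs[context];
  if (!dist) {
    // The "bottom of the shaft" case: a context the model never saw in training,
    // so it can only guess, and the guesses are junk.
    return "<no training data, start guessing>";
  }
  // Greedy decoding: choose whichever token has the highest probability.
  return Object.entries(dist).reduce((best, cur) => (cur[1] > best[1] ? cur : best))[0];
}

console.log(predictNext("player one"));          // "wins"
console.log(predictNext("novel research idea")); // "<no training data, start guessing>"
```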
I'm excited about this. I'm excited to see where the future goes. Part of me hopes for the downfall and part of me hopes for the upside. The only problem I worry about, the only thing I see, is that AI succeeding will almost necessitate, I think, a lot of humanity losing. That's kind of, that's like my big hesitation with it. I do think techno-feudalism will happen afterwards. I think there will be a permanent kind of echelon that is created, where if you don't have, if you aren't able to afford your own world, then that's that. I know you disagree, but what's, what's the alternative? People being good enough to just give AI out for free and everybody is able to use it? Like, I just... people often don't just give things nicely away. Now, I know, to be completely fair, we used to work an insane amount, right? Just an absolute insane amount of work. And then with the industrial revolution, we do work less. We all know what's not happening. Oh yeah, we all know that's not happening. I know. The thing is, I just, I'm black-pilled on a massive change in technology being great for people. This is what George Hotz wants to do, right? Make private AIs available to everyone. But who trains it? Right? Who pays for the training? That's my problem: who pays for the training? Who makes it? Yes. So if somebody, if somebody has to pay for the training, that means you have to buy it. Like, you have to buy it. I somehow, I doubt, I doubt this, whispering Ravens. Um, you do know that the average, like, okay. So I grew up in Montana, and in Bozeman I hung out with a lot of potato farmers. Like, you do understand that for about three months straight, he'd work from sunup till sundown. All of his tractors had duct-taped lights on them, and he would light up his field at night and work nonstop throughout the evening, from like 6 a.m. to 12 p.m. every single day, seven days a week, for three months straight. Okay, this notion that farmers had this nice, easy, simple life is just absurd. The name is ThePrimeagen.