Elon Musk discussed AI's rapid growth and his predictions for 2025, highlighting both excitement and concerns. I think at this point it's obvious to everyone that AI is advancing at a very rapid pace. Yes.
You can see it with the new capabilities that come out every month, or sometimes every week. You know, AI at this point can write a better essay than probably 90%, maybe 95%, of all humans. On an essay on any given subject, AI right now can... can beat the vast majority of humans.
If you say, draw an image, draw a picture, it can. If you try, say, Midjourney, whose aesthetics are incredible, it will create incredible images that are better than, again, like 90% of artists, and that's objectively the case. And it'll do it immediately, like 30 seconds later. We're also starting to see AI movies.
We're starting to see short films with AI, AI music creation. And the rate at which we're increasing AI compute is exponential, hyper-exponential. So there's dramatically more AI compute coming online every month.
You know, the amount of AI compute coming online is increasing at, I don't know, roughly 500% a year. And that's likely to continue for several years. And then the sophistication of the AI algorithms is also improving. So we're bringing online a massive amount of AI compute and also improving the efficiency of that compute and what the AI software can do. He mentioned situational awareness, pointing to faster AI development due to stronger compute power and more efficient algorithms.
Quantitative and qualitative. I think next year you'll be able to ask AI... Musk believes AI will soon create short films, with tools like Meta's MovieGen already showing promise.
If the optimization target is a probability of misgendering of zero, then: no humans, no misgendering. Problem solved.
Now we're back to Arthur C. Clarke, who's pretty prescient. Yes. So that's why the most important thing is to have a maximally truth-seeking AI.
That's why I started xAI, and that's our goal with Grok. People will point out cases where Grok gets it wrong, but we try to correct it as quickly as possible. He aims for Grok, his chatbot, to be the best AI this year, despite tough competition from Gemini, Claude, GPT-5, and others.
And, yeah. That's what we want, obviously. Is there any way, I guess, to... set limits on the decisions that machines can make that affect human lives, and to make certain that there's some trigger in the system that inserts a human being into the decision-making process?
Well, look, the reality of what's happening, whether one likes it or not, is that we're building super-intelligent AIs, hyper-intelligent, more intelligent than we can comprehend. Yes. So I would liken this to having a super-genius child that you know is going to be much, much smarter than you. What can you do? You can instill good values in how you raise that child.
So even though you know it's going to be far smarter than you, you can make sure it's got good values, philanthropic values, good morals, that it's honest and productive, that kind of thing. As for controlling it, at the end of the day, I don't think we'll be able to control it. So I think the best we can do is make sure it grows up well. He reiterated concerns about super-intelligent AI, recalling his early open-source vision when founding OpenAI.
You've been saying that for a long time. Yes, I've been saying it for a long time. Are you still as worried about it as you seemed to be two years ago when I asked you about it?
Well, look, my guess is it's... 80% likely to be good, maybe 90. So you can think of the glass as 80% full. It's probably going to be great.
There's some chance of annihilation. And you'd say the chance of annihilation is 20%? 10 to 20%, something like that. Musk noted super-intelligent AI can't be fully controlled.
He monitors p(doom), the probability that AI might endanger humanity. How concerned is Sam Altman about annihilation, do you think? I think, in reality, he's not concerned about it. I don't trust OpenAI. I mean, you know...
I started that company as a non-profit, open source. Yes. The "Open" in OpenAI, I named the company. Yeah. OpenAI, as in open source.
And it is now extremely closed source and maximizing profit. So I don't understand how you actually go from being an open source non-profit to a closed source for maximum profit organization. I'm missing...
Well, but Sam Altman got rich, though, didn't he? At various points, he's claimed not to be getting rich. But he's claimed many things that were false. And now apparently he's going to get $10 billion of stock or something like that.
So I don't trust Sam Altman. And I don't think we want to have the most powerful AI in the world controlled by someone who is not trustworthy. And sorry, I just don't. That seems like a fair concern. Yeah.
But you don't think, as someone who knows him and has dealt with him, that he is worried about the possibility this could get out of control and hurt people? He will say those words. He criticized OpenAI's shift in mission and expressed dissatisfaction with current leadership there. If it became clear to the rest of us that AI was out of control and posed a threat to humanity, would there be any way to stop it? I hope so. I mean, if you have multiple AIs, hopefully the AIs that are pro-human are stronger than the AIs that are not.
Battle the AIs? Yeah. I mean, that is how it is with, say, chess these days. The AI chess programs are vastly better than any human.
and incomprehensibly better, meaning we can't even understand why they made a given move. Why they're so good, right. We don't even know why it made the move.
In fact, some of the moves will seem like blunders, but then turn out to lead to checkmate. For a while, the best human chess players paired with the best computers could beat a computer alone. Then it got to the point where adding a human just made things worse, and now it's just AI versus AI, computer program versus computer program. That's where things are headed in general: make sure we instill good values in the AI. What's everyone going to do for a living? I mean, in a benign AI scenario, that is probably the biggest challenge: how do you find meaning if AI is better than you at everything? That's the benign scenario. That's the good news. Well, yeah, but I guess a lot of people like the idea of retiring.
Really? Are you looking forward to it? No, not me.
I'd like to do useful things. Don't you think it's a universal desire? It's not universal; there are certainly many people I know who prefer to be retired. They prefer not to have responsibilities and to engage in leisure activities. And we're on the cusp of this.
Finally... Musk addressed how AI could disrupt jobs and economics, suggesting ideas like universal basic income as society adapts. Not meaningfully regulating AI, which will eliminate the purpose for most people's lives and could kill us all, is a little weird.
Yeah, I think we should have something above nothing, in that range. Yeah. But why don't we?
I don't know. You know, all the way back during the Obama presidency, I met with Obama many times, but usually in group settings. In the one one-on-one meeting I had with Obama in the Oval Office, I said, look, the one thing we really need to do is set up the beginnings of an AI regulatory agency. And it can start with insight, where you don't just come out shooting from the hip, throwing out regulations. You start with insight: the AI regulatory committee simply goes in to understand what all the companies are doing, and then proposes rules that all the AI companies agree to follow, just like sports teams in the NFL.
You have proposed rules for football that everyone agrees to follow that make the game better. So that's the way to do it. But nothing came of it. What did he say when you said that to him?
He seemed to kind of agree, but also people didn't realize where AI was headed at that time. So AI seemed like something super-futuristic, sci-fi, basically. So, like, I'm telling you, this is going to be smarter than the smartest human.
And my predictions are coming absolutely true. And so we need to have some insight here just to make sure that these companies aren't cutting corners, doing dangerous things. But Google kind of controlled the White House at that time and they did not want any regulatory... Well, that's it.