Transcript for:
Insights from Sam Altman's AGI Blog Post

Welcome back to the AI Daily Brief. Over the weekend, OpenAI CEO Sam Altman dropped a new blog post called Three Observations. As is always the case when Altman writes a blog, the whole AI world started discussing it. What we're going to do today is go through and look at the key parts of the piece, I'll read a few excerpts, and then we're going to discuss five observations that I have about these three observations. I think one big thing that stands out to me is that we are all still potentially radically underestimating the scale of the change that we are about to experience. This piece is all about AGI, artificial general intelligence. As a funny aside, they make sure to note that they're not using AGI in any way that would change their relationship with Microsoft. He actually had to put that in a footnote. But the point is, it's all about the world after AGI and what it's going to mean. The first part is a bit of poetry, just talking about the steady march of human innovation and how it's always led to new prosperity. And then we get to the sub-theme, which actually isn't one of the three points, but which is woven throughout. Sam basically says, in some sense, AGI is just another tool in this ever-taller scaffolding of human progress we're building together. In another sense, it is the beginning of something for which it's hard not to say, this time it's different. The economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy our families, and can fully realize our creative potential. And here's the key line: in a decade, perhaps, everyone on earth will be capable of accomplishing more than the most impactful person can today. We'll come back to that, but first let's get into what he states are his three observations, specifically about the economics of AI. The first is that the intelligence of an AI model roughly equals the log of the resources used to train and run it. He identifies those resources as training compute, data, and inference compute. And he says it appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude. Number two, the cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. This is our own version of Moore's Law and Jevons' Paradox all in one. The specific example he points to is the token cost of GPT-4 dropping 150 times between early 2023 and mid-2024. Last observation: the socioeconomic value of linearly increasing intelligence is super exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future, i.e. this is not a bubble. Then he goes on to talk about some specifics. A key piece of this is agents, which he says will eventually feel like virtual co-workers, and you get the feeling that eventually is later this year. Pointing to software engineering agents, he said they will eventually be capable of doing most things a software engineer at a top company with a few years of experience could do, for tasks up to a couple days long. Importantly, he says it will not have the biggest new ideas and will require lots of human supervision and direction. Still, he writes, imagine it as a real but relatively junior virtual co-worker. Now imagine 1,000 of them, or 1 million of them. Now imagine such agents in every field of knowledge work.
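To make those first two observations concrete, here is a minimal back-of-the-envelope sketch. The scale factor in the log relationship is purely illustrative, and the annualized figure assumes the early-2023-to-mid-2024 window in the GPT-4 example is roughly 18 months; neither number comes from the post itself.

```python
import math

# Observation 1 (as Altman states it): a model's intelligence roughly equals the
# log of the resources (training compute, data, inference compute) used on it.
# The scale factor k is purely illustrative, not a number from the post.
def capability(resources: float, k: float = 1.0) -> float:
    return k * math.log10(resources)

# Each 10x increase in resources buys only a fixed additive gain in capability,
# which is why "spend arbitrary amounts of money" yields continuous and
# predictable, rather than explosive, improvement.
for r in (1e24, 1e25, 1e26):
    print(f"resources {r:.0e} -> capability {capability(r):.1f}")

# Observation 2: the cost to use a given level of AI falls about 10x per year.
# The post's concrete example is GPT-4 token cost falling ~150x between early
# 2023 and mid-2024; assuming that window is about 18 months, the implied
# annualized decline is even steeper than 10x.
months, total_drop = 18, 150
annualized = total_drop ** (12 / months)
print(f"implied annualized price decline: ~{annualized:.0f}x")  # ~28x
```

Run as written, the first loop shows capability rising by the same fixed step for each additional order of magnitude of resources, and the last line prints roughly 28x, meaning the GPT-4 example actually outpaces the general 10x-per-year rule.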
And yet, somewhat paradoxically, in the next section, he talks about how, at least in the short run, everything will go on the same as it has. People in 2025, he says, will mostly spend their time in the same way they did in 2024. But, he writes, the future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge. We'll find new things to do, new ways to be useful to each other, and new ways to compete, but they may not look very much like the jobs of today. So what matters in that future? Well, he says, agency, willfulness, and determination will likely be extremely valuable. Correctly deciding what to do and figuring out how to navigate an ever-changing world will have huge value. Resilience and adaptability will be helpful skills to cultivate. AGI will be the biggest lever ever on human willfulness and enable individual people to have more impact than ever before. He points out that the impact will be uneven, specifically saying that the scientific progress that comes from AGI may be the impact that surpasses everything else. In terms of how this affects specific prices, he says that many will fall dramatically, specifically those where the constraint is the cost of intelligence or the cost of energy, but luxury goods and inherently limited resources like land may go up even more. And then he talks about policy and society, and how unclear it really is what to do next and how to address this future. He offers only the barest of guidance. We believe, Altman writes, that trending more towards individual empowerment is important; the other likely path we can see is AI being used by authoritarian governments to control their population through mass surveillance and loss of autonomy. He points out that it's going to be important that the benefits of AGI are distributed broadly, but there may need to be new ideas for how to do it. One specific warning, vague though it is: in particular, he writes, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. He continues, we're open to strange-sounding ideas like giving some compute budget to enable everyone on earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect. And then he ends on this doozy: anyone in 2035 should be able to marshal the intellectual capacity equivalent to everyone in 2025. Let me read that again. Anyone in 2035 should be able to marshal the intellectual capacity equivalent to everyone in 2025. Altman is saying that all of the intellectual capacity that all of us, everyone listening to this podcast and everyone else living their lives around the world, have access to from ourselves, our friends, our families, and the AI at our fingertips combined is what any single person will have access to one decade from now. And that, it seems, is where Altman finds his optimism. He concludes, there's a great deal of talent right now without the resources to fully express itself, and if we change that, the resulting creative output of the world will lead to tremendous benefit for us all. All right. So like I said, now let's go through five observations from my reading of this. The first is that there's a clear weigh-in here on the scaling debates that we've been having for the last few months, with Altman continuing to come down on the side that scaling laws hold.
Now, what's interesting is that he is now bundling inference into those scaling laws. So rather than treating test-time compute, which is the way that they're scaling these reasoning models, as something fundamentally different, it's just a different version of the same equation: more resources equals better output. The unspoken piece here is that, from where they're sitting, there's no reason to think that this doesn't just carry through all the way to whatever we decide AGI is, which is of course a controversial point, given that there are some, like Meta chief scientist Yann LeCun, who don't think that today's architectures can ever get to AGI. So nothing particularly new here, but a doubling down of OpenAI and Sam Altman's previously stated positions. Second observation, again a very obvious one, but it bears saying just how fast the cost of intelligence is coming down. We're thinking about this at Superintelligent, where we're pricing a product that has some meaningful upfront cost because of the modality of interaction with AI, but we're trying to figure out how we should price it if we expect it to cost a tenth of what it costs now in a year. One novel point here is Sam officially reifying Jevons' paradox by arguing that lower prices do in fact lead to much more use. He doesn't back it up with any specific examples, but that's something I'd be interested to see from OpenAI's point of view. A third observation, and a big sub-theme from this, is the once again very obvious point, but one which I'm still not sure we're totally grokking, which is that there's a significant skill shift that's going to be required. Sam obviously paints a picture of the skills that he thinks are going to be most important: agency, willfulness, determination. But there's another one implicit in this idea of having access to a thousand junior virtual co-workers, or a million junior virtual co-workers, across every field of knowledge work. Presumably that means we all become managers. And that is obviously a very different skill set than doing whatever it is that we'd now be managing the robots to do. One of the things that I think we are just starting to put together now is that the skill shift that's going to be most required with AI is probably going to be less about specific prompting techniques and tool usage and more about totally different managerial disciplines and entirely new ways of thinking. A fourth point, which I think again lies just under the surface here, is the magnitude of this change that's coming. Altman has for some time been trying to downplay this, and there's even a little bit of that in here now, in the idea of people in 2025 doing the same thing as people in 2024. However, it also very clearly feels like he's starting to get to the next point of his narrative. It comes out in this line where he says, in some sense, AGI is just another tool in the scaffolding of human progress. But, and you get the impression that this is what he really means, this time it's actually different. He also has a line just before the agency, willfulness, and determination line that feels like maybe it's a thesis statement for the whole piece: the future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge. I think you can maybe read this piece
as trying to put a stamp on this scaling conversation and saying that from OpenAI's point of view, yes, AGI is still coming, despite what you've heard about the problems with AI scaling. My fifth observation is that there are no real policy ideas here, save perhaps the very lightly floated idea of universal basic AI or a universal basic compute budget. As Professor Ethan Mollick points out, there is no clear vision of what the world looks like, and the labs are placing the burden on policymakers to decide what to do with what they make. Now, I'm sure that what Altman and OpenAI would say is that they're trying to provoke a conversation that we can all have, not just expecting policymakers to figure it all out for themselves, but the notion that there may need to be a little bit more prescription on at least the type of conversations that we're having might be a place to explore in the next blog post. Ultimately, all of this feels like another log on the fire of acceleration, whose flame has just gotten bigger and bigger over the last couple of months. I personally can feel right now, or at least imagine myself to feel, some shifting sands that are going to have fairly dramatic impacts on the years to come. All of those things, of course, we will be talking about every day here at the AI Daily Brief for the foreseeable future. But for now, that is where we'll wrap. Appreciate you listening or watching, as always. And until next time, peace.