OpenAI stopped being a non-profit, or split up. Can you describe that whole process? Yeah, so we started as a non-profit.
We learned early on that we were going to need far more capital than we were able to raise as a non-profit. Our non-profit is still fully in charge. There is a subsidiary capped-profit so that our investors and employees can earn a certain fixed return. And then beyond that...
Everything else flows to the non-profit. And the non-profit is in voting control. It lets us make a bunch of non-standard decisions, can cancel equity, can do a whole bunch of other things, can let us merge with another org, protects us from making decisions that are not in any shareholder's interest. So I think, as a structure, this has been important to a lot of the decisions we've made. What went into that decision process for taking the leap from non-profit to capped for-profit?
What are the pros and cons you were deciding at the time? I mean, this was in 2019. It was really like, to do what we needed to go do, we had tried and failed enough to raise the money as a non-profit. We didn't see a path forward there.
So we needed some of the benefits of capitalism, but not too much. I remember at the time someone said, you know, as a non-profit, not enough will happen. As a for-profit, too much will happen.
So we needed this sort of strange intermediate. You had this offhand comment that you worry about the uncapped companies that play with AGI. Can you elaborate on the worry here?
Because AGI, out of all the technologies we have in our hands... The cap is 100x for OpenAI. That's where it started. It's much, much lower for new investors now.
You know, AGI can make a lot more than 100x. For sure. So how do you compete? Stepping outside of OpenAI, how do you look at a world where Google is playing, where Apple and Meta are playing?
We can't control what other people are going to do. We can try to build something, talk about it, influence others, and provide value and good systems for the world. But they're going to do what they're going to do. Now, I think right now there's extremely fast and not super-deliberate motion inside some of these companies. But already, as people see the rate of progress, they're grappling with what's at stake here.
And I think the better angels are going to win out. Can you elaborate on that? The better angels of individuals?
The individuals within the companies. But, you know, the incentives of capitalism to create and capture unlimited value, I'm a little afraid of. But again, I think no one wants to destroy the world. No one wakes up saying, today I want to destroy the world.
So we've got the Moloch problem. On the other hand, we've got people who are very aware of that. And I think there's a lot of healthy conversation about how we can collaborate to minimize some of these very scary downsides.