Transcript for:
California's AI Safety Bill SB 1047 Overview

An update on an AI safety bill we've talked about quite a bit: California's controversial AI safety bill, SB 1047. It has now cleared both the California State Assembly and the California Senate, which means it just needs one more process vote before heading to Governor Gavin Newsom, who must then decide by the end of September whether to sign or veto the bill. As a reminder, SB 1047 requires AI companies operating in California to implement several safety measures before training advanced foundation models. These precautions include things like the ability to quickly shut down a model in the case of a safety breach, protection against unsafe post-training modifications, and maintaining testing procedures to evaluate potential critical harm risks. Now, this legislation, like we've talked about, has faced some criticism from major players in the AI industry.

OpenAI is pretty much against it. Anthropic has pushed back on some things but appears to be largely for it. Critics, though, have argued that the bill focuses too heavily on catastrophic harm and could negatively impact innovation, open source development, and other areas where AI is moving forward at a pretty rapid clip.

Now, the bill has undergone some amendments, some of which Anthropic proposed. These included replacing potential criminal penalties with civil ones and narrowing some of the enforcement powers under the bill. However, people are still not entirely happy with this.

So, Paul, can you kind of just walk us through the significance of this particular bill in California for U.S. regulation of AI as a whole? Yeah, the key here is that it's not just companies in California.

It's companies that do business in California. So it's a really important distinction. And I think we talked about this on the last episode that California is a massive economy.

I mean, I want to focus here on why this matters. We keep talking about SB 1047, so what is the significance to people? I think it comes down to a few things. There was an article, I think this was the Wall Street Journal, we'll put the link in.

It said, "AI regulation is coming. Fortune 500 companies are bracing for impact." So I thought this was a good one because it's like, what does this mean to corporations?

So this article said roughly 27% of Fortune 500 companies cited AI regulation as a risk in recent filings with the SEC, one of the clearest signs yet of how rules could affect businesses. A recent analysis by Arize AI, a startup building a platform for monitoring AI models, shows 137 of the Fortune 500 cited AI regulation as a risk factor in annual reports, with issues ranging from higher compliance costs and penalties to a drag on revenue and AI running afoul of regulations. "The inability to predict how regulation takes shape, and the absence of a single global regulatory framework for AI, creates uncertainty."

That was a quote from credit card company Visa. And then it said some corporations are hoping to get ahead of regulation by setting their own guidelines. And this, I think, is an important takeaway: we just don't know.

And this is why having your own internal policies is really important. Understanding the regulations of your own industry that AI may already fall under is really important. So I think there's an element here that this uncertainty matters to businesses: if this gets signed into law in the next 30 days, then maybe you've got like six months to comply with it. Well, that's going to boil down to everybody.

Like, the CMO is all of a sudden going to have to care about this law. Everybody's going to have to understand it. And this isn't the only one.

There are like hundreds of AI-related laws going through states right now. So there's just massive uncertainty. The one thing that seems almost like a given in this is that the models are going to take longer to come out.

So what I mean by that is, whether SB 1047 passes or not, we're going to talk in the next rapid-fire item coming up about the US AI Safety Institute and what's happening there. But what's going to happen is these companies are going to be working with the government, even if it's voluntarily, to try and convince them that these models are actually safe.

And so they're going to open up access to the models. They're going to demonstrate these to governments, as we learned in the previous topic with OpenAI demoing Strawberry to the government. So what's going to happen is the models will be done.

But now they've got to go through additional layers of safety and eventually additional layers of regulation. So we may go from an 8-to-12-month cycle for the next frontier model coming out to like an 18-to-24-month cycle. So what that means to us as users, as business leaders, as practitioners is this:

When we finally get GPT-5, whether SB 1047 is in place or not, whether the federal government puts some regulations in place or not, we may be on a two-year run before we see GPT-6. Because they're going to train this thing, they're going to build it, and then they're going to do their own red teaming for five months, and then they're going to bring the government in and show them what they've got. And it's just going to take longer.

And so I think what we'll continue to see as users of this technology is the iterative deployment that we're actively seeing from Google and OpenAI in particular, and Anthropic is following a similar approach, where rather than doing massive model drops, where we go from one model to an order-of-magnitude better model 12 months later, I think we're just going to see some new capability every three to six months over an 18-month period. Now we have video capability.

Now we have advanced voice mode capability. And they build these models in an iterative way where now they can just go show the government or the regulators: okay, we're going to launch voice mode in three months, here's everything we've done with it.

And so rather than a single model drop, they do it iteratively. And that gives you the runway you need to cover the regulation. So I think these big companies, OpenAI, Anthropic, Google, they may all be opposed to this. There may be government leaders opposed to it. There may be open source advocates opposed to it.

But I think they're all under the assumption the regulations are coming, whether it's this one or the next one. We're going to have regulations of some sort. And so I think they're going to line up to voluntarily participate in whatever the federal government is doing, because it's going to give them a lot of cover to kind of keep moving forward. So I don't know.

It's just, and again, I still don't know where exactly I fall on this. I do think that the way this thing is designed is pretty arbitrary. It basically tries to say, if you're training a model over this size, or if you're fine-tuning a model in this way, then you have to get that approved, basically. But with an exponential growth curve in these things, in the capability and how they're trained, the thing we think is not going to be safe today, a year from now we'll laugh at it as, oh, that was kind of obsolete technology.

And that obsolete technology wouldn't even be allowed to be built without the government's approval under this premise. So I don't know. I feel like we're just too early. But I do also worry that these capabilities are going to just explode. And if we don't have some kind of regulation, then we're going to get in trouble fast.

So I don't know. I continue to kind of sit on the fence and listen to both sides of this. And I just don't know that I have enough information personally to say definitively that this is a good or bad idea. Yeah.

That last point you made is really interesting. I saw some posts that basically equated it to: what if we had tried to pass a law like this in California in 1994? We would have throttled the internet revolution, because how would you even, not that it's bad to pass this type of law, but how do you even, from a technical perspective, wrap your head around what's going to come 10 years from now? Yep.

Yeah. And I do buy... I think the argument of the people against it that I align with best is that we should be regulating at the application level, not the model level, right now. Because this is a general-purpose technology.

It's going to have all kinds of capabilities, and it can be used for good or for bad. So I think Andrew Ng, in his article in Time magazine, equated it to an engine. Well, an engine could be used in a bomb or it could be used in a car.

So do we regulate the invention of the engine and the improvement of the engine? Or do we regulate the use of it in bombs? And I think that's the kind of concept here where it makes a lot of sense to look at the application layer and to allow existing laws to cover illegal use of things and people doing harm and let this play out a little bit.

So, I don't know, if I had to pick a side today, I would probably err on the side of: I think we need to be thinking deeply about this, but I don't think we're at the point yet where we need to step in and stop innovation, because I think innovation is critical to growth in GDP and to the safety and security of our country. I feel like we don't want to stop this yet. And I don't think a pause or anything like that is very helpful at the moment.