Today on the AI Daily Brief, former Speaker of the House Nancy Pelosi has come out against controversial California AI bill SB 1047. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes. Hello, friends. Quick note before we dive in: once again, and I think this tends to happen with these big policy discussions with lots of open letters, the main part of the episode ran long, and so we will not have a headline section today. We will be back with our normal format presumably tomorrow, but for now we are just going to get into the latest on SB 1047. Now, if you have not had a chance to watch it yet, I highly suggest you go back to my show from about a week and a half ago called "SB 1047: The World's Most Important and Problematic AI Policy Debate." I go fairly in depth there on the background of the issue and the points that the various sides have been making, and there have been points aplenty. The story of SB 1047 recently has just been open letter after open letter after open letter. On August 7th, for example, we got a set of professors arguing in favor of SB 1047, calling it extremely light touch, indeed using the phrase "the bare minimum for effective regulation of AI." But if the godfathers were for it, the "godmother" came out against it: Fei-Fei Li came out and said that while it was well-intended, she believed that it would actually harm the US AI ecosystem. Startups and venture capitalists have both published their own letters, including very notable ones from a16z and Y Combinator. And where I had left off was that Congresswoman Zoe Lofgren, the ranking member of the House Committee on Science, Space, and Technology, had also come out against the bill. In her letter to bill sponsor Scott Wiener, she wrote: "I firmly support AI governance to guard against demonstrable risks to public safety, but this bill would fall short of these goals, creating unnecessary risks for
both the public and California's economy." As we started to get into in that previous show, part of the reason that the debate has been so intense is that this is the first time that there has been a specific instantiation in potential policy of the debate around AI safety. Brian Chau in Reason magazine expressed one side of that in a piece he wrote titled "California Lawmakers Face Backlash Over Doomsday-Driven AI Bill." In that piece he claimed that, quote, "lawmakers were swayed by one-sided narratives from the Open Philanthropy-funded Center for AI Safety, which claims that mitigating the risk of extinction from AI should be a global priority. Until recently, policymakers remained largely unaware of just how disconnected these narratives were from the broader AI community." Now, I'm not using this particular show to debate the merits of AI risk or anything like that, but it is clear that this bill, and where it decides to put its focus, is aligned philosophically with the AI safety movement. Indeed, that was at the core of Lofgren's criticism: that there was basically too much in this bill about future theoretical risks and not enough about clear, present, and current risks. And that brings us to the current moment, where the most significant update came a few days ago, when Nancy Pelosi and a group of other California representatives weighed into the fray in opposition to SB 1047. For a highly nonpartisan take, I turned to the New York Post, which characterized this as Pelosi blasting the bill as "ill-informed." There were actually two parts to this: the first was a letter from a group of Congress people to California Governor Gavin Newsom, and separately there was a statement from former Speaker of the House Nancy Pelosi adding her support to that letter. So let's see what the letter actually says. It was signed by Zoe Lofgren, Anna Eshoo, Ro Khanna, Scott Peters, Tony Cárdenas, Ami Bera, Nanette Barragán, and Lou Correa. The letter begins: "Dear Governor Newsom, it is somewhat unusual for us as sitting members of Congress to
provide views on state legislation. However, we have serious concerns about SB 1047, and we felt compelled to make those concerns known to California state policymakers." They then go through their credentials and say: "Based on our experience, we are concerned that SB 1047 creates unnecessary risks for California's economy with very little public safety benefit," and, because of this, "if the bill were to pass the State Assembly in its current form, we would support you vetoing the measure." So what are their arguments? Many of the points of the letter echo things that were in the previous letter from Zoe Lofgren. They point out, for example, that the quote "bill requires firms to adhere to voluntary guidance issued by industry and the National Institute of Standards and Technology, which does not yet exist." For example, they write: "Even though we do not yet have the standardized evaluations necessary for a developer to confirm with confidence that an AI system could cause critical harm, the bill bases its liability provisions upon such hypothetical guidance." They also point out that the approach to regulating based on compute thresholds will, quote, "almost certainly be obsolete by the time the law would go into effect." Such premature requirements, they say, "based on underdeveloped science, call into question from the outset the efficacy of the bill in achieving its goals of protecting public safety. We should not seek," they write, "to cement our current understanding of AI safety science into law, but should instead provide ample flexibility, agility, and public consultation to allow the law to grow as our understanding grows. In its current form, SB 1047 falls short." The letter also calls out the focus of the bill on the more existential risks as opposed to more current problems. They write: "SB 1047 is skewed towards addressing extreme misuse scenarios and hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation, discrimination, non-consensual deepfakes, environmental
impacts, and workforce displacement." Unlike other letters, they go even farther, arguing against the notion that AI somehow represents a new threat. They write: "There is little scientific evidence of harm of mass casualties or harmful weapons created from advanced models." They use the example of the threat of AI models creating nuclear weapons, but write that the quote "production or acquisition of the fissile material is the primary impediment to the creation of such a weapon," because the technical details of nuclear weapons have been known since at least 1945 and can almost certainly be acquired in dark corners of the internet. Now, this is not to say that they don't think any of these issues are worth dealing with. They write: "Understanding, measuring, and monitoring the risks inherent in AI systems as they evolve will be key, especially the marginal risk of AI systems causing mass casualty events as is contemplated by SB 1047. There is also a pressing need to develop the scientific tools, tests, and standards to enable the effective evaluation and assurance of AI systems." They also, as has been the case with almost all of these letters, say that they support things like the whistleblower protection provisions. This letter also goes into some detail about the other AI-related bills under consideration in California that are focused on more of what they consider the demonstrable harms, writing that, quote, "these bills have a firmer evidentiary basis than SB 1047." Now, this counter-response to the focus on AI safety concerns is the first and primary critique that they have, getting top billing in the letter. But secondarily, they also say that they're concerned about the unintended consequences of SB 1047's treatment of open source models. They write: "Not only is it unreasonable to expect developers to completely control what end users do with their products, but it is difficult if not impossible to certify certain outcomes without undermining the rights of end users, including their privacy rights. As
such, the natural response from developers will be to stop releasing fully open AI models and instead implement limited-release models like APIs." They basically argue that the provisions, including things like the kill switch, would quote "decimate the ecosystems that spring up around AI models." One notable thing that goes farther than previous letters we've seen is that they're open to the possibility that at some point open sourcing models does become too dangerous. They write: "It may be the case that the risks posed by open sourcing models with potentially dangerous capabilities justify this precaution, but current evidence suggests otherwise. After seeking comment from the community and looking at the risks, the National Telecommunications and Information Administration released a report last month saying government should not restrict access to open source models with widely available model weights at this time, but instead should actively monitor the ecosystem should risks evolve." Basically, the argument here is that at current capabilities, the benefits of open sourcing outweigh the risks, but of course that could change. Of course, there is an economic dimension to this for this particular group of representatives, and they're not circumspect about that. They write: "Given that most of the discoveries that led us to this moment were achieved through openness, SB 1047 could have a pernicious impact on US competitiveness and governance in AI, especially in California. In short," they conclude, "we are very concerned about the effect this legislation could have on the innovation economy of California without any clear benefit for the public or a sound evidentiary basis. High-tech innovation is the economic engine that drives California's prosperity. There is a real risk that companies will decide to incorporate in other jurisdictions or simply not release models in California." This is not entirely speculative: as an example, they point to Meta deciding to not release its multimodal systems
in Europe due to the EU AI Act. Thus, they write: "While we are confident of the good intentions of the bill's proponents, we are equally confident that this bill would not be good for our state, for the startup community, for scientific development, or even for protection against possible harm associated with AI development. Because of those reasons, if the bill were to pass the legislature in its current form, we would support you vetoing the measure." Now, if you thought that State Senator Scott Wiener would be deterred by this, you would be wrong. In his own press release, he wrote: "I am aware of Speaker Emerita Nancy Pelosi's statement opposing SB 1047. While I have enormous respect for the Speaker Emerita, I respectfully and strongly disagree with her statement." I don't know that it's the most effective technique, but he takes quite a cheeky tone, saying: "I encourage the Speaker Emerita to have a conversation with the AI luminaries who are quite," quote unquote, "informed and who are supporting SB 1047," pointing of course to Bengio, Hinton, Lawrence Lessig, and Stuart Russell. Wiener reiterates in his press release his belief that it's a quote "straightforward, common-sense, light-touch bill," that it only requires AI developers to do what they've already committed to, and he says really all the opposition is just big tech. He says that "untrue narratives have continued to percolate" around SB 1047. Wiener argues that some in the tech space have taken a "get off my lawn" approach to policymakers, "telling us we should do nothing while offering no constructive policy ideas, other than to just let technology companies do whatever they want to do without even minimal regulation in the public interest." And this is the point at which it becomes clear that this is not just about AI legislation, but about a reckoning with tech policy in general. Wiener writes that "the aggressive anti-regulatory approach by the tech sector has led to almost no federal protections for public health and safety relative to technology," and then he goes on to
list places where he believes California has had to, quote, "step in to protect our residents." This theme, that this is all just a well-funded campaign by the technology industry to have no regulation at all, is something that has particularly infuriated the AI safety community. The AI Safety Memes account, which, I will say for those of you who are unfamiliar, has a very clear perspective, which of course you can guess from the fact that they're the AI Safety Memes account, but is generally thoughtful and not prone to histrionics, writes: "The a16z dark money campaign is getting uglier. Recently, they've gone full corporate villain mode, throwing around cash and just openly lying about what SB 1047 actually says. Not just saying misleading things: lying, repeatedly, on the record, over and over. They will say anything to kill this bill. They shriek endlessly about their outgroups being a cult while blocking anybody who disagrees with them, the actual culty thing you do to create the false appearance of consensus. It's not just a16z, of course, but most of the opposition is from a handful of billionaires in big tech. Despite the fact that the big AI corporations are all lined up against this bill, they're actively pushing an absurd conspiracy theory that actually it's the tiny AI safety nonprofits doing all the lobbying." I think for most of you viewers and listeners, who sit fairly comfortably in the middle between the accelerationists on the one hand and the most concerned subsection of the AI safety group on the other, there's a lot that you can understand about the narratives being pushed by either side of this, having the contextual grains of salt to know how to contextualize whatever they're saying while still looking at the legitimate points underneath. While I saw numerous arguments from the AI safety side of the aisle arguing that this letter from these congressional representatives simply reflects lobbying money being successful, my guess is that probably a lot of the listeners
to this show are fairly sympathetic to, one, concerns around preemptive regulation for areas of hypothetical if serious risk, and, two, concerns around unintended consequences for open source. The debate, as it always is with politics, is around the details. There is no such thing as perfectly knowable information, and that's only extra true in the case of this new and incredibly fast-moving technology. Now, the latest update today comes once again from Zoe Lofgren and Anna Eshoo, this time to Robert Rivas, the Speaker of the California State Assembly. Basically, what this letter is pointing out is that in the wake of the letter that they published last Thursday, the bill was amended substantially during markup. They write: "We would like to acknowledge the efforts that the author of the bill and the State Assembly made to improve the underlying proposal with additional flexibility and clarity." Unfortunately, they write, "there are still substantial problems with the underlying construct of the bill. It is our view that the bill in its current form should not be approved by the legislature." Most of the rest of the letter is just a rehash of those things that they had said in the previous letter. They also shared, however, the memo that the staff of the science committee had sent to Ranking Member Lofgren in their review of the updated bill. Those staff write: "Overall, SB 1047 is considerably better than it was before. They weakened or clarified many of the key regulations. However, the problematic core concerns remain: there is little evidentiary basis for the bill; the bill would negatively affect open source development by applying liability to downstream use; it uses arbitrary thresholds not backed in science; and catastrophic risk activities like nuclear and biological deterrence should be conducted at a federal level." Ultimately, they write, "the changes seem to be aimed at placating various concerns from specific parties rather than addressing the core challenges of the bill." They point specifically to
the example of Anthropic having had their specific changes adopted, suggesting that Anthropic, quote, "likely no longer is opposed." Some of the major amendments they point out: one area is around clarifying harms. They write: "One amendment scopes harm to only those caused or materially enabled by a developer." This is a positive addition, they write, as the initial lack of a strong standard was problematic. The bill also exempts harm from information publicly accessible by an ordinary person. One of the things they don't like: they write, "The amendment adds a $10 million cost threshold for fine-tuning an AI model. Similar to the 10^26 computing threshold, this is another arbitrary threshold with little to no scientific backing that will do little to constrain the scope of the law as models, and the companies that build them, improve over time." That said, they do acknowledge that, quote, "some researchers believe this is an improvement on the previous language." Overall, they still come back to this idea that the fundamental architecture of this bill is about the wrong thing, and to me, having reviewed this and continuing to watch it, I think this is simply the inextricable challenge of this particular piece of legislation. Now, as I'm sure you've noticed, this has been an extremely tense conversation. But to the extent that you were looking for a silver lining to this, this is exactly the type of regulatory process that tends to lead to massive political education and, ultimately, better policy. The AI safety folks, for example, may not like that a lot of their beliefs are not being affirmed by these sitting Congress people. They may have concerns as well that they're being influenced by money from big tech. But when you read these letters and the staff memos going into them, it's undeniable that these questions are being taken seriously. I simply tend to believe that questions being taken seriously tends to lead to better policy than things just slipping through. Now, of course, this is all a little bit of a
sideshow to a broader national-level conversation around what federal policy should be, and perhaps one of the legacies of this process will be that it creates context for Congress to actually try to bring these issues into the federal discussion. Of course, there is little chance of that happening before the elections, but I don't think that the impact of this bill, whatever happens from here, will simply be limited to whatever happens in California. Anyways, those are the updates from here on SB 1047. I will of course keep you informed as things change. For now, though, that is going to do it for today's AI Daily Brief. Until next time, peace.