Welcome back to another edition of the AI Summer School. I'm Kevin Frazier, the AI Innovation and Law Fellow at Texas Law and a contributing editor at Lawfare. Today's class dives into one of the most complex and controversial aspects of AI and the law: libel. Eugene Volokh, a senior fellow at the Hoover Institution and longtime professor of law at UCLA, is an expert in the field and penned a paper on the topic back in 2024. For those looking for extra credit, be sure to read it in its entirety, including all appendices: "Large Libel Models? Liability for AI Output." We've got a link in the show notes. For those content with a P, a passing grade, our conversation is going to cover the essentials, so we've got you covered there. As always, we'll use our standard format here. First, we're going to explore the fundamentals of the law, in particular libel, and then we'll spend a bit of time looking at Section 230 and the First Amendment before Eugene details how AI maps onto these key aspects of the law. Finally, we'll discuss some open questions and let you get on with your day and, hopefully, get to our homework. All right, Eugene, thank you so much for joining the AI Summer School. Thanks very much for having me. It's funny that you talked about my having penned a paper, even though I never used a pen and no one reads it on paper. It's funny. There we go. So much legacy media we've inherited through our language. Even "computer": we talk about computers, but in most situations we're not using them primarily to compute, although of course a good deal of computation goes on in the background. They're data processors, they're word processors, they're communicators, they're entertainment centers. There we go. Well, I guess now, too, my Lawfare colleague and co-author Alan Rozenshtein and I have talked a lot about how scholars will use generative AI to pen a paper. So what do we say then? Do we say we generated a paper? Gen. Let's call it "gen." Gen, I like it. Okay, we're already creating new vocab. It's the sign of a great class. "Genned." That's going to stick with me. All right, so we'll get our TM there on "gen." This is great. Well, Eugene, a key aspect of your paper is libel. And for folks who have forgotten their free speech course, or perhaps never took a free speech course and just skipped straight to the bar, what is libel? What are the key things we're looking for when we're talking about libel? So, to oversimplify, libel means false statements of fact about a person or a corporation, for-profit or nonprofit, that damage that entity's or person's reputation. And in order to prove up a libel case, you often have to show certain kinds of mental state. Famously, for example, if you're talking about public figures or public officials, you have to show so-called actual malice, which is not actually malice; it means knowing or reckless falsehood. For speech about private figures, if you can show actual, provable loss as a result of damaged reputation, well, then negligence might be enough. So those are, generally speaking, the elements of libel. And of course, libel generally speaking has to be in writing. It could be handwritten, it could be printed, and of course it could equally be on a computer. So it could be genned. We'll get to that in a second. We'll get to that in a second.
But thinking also about libel, a couple of key considerations come to mind that we'll map on later. Can you talk a little bit more about this publication requirement? Obviously, if I just whispered a libelous statement, or genned a libelous statement and handed it to my partner and didn't share it with the rest of the world, would that be of concern? What's this publication requirement? Yes, that would be libel. There is a publication requirement in libel law, but as with actual malice, lawyers have this habit of using words in ways that differ from how ordinary humans use words. Publication for purposes of libel law merely means communication to one person other than the person being defamed. So if you write a letter to a friend saying some third party has done these bad things, that could be libel. Classic examples of that kind of libel were historically letters sent to someone who was about to get married, saying that their prospective spouse had committed various kinds of misconduct. Another example, which, I shouldn't say is very common today, but it's a fact pattern that we continue to see, is a job reference. So somebody says, oh, I wouldn't hire this person because he was fired for stealing from petty cash, or even, he has acted incompetently in some particular, specific ways. Even if it's said to one person, the prospective future employer, that could very well be libelous, or perhaps we may say more broadly defamatory, because similar rules apply to slander, which is oral defamation. And thinking about just passing a letter on to someone, what if I qualified it and said, Eugene, I really wouldn't recommend hiring Allen because I've heard from other people, I can't verify this, but I've heard from other people that Allen's jokes are just the worst and you're going to tire of them very quickly. Well, it depends, if you're hiring Allen for a job as a comedian... I wouldn't recommend it, but let's just say for the sake of argument you are. The only reason I quibble about that is that not every statement that is negative about a person, not even every factual assertion that's negative about a person, is defamatory. It has to really threaten their reputation in a fairly serious way. And one classic way, though not the only way by any means, is by suggesting they're incompetent in their profession. By the way, one other factor is that it has to be a factual assertion, and statements like "I don't like his jokes," or even "the jokes are very bad jokes," are almost always going to be seen as a matter of opinion, because humor is a matter of opinion. On the other hand, if I say I wouldn't hire this person because rumor has it that he was fired from a previous job for getting drunk and physically attacking a customer: factual assertion, something that would indeed materially injure the person's reputation, in part because it suggests he tends to commit crimes and is also not competent in his job. Yes, that would be potentially libelous even if you qualify it with "rumor has it." Now, I oversimplify here. Some courts have departed in some measure from this. They may say, well, you know, if there is such a rumor, then you're not saying something false when you pass it along.
But the predominant view is that passing along a rumor, even while saying that it's a rumor, is generally actionable. There are actually some exceptions in situations where you should be entitled to pass on rumors, usually one-to-one communications to people you have a relationship with, rather than statements to the public or to strangers. So again, it's a complicated body of law, but generally speaking, a disclaimer that says, you know, this might be false, but I'm going to pass this along anyway, does not prevent liability for defamation. And just to stick on that idea of a disclaimer a little bit longer: let's say I'm a particularly cautious lawyer and I am very fearful of being sued for defamation. So anytime I pass something along, every text I send, every email I send, I say: I, Kevin, am unreliable. Sometimes I make things up. So don't trust anything I say in this email. Don't assume that it's factually accurate. Would that allow me to get by with defamatory statements? I very much doubt it. I mean, I don't know of any case law on point, because very few people actually are that candid about their lack of reliability. But again, I think it's the same principle as "rumor has it." When you're passing along an assertion about someone, even if the listener understands that they can't take it to the bank, it could still be quite damaging to that person's reputation. For example, someone considering whether to hire someone might say, look, you know, maybe there's only a 60% chance that the accusations that were passed along to me are true, but I don't want to run that risk, especially when I can hire someone about whom these accusations haven't been made. So as a general matter, I think human beings understand that other humans are often unreliable. Sometimes there may even be a signal, such as "rumor has it" or "I've heard that" or "they say that," that kind of accentuates the possibility that the statement may be unreliable, but that is generally not enough to avoid defamation liability. Okay. And so shifting a little bit to where we've seen concerns about libel pop up: well, since the dawn of the internet, it has been our social media platforms or internet forums, where we've seen folks go to that blog or go to that social media site and libel someone, make some factual assertion that may harm their reputation. How have those platforms managed to evade liability? Could you walk us through when Section 230 applies and what the general values are animating this idea of Section 230? Yeah, so Section 230, which is Section 230 of Title 47 of the U.S. Code, provides pretty substantial immunities for online platforms when they're passing along material produced by others. So (c)(1), which is the most relevant subsection, says no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. So if somebody posts something on Facebook, Facebook is the one that's distributing it to the world. In a sense, it's Facebook's actions that cause the most damage in many ways, because if the person just posted it on his own computer, virtually no one would see it. But Facebook would not be liable, because this was information provided by another information content provider, that is to say, the user.
So the theory is: sue the user who created the information, and not the platform that is merely redistributing the information. And to get behind the original impetus for Section 230, can you walk through how some of these concerns about chilling speech really brought about the move for Section 230? Sure. So like all statutes, Section 230 is animated by multiple concerns. But one of the concerns was indeed that if platforms were held liable for material that's posted by their users, then they would have too much of an incentive to take it down, or maybe never even put it up. So one extreme might be that platforms just go out of business, or never go into business, because the risk of liability is too high; they can't get insurance because of that risk of liability, and such. Or perhaps, more likely, they do go into business, but the moment someone sends a complaint saying "this is libelous towards me," they would do this calculation and say: well, if we take it down, then we alienate a user, but we're not going to be legally liable. Among other things, our terms of service say we can take down anything we want, anytime we want. But if we leave it up, then we might have to pay hundreds of thousands of dollars, or maybe more, to lawyers to defend ourselves. And if it turns out that the statement really was mistaken, how do we know? The user is the one who made the assertion. They're the ones who have the facts. If it turns out it is mistaken, we could be on the hook for millions of dollars in defamation liability. Recently, we've seen potentially almost a billion dollars in defamation liability; that was a settlement in one of the cases brought by an election machine company saying that it was defamed. There's also a two-thirds-of-a-billion-dollar verdict that was recently entered against Greenpeace for allegedly participating in defamation of companies involved in the North Dakota pipeline project. So as a result, platforms would say, look, you know, the moment somebody files a complaint, we're going to take stuff down. And that would mean that entities that are willing to be litigious, could be individuals, could be businesses, could be nonprofits, could be churches, would be able to get criticisms removed. So as a result, and again, there are other concerns involved, but as a result, Congress said: look, we're not going to completely eliminate liability for online libel. We're just going to put it on the shoulders of the people who actually posted the material. And generally, can you frame how that may comport with, again, obviously with the caveat that there are a lot of values that are baked into the First Amendment or mapped onto the First Amendment, how Section 230 is generally framed as fitting in with some of the broader narratives we talk about when we talk about free speech? Well, it's complicated. So let's go back to New York Times v. Sullivan, the most famous libel case of them all, 60 years old now, but still good precedent. It concluded that libel law was substantially constrained by the First Amendment. Throughout American history, it's been understood that libel liability has to be judged by standards of freedom of expression.
Historically, the Court's conclusion had been that libel law is consistent with free expression principles, but New York Times v. Sullivan said it needed to be cut back. But how far? So the majority, which was six justices, said that when you're speaking about matters of public concern regarding public officials, you should not be held liable for defamation unless you know the statement is false, or know the statement is likely false and just recklessly publish despite that: the knowledge-or-recklessness standard, again sometimes confusingly called "actual malice." But three justices would have gone further. They would have said that's not enough to eliminate the chilling effect of libel law, the deterrent effect of libel law on publishers and speakers, because even with this heightened standard that the plaintiff is required to prove, still, a lot of the time newspapers and other speakers will be unduly deterred from publishing even things that are true, for fear that a jury would say it's false and that a jury would also find knowing or reckless falsehood. So they would have completely, categorically eliminated libel law, at least as to matters of public concern; I think maybe only as to public officials, but the logic of the opinion seems to extend to any matters of public concern. Those are Justices Black, Douglas, and Goldberg who would have taken that view. But the majority rejected that view; the majority wasn't willing to go that far. The majority opinion was written, by the way, by someone generally thought of as an arch liberal, Justice Brennan, who had long been a protector of free speech. So the First Amendment provides considerable protection for speech, but also aims to retain some considerable scope for libel law, as, again, false statements that damage people's reputations. Section 230 in some respects is similar. It too tries to draw a line that aims at protecting speech, but at the same time not completely eliminating defamation liability, libel liability. It just draws the line somewhat differently than New York Times v. Sullivan. Probably, and we can't be sure, because Section 230 prevented these cases from really coming up to determine First Amendment liability, but under New York Times v. Sullivan, social media platforms would probably have some liability. Once they're on notice, once they know a statement is false, they know of the statement, they've been alerted to what makes it false, they know it's false or at least likely false, probably there would have been liability there, which would have created sort of a notice-and-takedown type of regime. Not necessarily a great regime, but that's probably what it would have led to. Section 230 goes further in protecting platforms even more, and therefore in a sense goes further in undermining libel protections even more. An important thing to point out for our gunners, but for the rest of our students too: always read the dissent, right? We're uncovering some very interesting threads here, and I think, Eugene, one thing that stands out to me is how this mapping on of dignity concerns has been a key consideration under the First Amendment for decades, if not longer. We don't need to go all the way to the founding to see the importance of those dignity and reputational interests to balancing some of these various considerations.
And so taking all of that legal foundation and now moving into the AI context... Could I interrupt for just a moment? I just want to push back a little bit at the framing of this as dignity. Yes, defamation is often called a dignitary tort, or sometimes one of the dignitary torts. But as a general matter, speech that merely injures someone's dignity is constitutionally protected, at least as to matters of public concern. We see that in cases like Hustler v. Falwell, involving the scurrilous parody, trying, and perhaps succeeding, in injuring Jerry Falwell's dignity. Or Snyder v. Phelps, which was the funeral picketing case, with really nasty messages about soldiers, about gays; the line people most remember is that they had signs saying "God hates" followed by a slur, and this was a thousand feet away from a military member's funeral. You know, that's something that might be seen as very seriously damaging people's dignity, but it is constitutionally protected. So it's not so much just dignity as protection against false statements. And false statements are not only harmful to the plaintiff, they're also potentially harmful to public debate, right? So this is one of the things that people have been talking about, for example with regard to false statements about election results and such: they could undermine public debate, undermine the search for truth, because they are false. Now, not all such statements are constitutionally unprotected, because there's real danger in restricting even false statements. But when there's the combination of undermining public debate and damaging a person's reputation, that is something where the courts have recognized there is still substantial room left for defamation liability. And moving into the AI space, you provide appendices full of case studies of where we may see libelous statements generated by AI tools. So in the introduction of the article itself, you outline asking a model, prompting a model, to detail for you the criminal rap sheet, the crimes, of an "R.R.", you use an individual's initials, what this R.R. has done, and ChatGPT, or whichever model you were using, reports that there have indeed been allegations of criminal activity by R.R. So what makes libel analysis complicated in the AI context? If we could just start with: what are some of the key issues that don't allow us to just say, oh, okay, well, we knew what libel looked like in 2022, before ChatGPT-3.5, and we know what it will look like after it? What are the complicating factors in this analysis? Well, lawyers are obviously going to be fighting over each one of the elements. I think some of them should be pretty easy to establish, but some people might disagree. So, for example, people are aware that AI models sometimes hallucinate. There are disclaimers that the AI provides. Now, at the same time, of course, they are seen as sufficiently useful that search engines now often automatically include AI-generated output at the very top of what they generate. So I think that those disclaimers are not going to be enough to completely preclude liability. If the disclaimer said, look, this is fictional,
this is just a joke we're putting together, like, I don't know, a Magic 8 Ball or something like that, then that might be enough. People would say, okay, this is obviously fiction. But if the disclaimer simply says there might be errors here, that's generally not enough. Likewise, I don't think there's Section 230 immunity for the platforms, because... Before we move on to Section 230, just to hang on to this idea of disclaimers, because I think a really good point you make is that it would be one thing if the models, or excuse me, if the AI labs, were saying, hey, you know, we're generating a new 8 Ball that you can shake, and it's going to come out with outputs, and, ha, that was funny, right, that says they did commit a crime, or they did break that person's foot, whatever. But you point out that these labs are quite invested in making reports and press releases about, look how well it did on the bar, look at how it's replacing doctors, look at how you can rely on this to replace that intern. So it's not as though they aren't trying to make these as accurate as possible. So I think that does a lot of work for your argument. Exactly. I think you've hit the nail on the head with that. It has to do with the way that the AI companies themselves are promoting it, among other things, in the course of justifying the tens of billions of dollars that have been invested in them. They're promoting this as something that is not completely reliable, but, you know, nothing in the world is completely reliable. They're promoting it as: it's reliable enough that you should use it. Well, if that's so, then it's unsurprising that people would view it as reliable enough that they might refuse to do business with someone because of something that is output by it. Right. And you point out expertly, too, that it's especially so when you're considering, oh, well, maybe I'll go to this one specific doctor, or maybe I'll go to this one specific lawyer. If you're using generative AI to get an assessment of, you know, I want to know what Professor Frazier's class rankings are, and what crimes, what's his rap sheet, what's his background like, all it may take is that one generated response that says Professor Frazier did X, Y, and Z for one student or one prospective student to say, huh, maybe I'm not going to sign up for that class, or maybe I won't go to that school. And so this publication question, too, is a really interesting one, in terms of thinking about who the output is actually being shared with and what the actual response may be to that output. Right. So I actually don't think that the publication element of libel law is going to be much of an issue here, in situations where at least somebody else has run the query and has seen the output, and sometimes, of course, the AI companies may have logs of who has run what queries. So if I run a query and it says something about me, and then I sue based on that, saying this is all false, the defense is: well, wait a minute, it was only output to you. You can't damage your own reputation with yourself, right? You've got a pretty good reputation, so, you know... Pardon? I said your reputation's pretty sterling at this point. Well, but it doesn't matter, because whatever other people might believe about me that's false, presumably I won't believe things about me that are false, except in highly unusual circumstances, which the law does not focus on.
So in that situation the publication requirement would be absent, but so long as other people are running this query and seeing this output, then I think the publication requirement is present. Again, remember, it doesn't have to be broadcast to the world at once in the same form. If it's shared with a bunch of people one at a time, here and there, which, by the way, is the way websites are visited too, right, they're just shared with each individual user as the user goes there, that's enough for publication. Even if it's just shared once with somebody, passed along once to somebody other than the plaintiff, that is, generally speaking, enough for publication. Section 230 also, I think, would not be that much of a barrier to liability, because remember, it says no provider or user of an interactive computer service shall be treated as the publisher or speaker of information provided by another information content provider. But the whole point of generative AI is that it's generative: it's generated by the AI company's products. So the lawsuit would be against the company for passing on information that's generated by itself. The premise of Section 230 is: don't go after Facebook, go after whoever posted the thing on Facebook. Well, here, if it's ChatGPT, it's OpenAI that is posting the material to the user. So again, I think Section 230 would not be much of a barrier. We can talk about some of the other things, but I think the real issue here has to do with mental state. So remember, modern libel law, generally speaking, concerns itself heavily with the speaker's mental state. If the plaintiff, the person whose reputation was allegedly damaged, was a public official or public figure, the plaintiff has to prove that the defendant knew the statement was false, or knew the statement was likely false and was reckless about that. If the plaintiff is a private figure and can show actual loss, not just hypothetical likely loss, but actual loss, then the plaintiff merely needs to show the defendant was negligent, was careless in its investigation. And actually, when it comes to speech on purely private matters, maybe there could even be strict liability, but let's bracket that; that's pretty rare, for a variety of reasons. Well, what does it mean to ask about the mental state of a computer program that has no mens, no mind, right? Mens rea, guilty mind. Well, it has no mind; it can have no guilt. So what does that mean? And I think the answer has to be that we look at the mental state of the organization that is responsible for the platform, that has created the code and that is operating the code. Now, by the way, that's complicated, because sometimes those could be quite different. Somebody, let's say, puts out a public domain large language model, and then other people are operating it. Interesting questions; let's bracket them for now, though I talk about them in the article. But let's say it's ChatGPT: it was created by OpenAI, and it's being operated by OpenAI. If the question is a knowledge-or-recklessness question, the question is: what does OpenAI know? Now, at the beginning, presumably it knows nothing about particular individuals who are being, as it were, discussed by its software. It doesn't even know that. I mean, maybe you can guess somebody's going to be asking about Donald Trump or Bill Gates, right? But it doesn't know what's being output.
But let's say somebody says: look, your software is outputting material that's false about me. And I realize you didn't know that, but now you know; I told you. In fact, not only did I assert this, I actually sent along a printout; you can check it against your logs. And I sent along supporting data that shows that this is just not true. So let me give you an example. There's a case pending, although it's probably going to end up being disposed of in arbitration because of an arbitration agreement, where somebody named Jeffrey Battle says: Microsoft is outputting information about me that reports that I was convicted of a serious felony and sentenced to 18 years in federal prison. And that's not me; it's another person with the same name. But it's linking the two of us together, because the output begins by describing my actual current job, I'm an aerospace expert, and then says, "however," comma, Battle did these other things. And I can show you there's a Wikipedia entry that the first part of the answer, which describes me, was obviously drawn from; there's another Wikipedia entry that describes somebody else with the same name; and the libel is in reporting that the two are the same person. So, open and shut, not one of those he-said-she-said situations, right? So at that point, the company, in this case he was suing Microsoft, would know that this is false and would be able to do something about it. Now, apparently, untraining or retraining large language models, to sort of tell them "stop saying this," is, I'm told, technically very difficult. But large language models aren't the only kind of software, right? I think any of us could easily design software that says: okay, after the output is generated, look up anything in it you can identify as a name, and generally speaking there are algorithms that pretty reliably identify whether something is a person's name, and check it against a list of known falsehoods that have been said by the software about that person. And if indeed the name appears within the same sentence as "felony," or as the accusation, excuse me, the other Jeffrey Battle was convicted of levying war on the United States, so if it appears within the same sentence as that, or the same paragraph, then just don't produce this output. I mean, that's not difficult code to write. It's over- and under-inclusive. It won't catch everything, and it may block things that are not false. But maybe that's what's called for if you're going to let out into the wild software that can generate potentially very harmful assertions: you'd need to have these kinds of controls there. And in fact, I've seen news accounts that indeed, sometimes if you put a person's name into a particular AI program, it just refuses to give you an answer. And that's in a sense a chilling effect, right? But the theory is: better to be somewhat chilled there than to output something that you know is false, where it's been reported to you that there has been false information written about the person. So that may be a somewhat 1.0 version of this kind of control mechanism. Presumably, you'd want to have something that is more carefully tailored.
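[To make that concrete, here is a minimal illustrative sketch, in Python, of the kind of post-generation filter being described. Everything in it is a hypothetical assumption for the example: the KNOWN_FALSEHOODS blocklist, the sentence-level matching, and the sample text are illustrations, not a description of any company's actual system.]

```python
import re

# Hypothetical blocklist of reported, verified falsehoods: each name maps to
# accusation keywords the operator has confirmed the model falsely attaches
# to that person. In practice this would be built from user complaints.
KNOWN_FALSEHOODS = {
    "jeffrey battle": ["felony", "levying war", "convicted"],
}

def filter_reported_falsehoods(output: str) -> str:
    """Drop any sentence pairing a blocklisted name with a flagged accusation.

    Deliberately crude, and over- and under-inclusive, as the discussion
    notes: it may suppress true statements and miss rephrased false ones.
    """
    sentences = re.split(r"(?<=[.!?])\s+", output)
    kept = []
    for sentence in sentences:
        lowered = sentence.lower()
        blocked = any(
            name in lowered and any(kw in lowered for kw in keywords)
            for name, keywords in KNOWN_FALSEHOODS.items()
        )
        if not blocked:
            kept.append(sentence)
    return " ".join(kept)

# The first sentence survives; the second is suppressed before display.
draft = ("Jeffrey Battle is an aerospace expert. "
         "However, Jeffrey Battle was convicted of a serious felony.")
print(filter_reported_falsehoods(draft))
```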
What about negligence? Well, there it turns out that we have a decent amount of experience with negligence liability for machines and for software. That's usually filtered through the law of product liability and design defects. Now, I oversimplify here, but basically, if I am injured by a self-driving car, let's say a Waymo, I'm walking down the street and a Waymo hits me, I wouldn't sue the car, right, obviously, but I could sue Google, which runs Waymo, on the theory that there was negligent design: that the software didn't recognize me as a pedestrian, and that there was a better design that would have prevented that. And then, of course, there would be a battle of the experts as to whether that would have been an effective design or not, not a great thing for lay jurors to decide, but that is the way our tort liability system works. So I think those are going to be the complications: the question of ascribing mental state in these kinds of situations, where the output is immediately created by a thing that has no mind, but is ultimately the responsibility of an entity that's populated, that's staffed, by people who do have minds. Well, there's a lot to unpack there. I want to start quickly with just this 230 argument. You have said that you don't think 230 would apply; you outline a great case in your paper. What's the strongest argument you've heard for why Section 230 should apply to AI models, and how would you refute it? Right. So I have to say, I haven't heard any really persuasive argument as to why Section 230, by its terms, does apply. I mean, some people have said, well, really, large language models are all based on training data, so really you're holding them liable for information provided by the source of the training data. But in most of these cases, the training data does not contain those false assertions. If it's true that the training data says Eugene Volokh was convicted of stealing from petty cash and that's why he was fired from UCLA... Just to make clear that it's not so: the Bruins didn't kick you out for that reason? That's good to know. Right, right. Nor was it an amicable retirement from teaching. But let's say the training data includes something that's false, and the software sucks it up and then re-outputs it. That's the garbage-in, garbage-out scenario. Then maybe there would be immunity under Section 230. But the problem with large language models is that it's sometimes gold in, garbage out, right? All the training data may be perfectly accurate, but the output is still false, because it weirdly recombines words. Not even recombines; I mean, in a sense, all output is recombining words that already exist, but it's responsible for how it puts the words together. That is its speech. So it would be held liable. So I don't think there's a statutory basis, or a statutory construction basis, for saying Section 230 applies. There's a policy argument that basically says: look, we should have something like Section 230. Maybe we should create a new Section 230, because we don't want to have an undue chilling effect. We don't want to deter the creation of the software, and we don't want to encourage it to over-restrict, the way that apparently it has been doing in some measure, again, by saying, look, we just won't answer any questions about a particular person. So we should have a new Section 230 that does that. The problem is that that would essentially be saying there's nobody who would be responsible for that,
and that if people's reputations are damaged, well, too bad for them. And, you know, that's a possible policy decision; I'm just not sure that it is a wise policy decision, especially since some of these companies are extremely wealthy. They have the tools to try to make their software better. And to the extent that it's, let's say, even technically impossible to guarantee perfect safety, well, then maybe the answer is that they wouldn't be held to be negligent, sort of a product design argument. And even if they are held liable for something, well, you know, that's a cost of doing business, and they should factor it into their financial analysis, and that may encourage them to produce more reliable output. Again, it's in a sense like self-driving cars. Self-driving cars, I think, are a wonderful thing. I ride in Waymos whenever I can. They're only available in some places, but I'm happy to use them. But I don't think anybody says, well, in order to encourage the development of self-driving cars, we should make them categorically immune from any harm that they cause. So that's what I think. In that instance, I would say: every San Francisco resident, hide your kids. But that's another conversation. Well, the thing about self-driving cars is that they're probably better for society because they're safer than humans. So you do want to, in some measure, avoid undue discouragement of self-driving cars, but at the same time, I think the answer is to provide a sensible level of liability rather than giving them complete immunity. My short remark on that is: anyone who's opposed to autonomous vehicles, come drive in Miami and you will become the most rabid supporter of AVs known to man. But that's another podcast we'll save for another day. Eugene, another point you make is that a lot of these libelous outputs from models tend to be quotes, tend to just be "Professor Volokh said, quote," and in quotes, X, Y, and Z, and that's obviously just a slam-dunk, easy libel case. So you propose some innovative and very straightforward solutions to this quotation issue. Can you just walk through those mechanisms? Yeah, sure. So I should say, in 2023, when I wrote the article, often these programs would output things in quotes, which is extra dangerous, right, because quotes are sort of signals to us, and I oversimplify here, there are scare quotes, there are quotes used in obvious fiction, but generally speaking, in many contexts, they are signals that essentially say we're actually reporting on something somebody else wrote. And that makes them extra hazardous. If I see a paraphrase, I might say, well, I need to check the source. If I see a quote, I'm probably going to be a little bit more likely to trust it. But apparently what was happening is that the software was just treating a quotation mark as any other kind of token. And if it predicts that the following token is going to be a quote, then it just includes that, and then includes whatever it thinks the next token is, even if it never appeared in the training data, without any attempt to verify the quote, let's say by doing a Google search, seeing if the quote appears somewhere, and such. Now, in more recent years, and more recent months, as I've been using the software, I've seen a lot fewer quotes. Not none.
I actually was just doing an experiment with a student of mine where I asked, I think it was, yes, it was definitely ChatGPT-4, I asked a legal question, and it gave me actually the correct answer, citing the correct case, but giving a quote that did not appear in the case. So it was generating hallucinated quotes. So one possibility might be... actually, let me step back. When somebody says there's a design defect in a product, usually, again, I oversimplify, but usually what that means is that the product was negligently designed, in that there was some relatively cheap precaution that could have been taken but wasn't taken. So with a self-driving car: just by adding this particular piece of code, they could have recognized that this blob going across the field of vision was a pedestrian, let's say. So, likewise, what you would be looking for if this issue came up in a real case involving what I call large libel models is: is there a way that they could have diminished the risk of this harm? And one possibility is to have code that says: do not output quotation marks unless the thing between the quotation marks appears somewhere, either in the training data or, if you don't have access to the training data, in some corpus. Maybe do a quick Google search and see if you can find it. And if you can't, then just don't include the quotation marks, because then you can't vouch for the accuracy. Again, there are complications. What if it's quotation marks in fiction that the AI was asked to write? But you know, one of the things that I think the AI companies will have to recognize, if they make all these claims of, oh, well, it's just too complicated for us to implement these fixes, is that they'd be saying: yes, we can create software that performs at the 90th percentile on the SAT and on the bar exam and this and that, but checking to see if the quote actually exists somewhere? Oh, too difficult, right? I just don't think that, on the facts, the AI software developers will get away with that kind of argument.
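[As an illustration of that quote-gating precaution, here is a minimal Python sketch under the same caveats: the corpus_check callback is a hypothetical stand-in for whatever verification the operator has available, a search over training data, a document index, or a web search, and nothing here reflects any lab's real pipeline. Unverifiable quotes keep their words but lose the quotation marks:]

```python
import re

def strip_unverified_quotes(output: str, corpus_check) -> str:
    """Remove quotation marks from quoted spans not found in a corpus.

    corpus_check(text) -> bool is a placeholder for the operator's lookup:
    training data, a document index, or a web search. Quotes that fail the
    check keep their words but lose the marks, so the output no longer
    vouches for them as verbatim quotations.
    """
    def gate(match: re.Match) -> str:
        quoted = match.group(1)
        return match.group(0) if corpus_check(quoted) else quoted
    return re.sub(r'"([^"]+)"', gate, output)

# Toy corpus standing in for the training data or a search index.
corpus = "The court held that the statute was unconstitutional."
draft = ('The opinion said "the statute was unconstitutional" and '
         '"the defendant acted with actual malice."')
# The first quote is found and keeps its marks; the second does not.
print(strip_unverified_quotes(draft, lambda q: q in corpus))
```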
You mentioned that you could foresee, for the largest labs, this being a sort of cost of business, of updating their software or updating their systems to make sure we're checking for these sorts of libelous statements. How do you respond to concerns that mapping this sort of requirement onto AI systems may quash AI innovation, may make it unduly burdensome for smaller developers? It is a very serious concern, and it is a concern with any product, right, any service as well. Medical malpractice liability may undermine possible innovation in medical practice, because usually doing what everybody else is doing is likely to be seen as reasonable, whereas trying to do something better, if things go badly, even if it's not really your fault, may be seen as unreasonable. Not your fault in the sense that you had really good reason for doing it, but the result in this particular instance was bad; it's a very reasonable fear that your actions would be seen as unreasonable there. So, likewise with regard to self-driving cars: I think Tesla and Google can afford the risk of liability. But yeah, if somebody wants to create a self-driving car kind of in his garage and sell it to people a lot more cheaply, let's say, than a Tesla is sold... Well, I guess I'm not sure how fully self-driving Teslas are, but certainly that's the goal, and Google's Waymo is fully self-driving. Then, in any event, the risk of liability may deter this startup. And not even just in the garage: if somebody is looking for investors, they may say, well, wait a minute, you know, we don't want to invest all this money when all of it will go to the lawyers and go to verdicts against you. Very serious concerns. But on the other hand, it's also a serious concern if innovators are not held responsible for the harm that their innovative products cause, because then they may just not act as safely as possible. Maybe, in fact, I shouldn't be creating self-driving cars in my garage. Maybe I shouldn't be letting loose a language model that I know people will use to make decisions, if the model just makes stuff up about people. By the way, you know, this issue has led to some statutory action in some contexts. So medical malpractice recoveries are capped in some states, and there are some procedural rules that are aimed at not unduly deterring reasonable behavior. As I understand it, nuclear power plant operators have their liability capped at some many hundreds of millions of dollars; it's a high cap, but still a cap, in order to avoid deterrence of nuclear power. Now that a lot of people, a lot of environmentalists, are speaking out in favor of nuclear power, because it's ultimately cleaner than the alternatives, we might be seeing that becoming an important protection again for new power plants. But very rarely is the rule: well, we so want to promote innovation, we'll have no liability whatsoever. Right? Usually, if the legislature steps in, it tries to balance these concerns. Just like with Section 230: it didn't completely preclude defamation liability; it just said it has to be placed on the original speaker. Well, if you are going to preclude libel liability even for the original speaker, for the entity that's responsible for generating the output, well, there needs to be a legislative judgment, I think, along those lines. Probably the legislature will say no. If anything, a lot of people think Section 230 itself already goes too far. I'm not sure that's right, but I think that's the sentiment among many. But at the very least, they'd probably say: look, there's got to be some sort of compensation, some sort of mechanism for protecting people, innocent third parties, whose reputations may be damaged and who may be economically ruined, potentially, as a result. And shifting our perspective to what's on the horizon: one subtle part of your paper touches on considerations of the use of open-source models, so, models being used by downstream developers. And I think one question I'm particularly keen to know how you're initially thinking about is the idea of AI agents. So we can have agentic systems where it's an AI agent talking to another AI agent talking to another AI agent, who then shares an output and posts it, let's say, on your LinkedIn, and you never even thought about what it was going to post or when it was going to post it. So in these instances of multiple entities or individuals relying on multiple AI systems, how complicated is all this going to get?
Do we need to start thinking of wholesale reforms to our conception of libel, or do you think that this pre-existing structure can be amended, or adapted, enough to fit this crazy technical world we're living in? Right. Well, it's hard to know for sure, among other things because it's still early days at this point. I know of two lawsuits that are being litigated in U.S. courts. One, the Battle case, which again has been shunted off to arbitration; it's in federal district court in Maryland. Another one is in state trial court in Georgia, where OpenAI's motion to dismiss was actually denied by the court, so the judge allowed the case to go forward, although it's still not at trial yet. There's also a complaint that's been filed recently in Norway with the Norwegian data protection authorities about libelous output accusing a Norwegian man of killing his own sons. The good news is everybody's alive and well. But the bad news is, he's saying, look, you know, it's making up very, very serious allegations about me. But still, it's only those three instances that I know of, plus a few others where lawsuits have been threatened, but only three filings that I know of. So probably there won't be a lot of movement for massive reform until we see some decisions there, or at least until we see how courts are handling this. Right now, I will say, as a general matter, our legal system is quite well acquainted with harms that stem from a combination of actions by many parties. That's sort of a staple of first-year tort law, for those who have taken it. A lot of times the lawsuit is, let's say, some bus causes a vendor's cart to tip over, and that doesn't damage the goods, but as a result, thieves come and steal the goods. To what extent is the bus operator responsible for the theft of the goods? Well, the answer is: maybe. Even though it's a third party, it may be that the negligence of one enabled the intentional misconduct of another. And then you can multiply it further, especially when you get to product liability. Historically, you know, there's been the seller, there's been the manufacturer, but the manufacturer may have bought parts from many other people, right? There could be a contractor and a bunch of subcontractors. So the legal system is familiar with that. It may be that it'll map the existing rules onto this new technology in a way that doesn't make sense. And if that's so, then I think quite possibly Congress will step in, or state legislatures in some situations will step in. But for now, at least, I think the answer is going to be that courts will be applying these familiar rules, developed over centuries, having to do with the liability of multiple causal factors, as it were, parties that caused things in a variety of different ways and to a variety of different degrees, and they'll try to map them onto AI. And before we let you go: we have some pre-law students, I'm sure, who are watching this, and we have folks who are decades out of law school, who have maybe moved on from thinking about the black-letter law and are really involved in theory and policy.
What are some things that are top of mind for you? If you were to reach out to folks who are curious about diving deeper into these issues, what questions do you recommend they look into, or what cases or future trends do you think are particularly worthy of their attention? Yeah, you know, really hard to know. Really hard to know. I did not anticipate, in 2022, what ChatGPT would be doing. It's very hard to predict what's coming down the pike. Among other things, there may very well end up being lawsuits over physical injury as a result of AI. There is, of course, a lawsuit pending right now involving the suicide of a teenager who was involved in chatting with AI and, the claim is, committed suicide for whatever reason as a result of the output. Those kinds of cases are percolating up, especially when children are involved. Usually it's pretty hard to hold an entity liable for someone's suicide, and in those kinds of cases I'm actually pretty skeptical of liability, but courts are going to have to deal with it. And then, on top of that, the thing that one might be thinking about is: what if there are other kinds of physical harm? For example, people follow the medical advice of an AI, and that advice is provably false. Like, there's a log that says you should do this and that, and that clearly is not the right thing to do, and it was indeed what the person did. So the causation may be pretty straightforward. To what extent would there be that kind of responsibility? And you're quite right that in the agent environment, where lots of things are happening, we let something loose, and we think we know what's going to happen, and it turns out the result is vastly broader, or at the very least vastly different, than what we'd expected. The legal rules may end up being familiar: was it careless? Was the harm foreseeable, and such? But how they'll actually play out as a practical matter may be quite surprising. And again, because it's surprising, it's difficult to predict. Well, folks, we're going to have to let class out and allow you all to get to your homework. But for now, thank you so much, Eugene, for joining the AI Summer School. Thank you so much for having me.