Transcript for:
LexisNexis AI Insider Webinar: Permissive vs Restrictive AI Regulation

Okay, good afternoon everyone and thank you for tuning in to this LexisNexis AI Insider webinar, Permissive versus Restrictive AI Regulation: Reviewing Approaches Around the Globe. My name is Isabel Cook. I am a PSL, or Professional Support Lawyer, in the LexisPlus Intellectual Property team. I specialize in copyright, database rights and intellectual property transactions primarily, but I also work on our emerging technology content that covers AI, alongside my colleagues in our InfoTMT and commercial teams.

So I would like to start by introducing my co-host Matthew Newman, who is a global chief correspondent for MLex, which is our sister company. Matthew writes about data protection, privacy, telecoms, cyber security and, of course, artificial intelligence. His career has spanned many jurisdictions and, having been with MLex since 2012, he reports on regulatory developments from around the world.

Hello Matthew, pleasure to have you with me today. It's great to be here, thank you very much. Great, thank you.

Matthew, before we introduce our wonderful panel, I am going to ask you to help me with some scene setting that will ground our discussion this afternoon. But first I would just like to let our audience know some of the headline developments that we are going to refer to as we move through the event. Many of you will be familiar with these already, so I will keep it brief, but the first of those is of course the EU AI Act, which is a legislative initiative in the EU for comprehensive harmonised rules on the development and use of AI.

It is going to take the form of a regulation and therefore be directly applicable without requiring transposition by member states. It has had something of a bumpy ride through the EU legislative process so far. We do have political agreement on the Act that was announced in early December last year. The agreed text hasn't been officially published.

Some of you may have seen, though, that it has been widely leaked and therefore commented on. There is no specific regulation for AI in the UK as yet; however, we have seen some political developments, which include the publication of a government white paper entitled AI Regulation: A Pro-Innovation Approach. There was the AI Safety Summit hosted by the UK government at Bletchley Park last November, and there has also been a private member's bill introduced in the House of Lords later that same month.

It's not part of the government's planned legislation, so it remains to be seen how that progresses. There has also been publication by many sector-specific regulators, such as the Information Commissioner's Office, of material addressing AI in relation to their work. And then in the US, the most significant development has been President Biden's executive order on AI from last October, which directs various US government departments and agencies to evaluate AI technology and implement processes and procedures regarding its adoption and use. It also imposes some obligations on the private sector as well.

So all this and more is to come in our discussion. Just a note that we are not confining our panellists to only discussing regulation in the strictest interpretation of the word; really, our discussion will come from a broader compliance perspective. Matthew, to start us off, I would like you to lay out some of the macro harms that are becoming apparent as AI grows in its technical capabilities and its general availability around the world. What do you think those harms are, and how are they shaping this scramble to regulate that we're seeing, in whatever form?

Well, thanks, Isabel. Thank you for the introduction and thanks for that broad scene setting. I think AI is top of mind for regulators, companies and just about anyone who's thinking about what's going to happen in the future. But there's a real reason for this: there's fear, and then there's opportunity, with AI.

The highlights of the harms I'm going to mention are misinformation and disinformation. Just to define those very quickly: misinformation is false or inaccurate information, while disinformation is deliberately false content which can be used to spread propaganda, fear and suspicion. The idea is that generative AI is extremely cheap and it's used to create fakes, deep fakes. And experts say that the output can actually be better at fooling humans than human-created content. And so the risks of that misinformation could be quite serious.

The one example I can think of is in politics. But there's also the stock market, which could be moved by generative AI. It could also erode trust in a shared sense of reality, and the threats to democracy are quite apparent. The World Economic Forum said just this month that politics could be disrupted by false information, leading to riots, strikes or crackdowns on dissent from governments. A good example of misinformation occurred just this month in New Hampshire, when voters received recorded phone calls from an AI-created voice that sounded just like President Biden, telling them not to cast their ballot in the state's presidential primary.

Threats to human rights and hyper-surveillance. There's a right to privacy that's often cited as the main human right and the basis for other rights, such as freedom of association, thought and expression. An example of a threat to privacy would be using AI systems to profile people based on their internet activity, retaining their data and then re-identifying them with so-called anonymized data. That would mean that the data has been scrubbed of personal attributes, but can then be traced back to individual users using AI.
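[Editor's aside: to make the re-identification mechanism concrete, here is a minimal sketch of the classic, pre-AI version of this risk: linking "anonymised" records back to individuals via quasi-identifiers. All data, names and fields below are made up for illustration; AI simply makes this kind of linkage cheaper and more powerful at scale.]

```python
# Illustrative sketch: records scrubbed of names can sometimes be
# re-identified by joining on quasi-identifiers. All data is fictional.

anonymised_health = [
    {"zip": "02138", "birth_year": 1954, "sex": "F", "diagnosis": "X"},
]

public_voter_roll = [
    {"name": "Jane Doe", "zip": "02138", "birth_year": 1954, "sex": "F"},
    {"name": "John Roe", "zip": "90210", "birth_year": 1987, "sex": "M"},
]

# Linking on (zip, birth_year, sex) re-identifies the record even
# though the name was removed from the health data.
for record in anonymised_health:
    key = (record["zip"], record["birth_year"], record["sex"])
    for person in public_voter_roll:
        if (person["zip"], person["birth_year"], person["sex"]) == key:
            print(f"Re-identified: {person['name']} -> diagnosis {record['diagnosis']}")
```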

AI systems could also harm freedom of expression because they may erroneously take down certain forms of legal and legitimate expression more frequently than human content moderators would. Then finally, as a potential harm, I'm going to focus on automated processing. By its very nature, AI involves automated processing, and we encounter these systems every day in recommender systems for movies, music and search terms. This kind of processing could also be used for job applications and financial services, and the harm is that bias in a particular system could be built into it. Once those biases are built in, they are amplified or replicated.

Now, there are some exceptions under GDPR, under Article 22, for using automated processing. We can go over those in more detail, but they involve human involvement, explicit consent, necessity for the performance of a contract, and when it's authorized by law. IP and copyright are other harms, and we're going to go into those just now. So I'm going to let our panelists describe more about the minefield that copyright and AI has become. Thank you.

Great, thank you, Matthew. That's really helpful, I think, in setting us up for what we're going to be hearing from our panel.

So with that, I should introduce them. We have Cerys Wyn Davies, who is a partner at Pinsent Masons. Cerys is an expert in IP, IT and information law, specializing in strategic advice on IP protection, exploitation and enforcement, as well as data privacy compliance and data commercialization. She brings her extensive experience in drafting and advising on complex technology agreements and providing thought leadership on topics that include AI, the Internet of Things and data commercialization to our discussion today.

Hi, thank you.

We also have Matt Hervey, who is the head of artificial intelligence at Gowling WLG. Matt is a leading advisor on AI across multiple sectors, including being a highly experienced intellectual property advisor, particularly in patent disputes.

Matt is a widely published author on AI matters. He is co-chair of the American Intellectual Property Law Association's AI subcommittee and has written the World Intellectual Property Organization's guidance on generative AI. He participates in AI working groups for the IP Federation, the International Association for the Protection of Intellectual Property and the International Chamber of Commerce, amongst others. So welcome, Matt.

Thank you for joining us. We also have Guy Matsushita, who is counsel at Nishimura & Asahi in Tokyo. Guy's practice focuses on intellectual property, IT, AI, data and other technology-related matters, with a particular focus on patents that is supported by his background in engineering.

Guy has represented clients in numerous international dispute resolution cases and advised tech ventures on technology matters. He contributed to Japan's first comprehensive guidelines on AI and data-related contracts, the Contract Guidelines on the Utilization of AI and Data, which were published by the Japanese government in June 2018. Thank you, Guy. And we also, finally, have Jonathan Armstrong, a partner at Cordery specializing in technology and compliance. Jonathan advises multinational companies on risk, compliance and technology matters across Europe and has handled legal issues in over 60 countries. His expertise spans emerging technology, corporate governance, ethics code implementation, reputation management, internal investigations, marketing, branding and global privacy issues.

Jonathan was appointed to the New York State Bar Association Presidential Task Force on Artificial Intelligence and in this role he collaborates with leading practitioners, regulators, judges and academics to develop frameworks for the use and control of AI in the legal system. So I'm sure you will all agree we are extremely lucky to have such an experienced panel with us today. To kick off our discussion in earnest, I would like to ask each of the panelists to give us a very brief update on the state of play of AI regulation in some key jurisdictions.

So Cerys, can we start with you and the UK, please? Most certainly. So I think, as already mentioned, the UK has no holistic, overarching body of law that governs and regulates the use of AI or the development of AI in the UK. But it's important to realise that there is a body of law that is directly applicable to those activities.

So we've mentioned copyright law already, and other intellectual property laws have a really big part to play, so we will talk more about those a bit later. Data protection: absolutely key, very fundamental to the use of AI.

But we have liability laws, which come into play as well, and human rights and employment laws. So we must bear in mind that it isn't a free-for-all; it's not lacking in regulation, but it's that overarching regulation that we currently don't have.

And the UK government expressed its view in March of last year, in its white paper, that it wants this pro-innovation approach. And we're seeing this in common with a number of other countries around the world, whereby the UK wants to encourage AI developers to carry out that development work in the UK. So what are they proposing?

Well, instead of having statutory regulations, rather it's a proposal for existing regulators, for example the Financial Conduct Authority, to step in and, within their sectors, apply a series of principles that have been laid down. And those principles are modelled on the OECD principles. So, for example, safety; generative AI, I think, has brought that very much to the forefront.

Transparency, again, we hear about this all the time. Fairness as well is another key one. But those principles then are to be encouraged amongst the sectors.

It's not a question of enforcement, but the government has also made it clear that, if it needs to, it may step in to make sure that those principles are actually being adhered to. In terms of developments, again, Bletchley Park, very much with generative AI and safety in mind: there was the Global AI Safety Summit that was hosted by the UK, where these issues could be discussed.

And the Bletchley Declaration came from that, whereby essentially the countries attending would cooperate, collaborate to raise awareness, to let each other know what they were doing around this prime issue of safety. Great, thank you. Matt, would you let us know where we're at in the EU?

Yeah, big topic. I'll do my best. And I have to echo, first of all, what Cerys has said about national pre-existing laws. That obviously applies to the member states as well. They may have their own national laws and regulations, and it covers the same sorts of stuff.

So IP, human rights, employment law, etc. There's also overlapping pre-existing regulation, particularly privacy. Article 22 of the GDPR has been on the books for some time and is of direct importance to algorithmic decision-making.

And we also have to look at the EU's European Strategy for Data in general, which aims to achieve data governance, access and reuse across the EU. But the EU does have two specific AI measures in draft. The first is the AI Act, which you've already mentioned. And the second is the AI Liability Directive.

I'll briefly talk about the second one first. So the AI Liability Directive is to extend product liability protections to AI and to make sure that that's harmonized. And really, it's about things like the burden of proof and the right to disclosure, the sorts of things which are baked into UK common law anyway, but they're making sure it's harmonized across European member states.

Now, the key aspects and the status of the AI Act are somewhat in flux. The current draft is about 260 pages long, so I'm not going to summarize it. And there's even doubt about whether it's going to pass. So the recent information is that there are three, maybe four countries looking to vote against.

And so it might in fact not pass at all. And I know our overarching theme here is restrictive or permissive. So in the eyes of some member states, it appears that it is too restrictive a measure and might make the EU less competitive. In very broad terms, the key features of the proposed AI Act are to regulate AI per se and to apply to providers and deployers of AI in the EU or, where they're not in the EU, where the outputs of the AI would be used in the EU. There are massive fines on offer, such as the greater of 35 million euros or 7% of the previous year's worldwide turnover.
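[Editor's aside: the "greater of" cap is simple arithmetic, but a worked example may help make it concrete. The sketch below is purely illustrative; the turnover figures are hypothetical, and actual penalties under the final Act depend on the nature of the infringement and on regulators' discretion.]

```python
# Minimal sketch of the "greater of" fine cap described above.
# Hypothetical turnovers; not legal guidance on how fines are set.

def max_fine_eur(prior_year_worldwide_turnover_eur: float) -> float:
    """Upper bound: the greater of EUR 35m or 7% of worldwide turnover."""
    return max(35_000_000.0, 0.07 * prior_year_worldwide_turnover_eur)

# EUR 1bn turnover: 7% is EUR 70m, which exceeds the EUR 35m floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
# EUR 100m turnover: 7% is only EUR 7m, so the EUR 35m figure governs.
print(max_fine_eur(100_000_000))    # 35000000.0
```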

The structure is that certain AI systems would be prohibited entirely, such as subliminal manipulation. So in that sense, at least, the regulation is restrictive. Below that, you have AI systems which are high risk: those which represent a significant risk of harm to the health or safety or fundamental rights of natural persons.

So we're talking about profiling using personal data, tracking your performance at work, tracking your health or your reliability; biometrics, critical infrastructure, education, employment: all of those sorts of things may be high risk. Now, if it's high risk, it doesn't necessarily mean you can't do it. It's not necessarily restrictive in that sense.

But there are requirements about registering your AI, about documentation, about showing risk mitigation, ensuring AI literacy of those using the AI, and human oversight. There are also rules for general-purpose AI. Again, we're talking about requirements for documentation, evaluating models, tracking and reporting. Going back to some of Matthew's themes, there are also specific regulations on the labelling of deep fakes, the watermarking of generative AI outputs and the disclosure of training content.

Now, is that restrictive? I think it certainly limits the level to which companies can go ahead and automate and not have humans in the loop. And it's certainly front-loading compliance work.

But I think the argument on regulation is always the same. Will that actually save money and mitigate business risks and improve certainty and be an overall liberalisation, as it were, of a market? We shall see.

But certainly they are deliberately trimming back its impact. So, certainly for my clients, of particular interest is that there are exclusions for scientific R&D and to avoid overlaps with financial services regulations. But they're also narrowing the definition of AI itself and limiting the sorts of compliance required from SMEs. So there's a little overview for you.

Wonderful. Thank you so much. Guy, I think we might come to you next and ask you to give us an overview from Asia-Pacific, but Japan in particular. Sure. So I can give you an overview from the Japanese perspective.

And in Japan, there is a tendency to prefer discipline through soft law, such as guidelines, rather than legislation like the European AI Act, for example. Currently, the Ministry of Economy, Trade and Industry and the Ministry of Internal Affairs and Communications are collaborating to create AI guidelines for business operators, which are open to public comment until mid-February this year. It is worth noting that these guidelines, although not binding, aim to incorporate a risk-based approach like the one adopted in the AI Act, and also try to coordinate with international standards. One example would be that these guidelines outline the fundamental principles necessary in the development, provision and use of AI, and take a human-centric approach. This human-centric approach includes acknowledgement of the increased risk of AI-generated disinformation, misinformation and biased information, which may destabilize and disrupt society, and it emphasizes the need to take necessary countermeasures. Furthermore, internationally accepted principles such as safety, fairness, privacy protection, security, transparency and accountability are also covered.

And importantly, these guidelines also incorporate the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems. So we are actually still trying to follow the international standards accepted around the world. Aside from these guidelines, we are also in a big debate for now on how we should approach the copyright issues, as well as the privacy issues, related to AI development and use.

Great. Thank you so much. Finally, Jonathan, I will clarify that you are not a US lawyer, but you do keep abreast of events there.

So could you give us a brief overview of developments across the Atlantic? Yeah, happy to. I mean, I think that, as a broad rule, if we're looking at permissive versus restrictive, the US is broadly falling into the permissive camp, if you like, and that tends to mirror US developments in a lot of areas. But there are exceptions to that.

And clearly, part of the reason for the task force that I'm on at the New York State Bar is an incident where lawyers used ChatGPT to draft submissions to court. That wasn't a good idea. And as a result, bar associations in particular are ramping up the requirements on lawyers, and judges are doing that as well. As far as general regulation is concerned, many states are looking at AI legislation.

There are some federal proposals as well. It falls into broadly six main areas. Elections, obviously, is a key priority if you speak to US lawmakers.

That's one of their biggest concerns. Obviously, we're coming up to what might be a hotly contested election in the US; there are allegations of nation-state involvement, etc., so as a result that seems to be top of most lawmakers' agendas. Evidence in courts would be the second area; there's federal and state legislation there. Transparency is another key area, and I know we're going to return to that theme. The FTC already possess powers (they're the broad equivalent of the CMA) and they're looking at activity in that area, just as the CMA are here. Fabricated videos and audio we've talked about already; AI licensing regimes we've talked about. I know the EU is looking at a sort of early opt-in version of the EU AI Act as well, and that's something that I think US lawmakers are following with interest.

And then hiring and promotion seems to be the area where there's been almost the most activity so far. People are very concerned about discrimination in hiring processes as well, so a number of states are either enacting or looking at legislation in that area. And then, of course, you've mentioned the Biden executive order in October, setting up things like an AI safety and security board, and looking at issues like discrimination in federal agencies as well. Great.

Thank you. Thank you, all of you. That was really helpful and really interesting. I think we might move on to the Q&A portion of this webinar now. So, Matthew, I'm going to hand over to you to start us off with that, please.

Excellent. Thank you. Fantastic overviews from our four panellists. And you touched upon some of the big themes, copyright and privacy issues. And so the first question is going to be for the panellists:

does the jurisdiction that you train your model in matter? And what could be the impact of that for the organization that built it and for the organizations using the model? So, the two main themes that I mentioned: copyright exceptions and data protection rules.

Let's start out with Jonathan. How do you see that in your jurisdiction, or what are your thoughts about that? If I leave the IP lawyers to concentrate on the IP aspects and pick up the data protection aspects, then broadly it doesn't matter from a data protection and privacy point of view where you're training your model. And we've already had quite a lot of activity on training data.

For example, just yesterday the Garante, the Italian data privacy authority, reminded us that they had suspended ChatGPT last year over concerns about training data, amongst other things, that there are investigations continuing, and we believe that OpenAI have been given 30 days to file submissions to avoid regulatory action. So there is a clear willingness by regulators to use GDPR to police AI, or to try and bring in measures around transparency. But I know the answer is different from an IP perspective, so I'll leave the IP teams to comment on that.

Okay, so let's hear from our IP experts.

I think we'll go to Matt first, if that's okay. Yeah, yeah, I'll happily cover that. So, I mean, obviously, this is a regulation talk. We can debate whether copyright law is regulation.

But, by the way, in the AI Act IP does raise its head. It's about protecting fundamental rights, and IP is listed as such in Recital 28A. Also, Article 52C currently requires, in the draft, a summary of the content used for training.

A previous draft mentioned copyright material in particular. So we'll see if that leads to litigation across Europe, or indeed internationally, when people have to disclose their training sets. Now, for IP, the key stages of training a model are to scrape or gather the information.

But for an LLM in particular, you're talking about often using common crawls of the entire internet. You need to process that content. You need to train your model and build your model, and then there are the outputs.

And so the question is: which of those stages, if any, infringe? And do you get an advantage by doing the training in one jurisdiction and deploying it in another? The key issue here is that copyright is a national right, and that is why, in the UK, the Getty versus Stability action has already had an interlocutory hearing seeking to strike it out on the basis that Stability didn't perform certain acts nationally. It's continuing to travel; we'll see what happens. And the truth is that, internationally, there are very relevant exceptions which are simply not harmonised. So in the US, you have fair use. That is a principle-based, open-ended exception.

And in the past, cases such as Google Books and Perfect 10 have deemed that certain technological measures were transformative and enabled new services such as web searching, and so they were allowed to do this. And we wait to see what happens in the ten or so cases raising the issue in the States at the moment.

Contrast that to the EU, in terms of harmonised exceptions, and the UK. There you have specific, narrow exceptions, and you have to fall within the exact parameters in order to be allowed to do what you do. The relevant ones are temporary copies, probably pastiche and, most importantly, text and data mining.

And in the UK, that's non-commercial only. So we wait to see what happens in the Stability case, among others. So the key issue here is: if I've trained elsewhere, maybe in a place where I'm allowed to do so, can I then move the model or use the model in another jurisdiction? And the critical issue there is: is the model a copy?

And there's a lot of academic debate about whether models memorise significant numbers of works. And a lot of the challenge is putting in place technical safeguards and human-centric policies to make sure that outputs don't infringe inputs. I will cede the floor to someone who knows more about privacy, procurement or the like.

Great. Thank you. I think, Guy, we might ask you to give your perspectives on some of the things that Matt has picked up there. Yeah, sure.

So, from my IP perspective, where you train the model does matter, because of the copyright exceptions. I just want to describe the circumstances in Japan first.

In Japan, in 2018, the Japanese Copyright Act was amended so as to ensure a copyright exception for non-enjoyment purposes. The current Article 30-4 allows broad use of copyrighted materials for information analysis, such as text and data mining, and this is even for commercial use, unlike in the UK. With this Article 30-4, Japan is considered one of the strongly pro-AI-development countries, so that some Japanese commentators have described Japan as an AI development heaven. My understanding is that similar rules are also adopted in other Southeast Asian countries, such as Singapore. And under Japanese law, the trained model itself usually is not considered a copyrighted material, so it may be possible to copy it, I guess, but I'm not really sure of the circumstances outside of Japan. Also, in relation to this copyright exception, there's a big debate going on in Japan, because the need for protection of right holders, especially artists, is now under deep debate because of the introduction of generative AI. Currently, the Agency for Cultural Affairs, which looks after the Copyright Act, is drafting a legal issue list in relation to the application of the Copyright Act to the development and use of AI. The issue list contains some interpretations of Article 30-4 which may limit free use of copyrighted materials, even for text and data mining purposes. Although I have said that Japan could be the AI development heaven, it may no longer be, depending on the outcome of the public consultation which is now taking place and which may run until mid-February. Depending on that outcome, the issue list may actually be restrictive of AI development in Japan and may have a strong influence on training taking place in Japan, so I'm not sure what's going to happen in the future. And that's the IP perspective, I guess. And should I pass it to the...

Yeah, that's great, that was extremely helpful. We'll move on to another question.

So, Matthew, do you want to jump in with that? Yes, please. Thank you.

Yeah, so the second main question is about what happens when your client wants to introduce an AI tool to their business, which they will be buying from a third party. What do they need to be worried about from a legal and compliance perspective, and what practical steps can they take to alleviate those worries? So why don't we start with Cerys. Thank you.

Yes, so one of the first things I would say is: understand the AI tool. Of course, you're not going to understand it to the level of the developer, even if they understand it totally. But you need to understand its various elements, because of the issues that we've been talking about around copyright and the data, but also because of the nature of AI technology, which means that it's learning. It's moving very fast. It's going to keep moving. So we need to be doing our due diligence around those aspects that we can identify.

And we need to understand in particular that an AI tool will assimilate data, so it takes that data in, it will transform that data in the process, and it will then create new data as it continues to learn. And that is really important for a business to understand. So we've looked at the potential data protection issues: there may be a breach of data protection laws, there may be infringement of copyright. So, as a user of that tool, you're going to want assurance both on the inputs, the training and the use of the tool, but also on your outputs as well.

That, at the moment, around generative AI contracts, is a bit in a state of flux, but it certainly needs to be looked at. You also need to look at it from the other perspective, whereby, in using the tool, you may well be inputting your confidential information, business information and personal data. And you need to understand and get assurance from the supplier of the tool about what they're going to be doing with that.

So you need assurances around confidentiality, so your confidential information isn't going to reappear in an output that somebody else might use, and that the personal data is going to be totally anonymized as well. So you need both the practical, technical reassurance and the legal reassurance in the contract that you put in place.

And that becomes particularly important to do with the learning, because most suppliers of AI tools will want the tool to continue to learn from the information you provide, particularly if you're contracting for what's often described as a community model. So other users will use it and the tool will learn from them and you'll benefit from that learning and vice versa. But then you need to think about those issues I've mentioned around confidentiality and anonymizing personal data.

But also, are you going to allow the supplier of the tool to use your data for the purposes of further learning, and perhaps for teaching yet another tool? So that's certainly something you should think about when choosing. And then other issues you need to think about are the cybersecurity side of things, issues around open source software as well, and issues around how, in fact, the tool can be used.

So the purpose of use becomes very important, particularly when we look at a regulatory framework. So there are lots of issues there. And then liability. And this, again, depends on whether it's off the shelf or a bespoke tool; you need to be thinking about understanding all of the risks.

But then there may be some liability that you're prepared to take, but you've gone in understanding what that is. So I think it's a whole new way of thinking that needs to be applied, as I say, both to your due diligence, getting the right people involved in that, and then the right thought processes and drafting around your contracts.

Can we move on to Jonathan? Yeah, I agree with most of what Cerys has said, I think. But one of the conflicts we have is that, well, first of all, a lot of the AI applications that our clients are buying are what I might call stealth AI, in that they're not sold as AI offerings per se, but they're sold as a tool to do things better that might incorporate AI. And I think they can be particularly problematic in terms of things like transparency and understanding what's happening, because the vendor or the person who's selling to your client might not actually know the full trail of what's happening. And there is this tension, I think, that we see with AI, with what you might call the black box, where developers and those promoting AI often try and hold on to the secret sauce, because obviously they're trying to commercialize that. And a lot of these businesses are pretty early stage and they want to attract funding on the basis of that secret sauce.

Whereas at the same time, your client wants more openness so that it can meet its transparency obligations and its other compliance obligations down the line. And I think this is just a tension that clients have to understand. Obviously, it's something that they have to try and resolve commercially, but that can be challenging when already some of the AI providers are quasi-monopolies.

And we've already got antitrust investigations, competition law investigations, into some of these providers. But you do need assistance from the vendor, or their vendor, in terms of doing things like a data protection impact assessment, which is going to be critical to managing the data protection and some of the data security risks. That's fascinating.

The tension is palpable, especially when you see the European Commission is already investigating some of these investments. Guy, can I ask you to step in? Yes, sure.

So in Japan, in terms of bargaining power between customers and vendors, customers are usually in the inferior position. So what they can do is decide whether to just accept the terms and conditions as-is or not. And so what we usually advise clients is to study the terms and conditions carefully and see how the vendor will treat input and output, especially input, and whether they will use it for further training or not. And also, there's no duty of transparency or disclosure under the Japanese system, so we cannot really ask the vendor to disclose what they are using to train the AI, for example; they are just providing the tool, a bare tool, rather than the AI. And the situation is the same in Japan.

However, while usually the vendor actually doesn't take any responsibility, since some of the global companies have started to provide some sort of warranty in relation to compliance with laws, such as non-infringement of copyright, Japanese companies are also starting to accept that there could be such a duty for the provider to take responsibility. And so, if that practice actually becomes more common, the tendency may change in Japan.

And also, in terms of regulatory restriction, I would like to give one example from Japan, because this may be interesting. In Japan, legal tech is a field currently under discussion for the business application of generative AI. And within legal tech there are various types of services, but a particular concern is the type where, for example, the tool learns from past contracts and other training datasets and provides functionality such as pointing out clauses with legal risks or offering draft contracts by considering the circumstances and background leading to individual cases.

However, in the guidance released by the Ministry of Justice in August 2023, it is noted that the review and drafting of such contracts under the Japanese legal system falls under the provision of legal assessment, and if performed by non-lawyers for a fee there is a risk of violation of the Lawyers Act. Therefore, while it is acceptable for lawyers to use these tools, there is a potential violation of the Lawyers Act if non-lawyers, for example within a company, utilize them. And due to such risks, companies find themselves in a situation where they're abandoning the provision of advanced services in the legal tech field that utilize generative AI. The only service they may provide is to show a template, which you may easily find in textbooks. There's an ongoing exploration of the balance between existing regulations, in this case protection of the legal profession, and the use of AI, and that's the dilemma we are facing now in Japan. Great.

Thank you, Guy. If you don't mind, Matthew, I'm going to jump in with a follow-up question to that because we've heard a lot there about some of the kind of risks, the things that companies need to be worried about and some steps they themselves can take to address those. Obviously, one of those is going to be governance and corporate governance.

So, Matt, I might ask you: are there any sort of common themes that we're seeing emerging from all the multiple codes, guidance documents, etc. around the world, that companies can internalise in their own policies?

Yeah, definitely. I think there's great international consensus on what those are. And if you look at the EU ethics guidelines for trustworthy AI, or GPAI, or the OECD, they're all saying exactly the same thing.

Your AI should be legal, it should be ethical, and that brings in issues of bias and the like. It should be robust and secure, and you need to deploy it in ways which ensure transparency, accountability and oversight. And in order to achieve governance, you're going to have to do impact assessments and you need to mitigate risks.

And this is all about ensuring appropriate use of this technology. And the simple truth is, full automation simply may not be appropriate in all circumstances, and you really may need to have the right level of human oversight or direct involvement. In terms of implementing governance, the gold standard at the moment is the NIST AI Risk Management Framework.

And I would say, to what may be a largely legal audience, that legal and regulatory compliance is just one of over 60 implementation points. So there really is quite a task here. But they themselves say that it's supposed to be a flexible implementation plan that evolves. And what you have to do is start with a project, apply the framework, get the learnings and roll it out more widely.
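[Editor's aside: the NIST AI Risk Management Framework organises its guidance under four core functions: Govern, Map, Measure and Manage. As a purely illustrative sketch of the "start with a project, roll it out more widely" advice above, here is one hypothetical way a team might track implementation points internally; the entries, field names and statuses are assumptions for illustration, not NIST's own schema.]

```python
# Illustrative only: a hypothetical tracker for NIST AI RMF
# implementation points. The four core functions are NIST's;
# everything else here is an assumed example.
from dataclasses import dataclass

@dataclass
class RmfItem:
    function: str  # "Govern", "Map", "Measure" or "Manage"
    point: str     # the implementation point being tracked
    owner: str     # accountable team
    status: str    # e.g. "not started", "in progress", "done"

register = [
    RmfItem("Govern", "Legal and regulatory compliance review", "Legal", "in progress"),
    RmfItem("Map", "Document intended uses and users of the system", "Product", "not started"),
    RmfItem("Measure", "Monitor model behaviour in edge cases", "Engineering", "not started"),
    RmfItem("Manage", "Define an incident escalation path", "Risk", "not started"),
]

for item in register:
    print(f"[{item.function}] {item.point} -> {item.owner}: {item.status}")
```

A register like this can begin with a single pilot system and grow as the framework is rolled out more widely, which reflects the flexible, evolving implementation plan described above.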

Excellent. Yeah, that's really helpful. Thank you. Matthew, I think you have another question you'd like to put to the wider panel.

Absolutely. So, we've heard a lot about the harms and the problems. And my question is about the consequences of getting it wrong in your approach to AI, now and into the future.

And I'm thinking about private litigation; regulatory scrutiny and fines, we mentioned that with the AI Act; reputational harm; and also keeping oversight on what your AI systems are doing. So, Cerys, I'm going to ask you to jump into that one. Yeah, certainly. So, yeah, we've identified that there are all these issues around infringing and breaching privacy rights and copyright. So your risk there is that, as a developer of a tool, as we've seen in Italy, you could be stopped from actually bringing that tool to market or, even worse, you've brought it to market and that has to stop.

We've got issues around liability. I think this is one that is going to grow. So when AI first started reaching the market just in the same way as other technologies, such as connected cars, we started seeing the whole issue around who's going to be liable, what are the consequences going to be, where does insurance come into that?

And I feel that's been a bit of an issue that has moved to the side whilst we've been dealing with more immediate issues in the spotlight. But that is very much something that people are going to have to assess and think about, what the consequences could be, because there are consequences for employment. There's bias. There are consequences as well for health. The life sciences field is a really key area for AI to be used: there are so many efficiencies, so many gains in accuracy, that can come from those tools. But we also need to think about the purpose of use and the consequences of that. Jonathan mentioned earlier doing your impact assessment; that will be really key to look at what happens if things go wrong. But we also need to come back to exactly what Matt was saying earlier about ethical use and governance, and I think increasingly we need to move to a position where AI is very much a fundamental part of society, and therefore we need to make sure that we're using it in the right way.

We're thinking about its impact, thinking about the issues of, if it goes wrong, how do we behave? Because that is how companies are going to be judged, which very much goes to the issue of reputation, Matthew, that you raised, and to having the right structures in place to deal with that.

And I find it fascinating, actually, working in the life sciences field, that ethical policies and governance are very much part of their business. It's something that they've put in place for many years, and now we're seeing that all businesses need to be thinking about these measures in order to address AI risks. Excellent.

Matt, could you round us out a little bit on that? Sure. So, I mean, Cerys said at the top of the whole panel part that pre-existing laws apply, right? And this is so fundamental to the risks and harms that could occur to a company. So: product liability, professional negligence and breach of contract.

We have new factual scenarios raised by AI, but all the laws are in place, okay? So we're debating whether we're in a permissive or a restrictive regulatory environment, but ultimately, if you do wrong, the tools are there, after the event at least, to put things right. Now, those don't require regulation; but regulation, guidance, best practice, all of that acts at least as soft law that influences whether you are liable, in terms of what best practice would be for professional liability or product liability or the like. Then we've got regulatory scrutiny and fines. So no one wants an investigation. It's incredibly costly. There are existing regulators who will cause you problems if you do wrong.

So we're looking at the SRA for the legal community, the MHRA for life sciences, and obviously cross-cutting regulators like the Information Commissioner and, in due course, AI regulation per se. We've already flagged up the scale of the fines that might be available under some of those. You mentioned reputation, Matthew. That's been a huge theme, at least since the trustworthy AI documents out of the EU, because it's not only about trust in individual companies, it's about trust in AI itself, which is going to be delicate and needs to be protected.

You've talked about the misuse of data. We've talked about fake news. And then there's the other big issue: Cerys has talked about impact assessments, and so has Jonathan.

What the issue of impact assessments, and the AI Risk Management Framework from NIST, emphasize is that this is an ongoing monitoring situation, because so much of AI will have unintended consequences and will do strange things in edge cases. And then finally, don't leave it too late is a key issue, because to comply with some regulatory requirements, to comply with ethical requirements, you need technical solutions that don't necessarily exist yet. And I would just raise the right to be forgotten.

How do you get private information out of a model without hundreds of millions in retraining? These are the sorts of technical legal questions that need to be addressed now. That is really insightful, Matt, thank you.

I think getting it right in advance is something that we need to learn from previous large regulatory changes in other areas, where maybe that didn't happen at the time. And I think, on that topic, Jonathan, I'm going to ask you, for kind of our final question in this Q&A portion, to talk about some perspectives looking into the future, and in particular: what style of regulation do we think may emerge when the dust settles? Is that going to be something that looks like perhaps a Brussels gold standard emerging with the AI Act, as it did with GDPR? What were the issues there? And if not, what else might we see: some kind of jurisdiction shopping or competition between nation states in attracting AI investment?

Yeah, I think I'd just emphasise what Matt and Cerys have already said about existing regulation. I think the future, at least the short-term future, is the past. I think it's reusing GDPR, particularly in Europe.

We've had, I'm going to attempt it, the Autoriteit Persoonsgegevens, the Dutch regulator, say yesterday that not doing a DPIA is of itself a wrong. We've had the Garante, as I've said, the Italian data protection authority, which has been very active. They've looked at fairness already in the food delivery cases, you know: is racial discrimination embedded into AI tools?

So we've already had regulators use the past, if you like, to regulate the future. It's important to remember that transparency and fairness are core principles of GDPR. They're used very heavily by regulators.

They're the little and large, the Abbott and Costello, if you like, of GDPR. And GDPR fines are not insubstantial: as we sit here, about 4.28 billion euros across some 2,545 fines.

I know Matt and his team did some work last year looking at how often transparency and fairness feature. Certainly from my perspective, I'd guess that 70 or 80 percent of those fines feature transparency and fairness, but also 70, 80 percent, maybe more, feature extraterritoriality. So commonly, EU regulators fining U.S. corporations for GDPR matters, sometimes when they're concerned about monopolistic practices and it's perceived that antitrust competition law is slow to respond. So in terms of sanctions, then, we're definitely seeing fines. We're definitely seeing suspensions, you know, from Italy, from Ireland over Bard, for example, as well.

So I think, in some respects, that's a mirror of what the future looks like: more activity from existing regulators. Of course, the EU AI Act will come on stream, but that might be two, two and a half, three years away, and there are all sorts of complicated provisions on what's likely to come in when. We will see different countries mirroring that approach. But we also know from FTC statements that they're looking at existing laws as well to cover the gap in the short term.

Yes, I think that definitely is chiming with a lot of what we have heard so far. Guy, for a last word, can I ask you to comment on how what Jonathan has just said fits in with where Japan is placing itself on the international stage? Well, I actually agree with Jonathan's observation, and that's going to apply to Japan. For example, Japanese companies operate globally, and compliance with European AI regulation will become very significant. As Jonathan has mentioned, compliance with GDPR was the first step Japanese companies would take when deciding a strategy to comply with regulation outside of Japan, and I think the same thing may happen for the AI regulations. But at the same time there will be a gap, and that gap will be filled by the domestic laws of Japan as well.

Great, thank you. Well, I'd like to thank all the panelists; I think that was just absolutely fascinating, and really broad yet specific, a difficult balance to achieve, so I really appreciate it. Matthew, just before we finish up, can I ask you what you think your main takeaways from that wonderful discussion were?

I have to agree that the theme was fantastic: permissive versus restrictive. We first heard about some of the big problems with the training of AI models, and that would be copyright. And we heard from Guy just then about Japan and how it could possibly be more permissive, and that means more people might be training there because of their copyright laws. But then there's a wide variety, so we were reminded that copyright is national. So you look at the United States and there's fair use.

And then we have lawsuits in the US, just to note the pending New York Times lawsuit. And then in the EU we'll have specific exceptions like the text and data mining exception in the Copyright Directive, so lots to watch on how to comply with that, and on copyright holders asking for their content not to be used. The second big takeaway is this tension I heard between the providers of the AI systems and the clients. So we heard from Cerys about being careful about what kind of data you provide, and whether or not the provider is providing confidentiality and cybersecurity.

And then we're also concerned about protecting what was described as the secret sauce and the black box. So there's this tension between the transparency obligations that are built into the AI Act, but also getting investment for your fancy new AI system. Who's going to want to invest in it if you reveal what your source code is, or what's in that black box?

The third big takeaway would be risks, and I'm just thinking more and more about how to avoid them. And the point about impact assessments came up multiple times.

And I think that's really important if you think about the ethical use of AI and liability. And we were reminded that there are real consequences to this. So we have existing laws in the UK on product and professional liability, and the need to protect your reputation. So if you cross that line, be careful. There are existing laws, and there will be new laws coming online with the AI Act.

So I think that we are in for a wild ride. We were told by Jonathan to look to the past and look at GDPR enforcement; just yesterday, the Italians went after OpenAI.

So we are off to a great start. Yeah, I think that last point especially is so true. AI feels so new and futuristic. It feels like we've never seen anything like it before, so we don't know what to do. But in fact the point was made by all our panel that AI is not growing in a vacuum.

It's growing in the world that we live in, with the legal structures that have been around for many, many years. And so we as lawyers probably do have the tools to be able to give good, solid advice on it. I think that's really helpful to remember going forward. So with that, I would like to again thank our absolutely terrific panel, Cerys, Matt, Guy and Jonathan, for that wonderful discussion. Thank you, Matthew, for hosting this event with me.

It's been an absolute joy. And thank you to all our attendees, our AI Insiders, for joining us for this webinar, Permissive versus Restrictive AI Regulation: Reviewing Approaches Around the Globe. We hope you have enjoyed it.

Please keep a lookout for an invite to our next webinar, which will be at the end of February, entitled The Future of Law: Grappling with Security and Privacy in Generative AI, which should definitely build on some of the themes we saw emerging in today's discussion. So with that, I will say thank you very much once more, and goodbye.