Well, good afternoon everybody, and thank you for tuning in to this LexisNexis AI Insider webinar: the future of law, grappling with ethics, security and privacy in generative AI. I'd like to start by introducing myself. My name is Emma Dickin, I head up the in-house and public sector practical guidance teams at LexisNexis UK, and I have the pleasure of chairing today's webinar for you. But far more importantly than me, let me introduce you to our brilliant panel members.

First of all, Jeff Jenkins. Jeff is the Chief Information Security Officer for LexisNexis Legal & Professional, and has over 20 years' experience leading cyber security teams across a number of different industries, including financial services, media, travel and data analytics.

I'd also like to introduce Jessica Clay. Jessica is a partner in the regulatory team at Kingsley Napley, and she advises on a range of SRA-related matters, including compliance with ethical standards and advising on law firm risk management, with a recent focus on gen AI and its safe and effective use within firms, which is very handy for today's session. As well as being in private practice, Jessica has also worked in-house, first at the Legal Services Board and then at the SRA, working in the general counsel teams. She's also deputy editor of Cordery on Legal Services and sits on the International Bar Association's regulation of lawyers committee.

And last but by no means least, I'd like to introduce Alison Woodis. Alison is head of the risk and compliance group for Lexis+ UK. She is a risk and compliance solicitor specialising in data protection and privacy compliance; she joined LexisNexis after nearly 20 years in private practice and currently leads a team of compliance lawyers and experts providing risk and compliance guidance and tools for law firms, privacy professionals and in-house lawyers.

So that's the introductions over and done with, and without further ado let's get going into the session itself. Now, before we get into the meaty questions of the day that you'll have seen set out in the invite, I'd like each of our panel members to spend a couple of minutes doing a quick bit of scene setting around their respective areas of expertise, to ground the discussions we're going to have today. But if it's okay with the three of you, I'll start us off by sharing a few high-level findings from the hot-off-the-press LexisNexis report we've just published, which discusses the results of the survey we conducted last month, in January, with over 1,200 legal professionals around their use of gen AI.

So, just a couple of high-level findings to whet your appetite. Just over a quarter, 26%, of the 1,200 respondents to this survey told us they're already using gen AI tools at least once a month in their day-to-day work, which is a sharp increase from the 11% who answered that way last July, when we previously ran the survey. 35% of all respondents said they definitely have plans to use gen AI tools in the future, which is a leap up from 26% last July. The other interesting stat: we also asked people whether they had no plans to implement gen AI. Last July, 61% of respondents said they didn't have any plans to do so; that number has dropped to 39% in six months. So we can see that interest in taking advantage of AI tools really is growing. We also asked our respondents what sort of tasks they want gen AI to help them with as and when they start using it.
The top four answers were: assisting with drafting legal documents, researching legal matters, drafting emails and other communication-based tasks, and helping them to undertake document analysis. The survey also revealed that while there's a lot of excitement around gen AI in the legal market right now, and the fact that you're all joining this webinar is testament to that, it's fair to say this excitement is also accompanied by some concerns which people are wanting to have allayed, particularly around ethical implications, security matters and trust in the accuracy of the information that's generated, and we'll be touching on all of those points today, you'll be pleased to hear. Before I draw to a close, if you want to find out more about the survey and the report we've published, take a look at the Lexis+ AI Insider website and you'll find it on there; it's called 'Lawyers cross into the new era of generative AI'. Enough from me, on to our panel members, that's who you want to hear from. So Jeff, can I pass over to you for a few scene-setting moments?

Yeah, thank you Emma. There are many security challenges; however, the underlying technology used for generative AI is not actually that significantly different. It's the way that we use the technology that differs, and it's how it works in the underlying systems that creates this variation in what we can do now compared to what we used to be able to do. So understanding how it's going to be used, what data is involved, where that data is going, and then how that data can be used in the future is the bit that really interests me. I think you'll find more interesting elements outside of security, on the privacy and ethical side, so I'll hand over to Alison.

Sure, thanks a lot. From a data protection perspective, the two main questions are always: what's the law, and what does the regulator expect me to do? The first thing to say is that there is no specific UK law governing AI from a data protection perspective yet, so you mainly need to think about the UK GDPR. In terms of the regulator, early last year the ICO (that's the regulator) published a toolkit called 'Generative AI and data protection risk'. Unfortunately, the title of that toolkit is about the only simple thing about it: if you download it as a PDF, which you can, it's 140 pages long, and it's pretty techy, not just from a data protection perspective but also from an IT and a project management perspective. Fortunately, when ChatGPT burst onto the scene and into the public consciousness, the ICO followed this up with a blog: eight questions that developers and users need to ask about generative AI, and I'm hoping we're going to find a way of dropping a link to that blog into the chat. The blog post is much easier to understand than the toolkit, although it does assume a base level of knowledge about the GDPR. The eight things are simple stuff like: what's your lawful basis for processing personal data using AI, and how will you ensure you're transparent with people about what you're up to? So take a look at that; you've got to start somewhere with this, and it is a good starting point, and as I say, hopefully we'll find a way of dropping a link into the chat. I'll hand over to Jessica.

Thanks Alison. Without stating the obvious, I think we're in a very fast-moving environment at the moment with gen AI.
In my view, this is going to require an adaptive and progressive approach from leadership across all sectors, and I think how law firms start to grapple with the challenges will be no exception to that; the same will apply to lawyers working within those firms, or in-house, or indeed practising on their own. From a regulatory perspective, I think a good starting point is what the government is doing. They're taking very much a principles-based approach, and that's set out in the UK AI white paper, and there's a clear expectation in there that sector regulators really need to take the initiative and use their existing frameworks to work out how best to regulate and manage the risks around gen AI in their specific sectors, and the legal services sector is a good example of that. It's probably helpful at this stage just to set out what the principles in the white paper are, the ones that regulators should have in mind in terms of the safe and innovative use of AI. The first principle is safety, security and robustness; the second is transparency and explainability; the third is fairness; the fourth is accountability and governance; and finally, the fifth principle is contestability and redress. I'll talk a bit later about what some of the specific legal services regulators are doing and the regulatory frameworks within which they have to operate, and how we use gen AI safely and innovatively is absolutely captured within those frameworks.

That's brilliant, thanks everybody for that quick bit of scene setting. So I think we're good to move on to the first key discussion point of the day, and this relates to the potential risks, the ethical implications and the regulatory complexities that are associated with generative AI. Jessica, just to give you a heads up, I'm going to come to you first on this one. You've mentioned the regulator, and gen AI is rightly on the SRA's radar; we know it released a special risk outlook report quite recently about the use of gen AI in the legal sector. This report mentioned a number of opportunities that could come out of AI, but it also focused on some risks and challenges that it could pose. So could you pick out what you think are the key messages from this risk outlook, please, and then, thinking about an ethics angle, spend a few moments talking about what you think are the potential ethical concerns or implications that those of us working in the legal sector should be looking out for?

Yeah, absolutely, Emma. So, as you say, the SRA published a risk outlook report about AI more broadly, in November of last year. I'd say that the report recognises that this is all quite new and everything is still a bit unknown, and when thinking about how to use gen AI, they're basically saying that lawyers need to be seen to be pushing the boundaries, providing more innovative services and providing different types of models for the services they deliver to clients, but there's an absolute need to balance that with the need to comply with existing regulatory obligations, including our ethical principles and code of conduct provisions, which I'll talk about in a bit. So it's really trying to find that balance between being innovative and pushing the boundaries, but not forgetting the regulatory obligations that we have as regulated professionals. There's also another piece of guidance that the Law Society issued, again in November 2023, called 'AI: the essentials', which is actually more focused on generative AI, and
that very helpfully reminds us that as solicitors of England and Wales, or if we're working within, say, an SRA-regulated law firm, we have to understand our regulatory and professional responsibilities, and that includes when we're providing services to clients; this is absolutely no different in the context of using generative AI. That could be, for example, Emma, either for internal process purposes where we're trying to drive efficiencies, or using gen AI in terms of providing advice to clients and the public. So I think it's probably helpful at this juncture to touch on some of the particular regulatory obligations, if we've got time.

Sure.

Looking at the SRA principles, these are the ethical tenets of behaviour expected of us as solicitors, and indeed anybody working within an SRA-regulated law firm needs to adhere to those principles. There are seven principles in total, but the key ones in relation to using gen AI safely and effectively would be principle 2, which is about upholding public trust and confidence in the profession and in the provision of legal services; principle 5, which is acting with integrity; and finally principle 7, acting in the best interests of each client. They're the main provisions within the principles themselves, but there are also some key provisions within the codes of conduct: we have the code of conduct for individuals (that's solicitors, RELs and RFLs) and also a code of conduct for firms. Specifically, I would look to paragraph 3 of the code for individuals, which is ultimately about providing services competently to clients. It's about your general competency to carry out your role and keeping your specific skills up to date, and that also extends to the people that you manage; there are also provisions in there about effective supervision and accountability. So I'd say they're the key provisions within the individual code. Within the code for firms there are also provisions around competency, but I think more importantly, paragraph 2 within the code for firms talks about having effective governance structures and arrangements, and systems and controls in place, which enable you to comply with your regulatory obligations. It's also important to recognise that there's a need to be accountable for work carried out by third parties, which would include contractors, which is very relevant here, and also paragraph 2.5, about identifying, monitoring and managing all material risks to your business, which is absolutely key here in terms of risk management around the use of gen AI. And finally, let's not forget about the SRA enforcement strategy, which sits within that framework of rules and regulations; the key messaging there is around being accountable, being able to justify the decisions you take, and exercising sound judgment when assessing and making decisions.

That's brilliant, thank you. Is there anything in particular you'd like to call out if anyone on the call works in a global law firm? Is there anything different to worry about?

Yeah, there's this notion of regulatory divergence, and this is mentioned in the risk outlook report. If you're working within a global firm, I think it's very important to have in mind what different jurisdictions are doing and how their governments might be approaching or
looking to regulate AI. As I've said, the UK government's approach is very light touch, it's very much principles-based, and it's not looking to impose different rules on different types of systems; it's allowing sector-specific regulators to take the initiative and work out what works best for them. But that's not the case in all jurisdictions. Touching on just a couple: China, Brazil and the EU, as we know, have gone more down the legislation route, and others are much more principles-based, as is the UK approach. So if you're working in a firm that operates globally, just think about and be alert to these possible differences in approach, understand what those differences might entail, and make sure that you give clear information to those working within offices in different jurisdictions, but also to clients, and make sure that information is kept up to date, because as I've said, this is really fast moving at the moment.

Brilliant, thank you. And is there anything from the ethical side? Obviously there were some of the regulatory tensions you've just highlighted, but are there a couple of points you'd like to mention on the ethics side?

Yeah, I'll keep this brief as I'm conscious of time, but there are obviously some ethical and bias concerns that could be present. There's the possibility of large language models reflecting, or potentially amplifying, societal biases present in their training data and in how those models learn, and that could potentially result in unfair or discriminatory content being generated. It's not to say that will always happen, but there is always that risk. There's also this possibility of scaling up: when using gen AI, much more content can be produced much more quickly than by, say, one human working without gen AI, in terms of how quickly and how big that potentially harmful content could get. An important one for me, which I mention quite often, is the possibility for gen AI to produce misleading, inaccurate or completely false output which seems so convincingly accurate to us. Depending on the context, that could obviously have quite major ramifications, particularly if we're thinking about court proceedings, where liberty or someone's livelihood could be at stake. So there is very much a need for human verification and fact-checking, and we remain accountable for that under our regulations. I think it's important just to flag that we have already seen some cases coming out of the US where gen AI has been used and has generated false citations for case law and so on, so that's just something to have in mind, I would say.

That's great, thanks ever so much, Jessica. I suppose the only other one to throw into the mix, which we don't need to go into conversation about, is that obviously there's reputational risk as well that we need to be aware of. So that's the regulators' take on things. Alison, thinking about this from a data protection perspective, is there anything in particular you'd like to call out around potential implications?

Well, I could hijack this entire session to talk about data protection risks, but I've been given strict instructions to stick to three, so that's what I'm going to do. There are some obvious risks and some less obvious risks. Most people are going to be concerned about inadvertently breaching client confidentiality or
inadvertently releasing personal data. This could happen where perhaps a fee earner or a paralegal is using open-source AI just to give them a little help on a client matter: 'improve the draft of this witness statement or letter for me', or 'here's a set of facts, what's the law on X?'. And you can't assume that just removing the client's name, or even the client's name and address, is necessarily going to help you. Information that your staff put into an AI tool will be personal data if it can be combined or augmented or cross-referred to other data to identify a living person, and in the context of open-source AI, where data is scraped from the entire world wide web and processed by an AI tool that can make connections our brains can't even imagine, the risk of identifying somebody is much higher. So that's my first one, and I know that keeps people awake at night.

The second one is perhaps less obvious, and this is mission creep into the world of AI. The data protection concept here is purpose limitation. An example, similar to the one the ICO gives: at the time you collected personal data and identified your lawful ground for processing that data, it wasn't your intention to do the processing using AI, but now you want to use the personal data to train an AI system so that you can do the underlying processing more efficiently. You have to think about whether that's a new purpose, and if yes, you can only press ahead if the GDPR allows it. I haven't got time to go into that, but all I would say is, if you're thinking about using AI in relation to personal data and AI wasn't in your field of view at the outset, you need to look at Article 6(4) of the GDPR. And for the reasons I've already covered, you can't just assume that 'oh well, we'll just anonymise the data, or try to anonymise the data'; you can't just make that assumption.

My final risk is dealing with data subject requests, that is, requests from individuals who want to access, rectify or delete their data. There are strict time limits, and they're already problematic for law firms, in-house teams and data protection officers. The main challenges here are going to be: can you identify what personal data has been input into an AI tool, especially an open-source AI tool, and how on earth can you extract, rectify or delete that data? So that's my top three.

That's brilliant. Yeah, there's not going to be much sleeping after this webinar just thinking about those three, let alone the rest of them.

Sorry about that, I could go on.

You're the fun one on this panel, I can tell. So, over to Jeff to lighten the mood a little bit, she says hopefully. If you were a Chief Information Officer in a firm that wanted to deploy gen AI, or was already using gen AI, what do you think would be the biggest risk, from a tech perspective, that would keep you, with your tech hat on, awake at night?

Yeah, well, it's always good when you go to the security person to try and lighten the mood; that's not normally something that happens very often. So from a CIO perspective, you have a choice of either embracing gen AI or not embracing it, and I think there's a real challenge for CIOs, who would basically say we have to embrace this. It is going to be a transformative technology; it is going to bring good, but it's also going to bring some bad with it as well, and that's just the standard fact of any new technology, it's what it does. From a CIO perspective, I'm looking at a multifaceted role, and I need to really consider how
it's going to be used to enhance our product offerings, how it can improve our internal processes, and how our competitors will be using it, and therefore ensuring that I don't get behind the curve against my competition. The big thing from a CIO perspective is how do we keep our data safe, so that's going to be the entirety of my thought process: how am I going to make sure that, whatever technology I use, my data stays secure to the same level that it is at the moment. I won't go into the details of how we would do that, but that's the kind of thing that would keep me awake from a CIO perspective.

Obviously, as CISO, I have to look at it from the security side, and the angle on that is somewhat different. My role requires me to ensure that I keep our customer-facing products secure, our internal environment secure, and the intellectual property that gives us a competitive advantage secure as well, so again that comes down to our data. But the actual thing that would keep me awake at night is how, and I'll simplify it and call them hackers, would use gen AI to increase their productivity, their effectiveness and their timeliness, because they don't really have to worry about ethics and privacy and all the rest of it; they can use it with impunity. So that's the bit that really concerns me: their use of it will be much faster than our use of it in terms of protecting our world.

Actually, I take it back, I think you might have trumped Alison after all that, well done. Anyway, let's not talk about that one any more, it's too scary. Let's go on to the second question point that we wanted to cover off today, which is all based around sharing with our audience some insights into how legal professionals can protect their firms or businesses whilst reaping the benefits of this technological revolution. So Alison, with your DP hat on, what insights can you give into protecting your business?

Okey-dokey. The starting point with data protection is always: find out what's going on in your business. Who's processing personal data using generative AI, and what AI tools are they using? Probably the best way to find this out is just to ask people, but don't just go to them and say 'what AI are you using?'. People might not realise that some of the open-source tools they are using involve AI or generative AI; maybe they're using publicly available tools to check grammar or help them write documents, and they may not think of those as generative AI, or even AI. So it's possibly better to provide some sort of questionnaire, preferably anonymous: are you using these things, or anything else? You could also consider some sort of technological monitoring, so tracking the external websites people log on to, but I haven't got time to go into the ethical and employment law implications of that, and it's not my bag from a technical perspective either. So that's the first thing: find out what's going on.

The second thing: educate, educate, educate. Your staff need to understand what generative AI is and how it works; it probably doesn't work the way people think it does, and I'm not going into that, but if you can get one message across, just one message, I would say this, from a simplistic perspective: your staff need to think of an open-source generative AI tool as being like the global village gossip. It will absorb everything they tell it, and it will recycle and reuse it as it sees fit in its conversation with the next person that comes along.
Once they've parted with the information to the AI tool, they can't take it back again. And, like every good gossip, if it doesn't have the information, the AI tool might just make something up, or it might gild the lily, and when it does, as Jessica's already said, it can be very convincing. I've actually seen that with open-source AI, I've been on the receiving end of it, and it really can be very convincing; and then when you poke it and say 'are you sure?', it just doubles down on the information that it's serving up to you.

Number three is a quickie, but it's really important: whenever you're processing personal data using AI, do a data protection impact assessment. It's a very clear expectation of the ICO, and the great thing about doing a DPIA is that it will flush out all of the other things that you need to think about and do from a data protection perspective. So I'm handing over to whoever is going next.

Brilliant. Well, that's the data protection regime covered off nicely in a few key points, but it's obviously not just that that we need to worry about if we're running a firm or business; we've got the regulatory side, that additional layer, that Jessica's already mentioned. So can I come back to you, Jessica, to continue down what law firms or lawyers should be doing from a professional regulation perspective with regard to ensuring their firm or their organisation is protected?

Of course. I think this follows on really nicely from what Alison's just been describing. So yes, I've talked about the regulatory framework and the obligations, but I'd like to think of this with a broader approach and think about how you manage the risk, and I say manage the risk in relation to the use of gen AI and its safe use as you would assess any other risk posed to your firm, or manage this risk as you would as an individual solicitor thinking about whether or not to take particular action in your day-to-day practice. So in my mind it's very much about compliance being part of the firm's culture; the use of gen AI, as I say, is no different, and it needs to be embedded within your general approach to law firm risk management.

Thinking about this from a practical perspective, some of the key considerations, as Alison has said, come down to user knowledge and user expertise. So absolutely make sure those using gen AI within your firm know what they're using, what its capabilities are, how to use it and what its main risks are, and that will be different depending on the product. This also very much links to what Alison said about education and training, which are absolutely key. So when seeking to promote the safe and effective use of gen AI in law firms, I would say always consider the following things, and all of these are prefaced by a keyword, which is usually what makes them successful. In terms of systems and processes, these need to be robust; this could be your IT systems and processes, ones relating to AI, confidentiality, data governance and so on, and also review the data management standards of any third-party providers or suppliers in that respect as well. It's about not just having policies and policy statements around the use of AI, but also making sure those policies are effective, so they need to be living and breathing documents, not gathering dust on a shelf. We know this is fast moving, so those documents need to be kept up to date, and they also need
to clearly set out what the firm's expectations are around the use of gen AI in particular. Then delivery of training, as Alison's touched upon: this is about effective delivery of training, and about having engagement with that training, so people are not just turning up and it's not a box-ticking exercise. Think about ways that you can really grab their attention, so think about case studies, think about horror stories; I know it's not great, but often the best way to make something stick is to talk about real-life examples of how things can go wrong, how you can learn from them, and how to make sure those sorts of things are not happening to you. I'd also mention supervision, so having really effective, good supervision arrangements and models in place to make people feel that they're supported, and that they're able to approach managers and speak up if they've got any concerns around how they're using these products, or any uncertainty generally, so there is somebody they can go to to talk that through with. And finally, a practical consideration I think is key for firms trying to roll out these products is having a diverse team, a multidisciplinary team, in place to develop your AI offerings and work out what your clients really want and what they are expecting from you in terms of the services you're providing; by having that multidisciplinary team you're promoting, I would say, cognitive diversity around the offering that you can give. So those are just some practical tips around how you approach risk management generally, and I don't think that needs to be any different for the use of gen AI.

Brilliant. And I guess the same rules always apply with anything to do with the SRA: it's just making sure you can evidence whatever you've done, should anything happen.

Absolutely, and that goes back to the enforcement strategy, accountability and decision making.

Brilliant, thank you. So Jeff, do you have any top tips about how gen AI tools can be sourced, or how they can be deployed, that tie in with the points that Alison and Jessica have just raised?

Yeah, absolutely. So the first thing, and this is a bit of reinforcing what's already been said, is that you need to have a clear policy: what are you actually going to do? Are you going to embrace generative AI, or are you going to limit its business use? If you're going to embrace it, then you have to train your employees in how you're going to use it effectively and also what not to do. And if you don't want it used, then how much are you going to invest in preventing it, how much are you going to invest in tracking those who are trying to circumvent the system and still want to use it, because there will be people doing that, and then how are you going to report it and what are you actually going to do at the back end of that?

It's really important to remember that gen AI is a human efficiency tool, not a human replacement tool, so there is a requirement for us as individuals to validate and check the output that is generated from gen AI; it's doing a really good job most of the time, but it still needs validation and review. And so, as a legal professional, you need to know whether you're using a private instance, where the data you upload is only available to you and your business, or an open-source model, where once you put that data in there, it's there for everybody to use, as Alison said earlier, and that's a really important
thing. You know, your role might not be in technology, but you have to have an understanding, a clear comprehension, of how you are using this technology, because if you don't, that's a very dangerous thing: to have this power within your hands but no comprehension as to whether you're putting data in that's going out and being used by everybody else in the world, or whether the data you put in is still treated as though it's within your company's four walls and still protected. So if you're going to use Bard, ChatGPT, any of those open-source gen AI pieces, then you're going to have to use them in a significantly different way to something like Lexis+ AI, because one is a closed-loop system and one is very much open. That's not to say you shouldn't use Bard and ChatGPT; you've just got to be very conscious about how you use them and what you're going to do. So again, think about what you're going to be using, where you're going to be using it and how you're going to be using it. The reality is that open-source and closed-loop systems are actually pretty much the same technology; if you look at the way they're marketed, they're going to be marketed in very similar ways, as the ability to do generative AI and large language models, so you truly have to get stuck in and understand what you're asking. Then just think about what you're actually using it for, how you're asking those questions, how they're handling your data, how they're combating bias, how they're reducing hallucinations, and, from a legal perspective I think this is really important, how do they provide citations?

Brilliant, some great points there. That links in quite nicely to the next discussion point for today, which is thinking about the sorts of questions you should be asking if you're going to explore using gen AI in your own firm or business, as some of our listeners may be thinking about doing at the moment. So Jeff, you're on a bit of a roll, so why don't I stick with you for a minute: what are the top questions you think our listeners should be asking when they're thinking about implementing gen AI in their firms, and why should they ask them?

I probably sound like I'm repeating myself, but it is going to be: what type of data will you be submitting to gen AI? So, are you doing prompts for non-sensitive, generic inquiries on the law? To a degree, who cares; all you're literally doing is a Google search but using generative AI power. Then, what if your prompts include contextual references to get drafts on specific cases? Now you're getting more sensitive. 'I want summarisations of case law', well, that's probably okay depending on what you're actually looking at; but then 'summarisation of case law with direct contextual references to company-sensitive documents', now obviously you really are looking at 'I need to be on a closed-loop system'. So your risk posture really can vary depending on what you're inputting and what you're asking, and therefore your due diligence needs to change.
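To make that triage concrete, here is a minimal, hypothetical sketch in Python of the kind of prompt classification a firm might fold into its guidance or tooling. The categories, keyword markers and destination rules are illustrative assumptions only, not part of any LexisNexis product and nowhere near a complete solution; real triage would rely on client and matter data, DLP tooling and human judgement.

```python
from enum import Enum

class Sensitivity(Enum):
    GENERIC = 1        # general legal questions, no client context
    CONTEXTUAL = 2     # drafts or summaries referencing specific matters
    CONFIDENTIAL = 3   # client names, case documents, privileged material

# Hypothetical markers a firm might maintain and tune over time.
CONFIDENTIAL_MARKERS = ["witness statement", "client:", "matter ref"]
CONTEXTUAL_MARKERS = ["draft", "summarise", "our case", "claimant"]

def triage_prompt(prompt: str) -> Sensitivity:
    """Rough triage of a prompt before it is sent to any gen AI tool."""
    text = prompt.lower()
    if any(m in text for m in CONFIDENTIAL_MARKERS):
        return Sensitivity.CONFIDENTIAL
    if any(m in text for m in CONTEXTUAL_MARKERS):
        return Sensitivity.CONTEXTUAL
    return Sensitivity.GENERIC

def allowed_destinations(level: Sensitivity) -> list[str]:
    """Map sensitivity to where the prompt may go; risk posture and due
    diligence change with what is being input, per the point above."""
    if level is Sensitivity.CONFIDENTIAL:
        return ["approved closed-loop system only"]
    if level is Sensitivity.CONTEXTUAL:
        return ["approved closed-loop system", "vetted private instance"]
    return ["open tools permitted, subject to firm policy"]

print(allowed_destinations(triage_prompt("Summarise our case against the claimant")))
```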
Which ultimately brings me to my second question: what training are we going to give our employees, and what training will our service providers, like LexisNexis, provide to our employees? The way we use natural-language-based gen AI is completely different to the way we did Google searches; you are asking conversational, contextual questions. It's not just a simple 'where can I find the nearest hardware store?'; you're actually starting a conversation and really building upon it. As an example, if I say 'draw me a picture', I'll just get some random output, and that's fine; but if I say 'draw me a picture that uses greyscale, in portrait, and in this picture have a cat, the moon and a large futuristic landscape', you're really going to drive a more targeted output. Ultimately, when you're thinking about doing that from a legal perspective, that's a similar sort of conversational piece you want to get into: you're going to start from something very generic and then get more and more detailed in terms of how you drive it. So you've got to think not only about where you're going to start in terms of the questions you ask, but where you're going to end up, so actual training is really important. And from a business perspective as well, you're going to start getting into specific cases, potentially uploading documents and using people's names, because you really want to have something generated that's useful to you in whatever case you're taking forward, so really think about that.

The last question I would ask is, when you're using third parties to perform generative AI, you need to understand that third party's security, privacy and ethical approach to generative AI, and verify that your data is protected appropriately and that your proprietary data is not used to inform responses that would be available to competitors or the general population. So again, it's a real understanding of who that third party is, what they're doing and how they're using your data.

Brilliant. So it's very easy to see how you can start off thinking you've only got to ask a couple of questions, and one question leads to another question leads to another question, so it's something that needs to be given a lot of thought rather than just jumping straight in, by the sounds of it. Alison or Jessica, have either or both of you got anything you'd like to add with regards to questions people should be asking?

Happy to chip in if that helps. I'd say, in terms of the types of trends we're seeing with concerns that law firms have around the potential use of generative AI products, one of the first things we often get asked, or are discussing within our own firm, is how can we be assured that our client confidential information will remain secure? So that's a key question that all sorts of firms are grappling with. Another question: what's the deal with intellectual property rights, who owns what, who owns the prompt, who owns what comes out of it? Again, that probably throws up more questions than the one concern I've just put, but that's another thing we hear a lot about. And then finally, on a slightly different tack: how do we find the best products and services that we need? There seem to be so many suppliers out there trying to get business from us, all trying to seize the opportunity, so what do we do? Based on our budget, do we push ourselves, what can we afford, what should we be investing in when so much of this is unknown? So again, that's another concern and question coming out of law firms, I would say.

Brilliant. And Alison, with your DP hat on, are there any other questions you think people should be asking?

This is not so much data protection, but it is one that the ICO talks about, and it has
been mentioned already, but I just thought I'd put a bit of context around it: it's bias. I would want to know what steps have been taken to avoid bias, particularly where an AI tool is being used to make decisions that affect people, for example in the employment context. Biased outputs can arise from two sources: firstly, the data used to train the system was biased, and that can be the case with historic data, particularly in relation to employment; or the algorithms underpinning the system are biased. The bias could be completely unintended, but that's irrelevant: you need to know what checks have been made to ensure that the outputs are not biased, and also think about what checks you're going to make yourself to make sure the outputs are not biased.

Brilliant, thank you. The final point I want to pick up on today is all around sharing some insights with our viewers, with some real-life examples. Jeff, I'm going to come to you for this: for example, your experiences of working on our Lexis+ AI product. What have you had to go through, or what have we as a company had to go through, to ensure that the data we're using is safeguarded and is in the great place it is? Because the points we've gone through internally on our side are the same sort of points that people should be asking their own suppliers, whoever they're talking to about other models, to check they've gone through the same process and can give satisfactory answers. So could you give us a bit of a sneak peek into the world of Lexis+ AI, and what we've had to get right to make sure we can cover off a lot of those concerns?

Yeah, absolutely. I'm going to start by talking to Alison's point that she just made there about bias, and then turn that into hallucinations, to a degree. This technology was new to everybody, pretty much; artificial intelligence has been around for a long time, no problem there, but generative AI really came to the fore in 2023. Understanding how hallucinations work within generative AI was one of the fascinating things that I learned. Literally within the large language models, within the technology we use, you can utilise a sliding scale that says how much do I want this technology to actually make up answers. The primary reason for doing this is that if you provide the same answer over and over again, humans don't necessarily think there's a lot of intelligence behind that, it's just an ability to repeat information; so if you slide that bar and say, actually, I'm open to having variations of answers, i.e. hallucinations, and you then get different answers to the same question put by the same person, that actually gives some confidence that what you're looking at is something generated by an intelligent source, i.e. generative AI. So within the LexisNexis product we've got our slider bar set right down to the bottom: we don't want hallucinations, we don't care about having different answers to the same question over and over again, unlike something like ChatGPT; we actually want precise answers and accurate answers represented within our technology. So that was just something that I learned last year, and it was an interesting piece; I was trying to work out how it would actually do hallucinations, and the more you read on it, the more you find out about these things, and it's quite fascinating.
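The "sliding scale" Jeff describes maps onto what most large language model APIs expose as a temperature parameter. As a minimal, generic sketch, this uses the OpenAI Python client purely as a familiar example; it is an assumption for illustration and says nothing about how Lexis+ AI is actually built. Setting the value low keeps answers as consistent as the model allows; higher values trade consistency for variety.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

question = ("Summarise the limitation period for breach of contract claims "
            "in England and Wales.")

# temperature=0 keeps output as deterministic as the model allows, so the
# same question tends to produce the same answer on repeated calls.
response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name
    temperature=0,
    messages=[{"role": "user", "content": question}],
)
print(response.choices[0].message.content)
```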
In terms of the actual question around how we've handled security, bias and hallucination in Lexis+ AI: we were all involved right from the beginning of this process, and if you've ever been involved in technology, that can be quite surprising at times, to have your privacy team, your ethics team, your security teams and your technology teams all involved right from the beginning and all working on a product right from the start. That was really good, because it meant that we got to build this from the ground up, look at large language models, understand how we're going to use generative AI, and anticipate the questions that our customers might ask and expect of us. That's how we got to that original point: the technology itself is not materially different in terms of what we did before and what we do now, so from an infrastructure perspective, the security around it and the way we handle it is not that much different from what we've done before. If you're a customer of Lexis+, then Lexis+ AI is built on the same principles around how we protect the data and how we secure it.

In respect of hallucinations and bias, the way we give you confidence in our responses being accurate is through our use of citations, which I mentioned earlier, and we also employ a humans-in-the-loop approach, so our subject matter experts, our data scientists and our engineers are fine-tuning these large language models. We also use reinforcement learning from human feedback, which was another thing I learned around this that was really interesting. What we do is generate questions to put to the large language model, we get two answers back from that model, and humans go in and rate which one is better, and then we continually train like that. So we're not actually using customers' data: we're creating and engineering our own prompts, putting those in, and basically training the model via that, so there's no customer data going into this particular large language model that we use for Lexis+ AI. We also optimise and fine-tune these for certain desirable behaviours, and to ensure consistency and quality as well. So if you use this product, or get to use this product, you'll see there's a thumbs up and a thumbs down, like social media, so that you can easily rate whether the answer is good or bad, and there are feedback loops within there as well, so we're trying to ensure that if anything is not presenting the best answer for you, we can train and develop that accordingly.

I've already said we don't train the models on your data. We take your prompt and, in terms of the technical way this works, we augment it with our up-to-date data, which is authoritative and comprehensive, and that's why you trust Lexis, and then we provide you with an enhanced response. We can also add greater value if we add your documents to this. Now, that was another piece where initially, when we launched, we didn't have the document upload feature, because we had to do some more work on it and ensure it was secure and protected, so we initially launched with just prompts and then moved to document upload. Document uploads only last for 10 minutes after your session expiry, so as soon as your session expires, those documents disappear.
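The "prompt in, authoritative data attached, enhanced response out" flow Jeff describes is the general retrieval-augmented generation pattern. Here is a deliberately tiny sketch of that pattern, assuming a two-entry stand-in corpus, naive keyword-overlap retrieval and the OpenAI client as the model call; all of those are illustrative assumptions, not the Lexis+ AI implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment; any chat LLM would do

# Stand-in "authoritative" corpus. In a real system this would be a large,
# curated, regularly updated collection behind a proper search index.
SOURCES = {
    "Limitation Act 1980, s 5": "An action founded on simple contract shall not be "
                                "brought after the expiration of six years from the "
                                "date on which the cause of action accrued.",
    "Hypothetical Practice Note 12": "Limitation for deeds is twelve years; always "
                                     "check for acknowledgement or part payment.",
}

def retrieve(question: str, k: int = 2) -> dict[str, str]:
    """Crude keyword-overlap retrieval; real systems use search engines or embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(
        SOURCES.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return dict(scored[:k])

def answer_with_citations(question: str) -> str:
    """Augment the user's prompt with retrieved sources, then ask the model to cite them."""
    passages = retrieve(question)
    context = "\n".join(f"[{cite}] {text}" for cite, text in passages.items())
    prompt = (
        "Answer the question using only the sources below, citing them in square brackets.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(answer_with_citations("What is the limitation period for a simple contract claim?"))
```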
Some of that is counterintuitive to a good experience: you might, as an individual user, think, well, I've uploaded this document, I've closed my session, I want to go back and reuse that document; but for the security of your information within your company, you have to re-upload that document. We don't keep it, we don't maintain it, it has to go and it has to disappear, so that's another piece that's slightly different and slightly unique in terms of the way we operate within these boundaries. We do all the normal stuff, so we secure your data in transit and at rest, encryption in transit and encryption at rest; I don't know how many technologists there are on here, so I'm not going to go into which particular algorithms we use and all the rest of that, but we are using industry-standard encryption and protection wherever we actually do hold your data, which includes your prompts. We retain your prompts for 90 days, unless you go in and say, actually, I don't want this prompt kept, in which case it's up to you and you can delete it immediately. So it's 10 minutes for documents that you upload, and 90 days for the prompts that you enter, but again, you can go in and delete those.

As I stated in my top three questions, creating prompts is a new skill set, and we help our customers through this with training and prompt guidance within the portal. When we went through this whole process, we did a whole heap of work internally to work out what good prompts look like, so we're aiming to transfer that knowledge to you to make your use of our products far better as well. And then lastly, we've had third-party security testing, and we're in the process of a Systems and Organization Controls audit, otherwise known as a SOC 2 audit. The UK quite often refers to ISO 27001 as its guiding principle; SOC 2 is kind of the American equivalent, if you like, and they do very similar things. So certainly for our company, as an international organisation, if we're using a third-party vendor and they provide us with a SOC 2 report or an ISO 27001 report, then we're happy with those as a true assessment that the company has been reviewed and that their security processes are in a good state. So that gives us assurance that our security controls are designed and implemented effectively. And that's my whistle-stop tour of Lexis+ AI and what we've done in terms of trying to make sure that we keep all of this data secure and provide really good value to our customers in terms of how we can use this wonderful new technology.

Well, I have to say, I learned something there, and I work for LexisNexis, so that's saying something. But I think the point is, that's just to give you all a bit of an insight into the process we've been through, and also into what you should be checking with any other suppliers you're talking to: that they've been through the same sort of rigorous process, because without that, and I'm sure there are a million more things that Jeff and the team have had to do beyond the ones he shared, you can't be quite certain everything's as it should be. I think the key point that struck me there was, if you're looking to do this in your own firm or business, get the right people around the table at the start: the security team, the IT team, the compliance team, all of those people, talking from the get-go. So, really helpful.
Just conscious of time, but very quickly, Jessica, do you have anything you want to say to bring this particular question to a close, around potential impacts you're seeing in your firm, or other firms, of things they're having to think about around gen AI? Just a couple of points, if you've got any.

Yeah, I think my approach is from a slightly different perspective, in terms of business planning, so business planning and forward planning. As you know, we've talked about how this is still at quite an early stage and there are still a lot of unknowns and uncertainty, but in terms of business planning and future-proofing, I think firms are thinking about their team structures, potentially, and what these might look like: will they need to be different in the future when (I could say if, but I think more likely when) gen AI really takes off? So are we going to have to have teams made up of broader skill sets, different types of individuals, more knowledge managers for example, but with, I think, still a layer of expert human verification at the top level? It's just very unknown, but I think people are thinking about what the teams providing services to clients might look like in the future and whether they will be different. Also linked to that, from a business planning perspective, will we need to think about different pricing models? Many law firms are still very much focused on the billable hour, and I think there really does need to be a move away from that, so thinking more about innovative ways of pricing for clients as well, more fixed fees, and how that reconciles with how we operate at the moment across the legal sector. So, as I say, more business planning considerations, but I think they are things that we have to think about in terms of how gen AI could impact law firms in the future.

That's super helpful, thank you. Well, I'm conscious we've only got a couple of minutes left in the session, because these people have got to get back to their busy day jobs, but that was a real whistle-stop tour through those four talking points, and I'd really like to thank our panellists for sharing their respective insights with everybody; I'm sure the audience will agree that you were fantastic, so thank you very much for that. It just remains for me to thank you all for your attendance and wish you a good rest of the day. Thank you for joining us.