Welcome back to AI Unscripted. Today we have an expert in AI governance joining us: Oliver Patel, the Enterprise AI Governance Lead at AstraZeneca. With a background that bridges academia, public policy and global media, Oliver is at the forefront of responsible AI in the pharmaceutical industry and brings a wealth of experience in ensuring that AI technologies are safe, trustworthy and ethically managed. Oliver, it's great to have you with us today.

Kieran, it's really great to be here today. I'm excited for the discussion and looking forward to getting into it. Thanks so much for bringing me on to your podcast.

My pleasure, Oliver. You've had a fascinating career, working in academia and government policy and now leading AI governance in pharma. Can you share a little about how each phase of your career has shaped your approach to AI governance today?

It's a fantastic question, and I know it sounds a bit cheesy, but I do feel genuinely privileged to be working on a topic and in a field that I've always been passionate about. It's only in recent years that working on issues relating to AI governance and AI ethics has actually become a viable career path in the corporate and business world. Really, I would go back to my master's degree, when I studied philosophy and public policy at the London School of Economics. That was really formative for me, because I was introduced to various philosophical and ethical debates relating to AI for the first time. I did modules in things like bioethics and consciousness, whether we can replicate human intelligence and human levels of consciousness in machine form, and I studied philosophy of mind: all of these wacky topics relating to philosophy, ethics and psychology. I always found it really interesting, I was super engaged and passionate about it, and I knew what I was interested in, but at that time there were no jobs in AI governance or AI ethics. It was an area of research and academic interest, but it wasn't an industry like it is becoming today. So I spent a few years working in academia, pursuing research on different but associated topics like public policy and data privacy, because I always had this interest in the intersection of technology and public policy. I went down the path of looking at data privacy and data governance from an academic perspective, and spent quite a few years at University College London. I then moved to government: I worked for the UK government for a couple of years, where I negotiated the agreement between the EU and the UK to basically keep data flowing between the two countries after Brexit, what we called the data adequacy agreement on cross-border data transfers. But then I started to realize that this esoteric topic I had always been interested in, AI ethics and AI governance, was popping up more and more in the conversation. Really from about 2019 onwards it has become a mainstay topic, surging up the global political agenda. So I joined AstraZeneca at the start of 2023 to head up their AI governance function, and as I said, it's a privilege for me to be able to work on a topic that I was always academically interested in but that wasn't really a viable career path before. Now the AI governance profession is booming. Alongside my work at AstraZeneca, I also work with the International Association of Privacy Professionals.
They have established one of the market-leading training programmes and professional certifications for AI governance, and they recently announced that more than 10,000 professionals worldwide have taken that training. So this is really going to become something quite big, and I have to admit I wouldn't have been able to predict that ten years ago.

It's a pity; you could have said you predicted this ten years ago. But it's always interesting to see how life turns out.

It really is, and it always does.

It always does, folks, for anyone listening in. Oliver, the pharmaceutical industry itself has unique challenges and potential when it comes to AI. Can you explain the role that AI plays and how it impacts operations and patient outcomes?

Absolutely. When people ask me how AI and machine learning are used in the pharmaceutical industry, the answer is that there isn't one big use case to rule them all. As you know, Kieran, it's a general-purpose technology, so there's essentially no domain of activity that won't in some way be impacted, and potentially benefited, by the use of AI and data-driven technologies. But at its core, one of the main challenges for the pharmaceutical industry is the time it takes to discover and develop new medicines and to get those medicines to the patients who need them. It can be a very long process, on average between 10 and 15 years, and anything that can be done to speed up the different constituent parts of that process in a safe and responsible way, whether through AI, a totally different technology or organizational change, is a big focus and a big priority for the sector as a whole. This is not just about AstraZeneca; every company is looking at this. Because it takes so long to discover and develop new medicines, and because doing so more quickly could have such a transformative impact in terms of supporting the patients who need those medicines, anything we can do to speed up that process is really powerful. Obviously AI plays a huge role there: there are many different areas in which the use of AI could potentially accelerate discovering, testing and trialling new medicines. That's key. But when we look at generative AI, I'm not sure pharma is markedly different from other sectors, in the sense that the ability to create content at scale, or to have enterprise chat and search, that kind of thing, has a lot of parallels across all big corporates. So there's a lot that's unique to pharma, but there's also a lot that's fairly generic across enterprises in terms of how they're using GenAI to become more efficient, productive and innovative.

Yeah, I'd like people to use it more. It's interesting, Oliver: I've seen companies publicly decrying it, or proudly suggesting that they've given up their GenAI licences, and you think, well, that's $20 or $40 a month. If you can't come up with one or two ideas a day that generate genuinely helpful advice and product, there's something wrong with your ideas, not with the cost. But in a highly regulated field like pharma, how do you ensure compliance across the business without stifling innovation? Are you using specific frameworks or best practices to get the team on board and bridge that gap?
Absolutely. The company I work for, AstraZeneca, published its AI ethics principles back in 2021, so it was one of the first pharmaceutical companies to do so. What we've been doing in the past few years is establishing and implementing an enterprise AI governance framework. The framework consists of five core pillars, and all of this work is designed to enable us to implement and operationalize our principles and values in practice, in the context of our day-to-day AI-related activities. The first pillar is our policies and standards, our overall policy framework for AI and AI governance. The second pillar is how we do risk assessments, how we review new projects and activities and assess and mitigate risks. The third pillar is how we keep track of AI activities across the organization and across the ecosystem. The fourth pillar is how we keep track of all the regulations and ensure that our internal policies, processes and ways of working are fit for purpose and align with those regulations as they continue to evolve at breakneck speed. And the fifth and most important pillar, the one I'm really passionate about, is how we are upskilling, training and educating our workforce to be able to use and benefit from AI in a safe and responsible way. Because what I always say is that AI governance is change management. AI existed long before AI governance; we can trace the history of AI back to the 1950s, and there are many ways in which every major enterprise has been using AI for decades now. What's changed in this era, and it is a cliché, is that since the launch of ChatGPT, AI is at the fingertips of all of us, of all employees, and that wasn't the case before. The technology itself may not be as new as some people portray it, but the democratization of it is, and that's why doing AI governance, and really just implementing an AI strategy effectively, is all about change management. The point you made earlier about some companies dropping their AI programmes or giving up their GenAI licences because they don't see the benefit: I think that is, at its core, more of a people problem than a technology problem.
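As a purely hypothetical illustration of how the risk-assessment and inventory pillars Oliver describes might be operationalized, here is a minimal Python sketch of an AI use-case register with proportionate risk tiering. The tiers, screening questions and thresholds are illustrative assumptions, not AstraZeneca's actual framework.

```python
# Hypothetical sketch of an AI use-case register with simple risk tiering.
# The tiers and screening questions are illustrative assumptions only.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"


@dataclass
class AIUseCase:
    name: str
    affects_individuals: bool    # e.g. decisions about patients or staff
    uses_personal_data: bool
    operates_autonomously: bool  # no routine human review of outputs


def triage(use_case: AIUseCase) -> RiskTier:
    """Assign a proportionate review path rather than governing everything equally."""
    if use_case.affects_individuals and use_case.operates_autonomously:
        return RiskTier.HIGH     # full risk assessment and senior sign-off
    if use_case.uses_personal_data or use_case.affects_individuals:
        return RiskTier.LIMITED  # lightweight review, documented mitigations
    return RiskTier.MINIMAL      # register the use case and monitor


register = [
    AIUseCase("internal meeting summarisation", False, False, False),
    AIUseCase("candidate CV screening", True, True, True),
]
for uc in register:
    print(f"{uc.name} -> {triage(uc).value}")
```

The design point is proportionality: low-risk use cases are simply registered and monitored, while higher-risk ones trigger a fuller review, which is the risk-based approach Oliver returns to later in the conversation.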
I think that's the dirty secret, isn't it? Very often it's the people holding it up, not the technology itself, but we're not always allowed to talk about that. One thing you mentioned there: you've got your principles, your values, those five pillars, and you mentioned government earlier as well. One thing you see in EU legislation on AI governance is the explainability and interpretability of models. If we're using deep algorithmic models, complex AI, to discover drugs and proteins or mixtures or whatever else, how do you make sure that the AI is interpretable and explainable to stakeholders like regulators, clinicians, patients or anyone else who's interested? Because that seems like quite a feat.

Yeah, it's a really hot topic in the field. First of all, there are a lot of different concepts that get discussed which are all slightly different. Transparency we could think of as an umbrella term for all sorts of things which, at their core, are about enabling people to understand how, where and why AI is being used, what it's doing and what the impact is. So we've got transparency as a foundational concept. Then we have explainability, which is more about the ability to explain to different audiences and stakeholders how an AI model may have generated a particular output: being able to explain how it works and why it did what it did. And then interpretability, which is slightly more technical and refers to how someone could actually assess the inner workings of a model in order to generate that explanation. So you could look at interpretability and explainability as part of a broader transparency agenda. The first thing I'd say in response to the question is that it really depends on the use case and on the context. There will be some contexts where explainability, interpretability and transparency more generally are mission-critical: think about decision-making in an arena that could really impact someone's life, such as recruitment, or, in the public sector, judicial decision-making. In those contexts you'd probably be in a difficult place from a legal perspective if you weren't using AI models which you could, with available techniques, interpret and explain. However, as you mentioned, there are more advanced models, such as large language models and really advanced deep learning models, where we do not yet have robust techniques to deeply interpret, understand and explain why particular outputs have been generated. So it will always depend on the use case, and we have to decide whether it's appropriate to use a certain type of technique or model to solve a particular problem. That's why we need to look at each use case, case by case, because the model we choose has a direct impact on how explainable and interpretable it can be. In a nutshell, in some contexts it's mission-critical, and that needs to inform the type of AI model that's used; in other contexts we might decide that accuracy and robustness, i.e. the model working well, matter more than being able to understand, explain and interpret why it does what it does.
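To make the distinction between explainability and interpretability slightly more concrete, here is a minimal sketch of one common post-hoc technique, permutation feature importance, applied to a synthetic tabular model. The dataset and model are stand-ins for illustration, not anything discussed in the episode, and this kind of technique suits structured models rather than the large language models Oliver notes are harder to interpret.

```python
# Minimal sketch of permutation feature importance, one post-hoc
# explainability technique. Data and model are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for a structured, lower-risk use case.
X, y = make_classification(n_samples=1_000, n_features=8, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the held-out score
# degrades; larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: mean score drop = {result.importances_mean[i]:.3f}")
```

The ranked score drops are the raw material for a plain-language explanation to a regulator, clinician or other stakeholder about what the model is actually relying on.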
To play that back a little and ask for some final advice: if people want to introduce AI governance within their own operations, what practical advice would you give them, and what are the pitfalls they need to avoid?

I'll try to land three key tips here. First of all, we live in an era where AI is going to become the new software: AI will be embedded in and infused with all the software applications, digital applications and devices we use. Therefore AI governance does not, and cannot, mean governing all AI. So the first tip is that you have to take a risk-based approach and be proportionate, because it's never going to be possible in AI governance to wrap your arms around absolutely everything. The second tip, as I said before, is that AI governance really is about change management. You could have the best policies, principles and processes in place, but unless you're bringing all the relevant stakeholders along on the journey with you, you're not going to get anywhere. And the third and final point, again linked to people, is that you need to make sure you've got buy-in from the right people at the right levels. That means senior-level buy-in, of course, to make sure that what you want to advance with AI governance is supported and sponsored and you have what you need. But perhaps more importantly, AI governance professionals, and people who want to drive this work in their organizations, need to learn how to speak the language of AI and data science. They need to win over the technical community within their organizations, the people who are actually building and deploying these models, because often there's a bit of a disconnect between the governance world and the technical world. I'm not saying you need to learn how to code if it's not your thing, but you need to be able to speak the language they speak, to empathize with them and to understand their challenges and priorities: what are they working on, why does it matter, and how will your governance impact them, potentially negatively, potentially positively? If you get all of that right, you can position governance as an enabler and a core part of the overall AI strategy, and it can enable the business to adopt AI safely and securely at scale, and perhaps even more quickly, because there's more confidence that there are controls and guardrails in place. So if you get it right, AI governance can be a core enabler of the business, but you need to approach it in the more nuanced way I've been outlining.

I love all those descriptions and definitions, and I love the fact, Oliver, that you've said it's ultimately a people opportunity, or a people problem, take your pick depending on the moment you're in. It does feel like a change-management piece, and I'm an absolute fan and behind you all the way. I think the education piece inside a business, understanding what AI is and isn't, is absolutely key. It is becoming, and will be, a fundamental part of any business: not what I would describe as a hype technology, but a real technology that, as you mentioned earlier, is an overnight success nearly seventy years in the making, dating back to the mid-1950s rather than to the launch of ChatGPT. Oliver, if people want to find out a little bit more about you, how do they do that?

First of all, thanks so much, Kieran, for this discussion. It's been really great, and I feel really energized on a Monday morning talking to you about these topics. If people want to connect with me or follow me, LinkedIn is the best place. I post loads of content there about all of these topics: AI governance, AI policy, AI and GenAI, and digital technologies more broadly. So connect with me on LinkedIn and you'll get loads of content, and you'll also be able to learn about the different trainings and courses that I'm involved in.

Fantastic, and it is excellent content as well; you may be too polite to say that, but I am not. Oliver, thank you very much indeed. It's amazing, and equally energizing, to hear someone who started out not quite knowing how far their career would take them, doing a master's purely because they enjoyed it, become someone whose face lights up when they talk about their topic. I wish you every success, sir. Thank you so much.