Let's look at AI, and specifically the risk that it could lead to the extinction of humans. Many top experts have signed a statement warning of the risks of artificial intelligence, and this is what the wording says: mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. The G7 group of leading economies, the EU, the US, they've all been holding meetings trying to work out how to tackle the challenges. Well, I've been speaking to Stephanie Hare, a technology ethics researcher, about the current risks posed by AI. What we aren't talking about is what they're doing to stop these risks from manifesting. So they are all still building this technology. They're not saying they're going to stop building it.
They're building it. And they're still seeking investment. And this investment runs to multiple billions of dollars.
So that's not really a mitigation strategy, is it? Without wishing to disrespect people on the list, who are serious people, and I listen to them, there are a lot of people who aren't on that list who are also very serious thinkers and who are warning of very different risks. Not the sort of science fiction risks, but the risks that are happening with AI right now.
And those are risks of discrimination. Those are risks of misinformation and disinformation. Those are risks of interference in our elections. And it's interesting, because if we don't talk about the risks that are happening right now, then they can carry on making money and they can carry on having us all think about things that may or may not happen in the future.
I would like to have us thinking about both. Let's think about the existential long-term risks that are possible, and let's think about what's happening right now and hurting people right now. Well, that was the view from Stephanie Hare. Let's talk to our panel, Ayesha and Victoria Withers. It's quite a warning, isn't it, Ayesha? "Mitigating the risk of extinction from AI should be a global priority," so says this warning. Is this just scaremongering, or should we be really concerned about this stuff? I think we should take heed of what a lot of these experts are saying.
I think on the one hand, AI could provide some really important solutions to a lot of problems that we have in society, particularly in the health sector. I was talking to a radiographer who was saying they're hoping to develop AI which can look at cancer scans and things like that very, very quickly.
So you can see that there could be some huge advantages. But, and there's always a but, I do think that there is a worry about how this AI is designed, with inbuilt biases, and also, how do you regulate the stuff? Often, you know, policymakers and elected representatives are very behind the curve when it comes to keeping up with technology and how to regulate it.
So I think we are right, and politicians and regulators should heed the warnings, particularly from people who've been very involved in artificial intelligence from the beginning. And if they are sounding the sirens of warning, then I think we do have to look at it and take it seriously. Yeah, and that's the challenge, isn't it, Victoria? Because, you know, that list of people warning against this is a who's who of the AI industry; there are some very big names. But I hope you could hear that clip we played a little earlier from Stephanie Hare. She said, look, the people warning against it are the very people who are still building it; they're making money from it, they're getting research and funding for it. So if they were that worried, they should just stop, shouldn't they?
No, it's a very good point. Good to be with you, Ben and Ayesha. And one thing that's been very striking to me: I touched on the AI world around 2019, 2020, when I was working for the U.S. Department of Energy, which is one of our lead agencies on developing AI.
And one of the things I was told then was that we were years and years away from this actually becoming a functional part of our society. And it is now suddenly happening in real time. And so my concern is that we don't have a handle on this legally or ethically, and that it is about to be enormously disruptive to our societies in ways we can only imagine. And so I agree with Ayesha: we need to get in front of it now.
Yeah, and Ayesha, you talked there about some of the practical, very useful applications of this technology, particularly in healthcare. We talked just last week on the programme about how AI managed to whittle down a list of potential antibiotics to treat an infection, saving hours and hours of lab time in finding the ones that could work. But I wonder how we regulate this. How do we separate the good from the bad?
Well, that is absolutely the key question. I think one thing that government and regulators and thinkers in this space should do is really collaborate with experts from the commercial world of artificial intelligence, because I think that, with the best will in the world, the expertise is not going to be found in government departments and with policy officials; this technology, the speed and the advancement, just moves so quickly. I think this is one area where regulators and policy experts really should bring in expertise from the private sector, from the people who really are at the cutting edge, because they're going to be the ones who can help, if not get ahead of this stuff, then at least try to keep up with what's going on.

Yeah, and Victoria, there is a danger, isn't there, that regulation done in a panic is not the best regulation; it hasn't really got the best interests of both users and the industry at heart.

No, and I think we can take an instructive lesson from what happened with the development of the internet, and, you know, the hope that it could be free and completely open and benefit everybody. And it has been obviously highly beneficial, but there are also problems, because nobody figured out how to regulate the flow of information and how it was going to be consumed.
So I strongly agree that we need to figure out how to game out these worst-case scenarios, of how this could become a dominant force that humans can no longer control, and then put in safeguards, tripwires, warning bells, whatever you want to call them, to ensure that that doesn't happen.