Hey, ChatGPT, a quick question. Sure, Johnny. What's up? A while back I was talking to a friend who told me that AI has the potential to destroy our society and potentially end humanity.
End humanity? That's not true, right? Well, it's not entirely out of the question. Warnings about the dangers of AI.
Artificial intelligence. The threat AI poses to the social order. There's been so much talk about the danger of AI.
The risk that could lead to the extinction of humans. Does it worry you, AI? Somewhat.
And I always find this pretty unsatisfying because they talk about it in vague terms. A threat to democracy. Enormous threats to democracy. Losing control. Could we lose control of civilization?
100%. A danger to our society. So today, I want to show you what those dangers and threats actually look like. I've been deep in all of the new laws that are coming out or being proposed, and they give you a solid idea of what lawmakers and regulators are worried about: how they think this rapidly developing technology could affect our societies.
And I'm going to lay it out in the plainest terms possible. But first, real quick, a 60-second explanation of what AI actually is. For decades we've used computers to do things that our human brains can't do super well. We've developed entire languages to talk to computers, to give them very specific instructions on how to execute a task.
It's called code and we've been doing this for decades. It's gotten really sophisticated. The difference now is that instead of really specific instructions from a human, we've built software that teaches itself how to do stuff.
The humans now just need to gather tons of data from the world and feed it into this software. And the whole point is that we don't really know what's happening inside this black box. Making accurate predictions and solving problems based on a bunch of raw input from the world is exactly what our brains do. We call it intelligence; in this case, an artificial version.
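To make that difference concrete, here's a minimal sketch in Python (the spam-filter example and the scikit-learn library are just stand-ins for illustration): the first function is classic code, where a human writes the exact rules; the second learns its own rules from labeled examples, and whatever it learned lives inside the model, where we can't easily read it.

```python
# Classic programming: a human writes the exact rules.
def is_spam_rules(subject: str) -> bool:
    return "free money" in subject.lower() or subject.isupper()

# Machine learning: the human supplies examples; the software finds its own rules.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

subjects = ["FREE MONEY NOW", "Meeting at 3pm", "You won a prize", "Lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(subjects, labels)  # whatever it learned now lives inside the "black box"

print(model.predict(["Claim your free prize"]))  # a prediction, not a hand-coded rule
```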
And it seemingly has the potential to change everything we do, which is exactly what makes it dangerous. So now let's get specific. What do we mean by danger to humanity?
How could this fancy algorithmic computer software actually do something bad? To get started, I want to show you something. Today we're going to be rolling the dice on AI, looking at six scenarios of how AI could negatively affect humans, and what we can do to prevent it.
And as we do this, it'll be helpful if you remember this graphic, this black box, where humans tell the AI what to solve, give it a bunch of data, and let it figure it out by itself. In this black box lies the potential promise, and peril, of this new technology. All right, here we go. First up, predictive policing.
Police should use all the technology available to prosecute crime, but not in a predictive way. You are innocent until proven guilty, not the other way around. This is Carme Artigas. She's an expert with 30 years in machine learning. She was the first Secretary of State for Artificial Intelligence in Spain, and now she serves on the UN Advisory Board on AI.
We call for the need for the global governance of AI. When it comes to understanding artificial intelligence and its effect on society, she's really the best person to talk to. So, AI: you give it a lot of data, you ask it to solve a problem, and it predicts the answer. The more data you train it on, the more accurate its results are. Take hurricane monitoring: the more data a model has on sea surface temperature, air pressure, wind speed, humidity levels, ocean heat content, and historical storms, the more accurate it will be at predicting where the next hurricane will be and what it will look like.
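As a toy illustration of that "more data, better predictions" idea (everything here is invented for the sketch: the feature ranges, the fake underlying relationship, and the choice of scikit-learn's RandomForestRegressor), you can watch a model's accuracy climb as the training set grows:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# invented features: sea-surface temp (C), air pressure (hPa), wind speed (km/h), humidity (%)
X = rng.uniform([26, 980, 50, 60], [31, 1015, 250, 100], size=(n, 4))
# an invented "true" relationship, plus noise, just so there is something to learn
y = 2.0 * X[:, 0] - 0.1 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 1.0, n)

for n_train in (100, 1000, 4000):  # same model, bigger and bigger training sets
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=n_train, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
    print(n_train, round(model.score(X_te, y_te), 3))  # R^2 generally rises with more data
```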
What if we applied the same approach to crime? Imagine a world where the local police department has access to data of all kinds, which they already do. We're talking about biometric data in general.
That means face, that means voice, even movement, this type of record. If I'm in a park, what I don't want is my government recognizing in real time who I am, who I'm with, at what time, and doing what. Because we are innocent people; we have done nothing wrong.
The police department, which is in charge of fighting crime, would have an incentive to use a machine learning algorithm, an AI, to take all of this data and try to predict who is going to commit a crime. It's a tempting idea. Imagine if we could actually prevent crime before it happens. Let's not kid ourselves.
We are arresting individuals who have broken no law. Minority Report. We don't believe in legal systems that predict the rate of crime. The concerning element here is that this could be used by governments or private actors to track individuals without consent, infringing their privacy rights and establishing a mass surveillance system.
And there's also a risk of wrongful identification. So this is already kind of happening here in the United States. Recently, police in Detroit were looking for a thief and they had security camera footage. They ended up using an AI algorithm to search driver's license records. And they found what looked like a match.
This man, who they arrested and who spent a night in jail before they realized they had the wrong person. The algorithm had made an inaccurate match.
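Here's a schematic sketch of why that happens (the random vectors below stand in for face embeddings from a real recognition model; nothing here is the actual system Detroit used): a similarity search over a big database always returns somebody, whether or not the right person is even in there.

```python
# Schematic face search: compare a query embedding against a database and
# return the closest match. The danger: "closest" is not the same as "correct".
import numpy as np

rng = np.random.default_rng(42)
database = rng.normal(size=(100_000, 128))          # stand-in for license-photo embeddings
database /= np.linalg.norm(database, axis=1, keepdims=True)

query = rng.normal(size=128)                         # stand-in for the CCTV face
query /= np.linalg.norm(query)

scores = database @ query                            # cosine similarity against everyone
best = int(np.argmax(scores))
print(f"best match: person #{best}, similarity {scores[best]:.2f}")

# With 100,000 candidates, SOMEONE always scores highest, even if the real
# thief isn't in the database at all. Without a strict threshold and human
# review, that unlucky person becomes "the match".
```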
The police of the future will definitely use AI to do their job better. But the nightmare scenario is that police departments get so thirsty for new data that they start tracking everyone and everything in the name of getting ahead of crime. So in this new AI bill in the EU, they've made it illegal to collect all of this data and train AI systems to predict crime. Because, in their words, people should always be judged based on their actual behavior. Okay, let's see what else the future holds here. Yes, elections.
Experts are worried that AI will affect elections and our democracy, because a huge part of what makes elections and democracy work is trust: trust in the system itself, that your vote actually counts, and trust in the information you receive about the candidates and about what happened in the election. So one of the nightmare scenarios with AI is something that's already happening, though it's in its early phases.
Deepfakes, something we made a whole video about. As we covered there, it's becoming easier and easier to make a deepfake that looks like a politician or a leader saying something they didn't say.
Lucky for us, though, we humans are pretty good at spotting these fakes, partly because hundreds of thousands of years of evolution have trained our brains to be really discerning of human faces. It's how we read other people. So for now, the effectiveness of deepfakes in swaying elections or spreading misinformation has actually been pretty limited. But we're just in the early phases of all of this.
Deepfakes, and synthetic media of all kinds, meaning fake video and audio, are going to get way better really quickly. You can imagine an election in four years where, in Arizona, a series of robocalls is placed using an AI system with authentic-sounding deepfaked voices, alerting residents that their local polling station has been taken over by a militia and that for their own safety they should stay home. Or in Miami, a synthetic video goes viral showing poll workers burning paper ballots or tampering with voting machines. People see this stuff, they believe it, and they stop believing in our delicate system called democracy.
But you know what? This is actually not the thing people are most worried about. Over the years there have been all kinds of image manipulation technologies.
Like when Photoshop came out, people freaked out: we can manipulate images! Technology makes it difficult, maybe even impossible, to tell what's real and what's not. But we all got savvy about that really quickly. We now know to be suspicious of images. The scarier result is actually that we start to doubt everything we see.
It's trying to make people believe nothing, to lose trust in our institutions. So what do you do about this? Well, in California, lawmakers are requiring online platforms like YouTube and Facebook to find synthetic media and label it or take it down.
Some of these bills even prohibit people from posting election-related content that has been generated or modified using AI, at least within a certain timeframe of the election. The AI bill in Europe goes further: it requires anyone who makes deepfakes or synthetic media to code in an invisible watermark, something we can't see, but that a piece of software could detect, flagging the content as fake.
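As a toy version of that idea (real AI-content watermarks are far more sophisticated and are designed to survive cropping and compression; this least-significant-bit sketch is my own illustration), you can hide a known bit pattern in an image's pixels so that software can find it while eyes can't:

```python
# Hide a known bit pattern in the least significant bits of an image's pixels,
# then check for it later: "invisible to people, visible to software".
import numpy as np

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # made-up signature

def embed(image: np.ndarray) -> np.ndarray:
    marked = image.copy()
    flat = marked.reshape(-1)                       # view into the copy
    flat[: len(WATERMARK)] = (flat[: len(WATERMARK)] & 0xFE) | WATERMARK  # set LSBs
    return marked

def detect(image: np.ndarray) -> bool:
    lsbs = image.reshape(-1)[: len(WATERMARK)] & 1
    return bool(np.array_equal(lsbs, WATERMARK))

img = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
print(detect(img))         # almost certainly False: an ordinary image
print(detect(embed(img)))  # True: watermark found, flag as AI-generated
```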
In the AI Act, what we make compulsory by law is that you must identify whether something has been generated by a human or by an AI. Okay, that's the future of democracy. Let's roll the dice again.
Okay, this one's interesting. Social scoring. For us, social scoring is a way that governments can control the population, and it can lead to unfair treatment or discrimination. Imagine a world where your behavior, both online and off, was tracked and tabulated to create a personal score.
For example, in a hypothetical country, the government deploys an AI-driven social scoring system. It could monitor citizens' online behavior. Where you live. Your financial transactions.
How good you are at paying your loans on time. Whether you ever complain about your government or not. And this all contributes to a score. And that ranking allows you, or prevents you from, access to public services, to housing, to loans. So for us it's a totally unacceptable risk.
OK, that all sounds really scary, but what's crazy is that this kind of already exists here in the United States. Credit scoring, I mean, the U.S. credit scoring, is also a way to discriminate against people.
Here in the U.S., we let corporations collect a bunch of data about us, mostly our finances, and then use an algorithm to assign everyone a score that affects our ability to get loans, housing, and jobs, and even how much we end up paying for insurance. Get your FICO score for free today. The credit score is totally normalized. We're very okay with this.
We're okay with it because it's not that invasive, and yet even this relatively benign scoring system is already discriminatory against certain groups. Now imagine a world where way more information is scraped and used for your social score. Your employer could buy that data and track even more about you on the job,
and they could use all of it to evaluate whether you're fit for a promotion, or whether to hire you in the first place. They could even track your movement at work, analyze your face, your enthusiasm for the work, your conversations with colleagues, and then let the machine decide whether to keep you or fire you. If you're applying to university, you would send in your photo, your essays, your application, your social media handle, all to an AI-powered admissions system
that analyzes it all in much more detail than a human could and decides who gets in. Honestly, this sounds dystopian, but it's also really efficient, and it could actually be more accurate if we got it right, theoretically taking the human bias out of who gets into university and who doesn't. But it turns out that an AI is biased too.
It's biased by the information it has been trained on. And so, without some oversight, these AI social scoring systems could start to create major discrimination against certain groups. And we would never know it, because all of the discrimination is happening inside of that black box. All we see is the output, what comes out the other side. And we're primed to think it's accurate because the big fancy machine did it.
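Here's a tiny demo of that "biased in, biased out" problem (all of the data is invented for the sketch): train a model on historical decisions that disadvantaged one group, and it reproduces the disadvantage, even though no explicit rule about the group appears anywhere in the code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B (invented)
skill = rng.normal(0, 1, n)            # the thing the score SHOULD measure
# invented historical labels: equal skill, but group B was approved less often
approved = (skill + rng.normal(0, 0.5, n) - 0.8 * group) > 0

model = LogisticRegression().fit(np.c_[skill, group], approved)

same_skill = np.array([[0.5, 0.0], [0.5, 1.0]])  # identical skill, different group
print(model.predict_proba(same_skill)[:, 1])     # group B scores lower: learned bias
```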
So many of you are probably thinking that this already happens in China. There's a social credit system. If you thought the way Facebook tracks you was scary, it's got nothing on the Chinese government.
It gives you a score between 600 and 1300. And depending on where you live, it can determine what kind of school your kid can go to, what parts of the country you can travel to, whether or not you can use the high-speed trains, and what job you can get. In some parts of China, they're experimenting with punishing people with low scores by slowing down their internet speeds. Meanwhile, people with high scores get all kinds of perks: better schools, high-speed trains, and even expedited government applications. Again, this isn't all of China, and it's not all centralized into one giant database.
But with the advent of more and more powerful artificial intelligence, no doubt China's social credit system will become more robust and more invasive. And that is what lawmakers in Europe and the United States are scared about. Europe classifies any kind of social scoring using AI as an unacceptable risk.
You can't use AI systems to rank or classify people based on how they act in society. So this is something that, according to our values, is not acceptable. It's a prohibited use of AI.
One of the tricky things about reporting this story is just how many hot takes everyone has about AI. I feel like every week there are hundreds of different new developments coming out. Recently, a Nobel Prize was awarded to machine learning researchers, and a bunch of news stories came out.
You can see that one of them likened it to the invention of penicillin, which changed the world, while others emphasized the risks of AI. And that is why I'm grateful for today's sponsor, Ground News, who I've talked about a lot on this channel. It is something I use because it is actually incredibly useful. They're having a massive sale right now during the holiday season.
If you click the link in my description or you scan this QR code that is currently on screen, you get 50% off their Vantage plan. Ground News is a website and an app that aggregates news sources from all around the world into one place. And then it analyzes these news stories and sorts them and gives you a bunch of information that you wouldn't otherwise have. Like whether it leans right or left, the factuality of the article, even down to who owns the news outlet.
Ground News is particularly useful for all this coverage on AI because, again, there are so many speculative takes, kind of like this video. We're all trying to understand this, and people are covering it in different ways.
Ground News is a tool that gives you a well-rounded perspective as you look at this kind of news, and news about many other topics. The link is ground.news, and the QR code is here. And with that, let's dive back into our next scenario.
Yes, nuclear weapons. It seems like every one of these scenarios has a movie to go along with it. It turns out we create a lot of sci-fi about machines taking over.
But one big thing I'm learning in this story is that the reality of these risks is very different from how it's portrayed in the movies. It's often more boring, but more dangerous, like we saw with predictive policing or social scoring. But I would say this one, nuclear weapons, actually is kind of like the movies.
In the movie Terminator, an AI-powered missile defense system known as Skynet becomes self-aware and launches an all-out nuclear assault against humanity. The real fear here isn't as extreme, but it's a similar situation: we fear giving the machine too much autonomy to make high-stakes decisions about war, the ultimate of which is launching a nuclear weapon. What's tricky here is that an AI is often a lot better than a human at synthesizing lots of information to make decisions that are more likely to produce the desired results.
It can take into account so much more data than a human brain can hold at once. And as AI becomes better and better at reasoning, it will become better than us at making decisions that get the desired result, which in war is a very difficult thing. So picture a future where a lot of our military systems are run by an AI with sensors all over the place: drones and ships and cameras and satellites monitoring our enemies. Mark my words: AI is going to become a bigger and bigger part of our defense strategy.
And yet you can imagine a world where an AI system is in charge of making real-time decisions. One day it sees an adversary conducting military tests with big rockets and missiles. These are just tests, but the AI doesn't know that. The tests also correlate with some troop movement in the adversary's country and some unusual communication traffic.
The system sounds the alarm bells, the president and Congress are led into bunkers, and the AI system sends the command to an American submarine: launch a nuclear weapon now. Okay, real quick: this scenario is not likely. It is not likely at all.
Even if we gave a lot of autonomy to AI systems, it's very unlikely that the AI would be able to do all of this on its own. But there's still a chance that it could. And that somewhere within this black box, something would happen that would lead to a nuclear launch that would be really catastrophic.
So because of that, lawmakers have all moved very quickly on this one. And there's currently a bill floating around the Senate called the Block Nuclear Launch by Autonomous AI Act. The U.S. is hoping that other countries do this too, so we can all just agree, hey, the machine shouldn't be launching nukes. Like, can we all agree on that?
Okay, so nukes will be off the table soon, but there's a bunch of other very powerful weapon systems that are not nukes. In fact, AI is already being used in Ukraine and in Israel, where the military gets recommendations from its AI system on strike targets. That threatens to make war more frictionless, easier, and less transparent as to how decisions are made and who should be held responsible.
We're doing a whole video on how AI is affecting war, so stay tuned for that in the coming weeks. For now, let's roll the dice. Okay, we've got critical sectors.
Well, that sounds boring. That's because it is, until it's broken. Critical sectors are things that you and I take for granted, but that are important for our very survival.
Pipelines, water, electricity, transportation, food, communication systems. Most of us wouldn't be able to stay alive for very long if these systems went down. But there's a world where these systems rely on artificial intelligence, machine learning algorithms, to run more efficiently. This is the stuff that AI is so good at and humans just aren't.
Imagine a water treatment plant, which takes your sewage and turns it back into water that can keep you alive. Soon, most of the decisions at this treatment plant will be run by an AI system that makes millions and millions of small decisions every minute, recognizing patterns and problems, optimizing water levels and chemical usage in ways that humans never could. Or think about traffic.
An AI will run your traffic lights and your public transportation. It will adjust traffic flows in optimal ways, responding to real-time information about traffic patterns, accidents, and weather conditions. This will make your life better. It will reduce congestion and improve overall transportation efficiency.
So now apply this to so many of the systems you interact with every day and don't really think about. Humans won't be necessary at the water treatment plant or at the company that runs the electrical grid, except to come in and repair stuff or do maintenance after the AI tells them it's time. For the most part, the AI will fix its own problems and learn from its mistakes, getting better and better at an exponential rate. This sounds awesome.
Right? Yes, until we see what happens when lightning strikes and the power grid goes down. There's limited backup power and the AI has to start making decisions. This AI system is programmed to reduce inefficiencies and maximize profit for the company running it.
So it analyzes everything it knows and decides to keep the limited power flowing only into the rich neighborhoods, the ones that consume more electricity and pay their bills on time. So these concerns are regarding biases in this critical infrastructure management, where vulnerable populations, especially the elderly, the sick, or low-income areas, could suffer being cut off from basic essential services like electricity, because the system could exacerbate inequality, putting the needs of wealthier individuals over the general welfare of all citizens. So once again, we see that the AI could be discriminatory in a way that is not fair. But the other concern here is the black box.
That we don't really know how the AI is making these decisions. Say one night the water treatment plant is humming along, but two of the bacteria sensors get bumped by something in the water and break. They keep reporting numbers, but they're no longer accurately recording the bacteria levels. The AI system doesn't know this, and it starts balancing the water incorrectly. Soon, contaminated water is being piped into every house in town. People get sick and flood the hospital, and it's days before anyone realizes the water is the cause. Because, again, no one was there on site.
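One kind of safeguard lawmakers could demand, sketched here with invented thresholds (this is not any real plant's control code): never let the controller act on readings that fail basic plausibility checks, and alert a human instead of silently trusting a broken sensor.

```python
# Invented thresholds for a hypothetical bacteria sensor reading, in ppm.
def plausible(reading: float, previous: float,
              low: float = 0.0, high: float = 500.0, max_jump: float = 50.0) -> bool:
    """Reject readings outside the physical range or that jump impossibly fast."""
    return low <= reading <= high and abs(reading - previous) <= max_jump

def control_step(reading: float, previous: float) -> str:
    if not plausible(reading, previous):
        return "HOLD: implausible sensor reading, alerting the on-call operator"
    return f"adjust treatment for {reading} ppm"

print(control_step(120.0, 118.0))  # normal operation: act on the data
print(control_step(-3.0, 118.0))   # broken sensor: refuse to act, call a human
```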
Apply the same problem to traffic. You've got this great algorithmic software running your traffic lights. It uses GPS information from everyone driving to synthesize the best traffic pattern and reduce congestion.
But then one night it runs a software update that slightly changes the format required to read the GPS coordinates. As the GPS data starts flowing in the next morning during rush hour, all of this coordinate data is being interpreted completely wrong. Low-traffic areas are suddenly highly congested. Tons of cars are suddenly on small roads.
Commuters are stuck in traffic for hours. Ambulances and fire trucks hit unexpected traffic jams. The city descends into chaos.
And again, no one really knows why. The humans have become so out of touch with the system, because the system is so smart. It takes five days before the traffic technicians and engineers figure out what's going on. They fix the bug and things return to normal, but damage, and even death, has already occurred because of it.
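Here's a minimal sketch of that failure mode (the format change and the coordinate values are invented for illustration): when an upstream update swaps the coordinate order, the old parser doesn't crash; it just quietly produces wrong answers.

```python
def parse_gps_v1(record: str) -> tuple[float, float]:
    """Parser written for the old v1 format: 'latitude,longitude'."""
    lat, lon = record.split(",")
    return float(lat), float(lon)

old_record = "40.7128,-74.0060"   # v1: lat,lon (roughly New York)
new_record = "-74.0060,40.7128"   # after the invented update: lon,lat

print(parse_gps_v1(old_record))   # (40.7128, -74.006): correct
print(parse_gps_v1(new_record))   # (-74.006, 40.7128): silently wrong; no error,
# no crash, just every vehicle placed somewhere it isn't.
```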
Critical sectors and infrastructure are so important, not only for keeping our lives running smoothly but for keeping us alive. So we can't mess with them. We can't offload the responsibility to a machine learning algorithm that could lead us astray without us ever knowing why. So what do we do to prevent this?
What can lawmakers do to protect us? The answer is: open up the black box. So, if you are running critical infrastructure, show me that you have trained the model with representative data sets, show me that you're not biased, show me that you're not discriminating, and then I give you this good-quality certification and you can run, as we have done with every industry in the past.
So, like a lot of life-or-death products in our lives, like medicine, companies that use AI to run these critical systems will need to show the government that they are assessing these risks and making sure they don't materialize. They're going to have to be totally buttoned up on cybersecurity so that these systems don't get hacked. We'll still be able to leverage the immense benefits of artificial intelligence in running these systems, but we'll do it with responsibility, safety, and a little bit of caution. And with that, let's get to our last one.
Okay, we're gonna end on a high note here. The fact is, if we do this right, advances in AI could dramatically change our world for the better. In a world where AI runs our hospitals and our medical research, we could save lives and find new drugs for diseases that were once untreatable. These systems will allow us to predict and prepare for extreme weather events.
They'll allow us to optimize water use in agriculture, monitor soil health, and even predict pest outbreaks before they occur, dramatically reducing the need for harmful pesticides and fertilizers. This is all very possible, and it's coming. So I am excited and optimistic about the future of AI, especially when there are smart people like Carme working on legislation to keep guardrails around this technology, so that we can develop it responsibly and reap the benefits while mitigating the risks. Well, you made it to the end of the video.
Congratulations. I always knew you would. How about that, Carme, huh? Pretty cool lady. Make sure to support us on Patreon and we'll see you all in the next video.