So this is gonna sound weird, but a few years ago, the NYPD used facial recognition to catch a shoplifter, and they didn't even have a clear picture of his face. The clerk said the guy kind of looked like the actor Woody Harrelson. So they just pulled up a photo of Harrelson and put it in the system. It worked. They caught the guy.
It turns out he really did look like Woody Harrelson. But facial recognition systems were never built to be used this way. And incidents like this raise the question of
whether police should have this technology at all. It's not the only time something like this has happened. This past year, Georgetown University conducted a study of some of the use cases, and they're pretty wild. Some police departments have actually pasted in different facial features in an effort to get the system to produce a match. If the left eye is blocked in your picture, just paste in a new left eye.
If the suspect's mouth is open, paste in a new mouth that's closed. These are delicate algorithms, but in most cases, there are no strict rules for how police use them, and whatever the machine produces can be used as grounds for a police stop. Okay, but before we get into that, let's talk about how facial recognition really works. At its core, facial recognition is about tracking key facial landmarks from photo to photo: the distance between your pupils, the angle of your nose, the shape of your cheekbones, basically all the details of your face that make it distinctive.
That works best from a straight-on photo with at least 80 pixels between the pupils. Think like a passport photo or a driver's license. But once you've got that basic pattern, sophisticated programs can recognize the same features at an angle. They can even work if part of your face is blocked, as long as both pupils are visible and there are enough other features to be sure.
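To make the landmark idea concrete, here's a minimal sketch using the open-source face_recognition library, which wraps dlib. It's an illustration under assumptions, not the proprietary software police actually buy, and the file name is a placeholder.

```python
# A rough sketch of the landmark step, using the open-source
# face_recognition library (a wrapper around dlib) -- not a police system.
import face_recognition

image = face_recognition.load_image_file("mugshot.jpg")  # placeholder file

# face_landmarks() returns one dict per detected face, mapping feature
# names ("left_eye", "nose_bridge", ...) to lists of (x, y) pixel points.
for face in face_recognition.face_landmarks(image):

    def center(points):
        # Approximate a pupil as the centroid of the eye's contour points.
        xs, ys = zip(*points)
        return sum(xs) / len(xs), sum(ys) / len(ys)

    lx, ly = center(face["left_eye"])
    rx, ry = center(face["right_eye"])

    # Interpupillary distance in pixels -- the "at least 80 pixels
    # between the pupils" rule of thumb from above.
    ipd = ((lx - rx) ** 2 + (ly - ry) ** 2) ** 0.5
    print(f"pupil distance: {ipd:.0f}px", "usable" if ipd >= 80 else "too small")
```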
Vendors like NEC, Morpho, and Cognitec pioneered these systems, selling their software to local and federal police forces. But in the past few years, Amazon and Google have been building facial recognition into their computing clouds too, which makes it a lot easier to get. With a couple hundred bucks and some coding skills, almost anyone can create a facial recognition system.
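For a sense of how low that barrier is, this is roughly what a one-off comparison against Amazon's Rekognition service looks like through the boto3 SDK. It assumes you have AWS credentials configured, the file names are placeholders, and it's a sketch of the public cloud API, not any police department's actual pipeline.

```python
# Comparing two faces with Amazon Rekognition via boto3. Assumes AWS
# credentials are already configured; file names are placeholders.
import boto3

client = boto3.client("rekognition")

with open("security_still.jpg", "rb") as src, open("candidate.jpg", "rb") as tgt:
    response = client.compare_faces(
        SourceImage={"Bytes": src.read()},
        TargetImage={"Bytes": tgt.read()},
        SimilarityThreshold=80,  # only return candidates scoring 80+ out of 100
    )

for match in response["FaceMatches"]:
    print(f"possible match, similarity {match['Similarity']:.1f}/100")
```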
These programs work off accuracy thresholds. Every comparison comes back with a score: the tighter the match, the higher the number. But there's no firm rule about how high the number needs to be before it counts as a hit. Which means that at the same time police are playing with the photos they upload to the system, they're also playing with the standard for what counts as a match.
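Here's that idea in miniature, again sketched with the open-source library. One wrinkle: face_recognition reports a distance, where lower means a tighter match, while commercial systems usually report a similarity score where higher does. Either way, the part that matters is the operator-adjustable cutoff. The images and the loosened cutoff here are hypothetical.

```python
# How a "match" depends entirely on where you set the cutoff. This library
# reports a distance (lower = tighter match; 0.6 is its conventional cutoff),
# the inverse of the similarity scores commercial systems report.
import face_recognition

probe = face_recognition.load_image_file("blurry_cctv_still.jpg")    # placeholder
lookalike = face_recognition.load_image_file("harrelson_photo.jpg")  # placeholder

probe_encoding = face_recognition.face_encodings(probe)[0]
lookalike_encoding = face_recognition.face_encodings(lookalike)[0]

distance = face_recognition.face_distance([lookalike_encoding], probe_encoding)[0]

print("match at strict cutoff:", distance <= 0.6)   # likely False for a lookalike
# Nothing in the software stops an operator from loosening the cutoff
# until the same pair of photos registers as a hit.
print("match at loose cutoff:", distance <= 0.75)
```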
So if you look like Woody Harrelson even a little bit, an officer could adjust the accuracy threshold until it registers as a hit. And then if you ask why you're being stopped, they can just say, the machine said it was you. If you talk to the people making these tools, they'll tell you you're really not supposed to do any of this.
It's like steering a car with your feet. You can make it work, but it's bizarre and dangerous. And with police, the end result of all of that is stopping someone, maybe for no reason.
That's even worse because the algorithms are less accurate for women and people of color. It's not totally clear why that's true. A lot of people think it's simply because the algorithms are mostly trained on photos of white men.
But government testing shows the gap consistently across the industry. You can see on this chart that the red lines are the error rate for black people and the green lines are the error rate for white people. The red lines are almost always higher, which means the person getting stopped for no reason is more likely to be from a community at risk.
Now, the NYPD says that no one has been arrested on the basis of facial recognition alone. And that's true, but facial recognition has played some role in more than 2,800 arrests in the five and a half years the program has been running. Even when there's no arrest at all, a false match can still lead to a police stop, which has dangers of its own. There's supposed to be a clear legal bar for making those stops, but facial recognition is short-circuiting that. Now, defenders of facial recognition will say that despite the problems, it's still an effective tool for police to protect their communities.
Detroit's Project Green Light is a network of connected surveillance cameras recently upgraded with facial recognition, and it's credited with a 23% drop in crime in the city. But it's still controversial. Some community members say there's no transparent oversight, and the flood of new tips is overwhelming the police force. The fight's gotten so heated that one of the city's police commissioners was actually arrested at a hearing while trying to speak out against the system.
Other cities have passed local laws banning the use of facial recognition by police for just that reason. San Francisco, home to some of the largest tech firms in the world, banned it last year. San Francisco supervisor Aaron Peskin was particularly critical, calling it Big Brother technology.
But this isn't just a tech problem. Fundamentally, San Francisco is saying the government just can't be trusted with this technology.
Not because it's so bad, but because we don't have enough oversight over how police departments will actually use it. That's a problem that goes much deeper than just recognizing faces. And as we find more powerful ways to peer into the average person's life, it's a problem that's not going away. Thanks for watching. Like and subscribe if you want some more.
And if you're looking for another video... My colleague Casey Newton has an incredible report about Facebook moderators and just what a difficult and disturbing job it is. So check that out.