Transcript for:
Lecture on AI, Implicit Bias, and Decision Making by Mahzarin Banaji

[Music] I'm Mahzarin Banaji. I teach at Harvard University in the Department of Psychology. What I study are the ways in which our minds work that are unknown to us. I'm an experimental psychologist, and the methods that I develop are methods to test how people are making a decision, but the decisions that I'm interested in are extremely simple: what do I associate good and bad with? [Music]

Ever since the 1970s, at least in my life, I remember having discussions with friends and colleagues about AI and the future, and we are in a moment now when AI is available to us but also has been introduced into just about every aspect of our lives. We know that the introduction of algorithms into certain decision-making can actually be very good. Places that do health research will show us that an algorithm is able to detect a tumor some percentage of the time better than the average radiologist. Very good: if an algorithm is catching more real tumors that a human was missing, that's what we want AI for.

But there are many other instances in which AI has been introduced without testing, and where we should have reason to worry, and worry quite deeply. There are algorithms that look at facial tics and movements during a job interview, and then those data are fed to the hirer, and somehow it's been interpreted to say that this kind of twitching is not as good as that kind of facial twitching. We have no evidence about the validity of what is being measured. So there's that. Then of course there's the problem that algorithms are not transparent. Even for the people who wrote those algorithms, the complexity of what is coming out is so great that there is no way, I think, even the builders would be able to tell you what it is actually doing under the hood.

Sometimes fields produce something that could be dangerous. I think that in this case, AI, while everybody's going to use it, there are people who actually build it, and they come from a
few different disciplines, but I would say largely these are people who were trained in computer science and in work on AI. What's encouraging to me is that that community is incredibly mindful about the problems of AI. There are large numbers of people studying bias in AI; there are institutes cropping up whose main job it is to focus on these problems. So I think we will be okay, but only if we don't let corporate interests drive how this work unfolds. If it unfolds with one interest, money, we're in deep trouble. [Music]

What I talked about in the symposium today is work that I've been doing with a computer scientist and a psychologist. We now have access, really for the first time, to very large corpora of language. We now have access to all of the Google Books that have been scanned in, books from the 1800s to today, billions of books. The great thing is that we don't have to go measuring individual people. That's expensive, and then we can only look at people who are alive today. We can go back into these databases, look at them historically, and see what people's attitudes were from the 1800s to today.

The result that I talked about is that, of course, what we believe about groups has been changing over the 200 or so years. But every word has not only a meaning; it also has an emotion attached to it. A word like "love" is not just about what love means; it is a positive thing. A word like "peace," or "joy," or "sunshine," or "friend": these are all positive words, even though they mean very different things. Similarly, there are negative words with very different meanings, but all negative. One of the things we show is that over time, even though our beliefs about who different groups are have changed in Google Books, something that underlies these words, their affect or their emotion, has remained pretty much stable over that 200-year period. So that's one of the results.

The other result that is really interesting is a result where you can go into the 850 billion
words that make up something called the Common Crawl. This is basically two snapshots of the internet, taken in 2014 and 2017, each over a two-week period: everything on the internet in that little time period. We can go into that data set and ask, what do people think about the categories male and female, for example? When we do that, we see how bias works. Recently, when we go into the language to look for the same thing, we're finding that our language is drenched with stereotypes; our language is drenched with attitudes of a sort we would not think we should even have. And so now we have yet another measure, yet another corroboration, of what we see on the web. If we believe that it can affect us, we're going to see not only that we dump what our minds contain into the internet, but that that material then comes back to us, through podcasts and whatever else, enters our minds, and reinforces certain things. So that's where the interest lies: in looking at language as a way to see if it reflects our society, so that we get a picture of what our society looks like. [Music]

When people ask me what's important about this work, I tend to say: these data are on what we call implicit bias, not explicit bias. What if we can show you that by knowing about implicit bias, you will be more likely to make better decisions? That's point number one. The second is, I think, an even more persuasive reason: what if I told you that by understanding implicit bias, you will be able to bring your own behavior more in line with your own values? My values may be different from somebody else's, but whatever my values are, I'm totally committed to making sure that I behave in line with them. And the last 50 or 60 years of my field of experimental psychology have consistently shown that that is not the case: people intend well, they speak well, their values read like the constitutions of countries, but when you look at the nitty-gritty behavior, it's
not in line. And the reason, and this is the part that I worked on, the reason is that it's not consciously accessible to us. People are not bad people. They're not acting out of malice; they don't hate any group. They're unaware, and they can't be aware until the technology is developed to reveal to them what parts of their mind are actually doing. And so my students and I have been at work for the last, I forget, four decades or so, doing research to develop methods that will allow us to reveal to ourselves what might have dropped into our heads that we don't even know exists. What is that thumbprint of the culture on my brain that I'm unaware of, but that a test can reveal to me? [Music]

The business world has dealt with deriving data largely by surveys, by asking people: how much do you like being here? How long will you stay here? Are you happy with your manager? I will say that I'm skeptical of that type of data, and I do believe that in the business world there is a place to start to introduce these more indirect or implicit measures of our group identity, of our stereotypes, of our attitudes.

Surveys are really good for some things. When we do these tests, we find that among reasonably close friends, if you want to know who voted for whom in an election, your friends and you can chat about that, and you might say to them, I voted for X and not Y. There, I think, survey data are perfect, because there's not much of a chance of not remembering who you voted for, and amongst friends you trust each other enough that you can tell people how you voted. But as you start to veer away, to ask questions that are more complicated (who did more of this work, or how do you rate this person in the interview that you're running?), that's where enough subjectivity is involved that those survey measures pick up whatever we want to say. I can say, I really liked her, but in that last little bit, I don't think she
forcefully made that point, or something like that. There's so much I can do to make the survey data come out the way I want it to. I will not say surveys are not valuable; they're very valuable, and in many ways they do the job adequately. But if we want to increase our armament of measures and pull in different kinds of measures, we will just get a better view of what that mental state is. [Music]

And then of course there's the AI of things like deepfakes, and there I think most people are agreed that they're going to become a part of our lives. In the next election we'll certainly see more deepfake advertisements. But I'm encouraged by the state of New Mexico, which just passed a bill that says that if you use deepfakes in a political ad, you need to say you did, and show it. And that, to me, is probably the way we should be going: even though not everybody will know it's a deepfake, at least you had to print right under it, "This involved a deepfake; Imran Khan's face is not his real face; his voice is not his real voice," things like that.

So this is the point we're at, and I think we're really at a very important fork, where we will have to decide how much legislation, how much regulation, we believe is necessary. I'm not one for imposing lots of regulation and legislation, but I do believe that at the most basic levels we ought to be able to know how accurate the system is in what it's giving us in terms of data, how fair it is, how transparent it is, and how accountable it is.
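The corpus measurement described in this talk, checking whether a target word sits closer in embedding space to one set of attribute words (say, "pleasant") than to another ("unpleasant"), can be sketched roughly as follows. This is a minimal sketch, not the authors' actual pipeline: the vectors below are made-up toy values, and the `association` helper is a hypothetical name; a real analysis would load embeddings trained on a corpus such as Common Crawl or Google Books.

```python
import math

# Toy 3-dimensional word vectors standing in for embeddings trained
# on a large corpus. All values here are invented for illustration.
vectors = {
    "flower":     [0.9, 0.1, 0.0],
    "insect":     [0.1, 0.9, 0.0],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.1],
}

def cosine(u, v):
    """Cosine similarity between two vectors: closer to 1 means more similar."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(word, attr_a, attr_b):
    """Positive means `word` sits closer to `attr_a` than to `attr_b`."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

print(association("flower", "pleasant", "unpleasant"))  # positive: flower ~ pleasant
print(association("insect", "pleasant", "unpleasant"))  # negative: insect ~ unpleasant
```

The same difference-of-similarities idea extends to the other comparisons mentioned in the talk, such as associating occupation words with "male" versus "female" attribute words, or tracking whether a group word's affect stays stable across decade-by-decade embeddings.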