Transcript for:
Rethinking Discrimination Law in AI Era

Yes, thank you so much, thank you both, and thank you Philipp for inviting me to be a fellow and to be in Berlin. I'm looking forward to the opportunity to talk to you today about one of my recently published papers, called "The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law". As Philipp already said, I usually work with a bunch of people, and I want to take the opportunity to talk about them very quickly. As I said, I'm a lawyer by background, but I have an amazing team that I work with at Oxford: my research group, the Governance of Emerging Technologies. Those are the folks that I'm working with; one of them is in the picture here but isn't in the room yet, and he might be coming later. We co-founded that group a couple of years back together with Chris Russell, and in the years since we have been very successful in getting very interesting people on board. We've just finished another recruitment round, so the group will be getting bigger very soon. As you can see, they have very different backgrounds: we have people from law, from ethics, from computer science, from psychology, from political science. So we're really interested in looking at governance from a holistic perspective, from different types of areas. If you're interested in the work that we're doing at Oxford, I encourage you to have a look at the projects that we're currently leading. Basically, what we try to do is figure out what kinds of risks emerging technologies pose, whether the law has any answers to them yet, and if the law doesn't have any answers, what it is that the law should be doing, and this is where ethics comes in. Lastly, we think about whether there could be a tech solution to close those gaps. That's one of the methodologies that we very often use. We are
very broad in scope, but explainability, data protection, and bias and discrimination have been topics at the forefront of our research agenda, and the paper that I'm going to be talking about today does exactly that. So I just wanted to give you a quick overview of what we're going to discuss. As I said, the paper is quite lengthy, so I cannot talk about everything in it, but I will hopefully give you a nice overview. So yes, just a quick overview of the talk today. I think everybody in this room will be aware that AI is here, and here to stay. AI is being used to make very important decisions about people: AI is used to decide whether you can get a loan, whether you get insurance, whether you get admitted to university, whether you have to go to prison. Life-changing decisions. And I think everybody in this room will also be very aware that AI causes harm to a lot of people. Very often those are communities and people that have been stigmatized in the past and are seen as some type of vulnerable or protected group, so very often people get discriminated against based on gender, ethnicity, ability, or sexual orientation. That is not something that is really new, unfortunately. The thing that is quite new, and that is the focus of this talk, is that AI might group you in a different way: not in the traditional way in which we would think about gender or ethnicity, but maybe it starts grouping you according to how you move your eyes, how you move your mouse, how fast your heart is beating, and uses this grouping to make the same types of decisions about you. So maybe this decides whether you should get a loan or whether you have to go to prison, and things like that. The first part of the talk will give an introduction to how new types of groups are being created by AI and then used as a foundation for decision making, and how that is very different from how humans make decisions about people. Then we're going to talk about whether those new types
of groups actually find protection under non-discrimination law, the law that is there to prevent certain harms from happening, for example discrimination based on gender. In order to answer that question we actually need to think very philosophically about the question of what makes a group worthy of protection: when and why did society believe that certain groups should be protected over others? I will show you that I am of the opinion that those new groups do not currently find any protection under the law, and I will justify that claim as we go along, but I will end by telling you how I think the law needs to change to extend protection to those new groups. That's the overview I'm going to go through. But let us start with the basic question of AI and fairness and why we really need to think about that again. We all need to care about it because the technology, in the way that it's built, demands it. When you deploy AI to make decisions about people, for example in hiring or deciding whether somebody should be admitted to university, the way that you do that is that you use historical data. You train an algorithm on that data and you tell the algorithm: look at what all those people had in common, come up with a profile, and tell me what the ideal candidate looks like. That means if I'm hiring, say, a chair professor at Oxford, and I want to use an algorithm to do so, what I will be doing is feeding the algorithm historical data on people who have held chair professorships at Oxford in the past. So I'm going to feed in the information that I have from the last 100 years, for example: the reference letters, the personal statements, the grades that they've had, their CVs. And I tell the algorithm: of all those resumes, what do they have in common, what is the recurring pattern, try to find me the ideal candidate, because we had all these amazing people, right? So the algorithm comes up with that profile, and then a new
person will apply for that job, and the algorithm will compare: does this new applicant look like a person who has held a professorship at Oxford in the past? And as you will know, of the professorships that have been given out there over the past centuries, it is not a very diverse group of people who have gotten them. So a person like me would probably not get the job if an algorithm was making that decision. You can see that every time we make decisions, the decision patterns that we leave behind will be picked up, and the unfairness is transported into the future. And it's not just in hiring; for every type of decision you have to ask yourself the question: do you think that we have found a good way of making good decisions? And usually the answer is no. So this is how it works from the tech perspective, and this is why we always have to care about bias. And it's not just an academic problem; it's unfortunately a very real reality. Just three examples to show that. This one is from my home country, a very troubling example from 2019, where the Austrian Employment Agency rolled out an algorithm to help assign or allocate resources and benefits to people who had lost their jobs. The algorithm that was deployed there was heavily discriminatory towards women, people living with disabilities, and older people: again, the bias of the past was fed in and spat out again, so you could see how the unfairness found its way into another system. In my adopted home, the UK, something similar happened in 2020, during the pandemic. What happened is that the final exams that students need to take before they can go to university, the A-levels, couldn't be held because of the Covid pandemic. What the government did instead is they created an algorithm that calculated the test score that they think a student would have had, had they been able to sit the exam. What happened is that it heavily favored
students from private schools and heavily discriminated against students of color and students from lower socioeconomic backgrounds. Why did that happen? Because the UK class and educational system is the underlying basis of that algorithm, and it therefore carries that bias forward. The last example that I have here is algorithms that discriminate against people based on their sexual orientation and gender identity. We know that content moderation tools that are being used to flag unsuitable content are heavily biased against the LGBTQ community. Whenever you have a content moderation algorithm that is supposed to detect whether something is toxic, it will assume that something said by a gay person is more toxic than the same thing said by a straight person. So again, our human assumptions of what is toxic are feeding into the algorithm, which will then discriminate against those groups. But of course, this type of discrimination is not new, unfortunately: we have always had sexism, racism, ableism, heterosexism, and we have laws for that. But what if I told you that algorithms group you in a very different way, in a way that the law did not anticipate at all? What if I told you that an algorithm might be able to detect or infer that you're a dog owner, and use that information to tailor certain content to you, or offer or deny you certain goods and services, because it assumes that you're a dog owner? What if an algorithm classifies you as a "sad teenager", which is something that Twitter can do, and shows you certain content based on that? An algorithm might also be able to infer that you're a gambler, or a single parent, or a person living in poverty. All of those things are not legally protected in that sense. An algorithm might also be able to infer that you're a video gamer, and you would rightfully say: who cares, what is the worst thing that could happen if AI classifies you as a video gamer? Well, if you look, for example, at what's happening in China,
that is actually something that could be very worrisome for you. As you might be aware, China has been rolling out the social credit system, which means that there are big systems that collect data from public and private sources to calculate whether or not you're a good citizen. Being a good citizen means good things for you; it might mean that you get better offers at some supermarkets. But if you're a bad citizen, it might mean that you're no longer allowed to leave the country, or no longer allowed to go to certain schools. So if in China an algorithm believes that you're a video gamer, that's something that makes your social score drop, so you actually have a worse life than you had before. In that sense, video gaming could have a negative, discriminatory effect. But we don't have to look to China to see how new groups form the basis for decision making. We know, for example, that if you're shopping online as an Apple user, you may have to pay higher prices than a PC user. If you are applying for insurance in the Netherlands, you might not be aware that your address has an impact on what your premium will be: if your address has a number and a letter in it, you'll pay a higher fee; if it only has a number, you'll pay a lower fee. Also, my personal advice for you: if you're applying for a job online, I would recommend that you use a browser such as Chrome or Firefox; do not use a browser like Safari or Internet Explorer. If you do, you're more likely to get rejected from the job application. And that is still somewhat understandable: okay, something to do with your browser has an impact. But it gets even more obscure than that, to the point where I don't even know how to describe it in language anymore. For example, if you are applying for an online loan, the way you move your mouse, how fast you scroll, and whether you use capital letters or not can have an
impact on whether you get the loan or not. Or facial recognition software that is being widely used, for example, in recruiting interviews, where software is measuring how fast your retina is moving, how much sweat you have on your forehead, how often you blink, the pitch of your voice; all of that is being used to decide whether you should get a job. The same is happening with credit: your Facebook friends, for example, are being used to decide whether you should get a loan. We also see a rollout in the educational system. Very popular is eye-tracking software, where the software looks at how a child's eyes are moving to make a decision on where to place them: should they be in advanced classes or not, should they be put forward for special education or ADHD testing. In general, students and pupils are constantly being surveilled: how they move their eyes, how they move their head, how they move their mouse, how often they look around in the room, whether they look to the left more often than to the right. All of that gives a signal on whether there is "suspicious behavior", which feeds into the AI. The last one that I have here is emotion detection, where your facial cues, how your muscles are moving, how your eyes are moving, allegedly give an indication of what your emotional state is. That is being widely used: it is used for decision making in hiring, it is being used at borders for immigration purposes, it's being used for diagnosing patients, and it's being used to monitor how well work meetings are going in the workplace. So what is happening is that all of this is measuring the oddest things to make very important decisions about you. We end up in a situation where two new types of groups are emerging that don't really fit with our common understanding of decision making. We have the non-protected groups, such as the dog owners or the Internet Explorer users or the gamers, where we at least have a social
understanding of what that means, and then we have the completely incomprehensible groups, where I don't even know what the group would be: the way you move your mouse, the speed at which your retina is moving, the pixels in a picture. Incomprehensible; I don't even know what that means. Non-discrimination law lists the groups that you should not take into consideration when making important decisions; those are based on things like birth, age, disability, race, sex, gender, and social origin. That makes a lot of sense. But if all of a sudden new types of groups are being used for decision making, should we at least have a discussion on whether we should grant them some type of protection? This was the starting point of my journey. Why is it a theory paper? It's a theory paper because theory was really necessary at this point. I started by looking at the case law, because I wanted to ask the question: have people ever tried to go to court to fight for new groups to be protected under non-discrimination law? Surely that must have happened. Yes, it has happened: there are at least three cases where somebody went to court and said, I think my particular grouping should also have protection under the law. All three have failed. The first one had to do with somebody who got fired because they were obese, and they wanted to make the claim that obesity should be a protected attribute under the law. The court said no, that's not possible. The second one had to do with somebody who tried to argue that chronic sickness should be a protected attribute under the law. They were making the argument that it should either be protected in its own right, as a new ground of protection, or that it should be protected under the umbrella of disability, because disability is a protected attribute. Neither was allowed according to the court. The court said no, it is not a disability, so that protection is not available; you
cannot extend it, and new rights are not possible to create. The last one, in my opinion, is the most damning one, for a bunch of reasons. This is the case of Parris, where the survivor of a same-sex couple wanted to get a survivor's pension. Mr Parris was trying to get a survivor's pension, which is something that the law did allow if the same-sex couple had entered into a civil partnership before the age of 60. The problem is that at the time when Mr Parris and his same-sex partner were 60 years old, it was not yet legal for them to enter such a partnership. So he came and said: that's discriminatory; I'm losing out based on sexual orientation as well as age, because they had been living together for more than 10 years at that point. He lost. He lost because the court said intersectional discrimination is not a thing: you cannot try to say that something is a problem based on two protected attributes at the same time. You have to pick your protected attribute and say that's the reason for the discrimination; you cannot say that it's age as well as sexual orientation. And I don't have to tell anybody who knows how intersectional discrimination actually works how damning that is, and how problematic it is to assume that discrimination is only one-dimensional. Unfortunately, the courts have been really restrictive in allowing new groups to be added, or even in allowing the interpretation of an existing protected group to be expanded. So, in my opinion, it was time for theory: if one end of the spectrum is where the law makes its decisions in court, then at the other end, at some point, there is theory. I needed to think about what makes a group worthy of protection. At what point, and why, did we decide that ethnicity or gender are worthy of protection while other groups are not? And if I can make the argument that the usage of groupings such as how you move your retina, how
fast your heart is beating, or whether you move your mouse in a certain way causes similar harm, then you can make the argument that those groupings should also be protected. But it comes down to: what makes a group worthy of protection? I was really naive when I started that project, because I thought I was just going to open up one book and find the definition of what makes a group worthy of protection; this would be an easy paper. No: I think you could fill 700 libraries with interesting theories and interesting contributions from amazing people who have been discussing, for God knows how long, what makes a group worthy of protection. There is no such thing, no checklist that tells you: this is why we think that certain people should have protection. So one of my contributions is actually to come up with a taxonomy myself, based on what I found. You can agree or disagree with it, but this is the framework that I started to work with, where I try to make sense of when a group should be protected under the law under current doctrine, the current doctrine of non-discrimination law. These are the four things that always come up when we think about when a group should be protected under the law. Long story short, I don't think those new groups fit anywhere in there at all, but I'm going to walk you through them and tell you why they don't, and then I'm going to tell you why I think that's a problem. The first one has to do with immutability and choice. The law usually thinks that you should not be judged on something that you have no control over. So if you have an immutable characteristic, something that you're not able to change, then it should not be the basis for decision making, for example when you apply for a new job. Traditional immutable characteristics are things like disability or age: you cannot change them, and therefore they should not form the
basis of decision making. At the same time, there are certain choices that the law wants to protect, for example religion: I can choose and change my religion as I please, but it would be really unreasonable if, in a job interview, they asked me to change my religion to get the job. So in a way, the law protects immutable characteristics and fundamental choices. The issue is that algorithmic groups are neither of those; they kind of fall in the middle. Think of the usage of your browser, how you move your mouse, how fast you scroll, whether you play video games, or whether you have a dog: that's not immutable, and it's probably also not a fundamental choice in the same way that religion would be. So they fall in the middle, and according to that criterion it would not be a problem to use them, because they are not deemed worthy of protection. The second criterion that very often comes up has to do with relevance and arbitrariness. It makes a lot of sense that we would say: don't use gender or race to make important decisions, because they have nothing to do with the task at hand. Saying that women or people of color are less good at certain tasks is not legal, because gender or ethnicity have nothing to do with the competence of a person; they are completely irrelevant. Therefore you cannot use sex to decide that somebody is not going to get the job. This makes sense in a human setting, where we understand the prejudices of people. But with AI it's a little bit different, because AI makes everything relevant. In fact, that's the whole point of AI: trying to find connections and correlations between data points where nobody ever thought to look. So algorithms might find a correlation between a high heart rate and being able to repay a loan, or between liking the color green and being a good worker. Everything can be relevant. So if we allow that relevance criterion to sanction the use of algorithmic groups, it's a free-for-all, and you can use basically everything. So again, no hindrance to using any of the
algorithmic groups. The third one, again, will come as no surprise: it has to do with historical oppression, stigma, and structural disadvantage. Again, think of traditionally protected groups: ethnicity, gender, sexual orientation, social status. Why are they protected? Because there is a long-lasting, still ongoing oppression that they have to endure; very often they are discriminated against in a lot of aspects of their life, ranging from education to employment to housing. That makes a lot of sense. But that criterion doesn't really work here either: the video gamers, the Internet Explorer users, the people who move their retina in a certain way, those are not people who face discrimination in the same way that, for example, Black people do. In fact, it's also hard to say that stigma actually exists, because very often you don't even know that you're part of the group, or society doesn't know that you're part of the group. So how can stigmatization even occur, if you will, if nobody knows that you're part of a certain group? You could say that new types of stigma and oppression could actually arise, but it's really hard to point to that, especially with algorithmic groups. Why? Because those groups are ephemeral: they don't exist forever, they are not stable, they are constantly changing. Your behavior is also constantly changing, so you're moving from one group to the other. One day you didn't get the job because of your dog, the next day because your retina was moving in an odd way, and the day after because you have a gray car. So it's really hard to actually say: look, there is a new type of oppression emerging, because it's constantly changing. So again, the structural disadvantage criterion doesn't really work either. The last one has to do with social saliency. The law doesn't want to protect random groups; the law only wants to protect groups where the grouping has some meaning for the people involved and for society at large. Typical attributes
that signify saliency are things like having a common culture, a shared identity, history, traditions; all of that gives an indication that there is social saliency in a grouping. With algorithmic groups, that's not really the case: how you choose your browser, how fast you type, whether you use capital letters or not, that's not a shared community. There is not enough shared identity there. In fact, I would argue it's really hard to even have a shared identity, because there is not even social or human language to describe what that group looks like: how a pixel is changing, how much time you spend on an advertisement, whether you are using capital letters or not. What is that group? I don't know how to describe it, so how would that group even have a shared identity or a sense of community? So according to all of that, those algorithmic groups have no protection whatsoever, and you can just use them as you please, because they are not worthy of protection. I wasn't happy with that, so I thought: well, maybe I have to add another layer of abstraction when thinking about theory. First we looked at what makes a group worthy of protection; then I started thinking, okay, but why those groups? Why do we protect those groups? Why is discrimination wrong to begin with? Again, I thought that must be a very easy thing to answer, but it isn't; it took me months and months to figure out what part of discrimination is actually wrong. But the idea was to say: if I can make the point that the moral wrongness of using, say, heart rate for decision making is comparable to the wrongness of using ethnicity, then I can make the argument that we should open up the law's protection. Long story short, I do not think algorithmic groups invoke the same moral wrongness that discrimination does. Why? If you look at what people believe makes discrimination wrong, they will say things
like: people think of other people as lesser ones; they want to demean a person, treat them with disrespect, stigmatize them, subordinate them; they have an image of a person as somebody who is not capable, who is beneath them. When you think of those new groups, that's not necessarily the case. Think of a coder looking at a data set: they don't really have that feeling. They don't even know who is in the data set; they have no attachment to what's going on there; they don't know what's in there; they don't know what those people do in life. How could they sympathize, or fail to sympathize, with people who are ranked based on their spending habits, how much time they spend on a web page, or how they move their mouse? They are optimizing their process. There is no idea of "this person spends 0.3 seconds longer on a web page, therefore I think less of them"; that doesn't exist in the same way. And so using those groups to make decisions isn't problematic in the same way, because nobody is considering those people to be of lesser moral worth. So, okay, fine, that didn't work. I gave it a last shot and tried to think about yet another layer of abstraction on top of that, to figure out whether we should open up that protection. The last attempt was to think about the aim of non-discrimination law: what is the aim of the law, what is it that the law wants to achieve, what would society look like if the law got its way? If the usage of groups like how you move your retina and how fast your heart is beating is standing in the way of that aim, then you can make the argument that those groups should be protected as well. Unfortunately, I don't think that they stand in the way in the same way that other protected groups do. Why? Because the aim of non-discrimination law, in Europe I have to say, is about social equity; it's about substantive equality; it's about trying to eradicate the lingering effects of past discrimination; it's about ending oppression, ending domination,
and combating subordination and stigmatization. What that means is that the law rightfully assumes that there are people who are able to oppress other people, and the law wants to close that gap; that's the aim of the law. With algorithmic groups, that distinction doesn't really work, because there is not one algorithmic group that is always losing out. It isn't the dog owners who are always losing out, or the Internet Explorer users, or the people with the moving retinas; no, those groups are ephemeral. You change, and they change, and you move from one group to the other; they're completely random, they are nonsensical. And because of the fluidity of the groups, the gap that the law would need to step in and close doesn't exist: it's the randomness that is the harm, not the picking on one particular group. So I was at my wits' end, and I decided: well, I guess non-discrimination law is not there to help in that way. But I really thought that's a problem, because I still think there is a problem. Coming back to those two types of groups that I identified as being used to make important decisions: the non-protected groups, the video gamers and Internet Explorer users, and the completely incomprehensible groups, again, heartbeat, facial movement, pixels, the signals that you send off whenever you open your laptop. They have an impact on your life. They make decisions on whether you're going to get a job; they have an impact on whether or not you're going to get a loan; you face the arbitrariness of whether you're getting a new job; students are going to be ranked based on how they move their eyes; and your emotions are going to open and close doors for you, without you having any control over it. So there is a problem, in my opinion; it's just a problem that the law didn't anticipate. And so I thought maybe it's time to think about a new type of theory; maybe we have to think about the harm differently. And this is where my personal contribution comes in: my
attempt to think about a new theory of harm, the harm of artificial immutability, or artificially created immutability. That's the new type of harm that I want to investigate. The way I base this theory is to say: I think that at the very basis of non-discrimination law, the law wants you to be successful with your life plans. The law wants you to have certain liberties and freedoms and access to goods and services; the law wants you to receive an education, to train for a profession, to have housing, to have food, to have all of that. That's the basic thing that the law wants to do, and it steps in if something is standing in the way. Well, something is standing in the way; it's just a different thing than the law anticipated. The law anticipated that somebody who has bigoted views about gender, gender identity, ethnicity, or sexual orientation will withhold resources like education or jobs or housing because they think of you as of lesser moral worth; they think that you don't deserve those resources, and they act as a gatekeeper. Algorithms don't necessarily have that image of you, and yet they are holding you back from having access to those same resources. They are not holding you back based on your gender, but they are holding you back by assigning you to a random group that you don't understand, like how your eyes are moving; an ephemeral group that is constantly changing; an incomprehensible group where you don't even have any understanding of who is part of the group and how to get out of it. It is something that you have no understanding of, no way to contest, and no way to control. And if you have no way to control it, then it is immutable to you, because you cannot act on it and change it; it is de facto immutable. So the harm is that somebody is using an arbitrary criterion that you have no control over to withhold resources; it's just that the perpetrator and the process of bringing about that harm are different. And so, in my opinion, this opens the door. In the last part of the paper, I
wanted to tease out a little bit more what I mean by artificially created immutability and that type of harm. Again, this is my first thinking on this. What do I mean by artificial immutability? I came up with five different instances where I think artificially created immutability is produced. I use the word immutability in relation to things you cannot change, like your DNA for example; here it is just artificially created. The first instance has to do with opacity. If I don't know what kind of criteria you are using to admit people to law school, I cannot control them, and what is uncontrollable to me is immutable to me. So even if you use dog ownership to decide whether somebody should get into Oxford, if I don't know about it, then I cannot buy a dog; I have no way to actually participate in that decision-making process. The second is vagueness. I can give you some indication of what I base my decision on. For example, and this is happening, I could tell you that your friends on Facebook have an impact on whether or not you get a loan: depending on whether you have good friends or bad friends, credit decisions will be based on that. I can tell you that, but does that mean you know which are the good friends and the bad friends, and what kind of friends you should get rid of? You don't, because the criterion is too vague, and if it is too vague, you have no control: it is immutable to you. The third is instability, which I already talked about in some detail. None of those groups are stable in their constitution, and the models are constantly changing. You yourself contribute to the change: your cell phone usage, how often you visit an app, fluctuates, and every time it changes you are grouped somewhere else, so you are constantly moving from one group to another. The grouping itself also updates, because at some point app usage is no longer interesting and dog ownership
is more important. So how would you prepare if you wanted to go to university? How would you brush up your CV if the criteria are constantly changing in that way? The fourth is involuntariness, and that is a big one, especially when it comes to facial recognition software. If I am measuring the sweat on your forehead, if I am looking at how your retina moves, if I am measuring your heartbeat, that is something you have no control over, so it is immutable to you. And the fifth is the lack of a social concept: the clicking patterns, the electronic signals that capture how fast you type your name; I have no word for that thing, we don't have a word for it. If I cannot describe to you what it is that I am measuring, then you have no way to change it. So those are some of the artificially created immutability attributes that I think are problematic, and I think they actually contribute to disrupting or eliminating good decision criteria. Good decision criteria, again, are at the heart of non-discrimination law. I don't think many people know this, but one of the reasons we push so much for transparency when it comes to loans, employment decisions or university admissions is that the civil rights movement was pushing for it. The civil rights movement was really active in making clear what kind of criteria are being used, to rein in nepotism, to rein in sexism, to rein in racism, to create an equal playing field for everybody. So transparency is really important in that regard; it is one of the good decision criteria I have listed here. Unfortunately, yet again, the immutability aspect of those new groups is disrupting and destroying them. Transparency is really important, but AI is not used to make anything transparent; it is the opposite. You give a bunch of data to an algorithm and ask it who will be the best person to hire. It is not coming up with good decision criteria; it is just saying this person or that person. In fact,
nobody cares about the rules of decision making; nobody cares about making the criteria transparent and explaining what kind of criteria are being used. So the idea that transparency can be upheld is really problematic, because that is not what the technology is being used for. Instability, I already mentioned: those systems are constantly changing, so how would you be able to prepare? If your kids are preparing their university applications, you will tell them: have good grades, be nice to your teachers, get some reference letters, don't have any typos in your application. Those are evergreens; everybody knows them. But if the criteria are constantly changing, and next year it is because you have a poodle, the year after because your retina moves in a certain way, then it is your heartbeat, then your diet, then how you type and whether you use capital letters, what advice are you going to give your children for putting together a good university application? If the criteria are not stable, it is not possible. It is similar with empirical coherence. Fairness in decision making also has to do with whether we believe the criterion being used has something to do with the decision at hand. Take grades: we say you should have good grades because that is an indication you are good at something; good grades lead to a good job and a good spot at university. That is socially accepted; there is empirical coherence in saying that grades are a good indicator of how well somebody will do. Nobody cares about that anymore; Citron called this the death of theory for that reason. Nobody cares anymore what the social story between the data points is. What that means is that you see an interesting correlation, say, that people who like the color green are also going to be good at the job, and that is it: the algorithm has said it, and that is enough. Nobody asks anymore whether liking the color green is actually a good indicator of being
creative or being good at the job. None of that is there, and that is what makes it problematic from a normative standpoint. So again, I think the usage of these groups is disrupting our standards for good decision making. So what is the solution? Whenever I talk about this, people say: well, just more transparency, make everything transparent, open up the black box and it is all going to be okay. And to that I say: yes, transparency is meaningful, but it is only a partial remedy. I can tell you that the speed at which your retina moves has an impact on whether you get a loan, but that will not give you the ability to make your retina move more slowly. So it is not just about transparency; it is about power, it is about control, it is about autonomy. Do you have an active role in that process, any ability to change the criteria being used to make decisions? Then people come back and say: are all types of immutable characteristics problematic? No, of course not, in the same way that not all usage of traditional immutable characteristics is problematic. Take age. I cannot change my age; unfortunately, that would be great. But even though it is an immutable characteristic, we have laws against child labor, which is a good thing, I would argue. So we have allowed the legislature to use the protected attribute age to make decisions, and the same type of assessment needs to happen with artificially created immutable characteristics. What that means is: if you are using eye movement, retinal movement, how fast your heart is beating, how you move your mouse, whether you use capitals or not, how fast you scroll, that can be fine, but you need to justify it, in the same way that you need to justify that it is okay to use age when making certain decisions, or as we have done with so many other immutable characteristics. Intelligence is not something you can change,
but we have schools that say you can only get in if you have a certain IQ. We have accepted that there are height requirements in basketball; height is something I cannot change either, and I would not make the team. We all accept that it is not a problem to use those characteristics if you can justify it. What I am saying is that when a criterion is artificially immutable, it is on its face problematic, and you need to justify it in the same way that it is on its face problematic to use something like height that you cannot change. So that is what I am proposing: more engagement with those types of questions. Why is it important? I think it is really important because we are completely barking up the wrong tree at the moment. There is a lot of attention on trying to expand non-discrimination law to solve the bias problem, but the issue is that discrimination now often occurs in a different way than the law anticipated. The law is very much concerned with: don't use gender, don't use race, don't use ethnicity. Companies are not doing that, because it is illegal; they really shy away from using directly protected attributes, and they are really cautious with that, so it doesn't really happen. What happens much more is that they use proxies: your shopping history, the hair products you use, where you go to shop, to form an image of your ethnicity and your gender. But that is really hard to prove for an outside person. If somebody is looking at your driving patterns, does that correlate with ethnicity? Maybe, maybe not; that is super hard to prove. So you have a group that doesn't link to a protected attribute and therefore doesn't find any protection. We think we are doing the right thing, but the group is so diverse and so far detached from a traditional attribute that it doesn't find any protection. There are also techniques where
you can just diversify a group to make it look more harmless, yet it is still a problem. And very often, especially in the advertising space, those novel groups are really popular, because advertisers want to advertise to dog owners and video gamers and single parents and those sorts of groups, none of which have any protection; the whole advertising industry is based on finding those types of people, not necessarily linked to gender or ethnicity. And AI is creating those non-comprehensible groups where nobody actually makes the mid-level inferences to figure out how anything correlates with anything. It is about you opening up your laptop and the signals you send: how fast you type, whether you use capital letters, how much sweat is on your forehead, whether you move your mouse to the left or to the right first. It is measuring all of that. I don't have a word for it, but it is happening; that is where the bulk of data collection happens, that is the bulk of the data being used for decision making, and yet the law doesn't really protect it, because how could it? Nobody thought about that. So this is not a criticism of the law or of the technology; it is just an unfortunate marriage, in that sense. My proposal is not simply to add groups to the list of protected attributes. That is what people ask: what are you going to do, do you want to protect everybody, Internet Explorer users and dog owners and gamers? No, I am not saying just put any type of group on there. Actually, I am saying the opposite: let's move away from a material definition of the group. I don't care who is in the group; I only care how the group is being created. So I want to move away from a material definition and move to a formal definition of a group. I care about how the group is being established: is it a random group, is it an ephemeral group, is it an incomprehensible group that you have no understanding that
you're part of, with no human language to understand what is going on, no way of contesting your membership and no control over it? If that is the case, then in my view it is artificially immutable and therefore on its face problematic. So I don't care who is in the group; I only care how the group is being created, and if it is random, ephemeral and nonsensical, then in my opinion there is a problem. So how would that work in practice? I actually think it would fit very nicely with how we have dealt with these issues in the past. The first question to ask is: is the criterion I am using de facto immutable? Is it artificially immutable, has an algorithm attached it to you, like web traffic, eye tracking or emotion detection, or is it based on a choice? If it is immutable, then you have to take a stance; you have to offer a justification for why it is acceptable to use, in the same way that you have to justify requiring high intelligence for university admissions: you have to have a justification for immutable characteristics that people cannot change. So the paper is not there to tell you that eye tracking is bad, web traffic is bad and emotion detection is bad, but that it is okay to target video gamers or dog owners; that is not what the paper is doing. The paper is just a detection tool, a framework to start thinking about the acceptability of all of that. It is really about asking whether it is acceptable to use a criterion, and whether there is any grounding for it being a good metric for decision making. Has anybody actually thought about whether eye tracking is a good way to decide on student placements? In that sense, this is not telling people what to do, but inviting people to think differently about harm. And if we did that, then I think we would in fact realize the actual idea of what the law really wants, because the law wants to level the playing field, the law wants fair competition, and the law wants you to be able to win. Thank you very
much. [Applause] Thank you.