Lecture Notes: The Theory of Artificial Immutability and Discrimination Law
Introduction
Lecture by a fellow invited to Berlin to discuss a paper titled "The Theory of Artificial Immutability: Protecting Algorithmic Groups Under Anti-Discrimination Law."
Speaker's background: lawyer, part of the Oxford research group "Governance of Emerging Technologies."
Group's focus: Emerging technologies, legal responses, and societal impacts.
Overview of AI and Decision Making
AI is used in significant decision-making areas like loans, insurance, education, and criminal justice.
AI can cause harm, often discriminating against already marginalized groups based on gender, ethnicity, ability, sexual orientation, etc.
AI creates new group classifications based on non-traditional metrics (eye movement, mouse movement, heart rate) for decision-making.
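To make that mechanism concrete, below is a minimal sketch of how such groupings can arise: clustering users on behavioral telemetry yields groups that map onto no traditional category. The data, feature names, and use of scikit-learn's KMeans are illustrative assumptions, not details from the lecture.

```python
# Minimal sketch: clustering users on behavioral telemetry.
# All data and feature names are hypothetical illustrations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Columns: mouse speed (px/s), eye-fixation duration (ms), heart rate (bpm)
telemetry = rng.normal(
    loc=[300.0, 250.0, 72.0],
    scale=[80.0, 60.0, 9.0],
    size=(1000, 3),
)

# The model partitions users into groups with no human-readable meaning.
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(telemetry)

# A downstream decision (e.g., a loan rate) can then key off group membership,
# even though "group 2" corresponds to no traditional protected category.
print(np.bincount(groups))
```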
AI and Non-Traditional Grouping
Explores how AI forms new groups that do not align with traditional protected categories, such as "dog owners" or groups defined purely by behavioral data.
Raises the concern that AI can make decisions free of human bias yet still perpetuate unfairness.
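A hedged illustration of that concern (synthetic data, hypothetical feature names): even a model trained without any protected attribute can reproduce a group disparity through a correlated proxy feature.

```python
# Sketch: the model never sees the protected attribute, yet a correlated
# proxy feature lets it reproduce the disparity. Synthetic, hypothetical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

protected = rng.integers(0, 2, n)           # group membership (never an input)
proxy = protected + rng.normal(0, 0.5, n)   # behavioral metric correlated with it
other = rng.normal(0, 1, n)                 # an unrelated legitimate feature

# Historical outcomes were themselves skewed against the protected group.
y = (0.5 * other - 1.0 * protected + rng.normal(0, 0.3, n)) > 0

X = np.column_stack([proxy, other])
pred = LogisticRegression().fit(X, y).predict(X)

# Approval rates diverge by group even though the model is "blind" to it.
for g in (0, 1):
    print(f"group {g}: approval rate {pred[protected == g].mean():.2f}")
```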
Case Studies and Real-Life Implications
Austria (2019): Employment agency's algorithm discriminated against women, people with disabilities, and older people.
UK (2020): Algorithm for A-level grades favored private school students, harming students of color and those from low socioeconomic backgrounds.
Social Media Moderation: Discrimination against LGBTQ+ content.
Challenges with Current Discrimination Laws
Current laws protect against discrimination on immutable traits (e.g. gender, race, age).
AI's groupings challenge this by focusing on mutable or non-traditional characteristics.
Examples: court cases involving obesity, chronic illness, and intersectional discrimination have failed.
Philosophical and Legal Considerations
Immutability and Choice: AI groups don't fit neatly into immutable or protected choice categories.
Relevance and Arbitrariness: AI can find relevance in seemingly irrelevant traits.
Historical Oppression: Traditional groups have a history of oppression, which AI groups lack.
Social Saliency: AI groups often lack cultural or social cohesion.
Proposal for New Legal Theories
Suggests a need for a new legal theory to address AI-created groupings: "Artificial Immutability."
This theory treats AI-created groupings as effectively immutable: the factors involved are arbitrary, opaque, and beyond individuals' control, so affected people cannot change or even perceive them.
Implications for Society and Law
Calls for a shift from material definitions of groups to formal ones, focusing on how groups are created.
New guidelines for determining acceptable criteria in AI decision-making.
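Purely as a thought experiment, such guidelines might be encoded as a pre-deployment feature audit. The criteria below (controllable, perceivable, contestable) and all names are hypothetical illustrations, not the paper's formal test.

```python
# Hypothetical sketch of a pre-deployment feature audit. The three criteria
# are illustrative assumptions, not the lecture's formal proposal.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    controllable: bool   # can the person change it through reasonable effort?
    perceivable: bool    # can the person know it is being measured?
    contestable: bool    # can the person challenge how it is used?

def audit(criteria: list[Criterion]) -> list[str]:
    """Return names of criteria that look 'artificially immutable'."""
    return [
        c.name
        for c in criteria
        if not (c.controllable and c.perceivable and c.contestable)
    ]

features = [
    Criterion("income", controllable=True, perceivable=True, contestable=True),
    Criterion("mouse_movement_pattern", controllable=False, perceivable=False, contestable=False),
    Criterion("heart_rate_variability", controllable=False, perceivable=False, contestable=True),
]

print(audit(features))  # -> ['mouse_movement_pattern', 'heart_rate_variability']
```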
Conclusion
Emphasizes the need to rethink discrimination law in response to AI's capabilities.
Urges greater engagement with, and understanding of, AI's impact on fairness and decision-making.