AI and Data Privacy Risks

Oct 9, 2025

Overview

This lecture explored the changing landscape of data privacy in the age of AI, covering new privacy risks, challenges for practitioners, and opportunities to leverage AI for privacy protection.

Introduction to Privacy & AI

  • Data Privacy Day is dedicated to raising awareness about privacy and promoting data protection.
  • AI technologies have led to increased concerns about personal privacy due to new capabilities and risks.
  • Privacy is a key principle in ethical AI development, yet specific guidance for AI-related privacy risks is limited.

How AI Changes Privacy Risks

  • AI has increased the scale and scope of privacy risks, often exacerbating existing problems.
  • Identification risks are worsened by technologies like facial recognition, which is now effective even on low-quality or partially masked images.
  • Surveillance is expanded via AI, enabling automated analysis of large data streams (e.g., CCTV, school tracking).
  • AI introduces new insecurity risks, such as memorization in large language models that can leak personal training data.
  • Distortion risks (e.g., deepfakes) and new exposure risks (e.g., non-consensual intimate imagery) are created by AI.
  • Physiognomy is a problematic, AI-enabled category that attempts to infer character traits from facial features, despite being scientifically discredited.
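The memorization risk above can be sketched with a toy model (an illustrative example, not from the lecture): even a trivial bigram language model reproduces a training record verbatim when prompted with its opening token, which is the same failure mode that lets large language models leak personal data seen during training.

```python
# Toy illustration of training-data memorization (hypothetical example):
# a minimal bigram model regurgitates a record, fake PII included.
from collections import defaultdict

def train_bigram(tokens):
    """Build a next-token table from the training text."""
    table = defaultdict(list)
    for prev, nxt in zip(tokens, tokens[1:]):
        table[prev].append(nxt)
    return table

def generate(table, start, length):
    """Greedily emit the most frequent continuation at each step."""
    out = [start]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:
            break
        out.append(max(set(options), key=options.count))
    return " ".join(out)

# Training corpus contains a (fake) phone number exactly once.
corpus = "call alice at 555-0142 for details".split()
model = train_bigram(corpus)

# Prompting with "call" reproduces the memorized record verbatim.
print(generate(model, "call", 6))
# -> call alice at 555-0142 for details
```

Real LLMs are probabilistic rather than deterministic, but rare, unique strings in the training data can still dominate the model's continuations in the same way.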

Practitioner Awareness and Challenges

  • Most AI practitioners do not focus on AI-specific privacy risks; their work is driven mainly by compliance rather than by specific threat models.
  • Main motivators for privacy work: business interests, personal values, or legal compliance.
  • Inhibitors include rigid compliance rules, lack of incentives, and opportunity costs (tradeoffs between privacy and model utility).
  • Practitioners often lack AI-specific guidance and must rely on individual judgment and insufficient training.
  • Barriers include lack of holistic understanding of data pipelines and challenges assessing downstream privacy impacts.

Leveraging AI for Privacy Protection

  • AI can be used to help practitioners recognize and mitigate privacy risks (e.g., the "Privy" tool for risk assessment).
  • User-facing tools employing generative AI can help individuals identify and act on privacy risks in photos (Imago Obscura), texts (Privacy Mirror), and access controls (Sketch-based Access Control).
  • Effective privacy tools should balance user agency with surfacing potential risks users may not be aware of.
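The idea of surfacing risks while preserving user agency can be sketched as follows (a hypothetical illustration, not the implementation of Privy or Privacy Mirror, which use generative AI rather than simple pattern matching):

```python
# Hypothetical sketch of a user-facing privacy tool that flags possible
# PII in a draft before sharing, leaving the decision to the user
# rather than silently redacting (preserving user agency).
import re

# Illustrative patterns only; real PII detection covers far more.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def surface_risks(text):
    """Return a list of (category, match) findings for the user to review."""
    findings = []
    for category, pattern in PII_PATTERNS.items():
        for match in pattern.findall(text):
            findings.append((category, match))
    return findings

draft = "Reach me at jane@example.com or 555-0142 before Friday."
for category, match in surface_risks(draft):
    print(f"possible {category}: {match}")
```

Flagging findings instead of auto-redacting reflects the lecture's point: the tool informs, but the user decides what to share.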

Discussion & Future Directions

  • The biggest privacy concerns include AI reinforcing discredited ideas (like physiognomy), deepfakes, and AI's ability to scale attacks on social proof and sensory perception.
  • Professional norms, regulatory interventions, and improved training are needed to advance privacy protection.
  • There is a need for improved education for practitioners, not just researchers, in privacy and AI risks.
  • Privacy can be a business differentiator, but is often secondary to speed and innovation.
  • AI-based decision support systems should enhance, not replace, user awareness of privacy risks.
  • Moving privacy technologies from research to practice is most effective when targeting communities with specific needs first.

Key Terms & Definitions

  • Privacy — the right to control personal information and protect it from unauthorized access.
  • Identification Risk — linking digital data to an individual's identity.
  • Surveillance — monitoring individuals, often using automated data collection and analysis.
  • Insecurity — risks arising from poor data stewardship or technical vulnerabilities.
  • Deepfake — AI-generated synthetic media that distorts reality or impersonates individuals.
  • Physiognomy — discredited practice of inferring character or traits from physical features (now enabled by some AI).
  • Compliance — adherence to legal and regulatory standards.
  • Generative AI — models that produce new content (images, text, etc.) with potential privacy impacts.

Action Items / Next Steps

  • Visit the Safe Computing website for event recordings, upcoming programs, and privacy quizzes.
  • Explore tools like Privy, Imago Obscura, and Privacy Mirror for privacy risk assessment.
  • Consider integrating privacy risk education and training into coursework or organizational practices.
  • Participate in the six words of privacy project at safecomputing.um.edu.