Overview
This lecture explains the differences between misinformation and disinformation, how conspiracy theories spread and appeal to people, and the role of algorithms and AI in amplifying harmful narratives online. Real-world examples and research highlight these processes and their effects.
Misinformation vs. Disinformation
- Disinformation: False information created deliberately to deceive or harm.
  - Example: Fake news stories or deepfake videos targeting political opponents.
- Misinformation: False information shared without intent to deceive.
  - Example: Conspiracy theories spread by people who genuinely believe them.
- The key difference is intent: disinformation is deliberate; misinformation is not.
Conspiracy Theories: Definition and Psychological Appeal
- Conspiracy theories simplify complex events, often blaming a powerful elite.
- They provoke strong emotions and fulfill psychological needs:
  - They offer explanations and reduce uncertainty.
  - They identify an enemy, giving a sense of orientation.
  - They create a sense of belonging to a special group.
- Belief in these theories can make people feel unique or persecuted.
Digital Platforms, Algorithms, and Amplification
- AI and digital platforms make it easier to create and spread false information.
- Algorithms boost emotionally charged, divisive content to increase engagement.
- Platforms turn user attention and reactions into economic value.
- Content that triggers strong reactions is more likely to be recommended, reinforcing harmful narratives (see the sketch below).
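The mechanism above can be illustrated with a minimal Python sketch of engagement-based ranking. The post fields, the weights, and the `outrage_score` estimate are invented for illustration only and do not represent any real platform's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    outrage_score: float  # hypothetical 0-1 estimate of how emotionally charged the post is

def engagement_score(post: Post) -> float:
    """Rank posts by predicted engagement: shares and comments weigh more than
    passive likes, and emotionally charged content gets an extra boost because
    it tends to drive more interaction."""
    base = post.likes + 3 * post.comments + 5 * post.shares
    return base * (1 + post.outrage_score)

def recommend(feed: list[Post], k: int = 3) -> list[Post]:
    # The feed is simply sorted by engagement score, so divisive,
    # emotionally charged posts float to the top regardless of accuracy.
    return sorted(feed, key=engagement_score, reverse=True)[:k]

posts = [
    Post("Calm, factual explainer", likes=120, shares=4, comments=10, outrage_score=0.1),
    Post("Outraged conspiracy claim", likes=80, shares=40, comments=60, outrage_score=0.9),
]
for p in recommend(posts):
    print(round(engagement_score(p)), p.text)
```

Under these assumed weights, the divisive post outranks the factual one even with fewer likes, which is the amplification effect described in the lecture.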
Ideological Fantasy and Unconscious Bias
- "Ideological fantasy" explains why some ideologies persist unconsciously.
- Fantasies in the unconscious mind influence behavior and beliefs.
- Unconscious biases, such as associating professors with white men, shape perceptions.
  - Example: AI image generators often depict professors as white men.
- Digital platforms can legitimize racist conspiracy theories by presenting them as intellectual or scientific.
Real-World Impacts: Policy, Violence, and Radicalization
- Racist conspiracy theories have influenced policy and been used to justify opposition to diversity.
  - Example: The "cultural Marxism" theory claims academia is controlled by Marxists and has been used to oppose diversity policies.
- Such narratives have been used to justify extremist attacks and have contributed to radicalization.
- Algorithms can lead users into "rabbit holes" of increasingly radical content (see the sketch below).
  - Example: In Brazil, students exposed to violent TikTok content became violent at school.
  - Researchers found that TikTok recommended violent content within days.
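The "rabbit hole" dynamic can be pictured as a feedback loop in which engagement nudges the next recommendation toward more extreme content. The toy simulation below assumes exactly that rule; it illustrates the escalation pattern rather than describing TikTok's actual recommender, and the content catalog is entirely invented.

```python
import random

# Content items ordered by "extremity": index 0 is mainstream, higher indices are more radical.
CATALOG = ["news recap", "heated commentary", "conspiracy video", "extremist call to action"]

def next_recommendation(current_level: int, engaged: bool) -> int:
    """If the user engaged with the last item, nudge the next recommendation one
    step more extreme (assumed here to maximize predicted watch time);
    otherwise drift back toward mainstream content."""
    if engaged:
        return min(current_level + 1, len(CATALOG) - 1)
    return max(current_level - 1, 0)

def simulate_session(steps: int = 6, engage_prob: float = 0.8, seed: int = 0) -> None:
    # A user who engages most of the time reaches the most extreme content within a few steps.
    rng = random.Random(seed)
    level = 0
    for step in range(steps):
        engaged = rng.random() < engage_prob
        level = next_recommendation(level, engaged)
        print(f"step {step + 1}: recommended '{CATALOG[level]}'")

simulate_session()
```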
AI, Generative Tools, and Mass Radicalization
- Generative AI speeds up the creation and spread of biased misinformation.
- Example: In the 2024 UK riots, fake news and conspiracy theories about migrants were amplified by algorithms, leading to attacks and abuse.
- Viral AI-generated images depicted white people as heroes and migrants as threats.
- Generative AI reflects and spreads societal biases, making harmful content more accessible and fueling radicalization.
Key Terms & Definitions
- Disinformation: False information created to deceive or harm.
- Misinformation: False information shared without intent to deceive.
- Conspiracy Theory: Simplified narrative blaming a powerful enemy for complex events.
- Ideological Fantasy: Unconscious beliefs that sustain ideologies.
- Unconscious Bias: Automatic associations shaping perception and behavior.
- Algorithmic Amplification: Algorithms increasing the spread of certain content.
- Generative AI: AI that creates new content, often reflecting existing biases.
Action Items / Next Steps
- Review the differences between misinformation and disinformation.
- Study why conspiracy theories appeal to people.
- Reflect on how algorithms and AI spread harmful narratives.
- Consider the impact of unconscious bias and ideological fantasy.
- Examine how generative AI and algorithms contribute to radicalization and violence.