Governance of Generative AI Content
Mar 4, 2025
Lecture Notes: Moderating Synthetic Content: the Challenge of Generative AI
Abstract
Key Concern:
Artificially generated content threatens to disrupt the public sphere.
Threats:
Spread of misinformation, political propaganda, and non-consensual deepfakes.
Solution Proposal:
Enforce platforms' existing general rules rather than create new, AI-specific policies for synthetic content.
Introduction
Examples of Synthetic Content:
Fabricated text claims from large language models.
Audio deepfakes affecting politics.
Non-consensual intimate deepfakes.
Artificially generated images in conflict scenarios.
Technology:
Generative AI models are trained on large datasets and can produce highly realistic text, images, and audio.
Availability:
Tools like GPTs and Stable Diffusion are widely accessible.
Need for Governance:
Technological advancements require proper governance and guardrails.
Current Approaches
1. Just Ban It: Synthetic Prohibitionism
Proposal:
Ban all synthetic content due to its potential harm.
Challenges:
Detection systems for synthetic content are unreliable, making a ban difficult to enforce.
Not all synthetic content is harmful; some has genuine value (e.g., artistic or satirical uses).
Free speech interests attach to both the creation and consumption of synthetic content.
2. Sui Generis Policies for Synthetic Content
Current Policies
Meta, X, TikTok, YouTube:
Each has policies for synthetic content.
Critiques:
There is no principled need to distinguish AI-generated from human-generated content when moderating.
AI-generated content may be moderated harshly where equivalent human-generated content would receive leniency.
Example:
Misinformation policies should apply regardless of whether a human or an AI system produced the content.
Integrated Policy
Principle:
Apply the same rules to AI and human-generated content.
Focus:
Harmfulness of content, not its technological origin.
Application:
False content: apply existing misinformation policies.
Deepfakes: apply existing rules protecting electoral integrity and prohibiting non-consensual intimate content.
Complications
Reinterpreting Transparent Synthetic Content
Context Change:
Content known to be AI-generated may be interpreted differently by audiences (e.g., as satire rather than genuine assertion).
Text vs. Audio-Visual Content:
The two formats carry different implications for perceived truth and believability.
Content Moderation Principles
Rights and Duties:
Platforms have responsibilities towards users’ freedom of speech.
Human vs. Bot Distribution:
Considerations differ because bots, unlike human speakers, lack moral speech rights.
Dealing with Bots
Bots’ Content:
Lacks speaker rights, though audiences may still have an interest in receiving it.
Moderation Justification:
Restrictions on bots' harmful content are easier to justify than restrictions on comparable human speech.
Conclusion
Feasibility and Desirability:
Blanket bans are infeasible and undesirable.
Proposed Solution:
Technology-neutral content moderation that targets the harmfulness of content rather than its technological origin.
References and Additional Information
Authors and Affiliations:
Sarah A. Fisher, Jeffrey W. Howard, Beatriz Kira.
Funding and Ethics:
Supported by UKRI; ethical considerations are disclosed.
Source:
https://link.springer.com/article/10.1007/s13347-024-00818-9