Overview
The lecture covers the closure of Oxford University's Future of Humanity Institute (FHI), its association with philosopher Nick Bostrom, the long-termism and effective altruism movements, and related controversies.
Closure of the Future of Humanity Institute
- Oxford University shut down the Future of Humanity Institute (FHI) after 19 years.
- FHI was founded and directed by philosopher Nick Bostrom, a prominent figure in research on AI risk and long-term thinking.
- Elon Musk and other tech leaders supported FHI, with Musk donating $1 million in 2015.
- FHI’s closure followed years of administrative friction with Oxford’s Faculty of Philosophy, including restrictions on hiring and fundraising and the non-renewal of staff contracts.
Nick Bostrom and Long-Termism
- Bostrom is known for exploring existential risks like AI and the simulation hypothesis.
- He advocates for "long-termism," focusing on humanity's long-term survival, often over immediate problems.
- His 2014 book "Superintelligence" received praise from figures such as Elon Musk, Sam Altman, and Bill Gates.
Effective Altruism Movement
- Effective altruism seeks to maximize global good through evidence-based decisions.
- The movement was popularized by Oxford philosophers such as William MacAskill and backed by donors such as Sam Bankman-Fried.
- Effective altruism and long-termism have faced criticism for neglecting present-day issues and promoting extreme ideas.
Controversies and Criticisms
- Recent years saw scandals in effective altruism, including sexual harassment and financial fraud.
- Bostrom apologized in 2023 for a racist email he had sent in the 1990s, but his apology and subsequent clarifications were criticized as evasive.
- Oxford investigated Bostrom’s conduct, and effective altruism organizations distanced themselves from him.
Funding and Influence
- FHI received significant funding from tech billionaires and organizations such as the Open Philanthropy Project.
- Oxford states that research in this area will continue in other university departments.
Key Terms & Definitions
- Long-Termism — Philosophical view that prioritizes humanity's long-term future over present-day issues.
- Effective Altruism — Utilitarian movement focused on doing the most good through rational, evidence-based actions.
- Existential Risk — Threats that could cause human extinction or irreversibly cripple civilization.
- Simulation Hypothesis — Theory that reality might be an artificial simulation.
Action Items / Next Steps
- Review Bostrom’s "Superintelligence" for further background on AI risk arguments.
- Examine the debates around long-termism and effective altruism for upcoming assignments.
- Read about recent scandals and criticisms affecting the effective altruism movement.