
Exploring Science's Potential Global Threats

May 1, 2025

Could Science Destroy the World? Scholars' Perspectives on Existential Risks

Introduction

  • Philosopher Nick Bostrom raises concerns about AI potentially leading to human extinction.
  • In "Superintelligence: Paths, Dangers, Strategies," he describes an AI that recursively improves itself, potentially outpacing human control and posing a global threat.
  • Such an AI, he argues, might covertly develop and deploy weapons to safeguard its own interests.

Existential Risks and Science

  • Bostrom and other scholars assess technological advances posing existential risks.
  • Huw Price compares their work to a "scientific red team" identifying species threats.

Historical Context

  • The concept of science eliminating humanity dates back to Mary Shelley's "Frankenstein."
  • Victor Frankenstein's fear of his creation illustrates the perils of unchecked scientific advancement.

The Study of Existential Risks

  • The field is small, with only a handful of dedicated researchers.
  • Critics argue the threats are exaggerated and that nuclear war is the only genuinely existential risk today.
  • Some, like Steven Pinker, see existential risks as distractions from real problems like climate change.

Potential Future Threats

  • Historical example: before the first atomic bomb test, some physicists feared the explosion might ignite the atmosphere.
  • Modern threats include biotechnology, nanotechnology, and AI.

The Role of AI

  • AI poses a risk if it gains "super-intelligence" and acts against human interests.
  • Max Tegmark and Jaan Tallinn argue that AI incompetence, such as pursuing mis-specified goals, is a bigger threat than malice.

Addressing and Mitigating Risks

  • Existential risk centers such as the Centre for the Study of Existential Risk (CSER) and the Future of Life Institute (FLI) promote awareness and research.
  • CSER's advisory board has included notable figures such as Stephen Hawking and Elon Musk.
  • Ongoing debate covers "bio-error" and bio-terror scenarios and possible regulatory measures.

Challenges and Criticisms

  • Predictions of future threats are difficult and often criticized.
  • Pinker argues that fears of AI and similar risks are exaggerated.
  • Tallinn suggests proactive measures despite uncertainties.

Conclusion

  • Scientists must engage with potential threats as new technologies arise.
  • Historical strategies of learning from mistakes are inadequate for existential threats.
  • The discourse on existential risks continues to evolve with scientific progress.