Transcript for:
The Crucial Role of the 21st Century

What's the most important century in human history? Some might argue it's a period of extensive military campaigning, like Alexander the Great's in the 300s BCE, which reshaped political and cultural borders. Others might cite the emergence of a major religion, such as Islam in the 7th century, which codified and spread values across such borders. Or perhaps it's the Industrial Revolution of the 1700s, which transformed global commerce and redefined humanity's relationship with labor. Whatever the answer, it seems like any century vying for that top spot is at a moment of great change— when the actions of our ancestors shifted humanity's trajectory for centuries to come. So if this is our metric, is it possible that right now— this century— is the most important one yet?

The 21st century has already proven to be a period of rapid technological growth. Phones and computers have accelerated the pace of life. And we're likely on the cusp of developing new transformative technologies, like advanced artificial intelligence, that could entirely change the way people live. Meanwhile, many technologies we already have contribute to humanity's unprecedented levels of existential risk— that's the risk of our species going extinct or experiencing some kind of disaster that permanently limits humanity's ability to grow and thrive.

The invention of the atomic bomb marked a major rise in existential risk, and since then we've only increased the odds against us. It's profoundly difficult to estimate the odds of an existential collapse occurring this century. Very rough guesses put the risk of existential catastrophe due to nuclear winter and climate change at around 0.1%, with the odds of a pandemic causing the same kind of collapse at a frightening 3%. Given that any of these disasters could mean the end of life as we know it, these aren't exactly small figures. And it's possible this century could see the rise of new technologies that introduce more existential risks.

AI experts have a wide range of estimates regarding when artificial general intelligence will emerge, but according to some surveys, many believe it could happen this century. Currently, we have relatively narrow forms of artificial intelligence, which are designed to do specific tasks like play chess or recognize faces. Even narrow AIs that do creative work are limited to their singular specialty. But artificial general intelligences, or AGIs, would be able to adapt to and perform any number of tasks, quickly outpacing their human counterparts. There's a huge variety of guesses about what AGI could look like, and what it would mean for humanity to share the Earth with another sentient entity. AGIs might help us achieve our goals, they might regard us as inconsequential, or they might see us as an obstacle to swiftly remove. So in terms of existential risk, it's imperative that the values of this new technology align with our own. This is an incredibly difficult philosophical and engineering challenge that will require a lot of delicate, thoughtful work.

Yet even if we succeed, AGI could still lead to another complicated outcome. Let's imagine an AGI emerges with deep respect for human life and a desire to solve all of humanity's troubles. But to avoid becoming misaligned, it's been developed to be incredibly rigid about its beliefs. If these machines became the dominant power on Earth, their strict values might become hegemonic, locking humanity into one ideology that would be incredibly resistant to change.
History has taught us that no matter how enlightened a civilization thinks it is, it rarely lives up to the moral standards of later generations. And this kind of value lock-in could permanently distort or constrain humanity's moral growth. There's a ton of uncertainty around AGI, and it's profoundly difficult to predict how any existential risks will play out over the next century. It's also possible that new, more pressing concerns might render these risks moot. But even if we can't definitively say that ours is the most important century, it still seems like the decisions we make might have a major impact on humanity's future. So maybe we should all live like the future depends on us— because actually, it just might.