Enlightenment Now? – Existential Threats

For the past year, we’ve been in the midst of a global pandemic. The very real threat of catastrophic climate change continues to loom over us and will do so for the next few decades at least. And then there’s AI, bioterrorism, nuclear war, super-volcanoes. The future seems like a very uncertain and precarious place right now. The question is: has the progress of the past 250 years made existential threats more likely, or has it done enough to mitigate them?
1. Balances of Power
Pinker’s response in this chapter is similar to his response to the threat of climate change in his chapter on The Environment. He argues that we don’t get anywhere when we think of these threats in doomsday proportions – that just creates panic, dissociation, or distraction. And if you look at the apocalyptic predictions of the past (e.g. overpopulation, resource depletion), they largely failed to materialise. Better to view these existential threats as problems, which we can most likely solve given enough knowledge and the will to do so.
In short, instead of throwing our hands up in the air and proclaiming “we’re all doomed”, Pinker argues that we should calm down, look at each issue rationally and pragmatically, and get to work.
He rationally dissects three issues in particular: the threats posed by AI, bio/cyber-terrorism, and nuclear war. Each of these issues has a different power imbalance in play. With AI, it’s about handing power over to another species or system that may not have humanity’s best interests at heart. With bio/cyber-terrorism, it’s about individuals, or small groups, wielding a large amount of power to cause terror. And with nuclear war, it’s about two major national powers each having enough power to destroy the other.
There is one more important power imbalance that Pinker doesn’t look at here, but which he discussed at length in the previous chapter on The Environment: the increasing power humanity has to cause an environmental catastrophe. Humans are increasingly interfering with complex natural systems that we still depend upon, such as the climate. We risk screwing them up.
Let’s look at each of these power imbalances in turn.
2. Human power vs computer power
AI represents the birth of a power potentially much greater than humans, which could eventually work against the human race. This is real sci-fi stuff. In response, Pinker claims that this just isn’t how AI works. Most AI is designed to accomplish a specific purpose – to solve an equation, to drive a car. But to rise up and overthrow the human race, AI would need to have generalised intelligence – artificial general intelligence (AGI). Pinker claims that we’ll never have much reason to invest energy into creating AGI. And, even if we did, an AGI just wouldn’t have access to the material world to do much damage.
More interesting implications of AI, in my opinion, have a slightly different flavour. It’s less to do with a war between humans and computers, and more to do with AI shaping human nature and culture in incremental ways that we might not actually want. That’s the theme of Shoshana Zuboff’s The Age of Surveillance Capitalism and Yuval Noah Harari’s Homo Deus, where AI slowly takes away individuals’ agency. We are constantly nudged into ever-narrower filter bubbles which do a great job of continuously rewarding and distracting us, but detract from meaningful projects and relationships. You can see this already happening, which is why it seems like a more likely future threat.
3. State power vs terrorist power
Threats of bioterrorism and cyberterrorism put humans at the helm. New biotechnologies, such as CRISPR (a gene-editing technology becoming as accessible as 3D printing), put great power in the hands of individuals and small groups. In contrast to non-generalised AI, terrorists do want to disrupt the world order, potentially even destroy it. We’ve gone from sci-fi to evil-genius territory.
Pinker’s response is that these groups still need to outsmart much larger groups. The groups they’re up against have many more resources directed at preventing such events from happening. With the exception of 9/11, terrorist attacks are typically small and ineffectual for this exact reason – the kind of people who want to be terrorists, and the resources they have to organise mass terror, stack the odds of success against them. Although this balance of power may generally hold, the question is whether it will always do so. After all, 9/11 did happen, and could potentially happen again.
4. State power vs State power
The threat of nuclear war represents what happens when two major national powers come up against each other. Do nations have the wisdom to control the huge amount of destructive power they have at their fingertips? Here I find Pinker’s response most convincing. He shows how nuclear disarmament is already happening. Although there are still 10,200 atomic warheads in the world, that is 54,000 fewer than there were in 1986 – a huge amount of disarmament over the past 35 years. Both the US and Russia (who hold the vast majority of them) have committed to continued reductions.
Pinker also outlines the kind of peace agreements and safety measures needed to create a “stable nuclear solution” – one in which there is very little incentive for, or possibility of, any nation “striking first” with its nuclear weapons. Simply put, with the right procedures in place, the temptation to start a nuclear war is deterred by the likelihood of quickly being blown up by another nation in return. These solutions seem promising, especially if there are effective ways to decrease the risk of false alarms caused by communication errors and other bugs in the system.
5. Human power vs complex systems
The power imbalance that Pinker doesn’t consider in this chapter might be the most problematic: what happens when almost all major nation-states use their collective power to create technologies that are unintentionally destructive – not because bad actors get hold of them, but because we employ the technology without realising its destructive nature until it is too late.
Geo-engineering may be a good case in point. We might employ these techniques to save the world from climate change, yet create an even greater problem in the process. We just don’t know what happens when we start exercising huge amounts of power over the natural systems we depend upon. Likewise, the global pandemic may have been caused by the expansion of agriculture into previously wild habitats (the same may have been true of Ebola and the AIDS crisis). We just don’t know the long-term impacts of extractive human actions on a global scale.
6. Reaction vs Prevention
I agree with Pinker that many of these risks might not be existential in nature. But they’re serious risks nonetheless. Even if Pinker is right about AI, terrorism, and nuclear war, he may be wrong about other catastrophic risks. We’ve just lived through a global pandemic. We have reason to believe there are ways in which it could’ve been prevented – through less intrusive agricultural development into wild habitats, for instance. There are things we could be doing now to prevent future global pandemics. But, in prioritising progress over precaution, we probably won’t.
Also, pandemics are an obvious, and therefore relatively easy, case in point. We might end up putting things in place to prevent future global pandemics, in the same way we might have put good regulation in place to prevent future financial collapses after the crisis in 2008. The real worries are the multiple other risks that haven’t materialised yet – antibiotic resistance, soil degradation, bee extinction, super-volcanoes, you name it. Ideally, we’d be putting precautions in place to mitigate the entire gamut of potentially catastrophic risks we face as a species.
The short-term nature of politics makes putting these precautions in place very unlikely. Again, I think Pinker is right to be optimistic that, in general, we can solve our major problems given enough knowledge and the will to do so. But, as we saw in the chapter on Democracy, just because something is solvable in theory doesn’t mean we’ll put enough effort into solving it. Or, in the case of these major existential threats, that we’ll do enough in time to avoid a global catastrophe. In the face of other, more pressing problems, we can always do something about those risks tomorrow.
Megan McArdle, in a recent article (here), puts this point well:
“Certainly we should be spending a lot more on such prevention, for the same reason prudent people buy hefty insurance policies — in case our cars crash, our spouses die or our houses burn down… We should divert a little of our wealth into making sure the species isn’t wiped out by things that have wiped out species in the past… And to ensure that all manner of critical infrastructure — power grids, health-care systems, supply chains — is more robust to more ordinary shocks... But to do that, we’re going to have to agree to spend money — and quite a bit of it. As both voters and individuals, we must make it clear to CEOs and politicians that we’re willing to pay extra for reliability insurance. And up until now, we’ve done the opposite, demanding the lowest price right now.”
When it comes to insuring ourselves against catastrophic events, everyone – on the left and the right of the political spectrum – does too little. Governments of every stripe were caught equally flat-footed by the financial crisis and the pandemic because they, like us, were paying attention to other things.
When immediate progress is our number one priority, how much attention do we pay to the risks we are creating for the future? One solution would be to devote a percentage of GDP to mitigating long-term risks (1%? 5%?). But, unfortunately, this possibility looks a long way off.