recommended by The Centre for the Study of Existential Risk
In the rapidly emerging field of existential risks, researchers study the mitigation of threats that could lead to human extinction or civilisational collapse. We met with four researchers from The Centre for the Study of Existential Risk at the University of Cambridge to discuss their recommendations for the best books for getting a grasp of this dense subject.
What would be your best definition of the concept of existential risk? And of the scientific field that is growing around it?
Simon Beard (SB): You probably need to start with the concept of human extinction. Human extinction is the origin, foundation, and paradigmatic example of existential risk. Humanity has many options and many things that it could become. It has a lot of potential: we could colonise the stars, overcome the limits of our frail human bodies, and achieve all sorts of things. But if we go extinct, then all of that is gone; there are only blank pages left in the book. There are many different ways in which this would be a very bad thing.
But when you look at human extinction, you start to realise that it doesn’t capture everything you want to avoid. There’s nothing that special about Homo sapiens. Species go extinct all of the time, and it’s not a big deal. If Homo sapiens were to go extinct but with other species replacing it, I don’t think any of us in this group would label this as an existential threat. We don’t need to absolutely lock in the human species as we currently know it, and make it last forever. So human extinction isn’t always that bad, but there are also other things that might be as bad, or even worse. For instance, if new technologies were to develop but came to control human beings and remove some of the things we find most valuable about humanity, like our creativity or our ability to make choices for ourselves, it would fundamentally change what it means to be human. That might then be as bad as extinction, even though you might still have a bunch of humans alive. Similarly, if humanity were to survive as a species but we were to lose our science and culture, and never get them back, being reduced to a few hundred people surviving in the New Zealand wilderness, we’d never get back to the state we currently are in. That’s why we talk about existential risks: it allows us to focus not simply on the idea of human extinction, but also on other things that might be just as important. And we work to try to avoid all of them.
Are there risks that are considered more likely to pose a danger to humanity?
Lalitha Sundaram (LS): As a research centre, we try to avoid ranking the various existential risks. We focus on several areas, without necessarily covering every single thing. My particular focus, for example, is on biological risks: how our capacities in biotechnology are increasing at an unprecedented rate, and whether we’re able to govern them adequately. And on the flip side of that, whether we are harnessing the power of these biotechnologies to guard against other catastrophic risks such as global pandemics. Some of our colleagues at the Centre look at nuclear issues, artificial intelligence (AI), critical infrastructure, and ecosystem collapse. We try to have at least one person keeping an eye on each major threat.
The study of existential risks is rapidly developing, mainly at Oxford and Cambridge, and what’s really noticeable is the diversity of research profiles in the field. Can each of you tell us a bit more about your initial background, and how you came to study this subject?
Shahar Avin (SA): I initially studied physics, then philosophy of science, and I worked as a software engineer for a little while. I did my PhD in Cambridge, around the time CSER was being set up, and I heard later on that they were recruiting postdoctoral researchers. Having been away from academia for a while, I had realised that academics sometimes misunderstand the corporate and technological world, which is a really important piece of the picture if we’re going to govern technological risks.
Haydn Belfield (HB): My background is in politics and policy. I worked in UK party politics as a policy researcher, after studying politics at university. However, I was always interested in emerging technologies and international security – two topics that don’t get discussed enough in current party politics. Particularly striking was the argument that, because of our emerging technologies, humanity was facing a small but real chance of disaster. I came round to the view that this might be one of the most important problems in the world right now. I certainly found it one of the most fascinating – I couldn’t stop reading about it.
“If Homo sapiens were to go extinct but with other species replacing it, I don’t think any of us would label this as an existential threat”
SB: My background is in moral philosophy. I studied at Oxford, and thought I’d go work in Parliament or in a think tank, and do political things. I did that and had a lot of fun doing it, but I also realised that I really liked philosophy, and that working purely in a public policy context, I’d always be very constrained. You keep thinking that if you get the next promotion or the next job, you’ll finally be able to make decisions for yourself, but the person currently holding that next job also considers themselves constrained by other actors. So I went back to university, and did a masters and a PhD in philosophy. I thought I had that policy side of my life behind me, but I was lucky enough to get a position first at Oxford’s Future of Humanity Institute, and then at CSER. The Centre was very interested in me being a moral philosopher, with a specialisation in population ethics, and someone who’s looking at very big ethical issues (‘How do we evaluate the worst things? What would count as the worst possible outcome or an existential threat?’) but who’s also happy to engage with people, decision-makers, civil servants, etc. I wanted to try and find a way to help them to not only understand our message and our findings, but also get them to then communicate to the people they had access to, in a way that would get them to listen. We have close links with public policy researchers at Cambridge, and also collaborate with the All-Party Parliamentary Group for Future Generations.
LS: I did my undergraduate in genetics at Cambridge, and my PhD in parasitology: a very traditional lab-based biology background. I then had an opportunity to move into synthetic biology, a sort of advanced and systematised form of biotechnology and genetic manipulation. Part of my first postdoctoral position in that field involved looking at responsible research and innovation, i.e. how do we govern this new type of genetic engineering, and how do we make it safe and appropriate, scientifically and socially. This put me on the radar of CSER, and when they had an opening for a biorisk position, I jumped at the chance. It allows me to keep one foot in the scientific world, keep an eye on the policy side, and bridge those two worlds.
It seems to be a recurring profile for researchers in this field to have a mix of technical/scientific knowledge and philosophy/ethics-oriented work.
LS: I think the scientific aspect is key; in my work at CSER I have to talk to scientists every day. They’re not going to take me seriously, and I’m not going to understand what they’re saying, if I don’t have a PhD or if I don’t even know what a gene is. So I think that’s absolutely essential.
SA: A lot of people we work with have deep knowledge in at least two disciplines, and some are even broader generalists. Some fields are inherently multidisciplinary, and existential risk is one of them. So interestingly, being multidisciplinary gives you an advantage as an existential risk researcher, whereas the same person competing in a narrow field would be at a disadvantage.
SB: Oxford and Cambridge are rich universities, so they can afford to have people studying these kinds of subjects. There is this view in academia that multidisciplinary, impactful, applied work is what you should be doing; but it’s a myth. If you want to get good grants in academia, you need to be a specialist, working on a subject that is recognised by everyone in your field. If you’re going to do niche work, or focus on communicating to non-academics, or publish papers that are hard to classify in a particular section, you are costing your institution money. But Oxford and Cambridge have big endowments, so they can afford to have people like me who are ‘dead-weights’, academically speaking!
Has it started to get better since CSER was created? Or do you still need to prove your legitimacy to your peers in academia?
LS: It’s been mostly fine for me in biotech risks, because we’re never presenting our work as a way to stop researchers from experimenting, and I come from this community so my research is more readily accepted.
SA: More research centres like ours have been created, including in Copenhagen, and we’ve been getting better at giving the ‘elevator pitch’ about what we’re doing and why it’s important. But in general academia, things haven’t changed much. AI specifically is a complicated subject, because there’s still a lot of mistrust in how the machine learning community views this kind of research. It’s partly due to bad initial attempts at communicating with this audience, attempts that were seen as fear-mongering; but it’s getting better now.
SB: We also have the advantage of being able to say to AI researchers and developers that they need to make their research safe, but that their findings could also be part of the solution when it comes to other existential threats. The difficulty is in getting people to take seriously the fact that when you start doing risk-risk trade-off analysis – instead of the more traditional risk-benefit trade-offs – things can quickly start to look very scary. We’re working on difficult problems, and in a way they need to be solved all at once. But we can tell them that AI isn’t just a problem; it can also be a big part of the solution.
Let’s talk about your first book choice. It’s a short novel called The Last Children of Schewenborn (sometimes abbreviated The Last Children), written by Gudrun Pausewang in 1983. This is one of the darkest books I’ve ever read; it tells the story of a family trying to survive the aftermath of a nuclear war.
SB: I was going to recommend When the Wind Blows, a 1982 graphic novel by Raymond Briggs, but one of our colleagues recommended that we choose this one instead.
LS: We chose it because of its unrelenting grimness. It’s apparently often assigned to school children in Germany, which I find very surprising, because it must be very hard for a teacher to talk about it to pupils. It’s the kind of book in which nothing nice happens – which makes it very realistic. Quite often in (post-)apocalyptic stories, you intermittently see some rays of hope, but not here. It’s also quite didactic, a sort of cautionary tale about the consequences of nuclear weapons.
“We tend to forget just how many nuclear warheads the US and Russia used to have during the Cold War”
SA: It’s extremely moralising. But in the epilogue, the author actually writes: “I have depicted the disaster and its consequences as less catastrophic than they presumably would be in reality, since I had to allow for a survivor who would later be in a position to talk about what had happened.” It’s also very interesting in how it brings us back to the Cold War. We tend to forget just how many nuclear warheads the US and Russia used to have. The threat of nuclear weapons was an everyday reality for people: if a nuclear war had broken out, every major population hub would have indeed been targeted. This is no longer the case, although a large-scale nuclear war would still wipe out everyone because of nuclear winter.
Do you think the risk presented by nuclear weapons has diminished since the Cold War?
SA: Yes. There is a much broader knowledge about nuclear winter being a plausible scenario, which wasn’t the case during the first three decades of the Cold War. Russia is broke, and China is not very interested in nuclear weapons. Nuclear terrorism is the thing that seems to scare people now, but we wouldn’t classify that as an existential threat because a detonation would likely be circumscribed to one geographical area.
So would you say that the current levels of nuclear armament are satisfactory, from a game-theory perspective?
SA: It’s a tricky situation, and it’s difficult to admit, but once we’ve invented nuclear weapons, we really just need to govern them responsibly and prevent mutual destruction. But if we get one side to disarm without the other, we end up in a far less stable situation. When a government is being a responsible custodian of nuclear weapons, it is generally making the world a safer place.
SB: When I came to CSER I was very much a unilateralist in terms of nuclear disarmament. I still think there’s a strong case to be made that Britain should disarm by itself, because the UK could set a useful precedent for developing nations, to break this link between being taken seriously and having nuclear weapons. The smaller countries could do that, and it would be valuable. But having been through these arguments and thought about the problem a lot, I don’t see any great value in the United States or Russia disarming unilaterally. As an alternative, some scientific papers have argued that we could relax the legislation on biological weapons – whose long-term consequences may not be as bad as a nuclear winter – in order to keep the logic of deterrence while convincing all superpowers to renounce their nuclear weapons. Something we do worry about in existential risk research is the accidental triggering of nuclear weapons.
An accidental triggering that would lead to a domino effect presumably? A sort of Dr. Strangelove scenario?
SB: Exactly. There are between 15 and 20 reasonably well-documented historical examples of near-misses of that type. There were people who had the authority to press the button, and had orders to press it under certain circumstances, but decided not to. Of course we should all be very grateful to these people, and it’s interesting to realise that in all of these examples, they seemed to be able to use their critical faculties rather than simply pressing that button.
Staying on the subject of post-nuclear tales, let’s talk about your second choice, A Canticle for Leibowitz, written by Walter M. Miller Jr. and first published in 1959. What is it about?
HB: Before the beginning of this novel, a nuclear war destroys civilisation and is followed by what’s called the ‘Simplification’, a backlash against the Enlightenment, science, and culture, leading to most people becoming illiterate. And the novel tells the story of society recovering from that disaster, and how a small core of pre-deluge civilisation is preserved and protected through centuries of rebuilding. It touches on many questions about rebirth, and whether history follows an endless cycle of dark ages, middle ages, and renaissance. The reason I chose it is that there are many things that would kill humanity just outright, but a lot of what we do in existential risks is study scenarios of civilisational collapse: humanity doesn’t completely disappear, but is reduced to small groups or bands wandering around. In those scenarios we can try to imagine whether humanity would recover, and how fast it would be able to do so. Some questions are difficult to answer: if humanity does end up recovering from its collapse and goes back to its previous level of cultural and technological development, should we be fine with that?
The story evolves over multiple centuries, all the way to the year 3800. That’s actually a feature of most of the books you’ve chosen.
SB: Yes, and it’s also part of the reason we chose The Last Children: it takes a long view of nuclear scenarios, and looks at what happens months and years after such a disaster. All of these books are willing to take the long-term perspective; this is something we’re always encouraging people to do. One of the reasons you should read these novels if you’re interested in existential risks, is that they really challenge you to think about what civilisation looks like in the very long term. It’s so easy to get hung up on problems we face and hear about every day on the news, but on the scale of history, many of those problems will be nothing more than footnotes.
“If humanity recovers from its collapse and goes back to its previous level of cultural and technological development, should we be fine with that?”
HB: A Canticle for Leibowitz invites the reader to take this long-term view, by making the point that a civilisational collapse, as long as it is followed by a renaissance, doesn’t really matter in the grand scheme of things. That little ‘blip’ won’t matter at all over millions of years. I don’t necessarily agree with that argument myself, but it’s certainly an interesting one to think about. Another question it touches on is the problem of the Great Filter. The Great Filter is one suggested solution to the Fermi Paradox, which says that given what we know about the probability of intelligent life in the Universe (estimated mainly through the Drake equation), we should have spotted intelligent extraterrestrial civilisations a long time ago. The solution proposed by the Great Filter hypothesis is that all civilisations have to pass through a sort of filter, which leads to either their survival and expansion, or their complete collapse; and of course the theory is that most civilisations don’t pass the filter, which explains why we seem to be alone in the Universe. For example, one could theorise that every sufficiently intelligent civilisation in the Universe evolves until it discovers nuclear power, and ends up killing itself with nuclear weapons. The main question for humanity then becomes: is the Great Filter behind us (and we’re ‘fine’ now, as humanity will most likely survive for a very long time), or is it ahead of us, residing in those existential threats?
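For reference – a brief sketch rather than anything the interviewees spell out here – the Drake equation mentioned above estimates the number $N$ of detectable civilisations in our galaxy as a product of factors:

$$N = R_* \cdot f_p \cdot n_e \cdot f_\ell \cdot f_i \cdot f_c \cdot L$$

where $R_*$ is the average rate of star formation, $f_p$ the fraction of stars with planets, $n_e$ the average number of potentially habitable planets per star with planets, $f_\ell$ the fraction of those on which life appears, $f_i$ the fraction that go on to develop intelligence, $f_c$ the fraction that produce detectable signals, and $L$ the length of time such signals are emitted. Even moderately optimistic values for these factors yield a large $N$, and it is this gap between expectation and observation that the Great Filter hypothesis tries to explain.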
SA: Scenarios like this one also make one realise the complexity and fragility of the systems that keep civilisation in place and thriving. We don’t notice those systems most of the time, but it takes a whole lot of effort to keep supermarket shelves stocked on a daily basis, around the world. How far are those systems from tipping over? And can we make them more resilient? These are interesting questions because they’re disconnected from a particular type of existential threat; they’re about making our civilisation more resilient against all possible risks, and making sure we’re able to bounce back.
The measures taken in A Canticle for Leibowitz to preserve humanity and its knowledge are purely reactive, as is the case in other fictional works like Dr. Strangelove or Horizon Zero Dawn. But another famous work of science fiction, the Foundation series by Isaac Asimov, takes the problem from another angle: in it, humanity works in advance to prevent a future collapse of civilisation. Interestingly, only a small portion of the research in your field looks at such preventative measures and fail-safes (a few papers have been published on the potential benefits of permanently staffed bunkers, submarines, or extraterrestrial bases). Is that because it’s more scientifically rational to prevent threats from becoming real, or is it due to an optimistic bias that leads researchers to focus on avoiding disasters rather than recovering from them?
LS: We do have a lot of gene banks, in Svalbard for example, so that’s one aspect that is considered fairly robust in terms of preserving things like crops.
HB: In the 1950s and 60s, when people were very scared of nuclear war, there were lots of things published on shelters, contingency plans to preserve a minimal government, etc. There’s less being written on the subject now. My view is that this kind of research is less useful – there are libraries everywhere, and our knowledge is very well stored and preserved. And even after a collapse, physical artefacts like cars, fans or radios could be reverse-engineered to understand their inner workings.
SA: It wouldn’t work for everything though. If all nuclear scientists were to die at the same time, it would probably take us a while to get back to our current levels of understanding of nuclear energy.
SB: There is a very good book called The Knowledge: How to Rebuild Our World from Scratch by Lewis Dartnell, in which he goes through everything that would be needed to rebuild civilisation, including chemistry, agriculture, electronics, etc. But in his last chapter, which really stayed with me, he argues that the one thing you’d need to rebuild civilisation quickly is the scientific method. Until a few centuries ago, so much time, effort, and energy was wasted improving things in the dark, with people eventually getting lucky and stumbling on random improvements. But if you only take into account what was rigorously discovered through the scientific method, you could collapse a lot of the history of human thought and development into a relatively short period of time. When people talk about surviving and building bunkers, what they often suggest putting into the bunker is people, seeds, etc. But actually what we really need to put into the bunker, or ensure it survives in some way, is our rationalism, in the form of the scientific method. If civilisation collapses and everyone resorts to superstition, our chances of recovery are far lower, regardless of the physical resources we’ve preserved. You also mentioned the possibility of an optimistic bias, and I think there definitely is one, for several reasons. First, there is a psychological bias which makes us prefer thinking about things being fine rather than going badly. There’s also a statistical reason, which is that things going very badly is quite unlikely, at least in the near future. At the moment there are so many risks around that an extinction might happen in the next century; but it almost certainly won’t happen next week. That penalises existential risk research in general, because these are very rare events, and human beings are bad at judging priorities based on total expected impact, as opposed to the ‘most likely scenario’.
“What we really need to put into the bunker, or ensure it survives in some way, is the scientific method”
SA: There is also a simplicity bias. It’s quite easy to think about resilient systems for catastrophes like volcanic eruptions or earthquakes, both because the risk can be calculated, and because they’re quite entertaining to talk about. The whole industry of disaster movies is founded on that premise. But on the other hand, we too rarely think about the dangers of very boring systems collapsing. If the entire sewage system was to break down in a major city, the consequences would be really bad; but that’s quite a boring thing to study, and an even worse movie to make.
HB: A Canticle for Leibowitz ends on a note that is both quite depressing and surprisingly hopeful, and I think that characterises the entire field of existential risks quite well.
Let’s move on to Cloud Atlas, a 2004 novel by David Mitchell. Yet another story spanning multiple centuries; it tells the intertwined adventures of different groups of people, all the way from the 19th century to a post-collapse world.
LS: I read this novel before I even knew what existential risks were; I’ve loved it for a long time. I really like the narrative structure; it’s a technical and stylistic masterpiece. What made me pick it for this subject is that it starts off being not overtly apocalyptic, but there are little signs announcing that something is coming. There are little Easter eggs that slowly lead to the central section of the novel, dedicated to a post-catastrophe world set in the far future.
SB: This is the perfect book if you want to read about human extinction but you still need to be ‘seduced’ into it. If you start with The Last Children, your exploration of existential risks might be cut short by the very depressing aspect of the story. Cloud Atlas is cleverly written, because its unusual narrative structure means that if you want to know what happens to the half-finished stories set in past centuries, you have to go through the middle section dedicated to the catastrophe.
LS: The six stories contained within the book are very interesting in the way they’re nested together, almost like a literary Matryoshka doll. And they’re also completely distinct in style: you have a hard-boiled 1970s California cop drama, but also a 19th century Herman Melville-type elegiac story. I really loved this book; don’t watch the film though!
SB: The book also has a nice coda at the end: the character from the 19th century story actually writes about human extinction, and what he thinks is going to happen to humanity in the future. He predicts that if we keep on being too greedy, or try to outdo one another instead of cooperating, then there is no way that civilisation will carry on forever without causing a massive catastrophe. Interestingly, New Scientist wrote an editorial in early 2018, basically saying that existential risk research was great but that researchers kept focusing on ‘traditional left-wing’ issues about capitalism, overpopulation, and environmental concerns. They were asking if this wasn’t politicising the research. We had a discussion at CSER about it. But my view is that if you build society on a set of premises about what it should strive for – premises which are traditionally, though not universally, seen as right-wing – such as short-term profit maximisation, ‘creative destruction’, individualism, the externalisation of risks, and so on, then you can’t run that model forever and not end in some kind of disaster. It’s not an attack on anyone for being right-wing; all this is saying is that the long-run equilibrium of that model will almost inevitably include some kind of global catastrophe, so any system built on it is likely to be very unstable. It doesn’t mean we should all be Marxists or socialists – those models are also flawed – but it does mean that there is indeed a problem with the current way we operate in our world.
Your fourth choice was Climate Shock: The Economic Consequences of a Hotter Planet, written by Gernot Wagner and Martin Weitzman. This is the only non-fiction book in your selection. Why did you choose a book about climate change, an issue that is already very much in the spotlight?
SB: The reason we dedicated our only non-fiction choice to climate change is that at the end of the day, if you’re interested in existential risks and wondering what you can do to prevent them, then probably the one direct threat you can do something about is climate change. People often ask us what the most dangerous existential risks are. If you want something that could definitely leave us all dead, that would be AI: if an AI becomes superintelligent and decides that humans are a waste of space, it’s over. If you’re interested in something that could happen tomorrow, that would be nuclear weapons: it only takes one person to press the wrong button. But if you want to work on the thing we’re currently running fastest towards, it is catastrophic climate change. There is a lot of complacency about the likelihood of extreme climate change. The uncertainty in current climate models means that we could very easily aim for a global increase of 2°C or lower, but actually get 6°C instead. People are always going to be pushing for the most optimistic predictions, but there are many tipping points and feedback loops that could drive climate change to levels that would be truly catastrophic for us. So we’re driving very fast, in the dark, and we need to do something about that. For most people out there, this is the risk they can do most about, in particular by changing how they vote and what they expect from policy-makers. If people demand that politicians talk about climate change, stuff will happen. If people don’t care, then it will get taken off the agenda again and stuff will not happen. Ordinary individuals can have a big say in that, if they want to. There are also issues about individual lifestyle and consumption, and our perception of what is ‘normal’. Unfortunately not everyone has such a big role to play in something like AI safety.
“If you want to work on the thing we’re currently running fastest towards, it is catastrophic climate change”
SA: If you think of a map of the world, you can identify the geographical centres that are most contributing to each risk. For nuclear weapons, it’s mainly military bases where warheads are stored, and the chain of command that would lead to their use. If you look at AI, you can look at some key research labs and data centres in California, London, China, etc. For bioengineering, again geographical sources of risks are fairly circumscribed. But if you look at climate change, then suddenly almost the entire world, both developed and developing, is part of a very distributed source of risk. This means that any individual anywhere on the planet can act regarding climate change, and take partial ownership of the problem.
LS: For a non-fiction book on a quite heavy topic, Climate Shock is very readable.
SB: Yes, the fact that it’s about climate change makes it much more accessible. And actually, one of its chapters is a very gentle introduction to existential risks.
SA: There are some fairly good and popular books about AI, such as Life 3.0 by Max Tegmark, or The Technological Singularity by Murray Shanahan. But then what? Unless you go on to do a PhD on AI safety, those books wouldn’t really make you behave any differently after you’ve read them.
Unless educating people about the issue could help reach a critical mass of worried individuals, which could have an impact on decisions made by researchers and policy-makers?
SA: Maybe, but if you look at the situation with nuclear weapons, there was once mass literacy about the potential effects and dangers, and very large mobilisations by people worried about it. It certainly did play a role, as did Hiroshima and Nagasaki. But ultimately if the decision power is centralised in the hands of a few people, there isn’t that much that the public can really do.
Finally, you chose The Dark Forest by Liu Cixin. It’s the second novel in a series called Remembrance of Earth’s Past (often known by the title of the first novel, The Three-Body Problem). What is it about?
SA: In the first book of the trilogy, science basically stops working on Earth, and there’s a big puzzle as to why. Particle accelerators start giving random results, and a bunch of scientists commit suicide. It is then revealed that an alien civilisation is at the origin of those events. These aliens themselves are going through a systemic collapse, and they create an AI that they send across space to take control of another civilisation.
LS: So interestingly, in this book, the alien civilisation is experiencing an existential risk, and by using its technology to prevent it, it creates an existential risk for humanity.
SA: Exactly. And all of this is the setup of The Dark Forest, in which humanity realises the scale of the potential risk, and learns that it has 400 years to prepare before the alien fleet arrives. They know that the adversary is more technologically advanced than them, but not by how much. And they can’t do fundamental scientific research anymore. So the story says a lot about the massive advantage conferred by greater technological advancement.
LS: It goes back to what Simon was saying earlier, about research and the scientific method being our most precious resource.
SB: And humanity doesn’t even know what the alien side’s technology is, or what it should be catching up on.
SA: Without revealing too much, the novel sort of makes the broad point that, as a species, it’s hard science, cold game theory, and consequentialist reasoning that will keep you alive. The ‘fluffy stuff’ like human love, morality and ethics won’t save you at all.
LS: But you need the cold, hard reasoning to preserve the fluffy stuff.
SA: Interestingly, this book is among the most popular science fiction coming from China right now. It showcases the Cultural Revolution at the beginning of the first novel, most characters in the book are Chinese, and there is some uncharitable yet very accurate portrayal of Western democracy and the inefficiency of the United Nations. But it doesn’t glorify the planned economy either. It simply makes the point that humanity is fairly weak, and that if we’re ever faced by a really big threat, we most likely won’t survive it.
In the end, is your main message that people should read and think about those risks, not necessarily to do research themselves, but to realise how important the subject is?
SB: Eliezer Yudkowsky, who runs the non-profit Machine Intelligence Research Institute, says it’s really important to realise that there’s no natural law that says our civilisation can’t collapse and our species can’t go extinct. It is a real, live option on the table. It really could happen. Just as you, as an individual, could suddenly die this afternoon, humanity could suddenly disappear, and all the ingredients necessary for that to happen already exist. Being able to accept that fact without looking away from it, and then to do something about it – that is the message one would hope people might take away from these books. That’s one of the reasons why it’s worth exploring existential risks through science fiction and novels, rather than just through non-fiction books: all of the people in these stories have to engage with these problems, realise the mess they’re in, and decide how to respond. We need more people who are willing to do that; taking these issues seriously but not just getting depressed or angry, and instead actually doing the cold, hard thinking about what can be done.
The Centre for the Study of Existential Risk is an interdisciplinary research centre within the University of Cambridge that studies existential risks, develops collaborative strategies to reduce them, and fosters a global community of academics, technologists and policy-makers working to safeguard humanity. Its research focuses on biological risks, environmental risks, risks from artificial intelligence, and how to manage extreme technological risk in general.