Politics & Society

The best books on Longtermism

recommended by Will MacAskill

What We Owe the Future by Will MacAskill

There is so much suffering in today's world that it's hard to focus attention on future generations, but that's exactly what we should be doing, says Will MacAskill, a leader of the effective altruism movement. Here, he introduces books that contributed to his thinking about the long-term future and the "silent billions" who are not yet able to speak for themselves.

Interview by Sophie Roell, Editor


I think of you as closely associated with effective altruism, but today we’re talking about longtermism. Is that about extending effective altruism into the future and the many billions of people who don’t exist yet?

Yes, exactly. Effective altruism is about asking, ‘How can we do the most good that we can?’ Longtermism is at least a partial answer to that, saying, ‘Of the things that might happen in our lifetimes, the things that are among the most important are those that will impact not just the present generation, but will be pivotal in the shaping of the entire future of humanity.’

But if you view an AI going rogue as the biggest danger to the future of humanity (say), and you want to focus your time or money on preventing that, isn’t that going to conflict with preventing children from dying of malaria? How do you reconcile the two?

The fundamental premise of effective altruism is taking seriously the fact that we are in this morally horrific situation. Even if you put aside longtermism for the moment, there are just so many problems that we are facing. If you’re preventing children from dying of malaria, you’re not using your time or money to help women get a better education, or to prevent the risk of a war breaking out, or to improve democratic institutions. Even within malaria, if you’re choosing one country, you’re not helping people in another country. We are in what I regard as the mother of all philosophical thought experiments right now, where there are just innumerable people that we could help, and we need to prioritize. Do you prioritize suffering that is going on right now? Or do you try to do things that will have a very long-lasting impact, benefiting not just the present generation but also many generations to come? It’s a super hard question and it’s why I wrote this book. I think if you’re taking the interests of future generations very seriously, the case for the latter view is very strong.

And I suppose you also wrote the book because you felt that that case hasn’t been made enough? We’re aware of the many problems facing people in the world today; future ones are harder to focus on.

Absolutely, and there’s a systematic reason for that, which is that future people are voiceless. They’re disenfranchised. They can’t write articles or represent themselves or tweet or lobby. They don’t have a vote. That means that we systematically underappreciate the moral importance of our descendants.

A few months ago, I spoke to a biologist called Christopher Mason who specializes in the effects of space travel on the human body. He is extremely concerned about what will happen when the Earth is engulfed by the sun and thinks it’s a categorical imperative that we prepare for that. Where does longtermism end and science fiction begin?

There’s a story—I don’t know if it’s apocryphal or not—of a student at a science lecture. The lecturer explained that in about 5 billion years the sun will use up its hydrogen reserves, expand, and sterilize Earth. Then he moved on. One student got very worked up and asked, ‘Did you say the sun will kill everything in 5 million years?’ And the lecturer said, ‘No, no. In 5 billion years.’ And the student said, ‘Oh, that’s okay then.’ It’s interesting that people worry about this, but I think there are much more pressing, near-term threats to the continued survival of civilization than the expanding sun in 5 billion years’ time. If we make it that long, I think two things. First, we’ll have done really pretty well to have navigated the next few centuries. Second, if we’ve survived as long as 5 billion years, we will be able to move to other solar systems, which will give us many trillions more years.

Let’s look at the books you’re recommending for thinking more about longtermism. Do you want to start by saying how you’ve chosen them?

Some are about how to think. So The Scout Mindset and Superforecasting are about ways of improving how we reason. The other three books—Moral Capital, The Life You Can Save and The Precipice—are about particular actions we should take to make the world better, or at least relevant to them. I’ve chosen them as books that have been impactful on my thinking when I’m considering the question of how we do the most good.

Let’s start with the historical one, about the end of the slave trade. It’s called Moral Capital: Foundations of British Abolitionism. Tell me more about this book and how it fits into your thinking.

This is by a leading historian, Christopher Leslie Brown. It’s the story of the British abolitionist movement, which included North America at the time. In particular, he argues that the abolition of slavery was a quite contingent event: something that could easily have never happened. When I first heard this idea—from the historian who was consulting for my book—I just thought, ‘Wow, this is mad. This is just such a wild idea. Surely the abolition of slavery was more or less inevitable?’ But I really came around to having a lot of sympathy for Professor Brown’s view. The abolition of slavery was not the inevitable result of economic changes. Instead, it was a matter of changing cultural attitudes.

Then there’s a further question: Was it heavily dependent on the particular campaign that was run? That view has more going for it than you might think as well. The Netherlands was also an extremely well-off, industrializing country, and there was no abolitionist movement there. There was one attempt to get a petition going and it had very little impact. So I think it’s not at all a crazy view to say that if a particular cultural movement had not happened, we could be living with widespread, legally permitted slavery today.

“We systematically underappreciate the moral importance of our descendants”

Now, what does that have to do with effective altruism and longtermism? It’s relevant because changing the values and moral beliefs of a society is one of the most important, long-lasting and also, in a technical sense, contingent things you can do. It really could go either way; you’re not pushing on a door that’s going to open anyway. Improving the values of the day is one way people can have a positive long-term impact. That might mean extending the circle of concern and compassion towards people in other countries and taking their moral interests more seriously, or towards non-human animals and future generations.

One point you make in your book which is so interesting to think about is that if another world religion, one that frowns on eating animals, had taken off instead of Christianity, we might all be vegetarians.

It’s a striking question, but if the Industrial Revolution had happened in India, perhaps we would look at factory farming as this horrific, impossible, dystopian scenario. It’s obviously hard to tell because it’s a counterfactual, but it seems plausible enough to me.

Let’s move on to one of the books you’re recommending that help with critical thinking. It’s called The Scout Mindset and it’s by Julia Galef, host of the Rationally Speaking podcast. Tell me a bit more about why you chose this book.

I put The Scout Mindset on the list because the ability to reason very carefully and have a curious, truth-seeking attitude is of enormous importance. We can be pretty good at it for issues that are not very high stakes. If you’re learning about a topic in school and it doesn’t have to do with anything that’s really facing you, it’s easy to be impartial. Then, when you’re talking about very morally sensitive, high-stakes, life-or-death issues, it can be much harder to have what Julia Galef calls a ‘scout mindset.’ This is about seeing all the different views, deciding how much weight you should give to each, understanding other points of view. It’s much easier to get into a mindset where you say, ‘Look, lives are on the line, this is my cause, I want to defend this view at all costs.’

When it’s high stakes, it’s even more important to ensure that we keep this open mind, because if you focus on the wrong priority, then you’ve done less good. Perhaps you’re helping 10 people, whereas you could have helped 100. Perhaps you’re saving the lives of people who would otherwise die instead of preventing an enormous catastrophe that would in fact kill everybody. When it comes to the best ways to help, the stakes are so high that we have to have correct views, even if that can feel uncomfortable. That’s why I’ve chosen this book.

It’s interesting you say it could feel uncomfortable, because there is something heartless about utilitarianism. It goes against your instinct; you want to help a few people close to you rather than 100 strangers.

Yes, it can be easy to get into the mode of, ‘Look, here’s a person in front of me. I’m going to help them.’ This is the soldier mindset that Julia talks about. You want to defend them at all costs. That’s a very natural, very understandable reaction. However, if you want to do the most good, that requires your ability to reason, to take and channel those moral emotions, even in a way that can feel unintuitive.

The next book on your list is The Precipice by Toby Ord.

The Precipice is a book I regard as a complement to What We Owe the Future. It makes the case that there is a serious chance of an existential catastrophe in our lifetime: an event that would permanently foreclose on all of humanity’s future potential. Such risks include the extinction of humanity by engineered pathogens, or a takeover by rogue AI—systems that have become more intelligent than humans but don’t share our goals—as well as more familiar events like asteroids, supervolcanoes and so on. He gives us this amazingly detailed and balanced account of those different existential risks and what we can do about them. I think his case is fairly compelling.

He actually puts a number on it, doesn’t he, that it’s a chance of one in six?

That’s right. We actually talked about that, the choice to put a number on it, before he published the book. He didn’t think it would be a big deal, but I thought that a lot of people would talk about it. He’s not saying it’s an objective chance. It’s not that he’s very confident that it’s one in six, but that’s his best guess. And, honestly, my estimate doesn’t differ that much from his.

This might be the moment to go on to your next book, Superforecasting by Dan Gardner and Philip Tetlock. This is about “the art and science of prediction.”

Yes, this is very relevant to the one in six question. We talked about the importance of the scout mindset, of thinking clearly, of trying to have beliefs that reflect reality. Once we start thinking about issues that are not just happening now but over the coming years or even decades, that gets particularly challenging. There’s a long track record of people making predictions about the future that are hilariously wrong, both in too optimistic a direction—where they say, ‘In the year 2000, we’ll be walking in spacesuits on the moon’—and in too pessimistic a direction. J. B. S. Haldane, one of the early futurists at the beginning of the 20th century, made many great predictions. But he also said it would be 8 million years before there was a return trip to the moon. That was only a few decades before there was one.


How should we reason in the face of uncertainty? Forecasting is this discipline, this art and skill of getting better and better at making predictions about the future. The key thing is starting to reason in terms of precise probabilities. You might ask, ‘What’s the chance of x happening in our lifetime?’ and people will say, ‘It’s unlikely.’ That’s very vague. I don’t really know what ‘unlikely’ means. Is there a 40% chance x might happen? 10%? 1%? These are really big differences.

The approach of forecasting is to make very precise predictions and then, over time, you see which are correct and which are incorrect. They have developed a whole set of skills for improving the way we reason about probabilistic matters. The book is very helpful for thinking about things that are intrinsically uncertain, questions like, ‘When will we develop human-level artificial intelligence?’ Or ‘Will there be a third world war in our lifetime?’

So is forecasting a formal discipline now, trying to predict the future more accurately?

Yes, you could think of it as a nascent field of forecasting studies, within economics and psychology in particular.

Forecasts have a reputation for being wrong. You mention that in your book, that we have a tendency to write them off because we don’t think they’ll be accurate anyway. 

Exactly, but that’s the wrong way to think about things because it’s always a matter of doing better or worse. Here’s one example. Metaculus is a forecasting platform that overlaps a lot with the effective altruism community. In 2015, there was a forecast about the probability of a global pandemic between the years 2016 and 2026 that would kill at least 10 million people. Metaculus put the odds at one in three. Now, if the world’s decision-makers and political leaders had internalized a one in three chance of a global pandemic of that magnitude within the coming decades, we would have prepared for the pandemic that did occur much better than we did. We weren’t thinking probabilistically. We thought, ‘Nah, it won’t happen.’ One in three is not that high a probability, but it’s certainly enough to prepare for.

Lastly, let’s talk about The Life You Can Save by the Australian American philosopher Peter Singer.

This was a book that was very influential on me all the way back in 2009 and got me inspired to work on the problem of extreme poverty. I’ve been arguing for a longtermist worldview, focusing on civilization-impacting events, but there are major problems affecting the near term too. Every year, hundreds of thousands of children die unnecessarily of malaria, of diarrheal disease, of tuberculosis. You can do an enormous amount of good; it literally costs only a few thousand dollars to save a life. When we think about doing good in the world, we should appreciate how much our money can do. I included this book because whenever we’re engaging in prioritization or thinking about the big problems of our time, this is something that should really be borne in mind.

Peter Singer also inspired me to give away most of my income. It can do a lot more good for other people than it can for me.

In terms of the long-term challenges you address in your book, which do you think is most worth somebody’s time? Say there’s a polymath reading this who could focus on anything, what would you want them to do?

If they could do absolutely anything, I think the thing that I would most want them to focus on is AI governance. AI is already the fastest-moving technology at the moment. There’s competition between the big powers, including the US and China. It’s clear that we need to navigate this technology carefully, because the upsides are enormous and the downsides are very great. I do think that very advanced artificial intelligence could be one of the most important technologies ever. How should you govern it? What are the right policies to put in place? Honestly, I don’t really know. That’s why, if I’ve got someone who is a polymath and super smart, I would love to have more attention on that, to ensure that we reap the benefits without paying the costs.

How much attention is being devoted to it right now? Are there any incentives for people to be looking at AI governance? Presumably, there’s not much commercial benefit to focusing on it.

Everyone is going to have different incentives. That’s why I particularly want to see more altruistically-motivated people, perhaps philanthropically funded, working on it because they can have a truly impartial perspective. Predictably, the US government is going to have the United States’ interests at heart. The leading AI labs will have their own interests at heart. People working in the AI labs take the concern seriously—they want to have a governance framework such that we have the upsides of AI without the downsides—but, at the same time, I do think having people who are operating in think tanks or going into government is especially valuable. They’re more concerned about how things go for the world as a whole rather than any particular interest group.

Interview by Sophie Roell, Editor

August 15, 2022

Five Books aims to keep its book recommendations and interviews up to date. If you are the interviewee and would like to update your choice of books (or even just what you say about them) please email us at editor@fivebooks.com

Will MacAskill

Will MacAskill is a Scottish philosopher, ethicist, and one of the originators of the effective altruism movement. He is an associate professor at the University of Oxford, where his research focuses on the fundamentals of effective altruism—the use of evidence and reason to help others as much as possible with our time and money—with a particular concentration on how to act given moral uncertainty. He is director of the Forethought Foundation for Global Priorities Research and a co-founder and president of the Centre for Effective Altruism. He also co-founded 80,000 Hours, a Y Combinator-backed non-profit that provides research and advice on how you can best make a difference with your career, and Giving What We Can, a global community of effective givers best known for the Giving What We Can Pledge to give 10% of lifetime income to the most effective charities.