The best books on Using Data to Understand the World
recommended by Edouard Mathieu
Even as more and more data becomes available, many of us have a view of the world that doesn't correspond to reality. On probabilities in particular, people tend to be completely clueless. Here Edouard Mathieu, Head of Data at Oxford-based research group Our World in Data, recommends books to help readers not only use data to better understand the world, but also make better decisions in daily life.
Today we’re going to talk about using data to understand the world. This is part of data science, which has become very popular, with lots of study into it and books about it. Tell me a little bit about the subject you’ve chosen.
Over the last 10 to 20 years, we’ve seen a big paradigm shift in how we’re able to use data to understand the world. This can mean several things. There’s a big trend that many people are talking about, which is data science. It’s the idea of using data—lots of data, usually—and analysing it in order to get insights that we previously could not have about the world. Some of the books I picked are about that.
But some of the books I picked are also about the good old ways to understand the world by looking at data in a simple way. For example, Factfulness, one of the books I chose, is very much about that. It’s not about fancy machine-learning techniques or anything like that. It’s about what the available data—that we’ve had for decades—can tell us about the state of the world.
And I think these two different facets of the problem are very important. There’s this idea of, ‘Sure. Let’s use what is at our disposal now with machine learning to understand the world in ways that we previously did not do.’ But let’s also remember that more and more data is not necessarily synonymous with better data and that we should make sure that we look at the data correctly, even if it looks simple at first glance, to get an accurate picture of how the world around us is.
Okay, so shall we start with Factfulness? It’s been recommended before on Five Books, not in relation to data, but in an interview about books on critical thinking.
Yes, that’s a good one to start with. Factfulness was written by Hans Rosling with his son, Ola Rosling, and his daughter-in-law, Anna Rosling Rönnlund. Hans Rosling died in 2017, and the book was published a year later.
Factfulness is a book that very much embodies what Hans Rosling tried to do in his lifetime. He was a Swedish doctor and academic, who became quite famous because of his TED Talks where he presented, in a very entertaining and insightful way, the state of the world. In particular, he became famous for telling people that they kept getting everything wrong. He used to test Swedish students, politicians, and journalists, and he consistently found that people had a very warped and biased view of the world. In general, what he found is that people did not have an accurate sense of how much progress had been achieved. People, on average, tended to be extremely pessimistic about the state of the world, whether it was about deaths from natural disasters, child mortality, or how many girls in the world go to school now.
Possibly the most important argument that Rosling makes is that the concept of dividing the world into “developed” and “developing” countries is outdated. Instead, he proposes we use a four-level model based on income per person. And contrary to what many people tend to believe, the majority of countries fall into the middle levels, and only a small number of countries are either very rich or very poor.
People consistently have this view that the world is in a terrible state, when actually, over the last decades and centuries, we’ve made huge progress. So this book, Factfulness, takes the idea of people getting things wrong and describes the actual state of the world as we know it today. It also gives some tips on how people can try to reshape their worldview to be more accurate. Anecdotally, Bill Gates liked the book so much that he offered to send a copy to any college graduate who requested it.
That’s interesting. What’s your next book?
The next book is called The Signal and the Noise and it’s by Nate Silver. Silver is an American who previously made a living as a poker player and worked as a baseball analyst; he has had lots of different jobs and projects. He became famous as a blogger after his site, FiveThirtyEight, predicted the American presidential elections in 2008 and 2012 very accurately, and in 2012 he published this book, The Signal and the Noise.
This book comes back to what I was saying before. As the title suggests, it’s about differentiating between ‘the signal’ and ‘the noise.’ Silver starts with the premise that yes, in the last 10-15 years, we’ve had huge amounts of data coming in and we’re now able to analyse it. But his view is that we should not focus too much on the noise but rather focus on the signal. That means making sure that we don’t just look at data in a random and haphazard way, but in an insightful way.
The book discusses the practical application of this in a variety of fields, including baseball, elections, weather forecasting, climate change, economics, and of course poker. It also emphasizes the importance of properly expressing uncertainty in statistical statements and the need to consider a range of probable outcomes rather than just single-point estimates. This can take various forms, but probabilities are a big one. A lot of the book is about that.
Silver describes perfectly what probabilities are, and I think that’s an important question that many more people in our societies should be able to answer: what do probabilities actually mean? Famously, there was—I’m not sure if scandal is the right word—but a big issue around 2016 and the Trump election. Nate Silver and FiveThirtyEight published a model that predicted a 30% chance of Trump winning. And still, to this day, there are many people who say that Silver was wrong because Trump was elected. A lot of this book is about explaining why that is a terribly wrong understanding of probability. If you say that something has a 30% chance of happening—which is roughly one in three—then it can definitely happen. If you roll a die, there’s a one-in-three chance that it lands on one or two, and surely that’s not impossible? What 30% means is that if we were able to rerun the 2016 American election an endless number of times, Trump would have won in roughly a third of them, and we just happened to live through one of those runs.
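This frequency interpretation is easy to check with a quick simulation. The sketch below is purely illustrative (it is not the FiveThirtyEight model): it reruns the “same” election many times with a 30% win probability and counts how often the event occurs.

```python
import random

random.seed(42)

def simulate_elections(p_win=0.3, n=100_000):
    """Rerun the 'same' election n times; count how often a 30%-likely event happens."""
    wins = sum(random.random() < p_win for _ in range(n))
    return wins / n

share = simulate_elections()
print(f"The 30%-likely outcome occurred in {share:.1%} of simulated runs")
```

The observed share lands very close to 30%: an unlikely outcome, but far from an impossible one.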
Another big thing about this book—which is a theme in all the books I picked: I think probably every book I chose at some point explains it, except maybe for Factfulness—is Bayes’ theorem. Thomas Bayes was an 18th-century statistician and philosopher who came up with an important result about how to reason from evidence. The actual equation of the theorem doesn’t matter here; it’s a little bit complicated, and it’s not the point. The reason why all these books describe Bayes’ theorem is that doing Bayesian statistics means reasoning iteratively: you start from an initial estimate and keep updating it as new evidence becomes available to you over time.
Let’s take an example: ‘What are the chances that I have COVID right now?’—a question all of us have asked ourselves a lot in the last few years. What a lot of people tend to do is swing between extreme versions of their view: they start by thinking they could never have COVID (roughly a 0% chance), but then one day they start coughing and instantly think, ‘Oh, I must have COVID.’
Under Bayes’ theorem, the correct way to do this would be to start with a baseline probability, which could be an estimate of how many people have COVID in the UK right now, which you can derive from government statistics. For example, this could be 3%. After this, when you get a new piece of information—for example, you start coughing—the idea underpinning Bayes’ theorem is that you should update your probability that you have COVID based on this new information. Maybe that probability goes up to 20%. That 20% is based on the fact that if you have COVID, there’s a good chance you’re going to cough… but coughing also happens for lots of other reasons! So instead of going straight to the worst-case scenario, under Bayes’ theorem, you simply update your view according to the available evidence. And with each new piece of information—for example, you start feeling feverish—you keep updating until you get to the latest and most accurate estimate of the probability of the event.
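That update can be written down directly with Bayes’ theorem. Here is a minimal sketch in Python: the 3% prior comes from the example above, while the two likelihoods (how often people cough with and without COVID) are invented for illustration, not real epidemiological figures.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(hypothesis | evidence) via Bayes' theorem."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

prior = 0.03  # baseline: ~3% of people currently have COVID (from the example)
# Assumed likelihoods: 90% of COVID cases cough, but so do 11% of everyone else.
posterior = bayes_update(prior, p_evidence_if_true=0.90, p_evidence_if_false=0.11)
print(f"P(COVID | coughing) = {posterior:.0%}")  # about 20%, as in the example
```

Each new symptom simply repeats the update, with the previous posterior serving as the new prior.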
Bayes’ theorem can work for lots of things—in your personal life, to think about current events or the probabilities in an election. In an election, you’d start with the probability of a Democrat or a Republican winning, you could start with a 50/50—or maybe something a little more fine-tuned, for example, based on which party is the incumbent. Then, with each poll that you see, you can slightly update the probability based on it, to get more and more accurate estimates. Just because you see one new poll that contradicts your previous estimates, you shouldn’t radically change your view. You should update your view a little bit based on each poll, but not too much.
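A sketch of that polling example, working in odds form: each poll multiplies the current odds by a modest factor. The factors below are invented for illustration; note how the one contradictory poll nudges the estimate down without flipping it back to 50/50.

```python
def update_with_poll(p_dem, bayes_factor):
    """Update P(Democrat wins) with one poll, working in odds form.
    bayes_factor > 1 means the poll favours the Democrat; < 1, the Republican."""
    odds = p_dem / (1 - p_dem) * bayes_factor
    return odds / (1 + odds)

p = 0.50  # start at 50/50
for bf in [1.3, 1.2, 0.8, 1.4]:  # three polls lean Democrat, one leans Republican
    p = update_with_poll(p, bf)
    print(f"After poll (factor {bf}): P(Democrat wins) = {p:.1%}")
```

Because the updates multiply, the order of the polls doesn’t matter: every piece of evidence shifts the estimate a little, and none of them decides it.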
Superforecasting is interesting because it’s more about the research side of the topic. It’s written by the journalist Dan Gardner and Philip Tetlock. Tetlock, a Canadian-American political scientist, conducted a lot of experiments to try and understand what makes some people better at making predictions. Much of his work was conducted with IARPA, the research agency of American national intelligence. Tetlock’s project was called ‘The Good Judgment Project.’ He recruited thousands of volunteers and tried to see, first, whether some people are consistently better than others at making predictions. And second, he tried to understand what happens if you train these people in good probability-estimation techniques. In other words, he tried to get them to think about the world not in the fanciest or most impressive way possible—he wasn’t trying to turn them into typical pundits—but in whatever way makes them the most accurate possible forecasters.
What he found is that in a whole range of topics, those he called ‘superforecasters’ tend to be, on average, much better at predicting events than the topic experts that he compared them with. When we listen to the radio or watch TV, we often see pundits who seem to have extremely strong views about something: for example, they claim with absolute certainty what’s going to happen in the war in Ukraine, or swear that a given politician is going to get elected in the next cycle. And because these people have a very assertive and confident way of telling us these things, we tend to believe them. Tetlock went to the effort of logging the predictions made by such experts, and he found that, on average, they were barely better than random guessing. But because no one ever checked back and asked them to account for their predictions, they just kept being reinvited to give those opinions.
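Tetlock’s tournaments scored forecasts with Brier scores: the mean squared error between a stated probability and what actually happened. A small sketch (the events and the numbers are hypothetical) shows why a confident-but-wrong pundit scores worse than a hedged, reasonably calibrated forecaster.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    0 is perfect; 0.25 is what always saying '50%' earns; 1 is worst."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 0, 1, 0]               # what actually happened (hypothetical)
pundit = [1.0, 1.0, 0.0, 0.0, 1.0]       # always certain, often wrong
forecaster = [0.7, 0.3, 0.2, 0.6, 0.4]   # hedged, reasonably calibrated

print(brier_score(pundit, outcomes))      # 0.6: worse than just guessing 50%
print(brier_score(forecaster, outcomes))  # 0.108: far more accurate
```

Because the penalty is squared, being certain and wrong is heavily punished, which is exactly why logging and scoring predictions exposes overconfident pundits.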
A lot of the book is about the story of this project, how he recruited those people, realized that there were some ‘superforecasters’ among them, and tried to group them together and see what that would do. It turns out that groups of superforecasters did even better than superforecasters alone. He also goes into smaller details about what it is that those people do to make themselves better—things like breaking down very large questions into smaller questions, thinking about base rates, thinking about all the information that is at your disposal rather than just the most recent piece of evidence, thinking in probabilities and not just about something being true or false, etc. It’s going back to a lot of the research that’s behind Nate Silver’s book. Nate Silver’s book is more pop-science: it’s much more accessible, and about practical applications in different domains. Phil Tetlock’s book is certainly not dry, but it’s more about the actual origins of this research.
Am I right in thinking that the superforecasting group is still running?
It’s very much still running. They still have the research project, the Good Judgment Project, and they also have a commercial spin-off called Good Judgment Inc. They run forecasting tournaments for clients and also maintain an online forecasting platform. We’ve been working with them a little bit at Our World in Data: they’ve picked 10 specific charts on our website and got their superforecasters to try and predict what’s going to happen to these metrics over the very long run—so 1, 3, 10, 30, and 100 years in the future—and they’re working on a report.
So we spoke about Nate Silver, who was a poker player previously. Your next book, Thinking in Bets, is by Annie Duke who is also a poker player. What is it about poker players that attracts them to data?
It’s an interesting question, and Annie Duke spends a good portion of the book trying to explain it. As you said, she is an ex-professional poker player who got very interested in cognitive behavioural decision science. She tried to become an expert on how people make decisions and what her poker years could potentially teach her about it, and then she wrote this book, Thinking in Bets. In it, she compares poker to chess. She explains that chess is an information-complete game. The two players who look at the board know exactly what’s happening, they have all the information at their disposal, and they know the complete state of the world. Of course, they don’t know what the other player is thinking, but no information from the game is hidden. In poker, it’s the opposite. You know the cards you have in your hands, but most of the information is hidden. You don’t know what the other players around the table have in their hands and you don’t know the cards that are about to be revealed. It’s very much an information-incomplete game.
Her theory—and I very much agree with her—is that life is a lot more like poker than chess. In life, you don’t have a complete picture of all the information you wish you could have, you’re not omniscient about the state of the world. You’re very much guessing things—about people’s intentions, about how things are going to play out. And so because life is much more like poker, you need to live life much more like a poker player and that means thinking in bets. Her idea is that if you’re going to “win” at life, you’ll need to think in terms of probabilities and use the techniques that poker players use to make better decisions.
One of the most important things she argues we should avoid is what she calls ‘resulting’. ‘Resulting’ is the idea of judging the quality of your thought process by the final outcome. So, for example, in poker, you would make a decision during the game, and sometimes it will end up in a bad outcome. But saying the outcome was bad is completely different from saying you made the wrong decision in the first place because there is a lot of randomness in poker. If, after the game, you think back carefully about the decision, and you still think it was the best given the circumstances, then you should think of this as a success!
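The distinction between decision quality and outcome quality is easy to make concrete with a bet’s expected value. The numbers below are invented for illustration:

```python
import random

random.seed(0)

# A hypothetical bet: 70% chance to win $100, 30% chance to lose $150.
P_WIN, WIN, LOSE = 0.7, 100, -150

expected_value = P_WIN * WIN + (1 - P_WIN) * LOSE
print(f"Expected value per bet: ${expected_value:+.0f}")  # +$25: a good decision

# Yet any single play still ends badly 30% of the time.
one_outcome = WIN if random.random() < P_WIN else LOSE
print(f"This time you happened to get: ${one_outcome:+d}")
```

Judging the bet by `one_outcome` is ‘resulting’; judging it by `expected_value` is evaluating the decision itself.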
In the same way, she gives the example of a company that hires a new CEO because it’s in a bad state. A year later, the board realises the company is in an even worse state, so they ask themselves, ‘Did we make the wrong decision?’ But actually, the fact that the company is in a worse state a year later does not necessarily mean that they made a bad decision by hiring that new CEO. Maybe with all the information they had at their disposal at the time, it was the best decision that they could have made.
Another example of where we see this behaviour a lot is in sports. After any football match, you will hear pundits spend hours analysing everything that happened and draw huge conclusions from it. They’ll make big statements about how the coach should be fired, or a player should not have been bought from another club, or a player should have been substituted at this point in the game. What they forget is that football is an extremely random sport; maybe the coach made all the right decisions, but even with the right decisions, he could not possibly have reached a better outcome because there’s just so much randomness.
I think this idea of avoiding ‘resulting’ is extremely interesting. Like a lot of ideas in Duke’s book, it comes from poker, but it can be applied in many areas of life.
So your last book is Hello World by Hannah Fry, who’s a really good communicator. You don’t need a lot of technical knowledge to understand this book. It was shortlisted for the Baillie Gifford Prize for Non-Fiction in 2018 and it’s been picked up in science book recommendations we’ve had as well.
Yes, I picked Hello World because, as you said, Hannah Fry is a very good communicator. She’s a British mathematician, though she’s become more of an author and communicator in her public life. This book, Hello World, is trying to summarise the state of everything related to data, machine learning, and artificial intelligence.
If we think more broadly about this idea of using data to understand the world, I specifically picked this fifth book because while the other books, like Factfulness and Thinking in Bets, are about how you, as an individual, can use data to better understand the world, it’s important to acknowledge that the trend in the last few years is very much not about you as an individual. Rather, it’s about computers using data more and more efficiently to understand the world. This has been the trend and I don’t see it stopping anytime soon—quite the contrary.
Because of that, it’s very, very important that people at least start understanding how these algorithms and computer models work. Hannah Fry goes through a whole list of examples of how models and algorithms are used. It includes things that can seem trivial, like marketing, or targeted ads, but also other things that are extremely important, like advising a judge on someone’s sentence or on whether they should be released from prison.
She goes through the potential problems that can be raised by algorithms—whether they’re technical problems, ethical problems, or political problems—but what I like about the book is that she’s very honest about the opportunities. On this topic of data and machine learning, there have been different waves of books over time. There was a first wave which was singing the praises of these techniques, trying to teach people how to use them, and making lots of promises about how this was going to change the world for the better. Then there was a second wave of books, which I think was needed, but was extremely critical, basically saying, ‘Hey! These algorithms that you’re being told are going to change the world for the better, well, they’re biased, they’re racist and sexist, and we should probably stop using them.’
What I like about Hannah Fry’s book is that it’s a ‘third wave’ book. She’s very honest about the fact that these algorithms have limitations and biases, but she analyses these limitations in very nuanced ways. In the past, there have been problems and scandals around some of these models. COMPAS is a big one. It’s the name of the computer model used in the US to advise judges on whether someone in jail should be released. COMPAS was heavily criticised in the US media for using an algorithm with outcomes that were biased against Black defendants. Fry explains that it’s true, but she also explains why you can’t have everything. There are many tradeoffs in the way you set up an algorithm, and you cannot reach perfect fairness without sacrificing accuracy in other ways.
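The tradeoff Fry describes can be seen in a small worked example (the numbers are invented, not COMPAS data): give two groups with different reoffending base rates a classifier that is equally precise and catches the same share of reoffenders in both groups, and the false-positive rates still come out different.

```python
def false_positive_rate(n, base_rate, tpr, ppv):
    """FPR implied by fixing the true-positive rate and precision for a group."""
    positives = n * base_rate        # people who will reoffend
    true_pos = positives * tpr       # reoffenders correctly flagged
    flagged = true_pos / ppv         # total flagged, implied by the precision
    false_pos = flagged - true_pos   # non-reoffenders wrongly flagged
    return false_pos / (n - positives)

# Same classifier quality in both groups: 80% true-positive rate, 80% precision.
fpr_a = false_positive_rate(n=100, base_rate=0.5, tpr=0.8, ppv=0.8)  # ~0.20
fpr_b = false_positive_rate(n=100, base_rate=0.2, tpr=0.8, ppv=0.8)  # ~0.05
print(fpr_a, fpr_b)
```

The group with the higher base rate ends up with four times the false-positive rate, even though the classifier treats both groups “equally” on the other two measures—this is the kind of unavoidable tradeoff Fry walks through.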
Fry is also very frank about the fact that using algorithms and computer models to advise us can be extremely good. She puts things in context. Assigning a prison sentence in the US, for example, used to come down almost entirely to a single judge, and a single judge can surely harbour just as much racial prejudice as a computer model, if not more. The fact that whether someone spends six weeks, two years, or 10 years in prison would be based purely on one human being’s assessment is surely not something we want to keep. She writes at some point that if she were arrested and tried for something she did, she would much rather have an algorithm decide her sentence based on something as close as possible to fairness, rather than a ‘random’ judge who might, for example, be sexist and give her an unfair sentence.
Hello World is a very interesting book that can teach people a lot about the way things are going to evolve in the future with machine learning and data, without being too optimistic, but also without being too pessimistic.
Okay, that’s brilliant. So we’ve been through all of the five books. If someone just wanted to read one book on using data to understand the world, which of these books would you recommend?
I’m going to cheat and give you two: it would depend on which part of the title they want to focus on. If they want to focus on the ‘understanding the world’ part, I think Factfulness is really the right one. I recommend going to the website of Gapminder—the foundation Hans Rosling co-founded—and trying one of its quizzes that test your understanding of the world. If people find that interesting, or find out that they’re really wrong about the world, I think they would find Factfulness extremely insightful and fun to read as well.
If people want to focus more on the first part of the title, ‘using data’, then Nate Silver’s book, although it was written 10 years ago, is still probably the best summary of data science, probabilistic thinking, machine learning, and all of that. In many ways, his book is a summary of all of the other books we’ve talked about. So I would recommend The Signal and the Noise to someone if they only wanted to read one book on using data.