Behavioural economics is a very new field, but its insights have huge ramifications for our daily lives, including life-or-death decisions. Here, author Jonah Lehrer talks us through some of its most important works.
Your first book is Extraordinary Popular Delusions and the Madness of Crowds by Scottish journalist and songwriter Charles Mackay, first published in 1841.
This is a wonderfully eclectic history of mass human irrationality, and a great history of financial bubbles. If you ever thought that irrational exuberance was a modern invention, a by-product of CNBC and day traders, this book will put you in your place. It covers everything from Tulipmania in the Netherlands in the early 17th century to the Mississippi Company – and Mackay would have loved the sub-prime mortgage bubble. It shows that as long as we’ve had financial markets, there have been these insanely irrational bubbles: the history of finance is really the history of financial bubbles.
But is it that irrational? Say with Tulipmania, even if you know that one tulip bulb isn’t worth 10 florins, as long as you can sell it for 20 florins, it’s worth it – even if you know it’s ridiculous.
Yes, and what he highlights is that these are historical moments when financial markets basically look like Ponzi schemes. They work great, until they no longer work, and then they just collapse, like a house of cards. So they work well for all the people who are selling tulips for 20 florins or 30 florins, but all of a sudden, for whatever reason, when they reach 101 florins, the market disappears. And you realise you’re paying insane amounts of money for a flower. And there’s always the moment, when you read these stories, where looking back on it, it seems so absurd. And yet you know that for every person trading tulips or investing in the South Sea Bubble it felt like a very prudent investment. It felt irrational not to invest in tulips. Just like the smartest minds on Wall Street thought that it was irrational and irresponsible to not invest in mortgage-backed securities.
“The history of finance is really the history of financial bubbles.”
So this book gives you a hint of what it must have felt like on the inside, to be in the grip of this irrational exuberance – just like when Cisco was the most valuable company in the world in 2000, or when dotcom companies that had no business plan (or a barely intelligible one) and never turned a profit commanded incredible valuations. And, of course, as soon as the bubble ends, we see it for what it was – a completely irrational burst of exuberance.
And Mackay also covers topics like witchcraft and witch-hunts and alchemy?
It’s best known for the financial escapades, and that’s the stuff I find most compelling in terms of decision-making. But it’s also just a history of human mania and irrationality generally.
So, the next book you chose, Judgment under Uncertainty, is by, amongst others, Daniel Kahneman, the psychologist who won the Nobel prize for economics, despite, he says, never taking an economics course.
This is one of the most influential books in modern economics. But first of all, it’s just this list of incredibly clever experiments. They don’t use any fancy tools – there are no microscopes or telescopes involved: Kahneman and Tversky just asked their undergraduates hypothetical questions.
So how much would a student want in return if they were betting $1 on the flip of a coin? You can’t get a simpler question to ask in a science experiment, and yet that very simple question eventually led to a thing called ‘loss aversion’. And this is now viewed as a very important phenomenon – with implications for everything from how taxi-cab drivers think, to how people act when they evaluate their stock portfolio. So what I like about this book is how they took these very simple protocols – really just idle conversation with students – and transformed them into the first really hard proof that people consistently violate the expectations of rational agent models. That we don’t think like homo economicus at all. That our behaviour, our responses to very simple questions, don’t look at all like what a rational person would do – there are these deep inconsistent flaws in the human mind. It makes no rational sense to have such a strong loss aversion, or to be so vulnerable to any one of the long list of biases that Kahneman and Tversky demonstrated. But across the board, with large sample sizes, this is the way people responded. So it’s an incredibly powerful piece of work that really showed that people aren’t just occasionally irrational, they don’t just act stupidly when they’re in the midst of a bubble. Irrationality is embedded deep into our operating system.
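The coin-flip question above can be sketched with the value function from Kahneman and Tversky’s later prospect-theory work, in which losses loom roughly twice as large as equivalent gains. This is a toy illustration only – the parameter values (lambda ≈ 2.25, alpha ≈ 0.88) are the commonly cited estimates from the prospect-theory literature, and the `bet_feels_worth_it` helper is a hypothetical construction, not the experiment described in the book:

```python
# Toy sketch of loss aversion via a prospect-theory-style value function.
# Parameters lambda ~ 2.25, alpha ~ 0.88 are the widely cited
# Kahneman/Tversky estimates; this is illustrative, not their data.

def value(x, lam=2.25, alpha=0.88):
    """Subjective value of a gain or loss of x dollars."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha  # losses are weighted more heavily

def bet_feels_worth_it(gain, loss, p_win=0.5):
    """Does a simple two-outcome bet have positive subjective value?"""
    return p_win * value(gain) + (1 - p_win) * value(-loss) > 0

# A fair 50/50 bet risking $1 to win $1 is rejected,
# because the possible $1 loss feels worse than the $1 gain feels good:
print(bet_feels_worth_it(1, 1))  # False
# The potential win has to be well over double the stake before it feels fair:
print(bet_feels_worth_it(3, 1))  # True
```

Under these parameters a student would demand roughly $2–$3 of upside before risking $1 on a coin flip – which is the pattern the hypothetical questions revealed.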
But this is quite an academic book?
Yes, it’s a very academic book. But it happens to be about as accessible as a bunch of academic papers can be, simply because it’s just fun to go through the hypotheticals – these questions they were giving to undergraduates at Hebrew University in the mid-1970s – and test yourself against them. And the collection does a very nice job of mixing the original papers with subsequent results in the field of economics, which take loss aversion, for example, and apply it to the real world. So you can see how this actively influences the decisions of mutual fund managers, with very important negative consequences. And this book not only pointed out this core irrationality, but really changed the economics field as well.
I think the Nobel prize speaks of the importance of the work to economics. But doesn’t it show these biases affecting all sorts of things, including potentially life-and-death medical decisions?
That’s actually an offshoot of loss aversion. There are different ways of framing a question, and one way to demonstrate loss aversion is to note that the ultimate loss is, of course, death. So if you go to doctors and ask them to choose between options, and the riskier option is framed in terms of saving people while the other is framed in terms of people dying, most doctors will risk everything on the all-or-nothing approach. Even when the numbers are exactly the same, if an option is framed in terms of death, people are twice as likely to avoid it. Because framing the question in terms of losses, making us even think about death, feels so bad to us that the person thinks, ‘Oh, I’ve got to go for the risky approach.’ And Kahneman and Tversky argue that it does indeed affect the way doctors discuss, for instance, cancer treatments. You can get doctors who work in cancer wards to think very differently about a treatment depending on whether you frame it as a five per cent chance of surviving or a 95 per cent chance of dying.
And, as a patient dealing with cancer, you often do have to make decisions based on statistics you are given – doctors say there’s a five per cent chance of this if you do that, or a 10 per cent chance of that if you don’t do this, and it’s all very confusing.
Yes. We’re given all these statistics, but the human mind wasn’t designed to deal with statistics very well. What we’re left with is a feeling: either fear – that’s a risk I’m taking – or desire – that’s a potential gain I should pursue. A lot of it really is about these emotions which, in the end, drive our decisions. So simply by reframing the question one way or the other, you can dramatically influence these feelings. Human beings really aren’t rational agents for the most part, because we’re actually being driven by emotions triggered by the fear of losses or the dream of gains.
Your next choice is How We Know What Isn’t So by psychologist Thomas Gilovich.
This is a really smart book and the reason I put it on the list is that it really invented the genre of science non-fiction. Gilovich did some very interesting work (actually with Tversky, when he was still at Cornell), including on the ‘hot hand’ effect. This refers to basketball, where players are said to have a ‘hot hand’: they make three shots in a row, and fans think they’re in the zone. But actually the hot hand is a cognitive illusion. After making a couple of shots in a row, players actually get over-confident and become less likely to make their next shot. So the book is filled with case studies like that, clever demonstrations that so much of what we perceive in the world, and then act on, is actually based on cognitive illusions. So this book is very accessible. It was very popular and demonstrated for the first time that people love to learn about their biases. There’s really something fascinating about reading your own user manual and going, ‘Oh, that’s what made me do that stupid thing all the time!’
What kind of cognitive illusions does it home in on?
If you want to summarise it, a large part of the book is about positive information bias – the fact that we like to believe that we’re right, and so we ignore all sorts of evidence that suggests we might be wrong. That’s why conservatives watch Fox and liberals watch MSNBC. Which isn’t the biggest revelation in the world – but there are all sorts of clever studies that demonstrate this again and again, that show just how blinded and blinkered we are. We think we’re so objective, but there’s actually nothing objective about the human mind. We have these working beliefs and we seek evidence to confirm those beliefs: that, unfortunately, is the best summary of how we seek out evidence.
What about the next book, by economist Richard Thaler – what is The Winner’s Curse?
The Winner’s Curse is something I think about every time I go on eBay. It was probably first demonstrated by Richard Thaler, who is a very influential behavioural economist at the University of Chicago. We’ve got these economic models, but the problem is that these models are based on a profoundly inaccurate view of human nature. So if we get some real data showing us that people aren’t rational agents, shouldn’t we change our economic theories? Again, it’s a pretty academic book, for the most part a collection of papers, but it is fascinating to watch ideas in psychology infecting other fields, like economics, and to see that infection for the first time in Thaler’s work.
So The Winner’s Curse is all about auctions and how people will often over-bid in blind auctions. Whether you’re bidding for a free-agent baseball player, or for oil rights, or for Bill Clinton’s autobiography, all of these are blind auctions, and people dramatically overpay – it’s a recurring feature of blind auctions. Thaler takes all these canonical examples, problems and situations in economics and says, ‘Well, now we’ve got more accurate models of what people actually do, now we don’t have to rely on these hypotheticals, shouldn’t we revise our models?’
And are other economists now doing that also?
There is still plenty of resistance, some of it well justified. People are saying: ‘Look, it’s not like psychology is a very accurate model – the mind remains a very mysterious thing.’ So you don’t want to simply tether yourself to something that’s a work in progress. Which I think is a valid response. I think there are some economists who are very worried about turning economics into a subsidiary of psychology, which is what behavioural economics is: these are economists who use the tools of psychology, who take psychological paradigms and combine them with economic models. So it really is a branch of psychology.
These are valid worries and concerns but, that said, it’s hard to deny that behavioural economics and neuroeconomics – these two branches of economics that are trying to merge with psychology and neuroscience – are both very influential fields at the moment. They’re everywhere: in the Obama administration, and among the bigwigs at the University of Chicago.
This is a booming field, and given where we are right now – trying to recover from a financial apocalypse, yet another financial bubble – I think it only makes sense to once again try to understand what makes us tick. If we have learned anything from the sub-prime mortgage mess, it’s that we really aren’t rational agents. When we assume people are rational agents, we are led astray. Future regulations will be effective only to the extent that they accurately look at all these irrationalities, anticipate them and take them into account.
So can you give me an example?
One example that Thaler uses is the ultimatum game. It’s a very simple game: you give person A $10 and say, ‘OK you can divide it any way you want to, and the only catch is that if person B rejects your offer, then nobody gets anything.’ So economists, with their selfish rational models, would say the way person A should behave is to keep $9 and give person B $1. Person B might think that is unfair, but if they reject it, they won’t even get $1, so rejecting it would be irrational spite. So what’s supposed to happen according to homo economicus is this very rational split.
But that’s not at all what happens – when economists actually started playing this game, and they played it all over the world, they found that people on average split $4.50 with person B: they actually made a fairly equitable split. The reason is that they know that if they made an unfair split, person B would be really angry, so angry they’d probably reject the offer. Person A is able to anticipate the emotions of the other person enough for them to make a fair offer. It’s not the way we’re supposed to behave, it’s neither particularly rational nor selfish. And yet it’s the way we behave time and time again. In fact, the only people who actually act like they’re supposed to act according to economics textbooks are people with autism. They actually tend to make much more unfair offers.
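The game described above can be sketched in a few lines. The rejection rule here – person B refuses anything under roughly 30 per cent of the pot – is a made-up stand-in for the anger the interview describes, not Thaler’s actual experimental data:

```python
# Toy ultimatum game: why the "textbook-rational" $9/$1 split backfires.
# The 30%-of-pot fairness threshold is an illustrative assumption.

def responder_accepts(offer, pot=10, fairness_threshold=0.3):
    """Person B rejects offers that feel too unfair, at a cost to themselves."""
    return offer >= fairness_threshold * pot

def payoffs(offer, pot=10):
    """Return (proposer, responder) payoffs when A offers `offer` to B."""
    if responder_accepts(offer, pot):
        return pot - offer, offer
    return 0, 0  # spiteful rejection: nobody gets anything

print(payoffs(1))    # (0, 0) -- the selfish-rational offer gets rejected
print(payoffs(4.5))  # (5.5, 4.5) -- the fair-ish offer people actually make
```

Under this (assumed) rejection rule, the selfish offer of $1 leaves person A with nothing, while the near-even split people actually make pays off for both sides – which is the point: anticipating the other player’s emotions is the profitable strategy.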
Your last book is called Predictably Irrational.
This is a very popular book by Dan Ariely, a really smart behavioural economist who is now at Duke (he was at MIT). The reason it’s important is that it really gives you a sense of all the different ways in which behavioural economics is flowering. This is a field that is just 20 years old, which by academic standards is very new. Dan Ariely is a very creative guy. He was able to take this basic idea, that humans are irrational, and mine it in a million different directions. For example, one of my favourite studies is brand-name versus generic aspirin. He shows that for whatever reason we’ve got this heuristic, this expectation, that we get what we pay for. And when you give people more expensive things, even if they’re the exact same thing, people actually get more pleasure out of the more expensive product, they find the more expensive version more useful. And brand-name versus generic aspirin is a good way of showing this. You have the exact same active ingredient, no difference between the products, the pills are identical. And yet the brand-name aspirin is much more effective at curing our headaches.
The other really important feature of his work – and this is an increasing theme of behavioural economics – is the importance of contextual conditions in shaping how people think. One of the real flaws in the rational agent model was the assumption that people are always rational – that it doesn’t matter what the situation is, people always act in this very predictable way.
We’re actually not that consistent, so that even when it comes to our irrationality, we’re not entirely predictable. So the title of the book is actually sort of misleading. Very small changes in context and circumstances can dramatically alter our behaviour and our response to incentives. In one situation, in one kind of classroom, we won’t cheat and we’ll all seem like very honest souls. Tweak a few variables, make us a little bit more anonymous, and all of a sudden we’re cheating on everything. So we’re just beginning to understand how complicated and subtle our decision-making machinery is.
So where does all this get us?
That’s the million-dollar question. The important thing to note is that it gets you nowhere, unless you’re self-aware. You can know about all the biases in the world, and, unless you are able to take them into account when making a decision, it’ll be useless knowledge. That’s the crucial ingredient, what psychologists refer to as meta-cognition – thinking about thinking. Unless you practise meta-cognition, unless you think about loss aversion when you’re evaluating your own stock portfolio, or unless you worry about this bias for more expensive things when shopping for aspirin, you’re going to make the same mistakes as everybody else.
Is that what your book is about? I saw you mentioned on your blog that you are a very indecisive person and that’s what prompted you to write a book on decision-making.
I’m pathologically indecisive as a matter of fact. My book was definitely an attempt to apply this knowledge, first of all to try and put it in the context of modern neuroscience, but also to say, ‘Well, great, people are stupid, we do all sorts of stupid things, we’re incredibly irrational, we pay a year’s salary for a black tulip. So what do we do now? How do we apply this in order to make better decisions?’ It’s a larger societal question too. As we develop better models of how humans actually behave, we can actually begin to explain how we choose which stocks we buy, what we buy in the supermarket. And then the important question becomes, ‘Great, we have these lovely models! But how do I actually use this knowledge to make better decisions? How do we turn behavioural economics into an applied science? So that next time we structure regulation we won’t be quite so tempted by subprime mortgages…’ Those are the really important questions.
But at this point, given the science is so new, the best advice someone can give you is, ‘Look, what you really need to do is to learn this list of biases, these flaws, these hardwired mistakes, and begin to take them into account in your own life when you’re making decisions.’ Because that’s the only way we’re going to be able to avoid them.
April 9, 2010