It’s a discipline in vogue with the Nobel prize committee and mysterious to most of the rest of us. So we asked econometrician Mark Thoma to explain what he does, and why there’s such a battle of ideas (and models) in economics. He recommends his favourite econometrics books.
A number of econometricians – economists who use statistical and mathematical methods – have won the Nobel prize in economics. We’re going to talk about the ones that influenced you in your work. Before we start, what got you excited about econometrics in the first place?
I was in high school at the time of the oil price crisis of the 1970s and I asked my mom, “What causes inflation?” She didn’t know. I think it was from then on that I started wondering about money and inflation and what caused it – how the Fed was involved et cetera. When I got to college I took economics and started answering those questions. Then when I got to graduate school I learned theories about why it happened, and I got interested in how you test those theories. How do you know if this theory is right or that theory is right? How can we figure it out? That’s where econometrics comes in. Time-series econometrics was a set of tools I could use to answer that longstanding question I had about how the Federal Reserve impacts the economy, how it creates inflation and so on.
Time-series econometrics is a particular area of focus for you. Can you give me an example of how it works?
Right now there is this big question about how much impact the Fed can have on the economy and whether or not deficit spending, or a change in government spending, creates employment. Is there a government-spending multiplier? You can use time-series econometrics to go back and look at data from the past, then use that data to estimate a model, and from that model you can figure out whether or not government spending works, and if it does work, when it works, when it doesn’t – all sorts of questions. Almost any policy question that you can formulate, you can take to the data. That’s where the econometric technique comes in, it allows you to determine what impact the policy is likely to have.
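A rough sketch of the kind of exercise described above, using simulated data so that it runs on its own; the series and coefficients are purely illustrative, and a real study would use national-accounts data and far more careful identification:

```python
# Regress output growth on current and lagged government-spending growth
# and read the summed coefficients as a crude spending "multiplier".
# The data below are simulated stand-ins, not real macro data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200
g = rng.normal(0, 1, T)                                    # spending growth (stand-in)
y = 1.5 * g + 0.5 * np.roll(g, 1) + rng.normal(0, 1, T)    # output growth (stand-in)

df = pd.DataFrame({"y": y, "g": g, "g_lag": np.roll(g, 1)}).iloc[1:]
fit = sm.OLS(df["y"], sm.add_constant(df[["g", "g_lag"]])).fit()
print(fit.params)                              # impact and lagged effects
print(fit.params["g"] + fit.params["g_lag"])   # crude cumulative effect
```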
So it’s more empirical than straight economics?
Exactly. Our field is divided into two groups – there are the theorists and the applied people. I’m more of an applied person. I apply and test the theories that the more highbrow, theoretical types build. They might build two different models of the economy, and then the question is which one fits the data best. Which one is the best explanation of the economy? You can use econometrics to sort out competing theoretical models.
What then is your conclusion on the government-spending multiplier?
There’s a lot of uncertainty about it, so I can’t say for sure whether it’s one or two, or somewhere in between. There’s a fairly wide range of estimates, but I think it’s pretty clear that it’s somewhere in that range. I would say it’s about 1.5 presently.
Have econometricians been well represented in the Nobel prize for economics?
I think they have. The committee has been very good at rewarding the people who are building the tools that allow us to test economic models. So it’s not unusual at all for them to give Nobel prizes to econometricians.
Prior to this year’s award to Thomas Sargent and Christopher Sims, there were Robert Engle and Clive Granger in 2003, and before that Daniel McFadden and James Heckman in 2000.
Yes, those are all econometricians. The older cohort were more micro-economists. This morning I was wondering whether this year’s prize was something of a make-up for the macro people, but I don’t think it was. They built their own tools and techniques that were important.
Let’s start by talking about New York University’s Thomas Sargent, who along with Lars Ljungqvist is the author of your first book choice, Recursive Macroeconomic Theory. What is important about Sargent’s work?
Sargent went into the engineering literature, took the tools and techniques that engineers use to do things like make sure your television picture is clear, and brought them over into economics. There were all these tools that scientists were using to control systems. An engineer might build a TV that has a feedback mechanism. Somehow, it can look at the picture and if the picture isn’t right, it can go back and adjust the inputs in a way that clarifies it. If there is a horizontal scroll or a vertical scroll, it does something internally to stabilise the system, and makes sure it doesn’t get out of whack.
All the same tools that engineers use to stabilise your television picture can be used to stabilise the economy. The difference – this is an important difference, and where Sargent’s contribution comes in – is that when I do something with a TV, it doesn’t try to get out of the way to protect itself. It doesn’t say, “Oh! I don’t like having a little more green in my colour, so I’m just going to turn it back down.” Unlike TVs, people have brains and they respond to policies. If you try to tax them, they try to get out of the way of taxes. If you try to tax a TV, it has no way of getting out of the way.
That’s where rational expectations comes in. When you’re trying to control a person instead of a TV, you have to take into account their expectations – how they’re going to respond to the things you do as a policy-maker. Sargent took all these tools and techniques for optimal control that were being used in engineering, all this heavy mathematics, brought them over into economics, and added rational expectations, which made them even more complicated mathematically and harder to use than they already were in the engineering literature.
And that’s what he won the Nobel for?
Yes. It wasn’t so much that he was this grand theorist – although he is certainly good at that – it was more the tools and techniques that he brought to the profession.
I get the feeling that the Nobel prize in economics is awarded on somewhat political grounds – based on what’s happening in the world at that particular moment – and not purely on intellectual heft. Is Sargent’s work especially relevant to the economic crisis we now face?
He gave us the tools and techniques we need to analyse the crisis and build the models that we need to build to understand it. But it’s not as though he built those models themselves. He built the hammer, not the house.
Let’s talk a bit more about the book, Recursive Macroeconomic Theory.
This is the textbook I use in my PhD macro courses. In fact, it’s used in almost all of the major PhD programmes in the US right now – any respectable programme most likely uses this book. It’s a book of tools and techniques to solve what are called recursive macroeconomic models.
I think you’d better explain.
Modern macroeconomics uses DSGE models – dynamic stochastic general equilibrium models. That’s a bit of a mouthful, but all dynamic means is that it’s a model which explains how things move through time. So it explains what GDP will be today, tomorrow, the next day and the day after that, and how it might change if the government does different things or if certain things happen in the economy. It’s really a model of how the economy transitions from one point to the next – how it goes into recessions et cetera. Those models are extraordinarily difficult to solve.
What this book shows us – and this is the recursive part in the title – is a way of breaking down this really hard problem into little tiny pieces. You can actually solve a much simpler problem when you only have to look at how the economy moves from today to tomorrow. I don’t have to look at how it moves from tomorrow to the next day and all the way out as far as I can see. We can break down these really hard problems into a recursion between today and tomorrow.
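In textbook form, the “recursion between today and tomorrow” is a Bellman equation. A standard example for a simple growth model (generic, not anything specific to the book) is:

\[ V(k) \;=\; \max_{k'} \Big\{ u\big(f(k) - k'\big) + \beta\, V(k') \Big\}, \]

where \(k\) is today’s capital, \(k'\) is tomorrow’s, \(u\) is utility, \(f\) is the production function and \(\beta\) is the discount factor: the whole infinite-horizon problem collapses into a single step linking today’s choice to tomorrow’s value.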
If you’re doing a PhD in macroeconomics, how important is this textbook? Is it one of 10 you have to work through?
It’s probably one of two. In macro there really aren’t very many. There’s this book, and there’s a book by David Romer of Berkeley. So this is sort of the Bible right now.
Let’s go onto what will count as your next two books, volumes one and two of Rational Expectations and Econometric Practice, edited by Thomas Sargent and Robert Lucas of Chicago, a leader of the rational expectations revolution and winner of the 1995 Nobel prize.
These are the books I used when I first came out of grad school and was trying to get tenure as a young assistant professor. It’s a collection of papers, a lot of them by [Thomas] Sargent, [Christopher] Sims and [Clive] Granger. Nobel-prize winning economists are the dominant authors in it.
The second volume, especially, taught me how to use econometrics to test a brand new class of models. At the time, the dominant models were the New Classical and Real Business Cycle models. Both involved rational expectations, which, it turns out, makes estimation really hard. These books were crucial in, firstly, giving me all the theoretical models behind what we were trying to test – there’s a whole section on theory – and, secondly, going through all the econometric techniques one needs to test those classes of models.
What surprised me, looking down your list, was that I think of you as very much on the left-wing side of the economics divide. Aren’t Lucas and Sargent at the opposite end of the ideological spectrum?
They are, and that’s an important point. Both sides of the spectrum within economics use the same tools and techniques. So I can honour everything they’ve done to allow me to do econometrics and understand theory without endorsing the way they’ve used those particular tools.
So these books are about the tools rather than the conclusions they reach?
There are conclusions in there, but they are more by way of example. Once you’ve learned the techniques, you’re all set to do anything. Lucas I would peg as very conservative. Sargent is a bit more open-minded. He’s done work on learning models with a colleague of mine, so sometimes I’ll go down the hall and he’ll be sitting in his office. Sargent’s dad lived in Eugene, Oregon, for a long time (though he passed away recently), so he’d show up at our department quite often.
Before I spoke to Paul Krugman, I’d never even heard the term “freshwater economist”, which I understand refers to the University of Chicago, the Minneapolis Federal Reserve, and various places near the Great Lakes which produce economists with a right-wing bent. I gather there’s a big divide between them and the “saltwater economists” who come from universities along the seaboards – Princeton, Harvard, Berkeley, Oregon – and who are more liberal.
There are several groups. There’s the divide between the New Keynesians, who believe both monetary and fiscal policy are effective policy tools, the modern version of the monetarists, who believe in monetary but not fiscal policy, and the Real Business Cycle economists, who don’t think either type of policy is effective. The big split is the first one, monetarists versus Keynesians to use an older terminology. That’s people like Lucas and Sargent versus people like me, Krugman, Brad DeLong and others. Then there’s another, much smaller group who don’t think any of us have a clue. Those are the heterodox economists. They don’t like the tools and techniques we use, they don’t like equilibrium models. It’s people like Jamie Galbraith, who don’t agree with either side.
Is Joe Stiglitz verging on the heterodox these days?
I would describe him as traditional-minded. He certainly uses the same tools, though he pushes them to a bigger extreme than others would. He’s not saying: Throw out the toolbox, throw out everything we’ve learned in the last 30 years and let’s take a completely new tack using different kinds of models altogether. Though maybe that’s where we’ll end up as a result of this crisis.
I remain concerned at how economists can disagree so much. Doctors don’t disagree about how to treat a cancer patient.
Economists don’t disagree about certain things. And doctors do disagree about things, especially over time – like whether eating certain foods can reduce cholesterol. There’s a big controversy right now about whether you should take vitamins or not, whether it’s helpful or harmful. When doctors have difficulty testing things experimentally, they run into the same issues as we do. When there’s just historical data, like we have – if, for example, they try to figure out heart disease by looking back at people’s lives – then a lot of the time they get things as wrong as we do. Doctors have changed their advice many times.
Going back to fiscal stimulus, which you mentioned at the beginning as something time-series econometrics can test, I take it there isn’t overwhelming evidence in its favour? Even though you’re on the Keynesian side, do you think people that question whether it works have a point?
Yes I do, completely. The reason is that we don’t have data for historical episodes like this one. The Great Depression was like this, but our data pretty much ends in 1947. We can’t go back any further with anything close to reliable data. As an econometrician I can estimate these multipliers, but they’re for good times not bad times. I don’t have the data that I need. I don’t have enough big recessions like this one in my data set to give a precise answer.
Your next book is by the winners of the 2003 Nobel prize, Robert Engle – who is now emeritus professor at UC San Diego – and the British economist Clive Granger, who sadly has since died. What was their big insight and contribution? Tell me about their book, Long-Run Economic Relationships.
This book is about a subject for which the technical term is cointegration. What it means in everyday language is variables that are tied together in the long run, that are related in some way. For example, you might think that consumption and income are tied together in the long run. If income takes a big left-turn at some point in the data, consumption ought to take a big left-turn in the data as well. One thing they got the Nobel prize for is how, within our models, we can tie these two variables together in a way that makes sense. It sounds easy, but it’s actually a very hard econometric problem.
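In notation (a standard textbook formulation rather than anything taken from the book), consumption \(c_t\) and income \(y_t\) are cointegrated if each wanders like a nonstationary series but some combination of them is stationary, and an error-correction equation then ties short-run changes back to that long-run relationship:

\[ c_t - \theta y_t \ \text{stationary}, \qquad \Delta c_t = \alpha\,(c_{t-1} - \theta y_{t-1}) + \gamma\,\Delta y_t + \varepsilon_t . \]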
The other thing they got their Nobel prize for was something called ARCH, or autoregressive conditional heteroskedasticity models. What that means is that for, say, income over time, we can measure the variance of income – how variable it is. There is some mean of income over time that follows some trend, and the variation around that trend is the variance. They showed us how to write down economic models that track that variance through time. So, for example, it can tell us what’s causing the variance of a financial asset to change. For a financial asset, the mean would be the expected return on the asset and the variance would be its risk profile – how risky it is.
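A minimal way to write down what an ARCH model does (a generic ARCH(1), not anything taken from the book) is:

\[ r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad \sigma_t^2 = \alpha_0 + \alpha_1\, \varepsilon_{t-1}^2 , \]

so that today’s variance depends on yesterday’s surprise: a large shock yesterday means a riskier asset today.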
These tools that they developed within this ARCH framework were then used – Engle says inappropriately – to do what were called value-at-risk calculations prior to the crisis. When you hear about all these financial firms looking at their portfolios and doing risk assessments, at the heart of what they were doing were these models that Engle and Granger built, the models that allowed us to estimate a time series of variances and see how that variance, or risk profile, changes over time.
I’m presuming that in the run-up to the crisis, their models did not flag a big risk at these banks?
Yes, and Engle would say that the reason why that happened is that they weren’t using his models correctly. In some sense he’s just protecting himself. The important point here is that at the heart of all the risk analysis for the financial industry were their models. Prior to the crash, and even after the crash, if you wanted to know how risky Bear Stearns’s portfolio was, you would use those techniques.
In spite of this, you’ve kept the book on your list…
The book is more about the first topic I mentioned, cointegration, though the other stuff is in there as well. Cointegration is important because it allows us to do a better job of looking at things like causality between variables. They showed that if you have variables that are tied together over time, then the standard tools and techniques that were in use at that time would be wrong. It would be inappropriate to use them; you have to use a completely different estimation technique. They showed us how to do tests to find out if you have this problem of a long-run relationship in your data, and if you do, how to fix it within the models. That was an important step forward, because we learned that these relationships are all over the place. We had probably been estimating our models wrong up to that time.
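A minimal sketch of the kind of test he describes, in the Engle-Granger spirit, run on simulated series so that it is self-contained; with real data you would pass in, say, log consumption and log income instead:

```python
# Test whether two nonstationary series share a long-run (cointegrating)
# relationship. The series below are simulated stand-ins.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(1)
T = 500
income = np.cumsum(rng.normal(0, 1, T))           # random-walk "income"
consumption = 0.9 * income + rng.normal(0, 1, T)  # tied to income plus noise

t_stat, p_value, _ = coint(consumption, income)
print(p_value)  # a small p-value rejects "no cointegration"
```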
And ARCH models themselves are useful; there’s no need to throw them out, but we do need to be cognizant of their limitations.
If they hadn’t been awarded the Nobel in 2003, could they have won it now, post-crisis?
The crisis wouldn’t have changed anyone’s view of cointegration. I think the value of the ARCH models might have been questioned a bit more than they were at the time, because they didn’t signal the risks in advance like we expected them to.
Your last choice is an older textbook, Macroeconomic Theory, again by Thomas Sargent. What does this one bring to the table?
This was the first book I ever had in grad school, in my very first macro class. It was a good book for me to learn macroeconomics as it existed in the early 1980s. It actually presents a very Keynesian model, because that was the dominant model of the time, and it also presents the beginnings of New Classical models.
The reason I like it, and still use it as a reference in classes, is that it shows us how to solve expectational difference equations. These are just equations that have expectations in them. You might say that GDP today is equal to some function of government spending, of interest rates and the money supply – and it might also be a function of expected income tomorrow. So income today depends on what you expect to happen tomorrow. Once you put those expectations into that equation, it’s really, really hard to solve. In this book, Sargent begins showing us how to solve those problems in a way that’s general and works in a lot of different cases. So he brings a brand new technology to the literature that opens up a lot of questions you couldn’t ask before.
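One stylised example of what he means: write today’s outcome as depending on its own expected value tomorrow plus current fundamentals, then iterate forward and rule out explosive paths:

\[ y_t = a\, E_t\, y_{t+1} + b\, x_t \quad\Longrightarrow\quad y_t = b \sum_{j=0}^{\infty} a^{\,j}\, E_t\, x_{t+j}, \qquad |a| < 1 , \]

so that today’s value is a discounted sum of everything agents expect about the future.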
Isn’t this covered in the other books?
The book he wrote with Lars Ljungqvist is an updated, expanded and better version of this older book. But this book is still really good at solving models that have expectations in them. I still assign one chapter of it to my students. In particular, those tools for solving difference equations – they’re called expectational difference equations – are just as good as they ever were. It’s still the best source that I know of.
How much are these econometrics methodologies tied to the rational expectations assumption? If economic agents are boundedly rational and not capable of solving dynamic optimisation problems, do these econometric methods still apply or do they need to be fundamentally modified? If people follow simple rules of thumb, would these methods still work?
There’s a lot of econometric tools in these books that would still work. My colleague George Evans does exactly what you say. He builds learning models, and he doesn’t assume agents are rational. He then sees whether by using simple learning rules the models converge to a rational solution over time. He still uses quite a few of the techniques that [Christopher] Sims developed, like impulse response functions, causality testing, all those kinds of techniques. There’s a set of things that aren’t very model dependent, things that you can bring to any set of data.
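A minimal sketch of that Sims-style toolkit: fit a small vector autoregression and compute impulse response functions. The two simulated series are stand-ins; a real exercise would use, for example, output growth and an interest rate:

```python
# Fit a two-variable VAR on simulated data and trace out impulse responses.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
T = 300
x = np.zeros((T, 2))
for t in range(1, T):                      # simple bivariate AR(1) data
    x[t] = 0.5 * x[t - 1] + rng.normal(0, 1, 2)

data = pd.DataFrame(x, columns=["output", "rate"])
results = VAR(data).fit(maxlags=2)
irf = results.irf(10)                      # responses over 10 periods
print(irf.irfs.shape)                      # (11, 2, 2): horizon x variable x shock
```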
But there’s another set of techniques where the techniques themselves depend on implications of the rational expectations hypothesis. The rational expectations hypothesis, for instance, will tell you that some variables have to be uncorrelated. Stock returns have to be uncorrelated over time, because if you could predict stocks tomorrow, any rational agent would arbitrage that, make money and take away the predictability. What that gives you is a zero correlation between yesterday and today. The fact that that correlation is zero is often exploited in these techniques; that’s what makes them work. So if the rational expectations hypothesis falls apart, a lot of what I would call the more structural-based econometric techniques would fall apart with it, because they rely on the implications of rational expectations.
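In stylised form, the orthogonality condition he is describing is that, under rational expectations, forecast errors are uncorrelated with anything known when the forecast was made:

\[ E_t\, r_{t+1} = \bar r \quad\Longrightarrow\quad \operatorname{Cov}\!\big(r_{t+1} - \bar r,\; z_t\big) = 0 \ \text{for any } z_t \text{ in today's information set,} \]

and it is exactly this zero-correlation restriction that many of the structural estimation techniques lean on.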
In your field of econometrics, and in economics in general, is there a lot of change going on as a result of the crisis?
There should be, and there has been. But not as much as I would like to see. Since I was in grad school – I graduated in 1986 so it’s been about 25 years – we’ve probably gone through two or three generations of models. When I started it was very Keynesian, then it was New Classical, then we got something called the Real Business Cycle models, then we got the New Keynesian models, and today there is an emerging set of models called the new monetarist models. Within the field there’s been a lot of churning of models. The reason those first sets of models didn’t survive was because they didn’t stand up to the data.
The models that Lucas got his Nobel prize for – the New Classical models, where expectations play a fundamental role, only unexpected money matters and things like that – had some really strong implications. We took that model to the data and it couldn’t explain the magnitude and the duration of business cycles simultaneously. It also couldn’t explain why expected money was correlated with output, and it got rejected. Then we went to Real Business Cycle models. They did better. But they had trouble explaining Great Depressions and other sorts of things, so we rejected those models and went to New Keynesian models. Those models were doing great, or relatively so anyway, right up to the crisis. Then they did horridly. You don’t need advanced econometrics to reject that class of models – it’s clear that they just didn’t handle the crisis. So we’re going to reject those too. There’s been a lot of change, and I expect that change will speed up. I wish it was even faster, because it’s very clear to me that the models we were using prior to the crisis are not going to get the job done.
You mentioned your colleague George Evans. Are there other economists you admire, in terms of what they’re trying to do to find new, convincing models?
I like George’s learning models. I also like what John Geanakoplos is doing at Yale. Eric Maskin mentioned him in his interview with you. What was wrong prior to the crisis is that the macroeconomy wasn’t connected to the financial sector. There’s a technical reason for that which has to do with representative agent models – we just didn’t have any way to connect financial intermediation to the macro model. And we didn’t think we needed to. We didn’t think that was an important question. We didn’t think there was any reason to worry about the kind of meltdown we had in the 30s happening in the US today. So no one bothered to build these models, or even to ask the right questions. Nevertheless, even before the crisis, Geanakoplos was building models that tried to explain how we could have these endogenous cycles. I really like that, because it uses the same tools and techniques that we’ve been using all along, but it puts them together in a different way, and in a way that I think makes a lot more sense.
Do you think academic economists are too unwilling to get their hands dirty and continue to neglect what’s happening in the real world?
A little bit. It’s partly that, and it’s partly that the answers you get as an econometrician aren’t always that precise. Because of that lack of precision and the lack of ability to experiment, you often find people getting different values for, say, the multiplier, getting different answers with different data sets – and that makes it look, perhaps correctly, as though we really don’t have any answers.
What happened is that the theorists retreated into their deductive world, where they weren’t taking their models to the data enough. When they did take them to the data, and found that they didn’t work, they just said, “Oh it’s because of bad econometrics, the model is logically correct so we’re going to stick with it.” I think the arrogance of the theorists, and the lack of ability to do experiments, combined to make the theorists in particular way too insular in terms of taking their models and forcing them to interact with the actual world.
October 28, 2011
Mark Thoma
Mark Thoma is a macroeconomist and time-series econometrician at the University of Oregon. His research focuses on how monetary policy affects the economy, and he has also worked on political business cycle models. Mark is currently a fellow at The Century Foundation. He blogs daily at Economist’s View, and his Twitter feed is @MarkThoma.