Paul Krugman’s post, which this responds to, is described as wonkish, and so will this one be. However, if you are not an economist but are interested in economic methodology, you may still find it interesting, even though I take some knowledge for granted.
Is intertemporal optimisation at the heart of modern macro, as I wrote here, or is it a gadget like the Dixit-Stiglitz model of monopolistic competition, as Paul Krugman suggests here? Well, I think I was right to characterise modern macro this way, but I also agree with what I believe Professor Krugman was saying in his post. Let me explain.
If we were describing what particle physics is all about nowadays, we would probably not say ‘building Large Hadron Colliders’. We would say ‘looking for conjectured but as yet unobserved fundamental particles’. Professor Krugman did not get a Nobel for clever use of the Dixit-Stiglitz model of monopolistic competition, but for investigating the role of increasing returns and imperfect competition in trade and economic geography. So why characterise modern macro, as I did, by a tool it uses (i.e. intertemporal optimisation), rather than by what it is trying to achieve with that tool?
My answer is as follows. Let’s define modern macro as starting with the New Classical revolution of the 1970s. What I think singles out modern macro is not what it is trying to explain, but how it tries to explain it. Intertemporal optimisation is a tool that helps us understand (we hope) how people make dynamic decisions (like how much to save), but trying to understand those decisions did not start with the New Classical economists. What did begin in a major way around then was the microfoundation of macroeconomics, a fundamental methodological change in how macroeconomics was done. So it seems quite natural to say that this method is at the heart of modern macro.
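To make the tool concrete for non-economist readers, here is a minimal textbook-style sketch (the notation is mine, and not tied to any particular model). A household choosing consumption $c_1$ and $c_2$ over two periods solves

$$\max_{c_1, c_2}\; u(c_1) + \beta\, u(c_2) \quad \text{subject to} \quad c_2 = (1+r)(y_1 - c_1) + y_2,$$

where $\beta$ is the discount factor, $r$ the interest rate, and $y_1, y_2$ income in each period. The first order condition is the familiar Euler equation

$$u'(c_1) = \beta (1+r)\, u'(c_2),$$

which ties today’s saving decision to what is expected tomorrow. Microfounded macro builds aggregate behaviour up from optimisation problems of this kind.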
But I think Professor Krugman is pointing to a danger that results from this. He writes “a gadget is only a gadget, and you should not let it define your field.” If we define something by the tools or methods we use, we may start taking those tools too seriously. Let me illustrate what I mean by an anecdote that deliberately does not involve macro. I had a discussion many years ago with one of the UK’s best microeconomists, who coincidentally (and from memory) was also inspired to do economics by the same science fiction story as Professor Krugman. Most wage bargaining models (at least at the time) made the so-called ‘right to manage’ assumption: there is bargaining over wages, but firms determine employment. They make this assumption because that is what normally happens in the real world. But, as a paper by McDonald and Solow showed, this is inefficient. Both sides can be better off if they bargain over both wages and employment. My colleague was tempted (but only tempted) to say that because of this result, maybe what appeared to be reality was not in fact reality. After all, how could the actors involved not go for a Pareto improvement?
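For readers who want to see the result rather than take it on trust, here is a minimal sketch in my own notation (not the exact McDonald and Solow setup). Let the firm’s profit be $\pi = f(n) - wn$ and the union’s objective be $n\,U(w) + (N-n)\,U(\bar{w})$, where $n$ is employment, $N$ union membership and $\bar{w}$ the outside wage. Under ‘right to manage’ the firm picks employment on its labour demand curve, $f'(n) = w$. Efficient bargains instead lie on the contract curve, where isoprofit and union indifference curves are tangent:

$$f'(n) = w - \frac{U(w) - U(\bar{w})}{U'(w)} < w \quad \text{for } w > \bar{w},$$

so for any bargained wage above the outside option, both parties could gain by agreeing to more employment than the firm would choose on its own.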
Now just occasionally the data are wrong, and what appears to be reality is not. But 99.9% of the time this is the wrong response. The right response is that the model is wrong. In this specific case the efficiency result is missing an important aspect of the problem: it could be bargaining costs and decision frequencies, or something even more fundamental. But my point is that the temptation to take models too seriously is often there. Going back to macro, I have sometimes heard people say that they are not sure how important price rigidity is, because they do not find any of the models of price rigidity convincing. This is letting theory define reality.
If that is Professor Krugman’s point, then I agree with it. It is very similar to the argument I have been making about the need to be pragmatic about microfoundations (post here, article here), and to be more open to macro that is not explicitly microfounded (post here). Imagine a hypothetical history where we are doing macro at the end of the 1970s, but the microfoundations approach had already become established. Everything else is as it was: the main Keynesian model of that time featured nominal rigidity due to (irrational) inertia in expectations which could not be microfounded, and there were no alternative New Keynesian stories established. What do you do in these circumstances? You should not carry on using microfounded models that ignore price rigidity. Instead you should (a) look for a better microfounded model that does allow price rigidity, but also (b) discount the microfounded models you currently have because they are clearly incomplete/wrong, and (c) be prepared to use the non-microfounded Keynesian model until something better comes along.
So while I think I was right to define modern macro by its methodology, I also agree that this methodology can assume an importance that becomes dangerous. I think this is a danger that economics is particularly prone to, because its methodology is essentially deductive in character, and economists are very attached to their rationality axioms. The microfoundation of macroeconomic theory means that macro is subject to that same danger. Ironically, just when microeconomics is getting rather more relaxed about its axioms as a result of behavioural economics in particular, the hegemony of microfoundations in macro is at its height.
That would be a nice way to end this post from a rhetorical point of view, but I suspect that if I did, some would misinterpret what I say as implying that the majority of mainstream macroeconomists routinely make mistakes of this kind, or worse still that the microfoundation of macro was a mistake. I believe neither of those things, and I have been rather more positive than Professor Krugman in the past on what modern macro has achieved. I hope it is possible to be both supportive of modern macro and critical of it at the same time.