One of my favourite papers is Christopher D. Carroll's "A Theory of the Consumption Function, with and without Liquidity Constraints", Journal of Economic Perspectives, 15(3): 23–45. This post will mainly be a brief summary of the paper, but I want to raise two methodological questions at the end. One is his, and the other is mine.
Here are some quotes from the introduction which present the basic idea:
“Fifteen years ago, Milton Friedman’s 1957 treatise A Theory of the Consumption Function seemed badly dated. Dynamic optimization theory had not been employed much in economics when Friedman wrote, and utility theory was still comparatively primitive, so his statement of the “permanent income hypothesis” never actually specified a formal mathematical model of behavior derived explicitly from utility maximization. Instead, Friedman relied at crucial points on intuition and verbal descriptions of behavior. Although these descriptions sounded plausible, when other economists subsequently found multiperiod maximizing models that could be solved explicitly, the implications of those models differed sharply from Friedman’s intuitive description of his ‘model.’...”
“Today, with the benefit of a further round of mathematical (and computational) advances, Friedman’s (1957) original analysis looks more prescient than primitive. It turns out that when there is meaningful uncertainty in future labor income, the optimal behavior of moderately impatient consumers is much better described by Friedman’s original statement of the permanent income hypothesis than by the later explicit maximizing versions.”
The basic point is this. Our workhorse intertemporal consumption (IC) model has two features that appear to contradict Friedman’s theory:
1) The marginal propensity to consume (mpc) out of transitory income is a lot smaller than the ‘about one third’ suggested by Friedman.
2) Future labour income is discounted at the real rate of interest, whereas Friedman suggested it was discounted at a much higher rate.
However, Friedman stressed the role of precautionary savings, which are ruled out by assumption in the IC model. Within the intertemporal optimisation framework, it is almost impossible to derive analytical results, let alone a nice simple consumption function, if you allow for labour income uncertainty and a reasonable utility function.
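To see why, here is the standard problem in sketch form (my notation, not the paper's): the consumer chooses consumption to maximise expected lifetime utility subject to a budget constraint with uncertain labour income,

```latex
\max_{\{C_t\}} \ \mathbb{E}_0 \sum_{t=0}^{T} \beta^{t}\, u(C_t)
\quad \text{s.t.} \quad
A_{t+1} = (1+r)\,(A_t + Y_t - C_t), \qquad Y_t \ \text{stochastic},
```

which gives the usual Euler equation

```latex
u'(C_t) = \beta\,(1+r)\,\mathbb{E}_t\!\left[u'(C_{t+1})\right].
```

With quadratic utility the expectation passes straight through the linear marginal utility, certainty equivalence holds (so no precautionary saving), and a simple closed-form consumption function follows. With a more reasonable utility function, such as CRRA, and uncertain labour income, the expectation of a nonlinear marginal utility generally has no closed form, which is why the consumption function has to be computed rather than derived.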
What you can now do is run lots of computer simulations in which you search for the optimal consumption plan, which is exactly what the papers Carroll discusses have done. The consumer has the usual set of characteristics, but with the important additional assumptions that there are no bequests and no support from children. This means that in the last period of their life agents consume all their remaining resources. But what if, through bad luck, income is zero in that year? As death is imminent, there is no one to borrow money from. It therefore makes sense to hold some precautionary savings to cover this eventuality. Basically, death is like an unavoidable liquidity constraint. If we simulate this problem using trial and error with a computer, what does the implied ‘consumption function’ look like?
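To give a concrete sense of the exercise, here is a minimal sketch of my own of the kind of backward-induction calculation involved. The parameter values and the crude discretised income process are purely illustrative, not the calibration used in the papers Carroll discusses.

```python
import numpy as np

# Illustrative parameters only; not the calibration in the papers Carroll discusses.
T = 60            # remaining periods of life
beta = 0.96       # discount factor
R = 1.03          # gross real interest rate
rho = 2.0         # coefficient of relative risk aversion (CRRA utility)

# A crude discretised income process: the small chance of zero income is what
# makes running your assets down to nothing so unattractive.
income_states = np.array([0.0, 0.7, 1.0, 1.3])
income_probs = np.array([0.01, 0.29, 0.40, 0.30])

def utility(c):
    return c ** (1 - rho) / (1 - rho)

# Grid over "cash on hand": assets plus current income available to spend.
m_grid = np.linspace(1e-6, 10.0, 400)

consume = np.zeros((T, m_grid.size))
consume[-1] = m_grid        # last period: no bequests, so spend everything
value = utility(m_grid)     # value of reaching the last period with resources m

# Backward induction: at each age, choose consumption to maximise utility now
# plus the discounted expected value of the resources carried into next period.
for t in range(T - 2, -1, -1):
    new_value = np.empty_like(m_grid)
    for i, m in enumerate(m_grid):
        c_choices = np.linspace(1e-6, m, 200)         # no borrowing in this sketch
        assets = m - c_choices                        # end-of-period savings
        m_next = R * assets[:, None] + income_states  # next period's resources, per shock
        expected_v = np.interp(m_next, m_grid, value) @ income_probs
        total = utility(c_choices) + beta * expected_v
        best = np.argmax(total)
        consume[t, i] = c_choices[best]
        new_value[i] = total[best]
    value = new_value

# The slope of the implied consumption function with respect to current
# resources is, roughly, the mpc out of a transitory change in income.
i1, i2 = np.searchsorted(m_grid, [2.0, 2.5])
mpc = (consume[0, i2] - consume[0, i1]) / (m_grid[i2] - m_grid[i1])
print("mpc at moderate resources:", round(mpc, 2))
```

The output of an exercise like this is not a formula but a table: the optimal level of consumption at each age for each level of current resources. The final lines simply read off the slope of that numerical consumption function, which is the mpc out of a transitory change in resources.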
To cut a long (and interesting) story short, it looks much more like Friedman’s model. In effect, future labour income is discounted at a rate much greater than the real interest rate, and the mpc out of transitory income is more like a third than almost zero. The intuition for the latter result is as follows. If your current income changes, you can either adjust consumption or your wealth. In the intertemporal model you smooth the utility gain as much as you can, so consumption hardly adjusts and wealth takes nearly all the hit. If, in contrast, what you really cared about was wealth, you would do the opposite, implying an mpc near one. With precautionary saving you do care about your wealth, but you also want to smooth consumption. The balance between these two motives gives you the mpc.
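To put a rough number on the certainty-equivalence case (a back-of-the-envelope calculation of my own, assuming a long horizon and a constant real rate r), a one-off transitory windfall ΔY raises consumption only by roughly its annuity value:

```latex
\Delta C \;\approx\; \frac{r}{1+r}\,\Delta Y ,
\qquad \text{so with } r = 0.03, \quad \text{mpc} \approx 0.03 .
```

That is the ‘almost zero’ benchmark; the simulated mpc of around a third is an order of magnitude larger.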
There is a fascinating methodological issue that Carroll raises following all this. As we have only just got the hardware to do these kinds of calculation, we cannot even pretend that consumers do the same calculations when making their choices. More critically, the famous Friedman analogy about pool players and the laws of physics will not work here, because you only get to play one game of life. Now perhaps, as Akerlof suggests, social norms might embody the results of historical trial and error across society. But what then happens when the social environment suddenly changes? In particular, what happens if credit suddenly becomes much easier to get?
The question I want to raise is rather different, and I’m afraid a bit more nerdy. Suppose we put learning issues aside, and assume these computer simulations do give us a better guide to consumption behaviour than the perfect foresight model. After all, the basics of the problem are not mysterious, and holding some level of precautionary saving does make sense. My point is that the resulting consumption function (i.e. something like Friedman’s) is not microfounded in the conventional sense. We cannot derive it analytically.
I think the implications of this for microfounded macro are profound. The whole point about a microfounded model is that you can mathematically check that one relationship is consistent with another. To take a very simple example, we can check that the consumption function is consistent with the labour supply equation. But if the former comes from thousands of computer simulations, how can we do this?
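As a textbook illustration of the kind of cross-check I mean (my own example, assuming a standard separable period utility over consumption and hours worked), both the consumption Euler equation and the labour supply equation fall out of the same optimisation:

```latex
u(C_t, N_t) = \frac{C_t^{1-\sigma}}{1-\sigma} - \chi\,\frac{N_t^{1+\varphi}}{1+\varphi}
\;\;\Longrightarrow\;\;
C_t^{-\sigma} = \beta\,(1+r)\,\mathbb{E}_t\!\left[C_{t+1}^{-\sigma}\right],
\qquad
\chi\,N_t^{\varphi} = \frac{W_t}{P_t}\,C_t^{-\sigma}.
```

Because the two conditions share the parameter σ, their mutual consistency can be verified analytically. If the consumption relationship is instead the output of thousands of simulations, there is no common parameter to carry across in this way.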
Note that this problem is not due to two of the usual suspects used to criticise microfounded models: aggregation or immeasurable uncertainty. We are talking about deriving the optimal consumption plan for a single agent here, and the probability distributions of the uncertainty involved are known. Instead the source of the problem is simply complexity. I will discuss how you might handle this problem, including a solution proposed by Carroll, in a later post.