As a follow-up to my recent post on alternatives to microfounded models, I thought it might be useful to give an example of where I think an alternative to the DSGE approach is preferable. I’ve talked about central bank models before, but that post was partly descriptive, and raised questions rather than offering opinions. I come off the fence towards the end of this post.
As I have noted before, some central banks have followed academic macroeconomics by developing often elaborate DSGE models for use in both forecasting and policy analysis. Now we can all probably agree it is a good idea for central banks to look at a range of model types: DSGE models, VARs, and anything else in between. (See, for example, this recent advert from Ireland.) But if the models disagree, how do you judge between them? For understandable reasons, central banks like to have a ‘core’ model, which collects their best guesses about various issues. Other models can inform these guesses, but it is good to collect them all within one framework. Trivially, you need to make sure your forecasts for the components of GDP are consistent with the aggregate, but more generally you want to be able to tell a story that is reasonably consistent in macroeconomic terms.
Most central banks I know use structural models as their core model, by which I mean models that contain equations that make use of much more economic theory than a structural VAR. They want to tell stories that go beyond past statistical correlations. Twenty years ago, you could describe these models as Structural Econometric Models (SEMs). These used a combination of theory and time series econometrics, where the econometrics was generally at the single equation level. However, in the last few years a number of central banks, including the Bank of England, have moved towards making their core model an estimated DSGE model. (In my earlier post I described the Bank of England’s first attempt, BEQM, which I was involved with, but they have since replaced this with a model without that core/periphery design, more like the canonical ECB model of Smets-Wouters.)
How does an estimated DSGE model differ from a SEM? In the former, the theory should be internally consistent, and the data is not allowed to compromise that consistency. As a result, data has much less influence over the structure of individual equations. Suppose, for example, you took a consumption function from a DSGE model, and looked at its errors in predicting the data. Suppose I could show you that these errors were correlated with asset prices: when house prices went down, people saved more. I could also give you a good theoretical reason why this happened: when asset prices were high, people were able to borrow more because the value of their collateral increased. Would I be allowed to add asset prices into the consumption function of the DSGE model? No, I would not. I would instead have to incorporate the liquidity constraints that gave rise to these effects into the theoretical model, and examine what implications they had not just for consumption, but also for other equations like labour supply or wages. If the theory involved the concept of precautionary saving, then as I indicated here, that is a non-trivial task. Only when that had been done could I adjust my model.
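As a rough illustration of that diagnostic, here is a minimal sketch in Python using statsmodels. The series are synthetic stand-ins built for this example: in practice they would be the DSGE consumption equation’s prediction errors and a measured house price series.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-ins: in practice these would be the errors from the
# DSGE model's consumption equation and (log) real house prices.
house_prices = np.cumsum(rng.normal(size=120))            # random-walk proxy
cons_errors = 0.3 * house_prices + rng.normal(size=120)   # built to correlate

# Regress the consumption-equation errors on house prices. A significant
# coefficient is the sort of evidence described above: the model's errors
# have structure that the theory does not account for.
X = sm.add_constant(house_prices)
print(sm.OLS(cons_errors, X).fit().summary())
```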
In a SEM, things could move much more quickly. You could just re-estimate the consumption function with an additional term in asset prices, and start using that. However, that consumption function might well now be inconsistent with the labour supply or wage equation. For the price of getting something nearer the data, you lose the knowledge that your model is internally consistent. (The Bank’s previous model, BEQM, tried to have it both ways by adding variables like asset prices to the periphery equation for consumption, but not to the core DSGE model.)
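For comparison, the SEM-style fix might look something like this sketch: a single-equation re-estimation with a house-price term bolted on. The specification and data are hypothetical, chosen only to show how quickly it can be done; note that nothing here enforces consistency with a labour supply or wage equation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120

# Hypothetical quarterly logs of income, house prices and consumption.
y = np.cumsum(0.005 + rng.normal(scale=0.01, size=n))     # log income
hp = np.cumsum(rng.normal(scale=0.02, size=n))            # log house prices
c = 0.9 * y + 0.1 * hp + rng.normal(scale=0.01, size=n)   # log consumption

df = pd.DataFrame({"dc": np.diff(c), "dy": np.diff(y), "dhp": np.diff(hp),
                   "ecm": (c - y)[:-1]})                  # lagged c-y gap

# Single-equation re-estimation in the SEM spirit: consumption growth on
# income growth, an error-correction term, and the newly added
# house-price term. No cross-equation consistency is imposed.
model = smf.ols("dc ~ dy + dhp + ecm", data=df).fit()
print(model.params)
```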
Now at this point many people think Lucas critique, and make a distinction between policy analysis and forecasting. I have explained elsewhere why I do not put it this way, but the dilemma I raise here still applies if you are just interested in policy analysis, and think internal consistency is just about the Lucas critique. A model can satisfy the Lucas critique (be internally consistent), and give hopeless policy advice because it is consistently wrong. A model that does not satisfy the Lucas critique can give better (albeit not perfectly robust) policy advice, because it is closer to the data.
So are central banks doing the right thing if they make their core models estimated DSGE models, rather than SEMs? Here is my argument against this development. Our macroeconomic knowledge is much richer than any DSGE model I have ever seen. When we try to forecast, or look at policy analysis, we want to use as much of that knowledge as we can, particularly if that knowledge seems critical to the current situation. With a SEM we can come quite close to doing that. We can hypothesise that people are currently saving a lot because they are trying to rebuild their assets. We can look at the data to try to see how long that process may last. All this will be rough and ready, but we can incorporate what ideas we have into the forecast, and into any policy analysis around that forecast. If something else in the forecast, or policy, changes the value of personal sector net assets, the model will then adjust our consumption forecast. This is what I mean by making reasonably consistent judgements.
With a DSGE model without precautionary saving or some other balance sheet recession type idea influencing consumption, all we see are ‘shocks’: errors in explaining the past. We cannot put any structure on those shocks in terms of endogenous variables in the model. So we lose this ability to be reasonably consistent. We are of course completely internally consistent with our model, but because our model is an incomplete representation of the real world we are consistently wrong. We have lost the ability to do our second best.
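To see what those ‘shocks’ look like, take the textbook consumption Euler equation with CRRA utility; in an estimated DSGE model it typically carries a residual, often labelled a preference shock. (This is one common form, not necessarily any particular bank’s specification.)

$$ c_t^{-\sigma} = \beta\, \mathbb{E}_t\!\left[(1 + r_{t+1})\, c_{t+1}^{-\sigma}\right] e^{\varepsilon_t} $$

If $\varepsilon_t$ turns out to co-move with house prices, the model nonetheless treats it as exogenous noise: nothing inside the model explains it, which is exactly the loss of structure described above.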
Now I cannot prove that this argument against using estimated DSGE models as the core central bank model is right. It could be that, by adding asset prices into the consumption function – even if we are right to do so – we make larger mistakes than we would by ignoring them completely, because we have not properly thought through the theory. The data provides some check against that, but it is far from foolproof. But equally you cannot prove the opposite either. This is another one of those judgement calls.
So what do I base my judgement on? Well, how about this thought experiment. It is sometime in 2005/6. Consumption is very strong, savings are low, and asset prices are high. You have good reason to think asset prices may be following a bubble. Your DSGE model has a consumption function based on an Euler equation, in which asset prices do not appear. It says a bursting house price bubble will have minimal effect. You ask your DSGE modellers if they are sure about this, and they admit they are not, and promise to come back in three years’ time with a model incorporating collateral effects. Your SEM modeller has a quick look at the data, says there does seem to be some link between house prices and consumption, and promises to adjust the model equation and redo the forecast within a week. Now choose, as a policy maker, which type of model you would rather rely on.