When the need for internal theoretical consistency (microfoundations) and external empirical consistency (matching the data) conflict, what do you do? You might try to improve the theory, but this can take time, so what do policymakers do in the meantime?
There are two ways to go. The first is to stick with microfounded models, and just deal with their known inadequacies in an ad hoc manner. The second is to adapt the model to be consistent with the data, at the cost of losing internal consistency. Central banks used to follow the second approach, for the understandable reason that policymakers wanted to use models that as far as possible were consistent with the data.
However, in the last decade or two some central banks have made their core macromodels microfounded DSGE models. I have not done a proper survey on this, but I think the innovator here was the Bank of Canada, followed by the Reserve Bank of New Zealand. About ten years ago I became heavily involved as the main external consultant in the Bank of England's successful attempt to do this, which led to the publication in 2004/5 of the Bank's Quarterly Model (BEQM, pronounced like the well-known English footballer). I think it is interesting to see how this model operated, because it tells us something about macroeconomic methodology.
If we take a microfounded model to the data, what we invariably find is that the errors for any particular aggregate relationship are not just serially correlated (if the equation overpredicts today, we know something about the error it will make tomorrow) but also systematically related to model variables. If the central bank ignores this, it will be throwing away important and useful information. Take forecasting. If I know, say, that the errors in a microfounded model’s equation for consumption are systematically related to unemployment, then the central bank could use this knowledge to better predict future consumption.
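To make this concrete, here is a stylised sketch (the notation and the particular error process are mine for illustration, not BEQM's). Let $C_t$ be actual consumption, $\hat{C}_t$ the microfounded model's prediction, and $U_t$ unemployment, and suppose the model's error follows

$$e_t = C_t - \hat{C}_t, \qquad e_t = \rho \, e_{t-1} + \gamma \, U_t + \varepsilon_t .$$

If $\rho \neq 0$ the error is serially correlated, and if $\gamma \neq 0$ it is systematically related to unemployment. A forecaster who adds the predictable part of tomorrow's error, $\rho \, e_t + \gamma \, E_t U_{t+1}$, to the model's raw prediction for $C_{t+1}$ will systematically beat the model on its own.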
BEQM addressed this problem by splitting the model into two: a microfounded ‘core’, and an ad hoc ‘periphery’. The periphery equation for consumption would have the microfounded model's prediction for consumption on the right-hand side, but other variables like unemployment (and lags) could be added to get the best fit with the data. However, this periphery equation for consumption would not feed back into the microfounded core. The microfounded core was entirely self-contained: to use a bit of jargon, the periphery was entirely recursive to the core.
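In equation form, a periphery equation for consumption might look something like this (an illustrative sketch rather than BEQM's actual specification):

$$C_t = \alpha_0 + \alpha_1 \hat{C}_t^{\,core} + \alpha_2 U_t + \alpha_3 C_{t-1} + u_t ,$$

where $\hat{C}_t^{\,core}$ is the core model's prediction for consumption, and unemployment and the lag are included purely to fit the data. The direction of influence is one way: the core prediction appears on the right-hand side of the periphery equation, but nothing from the periphery ever appears in the core.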
Now at first sight this seems very odd. If the periphery equation for consumption was giving you your best prediction, surely you would want that to influence the core model’s predictions for other variables. However, to do this would destroy the internal consistency of the core model.
Let us take the hypothetical example of consumption and unemployment again. In the core model unemployment does not directly influence consumption over and above its influence on current and future income. We have found from our periphery equation that we can better explain consumption if it does. (Unemployment might be picking up uncertainty about future income and precautionary saving, for example.) However, we cannot simply add unemployment as an extra variable in the core model's equation for consumption without having a completely worked-out microfounded story for its inclusion. Nor can we allow any influence of unemployment on consumption to enter the core model indirectly via a periphery equation, because that would destroy the theoretical integrity (the internal consistency) of the core model. So the core model has to be untouched by the ad hoc equations of the periphery.
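To see why, suppose (purely for illustration) that the core's consumption behaviour comes from a standard consumption Euler equation,

$$C_t^{-\sigma} = \beta \, E_t \left[ (1 + r_{t+1}) \, C_{t+1}^{-\sigma} \right].$$

Bolting $U_t$ onto the solution of this equation would give consumption behaviour that no longer solves any household's optimisation problem, so the expectations and budget constraints elsewhere in the core would be inconsistent with it. A proper fix would instead derive the role of unemployment from first principles, for example by modelling precautionary saving against idiosyncratic income risk.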
So the core/periphery structure tries to have our microfounded cake and eat it too: the periphery equations let us also exploit the additional knowledge provided by the data. The research goal is eventually to get rid of these periphery equations by improving the microfounded model. But that takes time, so in the meantime we use the periphery equations as well. The periphery equations utilise the information contained in the statistical properties of the errors made by the microfounded model.
I think this core/periphery structure nicely illustrates the dilemma faced by policymaking institutions. They want to follow current academic practice and use microfounded models, but they also want to use the information they have about the limitations of these models. The core/periphery structure described here can be criticised because, as I suggested, this information is not being used efficiently without feedback to the core. However, is there a better way of proceeding? Would it be better to compromise theory by changing the model so that it follows the data, which in BEQM's case would merge core and periphery?
It is sometimes suggested that this is a conflict between forecasting and policy analysis. The example involving consumption and unemployment was chosen to show that this is not the case. The data suggests that the microfounded model is missing something, whether we are forecasting or analysing policy, and the question is what we do while we figure out exactly what is missing. Do we continue to use the wrong model, confident in the knowledge that the stories we tell will at least be consistent, albeit incomplete? Or do we try and patch up the model to take account of empirical evidence, in a way that will almost certainly be wrong once we do figure out properly what is going on?
What has this to do with academic macroeconomics? Perhaps not much for the macroeconomist who builds microfounded DSGE models and is not that involved in current policy. Microfounded model building is a very important and useful thing to do. For the profession as a whole it does matter: the central banks that now use microfounded DSGE models do so because that is how policy is analysed in the better journals. Academic macroeconomists therefore have some responsibility for advising central banks on how to deal with the known empirical inadequacies of those models. When the data tells us the model is incomplete, how do you best use that information?