(Note: this article is a digression explaining my distinction between empirical and teaching models. This matters a lot for the discussion of DSGE macro models. With this topic out of the way, I will then turn to some deeper analysis.)
There are a number of quite different-looking models that I lump into the "empirical" model category.
- Pure econometric approaches (almost no model structure; just find the best fit to historical data). The advantage of these models is that if the person doing the fitting is competent, they will give the best possible fit to historical data, for a straightforward reason: there is no structure forcing a sub-optimal fit. The problem is that these models are just extrapolating past trends. Although this may make sense to people with a physical sciences background, it ignores the structure of the macro-economy. In control systems terminology, the economy is not an open-loop system, it is a feedback system: fiscal and monetary policy affect outcomes. Policy reaction functions change over time, and so the behaviour of the economy changes -- even ignoring other structural changes in the economy. One could get similar results doing technical analysis of economic time series, while running the risks associated with such strategies.
- Approximating a teaching model, and fitting it to data. The best known models of this class are the linearisations of dynamic stochastic general equilibrium (DSGE) models. The ability to linearise these models was a huge competitive advantage. However, we are no longer dealing with the original model, rather the linearisation. There is an infinite number of nonlinear models that share the same linearisation, and so the linearisation has lost all the properties associated with the original model. This point has been deliberately obfuscated in most discussions.
- Attempts to fit a full teaching model to data. I am not aware of any particular successes of this strategy.
- Large-scale models ("Frankenmodels"). Large-scale models were the state of the art in the 1970s, but they have developed a deserved bad reputation. The most successful of these models at present seems to be the FRB/US model used by the U.S. Federal Reserve. The ugly underside of these models is that they require a large staff to continuously monitor the inputs and results, and they "tune" the models to avoid garbage outputs. This tuning puts the model results almost entirely under the control of the guardians of the model. Furthermore, the models are typically an incoherent mess from a theoretical point of view.
- Partial models -- trying to predict the behaviour of some sector. Nobody has anything good to say about these models, but if I had to do forecasting exercises, these are what I would look at. (Why do I take this unusual stance? My argument is that "animal spirits" is the key hidden variable in macro models. We use the partial models to figure out what causes "animal spirits" to blow up. Since I am a Minsky-ite, my view is that there is always something to blow up, we just need to pin down where and when.)
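The linearisation problem noted above can be made concrete with a toy sketch (my own construction, not drawn from any specific DSGE model): two different nonlinear laws of motion that share the same linearisation around a steady state, yet behave very differently once a shock is large enough.

```python
# Toy illustration (hypothetical dynamics, not any published model):
# two nonlinear difference equations with the same linearisation at x* = 0.

def model_a(x):
    # x_{t+1} = 0.9 x_t + x_t^2
    return 0.9 * x + x * x

def model_b(x):
    # x_{t+1} = 0.9 x_t - x_t^2
    return 0.9 * x - x * x

def linearised(x):
    # Both models have slope 0.9 at x = 0, so they share this linearisation.
    return 0.9 * x

def simulate(f, x0, steps):
    path = [x0]
    for _ in range(steps):
        path.append(f(path[-1]))
    return path

# Near the steady state, all three models track each other closely...
small = [simulate(f, 0.01, 5)[-1] for f in (model_a, model_b, linearised)]
# ...but for a larger shock, the two nonlinear models diverge sharply
# (one explodes, the other decays), even though their linearisation is identical.
large = [simulate(f, 0.5, 5)[-1] for f in (model_a, model_b, linearised)]
```

Since infinitely many nonlinear models collapse to the same linearisation, working with the linearised system tells us little about which underlying nonlinear model (if any) describes the economy.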
Teaching models are used to visualise economic outcomes, but are typically too complex to fit to data. The reason is straightforward: the number of coefficients used to characterise behaviour ends up being large, and we do not have long enough runs of economic data without structural breaks to fit them.
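The free-parameter problem can be illustrated with a small sketch (illustrative numbers only): when a model has as many free coefficients as we have observations, it fits any historical data set exactly, so a perfect in-sample fit tells us nothing, and out-of-sample behaviour can be wild.

```python
# Toy sketch (hypothetical data): a polynomial with as many coefficients as
# observations fits the sample perfectly, then extrapolates nonsensically.

def lagrange_fit(xs, ys):
    """Return the unique polynomial of degree len(xs)-1 through the points."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Five observations of a roughly flat series (level near 1.0).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 1.2, 0.9, 1.1, 1.0]

p = lagrange_fit(xs, ys)
# In-sample: the fit is exact at every observation.
in_sample_error = max(abs(p(x) - y) for x, y in zip(xs, ys))
# Out-of-sample: the extrapolation swings far away from the ~1.0 level.
out_of_sample = p(6.0)
```

With realistic macro data, the situation is worse: the runs of data between structural breaks are short, so even models with far fewer parameters than this face the same identification problem.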
Certain classes of models are inherently impossible to fit to data. The worst offenders are overlapping generations (OLG) models, where the time steps are one generation. That is, there are model households that live 2-3 periods (e.g., working age, retired). (Note that it is possible to have an OLG model with short time steps of one year or less.) There are no data corresponding to every single working age person trading with every single retired person in one day, and then stopping all commerce for 40 years. As such, the model does not correspond to any real-world situation, and can never be fit to data. They mainly exist to tell economic parables.
Other models look as if they could be fit to data, but as noted, they have too many free parameters. There are many behavioural functions that need to appear within economic sectors, and if these functions are nonlinear, the number of parameters used to specify them rises. DSGE macro models are the worst offenders, since they impose the equivalent of a no-arbitrage condition on forward wages and consumer goods prices (as discussed here). Since there is no market for forward wage costs, there is no way of calibrating these forwards. As such, some form of simplification is needed. This simplification implies that we are fitting a simplified model to data, and not the original nonlinear one.
The standard way of getting around fitting issues is shock analysis, or "all else equals" analysis. We set up the teaching model, and see how it responds to some shock to behaviour. The two most common shocks are a change to the policy interest rate (or reaction function), or a fiscal policy change (i.e., fiscal multiplier analysis).
If we assume that "all else is equal," we apply the shock to what we think is the expected path of economic variables to get a forecast.
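A minimal version of this "all else equal" exercise can be sketched as follows (the baseline path, impulse size, and decay rate are all made-up numbers): take an assumed baseline path, add a one-off fiscal impulse, and let its effect decay geometrically.

```python
# Minimal shock exercise (illustrative numbers only): the "forecast" is the
# assumed baseline path plus a decaying shock effect.

def shocked_path(baseline, shock_period, impulse, decay):
    path = []
    for t, base in enumerate(baseline):
        if t < shock_period:
            effect = 0.0
        else:
            effect = impulse * decay ** (t - shock_period)
        path.append(base + effect)
    return path

baseline = [100.0 + 0.5 * t for t in range(8)]   # assumed trend path for GDP
forecast = shocked_path(baseline, shock_period=2, impulse=1.0, decay=0.5)
# Before the shock, the forecast equals the baseline; afterwards, the gap
# between them shrinks by half each period.
```

Note that all of the hard questions (what the baseline path is, how large the impulse is, how fast it decays) are assumptions fed into the exercise, not outputs of it.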
The justification for this step is desperation: if we do not do this, we can say literally nothing about the real world using teaching models. Since academics need to produce a never-ending stream of publications, this approach is the path of least resistance.
The issue is that if we want to use this technique, we need to calibrate the model on its shock output versus observed data. Since an infinite number of models can generate approximately the same shock response, there is no reason to pursue model complexity in order to generate shocks.
I will let the reader draw their own conclusions about the success of this technique.
Hidden Value of Teaching Models: Negative Results
My feeling is that teaching models are often approached in the wrong way. Instead of using them to demonstrate that things could happen, we should instead focus on looking at what cannot happen.
- Use a simple model to blow up conjectures about the economy. Although some might object to this as "straw manning," knowing that things will not happen is sometimes useful.
- Blowing up empirical techniques used in econometrics. The usual assumption in econometric work is that reality generated the observed data using the assumed mathematical process. This is absolutely the worst assumption to make. Instead, look at what happens when we apply the statistical technique to data generated by a different model. My suspicion is that this would cause severe carnage in empirical macro. In order to avoid theoretical nihilism, academics will avoid this approach.
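The second bullet can be sketched with a deterministic toy example (my own construction): generate a series from a process with a structural break and no autoregressive dynamics at all, then fit an AR(1) to it and see what the estimate reports.

```python
# Sketch of the "wrong model" exercise: the true process is two flat regimes
# with a level shift, but we force an AR(1) specification onto the data.

def ar1_estimate(series):
    """OLS estimate of rho in x_t = rho * x_{t-1} (no intercept)."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

# Level shift at t = 50: the series jumps from 0 to 10 and stays there.
data = [0.0] * 50 + [10.0] * 50

rho = ar1_estimate(data)
# The fitted AR(1) reports unit-root "persistence" (rho = 1.0 here), which is
# purely an artifact of forcing the wrong model onto the data.
```

The estimated coefficient looks like strong evidence of persistent dynamics, even though the true process has none; the "finding" is entirely a product of the misspecification.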
The wrong way to approach economics is to take economists' representations about their modelling at face value. Ignore what the abstract says, and look at the mathematics within the paper. If the work is aimed at empirical questions, we need to look at the data used to fit the model, and which model is actually being fitted. The danger is being distracted by a teaching model that has no direct connection to the numerical analysis.
(c) Brian Romanchuk 2020