This latest eruption has revolved around the non-prediction of the Financial Crisis. Noah Smith argued:
Personally I think DSGE techniques haven't reaped dramatic benefits (yet). But what other alternative is better? When I ask angry "heterodox" people "what better alternative models are there?", they usually either mention some models but fail to provide links and then quickly change the subject, or they link me to reports that are basically just chartblogging. [Link to Wynne Godley paper.]

This received responses, such as this one by Michalis Nikiforos. Although it is clear that Noah Smith aims to be provocative, I want to address the spirit of his views. I will now discuss examples of poorly applied modelling techniques within mainstream macro.
The Debt-To-GDP Ratio And Bond Yields
I discussed the relationship between bond yields and the debt-to-GDP ratio in this article, and the key chart is shown above. It shows a strong negative relationship: higher debt-to-GDP ratios have been associated with lower bond yields.
I have stacked the deck to make those models look as stupid as possible. I discussed some nuances in that earlier article. But if there were such a supply and demand effect that mattered, it should show up in a chart like this. (It is entirely possible that "high" debt-to-GDP ratios raise the 10-year Treasury yield by 0.20%; but when you consider that economist year-ahead yield forecasts are typically hundreds of basis points too high, such a small effect is effectively non-measurable.) The United States is not a special case; practically all developed countries with free-floating currencies have similar charts.
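To make the measurement problem concrete, the sketch below fits a simple OLS line of yields on the debt ratio. The data here are simulated (the actual series behind the chart are not reproduced in this article), constructed only to mimic the downward-sloping scatter described above; the point is that a hypothesised +0.20% supply effect is tiny relative to the residual scatter, let alone multi-hundred basis point forecast errors.

```python
# Hypothetical illustration (not the actual data behind the chart): fit a
# simple OLS line of 10-year yields on the debt-to-GDP ratio.
import numpy as np

rng = np.random.default_rng(0)

# Simulated annual observations: debt ratios drift up, yields drift down,
# mimicking the downward-sloping scatter described in the text.
debt_ratio = np.linspace(30.0, 100.0, 40)                     # percent of GDP
yields = 8.0 - 0.06 * debt_ratio + rng.normal(0.0, 0.5, 40)   # percent

slope, intercept = np.polyfit(debt_ratio, yields, 1)
print(f"OLS slope: {slope:.3f} yield points per point of debt/GDP")

# A 0.20% supply effect is small next to the residual noise alone.
resid_sd = np.std(yields - (intercept + slope * debt_ratio))
print(f"Residual std dev: {resid_sd:.2f}%")
```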
The mainstream response to this obvious downward-sloping line is to engage in widespread statistical and modelling skulduggery to create an upward-sloping line for some wildly transformed variables. This nonsensical activity is viewed as "valid" only because statistical test results are dumped into the papers.
Unit Roots And GDP
Professor Roger Farmer of UCLA provides a recent example of the over-selling of statistical techniques in "There is No Evidence that the Economy is Self-Correcting (Very Wonkish)". He describes the context as follows:
David Andolfatto asks in a twitter exchange for evidence that deviations of GDP from trend are non-stationary. Here is the raw data. Figure 1 is the residual from a regression of the log of real GDP on a constant and a time trend for quarterly US data from 1955q1 through 2014q4.
He provides a dump of various statistical tests that look for the presence of a "unit root"; he has a further explanation in a linked article.
I am not interested in the details of the Farmer-Andolfatto debate, but the chartblogging explanation is that Roger Farmer is testing to see whether there is a tendency for real GDP (shown as a log) to revert towards a time trend, pushed by an unknown "reversion force". (A linear time trend of the log of GDP is equivalent to steady exponential growth.) He demonstrates using statistical tests that there is no such tendency (under this hypothesis).
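The procedure being described can be sketched in a few lines. The example below uses simulated data (a random walk with drift, standing in for the actual GDP series): detrend log "GDP" against a constant and a time trend, then run a Dickey-Fuller style regression of the change in the residual on its lagged level. This is a hand-rolled sketch of the test, not Farmer's actual code.

```python
# Sketch of the trend-reversion test: detrend log GDP, then test the
# residual for a unit root.  Data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 240                                # roughly 60 years of quarterly data
log_gdp = 0.008 * np.arange(n) + np.cumsum(rng.normal(0.0, 0.01, n))

# Step 1: residual from regressing log GDP on a constant and time trend.
t = np.arange(n)
coeffs = np.polyfit(t, log_gdp, 1)
resid = log_gdp - np.polyval(coeffs, t)

# Step 2: Dickey-Fuller regression  delta_e(t) = rho * e(t-1) + u(t).
de = np.diff(resid)
lag = resid[:-1]
rho = (lag @ de) / (lag @ lag)
se = np.sqrt(np.mean((de - rho * lag) ** 2) / (lag @ lag))
t_stat = rho / se
print(f"DF t-statistic: {t_stat:.2f}")

# A t-statistic above the (roughly -3.4) critical value means we cannot
# reject a unit root: no evidence of reversion towards the trend line.
```

(In practice one would use a packaged test such as `adfuller` from statsmodels, which handles lag selection and critical values.)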
As a cute application of how one might use statistical analysis (if we know nothing about how economies work), Professor Farmer's analysis is reasonable enough. But the problem is that we do know something about how the economy works.
The chart above shows nominal GDP growth for the United States for the same time interval used by Professor Farmer. There is an obvious structural break in behaviour between the recent era (the so-called "Great Moderation") and the earlier period, which saw "stop and go" policy action and large inventory corrections. The recent behaviour shows a tendency towards relatively steady nominal GDP growth in an expansion, punctuated with recessions. This sort of behaviour is easily produced in a stock-flow consistent (SFC) model, as nominal incomes are stabilised by the welfare state. (There is a wide range of models that could be used; I am not interested in pinning down which one here.) The differing growth rates for different periods would be explained by changes in the fiscal stance and income distributions, as well as the effect of a housing bubble (which is inherently hard to model, as discussed next).
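The stabilisation mechanism can be seen in the simplest SFC model, model "SIM" from Godley and Lavoie's *Monetary Economics*; the sketch below uses illustrative parameter values. Government spending and the tax rate pin down where nominal output settles (at G divided by the tax rate), which is why shifts in the fiscal stance show up as shifts in the growth trend rather than reversion to a fixed line.

```python
# Minimal stock-flow consistent model (model "SIM" from Godley & Lavoie).
# Parameter values are illustrative.
alpha1, alpha2 = 0.6, 0.4   # propensities to consume out of income, wealth
theta = 0.2                 # tax rate
G = 20.0                    # government spending (nominal)

H = 0.0                     # household money holdings (accumulated wealth)
for period in range(100):
    # Within-period solution: Y = G + C, C = a1*YD + a2*H, YD = (1-theta)*Y
    #   =>  Y = (G + a2*H) / (1 - a1*(1 - theta))
    Y = (G + alpha2 * H) / (1.0 - alpha1 * (1.0 - theta))
    YD = (1.0 - theta) * Y
    C = alpha1 * YD + alpha2 * H
    H += YD - C             # household saving accumulates as money

print(f"Output converges to G/theta = {G/theta:.1f}: Y = {Y:.1f}")
# -> Output converges to G/theta = 100.0: Y = 100.0
```

Raising G or cutting theta moves the steady state, so a sequence of fiscal regimes produces a sequence of growth episodes rather than a single trend line.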
The distinction between this model and the simple model proposed by Professor Farmer is that he is assuming that the stabilisation acts on real GDP, and that the reversion is towards a constantly growing trend line. Since we know that the fiscal stance and income shares have changed in a dramatic fashion since the early 1980s, there is no reason to assume that nominal growth rates would be constant. Therefore, the failure of the statistical tests is essentially meaningless; we need to test a model that better captures the behaviour of the economy.
Professor Lars P. Syll often emphasises this point; see the article "Model validation and significance testing" for more discussion of this.
Predicting The Financial Crisis

The timing and the extent of the Financial Crisis depended upon conditions in the financial markets, as well as the decision to throw Lehman Brothers under the bus. Although market efficiency is a hotly debated topic in academic finance, it is safe to say that predicting market direction is not easily done. There are anomalies in some risk premia, but that does not help anyone time markets, which is the key problem for forecasting a market-led recession. Therefore, I think it is unfair to complain that mainstream models did not predict the Financial Crisis and the ensuing deep drop in economic activity.
But it is fair to say that standard DSGE models offer almost no useful insight into what happened. As I discussed in an earlier article, these models incorporate productivity shocks, which is an unfortunate legacy of Real Business Cycle models. Productivity is pro-cyclical; if activity falls, workers are less productive, as "fixed costs" rise relative to "variable costs". Therefore, the deep insight that these models provide is that falling economic activity "explains" recessions. It is no surprise that mathematical models cannot forecast recessions, but if a modelling technique cannot provide non-trivial backcasts, it is hard to conclude that it is anything other than useless.
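The pro-cyclicality of measured productivity is simple arithmetic: with a fixed "overhead" workforce, output per worker mechanically falls when output falls. The numbers below are invented purely for illustration.

```python
# Toy illustration of pro-cyclical measured productivity.
def productivity(output, overhead_staff=20.0, output_per_line_worker=2.0):
    """Measured productivity = output / total employment, where employment
    is fixed overhead staff plus the line workers needed for the output."""
    line_workers = output / output_per_line_worker
    return output / (overhead_staff + line_workers)

boom = productivity(200.0)   # 200 / (20 + 100) = 1.67 per worker
bust = productivity(160.0)   # 160 / (20 + 80)  = 1.60 per worker
print(f"Boom: {boom:.2f}, bust: {bust:.2f}")
```

No technology "shock" occurred; measured productivity fell because output fell, which is why reading the productivity residual causally runs the logic backwards.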
The Natural Rate Of Interest

I have attempted to ignore the debate sparked by Ben Bernanke's blogging about the natural rate of interest. This is because the entire discussion is silly, which is a strong claim, and I do not have time to develop the arguments in detail.
But the summary version is this: arguing that slow growth ("secular stagnation") is the result of the natural real rate being too low is equivalent to saying that slow growth is being caused by gremlins.
The gremlin theory works like this: although we cannot see the gremlins, we can measure their effect on the economy.
- Use a smoothing technique (the Hodrick-Prescott filter) to generate an estimate of potential GDP.
- We then use "maximum likelihood techniques" to generate model parameters, for a model in which GDP growth reverts to potential as well as being influenced by the number of gremlins.
- This model ignores any other economic variables, most notably fiscal policy (which acts in a fashion that is directionally stabilising almost all of the time).
- We then use a Kalman filter to determine the time series of the number of gremlins. If the economy is growing slower than usual, it means that there is an unusually high number of gremlins.
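The filtering step above can be sketched directly. The code below is a hand-rolled scalar Kalman filter with entirely invented parameter values: growth is assumed to equal trend growth minus a "drag" proportional to the unobserved gremlin count, and the filter backs the gremlin count out of observed growth alone. The mechanical point is that any sustained growth shortfall is attributed to the unobserved state, because nothing else in the model can absorb it.

```python
# Scalar Kalman filter for the "gremlin model".  All parameters invented.
import numpy as np

rng = np.random.default_rng(2)

# Observed growth: trend growth minus the (unseen) gremlin drag plus noise.
trend_growth = 3.0
true_gremlins = np.concatenate([np.zeros(30), np.full(30, 5.0)])  # regime shift
growth = trend_growth - 0.2 * true_gremlins + rng.normal(0.0, 0.3, 60)

# Gremlins follow a random walk; observed growth is the only signal.
q, r, b = 0.5, 0.09, 0.2        # state noise, obs. noise, drag per gremlin
g_hat, p = 0.0, 1.0             # state estimate and its variance
estimates = []
for y in growth:
    p += q                                        # predict (random walk)
    k = p * b / (b * b * p + r)                   # Kalman gain
    g_hat += k * (trend_growth - y - b * g_hat)   # growth shortfall -> gremlins
    p *= 1.0 - k * b
    estimates.append(g_hat)

# When growth slows, the filter "finds" more gremlins.
print(f"Estimated gremlins, early: {np.mean(estimates[:30]):.1f}, "
      f"late: {np.mean(estimates[30:]):.1f}")
```

The estimate is, by construction, a relabelling of the growth shortfall; the filter cannot distinguish gremlins from any omitted variable (such as fiscal policy).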
Returning to the "Natural Rate" theory, all you need to do is replace "gremlins" with "deviation of observed real rates from the natural real rate" and you get the Laubach/Williams paper "Measuring the Natural Rate of Interest" (2001) (the version published in 2003 is generally cited as the best way of calculating it; I have not looked at the 2003 version to see if there are differences between it and the 2001 version). The fact that they are looking at the difference between an unmeasured and a measured variable creates a difference between the natural rate estimation technique and the "gremlin model", but my initial opinion is that this is not significant.* It would take considerable space to fully explain why I believe the two techniques are effectively equivalent (and I am unaware of any research that I could cite).
Since the natural rate of interest appears to be a tautology, it is very hard to find anything constructive to say about a debate which revolves around its implications.
* The model fit is very poor in the Laubach/Williams model, and the measured real rate is relatively stable, so that there should not be a significant effect on model dynamics if we replace the difference between a measured and non-measured variable with a single non-measured variable. Since the Kalman filter is a linear technique, and we presumably cannot have a negative number of gremlins, one might need to replace the "number of gremlins" in my text with the "deviation of the number of gremlins from the natural number of gremlins".
(c) Brian Romanchuk 2015