Sunday, November 1, 2020

Effect Of Recessions On r* Estimates

This article demonstrates the importance of recessions in driving down the r* estimate produced by the Holston-Laubach-Williams (HLW) methodology. Although other algorithms can be used to generate an r* estimate, my argument is that they should have similar qualitative properties. In the case of the HLW estimate, my argument is that the nature of the 2008 recession is a major contributor to the fall in r* thereafter. The underlying problem is that real-world data do not match the probability distribution assumed in the algorithm.

Why This Model?

I do not claim to have a comprehensive knowledge of all suggested methodologies to estimate r*, but from what I have seen, they are similar enough to the core of the HLW methodology that they should have similar qualitative properties. The HLW algorithm was chosen because it is recognised (e.g., it is shown on the New York Federal Reserve website), it is less complex than the Laubach-Williams (LW) estimation technique, and most importantly, I had a working implementation of the algorithm (from GitHub, based on replication code from the N.Y. Fed).

In this article, I apply various shocks to GDP data to demonstrate the empirical properties of the estimate. Since the algorithm output depends upon estimated parameters, we need to keep the shocks relatively small so that the estimation is not disrupted too much. The historical data were truncated at the second quarter of 2019 to avoid the 2020 disruption (discussed here). The magnitude of the 2020 shock was an extreme outlier versus the probability distributions assumed in the algorithm, and so the fitted parameters end up very different, causing very different behaviour of the r* estimate.
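
To make the mechanics concrete, here is a minimal Python sketch of the two operations involved: truncating the sample and perturbing growth rates. It assumes Period-indexed pandas Series of log GDP; the function names and interfaces are illustrative, not my actual code.

```python
# A minimal sketch of the scenario mechanics (illustrative names, not
# the actual replication code). Inputs are pandas Series of quarterly
# log GDP indexed by pandas Periods.
import pandas as pd

def truncate_history(log_gdp: pd.Series, last: str = "2019Q2") -> pd.Series:
    """Drop observations after `last` to avoid the 2020 outlier."""
    return log_gdp.loc[:pd.Period(last, freq="Q")]

def apply_growth_shock(log_gdp: pd.Series, shocks: pd.Series) -> pd.Series:
    """Rebuild the log GDP level after adding shocks to quarterly growth."""
    growth = log_gdp.diff().add(shocks, fill_value=0.0)
    growth.iloc[0] = log_gdp.iloc[0]  # anchor the initial level
    return growth.cumsum()
```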

I am not going to pursue the details of the HLW algorithm for a straightforward reason: the version I used is going to be retired due to the 2020 shock. Either a dummy variable will be used to nullify the 2020 shock (which makes the output highly dependent upon a judgement call on implementation), or new algorithms will be proposed. The argument here is that unless there is somehow a radical rethink of the methodology, the new models will have similar qualitative properties, and the same sort of analysis I do here can be extended to them.

Recessions Create an Asymmetry

Conventional methodologies for r* estimation are based on cascading sets of assumptions.
  1. Assume that a model of the economy is correct, normally a linearisation of a New Keynesian model.
  2. Use a statistical procedure to estimate the model parameters, assuming that the model in step 1 is correct, and that we have a correct probability distribution of shocks.
  3. Use a Kalman filter to estimate hidden variables (in this case, r*, trend growth and output gap). The Kalman filter assumes the model parameters are correct, as well as making assumptions about the shock probability distribution.
It goes without saying that there are a lot of assumptions being made here. The justification is that most deviations from the assumptions will hopefully cancel out. Given that control engineering designs based on linearised models have been safely implemented for decades, there are precedents for such hopes.
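
To give a flavour of step 3, here is a deliberately simplified scalar Kalman filter. The actual HLW filter tracks a multivariate hidden state (r*, trend growth, the output gap), but the predict/update recursion, and the normality assumptions baked into it, have the same structure.

```python
# A scalar Kalman filter sketch: illustrative only, not the HLW filter.
# Model: x_t = a*x_{t-1} + w_t (variance q), y_t = c*x_t + v_t (variance r).
import numpy as np

def kalman_filter(y, a=1.0, c=1.0, q=0.1, r=1.0, x0=0.0, p0=1.0):
    """Return filtered estimates of the hidden state x_t."""
    x, p = x0, p0
    estimates = []
    for obs in y:
        # Predict: propagate the state and its variance forward.
        x, p = a * x, a * p * a + q
        # Update: weight the new observation by the Kalman gain.
        k = p * c / (c * p * c + r)
        x = x + k * (obs - c * x)
        p = (1.0 - k * c) * p
        estimates.append(x)
    return np.array(estimates)
```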

However, a key problem here is that recessions do not conform to the assumptions about the distribution of model shocks. They are asymmetric, and as 2020 demonstrated, they do not conform to the thin-tailed normal distribution.
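
A toy simulation (with made-up numbers) shows the point: mix everyday symmetric noise with rare, large, negative "recession" draws, and the resulting shock distribution has a negative mean and skew that a filter assuming zero-mean normal shocks will misattribute to its hidden states.

```python
# Toy illustration with made-up numbers: symmetric everyday noise plus
# rare, large, negative "recession" shocks. The mixture violates the
# zero-mean normal assumption embedded in the estimation.
import numpy as np

rng = np.random.default_rng(seed=0)
noise = rng.normal(0.0, 0.5, size=400)                      # symmetric shocks
recessions = rng.choice([0.0, -4.0], size=400, p=[0.97, 0.03])
shocks = noise + recessions
skew = ((shocks - shocks.mean()) ** 3).mean() / shocks.std() ** 3
print(f"sample mean: {shocks.mean():+.3f}")  # typically below zero
print(f"sample skew: {skew:+.2f}")           # strongly negative
```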

The lack of symmetry causes r* estimates to be driven lower during recessions, without compensating upward shocks. The rest of this article uses counterfactual GDP scenarios to demonstrate this.

Effect Of Falling Below Trend

Figure: Return To Trend Scenario


One well-known observation about the 2008 recession is that GDP did not return to its pre-recession trend. The figure above depicts a counterfactual scenario in which GDP does return to trend.

The top panel shows historical data versus the counterfactual scenario (data are truncated at 2019Q2 to avoid 2020 shenanigans). Other input series (inflation, nominal rates) are untouched. GDP growth rates were modified to a constant level during the recovery period, then revert to historical values once the trend level is regained. Since the figure shows log GDP, constant growth translates into a straight line, and post-shock data are just a level shift versus historical figures.
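
The construction can be sketched as follows, under my description above: constant growth from the trough until the trend level is regained, then historical growth rates resume. The names and interface are illustrative, not the code used to generate the figure.

```python
# Sketch of the "return to trend" counterfactual: constant growth from
# the trough until the pre-recession trend level is regained, then
# historical growth rates resume (a level shift versus actual data).
import pandas as pd

def return_to_trend(log_gdp: pd.Series, trend: pd.Series,
                    trough: pd.Period, recovery_growth: float) -> pd.Series:
    out = log_gdp.copy()
    hist_growth = log_gdp.diff()
    start = out.index.get_loc(trough)
    recovering = True
    for i in range(start + 1, len(out)):
        if recovering and out.iloc[i - 1] + recovery_growth < trend.iloc[i]:
            out.iloc[i] = out.iloc[i - 1] + recovery_growth     # constant growth
        else:
            recovering = False
            out.iloc[i] = out.iloc[i - 1] + hist_growth.iloc[i]  # historical
    return out
```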

We can see that the r* estimate drops by 150-200 basis points from its pre-recession average to the trough level. In the historical estimate, it recovers about 50 basis points before decaying towards the actual policy rate. In the shock scenario, the recovery is larger (about 100 basis points) before the estimate starts to decay (as expected).

This means that if we accept the "plucking model" of recessions (a large excursion below trend before returning to it), there will be a fairly significant downward shock to r* at each recession. This will only be cancelled out by the slow-moving decay towards the actual policy rate (which I estimated to be 50% of the deviation per decade).
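
As a back-of-envelope check of that decay speed: a 50% closure of the gap per decade (40 quarters) implies a quarterly persistence factor of 0.5^(1/40), i.e., roughly 1.7% of the remaining gap closes each quarter.

```python
# Converting "50% of the deviation per decade" to a quarterly rate.
factor = 0.5 ** (1 / 40)
print(f"gap retained per quarter: {factor:.4f}")      # about 0.983
print(f"gap closed per quarter:   {1 - factor:.4%}")  # about 1.7%
```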

Need an Overshoot

Figure: Cancelling Shock


In order to balance out the shock of the 2008 recession to the r* estimate, we need to not only return to trend, we need to overshoot it. The figure above shows the results if we double the growth rates in the recovery, so that we return to a level above trend that is symmetric to the drop. We see that r* does return to its pre-recession average before starting its decay towards the actual policy rate. (The decay is faster since the gap between r* and r is larger.)
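
Under the same illustrative setup as the earlier sketch, the overshoot scenario doubles the recovery growth rate and only reverts to historical growth once log GDP sits a chosen distance above trend; the overshoot parameter below is a hypothetical target in log points, not taken from the actual scenario code.

```python
# Sketch of the overshoot counterfactual: double the recovery growth
# until log GDP sits `overshoot` log points above trend, then resume
# historical growth rates.
import pandas as pd

def overshoot_trend(log_gdp: pd.Series, trend: pd.Series, trough: pd.Period,
                    recovery_growth: float, overshoot: float) -> pd.Series:
    out = log_gdp.copy()
    hist_growth = log_gdp.diff()
    start = out.index.get_loc(trough)
    recovering = True
    for i in range(start + 1, len(out)):
        if recovering and out.iloc[i - 1] < trend.iloc[i] + overshoot:
            out.iloc[i] = out.iloc[i - 1] + 2 * recovery_growth
        else:
            recovering = False
            out.iloc[i] = out.iloc[i - 1] + hist_growth.iloc[i]
    return out
```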

This should cause concern about the methodology. A basic glance at realised GDP data tells us that large deviations are one-sided. Even if we assume that the linear model is roughly correct, the results will be biased lower by a sequence of recessions.

Groundhog Day!

Figure: Groundhog Data Inputs

To offer some justification for the last assertion, we can look at the "Groundhog Day!" scenario: what happens if the Financial Crisis recession and its aftermath repeat after the end of the historical data set. The chart above shows the input data used in the scenario generation (the nominal interest rate aligned quite nicely). Log GDP is shifted up by a constant level, which is equivalent to identical quarterly growth rates.
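
For concreteness, here is a minimal sketch of how the Groundhog Day GDP input could be built, again assuming a Period-indexed pandas Series of log GDP; the function name and the replay start date are illustrative placeholders.

```python
# Sketch of the "Groundhog Day" GDP input: replay the quarterly growth
# rates from the crisis era after the end of the sample, so the
# appended log GDP is a level shift of the original segment.
import pandas as pd

def groundhog(log_gdp: pd.Series, replay_start: str = "2007Q1") -> pd.Series:
    growth = log_gdp.diff().loc[pd.Period(replay_start, freq="Q"):]
    future = pd.period_range(log_gdp.index[-1] + 1,
                             periods=len(growth), freq="Q")
    levels = log_gdp.iloc[-1] + growth.cumsum().to_numpy()
    return pd.concat([log_gdp, pd.Series(levels, index=future)])
```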

Chart: Groundhog Day r*

The figure above shows the resulting r*. The addition of the new data slightly affected the historical fit, but the effect is nothing comparable to the 2020 shock. We see that r* drops precipitously before decaying higher.

This raises a plausibility issue. Why should the counterfactual 2020s have an r* estimate that is lower than the 2010s r*, even though both decades have identical data?

Concluding Remarks

The secular fall in HLW r* estimates in recent decades can be chalked up to the following factors.
  • There are no upward shocks to growth that counteract recessions. (Inflation shocks might do the job, but have not yet been observed in post-1990 data.)
  • We do not see a dynamic that forces GDP to return to its trend level.  
  • New Keynesian central bankers appear to be chasing their falling r* estimates when setting rates.
It certainly seems possible to change the estimation methodology to deal with some of these problems, but it is unclear whether such patches are compatible with the underlying neoclassical theory. Nonlinear models are another possibility, but they leave open the question: can they be fit to data to give the equivalent of an r* estimate?

I will be writing up this research as a chapter in Recessions: Volume II (link to the first volume). Once I am satisfied with my analysis, I will then discuss my critiques of it. However, we can see that we have a very hard time distinguishing between the following two possibilities.
  1. Neoclassical theory is approximately correct, and r* has been falling — even though the estimation methods are presumably creating a downward bias.
  2. Low interest rates are not always effective in stimulating the economy, and so there is no reason to believe that r* exists. Instead, the effect of interest rates is contingent on conditions within the economy (e.g., is there a housing bubble in progress?). It is no accident that interest rates are stuck at the effective lower bound: interest rate policy is too weak to overcome overly tight fiscal and regulatory policy settings (e.g., labour law).
For my heterodox readers, this final observation is the most interesting possibility. Since I have already outlined it multiple times, I think I will not dig further into the topic until I am ready. However, the counterfactual scenario analysis I do in this and a previous article offers the mechanics of doing the analysis in a rigorous fashion. Using these scenarios, we can build an argument that the methodology cannot differentiate between a world where interest rate policy acts like conventionally assumed, and one where interest rate policy is ineffective.

(c) Brian Romanchuk 2020
