Tuesday, December 3, 2013

A Poor Specification Of Fiscal Policy Means That DSGE Models Will Not Be Properly Identified

In my previous post, I discussed how the use of the primary balance as the point of departure for the analysis of fiscal policy was problematic. Those observations need to be kept in mind when looking at many mainstream analyses of fiscal sustainability. However, in this post, I look at an implication that is less obvious. Dynamic Stochastic General Equilibrium (DSGE) models which use the primary balance (or a similarly poor specification) to model fiscal policy cannot be properly fitted to empirical data.

There is a great deal of controversy about DSGE models. Their underlying assumptions appear bizarre. However, the usual justification for their use is that they can be fitted to data, and can answer the empirical questions demanded by policymakers (“What happens if we hike rates by 100 basis points?”). Unfortunately, DSGE model parameters may be incorrectly identified in a systematic fashion due to the misspecification of fiscal dynamics. This undercuts their purported usefulness for generating scenario forecasts.

This poor identification of model parameters will generate models that imply that monetary policy is more effective than is warranted. As a result, the run of forecast errors by central banks since the end of the financial crisis is more easily understood…

A Weak Defense Of DSGE Models


Professor Simon Wren-Lewis of Oxford has written extensively about the justification for the use of DSGE models (example article). He argues that policymakers need quantitative answers to questions about the impact of policy choices. They need the answers quickly, without quibbling over technical details, as they are often reacting to crises. (Of course, those crises were typically caused by policymakers’ previous decisions that were not carefully thought out.) Debates about ergodicity, representative agents, and rationality are not what policymakers want to hear.

If you take published papers at face value, the methodology used to develop DSGE models works as follows:

  1. A complex nonlinear economic model is set up, built around the optimisation problem for the “Representative Household”.
  2. This model is linearised.
  3. This linear model is fitted to the observed data (parameters are estimated).
  4. Scenario analysis is then run on the fitted model.

This procedure seems highly vulnerable to a poor specification during the first step; hence the ongoing debates about the microfoundations of DSGE models. However, a more realistic way of understanding the methodology is:
  1. Economists realise that they do not really know what true nonlinear dynamics of the economy are.
  2. They pull a linear model out of their hat. This model has lots of free parameters.
  3. This linear model is fitted to observed data.
  4. Scenario analysis is then run on the fitted model.

When interpreted this way, the methodology makes more sense as a means to generate quantitative forecasts. (As someone with an engineering background, I am probably less shocked by this than a purist who insists that economics must be treated like a pure science.) The methodology is a means to do a statistical fitting, with a limited amount of structure imposed on the model. (Having lots of free parameters can make up for problems in the assumed structure.)
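
To make the “fit a linear model with free parameters” step concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: a two-variable simulated “economy” stands in for the data, and ordinary least squares stands in for the far more elaborate Bayesian machinery actually used.

    # Minimal sketch: fit the free parameters of a linear model
    # x[t+1] = A x[t] + e[t] by ordinary least squares.
    # The "economy" here is simulated; A_true is not from any DSGE paper.
    import numpy as np

    rng = np.random.default_rng(0)

    # "True" dynamics (unknown to the modeller).
    A_true = np.array([[0.9, 0.1],
                       [-0.2, 0.8]])
    T = 200
    x = np.zeros((T, 2))
    for t in range(T - 1):
        x[t + 1] = A_true @ x[t] + rng.normal(scale=0.1, size=2)

    # Steps 2-3 of the stylised methodology: posit a linear model with
    # free parameters, and estimate them from the observed data.
    X_now, X_next = x[:-1], x[1:]
    B, *_ = np.linalg.lstsq(X_now, X_next, rcond=None)
    A_hat = B.T  # lstsq solves X_now @ B = X_next, so A = B.T

    print("Estimated transition matrix:\n", A_hat)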

The alternative is to use a non-structured statistical method to estimate dynamics. But the use of non-structured techniques means that we have no intuition about how dynamics will change if structural parameters move.

As such, I can sympathise with the goals of the methodology used by DSGE modellers. That said, the way in which model parameters are estimated seems incorrect, as the estimation procedure is still vulnerable to an incorrect model structure.

Fiscal Policy In DSGE Models


The bulk of the debate around fiscal policy within DSGE models revolves around “Ricardian Equivalence”. I will defer that important discussion to another post. The assumption of Ricardian Equivalence is probably a major explanation of why the modelling of fiscal dynamics is handled poorly within DSGE models. However, Ricardian Equivalence is less important for the discussion of the linearised model that DSGE modellers end up working with, and it should not matter too much for the parameter estimation. (UPDATE: I have posted an article discussing why Ricardian Equivalence does not hold if the term premium is non-zero.)

There are a lot of DSGE models, and so it is difficult to make general statements about them. For many models, fiscal policy is specified by a primary balance that is determined entirely outside the model (exogenous). As such, the stabilising forces generated by the welfare state that I discussed in the previous post disappear completely. Other models have taxes and/or government consumption depending upon the state of the economy. I will take as an example the well-known model for the euro area proposed by Smets and Wouters (link to ECB paper).

In that model, the government consumes a quantity of real goods that is proportional to real GDP. The implications of this are:
  1. There is no stabilisation of nominal incomes via taxes, or transfer payments for things like unemployment insurance.
  2. Government consumption will fall during a recession, acting in a pro-cyclical manner.

As a result, fiscal policy is assumed to be destabilising for the economic cycle. The stabilisation of nominal incomes by the welfare state, which is trivially observed by looking at income flows across the cycle, is completely absent.
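
A toy simulation makes the point. The output-gap equation and parameter values below are invented (this is not the Smets-Wouters model); the sketch just contrasts a pro-cyclical government consumption rule with a crude automatic stabiliser.

    # Toy comparison of two fiscal rules (not the Smets-Wouters model;
    # all parameter values here are invented for illustration).
    import numpy as np

    a, theta, phi = 0.7, 0.2, 0.2  # persistence, fiscal rule coefficients
    T = 20
    shock = np.zeros(T)
    shock[1] = -1.0  # one-off negative demand shock

    def simulate(fiscal_rule):
        y = np.zeros(T)  # output gap, starts at zero
        for t in range(1, T):
            g = fiscal_rule(y[t - 1])
            y[t] = a * y[t - 1] + g + shock[t]
        return y

    # Rule 1: government consumption proportional to lagged output
    # (pro-cyclical, as in the stylised DSGE specification above).
    y_procyclical = simulate(lambda y_prev: theta * y_prev)

    # Rule 2: a crude automatic stabiliser; the net fiscal injection
    # leans against the output gap.
    y_stabilised = simulate(lambda y_prev: -phi * y_prev)

    print(f"cumulative output loss, pro-cyclical rule: {y_procyclical.sum():.2f}")
    print(f"cumulative output loss, stabilising rule:  {y_stabilised.sum():.2f}")
    # The pro-cyclical rule roughly quadruples the cumulative loss from
    # the same shock; the stabilising rule damps it quickly.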

Of course, we know that economic growth is generally stabilised in modern welfare states (at least outside the euro area periphery) – growth resumes after recessions, and growth rates are fairly steady during the bulk of the expansion. The implication is as follows: since fiscal policy is assumed to be destabilising, the estimation procedure has no choice but to attribute the stability of the economy to either:
  1. inherent stability of capitalist economies (as a follower of Hyman Minsky, I winced when I wrote that);
  2. the stabilising impact of monetary policy (the usual reaction function of central banks has been to react counter-cyclically).

In other words: the estimation procedure will presumably attribute all of the impact of the “automatic stabilisers” to monetary policy. This will be true even if monetary policy actually has no effect on the economy. (This is why I view the debate I refer to as “Interest Rate Effectiveness” as being an open question. There exists empirical evidence that appears to show that monetary policy stabilises the economy, but those estimation procedures may just be picking up the effect of the automatic stabilisers.)
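
This mis-attribution can be worked out in a stylised example. Suppose the true economy is damped only by a fiscal stabiliser, while the assumed model allows only an interest-rate channel. The numbers below are invented, but the relabelling is exact: the estimation recovers a nonzero interest-rate sensitivity even though rates do nothing in the true economy.

    # Toy identification failure (illustrative only; not any published model).
    # True economy: automatic stabilisers damp output, rates do nothing:
    #   y[t] = (a - phi) * y[t-1] + e[t]        (phi = fiscal stabiliser)
    # Assumed model: no fiscal channel, damping comes from the policy rule
    #   r[t] = rho * y[t-1]  and  y[t] = a*y[t-1] - c*r[t] + e[t],
    # so its reduced form is y[t] = (a - c*rho) * y[t-1] + e[t].
    import numpy as np

    rng = np.random.default_rng(1)
    a, phi, rho = 0.9, 0.3, 0.5  # invented values
    T = 5000
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = (a - phi) * y[t - 1] + rng.normal()

    # Fit the reduced-form persistence by OLS...
    b_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

    # ...and back out the interest-rate sensitivity the assumed model needs.
    c_hat = (a - b_hat) / rho
    print(f"true rate sensitivity: 0.0, estimated: {c_hat:.2f}")
    # The fit is fine, but the stabilising power of fiscal policy has been
    # relabelled as monetary policy effectiveness (c_hat is near phi/rho = 0.6).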

Bayes – Not To The Rescue


The usual method of estimating the model parameters is Bayesian: “prior beliefs” are used to create the initial estimates for parameter values. However, these models are heavily over-parameterised. For example, the Smets-Wouters model had 35 model parameters (only 2 of which were fiscal). This means that the prior information is worth less, since there are so many parameters being estimated at the same time.

At the same time, much economic data is slow-moving (low frequency). For example, it’s not exactly an accident that I can slap straight lines through U.S. employment data; the data evolve at a lower frequency than the monthly sample time. For the period 2010-2013, over which I did my straight-line fit, there appear to be over 30 “independent” samples of the unemployment rate. However, the fact that I can fit a straight line through the data means that the data over this interval can be described by two parameters. Therefore, most of the 30 observations do not represent new information for a fitting operation.

This means that the number of free parameters in the estimation procedure is probably comparable to the effective number of degrees of freedom in the data set they used.
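
The point can be illustrated with synthetic data; the series below is invented, and merely mimics a gently trending unemployment rate sampled monthly over three years.

    # Sketch: ~36 monthly observations that are well described by a trend.
    # The synthetic "unemployment rate" below stands in for the 2010-2013
    # U.S. data referenced above; the numbers are invented.
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(36)  # three years of monthly samples
    u = 9.5 - 0.07 * t + rng.normal(scale=0.05, size=t.size)

    slope, intercept = np.polyfit(t, u, deg=1)
    residual_sd = np.std(u - (slope * t + intercept))

    print(f"fit: u = {intercept:.2f} + {slope:.3f}*t, residual sd = {residual_sd:.3f}")
    # Two parameters capture essentially everything: the other ~34
    # observations add almost no independent information to a fit.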

In other words, there is no surprise that the model can fit economic data. The only question is: is there a conceivable set of “reasonable” dynamics that could not be fit, given the number of parameters available? (One could test this by generating simulation data from Stock-Flow Consistent models, which violate various DSGE assumptions, and seeing how the DSGE fitting algorithm copes, as sketched below.)
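
As a sketch of that test, the code below simulates a condensed version of the simplest textbook SFC model (model SIM, from Godley and Lavoie’s Monetary Economics) with invented parameter values. Its output series could then be fed to a DSGE estimation routine.

    # Generating test data from the simplest textbook SFC model
    # ("model SIM" from Godley and Lavoie's Monetary Economics, in
    # condensed form); parameter values are invented.
    import numpy as np

    alpha1, alpha2, theta = 0.6, 0.4, 0.2  # propensities to consume, tax rate
    G = 20.0                               # government spending
    T_steps = 100
    Y = np.zeros(T_steps)
    H = np.zeros(T_steps)                  # household money holdings

    for t in range(1, T_steps):
        # Solve the within-period system: Y = G + C, with
        # C = alpha1*(1 - theta)*Y + alpha2*H[t-1].
        Y[t] = (G + alpha2 * H[t - 1]) / (1.0 - alpha1 * (1.0 - theta))
        YD = (1.0 - theta) * Y[t]          # disposable income
        C = alpha1 * YD + alpha2 * H[t - 1]
        H[t] = H[t - 1] + YD - C           # budget constraint: money accumulates

    print(f"steady-state output approached: {Y[-1]:.1f} (G/theta = {G/theta:.1f})")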

The dynamics of a linear system are mainly described by the behaviour of the “state transition matrix” (a matrix is a table of numeric values). In particular, the dynamics are driven by the eigenvalues of this matrix. A property of these matrices that is not obvious is that the eigenvalues can be extremely sensitive to the matrix entries. In the case of linearised DSGE models, these entries are combinations of the parameters to be estimated. (Nonlinear dynamics will have a similar parameter sensitivity.)

As a result, even “small” deviations from the assumed initial values for parameters can generate a system that fits the data, since these “small” deviations can move the estimated model dynamics into almost any configuration. Meanwhile, the procedure will not have explored a wider set of models to see whether fiscal policy impacts are poorly specified.
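
This sensitivity is a standard linear algebra fact, not specific to economics. The deliberately extreme example below perturbs a single entry of a 10-by-10 transition matrix by 1e-8, and the eigenvalues move by roughly 0.16, eight orders of magnitude more than the perturbation.

    # Sketch of eigenvalue sensitivity (a standard linear-algebra fact,
    # not tied to any particular DSGE model). A 10x10 transition matrix
    # with identical eigenvalues is perturbed in a single entry.
    import numpy as np

    n, eps = 10, 1e-8
    A = 0.5 * np.eye(n) + np.diag(np.ones(n - 1), k=1)  # all eigenvalues 0.5
    A_perturbed = A.copy()
    A_perturbed[-1, 0] = eps  # one entry changed by 1e-8

    eig_after = np.linalg.eigvals(A_perturbed)
    shift = np.abs(eig_after - 0.5).max()
    print(f"perturbation: {eps:.0e}, largest eigenvalue shift: {shift:.3f}")
    # The eigenvalues move by roughly eps**(1/n), about 0.16 here: eight
    # orders of magnitude larger than the parameter change that caused it.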

Implications


The implications of this are straightforward. Models estimated using an unrealistic structure of fiscal policy will attach too much importance to monetary policy, and no significance to non-discretionary fiscal policy (automatic stabilisers). In an environment where the stance of passive fiscal policy is too restrictive but monetary policy is loose, the models will predict rapid growth. That strong growth will fail to materialise, however.

When viewed in this fashion, the repeated forecast misses by multiple central banks, all in the same direction, over the past few years become a bit less surprising.

I cannot offer a simple way to solve this problem, beyond scrapping DSGE models that have a poor specification of fiscal policy. I see no obvious way to disentangle the effects of monetary and non-discretionary fiscal policy, given that these variables are so highly correlated. As a result, I view the estimation problem as a fairly critical theoretical issue. (Obviously, there are more pressing policy debates.)

(c) Brian Romanchuk 2013
