
Friday, January 22, 2016

DSGE Macro As An "All You Can Eat" Buffet (Part 2)

I previously discussed what I think can be done with mathematical models within economics; fundamental uncertainty about economic outcomes means that we are typically stuck working with partial models of the economy. The Dynamic Stochastic General Equilibrium (DSGE) literature appears to provide a satisfactory set of partial models to work with. In fact, the "publish or perish" imperative means there is an almost unlimited "all you can eat" buffet of DSGE models you can gorge yourself on. Unfortunately, a huge set of models with contradictory implications is only useful in a sociological sense: it allows leaders to provide a "scientific"-sounding justification for whatever policy their prior biases push them to undertake. (This article is the second part of a two-part article; link to Part 1.)

DSGE Models As A Common Language

The most reasonable defense of DSGE models is that they provide a common framework that can be used by economists across the political spectrum. As Professor Wren-Lewis writes:
Does this mean academic macroeconomics is fragmented into lots of cliques, some big and some small? Not really, in the following important sense. I think that any of this huge range of models could be presented at an academic seminar, and the audience would have some idea of what was going on, and be able to raise issues and make criticisms about the model on its own terms. This is because these models (unlike those of 40+ years ago) use a common language. [Emphasis mine - BR] The idea that the academic ranking of economists like Lucas should reflect events like the financial crisis seems misconceived from this point of view.
I am not an expert on the state of the economic literature of the 1970s, but I accept Wren-Lewis' characterisation of the situation then. Mainstream macro has since seen a considerable convergence of methodology; only the post-Keynesians, Marxists, and Austrians remain outside the fold.

One explanation is political. The 1980s saw the collapse of left-wing parties, as well as the discrediting of Monetarists' theories about money supply targeting. This wiped out disagreements about the policy framework, which allowed for more civil discourse amongst economists in their academic seminars.

However, a more charitable explanation is that the pre-DSGE macro literature was full of sloppy mathematics. Modern mathematics can always be reduced to a set of concrete statements about sets and elements of those sets; anything else on top of that is just hand-waving (to use the favoured term of mathematicians). A lot of earlier economics consisted of poorly defined statements about supply and demand curves; it is often quite unclear whether those statements could be translated into acceptable mathematical operations. (To be fair, every branch of applied mathematics uses verbal shortcuts; however, we should be able to translate those shortcuts into rigorous mathematics.) Since the "models" were incoherent (and therefore not actually mathematical models), economists could quite easily disagree about what the models implied.

A Common Language - So What?

However, creating a common language is a weak argument. Any coherent mathematical model would also be understandable and provide a common language. For example, the Stock-Flow Consistent (SFC) modelling methodology uses well-defined models. However, SFC models are arbitrarily deemed to be unacceptable to mainstream economists. What are the advantages of DSGE models versus other frameworks?

Professor Wren-Lewis continues:
It means that the range of assumptions that models (DSGE models if you like) can make is huge. There is nothing formally that says every model must contain perfectly competitive labour markets where the simple marginal product theory of distribution holds, or even where there is no involuntary unemployment, as some heterodox economists sometimes assert. Most of the time individuals in these models are optimising, but I know of papers in the top journals that incorporate some non-optimising agents into DSGE models. So there is no reason in principle why behavioural economics could not be incorporated[...]
It also means that the range of issues that models (DSGE models) can address is also huge. To take just one example: the idea that the financial crisis was caused by growing inequality which led to too much borrowing by less wealthy individuals. This is the theme of a 2013 paper by Michael Kumhof and colleagues. Yet the model they use to address this issue is a standard DSGE model with some twists. There is nothing fundamentally non-mainstream about it.
In summary, he argues that the class of DSGE models is broad enough to cover most topics of interest. However, this is where his argument breaks down.

The class of DSGE models is fairly arbitrary; it seems that the only requirement is that "equilibrium" has to be invoked at some random point within the proofs discussing the models. Beyond that, the models have almost no useful overlap. The theory is not additive; results derived for one model have absolutely no implications for another model.

For example, the standard DSGE models developed by central banks would roughly be characterised as follows.
  • The analysis is built around a "representative household" that effectively makes decisions for the entire household sector. Since it is "representative," the model implicitly assumes that there can be no differences in households (due to inequality, for example) that would cause deviations in behaviour. This is necessary to allow us to find a solution.
  • This household attempts to optimise its expected utility over the forecast horizon (normally to infinity), trading off labour hours (which reduce utility) and consumption (which raises it).
  • The business sector does not engage in long-term planning; it reacts to the marginal cost of labour and marginal revenue, and thus is effectively mindless. This drops the business sector out of the optimisation problem (which is necessary to be able to solve the problem).
  • The only other actor in the model is the central bank; a reaction function is specified, which means that the central bank also does not engage in forward planning; it just reacts to the trajectory set by the representative household.
The solution of the optimisation problem determines the trajectory of key aggregate variables: hours worked, output and the price level. These models were seen to be useful for central banks; they were used to justify the argument that inflation targeting was the optimal policy (for example).
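The flavour of this optimisation can be conveyed with a toy example. The sketch below is not any particular published model: it is a one-period, deterministic caricature of the household problem, with illustrative parameter names (w for the wage, chi for the disutility of labour), solved by brute-force grid search rather than the dynamic programming used in the actual literature.

```python
# Toy "representative household" sketch (illustrative only): choose hours
# worked h to maximise utility u(h) = log(c) - chi*h, with consumption
# financed entirely by labour income, c = w*h.  The first-order condition
# gives h* = 1/chi, so in this log-utility special case the choice of
# hours is independent of the wage.
import math

def optimal_hours(w: float, chi: float, grid_steps: int = 100_000) -> float:
    """Brute-force the utility maximum over a grid of hours in (0, 2]."""
    best_h, best_u = None, -math.inf
    for i in range(1, grid_steps + 1):
        h = 2.0 * i / grid_steps
        u = math.log(w * h) - chi * h  # log consumption minus disutility of work
        if u > best_u:
            best_h, best_u = h, u
    return best_h

# With chi = 2, the analytical optimum is h* = 1/2, whatever the wage.
print(optimal_hours(w=1.0, chi=2.0))  # ≈ 0.5
print(optimal_hours(w=3.0, chi=2.0))  # ≈ 0.5
```

The real models solve an infinite-horizon stochastic version of this problem, which is vastly harder; but the basic structure (one agent, one trade-off, one first-order condition) is the same.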

However, let us compare that to the cited paper "Inequality, Leverage and Crises: The Case of Endogenous Default" (by Michael Kumhof, Romain Rancière, and Pablo Winant). That model can be characterised as follows.
  • Total output is a random walk; that is, the model cannot be used to estimate trends in output (which is what the standard DSGE model is supposed to analyse).
  • The share of output which is garnered by top earners is a random walk (which means that the model offers no insight why inequality would increase).
  • The only economic activity that has interesting dynamics is that rich households lend money to poor households. If those debt burdens get too large, those poor households will default (the "crisis" which is the object of study).
  • Most of the analysis involves determining the trajectory of debt outstanding. The more income the rich have, the more that they can lend, and the greater the chance of default.
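The mechanism can be conveyed with a deterministic toy simulation. The sketch below is not the Kumhof, Rancière, and Winant model itself (which is stochastic, with optimising agents); the lending fraction, default threshold, and income-share paths are all illustrative assumptions, chosen only to show how the qualitative conclusion follows from the setup.

```python
# Deterministic toy sketch of the inequality-lending-default mechanism
# (illustrative parameters, not the paper's model): the rich lend a fixed
# fraction of their income to the poor, and a crisis occurs when the
# poor's debt-to-income ratio breaches an assumed threshold.
def simulate(top_share_path, lend_rate=0.3, default_ratio=1.5):
    """Return the period in which default occurs, or None if it never does."""
    income = 1.0  # total output, normalised and held flat in this sketch
    debt = 0.0    # stock of debt owed by the poor to the rich
    for t, share in enumerate(top_share_path):
        rich_income = share * income
        poor_income = (1.0 - share) * income
        debt += lend_rate * rich_income         # rich lend a fraction of income
        if debt / poor_income > default_ratio:  # debt burden too heavy: crisis
            return t
    return None

rising = [0.30 + 0.02 * t for t in range(40)]  # top income share climbs from 30%
flat = [0.30] * 40                             # top income share held constant
print(simulate(rising), simulate(flat))        # the rising path defaults earlier
```

Since the only asset is loans to the poor, a rising top income share mechanically produces rising debt and an earlier default; the conclusion is built into the accounting, not discovered by the analysis.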
This model has almost nothing in common with the representative household models described earlier. The dynamics are completely different, and each model literally cannot describe the phenomena the other deals with. The representative household model cannot be solved if there is more than one class of household; the inequality model cannot describe economic fluctuations or whether inequality lowers growth rates.

When we look at DSGE models, we are confronted with a vast array of models that have no overlap, because each makes simplifying assumptions in order to find a solution to the system. Each model can therefore only address a single question, and is entirely unable to offer insight into any other issue. As a result, the range of useful models is much smaller than it appears; all we can hope is that an existing model covers the question at hand.

Since the models do not overlap, this opens the ground for competing models of what is supposed to be the same phenomenon. For example, a free market partisan could presumably create a DSGE model where inequality raises long-term growth rates. (This is straightforward; richer households save more, allowing for more investment, which presumably drives higher productivity.) Since the models do not overlap, there is no way of reconciling them. They may speak the same "language," but partisans will still talk right past each other.

Conversely, a more sensible modelling strategy (SFC models, say) would allow analysts to pick and choose amongst models. Since the models are more easily solved, it is possible to incorporate complications within the model. For example, it would not be that difficult to split the household sector in high income and low income cohorts, and examine the effects of inequality on economic flows. Moreover, different models could be tweaked to resemble each other, allowing users to see whether the two models are coherent in areas where they overlap.
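To illustrate that such a split is mechanically straightforward, here is a toy stock-flow consistent sketch in the spirit of Godley and Lavoie's simplest ("SIM") model, with the household sector divided into two cohorts with different consumption propensities. All parameter names and values are illustrative; the point is only that adding a cohort is easy, and that the accounting identity (private wealth equals government debt) can be verified at every step.

```python
# Toy stock-flow consistent model (illustrative parameters): government
# spends G, taxes at rate theta, and two household cohorts consume out of
# disposable income (propensity a1) and out of wealth (propensity a2).
# The low-income cohort spends most of its income; the high-income cohort
# saves more.  Stock-flow consistency requires that accumulated household
# wealth equal accumulated government debt, which is asserted each period.
def run_sim(periods=200, G=20.0, theta=0.2,
            cohorts=({"wage_share": 0.6, "a1": 0.9, "a2": 0.4},   # low income
                     {"wage_share": 0.4, "a1": 0.6, "a2": 0.2})): # high income
    wealth = [0.0, 0.0]  # money balances held by each cohort
    gov_debt = 0.0       # cumulative government deficits
    for _ in range(periods):
        # Solve output Y = G + C by fixed-point iteration (C depends on Y).
        Y = G
        for _ in range(500):
            C = sum(c["a1"] * (1 - theta) * c["wage_share"] * Y + c["a2"] * w
                    for c, w in zip(cohorts, wealth))
            Y = G + C
        gov_debt += G - theta * Y  # deficit adds to the stock of public debt
        for i, c in enumerate(cohorts):
            yd = (1 - theta) * c["wage_share"] * Y     # cohort disposable income
            cons = c["a1"] * yd + c["a2"] * wealth[i]  # cohort consumption
            wealth[i] += yd - cons                     # saving accumulates as wealth
        # Stock-flow consistency: private wealth must equal public debt.
        assert abs(sum(wealth) - gov_debt) < 1e-6
    return Y, wealth

Y, wealth = run_sim()  # converges to the steady state Y = G/theta
```

With these parameters the model converges to the usual SIM steady state (output equals G/theta), and the wealth split between the cohorts falls out of the accounting; examining how redistributing the wage shares changes the flows is then a one-line experiment.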

Crippled By Assumption

As discussed above, even if we accept the claims of DSGE modellers at face value, the models all have an extremely narrow scope, which makes it hard to see which model can be applied to analyse a particular problem. However, I would argue that the reality is even worse: the models are far less effective than their verbal descriptions claim. Most sensible people do not want to waste their time dealing with the mathematical gibberish in DSGE papers, and they presumably only read these verbal descriptions. When you peer under the hood, however, the usefulness of the models is much smaller than claimed (and this puts aside the question of the ridiculous DSGE model assumptions). The narrow scope is the result of the fact that each of these models has crippling assumptions put in place in order to be able to find a solution to the system of equations ("analytical tractability").

The poster children for these problems are the pre-2008 representative household models, which were spectacularly useless during the financial crisis. Although researchers have attempted to patch the models (such as by incorporating credit spreads), the models are still forced to ignore business sector planning due to mathematical complexity, and are almost by definition unable to model the business cycle. (It is called the business cycle and not the productivity cycle for a good reason.)

Researchers have churned out thousands of these models; each has its own distinct set of crippling assumptions. For example, Overlapping Generations (OLG) models feature time increments that are too long to be a useful guide to policy. I cannot hope to go through the literature to point out all of the problems, given the number of published papers. But I will return to the previously discussed Kumhof, Rancière, and Winant paper as an example, since Professor Wren-Lewis cited it approvingly. If he thought the paper was useless, he presumably would not have cited it.

The key defect of the Kumhof, Rancière, and Winant paper is that the model captures almost no economic behaviour; the only decision-making that is addressed is the proportion of income that the rich lend to the poor. There are no other savings vehicles available within the model, which means that lending from the rich to the poor is not a side effect of other economic forces; it is literally baked into the model.

If you look at the actual model, it tells us nothing about an economy where there are alternative savings vehicles open to the rich, and where there is a business cycle (which affects the income share of the rich). Since direct lending from the rich to the poor is minuscule in modern economies, the model dynamics do not even come close to representing real world financial flows. The only possible reason to think that this model has anything to say about real world developed economies is that the reader accepts the authors' verbal description of it without actually looking at its mathematical definition.

The paper attempts to show its empirical validity by fitting the model to real-world data over relatively short time intervals ahead of the Financial Crisis and the Great Depression. However, if you look at the data series they fit, they are essentially monotonic trends. This provides another example of how the DSGE literature ignores falsifiability: it would be nearly impossible for a model with 17 parameters not to fit a time series that exhibits essentially two degrees of freedom (level plus slope).
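To see why such a fit is unimpressive, consider a synthetic example (not the paper's actual data): an ordinary least-squares straight line, with just two free parameters, already captures a smooth monotonic series almost perfectly, so a 17-parameter model doing the same demonstrates nothing.

```python
# Illustration with synthetic data: a smoothly rising series is fit almost
# perfectly by a two-parameter straight line y = a + b*t, so the ability of
# a far more heavily parameterised model to fit such a series is not
# evidence in that model's favour.
def ols_line(ys):
    """Closed-form least-squares fit of y = a + b*t; returns (a, b, R^2)."""
    n = len(ys)
    ts = list(range(n))
    t_bar = sum(ts) / n
    y_bar = sum(ys) / n
    b = (sum((t - t_bar) * (y - y_bar) for t, y in zip(ts, ys))
         / sum((t - t_bar) ** 2 for t in ts))
    a = y_bar - b * t_bar
    ss_res = sum((y - (a + b * t)) ** 2 for t, y in zip(ts, ys))
    ss_tot = sum((y - y_bar) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# A gently curving but monotonic "debt-to-income" style series.
series = [100 * (1.05 ** t) for t in range(20)]
a, b, r2 = ols_line(series)
print(r2)  # R^2 is very close to 1, despite only two parameters
```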

Very simply, you do not need any knowledge of stochastic calculus to know that if you have a model wherein the only investment alternative for the rich is to lend to the poor, a rising income share amongst the rich will lead to more lending to the poor.

To recapitulate, we start off with an apparent cornucopia of DSGE models, covering all kinds of interesting phenomena. They sound great until we get past the abstract and look at the actual models, at which point we see that they have a much narrower analytical scope. This is less good, but at least we have a lot of models. Unfortunately, once we dig deeper, we find that the models are generally fairly terrible, as they have abstracted away most of the interesting macroeconomics. Our buffet has turned into a pile of dodgy leftovers.

Where Are DSGE Models Useful?

Although DSGE models are entirely useless as an analytical device, they will not disappear any time soon, since they fill a sociological need.
  • They allow academics to churn out thousands of papers with non-falsifiable results. (Non-falsifiability is useful; having to retract a paper is embarrassing for everyone involved.) Moreover, existing ideas can be "recycled" into a DSGE framework, making the publication churn even easier to achieve.
  • Having thousands of inconsistent models makes it easier for leaders to demand that their followers find a theoretical justification for a chosen policy.
(c) Brian Romanchuk 2015
