
Sunday, April 22, 2018

Why We Should Be Concerned About The Forecastability Of Economic Models

Although it might be possible to find dissenters, the apparent consensus among financial market practitioners is that mathematical economic models provide terrible forecasts. One response is to keep searching through the set of all possible models, hoping to find something that works. My suggested response is to accept that forecasting is an inherently impossible task. However, in order to advance beyond nihilism, we need to quantify why mathematical models are terrible. My argument is straightforward: the models that provide the best fit to observed behaviour cannot themselves be forecast with the type of information that we have available in the real world.

(As an aside, I would note that Beatrice Cherrier has written about the preference for tractability (simplicity) in mathematical models in "What is the Cost of 'Tractable' Economic Models," and in a follow-up article. Much of what she discusses appears to overlap with my thinking about the existing methodology, although I believe that I have a different view. My suggestion is to look at non-forecastable economic models, and tractable (or reduced-order) non-forecastable models would be the most interesting. A non-tractable non-forecastable model might provide the best fit to reality, but its complexity would also make it difficult to draw any conclusions from it; this is essentially my concern with agent-based models. Since I was in the middle of laying out my logic, I was not able to work in a longer comparison to her arguments.)

Introduction

In an earlier article ("Forecastability and Economic Modelling"), I introduced the concept of forecastability, which is a property of economic models. If a model is forecastable, we can accurately forecast future values of model variables based solely on the history of publicly known time series and public knowledge of exogenous variables (such as policy variables). It would take a significant amount of work to do a survey, but my guess is that the bulk of standard economic models are forecastable; this looks like a methodological bias.

My earlier article outlined the definition of exact forecastability: can we forecast the future of economic variables in the model exactly, given the history of public information? This is too strong a condition, as any form of measurement noise would make such perfect forecasts impossible. For example, if GDP growth equals 2% every period plus an unknown random noise signal that takes values in the range [-0.1, 0.1], the best forecast is 2%, but the forecast will always be slightly off (with probability 1).
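As a concrete illustration, here is a minimal Python sketch of that example (my addition, not code from the article); the noise bounds and the forecast are just the numbers quoted above. The constant 2% forecast is unbiased, yet it is never exactly right.

```python
import numpy as np

# GDP growth is 2% every period plus unknown uniform noise on [-0.1, 0.1].
rng = np.random.default_rng(42)
periods = 10_000
actual = 2.0 + rng.uniform(-0.1, 0.1, size=periods)

# The best forecast available from public information is the constant 2%.
forecast = 2.0
errors = actual - forecast

print(f"mean forecast error:       {errors.mean():+.5f}")         # ~0: unbiased
print(f"mean absolute error:       {np.abs(errors).mean():.5f}")  # ~0.05: never exact
print(f"exactly correct forecasts: {(errors == 0.0).sum()}")      # 0, with probability 1
```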

We need to have a weaker condition: given a "small" unknown "disturbance" to the model in question, can we generate a forecast with "small" forecast errors? This formulation could be quantified, but it will depend upon the nature of the "disturbances."
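One possible formalisation (my phrasing, borrowing the flavour of input-output stability definitions from control theory, rather than anything stated explicitly here): letting $d$ denote the disturbance sequence and $e$ the resulting sequence of forecast errors, we could call a model approximately forecastable if

$$\forall \varepsilon > 0,\ \exists \delta > 0:\quad \sup_{t} \lVert d(t) \rVert < \delta \implies \sup_{t} \lVert e(t) \rVert < \varepsilon.$$

The choice of norms, and what counts as a disturbance, is precisely where the categories below start to matter.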

There are four obvious categories of disturbances to a model to consider.
  1. Measurement errors. We cannot read off the true values of economic variables. (One issue is that if the economic models encompass a full set of national accounts, we could use accounting identities to cut through the noise.)
  2. Model parameters change over time.
  3. "Forces" external to the model which impact variables. These are common in engineering model analysis, for example, a wind gust can hit a plane. However, such disturbances are somewhat difficult to square with models that represent a closed set of national accounts.
  4. Model error: the true model is another model that is "close" to the base case model in question. This is tied to the notion of model robustness, which was the key insight of post-1980s control theory. (Mainstream economists have dipped into 1960s optimal control theory, but they have largely refused to pay attention to the evolution of control theory since then.) Although the notion of two models being close to each other sounds vague, we can quantify this using operator norms.
Note that this list excludes what is allegedly a major source of "model uncertainty": uncertainty as to the level of fixed parameters. Given a fixed model structure and historical data, we can slice through parameter uncertainty like a machete through artisanal cheese. (I will discuss my disdain for parameter uncertainty in a later article in the context of an extended example.)
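To make these categories concrete, here is a minimal Python sketch (my construction; the toy model, parameter values, and disturbance sizes are all illustrative assumptions). It applies each disturbance, one at a time, to a first-order model driven by a known policy input, approximates the operator-norm distance between two "close" models, and then illustrates the parameter-uncertainty point: with the model structure fixed and a noisy history in hand, least squares pins the parameter down.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
u = 1.0 + 0.5 * rng.standard_normal(T)  # known exogenous policy input

def simulate(a=0.9, meas_noise=0.0, drift=0.0, shock_t=None, model_err=0.0):
    """Toy model y(t) = a*y(t-1) + u(t), with one disturbance applied at a time."""
    y = np.zeros(T)
    for t in range(1, T):
        a_t = a + drift * t + model_err    # 2. parameter change / 4. nearby "true" model
        y[t] = a_t * y[t - 1] + u[t]
        if shock_t is not None and t == shock_t:
            y[t] += 5.0                    # 3. external "force" hitting the system
    return y + meas_noise * rng.standard_normal(T)  # 1. measurement error

base = simulate()
cases = {
    "measurement error": dict(meas_noise=0.1),
    "parameter drift":   dict(drift=-5e-4),
    "external shock":    dict(shock_t=100),
    "model error":       dict(model_err=-0.02),
}
for name, kwargs in cases.items():
    deviation = np.abs(simulate(**kwargs) - base).max()
    print(f"{name:18s} max deviation from base path: {deviation:7.3f}")

# 4 (continued). "Closeness" of two models via an operator norm: the H-infinity
# distance between the transfer functions of the a=0.90 and a=0.88 models,
# approximated on a frequency grid.
w = np.linspace(0.0, np.pi, 2048)
z = np.exp(1j * w)

def G(a):
    return 1.0 / (1.0 - a / z)

print(f"operator-norm distance between models: {np.abs(G(0.90) - G(0.88)).max():.3f}")

# Parameter uncertainty: with the structure known, ordinary least squares
# recovers 'a' from a noisy history without much drama.
y_obs = simulate(meas_noise=0.05)
X = np.column_stack([y_obs[:-1], u[1:]])
coeffs, *_ = np.linalg.lstsq(X, y_obs[1:], rcond=None)
print(f"estimated a = {coeffs[0]:.3f} (true value 0.9)")
```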

The multiplicity of sources of disturbances makes a general definition of "small disturbances" difficult. As a result, I would argue that we should instead analyse models with a focus on the properties of their forecast errors, rather than worry about the precise form of the generalisation of exact forecastability.

Forecastability and Falsifiability

One popular defense of economic models is that they are "teaching models"; I use the same defense for my stock-flow consistent models myself. This defense is invoked by both mainstream and heterodox economists. I find the heterodox tradition more congenial, as the literary criticism wing of post-Keynesian economics has kept the claims about teaching models in line; the mainstream tradition no longer has this mechanism to enforce common sense.

As an additional disclaimer, I am only discussing here the part of economics that I have labelled "bond market economics": the components of economics that might be of interest to bond investors. This is actually a relatively wide field, as it does cover all of fiscal and monetary policy, as well as economic modelling. Political economy matters, but that is not my expertise.

It is very easy to savage the notion of teaching models: if they do not offer any useful predictions about real world behaviour, why are we teaching them to students? In particular, why choose one model for teaching, and not the model which suggests the exact opposite conclusion? If economics consists of an art of choosing the right model for each task, in what sense can economists' conclusions be falsified?

I cannot hope to answer such criticisms. However, examining the forecastability of models offers a rigorous counter-attack to people who demand predictions from economic models. If we can show that non-forecastable models generate the sort of forecast errors that we see when we attempt to make forecasts in the real world, we can then argue that the best approximation to the real world consists of models that cannot be used in forecasting. (The conclusions may be less nihilistic: we may be able to say when the models work, and when they do not.)

My intuition about this is derived from the literary criticisms of mathematical modelling from the post-Keynesian tradition. In my view, the key to advancement is to pin down the mathematical properties of the models that make them non-forecastable.

Next Steps

So far, my discussion of forecastability has been mainly literary. It is acting as an introduction to the more mathematical analysis that I hope to produce. There may be an elegant way of summarising my views mathematically, but I do not yet see such a summary. Instead, we will need to plod through some mathematical models, and examine the properties of the forecast errors that they produce. Even if there is no elegant theorem at the end of this, we will have a test bed of examples that give a new view on modelling problems.

As a spoiler for my upcoming discussion of an example, it seems to me that economists have grown so accustomed to the failure of mathematical models of the economy that they tend not to consider what would happen if a mathematical model were correct. They (and others, particularly physicists) want to compare economics to physics, when the modelling problems are much closer to engineering. In engineering, we are habituated to a mixture of success and failure of theoretical models. In particular, engineers are habituated to seeing models that should work based on fundamental physics fail, yet ugly seat-of-the-pants approximations work perfectly fine. In any event, when confronted with a mathematical model of the economy, the better question to ask is: what would happen if this were the correct model, instead of just looking for a statistical test to reject it?

(c) Brian Romanchuk 2018

3 comments:

  1. This comment has been removed by a blog administrator.

  2. Infantile model bricolage, or, How many economists can dance on a non-existing pinpoint?
    Comment on Brian Romanchuk on ‘Forecastability And Economic Modelling’*

    “The highest ambition an economist can entertain who believes in the scientific character of economics would be fulfilled as soon as he succeeded in constructing a simple model displaying all the essential features of the economic process by means of a reasonably small number of equations connecting a reasonably small number of variables. Work on this line is laying the foundations of the economics of the future . . .” (Schumpeter, 1946)

    The future is now, and economists still do NOT have the paradigmatic simple core model, but a heap of incommensurable and contradictory constructions. Pluralism may have its merits elsewhere, but it is the worst thing that can happen in science. As the ancient Greeks already observed: "There are always many different opinions and conventions concerning any one problem or subject-matter…. This shows that they are not all true. For if they conflict, then at best only one of them can be true." (Popper)

    Fact is that, in economics, ALL models are axiomatically false. It holds: "When the premises are certain, true, and primary, and the conclusion formally follows from them, this is demonstration, and produces scientific knowledge of a thing." (Aristotle, 300 BC) Fact is that the premises of current models are neither certain, true, nor primary.

    Brian Romanchuk’s SIM model is a case in point. He enumerates his key premises as follows.
    • The model is a straightforward three sector model, with a household sector, business sector, and government.
    • The household consumption function is defined in terms of a pair of propensity to consume parameters (out of income, out of wealth). …
    • The business sector is constrained to break even, …
    • Government policy is specified in terms of government consumption and a fixed tax rate.

    Brian Romanchuk starts with macrofoundations, which is correct. But then he assumes a consumption function, which is a NONENTITY, and break even for the business sector, which kills the model already at this early stage because a zero profit economy is the most idiotic NONENTITY of them all.

    Let us contrast this with the standard microfoundations approach. The whole analytical superstructure of Orthodoxy is based upon this set of hardcore propositions a.k.a. axioms:
    • HC1 There exist economic agents.
    • HC2 Agents have preferences over outcomes.
    • HC3 Agents independently optimize subject to constraints.
    • HC4 Choices are made in interrelated markets.
    • HC5 Agents have full relevant knowledge.
    • HC6 Observable economic outcomes are coordinated, so they must be discussed with reference to equilibrium states. (Weintraub)

    HC3 introduces marginalism, which is the all-pervasive principle of Orthodoxy. HC3, HC5, and HC6, though, are plain NONENTITIES.#1

    See part 2

  3. Part 2

    In order to be applicable, HC3 requires a lot of auxiliary assumptions, most prominently a well-behaved/differentiable production function.#2 Taken together, all axioms and auxiliary assumptions then crystallize to supply-function/demand-function/equilibrium, or what Leijonhufvud famously called the Totem of Micro.#3

    The methodological fact of the matter is that ALL models that take just one NONENTITY into the premises are a priori false.#4

    So, because these premises are NOT “certain, true, and primary” they cannot be used for model building: expected utility, rationality/bounded rationality/animal spirits, constrained optimization, well-behaved production functions, supply/demand functions, simultaneous adaptation, equilibrium, first/second derivatives, total income=value of output, I=S, real-number quantities/prices, ergodicity. Every theory/model that contains just one NONENTITY goes straight into the wastebasket.

    The standard microfoundations approach with all its variants and derivatives up to DSGE is methodologically false. The same holds for Keynes’ macrofoundations and all After-Keynesian variants.

    To put NONENTITIES into the premises is the defining characteristic of fairy tales, science fiction, theology, Hollywood movies, politics, proto-science, and the senseless model bricolage of scientifically incompetent economists.#5

    Egmont Kakarot-Handtke

    * http://www.bondeconomics.com/2018/04/forecastability-and-economic-modelling.html

    #1 The solemn burial of marginalism
    https://axecorg.blogspot.de/2016/04/the-solemn-burial-of-marginalism.html

    #2 Putting the production function back on its feet
    https://axecorg.blogspot.de/2016/09/putting-production-function-back-on-its.html

    #3 Equilibrium and the violation of a fundamental principle of science
    https://axecorg.blogspot.de/2017/06/equilibrium-and-violation-of.html

    #4 The future of economics: why you will probably not be admitted to it, and why this is a good thing
    https://axecorg.blogspot.de/2016/01/the-future-or-economics-why-you-will.html

    #5 How to restart economics
    http://axecorg.blogspot.de/2016/01/how-to-restart-economics.html

