Recent Posts

Monday, April 23, 2018

Triviality Of Parameter Uncertainty And Measurement Noise For Forecasting

In earlier articles, I discussed the notion of forecastability (link to previous article): is it possible to forecast the future values of variables in an economic model? This article will begin an extended analysis of the simplest stock-flow consistent (SFC) model: model SIM. Based on what we know about linear system theory, we can show that two standard sources of uncertainty (measurement noise and parameter uncertainty) are not forecasting challenges if we assume that we are working with the correct economic model. Other sources of uncertainty present greater problems, and will be discussed in later articles.


In my first article on forecastability, I referred to model SIM. I have used this model extensively in my discussions; if you want to explain a principle about models, the most sensible technique is to apply it to the simplest possible model. For those of you unfamiliar with the model, more details are available in this article, in Godley and Lavoie's textbook Monetary Economics (where I took the model from), or my book of SFC models.

In the interests of brevity, I will not write out the model specification; there is a surprisingly high number of equations for a very simple model. The reason for the large number of equations is that it represents a full three-sector model of the economy, with all accounting relations represented.

From a mathematical perspective, model SIM collapses to a simple representation. There is a single external (exogenous) variable -- government consumption (G) -- which, together with the previous period's money stock, determines all other variables' values. In particular, if we look at a single variable in the model, the model generally looks like a first-order low-pass filter system.
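The reduced-form dynamics can be sketched in a few lines of Python. The parameter values, variable names, and the solved-out form of the equations below are my own illustrative choices based on the standard presentation of model SIM, not something taken from this article:

```python
def simulate_sim(G, alpha1=0.6, alpha2=0.4, theta=0.2, H0=0.0):
    """Simulate model SIM for a list of government consumption values G.

    Returns lists of output (Y) and the money stock / household wealth (H).
    Uses the standard solved form: Y = (G + alpha2*H_prev) / (1 - alpha1*(1 - theta)).
    """
    Y_path, H_path = [], []
    H_prev = H0
    for g in G:
        Y = (g + alpha2 * H_prev) / (1.0 - alpha1 * (1.0 - theta))
        YD = (1.0 - theta) * Y             # disposable income after taxes
        C = alpha1 * YD + alpha2 * H_prev  # household consumption function
        H_prev = H_prev + YD - C           # money stock absorbs the flow imbalance
        Y_path.append(Y)
        H_path.append(H_prev)
    return Y_path, H_path

# With constant G, output converges geometrically toward the steady state
# G/theta -- the first-order low-pass behaviour described in the text.
Y, H = simulate_sim([20.0] * 100)
```

Running this with a constant $G$ shows the single-pole response: output starts below the steady state and converges toward $G/\theta$ at a geometric rate set by the alphas and the tax rate.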

This article contains a number of assertions about such first-order low-pass filter systems. I spent a few years of my life staring at this class of systems, trying to see how to derive useful results for a nonlinear variant. (Interested readers may wish to consult "Input-Output Analysis of Feedback Loops With Saturation Nonlinearities" -- published under the moniker "B.G. Romanchuk" -- to see the slightly non-soporific results of that analysis.) So there is a theoretical background behind these assertions, but I am skipping the derivations in the interests of reader time (and sanity).

The rest of this article discusses how we generate forecasts for future values for model SIM, in the deterministic setting, as well as with unknown noise and disturbances.

Suspension of Disbelief

In order to go through this analysis, the reader needs to suspend their disbelief with regards to model SIM. Yes, model SIM is a terrible model of real world economies; that is a property it shares with almost all proposed mathematical economic models. However, in order to discuss the mathematical properties of model SIM, we cannot flit between discussing it, and the properties of the real world or other hypothetical models in a random fashion. Instead, we need to focus on model SIM alone, and understand the mathematical properties of its forecasting power.

My hunch is that part of the problem in academic economists' treatment of mathematical models is a reliance on statistical tests to reject models. They look at a proposed model, compare its forecasts to reality using some black box statistical test, and see whether it passes. Although that might pass for "scientific" in academic economist circles, I would note that this is not standard procedure in systems engineering -- which is a field that actually has some mathematical successes, unlike economics. If we are discussing a mathematical model, we need to take its dynamics seriously, and those dynamics take precedence over generic statistical tests. However, we need to believe in our model dynamics. Since the general expectation is that mathematical economic models fail, there is a disturbing lack of faith in the model dynamics.

Forecasts With Parameter Uncertainty

If we have all available information in a deterministic model, we can obviously forecast the future values of the model output by just running the model. In order for forecasting to be non-trivial, we need some information to be hidden from the forecaster. Since a formal specification of the forecastability of models was not a concern to economists, the breakdown of public/private information tended to be vague and informal, and would have to be determined by the reader.

If we accept the assumptions and model dynamics of model SIM (as defined in the references given earlier), I would argue that the only real source of uncertainty for forecasting is the household sector consumption function. It is specified by two "alpha" parameters (the propensities to consume out of disposable income and out of wealth, respectively). Since I am not concerned with writing out the model equations in detail, I will label them here as $\alpha_1, \alpha_2$.

This being the only source of uncertainty may not match intuition, but it appears to be required if we accept the dynamics of model SIM.
  • If we take the usual definition of a state variable from dynamical systems theory, the only state variable is the money supply (M) -- which also equals household wealth. In systems theory, imperfect knowledge of the state is a typical source of uncertainty, which is why we often embed a Kalman filter into the controller dynamics to pin down the best estimate of the state. However, the money supply is measured very accurately and very close to real time, and so the actual uncertainty about the system state in model SIM (if translated to the real world) is negligible.
  • The business sector acts in an omniscient fashion in model SIM, always hiring exactly enough workers so that it always breaks even. In the real world, there would be uncertainty about its ability to do this. In model SIM, this omniscience is assumed by the model dynamics; there is literally no way to incorporate uncertainty in the business sector within the model structure.
  • Government policy (consumption and tax rates) is exogenous, and in the real world, cannot be forecast perfectly. (Note that monetary policy does not exist in model SIM.) However, if we interpret our model forecasts as being conditional on an assumed path of future government policy, we have eliminated that source of uncertainty. If government policy changes relative to our assumption, our forecast would be wrong. The key is that we can predict the forecast error exactly based on the deviation of actual government policy from what was assumed. That is the best that we could hope for in any mathematical model of the economy.
  • For simplicity, I will assume that the tax rate is fixed for all time. We could adapt the analysis to work with a time-varying tax rate, but we end up with a linear time-varying system, which makes everything harder to work with. Furthermore, the tax rate has to be strictly positive (lying in the interval (0,1)) in order for there to be a solution to the model.
The definition of exact forecastability specifies that the forecasts are generated at a specific time, which we denote T, and we have access to the following public information:
  • the time series of economic variables for the time points 0,1,... T-1 (but obviously not the consumption function parameters); and
  • the level of government consumption (G) for all time.
We can then generate forecasts in a straightforward fashion: first we pin down the $\alpha$ parameters, then run a copy of model SIM using those parameter values to generate the levels of variables for all time greater or equal to T.

Getting the $\alpha_i$ estimates is an exercise in linear algebra. We denote the vector of the two parameters as $\alpha = (\alpha_1, \alpha_2)^T$, which means it is a vector in $R^{2 \times 1}$. (The $T$ superscript denotes the transpose of a vector/matrix.) At $t=0$, the level of household consumption $c$ is given by:
$$c(0) = w_1(0) \alpha_1 + w_2(0) \alpha_2 = w(0) \alpha,$$
where $w(t)$ is a $1 \times 2$ row vector whose entries are the household disposable income and the previous period's wealth ($M$). (In this article, I denote the time series of consumption as $c$, to distinguish it from a matrix, as is usual in linear algebra. This does not match the usual economics notation.)

Similarly, at $t=1$:
$$c(1) = w(1) \alpha.$$
We can then stack up the past history of observed consumption and $w$ vectors to get the matrix equation:
$$c = W \alpha,$$
where $c$ is a $T$-dimensional vector, and $W$ is a matrix with $T$ rows and 2 columns.

We then do some linear algebra:
$$W^T c = W^T W \alpha,$$
$$\alpha = (W^T W)^{-1} W^T c,$$
assuming $(W^T W)^{-1}$ exists.

The conditions for the matrix inverse to exist are straightforward.
  • We will need $T \geq 2$ in order for the matrix to have a chance of being non-singular. That is, we need at least as many data points in the back history as we have parameters to estimate.
  • The choices for $G$ and the inherited money stock at $t=0$ are somewhat constrained. For example, they cannot be equal to zero for all $t=0,1,...T-1$. This is annoying from the perspective of defining forecastability: it is possible to choose inputs to render an otherwise forecastable system non-forecastable. It is left as an exercise to the reader to demonstrate that the measure of the set of inputs $G$ that render the system non-forecastable is zero with respect to the set of all possible inputs over the time interval $0,...T-1$.
In plain English, so long as we have two historical data points, and we avoid pathological choices for inputs, we can then forecast the future trajectory of model SIM perfectly (given the various assumptions).
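The estimation procedure above can be sketched numerically: generate a short history from model SIM with "true" alphas, then recover them via the normal equations. The simulation helper, parameter values, and government consumption path are my own illustrative assumptions, not from the text:

```python
import numpy as np

def sim_history(G, alpha1, alpha2, theta=0.2, H0=0.0):
    """Return (c, W): consumption observations and the regressor matrix
    whose rows are [disposable income YD(t), previous period's wealth H(t-1)]."""
    rows, c_obs = [], []
    H_prev = H0
    for g in G:
        Y = (g + alpha2 * H_prev) / (1.0 - alpha1 * (1.0 - theta))
        YD = (1.0 - theta) * Y
        C = alpha1 * YD + alpha2 * H_prev
        rows.append([YD, H_prev])
        c_obs.append(C)
        H_prev = H_prev + YD - C
    return np.array(c_obs), np.array(rows)

true_alpha = np.array([0.6, 0.4])
c, W = sim_history([20.0, 25.0, 22.0, 30.0], *true_alpha)

# alpha = (W^T W)^{-1} W^T c; solve the normal equations rather than
# forming the inverse explicitly.
alpha_hat = np.linalg.solve(W.T @ W, W.T @ c)
```

With noise-free data and a non-pathological $G$ path, `alpha_hat` matches the true parameters to machine precision, which is the "exact forecastability" claim in numerical form.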

However, exact forecastability is too strong; we intuitively know that we will not find a model that can do that with real world data. It is safe to argue that anyone who feels that mathematical models are useful at all in economics accepts that we want "good enough" forecasts. In order to judge on that criterion, we need to start introducing more sources of uncertainty. This article concludes with one source; later articles will discuss others.

Noisy Observations

Mathematically, the easiest source of uncertainty to add is the possibility that noise corrupts our measurements. In this case, assume that we cannot observe household consumption $c$ directly, but only $\hat{c}(t) = c(t) + n(t)$, where $n(t)$ is the noise signal.

For an engineering or physical system, the concept of noise is unremarkable: anyone who has worked in a laboratory is used to adding in an error bar around measurements of physical variables. We just assume that $n(t)$ lives somewhere within those error bars.

In an economic model like model SIM, we know that accounting identities have to hold. So just having noise on one measurement is not enough; we could back out its true value by applying accounting identities against non-corrupted variables. As a result, specifying noise in these models has a hidden complexity which I want to avoid. For our purposes here, assume that $n(t)$ reflects the noise that remains after we have extracted as much information as possible from the other variables.

From a forecasting perspective, we can just plow ahead and apply the exact same algorithm to pin down the $\alpha$ vector. However, we only expect to converge to the best estimate for $\alpha$ (under certain assumptions, see below) as our fitting history lengthens. This is a fairly standard fitting problem, and its properties are well known.
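This can be illustrated by applying the same normal-equation fit to noise-corrupted consumption observations. With bounded noise, the estimate is close to, but no longer exactly equal to, the true alphas. The data-generation details (noise amplitude, length of history, the varying $G$ path) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, a1, a2 = 0.2, 0.6, 0.4
H_prev, rows, c_true = 0.0, [], []
for g in 20.0 + 5.0 * rng.random(200):   # 200 periods of varying G
    Y = (g + a2 * H_prev) / (1.0 - a1 * (1.0 - theta))
    YD = (1.0 - theta) * Y
    C = a1 * YD + a2 * H_prev
    rows.append([YD, H_prev])
    c_true.append(C)
    H_prev += YD - C
W = np.array(rows)

# Observe consumption only through additive measurement noise n(t).
c_hat = np.array(c_true) + 0.1 * rng.standard_normal(len(c_true))

alpha_hat = np.linalg.solve(W.T @ W, W.T @ c_hat)
err = np.abs(alpha_hat - [a1, a2]).max()
```

The estimation error shrinks as the history lengthens (roughly as the square root of the sample size for well-behaved noise), which is the standard least-squares convergence story.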

In order to do a proper mathematical analysis of this case, we need to start to pin down the properties of the noise signal $n$. There are two standard options (although I only recall seeing the first in the economics literature).
  1. We assume that $n(t)$ is generated by a specific random (stochastic) process, using specific parameters for the probability distribution (mean, standard deviation, etc.).
  2. We just assume that $n(t)$ lies in some standard space of time signals with useful properties, such as finite amplitude (infinity norm), finite energy (2-norm) or finite power (finite 2-norm over finite intervals; this is not a standard mathematical space, but shows up in control systems).
The first choice -- using probability theory -- is the most complex mathematically. My impression is that some people equate mathematical complexity with modelling sophistication. However, I am in the camp that the stochastic approach is less sophisticated from a modelling standpoint (and obviously, the scholars producing stochastic control theory research are likely to disagree with me).

The limitations of the probabilistic formalism are straightforward: the set of all signals (with probability 1) generated by a given stochastic process is only a subset of the wider set of signals in the second case, and assuming a specific set of stochastic parameters to specify the noise signal requires a great deal of certainty about an allegedly unknown noise signal. 

In engineering systems, what we think of as "noise" is often not very random. Anyone who did electronics laboratory work at McGill University had the joy of having their circuits oscillate as a result of the broadcasting from the nearby transmitting tower on Mount Royal. Although the signal looks like a Brownian motion, careful analysis will tell you that, yep, that is "Stairway to Heaven" being frequency modulated at 97.7 MHz. Importantly, some of the "noise" is stuff that we actually have to worry about. The AC current coming into the power supply is not a pure sinusoid, and any circuit that does not have a top-tier power supply will tend to have signals at around 60 Hz (in my part of the grid), and at the various harmonics (120 Hz, 180 Hz). Those frequencies are close enough to the high-frequency harmonics of physical systems that we need to pay some attention to that "noise" in our analysis. This can be done by shaping the random process used in analysis, but doing so is a PITA relative to the deterministic frequency domain noise shaping approach.

We have far less data available when working with economic models, and so the defects that show up in engineering may be less pressing there. However, the stochastic formalism results in far more opaque mathematics, requires a large amount of certainty about the stochastic process, and probably requires throwing out known information about disturbances in order to keep analysis tractable.

We can see the advantages of the non-stochastic approach in this example. We just need to ask a simple question: what will cause the estimate for $\alpha$ to diverge from the true value? One obvious candidate is a noise signal that forces the observed $\hat c$ to equal the true output generated by another incarnation of model SIM with different $\alpha$ parameters (call it $\tilde \alpha$). Under this construction for $n(t)$, we will always converge to $\tilde \alpha$ as our best estimate for the parameters.

We can then see the Achilles Heel of the construction: the noise signal has to be extremely persistent, as it is the difference between the outputs of two distinct linear systems. In the limit, this noise signal would lie outside the space of signals with finite 2-norm, and is thus not a valid noise signal under the deterministic formalism.
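The persistence claim can be checked numerically: drive two copies of model SIM (with different alphas) with the same varying $G$, and look at the difference of their consumption outputs, which is exactly the noise signal the masquerade would require. Its cumulative energy keeps growing without bound. All numbers here (the parameter pairs, the $G$ path, starting near the steady state to skip the transient) are illustrative assumptions:

```python
import numpy as np

def consumption_path(G, a1, a2, theta=0.2, H0=90.0):
    """Consumption output of model SIM; H0 is chosen near the steady state
    for the mean of G, so the comparison is not dominated by the start-up transient."""
    H_prev, out = 0.0 + H0, []
    for g in G:
        Y = (g + a2 * H_prev) / (1.0 - a1 * (1.0 - theta))
        YD = (1.0 - theta) * Y
        C = a1 * YD + a2 * H_prev
        out.append(C)
        H_prev += YD - C
    return np.array(out)

rng = np.random.default_rng(1)
G = 20.0 + 5.0 * rng.random(400)

# The noise signal required to make one parameterisation masquerade as the other.
n = consumption_path(G, 0.6, 0.4) - consumption_path(G, 0.7, 0.3)
energy = np.cumsum(n ** 2)  # running squared 2-norm of the required noise
```

The running energy grows roughly linearly in the sample length: the "noise" never dies away, so it is not a member of the finite-2-norm signal space.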

Admittedly, it would take a bit of mathematics to fill in the details. But I would assert that we would be comfortable with our forecasts for model SIM if we can assure ourselves that our measurements are not being systematically distorted by persistent measurement errors in our fitting data set. And if that is the case, the problem is not our economic model, but rather that we have made a fundamental error in how we are measuring the variables in our system, and there is no way that any economic model can overcome such gross incompetence.

One could probably come up with a similar analysis using the stochastic formalism. The issue is that we would be stuck with particular models for the random process, and the logic would be buried in a dog's breakfast of stochastic calculus.

Concluding Remarks

We have covered the two simplest ways of implementing uncertainty in an economic model -- parameter uncertainty, and measurement noise. Follow up articles will discuss means to create more difficulties for economic forecasting. Another topic of interest is the discussion of the criterion for rejecting models. As one might guess, the author is skeptical about the blind application of statistical tests.

Appendix: Data Measurement Lags

In economic models, one standard problem is the reality that data is measured with a lag. We would represent this by having the measured signal be the true signal passed through a lag filter. This transformation does not meet the usual definition of being noise, since it is a deterministic transformation of the time series.

In control engineering, lags are treated with respect. Anyone familiar with the process of showering will understand why. It takes time for a change in the temperature control dial to show up in the observed temperature of the water. If we are too impatient, we can end up oscillating between too hot and too cold. After a few years' experience with this phenomenon, most people figure out that they need to react with a lag to temperature changes, giving time for previous adjustments to show up. In such an environment, it is impossible to control the temperature of the water at a high frequency (if for some crazy reason you wanted your shower temperature to accurately track a pre-determined trajectory).

As long as we know what the measurement lag is, we can adjust our fitting algorithm to take it into account, and we will once again end up with the best estimate of $\alpha$. For model SIM, we will then have almost no difficulties with forecasting afterward (unless our estimates of government spending and the money supply are lagged, which is somewhat implausible).
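For a known reporting lag, the adjustment amounts to fitting on the shorter, correctly aligned window of data that is actually available at forecast time. A minimal sketch, with illustrative numbers and a hypothetical lag of two periods (none of this is from the text):

```python
import numpy as np

theta, a1, a2, LAG = 0.2, 0.6, 0.4, 2
H_prev, rows, c = 0.0, [], []
for g in [20.0, 25.0, 22.0, 30.0, 27.0, 24.0]:
    Y = (g + a2 * H_prev) / (1.0 - a1 * (1.0 - theta))
    YD = (1.0 - theta) * Y
    C = a1 * YD + a2 * H_prev
    rows.append([YD, H_prev])
    c.append(C)
    H_prev += YD - C

# At forecast time T, only observations up to T - LAG have been published;
# fit on that window (the regressors and observations stay aligned).
W_avail = np.array(rows[:-LAG])
c_avail = np.array(c[:-LAG])
alpha_hat = np.linalg.solve(W_avail.T @ W_avail, W_avail.T @ c_avail)
```

In the noise-free case the shorter window still pins down the alphas exactly, so the lag costs us nothing beyond a delayed start to the fitting history.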

Other models that are more heavily dependent upon state information would have greater difficulties with forecasting in the presence of measurement lags. Our forecasts would be conditional on our estimates of the current values of variables, which we would base on other real-time information as well as the lagged information. Whether the system is forecastable or not would depend on our ability to infer the current values of variables -- which would depend upon the model specification. 

(c) Brian Romanchuk 2018
