Wednesday, November 29, 2017

Why Parameter Uncertainty Is An Inadequate Modelling Strategy

We live in a world of uncertainty. One strategy used in economics is to incorporate the notion of parameter uncertainty: we have the correct model, but the parameters have some random variation around a baseline value. This strategy is highly inadequate, and has been rejected by robust control theory. The belief that we have the correct model was an underlying premise of optimal control theory, and the weakness of this premise in practice explains why optimal control theory was largely abandoned in controls engineering. (Interestingly enough, it persists in Dynamic Stochastic General Equilibrium (DSGE) models.)

In this article, I give an example of an abject failure of parameter uncertainty as a notion of model uncertainty.

Why Care About This Example?

The example I give here is deliberately simple: it is a model with only a single parameter. It is not a recognisable economic model. However, it is easy for the reader to experiment with this model to validate the failure of parameter uncertainty.

Optimal control failed in practice because of the general principles illustrated by this example; to be clear, this particular model system is not one that caused control engineers difficulty. I will return to optimal control after I describe the example.

One could look at the simplicity of the example and argue that modern economists are far too sophisticated mathematically to make such an error. However, such arguments ring hollow when we consider that the optimal control engineers fell into a similar trap.

Firstly, the optimal control engineers were more sophisticated mathematically than modern economists. They developed the mathematics that DSGE macro modellers now use. A major driver in the development of optimal control theory was the path-planning required to get a manned mission to the moon in the 1960s: they were literally rocket scientists.

Secondly, they were working on engineering systems. Macro economists constantly complain that they cannot do experiments on their systems of study (except by using DSGE models, of course!). The optimal control engineers had the luxury of doing almost any tests they wanted on the physical systems they studied to determine the dynamics.

Even with these advantages, optimal control still failed as a design technique (outside of said path-planning problems).

The Example

(This article uses a bit of mathematics. Equation-averse readers may be able to skip most of them. The figures were generated by running simulations that were computed using my sfc_models equation solver. The code is given below, and is available in the development branch of the project on GitHub.) 


We have an extremely simple system, and we have a baseline model for it, denoted $P_0$. (We use $P$ to stand for "plant," which is the traditional control systems name for such a system; technically, it is an operator mapping a single discrete-time input to a single output.)
  1. We know that the steady-state gain from the input $u$ to the output $x$ is 1. That is, if we hold the input at a constant value $c \in R$, the output converges to $c$.
  2. We believe that the model lies within a class of models defined by a parameter $a$. The baseline model has $a = 0.05$, and we believe that $a \in [0.005, 0.09]$.
  3. The input $u$ is set by the controller of the system, which has access to the measured output data.
The parameterised model is defined by: $x[k] = (1-a) x[k-1] + a u[k-1], \; \forall k > 0, \; x[0] = 1.$

(Following electrical engineering tradition, I denote the discrete time index with $[k]$.)

We assume that we have access to a great deal of historical data for this system. Once we have validated that the steady state gain is equal to one, the resulting linear system has to have the above form (or be related to it by a linear scaling of $x$).

For whatever reason, we are certain that the parameter $a$ lies in the interval $[0.005, 0.09]$. We can run simulations of the model with three parameter values (the baseline $a = 0.05$, plus the two endpoints of the interval) to get the simulated response to a step in the input $u$ from 1 to 2. (Figure below.)
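
(The snippet below is a minimal stand-alone Python sketch of this open-loop experiment; it does not use sfc_models, and the step time and horizon are illustrative choices of mine.)

# Open-loop step response of x[k] = (1 - a) x[k-1] + a u[k-1],
# for the baseline parameter and the two endpoints of the assumed interval.
def simulate_open_loop(a, num_steps=100, step_time=5):
    x = [1.0]  # x[0] = 1, the steady state associated with u = 1
    for k in range(1, num_steps):
        u_prev = 1.0 if (k - 1) < step_time else 2.0  # u steps from 1 to 2
        x.append((1.0 - a) * x[-1] + a * u_prev)
    return x

for a in (0.005, 0.05, 0.09):  # baseline plus the two interval endpoints
    x = simulate_open_loop(a)
    print("a = {:5.3f}: x[20] = {:.3f}, x[99] = {:.3f}".format(a, x[20], x[99]))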

We can then compare the output of the true system (in red) to these simulations. We see that the true step response is quite close to that of the baseline model. (An eagle-eyed reader might spot the problem here, but it would be much harder to spot if I buried the true system response in random noise.)

As we can see, the system marches at a leisurely pace from 1 to 2, following the change in the input. However, one could imagine that this slow adjustment would be seen as suboptimal, and so we then attempt to speed the response up.

We want the output $x$ to track a reference signal $r$. We define the tracking error $e[k]$ as $x[k] - r[k]$. We would like the tracking error to have the dynamics: $e[k] = \frac{1}{4} e[k-1].$

We can achieve this by choosing $u$ so that it cancels out the existing dynamics of the baseline model and forces $x[k]$ to follow the desired behaviour. This is achieved by the control law:

$u[k] = \frac{-0.7 x[k] + 0.75 r[k]}{0.05}.$
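
We can verify this for the baseline model (assuming a constant reference $r$) by substituting the control law into the plant equation:

$x[k] = 0.95 x[k-1] + 0.05 u[k-1] = 0.95 x[k-1] - 0.7 x[k-1] + 0.75 r[k-1] = 0.25 x[k-1] + 0.75 r[k-1],$

and so, with $r[k] = r[k-1]$,

$e[k] = x[k] - r[k] = 0.25 (x[k-1] - r[k-1]) = \frac{1}{4} e[k-1].$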

If we simulate the closed-loop responses for our baseline system, and for the systems at the extremes of the parameter set, we see that the behaviour is relatively acceptable for all three models.

However, if we apply the control law to the actual system model, we end up with unstable oscillatory behaviour (in red).

In other words, although parameter uncertainty covered the open loop behaviour nicely, actual closed loop behaviour was nothing close to what was implied by the extremes of the parameter set.

The reason for this failure is quite familiar to anyone with a background in control engineering -- or in adjusting showers -- a lag between the input and the output. In this case, the true system is perturbed from the baseline model by adding two lags to the input signal. This is enough to make the resulting closed-loop system unstable. The same effect is felt if one is impatient in setting a shower temperature: if you keep setting the dial based on the current water temperature, you will end up repeatedly overshooting between too hot and too cold. You need to allow time for the water to flow through the pipe before deciding whether the temperature dial needs further adjustment. (An automatic shower temperature control is a standard control systems engineering project.)
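
(Another minimal stand-alone Python sketch, this time of the closed-loop experiment. It assumes the perturbation takes the specific form $x[k] = 0.95 x[k-1] + 0.05 u[k-3]$ -- the baseline model with two extra lags on the input -- and the reference step time and horizon are again illustrative choices of mine.)

# The control law designed against the baseline model, applied to the
# "true" plant: the baseline dynamics with two extra lags on the input,
# x[k] = 0.95 x[k-1] + 0.05 u[k-3].
def closed_loop_true_plant(num_steps=120, step_time=5):
    x = [1.0, 1.0, 1.0]  # start at the steady state x = 1
    u = [1.0, 1.0, 1.0]  # holding u = 1 keeps x at 1 before the step
    for k in range(3, num_steps):
        r = 2.0 if k >= step_time else 1.0  # reference steps from 1 to 2
        # "true" plant: two extra lags on the input signal
        x.append(0.95 * x[k - 1] + 0.05 * u[k - 3])
        # control law designed to cancel the *baseline* dynamics
        u.append((-0.7 * x[k] + 0.75 * r) / 0.05)
    return x

x = closed_loop_true_plant()
print([round(v, 1) for v in x[::10]])  # the oscillation amplitude keeps growing

If the two extra lags are removed (that is, the baseline plant is used), the same loop settles on the new reference within a handful of steps, consistent with the error dynamics derived earlier.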

In summary, systems can behave in ways quite different than predicted solely by parameter uncertainty. Missing dynamics can be fatal.

So What?

The simplicity of this example might make some readers impatient. "Those are just lags. We know how to incorporate them into our estimation strategy by adding parameters."

Not really. Optimal control did not fail because the engineers did not have enough parameters. I had to use a fairly drastic lag to destabilise the system only because the underlying system (a low-pass filter) is remarkably stable. If the system had oscillatory dynamics of its own, much more subtle perturbations to the model would achieve destabilisation.

The optimal control strategy failed because it assumed that the model was known. The methodology was:
  1. Take assumed model of the plant.
  2. Calculate the optimal trajectory (using some objective function).
  3. Force the system trajectory to follow the optimal trajectory by cancelling out the assumed plant dynamics.
This procedure was equivalent to determining the inverse of the mathematical operator of the plant, and using that inverse to impose the target dynamics. Unfortunately, cancelling out an operator with its inverse creates severe numerical instability unless the match is perfect. (Your system matrices end up being ill-conditioned.) This numerical instability made optimal control laws useless in practice.
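
One way to sketch the sensitivity (using operator notation that is only implicit in the discussion above): write the true plant as the nominal plant plus a modelling error, $P = \hat{P} + \Delta$. A control law built around $\hat{P}^{-1}$ then delivers

$P \hat{P}^{-1} = (\hat{P} + \Delta) \hat{P}^{-1} = I + \Delta \hat{P}^{-1},$

so the deviation from the target dynamics is $\Delta \hat{P}^{-1}$. When $\hat{P}$ is close to singular (ill-conditioned), $\|\hat{P}^{-1}\|$ is large, and even a small mismatch $\Delta$ is amplified into a large deviation.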

For those interested in the history of optimal control, I want to underline that I do not think that they were misled by parameter uncertainty. When I did my doctorate in the early 1990s, optimal control theory was only studied as a historical curiosity; nobody really cared what the exact thought processes of its developers were. Based on my hazy memory of the literature (and the only optimal control textbook I own), there was no formal notion of model uncertainty. Instead, mismatches between models and reality were explained as follows.
  1. Our measurement of outputs was corrupted by random noise.
  2. Model dynamics included additive random disturbances.
These issues were dealt with by using the Kalman filter to estimate the state despite the noise, while the assumed stability of the system would handle (finite-energy) external disturbances. (This should sound familiar -- this is exactly the strategy that DSGE modellers inherited.) As can be seen, this is inadequate. Incorrect dynamics can imply a consistent force driving the actual output away from the theoretical trajectory, a possibility that is not captured by random disturbance models.

I am unaware of optimal control theory that dealt with parameter uncertainty. However, it seems likely that engineers would have done basic "sensitivity analyses" in which they varied parameters. They would have done so without the aid of any formal theory to guide them, and would likely have ended up with an approach similar to what I sketched out above (what happens if a parameter is near its assumed limit?).

In any event, robust control -- sometimes called $H_\infty$ control -- abandoned the assumption that we know the plant model with certainty. We still have a baseline model, but we want to ensure that we stabilise not only it, but also a "cloud" of models that are "close" to its behaviour. There are formal definitions of these concepts, but readers would need to be familiar with frequency domain analysis to follow them.

Real World Implications?

In the modern era, we are unlikely to see disasters similar to those created by the application of optimal control in engineering. People are aware of the issue of variable lags, and so policy is unlikely to be as aggressive as the engineering control laws were. Furthermore, the modern mainstream application of optimal control to policy is in the domain of interest rates. Fortunately, interest rate changes have negligible effect on real economic variables, and so policymakers cannot do a lot of damage.

Instead, the applications of the notion of model uncertainty would be more analytical. We might begin to ask ourselves: although we managed to fit some model to historical data, is that fit spurious? How large is the class of models that would yield an equally good fit? We just need to keep in mind that varying a parameter is a much weaker condition than having unknown model dynamics.

Code





(c) Brian Romanchuk 2017

16 comments:

  1. For those who want to explore this further in the browser, here's an 'Insight' about a Balancing Loop With Delay

    This is one of the reasons why you want your stabilisation to be done on the spend side via a Job Guarantee. Its response is near instant.

    Even on the tax side there is a delay from taxation to the payment that removes the spend from the economy - often variable delay - all of which introduce these oscillations.

  2. Great post. Some of it beyond me, but I followed the underlying point (I think!). Thanks Brian.

    1. Thanks. Some of the technical digressions were in there to deal with some of the more trivial complaints that people who are wedded to parameter uncertainty could come up with.

  3. If in aggregate banks and other financial intermediaries hold a portfolio of loans with long repayment terms and relatively fixed interest terms, and if these FIs issue a significant float of liabilities which must be renegotiated overnight or every few months at prevailing money market interest rates, then a central bank or central government with power to rapidly raise interest rates should be able to bankrupt some financial intermediaries. This would induce recession and impact the real economy. It should kill any credit system fueled rapid price inflation but would also have other unwanted side effects that play out with time lags. Central banks probably do not want to take drastic action under typical conditions. So the impact of interest rate policy on financial markets and the real economy is subject to much uncertainty when the system is operating with business as usual conditions. I have never read a paper describing the models for monetary policy transmission that fails to recognize that the exact mechanisms are unknown although there are several channels through which MPT might operate in theory.

    1. Interest rate risk is the most easily hedged risk in the world now, and almost all competent banks are indeed hedged. I could have added a qualifier stating the assumption that the currently timid behaviour of central bank rate setting is maintained.

    2. I am fairly certain there must always be interest rate risk that cannot be adequately hedged when considered for the whole financial system. This is due to the long maturity of financial assets in credit markets relative to the short duration of wholesale funding in money markets. When money market rates rise rapidly the deals that make financial theory work in practice are disrupted by the psychology of financial dealers. Finance theory goes out the window in a money market crisis. The central bank can inject a money market crisis by jacking up money market rates if it creates the institutional structure in which it is always operating as the wholesale dealer of last resort.

    3. (1) The pension industry is massively short duration versus their liabilities. Other financial intermediaries will always find a counterparty for their hedges.

      (2) Modern central bankers faint at the prospect of the policy rate being 100 basis points too high versus the "correct level." Your concerns are only valid if they cranked up rates by thousands of basis points, which ain't gonna happen.

    4. During the global financial crisis the Federal Reserve took operational control over AIG because it was about to default on all sorts of derivative and hedge finance contracts. This contradicts a theory that there will always be a market-based counter-party which will never default on its position. If a modern central banker had to fight rampant inflation rather than deflation, the way to do it is to jack up wholesale interest rates as Volcker did in the early 1980s. This will change the perception of counter-party risk in money and credit markets and will induce a recession with bankruptcy of some units at the margin. The resolution of the systemic bankruptcy will take some pressure off rising prices in the real economy.

    5. AIG did not have *interest rate risk*.

    6. AIG is an example of the potential for systemic counter-party risk. According to conventional literature when a party hedges interest rate risk it must trade-off by taking liquidity and counter-party risk instead. In the system as a whole only the central bank or central government tends to have enough balance sheet to provide liquidity and reduce counter-party risk in a systemic crisis.

    7. I worked as a senior interest rate quant. Nobody is going to worry too much about pension funds as counterparty risk in interest rate swaps, so long as they are hedging their liability risk.

    8. I respect your engineering and finance perspective based on experience; however, if your finance models slant toward finance theory and away from the underlying financial dealer models also discussed by experts such as Hyman Minsky and Perry Mehrling, then I would not expect your models to capture the corner cases concerning market-caused runaway inflation or deflation. Minsky thought the financial structure evolves from hedge, to speculation, to ponzi finance over time when there has not been a money market crisis for a significant period of time. Then a small shock can precipitate a money market crisis. If inflation is rapid in a modern economy, and it is not being caused exclusively by a large government deficit, then it must be partially caused by market credit expansion driving prices up under too much aggregate demand for limited goods and services. The government could jack up wholesale interest rates which will bankrupt units that must be taking on speculative and ponzi finance otherwise there would be no credit boom and no inflation. Every unit cannot be hedged since long term lending at higher interest rates means extending credit to units that do not qualify for wholesale or hedged interest rates.

    9. That’s a theory that ignores the reality that inflation is not running anywhere, and you are completely ignoring the risk profile of the aggregate private sector. Rates cannot really fall from here, and rising rates reduce risk. This was not the case when Minsky was writing.

    10. I framed my comments in a realistic context based on actual past episodes with systemic threats appearing in the corner cases as rapid inflation or rapid deflation. Finance theory assumes that arbitrage in the financial dealer markets always makes liquidity available to refinance positions. This liquidity provision would seem to hold during a rampant private credit fueled inflation (1960s, 1970s, 1980s in the US) and would seem to be violated during a money market crisis that precipitates a potential deflation (2007, 2008 global financial crisis). Minsky wrote about the structural and agent based causes of euphoric inflation and the cash flow causes of recession and depression in this 91 page paper prepared for the Federal Reserve Board:

      https://fraser.stlouisfed.org/files/docs/historical/federal%20reserve%20history/discountmech/fininst_minsky.pdf

      To me it seems that we should just recognize that Fed acts like a financial dealer of last resort which allows the possibility to kill inflation via a forced disruption of money markets (take some liquidity out) and the possibility to prevent significant deflation by responding to a disruption in money markets (replace some liquidity that was previously being provided by market participants in the money markets).

    11. (1) The pension industry is massively short duration versus their liabilities.

      If their liabilities are mostly driven by inflation the short duration might be in order as it's highly correlated with the short end of the yield curve.

    12. Most pensions only offer at most partial indexation, so they’re not that exposed to inflation. In any event, inflation has been highly stable since 1990, and hopes of a 1970s-style bear market just represent a fantasy by pension managers to take them away from the reality that they should have duration-matched a long time ago.

