Sunday, December 3, 2017

Robust Control Theory And Model Uncertainty

Mainstream mathematical economics is strongly influenced by optimal control theory. As I discussed previously, optimal control was abandoned as a modelling strategy decades ago by controls engineers; it only survives for path-planning problems, where you are relatively assured that you have an accurate model of the overall dynamics of the system. In my articles, I have referred to robust control theory as an alternative approach. In robust control theory, we are no longer fixated on a baseline model of the system; instead, we incorporate model uncertainty. These concepts are not just the standard hand-waving that accompanies a lot of pop mathematics: robust control theory is a practical and rigorous modelling strategy. This article is a semi-popularisation of some of the concepts; there is some mathematics, but readers may be able to skip over it.

It should be noted that I have serious doubts about the direct application of robust control theory to economics. In fact, others have already discussed robust control in economics (sometimes under the cooler-sounding name $H_\infty$ control). As such, the examples I give probably have little direct application to economics. Instead, the objective is to explain how we can work with model uncertainty in a rigorous fashion.

This article reflects the thinking in robust control theory up until the point I left academia (in 1998). I have not paid attention to subsequent developments, but based on the rather glacial pace of advance in control theory at the time, I doubt that I missed much. It should be noted that there were disagreements about the approach; I was part of the dominant clique, and this article reflects that "mainstream" approach. (I discuss briefly one alternative approach at the end, only because it was actually adopted by an economist as a modelling strategy.)

For readers with a desire to delve further into robust control theory, the text Feedback Control Theory, by John C. Doyle, Bruce A. Francis, and Allen R. Tannenbaum, was the standard text when I was a doctoral student. There are more recent texts, but the ones that I saw were only available in hardcover.

The Canonical Control Problem

The standard control problem runs as follows.
  1. We have a system that we wish to control. By tradition, it is referred to as the plant, and is denoted P. We have a baseline model $P_0$, which comes from somewhere -- physics, empirical tests, whatever. (This model is provided by the non-controls engineers.)
  2. We design a controller -- denoted $K$ -- that is to stabilise the system. It provides a feedback control input $u$ that is used to guide the plant's operation.
  3. We typically assume that we are analysing the system around some operating point that we can treat as a linear system. (My research was in nonlinear control, and the amount of theory available is much smaller.)
  4. We often assume that the observed output ($y$) is corrupted by noise ($n$), and there may be disturbances ($d$) with finite energy that also act as inputs to the plant.

The diagram above shows the layout of the system, with the variables indicated. We assume that each signal is a scalar (not a vector).

If the system is linear, time invariant, and discrete time, we can use the z-transform to analyse the system. (In continuous time, we use the Laplace transform.) The z-transform uses the frequency domain to analyse systems; we are not tied to any state-space model.

The reason why we use the z-transform is that it turns the operation of a system into a multiplication. If $Y(z)$ is the z-transform of $y(k)$, then:
Y(z) = P_0(z) (D(z) - U(z)).
(By convention, we use a negative feedback for the control output u.)
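The convolution-to-multiplication property can be verified numerically. (This is a toy sketch of my own, not from the article; the sequences are made up.) Applying a discrete-time system is convolution in the time domain, which the z-transform turns into multiplication of the coefficient polynomials:

```python
import numpy as np

# Toy check: convolving an input with an impulse response (time domain)
# gives the same coefficients as multiplying the corresponding
# polynomials (what the z-transform does in the frequency domain).
x = [1.0, 2.0, 3.0]    # input sequence (hypothetical)
h = [1.0, -1.0]        # impulse response: a first-difference filter

y_time = np.convolve(h, x)   # time domain: convolution
y_poly = np.polymul(h, x)    # "z domain": polynomial multiplication

print(np.allclose(y_time, y_poly))  # True: the two agree
```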

We can now calculate the transfer function from the disturbance d to the plant output y (ignoring the noise n).
U(z) = K(z) Y(z).
Y(z) = P_0(z) D(z) - P_0(z) K(z) Y(z).
We arrive at:
Y(z) = P_0(z) (1 + P_0(z) K(z))^{-1} D(z).

The term $P_0(z) (1 + P_0(z) K(z))^{-1}$ is the closed-loop model of the system (the area in dotted lines in the above diagram). If the closed loop model of the system is stable, standard linear system theory tells us that the closed loop will reject noise and disturbances. The zero point is a stable equilibrium, using the rigorous dynamical system definition of equilibrium (and not the hand-waving metaphysical definition used in economics).

The above was standard system theory; optimal control worked within this framework. Any notion of uncertainty was assumed to be handled by either the disturbance or noise.
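The closed-loop calculation can be made concrete with a minimal sketch (my own example, with made-up numbers). For a first-order plant $P_0(z) = b/(z-a)$ and a proportional controller $K(z) = k$, the closed-loop transfer function $P_0(z)(1 + P_0(z)K(z))^{-1}$ reduces to $b/(z - (a - bk))$, so stability is just the single pole $a - bk$ lying inside the unit circle:

```python
# Closed-loop pole check for a first-order discrete-time plant.
# Plant: P0(z) = b / (z - a); controller: K(z) = k (proportional).
# Closed loop from d to y: P0 / (1 + P0 K) = b / (z - (a - b*k)),
# stable iff the pole a - b*k lies inside the unit circle.

def closed_loop_pole(a, b, k):
    """Pole of P0/(1 + P0 K) for P0 = b/(z - a), K(z) = k."""
    return a - b * k

def is_stable(a, b, k):
    return abs(closed_loop_pole(a, b, k)) < 1.0

# A hypothetical unstable plant (open-loop pole at 1.5):
a, b = 1.5, 1.0
print(is_stable(a, b, 0.0))   # False: open loop is unstable
print(is_stable(a, b, 1.0))   # True: feedback moves the pole to 0.5
```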

In robust control, we assume that the "true" plant model lies close to our baseline model, but it is not exactly the baseline model. We can express this uncertainty in a number of ways. The diagram above shows one standard possibility: the actual plant is equal to the baseline plant, which is locked in a feedback loop configuration with an unknown system $\Delta$.

We obviously cannot do much analysis if there are no constraints on $\Delta$; the true system could be literally anything. We constrain $\Delta$ so that its gain in the frequency domain is less than or equal to one. (This is denoted $\| \Delta \|_\infty \leq 1$: the infinity norm is less than or equal to one, which is where $H_\infty$ control gets its name.) This characterisation was developed by the late George Zames, a professor at McGill University.

We can then manipulate the systems to calculate the baseline closed-loop model, and place it in a loop with $\Delta$. We can then apply a fixed point theorem -- called the Small Gain Theorem in control theory (also due to George Zames) -- to show that if the infinity norm of the baseline closed-loop model is less than one, the overall system will be stable. In other words, the controller will stabilise the true plant for any $\Delta$ in the set of possible perturbations.
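A minimal numerical sketch of the Small Gain condition (the model and numbers are my own illustrative choices): grid the unit circle to estimate the infinity norm of a baseline closed-loop model $M(z)$, and check that it is below one, so that stability holds for any perturbation $\Delta$ with $\| \Delta \|_\infty \leq 1$:

```python
import numpy as np

def hinf_norm_estimate(num, den, n_points=4096):
    """Estimate ||M||_inf for M(z) = num(z)/den(z) (assumed stable)
    by gridding the unit circle and taking the peak gain."""
    omega = np.linspace(0.0, np.pi, n_points)
    z = np.exp(1j * omega)
    return np.abs(np.polyval(num, z) / np.polyval(den, z)).max()

# Hypothetical baseline closed-loop model M(z) = 0.4 / (z - 0.5);
# its peak gain occurs at z = 1 and equals 0.4 / 0.5 = 0.8.
peak = hinf_norm_estimate([0.4], [1.0, -0.5])
print(peak < 1.0)  # True: Small Gain condition is satisfied
```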

By contrast, optimal control was highly aggressive in its use of the baseline model: almost any perturbation of the system away from the assumed model would result in instability (or else the numerical procedure to determine the optimal control law was itself numerically unstable).

The above specification of uncertainty is standard, but somewhat naive. In practice, we have a rough idea what sort of uncertainty we are up against, and we can extend the analysis to shape the uncertainty in the frequency domain. For example, we usually have a good idea of the steady-state operating gain of a system, but often have little idea what the high-frequency dynamics are. We shape the frequency-domain characterisation of the uncertainty accordingly, and that shaping in turn constrains how we design our control laws.
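To make this concrete with a made-up example (the weight and its coefficients are mine, purely illustrative): a first-order weight with a zero near $z = 1$ allows little uncertainty at low frequencies, where we trust the steady-state gain, and much more near the Nyquist frequency, where we distrust the dynamics:

```python
import numpy as np

# Hypothetical uncertainty weight W(z) = 2 (z - 0.95) / (z + 0.5).
# The zero near z = 1 makes the weight small at low frequencies
# (steady-state gain well known); the gain grows toward the Nyquist
# frequency (high-frequency dynamics poorly known).
def weight_gain(omega):
    z = np.exp(1j * omega)
    return abs(2.0 * (z - 0.95) / (z + 0.5))

print(weight_gain(0.0))     # ~0.067: little uncertainty allowed at DC
print(weight_gain(np.pi))   # ~7.8: large uncertainty allowed at Nyquist
```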

Applications to Economics?

The direct application of control theory is in the design of policy responses, as was done in the Dynamic Stochastic General Equilibrium literature. The difficulty with trying to apply robust control is that we do not really have a good notion of the baseline system. Also, the true models are certainly not linear.

Another issue is that the type of uncertainty we face is somewhat different. We know with certainty that accounting identities will always hold. The true uncertainty we face is the behaviour of economic sectors. I made an initial stab at analysis that exploits this feature in an earlier article. However, I do not see an obvious way to shoehorn that type of model into existing robust control theoretical frameworks.

However, the realisation that we can rigorously discuss model uncertainty means that we should not be treating uncertainty solely via parameter uncertainty.

Hansen and Sargent's Approach

In 2008, Lars Peter Hansen and Thomas J. Sargent published the book Robustness, which was an attempt to bring robust control theory to economics. I started reading the book with high expectations, and gave up fairly quickly.

Within control theory, there were a number of differing approaches to robust control. Back when I was a junior academic, I would have had to be diplomatic and pretend that they were all equally valid. Since I am no longer submitting papers to control engineering journals, I am now free to write what I think. My view was that those alternative approaches were largely bad ideas, and were only useful for expanding publication counts.

One such alternative approach was to apply game theory. Instead of a truly uncertain model, you are facing a malevolent disturbance that knows the weak points of whatever control law you are going to apply. You are forced to use a less aggressive control strategy, so that it is not vulnerable to this interference. You ended up with the same final set of design equations as in robust control, but that was just a lucky accident of linear models. Any nonlinearity destroyed the equivalence of the approaches; that is, an uncertain nonlinear system could not be emulated with a game theory framework. (Since I worked in nonlinear control, I largely managed to ignore that literature. However, I was forced to work through a textbook detailing it as part of a study group, and I hated every minute of it.)

Working entirely from memory, I believe that you also largely lost the ability to shape the uncertainty in the frequency domain. That is, the fact that we generally know more about the steady-state characteristics of a system than the high frequency response is lost, since we have a "malevolent actor" that is responding at an extremely high frequency. For linear system design, you can cover this up with kludges that allow you to restore the equivalence to standard robust control design equations, but there was no theoretical justification for these kludges from within the game-theoretic framework.

Given mainstream economists' love of game theory, it was perhaps not surprising that Hansen and Sargent chose that formalism. You end up with a new variant of DSGE macro, but still without any true model uncertainty. It may be better than the optimal control based DSGE macro, but that's setting the bar very low. I was thinking of writing a review of the book, but I would have been forced to be all academic-y. The resulting review would have been painful to write (and read). It may be that there are more redeeming features to the approach than I saw in my first reading of the book, but I remain skeptical.

(c) Brian Romanchuk 2017


  1. The link below is to a 125-page PhD thesis. It opens as a PDF in Firefox.

    Quote page 4-5: "I built a cybernetic model, that is a world where agents are represented by closed-loop controllers fighting one another. But it isn’t obvious why I would choose PID controllers in particular as a way to model trial and error. ... The main reason is the argument put forward in [Bennett, 1993] (also cited in [Hawkins et al., 2014]) that “in the absence of any knowledge of the process to be controlled, the PID controller is the best form of controller.” This matches very well with the purpose of my agents. Rather than thinking cybernetically top-down with the Gosplan being the well-informed well-meaning ultimate controller I wanted bottom up equilibrium coming with no centralized information. PID controllers are ideal for this."

    1. All I can say is that I think it was best for all concerned that I was not brought in as an external examiner for that thesis.

  2. "We know with certainty that accounting identities will always hold."

    I cannot confidently support this statement. I think an event of borrowing-from-one's-self (as when government creates money) bypasses accounting identities to become an event driving perturbation. I think of this event as a throwing of an off-on switch to disengage/engage some part of the control feedback loop.

    1. Accounting identities imply that the accounting is done correctly; there is no behavioural content.

      Money creation is captured by accounting identities, modulo measurement errors.


  4. Interesting read. Can you pinpoint any good resources that explore or mention this nonequivalence of the system theoretic and game theory approaches for nonlinear systems?

    1. The only reference I could offer is
      B.G. Romanchuk, Unpublished comments, 1994.

      It has been a long time since I looked at this. My argument at the time would have run as follows: the game-theoretic interpretation of robust control can be made to work on linear systems. It ends up with the same algebraic Riccati equations as the other approaches (stochastic, model uncertainty). This means that you could end up with the same designs starting from different interpretations. The game-theoretic approach replaces the uncertain linear model with an "adversary" actor (not sure what they called it).

      This breaks down for nonlinear models. You cannot replace an uncertain linear model with a game-theoretic actor and get the same model behaviour. You can no longer use linearity to decompose the effect of the "adversary" from other system dynamics.

      I stopped looking at the game theoretic stuff in the early 1990s, and I have no idea whether they attempted to deal with nonlinear systems. I saw no indication that there was enough interest in that approach for anyone to bother.

