Sunday, March 17, 2019

Understanding DSGE Macro Models

Dynamic Stochastic General Equilibrium (DSGE) articles have attracted a great deal of attention in economic squabbling. In my view, the existing discussions of DSGE models -- mine included -- have been confusing and/or misleading. Properly understood, DSGE macro models are an attempt by neoclassical economists to weld together two standard optimisation problems, but with the defect that the neoclassicals lacked the notation to state the resulting problem clearly. This lack of clarity has made the debates about them unintelligible. Once we clean up notation, these models have a variety of obvious limitations, and it is unclear whether they have any advantages over stock-flow consistent models.

Update 2019-03-18: I finally tracked down an article that is useful for my purposes (one day after publishing this, of course). This working paper from Bank of England researchers acts as an exception to some of my comments on the state of the literature. At present, I do not think I need to retract many comments (although a speculative one was heavily qualified), as I noted multiple times that I was basing my comments on the literature I examined. That said, the article conforms to my description of the "reaction function interpretation" (explained within the text). There is a technical update added below with further details.

TL;DR

This article is unusual (and self-indulgent) in that it consists of a long string of bullet points that do not attempt to follow a narrative arc. My defence is simple: given the confusing state of these models, any discussion of them is likewise confusing. There is a certain amount of repetition of concepts. Readers are assumed to have some mathematical knowledge; my text here is terse, and I do not offer much in the way of plain English explanation. Many of the technical points would turn into an entire article if I wrote in my usual style, in which I offer lengthy explanations of concepts. Rather than write a few dozen articles, I just dumped the entire explanation into a single place.

The following are what I see as the key points that may be of interest to a casual reader. The main text contains other technical points that may or may not be more important, but one would need to be at least slightly familiar with the model structure to follow them.
  • I am only discussing DSGE models in which the macro outcome is generated by two distinct optimising agents, such as a representative household and a representative firm.
  • I argue that any field of applied mathematics needs to result in tractable computational models, and that mathematical discussions can be related to operations on sets. The DSGE literature is characterised by long-running theoretical disputes that are debated in text, which I argue is the result of the discipline of computation/set theory being lost.
  • I argue that the model descriptions used in the DSGE literature are not properly laid out from the perspective of set notation. The DSGE authors are attempting to join (at least) two optimisation problems. There seem to be two ways to formalise their attempts. They are either defining reaction functions for all agents in the model, or else searching over a set of exogenous prices to find a set of prices that lead to coherent model solutions. Both interpretations have difficulties that become quite obvious once we attempt to formalise them.
  • I believe that textual descriptions of these models are misleading, from proponents as well as critics.
  • I have only sampled the DSGE macro literature; my comments reflect the texts read. One common assertion I have encountered is that the issues outlined were dealt with in some other works. My contention is that this defence is not adequate; the texts I read should have explicitly cited such critical results.
  • Both formalisations of the DSGE model have very little that characterises the final model solution. For example, since the model structures are not formally specified, we have no way to relate them to any results on the existence and uniqueness of solutions. Furthermore, in the absence of a clear formalisation, there is no way to validate assertions about the mathematics.
  • The probabilistic treatment used cannot be interpreted as uncertainty. Rather, these appear to be deterministic models that have a randomisation of parameters ahead of model solution.
  • From a practical perspective, the so-called log-linearisations are the de facto models that are being worked with. Linear models have very strong features, such as the ease of fitting them to data. However, linear models will obviously not capture things like accounting constraints, nor take into account nonlinear effects. Furthermore, given the intractability of the nonlinear DSGE model, we have no reason to believe claims that the "log-linearisation" solution has a strong relationship to a solution of the nonlinear model.

Main Text

There is no doubt that many of my statements will be disputed by neoclassical economists. I accept that one or more bullet points may need to be more heavily qualified, and the reader should keep that disclaimer in mind. That said, the overall thrust of the logic herein will not be deflected by adding a few qualifications to statements. I do make some speculative or editorial comments, which are generally qualified as such.

I would caution readers of a heterodox leaning who take a more literary approach to economics: I am largely confining myself to statements about DSGE mathematics. My comments may or may not dovetail with previous heterodox critiques. In my view, many existing heterodox critiques may be misleading, as they were based on neoclassical textual assertions about the models. In fact, my own analysis of the models was greatly delayed by accepting textual descriptions of what the models represent, which can be quite different from what they actually are. For example, the plausibility of the assumptions behind a model that is not in fact solved is a red herring.

The points made are only loosely in a logical order, and they generally do not follow from each other. I only number the points to make them easier to refer to if someone ever wishes to discuss them.
  1. (Editorial) In my view, the mantra "all models are wrong" is not taken seriously by the economists that appeal to it. The more accurate assessment is that "all macro models are terrible," an argument that is the subtext of my upcoming book on recessions. Although I am dunking on DSGE macro models herein, I have no reservations about dunking on any mathematical macro model, including the ones in my own book on SFC models. Anyone who attempts to defend DSGE macro models by demanding that I produce models that are "better" than DSGE macro models is grasping at straws.
  2. (Editorial) If we look at the neoclassical literature, there are a number of theoretical disputes -- e.g., the Fiscal Theory of the Price Level, "neo-Fisherism" -- that are discussed in extremely long textual arguments, and drowning in citations of "who said what." This is much closer to the post-Keynesian academic debate of "what did the General Theory really mean?" than one would expect from a field associated with applied mathematics. This is because of the next point.
  3. Most of what I refer to as "applied mathematics" is the analysis of models. A model is a creature of set theory. It is an entity that bundles a set of variables -- which are elements of sets, such as the set of real numbers, or sequences of real numbers -- and a set of logical relations that relate the variables, which we will term constraints. What sets a model apart from a generic mathematical description of set objects is that we partition the variables into two sets, which I term the model input and the model solution. When we solve the model, we fix the elements of the set of inputs, apply the model constraints, and determine what the set of solutions is. One serious concern is whether the solution exists and is unique, as we wish to relate the model to the real world. It is straightforward that a model whose solution set is empty offers no insight, while a set-valued solution (that is, non-unique) is hard to relate to real world data, where we typically have a single measured value for observations. (A toy sketch of this definition appears after this list.)
  4. Although some pure mathematicians attempt to write texts that are strictly symbolic, such papers are unusable to most people. Applied mathematics uses textual short cuts to aid comprehension. However, the standard for rigorous applied mathematics is that we can ultimately relate all textual arguments back to statements about sets: either a reference to a set, elements of sets, or logical statements about sets and set elements.
  5. (Editorial) It is the author's position that a sub-field of applied mathematics needs to concern itself with models that have solutions that can be computed numerically in a reasonable time. Based on his experience in the area of nonlinear control theory, a field that concerns itself with the derivation of properties of non-computable solutions degenerates, and cannot be applied to the real world. Pure mathematicians are free to devote their attention to such mathematical objects, as publications are judged on their elegance, not their practical utility. When we turn to DSGE models, although the author has read assertions about the numerical computation of solutions, actual experience with the technique mainly involves reading neoclassical authors making textual assertions about the models; for example, "all else equal" arguments. The issues of ambiguity around notation and model structure discussed in other points raise question marks about any numerical techniques based on the existing literature: what exactly are they solving?
  6. I am not holding DSGE macro expositions to some impossible standard. I was heavily in contact with rigorous pure mathematicians when I was in academia, and the expectation was that any high quality published mathematical paper in control theory would have at least one non-serious mathematical error (typically a typo) per page. (Pure mathematics journals may have higher standards.) Most fields of applied mathematics rely heavily on short cuts in notation (column inches are precious), but a competent mathematician will be able to fill in the gaps. For example, very few stock-flow consistent papers will specify that government consumption is an element of the set of positive real numbers, but only a complete dimwit would expect that government consumption would be an imaginary number. Writing applied mathematics is an art: how much mathematical detail is really needed, versus how much complicated-looking content is added for "prestige"?
  7. If we look at the logical order of progression, what I refer to as "DSGE macro models" (defined below) are constructed around the concepts of single agent models. These models are based on straightforward optimisation problems for a single agent. For example, we could define a household optimisation model (denoted $M_h$) or a firm's profit maximising model (denoted $M_f$). These are models that fit under the general category of optimal control theory, with an added characterisation that they feature vectors of price variables that are set exogenously. From what I have seen of such models, they are normally set up in a perfectly adequate fashion -- since they are explicitly based on results coming from 1960s era optimal control theory.
  8. As a historical aside, optimal control theory was abandoned (except as a mathematical curiosity) by the control theory community by the time I started my doctorate in the 1990s. (Problems were apparent as soon as the techniques were applied, but it took time -- and disastrous engineering errors -- to pin down why optimal control techniques failed.) One could summarise the argument as follows: optimal control techniques are a disastrous guide to decision-making in an environment with any model uncertainty.
  9. The concerns I have are with what I term DSGE macro models. As is clear in the expositions I have seen (at least one hundred articles over more than a decade, and a half dozen standard texts), they represent a desire to join at least two single agent optimisation problems into a single problem. (For example, the model in Chapter 2 of Gali's text is generated by welding together a household optimisation problem $M_h$, and a firm optimisation problem $M_f$.) Please note that the term "dynamic stochastic general equilibrium model" is more generic, and so there are DSGE models that lie outside the above set of models. My comments here are most likely not applicable to those other models.
  10. What I have termed the "notation problem" has appeared in every single exposition of DSGE macro models I have encountered. There is an attempt to develop a macro model, denoted $M_m$. The generic problem is that the text refers to two single agent models (such as $M_h, M_f$), and then enforces an equality condition for variables between the two problems. For example, the number of hours worked (commonly denoted n) in $M_h$ has to equal the hours worked in $M_f$. The author argues that under conventional applied mathematics standards, this implies that the mathematical descriptions given for $M_h$ and $M_f$ are actually components of $M_m$, and so we can apply standard mathematical operations to those objects. In particular, if two set elements are equal, we can freely substitute them for each other in analysing the constraints. (For example, if we are faced with the constraints {x=y, z=y+2}, we can state that z = x + 2.) For the models $M_m$ based on $M_h, M_f$, the implication is that the production function in $M_f$ constrains output, and can be substituted into the model $M_h$. However, this substitution -- which is implied by the usual allowed operations on sets -- is directly contradicted by the mathematical exposition of model $M_m$ found in the text.
  11. To clarify my notation, I am using a two sector model with a household sector and a business sector as the example of a non-trivial macro model throughout the text. There are many other variations, such as splitting the business sector into sub-sectors. Referring to the macro models as having an arbitrary set of sub-problems would be strictly necessary to capture all cases, but would not add any value to the discussion.
  12. (Argument from authority) The author has commonly encountered statements that the issues I have raised have been dealt with somewhere in the literature. To a limited extent, this defence can be used in applied mathematics. For example, very few people would dig through axioms on algebra to justify replacing $2x + x$ with $3x$. (I am unsure even where to find the appropriate axiom to justify that substitution.) That said, in the set of articles analysed by the author, key steps in proofs were always skipped over. (If those steps had been supplied, I would not have been forced to guess as to how the notation is to be interpreted.) In the author's opinion, every published text that was read could have been legitimately sent back for a re-write by a peer reviewer, as the resulting proofs require large leaps of faith. (To repeat, the possibility remains of a clean exposition in an unread text.) As for the existence of a proof being done correctly in article A, that has no bearing on the results of article B, unless B explicitly references A in the appropriate place. Readers cannot be expected to absorb the entire literature produced over fifty years to fill in the gaps of the description of a mathematical model -- which is, after all, a series of statements about sets.
  13. (Speculative) There appear to be two main ways to interpret the expositions of DSGE models. The first is more complex, and best captures the spirit of the contents of the papers. In the first interpretation, the DSGE author lays out the single agent optimisation problems, and then determines a set of first order conditions, which act as a reaction function. The reaction functions are then (somehow) stitched together. The second interpretation is simpler: the DSGE authors want to set up (at least) two optimisation problems that are run separately, and then pin down a set of exogenous prices that leads to a coherent solution. Both interpretations have problems, and these are discussed in the following points in a more formal fashion.
  14. (Editorial/confession) Although many people are confused by DSGE mathematics, the author's difficulties with the exposition seemed to be unusual. Many people (including the DSGE model author community) did not see any difficulties. The author guesses that other readers are imposing a notion of logical time on the model exposition: first we have one model, then we have another model, then we join them. Each step occurs at a different "time," and so there is a compartmentalisation that prevents the sort of substitutions that break the model (e.g., inserting the production function into the household problem). However, set entities are timeless, and so such a notion of logical time technically makes no sense.
  15. (Speculative) We can try to define $M_m$ in terms of constraints (reaction functions) as follows. (Since the literature itself does not lay out the structure of the models, parts of this description are unavoidably vague.) The researcher R lays out model $M_h$, and finds a set of constraints on the solution, denoted $F^R_h$. (It is denoted $F^R_h$ as the constraints of interest are first order conditions, although that is a misnomer with respect to $M_m$.) The model $M_f$ is then laid out, and then the constraints $F^R_f$ are derived. Then a model $M_m$ is postulated. Model $M_m$ has a set of variables that have labels that are the union of the labels of the variables in $M_h$ and $M_f$, and the constraints on the variables are the union of $F^R_h, F^R_f$. The constraints are a set of statements that are applied to the new set of variables, with the labels in the new problem matching the old labels. The addition of the label R to the F variables is not cosmetic: since there does not appear to be a systematic way of specifying the set of all "first order conditions," the set of constraints used in the model is an arbitrary choice of the researcher. The implication is that two distinct researchers -- $R_1, R_2$ -- may choose two distinct sets of "first order conditions", leading to two different macro models, even if based on the same underlying single agent optimisation problems. However, in the DSGE macro expositions seen by the author, the full specification of which constraints are chosen for model $M_m$ is never given as a single logical unit; the reader needs to guess which equations are to be incorporated into $M_m$.
  16. There is no doubt that the previous explanation is not completely satisfactory, and needs to be cleaned up. The key is that we need to firmly set each variable within its proper model, and we cannot be confused by the fact that the variables share the same label: they are distinct mathematical objects. In fact, we need to define $M_m$ without any reference to $M_h, M_f$; it is a stand-alone mathematical object.
  17. Since the arbitrary nature of first order conditions is probably not easily understood, we step back and see what non-arbitrary conditions are. Take any standard optimisation problem $M_o$. It will have an objective function and constraints. These can easily be written out in standard mathematical notation. The logical operations imply a set of solutions, denoted $\{x^*\}$. The set $\{x^*\}$ is well defined, and can be characterised by standard theorems: is it non-empty, unique, etc. We can then make any number of statements about $\{x^*\}$, such as first order conditions. That is, we can contrast the well-defined nature of the rules defining $M_o$ versus the infinite number of true statements we can make about the solution. Furthermore, the well-defined nature of the definition of $M_o$ is what allows the possibility of finding the solution $\{x^*\}$ numerically: we use the rules to derive true statements about the set $\{x^*\}$ so that we can determine a procedure under which variables defined in a numerical algorithm will converge to the true solution in some fashion. The simplest example of a condition that we can derive is as follows. If u is the objective function that is maximised, then $u(x^*) > u(x) \; \forall x^* \in \{x^*\}, x \notin \{x^*\}$. (That statement can be derived solely by applying the definition of optimality.) This is very different from trying to determine the set of variables that are consistent with an arbitrary set of mathematical statements: without knowledge about the properties of the set of solutions, we cannot hope to derive an algorithm that converges to elements of that set. (A worked toy example appears after this list.)
  18. We now turn to the possibility of defining the problem as simultaneously solved optimisations based on a common set of exogenous prices. We first need some notation.
  19. The first piece of notation is the notion of the exogenous price vector, which is denoted P. For standard household/firm problems, there would be three time series in P: goods prices (p), wages (w), and the discount factors in the yield curve (Q). The last element maps to the forward path of interest rates, and poses difficulties that will be discussed in later points.
  20. We need the notion of an optimisation problem operator, O. For a single agent optimisation problem v, we denote the optimal solution $\{x^*\}$, where each $x^*$ (if it exists) is the vector of all time series in the model. The overall problem solution is characterised by $O_v P = \{x^*\}$; that is, the operator $O_v$ maps the exogenous price vector to the set of solution time series. Since we need to select particular time series, we define a set of operators $O_v^y$, which map the exogenous price vector to the time series $y$ within $\{x^*\}$.
  21. (Speculative) We can now define $M_m$ as follows. Let $C = \{c_i\}$ be a set of variables that are to be part of market clearing (e.g., number of hours worked, output, etc.). Then $M_m: \{P : O_h^{c_i} P = O_f^{c_i} P \; \forall c_i \in C\}$. That is, this is just a statement about existence: the solution to the macro problem is the set of exogenous prices for which the single-agent optimisation operators lead to equal values for the variables to be cleared. (Note: the market clearing operation would need to be made slightly more complex to take into account things like government purchases. A toy clearing computation appears after this list.)
  22. (Editorial) Although this characterisation appears much cleaner, it in no way captures the spirit of the mathematical analysis seen in the literature sampled. Very simply, the author has invented the operator notation, and cannot point to a single example of overlap with the actual mathematical exposition. Although the notation is somewhat awkward, it is so straightforward that the author finds it very hard to understand why it was not explicitly stated. For example, the entire derivation of first order conditions -- which represents the bulk of column inches in most expositions -- is entirely irrelevant to the model definition. The only explanation that can be given is that the DSGE authors went out of their way to avoid the word "existence" in describing the solutions of the models.
  23. (Speculative) A slightly stronger variant of the "price clearing" characterisation is that the solution to $M_m$ is the limit of some iterative procedure (tâtonnement). (I believe that this was the historical interpretation.) This has a slight advantage, as there is a stronger definition of the set of solutions. Of course, this variant would require the search procedure to be explicitly incorporated into the model definition -- which is not done. Furthermore, we have almost no useful description of the limit of an algorithm that only converges to its solution after an infinite number of iterations. Note that this description applies to any numerical attempts to solve the nonlinear problem: the actual model is the solution algorithm, and we would need to determine what it is actually converging towards.
  24. The main mathematical defect of the "exogenous price vector" interpretation is rather unexpected: many of the models examined by the author were defective, as interest rate determination is not properly analysed within the nonlinear model. (Disclaimer: the author only realised this issue very recently, and has not re-examined the literature to see how widespread it is.) Overlooking interest rate formation in models that are used to assert the supremacy of interest rate policy is not what one might expect. The problem is straightforward: forward interest rates (discount prices) are a critical component of the exogenous price vector facing the household sector. When we calculate the optimal solution set, we need to fix those prices. The following points describe the issue.
  25. We would not have problems if interest rates were solely a function of other exogenous prices. For example, suppose that $Q = f(p,w)$; that is, the central bank reaction function ($f$) is only a function of the expected path of wages and prices. In that case, the set of solutions over which we search is $(p,w)$; $Q$ is pinned down by that choice, and we can then solve the problem. However, not all central bank reaction functions depend solely on expected price trends.
  26. (Speculative) The first way to characterise the solution for a more complex central bank reaction function is to search over all possible $P$, calculate the clearing solutions, and reject solutions for which the $Q$ vector does not match the chosen reaction function. The problem with this is straightforward: there does not appear to be a way to characterise that set. The central bank does not have a supply or demand function for bonds in the same fashion as the other agents: it has a mysterious veto property to eliminate solutions that are not compatible with the reaction function. It is very unclear that we can postulate any real world mechanism that gives a central bank that power. In any event, the set of solutions appears to be completely novel, depending on the choice of reaction function, and so the author does not see any way to apply any existing theories to questions of existence and uniqueness of solutions.
  27. (Speculative) The second approach for a more complex central bank reaction function would be to embed the reaction function into the household optimisation problem. However, this is completely incompatible with the model expositions in the literature, and it is very unclear that the single agent problem fits within known optimisation problems. The concern is that the reaction function would be embedded in the household budget constraint, and that would need to be taken into account when calculating the solution. The resulting problem most likely falls outside the scope of existing optimal control techniques.
  28. We return to the first characterisation of the solution, which defines the model as a set of constraints. One advantage of this characterisation is that we can throw the central bank reaction function into the list of constraints, and so it does not pose any added difficulties (since we already have almost no characterisation of the solution anyway). 
  29. There is a straightforward interpretation of "first order conditions" in the macro model: they are reaction functions. (The remaining mathematical constraints are either accounting identities, or the "laws of nature," such as the production function.) Most people who have looked at these models eventually come to understand that the idea is that the central bank does not "set interest rates"; rather, it specifies a reaction function, for example a rule based on the output gap and inflation expectations. This means we cannot think of the central bank setting interest rates as an exogenous time series; rather, the level of interest rates is set by the state of the system. The same logic applies to all actors in the model. Although this interpretation makes the models easier to understand, it raises some issues when we run into the concepts of randomness, or uncertainty. This will be discussed in a later point.
  30. Under the reaction function interpretation, the resulting model $M_m$ does not lie in the set of standard mathematical models for which we can apply existing theorems. For example, the author has never seen an actual validation of the existence and uniqueness of solutions; at most, appeals to various conditions, or perhaps a suggestion that a fixed point theorem might apply. Since each model appears to be somewhat novel, we cannot be sure that they meet the stated conditions for existence and uniqueness theorems. The use of a fixed point theorem raises an obvious problem: these models refer to infinite sequences that are expected to grow in an exponential fashion. The difference between any two such non-identical sequences will have an unbounded norm, for every standard definition of a norm of a sequence. How can we validate that we have an operator norm less than one, which is a standard requirement for most fixed point theorems? (A numerical illustration appears after this list.)
  31. Under the reaction function interpretation, any solution of the macro model (if one indeed exists) is not the result of an optimisation within the model $M_m$. The constraints $F^R_h, F^R_f$ are not in any sense first order conditions of an optimisation within $M_m$, since they ignore the added constraints implied by the other model (and there is no optimisation problem within $M_m$). As a result, we cannot summarise the models as saying that solutions reflect agents optimising objective functions with respect to the closed macroeconomic model; rather, they are following heuristics that are very likely to result in values for objective functions that are lower ("sub-optimal") than what could be achieved by ignoring the first order conditions that were derived from $M_h, M_f$ (but still respecting all accounting constraints, as well as the "laws of nature," such as the production function). That is, the usual textual interpretation of DSGE models -- that the agents are acting in a fashion to optimise objective functions, and so cannot be "fooled" by policymakers -- is literally incorrect. They are following heuristics, and can be fooled.
  32. An alternative phrasing of the previous problem is that the behaviour of agents is the result of reaction functions. We have no reason to believe that a reaction function that leads to "optimal" behaviour in a single agent model is in any sense "optimal" in a model where its behavioural decisions interact with those of other agents. Truly "optimising" behaviour would require knowledge of the constraints of the actual model it is embedded in: $M_m$.
  33. There is no way to interpret the model results in terms of how markets operate in the real world. In order to solve the model at time t, markets need to "clear" at all time points t, t+1, t+2, .... The "market clearing" cannot be imagined purely in terms of expected prices: the amount supplied needs to match the amount demanded. This implies that forward transactions have to be locked in for all time. This is quite different from the similar-looking arbitrage-free models used in option and fixed income pricing. In an arbitrage-free model, we are testing whether an investment with zero net size has a positive expected value; the position size may be arbitrarily small. For DSGE models (without linear production and household objective functions), the marginal value of changes to forward transactions depends upon the size of the forward purchase, and so the market clearing cannot be thought of in terms of arbitrarily small transactions. In the real world, we are not realising contracts locked in just after the Big Bang, so this clearly is not a description of reality.
  34. We need to keep in mind my argument that all decisions made by model actors are in the form of reaction functions. We cannot imagine model variables as being the result of thinking by actors. This means that we cannot imagine actors setting prices; in particular there is no thought process behind price determination. Instead, prices are arrived at automatically; if we insist on anthropomorphic thinking, there is an omniscient market maker that knows all reaction functions, and sets prices in a fashion to clear markets. 
  35. As a related point, under the reaction function interpretation, we cannot interpret the equilibrium as the result of coinciding "rational" views by agents, without taking an unusual definition of "rational." If equilibrium were the result of converging forecasts, the model agents would need to know the structure of the macro model $M_m$, as they need to know what the other agents' reaction functions and constraints are. However, if the agent knows those constraints, those constraints should have been taken into account when determining the first order conditions (reaction functions). If we want to characterise market clearing as the result of coinciding forecasts, it has to be phrased as: agents come up with coincident plans, under the assumption that all agents are following arbitrary reaction functions. Since there is no notion of optimisation within $M_m$, it definitely cannot be described as "optimising behaviour."
  36. (Speculative - Updated) The solution to these problems under the reaction function interpretation is purely the result of the intersection of two sets of constraints. We have no other characterisation of what describes the solution. It may be that the only way to determine whether a point is in the set of solutions consistent with constraints is brute force numerical tests. If that indeed is the case, we are dealing with the discussion of solution sets that we have no realistic way to characterise. Update: the struck out comments were perhaps only applicable to many of the models that were "solved" via appeals to linearisation. Some models can obviously be solved numerically (see update below); the question is to what extent this is a general property.
  37. It might be possible to fuse the two interpretations: the true problem is the "exogenous price" definition, but then we determine constraints (first order conditions). However, those first order conditions are mathematical trivia, as they do not offer any mechanism to determine whether a solution exists, and is unique. Characterising something that may not exist is a waste of time.
  38. The time axis that appears in the models is equivalent to "forward time" in an arbitrage-free yield curve model. (Link to earlier article.) Some issues around this definition of the time axis appear in points below.
  39. (Speculative) There does not appear to be a practical way to fit these models to observed data. Under the assumption that we are not just realising contractual obligations locked in at "time zero", we have to accept that observed data would have to be the result of re-running a DSGE model at each time step, with variables set to historically realised values. However, we cannot observe any "expected" future values, only the result at the first time point. For example, one could try to argue that we can derive expected forward rates based on the spot yield curve, e.g., infer the short rate expected at t+1 at time t. Unfortunately, we do not observe the expected t+1 rate; rather, we see the market-traded forward rate at time t. We need to model the market in forward rates (or two-period bonds) to determine the market-clearing observed forward rate, which can diverge from the expected value (that is, there can be a term premium). Determining what factors caused the first period solution to change appears intractable, and is distinct from determining what causes expected values to change.
  40. (Speculative) The treatment of stochastic variables cannot be interpreted in terms of uncertainty in decision-making. The "decision rules" derived from the single agent problems are based on particular realisations of the stochastic variable. In order to find the solution in the single agent problems, we need to "roll the dice" to determine the exact levels of variables for all time first, then determine the optimal choices. That is, we have a deterministic model, but the parameter values are randomised slightly before solving. 
  41. An alternative way of looking at the previous point is to note that when we solve the model at time t, all future actor behaviour results from fixed reaction functions. Those reaction functions at time t depend upon the realisation of random variables in the future. For example, we cannot think of a "shock" to a behavioural parameter within the model happening as the result of the passage of calendar time, rather it is a projected change to the reaction function. 
  42. Since the macro model appears intractable, we have almost no way to determine the probability distributions of variables. In the absence of probability distributions, casting constraints F in terms of expected values (as is sometimes done with the household budget constraint) provides almost no information about solutions.
  43. If we look at standard two sector models with a firm and household, we see that the fundamental constraints (that is, constraints that are not determined by behavioural choices) are accounting constraints (including compound interest), and one physical law of nature: the production function. Since accounting constraints will always hold, one can argue that there is really only one source of fundamental model uncertainty: the production function. (Obviously, if we add more elements to the models, more sources of natural uncertainty may appear.) The business sector reaction function is used to determine market-clearing conditions at time t, and that reaction function at time t depends upon the future values of the parameters in the production function ("productivity"). Therefore, the reaction function at time t faces no uncertainty regarding the future values of productivity. That is, we cannot interpret the model results as being the result of decision making under uncertainty.
  44. As a concrete example of the previous point, assume we are solving a standard two sector real business cycle model at time t=0. The value of the productivity parameter in the production function starts out as a constant, but features a random jump to a new value at t=10. The productivity parameter shows up in the reaction function of the business sector, and so determines the acceptable ratio of wages to prices in the market-clearing price vector, as well as the demand for worker hours. We cannot determine the trajectory of the consumption function without access to productivity for all times. As a result, any model that can be computed would require knowledge of the productivity parameter at time 10 in order to solve the problem. The implication is that we cannot interpret the change at time 10 as being a random event that happens at time 10; rather, it is an event that happens with certainty in a scenario that has the probability associated with the productivity parameter reaching that value. It goes without saying that we could never compute the implied models in finite time if the random values have probability distributions that are non-zero on sets of non-zero measure. (A sketch of this draw-first, solve-second structure appears after this list.)
  45. (Partly Speculative) In the absence of characterisations of the solutions to $M_m$, we cannot "linearise" the resulting model. It is very unclear that the "linearisations" used in empirical research correspond to any particular nonlinear model, including the macro DSGE model $M_m$. There are an infinite number of nonlinear models that give rise to the same linearisation.
  46. (Speculative) There does not appear to be a way to model government default in a truly uncertain fashion and generate behaviour that can be compared to observed data. If a government defaults on its debt at time t+N, there is a major disruption to the household budget constraint, and behaviour at the initial period t. In order to get market clearing for all time, the realisation of the default random variable at time t+N has to be crystallised in the solution at time t. If we interpret historically observed data as being generated by a sequence of DSGE models, we will jump from a model where default is avoided with certainty, to a model where a default is known with certainty. If that were a realistic description of behaviour, we would not need such models -- we would already know exactly when each government will default in the future.
  47. To continue the previous point, a default event would be very easily side-stepped under standard assumptions. Private sector agents would revert to just holding money instead of the bonds that are to default in the period of default. A default event cannot surprise households, as that would be incompatible with the derived reaction function constraints associated with the household budget constraint.
  48. Central bank reaction functions are needed to close the model. In some treatments (many? all?) the reaction function is only specified in terms of the log-linear model. It is unclear that the model can be linearised in the absence of the central bank reaction function for the original model.
  49. (Partly speculative) Models that contain a business sector ($M_f$) with a nonlinear production function appear not to respect strong notions of stock-flow consistency (described later). The author has not seen a budget constraint for a business sector in the macro models examined. Superficially, that opens the models to critiques about stock-flow consistency (since financial flows do not add up). This omission can be excused on the grounds that the budget constraint has no behavioural implications for the business sector, and financial asset holdings for the business sector can be inferred from the state of the government and household sectors. However, this does appear problematic for textual descriptions of the models, which sometimes equate household bond holdings to government bonds outstanding. Nevertheless, there are concerns with a strong notion of stock-flow consistency: are there black holes in the model? (As described by Tobin per [Lavoie2014, p. 264].) Under the author's interpretation of these models, there is no mechanism to reflux pure profits from the business sector to the household sector within the macro model. (Pure profits are profits above the cost of the rental of capital.) If the model is nonlinear, pure profits are either strictly positive, or zero in the case of a trivial solution with zero production. As such, business sector financial assets will be growing at a strictly greater rate than the compounded interest rate on any finite interval for a non-zero solution. (A short arithmetic check appears after this list.)
  50. The standard technique of indexing firms to elements of the real interval [0,1] (as in Calvo pricing) cannot be interpreted as taking the limit of a finite number of firms. It is a standard result of undergraduate real analysis that the interval [0,1] cannot be the limit of a sequence of separate points.
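
To make point 3 concrete, the following is a minimal Python sketch of the "inputs + constraints = solution set" view of a model. The toy model is hypothetical (my own invention, deliberately SFC-flavoured), not taken from any text: the input is government consumption (g), and two constraints pin down the solution.

    # Toy model: input g; constraints Y = C + g (accounting) and C = 0.8*Y
    # (behavioural). Fixing the input and applying the constraints
    # determines the solution set (here, a unique (Y, C) pair).
    def solve_toy_model(g):
        Y = g / (1.0 - 0.8)  # substitute C = 0.8*Y into Y = C + g
        C = 0.8 * Y
        return {"Y": Y, "C": C}

    print(solve_toy_model(20.0))  # {'Y': 100.0, 'C': 80.0}

For this toy, the solution exists and is unique for any input; the complaint in the points above is that DSGE macro expositions never put the reader in a position to make the analogous claim about $M_m$.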
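
As an illustration of point 17, the following sketch (using the sympy library, with a toy objective of my own choosing) separates the definition of an optimisation problem from statements that are derived about its solution set.

    # Toy problem: maximise u(x) = log(x) - x over x > 0.
    import sympy as sp

    x = sp.symbols("x", positive=True)
    u = sp.log(x) - x

    # The model definition is the objective plus the constraint set.
    # A first order condition is a statement *derived about* {x*}:
    foc = sp.Eq(sp.diff(u, x), 0)  # 1/x - 1 = 0
    print(sp.solve(foc, x))        # [1], so {x*} = {1}

    # One of the infinitely many other true statements about {x*}:
    # u(x*) > u(x) for any x outside the solution set.
    print(u.subs(x, 1) > u.subs(x, 2))  # True

The first order condition is one convenient true statement among many; the well-defined object is the problem itself, not any particular list of derived conditions.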
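
The "exogenous price" interpretation of point 21 can be illustrated with a toy clearing computation. All functional forms and parameters below are hypothetical placeholders chosen so that each single-agent problem has a closed-form solution; nothing here is taken from a published DSGE paper.

    # Each agent solves its own problem taking the price (here, the wage w)
    # as given; the "macro solution" is the price at which the separately
    # computed quantities agree.
    from scipy.optimize import brentq

    ALPHA, A = 0.5, 1.0  # hypothetical production parameters: Y = A * n**ALPHA

    def hours_supplied(w):
        # O_h^n(P): household maximises w*n - n**2/2, giving n = w.
        return w

    def hours_demanded(w):
        # O_f^n(P): firm maximises A*n**ALPHA - w*n.
        return (w / (ALPHA * A)) ** (1.0 / (ALPHA - 1.0))

    # M_m: the wage at which O_h^n P = O_f^n P.
    w_clear = brentq(lambda w: hours_supplied(w) - hours_demanded(w), 0.1, 10.0)
    print(w_clear)  # roughly 0.63

Note that the computation is a statement about existence: the root-finder either converges to a clearing price or it does not, and nothing in the single-agent first order conditions tells us in advance which outcome to expect.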
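
The norm problem flagged in point 30 is easy to demonstrate numerically: two exponentially growing sequences drift apart without bound, so the distance between them is infinite under any standard sequence norm. The growth rates below are arbitrary illustrative numbers.

    # The gap between two exponentially growing sequences never settles down.
    for t in range(0, 201, 40):
        print(t, abs(1.03 ** t - 1.02 ** t))
    # The gap grows without bound; a contraction-mapping argument would need
    # some other norm in which distances stay finite (e.g., a discounted
    # norm), and the expositions do not make such a choice explicit.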
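
Points 40 and 44 describe a draw-first, solve-second structure, which the following sketch makes explicit. The two-point distribution and the "solution map" are hypothetical stand-ins for illustration only.

    # "Stochastic" structure: realise the whole parameter path up front,
    # then solve a deterministic model that sees the entire path.
    import random

    T, JUMP_DATE = 20, 10

    def draw_productivity_path():
        jump = random.choice([0.9, 1.1])  # roll the dice once, at the start
        return [1.0 if t < JUMP_DATE else jump for t in range(T)]

    def solve_deterministic_model(a_path):
        # Stand-in for the real solution operator: behaviour at t=0 depends
        # on the entire realised path, including the value at t=10.
        return 0.5 * sum(a_path) / len(a_path)  # "consumption at time 0"

    a = draw_productivity_path()
    print(solve_deterministic_model(a))  # differs with the t=10 realisation

There is no sense in which the solver faces uncertainty: the "shock" at t=10 is handed to it before anything is computed.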
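
Finally, the pure profits concern of point 49 can be verified with a few lines of arithmetic. The toy firm below is labour-only (no capital), with arbitrary parameter values; the point is that a decreasing-returns production function plus the firm's own first order condition leaves output strictly above the wage bill.

    # Decreasing returns: Y = A * N**ALPHA with ALPHA < 1.
    ALPHA, A, N = 0.5, 1.0, 4.0
    w = ALPHA * A * N ** (ALPHA - 1.0)  # firm FOC: wage = marginal product
    output = A * N ** ALPHA
    print(output, w * N, output - w * N)  # 2.0 1.0 1.0, so pure profit > 0

Unless some mechanism refluxes that residual to the household sector, the financial stocks cannot add up, which is the "black hole" concern cited in point 49.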

Technical Update (2019-03-18 - Under Construction)

The comments above were based on my survey of the literature, and as noted in the update at the top of the article, I managed to find an article with information that previously eluded me. The article "Monetary financing with interest-bearing money," by Richard Harrison and Ryland Thomas, has a feature that I did not encounter elsewhere in my examination of the literature: a comprehensive list of the nonlinear constraints on the macro model. Please note that I have only given the article a cursory examination, but I believe my characterisations capture what is happening mathematically.

There are at least two New Keynesian DSGE models derived in the paper, but the main model is defined by the model equations in Appendix B.4, on page 46. This set of equations is the DSGE model. Harrison and Thomas do not use notation that specifically conforms to my description of reaction functions, but that is what they have done. The simulations shown are of that model (and possibly the other model they derived). All the stuff about households on a continuum, optimisations, etc.? Just back story for the actual model. In 1970s wargaming parlance, "chrome."

My guess is that this paper conforms to best practices for DSGE macro in 2019, and thus the "reaction function interpretation" reflects the current state of the art. Furthermore, my guess is that the "exogenous price interpretation" is now mainly of historical interest, with only a few authors pursuing it. Most of the models I looked at dated from before the Financial Crisis, and I suspect that there has been a significant shift in practices since then. Although neoclassicals might view that shift as a sign of progress, a less charitable interpretation is that this was just a catch up to the heterodox critiques (which are never cited, of course). I have discussed this with Alexander Douglas, and our consensus is that our projected academic article on this needs to be very conscious of these methodological shifts.

Does the paper refute my criticisms of notation? Based on my cursory examination, I would argue not. Note the following quotation from the paper:

    Asset market clearing requires equality between government supply of assets and private sector demand. Our notation removes superscripts for market clearing equilibrium asset stocks.

This is followed by equation (10): $b^p_t = b^g_t = b_t$.

From a technical mathematical point of view, that equation is just equivalent to:

  1. We assert that $b^p_t = b^g_t$,
  2. We define $b_t = b^p_t (= b^g_t)$.
The first statement implies that $b^p_t, b^g_t$ are the same mathematical object, which normally implies that they lie within the same mathematical model, and hence we could freely substitute between the variables' constraints. I believe that this would cause havoc with the household budget equation. The appeals to "asset market clearing" and "our notation removes superscripts" have a deeper meaning to Harrison and Thomas. I do not know exactly how they interpret this step, but I would interpret it as the operation of stitching together reaction functions. Operationally, if we look at the systems of equations, that is what is happening.

The contents of this paper are not in my current area of interest, so I have no incentive to dig deeper into their very detailed algebra. However, I would use this as an example of how my interpretations can help outside readers follow the DSGE literature more easily.


(c) Brian Romanchuk 2019

2 comments:

  1. Hyman Minsky refers to Leon Walras and a paper by Debreu when discussing the math models of an economy in equilibrium. The author at Scientific Metrics uses set analysis to find flaws in the use of math to build certain economic models:

    http://scientificmetrics.com/publications.html

    http://scientificmetrics.com/downloads/publications/Barzilai_2009_MCDM.pdf

    If I understand basic arguments: human preferences are subjective and form ordinal sets so one cannot do math operations of addition, multiplication, and differentiation on such sets as applied in economic models. I need to further consider the assumptions and methods in each proposed model.

