Recent Posts

Showing posts with label DSGE. Show all posts

Friday, August 18, 2023

Castillo-Martinez And Reis Article On Interest Rates

One of the interesting features of neoclassical macro is the vagueness of how the models are supposed to work. One can find popularisations of General Relativity which are meant to be understood by people who just took high school physics. And if one has the misfortune of studying tensors and manifolds, one might even have a chance of guessing at the mathematics behind the explanations. I have not seen anything remotely useful for neoclassical macro at a general reading level, while the more technical introductions have the defect of being expressed in what is best described as “economist mathematics.”

The working paper “How do central banks control inflation? A guide for the perplexed.” by Laura Castillo-Martinez and Ricardo Reis is one of the better attempts at an introduction that I have encountered, but it is mathematical. The advantage is that they address the more squirrelly part of the mathematics that other texts tend to bury under a wall of obfuscation. Someone not interested in the mathematics might be entertained by puzzling through the text, but the hidden cost to doing that is one is entirely reliant upon their textual representations about the models.

Back to Basics

The working paper is relatively straightforward because it remains close to the household optimisation problem. This makes it easier to follow because it is closer to standard mathematics.

We could imagine an optimisation problem for a household. Given an initial stock of money and a future earnings flow, the objective is to generate a sequence of consumption expenditures over an infinite time horizon that optimise a utility function. (Yes, an infinite time horizon is a bit silly, but it is convenient mathematically.) For example, we have $100 to spend on apples, and we want to optimise our lifetime apple consumption utility when we have the full grid of future prices of apples.

We assume that the household is given the time series of future (expected) prices as well as future interest rates that determine the rate of return on an unspent money balance. The utility function is chosen so that the solution will tend to spread out consumption over time. (By contrast, if the utility function said that the utility was given by the square of the number of units consumed, the preference is going to be to consume the entire budget in one shot. For example, assume we could buy 100 apples spread across today and tomorrow. For simplicity, we are indifferent to the date of purchase. If our utility function is the square of apples consumed in a period, the optimal solutions (there are two) are to consume 100 apples either today or tomorrow. But if the utility is the square root of the number of apples consumed per period, then the optimal solution is to consume 50 each day. Utility functions used in neoclassical models are like the square root case.)
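The concave-versus-convex utility point above can be sketched numerically. This is my own toy illustration (not from the paper): splitting 100 apples across two days, a convex utility (the square) rewards bingeing, while a concave one (the square root) rewards smoothing.

```python
# Sketch (my own illustration, not from the paper): compare a convex
# utility (c**2, rewards bingeing) with a concave one (sqrt, rewards
# smoothing) for splitting 100 apples across two days.
import math

def total_utility(u, split):
    """Sum per-period utility over a (day1, day2) consumption split."""
    return sum(u(c) for c in split)

convex = lambda c: c ** 2         # utility = square of apples eaten
concave = lambda c: math.sqrt(c)  # utility = square root of apples eaten

splits = [(100, 0), (50, 50), (0, 100)]
best_convex = max(splits, key=lambda s: total_utility(convex, s))
best_concave = max(splits, key=lambda s: total_utility(concave, s))

print(best_convex)   # a corner solution: all apples in one period
print(best_concave)  # the smoothed solution: (50, 50)
```

The neoclassical utility functions in the text behave like the concave case, which is why the optimal consumption path is spread out over time.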

This is a problem that is not too difficult to pursue with standard 1950s optimal control theory, although optimising on an infinite time horizon is somewhat tricky mathematically courtesy of infinite dimensional spaces being a royal pain in the nether regions (to use mathematical jargon).

However, such a problem was not exactly what economists needed: they wanted prices to be determined within the optimisation problem (as well as determining the optimal consumption path). This is an extremely difficult problem to express in standard mathematics, which is why we end up with “economist mathematics.” However, if the model has a single optimisation problem, one can generally reverse engineer what they are trying to do. (Not the case when they throw in multiple optimisations.)

So, How Do Central Banks Control Inflation?

Although the paper has an expansive title suggesting that it answers how central banks control inflation, it is in fact a survey of a number of neoclassical approaches (which may or may not be internally consistent). As such, it is a good introduction to neoclassical debates. However, it is not an empirical paper, leaving open the question “Do these models stink?”

I am most interested in the first approach, which involves embedding something like a Taylor Rule within a model. So, one might ask: how is a Taylor Rule supposed to control inflation? The answer is somewhat painful, but much cleaner than other texts that I have read that skipped over the mathematical ugliness.

The key theoretical mechanism relies on two alternative specifications of the nominal interest rate. Note that everything here is being expressed in log-linear terms, so we add terms rather than multiply factors. (That is, we do not see (1+i) = (1+r)(1+π), rather i = r + π. Using additive terms is crucial for the algebra.)

  1. The first is a Taylor Rule: the nominal policy rate (single period) is equal to a constant that is greater than 1 multiplied by the current period inflation rate (so the price change from t-1 to t), plus another term given by the rest of the Taylor Rule (which typically incorporates corrections for a non-zero target inflation rate, plus an estimate of the real rate). The key is that the inflation rate from t-1 to t appears.

  2. The second is the Fisher equation, where the nominal interest rate equals the real interest rate in the economy (discussed more below) plus the expected inflation rate from time t to t+1.

Since it is the same nominal interest rate in both equations, we can equate the two expressions. We then get a relationship between the inflation rates over two time periods. Using some algebra (described below in the text block) and a key assumption, we can express inflation rates at any given time as an infinite sum (“summation”) of terms involving variables that we hopefully know. Readers who do not want to wade through the word salad below can skip to the implications.

I will now describe the manipulations. This probably would have been better with equations, but I will try to describe it as text. One could look at the equations in the article instead of my description, but they have a lot of symbols running around in there, and they also skip how the summation is derived. Given the complexity of the expressions, jumping to the summation formula is not a trivial step for anyone who has not seen the equations multiple times.

We rearrange terms in the joined equation to get an equation where the inflation rate between t-1 and t is equal to a simple function of the (expected) inflation rate from time t to t+1. (I am going to drop the “expected” from the description.)

Since we normally refer to the inflation rate between time t-1 and t as inflation at time t, we see that we can specify inflation at time t as a function of inflation at time t+1.

The reason to do this is that we can then use this relationship to specify inflation at time t+1 as a function that includes inflation at time t+2 (since the equation holds for all t, we can relabel). We can then substitute back into the original equation, so that inflation at time t is equal to some terms plus a factor multiplying inflation at t+2. We then keep going, until we end up with inflation at time t equalling a summation of N terms, and a term including inflation at t+N.

We then invoke an assumption that the term including inflation at t+N tends to zero as N goes to infinity (discussed below!), and we end up with an expression for inflation at time t that is a summation of terms that we can calculate without knowing future inflation.
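The substitution can be sketched numerically. This is a toy version with my own symbols and numbers (phi, r, x are illustrative, not taken from the paper): writing the Taylor rule as i_t = phi·π_t + x_t (phi > 1) and the Fisher equation as i_t = r_t + π_{t+1}, equating and solving gives π_t = (π_{t+1} + r_t - x_t)/phi, which we iterate forward.

```python
# Numerical sketch of the forward substitution (toy numbers of my own,
# not from the paper). Log-linear Taylor rule: i_t = phi*pi_t + x_t.
# Fisher equation: i_t = r_t + pi_{t+1}. Equating and solving for pi_t:
# pi_t = (pi_{t+1} + r_t - x_t) / phi.

phi = 1.5           # Taylor coefficient, greater than 1
N = 200             # truncation horizon
r = lambda t: 0.02  # assumed known real-rate path (constant here)
x = lambda t: 0.01  # "rest of the Taylor Rule" term (constant here)

# Substitute forward N times, assuming the tail term phi**(-N)*pi_{t+N}
# vanishes -- the convergence assumption discussed in the text.
pi_0 = sum(phi ** -(k + 1) * (r(k) - x(k)) for k in range(N))

# With constant paths this is a geometric series: (r - x)/(phi - 1).
print(round(pi_0, 6))  # 0.02, i.e. (0.02 - 0.01)/(1.5 - 1)
```

Because phi > 1, the weights phi**-(k+1) shrink geometrically, which is what makes the truncation plausible — provided the tail term is assumed away.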

Since this equation works at t=0 (if the assumptions hold!), the inflation rate from time t=-1 to 0 can be calculated, and so the price level at t=0 is pinned down. (This would not be possible if we did not have the Taylor rule based on historical inflation, as opposed to expected inflation. I complained about indeterminacy in the past, but including historical inflation in the reaction function is the end run around the issue.)

The problem is that the assumption that allows the summation to converge is entirely based on “we assume that the summation converges” (although expressed in a mathematically equivalent format). The logic is essentially “nobody would believe it if the inflation rate tore off to infinity,” which is precisely not the sort of mathematical logic taught in reputable Real Analysis courses.

The authors even note one of the fundamental issues: the Taylor Rule magnifies inflation deviations. That is not the sort of mathematical system that I am going to make leaps of faith regarding the convergence of infinite summations (and the existence and uniqueness of solutions).
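The magnification can be seen by running the same equated equation forward in time rather than solving it forward. Again a toy sketch with my own numbers: any deviation from the unique bounded inflation path is multiplied by phi each period.

```python
# Sketch of why the convergence assumption does heavy lifting (toy
# numbers of my own): iterate the equated equation forward in time,
# pi_{t+1} = phi*pi_t + x_t - r_t. Any deviation from the unique
# bounded path is magnified by phi > 1 every period.

phi, r, x = 1.5, 0.02, 0.01
pi_star = (r - x) / (phi - 1)  # the bounded ("fundamental") solution

def path(pi_0, periods=20):
    pis = [pi_0]
    for _ in range(periods):
        pis.append(phi * pis[-1] + x - r)
    return pis

on_target = path(pi_star)
off_target = path(pi_star + 0.001)  # a 10 basis point deviation at t=0

print(round(on_target[-1], 6))  # stays at the bounded solution, 0.02
print(off_target[-1])           # the deviation grows by phi**20 (about 3300x)
```

The convergence assumption amounts to ruling out every explosive path by fiat, which is exactly the leap of faith described above.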

Banks - A Red Herring

The article includes a balderdash reference to “banks” that allegedly use “reserves” to invest in “real assets.” Heterodox authors could easily be misled by that text. As always, one needs to take textual assertions about model mathematics made by neoclassicals with a massive grain of salt. There are no “banks” in the model. Instead, they are coming up with a fairy story to motivate an argument about “real interest rates.”

The idea is that if the (expected) real rate of interest on financial investments (reserves/bills that pay the policy rate) departs from the assumed known real rate of return on real assets, then mysterious entities will pop into existence and buy/sell the real assets (which is also the consumption good) versus bills to arbitrage the difference in return. (The real rate of return is supposed to be known because entities know the current period production function, but anyone familiar with how businesses work realises that this skips a lot of uncertainties.)

In other words, these “bank” entities have no mathematical existence within the model description itself; the only mathematical object is the assumption that the Fisher equation holds (a statement about set elements).

Although this story has a lot of plausibility issues, it is also core to the mathematical manipulations. If the real rate of return at time t is not fixed by the economic laws of nature, the Fisher Equation (nominal interest rate equals that real rate of return plus expected inflation) is no longer useful, and we cannot use it to create the summation formula.

The random appearance of “banks” is the sort of thing one has to expect when dealing with economist mathematics. Properly structured mathematics refers to statements about sets, and the sets involved are clearly delineated within the exposition of the model at the beginning. Economist mathematics involves randomly dropping in entities that are not sets in the middle of the exposition, and the reader has to figure out how those entities interact with already existing mathematical entities. And since they refer to real world entities — like banks — one could easily make the mistake of using mathematical operations describing how banks operate in the real world, as opposed to what the authors want the entities to do (“arbitraging” Treasury bills and real assets). It also creates the mistaken impression that such neoclassical models include banking system dynamics, which is definitely not the case here.

Concluding Remarks

If we are to take the model literally, central banks “control inflation” by announcing that they are going to follow a rule that would probably cause the economy to blow up, but nobody really believes it will blow up, so everyone expects inflation to follow some sensible path near the inflation target.

One only needs to re-read that sentence to realise that one is not supposed to take the mathematical models too literally. Instead, one is supposed to assume that it is an idealised approximation that captures mechanisms that allegedly exist in the real world. The problem with this approach is that if one starts ignoring the core of the mathematical model, there are no objective standards to discuss the quality of the model predictions.

The fundamental issue with neoclassical modelling is that the equilibrium assumption means that everything in the economy is tied together, and mainly influenced by expected values of variables — which are generally not measurable. With all the modelling weight on non-measurable quantities, it is quite hard to deal with what should be straightforward questions, like “What is the effect of an immediate 50 basis point rate hike?,” or even “What was the effect of the Fed rate hike campaign?” The only questions the models are clearly suited for are ones like “What happens if the non-measurable expectations for the production function shift downwards for the rest of time?”


Email subscription: Go to https://bondeconomics.substack.com/ 

(c) Brian Romanchuk 2023

Monday, August 14, 2023

Neoclassical Interest Rate Articles

I ran into a pair of articles about interest rates by neoclassicals. They are straightforward by the standards of neoclassical macro, and cover two topics of interest. The first topic is — how do neoclassicals believe that central banks control inflation? The second is on r* estimates, which are presumably needed in order to use interest rates to control inflation according to the theory.

Wednesday, August 9, 2023

Inflation Or Nominal GDP Targeting? Whatever.

Nominal GDP targeting is the latest mainstream fad, which is allegedly going to improve central banking when compared to inflation targeting. In a certain sense, I have some sympathies. If a central bank could stabilise nominal GDP growth, it has abolished the business cycle. I am in favour of abolishing the business cycle. Doing so would not solve all economic woes, but it would be the most that can be hoped for from macroeconomic stabilisation policy. Unfortunately, there is no reason to believe that the central bank can wave a magical expectations wand and abolish the business cycle — despite the claims of some Market Monetarists.

Tuesday, August 1, 2023

Inflation Targeting In Practice

I have been running into the ongoing debates on nominal GDP targeting, and whether it is superior to inflation targeting. To look at this debate, I need to put my “conventional economist” hat on, as if one accepts the heterodox view that interest rate policy is ineffective, the entire debate is pointless (neither policy “works”). As will become apparent, I think the neoclassical models behind the debate are dubious. But if we accept that interest rates at least sort-of work the way that they are conventionally assumed to (e.g., rate hikes dampen growth and inflation), we can have a view on how a change in targets would work in practice.

Wednesday, January 19, 2022

Whither r*?

(Note: This article was delayed because of technical difficulties. I finally found a workaround.)

Although hand-wringing about Quantitative Easing and the “transitory-ness” of inflation is catching most people’s attention, there is an interesting theoretical concern that is about to get quite pressing. That is: what is up with r* (the modernised version of the “natural rate of interest,” although the word “natural” was finally dropped from the jargon)? Although post-Keynesians generally argue that r* does not exist — so this is a non-issue — neoclassicals cannot easily embrace that position.

Monday, October 4, 2021

Why Inflation Expectations Matter A Lot For Theory

The fallout from the Rudd article on inflation expectations (that I described here) is still percolating around the internet. Duncan Weldon just wrote “We Have No Theory of Inflation” which discusses the issues from the empirical side. In this article, I am re-writing and expanding on some points that I noted in my previous article. If inflation expectations do not behave as predicted by neoclassical models, that is a critical problem that cannot be easily dealt with by adding epicycles to the models.

Monday, September 27, 2021

Rudd Inflation Expectations Article

Jeremy B. Rudd caught a fair amount of attention with his recent discussion paper “Why Do We Think That Inflation Expectations Matter for Inflation? (And Should We?)” The text has a lot of zingers that post-Keynesians would write, but Rudd is in fact a researcher with the Federal Reserve.

Monday, March 29, 2021

Can We Replicate DSGE Models With An Agent-Based Model?

I have another non-paywalled draft PDF on my Patreon (link). It contains initial musings about a theoretical question: can we implement an agent-based model, and have it converge to a DSGE model equilibrium solution?

The text is somewhat rough, as it is a brainstorming document rather than a finished product. My conjecture is that the only way to get a DSGE-style equilibrium is to force the agents to use that solution, which is not within the "spirit" of agent-based modelling.

With that digression out of the way, I have returned to puttering around with my Python agent-based framework. The code is still preliminary, and I would not yet describe it as a functioning model. However, within a few weeks, it should start to resemble a very simple model that could be related to an aggregate model of the economy.

(As for my Patreon, for now I am treating it as a place to host free PDF's. Once the agent-based framework is in a functioning state, I will place high-level design documents there as a treat for Patrons, and it would be "open for business" in a real sense.)

Thursday, March 25, 2021

DSGE Notation Comments

I have put up a non-paywalled draft PDF about DSGE model notation. It's a rehash of things I have written earlier, but since I can stick PDF's on Patreon, I formatted it properly using LaTeX.

I do not view this as a topic of burning interest for me. However, I am in the process of writing a short comment on agent-based models and DSGE models, and I needed this background.



Thursday, February 11, 2021

The Great Vacation: Recessions In DSGE Models (Part II)

The Great Vacation Effect is what I term one well-known pathological side effect of almost all macro dynamic stochastic general equilibrium (DSGE) models: since employment hours are a voluntary decision in the household optimisation problem, the direct implication is that unemployment is voluntary as well. As such, The Great Depression can be interpreted as The Great Vacation. Although this silliness is well known, it has nasty side effects for recession analysis. This article continues the discussion of the previous part, turning to the question of why this effect matters even if we suspend disbelief with respect to the interpretation of unemployment.

Monday, February 8, 2021

The Great Vacation: Recessions In DSGE Models (Part I)

Neoclassical models are built around optimising behaviour. The logic for this is somewhat reasonable: one should expect the private sector to look out for its own interests, and not be tricked by policymakers into self-defeating behaviour. The aspiration is hard to argue against; the problem is the implementation. When it comes to recession analysis, the most blatant problems are in the modelling of household sector behaviour. Since working is voluntary, the hours worked in a period is allegedly a decision variable that can be controlled unilaterally by the household in order to optimise its utility. However, since employment is voluntary — so is unemployment. The result is that recessions can be seen as the optimal decision of households to stop working. As wags have described it, The Great Depression was actually The Great Vacation.

Thursday, February 4, 2021

Primer: Labour Markets In Neoclassical Models

The labour market is a key driver for business cycles. Within standard economic models where production is mainly a function of capital and labour (and productivity which determines the multiplier from these inputs), given that productivity is normally fairly stable, we can only generate a recession via a drop in employment. (The recession of 2020 provides an example of "productivity" dropping as a result of governments shuttering activity, but even then, the mechanism for lower production was stopping workers from going to work. This could be captured any number of ways within a model.)
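The point about employment being the only recession lever can be sketched with the standard Cobb-Douglas production function. The parameter values here are my own illustration, not from any particular model.

```python
# Minimal sketch of the point above (standard Cobb-Douglas form; the
# parameter values are my own illustration): with capital and
# productivity stable, output can only fall if labour input falls.

def output(A, K, L, alpha=0.3):
    """Cobb-Douglas production: Y = A * K**alpha * L**(1 - alpha)."""
    return A * K ** alpha * L ** (1 - alpha)

A, K = 1.0, 100.0
normal = output(A, K, L=100.0)
recession = output(A, K, L=90.0)  # a 10% drop in hours worked

drop = 1 - recession / normal
print(round(drop, 3))  # roughly 0.071: a 10% fall in labour cuts output ~7%
```

With A and K held fixed, the only way the model generates a fall in Y is through L, which is exactly why the modelling of the labour market decision matters so much for recession analysis.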

Monday, February 1, 2021

Capital Complexities

The definition and treatment of capital is an important issue that arises quickly when discussing neoclassical approaches to the business cycle. In particular, one can rapidly fall down the rabbit hole of the Cambridge Capital Controversies. However, if the objective is to focus on what is important for understanding the business cycle, we see that capital is hard to easily model, and this is related to the opacity of business cycle analysis.

Tuesday, December 22, 2020

Animal Spirits In A Monetary Model: Initial Comments

Roger E.A. Farmer and Konstantin Platonov published "Animal Spirits in a Monetary Model" in 2019 (link). This is a continuation of Roger Farmer's research programme of working with DSGE-style models but with a notion of multiple equilibria. Within the framework, the final solution is pinned down by "animal spirits": beliefs about the future evolution of the economy.

Wednesday, November 18, 2020

Roger Farmer's Comments On The Natural Rate Of Unemployment

One of my concerns about mainstream economic methodology is the dependence upon hidden variables that are estimated with techniques like the Kalman Filter. The issue is that the resulting methodology is non-falsifiable: it will always end up being consistent with any observed data (admittedly, some outright bizarre behaviour might be rejected). 

Although this might appear to be my own hobby horse, I just want to note that I was not the first person to make such a complaint. Various heterodox authors have levelled similar complaints, but my feeling is that they have done so in such a long-winded fashion that non-heterodox readers like myself have a very hard time picking out what is a straightforward -- but important -- point. In reading Prosperity for All: How to Prevent Financial Crises by Roger E. Farmer (Amazon affiliate link), he makes the same point with respect to the natural rate of unemployment.

Wednesday, November 11, 2020

Side Effects Of Stability Of DSGE Models

The stability of the solutions of workhorse dynamic stochastic general equilibrium (DSGE) models is a major drawback of these models when trying to explain recessions. This article looks at the estimated linear New Keynesian model that was used in the previous discussion of the estimation of r* -- the Holston-Laubach-Williams (HLW) model.

As I have noted a few times previously, more interesting behaviour can be obtained by nonlinear DSGE models. However, this flexibility comes at the cost of increasing the difficulty of fitting the model to data. There is an infinite number of models one can use to tell stories with.

Sunday, November 8, 2020

Roger Farmer's Work: A Bridge Between Heterodoxy And The Mainstream?

I am now returning to my work on recession analysis, covering the background research that will make its way into the second volume of recessions. I have more work to do on nonlinear DSGE models, but my argument is that they are not inherently that interesting, given their mathematical intractability. Yes, you can tell stories with them -- but so what? I can tell stories any number of ways, all of them more interesting.

Rather than chase after hundreds of papers that I think are uninteresting and will be rapidly made obsolete by the ongoing wave of research, I would rather focus on a niche area that I think is interesting. Roger E. Farmer has been publishing models for some time that fit within the broad neoclassical tradition, yet have some similarities to post-Keynesian thinking. The book Prosperity for All (Amazon affiliate link) gives a readable introduction to his work.

Sunday, November 1, 2020

Effect Of Recessions On r* Estimates

This article demonstrates the importance of recessions in driving down the r* estimate produced by the Holston-Laubach-Williams (HLW) methodology. Although there are other algorithms that can be used to generate an r* estimate, my argument is that they should have similar qualitative properties. In the case of the HLW estimate, my argument is that the nature of the recession in 2008 is a major contributor to the fall in r* thereafter. The underlying problem is that real-world data does not match the probability distribution assumed in the algorithm.

Wednesday, October 28, 2020

The Perils Of Non-Causal Models: r* Edition

(Figure: HLW r*, effect of 2020.)

One important property of time series models is whether they are causal or non-causal. A non-causal model has the property that future values of inputs affect the current values of outputs. For time series, the calculation implies the use of a time machine, which is generally not available. One needs to be careful of the issues posed by non-causality in financial model building, since time series libraries treat time series as single units, and contain many non-causal operations. However, as I discovered the hard way, the Holston-Laubach-Williams (HLW) model is non-causal, and the 2020 spike causes some serious issues (figure above). I previously analysed the model, but had not realised the magnitude of the problem associated with 2020. Although my qualitative analysis was not greatly affected, the charts were perhaps too sensationalistic.
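The non-causality problem can be shown with a much simpler filter than HLW. This is my own toy example (a centered moving average, not the HLW code): the filtered value at time t uses observations from t+1, so a late-sample spike revises earlier filtered values.

```python
# Sketch of the causality point (my own toy series, not the HLW code):
# a centered moving average is non-causal because the value at time t
# uses the observation at t+1. A spike at the end of the sample
# therefore revises *earlier* filtered values.

def centered_ma(series, window=3):
    """Centered moving average; needs future values, hence non-causal."""
    half = window // 2
    out = []
    for t in range(half, len(series) - half):
        out.append(sum(series[t - half : t + half + 1]) / window)
    return out

calm = [1.0] * 10
spiked = calm[:-1] + [50.0]  # shock only in the final observation

before = centered_ma(calm)
after = centered_ma(spiked)

# The last filtered value changes even though nothing at that
# date changed: the future spike leaks backward through the filter.
print(before[-1], after[-1])
```

Kalman-smoothed estimates like HLW's r* behave the same way on a larger scale, which is why the 2020 spike reached back and distorted pre-2020 estimates.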

Sunday, October 25, 2020

Falling r* Is No Accident

A great deal of significance has been attached to the fall in r*, which is the current preferred term for what was known as the natural rate of interest. My belief is that this fall is not due to structural factors in the real economy, rather it is an artefact of the means of estimating r*, as well as the reaction function of New Keynesian central bankers.