Thursday, September 7, 2023
A popular belief in hard money internet commentary is that economists are conspiring to lower reported inflation. The supposed advantages of doing so are to mislead voters and to reduce cost-of-living adjustments paid by the government. Admittedly, there are cases in emerging markets where there is widespread skepticism about government inflation statistics, and the official inflation numbers do not appear to align with market data. Since I accept that CPI numbers probably have been fudged somewhere, I label this belief a “misunderstanding” rather than flatly wrong. However, the usual case in developed countries is that statistical agencies are transparent about their methodologies, and the complaints are bad faith misinterpretations of those methodologies.
Tuesday, September 5, 2023
The usual response to critics who state things like I just did is “give me a better model.” The idea is that we need to replace one reductionist model with another reductionist model. The reasoning seems to be that economics is like physics, where a lot of the history of the field is doing exactly that. (Physicists might be getting into “complexity,” which may or may not be a mathematical pseudo-science. In any event, this is not what people have in mind when they compare economics to physics.) If inflation is a complicated process, any reductionist model is going to fail.
Any empirical work on the link between inflation and the labour market is going to run into a snag that is the result of what I argue are relatively non-controversial positions.
1. We assume that there exists a unitary variable that summarises the business cycle — which might not be directly measurable. It might be something like the first principal component of a few underlying variables. The usual measured variable that is supposed to be a proxy for this variable is GDP, but one may note that the NBER looks at a variety of variables to date recessions. The justification for the existence of this variable is that recessions are a somewhat nonsensical concept without some way of summarising the state of the business cycle that goes up and down. (If one wants to insist that no such unitary variable exists, we end up with business cycle nihilism. That might be the correct stance, but it makes it very awkward to discuss macro.)
2. Inflation is a pro-cyclical variable, possibly with a lag. This can be justified by eyeballing inflation/GDP charts. There are any number of theoretical stories justifying why this is the case.
3. Employment growth is pro-cyclical, almost by definition. Rising unemployment is one of the defining characteristics of a recession.
4. (Core/median) inflation and employment growth empirically exhibit “trends” — although monthly data might be noisy, the averages across months tend to be smoother after seasonal adjustment (albeit with step changes during things like recessions).
5. As an added bonus, we can note that private sector credit is pro-cyclical. This can be seen by eyeballing charts, and is likely to be a component of any model that takes the Kalecki Profit Equation seriously. (Neoclassical models notably do not.) Bank lending and hence deposits are a component of private credit, and thus the bank deposit component of M1 is going to be pro-cyclical.
You do not need a doctorate in statistics or a course in Real Analysis to see that points 1-4 taken together imply that inflation and employment growth are both going to be correlated with some possibly unknown “business cycle” variable. It is going to be nearly impossible to detect a causal relationship between the two variables unless the relationship is simple and stable over time. Guess what? We cannot find any such relationship.
(The fifth point in the list is aimed at any Monetarists who for some bizarre reason decided to read this article.)
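The snag described above can be illustrated with a toy simulation. This is a minimal sketch with invented parameters (the AR(1) persistence, the factor loadings, and the noise levels are all assumptions for illustration, not estimates): inflation and employment growth are each driven by a latent cycle variable, with no causal link between them, yet they come out strongly correlated.

```python
import random
import statistics

def correlation(x, y):
    # Pearson correlation, computed from scratch to stay dependency-free.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(42)
T = 2000

# Latent "business cycle" variable: a persistent AR(1) process.
cycle = [0.0]
for _ in range(T - 1):
    cycle.append(0.9 * cycle[-1] + random.gauss(0, 1))

# Inflation is pro-cyclical with a one-period lag; employment growth is
# pro-cyclical contemporaneously. Neither variable causes the other.
inflation = [0.5 * cycle[t - 1] + random.gauss(0, 0.5) for t in range(1, T)]
employment = [0.8 * cycle[t] + random.gauss(0, 0.5) for t in range(1, T)]

print(round(correlation(inflation, employment), 2))
```

Any regression run on data like this will happily find a tight "link" between inflation and employment growth, which is exactly the identification problem described above.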
Inflation is complicated, and there is no reason to expect that rents are going to move in the same way as college tuition, used automobiles, or imported jeans. To any extent we can model inflation, we need to decompose the aggregate (which was allegedly proved to be the wrong way to look at inflation, according to some neoclassical economists).
The whole “unemployment needs to rise to reduce inflation” argument was a terrible take since it rested on a somewhat innumerate understanding of a “stylised fact” about recessions. Recessions tend to be associated with disinflation (the rate of inflation dropping). As such, one strategy to control inflation is to throw the economy into recession whenever inflation is “too high.” (I am not saying that is a good strategy, but reading between the lines, that is the neoliberal strategy.) However, the causal implication only runs one way: a recession is (typically) sufficient for disinflation, but it is not necessary (as seen in the disinflation after the pandemic spike).
Book Progress? Sigh.
I am still plugging away at editing my inflation manuscript (interrupted by the CFL Labour Day Classic). Most of the work is tweaking existing text, and thus not publishable here. However, I added a missing section that should show up within a week or so.
If I were productive, I might be able to finish it off within a month or two. Based on past experience, a publishing date in January is more realistic.
Wednesday, August 23, 2023
The United States’ extreme dominance of the share of global GDP after World War II was a historical accident, and so the relative rise of other economic powers was inevitable. Meanwhile, the eagerness of the United States to use sanctions as a foreign policy tool is going to create incentives to develop mechanisms to do an end run around those sanctions. Nevertheless, the geopolitical system is far more stable than is commonly described. (I will enter a geopolitical tangent later in this article to justify that claim.)
Friday, August 18, 2023
One of the interesting features of neoclassical macro is the vagueness of how the models are supposed to work. One can find popularisations of General Relativity which are meant to be understood by people who just took high school physics. And if one has the misfortune of studying tensors and manifolds, one might even have a chance of guessing at the mathematics behind the explanations. I have not seen anything remotely useful for neoclassical macro at a general reading level, while the more technical introductions have the defect of being expressed in what is best described as “economist mathematics.”
The working paper “How do central banks control inflation? A guide for the perplexed” by Laura Castillo-Martinez and Ricardo Reis is one of the better attempts at an introduction that I have encountered, but it is mathematical. The advantage is that they address the more squirrelly part of the mathematics that other texts tend to bury under a wall of obfuscation. Someone not interested in the mathematics might be entertained by puzzling through the text, but the hidden cost of doing so is that one is entirely reliant upon the authors’ textual representations of the models.
Back to Basics
The working paper is relatively straightforward because it remains close to the household optimisation problem. This makes it easier to follow because it is closer to standard mathematics.
We could imagine an optimisation problem for a household. Given an initial stock of money and a future earnings flow, the objective is to generate a sequence of consumption expenditures over an infinite time horizon that maximises a utility function. (Yes, an infinite time horizon is a bit silly, but it is convenient mathematically.) For example, we have $100 to spend on apples, and we want to optimise our lifetime apple consumption utility when we have the full grid of future prices of apples.
We assume that the household is given the time series of future (expected) prices as well as future interest rates that determine the rate of return on an unspent money balance. The utility function is chosen so that the solution will tend to spread out consumption over time. (By contrast, if the utility function said that the utility was given by the square of the number of units consumed, the preference is going to be to consume the entire budget in one shot. For example, assume we could buy 100 apples spread across today and tomorrow. For simplicity, we are indifferent to the date of purchase. If our utility function is the square of apples consumed in a period, the optimal solutions (there are two) are to consume 100 apples either today or tomorrow. But if the utility is the square root of the number of apples consumed per period, then the optimal solution is to consume 50 each day. Utility functions used in neoclassical models are like the square root case.)
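The square versus square root contrast above can be sketched numerically. This toy searches integer allocations of 100 apples across two periods (no discounting, indifferent to the date of purchase, as in the example) and reports the best split under each utility function:

```python
# Two-period toy: split 100 apples between today and tomorrow, with no
# discounting, and compare a convex vs. a concave per-period utility.

def total_utility(today, u):
    tomorrow = 100 - today
    return u(today) + u(tomorrow)

convex = lambda x: x ** 2      # "square" utility: rewards bingeing
concave = lambda x: x ** 0.5   # "square root" utility: rewards smoothing

# Search integer allocations for the best split under each utility.
best_convex = max(range(101), key=lambda c: total_utility(c, convex))
best_concave = max(range(101), key=lambda c: total_utility(c, concave))

print(best_convex)   # a corner solution: everything consumed in one period
print(best_concave)  # an interior solution: consumption is smoothed
```

The concave case lands on 50 apples per day, which is why concave (“square root”-like) utility functions give the consumption-smoothing behaviour that neoclassical models want.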
This is a problem that is not too difficult to pursue with standard 1950s optimal control theory, although optimising on an infinite time horizon is somewhat tricky mathematically courtesy of infinite dimensional spaces being a royal pain in the nether regions (to use mathematical jargon).
However, such a problem was not exactly what economists needed: they wanted prices to be determined within the optimisation problem (as well as determining the optimal consumption path). This is an extremely difficult problem to express in standard mathematics, which is why we end up with “economist mathematics.” However, if the model has a single optimisation problem, one can generally reverse engineer what the authors are trying to do. (That is not the case when they throw in multiple optimisations.)
So, How Do Central Banks Control Inflation?
Although the paper has an expansive title suggesting that it answers how central banks control inflation, it is a survey of a number of neoclassical approaches (which may or may not be internally consistent). As such, it is a good introduction to neoclassical debates. However, it is not an empirical paper, leaving open the question “Do these models stink?”
I am most interested in the first approach, which involves embedding something like a Taylor Rule within a model. So, one might ask: how is a Taylor Rule supposed to control inflation? The answer is somewhat painful, but much cleaner than other texts that I have read that skipped over the mathematical ugliness.
The key theoretical mechanism relies on two alternative specifications of the nominal interest rate. Note that everything here is being expressed in log-linear terms, so we add terms rather than multiply factors. (That is, we do not see (1+i) = (1+r)(1+π); rather, i = r + π. Using additive terms is crucial for the algebra.)
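To see how small the gap between the two forms is at realistic rates, here is a quick numerical check (the 2% real rate and 3% inflation rate are just example numbers):

```python
# Compare the exact Fisher relation (1+i) = (1+r)(1+pi) with its
# log-linear (additive) approximation i = r + pi at small rates.

r, pi = 0.02, 0.03            # 2% real rate, 3% inflation (illustrative)
i_exact = (1 + r) * (1 + pi) - 1
i_approx = r + pi

print(round(i_exact, 4))      # 0.0506: includes the r*pi cross term
print(round(i_approx, 4))     # 0.05: the cross term is dropped
```

The difference is exactly the r·π cross term, which is tiny at these magnitudes; that is what makes the additive (log-linear) form a workable approximation for the algebra that follows.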
The first is a Taylor Rule: the nominal policy rate (single period) is equal to the current period inflation rate (the price change from t-1 to t) multiplied by a constant that is greater than 1, plus another term given by the rest of the Taylor Rule (which typically incorporates corrections for a non-zero target inflation rate, plus an estimate of the real rate). The key is that the inflation rate from t-1 to t appears.
The second is the Fisher equation, where the nominal interest rate equals the real interest rate in the economy (discussed more below) plus the expected inflation rate from time t to t+1.
Since it is the same nominal interest rate in both equations, we can equate the two expressions. We then get a relationship between the inflation rates over two time periods. Using some algebra (described below in the text block) and a key assumption, we can express inflation rates at any given time as an infinite sum (“summation”) of terms involving variables that we hopefully know. Readers who do not want to wade through the word salad below can skip to the implications.
I will now describe the manipulations. This probably would have been better with equations, but I will try to describe it as text. One could look at the equations in the article instead of my description, but they have a lot of symbols running around in there, and they also skip how the summation is derived. Given the complexity of the expressions, jumping to the summation formula is not a trivial step for anyone who has not seen the equations multiple times.
We rearrange terms in the joined equation to get an equation where the inflation rate between t-1 and t is equal to a simple function of the (expected) inflation rate from time t to t+1. (I am going to drop the “expected” from the description.)
Since we normally refer to the inflation rate between time t-1 and t as inflation at time t, we see that we can specify inflation at time t as a function of inflation at time t+1.
The reason to do this is that we can then use this relationship to specify inflation at time t+1 as a function that includes inflation at time t+2 (since the equation holds for all t, we can relabel). We can then substitute back into the original equation, so that inflation at time t is equal to some terms plus a factor multiplying inflation at t+2. We then keep going, until we end up with inflation at time t equalling a summation of N terms, plus a term including inflation at t+N.
We then invoke an assumption that the term including inflation at t+N tends to zero as N goes to infinity (discussed below!), and we end up with an expression for inflation at time t that is a summation of terms that we can calculate without knowing future inflation.
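The substitution above can be sketched numerically. As a loud caveat, the relation π_t = (x_t + π_{t+1})/φ used here is my stylised stand-in for the paper's algebra, not its exact expression: φ is the Taylor coefficient (greater than 1) and x_t lumps together the known Taylor Rule terms.

```python
# Stylised rearranged equation: phi * pi_t = x_t + pi_{t+1}, i.e.
#   pi_t = (x_t + pi_{t+1}) / phi,  with Taylor coefficient phi > 1.
# Repeated forward substitution gives
#   pi_t = sum_{k=0}^{N-1} x_{t+k} / phi**(k+1)  +  pi_{t+N} / phi**N,
# and the trailing term is assumed to vanish as N goes to infinity.

phi = 1.5
x = [0.01] * 200  # known (expected) Taylor Rule terms, held constant here

def inflation_today(x, phi):
    # Truncated version of the infinite summation: x_{t+k} / phi^(k+1).
    return sum(xk / phi ** (k + 1) for k, xk in enumerate(x))

pi_0 = inflation_today(x, phi)
# With constant x, the geometric series converges to x / (phi - 1).
print(round(pi_0, 6))
```

With constant x terms the summation collapses to a geometric series, so inflation at time t is pinned down without knowing any future inflation rate, which is exactly the trick the derivation relies on.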
Since this equation works at t=0 (if the assumptions hold!), the inflation rate from time t=-1 to 0 can be calculated, and so the price level at t=0 is pinned down. (This would not be possible if we did not have the Taylor Rule based on historical inflation, as opposed to expected inflation. I complained about indeterminacy in the past, but including historical inflation in the reaction function is the end run around the issue.)
The problem is that the assumption that allows the summation to converge is entirely based on “we assume that the summation converges” (although expressed in a mathematically equivalent format). The logic is essentially “nobody would believe it if the inflation rate tore off to infinity,” which is precisely not the sort of mathematical logic taught in reputable Real Analysis courses.
The authors even note one of the fundamental issues: the Taylor Rule magnifies inflation deviations. That is not the sort of mathematical system for which I am willing to make leaps of faith regarding the convergence of infinite summations (and the existence and uniqueness of solutions).
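The magnification problem is easy to demonstrate. Using the same stylised relation as before (my simplified stand-in for the paper's algebra, with φ the Taylor coefficient and x a constant known term), we can iterate it forward as a difference equation: any deviation from the unique bounded path gets multiplied by φ > 1 every period.

```python
# Iterate the stylised relation forward as a difference equation:
#   pi_{t+1} = phi * pi_t - x.
# With phi > 1, any deviation from the unique bounded path is magnified
# each period, so the system only "converges" if it starts exactly on it.

phi, x = 1.5, 0.01
pi_star = x / (phi - 1)   # the bounded ("fundamental") solution

def iterate(pi0, steps=50):
    pi = pi0
    for _ in range(steps):
        pi = phi * pi - x
    return pi

on_path = iterate(pi_star)          # the fixed point: stays put
off_path = iterate(pi_star + 1e-6)  # a one-millionth deviation explodes

print(round(on_path, 6))
print(off_path > 1.0)
```

A perturbation of one millionth turns into an inflation rate in the hundreds within 50 periods, which is why the convergence assumption amounts to assuming the economy starts (and stays) exactly on the knife-edge path.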
Banks - A Red Herring
The article includes a balderdash reference to “banks” that allegedly use “reserves” to invest in “real assets.” Heterodox authors could easily be misled by that text. As always, one needs to take textual assertions about model mathematics made by neoclassicals with a massive grain of salt. There are no “banks” in the model. Instead, they are coming up with a fairy story to motivate an argument about “real interest rates.”
The idea is that if the (expected) real rate of interest on financial investments (reserves/bills that pay the policy rate) departs from the assumed known real rate of return on real assets, then mysterious entities will pop into existence and buy/sell the real assets (which are also the consumption good) versus bills to arbitrage the difference in return. (The real rate of return is supposed to be known because entities know the current period production function, but anyone even passingly familiar with how businesses work realises that this skips a lot of uncertainties.)
In other words, these “bank” entities have no mathematical existence within the model description itself; the only mathematical object is the assumption that the Fisher equation holds (a statement about set elements).
Although this story has a lot of plausibility issues, it is also core to the mathematical manipulations. If the real rate of return at time t is not fixed by the economic laws of nature, the Fisher Equation (nominal interest rate equals that real rate of return plus expected inflation) is no longer useful, and we cannot use it to create the summation formula.
The random appearance of “banks” is the sort of thing one has to expect when dealing with economist mathematics. Properly structured mathematics refers to statements about sets, and the sets involved are clearly delineated within the exposition of the model at the beginning. Economist mathematics involves randomly dropping in entities that are not sets in the middle of the exposition, and the reader has to figure out how those entities interact with already existing mathematical entities. And since they refer to real world entities — like banks — one could easily make the mistake of using mathematical operations describing how banks operate in the real world, as opposed to what the authors want the entities to do (“arbitraging” Treasury bills and real assets). It also creates the mistaken impression that such neoclassical models include banking system dynamics, which is definitely not the case here.
If we are to take the model literally, central banks “control inflation” by announcing that they are going to follow a rule that would probably cause the economy to blow up, but nobody really believes it will blow up, so everybody expects inflation to follow some sensible path near the inflation target.
One only needs to re-read that sentence to realise that one is not supposed to take the mathematical models too literally. Instead, one is supposed to assume that the model is an idealised approximation that captures mechanisms that allegedly exist in the real world. The problem with this approach is that if one starts ignoring the core of the mathematical model, there are no objective standards for discussing the quality of the model predictions.
The fundamental issue with neoclassical modelling is that the equilibrium assumption means that everything in the economy is tied together, and mainly influenced by expected values of variables — which are generally not measurable. With all the modelling weight on non-measurable quantities, it is quite hard to deal with what should be straightforward questions, like “What is the effect of an immediate 50 basis point rate hike?,” or even “What was the effect of the Fed rate hike campaign?” The only questions the models are clearly suited for are ones like “What happens if the non-measurable expectations for the production function shift downwards for the rest of time?”