Monday, January 15, 2024

A Response To A Question About Post-Keynesian Interest Rate Theories (...And A Rant)

I got a question about references for post-Keynesian theories of interest rates. My answer to this has a lot of levels, and eventually turns into a rant about modern academia. Since I do not want a good rant to go to waste, I will spell it out here. Long-time readers may have seen portions of this rant before, but my excuse is that I have a lot of new readers.

(I guess I can put a plug in for my book Interest Rate Cycles: An Introduction which covers a variety of topics around interest rates.)

References (Books)

For reasons that will become apparent later, I do not have particular favourites among post-Keynesian theories of interest rates. For finding references, I would give the following three textbooks as starting points. Note that as academic books, they are pricey. However, starting with textbooks is more efficient from a time perspective than trawling around on the internet looking for free articles.

  1. Mitchell, William, L. Randall Wray, and Martin Watts. Macroeconomics. Bloomsbury Publishing, 2019. Notes: This is an undergraduate-level textbook, and references are very limited. For the person who originally asked the question, this is perhaps not the best fit.

  2. Lavoie, Marc. Post-Keynesian economics: new foundations. Edward Elgar Publishing, 2022. This is an introductory graduate-level textbook, and is a survey of PK theory. Chock full o’ citations.

  3. Godley, Wynne, and Marc Lavoie. Monetary economics: an integrated approach to credit, money, income, production and wealth. Springer, 2006. This book is an introduction to stock-flow consistent models, and is probably the best starting point for post-Keynesian economics for someone with mathematical training.

Why Books?

If you have access to a research library and have at least some knowledge of the field, you can do a literature survey by going nuts chasing down citations from other papers. I did my doctorate in the Stone Age, and I had a stack of several hundred photocopied papers related to my thesis sprawled across my desk1, and I periodically spent the afternoon camped out in the journal section of the engineering library reading and photocopying the interesting ones.

If you do not have access to such a library, it is painful getting articles. And if you do not know the field, you are making a grave mistake to rely on articles to get a survey of the field.

  1. In order to get published, authors have no choice but to hype up their own work. Ground-breaking papers can afford to be modest, but more than 90% of academic papers ought never to have been published, so their authors have no choice but to self-hype.

  2. Academic politics skews which articles are cited. Also, alternative approaches need to be minimised in order to avoid handing ammunition to reviewers looking to reject the paper.

  3. The adversarial review process makes it safer to avoid anything remotely controversial in areas of the text that are not part of what allegedly makes it novel.

My points here might sound cynical, but they are just what happens if you apply minimal intellectual standards to modern academic output. Academics have to churn out papers; the implication is that the papers cannot all be winners.

The advantage of an “introductory post-graduate” textbook is that the author is doing the survey of a field, and it is exceedingly unlikely that they produced all the research being surveyed. This allows them to take a “big picture” view of the field, without worries about originality. Academic textbooks are not cheap, but if you value your time (or have an employer to pick up the tab), they are the best starting point.

My Background

One of my ongoing jobs back when I was an employee was doing donkey work for economists. To the extent that I had any skills, I was a “model builder.” And so I was told to go off and build models according to various specifications.

Over the years, I developed hundreds (possibly thousands) of “reduced order” models that related economic variables to interest rates, based on conventional and somewhat heterodox thinking. (“Reduced order” has a technical definition; I am using the term somewhat loosely to mean “relatively simple models with a limited number of inputs and outputs.”) Of course, I also mucked around with the data and models on my own initiative.

To summarise my thinking, I am not greatly impressed with the ability of reduced-order models to tell us about the effect of interest rates. And it is not hard to prove me wrong: just come up with such a model that works (a sketch of what I mean by such a model follows). The general lack of agreement on such a model is a rather telling point.
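For concreteness, here is a minimal sketch (entirely my own illustration, on synthetic data with made-up parameters) of what a reduced-order model looks like in this loose sense: one output, one lagged input, fit by ordinary least squares.

```python
# A minimal sketch of a "reduced-order" model in the loose sense above:
# a handful of inputs, one output, fit by least squares. The data and
# the "true" coefficients are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
T = 120                                      # quarters of synthetic data
real_rate = rng.normal(1.5, 1.0, T)          # hypothetical real policy rate (%)
noise = rng.normal(0.0, 1.0, T)
# Assume GDP growth responds to the real rate with a 4-quarter lag.
gdp_growth = 2.5 - 0.3 * np.roll(real_rate, 4) + noise

# Regress growth on a constant and the 4-quarter lagged real rate,
# dropping the first 4 observations (which have no valid lag).
X = np.column_stack([np.ones(T - 4), real_rate[:-4]])
y = gdp_growth[4:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, rate sensitivity:", beta)  # roughly (2.5, -0.3)
```

The disagreement in practice is over whether any such specification holds up out of sample, not over the mechanics of fitting one.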

“Conventional” Thinking On Interest Rates

The conventional thinking on interest rates is that if inflation is too high, the central bank hikes the policy rate to slow it. The logic is that if interest rates go up, growth or inflation (or both) falls. This conventional thinking is widespread, and even many post-Keynesians agree with it.

Although there is widespread consensus about that story, the problem shows up as soon as you want to put numbers to it. An interest rate is a number. How high does it have to be to slow inflation?

Back in the 1990s, a lot of people argued that the magic cut-off point for the effect of the real interest rate was the “potential trend growth rate” of the economy (working-age population growth plus productivity growth). The beauty of that theory is that it gave a testable prediction. The minor oopsie was that it did not, in fact, work.

So we were stuck with what used to be called the “neutral interest rate” — now denoted r* — which moves around (a lot). The level of r* is estimated with a suitably complex statistical process. The more complex the estimation the better, since you need something to distract from the issue of non-falsifiability.

If you use a model to calibrate r* on historical data, then by construction your model will “predict” historical data when you compare the actual real policy rate against your r* estimate. The problem is that if r* keeps moving around, it does not tell you much about the future. For example, is a real interest rate of 2% too low to cause inflation to drop? Well, just set it at 2% and see whether inflation drops! The problem is that you only find out later.
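To illustrate the circularity, here is a deliberately crude sketch (my own construction, not any published r* methodology): “estimate” r* so that the real policy rate sits above it whenever inflation subsequently fell, and below it otherwise. The in-sample fit is perfect by construction, which is exactly the problem.

```python
# Why calibrating r* to history is unfalsifiable in-sample: place r*
# just below the real rate when inflation fell, just above when it rose.
# Any fancy smoother layered on top only disguises the same fitting.
import numpy as np

rng = np.random.default_rng(0)
T = 40
real_rate = rng.normal(2.0, 1.0, T)      # hypothetical real policy rate (%)
d_inflation = rng.normal(0.0, 0.5, T)    # hypothetical change in inflation (%)

eps = 0.25
r_star = np.where(d_inflation < 0, real_rate - eps, real_rate + eps)

# The "model" now predicts history perfectly.
predicted_fall = real_rate > r_star
actual_fall = d_inflation < 0
print("In-sample hit rate:", (predicted_fall == actual_fall).mean())  # 1.0
```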

Post-Keynesian Interest Rate Models

If you read the Marc Lavoie “New Foundations” text, you will see that defining “post-Keynesian” economics is difficult. There are multiple schools of thought (possibly with some “schools” being one individual), one of which he labels “Post-Keynesian” (capital P) and which does not consider anybody else to be “post-Keynesian.” I take what Lavoie labels the “broad tent” view, and accept that PK economics is a mix of schools of thought that have some similarities, but still have theoretical disagreements.

Interest rate theory is one of the big areas of disagreement. This is especially true after the rise of Modern Monetary Theory (MMT).

MMT is a relatively recent offshoot of the other PK schools of thought. What distinguishes MMT is that the founders wanted to come up with an internally consistent story that could be used to convince outsiders to act more sensibly. The effect of interest rates on the economy is one place where MMTers have a major disagreement with others in the broad PK camp, and even the MMTers have somewhat varied positions.

The best way to summarise the consensus MMT position is that the effect of interest rates on the economy is ambiguous, and weaker than conventional beliefs suggest. A key point is that interest payments to the private sector are a form of income, and thus a weak stimulus. (The mainstream awkwardly tries to incorporate this by bringing up “fiscal dominance” — but fiscal dominance is just decried as a bad thing, and is not integrated into other models.) Since this stimulus runs counter to the conventional belief that interest rates slow the economy, one can argue that conventional thinking is backwards. (Warren Mosler emphasises this; other MMT proponents lean more towards “ambiguous.”)
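To put rough, entirely hypothetical numbers on that income channel (my illustration, not a calculation from the MMT literature), the back-of-envelope arithmetic is short:

```python
# Back-of-envelope sketch of the income channel, with made-up numbers:
# if government debt held by the private sector is 100% of GDP, a
# 1-percentage-point rate hike eventually adds ~1% of GDP of interest income.
debt_to_gdp = 1.00    # hypothetical: private-sector-held debt / GDP
rate_hike = 0.01      # 1 percentage point
extra_income_share = debt_to_gdp * rate_hike
print(f"Extra interest income: {extra_income_share:.1%} of GDP")  # 1.0%
```

The pass-through is gradual (debt has to roll over at the new rates), but the sign of the flow is stimulative, which is what muddies the conventional story.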

For what it is worth, “ambiguous” fits my experience of churning out interest rate models, so that is good enough for me. I think that if we want to dig deeper, we need to look at the interest rate sensitivity of individual sectors, like housing. The key is that we are no longer looking at an aggregated model of the economy: once you have multiple sectors, you can get ambiguous aggregate effects.

What about the Non-MMT Post-Keynesians?

The fratricidal fights between MMT economists and selected other post-Keynesians (not all, but some influential ones) were mainly fought over the role of floating exchange rates and fiscal policy, but interest rates showed up as well. Roughly speaking, they all agreed that neoclassicals are wrong about interest rates. Nevertheless, there are divergences on how effective interest rates are.

I would divide the post-Keynesian interest rate literature into a few segments.

  • Empirical analysis of the effects of interest rates, mainly on fixed investment.

  • Fairly basic toy models that supposedly tell us about the effect of interest rates. (One example is “liquidity preference” models that allegedly tell us about bond yields. I think that such models are a waste of time: rate expectations factor into real-world fixed income trading.)

  • Analysis of why conventional thinking about interest rates is incorrect because it does not take into account some factors.

  • Analysis of who said what about interest rates (particularly Keynes).

From an outsider’s standpoint, the problem is that even if the above topics are interesting, the standard mode of exposition is to jumble these areas of interest into a long literary piece. My experience is that unless I am already somewhat familiar with the topics discussed, I cannot follow the logical structure of the arguments.

The conclusions drawn vary by economist. Some older ones seem indistinguishable from 1960s-era Keynesian economists in their reliance on toy models with a couple of supply/demand curves.

I found that the strongest part of this literature was the essentially empirical question: do interest rates affect behaviour in the ways suggested by classical/neoclassical models? The rest of the literature is not structured in a way that I am used to, and so I have a harder time discussing it.

Appendix: Academic Dysfunction

I will close with a rant about the state of modern academia, which also explains why I am not particularly happy with wading through relatively recent (post-1970!) journal articles trying to do a literature search. I am repeating old rant contents here, but I am tacking on new material at the end (which is exactly the behaviour I complain about!).

The structural problem with academia was the rise of the “publish or perish” culture. Using quantitative performance metrics — publications, citation counts — was meant as an antidote to the dangers of mediocrity created by “old boy networks” handing out academic posts. The problem is that a quantitative metric changes behaviour (Goodhart’s Law). Everybody wants their faculty to be above average, and sets their target publication counts accordingly.

This turns academic publishing into a game. It was possible to produce good research, but the trick was to get as many articles out of it as possible. Some researchers are able to produce a deluge of good papers — partly because they have done a good job of developing students and colleagues to act as co-authors. Most are only able to get a few ideas out. Rather than forcing people to produce and referee papers that nobody actually wants to read, everybody should just lower expected publication count standards. However, that sounds bad, so here we are. I left academia because spending the rest of my life churning out articles that I know nobody should read was not attractive.

These problems affect all of academia. (One of the advantages of the college system of my grad school is that it drove you to socialise with grad students in other fields, and not just your own group.) The effects are least bad in fields where there are very high publication standards, or where the field is vibrant and new and there is a lot of space for useful new research.

Pure mathematics is (was?) an example of a field with high publication standards. By definition, there are limited applications of the work, so it attracts less funding. There are fewer positions, and so the members of the field could be very selective. They policed their journals with their wacky notions of “mathematical elegance,” and this was enough to keep people like me out.

In applied fields (engineering, applied mathematics), publications cannot be policed based on “elegance.” Instead, they are supposed to be “useful.” The way the field decided to measure usefulness was to see whether the research could attract industrial partners to fund it. Although one might decry the “corporate influence” on academia, it helps keep applied research on track.

The problem is in areas with no recent theoretical successes. There is no longer an objective way to measure a “good” paper. What happens instead is that papers are published on the basis that they continue the framework of what was seen as a “good” paper in the past.

My academic field of expertise was in such an area. I realised that I had to either get into a new research area that had useful applications — or get out (which I did). I have no interest in the rest of economics, but it is safe to say that macroeconomics (across all fields of thought) has had very few theoretical successes for a very long time. Hence, there is little to stop the degeneration of the academic literature.

Neoclassical macroeconomics has the most people publishing in it, and gets the lion’s share of funding. Their strange attachment to 1960s-era optimal control theory means that the publication standards are closest to what I was used to. The beauty of the mathematical approach to publications is that originality is theoretically easy to assess: has anyone proved this theorem before? You can ignore the textual blah-blah-blah and jump straight to the mathematical meat.

Unfortunately, if the models are useless in practice, this methodology just leaves you open to the problem that there is an infinite number of theorems about different models to be proven. Add in a large accepted gap between the textual representation of what a theorem suggests and what the mathematics does, and you end up with a field that has an infinite capacity to produce useless models with irrelevant differences between them. And unlike engineering firms that either go bankrupt or dump failed technology research, researchers at central banks tend to fail upwards. Ask yourself this: if the entire corpus of post-1970 macroeconomic research had been ritually burned, would it really have mattered to how the Fed reacted to the COVID pandemic? (“Oh no, crisis! Cut rates! Oops, inflation! Hike rates!”)

More specifically, neoclassicals have an array of “good” simple macro models that they teach and use as “acceptable” models for future papers. However, those “good” models can easily contradict each other. E.g., use an OLG model to “prove” that debt is a burden on future generations, then use other models to “prove” that “MMT says nothing new.” Until a model is developed that is actually useful, they are doomed to keep adding epicycles to failed models.

Well, if the neoclassical research agenda is borked, that means the post-Keynesians aren’t, right? Not so fast. The post-Keynesians traded one set of dysfunctions for others — they are trapped in the same publish-or-perish environment. Without mathematical models offering good quantitative predictions about the macroeconomy, it is hard to distinguish “good” literary analysis from “bad” analysis. They are not caught in the trap of generating epicycle papers, but the writing has evolved to be aimed at other post-Keynesians in order to get through the refereeing process. The texts are filled with so many digressions about who said what that trying to find a logical flow is difficult.

The MMT literature (that I am familiar with) was cleaner, because it took a “back to first principles” approach and focussed on some practical problems. The target audience was fairly explicitly non-MMTers. They were either responding to critiques (mainly from post-Keynesians, since neoclassicals have a marked inability to deal with articles not published in their own journals), or writing for outsiders interested in macro issues. It is very easy to predict when things go downhill: as soon as papers are published solely aimed at other MMT proponents.

Is there a way forward? My view is that there is, but nobody wants it to happen. Impossibility theorems could tell us what macroeconomic analysis cannot do. For example, when is it impossible to stabilise an economy with something like a Taylor Rule? In other words, my ideal graduate-level macroeconomics theory textbook is one explaining why you cannot do macroeconomic theory (as it is currently conceived).
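As a toy illustration of the kind of question such a theorem would address (my own construction, not a model from any literature): in a crude backward-looking economy, a Taylor-type rule stabilises inflation only if its reaction coefficient respects the “Taylor principle.” Flip the sign of that coefficient, and the identical rule destabilises the same economy.

```python
# Toy sketch: when can a Taylor-type rule stabilise inflation at all?
# Backward-looking model: the output gap reacts to last period's
# real-rate gap (IS-type relation), and inflation drifts with the
# output gap (accelerationist Phillips curve). All parameters made up.
import numpy as np

def simulate(a_pi, T=200, r_star=2.0, pi_target=2.0, beta=0.5, kappa=0.3):
    pi = 5.0                                      # start above target
    path = []
    for _ in range(T):
        i = r_star + pi + a_pi * (pi - pi_target)  # Taylor-type rule
        real_gap = (i - pi) - r_star               # real rate vs. "neutral"
        y = -beta * real_gap                       # output gap
        pi = pi + kappa * y                        # inflation drift
        path.append(pi)
    return np.array(path)

print(simulate(a_pi=0.5)[-1])   # converges toward the 2% target
print(simulate(a_pi=-0.5)[-1])  # violates the Taylor principle: diverges
```

An impossibility theorem would characterise the classes of models (or real-world conditions) in which no choice of coefficients works, which is a far more useful thing to know than yet another model in which stabilisation succeeds by construction.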

1. Driving my German office mate crazy.


Email subscription: Go to https://bondeconomics.substack.com/ 

(c) Brian Romanchuk 2024
