(As many of my readers are post-Keynesian, I will note that this article follows the convention of mainstream economics, and ignores the insights of post-Keynesian analysis. As a result, I am writing here only about the analysis of DSGE models, and not about how economies in the real world might operate.)

## Introduction

N. Kocherlakota has written a short informal paper on "Neo-Fisherianism" which contains what appears to be relatively straightforward mathematics (by the standards of DSGE macro). The debate revolves around the effect of interest rates on inflation. His summary:

> A number of authors (Schmitt-Grohe and Uribe; Cochrane) and bloggers (Cochrane and Williamson) have argued that, if a central bank pegs the nominal interest rate forever, the equilibrium inflation rate is increasing in the level of the peg.

## How To End The Debate

I will admit that I have given up on following DSGE macro. However, it appears to me that the problem with this debate is that the examples used rely on bizarre central bank behaviour: keeping the rate of interest fixed at some level forever. (As an aside, I would argue that such a policy would not be problematic in the real world, but it is within a DSGE model that assumes that interest rates drive everything.) Even if those examples are analysed correctly, the model dynamics look dubious (as seen in the Kocherlakota paper). Furthermore, we cannot relate the framework to the real world: we know that central banks do not commit to fixed interest rates *forever*.

The solution is to start off with a sensible central bank reaction function, for example a Taylor Rule. (Please note that the usage of a Taylor Rule may be problematic in the real world.)

We write:

*r(t) = T(x(t)),*

where *r(t)* is the policy rate, and *T(x(t))* is the Taylor Rule output (which is a function of the model state). We solve for the model solution (at least a deterministic central solution). We label this the "baseline" solution.

We then create a new model, where the policy rate is given by:

*r(t) = R(x(t)) = T(x(t)) + k,*

where *k ≠ 0* is some constant. (Alternatively, replace *k* with *k(t)*, with *k(t)* non-zero for some finite period, then zero thereafter.) We set *k = 1%* to see the effect of "raising interest rates" by 1%.
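Before solving anything numerically, simple steady-state algebra hints at what is at stake. The following derivation assumes a stylised Taylor Rule of the form *T(x) = r\* + π + φ(π − π\*)* with *φ > 0*, and a steady-state real rate pinned at *r\** — assumptions of mine for illustration, not necessarily the specification used by the papers in the debate.

```latex
\begin{align*}
r &= r^* + \pi + \phi(\pi - \pi^*) + k && \text{(perturbed rule)} \\
r &= r^* + \pi && \text{(steady-state Fisher relation)} \\
\Rightarrow \quad 0 &= \phi(\pi - \pi^*) + k \\
\Rightarrow \quad \pi &= \pi^* - \frac{k}{\phi}
\end{align*}
```

So if a steady state exists at all under a permanent *k > 0*, long-run inflation sits *below* the baseline target in this stylised setup. Whether the model actually converges to that steady state, or to something else entirely, is exactly what the proposed test would determine.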

There are essentially five possible outcomes of this analysis (assuming *k = 1%*).

1. The model converges to a solution that features a greater rate of inflation than the baseline solution ("neo-Fisherian").
2. The model converges to a solution that features a lesser rate of inflation than the baseline solution ("standard").
3. The model has a solution that has a difficult-to-characterise relationship to the baseline solution (for example, oscillating around the baseline trajectory). ("?")
4. We can show that the model solution does not exist.
5. We cannot solve the model.
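To make the test concrete, here is a minimal Python sketch of the experiment. To be clear about the hedges: this uses a toy backward-looking IS/Phillips-curve model as a stand-in for a DSGE model, and every parameter value (*φ*, *a*, *b*, *c*) is an assumption of mine chosen purely for illustration.

```python
# A minimal sketch of the proposed experiment, using a toy backward-looking
# IS / Phillips curve model as a stand-in for a DSGE model. All parameter
# values below are illustrative assumptions, not calibrated estimates.

def simulate(k=0.0, periods=400, pi_star=2.0, r_star=1.0,
             phi=0.5, a=0.5, b=0.5, c=0.2):
    """Simulate inflation under the perturbed Taylor Rule r(t) = T(x(t)) + k.

    T(x(t)) = r_star + pi + phi*(pi - pi_star) is a stylised Taylor Rule.
    Returns the terminal inflation rate (in percent).
    """
    pi, y = pi_star, 0.0  # start at the baseline steady state
    for _ in range(periods):
        r = r_star + pi + phi * (pi - pi_star) + k  # perturbed policy rate
        y_next = a * y - b * (r - pi - r_star)      # IS curve: real-rate gap
        pi_next = pi + c * y                        # accelerationist Phillips curve
        y, pi = y_next, pi_next
    return pi

baseline = simulate(k=0.0)   # baseline solution
perturbed = simulate(k=1.0)  # "raise interest rates" by 1%
print(f"baseline inflation:  {baseline:.2f}%")
print(f"perturbed inflation: {perturbed:.2f}%")
```

In this backward-looking toy model, the perturbed solution converges to a lower inflation rate than the baseline — the "standard" outcome from the list above. A forward-looking DSGE model could land in a different case, which is the point of running the test.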

Unlike the thought experiments I have seen, this test corresponds about as closely as possible to the question of "What happens if the central bank raises the interest rate?" When we discuss "raising the interest rate," it is in reference to where the policy rate is now, which is presumably at a "sensible" level. The Taylor Rule stands in for that "sensible" level. We then raise the interest rate higher than the "sensible" level, and keep the rate higher than "sensible" levels in the future (for possibly just a finite interval of time).

Note that future "sensible" levels will adapt to the current condition of the model economy (since the state *x(t)* will be different). It could be that even with a positive *k*, the model output will eventually have a lower nominal interest rate than in the baseline scenario, as the future output gap might be more negative than in the baseline scenario.

- If we can determine that the outcome is cases (1)-(3), the debate is largely over (although (3) would raise new questions).
- Possibility (4) seems far-fetched (no solution). If we set *k* to be arbitrarily small but greater than zero, why would there be no solution? That would raise a lot of questions about the DSGE framework: the models would be completely unstable in the face of model uncertainty.
- Possibility (5) leaves us where we are now: we have no idea what the answer is. Furthermore, if DSGE modellers cannot find a solution, it raises questions about whether they have any idea what the solutions are for any DSGE model configuration.

(Of course, things may be more complicated than indicated above. We are running this test for a single model structure, and we might only be able to characterise the solution numerically for a single parameter set. Different model structures and/or parameters might yield different results, in which case the debate turns to which model best approximates reality. That is a much more productive discussion than what I have seen of the "Neo-Fisherian" debate.)
