Three men were swept up by the flabby claws before anybody turned. God rest them, if there be any rest in the universe. They were Donovan, Guerrera, and Ångstrom. Parker slipped as the other three were plunging frenziedly over endless vistas of green-crusted rock to the boat, and Johansen swears he was swallowed up by an angle of masonry which shouldn't have been there; an angle which was acute, but behaved as if it were obtuse. ("The Call of Cthulhu," H.P. Lovecraft, 1928.)
Searchers after mathematical horror haunt strange, far concepts. Being swallowed by non-Euclidean geometry is one form of terror. But the true epicure in the terrible, to whom a new thrill of unproveable ghastliness is the chief end, esteems most the hideous infinitesimal agents of mainstream economics.
This article examines the curious mathematics of infinitesimal agents, which are not merely infinitely small: they are indexed on the [0, 1] interval. Such agents are of critical importance in New Keynesian economics, as standard Calvo pricing uses such agents to generate price stickiness. However, it is impossible for this mathematical formalism to be the limit of a large number of firms, nor is it possible to properly define an optimisation problem for such agents. Since the solution of the mathematical problem is not the result of optimising agents, such models are just as vulnerable to the Lucas Critique as the old Keynesian models. It may be possible to create a proper optimisation structure for such models, but it would probably require re-writing most of the mathematics.
UPDATE: This aspersion at mainstream economics thankfully triggered a response. Brian Albrecht (@BrianCAlbrecht on Twitter) gave me some references to chew on (yay). He listed the work of Yeneng Sun, which looks good and mathematical (link: https://scholar.google.com/citations?user=45jVI-4AAAAJ&hl=en&oi=sra), and a standard reference: "Markets with a Continuum of Traders," by Robert J. Aumann. I have not read any of the supplied references, so I cannot tell how close my guess was to the actual justification. (In between channeling my inner H.P. Lovecraft, I do hint at what I think was going on.) If necessary, I will cut back some of my claims here...
Why Do This?

To an outside observer, it appears that the mathematics of infinitesimal agents is something lifted from the Necronomicon. However, there is a logic behind the choice, even if the mathematics makes no actual sense.
The objective of mainstream macroeconomics is to derive macroeconomic models based on the optimising choices of agents (households maximise utility, firms maximise profits). Although that seems like a reasonable starting point for a mathematical model -- which is always going to have to abstract from reality -- the difficulties arise when agents' choices interact.
For example, if we assumed that the business sector consisted of one firm (a monopoly), we would have a single optimiser driving a significant part of the economy. It would effectively be a form of central planning, and behaviour could be quite erratic. Since the single firm has no competitors, it can follow any strategy it wishes. This is not going to be a reasonable approximation of reality, so we need to have multiple firms.
The first thing to keep in mind is that the mainstream insists that individual agents (outside of the monopoly situation) do not set prices; they are "price takers." Some mysterious agency causes supply and demand to come into "equilibrium," and prices are set in a way to cause such an "equilibrium" to come into play. (As I noted in an earlier article, whether such an equilibrium makes formal mathematical sense is unclear.)
When we look at the preferred optimising structure, the optimisation problem for firms involves choosing the level of production given the level of wages and selling prices. If wages and prices are fixed, the optimising choice appears easy, given various assumptions that are made. However, if the firm changes its production level, should it not move wages and prices?
The solution is to make the firms so small that they have no influence on prices. Thus the quest for infinitesimal firms.
What are They?

If we were truly attempting to approximate having a great number of firms, the normal way to proceed is to say that we have N firms, and see what happens as N tends to infinity. From a mathematical perspective, this represents a countably infinite set: we can associate each firm with an element in an infinite sequence. This makes too much sense for the mainstream.
They instead say that we have a lot of firms, each with an index i, where the set of indices is the interval [0, 1]. That is, for every real number in the interval [0, 1], there is an associated firm. To calculate aggregates, we integrate over that interval.
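As a rough numerical intuition (not the formal construction), the aggregation-by-integration step can be approximated by averaging over N discretely indexed firms. The profit function `f` and the decision rule `u(i)` below are purely illustrative assumptions, not taken from any particular DSGE paper.

```python
import numpy as np

# Hypothetical per-firm profit function f(x, u): x is an aggregate
# state, u is the firm's decision. Purely illustrative.
def f(x, u):
    return x * u - 0.5 * u ** 2

x = 2.0                        # assumed aggregate state
N = 1_000_000                  # number of discretely indexed firms
i = np.linspace(0.0, 1.0, N)   # firm indices spread over [0, 1]
u = 0.5 + 0.5 * i              # hypothetical decision rule u(i)

# Riemann-sum approximation of the aggregate s = integral of f(x, u(i))
# over i in [0, 1], i.e. the average profit across firms.
s = np.mean(f(x, u))
print(s)
```

As N grows, the average converges to the integral over [0, 1]; the continuum formalism takes this integral as the definition of the aggregate.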
This is used in Calvo pricing. In each time period, there is a fixed probability that each infinitesimal firm is allowed to change prices. (Some wags have referred to the "Calvo fairy" as allowing price changes to occur.) If the probability is 0.5, then:
- there's a 50% probability it cannot change prices in the next period;
- there's a 25% probability it cannot change prices over the next two periods;
- and so on.
As a result, it needs to raise prices now to take into account expected inflation that could occur when it was unable to raise prices.
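The geometric structure of these probabilities can be sketched in a few lines; the reset probability `alpha = 0.5` matches the example above, and is an assumption of the sketch.

```python
# Sketch of Calvo repricing probabilities, assuming a per-period
# probability alpha that the "Calvo fairy" lets a firm reset its price.
alpha = 0.5

# Probability the firm is still stuck at its old price after k periods:
stuck = [(1 - alpha) ** k for k in range(1, 4)]
print(stuck)  # [0.5, 0.25, 0.125]

# Expected duration of a price spell (geometric distribution): 1/alpha.
# This is why the firm front-loads expected inflation into today's price.
expected_duration = 1 / alpha
print(expected_duration)  # 2.0
```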
You Can't Get There From Here
What madness seized the economists who use this indexation scheme as part of the drive to add "micro-foundations" to models? The interval [0, 1] is uncountable (by Cantor's diagonal argument), so a continuum of firms cannot be reached as the limit of N firms as N tends to infinity. This is one of the first theorems one learns in most real analysis courses (as evidenced by its appearing in an early section of the first chapter). How could they have skipped over that result?
Although this transgression against mathematics might be excused as the result of a simplifying assumption, the situation gets even more mind-destroying.
Infinitesimals Cannot Optimise

Since we are no longer just summing up the actions of a set of $N$ firms, we need another way to calculate aggregate behaviour. This is achieved by integration (the Lebesgue integral, to be precise). If we associate a decision vector $u(i)$ with each firm $i$, and have a state vector $x$, we can define a profit function $f$. The aggregate profits $s$ are generated by:
s = \int_0^1 f(x, u(i)) \, d\mu(i).
(Please note that this formulation is a simplification of the relationships that might be found in DSGE papers; the key point is that we get the aggregate by integrating over the interval [0, 1], not the notation associated with the variables inside the integral.)
For a particular agent $i$, its profit $s(i)$ is given by:
s(i) = \int_{\{i\}} f(x, u(i)) \, d\mu.
The solution to this is simple: profits are equal to zero, no matter what decision the firm makes.* A single firm is a set of measure zero, and what happens on a set of measure zero has no effect on the Lebesgue integral.
Very simply, an individual firm makes no profits (or an infinitesimal household always has utility of 0), no matter what choices it makes. There is no optimisation to be done, since all choices are equally valid.
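The measure-zero point can be illustrated numerically (again with a hypothetical profit function, the same illustrative `f` as before): in an N-firm approximation of the integral, a single firm's contribution is of order 1/N, so even a wild deviation by one firm vanishes from the aggregate as N grows.

```python
import numpy as np

# Hypothetical profit function; purely illustrative.
def f(x, u):
    return x * u - 0.5 * u ** 2

x = 2.0
for N in (100, 10_000, 1_000_000):
    u = np.full(N, 1.0)        # every firm chooses u = 1
    base = np.mean(f(x, u))    # aggregate profits
    u[0] = 100.0               # one firm deviates wildly
    deviated = np.mean(f(x, u))
    # The deviation moves the aggregate by O(1/N):
    print(N, deviated - base)
```

In the limit, the single firm's choice has literally no effect on any integral, so "maximising profits" imposes no restriction on its behaviour.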
In other words, the optimisation problem as stated makes no sense whatsoever from a formal mathematical perspective. Since the DSGE model cannot be construed as the result of optimisation, it is vulnerable to the Lucas Critique.
Can the Framework be Saved?

Can we resurrect an optimisation problem from this infinitesimal mess? It seems possible, but it would make no sense from the perspective of micro-foundations. We can only have non-zero values in the functions if we integrate over sets with non-zero measure. This means that we are looking at the optimal choice of an infinite number of "firms" (or households), if we use the original mainstream definition of a "firm."
In the Calvo pricing formalism, that raises awkward problems. The point of Calvo pricing is that there is a probability that each infinitesimal firm can change prices in each time period. Once we start optimising over sets of non-zero measure, we are optimising over a set of "firms," each of which has a probability of being able to set prices in each period. Since we have an infinite number of firms, we can appeal to the law of large numbers: a fixed percentage of the target firms will be able to adjust prices in each period. It seems to me that this ends up being, for all intents and purposes, the same thing as flexible prices, since the "firms" that change prices can make up for the "firms" that cannot change prices.
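A quick simulation illustrates the law-of-large-numbers point (the reset probability of 0.5 and the seed are assumptions of the sketch): the realised fraction of firms allowed to reprice in a period concentrates tightly around the fixed probability as the number of firms grows.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
alpha = 0.5  # assumed per-period probability of being allowed to reprice

for N in (100, 10_000, 1_000_000):
    # Each firm independently draws whether the Calvo fairy visits it.
    allowed = rng.random(N) < alpha
    # The realised repricing fraction converges to alpha as N grows.
    print(N, allowed.mean())
```

With an effectively infinite number of firms, the repricing fraction is no longer random at all, which is what makes an optimising coalition of positive measure behave like a deterministic block of price-setters.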
Furthermore, as soon as we are optimising over sets of non-zero measure, the firm presumably has some market power. This market power should presumably be taken into account when analysing the problem.
I believe that the attempt to rescue this formalism will revolve around looking at "infinitesimal profits." Under the notation used here, we are allegedly interested in maximising the function f itself, without worrying about integrating it. However, this does not work once we take into account the constraints facing the optimisation. As I discuss in "Interpreting DSGE Mathematics," the optimisation problem for agents needs to take into account budget constraints.
[Update: I read one of the references, and as I suspected, it only tangentially covers the issues that I am worried about. I can see how a continuum of agents can work for some models, but the specific application to DSGE models with Calvo pricing and two classes of optimising agents is still unclear. The paper I read used a new formal definition of competitive equilibrium (which is of course completely unrelated to any other definition I have come across), and I need to digest its implications. The technique discussed in the paper I read does not translate to the "macro model" framework, which is one of my standard complaints about the references in mainstream mathematical economics. Finding a self-contained reference is still beyond my reach. If any mainstream economist is offended by my assertions, please feel free to explain exactly how I am wrong. If I find such a reference, I can stop wasting my time on definitional issues, and return to substantive ones.]
Concluding Remarks

Beware acute angles, and infinitesimal agents.
* One could try to appeal to the Dirac delta "function," which is actually a generalised function (Kolmogorov and Fomin, page 105). These generalised functions are only defined in the context of integration; it is impossible to choose a "delta function" as the argument of a function to be optimised.
(c) Brian Romanchuk 2017