Wednesday, May 31, 2017

The Horrifying Mathematics Of Infinitesimal Agents

(Book cover image: highly recommended for Lovecraft fans. The cover is a link to Amazon.com.)
Three men were swept up by the flabby claws before anybody turned. God rest them, if there be any rest in the universe. They were Donovan, Guerrera, and Ångstrom. Parker slipped as the other three were plunging frenziedly over endless vistas of green-crusted rock to the boat, and Johansen swears he was swallowed up by an angle of masonry which shouldn't have been there; an angle which was acute, but behaved as if it were obtuse. ("The Call of Cthulhu," H.P. Lovecraft, 1928.)

Searchers after mathematical horror haunt strange, far concepts. Being swallowed by non-Euclidean geometry is one form of terror. But the true epicure in the terrible, to whom a new thrill of unproveable ghastliness is the chief end, esteems most the hideous infinitesimal agents of mainstream economics.

This article examines the curious mathematics of infinitesimal agents, which are not merely infinitely small; they are indexed on the [0, 1] interval. Such agents are of critical importance in New Keynesian economics, as standard Calvo pricing uses them to generate price stickiness. However, it is impossible for this mathematical formalism to be the limit of a large number of firms, nor is it possible to properly define an optimisation problem for such agents. Since the solution of the mathematical problem is not the result of optimising agents, such models are just as vulnerable to the Lucas Critique as the old Keynesian models. It may be possible to create a proper optimisation structure for such models, but it would probably require re-writing most of the mathematics.

UPDATE: This aspersion cast at mainstream economics thankfully triggered a response. Brian Albrecht (@BrianCAlbrecht on Twitter) gave me some references to chew on (yay). He listed the work of Yeneng Sun, which looks good and mathematical (link: https://scholar.google.com/citations?user=45jVI-4AAAAJ&hl=en&oi=sra), and a standard reference: "Markets with a Continuum of Traders," by Robert J. Aumann. I have not read any of the supplied references, so I cannot tell how close my guess was to the actual justification. (In between channeling my inner H.P. Lovecraft, I do hint at what I think was going on.) If necessary, I will cut back some of my claims here...

Why Do This?

To an outside observer, it appears that the mathematics of infinitesimal agents is something lifted from the Necronomicon. However, there is a logic behind the choice, even if the mathematics makes no actual sense.

The objective of mainstream macroeconomics is to derive macroeconomic models based on the optimising choices of agents (households maximise utility, firms maximise profits). Although that seems like a reasonable starting point for a mathematical model -- which is always going to have to abstract from reality -- the difficulties arise when agents' choices interact.

For example, if we assumed that the business sector consisted of one firm (a monopoly), we would have a single optimiser driving a significant part of the economy. It would effectively be a form of central planning, and behaviour could be quite erratic. Since the single firm has no competitors, it can follow any strategy it wishes. This is not going to be a reasonable approximation of reality, so we need to have multiple firms.

The first thing to keep in mind is that the mainstream insists that individual agents (outside of the monopoly situation) do not set prices; they are "price takers." Some mysterious agency causes supply and demand to come into "equilibrium," and prices are set in a way to cause such an "equilibrium" to come into play. (As I noted in an earlier article, whether such an equilibrium makes formal mathematical sense is unclear.)

When we look at the preferred optimising structure, the optimisation problem for firms involves choosing the level of production given the level of wages and selling prices. If wages and prices are fixed, the optimising choice appears easy, given various assumptions that are made. However, if the firm changes its production level, should it not move wages and prices?

The solution is to make the firms so small that they have no influence on prices. Thus the quest for infinitesimal firms.

What Are They?

If we were truly attempting to approximate having a great number of firms, the normal way to proceed is to say that we have N firms, and see what happens as N tends to infinity. From a mathematical perspective, this gives a countably infinite set: we can associate each firm with an element of an infinite sequence. This makes too much sense for the mainstream.
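
To make that concrete (a sketch of my own, using the profit notation introduced more carefully below): index the firms by $i \in \{1, \ldots, N\}$, with firm-level profits $f_i$, and define the aggregate as the normalised sum
\[
s_N = \frac{1}{N} \sum_{i=1}^N f_i.
\]
One then asks what happens to $s_N$ as $N \to \infty$. Every object in that construction is countable.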

They instead say that we have a lot of firms, each with an index i, where the set of indices is the interval [0, 1]. That is, for every real number in [0, 1], there is an associated firm. To calculate aggregates, we integrate over that interval.

This is used in Calvo pricing. In each time period, there is a fixed probability that each infinitesimal firm is allowed to change prices. (Some wags have referred to the "Calvo fairy" as allowing price changes to occur.) If the probability is 0.5, then for any given firm:
  • there's a 50% probability it cannot change prices the next period;
  • there's a 25% probability it cannot change prices over the next two periods;
  • etc.
As a result, the firm needs to raise prices now to take into account the expected inflation that could occur while it is unable to raise prices.
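
Since the per-period reset probability is fixed, the waiting time until a firm's next permitted price change is geometric, which is exactly what the bullet points trace out. A minimal simulation sketch (my own illustration, in Python; theta and the other names are not from any paper) confirms the arithmetic using a large but finite stand-in for the continuum of firms:

    import numpy as np

    rng = np.random.default_rng(0)
    theta = 0.5  # per-period probability that the "Calvo fairy" allows a reset
    # Periods until each firm's next permitted price change (geometric waiting time).
    waits = rng.geometric(theta, size=1_000_000)

    for k in (1, 2, 3):
        # P(cannot change prices for the next k periods) = (1 - theta)**k
        print(f"stuck {k}+ periods: {(waits > k).mean():.3f} (exact {(1 - theta) ** k:.3f})")
    print(f"mean wait: {waits.mean():.2f} periods (exact {1 / theta:.1f})")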

You Can't Get There From Here


(Book cover image: excellent - and cheaper than Rudin!)
The first obvious complaint with this indexation scheme is that it cannot be viewed as taking the limit of a great many firms. As everyone knows, the closed interval [0, 1] on the real line is a nondenumerable set. (As proved in the theorem in Section 4 of Chapter 1 of Volume 1 of Elements of the Theory of Functions and Functional Analysis, by A.N. Kolmogorov and S.V. Fomin.) This means that we cannot express the set [0, 1] as the limit of a countable sequence of agents. In other words, we formally cannot view such a construct as the limit of having "a lot" of firms.
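
For readers without Kolmogorov and Fomin at hand, the standard diagonal argument is short. Suppose $[0, 1]$ were countable, so its elements could be listed as $x_1, x_2, x_3, \ldots$, with decimal expansions $x_n = 0.d_{n1}d_{n2}d_{n3}\ldots$ Define
\[
y = 0.e_1 e_2 e_3 \ldots, \quad \text{where } e_n = 5 \text{ if } d_{nn} \neq 5, \text{ and } e_n = 6 \text{ otherwise.}
\]
Then $y \in [0, 1]$, but it differs from every $x_n$ at the $n$-th decimal place, so it appears nowhere in the list; no enumeration can cover the interval.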

What madness seized the economists who use this indexation scheme as part of the drive to add "micro-foundations" to models? This is one of the first theorems one learns in most real analysis courses (as evidenced by its appearance in an early section of the first chapter). How could they have skipped over that result?

Although this transgression against mathematics might be excused as the result of a simplifying assumption, the situation gets even more mind-destroying.

Infinitesimals Cannot Optimise

Since we are no longer just summing up the actions of a set of $N$ firms, we need another way to calculate aggregate behaviour. This is achieved by integration (the Lebesgue integral, to be precise). If we associate a decision vector $u(i)$ with each firm $i$, and have a state vector $x$, we can define a profit function $f$. The aggregate profits $s$ are generated by:
\[
s = \int_0^1 f(x, u(\mu)) d\mu.
\]
(Please note that this formulation is a simplification of the relationships that might be found in DSGE papers; the key point is that we get the aggregate by integrating over the interval [0, 1], not the particular notation associated with the variables inside the integral.)

For a particular agent $i$, its profit $s(i)$ is given by:
\[
s(i) = \int_i^i f(x, u(i)) d\mu.
\]
The solution to this is simple: profits are equal to zero, no matter what decision the firm makes.* A single firm is a set of measure zero, and what happens on a set of measure zero has no effect on the Lebesgue integral.

Very simply, an individual firm makes no profits (or an infinitesimal household always has utility of 0), no matter what choices it makes. There is no optimisation to be done, since all choices are equally valid.
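
To see the measure-zero point numerically (my own illustration in Python, not taken from any DSGE paper): approximate the aggregate with $n$ firms, so each firm carries weight $1/n$, and let a single firm deviate wildly. Its influence on the aggregate vanishes as $n$ grows, and is exactly zero in the Lebesgue limit:

    import numpy as np

    def aggregate(profits):
        """Discrete stand-in for s = integral of f over [0, 1]."""
        return profits.mean()

    for n in (10**2, 10**4, 10**6):
        mu = np.arange(n) / n      # firm indices spread over [0, 1)
        base = 1.0 - mu            # an arbitrary profit "density" f
        deviant = base.copy()
        deviant[n // 2] = 1e6      # one firm makes an extreme choice
        print(n, abs(aggregate(deviant) - aggregate(base)))
    # The gap shrinks like 1e6 / n; a single firm contributes nothing in the limit.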

In other words, the optimisation problem as stated makes no sense whatsoever from a formal mathematical perspective. Since the DSGE model cannot be construed as the result of optimisation, it is vulnerable to the Lucas Critique.

Can the Framework Be Saved?

Can we resurrect an optimisation problem from this infinitesimal mess? It seems possible, but it would make no sense from the perspective of micro-foundations. We can only have non-zero values in the functions if we integrate over sets with non-zero measure. This means that we are looking at the optimal choice of an infinite number of "firms" (or households), if we use the original mainstream definition of a "firm."

In the Calvo pricing formalism, that raises awkward problems. The point of Calvo pricing is that there is a probability that each infinitesimal firm can change prices in each time period. Once we start optimising over sets of non-zero measure, we are now optimising over a set of "firms" which have a probability of being able to set prices in each period. Since we have an infinite number of firms, we should be able to appeal to the law of large numbers, and we know that a fixed percentage of the target firms will be able to adjust prices in each period. It seems to me that this ends up being, for all intents and purposes, the same thing as flexible prices, since the "firms" that change prices can make up for the "firms" that cannot change prices.
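
A quick check of that law-of-large-numbers appeal (again my own sketch in Python; the parameter names are hypothetical): draw which of $n$ firms may reset each period, and watch the dispersion of the resetting share collapse onto the fixed probability as $n$ grows:

    import numpy as np

    rng = np.random.default_rng(1)
    theta = 0.5  # probability a given firm may reset prices this period
    for n in (10**2, 10**4, 10**6):
        # Share of firms allowed to reset, across 100 simulated periods.
        shares = np.array([(rng.random(n) < theta).mean() for _ in range(100)])
        print(f"n={n:>7}: share = {shares.mean():.4f} +/- {shares.std():.4f}")
    # The standard deviation shrinks like 0.5 / sqrt(n): with a continuum of
    # firms, exactly the fraction theta adjusts every period, deterministically.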

Furthermore, as soon as we are optimising over sets of non-zero measure, the firm presumably has some market power. This market power should presumably be taken into account when analysing the problem.

I believe that the attempt to rescue this formalism will revolve around looking at "infinitesimal profits." Under the notation used here, we are allegedly interested in maximising the function f, without worrying about integrating it. However, this does not work once we take into account the constraints facing the optimisation. As I discuss in "Interpreting DSGE Mathematics," the optimisation problem for agents needs to take into account budget constraints. Those budget constraints at the aggregate level are described by aggregate money (and other financial asset) holdings, as well as aggregate government consumption and taxation. If we used the more sensible countable notion of an infinite number of firms, we could associate 1/N of those aggregate values with any particular agent. Once we accept the madness of the [0, 1] interval, the dollar amounts within the constraints have no choice but to be zero. This puts us in the untenable position that constraints do not appear at the agent level, but they somehow pop up at the aggregate level. (I deleted some earlier text that I thought summarised my technical worries, as I realised it was not a good formulation. The issue around constraints is more complex than I discuss here, and it might take a more formal explanation, which is the subject of ongoing research. Having agents with finite measure, as I suggest, would eliminate my real concern.)
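
In the notation used earlier, the contrast can be written out directly. With $N$ firms sharing aggregate money holdings $M$, each firm's budget constraint involves a strictly positive amount, whereas an individual firm on the continuum gets the integral over a single point:
\[
m_i = \frac{M}{N} > 0 \quad \text{(countable case)}, \qquad m(i) = \int_i^i m(\mu) \, d\mu = 0 \quad \text{(continuum case)}.
\]
Every dollar amount in the agent-level constraint is forced to zero, while the aggregate constraint still binds.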

[Update: I read one of the references, and as I suspected, it only tangentially covers the issues that I am worried about. I can see how a continuum of agents can work for some models, but the specific application to DSGE models with Calvo pricing and two classes of optimising agents is still unclear. The paper I read used a new formal definition of competitive equilibrium (which is of course completely unrelated to any other definition I have come across), and I need to digest its implications. The technique discussed in the paper I read does not translate to the "macro model" framework, which is one of my standard complaints about the references in mainstream mathematical economics. Finding a self-contained reference is still beyond my reach. If any mainstream economist is offended by my assertions, please feel free to explain exactly how I am wrong. If I find such a reference, I can stop wasting my time on definitional issues, and return to substantive ones.]

Concluding Remarks

Beware acute angles, and infinitesimal agents.

Footnote:

* One could try to appeal to the Dirac Delta "Function", which is actually a generalised function (Kolmogorov and Fomin, page 105). These generalised functions are only defined in the context of integration; it is impossible to choose a "delta function" as the argument of a function to be optimised.
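
For concreteness, the delta "function" is defined only by its sifting property under an integral:
\[
\int_0^1 \delta(\mu - i) f(x, u(\mu)) \, d\mu = f(x, u(i)), \quad i \in (0, 1).
\]
The symbol $\delta$ has no values as a pointwise function, so it cannot be handed to an optimiser as a choice variable; it only means something once it sits inside an integral.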

(c) Brian Romanchuk 2017

22 comments:

  1. There are many problems with economic models based on infinities and infinitesimals, never mind the continuum, in general equilibrium and DSGE models derived from them, for example about infinitesimal probabilities.

    But all these "technical" problems don't matter to the Economics profession: what matters is whether a model is "validated", which means it supports JB Clark's three fables, in particular the one that the distribution of income, absent government intervention, is (Pareto-) optimal and exactly determined by merit, that is productivity.

    Economic theories or models that are not validated by their support for the "three fables" are regarded by the Economists, and their sponsors, as communist, and we all know that "communism never worked and never will". Therefore such invalid models cannot possibly work, and "top journals" will not welcome papers on them, unless cleverly disguised, and their proponents or adopters will not become Economists.
    PhD students in "top departments" of Economics or Business rapidly understand which approaches are "internally consistent" and how much "internal consistency" matters, and how little "technicalities" matter, as long as "internal consistency" is achieved.

    1. As an outside observer, I would not argue against that view.

      The problem is that I want to be able to explain economic debates to my readers, and it's hard to explain mathematical models that appear to be formally incoherent, but mainstream economists say they are fine. I doubt that anything I write will get anywhere near a "top journal," but since I already have a Ph.D. in a real academic field, I hardly care.

      It's not even a question of realism, it is a question of writing down the actual optimisation problem that we are supposed to be solving. Considering that I bill myself as an applied mathematician, saying that the mathematics makes no sense to me is very awkward. (Once again, it may be that the formal mathematics makes sense, but the publishing standards in economics are sloppy to say the least. A great deal of formal mathematics is dropped from discussion, when that formal mathematics is obviously critical.)

    2. The formal mathematics is a smokescreen for the embedded political beliefs in the structure. The outcomes of all the debates lead inevitably to that conclusion.

      It is like criticising Catholic Dogma. There is no way you're ever going to convince a catholic that it is wrong, no matter how good the argument. They first have to stop being a catholic.

      Or you don't bother talking to catholics, bypass them, and talk to other people instead about the undue influence of catholics in society. There are historical precedents for the process :-)



    3. One could see that conclusions might be pre-ordained for ideological reasons; that's an old critique. (I believe that the word "ideology" was invented as part of one critique.) However, it seems that it should be possible to write down the mathematics, so that it is possible to discuss the model. It's not a realism complaint, it's a definitional complaint.

      If it's not real mathematics (I do not know whether this is the case; it is still possible), that pretty much ends all debates about realism: DSGE "models" are not actually models, so there's no question of realism. But if they incorporate real mathematics, we can then analyse the model and see how it compares to reality. Theoretically, it should be easy to determine whether the mathematics is valid; in practice, it seems to involve chasing through a "vast literature" and finding that each reference does not cover what I want.

    4. «However, it seems that it should be possible to write down the mathematics, so that it is possible to discuss the model.»

      Ah such optimism :-). For "internal consistency" (which is what matters to becoming and remaining an Economist) the "technicalities" matter only inasmuch as they support the "valid" conclusions. Or else the long-forgotten "Cambridge Capital Controversy" would have had an impact. So your quest to «discuss the model» is quite futile. BTW here is an interesting and very related post on the issue of making up assumptions about infinities in Economics:

      http://econospeak.blogspot.co.uk/2015/11/what-has-not-been-said-about-later.html

      There is another mathematically trained non-Economist who has tried to make sense of Economics models and was "disappointed" by the "difficulties" in doing so; his blog is at:

      http://robertvienneau.blogspot.com/

    5. Ah yes, Robert Vienneau's blog. I used to read it, and I thought he stopped writing (or I lost the bookmark when I changed tablets). Thanks for the reminder. (I read econospeak, but did not remember that article you linked.)

      I am not hugely optimistic (I gave up earlier on the subject). But I want to write a paper on the issue with Alexander Douglas. We are both outsiders to economics, and we need to be as open as possible.

  2. Einstein ponders the problem of correspondence between a math model and reality in this essay:

    Geometry and Experience
    http://pascal.iseg.utl.pt/~ncrato/Math/Einstein.htm

    I learned the definition of a point as an axiom of geometry and the definition of a particle as a point-mass as a simplifying assumption in physics to eliminate the hard math and solve problems as a first-case approximation. I recall reading an article where Paul Krugman says an economist must have a talent for making simplifying assumptions that are reasonable approximations. So if the math applied is axiomatic and consistent, one must attack the reasonableness of the assumptions; and if the math is inconsistent with the axioms, then of course it is bad math.

  3. "DSGE "models" are not actually models...."


    Brian,

    By this do you mean that there is no cogent logical structure to them?

    Henry

    1. To underline: that's the worst-case interpretation. There could be a reference somewhere that cleans up my definitional concerns.

      But if the issues are not cleared up, the situation is like the following mathematical model:
      1) x is an element of the real line;
      2) x=0;
      3) x=10.

      No such solution x exists, so it tells us nothing. It's just a long-winded way of saying "empty set"; and there's an infinite number of ways of doing that.

      (If that assertion bothers any mainstream economists that somehow reads this, feel free to comment!)

    2. Brian,

      Please forgive me if I say some stupid things - I am sure I have misconstrued your entire argument. So I will jump in the deep end with a question.

      How is an infinite set countable?

    3. There's a hierarchy of infinities. A countable set can be represented as the limit of a finite sequence.

      Uncountable: we cannot define such a sequence that covers all of the points.

      I have read (I am unsure about this, but it seems plausible) that the set of all numbers that we can represent in the set [0,1] has measure 0; the full set has measure 1. In other words, all of the "weight" in the real line consists of numbers that we cannot represent.

      The real line is one of those Lovecraft-ian concepts that we do not want to think about too much.

  4. Brian,

    Forget my last request. I'll have you bogged down in explaining things you probably can't be bothered explaining - no doubt I will have other questions. Your post is very technical and it seems you are looking to engage with people having a technical appreciation of the issues raised.

    It seems to me that the set [0,1] and [1,infinity] are both infinite sets and equally uncountable. And I can't see the relevance of "countability" anyway. The index is just a way of naming an element within a set (isn't it?), so what's in a name?

    Henry

    1. (You usually need to write [1, infinity); we do not include infinity in the set. Yay for finicky math!)

      The technical problem is that we cannot conceive of the set [0,1] as the limit of taking a large number of firms. From the point of view of proofs, the structure for taking the limit of a sequence versus the continuum is completely different.

      The defense of the mainstream approach is simplicity, and so the fact that it is not really a limit of a finite number of firms might be considered a triviality. However, the structure of the optimisation problem is different in each case. The "proofs" I have seen using Calvo pricing are all technically incorrect, since they did not specify the proper optimisation structure. The excuse is that the proofs are a shorthand for the real problem structure. However, if the actual problem statement is obscured, how does one validate the proofs? Mathematics is supposed to eliminate ambiguity, not create it.

  5. Dear Brian,

    This is a good exercise (i.e. trying to interpret DSGE models) but I ended up being confused.
    Even though I suppose you're getting the maths right, I am not sure about the interpretation:

    You have already defined the profit function f.
    How do we end up with the integral s(i) for the profit of the i-th agent and not simply f(i) ? Is s(i) = f(i) and thus f(i) = 0 ?

    On a side-note, micro theory says that under perfect competition the long-run profit for a firm is zero. Not sure if the whole framework is such by design, in order to comply with this, but seems consistent in this manner.

    Stelios

    1. All aggregates are generated by integrating over [0,1]. The function f is just the "infinitesimal" profits, and needs to be integrated to be brought into the right "units."
      (The infinitesimal variables can be thought of as density per unit of "volume", and the intervals you are integrating over are the volumes. The technical term for this volume is the measure. The problem is that a single agent has a measure of zero, so it does not matter what the density of the infinitesimal function is.)

      I was given a reference on the "continuum" of agents. The way in which it is supposed to work is that it is not really an optimisation for a single agent (as I somewhat suspected). This means that my comments about integration from i to i are not in the spirit of what they do, but it raises other issues. The actual mathematics of what they are supposed to be doing bears no resemblance to what they write out in their papers.

      Profits going to zero is another one of those mainstream ideas that sound good until you think about accounting identities...

  6. 16 pages of Lecture Notes on Monetary Economics:

    http://web.mit.edu/14.461/www/part1/slides4.1.pdf

    There is a discussion of optimal price setting on page 9 asserting that firms choose a variable P*(sub_t) to maximize profits subject to a sequence of demand constraints. Unfortunately many symbols used in the equations do not appear to have a verbal description within the reference.

  7. The first two pages of this five page paper discuss the stochastic integral:

    Diffusions and the Wiener Process:
    http://www.stat.cmu.edu/~cshalizi/754/notes/lecture-17.pdf

    The final paragraph on the second page reads "It turns out that, properly constructed, this sort of integral, and so this sort of stochastic differential equation (SDE), make sense even when dY/dt does not make sense as any sort of ordinary derivative, so the more usual way of writing an SDE is (shows Eq. 17.5) even though this seems to invoke infinitesimals, which don't exist (footnote 3)."


    This 24 page paper is a good introduction to the theory of stochastic methods with discussion of simulation of Brownian motion per the subject of Einstein's paper in 1905:

    Stochastic Simulation and Monte Carlo Methods:
    http://arachne.it.uu.se/edu/course/homepage/bervet2/MCkompendium/mc.pdf

    This 30 page paper says it lays out the standard New Keynesian model based on Calvo (1983) staggered price setting:

    The Simple New Keynesian Model:
    http://www3.nd.edu/~esims1/new_keynesian_model.pdf

    Even if the math is applied consistently based on well-developed theorems in the context of economics the basic assumptions are much less likely to correspond with actual parameters of social systems.

  8. Here are two papers directly addressing the continuum-of-agents question:

    Markets with a Continuum of Traders - Aumann 1964 (12 pages):
    http://econweb.ucsd.edu/~rstarr/201/Aumann%20Econometrica%201964.pdf

    Continuum in Economics: Realism of Assumptions in Economic Theory (22 pages):
    http://www2.gcc.edu/dept/econ/ASSC/Papers2007/infinities_in_economics.pdf

    I recall some applied math of distribution functions (which I think by definition always span the interval 0,1 and have value = 1) from a course on semiconductor theory:

    https://ecee.colorado.edu/~bart/book/distrib.htm

    It seems to me the economists are simply attempting to argue that the methods developed in the field of statistical mechanics somehow also apply to economics? Personally I can't follow the math models as described in most of the economic papers in the way I could follow the application of distribution functions to large systems of discrete energy states or large systems of particles as a reasonable model in a given context.

  9. For what it is worth, I have outlined, in more detail than anybody probably cares about, my understanding of this subject: http://robertvienneau.blogspot.com/2017/06/perfect-competition-with-uncountable.html

    1. That seems to capture my understanding. Aumann has a discussion of how competition is supposed to work; you create coalitions (?) of firms with non-zero measure, and it is not supposed to be possible for them to increase their profits by a different choice of working hours (at the given wage and price).

      The problem then becomes: how can the coalition change working hours, without implying that households change their optimising solution? There may be no supply of labour (or too much) at the new hours demanded.

      My guess is that they are just ignoring the coupling between the sectors, and pretending that we have two completely independent optimisations inside the same model. That is certainly consistent with the proofs seen in places like Gali, where first order conditions are derived for each sector independently.

      You'd think that for a "workhorse model" that is used by multiple central banks, someone would be bothered to write out the formal mathematics somewhere, and that paper would be referenced.

    2. Wow you're not very smart. SAD!

