Romer writes:

> This is just post-real Calvinball used as a shield from criticism. Imagine someone saying to a mathematician who finds an error in a theorem that is false, “you can’t criticize the proof until you come up with valid proof.”

*(As an aside, I should note that I agree that DSGE macro stinks, and I have used the Calvinball metaphor myself.)*

My academic experience was in Control Systems Theory, a branch of applied mathematics. I ran into a case where I was an anonymous referee for a submitted paper. The new paper contained *Theorem B*, and *Theorem B* cited *Theorem A* within its proof. Meanwhile, *Theorem A* was contained in another paper that was published in the #2 journal in the field.

The result looked dubious, and I managed to disprove *Theorem A* within 10 seconds of picking up the volume that held it. (Luckily, the volume opened close to the page with the alleged "proof.") That was a personal speed record, and it was the most entertaining moment of my academic career.

I was able to write a short response rejecting the paper, explaining that the cited theorem was in fact incorrect; hence there was no point pursuing the new paper without a total re-write. Since it was someone else who made the original error, I was able to be quite gracious to the authors. (Although I doubt that made them any happier.)

And that was effectively the end of it. I did not write an anguished public letter to the editors of the #2 journal, telling them that they published a clunker of a paper. I just ignored its existence when I did literature surveys. (I discovered that there was a small sub-literature built around *Theorem A*.) If anyone objected to my ignoring that paper, I would have quietly sent them the explanation.

I would not say that I was told to ignore the existence of dubious papers, but my impression was that this was how the serious mathematicians within control systems dealt with the problem. (The quality of mathematics within control systems was uneven, as the academics ranged from engineers to people with doctorates in pure mathematics.) From the outside, it may have looked like a cabal that was playing favourites, but the reality is that there is no clean way to retract papers from a journal without making a lot of people look foolish.

Pure mathematicians probably have a more rigorous attitude within their journals, but at the same time, there are presumably not a lot of incorrect proofs found there.

## An Application To Economics?

If you are a researcher, and you think a result is wrong -- and you can prove this -- you should ignore its existence. If someone complains, you just send them the *proof* that the result is incorrect. That should be the end of the story.

Unfortunately, macro is not in this position; there are no standards of correctness that correspond to that of mathematics. There are problems in the mathematical modelling, and model predictions are routinely contradicted by data. If very strict standards of rigour were applied, the carnage in the post-war literature would be spectacular.

*(And yes, the post-Keynesian literature would likely be hammered as well.)*

Since a wholesale deletion of the existing literature is not going to happen as a result of the institutional structure, the only way forward is to hope that a new methodology can appear, allowing researchers to ignore the existence of the dodgy current literature. This is going to be much easier for heterodox economists to achieve, as their reputations are not based upon that literature. That said, they are up against the institutional factors that favour the *status quo*.

(c) Brian Romanchuk 2016

Brian, you might enjoy reading "The Ideal Mathematician", an 8-page PDF at this link:

http://users-cs.au.dk/danvy/the-ideal-mathematician.pdf

The final four or five paragraphs describe the indoctrination process of a mathematician and seem to apply equally to the indoctrination process of an economist into a particular school of thought.

I think it should be possible to generate a housing bubble in a toy model with a single bank in a bounded economic region populated by a finite number of households and other non-bank units. The scientific Python community now provides tools to package a program so others can reproduce the same numerical experiments. Currently I don't have all the Python skills necessary to build such a model, because it requires a simulation in which intelligent agents make credit decisions based on shifting margins of risk and safety while taking account of the balance sheet positions of other units. The decisions of intelligent agents should drive an overshoot-and-collapse pattern in housing prices in the toy model, just as one observes in a bounded credit region over long periods of time.
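A minimal sketch of what such a toy model might look like, in plain Python. All names and parameters here are hypothetical illustrations, not a calibrated model: one bank lends against housing collateral, household bids are credit-financed, and the bank's margin of safety loosens as prices rise and tightens sharply when they fall, which is the feedback loop that can produce an overshoot-and-collapse pattern.

```python
import random

random.seed(42)  # fixed seed so the numerical experiment is reproducible

def simulate(periods=120):
    """Simulate a house price index driven by one bank's credit decisions."""
    price = 100.0    # current house price level
    leverage = 5.0   # bank's allowed loan-to-income multiple
    income = 20.0    # flat household income, for simplicity
    prices = []
    for _ in range(periods):
        # The marginal bid is credit-financed: more leverage, higher bids.
        bid = leverage * income * (1 + 0.1 * random.uniform(-1, 1))
        # Prices adjust sluggishly toward the marginal bid.
        price += 0.2 * (bid - price)
        prices.append(price)
        # Procyclical margins of safety: rising collateral values make the
        # bank lend more freely; falling values make it retrench hard.
        if len(prices) > 1 and prices[-1] > prices[-2]:
            leverage = min(leverage * 1.03, 12.0)  # slow loosening, capped
        else:
            leverage = max(leverage * 0.90, 1.0)   # fast tightening, floored
    return prices

prices = simulate()
print(f"start {prices[0]:.1f}, peak {max(prices):.1f}, end {prices[-1]:.1f}")
```

The asymmetry between slow credit loosening and fast tightening is the design choice doing the work: it is what turns small price shocks into a boom followed by a cascade of falling bids.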

The linked article gives a taste of life amongst pure mathematicians. Their objective is elegance, which has nothing to do with applications. They are a good fit for universities, but the reality is that you cannot support more than a handful of them. There are a lot more applied mathematicians out there. However, even applied mathematicians can get far removed from usefulness, which was definitely a problem in Control Systems. The reality is that the system demands more publications than can possibly be generated, so people churn out junk just to fill up their resumes.

As for the housing bubble, there is a decent amount of agent-based modelling work. It is interesting - pretty much like programming a video game. My concern is that even if it is possible to make them behave like a real economy, there's no way we can fit them to data. That is, we could generate scenarios that make RBC models look stupid, but we would have a hard time using them to find the odds of a recession in the United States in 2017.

The toy model would only demonstrate that price levels change relatively slowly or more rapidly due to credit decisions of intelligent agents with feedback from balance sheet positions. It would hopefully show that prices of houses and equities are the residual level driven by the generation or destruction of credit instruments in debt markets. The model would have no predictive value except to show that digital agents making decisions similar to market agents can mimic the behavior of an actual financial system.

Regarding forecasts based on stock-flow consistent models, I am only aware of The Jerome Levy Forecasting Center and the Levy Economics Institute Macroeconomic Model. These seem to be rich models of the actual economy based on econometric data with financial sectors. But I have not seen a complete specification of the actual models.

Brian, about agent-based modelling: there is much higher potential there than from 'stylised' mathematical models.

The problem is that 'stylised' implies making assumptions, and the more stylised the model, the more assumptions you make. This is a top-down approach to problem modelling; the problem is that the real world does not work like that. It always 'gets ugly' and does not fit the model, which is always a set of rationalised stylised facts.

Agent-based modelling is a bottom-up approach to problem modelling, meaning that you can always make the models more complex (like the real world, which is complex). The limitation is on the part of the modeller, and you only need to make assumptions as far as your data limits you.

IMO stylised mathematical models are a waste of time for economics; they won't ever give any useful output. The problem is producing more complex agent-based models. This is not work for economists alone; it is interdisciplinary work that could only be achieved at a large institutional level, and it would require constant adjustment and monitoring to be useful.

Now, if all the resources governments put into spying on their own citizens (like the NSA) were used for stuff like this, we could get somewhere. It would still be a model, far from a representation of how things work in reality, but far more useful than the current absurd methods for macro policy making.

Joe,

That is certainly useful in a teaching context, but it is likely that we can get whatever results we want (within reason) by tweaking parameters. There are some SFC models used in prediction; they face the same issues that any large model does.

Ignacio,

I would have to look more carefully at the agent-based techniques. I will be starting a book on post-Keynesian analysis techniques, and I will be looking at stylised models. I want to hold off on staking out a position before I do that work. (I have done some SFC work, and so I have preliminary views.)

Any realistic forecast model must reckon with uncertainty. The primary sources of uncertainty in financial markets are the future actions of government agencies and of intelligent agents in the economy. In markets with mark-to-market psychology, each transaction either marks prices up, marks them down, or leaves them unchanged at the last sale price. Credit dealers price the spread via a similar mechanism. When credit dealers change their spreads and margins of safety, it drastically impacts the ability of other markets to set prices in non-financial asset markets. Therefore a model where one tweaks the parameters of the price-setting decisions of market agents captures the mechanism that generates outcomes and uncertainty in a complex financial system. No realistic model will be able to forecast with certainty unless governments impose regulations to stabilize the system.

What I think mathematical models can and cannot do is going to show up in an appendix to an upcoming article: "Whither Mainstream Macro?" I have written similar things in the past, but this version is quite compact. I think that it can be applied to your comments here. I am in the process of trying to make the article less of a rant, so I do not want to "spoil it" by pasting it into my comments here. Please wait and see what I have to say, and then let me know what you think about it.
