tag:blogger.com,1999:blog-59088308271350608522017-06-23T09:33:47.343-04:00Bond EconomicsBrian Romanchuk's commentary and books on bond market economics.Brian Romanchuknoreply@blogger.comBlogger574125tag:blogger.com,1999:blog-5908830827135060852.post-61886345148686775062017-06-21T09:00:00.000-04:002017-06-21T09:00:26.686-04:00The Bond Market Is Discounting an Eventual Recession<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-1Ys4IR1IHq0/WUk3zpKXohI/AAAAAAAACzI/D1xg31ojjJsHygnLbqlNDGiQfJRkp3WegCLcBGAs/s1600/c20140620_us_2_10_slope.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="" border="0" data-original-height="400" data-original-width="600" src="https://1.bp.blogspot.com/-1Ys4IR1IHq0/WUk3zpKXohI/AAAAAAAACzI/D1xg31ojjJsHygnLbqlNDGiQfJRkp3WegCLcBGAs/s1600/c20140620_us_2_10_slope.png" title="Chart: U.S. 2-/10-year slope" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">The flattening of the yield curve has attracted some commentary, but it appears to be telling us very little. By itself, the 2-/10-year slope (above) is not at a particularly interesting level; one should expect flattening during a rate hike cycle. Current bond market pricing is consistent with an <i>eventual</i> recession, and so arguments that the Treasury market is "in a bubble" appear to be hyperbole.</div><div class="separator" style="clear: both; text-align: left;"></div><a name='more'></a><br /><div class="separator" style="clear: both; text-align: left;">Many commentators and strategists like using markets that they do not trade as cyclical indicators. Since most commentators are not particularly interested in the bond market, but are keenly interested in equities, they like using the bond market as an indicator for the future prospects for stocks. 
Those of a bearish disposition enjoy pointing to the flattening yield curve, saying that this is a bearish signal for equities.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">I am in the minority that is more interested in the bond market than equities. I might attempt to use the equity market as an indicator for future economic prospects, but I have my doubts about its reliability. The stock market is very good at backcasting recessions that started a few months earlier, which is not particularly useful.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Luckily, I do not sell investment advice, since I do not see the bond market as being in an interesting position valuation-wise. The 5-year, 5-year forward (below) gives a good read on the "fundamental value" of a 10-year bond (under the assumption that the spot 5-year is not horribly mis-priced).* If we assume that previous average nominal short rates are a reasonable prediction for the forward rate, the current forward is above the average. (Some readers might object to that comparison; from the perspective of conventional analysis, the comparison assumes that inflation remains anchored near where it has been for the last few decades.)</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-urLshsz7nIc/WUks5dNvSoI/AAAAAAAACy0/49a59rH_CdsHY2wQn2a4MmgZhGHLa1mbgCLcBGAs/s1600/c20170620_tsy5_forward.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Chart: U.S. 
Treasury 5-year Forward Rate" border="0" data-original-height="400" data-original-width="600" src="https://2.bp.blogspot.com/-urLshsz7nIc/WUks5dNvSoI/AAAAAAAACy0/49a59rH_CdsHY2wQn2a4MmgZhGHLa1mbgCLcBGAs/s1600/c20170620_tsy5_forward.png" title="" /></a></div><b><br /></b>If we round off the forward rate to 2.5%, and ignore risk premia (including term premia), current pricing is consistent with an outlook that consists of two scenarios:<br /><ol><li>There is a 50% chance of recession within 5 years, and the New Keynesian Fed once again drives the policy rate to 0% (or below!). The 5-year spot rate ends up at 0%.</li><li>There is a 50% chance of there being no recession, and the Fed keeps hiking. The spot 5-year yield ends up at 5%. (To the extent that there are risk premia embedded in the forward rate, that scenario yield gets revised lower.)</li></ol><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-BSiM4BTZk-o/WUk3zt7SnGI/AAAAAAAACzE/adJR9pU0zswC05n2-9YdWS2S1f6S6-vxACLcBGAs/s1600/c20140225_ChicagoFed.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="" border="0" data-original-height="400" data-original-width="600" src="https://2.bp.blogspot.com/-BSiM4BTZk-o/WUk3zt7SnGI/AAAAAAAACzE/adJR9pU0zswC05n2-9YdWS2S1f6S6-vxACLcBGAs/s1600/c20140225_ChicagoFed.png" title="Chart: Chicago Fed National Activity Indicator" /></a></div><div><br /></div><br />I find it hard to dismiss the above scenario exercise as implausible; the possibility that there is no chance of a recession on a 5-year horizon seems far more implausible. There are a number of potential catalysts for a global recession, but I would note that I have been writing that since I launched my web site almost four years ago. 
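The scenario arithmetic can be checked directly; the probability weights and terminal rates below are the illustrative numbers from the two scenarios above, not market data:

```python
# Probability-weighted 5-year rate, 5 years forward, implied by the
# two scenarios above (risk premia ignored, rates in per cent).
scenarios = [
    (0.50, 0.0),  # recession: the Fed drives the policy rate back to 0%
    (0.50, 5.0),  # no recession: the Fed keeps hiking, 5-year ends at 5%
]
expected_forward = sum(prob * rate for prob, rate in scenarios)
print(expected_forward)  # 2.5 -- matches the rounded-off market forward
```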
If we use the Chicago Fed National Activity Index (above) as a broad indicator, it is not signalling an immediate recession.<br /><h2>Who Cares What the Fed (and Fed Watchers) Think?</h2>It is easy to look at shorter-dated forwards, and argue that they appear too low relative to rate hikes expected (for example, in 2018). Central bankers (and central bank watchers) over-interpret such pricing.** The curve being 25 basis points low in 2018 does not say a lot about long-dated bonds.<br /><br />Forward curves always tend to be smooth; they do not have the jerky movements that characterise the path of realised policy rates (which are the result of central bankers reacting in a panic to the business cycle). The Fed "dot plots" incorporate the utterly unrealistic forecast that the policy rate will revert to some long-term average and stay there forever; market pricing is starting to incorporate the return of the policy rate to zero. This means that we should expect the market forward rate to be somewhat below the long-term average in the "dot plots." Since the forwards will not fall off a cliff at some future date, the end result is that forward rates will smoothly cut below the "dot plot" forecast.<br /><br />Even if you agree that the Fed will hike by more than is forecast in 2018-2019, you are on the wrong side of tail risk. Unless Janet Yellen gets all Volcker on us, the upside on a short rates position is only a rate hike or so in your favour. However, if something goes wrong, rates will be back at zero in a flash, and you are down multiple rate hikes in losses. Meanwhile, most investors have portfolios that are exposed to financial crisis risk, and so this tail risk is a wrong-way risk for your entire portfolio. 
This asymmetry limits the attractiveness of short rate positions.<br /><br />For things to go seriously wrong for the bond market, you either need to have faith that a recession is multiple years away (which would allow the New Keynesian baby steps to become somewhat more forceful), or you need a total change in the reaction function at the Federal Reserve.<br /><br />I do not follow conventional beliefs about interest rates; to a certain extent, the nominal interest rate is arbitrary. This is in opposition to beliefs that interest rates have a "natural" level (or the equivalent), which implies that deviations from that cyclically-adjusted level will have a large effect on the economy. If you believe that there is a "natural" rate of interest, then current rates cannot be far off from where they should be, given the lack of acceleration in the economy. Correspondingly, there is little reason for anything other than the baby steps the Fed has been taking. However, if nominal interest rates are arbitrary, there is nothing stopping the Fed from hiking rates to 5%. In other words, the main risk to long positions is personnel changes at the Federal Reserve.<br /><h2>Concluding Remarks</h2>The U.S. Treasury market may be mis-pricing the path of Fed rate hikes. However, in the absence of radical changes of philosophy at the Federal Reserve, this mis-pricing will be corrected at a snail's pace.<br /><br /><b>Footnotes:</b><br /><br />* In case the bond mathematics is unfamiliar, the 10-year yield is approximately equal to the average of the spot 5-year rate and the 5-year rate, 5 years forward. The 5-year yield should equal the expected average of the short rate over the next 5 years, which is a plausible forecasting exercise. 
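This averaging approximation can be inverted to back out the forward rate from quoted yields. A minimal sketch (the yield levels are made up for illustration, not market quotes):

```python
def forward_5y5y(spot_5y, spot_10y):
    """Approximate the 5-year rate, 5 years forward, from spot yields.

    Uses the linear approximation described in the footnote: the 10-year
    yield is roughly the average of the spot 5-year yield and the 5-year
    rate, 5 years forward, so the forward is roughly 2 * y10 - y5.
    """
    return 2.0 * spot_10y - spot_5y

# Illustrative (hypothetical) yields, in per cent.
print(round(forward_5y5y(1.8, 2.2), 2))  # 2.6
```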
If we have no reason to disagree with the 5-year pricing, our determination of the fair value of the 10-year depends almost entirely on whether we agree with the level of the 5-year, 5-years forward.<br /><br />** For Federal Reserve staffers (and those closely affiliated with them) to complain that the bond market is mis-priced requires a lot of chutzpah. According to these people, quantitative easing was supposed to lower term premia (I disagree, for what it's worth). To then chide the bond market for too-low forward rates requires an impressive amount of cognitive dissonance.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com0tag:blogger.com,1999:blog-5908830827135060852.post-8825974294292218452017-06-18T19:06:00.000-04:002017-06-18T19:06:20.160-04:00Summer Schedule<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-3kU7bbXZVp4/WUcGc34lCTI/AAAAAAAACyY/dKMmgR1PZcsy-xdlIqaOv7nO6RzQ-nFYACKgBGAs/s1600/logo_blog.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://1.bp.blogspot.com/-3kU7bbXZVp4/WUcGc34lCTI/AAAAAAAACyY/dKMmgR1PZcsy-xdlIqaOv7nO6RzQ-nFYACKgBGAs/s1600/logo_blog.png" /></a></div>I have been tied up with various non-writing projects (as well as being hit by some kind of virus this week), so I will now be dropping to a summer schedule. Unless I want to write a quick rant about something, I will aim for one weekly article, probably published on Wednesdays. (Although this is not a subscription service, I try to keep to a weekly writing schedule.)<br /><br />Yeah, the Fed hiked rates this week. 
Not worth getting excited about, until they are hiking every second meeting...Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com3tag:blogger.com,1999:blog-5908830827135060852.post-19104136022066553552017-06-14T09:00:00.000-04:002017-06-14T09:00:00.144-04:00Let's Talk About Debt, Baby<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-g_JTa4RR54M/WUBa2TK7BAI/AAAAAAAACxg/Z3WQGFSei7MMhyOfG8otBk1e5lBODWGTACKgBGAs/s1600/logo_MMT.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://1.bp.blogspot.com/-g_JTa4RR54M/WUBa2TK7BAI/AAAAAAAACxg/Z3WQGFSei7MMhyOfG8otBk1e5lBODWGTACKgBGAs/s1600/logo_MMT.png" /></a></div>Gerard MacDonell wrote "<a href="https://beinnbhiorach.com/2017/05/30/the-debt-debate-is-relevant-now/" target="_blank">The debt debate is relevant now</a>" a couple of weeks ago. In it, he argues that debt sustainability in the United States is a relevant issue now, not an academic issue a couple of decades out. He realises that economists in the Modern Monetary Theory (MMT) school will disagree, and he explains why he disagrees with the MMT view. I am in the MMT camp, and I suspect that I do not violently disagree with Gerard's view on the current state of the cycle. I would side-step his concerns about "fiscal sustainability," and instead argue a slightly-modified version of his argument: fiscal policy is relevant now (and it always is). However, political economy matters. That is, I do not think we can discuss fiscal policy in the dry technocratic terms our elites prefer to use; we need to accept that fiscal policy is inherently political. "Debt sustainability" is best labelled "political sustainability of debt." 
Given the drift in the Debt Ceiling debate, "(political) sustainability" is an issue that may hit in a matter of <i>months.</i><br /><br /><a name='more'></a><h2>The Economic Theory of Fiscal Sustainability</h2>I will return to "political sustainability" in later sections. In this section, I will attempt to address the topic in terms that are closer to what Gerard might use. (In doing so, I may not use standard MMT terminology. What follows are my views, which I believe resemble the MMT position.)<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-7EOFc7qYGa4/WUBtSJ-gATI/AAAAAAAACxs/Iwx4-l-8JW0hes-GQ9Diw526EdlmbQ2tACLcBGAs/s1600/c20170614_CBO_debt.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="400" data-original-width="600" src="https://1.bp.blogspot.com/-7EOFc7qYGa4/WUBtSJ-gATI/AAAAAAAACxs/Iwx4-l-8JW0hes-GQ9Diw526EdlmbQ2tACLcBGAs/s1600/c20170614_CBO_debt.png" /></a></div>The key plank of Gerard's argument is based around the Congressional Budget Office's (CBO) long-term estimate for the debt. (He notes appropriate disclaimers about long-term projections.)<br /><br />I have serious reservations about the CBO's methodology (and forecasting track record). However, I doubt that I could come up with a superior alternative amid all my other projects right now. Therefore, I will not completely dismiss the CBO forecast: it is telling us something. The projected ratio is growing exponentially, and that would presumably be considered "unsustainable" (which is admittedly a weasel word).<br /><br />I am fairly confident that the actual debt ratio will not resemble the CBO projection. Instead, if their estimates about the fiscal settings are roughly correct, what would certainly happen is that nominal GDP growth would overshoot the CBO projections. This would either be greater real GDP growth (yay!) 
or higher inflation (boo!).<br /><br />In other words, we are back to the inflation limit on deficits, as per <a href="http://www.bondeconomics.com/2014/04/primer-what-is-functional-finance.html" target="_blank">Functional Finance (link to primer). </a> What Gerard is diagnosing as a debt problem I would diagnose as a potential inflation problem.<br /><br />If policy making in the United States were half-way rational, we would ask ourselves a very simple question: does it make sense to tighten fiscal policy <i>now </i>in response to a <i>future</i> inflation problem (with an unknown horizon)? If you believe in the Fiscal Theory of the Price Level, you would agree with that sentiment. For the rest of us, the answer is: no.<br /><br />Unfortunately, policy making in the United States is hardly rational (which I will return to after a slight detour).<br /><h2>Why Not to Extrapolate Present Trends Too Far Forward</h2>From the perspective of MMT, policymakers do not control the level of debt: they only set spending parameters and tax rates. The level of debt is not under direct control. (If you want to sound like an economist, you can say that "the debt level is endogenous, not exogenous.")<br /><br />The current rise in the debt-to-GDP ratio reflected a number of structural forces in the "non-Federal Government sector."<br /><ul><li>There has been a negative correlation between nominal GDP growth and the debt-to-GDP ratio. <a href="http://www.bondeconomics.com/2013/09/higher-debt-to-gdp-means-lower-bond.html" target="_blank">(This article discusses the relationship between bond yields and the debt-to-GDP ratio; bond yields tracked nominal GDP growth over that period.) </a> Whether or not that correlation will always hold (or was just the result of other common factors), nominal GDP growth appears to have little room to fall going forward.</li><li>Foreign central banks (mainly in Asia) have been snarfing up Treasury securities as part of their trade policies. 
Meanwhile, the policy framework pushed the private sector to amass large pension fund assets (to "fund" the transition of the Baby Boom to retirement). Financial assets being taken out of the circular flow of the economy results in a demand deficiency, which is counter-balanced by greater fiscal deficits as a result of the automatic stabilisers of the welfare state.</li></ul><div>In other words, the Federal Government debt-to-GDP ratio did not rise to its current level by budgetary decisions; it was the result of "private sector" (lumping mercantilist foreigners in the "private sector") responses to non-budgetary policy decisions. Those structural factors may have run their course.</div><h2>Sigh, Back to Political Economy</h2>As I argued in <a href="http://books2read.com/understanding-government-finance" target="_blank">Understanding Government Finance</a>, the default risk for floating-currency sovereigns (like the United States) is political (although sufficient incompetence could do the job). Either the country ceases to exist (revolution or losing a war), or lawmakers decide to repudiate the debt. <i>There are no "unsustainable debt ratios" to point to.</i><br /><br />What makes a debt load politically unsustainable? That depends. During the Depression, the Canadian Federal Government was perfectly content to let the people on the prairies starve, rather than risk the sanctity of the budget balance. A few weeks later, when Canada joined World War II, the same government was happy to overpay for first class passage to England for the previously starving persons who enlisted in the army. (I owe that historical insight to Pierre Berton.)<br /><br />According to the Freedom Caucus (a faction within the Republican Party), practically any level of Federal debt is unacceptable. Therefore, we may be able to gear up for yet another fight over the debt ceiling over the coming months. 
The Freedom Caucus believes that by not raising the debt ceiling, the Federal Government can avoid default; rather it would be operating under <a href="http://www.bondeconomics.com/2013/10/why-hard-debt-limit-is-very-bad-idea.html" target="_blank">a hard balanced budget constraint</a>. (The legal opinions I have seen disagree with the Freedom Caucus' view. From a practical standpoint, I see no way the Treasury can prioritise payments the way the Freedom Caucus wishes without exposing the President to the charge that he is ignoring Congress' legislation.)<br /><br />In any event, looking to economic theory to find a solution to irreconcilable political differences is not going to work. I doubt that there is a magical level of debt which will satisfy both the Freedom Caucus as well as the Democrats. In other words, to the extent that "fiscal capacity" is a political concept, we are unable to come up with a quantitative definition for it.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com18tag:blogger.com,1999:blog-5908830827135060852.post-72892332046182987162017-06-11T21:00:00.000-04:002017-06-12T06:44:48.404-04:00Quick Comments On Ongoing Projects<div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-bMmYHkvGjtI/WT3HX0g9sYI/AAAAAAAACxQ/AboUMOLPsNkRgDVQuS2NefcAdpRbRaqdQCKgB/s1600/logo_MMT.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://2.bp.blogspot.com/-bMmYHkvGjtI/WT3HX0g9sYI/AAAAAAAACxQ/AboUMOLPsNkRgDVQuS2NefcAdpRbRaqdQCKgB/s1600/logo_MMT.png" /></a></div>This article gives a few comments on some projects that I am involved in. 
They are the upcoming Modern Monetary Theory (MMT) academic conference (in September), my stock-flow consistent (SFC) models guide, and my struggle to put mainstream Dynamic Stochastic General Equilibrium models into a formal mathematical framework that makes sense to me.<br /><br /><a name='more'></a><h2>The MMT Conference</h2><br /><a href="http://www.pkconference.com/" target="_blank">The call for papers for the 2017 MMT Conference at the University of Missouri at Kansas City (September 21-24th) is ending soon.</a> Previously, the conference was billed as a "Post-Keynesian Conference" (I attended last year). The decision was made to give the conference a more decided MMT focus. (As I have previously written, MMT is a school of thought within a "broad tent" post-Keynesian tradition. Some authors define post-Keynesian economics in a "narrow tent" fashion, which leaves no good label for the "broad tent post-Keynesians." From a real-world perspective, this labelling dispute smells like silly academic politics, but as someone who attempts to write about the school of thought, it is a real pain.)<br /><br />I have made a submission; I will discuss it in the next section. Unless some form of emergency crops up, I expect to be at the conference.<br /><br />I have a somewhat mixed view of the narrowing of focus. From the perspective of MMT, there is definitely a need for a discussion of "what to do next?" It has to be understood that MMT is not purely an academic exercise; the objective is to put into place a coherent policy platform that can seriously rival mainstream economics. Being involved in fratricidal academic political games about the exact provenance of economic theories is not going to advance that project.<br /><br />The downside, from the perspective of someone outside the academic circuit, is that it makes it harder to get a broader view of post-Keynesian theory. 
Since MMT leans heavily on the contributions of authors who are not considered "MMT," they cannot be ignored. However, it is difficult for a non-academic to justify going to multiple conferences during the year.<br /><h2>SFC Models Project</h2>My submission to the conference is an overview of the Python<i> sfc_models </i>package. At present, the package only has limited coverage of the models from within Godley and Lavoie's <i>Monetary Economics</i>. However, with the core algorithms in place, adding new economic functionality is straightforward.<br /><br />In my view, the package could be very useful as a communication tool. It is powerful enough to handle most (discrete time) stock-flow consistent models that will be devised by researchers (although those researchers would probably need to learn a small amount of Python). The framework generates the equations and solves them, reducing the difficulties of tracking dozens of similar parameters in a multi-country model. The solution method is completely transparent, so that the researcher can examine the equations generated.<br /><br />For users who are not programmers or comfortable with mathematics, the framework will solve the system, which is generated with a few lines of code. The equations can be consulted if so desired, or the time series output examined either as time series plots, or in an automatically-generated CSV. It would be possible to make simple experiments by modifying an existing model with very limited programming knowledge.<br /><br />In the longer term, the system could easily have a graphical user interface added.<br /><br />In my view, the framework would allow MMT researchers to share teaching models as well as research models with a wider audience. (I certainly intend to use the package as part of my writing programme.) It would be possible to have a set of "canonical models" which might be used in internet discussion. 
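To give a flavour of the class of models involved, the simplest model from <i>Monetary Economics</i> (model SIM) can be iterated by hand in a few lines. This is a standalone sketch of what the framework automates, not the <i>sfc_models</i> API itself:

```python
# Model SIM (Godley & Lavoie, Monetary Economics, Chapter 3), iterated
# by brute force: the government spends G, taxes income at rate theta,
# and households consume out of disposable income and money balances H.
G, theta = 20.0, 0.2
alpha1, alpha2 = 0.6, 0.4  # propensities to consume out of income/wealth

H = 0.0  # household money holdings (the only stock in the model)
for period in range(100):
    Y = 0.0
    # Solve the within-period simultaneous equations by fixed-point iteration.
    for _ in range(200):
        T = theta * Y                  # taxes
        YD = Y - T                     # disposable income
        C = alpha1 * YD + alpha2 * H   # consumption function
        Y = G + C                      # national income
    H += YD - C  # stock-flow consistency: household saving becomes money

print(round(Y, 1))  # 100.0 -- the steady state is G / theta
```

The point of a framework is precisely that the equation generation and the fixed-point solution above do not have to be coded by hand for every model.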
(It would also help insulate MMT against the notion that it has no mathematics.)<br /><br />Currently, I am trying to finish off the "user guide" for <i>sfc_models</i>. There is not a whole lot of text to write, and I hope to finish it off before September.<br /><h2>Robert Vienneau Link</h2>Robert Vienneau writes the blog "<a href="http://robertvienneau.blogspot.ca/" target="_blank">Thoughts on Economics.</a>" I believe that he is also trained in mathematics, and he writes extensively on modelling issues in economics. For some reason, I thought he stopped blogging, but that was not the case.<br /><br /><a href="http://robertvienneau.blogspot.ca/2017/06/elsewhere.html" target="_blank">He recently linked</a> to my article on "<a href="http://www.bondeconomics.com/2017/05/the-horrifying-mathematics-of.html" target="_blank">The Horrifying Mathematics of Infinitesimal Agents</a>." <a href="https://www.econjobrumors.com/topic/the-horrifying-mathematics-of-infinitesimal-agents" target="_blank">He also linked to the "Econ job rumors" web site forum area which eviscerated my logic (allegedly)</a>. My personal favourite was the line: "So the blogger is not familiar with how decimal system works then." in response to my referring to a theorem in Kolmogorov and Fomin. This was a perfect mixture of condescension and raw ignorance. (The proof in Kolmogorov and Fomin precisely relies on showing we cannot represent all elements in [0,1] as decimals.)<br /><br />The other comments were of the "read the vast literature" variety. (<a href="http://www.bondeconomics.com/2017/05/two-paper-trolling.html" target="_blank">What was that about "mud moats" again?</a> From the perspective of mathematics, the entire DSGE literature appears to be a mud moat.)<br /><br />To be honest, my article was somewhat of a fishing expedition. All of the papers I read that covered Calvo pricing were completely and utterly wrong from the perspective of formal mathematics. 
I had to guess at the alleged true mathematics behind the concept, and I generated some leads.<br /><br />In case it's not clear, I am not interested in whether DSGE models are a good approximation of reality. (Certainly some readers on econjobrumors did not pick that up; they read what they thought I would write, and not my actual text. The reality is that most internet readers skim text heavily, so that is to be expected.) Instead, I am concerned whether DSGE models are mathematical models in a formal sense. There is no such thing as "an approximately true" proof.<br /><br />All that would be needed to shoot down my aspersions is to formally write out the full mathematical model of the Calvo model with both production and household utility optimisation. I would be happy to receive such a model definition, but I keep running into the response that "someone else did the mathematics in some intermediate micro textbook," without naming names. (Admittedly, there was a somewhat interesting reference given in the comments; I have not tracked it down. However, the description of that paper in the comments suggests that mainstream economists have been doing the mathematics of a continuum of agents wrong, which is exactly my point.) If I were reassured that the New Keynesian model is in fact a mathematical model, I could then turn to examine what it predicts about the real world.<br /><br />Anyway, returning to Robert Vienneau, he asked (in the above linked article):<br /><blockquote class="tr_bq">Don't the last two bullets <b>[one bullet is a link to my article - BR]</b> imply that the intermediate neoclassical microeconomic textbook treatment of perfect competition is balderdash, as Steve Keen says?</blockquote>I cannot comment on the other article (by Miles Kimball), but my argument is that the competitive equilibrium treatment <i>within a macro model that features two classes of agents</i> (households and firms) is generally not specified properly within published papers. 
I have no idea whether some papers exist that in fact define this equilibrium concept properly, and I still have some references to read - <i>i.e. I have no ability to make a definitive judgement. </i>Unfortunately for my readers, I have some consulting work (which is fortunate for me) that is cutting down on my time available for decoding mainstream economics. I will get back to it, but I prefer to wait until I digest some of the references that are piled up on my "to read" list.<br /><br />Whether or not mathematical proofs on models that do not specify the formal mathematical problem to be solved are "balderdash" depends upon the reader's personal definition of "balderdash"; I am not going to go there.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com1tag:blogger.com,1999:blog-5908830827135060852.post-17006869130356774292017-06-07T12:23:00.000-04:002017-06-07T12:23:59.134-04:00The Relationship Between sfc_models And Godley And Lavoie <a href="https://www.amazon.com/gp/product/B01FYB1D82/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=B01FYB1D82&linkCode=as2&tag=bondecon09-20&linkId=9053949c120bc47b830fcbaf855c9b20" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&MarketPlace=US&ASIN=B01FYB1D82&ServiceVersion=20070822&ID=AsinImage&WS=1&Format=_SL160_&tag=bondecon09-20" /></a><img alt="" border="0" height="1" src="//ir-na.amazon-adsystem.com/e/ir?t=bondecon09-20&l=am2&o=1&a=B01FYB1D82" style="border: none !important; margin: 0px !important;" width="1" />The text<i> Monetary Economics: An Integrated Approach to Credit, Money, Income, Production and Wealth</i>, by Wynne Godley and Marc Lavoie is cited heavily within the <i>sfc_models</i> framework. This is a standard text for SFC modelling, and its models have already been implemented extensively elsewhere. 
The fact that the models are well known is extremely useful from the point of view of development. These existing models were used to calibrate the <i>sfc_models</i> code.<br /><br />(This article is an unedited draft of a section from my upcoming book "Introduction to SFC Models with Python.")<br /><br /><a name='more'></a>Users who wish to implement directly the models from <i>Monetary Economics</i> can do so by using the code in the <span style="font-family: "courier new" , "courier" , monospace;">sfc_models.gl_book</span> sub-package. There are modules within the <span style="font-family: "courier new" , "courier" , monospace;">gl_book</span> package that correspond to the chapters of the book. (In some cases, there are multiple models within each chapter.) At the time of writing, the coverage of models is quite selective, but more may be added over time. Examples of this direct object creation are given later. However, the user may examine the source code for the chapter modules to see the operations to create the models directly. In particular, if you wish to experiment with the models, it would be best to build the models by copying the source code. The code in the <span style="font-family: "courier new" , "courier" , monospace;">gl_book</span> sub-package is designed to emulate the models in the book exactly, as this is needed for calibration.<br /><br />This calibration effort may be invisible to most users of the <i>sfc_models</i> package, but it is a key reason why the package should be a stable programming environment. As discussed in <another section of the book>, Python has well-developed unit testing capabilities. Other than some areas of code that inherently cannot be tested – and are explicitly marked as such – the objective is that 100% of the lines of code are exercised in tests. 
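The testing style described can be sketched with Python's standard unittest module. The function under test here is a stand-in invented for illustration (the steady-state income of the simplest Godley-Lavoie model), not part of the actual sfc_models test suite:

```python
import unittest

def steady_state_income(G, theta):
    """Stand-in model calculation: the steady-state income of the
    simplest Godley-Lavoie model is government spending over the tax rate."""
    if theta <= 0.0:
        raise ValueError('tax rate must be positive')
    return G / theta

class TestSteadyState(unittest.TestCase):
    """Small, isolated tests: each method exercises one behaviour."""

    def test_base_case(self):
        self.assertAlmostEqual(steady_state_income(20.0, 0.2), 100.0)

    def test_bad_tax_rate_rejected(self):
        with self.assertRaises(ValueError):
            steady_state_income(20.0, 0.0)

# Run the tests programmatically (equivalent to `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSteadyState)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```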
These unit tests call functions and methods, and ensure that the output matches expectations.<br /><br />Code that is designed to be unit tested needs to be broken into small blocks; spaghetti code has to be avoided. Each of the small components is tested separately, so that changes to one block do not break hundreds of tests because of a desired change of behaviour.<br /><br />The risk with testing of this nature is that we are focused on testing small parts, and it may be possible that changes will break the code in unexpected ways due to unforeseen interactions. Therefore, we need to augment smaller unit tests with end-to-end tests. These end-to-end tests do large operations, but only examine the final results; implementation details are allowed to remain flexible. For <i>sfc_models</i>, the model outputs from selected models from <i>Monetary Economics</i> were used as end-to-end tests. Since the models were implemented elsewhere already, the target output data (based on fixed inputs) were already available. It was possible to use these target data to ensure that the models generated by <i>sfc_models</i> matched the existing results.<br /><br />It would have been possible for me to develop the package to generate models that behaved differently. However, such models would be non-standard, and one of the first things users would want is to be able to emulate the models in <i>Monetary Economics</i>. Rather than start from an unorthodox position, I instead decided to be able to emulate the existing structures in <i>Monetary Economics</i> perfectly, and leave the job of creating new types of models as future extensions under the control of users.<br /><br />One difference between the<i> sfc_models</i> framework and that of Godley and Lavoie is the equation structure. Since the <i>sfc_models</i> equations are generated algorithmically, they needed to be set up in a generic fashion that allows the algorithms to create connections between sectors in a flexible manner. 
As a result, the initial set of equations generated by <i>sfc_models</i> features many redundant equations (that are pruned by the solver). This destroys some of the mathematical elegance.<br /><br />A second difference is cosmetic: variables in <i>sfc_models</i> follow an algorithmic naming scheme based on long text strings, while Godley and Lavoie use standard variable names from economics, using mathematical notation with subscripts. Therefore, the user needs to “translate” variable names when comparing results.<br /><br />The final difference is a limitation of the <i>sfc_models</i> framework: there is no mechanism to generate the transaction matrices that are a prominent feature of <i>Monetary Economics</i>. It may be possible to infer such matrices from the economic model structure, but the feasibility of that step has not yet been examined.<br /><br />(c) Brian Romanchuk 2017<br /><br />Primer: Funding Versus Credit Risk<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-3kyutSiWtzw/WTLHEdvdNII/AAAAAAAACw0/RqtGBeWQrHAbBZXR426ZfSWZ4nPmeUsZACKgB/s1600/logo_banking.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://1.bp.blogspot.com/-3kyutSiWtzw/WTLHEdvdNII/AAAAAAAACw0/RqtGBeWQrHAbBZXR426ZfSWZ4nPmeUsZACKgB/s1600/logo_banking.png" /></a></div>The act of lending involves two fundamental operations: extending financing (funding) and taking credit risk. These two roles are typically thought of together, which obscures what is happening. By decoupling these concepts, we can better understand the effects of debt issuance.
If we can eliminate credit risk, the circular nature of financial flows means that the only limitations on debt issuance are real resource constraints. This decoupling helps us better understand the behaviour of economic models (why they depart from real-world behaviour), governmental finance, and banking.<br /><a name='more'></a><h2>Financial Engineering</h2>Financial engineering has a deservedly bad reputation as a result of sell-side financial engineers devising toxic securities to dump on their clients. However, on the buy side, a lot of "financial engineering" consists of breaking apart traded instruments into their fundamental risk factors. Knowing what risks you are running is the key to good portfolio management.<br /><br />In this article, I discuss credit financial instruments of all types, including some that might not be thought of as "financial," such as accounts receivable. However, the remarks are not really applicable to equities.<br /><br />The following factors are the main ones that determine the "fundamental" value of a credit instrument (although we may need to add some to get an expected market price). I am listing them all for completeness; only the first two are discussed in the rest of the article.<br /><ul><li><b>Funding (or financing).</b> The need to transfer "money" (or assets of some sort) to the borrower.</li><li><b>Credit risk.</b> Will the borrower pay back the borrowing?</li><li><b>Term</b> (maturity). </li><li><b>Interest rate risk.</b> This can be tied to the maturity (fixed coupon), but not always (floating rate notes).</li><li><b>Optionality.</b> Does either party have the right to modify the structure of payments?</li></ul><div>Once again, the latter three factors are critical for pricing instruments, but they can be viewed as secondary to the act of lending.<br /><br />In order to do such a decomposition, we need instruments that allow us to isolate these risks.
We can separate the funding/credit risk components by using a credit guarantee or credit default swap (CDS). A bond owner can hedge out the credit risk of a bond by "buying protection" in the CDS market (albeit by gaining a credit risk exposure to the swap counterparty). Meanwhile, the seller of CDS protection ends up with the credit risk associated with the bond, but without extending any financing.</div><div><br /></div><div>My argument is that the first two factors are quite different, but common usage of "financing" causes people to blur the two together. One method to deal with this would be to invent a new term for the act of extending money without worrying about credit risk, but I will stick with "financing/funding," as that is already the phrasing used by many (most?) authors.</div><div><h2>Modelling Perspective</h2>Embedding credit analysis within a mathematical economic model is difficult, but funding poses no problems. It is therefore easy to set up models with behaviour that is hard to interpret if we lump together funding and credit.<br /><br />We can use the <a href="https://github.com/brianr747/SFC_models" target="_blank">Python sfc_models stock-flow consistent (SFC) modelling framework</a> to create an example. It currently only emulates models from Godley and Lavoie's <i>Monetary Economics</i>, but the equation structure is slightly different, and it is easy to illustrate the limited constraints on funding if there is no credit risk.<br /><br />Within <i>sfc_models</i>, each sector has a net financial holding variable, denoted <i>F</i>. If a non-government entity issues debt, but otherwise undertakes no transactions in an accounting period, the variable <i>F</i> associated with that sector does not change. Instead, it grows its balance sheet: it increases its stock of financial liabilities (its debt), and invests the proceeds in other financial assets.
For the sake of simplicity, label this other financial asset "short-term paper," which is issued by a number of issuers (think of a money market mutual fund).<br /><br />Where does the funding come from? If all entities in the model issue liabilities that do not have credit risk, there is nothing to distinguish them within portfolios. If all liabilities are short-term instruments, they are all indistinguishable from "money" or "treasury bills." As a result, other entities would increase their holdings of the issuing entity's new debt, and there would be a reshuffling of their portfolios that allows the issuer to increase its holdings of "short-term paper." (The simplest case is one other entity reallocating from "short-term paper" into the newly issued debt.)<br /><br />In summary:<br /><ul><li>The target entity issues $X of new debt; it ends up with $X in new financial assets ("short-term paper"). Its balance sheet is $X larger.</li><li>Other entities exchange $X of "short-term paper" for $X of new debt. Their balance sheets have the same size, just with an allocation change.</li></ul>Since there are no worries about credit risk, there is no <i>financial</i> reason for there to be any limit to this process. Entities could issue arbitrarily large amounts of debt, with the proceeds invested in the arbitrarily large issuance of liabilities by other entities. Since such a result appears nonsensical, the modeller has to impose behavioural rules that limit debt issuance. For example, debt issuance is typically tied to consumption or investing (in real assets). Production capacity constraints exist (although not in the simpler SFC models!), and so arbitrarily large nominal debt flows cannot be matched by arbitrarily large flows of real goods and services. In other words, the increased nominal flows would likely just represent inflation.<br /><br />The exact mechanism of the transfer is deliberately shrouded in mystery.
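The balance-sheet reshuffling summarised above can be demonstrated with a toy calculation (hypothetical Python for illustration, not <i>sfc_models</i> code): the net financial holdings <i>F</i> of both sectors are unchanged, even though the issuer's balance sheet grows by $X.

```python
# Two toy sectors; F = financial assets minus financial liabilities.
issuer = {"short_term_paper": 0.0, "debt_issued": 0.0}
other = {"short_term_paper": 100.0, "issuer_debt": 0.0}

X = 30.0  # size of the new debt issue

# The issuer sells $X of new debt to the other sector, which pays by
# handing over "short-term paper"; the issuer holds the proceeds as
# "short-term paper."
issuer["debt_issued"] += X
issuer["short_term_paper"] += X
other["issuer_debt"] += X
other["short_term_paper"] -= X

F_issuer = issuer["short_term_paper"] - issuer["debt_issued"]
F_other = other["short_term_paper"] + other["issuer_debt"]

assert F_issuer == 0.0              # net holdings unchanged...
assert F_other == 100.0             # ...for both sectors,
assert issuer["debt_issued"] == X   # but the issuer's balance sheet is $X larger.
```

Nothing in the arithmetic above stops us from re-running the issuance step with an arbitrarily large X, which is exactly the point: absent credit risk, there is no financial brake.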
The <i>sfc_models</i> framework follows the aggregation methodology of standard stock-flow consistent models, and all transactions are assumed to clear simultaneously at the end of the accounting period. We cannot trace the individual transactions; rather, all we see is the new portfolio configurations (and flows) at the end of the period. This is somewhat akin to the notion of equilibrium in mainstream economics, which is briefly touched upon in this <a href="http://www.levyinstitute.org/publications/stock-flow-consistent-macroeconomic-models-a-survey" target="_blank">survey paper by Michalis Nikiforos and Gennaro Zezza</a>. Ramanan has a longer discussion of how equilibrium fits within the post-Keynesian approach. I would note that the notion of (short-run) equilibrium as used here is a well-defined mathematical concept, <a href="http://www.bondeconomics.com/2017/05/does-general-equilibrium-exist.html" target="_blank">unlike the ambiguous ideas used in mainstream economics</a>.*<br /><br />This situation is distinct from what might happen in an agent-based model, which theoretically might track all transactions. In such a case, we would explicitly model how debt is funded, and the developer of the model may put in place behavioural limits on how much debt can be funded.<br /><br />My argument is that adding such arbitrary limits on debt issuance is perhaps misleading; the private sector will arrange its affairs in such a way as to eliminate <i>funding </i>constraints. Instead, the real limit is on the willingness to take credit risk. A business cycle is largely defined by the increased willingness of the private sector to take credit risk -- allowing debt issuance to expand -- until that willingness disappears (the financial crisis that has terminated every expansion in recent decades).
This should sound roughly familiar; it is just a restatement of Minsky's Financial Instability Hypothesis.<br /><br />The rest of this article discusses how to translate these observations on mathematical models to the real world. In my view, the apparent ability of mathematical models to allow unlimited debt expansion is not a deficiency of the model (as a result of the violation of the common sense idea of "loanable funds" limiting debt issuance). Instead, the issue is the difficulty of modelling credit risk, which is the true brake on debt growth.<br /><h2>Why is Credit Risk Hard to Model?</h2>It is perhaps not obvious why credit risk is hard to model within a macroeconomic model. The reason is that mathematical models follow rules, whereas credit risk often revolves around people and firms not following rules.<br /><br />We have to remember that a mathematical model is just a statement about sets, and elements of sets. There are no mathematical elves that live inside the model, running production functions, trading, and lending to each other.<br /><br />Lending decisions are often based on instinct, but they generally follow credit ratio guidelines, which could be modelled. In the real world, both sides of the transaction will scour business plans and accounting data to determine whether such guidelines are met. Lenders have to factor in the possibility that borrowers are delusional, or lying through their teeth. As a result, the financial sector has an active role in the decision-making process.<br /><br />In a mathematical model, such negotiation is not going to be easy to represent (outside of an agent-based model, or game-theoretic models that abstract away from the rest of the economy). The aggregate position of the borrowing sector is presumably known, as are the lending standards. Therefore, the natural approach is to set up a borrowing capacity function and a willingness-to-borrow function.
The "decision" within the model is based on those functions, and thus it appears that only the borrowing sector is behind the decision, and the financial sector is entirely passive. (Since this is the natural implementation, I am not a fan of the heterodox complaint about the absence of the financial sector in models; its role can be implicitly buried in the borrowing behaviour of the real economy sectors.)<br /><br />In summary, the credit risk component of a macro model is most likely going to look unrealistic.<br /><h2>Not Just Banks</h2>The ability of banks to expand their balance sheets is well known; loans create deposits. There is considerable mysticism about this ability, and the belief that it makes banks special. (<a href="http://www.bondeconomics.com/2014/04/are-banks-special-yes-and-no.html" target="_blank">I discuss this topic in "Are Banks Special: Yes and No."</a> As the title suggests, banks do have some privileges, but this can be overstated.)<br /><br />Non-bank finance also allows the private sector to expand its balance sheet. Firms can issue commercial paper, and invest the proceeds in the commercial paper market. So long as the issuers are willing to hold each other's commercial paper, this process can continue indefinitely. Certainly, banks are involved in the payments mechanism, but there is no need for any borrowing from banks to be outstanding at the close of the day. If banks somehow became a bottleneck for the system, arrangements would be made to bypass those constraints.<br /><br />We are not even confined to the financial sector. Firms routinely sell goods and services on a credit basis: accounts receivable and payable. The seller effectively extends funding in the form of goods or services provided to the buyer, since the seller previously had to finance the inventory that was sold.<br /><br />Trade finance is an important component of capitalism, even if it gets less attention than banking in popular analysis.
Vendor financing was a critical component of the technology bubble, and is often sneered at. However, rediscounting receivables was one of the drivers of the development of the money markets. Merchant banks rediscounted receivables, and this gave the instruments enough credibility to be traded in the money markets.<br /><br />Furthermore, there is an intense economic force behind vendor financing. By extending financing, you make sales, and thus profit. If regulation clamped down on financing -- as is suggested by full reserve banking -- commerce would grind to a halt. With apologies to Frank Herbert -- the credit must flow. As a result, attempts to protect the sanctity of "money" -- such as gold backing, or full reserves -- will be bypassed to grease the wheels of commerce. The drive for profit and incomes creates a political coalition that dooms attempts to restrict private credit creation too tightly.<br /><br />The problem with vendor financing is that the seller needs to correctly judge the creditworthiness of the buyer. (This was a key problem in the tech bubble.) There is an incentive for businesses to find a way of outsourcing that credit analysis. Historically, general stores had to keep track of the tabs that they extended to their regular customers, and determining the size of the tab was a decision that only trusted persons could make. (The fact that publicans acted as financial intermediaries when the Irish banking system was shuttered by a labour dispute in the 1970s is an example of the credit expertise in the real economy -- <a href="http://www.bondeconomics.com/2014/07/book-review-money-unauthorized-biography.html" target="_blank">discussed in this book review</a>.) Credit card companies have taken over the role of credit assessment for retailers, and so it is possible to have junior employees acting as cashiers.
In other segments of the economy, intermediaries spring up to fulfil that role (such as receivables factoring, or trade finance companies). For large fixed investments, the bond market exists to absorb the credit risk that is too concentrated for even the banking system.<br /><br />The fact that financing forms will adjust to allow credit to be advanced explains why I believe that the "equilibrium" description of financing in mathematical models is relatively close to the truth. The system will evolve so that if some entity is willing to take on the credit risk of the borrower, there will be a way of matching supply and demand.<br /><br />The willingness of <i>someone</i> to underwrite the credit risk of a borrower is thus one constraint on borrowing. The other constraint is real: trade finance can only be advanced against existing goods and services. If we attempt to advance more paper claims against resources than there are resources available, inflation would result.<br /><br />This analysis implies that arguments suggesting various policies are needed to raise savings rates to fund investment are bunkum. There is no need to raise the <i>volume</i> of savings; borrowing is inherently self-financing. The role of the financial sector -- if it is functioning properly -- is to assess credit risks. The only advantage of new financing forms is that they may be able to undertake credit risk that banks naturally shun. Stories about entrepreneurs needing to save to build their business (or people hoarding coconuts on desert islands to "invest") miss the point: they needed to save to invest since no one was willing to lend them money. Since most new businesses fail, there is not a huge lineup of lenders willing to extend such financing. However, existing firms have a track record, and financing is normally available for viable ones.<br /><h2>The Role of Banks</h2>The importance of money creation by banks is overstated.
By itself, money creation is a financing operation, and financing itself is not an issue. Instead, the role of banks in credit analysis is what matters. Banks have an inside view of depositors' liquidity positions, which is of great value in credit analysis. They are also diversified lenders, which naturally reduces their risk. By contrast, the fate of specialist lenders depends upon a narrow group of clients, leaving those lenders exposed to sectoral downturns.<br /><br />The evolution of banks towards the originate-to-distribute model is an abdication of the fundamental responsibilities of banks. As a result, they have lost the technical competence to execute their core business functions properly, which sadly is a feature of many large corporations.<br /><h2>What Happened in Canada</h2><a href="http://www.bondeconomics.com/2017/05/the-zombification-of-canada.html" target="_blank">The explosion of Canadian household debt after the disastrous decision by the CMHC to underwrite the credit risk of risky borrowers is very easy to understand from this point of view.</a> They took away the credit risk, eliminating the true constraint on borrowing. As one might expect, borrowing accelerated. The only limit on borrowing was finding new borrowers who could pass the exceedingly easy lending standards; financing was never really the constraint. <i>(Arguably, the Canada Mortgage Bonds were created to allow the necessary intermediation, which would have been difficult if the mortgages remained on the bank balance sheets.
The institutional structure of Canada ensures that wealth leaks into non-bank finance -- for example, pension funds -- and it would have been necessary for the banks to increase their already large bond borrowing programmes to recirculate the funding from the shadow banks to the mortgage positions.)</i><br /><i><br /></i>Offering credit guarantees looks like a way to implement government policy "for free," which is attractive in an environment where politicians have a primitive ideological fear of government debt. However, such guarantees eliminate the constraint on credit growth for that class of borrower, and so it is necessary to think about whether unchecked growth of a certain type of debt is a good idea.<br /><h2>Central Governments</h2>A central government (that controls its central bank, and borrows in its own free-floating currency) is essentially free of the risk of involuntary default. (To the extent that claim is controversial, I discuss it at greater length in <i><a href="http://books2read.com/understanding-government-finance" target="_blank">Understanding Government Finance</a></i>.)<br /><br />Therefore, such governments are free of the credit risk aspect of borrowing, and they are thus only engaged in a financing operation. Meanwhile, there are no financial constraints on such an operation, as I discussed earlier. Since government money always trades at par, liability issuance by the government is matched by rising non-government financial asset holdings (often called a circular flow).
The only constraint on government borrowing is the real constraint: there has to be someone willing to sell real goods or services (or work) for the nominal dollar amount offered by the government.<br /><br />As a result, economic models that allow for arbitrarily large amounts of government borrowing appear to be closer to real world behaviour.<br /><br />Keen observers of Modern Monetary Theory (MMT) would note that I used the dreaded word "financing" with respect to government spending. They do not like to use that word with respect to government debt issuance. They have realistic political concerns about the misuse of words to frame debates. For example, <a href="http://bilbo.economicoutlook.net/blog/?p=36092" target="_blank">Bill Mitchell discusses the role of language in a recent article on the language used in World Bank publications</a>:<br /><blockquote class="tr_bq">The shift from systemic failure to individual choice – from a lack of jobs to transactional choice – from responsibility to aid all citizens to governance and oversight of ‘taxpayer funds’ – from the responsibility of government under international treaties to provide enough jobs to contract brokering and surveillance – from full employment to full employability – reflected the neo-liberal dominance of public policy.<br />And the shift in language was an intrinsic part of this policy shift. Of this abandonment of full employment. Of this jettisoning of responsibility.<br />That is when I became interested in the way language and framing works.</blockquote>With respect to government borrowing, MMT authors are very keen to differentiate the central government from other borrowers. This is in opposition to hypocritical rambling about "magical money trees" (<a href="https://medium.com/modern-money-matters/the-magic-money-tree-exists-822ee0ecb09a" target="_blank">as discussed by Neil Wilson here</a>). As a result, I have some sympathy for this view. 
However, they are effectively lumping the act of financing with the act of credit transfer (like everyone else), which I view as awkward. "Financing" or "funding" appear to be the best technical terms for the act of transferring money, without worrying about the transfer of credit risk. Until such a term is established, a discussion similar to the one here is going to be clumsy.<br /><h2>Concluding Remarks</h2>We need to decouple the act of funding from the act of taking credit risk. Credit risk and real resource constraints are the true limits on borrowing; trying to find magic debt-to-GDP ratios is a doomed quest.<br /><br /><b>Footnote:</b><br /><br />* The notion of "long-run equilibrium" -- that there are steady-state ratios towards which variables converge -- is somewhat harder to define for a growing economy without adding classifications of variables within models. Different classes of variables will exhibit different growth rates (for example, real growth rates versus nominal), and so badly selected ratios will not converge towards any value, even in a "steady state."
These classifications are not in place in the <i>sfc_models</i> framework, and so a formal definition is hard to implement.<br /><br /></div>(c) Brian Romanchuk 2017<br /><br />The Horrifying Mathematics Of Infinitesimal Agents<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody><tr><td style="text-align: center;"><a href="https://www.amazon.com/gp/product/0871404532/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0871404532&linkCode=as2&tag=bondecon09-20&linkId=d36ec2d07c20fcfc2ff85304ce36d362" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&MarketPlace=US&ASIN=0871404532&ServiceVersion=20070822&ID=AsinImage&WS=1&Format=_SL160_&tag=bondecon09-20" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Highly recommended<br />for Lovecraft fans.<br />(Cover is link to<br />Amazon.com.)</td></tr></tbody></table><blockquote class="tr_bq">Three men were swept up by the flabby claws before anybody turned. God rest them, if there be any rest in the universe. They were Donovan, Guerrera, and Ångstrom. Parker slipped as the other three were plunging frenzied over endless vistas of green-crusted rock to the boat, and Johansen swears he was swallowed up by an angle of masonry which shouldn't have been there; an angle which was acute, but behaved as if it were obtuse. ("The Call of Cthulhu," H.P.
Lovecraft, 1928.)</blockquote><br /><i>Searchers after mathematical horror haunt strange, far concepts. Being swallowed by non-Euclidean geometry is one form of terror. But the true epicure in the terrible, to whom a new thrill of unproveable ghastliness is the chief end, esteems most the hideous infinitesimal agents of mainstream economics.</i><br /><br /><script type="text/x-mathjax-config"> MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}}); </script> <script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"> </script>This article examines the curious mathematics of infinitesimal agents, which are not merely infinitely small; they are indexed on the <i>[0, 1]</i> interval. Such agents are of critical importance in New Keynesian economics, as standard Calvo pricing uses them to generate price stickiness. However, it is impossible for this mathematical formalism to be the limit of a large number of firms, nor is it possible to properly define an optimisation problem for such agents. <i>Since the solution of the mathematical problem is not the result of optimising agents, such models are just as vulnerable to the Lucas Critique as the old Keynesian models.</i> It may be possible to create a proper optimisation structure for such models, but it would probably require re-writing most of the mathematics.<br /><br /><a name='more'></a>UPDATE: <i>This aspersion cast at mainstream economics thankfully triggered a response. Brian Albrecht (@BrianCAlbrecht on Twitter) gave me some references to chew on (yay). He listed the work of Yeneng Sun, which looks good and mathematical (link: <a href="https://scholar.google.com/citations?user=45jVI-4AAAAJ&hl=en&oi=sra" target="_blank">https://scholar.google.com/citations?user=45jVI-4AAAAJ&hl=en&oi=sra</a>), and a standard reference: "Markets with a Continuum of Traders," by Robert J. Aumann.
I have not read any of the supplied references, so I cannot tell how close my guess was to the actual justification. (In between channeling my inner H.P. Lovecraft, I do hint at what I think was going on.) If necessary, I will cut back some of my claims here...</i><br /><h2>Why Do This?</h2>To an outside observer, it appears that the mathematics of infinitesimal agents is something lifted from the <i>Necronomicon</i>. However, there is a logic behind the choice, even if the mathematics makes no actual sense.<br /><br />The objective of mainstream macroeconomics is to derive macroeconomic models based on the optimising choices of agents (households maximise utility, firms maximise profits). Although that seems like a reasonable starting point for a mathematical model -- which is always going to have to abstract from reality -- the difficulties arise when agents' choices interact.<br /><br />For example, if we assumed that the business sector consisted of one firm (a monopoly), we would have a single optimiser driving a significant part of the economy. It would effectively be a form of central planning, and behaviour could be quite erratic. Since the single firm has no competitors, it can follow any strategy it wishes. This is not going to be a reasonable approximation of reality, so we need to have multiple firms.<br /><br />The first thing to keep in mind is that the mainstream insists that individual agents (outside of the monopoly situation) do not set prices; they are "price takers." Some mysterious agency causes supply and demand to come into "equilibrium," and prices are set in whatever way brings such an "equilibrium" about.
(<a href="http://www.bondeconomics.com/2017/05/does-general-equilibrium-exist.html" target="_blank">As I noted in an earlier article, whether such an equilibrium makes formal mathematical sense is unclear.</a>)<br /><br />When we look at the preferred optimising structure, the optimisation problem for firms involves choosing the level of production given the level of wages and selling prices. If wages and prices are fixed, the optimising choice appears easy, given various assumptions that are made. However, if the firm changes its production level, should it not move wages and prices?<br /><br />The solution is to make the firms so small that they have no influence on prices. Thus the quest for infinitesimal firms.<br /><h2>What are They?</h2>If we were truly attempting to approximate having a great number of firms, the normal way to proceed is to say that we have <i>N</i> firms, and see what happens when <i>N</i> tends to infinity. From a mathematical perspective, this represents a <i>countably</i> infinite set: we can associate each firm with an element of an infinite sequence. This makes too much sense for the mainstream.<br /><br />They instead say that we have <i>a lot</i> of firms, each with an index <i>i, </i>and the set of indices is the interval <i>[0, 1]</i>. That is, for every real number in the interval <i>[0,1]</i>, there is an associated firm. To calculate aggregates, we integrate over that interval.<br /><br />This is used in Calvo pricing. In each time period, there is a fixed probability that each infinitesimal firm is allowed to change prices. (Some wags have referred to the "Calvo fairy" as allowing price changes to occur.)
If the probability is 0.5, then:<br /><ul><li>there's a 50% probability it cannot change prices the next period;</li><li>there's a 25% probability it cannot change prices over the next two periods;</li><li>etc.</li></ul><div>As a result, it needs to raise prices now to take into account expected inflation that could occur while it is unable to raise prices.</div><h2>You Can't Get There From Here</h2><br /><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody><tr><td style="text-align: center;"><a href="https://www.amazon.com/gp/product/0486406830/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0486406830&linkCode=as2&tag=bondecon09-20&linkId=276c6024d5dab3b78fec8a12bf2024c0" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&MarketPlace=US&ASIN=0486406830&ServiceVersion=20070822&ID=AsinImage&WS=1&Format=_SL160_&tag=bondecon09-20" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Excellent - and<br />cheaper than Rudin!</td></tr></tbody></table>The first obvious complaint with this indexation scheme is that it cannot be viewed as taking the limit of a great many firms. As <i>everyone </i>knows, the closed interval <i>[0, 1]</i> on the real line is a nondenumerable set. (As proved in the Theorem in Section 4 of Chapter 1 of Volume 1 of <i>Elements of the Theory of Functions and Functional Analysis, </i>by A.N. Kolmogorov and S.V. Fomin.) This means that we cannot express the set <i>[0, 1]</i> as the limit of a countable sequence of agents.
In other words, we formally cannot view such a construct as being the limit of having "a lot" of firms.<br /><br />What madness seized the economists who use this indexation scheme as part of the drive to add "micro-foundations" to models? This is one of the first theorems one learns in most real analysis courses (as evidenced by being in an early section of the first chapter). How could they have skipped over that result?<br /><br />Although this transgression against mathematics might be excused as the result of a simplifying assumption, the situation gets even more mind-destroying.<br /><h2>Infinitesimals <i>Cannot</i> Optimise</h2>Since we are no longer just summing up the actions of a set of $N$ firms, we need another way to calculate aggregate behaviour. This is achieved by integration (the Lebesgue integral, to be precise). If we associate a decision vector $u(i)$ with each firm $i$, and have a state vector $x$, we can define a profit function $f$. The aggregate profits $s$ are generated by:<br />\[<br />s = \int_0^1 f(x, u(\mu)) d \mu.<br />\]<br />(Please note that this formulation is a simplification of the relationships that might be found in DSGE papers; the key is that we get the aggregate by integration over the interval <i>[0, 1]</i>, and not the notation associated with the variables inside the integral.)<br /><br />For a particular agent $i$, its profit $s(i)$ is given by:<br />\[<br />s(i) = \int_i^i f(x, u(i)) d\mu.<br />\]<br />The solution to this is simple: profits are equal to zero, no matter what decision the firm makes.* A single firm is a set of measure zero, and what happens on a set of measure zero has no effect on the Lebesgue integral.<br /><br />Very simply, an individual firm makes no profits (or an infinitesimal household always has utility of 0), no matter what choices it makes. 
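The measure-zero point can be checked numerically. In the following sketch (the profit function and firm index are invented for illustration), a Monte Carlo estimate of the aggregate integral over [0, 1] is blind to a single firm's deviation, since a random draw almost surely never lands exactly on that firm's index:

```python
import random

def mc_integral(f, n=100_000, seed=42):
    """Monte Carlo estimate of the integral of f over [0, 1]."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

def profit(u):
    """Toy profit as a function of the firm's decision u (illustrative only)."""
    return u * (1.0 - u)

# Every firm chooses u = 0.5 ...
baseline = mc_integral(lambda i: profit(0.5))

# ... except the single firm at index i = 0.3, which deviates wildly.
def with_deviation(i):
    return profit(10.0) if i == 0.3 else profit(0.5)

deviated = mc_integral(with_deviation)

# The deviation is invisible in the aggregate: one firm is a set of measure zero.
print(baseline == deviated)  # True
```

And the individual firm's own profit, the integral over the degenerate interval from i to i, is an integral over a set of width zero, which is zero regardless of the decision u; hence there is nothing for the individual agent to optimise.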
There is no optimisation to be done, since all choices are equally valid.<br /><br />In other words, the optimisation problem as stated makes no sense whatsoever from a formal mathematical perspective. Since the DSGE model cannot be construed as the result of optimisation, it is vulnerable to the Lucas Critique.<br /><h2>Can the Framework be Saved?</h2>Can we resurrect an optimisation problem from this infinitesimal mess? It seems possible, but it would make no sense from the perspective of micro-foundations. We can only have non-zero values in the functions if we integrate over sets with non-zero measure. This means that we are looking at the optimal choice of an infinite number of "firms" (or households), if we use the original mainstream definition of a "firm."<br /><br />In the Calvo pricing formalism, that raises awkward problems. The point of Calvo pricing is that there is a probability that each infinitesimal firm can change prices in each time period. Once we start optimising over sets of non-zero measure, we are now optimising over a set of "firms" which have a probability of being able to set prices in each period. Since we have an infinite number of firms, we should be able to appeal to the law of large numbers, and we know that a fixed percentage of the target firms will be able to adjust prices in each period. It seems to me that this ends up being for all intents and purposes the same thing as flexible prices, since the "firms" that change prices can make up for the "firms" that cannot change prices.<br /><br />Furthermore, as soon as we are optimising over sets of non-zero measure, the firm presumably has some market power. This market power should then be taken into account when analysing the problem.<br /><br />I believe that the attempt to rescue this formalism will revolve around looking at "infinitesimal profits." 
Under the notation used here, we are allegedly interested in maximising the function <i>f, </i>and not worrying about integrating it. However, this does not work, once we take into account the constraints facing the optimisation. As I discuss in "<a href="http://www.bondeconomics.com/2017/04/interpreting-dsge-mathematics.html" target="_blank">Interpreting DSGE Mathematics,</a>" the optimisation problem for agents needs to take into account budget constraints. <strike>Those budget constraints at the aggregate level are described by aggregate money (and other financial asset) holdings, as well as aggregate government consumption and taxation. If we used the more sensible countable notion of an infinite number of firms, we could associate <i>1/N</i> of those aggregate values to any particular agent. Once we accept the madness of the <i>[0, 1]</i> interval, the dollar amounts within the constraints have no choice but to be zero. This puts us in the untenable position that constraints do not appear at the agent level, but they somehow pop up at the aggregate level. </strike> (I struck out the previous text: I originally thought it summarised my technical worries, but I realised it was not a good formulation. The issue around constraints is more complex than I discuss here, and it would take a more formal explanation, which is the subject of ongoing research. Having agents with finite measure, as I suggest, would eliminate my real concern.)<br /><br /><i>[Update: I read one of the references, and as I suspected, it only tangentially covers the issues that I am worried about. I can see how a continuum of agents can work for some models, but the specific application to DSGE models with Calvo pricing and two classes of optimising agents is still unclear. The paper I read used a new formal definition of competitive equilibrium (which is of course completely unrelated to any other definition I have come across), and I need to digest its implications. 
The technique discussed in the paper I read does not translate to the "macro model" framework, which is one of my standard complaints about the references in mainstream mathematical economics. Finding a self-contained reference is still beyond my reach. If any mainstream economist is offended by my assertions, please feel free to explain exactly how I am wrong. If I find such a reference, I can stop wasting my time on definitional issues, and return to substantive ones.</i><i>]</i><br /><h2>Concluding Remarks</h2>Beware acute angles, and infinitesimal agents.<br /><br /><b>Footnote:</b><br /><br />* One could try to appeal to the Dirac Delta "Function", which is actually a generalised function (Kolmogorov and Fomin, page 105). These generalised functions are only defined in the context of integration; it is impossible to choose a "delta function" as the argument of a function to be optimised.<br /><br />(c) Brian Romanchuk 2017<br /><br /><b>The Zombification of Canada</b> (2017-05-28)<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-UBMvGb_oK6M/WSt1sQ-HMiI/AAAAAAAACwY/AxOeaTEAE0gZNvQ_JEk7dw_ovQcEAXtCACLcB/s1600/c20170528_canada_debt_income.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Chart: Canadian Household Debt-to-Disposable Income Ratio" border="0" data-original-height="400" data-original-width="600" src="https://3.bp.blogspot.com/-UBMvGb_oK6M/WSt1sQ-HMiI/AAAAAAAACwY/AxOeaTEAE0gZNvQ_JEk7dw_ovQcEAXtCACLcB/s1600/c20170528_canada_debt_income.png" title="" /></a></div><br />A series of policy errors has trapped the Canadian economy in a near-zombie status. Household debt levels are high, leading to a fragile system. 
The only benign way of reducing this fragility is to induce high wage inflation, which is precluded by the unthinking attachment to the inflation target. There is no reason to expect the system to collapse on any particular forecast horizon; rather, the economy can muddle along on a low-growth path. The fact that the brain trust inflicted this low-growth destiny on the Canadian economy in the name of improving economic efficiency is ironic, but this reflects the general failure of modern policymaking. The situation in Canada may mainly be of interest to Canadians, but it does provide yet another data point for the general thesis that trusting the policy preferences of the financial sector is inherently a bad idea.<br /><br /><a name='more'></a>This article is not meant to be a detailed analysis of the policy failures; rather, it sets out a summary of my views. It could easily serve as the introduction to a report that describes the origins of “<i>The Great Canadian Economic Collapse of <insert date here>.</i>” For regular readers, this is a restatement of long-held views, and so it should be similar to what I have previously written.<br /><h2>What Went Wrong?</h2>My “zombification” thesis is not based on what has happened over the past few months or years; rather, the inception of the problem can be traced to decisions going back more than two decades. As a result, I see little need to point fingers at any particular individuals or political parties for the problem; the errors have been systemic, and embraced by the main political parties.<br /><br />The thesis is straightforward: the rise in house prices since the late 1990s is mechanically inseparable from the rise in household debt. Although the current data flow is hardly reassuring, there is no reason to believe that the system must collapse because household debt has hit some magical “unsustainable” level. 
It is entirely possible that house prices have hit a “permanently high plateau.” (At least relative to the prices that generally prevailed during the mid-1990s, which were quite low when compared to other developed countries in most Canadian markets. Even a correction of 20% to house prices may eventually be hard to spot on a long-term price chart.) Household debt ratios would presumably still rise, but the growth rate would level off as household finances reach a “steady state” with respect to home prices.<br /><br />Zombification does not imply a disorderly collapse. Instead, policymakers are stuck staring into the abyss: they cannot afford to let housing implode. Even if the main banks are protected by mortgage insurance, the household sector would be laid waste. It is impossible to have a healthy banking system if all the banking system’s customers are on the edge of bankruptcy. (Macro-prudential policies are just spitting into a hurricane under such circumstances.)<br /><br />Therefore, policymakers have to treat the possibility of a housing sector collapse with extreme caution. Unfortunately, there is no way out. There are two ways of reducing the debt/wage ratio: mass defaults, or a rapid increase in wages. A mass default would result from the crisis that we are trying to avoid. The other way out seems unlikely: the Bank of Canada would respond to rising wages by hiking rates, which is the standard trigger for a housing market collapse.<br /><br />Instead, policy has to be set in such a way that we remain stuck in low-growth stasis.<br /><h2>The Policy Errors</h2>The policy errors made were straightforward.<br /><ul><li>The most important was the loosening of the down payment requirements by the Canada Mortgage and Housing Corporation (CMHC), which was started in 1999 (there were some trials starting earlier). Canadians cannot take out mortgages with less than a 20% down payment, unless they purchase mortgage insurance. 
This insurance mainly comes from the CMHC, although there have been attempts to allow private sector competitors. The loosening of standards was done in a series of steps, and some of that loosening was reversed after the Financial Crisis. However, one key loosening remains: previously, the maximum mortgage size was capped at a low level that allowed purchases in most markets, but was too low for the highest-priced markets. If those low limits were still in place, the current high prices would be obviously unsustainable.</li><li>The Bank of Canada drank the New Keynesian Kool-Aid™. Like every other New Keynesian central bank, they slashed rates in a panic during every downturn, and they refused to hike rates in a symmetric fashion. It was a mathematical certainty that they would end up at the “zero lower bound.” This one-way trip for interest rates was gratifying for secular bond bulls, but it also raised the animal spirits of the housing market. Canadian mortgages are short term when compared to those in the United States; the effective maximum maturity for rate fixing has been five years. (Mortgages have longer amortisation periods, but the interest rate is renegotiated after five years. There have been attempts to make 10-year mortgages more attractive; I do not know whether this has lengthened the average maturity much.) As a result, heavily mortgaged Canadian households would take a rate renormalisation right on the chin, whereas American households tend to rely on 30-year fixed conventional mortgages, and are somewhat insulated from the policy rate.</li><li>Policymakers have prioritised inflation targeting above all else, on the theory that it would raise growth rates. 
The risk of a housing collapse did not appear in the models that “proved” that low inflation raises growth.</li><li>The idea that the “wealth effect” was a costless way to boost growth (as a side effect of lower interest rates) captivated economists in the 1990s.</li><li>Fiscal policy has been running too tight, locking the Bank of Canada into an untenable position.</li></ul><h2>Political Economy</h2>It would be easy to explain the situation as being the result of “neoliberal” policies. However, it is just as easy to explain this as the result of chronic short-termism and poor understanding of policy consequences among Canada’s governing elites.<br /><br />The mortgage insurance system as it was constituted in the early 1990s worked; the key was that few households took the insurance. It opened opportunities for home ownership, but most households were pushed to wait until they saved up the 20% down payment. (There is an analogy to the Job Guarantee; it’s a sign of success that few people are using the programme.) However, the people running the system in the 1990s did not understand this, and the bright idea was to expand the programme. The government programme ended up backing the financing of the majority of housing market entrants.<br /><br />From an old school political economy perspective, Canada ended up in the worst of all possible worlds. We Sovietised the household credit process, but we pretended that the system was still market-based. Say what you want about the Soviet Union, they at least understood that they were running a command economy. The private sector just extracts rents as it processes applications as quickly as possible, and then dumps the risk on the CMHC. Economic analysis in the mainstream media is dominated by commentary from bank economists. 
Normally, these economists follow a pro-free-market platform; however, there is remarkably little criticism of a programme that coincidentally benefits their employers.<br /><br />These poor policy choices have trapped future governments in a straitjacket. There is almost no room for policy experimentation in any direction, as anything that might trigger rapid rate hikes risks bringing down the system. We are stuck waiting for a supply-side miracle that raises capacity and delivers investment-led growth. Although that happened in the 1990s, repeating the experience would not be easy. Why would businesses ramp up investment in a low demand growth environment?<br /><h2>The Messy Analysis of Institutions</h2>Predicting the consequences of the liberalisation of mortgage insurance was presumably not easy, even though it appears obvious in retrospect. (I was still teaching engineering when the process was started, so I certainly did not predict it.)<br /><br />Mainstream theory, with its bedrock assumption that households and firms plan optimally, is not going to be helpful, even if we incorporate whatever wrinkles researchers have added to the rational expectations assumption. The entire premise of the Canadian mortgage insurance system when it was still working was paternalistic: households and the financial sector had to be stopped from blowing themselves up.<br /><br />Post-Keynesian theory provides a more sensible starting point for analysis. For example, the Financial Instability Hypothesis of Minsky seems to offer a very good description of how the old system helped reduce the odds of a crisis. (I am lumping Minsky within the “broad-tent” notion of post-Keynesian economics.)<br /><br />The problem is that the analysis would likely be qualitative, and it may have been difficult to offer quantitative predictions or recommendations. (For example, what would be a safe level for the maximum insured loan size?) 
Based on the historical data, mortgage borrowing was extremely cautious, and the policy changes probably looked safe at the time. We now have access to an expanded data set, but it has arrived too late to forestall problems.<br /><h2>Concluding Remarks</h2>So long as we avoid a global crisis of some sort, Canadian policymakers have enough freedom of action to prevent a meltdown in the housing market. However, the actions needed to prevent a crisis are helping lock the economy into a low-growth path that offers little chance of reducing the overhang of household debt.<br /><br />The fact that we were locked into a low-growth regime by poorly thought-out reforms that were supposed to raise growth rates is ironic, but it is entirely typical of post-1990 policy.<br /><br />(c) Brian Romanchuk 2017<br /><br /><b>Does General Equilibrium Exist?</b> (2017-05-24)<br /><a href="https://www.amazon.com/gp/product/0486434842/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0486434842&linkCode=as2&tag=bondecon09-20&linkId=94046587b29261be95cb24efade3e59b" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&MarketPlace=US&ASIN=0486434842&ServiceVersion=20070822&ID=AsinImage&WS=1&Format=_SL160_&tag=bondecon09-20" /></a><img alt="" border="0" height="1" src="//ir-na.amazon-adsystem.com/e/ir?t=bondecon09-20&l=am2&o=1&a=0486434842" style="border: none !important; margin: 0px !important;" width="1" /> One of the more entertaining parts of online (and academic) economic squabbling is the fighting over the equilibrium concept used in mainstream economics. The reason for my amusement is that it is possible that the concept is not actually well-defined for macroeconomic models of interest. 
I must cautiously note that a proper definition might exist <i>somewhere. </i>However, I can say that it is possible to read a large portion of the modern mainstream literature and not find a formal definition that survives scrutiny. Such an omission is curious, given the alleged importance of the concept.<br /><br /><a name='more'></a><br /><h2>Does the Real World Converge to Equilibrium?</h2>This rant was triggered by some research I am doing, some articles discussing equilibrium that I have recently read (<a href="https://andrewlainton.wordpress.com/2017/05/22/why-an-economic-equilibrium-is-not-like-a-physical-equilibrium/" target="_blank">Andrew Lainton</a>, <a href="https://rwer.wordpress.com/2017/05/19/equilibrium-a-weed-to-pull/" target="_blank">Peter Radford</a>), and <a href="https://economics.stackexchange.com/questions/10341/road-to-equilibrium-in-a-basic-economic-model" target="_blank">this question on equilibrium on the Economics Stack Exchange</a>.<br /><br />The questioner asks:<br /><blockquote class="tr_bq">but is the model's equilibrium one that occurs naturally? Is real-life money supply equal to money demand? Is real-life investment equal to savings?<br />My question in short: why does the economy converge to these equilibriums?</blockquote>This raises all kinds of questions. What is equilibrium? Is it a property of the real world? <i>How is it possible to write a macro textbook and not answer these questions?</i><br /><h2>Seriously, It's a Set</h2>Mainstream economics is highly mathematical. One of the perceived advantages of the mathematics is that argumentation is supposed to be clearer than in literary economics. However, in order for mathematics to provide this advantage, concepts have to be formally defined. 
From the perspective of applied mathematics, this means defining concepts in terms of set theory (yes, there are weird corner cases in mathematical logic, such as self-referential sets, where we have to go beyond set theory).<br /><br />Other than very highly stylised models, most macro models are specified by three things.<br /><ol><li>A set of economic variables, which are elements in the set of "time series."</li><li>"Constraints" on those variables: accounting identities, production functions, etc.</li><li>A method to find a "solution" to the system. A solution is the set of variable values (1) that satisfy constraints (2). In general, there is an infinite number of potential solutions that satisfy constraints, so some method is needed to winnow down the choices.</li></ol><div>Based on a textual analysis of economic writing, it seems clear that "general equilibrium" (whatever it is) is a technique for choosing a solution. By itself, this is innocuous. For example, we need to <a href="http://www.bondeconomics.com/2016/03/techniques-for-finding-sfc-model.html" target="_blank">calculate the solution to the system of equations of a stock-flow consistent model</a>; if we set up the equations correctly, there will be a unique solution. What is to stop us from labelling said solution the "general equilibrium"? From an ideological perspective, that would be a big no-no (based on my reading of the literature). From a formal mathematical perspective, we would need a definition of "general equilibrium" that stops us from making that characterisation.</div><h2>Economic Model Transitivity Fallacy</h2><div>Before returning to general equilibrium, I want to explain (again) one of my complaints about the economic literature. Whenever I read Dynamic Stochastic General Equilibrium (DSGE) articles, there are commonly logical jumps in proofs, where assertions are made without any justification. 
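Stepping back to the three-part specification above: as a toy instance (the model equations and parameter values here are invented, loosely in the spirit of the simplest stock-flow consistent models), we have variables (Y, T, YD), constraints (accounting identities and behavioural equations), and a solution method (fixed-point iteration):

```python
# Toy illustration (invented model): Y = G + c*YD, T = theta*Y, YD = Y - T.
# The three equations pin down a unique solution, found here by iteration.
def solve(G=20.0, c=0.6, theta=0.2, iterations=200):
    Y = YD = 0.0
    for _ in range(iterations):
        Y = G + c * YD       # output identity
        T = theta * Y        # tax function
        YD = Y - T           # disposable income identity
    return Y

# Closed form for comparison: Y = G / (1 - c*(1 - theta)) = 38.4615...
print(round(solve(), 4))  # 38.4615
```

Nothing here requires calling the solution an "equilibrium"; it is just the unique set of variable values satisfying the constraints.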
The mathematical logic being used seems to rely on "Economic Model Transitivity."</div><div><ol><li>An "economic model" X has mathematical property A.</li><li>Model Y is an "economic model."</li><li>Therefore, Y has property A.</li></ol><div>This obviously does not work from a formal mathematical perspective, unless we can validate that model Y has exactly the same characteristics as X, which allowed us to derive the result that A holds. The only way to be sure is to re-derive the proof that A holds for the new model Y.</div></div><div><br /></div><div>In other words, we cannot just appeal to random theorems (or definitions) without citation; we need to explicitly list the conditions for the theorem (or definition), and then validate that the system meets those conditions.</div><h2>What is General Equilibrium?</h2><div>DSGE macro has its roots in optimal control theory. However, the optimal control theory mathematics has largely been obscured by economists following a publishing convention that gets further and further away from its mathematical roots. It is entirely possible to read a few dozen DSGE journal articles, texts, or lecture notes, without finding a valid formal characterisation of how to find the solution for the <i>macro</i> model of interest. (When I refer to a macro model, it includes both households and firms attempting to optimise their utility/profits respectively, as well as a government sector. This creates an optimisation structure that is completely unlike an optimisation problem for one sector alone.) </div><div><br /></div><div>The implicit assumption is that the determination of "general equilibrium" was covered elsewhere. Walras? Arrow and Debreu? Intermediate microeconomics? The obvious question to ask: did that ultimate definition source solve the same macroeconomic system as the current journal article, or was the modern author relying on "Economic Model Transitivity"? 
That is, we can find definitions of "general equilibrium" that work for some models; the trick is to find a definition that matches macro DSGE models. At the time of writing, I still have not found anything satisfactory, but I want to underline that this is still research in progress.*<br /><h2>Why Criticisms of Equilibrium do not Register</h2><div>As a final note, I would suggest that this situation explains why heterodox complaints about the realism of equilibrium do not register among mainstream economists. In practice, the equilibrium assumption does not even get properly defined in papers that allegedly depend upon the assumption. Given the low level of attention to the concept, worrying about its realism is moot.</div><br /><b>Footnote:</b><br /><br />* The closest I have seen is in Section 7.3 ("Recursive competitive equilibrium") in <i>Recursive Macroeconomic Theory</i>, by Lars Ljungqvist and Thomas J. Sargent. Although it appears quite formal, there were a couple of issues. One was a verbal formulation that was hard to translate into a statement about sets. (This could be viewed as a stylistic issue; in applied mathematics, it is normal to use verbal shortcuts. However, it is unclear how to resolve the ambiguity.) The second issue is more serious, as it does not take into account the differing objective functions of households and firms. 
I am in the process of reading the text, and I do not know whether this concern is addressed in a later section.<br /><br />(c) Brian Romanchuk 2017<br /><br /><b>Two Paper Trolling</b> (2017-05-21)<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-M4zFTue0Hrs/WSHJ00lVdzI/AAAAAAAACv4/FrtXjVREIA0EziPHaxqbfjwrDpW9Rj3sQCKgB/s1600/logo_MMT.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-M4zFTue0Hrs/WSHJ00lVdzI/AAAAAAAACv4/FrtXjVREIA0EziPHaxqbfjwrDpW9Rj3sQCKgB/s1600/logo_MMT.png" /></a></div>I am tied up with Victoria Day Weekend gardening and outdoor maintenance activities, and was planning on skipping writing a blog article. (Happy Victoria Day to my readers who celebrate it.) However, I saw an attempt at trolling MMT that was such an intellectual horror show that I decided to take the bait.<br /><br /><a name='more'></a><h2>Noah Smith At It Again</h2>Noah Smith started the show with his "<a href="http://noahpinionblog.blogspot.ca/2017/05/vast-literatures-as-mud-moats.html" target="_blank">Two Paper Rule</a>" article. I would roughly summarise his argument as follows: when people defend a school of thought by pointing to "a vast literature," it is just an attempt to create a "mud moat" around the literature, protecting it from criticism. If you cannot cite two papers that are exemplary examples of the methodology in a field, the field is not worth studying.<br /><br />I have not spent a lot of time thinking about Noah's article; there are certainly some grains of truth to what he wrote. That said, I believe his views reflect his educational background: because he studied physics, he expects all other scholarly fields to emulate physics. 
Try applying the "two paper rule" to Philosophy, History, Political Science, Control Systems Engineering, or whatever, and you would see that this is a rather reductionist methodology. An additional problem is the collapse of standards in academic publishing as a result of the "publish or perish" imperative: well over 90% of the articles published after 1990 in almost all academic fields deserve to be ritually burned. In other words, there is a "vast literature" about practically anything that has a literature at all, and most of the literature is going to be chaff. All the key valid points within the "vast literature" may be buried in a multitude of papers, which may be of uneven quality and use a variety of methodologies. In most cases, the sensible starting point is a graduate text/monograph, not journal articles; you only wade into the journal articles once you are attempting to build up a citation trail.<br /><br />If I were to take the corner of the control systems literature that I mainly worked on ("anti-windup control"), the "two paper" criterion would have been meaningless. A lot of key results (in my view) were developed in the 1950s and 1960s, using mathematics that was sloppy. The "modern" literature (early 1990s, when I was working on it) was (largely) mathematically sound, but the application areas were of less interest, as the useful results were mined out. How could you represent that literature with two papers?<br /><br />Furthermore, his comments on business cycle macro appear quite dubious.<br /><br />When applied to economics, he says we can ignore the entire post-war literature because of the "Lucas Critique." This is convenient for people who wish to pretend that ancient debates are actually brand new and exciting, like the so-called "Neo-Fisherian effect." 
A somewhat less sympathetic view is that such an attitude is just a rationalisation by unscrupulous academics to bury the relevant literature in order to make their own clique sound original.<br /><br />Turning to the Lucas Critique, I have been forced to piece together the real mathematics that lies behind DSGE macro, as opposed to the hand-waving that passes for mathematics in most published papers. As a result, I have some appreciation for the critique, but my argument is that the critique does not say what most people think it says. If we take a narrow reading, it just suggests that a particular optimisation technique used for policy analysis was incoherent. Although interesting, <i>any</i> optimisation technique for policy analysis is inherently a stupid way to proceed. Therefore, the narrowly-defined Lucas Critique is not telling us very much. If we take a wide definition of the Lucas critique (which is what I usually see on the internet), most modern mainstream papers could be skewered by the same reasoning.<br /><br />In any event, Noah is basing his argument on a dubious analogy to Physics. "It's about evaluating the quality of the literature's methodology." There is a hidden assumption that the business cycle can be easily modelled. If this is not the case, any attempts at analysing the business cycle are going to be buried in struggles with the details of particular economic institutions. 
Any attempt at providing a clean and simple model for the business cycle is doomed.<br /><h2>Gerard Piling On</h2>Like clockwork, <a href="https://beinnbhiorach.com/2017/05/20/this-so-fits-my-experience-trying-to-debate-mmt-kraken-pot-economics/" target="_blank">Gerard MacDonell piled on, complaining about Modern Monetary Theory (MMT).</a> Gerard's MMT criticism can be summarised:<br /><ol><li>He read Warren Mosler's "<i>Seven Deadly Innocent Frauds.</i>"</li><li>He did not like it.</li><li>Like the Commander-in-Chief of his adopted country, he doesn't need to read any more books.</li></ol><div>Needless to say, it is very hard to debate against <i>that </i>position.</div><div><br /></div><div>I will once again repeat what I have written many times: MMT is a school of thought within the broad post-Keynesian tradition. It is an attempt to create a single internally-consistent world view within a literature that is fractured into multiple competing schools of thought. Although most "MMT" articles focus on a few topics that are of the most interest (Job Guarantee, free-floating currency economics, governmental financial operations), <i>"MMT" implicitly covers all of economics.</i> Meanwhile, many of those papers will have been written by authors who are not considered to be "MMT." (For example, MMT uses the stock-flow consistent modelling methodology, but the key papers in that area were written by people who were explicitly not "MMT.")</div><div><br /></div><div>Realistically speaking, you need to specify what area of economics you are interested in before asking for the two papers. Furthermore, the papers you might be offered might not be written by "MMT" authors. If that disappoints you, you were not paying attention. 
Ask an academic philosopher which two articles summarise "Philosophy," and I imagine the response would be a rather curious expression.</div><h2>Gerard's Attempt to Summarise MMT</h2><div>Since he refuses to read the MMT literature, Gerard has a hard time finding out what it says ("what did you expect?" is a question many might ask). As a public service, I will attempt to help.<br /><br />Gerard attempted to summarise MMT's views towards fiscal policy as follows. </div><blockquote class="tr_bq">The public sector budget constraint is either non-existent or a trivial accounting identity with no practical implication. Accordingly, tax and spending policy should be set without any regard to the deficit and long-run trajectory of the debt/GDP ratio. Fiscal policy may occasionally need to be tightened, but the signal for that would be only an acceleration of inflation to an undesirable pace. Worrying about the trajectory of the debt itself is merely a reflection of a misunderstanding of how the payments system works, and is pointless.</blockquote><div>I think it is close, but it needs some amendments. <i>(My added text is in italics.)</i></div><br /><i>What the mainstream refers to as</i> "the public sector budget constraint" is either non-existent<i> (the infinite horizon statement, which is based on a raw inability to do stock-flow consistent model accounting*),</i> or a trivial accounting identity<i> (the one-period accounting statement) </i>with no <strike>practical</strike> <i>behavioural </i>implication<i>s</i>. Accordingly, tax and spending policy should be set <strike>solely</strike> <i>primarily based upon the inflationary implications of policy, as well as other real constraints, assuming that government is issuing liabilities in a currency tied to a central bank it controls.</i> <i>In other words, fiscal policy for a free-floating sovereign</i> should be set without any regard to the deficit and long-run trajectory of the debt/GDP ratio. 
<strike> Fiscal policy may occasionally need to be tightened, but the signal for that would be only an acceleration of inflation to an undesirable pace.</strike> Worrying about the trajectory of the debt itself is merely a reflection of a misunderstanding of how the payments system works, and is pointless, <i>and reflects a complete lack of understanding of private sector behaviour with respect to savings accumulation</i>.<br /><br />Although this is a decent summary of <a href="http://www.bondeconomics.com/2014/04/primer-what-is-functional-finance.html" target="_blank">Functional Finance</a>, it is certainly not "all of MMT."<br /><br />Footnote:<br /><br />* The Fiscal Theory of the Price Level provides the only mathematically coherent mechanism to save the infinite-horizon version of the governmental budget constraint.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com19tag:blogger.com,1999:blog-5908830827135060852.post-60282790084836217952017-05-17T09:00:00.000-04:002017-05-17T09:00:28.187-04:00Advantages Of Discrete Time Models In Economics<div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-lGORoqkT5MQ/WRs1PGZXUFI/AAAAAAAACvQ/NVAi_A-jNO06OaB4J1hN1AxYwzv-PDY_QCKgB/s1600/logo_models.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-lGORoqkT5MQ/WRs1PGZXUFI/AAAAAAAACvQ/NVAi_A-jNO06OaB4J1hN1AxYwzv-PDY_QCKgB/s1600/logo_models.png" /></a></div>One sometimes encounters continuous time economics, such as the Minsky package developed under the direction of the economist Steve Keen (link: <a href="http://www.debtdeflation.com/blogs/minsky/">http://www.debtdeflation.com/blogs/minsky/</a>). In a continuous time model, the time axis is the real line, instead of being discrete steps (as in the Python <i>sfc_models</i> package). 
This article discusses some of the advantages of the discrete time formalism.<br /><br /><a name='more'></a><br />The first key advantage of discrete time is that all economic and financial data are ultimately only available in discrete time. (This might be surprising for the case of finance, but it should be noted that the entire premise of the profitable high frequency trading industry rests upon the observation that financial transactions are not instantaneous. Continuous time models are used in mathematical finance, but these should be interpreted as approximations of the true system.) A continuous time model is therefore one step removed from the data, and we would have to be cautious translating properties that appear only in continuous time series.<br /><br />Even comparing a discrete time model to data is always going to be a difficult process in practice. For example, how do we treat monthly data in a model that evolves quarterly? We are always going to lose information (or be forced to insert information) as we change data frequencies. Furthermore, it is difficult to align data that are released with a variable lag to the calendar dates that they represent.<br /><br />A second key advantage of discrete time is the simplicity of treatment, particularly if random variables are involved. As soon as we introduce randomness, it is incorrect to assume that the derivatives of any variables exist. To the extent that solutions exist, they are defined in terms of Lebesgue integrals, a mathematical area that is not particularly well known. Almost all the work in analysis proofs would involve extremely obscure corner cases. 
(“What happens if government spending is $20 if <i>t</i> is rational, $0 otherwise?”) It is one thing to define continuous time models where the components are passive resistances and capacitances that obey simple laws of physics; the interactions created by entities reacting in real time to inputs create the possibility of highly pathological outcomes.<br /><br />A related issue is the question of time delays. Within a discrete time model, a time delay is straightforward: we add a new state variable that is the original variable from the previous period. In continuous time, the amount of information contained within any non-zero interval is theoretically infinite. (For example, we could theoretically encode all human knowledge into a signal that lasts less than one microsecond. In practice, information channels have finite bandwidth, so we do not see this effect.) In order to model a time delay, we end up with an infinite dimensional system. The statements of mathematical results (such as stability theorems) that we have available for infinite dimensional nonlinear systems would comprise a very small book.<br /><br />Finally, accounting is unusual within a continuous time system. We are no longer doing familiar accounting with stocks and flows that can be related with basic arithmetic. We instead would have to define all accounting relationships as stocks being the Lebesgue integral of flows. Such an environment is much less intuitive, and more prone to error. Furthermore, there is no clean way to model events that cause discrete jumps in stock variables, without invoking the Dirac Delta Function. This so-called function is not actually a function of time at all (it is a distribution), and so it is difficult to relate it to system behaviour that is defined as mathematical operations on time series.<br /><br />The only real cost to discrete time analysis is that some of the more easily understood stability results (such as Lyapunov functions) are lost. 
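A minimal sketch of the discrete time side of this comparison (my own illustration using numpy, not code from the <i>sfc_models</i> package): a one-period delay becomes an extra state variable, and asymptotic stability of a linear discrete time system reduces to checking that the eigenvalues of the state matrix lie inside the unit circle.

```python
import numpy as np

def is_stable(A):
    """A linear discrete time system x[t+1] = A @ x[t] is asymptotically
    stable iff every eigenvalue of A lies strictly inside the unit circle."""
    return bool(np.max(np.abs(np.linalg.eigvals(A))) < 1.0)

# Output depends on current output and its one-period lag; the delay is
# handled by adding a state variable (the second row copies x1 into x2).
A = np.array([[0.5, 0.2],
              [1.0, 0.0]])
print(is_stable(A))  # True: the eigenvalues are roughly 0.76 and -0.26
```

No Lyapunov function is needed for the linear case; the eigenvalue condition is the discrete time analogue of requiring poles in the left half-plane in continuous time.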
However, it is possible to define the discrete time equivalents, and the general lack of computational tractability of such results for high dimensional systems means that the loss of this theory is not practically significant.<br /><div><br /></div>(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com10tag:blogger.com,1999:blog-5908830827135060852.post-71201528467427222017-05-13T22:12:00.000-04:002017-05-14T09:00:23.056-04:00Labour Market Tightness Confusion<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody><tr><td style="text-align: center;"><a href="http://books2read.com/interest-rate-cycles" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="200" src="https://3.bp.blogspot.com/-z0WNloMGNE0/WReYLwclOyI/AAAAAAAACug/Wy3buD6PxqY9bIWjoiKq9Rqfr6BuRbQtACKgB/s200/ereport02_irc_web.png" width="133" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Click to find retailer.</td></tr></tbody></table>The latest set of data out of the United States increases the confusion factor associated with future monetary policy and inflation trends. Although I am a recovering secular bond bull, it seems that the bond market remains somewhat complacent about Federal Reserve policy (with my usual disclaimer about a near-term recession). Historically, central banks followed the economy with a lag, and I would not expect things to be radically different this time.<br /><br />This article discusses a few topics that have piqued my interest in recent days. Unfortunately, discussions of theory are back. I am starting to pursue a collaboration with Alexander Douglas (an academic philosopher), as we wish to write an article about Dynamic Stochastic General Equilibrium (DSGE) models from a philosophy of science perspective. 
As a result, I feel somewhat like I am trapped in a <a href="https://mygeekwisdom.com/2013/11/02/just-when-i-thought-i-was-out-they-pull-me-back-in/" target="_blank">parody of <i>The </i></a><i><a href="https://mygeekwisdom.com/2013/11/02/just-when-i-thought-i-was-out-they-pull-me-back-in/" target="_blank">Godfather, Part III</a>, </i>being pulled back in to discuss DSGE macro.<i> </i>Although I want to otherwise avoid being bogged down in theory, the belief that we can discuss the empirical data without reference to theory in this case is just wishful thinking.<br /><br /><a name='more'></a><h2>Unemployment and Inflation Both Down</h2>I will start with a discussion of the data, which would presumably be an empirical discussion.<br /><br />The big excitement on the Fed watching front was the continued decline in the unemployment rate, punching through whatever NAIRU estimates are floating around. (The chart below uses the NAIRU estimate from the Congressional Budget Office, which may not be the preferred measure for those who believe in the NAIRU. I comment on some theoretical issues about NAIRU below.)<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-9Xn4mDn3Z4A/WRenRApzH6I/AAAAAAAACuw/ZKpwzmKpdII8ENPvGBKGd5AWf4EWOFS7gCLcB/s1600/c20170514_us_UR_NAIRU.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="U.S. 
Unemployment Rate Versus NAIRU" border="0" src="https://1.bp.blogspot.com/-9Xn4mDn3Z4A/WRenRApzH6I/AAAAAAAACuw/ZKpwzmKpdII8ENPvGBKGd5AWf4EWOFS7gCLcB/s1600/c20170514_us_UR_NAIRU.png" title="" /></a></div><br />The NAIRU concept has been the subject of a lot of empirical study, and the purported conclusion is that a falling unemployment rate is associated with rising inflation.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-F0iLkLfLiAA/WReoQ3dXApI/AAAAAAAACu4/B8oBrtIG5FoX2J5cy-PYnbDp9Y0itOnOwCLcB/s1600/c20170514_us_hourlyearn_inf.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="U.S. Wage And Core CPI Inflation" border="0" src="https://3.bp.blogspot.com/-F0iLkLfLiAA/WReoQ3dXApI/AAAAAAAACu4/B8oBrtIG5FoX2J5cy-PYnbDp9Y0itOnOwCLcB/s1600/c20170514_us_hourlyearn_inf.png" title="" /></a></div><br />Of course, the response of the data to that empirical work is to ignore it: inflation is falling. Core CPI has dipped below 2%, and average hourly earnings growth is mired at a level that matches the lows of the "jobless recovery" of the early 1990s.<br /><br />This divergence has been the subject of much reflection, and needless to say, there is a bias in favour of the theoretical concept of NAIRU versus observed inflation data among many economists. There is a great deal of analysis that rips apart the various employment and inflation data to prove various points.<br /><br />My bias is to suggest that inflation has been stable since the early 1990s for a number of institutional reasons, particularly labour market reform. My thinking was influenced by the book <i>Full Employment Abandoned: Shifting Sands and Policy Failures</i>, by William (Bill) Mitchell and Joan Muysken. They highlight the 1994 OECD Jobs Study as a shift towards "full employability" (and incidentally gutting workers' bargaining power). 
(It's been a long time since I read that book, so I cannot say how close my thinking is to theirs.)<br /><br />As a result, I find it completely unsurprising that there has been no relationship between the unemployment gap (difference between the unemployment rate and NAIRU) and inflation. Unless something big changes, there is little reason to believe that inflation is going to move significantly in one direction or another.<br /><br />That said, I believe that the Federal Reserve would like to get the policy rate away from the zero bound. Unless something goes seriously wrong, a subdued pace of rate hikes (plus a gradual run-down of securities holdings) seems to be their preferred course of action. If inflation does not move (as I suspect), they will take credit for "stabilising inflation expectations" and declare victory for New Keynesian economics.<br /><h2>Theory: Not Particularly Helpful</h2>Over the past few decades, mainstream economists assured everyone that monetary policy alone could stabilise the cycle, and a fairly impressive amount of resources was dumped on monetary economic research.<br /><br />So one would think there would be an adequate theory to allow them to determine how to set the interest rate in response to developments in the labour market and inflation. They had one job, and that was it. Admittedly, they were somehow surprised by the zero lower bound of interest rates.<i> (Apparently one of the side effects of not actually solving the equations in your mathematical models is that you have no idea how to predict their behaviour. Who knew?) </i>Nevertheless, a complete inability to judge the level of slack in the economy is not exactly awe-inspiring.<br /><br />Although this looks bad, the situation is more difficult than it might appear. I have been reading commentary by non-economists who argue that economists should be more empirically minded. "Use data to choose models" was one insight that I saw offered. 
This is almost as useful as saying that economists "should be more scientific." Believe it or not, economists actually thought of this -- decades ago.<br /><br />The problem is not that the data does not fit the models; rather, the data fits too many models. I have not formalised my model, but arguing that inflation is stuck near 2% for institutional reasons does fit the post-1992 data well. At the same time, various mainstream models are able to generate output consistent with observed data. As I discussed in <i><a href="http://books2read.com/interest-rate-cycles" target="_blank">Interest Rate Cycles: An Introduction</a>, </i>mainstream economics relies on variables whose values are determined by statistical inference (the output gap, NAIRU, the natural rate of interest). So long as economic time series are relatively low frequency, those imputed values will shift in such a way that the model output will track observed data.<br /><br />The mere existence of such imputed values is not too troubling. Other than some seriously wrong theories (the Quantity Theory of Money) or esoteric curiosities (the Fiscal Theory of the Price Level), most economic theories suggest that "labour market overheating" is related to inflation. In practical terms, that implies that something like an output gap exists (I call it a "generalised output gap" to distinguish it from specific estimates). The problem is that there is no theoretical construct to pin down its value.<br /><br />This ties into the justification for DSGE macro. In the Old Keynesian models, model equations were based on observed data. This would appear to satisfy naive calls for empiricism. Unfortunately, that data is dependent upon the policy frameworks that were in place. What happens if policy shifts? The promise of DSGE macro was that the models would be dependent upon "deep structural parameters," and it would be possible to predict what happens if policy changes. 
From my perspective, this premise is entirely reasonable. If all the data we have for a system includes the effect of a feedback loop (the policy rule), we need to incorporate the possibility of the feedback rule changing in our model estimates. However, this could have been addressed without having to incorporate some of the more dubious features of DSGE models.<br /><br />The entertaining part of the previous observation is that it is one of the main complaints by post-Keynesians about the NAIRU concept (although they certainly do not use the same language!). The mainstream backers of the NAIRU are estimating it, using the current institutional framework as given. <i>If we change the policy framework for employment, NAIRU estimates based on the old framework are entirely useless.</i><br /><br />After that long-winded digression, I can now return to some of the debates about the unemployment rate and overheating (such as the effect of an infrastructure investment surge). The key thing to keep in mind is that the U.S. labour market is obviously fragmented, with some regions featuring widespread underemployment, and others featuring tight labour markets. Any policy that increases demand in the areas with tight labour markets would be much more inflationary than one that is aimed at the areas of underemployment.<br /><br />In my view, the risk to my rosy inflation outlook is the possibility of sectoral overheating. For example, a tax cut aimed at the well-off could easily boost inflation in the sectors that cater to their needs. I doubt that this would be enough to drive the overall inflation rate beyond 3%, but that would be enough to cause panic amongst Fed hawks. 
In any event, the overall unemployment rate may not matter; what matters is the interplay of labour market slack between the various sectors.<br /><h2>Low Unemployment and the "Curve"</h2>I want to finish off with a link to Gerard MacDonell's article <a href="https://beinnbhiorach.com/2017/05/11/low-unemployment-and-where-the-curve-is/" target="_blank">"Low Unemployment and 'the Curve'"</a>. He discusses many of the issues covered here. I just wanted to highlight his comments about the Taylor Rule.<br /><blockquote class="tr_bq">Crucially, the Taylor Rule does <i>not </i>identify the level of the funds rate appropriate to the current position of the business cycle. It would be a bizarre coincidence, as Dudley emphasizes, if the funds rate exactly appropriate to today could be described as a linear combination of the unemployment rate (or output gap) and the inflation rate. <br />… No, the key feature of the Taylor Rule is that it incorporates <i>stabilizing feedback signals</i> that allow it to perform well in most economic environments (at least as simulated by econometric studies) despite its failure to identify the appropriate level of the fund rate in real time (which would be impossible).</blockquote>In other words, the key feature of a Taylor Rule is not the level of the interest rate it suggests, but rather that it should move in a stabilising fashion.<br /><br />This actually dovetails with robust control theory, which replaced optimal control theory. A feedback rule that is a linear combination of variables is known as a proportional feedback controller, and one of the empirical regularities of control theory is that such controllers tend to be robust to model error. There are no dynamics within the controller that can be destabilised by a mismatch between your model and actual behaviour. 
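As a sketch of what such a static rule looks like in code (using the coefficients from Taylor's 1993 paper; the function itself is my own illustration, not code from the Fed or from Gerard's article):

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Taylor's (1993) rule: the policy rate (in percent) is a static
    linear combination of the inflation and output gaps. There is no
    internal state, hence no controller dynamics to destabilise."""
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# With inflation at target and a closed output gap, the rule recommends
# the neutral nominal rate: 2% real + 2% inflation = 4%.
print(taylor_rule(inflation=2.0, output_gap=0.0))  # 4.0
```

The rule is purely proportional: each period's recommendation depends only on that period's inputs, which is exactly why a model-reality mismatch cannot build up inside the controller.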
Conversely, optimal control rules attempt to exploit the dynamics embedded within the assumed form of the model, and tend to blow up spectacularly when there is a mismatch between model and reality. <i>(Let's hope Janet Yellen was not taking that optimal control thing too seriously.)</i><br /><br />The straightforward interpretation is that policymakers should hike rates in response to the falling unemployment rate. That is, they should ignore whether they have a perfect model, and respond to the data in a presumably stabilising fashion.<br /><br />Gerard disagrees with that interpretation, arguing that policy makers need to keep in mind the proximity to the zero bound. Errors cannot be easily undone.<br /><br />I have not had a chance to think about this too carefully, but my initial reaction is somewhat different. Firstly, I am generally unimpressed by appeals to the zero bound. Secondly, I look at the predictive performance of NAIRU, and I have serious doubts about its usefulness in the feedback law. I would instead put more weight on inflation, which is not doing anything. 
Correspondingly, I would also see little reason to tighten rates solely based on the unemployment rate.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com3tag:blogger.com,1999:blog-5908830827135060852.post-66108454737336488742017-05-10T13:01:00.000-04:002017-05-11T06:24:09.804-04:00Book Review: Can We Avoid Another Financial Crisis?<a href="https://www.amazon.com/gp/product/1509513728/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1509513728&linkCode=as2&tag=bondecon09-20&linkId=1cf6120d126a253f40520d9ffd3d3604" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&MarketPlace=US&ASIN=1509513728&ServiceVersion=20070822&ID=AsinImage&WS=1&Format=_SL160_&tag=bondecon09-20" /></a><br />Steve Keen, the well-known critic of mainstream economics and a professor at Kingston University, published <i>Can We Avoid Another Financial Crisis?</i> His argument is that unsustainable private debt dynamics make a future crisis of some sort inevitable, and that the biases of mainstream economics help ensure that little useful will be done to prevent this outcome. The book offers an overview of his economic theories, which occupy a somewhat distinct niche within the heterodox tradition. Although the book is interesting, I have some reservations about his strategy vis-à-vis mainstream economics within the book.<br /><h2><a name='more'></a>Book Description</h2>Polity Press published the book in 2017, and the paperback edition is a brief 129 pages (excluding end matter). 
The book consists of six chapters:<br /><ol><li>From Triumph to Crisis in Economics</li><li>Microeconomics, Macroeconomics, and Complexity</li><li>The Lull and the Storm</li><li>The Smoking Gun of Credit</li><li>The Political Economy of Private Debt</li><li>A Cynic’s Conclusion</li></ol>Steve Keen is the author of <i>Debunking Economics: The Naked Emperor Dethroned?</i><br /><i><br /></i>This review is going to discuss the book roughly in the reverse order of the chapters.<br /><h2>Can We Avoid A Financial Crisis?</h2>At the risk of spoiling the punch line, Keen believes that a crisis will not be avoided. He is not attempting to predict a particular date, but argues that unsustainable debt buildups will come to a head in 2020. In his view, policymakers are following poor guidance from mainstream economics, and will not make an effective response.<br /><br />The only caution I would draw from this conclusion is that the logic of Keen’s models suggests that private debt buildups will reverse, causing a slowdown, but the implications for a <i>financial</i> crisis are less clear (as defined by defaults cascading through the financial system).<br /><br />I will use my home country of Canada (which Keen views as being one of the main targets of a slowdown) as an example. It is easy to see how the spectacular pop in house prices and building activity will reverse by 2020. However, it is unclear why this will turn into a financial crisis.<br /><br />Obviously, weaker links in the financial system would be culled (a process that one could argue has started), but the core of the system is protected by CMHC guarantees, and lender-of-last-resort operations if need be. 
If you worked at one of the smaller non-standard lenders (or were an investor in their securities), it might feel like a crisis, but it is unclear why the big banks will be discomfited by smaller competitors crashing and burning.<br /><br />Looking further afield, one may note that investment bankers have filled most economic portfolio appointee positions in recent years. Although this perhaps is not attractive politically, it means that governments are staffed with people with the requisite background to manage bailouts of the financial system.<br /><h2>The Credit Engine</h2>The driving force for Keen’s analysis is his economic model. It was improved because of contact with post-Keynesian economists, but it appears to follow the trends of the model’s earlier iterations. The model is straightforward, consisting of a few accounting identities as well as basic behavioural relations. The dynamics suggest that the driver of growth during expansions is the tendency of private sector debt to outstrip income growth. This becomes unsustainable, and then the cycle goes into reverse. However, this is the result of the reversal of borrowing in the non-financial sector, and not necessarily a disruption in the financial sector.<br /><br />I have not spent much time looking at Keen’s model. I have an aversion to continuous time economic models, and I would prefer to work with the standard discrete time stock-flow consistent framework. (For those unfamiliar with the jargon, a discrete-time model has a time axis that consists of steps, such as months or quarters. A continuous time model has the time axis being the real number line, similar to most models in physics.) I believe it would be straightforward to create a model in discrete time that emulates the behaviour of Keen’s model using my Python framework (for example). 
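As a rough illustration of that belief, here is a toy discrete time model of my own construction (emphatically <i>not</i> Keen's actual model, and not the <i>sfc_models</i> framework; the parameter values are arbitrary assumptions): borrowing props up income growth until the debt-to-income ratio hits an assumed ceiling, after which borrowing reverses and the ratio falls.

```python
def debt_cycle(periods=50, ceiling=1.3):
    """Toy discrete time sketch of a debt-driven cycle: borrowing boosts
    income growth until debt/income reaches a ceiling, then borrowing
    reverses and income growth slows."""
    income, debt = 100.0, 100.0
    ratios = []
    for _ in range(periods):
        if debt / income < ceiling:
            borrowing = 0.05 * income   # credit expansion phase
        else:
            borrowing = -0.02 * debt    # deleveraging phase
        debt += borrowing
        # Income growth: a 2% trend plus a demand effect from (de)leveraging.
        income *= 1.0 + 0.02 + 0.2 * (borrowing / income)
        ratios.append(debt / income)
    return ratios

ratios = debt_cycle()
# The debt/income ratio climbs during the boom, then falls once the
# ceiling is breached and borrowing reverses.
```

Even this caricature reproduces the qualitative story: the boom and the reversal both come from non-financial sector borrowing behaviour, with no financial sector disruption anywhere in the model.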
I expect that I will discuss why I prefer discrete time models in a later article.<br /><br />One concern with Keen’s framework is that the causality is unclear: does debt drive growth, or is debt merely a side effect of growth?<br /><br />The key example is the case of housing. Most of the growth in private debt since the early 2000s in the developed world can be traced to housing markets. Since households’ down payments have at best grown in line with income, rapid house price increases imply a need for larger mortgages. We could therefore argue that high house prices cause greater debt growth. However, in practice, the higher house prices are associated with loosening credit standards. This was certainly the case in Canada; the start of the explosion in household debt was associated with the loosening of CMHC lending standards in the late 1990s.<br /><br />The difference in interpretation matters. Policy makers were only able to control the lending standards; the amount of debt that results was an outcome of policy shifts. Correspondingly, it would be difficult to view private debt levels as a specific target for policy, in the same manner that the inflation rate currently is. Therefore, Steve Keen’s arguments would need further fleshing out before being put into practice.<br /><h2>“Modern Debt Jubilee”</h2>Professor Keen proposed what he calls the “Modern Debt Jubilee”: the central bank transferring money to private sector accounts (similar to “helicopter money”), but with the proviso that the money be used to pay down debt first. As he notes, the implementation is more difficult than it is to describe. (Does paying bills count as “debt”?)<br /><br />I am not a fan of “helicopter money” (as I discussed in essays in “<a href="https://books2read.com/abolishmoney" target="_blank"><i>Abolish Money (From Economics)!</i></a>”). In addition to the difficulty of tying the transfer to debt repayment, I also question the optics. 
Why should a debt-free rich person be able to spend the transfer on caviar and cognac, while poor people are forced to reduce the balance of their mortgages?<br /><h2>The Political Economy of Private Debt</h2>Keen argues in Chapter 5 that there are perverse incentives for political leaders with respect to private debt. A leader who unleashes a bubble in the private sector is lauded as a hero as a result of the increase in growth (and rising home prices). The bubble persists until the unsustainable private debt growth reverses, and a later politician is stuck cleaning up the mess. If we had a system that monitored private debt growth (similar to the hokey public debt clocks), there might be greater accountability among politicians.<br /><br />Although I am sympathetic to this, I am highly averse to unelected bodies trying to police “the truth.” We have freedom of the press for a reason; it is up to political parties to present their case themselves, without being moderated by self-selected experts.<br /><h2>The History of Keen’s Models</h2>Chapters 3 and 4 discuss the history of Keen’s models since the early 1990s, and the various debt-financed booms that happened during that period. He points out that his models hinted at the various crises that afflicted developed economies, while the mainstream was proclaiming the “Great Moderation” thesis. For readers who are new to economics, this history is certainly interesting. For readers more familiar with the history, Keen’s account may still seem quite novel when compared to conventional histories.<br /><h2>The Darned Mainstream</h2>For those familiar with Keen’s other writings, Chapters 1 and 2 cover familiar ground: mainstream economics is all wrong, and its practitioners completely misunderstand everything. For readers who are new to the topic, these chapters act as an updated summary of<i> Debunking Economics.</i><br /><i><br /></i>I plead guilty to writing pretty much the same thing in many of my articles. 
However, my feeling is that Keen should have held back in this case. In this book, he is arguing that the developed world is facing a crisis that will be of a similar magnitude to the Financial Crisis. Even if mainstream economics is part of the problem, writing the book in such a way that no self-respecting mainstream economist will go past the first few pages is not exactly a great method for coalition-building behind your policy remedies.<br /><br />At a minimum, this discussion should have been deferred to the middle of the book, with parts moved to an appendix. I fail to see how assumptions about the aggregation of demand functions are going to cause mainstream economists to back policies that will cause a financial crisis. Similarly, there is little reason for anyone other than friends and family to care what Robert Lucas in particular thinks. Instead, we need to ask: is the policymaking response to a potential crisis correct or incorrect, and why?<br /><h2>Concluding Remarks</h2>Steve Keen’s models and thinking are interesting, and he is certainly a passionate writer. However, my feeling is that the beginning of the book put too much emphasis on theoretical disputes, and less on the topic at hand: crises.<br /><br /><br /><a href="https://www.amazon.com/gp/product/1509513736/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1509513736&linkCode=as2&tag=bondecon09-20&linkId=b6eeecf201c8c2e03beac8b9934592e5" target="_blank">Can We Avoid Another Financial Crisis? 
(Amazon link)</a><img alt="" border="0" height="1" src="//ir-na.amazon-adsystem.com/e/ir?t=bondecon09-20&l=am2&o=1&a=1509513736" style="border: none !important; margin: 0px !important;" width="1" /><br /><img alt="" border="0" height="1" src="//ir-na.amazon-adsystem.com/e/ir?t=bondecon09-20&l=am2&o=1&a=1509513728" style="border: none !important; margin: 0px !important;" width="1" /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com11tag:blogger.com,1999:blog-5908830827135060852.post-64673553526788585072017-05-07T09:37:00.000-04:002017-05-07T09:38:49.497-04:00Book Review: When The Bubble Bursts<a href="https://www.amazon.com/gp/product/1459729803/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1459729803&linkCode=as2&tag=bondecon09-20&linkId=b1fb932682c8410eb812cbe036ea55e9" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&MarketPlace=US&ASIN=1459729803&ServiceVersion=20070822&ID=AsinImage&WS=1&Format=_SL160_&tag=bondecon09-20" /></a><br />Hilliard MacBeth published <i>When the Bubble Bursts: Surviving the Canadian Real Estate Crash </i>in 2015. It is mainly targeted at Canadians, but it would be of interest to foreigners who wish to understand what is happening in the Great White North. Although real estate bubbles are a familiar international phenomenon, there are some institutional differences that matter.<br /><br /><a name='more'></a><h2>Book Description</h2>The book is 221 pages (excluding end matter), and was published by Dundurn Press. 
The book is divided into three parts:<br /><ol><li>The Bubble</li><li>It Won't Be A Soft Landing</li><li>Surviving the Canadian Real Estate Crash.</li></ol><div>The paperback ISBN is 978-1-4597-2980-3; it is also available in electronic format.</div><div><br /></div><div>Hilliard MacBeth has been an investment advisor for over 35 years, and wrote <i>Investment Traps and How to Avoid Them</i> in 1999, predicting the collapse of the dot-com bubble.</div><h2>Suitability</h2><div>The book is intended as a personal finance book that can be read by a wide audience. At the same time, MacBeth does a good job of capturing the various economic and financial forces in play in Canada. It is very easy to recommend this book to almost all Canadians who are interested in the housing bubble or personal finances.</div><div><br /></div><div>However, the reality is that most of my readership is outside of Canada; currently, I am getting more site visitors from London, England than I am from all of Canada. For these readers, there are some caveats to keep in mind.</div><div><br /></div><div>If you are mainly interested in personal finance, be aware that the details of how personal finances are arranged in Canada differ from those elsewhere, so this would not be a first choice for personal financial reading. That said, MacBeth has an interesting, cautious approach to investing and home ownership, so it might be of interest if you understand the basics already.<br /><br /></div><div>If you are an investor or just interested in economics, this book will be of interest, so long as you accept that some parts of the book are not aimed at you. You will be getting a good view of what the situation looks like from the inside of the bubble, which more technical macro approaches might gloss over. 
If you are interested in behavioural finance, this book offers a lot of anecdotes that might help develop your understanding of why housing finance seems to be getting out of control in the modern world.</div><div><br /></div><div>Given the nature of my audience, I will focus mainly on the macro side of the book.</div><h2>Housing Mania Builds</h2><div>The introduction describes the genesis of the book. During the 1990s, Canadians had been captivated by various bubbles in the stock market. During the mid-1990s, the Bre-X saga played out in the business press: was the company legitimate? Was it possible that other investors could make the same huge gains as the early investors? <i>(No, and no.) </i>Once Bre-X went onto the rocks, the tech bubble took over, with Northern Telecom (NorTel) eventually rising to be 30% of the market capitalisation of the S&P/TSX index in September 2000. </div><div><br /></div><div>Hilliard had managed to steer his clients clear of tech stocks, and so they did relatively well. Even so, their enthusiasm for equities dimmed. Capital was starting to be steered towards real estate. Since his clients were pulling money away from financial assets, he had an incentive to be concerned.</div><div><br /></div><div>Hilliard is skeptical about housing as an investment (a feeling I share). He bought a house in 1990, and sold it for the same price in 2000. (I believe that the boom in national home prices can be dated to about 1998 or 1999.) There have been various booms and busts over the past decades, particularly in Alberta, as a result of the cyclical nature of the oil business. There was also a fairly messy bust in the condo market in 1990 in Toronto and Vancouver, which took most of that decade to unwind.</div><div><br /></div><div>One thing to keep in mind is that Canadian home prices in many markets were extremely cheap when compared to a country like England. 
In most of Montreal, the normal valuation rule was that you could buy a duplex, and if the second unit was rented, it would pay the mortgage for the entire duplex. (I always enjoyed shocking my counterparts in London with the price of our three bedroom townhouse.) Vancouver and Toronto were the only exceptions to this rule of cheap housing.</div><div><br /></div><div>In other words, Hilliard MacBeth's attitude reflects an old-school Canadian view: that housing is not an investment, and should be quite cheap. Attitudes have changed.</div><h2>Stay Liquid for a Buying Opportunity</h2><div>MacBeth's advice is that investors should remain liquid, waiting for a buying opportunity. Buying during a panic provides a huge boost to returns. A meltdown in the Canadian housing market would provide plenty of ammunition for a panic, and you certainly do not want your capital tied up in housing at that point.</div><h2>No Canadian Exceptionalism</h2><div>Chapter 8 is titled "Canadian Exceptionalism." In it, he argues that Canada is not an exception (countering an argument that was parroted by various bullish Bay Street analysts). </div><div><br /></div><div>I disagree somewhat, but that is because I am taking a different view of the matter. MacBeth is viewing the situation from the perspective of a Canadian household. From this perspective, it does not matter whether Canada suffers a short meltdown in housing (as was the case in the United States), or a slow melt over a decade: in either case, your personal finances are stuffed if all your net worth is in rental properties.</div><div><br /></div><div>However, from a macro or international perspective, those two scenarios are very different. A rapid meltdown will have fairly obvious investment and economic implications. A slow melt may matter for asset allocators, but it may not be otherwise visible. 
Almost all developed countries are struggling; weakness in the Canadian consumer sector would not be particularly remarkable.</div><div><br /></div><div>In other words, I agree that a slow melt is probably the best Canada can hope for, but a rapid collapse (which is what raises foreign interest) could be avoided. The main arguments for why a rapid collapse could be avoided revolve around the following:</div><div><ul><li>the Canada Mortgage and Housing Corporation (CMHC) has underwritten all the credit risk; and</li><li>the nature of the banking system.</li></ul><div>I discuss these in turn.</div></div><h2>The Ottawa Whale</h2><div>Canadian mortgage lending practices were strictly regulated, and highly conservative. There were some regional booms and busts, but those largely followed regional fortunes.</div><div><br /></div><div>As Hilliard MacBeth describes in Chapter 5 ("Blowing Bubbles: The CMHC and the Government"), the brain trust in Ottawa decided to bring Canadian practices in line with our wilder southern neighbours. As night follows day, a finance-driven bubble ensued. (Hyman Minsky was not too well known in Canada, other than at BCA Research, where I used to work.)</div><div><br /></div><div>All high loan-to-value mortgages <i>have</i> to have mortgage insurance. The price of the insurance is embedded in the loan payments. The CMHC provides most of the insurance; the market was opened up to the private sector, but private penetration has been low. Once the mortgage is insured, the CMHC eats all the residual losses on the loan. 
(It presumably can push back mortgages that were obtained fraudulently, which might be a concern in Vancouver, a market notorious for fairly shady real estate practices.)</div><div><br /></div><div>If we want to translate what the CMHC has done into the jargon used in the crisis, <i>it has written CDS protection on pretty much the entire high risk mortgage market.</i> </div><div><br /></div><div>Hilliard MacBeth suggests on page 137 that the CMHC would only be able to cover losses up to its reserves (typically around $30 billion). This does not seem to be correct; the CMHC is normally described as a full faith and credit obligation of the Federal Government of Canada. <i>(Importantly, this is not a legal opinion, which I am not qualified to give in the first place. You should contact your legal counsel to check whether there are any asterisks to full faith and credit status.) </i>In other words, the CMHC would likely get its board of directors canned, but the Federal Government will always step up to make CMHC mortgage payments whole.</div><div><br /></div><div>Unless you want to entertain the possibility that the Government of Canada will voluntarily renounce this obligation, the financial system cannot suffer the same sort of meltdown that occurred when the solvency of protection sellers was called into question. (I discuss in <a href="http://books2read.com/understanding-government-finance" target="_blank"><i>Understanding Government Finance</i></a> why I view default for a government like Canada to be voluntary.) Meanwhile, this protects the perceived solvency of private sector financial firms that rely upon the credit guarantee.</div><h2>Banks: Earnings Risk, Not Credit Risk</h2><div>Chapter 9 ("Banks: They Will Survive (But They Won't Thrive)") describes the outlook for the major Canadian banks. 
His view is similar to mine: a meltdown in the consumer sector greatly damages the earnings prospects for these banks, but their solvency is probably not in question.</div><div><br /></div><div>This might be viewed as complacent if there is a rapid global meltdown, and all of the banks' customers are at risk. However, a relatively slow melt of the Canadian housing market is not enough to be truly scary for the banks, given the nature of CMHC insurance.</div><h2>Surviving and Profiting From the Downturn</h2><div>In the interest of time, I will not try to summarise the discussion in the third part of the book, which discusses how to position your personal finances. Needless to say, avoiding owning too many properties is a key point. He also discusses the difficulties with renting, which limits the attractiveness of dip-buying in real estate.<br /><br />One of the questions that comes up among foreign investors: how do I position for a meltdown in Canadian housing? Where is the next subprime CDS trade? The book is aimed at retail investors, and so if any such trades exist, they are not really discussed. I could easily be wrong, but there do not seem to be any obvious macro trades to put on here -- other than positioning for a rapid meltdown (which has obvious effects). Once again, the CMHC credit insurance acts to keep the really scary scenarios under wraps. 
As was seen recently, there may be some interesting stock-specific trades, but I am not the person to discuss such possibilities.</div><h2>Concluding Remarks</h2><div><i>When the Bubble Bursts</i> is an interesting book, even for international readers with an interest in understanding the dynamics of the Canadian housing bubble.</div><div><br /></div><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com3tag:blogger.com,1999:blog-5908830827135060852.post-47348129763827854472017-05-03T09:00:00.000-04:002017-05-03T09:00:00.175-04:00Canadian Housing Crash (Again)!<a href="https://www.amazon.com/gp/product/1459729803/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1459729803&linkCode=as2&tag=bondecon09-20&linkId=b1fb932682c8410eb812cbe036ea55e9" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&MarketPlace=US&ASIN=1459729803&ServiceVersion=20070822&ID=AsinImage&WS=1&Format=_SL160_&tag=bondecon09-20" /></a>The news that some Canadian alternative mortgage lenders have had financing issues has revived speculation about the Great Canadian Real Estate Crash. Certainly, the Canadian economy is extremely vulnerable to another global financial spasm like 2008. I am somewhat of a Negative Ned, and can think of many catalysts for such an event. However, barring such an event, the problems look like they can be <strike>contained</strike> managed.<br /><br /><a name='more'></a>I am now rereading Hilliard MacBeth's <i>When the Bubble Bursts: Surviving the Canadian Real Estate Crash.</i> The book was published in 2015, and thus missed the latest surge in housing prices. The book is mainly targeted at Canadians, but may be of interest to foreigners who wish to understand how the Canadian housing market functions. 
There are some key differences from the United States, particularly in the area of mortgage insurance. (I hope to write a review of the book soon.)<br /><br />Whether or not the Canadian housing market will have an obvious collapse is not clear. <a href="http://www.bondeconomics.com/2013/09/why-canadian-economy-is-doomed.html" target="_blank">I discussed the issues way back in 2013</a> (<a href="http://www.bondeconomics.com/2013/09/why-canadian-economy-has-not-blown-up.html" target="_blank">this followup explains why predicting a "collapse" is difficult</a>). But regardless of the precise trajectory of Canadian home prices, it is safe to say that many younger Canadians are trapped by having too high a level of mortgage debt. This was a very preventable public policy disaster. Therefore, unless a dose of wage inflation comes from somewhere, the outlook for the Canadian middle class looks grim.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-FsZhGosalms/WQi5qodUayI/AAAAAAAACt8/QROwo3vtstgY7EI5F3nwdsywwTnIJUqdwCLcB/s1600/c20170503_canada_starts.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Chart: Total Housing Starts" border="0" src="https://1.bp.blogspot.com/-FsZhGosalms/WQi5qodUayI/AAAAAAAACt8/QROwo3vtstgY7EI5F3nwdsywwTnIJUqdwCLcB/s1600/c20170503_canada_starts.png" title="" /></a></div><br />As I stated in my earlier articles, I am not particularly concerned about house prices. The lack of good data from Statistics Canada is one problem; the other is that I do not see house prices as being the issue. The real concern is employment: if too many people in construction lose jobs, that creates a negative feedback loop.<br /><br />The latest spike in housing starts (above) attracted some attention, but that could easily have been related to the weather. I was out bicycling in March, which is not a normal seasonal activity in Montreal. 
(Admittedly, there was some amazing cross-country skiing two weeks later.) I do not have the non-seasonally adjusted data to be able to evaluate this, unfortunately.<br /><br />If we ignore the latest spike, we see that activity has been relatively moderate in recent years. This probably reflects the tightening of mortgage lending standards.<br /><div class="separator" style="clear: both; text-align: center;"></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-Lk8lbjoN0PI/WQi5qm1aCHI/AAAAAAAACuI/7A5OUehIkrQZf5M2mjxwYVaUv-_dYsKKQCEw/s1600/c20170503_canada_starts_2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Chart: Starts, Multi versus Detached" border="0" src="https://4.bp.blogspot.com/-Lk8lbjoN0PI/WQi5qm1aCHI/AAAAAAAACuI/7A5OUehIkrQZf5M2mjxwYVaUv-_dYsKKQCEw/s1600/c20170503_canada_starts_2.png" title="" /></a></div><br />The chart above explains my somewhat limited concerns. Construction of detached houses has been mired at low levels for almost a decade; all of the activity has been in multi-family dwellings. Although these are not solely condos, condos are presumably a large part of this. In other words, construction activity is relatively concentrated; we do not see the construction of McMansions across Canadian suburbs.<br /><br />The condo market is presumably vulnerable. However, we did have a speculative condo bubble in the early 1990s in Toronto and Vancouver, and it was unwound without a crisis (unless you owned a condo). The Canadian banks are diversified, and not at risk if only a narrow part of their customer base is in trouble. Even if the market dries up, it will take time for the existing pipeline to finish construction. 
Furthermore, I believe that Canadian condo financing is less exuberant than the pre-crisis American custom; construction does not go ahead until 50-60% of the units are presold.<br /><br />In order for the scarier scenarios to play out, there needs to be a larger shock to hit the wider Canadian economy. Given the synchronisation of business cycles, some form of global recession will eventually hit. All we can hope is that Canadian politicians relax fiscal policy in the face of such a downturn, and that some of the excesses in real estate will have already been worked out.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com1tag:blogger.com,1999:blog-5908830827135060852.post-60106938202855097382017-04-30T09:00:00.000-04:002017-05-03T12:02:53.258-04:00Interpreting DSGE Mathematics<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-PbnNLyn6-XU/VJN8rAfZ1uI/AAAAAAAABPg/rouADpT1fVIQdLRae3mum8Z_rf2GjbPJQCPcB/s1600/logo_DSGE.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-PbnNLyn6-XU/VJN8rAfZ1uI/AAAAAAAABPg/rouADpT1fVIQdLRae3mum8Z_rf2GjbPJQCPcB/s1600/logo_DSGE.png" /></a></div>To describe the mathematics of Dynamic Stochastic General Equilibrium (DSGE) models as confusing is an understatement. Although trained in applied mathematics, I always had difficulties following the logic used. I now realise that the economists were not solving the global optimisation problem set out at the beginning of the paper. In fact, they are solving a different mathematical model. Importantly, this new formulation no longer solves the original optimisation problem. As a result, it is incorrect to assume that model behaviour reflects optimisation by households.<br /><br /><a name='more'></a>This article uses some mathematical notation, rendered by MathJax. 
Some browsers may have difficulty rendering these equations. Non-mathematicians may be able to follow my logic, but I am sticking fairly close to the mathematics. What the arguments mean in plain English is open to interpretation, but in my view, we need to accept that the behaviour of published models is arbitrary, and not drawn from the constraints of optimisation.<br /><br /><i>Update: I added a small section discussing one potential mainstream response to this article. The construction I discuss is only of more interest if it is extended to the sticky-price models, assuming that can be done. Also, I got some interesting feedback from Brian Albrecht (@BrianCAlbrecht) and Roger E. A. Farmer (@farmerff) on Twitter. I need to digest the information, but as I suspected, my approach towards DSGE macro mathematics is somewhat literalist. There is a large "back story" behind the various equations that appear in published papers and books. This is roughly what I argued; I may reconsider whether or not my assessment at the bottom of the article is too harsh later. </i><br /><script type="text/x-mathjax-config"> MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}}); </script> <script type="text/x-mathjax-config">MathJax.Hub.Config({ TeX: { equationNumbers: { autoNumber: "AMS" } } }); </script><script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"> </script><br /><h2>What the Mathematical Problem Looks Like</h2>(Please note that this article is mainly aimed at representative household DSGE models. 
Other DSGE models may be less problematic.)<br /><br />If we look at a DSGE model paper and attempt to translate the contents into a well-posed mathematical problem, it appears to look something like the following model, labelled <i>M1</i>.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-v5Kx4CYawSc/WQXOKf7qmiI/AAAAAAAACtc/smexpfpQh9kGASnmfGLvc0aH1Day4mq1ACLcB/s1600/line.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-v5Kx4CYawSc/WQXOKf7qmiI/AAAAAAAACtc/smexpfpQh9kGASnmfGLvc0aH1Day4mq1ACLcB/s1600/line.png" /></a></div><br /><b>Model <i>M1</i>.</b> Define a set of state variables<i> x</i>, which are time series on time axis <i>T</i>. That is, for each element $x_i$ of <i>x</i>, $x_i(t) \in {\mathbb R}$ for all $t \in T$.<br /><br />Let the set of all possible $x$ be denoted <i>A</i> (the set of any conceivable state vector).<br /><br />Impose a set of constraints <i>C</i> on the state variables <i>x. 
</i>We define the set of feasible solutions ${\cal F}$, which is a subset of <i>A, </i>for which <i>C(x)</i> is true.<br /><br />Define a utility function <i>U</i> that is a function of <i>x</i>.<br /><br />The solution to the model is the vector $x^*$ which maximises $U$, that is:<br />\[<br />x^* = {\mathop{\mathrm argmax}}_{x \in {\cal F}} U(x).<br />\]<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-v5Kx4CYawSc/WQXOKf7qmiI/AAAAAAAACtc/3zub9Mmt79QcUmU5jwHFYKBnQP9vNaAjACEw/s1600/line.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-v5Kx4CYawSc/WQXOKf7qmiI/AAAAAAAACtc/3zub9Mmt79QcUmU5jwHFYKBnQP9vNaAjACEw/s1600/line.png" /></a></div><i><br /></i><i>(What happens if </i>$x^*$<i> does not exist, or is not unique, is another question.)</i><br /><br />The standard practice in other fields of mathematics would be first to lay out the definition of <i>M1</i>, and then turn to the method of solution.<br /><h2>DSGE Macro Papers</h2>The layout of published DSGE macro papers diverges from standard mathematical practice. Each economic sector is laid out separately, and then the author starts working out various optimality conditions -- called first-order conditions. Once all sectors are defined, the global model solution is approached. 
I found this confusing; the solution method is mixed up with the problem statement, and so it was unclear whether equations were part of the constraints of the problem, or part of the solution.<br /><br />If we rewrite the problem statement in an organised fashion, we get the following structure (Model M2).<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-v5Kx4CYawSc/WQXOKf7qmiI/AAAAAAAACtc/3zub9Mmt79QcUmU5jwHFYKBnQP9vNaAjACEw/s1600/line.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-v5Kx4CYawSc/WQXOKf7qmiI/AAAAAAAACtc/3zub9Mmt79QcUmU5jwHFYKBnQP9vNaAjACEw/s1600/line.png" /></a></div><b>Model <i>M2</i></b>. Define a set of state variables <i>x</i>, which are time series on time axis<i> T</i>. That is, for each element $x_i$ of x, $x_i(t) \in {\mathbb R}$ for all $t \in T$.<br /><br />Let the set of all possible $x$ be denoted <i>A</i> (the set of any conceivable state vector).<br /><br />Impose a set of constraints <i>C</i> on the state variables <i>x</i>.<br /><br />We impose an additional set of constraints <i>O</i> on the state variables <i>x</i> ("first-order constraints"). 
The set of <i>x</i> for which <i>C(x)</i> and <i>O(x)</i> are true is the set of equilibrium solutions, <i>E</i>.<br /><br />Define a utility function <i>U</i> that is a function of <i>x</i>.<br /><br />The solution to the model is the vector $x^*$ which maximises $U$, that is:<br />\[<br />x^* = {\mathop{\mathrm argmax}}_{x \in E} U(x).<br />\]<br /><div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-v5Kx4CYawSc/WQXOKf7qmiI/AAAAAAAACtc/3zub9Mmt79QcUmU5jwHFYKBnQP9vNaAjACEw/s1600/line.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-v5Kx4CYawSc/WQXOKf7qmiI/AAAAAAAACtc/3zub9Mmt79QcUmU5jwHFYKBnQP9vNaAjACEw/s1600/line.png" /></a></div><br /></div>This seems reasonable, but it is troubling that there does not appear to be a systematic way of determining what the first order conditions are. My assumption was that they result from some un-specified theorems taken from microeconomics.<br /><br />However, when we dig further into the proof, the author just seems to find the solution based on the first-order conditions and the constraints. We are actually at a new problem, <i>M3</i>.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-v5Kx4CYawSc/WQXOKf7qmiI/AAAAAAAACtc/3zub9Mmt79QcUmU5jwHFYKBnQP9vNaAjACEw/s1600/line.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-v5Kx4CYawSc/WQXOKf7qmiI/AAAAAAAACtc/3zub9Mmt79QcUmU5jwHFYKBnQP9vNaAjACEw/s1600/line.png" /></a></div><br /><b>Model <i>M3</i>.</b> Define a set of state variables <i>x</i>, which are time series on time axis<i> T</i>. 
That is, for each element $x_i$ of x, $x_i(t) \in {\mathbb R}$ for all $t \in T$.<br /><br />Let the set of all possible $x$ be denoted <i>A</i> (the set of any conceivable state vector).<br /><br />Impose a set of constraints <i>C</i> on the state variables <i>x</i>.<br /><br />We impose an additional set of constraints <i>O</i> on the state variables <i>x</i> (first-order constraints). The set of <i>x</i> for which <i>C(x)</i> and <i>O(x)</i> are true is the set of equilibrium solutions, <i>E</i>.<br /><br />The solution is a vector in <i>E</i>.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-v5Kx4CYawSc/WQXOKf7qmiI/AAAAAAAACtc/3zub9Mmt79QcUmU5jwHFYKBnQP9vNaAjACEw/s1600/line.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-v5Kx4CYawSc/WQXOKf7qmiI/AAAAAAAACtc/3zub9Mmt79QcUmU5jwHFYKBnQP9vNaAjACEw/s1600/line.png" /></a></div><br />That is, we just find a state vector for which the first-order conditions and constraints hold. To the extent that the solution is optimal (as in models <i>M1</i> or <i>M2</i>), it has to be the result of the first-order conditions <i>O</i>.<br /><br />When put this way, what is happening becomes obvious: <i>the author of the paper believed that the first-order conditions imply optimality when applied to M1</i>. That is, if $O(x^*)$ is true, then $x^*$ is the solution to <i>M1</i>. (The fact that "first-order conditions" comes from optimisation theory should have been a clue.)<br /><br />The reason why I found this difficult to follow is that it was very clear that <i>O</i> did not imply optimality when we look at the problem <i>M1</i>. 
I assumed that I was missing added constraints.<br /><h2>Back to Partial Equilibrium</h2>It turns out that what was happening had a simple explanation: the first-order conditions were determined sector by sector, without any reference to the global model.<br /><br />If we want to use economic jargon, the first-order conditions are based on "partial equilibrium" conditions: we look at equilibrium in a single sector or market at a time, without reference to what is happening in other sectors or markets. This is as opposed to "general equilibrium", where all markets (including futures markets) reach equilibrium simultaneously. Since the "GE" in DSGE explicitly refers to general equilibrium, did not the solution method have to look at the global optimisation problem?<br /><br />The answer appears to be no (for at least most of the representative household models I struggled with).<br /><h2>Simple Example</h2>A full DSGE model is complex; I will just look at a cut-down one-period model. The treatment is roughly based on the model developed in Chapter 2 of Jordi Galí's <i>Monetary Policy, Inflation, and the Business Cycle: An Introduction to the New Keynesian Framework. </i>I am following the structure used in that text.<br /><br />The household sector starts with an initial money balance <i>M</i>. It provides labour to the business sector, and uses its wages to purchase output. The business sector aims to maximise profits.<br /><br />The following variables are defined:<br /><ul><li><i>P</i> = Price of goods (nominal).</li><li><i>W</i> = Wage rate (nominal).</li><li><i>N </i>= Number of hours worked.</li><li><i>C</i> = Amount of goods consumed (real units).</li></ul>We assume that all output is purchased (<i>Y=C, </i>in economist jargon).<br /><br />The household sector wants to maximise its utility function <i>U(C,N). 
</i>(We assume that increased consumption improves utility; increased work subtracts from utility.)<br /><br />The household sector has a budget constraint:<br /><br />$$\begin{equation}<br />PC \leq WN + M. \label{eq:budget}<br />\end{equation}$$<br /><br />That is, the amount of spending on goods (<i>PC</i>) is less than or equal to the wage bill (<i>WN</i>) plus the initial stock of money.<br /><br />We then jump to the business sector. The argument is that the business sector wants to maximise profits. I label profits <i>F</i>, and it is defined by:<br />\[<br />F = PC - WN.<br />\]<br /><br />Total output is given by a production function:<br />\[<br />C = A N^{1-\alpha}, A \in {\mathbb R_+}.<br />\]<br />The usual logic is: the maximum profit occurs when $\frac{dF}{dN} = 0$, and we can apply this to the previous two equations to get:<br />$$\begin{equation}<br />\frac{W}{P} = (1-\alpha) A N^{-\alpha}. \label{eq:marginal}<br />\end{equation}$$<br />This appears to pin down the relationship between wages and prices; we then apply this relationship to the first-order conditions for the household sector. (This step is very standard; the textbook applies it to the model in Chapter 2, although the expressions are slightly more complex.)<br /><br /><i>However, this logic assumes that there is no relationship between the wage bill and business revenue. </i>Unfortunately, such a constraint exists: equation ($\ref{eq:budget}$).<br /><br />If we apply ($\ref{eq:budget}$) to the definition of <i>F</i>, we see that:<br />\[<br />F \leq M.<br />\]<br /><br />It is straightforward to see that the maximum profit is equal to <i>M</i>. 
The values of <i>P</i> and <i>W</i> are essentially free to take any value that they wish, so long as they arrive at <i>F=M</i>.<br /><br />We can then apply the relationship $F=M$ to ($\ref{eq:budget}$), and we get:<br />\[<br />PC - WN = M.<br />\]<br />That is, the household budget constraint becomes an equality.<br /><br />We can construct an optimising solution as follows.<br /><ul><li>Apply $C = A N^{1-\alpha}$, and insert into <i>U</i>. That is, form $\hat{U}(N) = U(A N^{1-\alpha}, N).$ </li><li>The standard constraints on the form of <i>U </i>ensure that $\hat{U}$ is differentiable with respect to <i>N. </i>Furthermore, there is a unique $N^* \in {\mathbb R}_+$ such that the derivative of $\hat{U}$ with respect to $N$ is zero at $N^*$, and $\hat{U}(N^*)$ is indeed the maximum of $\hat{U}$.</li><li>Set $C^* = A (N^*)^{1-\alpha}.$</li><li>Fix the wage $W \in {\mathbb R}_+$.</li><li>Set $P = \frac{WN^* + M}{C^*} = \frac{N^*}{C^*} W + \frac{M}{C^*}$. That is, a unit wage cost plus a markup. The constraint $F = M$ trivially holds.</li></ul>In other words, the set of solutions is infinite; real variables are pinned down, but nominal prices are indeterminate. There is no reason to believe that ($\ref{eq:marginal}$) holds, but it may be possible to adjust wages until it is satisfied. 
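This construction can be checked numerically. Below is a minimal sketch, assuming the textbook-style utility $U(C,N) = \log C - N$; the specific utility function and parameter values are my choices for illustration, and are not taken from any DSGE paper.

```python
from math import isclose

# Hypothetical parameters, chosen only for illustration.
A, alpha = 1.0, 0.5    # production function: C = A * N**(1 - alpha)
M = 100.0              # household's initial money balance

# With U(C, N) = log(C) - N, substituting the production function gives
# U_hat(N) = log(A) + (1 - alpha) * log(N) - N, so
# d U_hat / dN = (1 - alpha) / N - 1 = 0  =>  N* = 1 - alpha.
N_star = 1.0 - alpha
C_star = A * N_star ** (1.0 - alpha)

for W in (1.0, 3.0, 10.0):          # the wage is a free nominal variable
    # Price = unit wage cost plus markup, per the construction above.
    P = (W * N_star + M) / C_star
    F = P * C_star - W * N_star     # business sector profits
    assert isclose(F, M)            # profits absorb the entire money stock
    assert isclose(P * C_star, W * N_star + M)  # budget constraint binds

print(N_star, C_star)  # real variables are pinned down regardless of W
```

For every positive wage, the same real allocation obtains and $F = M$, matching the claim that nominal variables are indeterminate while real variables are pinned down.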
In other words, it was not really a constraint on the optimal solution; it was an arbitrary condition that might prevent us from determining the true set of optimal solutions.<br /><br />In plain English, the optimal solution is that prices and wages are set so that:<br /><ul><li>the household sector works the number of hours that it feels is optimal ("full employment");</li><li>the business sector's profits represent the entire starting stock of money held by the household sector; and</li><li>wage rates are indeterminate, but prices are set as a markup over wages to achieve the fixed target profit level.</li></ul>This outcome meets the optimisation problem conditions as stated in the text; but the solution bears no resemblance to what the text says it is.<br /><h2>Comparison to Existing Results</h2>It should be noted that this example assumes flexible prices, and so it is equivalent to the Real Business Cycle (RBC) models. The RBC solution also features the same optimal output level as my result here, but with the <i>W/P </i>ratio pinned down by the equation given in Galí's text. So long as the solution stays away from the budget constraint, that comprises another set of optimal solutions (since the price level is also unconstrained, until something changes to force the price level to a particular level). (That is, Galí is correct in arguing that the ratio is an optimal choice, but with the assumption that we are not hitting budget constraints, which was not specified.)<br /><br />As a result, the construction here could easily be interpreted as a pathological corner case. To be interesting, it needs to be extended to where prices appear to matter for the solution (such as New Keynesian models with Calvo pricing). I am having off-line discussions with Alexander Douglas about that analysis.<br /><h2>Stock-Flow Inconsistency</h2>Things get even uglier if we start looking at inter-temporal optimisations. 
We see that the optimal strategy is for the business sector to absorb all of the money from the household sector. What happens thereafter?<br /><br />The usual argument by mainstream economists is that household bond/money holdings match government issuance; that is, household bond holdings determine the amount of debt outstanding. This runs into the obvious problem: they forgot about the business sector holdings of government liabilities.<br /><br />Unless the business sector is to have an ever-growing pile of financial assets, it has to return profits to the household sector as dividends.<br /><br />If we allow dividends to be returned in the same period, the result is indeterminate: profits would be arbitrarily large. Let <i>D</i> be dividends.<br />\[<br />PC = WN + M + D.<br />\]<br />\[<br />F = M + D.<br />\]<br />Any $D \in {\mathbb R}$ is a solution, and thus may be arbitrarily large. Formally, no solution would exist to the maximisation problem. The only way of getting a finite solution is to impose a condition on dividend payments. Very few DSGE papers discussed dividends, and the effect on the household budget constraint was minimised. That is, the fact that there is a feedback loop was generally not discussed.<br /><h2>No Longer an Optimisation Problem</h2>If we follow the spirit of DSGE macro, we just impose the partial equilibrium first-order conditions, and try to find a solution (model <i>M3</i>). However, the solution to this new problem no longer is an inter-temporal optimisation, and we cannot conclude any properties about its solution from optimisation theory.<br /><br />For example, <a href="http://www.bondeconomics.com/2017/04/does-governmental-budget-constraint.html" target="_blank">there is no reason for the Transversality Condition to hold,</a> since the model is sub-optimal.<br /><br />The attractiveness of DSGE models is also clearer. 
All you need to do is impose arbitrary first-order conditions to constrain the system to follow some sub-optimal solution trajectory; all you need is some justification from microeconomics to impose the condition. However, instead of recognising that you are forcing sectors to behave in a sub-optimal fashion (which apparently would make the model subject to the Lucas Critique), you can instead give the impression that the sub-optimal outcome is the result of optimising behaviour!<br /><br />The conclusions drawn from DSGE models can also be seen as being the result of the entirely arbitrary nature of the "first-order conditions" chosen. If the author decides that fiscal policy has no effect on the economy, it just gets dropped from the "first-order conditions." Since the model solution is sub-optimal anyway, what difference does it make? Furthermore, it becomes clear why controversies about the nature of DSGE model solutions exist (for example, the Fiscal Theory of the Price Level, and the "neo-Fisherian" debate). Since the model solutions are essentially the result of arbitrary choices, any form of behaviour can be achieved for what is allegedly the same optimisation problem.<br /><h2>Concluding Remarks</h2>My concern with DSGE models (at least those of the representative household variety) is straightforward: they are not well-posed mathematical models. We have no idea what the properties of these objects are, and we have no reason to believe that they refer to any optimisation problem solution.<br /><br />More constructively, the <a href="http://www.bondeconomics.com/2013/07/theme-mmt-and-sfc-models.html" target="_blank">advantages of the stock-flow consistent (SFC) approach to modelling</a> are much more apparent. Unlike DSGE models that are defined by heuristic (and largely arbitrary) partial equilibrium "first-order conditions," SFC models are stock-flow consistent. (Accounting identities always hold.) 
Furthermore, the alleged disadvantage of SFC models -- that they do not represent optimisation problems -- is also shared by DSGE models in practice. A researcher can impose heuristic behaviour conditions -- so-called "first-order conditions" -- on SFC model behaviour in exactly the same way as can be done with a DSGE model.<br /><br />(c) Brian Romanchuk 2017<br /><br /><b>Fun With Central Bank Calvinball</b> (2017-04-27)<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-PbnNLyn6-XU/VJN8rAfZ1uI/AAAAAAAABPg/rouADpT1fVIQdLRae3mum8Z_rf2GjbPJQCPcB/s1600/logo_DSGE.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-PbnNLyn6-XU/VJN8rAfZ1uI/AAAAAAAABPg/rouADpT1fVIQdLRae3mum8Z_rf2GjbPJQCPcB/s1600/logo_DSGE.png" /></a></div>In the comments to "Does the Governmental Budget Constraint Exist?" Nick Edmonds reminded me of another piece of mainstream macro logic that triggers me: the governmental budget constraint holds, because the inflation-targeting central bank says so. From the perspective of a mathematician, this is just a variation of "because I want to assume it to be true." In this article, I run through whatever logic appears to exist.<br /><br /><a name='more'></a>As a preamble, there might be an explanation for this that makes sense <i>somewhere</i>. I gave up after a couple of years of looking at this. I am sure some economics professor can try to explain this; the only question is whether it can be translated into mathematics.<br /><h2>Fundamentalist Bourbakianism</h2><br />By way of background, I am a hardline member of the <a href="https://en.wikipedia.org/wiki/Nicolas_Bourbaki" target="_blank">Bourbaki faction: all mathematics is just set theory</a>. 
Philosophers of mathematics might beg to differ with me, but that is not my problem.<br /><br />The view is simple: everything in mathematics is a set, or something related to a set. If it has nothing to do with a set, it isn't mathematics.<br /><br />If you are not familiar with this view, I will ask you to think about the following question: what is a function? (If you do not know the answer, I highly recommend that you think about it before running off to a search engine. The answer is at the bottom.)<br /><br />We can now turn to the issue of central banks in mainstream macro.<br /><h2>Position: The Budget Constraint Holds Because <i>X</i> Says So</h2>If you are attempting to decipher mainstream DSGE macro, you can run into statements that run roughly like this:<br /><div style="text-align: center;"><i>The budget constraint holds because of central bank policy [insert stuff about inflation targeting, or whatever].</i></div><br />If we translate this into mathematics:<br /><br /><div style="text-align: center;"><i>"The (budget constraint equation) holds because we assume X, and we assume that X implies that the budget constraint holds."</i></div><br />Any of the following could be inserted as <i>X</i> in that statement:<br /><br /><ul><li>inflation-targeting central banks;</li><li>the Easter Bunny says so;</li><li>Chuck Norris says so.</li></ul><div>This is just adding an extra level of indirection when compared to just saying: "We assume that the budget constraint holds."</div><h2>Position: The Central Bank Theory of the Price Level</h2><div>The next argument is that the initial price level will always move to ensure that the budget constraint holds.</div><div><br /></div><div>From a mathematical perspective, this seems indistinguishable from the Fiscal Theory of the Price Level. However, it is not the private sector doing this; rather, the central bank can always set the price level at time zero. 
There is no reason within the mathematics why this is so: it just is.</div><div><br /></div><div>The question about the central bank's control over the price level appears open.</div><div><ul><li>If it can only set the price level to the level indicated by the Fiscal Theory of the Price Level, then it is just "the Fiscal Theory of the Price Level, because central banks."</li><li>If the central bank can set the initial price level at any level, that implies that it has arbitrary control over the evolution of the price level. We would not need economic models; we would just ask the central bankers what price level they want that day. This position raises many philosophical questions about the study of economics.</li></ul></div><h2>Position: Set Elements Magically Changing</h2><div>Yet another alternative is the following:</div><div><ul><li>The Treasury picks an exogenous sequence of primary fiscal surpluses, <i>s</i>.</li><li>If the budget constraint does not hold, the central bank says "you cannot do that." (How this translates into a set operation is unclear.)</li><li>Therefore, the Treasury is forced to pick a new sequence of primary surpluses <i>z</i>, which follows the budget constraint.</li></ul><div>Mathematically, this is the same thing as assuming that the feasible primary surplus sequences are those that meet the budget constraint. Once again, there is no justification why this must hold. </div></div><h2>Position: Those Darned Infinite Rates</h2><div>The next position attempts to justify how the central bank can enforce the budget constraint: it will raise interest rates to infinity if it does not hold.</div><div><br /></div><div>This is equivalent to saying that the price of Treasury bills will clear at zero.</div><div><br /></div><div>Needless to say, there is no examination of the mathematics of market clearing.</div><div><ul><li>If the household sector inherited any money from the previous period, it can buy an infinite number of bills. 
In what sense has the market cleared?</li><li>If the household sector had no money, the size of the central bank's balance sheet is zero. How can an entity without a balance sheet drive the price of an asset to zero?</li></ul><div>In the absence of a mathematical examination of the market clearing condition, and a specification of the limits of central bank balance sheets, this argument looks sketchy.</div></div><div><br /></div><div><i>And rates have to be infinite, </i>implying that there is no solution to the optimisation problem. Any other outcome just results in the exact same chain of real quantities that are in the budget accounting identities. None of the analysis I did in the previous post made any assumption about nominal rates being "low."</div><div><br /></div><div>(The zero bound is the only major issue for the previous development.)</div><div><br /></div><div>Furthermore, the "neo-Fisherian" effect will hold as soon as we have finite interest rates: the higher the nominal rate, the higher the level of expected inflation. Really high interest rates imply really high future inflation; it is hard to see how saying that "central banks target inflation" justifies this entire line of thought. Once again, if mainstream economists actually solved the models that they develop, we could resolve these issues.</div><div><br /></div><h2>Position: Because Equilibrium</h2><div>"The central bank chooses the equilibrium." Good luck with converting that description to mathematics. This seems to be the consensus view on the topic, by the way.</div><h2>Concluding Remark</h2><div>How mainstream macro went this far down the rabbit hole is a complete mystery.<br /><h2>Appendix</h2>In my first lecture on linear systems theory at McGill (a Master's level course), Professor George Zames asked us the "What is a function?" question. <i>Nobody got it right. 
</i>People thrashed around with discussions of "mappings" or rules, or whatever nonsense they teach undergraduate electrical engineers. (I took real analysis, but later.) He gave us the answer the next day, and I entered the Bourbaki cult.<br /><br />The answer: a function is a set of ordered pairs. That is, <i>y = f(x)</i> is just a shorthand for <i>f = {(x, y)}</i>. (<i>Alex Douglas rapped my knuckles for my original answer; I deliberately ignored the extra conditions on the set.</i> The typical restriction is that if <i>(x,y)</i> and <i>(x,z)</i> are elements of <i>f</i>, then <i>y=z</i>. Depending on how you want to attack the issue, it could be set up slightly differently, I believe.) There are no little elves mapping inputs to outputs; it's just a set.<br /><br /></div>(c) Brian Romanchuk 2017<br /><br /><b>Does The Governmental Budget Constraint Exist?</b> (2017-04-26)<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-PbnNLyn6-XU/VJN8rAfZ1uI/AAAAAAAABPg/rouADpT1fVIQdLRae3mum8Z_rf2GjbPJQCPcB/s1600/logo_DSGE.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-PbnNLyn6-XU/VJN8rAfZ1uI/AAAAAAAABPg/rouADpT1fVIQdLRae3mum8Z_rf2GjbPJQCPcB/s1600/logo_DSGE.png" /></a></div>This article wraps up my discussion of the transversality condition and the governmental budget constraint. In summary, the governmental budget constraint used within mainstream macro has very serious flaws. I would have liked to use the title "The Governmental Budget Constraint Does Not Exist," but we need to take into account the rather curious Fiscal Theory of the Price Level. 
Furthermore, there are unsettling implications for the entire Dynamic Stochastic General Equilibrium (DSGE) model approach that relies upon optimisation. I abandoned looking at these models for this reason, and this article suggests why I believe the problems run much deeper. Unlike the previous articles, this article is largely free of mathematics, but I start out listing the various equations I refer to. <br /><br /><a name='more'></a><i>(I have been involved in discussions on this topic on Twitter, starting from an initial contact by Alex Douglas. I have just run across the work of C Trombley, who has written an article on similar lines here -- <a href="http://stochastictalk.blogspot.ca/2017/04/i-am-very-model-of-modern-macro-textbook.html" target="_blank">I Am The Very Model Of A Modern Macro Textbook</a>. It's interesting, but I had been stuck writing out my own chain of logic, and could not respond to his points.)</i><br /><h2>Can't Keep Your Equations Straight Without A Program</h2><script type="text/x-mathjax-config"> MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}}); </script> <script type="text/x-mathjax-config">MathJax.Hub.Config({ TeX: { equationNumbers: { autoNumber: "AMS" } } }); </script><script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"> </script> This section lists the various equations discussed here, and how to interpret them. They all refer to simplified mathematical frameworks that are seen in a variety of textbook DSGE models.<br /><br /><a href="http://www.bondeconomics.com/2017/04/mathematics-of-budget-constraint-again.html" target="_blank">The notation is defined in the previous article</a>. 
In all cases, the variable $b(t)$ refers to the stock of government debt in real terms, $r$ is the real interest rate, and $s(t)$ is the primary fiscal surplus.<br /><br />What I refer to as the governmental budget constraint is the following equation.<br />$$\begin{equation}<br />b(0) = \sum_{i=1}^{\infty} \frac{s(i)}{(1+r)^i} \label{eq:summation}<br />\end{equation}$$<br />This equation says that the summation of the discounted real primary surpluses is finite, and equals the initial stock of debt. The validity of this expression is what we are interested in.<br /><br /><div>The next equation is the <i>1-period accounting identity</i>:</div><br />$$\begin{equation}<br />b(t+1) = (1+r) b(t) - s(t+1). \label{eq:accountident}<br />\end{equation}$$<br /><br />This just says that the debt at time $t+1$ is equal to $(1+r)$ times the debt level at time $t$, minus the surplus in the current period ($t+1$). That is, debt will compound by the discount rate, less the primary surplus. <i>As an accounting identity, this has to hold.*</i> One annoying habit of some mainstream economists is to also refer to this as a "governmental budget constraint," conflating the dubious ($\ref{eq:summation}$) with the true-by-definition ($\ref{eq:accountident}$).<br /><br />The next equation is the <i>transversality condition:</i><br /> $$\begin{equation}<br />\lim_{t \to \infty} \frac{b(t)}{(1+r)^t} = 0. \label{eq:limit}<br />\end{equation}$$<br />This equation says that the stock of debt outstanding cannot grow faster than the discount rate $r$. Transversality is a term that comes from optimisation theory. 
Since we do not actually apply it, the exact definition does not matter.<br /><br />Finally, there are a couple of workhorse accounting identities that relate the stock of debt at time $0$ and time $t$.<br /><br />The reasonable-looking forward relation tells us the level of future debt based on the initial debt level and the intervening primary surpluses.<br />$$\begin{equation}<br />b(t) = (1+r)^t b(0) - \sum_{i=1}^t (1+r)^{t-i} s(i). \label{eq:fwdsum}<br />\end{equation}$$<br /><br />There is also the unusual backward relation, which tells us the current level of debt based on a future level. This equation is just an algebraic restatement of the previous. (My first article had a deliberately obtuse attempt to decipher this equation.)<br />$$\begin{equation}<br />b(0) = \sum_{i=1}^t \frac{s(i)}{(1+r)^i} + \frac{b(t)}{(1+r)^t}. \label{eq:bkwdsum}<br />\end{equation}$$<br /><h2>Attempting to Follow Mainstream Logic</h2>For a variety of reasons, mainstream economists want to use the governmental budget constraint ($\ref{eq:summation}$). Within models, households face a budget constraint, and it would be unfair if governments did not have one (beyond the accounting identity ($\ref{eq:accountident}$)). As I discussed in "<a href="http://www.bondeconomics.com/2017/04/on-being-pelted-by-peanuts-part-i.html" target="_blank">On Being Pelted With Peanuts: Part I</a>," they could just assume it to be true.<br /><br />Of course, just assuming something to be true is not too useful. Mexico will pay for that wall, if we assume that they will do so. As such, mainstream economists searched for a reason for it to be true. The usual logic appears to work as follows. (I have never seen a coherent description of the logic behind this, so I had to use guesswork.)<br /><ol><li>Starting with the backward relation ($\ref{eq:bkwdsum}$), we can manipulate equations to show that the transversality condition ($\ref{eq:limit}$) implies the budget constraint ($\ref{eq:summation}$). 
(<a href="http://www.bondeconomics.com/2017/04/mathematics-of-budget-constraint-again.html" target="_blank">I did the proof in the previous article.</a>)</li><li>When households search for the optimal solution to their utility maximisation problem in the DSGE model, the optimal solution (allegedly) displays the transversality condition ($\ref{eq:limit}$).</li><li>Therefore, household optimisation preferences will imply that ($\ref{eq:summation}$) holds.</li></ol><div>The mainstream economists then go on to wave their hands about future surpluses cancelling out the effect of "debt-financed" fiscal stimulus, and so fiscal policy is ineffective, etc.</div><div><br /></div><div>Not so fast.</div><div><br /></div><div>If the backward relation ($\ref{eq:bkwdsum}$) holds, so does the forward relation ($\ref{eq:fwdsum}$). For the sake of argument, assume that the initial real stock of debt is fixed. (We return to this assumption later.) If we look at that equation, we see that the future debt level is pinned down by an accounting identity: the household sector in aggregate cannot alter the future trajectory of the debt by one (real) penny, no matter what optimisation choices it takes. (The models we are discussing feature fiscal policy that is completely unrelated to the state of the economy.) That is, the premises behind logical steps 1 and 2 above are inconsistent. The confusion in "Being Pelted With Peanuts" was the result of the inconsistency in the logic being used.</div><div><br /></div><div>The next line of defence is to argue that since households have future money ("at infinity") that they will not need, they will spend it now. In other words, although they cannot stop the chain of single-period accounting identities which determine the ratio of future debt to the initial level, they can (somehow) change the starting point.</div><div><br /></div><div>Although that might work for an individual household, that cannot work in aggregate. 
All purchases made by households in the crippled DSGE modelling frameworks flow right back to the household sector, and there is no way for the household sector to reduce its aggregate holdings of government-issued liabilities (other than voluntarily destroying money or bill holdings, which is not optimising behaviour). After all, we were only able to argue that household preferences influenced government debt outstanding because the only sector that held government debt in the model was the household sector. Furthermore, since we are assuming that all households act the same ("representative household"), they would all try to buy at the same time, without any extra supply forthcoming. There is no way of affecting the nominal debt level at time zero.</div><div><br /></div><div><i>The only thing that can adjust is the price level at time zero. </i>This is not "inflation," as that is the rise in the price level in future periods versus time zero. Instead, the entire price level has to shift instantly, which destroys the real value of financial assets held during the previous period. (Raising interest rates in time zero does nothing to stop this, as this only protects the real value of government bills against the inflation from time period $0$ to $1$.) This is how the assumption that the initial real debt level is fixed is relaxed: by changing the initial price level (which is the only thing that can move).<br /><br />This adjustment mechanism appears implausible, but it reflects the general under-determination of the price level at $t=0$. Almost all attention is paid to the relative price between current prices and the future, but there is little discussion of why the initial price level has to be at any particular level. The only variables with nominal scaling are the inherited financial assets from the previous period. If those debt ratios are "too high," we just scale nominal GDP instantly so that the ratio hits the correct level. 
(Calvo pricing does not help; firms that are unable to adjust prices to the new starting point get squashed like bugs.)</div><div><br /></div><div>This effect is the <a href="http://www.bondeconomics.com/2014/12/monetary-impotence-and-triumph-of.html" target="_blank">Fiscal Theory of the Price Level</a> (FTPL). The implications of the FTPL are stark: the price level is entirely driven by the state of expectations about fiscal policy. The price level at $t=0$ is entirely determined by fiscal policy expectations at $t=0$. The price level at $t=1$ is entirely determined by fiscal policy expectations at $t=1$. <i>This means that monetary policy settings at $t=0$ are utterly irrelevant for the level of inflation at $t=0$.</i></div><br />The FTPL justifies the governmental budget constraint by saying that the private sector will raise (or lower) the price level -- changing the real value of existing debt -- if it ever looks like the budget constraint will not hold. This has nothing to do with "transversality" in optimisations. However, it is once again an assumption about economic behaviour that holds only because we assume that it is true. If we assume that other factors influence the initial determination of the price level, the governmental budget constraint disappears.<br /><br />The FTPL appears to be the only internally coherent class of representative household DSGE models. Unfortunately, the models are fairly degenerate, in that nothing else will really matter for inflation. Furthermore, it seems that their empirical usefulness is highly questionable. (Do we have plausible infinite horizon fiscal forecasts?)<br /><h2>Further Optimisation Ugliness</h2>Even if we want to ignore the Fiscal Theory of the Price Level, the interaction between the macro constraints and household constraints is worrisome. I have not wasted much of my time looking at microeconomics, but I have severe doubts about how its "laws" have been applied to DSGE macro. 
We cannot assume that households are "infinitely small"; we need to model $N$ households, and see how they interact with an aggregate macro budget identity. In other words, we cannot use theorem statements that are cherry-picked from micro textbooks without ensuring that all the conditions required by those theorems apply to the model in question.<br /><br />The household sector's financial assets are held in a vise -- the governmental fiscal surplus. There is no action that can be taken to change the end-of-period holdings, no matter what level of production takes place.<br /><br />If your financial asset holdings are completely outside of your control, how do they matter in an optimisation? It seems that the optimal strategy is to completely ignore financial asset holdings in the utility maximisation problem.**<br /><br />However, such a step destroys the entire premise of inter-temporal optimisation. Unless the model features real investment, there is nothing (other than the irrelevant financial balances) that links period $t$ and $t+1$. This suggests that the optimal solution to these problems is just to pick the naive point-in-time utility maximisation at every time point. (Insert the production function into the 1-period component of the utility function; find the maximising output.) The fact that optimisation is carried out on an infinite horizon is just a smoke screen; what happens in period 1 has no effect on the solution in period 0.<br /><br />Finally, since financial asset balances do not matter in these models, the rate of interest does not matter. Once again, the actions of the central bank are entirely irrelevant to the model outcome.<br /><h2>Concluding Remarks</h2>It is unacceptable that we have to speculate about the solutions to these models. The usefulness of mathematics is that it forces you to think clearly, and crystallise your logic in equations. 
However, once we lose the discipline of properly solving the equations, we are back to literary speculation.<br /><br /><b>Footnotes:</b><br /><br />* In some treatments of the topic, this accounting identity has been turned into an inequality, based on logic from financial mathematics. You know a field has completely lost any shred of common sense when the only things that we know hold with equality are turned into inequalities.<br /><br />** If money does not appear in the utility function, the construction appears straightforward. Assume we have a feasible trajectory $x$, with utility $U(x)$. We then construct $x^*$ with all state variables equal to $x$, other than the primary surplus sequence $s$ and the affected financial asset holdings (normally bills and money). The only restriction on $s^*$ is that it does not somehow bind the household sector's financial constraint by running surpluses that are too large. This set is non-empty; the time series $s^*(t) = s(t) - 1$ is one such primary surplus series. Importantly, the series $s^*$ is created by only adjusting the taxes imposed; real government consumption is fixed. Since the financial constraints would never bind, $x^*$ is also feasible. Moreover, $U(x^*) = U(x)$. Therefore, we can see that the optimal trajectory is (somewhat) indifferent to the path of financial variables. Of course, there is the problem of attaining a maximum of an optimisation where the set of feasible solutions is not closed and bounded; <a href="https://medium.com/@alexanderdouglas/infinite-peanut-policy-reply-to-romanchuk-3502cff8f8b8" target="_blank">see this article by Alex Douglas</a>. 
If money appears in the utility function, the set of exogenous primary surplus sequences that we can use is limited to the set that allows the household sector to match the optimal money balance in each period (assuming that we do not allow the household sector to run negative bill holdings, or borrowings from the government).<br /><br />(c) Brian Romanchuk 2017<br /><br /><b>Mathematics Of The Budget Constraint (Again)</b> (2017-04-25)<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-ByBqBSERMJk/VJN8r_rvyZI/AAAAAAAABQY/Y-iBIdhWkqEQhFI0zQvkUw_YrUKW8exugCPcB/s1600/logo_fiscal.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-ByBqBSERMJk/VJN8r_rvyZI/AAAAAAAABQY/Y-iBIdhWkqEQhFI0zQvkUw_YrUKW8exugCPcB/s1600/logo_fiscal.png" /></a></div>This article attempts to give a simpler mathematical discussion of the governmental budget constraint and transversality. After throwing my hands up in the air in my previous article, I run through the basic mathematics of the accounting identity for governments, and we can see that what is called "transversality" is just equivalent to making the assumption that the discounted primary surpluses converge, and sum to the initial stock of debt. 
However, household sector optimisation is nowhere in sight, which raises the question of why it comes up in discussion of this topic in the first place.<br /><br />Once again, the math-phobic may as well stay clear.<a href="https://medium.com/@alexanderdouglas/infinite-peanut-policy-reply-to-romanchuk-3502cff8f8b8" target="_blank"> I would also draw your attention to this article by Alex Douglas</a>; he is jumping ahead to an extremely important point about optimisations (in general, we have no reason to believe that the optimum exists when the set of solutions is not closed and bounded).<br /><br /><a name='more'></a>The equations are being generated by MathJax, and they might not be rendered on some browsers. If you can read LaTeX, you might be able to follow the argument anyway. Please note that I went nuts with the equations here, and so it may take some time for the equations to render. I will be reverting to low math content in the future, but I just wanted to underline how cumbersome it is to deal with infinite summations, a point that is glossed over in a lot of treatments I see.<br /><br />As a disclaimer, this was relatively rushed; there's probably a few typos in here.<br /><h2>Preliminaries</h2>Let $\cal T$ be the set of time series defined on $\mathbb Z_+$. That is, if $x \in {\cal T}$, then $x(t) \in {\mathbb R}$ for all $t \in {\mathbb Z_+}.$ (The set ${\mathbb Z}_+$ is the set of integers greater than or equal to $0$.)<br /><br />(Note: I am unsure what is the formal name for $\cal T$; I have a bad feeling about its properties.)<br /><br />Definitions associated with infinite sums and limits are <a href="http://www.bondeconomics.com/2017/04/on-being-pelted-by-peanuts-part-i.html#more" target="_blank">described in the previous article</a>.<br /><br /><b>Assumptions</b><br /><ul><li>Money holdings are zero at all times; the only government liabilities are 1-period bills. 
(If we allow money to be held, we then get terms associated with money creation in the formulae. These added complexities offer little value-added.)</li><li>We are starting at time $0$ for notational simplicity.</li><li>The (expected) real discount rate is equal to $r$ for all times. (There are some embedded assumptions about deflation as a result of this. The household can get whatever real interest rate it wishes on money balances if there is sufficient deflation. This technicality is typically ignored elsewhere; I am following that assumption so that my equations align with the usual textbook ones. Otherwise, we need to start tracking nominal balances and the price level as well, and my treatment would bear no resemblance to what we see elsewhere.)</li><li>Realised variables are equal to expectations at time $0$. (If perfect foresight is bothersome, pretend this is a simulation at $t=0$.) </li></ul><div><b>Variable definitions:</b></div><ul><li>Denote the real market value of government bills outstanding at time $t$ as $b(t)$. (That is, $b \in {\cal T}$.) The initial value of $b$ ($b(0)$) is a positive number.</li><li>The primary fiscal surplus at time $t$ is $s(t)$. The variable $s$ is a fixed member of $\cal T$; that is, it is an exogenous variable. The initial value is fixed: $s(0) = 0.$</li></ul><div><br /></div><div><b>Definition</b> The <i>1-period government accounting identity</i> is given by (for $t \geq 0$):</div><br /><script type="text/x-mathjax-config"> MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}}); </script> <script type="text/x-mathjax-config">MathJax.Hub.Config({ TeX: { equationNumbers: { autoNumber: "AMS" } } }); </script><script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"> </script> $$\begin{equation}<br />b(t+1) = (1+r) b(t) - s(t+1). 
\label{eq:accountident}<br />\end{equation}$$<br /><br /><b>Lemma </b>We can relate $b(t)$ and $b(0)$ as follows, for all $t \in {\mathbb Z}_+$:<br />$$\begin{equation}<br />b(t) = (1+r)^t b(0) - \sum_{i=1}^t (1+r)^{t-i} s(i). \label{eq:fwdsum}<br />\end{equation}$$<br /><b>Proof </b>Use induction.<br /><ul><li>Equation ($\ref{eq:fwdsum}$) is true by inspection for $t=0$, and by applying ($\ref{eq:accountident}$) for $t=1$.</li><li>Assume true for $t$.</li><li>Validate for $t+1$. Applying ($\ref{eq:accountident}$) and the induction assumption, we get:</li></ul><div>$$\begin{eqnarray}<br />b(t+1) & = & (1+r)b(t) - s(t+1), \\<br />& = & (1+r) \left( (1+r)^t b(0) - \sum_{i=1}^t (1+r)^{t-i} s(i) \right) - s(t+1), \\<br />&= & (1+r)^{t+1} b(0) - \sum_{i=1}^{t+1} (1+r)^{(t+1)-i} s(i).<br />\end{eqnarray}$$<br />This validates the induction step. $\fbox{}$</div><br /><b>Lemma </b>The following relationship holds:<br />$$\begin{equation}<br />b(0) = \sum_{i=1}^t \frac{s(i)}{(1+r)^i} + \frac{b(t)}{(1+r)^t}. \label{eq:bkwdsum}<br />\end{equation}$$<br /><b>Proof </b>Divide ($\ref{eq:fwdsum}$) by $(1+r)^t$ and rearrange. $\fbox{}$<br /><br /><b>Theorem</b> The equation<br />$$\begin{equation}<br />b(0) = \sum_{i=1}^{\infty} \frac{s(i)}{(1+r)^i} \label{eq:summation}<br />\end{equation}$$<br />is well defined if and only if<br />$$\begin{equation}<br />\lim_{t \to \infty} \frac{b(t)}{(1+r)^t} = 0. \label{eq:limit}<br />\end{equation}$$<br /><b>Proof: </b>We first prove that ($\ref{eq:limit}$) implies ($\ref{eq:summation}$). Rearrange terms of ($\ref{eq:bkwdsum}$) to give:<br />\[<br />b(0) - \sum_{i=1}^t \frac{s(i)}{(1+r)^i} = \frac{b(t)}{(1+r)^t}.<br />\]<br />This implies that<br />$$\begin{equation}<br />\left| b(0) - \sum_{i=1}^t \frac{s(i)}{(1+r)^i} \right|= \left| \frac{b(t)}{(1+r)^t} \right|.<br />\label{eq:absval} \end{equation}$$<br />Fix any $\epsilon > 0$.
By applying the definition of ($\ref{eq:limit}$), there exists an $M$ such that<br />\[<br />\left| \frac{b(n)}{(1+r)^n} \right| < \epsilon, \forall n \geq M.<br />\]<br />Apply to ($\ref{eq:absval}$):<br />\[<br />\left| b(0) - \sum_{i=1}^n \frac{s(i)}{(1+r)^i} \right| < \epsilon, \forall n \geq M.<br />\]<br />We then apply the definition of an infinite summation to see that ($\ref{eq:summation}$) holds.<br /><br />To validate that ($\ref{eq:summation}$) being well-posed implies ($\ref{eq:limit}$), we rearrange ($\ref{eq:bkwdsum}$) to give:<br />\[<br /> \frac{b(t)}{(1+r)^t} = b(0) - \sum_{i=1}^t \frac{s(i)}{(1+r)^i}.<br />\]<br />Fix any $\epsilon > 0$. By applying ($\ref{eq:summation}$), there exists an $M$ such that the right-hand side has modulus less than $\epsilon$ for all $t \geq M$. We then apply the definition of the limit to see that the left-hand side converges to zero. $\fbox{}$<br /><br /><b>Remark</b> This proof is plodding, but there still might be issues that it glosses over. In a journal article in applied mathematics, nobody would bother with the $\epsilon$ arguments (unless the proof was much more difficult). However, the proof text would have to be careful to indicate why the various summations and limits exist. That is, it is unacceptable to write down infinite summations and use them in other manipulations without ensuring that the summations exist.<br /><h2>Discussion</h2>The theorem provided tells us that the condition that is called the "transversality condition" (equation ($\ref{eq:limit}$)) is a necessary and sufficient condition for the condition on the discounted sum of primary surpluses (equation ($\ref{eq:summation}$)) to hold.<br /><br />This is what is asserted in various DSGE macro papers, and <a href="http://www.bondeconomics.com/2017/04/on-being-pelted-by-peanuts-part-i.html" target="_blank">which caused me agony in my previous article</a>.
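The theorem is also easy to sanity-check numerically. The following Python sketch is my own illustration (the values $r = 0.05$, $b(0) = 100$, and the debt-servicing surplus path are arbitrary assumptions, not taken from any model in this post); it iterates the 1-period accounting identity and confirms the backward-sum decomposition at a long horizon.

```python
# Numerical check of the lemma/theorem above (arbitrary illustrative values).
# Iterate the 1-period identity b(t+1) = (1+r) b(t) - s(t+1), then verify
# b(0) = sum_{i=1}^t s(i)/(1+r)^i + b(t)/(1+r)^t at the final horizon.

r = 0.05          # real discount rate (assumed)
b0 = 100.0        # initial real value of bills outstanding (assumed)
T = 200

# A surplus path that exactly services the debt: s(t) = r*b0 keeps b(t) = b0.
s = [0.0] + [r * b0] * T

b = [b0]
for t in range(T):
    b.append((1 + r) * b[t] - s[t + 1])

# Backward-sum identity at horizon T:
discounted_surpluses = sum(s[i] / (1 + r) ** i for i in range(1, T + 1))
residual = b[T] / (1 + r) ** T
assert abs(b0 - (discounted_surpluses + residual)) < 1e-9

# Here b(t) stays at b0, so b(t)/(1+r)^t shrinks geometrically: the
# transversality condition holds, and the discounted sum converges to b(0).
print(round(discounted_surpluses + residual, 6))  # 100.0
```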
Since it is actually straightforward, why complain?<br /><br />My complaint is this: this derivation was driven entirely by straightforward application of the 1-period accounting identity. There is no optimisation involved at any point during the derivation (the notion of transversality comes from optimisation theory). Very simply, the household sector has no choice with respect to this result; therefore, it makes no sense to pretend that it is the result of microfoundations.<br /><br />In other words, since the initial stock of household debt holdings ($b(0)$) is fixed, and the path of the primary surpluses was assumed to be exogenous (a crazy assumption, but standard for simple DSGE models), the future path of debt holdings ($b(t)$) is deterministic, and not the result of any optimisation. This raises the obvious corollary: if household wealth is determined entirely by fiscal policy, in what sense does it even matter for the optimisation problem?<br /><br />Correspondingly, there is no reason to believe that the condition <i>must</i> hold; it either holds or it does not.
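A quick numerical sketch of when the condition fails (my own illustration, with assumed values for the discount rate $r$ and a debt growth rate $g$): if debt grows geometrically faster than the discount rate, the discounted debt term does not vanish.

```python
# Sketch: b(t)/(1+r)^t does not go to zero when debt grows faster than the
# discount rate. Illustrative parameter values only.

b0 = 100.0

def discounted_debt(t, r, g):
    """b(t)/(1+r)^t when debt grows geometrically: b(t) = b0 * (1+g)^t."""
    return b0 * ((1 + g) / (1 + r)) ** t

# With g > r, the ratio (1+g)/(1+r) exceeds 1, so the term diverges...
assert discounted_debt(500, r=0.02, g=0.04) > b0

# ...whereas with g < r it decays towards zero, and the discounted-surplus
# sum in the text can be well defined.
assert discounted_debt(500, r=0.04, g=0.02) < 1.0
```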
Since the nominal discount rate is quite often below the nominal growth rate of the economy, the expectation is that it will in general not hold.<br /><br />I will return to the economic discussions in a later article (with fewer equations).<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com2tag:blogger.com,1999:blog-5908830827135060852.post-74995308252659936082017-04-24T13:36:00.000-04:002017-04-25T10:50:44.947-04:00On Being Pelted By Peanuts: Part I<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-uumwAyLZoU0/VJN8sOuOSAI/AAAAAAAABQQ/Jpv4fyqxz6cUB2rlaNQijo5B4P1piLrcgCPcB/s1600/logo_models.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-uumwAyLZoU0/VJN8sOuOSAI/AAAAAAAABQQ/Jpv4fyqxz6cUB2rlaNQijo5B4P1piLrcgCPcB/s1600/logo_models.png" /></a></div>Alexander Douglas (a lecturer in philosophy) wrote an interesting article on two subjects recently: "<a href="https://medium.com/@alexanderdouglas/macroeconomics-a-view-from-the-peanut-gallery-f460e5abe92f" target="_blank">Macroeconomics -- A view from the peanut gallery.</a>" He covers two diverging topics: the transversality condition from mainstream macro, and the question of welfare functions in Stock-Flow Consistent models. I will take a stab at these topics over the coming days. In this article, I am expressing my deep displeasure with what is supposed to be trivial mathematics used by the mainstream: the transversality condition.<br /><br />(He embeds some mathematics in his post, and so I am using MathJax to format an answer. The equations may not be properly rendered on some browsers. People who are allergic to equations may want to skip this one...
I did this quickly, and already squashed a few typos.)<br /><a name='more'></a><b>Update</b>: <i>I have written out the involved mathematics in a straightforward manner in <a href="http://www.bondeconomics.com/2017/04/mathematics-of-budget-constraint-again.html" target="_blank">"Mathematics of Budget Constraint (Again)"</a>. This article is a deliberately obtuse reading of some equations that appear in many introductory mainstream macro textbooks. (To be clear, I am targeting the textbook that Alex Douglas linked to; other introductory texts have very similar developments. I did not look at the linked textbook that carefully; it may have explained things better than a reader might conclude from my comments here.) If you can follow the mathematics, the other article follows a straightforward (semi-)rigorous approach and makes everything look really simple. (Pure mathematicians could probably find something to yelp about.) The sting in the tail, however, is what this means for the interpretation of economic models, which is what everyone here is presumably interested in. I give a brief version of my interpretation in that article, but I will later write a non-mathematical article explaining what problems I see. </i><br /><h2>Transversality</h2>Transversality refers to a condition associated with the governmental budget constraint of mainstream macro. I wrote about this in earlier articles, such as "<a href="http://www.bondeconomics.com/2014/07/if-r-g-dsge-model-assumptions-break-down.html" target="_blank">If r < g, DSGE Model Assumptions Break Down.</a>" From the perspective of mainstream macro, my logic in that article does not cover the mainstream microeconomic arguments involved. I will try to stay closer to the mainstream logic in the discussion here.<br /><br />Professor Douglas refers to this text, which offers a fairly standard treatment of the governmental budget constraint.
(For those of you who are new to this topic, mainstream economists refer to two concepts as being the governmental budget constraint. The first is an accounting identity that links the current period to the previous, which is non-controversial. The second is the behaviour at infinity, which is what I am discussing here.)<br /><br />The key claim is that if $r$ is a real discount rate, $b_t$ represents (real) government bonds outstanding at time $t$, and $s_t$ is the government's (real) fiscal surplus at time $t$, then we have the following relationship:<br />\[<br />b_t = \sum_{i=1}^N \frac{s_{t+i}}{(1 + r)^i} + \frac{b_{t+N}}{(1+r)^N}.<br />\]<br />We can allegedly let $N$ "go to infinity," let the "second term go to zero", and then we get the infinite horizon budget constraint:<br />\[<br />b_t = \sum_{i=1}^{\infty} \frac{s_{t+i}}{(1+r)^i}.<br />\]<br /><h2>To Infinity, and Beyond!</h2>(OK, the header was cheesy.)<br /><br />Professor Douglas discussed infinite time in his article, using terms that make my head hurt. ($\aleph_0$, seriously?) I am going to approach infinite time the way it is usually done in applied mathematics: it does not really exist. From the perspective of real analysis, $\infty$ is just a short-hand.<br /><br />For discrete time models, the time axis is normally taken to be the set of non-negative integers (including zero), denoted ${\mathbb Z}_+$. (${\mathbb Z}$ is the set of all integers.) We can write ${\mathbb Z_+} = [0, 1, 2, ... \infty),$ as a shorthand. Importantly, we close the sequence definition with ")" to denote that the element $\infty$ (whatever that is!) is not an element of ${\mathbb Z}_+$.<br /><br />If we write an infinite summation of the form:<br />\[<br /> x = \sum_{i=0}^{\infty}a_i,<br />\]<br />where $a_i$ is a sequence in $\mathbb R$ defined on the support ${\mathbb Z}_+$, it translates into:<br />\[<br />x = \lim_{N \to \infty} \sum_{i=0}^N a_i,<br />\]<br />which is itself a short-hand.
What the above equation says: if $x$ exists, $x \in {\mathbb R}$, with the property: for any $\epsilon > 0$, there exists an $M(\epsilon)$ with the property that:<br />\[<br />\left| x- \sum_{i=0}^{N} a_i \right| < \epsilon, \forall N > M(\epsilon).<br />\]<br /><script type="text/x-mathjax-config"> MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}}); </script> <script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"> </script><br />In plain English, for any non-zero error tolerance (normally denoted $\epsilon$), we can guarantee that all sufficiently long summations lie within that error bound of $x$. Note that "$\infty$" appears nowhere in the underlying mathematical statements.<br /><br />We can always write down an infinite summation, but we need to validate that it converges. Otherwise, the summation is equivalent to the set $\{x: x \in {\mathbb R}, x = x + 1\},$ which is just a fancy way of saying $\emptyset$ (the empty set). In most fields of applied mathematics, the first thing to do when faced with infinite summations is to validate convergence; mainstream economics, not so much.<br /><h2>Back to Transversality</h2>The expression:<br />\[<br />b_t = \sum_{i=1}^{\infty} \frac{s_{t+i}}{(1+r)^i}<br />\]<br />is reasonable enough from the point of view of mathematics; the only issue is convergence. (Why it must hold will have to wait for another time, which is actually what Professor Douglas wanted to discuss. But we can't get there from here.)<br /><br />If mainstream economists started from there, things would be fine.
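As an aside, the $\epsilon$ definition from the previous section is mechanical enough to check by brute force. A small Python sketch (my own illustration; the geometric series and the tolerance are arbitrary choices): for a convergent series we can exhibit an $M(\epsilon)$, while a divergent one has no candidate limit at all.

```python
# Brute-force illustration of the epsilon definition of an infinite sum.
# For a_i = 0.5**i the sum converges to x = 2; we find an M(eps) such that
# all partial sums of length >= M lie within eps of x.

def partial_sum(a, N):
    """Sum of a(i) for i = 0..N."""
    return sum(a(i) for i in range(N + 1))

a = lambda i: 0.5 ** i
x = 2.0
eps = 1e-6

# The error |x - S_N| = 0.5**N is monotone decreasing, so the first N that
# satisfies the bound works as M(eps).
M = next(N for N in range(1000) if abs(x - partial_sum(a, N)) < eps)
assert all(abs(x - partial_sum(a, N)) < eps for N in range(M, M + 50))

# By contrast, for a_i = 1 the partial sums grow without bound: no real
# number x satisfies the definition, and the "infinite sum" is undefined.
assert partial_sum(lambda i: 1.0, 1000) > 1000
```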
Unfortunately, they wanted to link to optimisation theory somehow, via the following expression (which would be the result of a finite horizon optimisation):<br />\[<br />b_t = \sum_{i=1}^N \frac{s_{t+i}}{(1 + r)^i} + \frac{b_{t+N}}{(1+r)^N}.<br />\]<br />As a starting point, this <strike>expression is gibberish (to use the technical mathematical term)</strike> expression leaves a lot of open questions. <i>(I got my knuckles rapped by a mathematician on Twitter for that; it was a joke, honest!)</i><br /><br />Going the other way is fine, starting from the infinite summation expression:<br />$$\begin{eqnarray}<br /> b_t & = & \sum_{i=1}^{\infty} \frac{s_{t+i}}{(1+r)^i}\\<br /> & = & \sum_{i=1}^{N} \frac{s_{t+i}}{(1+r)^i} + \sum_{j=N+1}^{\infty} \frac{s_{t+j}}{(1+r)^j},\\<br />& = & \sum_{i=1}^{N} \frac{s_{t+i}}{(1+r)^i} + \frac{1}{(1+r)^N}\sum_{k=1}^{\infty} \frac{s_{(t+N)+k}}{(1+r)^k},\\<br />& = & \sum_{i=1}^N \frac{s_{t+i}}{(1 + r)^i} + \frac{b_{t+N}}{(1+r)^N}.<br />\end{eqnarray}$$<br /><br />However, you cannot go the other way. The equation:<br />\[<br />b_t = \sum_{i=1}^N \frac{s_{t+i}}{(1 + r)^i} + \frac{b_{t+N}}{(1+r)^N},<br />\]<br />actually defines $b(t)$ as a function of <strike>four</strike> five variables: $t$, the sequence $s$ (which is fixed), the summation termination $N$, the discount rate $r$, and $b(t+N,...)$ (there's a variable hidden in $b(t+N,...)$). That is,<br />\[<br />b(t) = f(t, s, r, N, b(t+N, s, r, N_N)).<br />\]<br />This is a recursive definition that does not appear to make sense, as we are defining $b(t)$ based on a future value of $b(t)$, which can only be defined in terms of another future value ($N_N$, whatever that is).
When we define a recursive relationship, we normally need to define the initial value of the sequence.<br /><br /><b>Update:</b> <i>As pointed out by C Trombley (<a href="https://twitter.com/C_Trombley1" target="_blank">@C_Trombley1 on Twitter</a>), it might be possible to get such a backwards recursive definition to work somehow. I tried to make it clear that it was not impossible, but I have no idea how such a proof can be constructed without assuming that the infinite summation converges -- which is what we are trying to prove. In any event, although it is acceptable to skip some steps in proofs in published mathematics, relying on readers to guess what non-standard proof method exists is beyond the pale. If it were not necessary to supply missing steps, Fermat actually proved his last theorem.</i><br /><br />We could try to pretend that the following works:<br />\[<br />b(t,r,N,s,\alpha_N) = \sum_{i=1}^N \frac{s_{t+i}}{(1 + r)^i} + \alpha_N,<br />\]<br />where $\alpha_N \in {\mathbb R}$. This gives us a well-defined result. However, there is no guarantee that<br />\[<br />\alpha_N = \frac{b_{t+N}}{(1+r)^N}.<br />\]<br /><br />As far as I can tell, the idea is that we are supposed to get the government debt holdings from some optimisation problem somewhere. However, in a finite horizon optimisation, there is no notion of optimisation over the period beyond the horizon. If we terminate the optimisation at period $N$, there is no period $N+1$ in the optimisation, and the household should dump all of its financial assets and have a final blowout in period $N$.
("Party like it's 1999!") But the optimal solution presumably changes if in fact we decide that we will make it to the year 2000, and instead assume that the world blows up in 2001.<br /><h2>Concluding Remarks</h2>The fact that we cannot get more than a couple of lines into the mathematics of DSGE macro without raising existential questions like this is a sign that the mathematics in DSGE macro has long since departed from accepted mathematical norms.<br /><br />It should be noted that there is a simpler version of this analysis; the question is trying to align it with the way it is described in DSGE macro papers.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com4tag:blogger.com,1999:blog-5908830827135060852.post-72304839456653750112017-04-23T09:00:00.000-04:002017-04-24T07:42:15.103-04:00SFC Models And Introductory MMT-Style Fiscal Analysis<div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-6zrbI5TXyik/VJN8rDhtFEI/AAAAAAAABQg/E6eGeSKaB08v0IR-If17jzN_fDmoMb4RACPcB/s1600/logo_SFC.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-6zrbI5TXyik/VJN8rDhtFEI/AAAAAAAABQg/E6eGeSKaB08v0IR-If17jzN_fDmoMb4RACPcB/s1600/logo_SFC.png" /></a></div>The usefulness of Stock-Flow Consistent (SFC) models is that they allow us to illustrate concepts in economics without relying solely on verbal descriptions. In this article, I will discuss my interpretation of some of the ideas floating around in Modern Monetary Theory (MMT). I will note that these are my interpretations of statements made by others, illustrated by an extremely simple model.
The key is that even simple models can be used to clarify our thinking.<br /><br /><a name='more'></a>This article is only a partial response <a href="https://beinnbhiorach.com/2017/04/20/you-are-just-as-benighted-as-those-other-wrong-guys/" target="_blank">to an article by Gerard MacDonell</a>. He is unhappy about some of the writings of Professor Bill Mitchell, one of the leading MMT economists. I am not going to argue on Mitchell's behalf; rather, I just want to offer some analysis that touches on some of the technical points Gerard made. He noted that Federal taxation and spending are roughly similar, so how does that square with MMT pronouncements about the independence of taxation and spending? This outcome is not surprising, as it is exactly the sort of thing that is predicted by SFC models -- and MMT mathematical analysis of the economy uses SFC models.<br /><br />For those of you who are not fully up-to-date on post-Keynesian factionalism, please note that SFC models were meant to be a mathematical <i>lingua franca</i> for post-Keynesian economics. In other words, MMT economists use SFC models, but they are not exclusive to MMT.<br /><br />Since I want to work with my Python modelling framework here, and it currently cannot support full business cycle analysis (extensions will be added later), I cannot do complete justice to <a href="http://www.bondeconomics.com/2014/04/primer-what-is-functional-finance.html" target="_blank">Functional Finance</a>.
Therefore, I have to just focus on a couple of more basic ideas about fiscal policy:<br /><ol><li>there is little relationship between taxes and spending; and</li><li>governments cannot control the budget deficit.</li></ol><div>I will address these here in turn.</div><h2>Taxes and Spending</h2><div>One basic reason to question the relationship between taxes and spending is that they are not in the same units:</div><div><ul><li>Taxes are mainly imposed as a <i>percentage</i> of incomes (or activity, such as tariffs or sales taxes). There are some user fees that are fixed, but these are typically a small part of total revenue for the central government.</li><li>Spending is set in <i>dollar amounts</i>, or dollar amounts based on rules. (For example, welfare recipients receive benefits based on some fixed scale.)</li></ul><div>When we talk about taxes, we are talking about percentages; when we talk about government spending, we are talking about dollars (or whatever the local currency is). Very serious budget analysts will discuss changes to taxes in terms of dollar amounts, but those dollar amounts are based on crippled economic models that very few other people take seriously. (For example, see the latest debate on "dynamic scoring" in the United States.)</div></div><div><br /></div><div>Take the simplest possible SFC model of an economy with a government - model SIM. (From Chapter 3 of <i>Monetary Economics</i>; my implementation is <a href="http://www.bondeconomics.com/2016/10/building-sfc-model-in-python.html" target="_blank">described in this article.</a>)</div><div><br /></div><div>Fiscal policy is set by two parameters:</div><div><ul><li>A tax rate, which is a flat percentage of household income.</li><li>Spending, which is expressed as an annual dollar amount. (Note that there is no explicit modelling of prices here.)</li></ul><div>We assume that the tax rate is 20%, and we are in a steady state with spending at $15/year.
We then look at two scenarios, where we ramp up spending to:</div></div><div><ul><li>Scenario 1: $20/year.</li><li>Scenario 2: $25/year.</li></ul><div class="separator" style="clear: both; text-align: center;"></div>Note that we decided to ramp up spending purely based on feeling good (or bad) after some Stanley Cup playoff games; we did not care what the boring government debt nags have to say. <i>We make no adjustments to our tax rates whatsoever to compensate for increased spending.</i><br /><div class="separator" style="clear: both; text-align: center;"></div><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-klGx7klZWZg/WPvv_s6bEFI/AAAAAAAACsY/PiSFpGQt-rkan8oSSLTExnvw0Jp_pnnPwCLcB/s1600/intro_X_XX_sim_fiscal.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://2.bp.blogspot.com/-klGx7klZWZg/WPvv_s6bEFI/AAAAAAAACsY/PiSFpGQt-rkan8oSSLTExnvw0Jp_pnnPwCLcB/s1600/intro_X_XX_sim_fiscal.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure: GDP in the two scenarios</td></tr></tbody></table><div>Since this is a red-blooded Keynesian model, we see that increased government spending resulted in greater activity. 
The more we spend, the more the economy grows.</div></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-z-v9uxXtlWs/WPvv_qxuXgI/AAAAAAAACsc/rksixd_4xqo9AQBvRKlGtwAO3MLTTXHsACEw/s1600/intro_X_XX_sim_deficit.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://1.bp.blogspot.com/-z-v9uxXtlWs/WPvv_qxuXgI/AAAAAAAACsc/rksixd_4xqo9AQBvRKlGtwAO3MLTTXHsACEw/s1600/intro_X_XX_sim_deficit.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure: Fiscal Deficit</td></tr></tbody></table><div>Oh noes, the deficits! As can be seen above, we have a larger deficit when we ramp up spending. But in both cases, we end up in balance. I will return to this later.</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-3PJZyXWbIe4/WPvv_lEhSVI/AAAAAAAACsc/RXBxu_m_vPIz5_gK_gGTfe7bV5szAu7qgCEw/s1600/intro_X_XX_sim_debt_gdp.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://4.bp.blogspot.com/-3PJZyXWbIe4/WPvv_lEhSVI/AAAAAAAACsc/RXBxu_m_vPIz5_gK_gGTfe7bV5szAu7qgCEw/s1600/intro_X_XX_sim_debt_gdp.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure: Debt-to-GDP Ratio</td></tr></tbody></table><div>What about the dreaded debt-to-GDP ratio? It must be explosive, with all the increased spending and no tax hikes? Whoops. As seen above, it actually fell, and then reverted to the initial level.
(The difference that is visible was the result of rounding issues, which I will look into.*)</div><div><br /></div><div>Even though we adjusted spending without any reference to raising the tax rate, we still ended up with the same debt-to-GDP ratio. If one reads <i>Monetary Economics</i>, there is a long discussion of steady states, and this outcome is exactly the sort of thing we are supposed to expect. The steady state debt ratio is a function of the tax rate and private sector behaviour.</div><div><br /></div><div>The behaviour of the deficit is unusual when compared to real world behaviour: it reverts to zero. This is because this is a no-growth economy that heads to a steady state. If we are in a steady state, all stock variables when scaled by GDP are constant, which implies zero flows. Therefore, the net creation of government debt has to be zero. If the steady state featured a positive growth rate, we would revert to a deficit that allows debt levels to grow in line with nominal GDP, as seen below.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-xFAL2V7nuuY/WP05iQ6uhdI/AAAAAAAACs4/PLSnZRRsRr0igB4pACiOm0yZPLdwsjtVgCLcB/s1600/intro_X_XX_sim_growing_deficit.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Chart: Deficit for a growing country" border="0" src="https://4.bp.blogspot.com/-xFAL2V7nuuY/WP05iQ6uhdI/AAAAAAAACs4/PLSnZRRsRr0igB4pACiOm0yZPLdwsjtVgCLcB/s1600/intro_X_XX_sim_growing_deficit.png" title="" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure: Deficit for country growing at 2%/year.</td></tr></tbody></table>The chart above shows the result of a different scenario, where the economy grows at 2% per year in the "steady state." 
The country did not start out at steady state, so the deficit as a percentage of GDP starts out at a higher level, then declines towards a constant value.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-cGmsGLd52mw/WP05iWcs6AI/AAAAAAAACs8/N_GXe3iK0EgvcYICxO6UyuJBhzJ8Ie5hwCLcB/s1600/intro_X_XX_sim_growing_fiscal.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Chart: Debt-to-GDP for a growing country" border="0" src="https://2.bp.blogspot.com/-cGmsGLd52mw/WP05iWcs6AI/AAAAAAAACs8/N_GXe3iK0EgvcYICxO6UyuJBhzJ8Ie5hwCLcB/s1600/intro_X_XX_sim_growing_fiscal.png" title="" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure: Debt/GDP ratio for country growing at 2%/year</td></tr></tbody></table>The chart above shows that the debt-to-GDP ratio has reached a "steady state" by the end of the simulation. With the parameter values otherwise unchanged, the debt-to-GDP ratio stabilises at a level slightly below the no-growth steady state ratio of 80%. (The higher the nominal growth rate, the lower the debt-to-GDP ratio. This accords with the experience of the post-war era. It is possible to change the household sector behaviour to target the same wealth-to-income ratio regardless of growth rate.)</div><div><br /></div><div>There are a spectacular number of simplifications embedded in this model. A key issue is that the private sector is not a source of growth. However, the basic principles will be roughly the same. The key point is that the deficit will take care of itself eventually; the only issue is to avoid politically unsustainable inflation (whatever that is) in the meantime. (The prospect of inflation is why we normally would not have a government ramp up spending by a huge amount in a short period of time.
That said, it has been done; the latest example being World War II. Note that governments implemented rationing to make room for the increase in military production, which is a step that would be hard to justify in peace time.)</div><h2>The Budget Deficit is not Controllable**</h2><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-8nKRqxKvzOQ/WPv1E5c07OI/AAAAAAAACso/-T_qWwxhKf8MmhLFQ--XrrPSXemCA5U8ACLcB/s1600/intro_X_XX_multiplier_deficit.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="Chart: Government Spending and Deficit" border="0" src="https://1.bp.blogspot.com/-8nKRqxKvzOQ/WPv1E5c07OI/AAAAAAAACso/-T_qWwxhKf8MmhLFQ--XrrPSXemCA5U8ACLcB/s1600/intro_X_XX_multiplier_deficit.png" title="" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure: Government consumption (G) and the deficit in the scenario.</td></tr></tbody></table><div><br /></div><div>In this example, we are looking at a country that is facing ever-increasing debt. (The source of the problem is business sector hoarding, <a href="http://www.bondeconomics.com/2016/11/primer-understanding-hoarding-behaviour.html" target="_blank">as described in this article</a>.) The government was running continuous deficits, and some fiscal conservatives got elected. At time period 24, the government was running a deficit of (about) $2.44, and the government decided to cut spending by $3 in time period 25, so that the budget would go back to balance.</div><br />This did not work. There was a temporary improvement in the budget balance, but it fell short of a surplus as a result of multiplier effects. (Note that a larger cutback would result in a small surplus in time period 25; that is, a surplus is possible to achieve.)
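The multiplier arithmetic can be sketched with a bare-bones SIM-type model. The snippet below is my own minimal reimplementation with assumed parameters (20% tax rate, consumption propensities of 0.6 out of income and 0.4 out of wealth, as in Chapter 3 of <i>Monetary Economics</i>); it leaves out the business-sector hoarding driving the scenario above, and just shows why a $3 spending cut buys much less than $3 of budget improvement in the period of the cut.

```python
# Bare-bones SIM-type model; my own minimal variant with assumed parameters,
# not the code from the repository. It isolates the multiplier arithmetic.
theta = 0.20                 # flat tax rate on household income
alpha1, alpha2 = 0.6, 0.4    # consumption out of income / out of wealth

def step(G, H):
    """One period; returns (output, tax take, end-of-period wealth)."""
    # Solve Y = G + alpha1*(1-theta)*Y + alpha2*H for this period's output.
    Y = (G + alpha2 * H) / (1 - alpha1 * (1 - theta))
    T = theta * Y
    H_next = H + (Y - T) - (alpha1 * (Y - T) + alpha2 * H)
    return Y, T, H_next

# Steady state with G = 20: output Y* = G/theta = 100, budget balanced.
H = 80.0
Y, T, H = step(20.0, H)
assert abs(Y - 100.0) < 1e-9 and abs(T - 20.0) < 1e-9

# Cut spending by $3. "Static" analysis predicts a $3 improvement in the
# balance; the multiplier shrinks output and hence the tax take.
Y, T, H = step(17.0, H)
improvement = T - 17.0       # realised surplus in the period of the cut
print(round(improvement, 2))  # 1.85, not 3.0
```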
This is an example of a failure of "static" budget analysis -- cutting spending by $1 does not improve the realised deficit by $1, even if the so-called budget wonks say it will. Furthermore, the improvement was only partial; the budget deficit reverted to a similar level in response to lower output.<br /><br />Examination of what was causing the deficits -- hoarding behaviour in the business sector -- tells us that any attempt to use fiscal policy to correct the budget deficit was doomed. Unless there was a policy to force the business sector to run down its financial asset holdings, the government budget would always return to deficit.<br /><br />In other words, the budget balance will reflect decisions made in the private sector, and the government only has an illusion of control over the deficit. In the real world, it only looks like budgets are under control during expansions because budget assumptions systematically underestimate growth, and hence tax revenues. ("The deficit is less than projected due to our brilliant management of the economy!" is a standard press release.) Furthermore, a great deal of cookie jar accounting is used by governments to make it look like they hit budget targets.<br /><br />When we look at the various stockpiles of financial assets that are building up in pension funds, tax havens, and on corporate balance sheets, we should be able to extend the logic of this example to see why we should not be surprised by "high" government debt levels.<br /><br />Once again, this example is highly simplified. If we added in various welfare state programs, the budget deficit moves further and further from the control of government.<br /><h2>Concluding Remarks</h2>Even simple SFC models can be used to demonstrate that we cannot think of government budgets purely in dollar amounts under the control of the government.<br /><h2>Appendix: Code</h2>The code that generated these examples is on the GitHub repository.
Unfortunately, the file names are only temporary placeholders, currently: <span style="font-family: "courier new" , "courier" , monospace;">intro_X_XX_sim_fiscal.py</span>, <span style="font-family: "courier new" , "courier" , monospace;">intro_X_XX_sim_multiplier.py</span>. They will be used in my user manual, and the "X"'s will be replaced with the chapter/section number.<br /><br /><br /><b>Footnotes:</b><br /><br />* This is due to having too large an error tolerance for terminating iterations. I guess I will set the parameter default to be less tolerant of errors.<br /><br />** I am not using controllable in the technical sense used by control engineers, in case anyone is wondering.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com11tag:blogger.com,1999:blog-5908830827135060852.post-39586175828722844582017-04-19T09:00:00.000-04:002017-04-19T09:00:00.397-04:00Weaknesses Of Term Premium Estimates Derived From Yield Curve Models<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-99iysr6OvrM/VJN8rd0SuwI/AAAAAAAABPs/5yiQhkoQpwg7bjcj9bRPH3WeBSAunotIgCPcB/s1600/logo_bond_market.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-99iysr6OvrM/VJN8rd0SuwI/AAAAAAAABPs/5yiQhkoQpwg7bjcj9bRPH3WeBSAunotIgCPcB/s1600/logo_bond_market.png" /></a></div>Term structure models have been a growth industry for researchers in academia and at central banks. These models can be structured in many different ways, which makes generalisations about them difficult. For the purposes of this article, I am only concerned about the use of these models to estimate a term premium that is embedded in nominal yields (although my comments can be extended to cover related exercises, such as calculating an inflation risk premium).
When I examine individual models, the term premium estimates appear unsatisfactory, but the issues are different for each model. I believe that the root problems for this exercise are fundamental, and we need to understand them before looking at individual models.<br /><br /><a name='more'></a><h2>Attempting to Estimate an Unobservable Variable Will End Badly</h2>In control engineering, there is the notion of an <i>unobservable</i> variable. These are state variables whose values are not only not directly measured; they also cannot be inferred from any manipulation of measured values and known system dynamics. For example, if we can accurately measure the position of a vehicle, we can infer its velocity, so velocity is not unobservable. However, we have no way of determining the vehicle's internal temperature from the position data, and so the temperature is indeed unobservable.<br /><br />The normal practice is to delete unobservable variables from dynamic systems models. We have no way of determining their value, and they interfere with attempts to estimate the other state variables. (Since there is an infinite number of valid solutions, algorithms will not converge.) It is not that these variables do not exist, but we cannot say anything useful about them with available data and known model dynamics (like the vehicle temperature in my example).<br /><br />As I noted in "<a href="http://www.bondeconomics.com/2017/04/how-to-approach-term-premium.html" target="_blank">How to Approach the Term Premium</a>," an aggregate term premium is a variable that we cannot hope to measure with currently available data. Although I have not formally proved that the term premium is unobservable, that certainly appears to be the case. 
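The control-engineering notion can be made concrete with the standard rank test. The sketch below (my own illustrative example, using the vehicle analogy above) builds a three-state system in which only position is measured; the observability matrix has deficient rank, flagging temperature as unobservable.

```python
import numpy as np

# Discrete-time system: states = [position, velocity, temperature].
# Only position is measured; temperature never influences the output.
dt = 0.1
A = np.array([[1.0, dt, 0.0],    # position integrates velocity
              [0.0, 1.0, 0.0],   # velocity is constant
              [0.0, 0.0, 1.0]])  # temperature evolves on its own
C = np.array([[1.0, 0.0, 0.0]])  # measurement: position only

# Standard observability matrix [C; CA; CA^2]; full rank (3) would mean
# every state can be inferred from the measurement history.
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(3)])
rank = np.linalg.matrix_rank(O)
print(rank)  # 2: position and velocity are observable, temperature is not
```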
The only way we can say that an aggregate term premium exists is if we can infer measurable effects on other variables.<br /><br />(As I discussed in that article, investors should probably have their own estimate of the embedded term premium when making investment decisions. Since it is your own estimate, you presumably know what its value is. The catch is that we cannot determine others' estimates from market behaviour.)<br /><br />In other words, researchers are writing hundreds of extremely complex papers discussing a concept that shows little sign of existing. If we want to be careful with what we are doing, we should not take the labels that researchers attach to these time series at face value. That is, just because a model output is referred to as a term premium by a researcher, we should not assume that is what the variable really corresponds to. However, I will refer to these model estimates as term premia in this article, as otherwise the text will be confusing.<br /><h2>There's an Infinite Number of Term Premia Estimates</h2>The second issue with term premia estimates is that there is an infinite number of them. We can decompose observed nominal yields in an infinite number of ways, and the rules for decomposition can change over time. The only restriction is that the <a href="http://www.bondeconomics.com/2017/04/primer-fixed-income-arbitrage.html" target="_blank">decomposition is arbitrage-free</a>, which is a relatively weak restriction (albeit with complex mathematics).<br /><br />This is wonderful for researchers, as an infinite number of models implies an infinite number of potential papers. (Of course, computational tractability eliminates most potential models.) However, it makes discussion of these models a question of hitting a moving target.<br /><br />One typical use of these models is to examine the effect of an event (for example, quantitative easing) on term premia. 
The abstract of such papers typically reads as follows:<br /><blockquote class="tr_bq">{Event <i>X</i>} caused the term premium at maturity <i>M</i> to move by <i>Y</i> basis points.</blockquote>Such papers can then be used to prove any number of statements about policy.<br /><br />The correct way of interpreting such papers is that the researcher has found <i>one </i>term structure model -- out of an infinite number of possibilities -- where event <i>X</i> coincided with a move in the term premium of <i>Y</i> basis points.<br /><br />Therefore, the usefulness of such research depends upon your prior beliefs about academic and central bank research. If you normally believe the claims of researchers in their abstracts, there is no problem. For those of us with more cynical prior beliefs, such results can easily be explained as being the result of <i>model-mining</i>.<br /><h2>The Decompositions are Dubious</h2>Once we get past the previous high-level problems, which are highly generic, we are left with more model-specific issues. These issues are usually the result of another inherent problem: we have no natural way to decompose observed yields into term premia and the expected path of short rates.<br /><br />In order to do this, the usual procedure is to force one of the components to follow some estimated value, and then the other component has to equal the residual. (One alternative -- interpreting statistical factors -- is discussed later.) That is, we could force term premia to be roughly equal to some variable, and then the expectations component is (roughly) equal to observed yields minus the estimated term premium. 
Vice versa if we force expectations to follow some variable.<br /><br />Some example decompositions I have run across over the years include the following.<br /><ul><li>Use a survey of economists to determine the expected path of rates.</li><li>Use a measure that is roughly equivalent to historical volatility (or implied volatility) of rates to determine the term premium.</li><li>For inflation-linked curves, use a fundamental model with 2-3 variables to estimate expected inflation.</li></ul><div>The problem with all of these techniques is that they rest on questionable assumptions. In most cases, the importance of these assumptions is largely buried under a discussion of the mathematics of the term structure model. However, for those of us who are primarily concerned about the level of the term premium, the results are entirely driven by these fundamental estimation techniques.</div><div><br /></div><div>There is an alternative way of approaching this problem, which is based on a yield curve model that relies solely on statistical risk factors. The researcher then interprets one or more of these factors as being a term premium. Such an approach appears more reasonable, but analysis comes down to battling interpretations of data. The presumed attraction of term premium models is that they were supposed to eliminate verbal arguments over how to interpret yield curve movements. Since these models are quite distinct, they are not discussed in the rest of this article.</div><h2>Frequency Domain Problems</h2><div>In most cases, model estimates for the rate component use data that are at a lower frequency than bond market data. By definition, all of the high frequency components of bond yields ("noise") have to be attributed to the other factor.</div><div><br /></div><div>In particular, if we have a slow-moving estimate of expected rates, term premia will be oscillating at a high frequency. In my opinion, such a decomposition makes little economic sense. 
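The frequency mismatch is easy to demonstrate numerically. The sketch below is my own illustration using synthetic data (not any published model): a smoothed series stands in for a low-frequency expectations estimate, and the residual "term premium" inherits essentially all of the day-to-day wiggle.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic 10-year yield: a slow-moving component plus daily "noise".
# (Hypothetical data, purely for illustration.)
slow = 2.0 + 0.5 * np.sin(np.linspace(0.0, 4.0 * np.pi, n))
yields = slow + rng.normal(0.0, 0.10, n)

# Residual decomposition: force "expectations" to follow a low-frequency
# estimate (a 60-day moving average standing in for a survey), and define
# the term premium as whatever is left over.
window = 60
expectations = np.convolve(yields, np.ones(window) / window, mode="same")
term_premium = yields - expectations

# Compare day-to-day changes away from the window edges: nearly all of the
# high-frequency movement is loaded onto the "term premium" component.
core = slice(window, n - window)
exp_wiggle = np.std(np.diff(expectations[core]))
tp_wiggle = np.std(np.diff(term_premium[core]))
print(tp_wiggle > 5.0 * exp_wiggle)  # True
```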
(I would need to justify this intuition in other articles.)</div><h2>Why are Survey Estimates Dubious?</h2><div>It would seem that surveys regarding the path of short rates would be a useful estimator for rate expectations. However, these estimates are mainly used for entertainment purposes by market participants. (The people being surveyed tend to take them more seriously, of course.)</div><div><br /></div><div>The problem with surveys is that they are almost invariably set by a chief economist, who has to work with a committee to set a house view on the economy. Since each committee meeting is invariably a compromise between factions, there is considerable institutional inertia in their estimated path for short rates. Market participants are well aware of the tendency for economists to be stubborn, who only throw in the towel on their views after the bond market has already moved.</div><div><br /></div><div>Furthermore, there is considerable herding behaviour among economists in surveys. The optimal strategy is to just put your view at one end of the consensus. If the outcome is way outside the consensus in your favour, you have the best forecast, and people love you. If the outcome is on the other side, your forecast was only slightly more wrong than the others.</div><div><br /></div><div>To top things off, what matters for bond pricing is what investors think, not what economists think. Even if the investment firm has a Chief Economist, the positioning of the bond portfolio may have no resemblance to the Chief Economist's views. Large bond investors are extremely coy about their positioning. If they write public bond market commentary, it may only reflect a desire to get out of a position. (Fiduciary rules should certainly imply that such investors not signal future portfolio shifts.) </div><div><br /></div><div>Finally, surveys are done at a low frequency (and with an unknown lag), while market makers adjust prices instantly based on incoming data and flows. 
As discussed in the previous section, this loads all of the high frequency dynamics in the curve onto the term premium, which ends up wiggling around like a greased pig. <i>(The obvious fix to this frequency mismatch is to do a survey of views about the term premium; if it does move at a low frequency, you do not need to worry about aligning survey data to market data.)</i><br /><h2>Relationship to Realised Excess Returns</h2></div>If we want to interpret a time series as a term premium, it should have a relationship to future realised excess returns of a bond at that maturity. The deviation of the term premium from future excess returns is equal to the forecast error of the embedded rate expectations series.<br /><br />For the long end of the yield curve, we have problems with data limitations. The excess return of a 10-year swap starting in January 2000 is going to be pretty close to the excess return of a 10-year swap starting in February 2000. In order to create completely independent observations, we would need to use January 2010 as the next point we test. (I assume that there would be legitimate ways of taking samples closer together.)<br /><br />This runs into the problem that bond yields were regulated in the developed countries until the 1970s, or even the early 1980s. Furthermore, we had a major yield cycle within that era, in which it is clear that everyone overestimated future short rates. (<a href="http://www.bondeconomics.com/2013/09/historical-treasury-term-premia-huge.html" target="_blank">This did show up in historical excess returns.</a>)<br /><br />However, this is not the case for the front end of the curve. For example, in a 25-year period, we have 100 completely independent 3-month instruments issued. Additionally, short rates across currencies are somewhat uncorrelated, increasing the number of potential observations. This allows us to compare the predictions of the yield curve models with actual market behaviour. 
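The window arithmetic behind the independence point can be sketched directly (my own illustrative calculation, using months as the unit of time):

```python
# Completely non-overlapping holding periods in a 25-year sample.
SAMPLE_MONTHS = 25 * 12  # 300 months of data

def independent_observations(horizon_months):
    """Number of non-overlapping return windows that fit in the sample."""
    return SAMPLE_MONTHS // horizon_months

print(independent_observations(3))    # 100 independent 3-month instruments
print(independent_observations(120))  # 2 non-overlapping 10-year returns
```

One hundred independent front-end observations, versus only a couple at the 10-year point: this is why the front end is the place where the models can actually be tested.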
From an empirical standpoint, such historical analysis is where many term structure models fall apart.<br /><br />The overall relationship between a term premium and future excess returns is somewhat complicated; I may discuss it again in a later article.<br /><h2>Concluding Remarks</h2><div>This article outlines the generic problems with term premium estimates derived from term structure models. We can then look at particular models, and see how they relate to the specific technique used.</div><div><br /></div><div>I may look at one or two examples, but I am not enthusiastic about this task. Any critique pointing out that a model has an undesirable property just raises the response that another model does not have that property. Given the infinite number of models that are available, that is a never-ending game of Whac-a-Mole<span style="font-family: Calibri, sans-serif; font-size: 10pt;">™</span>. I would rather spend my time looking at techniques that are useful than be bogged down chasing after an unlimited number of techniques that appear to have few redeeming features.</div><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com0tag:blogger.com,1999:blog-5908830827135060852.post-11959035253527175612017-04-15T09:00:00.000-04:002017-04-15T09:00:00.154-04:00Book Review: The Road To Ruin<a href="https://www.amazon.com/gp/product/1591848083/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1591848083&linkCode=as2&tag=bondecon09-20&linkId=6ef16f124956ae5fc9ab5188046ce843" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&MarketPlace=US&ASIN=1591848083&ServiceVersion=20070822&ID=AsinImage&WS=1&Format=_SL160_&tag=bondecon09-20" /></a><img alt="" border="0" height="1" src="//ir-na.amazon-adsystem.com/e/ir?t=bondecon09-20&l=am2&o=1&a=1591848083" style="border: none 
!important; margin: 0px !important;" width="1" />James Rickards recently published <i>The Road to Ruin: The Global Elites' Secret Plan for the Next Financial Crisis.</i> His argument is that the risk of a financial crisis is building (possibly hitting in 2018), and that the financial system will be locked down as a result. (He argues that you need to buy gold to hedge against this.) The book is awkward, but has some interesting features. He describes various pop mathematics techniques for economic and financial analysis, although the book does not provide enough details to be able to evaluate them. James Rickards joined the LTCM hedge fund in 1994, and provides an insider's take on its collapse. He is also nostalgic for the economic framework of the 1950s, which parallels the views of a lot of post-Keynesians; the issue is that he is fixated on the gold peg, which was arguably an incidental feature of the 1950s economic institutions.<br /><h2><a name='more'></a>Book Description</h2><i>The Road to Ruin </i>was published in 2016 by an imprint of Penguin Random House. The hardback edition is 301 pages, excluding end matter. The hardback ISBN is 9781591848080.<br /><br />The book is aimed at a wide audience. Parts of the book are autobiographical; the description of the downfall of LTCM was one of the highlights of the book.<br /><h2>World Government?</h2>The unsatisfactory part of the book is the opening chapters. In them, he outlines his theory that the "global elites" intend to implement:<br /><ul><li>world taxation;</li><li>world currency (SDR's, of course); </li><li>world governance; and</li><li>a financial industry lockdown, which he refers to as "Ice-Nine," which comes from Kurt Vonnegut's<i> Cat's Cradle</i>.</li></ul><div>From a casual glance, that sounds like a nutty conspiracy theory. My interpretation of the text is that this was exactly what was intended. 
His arguments on these topics are reasonable observations (although I disagree with many points), but he and/or his editors made a deliberate effort to make them sound as crazy as possible (without writing anything downright loopy if quoted).</div><div><br /></div><div>My guess is that this was a marketing decision. It is certainly more lively than summarising your book as "complexity theory and Bayesian inference are really cool." Since these points did not appear to be completely serious, I will largely ignore them here.</div><h2>Pop Mathematics</h2><div>A good portion of the book is a discussion of the various popular mathematics techniques that are floating around. He highlights Bayesian inference, chaos, and complexity theory. (Behavioural finance gets an inevitable anecdote as well.) His description of Bayesian theory gives an example of its use, but the other concepts are not covered from a technical point of view. Saying that a more complex society is riskier is a reasonable argument, but it could be derived from historical arguments, without any reference to mathematics.</div><div><br /></div><div>He describes how physicists and the CIA use these techniques, and we are meant to be impressed. However, just because a mathematical technique is useful in one area does not imply anything about its usefulness elsewhere. Frequency domain analysis (Fourier analysis) is the backbone of electrical engineering, yet is only rarely applied in economics and finance. Gee-whiz stories about physicists running big simulations are a standard trope of the pop mathematics genre.</div><div><br /></div><div>His description of Bayesian inference provides the best details of the various techniques. The idea is to start with a number of initial hypotheses about the world, and then revise your estimated odds of their truthfulness based on incoming data. 
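The updating mechanism is simple enough to show in a few lines. This is my own toy example (not from the book, and the numbers are arbitrary): two hypotheses about a coin, with odds revised as flips arrive.

```python
# Two hypotheses about a coin, revised observation by observation.
priors = {"fair": 0.5, "biased": 0.5}      # initial odds (assumed)
likelihood = {"fair": 0.5, "biased": 0.7}  # P(heads | hypothesis)

def update(beliefs, flip):
    """Bayes' rule: reweight each hypothesis by how well it predicted the flip."""
    post = {h: p * (likelihood[h] if flip == "H" else 1.0 - likelihood[h])
            for h, p in beliefs.items()}
    total = sum(post.values())
    return {h: p / total for h, p in post.items()}

beliefs = dict(priors)
for flip in "HHTHHHH":  # incoming data: six heads, one tail
    beliefs = update(beliefs, flip)

print(round(beliefs["biased"], 2))  # 0.82: the odds have shifted toward "biased"
```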
The objective is to make a decision without requiring the long runs of data needed by other techniques.</div><div><br /></div><div>As Rickards notes, the problem is that you need to have good initial hypotheses for this to be anything other than a garbage-in, garbage-out exercise. I do not have strong opinions on this subject, unlike the people who get too deeply wedded to probability theory.<i> (There's something about probability theory that causes people to divide into cults that champion their preferred interpretation of probability density functions. Even when I was a hardcore academic mathematician, I felt that these probability theorists needed to get out more often.)</i></div><div><br /></div><div>Rickards incorrectly tries to paint Bayesian inference as being a heterodox view. On page 12: "Mainstream economists assume the future resemble the past within certain bounds defined by random distributions." Although I enjoy slagging off mainstream economists, this is a major misrepresentation. Currently, every second mainstream academic economics paper has "Bayesian" somewhere in the title. </div><br />Rickards repeats the various stories about chaos theory that have been a staple of pop mathematics for a couple of decades. He heralds the work of Edward Lorenz, a meteorologist, who allegedly discovered the notion of chaos in 1960. He lumps chaos theory in with complexity theory, which is a grave disservice to the rest of complexity theory.<br /><br />The key insight of the chaos theorists was that nonlinear systems behaved differently than linear systems. The response of any mathematician who was aware of the nonlinear dynamics literature would be: what did you expect? None of the alleged insights of chaos theory would surprise anyone who studied the difficulties of determining the solution of nonlinear systems.<i>*</i><br /><br />Complexity theory is more interesting, but it poses its own set of problems. 
The idea behind complexity theory is that you start off with a web of connected small actors, each of which follows simple rules. The overall system displays behaviour that would not be expected based on the low-level behaviour -- <i>emergent behaviour</i>.<br /><br />Emergent behaviour is interesting, and I understand that it is useful in physics. Emergent behaviour is also extremely important for video games.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-ldZBMfT4Hl0/WO-mSAMdbPI/AAAAAAAACr0/Gs0F0ozpHZI4n0vwchCM6-SLxT28bQQEQCLcB/s1600/civilization.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="408" src="https://4.bp.blogspot.com/-ldZBMfT4Hl0/WO-mSAMdbPI/AAAAAAAACr0/Gs0F0ozpHZI4n0vwchCM6-SLxT28bQQEQCLcB/s640/civilization.PNG" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Civilization (1991). Icons depict subsystems that interact to create emergent behaviour.</td></tr></tbody></table><br />For example, <i>Civilization</i> was published by Sid Meier in 1991. It consists of many simple systems that create delicate interactions. This makes the game engrossing, and helps create the illusion of being an in-depth simulation. (It probably delayed finishing my thesis by a couple of weeks.)<br /><br />Software like <i>Civilization</i> could easily be used as a teaching tool. I have no problems with teaching models; this is how I approach stock-flow consistent models. However, no number of hours of playing <i>Civilization </i>is going to give someone a proper understanding of the collapse of the western Roman Empire. That is, we cannot assume that the real world has some property just because a teaching model has that property. 
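The emergence mechanism itself fits in a few lines of code. The toy below is my own example (not from the book): agents on a ring copy the local majority of {left neighbour, self, right neighbour}, and large uniform blocks emerge even though nothing in the rule mentions blocks.

```python
import random

random.seed(42)
n = 60
state = [random.choice([0, 1]) for _ in range(n)]  # random initial opinions

def boundaries(cells):
    """Count interfaces between 0-blocks and 1-blocks on the ring."""
    m = len(cells)
    return sum(cells[i] != cells[(i + 1) % m] for i in range(m))

def step(cells):
    """Each agent adopts the majority of its 3-cell neighbourhood."""
    m = len(cells)
    return [1 if cells[(i - 1) % m] + cells[i] + cells[(i + 1) % m] >= 2 else 0
            for i in range(m)]

initial = boundaries(state)
for _ in range(30):
    state = step(state)

# The purely local rule smooths out isolated cells, so the global
# interface count falls -- an aggregate pattern no single rule encodes.
print(boundaries(state) <= initial)  # True
```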
The advantage of simple models like SFC models is that it is much clearer what is going on -- and the associated limitations -- than a complex black box.<br /><br />Furthermore, from the perspective of macro, we are primarily interested in the emergent macro behaviour. Why spend time trying to tune individual entity models so that they get emergent behaviour that matches data, instead of just trying to fit a relationship to observed aggregate macro data?<br /><h2>Yeah, VaR has Problems. </h2>Like some other mathematical visionaries, Rickards spends considerable time slamming all the clods in finance who rely upon the normal probability distribution, which is also used to calculate Value at Risk (VaR). Of course, VaR is the only risk measure that we ignorant fools use.<br /><br />However, none of the tools my team built relied upon an assumption of normal probability distributions. In the external analysis that I respected, I cannot recall anything that relied upon the normal distribution, so I have little reason to believe that we were an isolated band of geniuses.<br /><br />Sure, the risk managers used VaR. However, "VaR" was calculated a few different ways; the original technique that used normal probability distributions was just one variant. And like everywhere else, VaR was supplemented by other risk measures.<br /><br />Value at Risk is used for a reason: how do we discuss the global risk profile of a multi-asset class portfolio? (The problem with other risk measures is that they tend to be biased towards one asset class or another. The managers of the penalised asset class are not going to be happy with that unfairness, as it would mean that their risk-taking capacity would be crippled.) Until someone can give an alternative answer to that question (and not just wail about the normal distribution and fat tails), firms will keep using it.<br /><h2>Back to the 1950s</h2>Rickards' economic views are interesting. 
Although he is a fan of Schumpeter and gold, he essentially puts forth the 1950s as an economic utopia. This is the position of many on the left, although they are fans of the 90% marginal tax rates of that era (a detail that Rickards skips over).<br /><br />He argues in favour of a return to Glass-Steagall, protectionism and pegging the exchange rate. Unfortunately, he offers no idea how the United States could peg its exchange rate in the first place, other than pegging to gold. If other countries were pegged to gold (which was historically the case), that would work. In 2017, no other country pegs to gold. The government would have to implement capital controls to make a currency peg work, which is another aspect of the Bretton Woods era that Rickards skips over.<br /><br />A unilateral move to peg the U.S. dollar to gold is just a giant attempt at price fixing. Whoever implemented such a policy would be known as "President Chump" thereafter.<br /><ul><li>If the price is set too low, Fort Knox would be drained in days as foreign central banks cash in a small portion of their foreign exchange reserves.</li><li>If set too high (which is the gold investor fantasy), the United States Federal Government would end up snarfing everyone's excess gold. The U.S. government would end up borrowing to fund an even bigger position in gold that will just sit in Fort Knox. Although convenient for holders of gold and gold miners, it is hard to see how that benefits U.S. citizens. The only way the U.S. government could get out of this bad bet is to inflate the currency, so that the dollar price of gold drops to the world price. Not exactly a hard money policy. </li></ul><h2>Reliability Questionable</h2>Some of Rickards' statements throughout the book appear questionable. 
I will just comment on a couple of examples that stand out.<br /><br /><i>The Road to Ruin</i>, page 275:<br /><blockquote class="tr_bq">Where major currencies, bonds, and gold are involved, moves of this magnitude formerly took years. Now they take minutes or hours.<br />This type of volatility may be new to currency and bond traders.</blockquote>Ahem.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-fMpXl0ZlYjc/WOu4l17GNWI/AAAAAAAACrQ/_G9ZN4UbRe8-kOXQDZGHzJfjRTzIxPimwCPcB/s1600/%2Bc20170412_Tsy10_vol.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Chart: 10-year Treasury Volatility" border="0" src="https://3.bp.blogspot.com/-fMpXl0ZlYjc/WOu4l17GNWI/AAAAAAAACrQ/_G9ZN4UbRe8-kOXQDZGHzJfjRTzIxPimwCPcB/s1600/%2Bc20170412_Tsy10_vol.png" title="" /></a></div><br />To be merciful, I am not extending the time series back to the "Saturday Night Massacre" of 1979. As for rapidity of movement, traders did not refer to the NAPM report (now ISM) as "Napalm" for nothing. Pretty much all of the volatility in rates is the result of yields gapping in one direction or another.<br /><br /><i>The Road to Ruin</i>, page 187:<br /><blockquote class="tr_bq">Still the presence of 35,000 tons of gold in government vaults, about 15 percent of all the gold mined in history, testifies to gold's monetary role beside official denials. </blockquote>This shows ignorance about recent gold history: <i>central banks cannot get out of the trade. </i>The <a href="https://en.wikipedia.org/wiki/Sale_of_UK_gold_reserves,_1999%E2%80%932002" target="_blank">U.K. Chancellor of the Exchequer Gordon Brown earned global ridicule for his sales programme at the bottom of the gold market in 1999-2002</a>; that was the end of European central banks' attempts to exit gold. 
(The Bank of Canada was one of the few banks that managed to get out: they dumped their gold at the top of the market around 1980, rotating into bonds which also had peak yields at the time -- possibly the greatest macro trade by a central bank ever). Meanwhile, China is buying gold as that is one of the few ways of exiting its U.S. dollar position. Holding the bonds of a geopolitical rival is not a risk-free investment. That is a risk calculus not faced by most developed country central banks.<br /><h2>Crisis in 2018!</h2>One of the key takeaways of the book is that we are building up to a big financial crisis in 2018. Although such an outcome is plausible, there's not a whole lot of justification for it in the book. Saying that risks are building up in the system because of "complexity" is not a whole lot more informative than saying that we get financial crises in years that end with "8" (1998, 2008, ...).<br /><br />He also emphasises that this will lead to "global elites" locking down the financial system. I understand that gold fanciers were traumatised by the de-monetisation of gold, but they should learn to deal with it. Commerce resumed quite happily in the absence of gold. Ramming a "bank holiday" down the throat of the financial system to allow a panic to sort itself out is an emergency measure that has worked historically. However, the financial services needed by the real economy are always restarted quickly thereafter.<br /><h2>Concluding Remarks</h2>As should be obvious, I am not a fan of this book. However, it is interesting to see the drift towards nationalist economics even among gold investors. 
Meanwhile, if you want a compendium of popular mathematics, this book is a gold mine.<br /><br /><b>Footnote:</b><br /><br />* Don't get me started about the %$&# butterflies and hurricanes.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com3tag:blogger.com,1999:blog-5908830827135060852.post-84649415902446047702017-04-12T09:00:00.000-04:002017-04-12T09:00:26.630-04:00Low Bond Volatility Not Surprising<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-fMpXl0ZlYjc/WOu4l17GNWI/AAAAAAAACrM/dxRSLLYmhGYKbwW2hL3i9ICTf3Hc8wKAgCLcB/s1600/%2Bc20170412_Tsy10_vol.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Chart: 10-year Treasury Historical Volatility" border="0" src="https://4.bp.blogspot.com/-fMpXl0ZlYjc/WOu4l17GNWI/AAAAAAAACrM/dxRSLLYmhGYKbwW2hL3i9ICTf3Hc8wKAgCLcB/s1600/%2Bc20170412_Tsy10_vol.png" title="" /></a></div><br />The low levels of implied volatility in the bond market have attracted a fair amount of commentary. Although it seems reasonable to believe that volatility selling strategies have reduced market volatility, there's no fundamental reason to expect a big reversal (outside of another crisis).<br /><br /><a name='more'></a>I will immediately note that I am not particularly in tune with what is happening in the nooks and crannies of the fixed income volatility market; I am just attempting to give a simple explanation of the fundamental forces that result in low volatility. Additionally, this should not be construed as investment advice. Although there are good reasons to be complacent about fixed income volatility, it can only go one way in a crisis.<br /><br />The chart above shows the realised (historical) volatility of the 10-year Treasury (using the Fed H.15 data). As can be seen, the 50-day volatility is near a record low -- which is unusual for a tightening cycle. 
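For readers who want to replicate this kind of chart, the rolling realised-volatility calculation is straightforward. The sketch below is illustrative only: synthetic random-walk yields stand in for the H.15 series, and the 252-day annualisation convention is an assumption.

```python
import numpy as np

# Synthetic daily 10-year yields (a random walk standing in for H.15 data).
rng = np.random.default_rng(1)
yields = 2.4 + np.cumsum(rng.normal(0.0, 0.03, 300))

changes = np.diff(yields)  # daily yield changes (in percentage points)
window = 50
# Rolling annualised standard deviation of daily changes ("50-day vol").
real_vol = np.array([changes[i - window:i].std() * np.sqrt(252)
                     for i in range(window, len(changes) + 1)])
print(real_vol.shape)  # (250,): one reading per day once the window fills
```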
This reality has to be kept in mind when discussing the low levels of implied volatility: implied volatility is supposed to be an unbiased estimator for realised volatility. If traders kept implied volatility higher than realised, it would be easy to lock in <a href="http://www.bondeconomics.com/2017/04/primer-fixed-income-arbitrage.html" target="_blank">pseudo-arbitrage profits</a>, with very little risk (and balance sheet exposure). Therefore, we should not be surprised that implied volatility is low.<br /><br />There is an interesting feedback loop between the options market and the bond market. If investors are selling a lot of options (volatility), then this has the effect of helping suppress realised volatility (which benefits investors who sell volatility). How this effect works is probably not obvious; I will now sketch the mechanism.<br /><br />In order to discuss volatility without taking a view on the direction of bond yields, we need to buy a straddle (a put and a call). <i>(My working assumption for this article is that the reader knows what a put and a call are.) </i>If we just buy a put (or a call), we are positioning for a move in interest rates in one direction or another, which takes the focus away from the effects of volatility. If we buy a straddle, we just want rates to move in either direction.<br /><br />It must be kept in mind that the options market is zero sum; for every seller, there is a buyer. If both sides of an options trade hedged in a similar fashion, there would be little net effect on the bond market. 
In order for the feedback loop to kick in, we need asymmetric behaviour.<ul><li>The seller of the straddle does not delta-hedge (discussed below).</li><li>The buyer of the straddle delta-hedges the risk.</li></ul><div>This is what we would expect to happen if there was a net imbalance of investors who wanted to sell volatility; the market is only balanced out by drawing in other investors who think implied volatility is too low, but do not want to be exposed to directional risks.</div><h2>Actions of the Straddle Buyer</h2><div>Let's assume that the bond yield starts at 2%, and the strike of the straddle is at 2% as well (it is <i>at-the-money</i>).* The directional risks of the call and put cancel out, and so the buyer has no directional exposure. The buyer wants to remain with no directional risk, so she follows a <i>delta-hedging </i>strategy.</div><div><ul><li>The next day, the bond yield rises to 2.10% (and hence, the price falls). The call option leg of the straddle loses money, and the put option leg gains money. The behaviour of options which are close to at-the-money tells us that the put option gains money faster than the call option loses money. As a result, at 2.10%, the position now has a net short exposure to bonds: it makes more money if bond yields rise by 1 basis point than if they fall by 1 basis point. In order to hedge out this undesired directional position, the straddle buyer would buy a small position in the bond to return to a net neutral position.</li><li>The following day, the yield drops back to 2.00%. Both the put and call are at-the-money again, and so once again, the straddle has a neutral interest rate sensitivity. However, the previously bought bond position now unbalances the portfolio. 
The straddle buyer will thus sell the bond position that was bought the previous day.</li></ul><div>The profitability of the straddle buyer is determined by two factors.</div></div><div><ol><li>There will have been a profit on the round-trip bond position; it was bought at 2.10%, and sold at 2.00%. <i>(Yield down, price up!) </i>The size of this profit was determined by the size of the move in yields; if the bond had only gone from 2.00% -> 2.05% -> 2.00%, the profit would have been roughly quartered, since both the size of the hedge and the price gain on it are halved.</li><li>The straddle is now two days closer to expiry. Its time value would have dropped (assuming unchanged implied volatility). If the straddle originally had two days to expiry, it expired worthless.</li></ol><div>In summary, the net profit of the straddle buyer depends upon the relationship between realised volatility and the cost of the option (implied volatility). </div></div><div><br /></div><div>For our discussion here, the key point is that the straddle buyer acts in a stabilising fashion for the bond market: when the bond yield rose, she bought, and then she sold when the yield fell.</div><h2>Straddle Seller</h2><div>If the straddle seller delta-hedged as well, the transactions would mirror those of the buyer. This would have no net effect on the bond market. However, this is probably not what most volatility sellers want to do. Instead, the seller does not hedge.</div><div><br /></div><div>In order to lose money over the life of the straddle, the bond yield has to move far enough away from the straddle strike so as to generate a capital loss that is greater than the received premium for the straddle.
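To put rough numbers on the two-day scenario, here is a toy sketch of the straddle buyer's hedging profit. Every parameter is an assumption invented for illustration (the hedge bond's duration, and the net delta the straddle picks up per unit of yield move); real numbers would come from an option pricing model.

```python
# Toy sketch of the straddle buyer's delta-hedging profit in the two-day
# scenario: yields go 2.00% -> 2.10% -> 2.00%. All parameters are
# illustrative assumptions, not output from a real pricing model.

DURATION = 8.0       # hedge bond: fractional price change ~ -duration * yield change
DELTA_PER_PCT = 4.0  # assumed net delta (bond units) picked up per 1% yield move

def hedge_units(move_pct):
    """Net short delta (in bond units) the straddle acquires after a yield
    move; a linear (constant-gamma) approximation, valid near the strike."""
    return DELTA_PER_PCT * move_pct

def round_trip_profit(start_pct, peak_pct, notional=100.0):
    """Profit from buying the hedge bond when yields peak at peak_pct,
    then selling it once yields fall back to start_pct."""
    move = peak_pct - start_pct              # e.g. 0.10 (= 10 basis points)
    units = hedge_units(move)                # bonds bought at the yield peak
    price_gain = DURATION * move / 100.0     # fractional price rise on the way back
    return units * price_gain * notional

print(round_trip_profit(2.00, 2.10))  # 10 basis point round trip
print(round_trip_profit(2.00, 2.05))  # 5 basis point round trip
```

Note that under this linear-delta approximation, the hedging profit scales with the square of the yield move. The buyer's net P&L would subtract the straddle's time decay over the two days, which is where implied volatility enters: the buyer comes out ahead only if realised volatility exceeds what was paid for.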
The scenario above makes the straddle seller happy, as the bond yield is back where it started from, and two days have ticked away from the life of the straddle.</div><div><br /></div><div>Therefore, in order to get the straddle seller worried, yields have to move strongly in one direction or another; it is not enough to just jump around a certain level.</div><div><br /></div><div>In other words, if bond yields range trade, unhedged volatility sellers laugh all the way to the bank.</div><div><br /></div><div>However, things get ugly if yields start to move, and the straddle seller wants to get risk under control. In this case, the seller needs to act in a fashion that accentuates market movements: selling when prices fall, or buying when prices rise.</div><h2>Can We Hit a Volatility Crisis?</h2><div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-1HrCH3kAcX0/WOvOjrSs72I/AAAAAAAACrc/zq9Y19mj3VY7H25JB-mY76XgYGzVbcqmACLcB/s1600/%2Bc20170412_ED3M_vol.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Chart: 3-month USD eurodollar historical vol" border="0" src="https://2.bp.blogspot.com/-1HrCH3kAcX0/WOvOjrSs72I/AAAAAAAACrc/zq9Y19mj3VY7H25JB-mY76XgYGzVbcqmACLcB/s1600/%2Bc20170412_ED3M_vol.png" title="" /></a></div><br /></div><div>Although a crisis driven by volatility hedging in interest rates is possible, the obvious difficulty is that it will be hard to move yields away from where forwards lie. The chart above shows the historical volatility of the 3-month eurodollar rate, on a 24-month window. This volatility is driven entirely by Fed policy and the LIBOR spread, and is not affected by delta hedging. As one would suspect, it has been extremely low in recent years.
Even though the Fed started hiking in December 2015, the realised volatility is still below almost the entire pre-Financial Crisis history.<br /><br />The Federal Reserve is dominated by New Keynesians, and they impute great powers to the central bank's ability to guide expectations. The result is that during an expansion, the short rate ends up pretty close to where the forwards priced it to be. This generates a pattern of range-trading -- which means that realised volatility remains low.<br /><br />There are a great many commentators who are disturbed by this state of affairs. They believe that the Federal Reserve should act in an erratic fashion, so as to blow up the bond market (<a href="http://www.bondeconomics.com/2017/04/the-term-premium-problem.html" target="_blank">by raising term premia?</a>). The apparent logic is that the Fed needs to cause a crisis, in order to prevent a crisis. It may be that personnel changes in the Fed could lead to such an outcome, but I would not hold my breath waiting for that.<br /><h2>Mortgage Market</h2>The mortgage market is always good for generating wonkish-sounding stories about risk in fixed income. The reason why American conventional mortgage-backed securities are good for volatility is that the bulk of them are 30-year amortising instruments, with an embedded call option -- homeowners are largely free to refinance at lower rates. (Such consumer-friendly mortgages do not exist in most other developed countries.)<br /><br />The result is that the duration of a mortgage-backed security (MBS) drops as yields get lower: we expect borrowers to refinance, and so we effectively end up with a short maturity instrument. If rates then rise, we no longer expect refinancing, and so the effective maturity lengthens. If you are hedging a pool of mortgages, you end up trading in the same direction as market moves.
(As should be expected, this is a short volatility position.)<br /><br />If you are worried about rising yields, the MBS market should not be too much of a concern if implied volatility is low. The low implied volatility means that the discounted odds of a refinancing are already low, and so the effective duration of the MBS is already quite long. This is unlike the situation when mortgage hedging was a big deal in the market, which featured large moves into and out of refinancing range for mortgage portfolios. Any mortgage hedging that might occur would be dwarfed by the need for funds to buy bonds to meet actuarial liabilities.<br /><br />The only real scare story is that we have a big rally in bonds, in which case refinancing might once again matter.<br /><h2>Concluding Remarks</h2>Although it is fun to imagine potential crises, the rates market is probably not going to be the source of problems. As always, the main risks revolve around unsound lending practices.<br /><br /></div><h2>Technical Appendix</h2>My implied volatility charts show the <i>normal</i> volatility, which measures volatility in a number of basis points per time period (day, month, year). It is calculated as the standard deviation of yield changes over the trading window. An alternative is <i>log-normal</i> volatility, where the volatility is expressed as a percentage of the level of yields. (This is similar to how volatility is defined for equity prices.) Fixed income models can have more flexible behaviour, for example lying between these two cases.<br /><br />In a low rate environment, log-normal volatility is problematic, and so my charts are reasonable.
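The two conventions can be made concrete with a short sketch; the daily yield series below is invented for illustration.

```python
import math

# Sketch of the two volatility conventions: "normal" volatility (standard
# deviation of yield changes, in basis points per day) versus "log-normal"
# volatility (standard deviation of log yield changes, i.e. moves expressed
# as a fraction of the yield level). The yield series is invented.

yields = [2.00, 2.03, 1.98, 2.01, 2.05, 2.02, 1.99, 2.04]  # daily yields, in %

def stdev(xs):
    """Sample standard deviation."""
    mean = sum(xs) / len(xs)
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / (len(xs) - 1))

# Normal volatility: daily yield changes in basis points.
bp_changes = [(b - a) * 100.0 for a, b in zip(yields, yields[1:])]
normal_vol = stdev(bp_changes)        # basis points per day

# Log-normal volatility: daily log changes (fraction of the yield level).
log_changes = [math.log(b / a) for a, b in zip(yields, yields[1:])]
lognormal_vol = stdev(log_changes)    # fraction per day

# With yields near 2%, a 1% relative move is about 2 basis points, so the
# two measures differ by a factor of roughly 100 times the yield level.
print(normal_vol, lognormal_vol)
```

Either daily figure can be annualised with the usual square-root-of-time rule. The point in the text follows directly: the same normal (basis point) volatility corresponds to a lower log-normal volatility when the level of yields is higher.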
However, people who believe that rates are closer to log-normal in behaviour might argue that the rates volatility is overstated at the beginning of the time period (since the absolute level of yields was higher, and so the volatility should be scaled down).<br /><br /><b>Footnote:</b><br /><br />* I am skipping a lot of the details that would be needed for option pricing; I am not even giving the maturity of the bond. Purists would note that what matters is the forward yield, and not the spot yield. The embedded assumption in this example is that the forward and spot yield are the same, which is not that far off most of the time.<br /><br />(c) Brian Romanchuk 2017<br /><br /><b>How To Approach The Term Premium</b> (2017-04-10)<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-99iysr6OvrM/VJN8rd0SuwI/AAAAAAAABPs/5yiQhkoQpwg7bjcj9bRPH3WeBSAunotIgCPcB/s1600/logo_bond_market.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-99iysr6OvrM/VJN8rd0SuwI/AAAAAAAABPs/5yiQhkoQpwg7bjcj9bRPH3WeBSAunotIgCPcB/s1600/logo_bond_market.png" /></a></div>The term premium is an important concept in fixed income analysis. For our own analysis, there are a few ways of using the term premium. Unfortunately, there is no way of extending the analysis for an individual to the market in general, as there is no need for market participants to agree on the term premium before undertaking a transaction.
As a result, we should not expect to be able to infer an average term premium implied by market pricing using any algorithm.<br /><br /><a name='more'></a>This article follows on from the article "<a href="http://www.bondeconomics.com/2017/04/the-term-premium-problem.html" target="_blank">The Term Premium Problem</a>," which outlined my thinking about the term premium. I imagine that readers would be most interested in my criticisms of existing techniques to calculate the term premium. My argument is that the problem with those techniques is that they start in the wrong place; there is no technical fix as a result. Rather than attempt to criticise hundreds of complex algorithms, I will instead explain what I see as the best starting point. From that vantage point, the defects of the conventional approaches become more obvious.<br /><h2>Using the Term Premium as an Individual</h2>I believe that investors who are making directional decisions in the fixed income markets <i>should</i> use the concept of the term premium. (<i>Directional </i>trades are positions that have risk exposure to the level of interest rates across the curve. Conversely, in relative value trading, one normally attempts to hedge out the directional risk as well as possible.)<br /><br />(I assume that the reader is familiar with the concept; <a href="http://www.bondeconomics.com/2013/09/primer-what-is-term-premium.html" target="_blank">please see this primer for a definition of the term premium</a>. But as a quick summary, the term premium for a bond is the additional yield it is expected to have versus rolling over short-term bills -- cash, in bond market jargon -- over the life of the bond.
The expected return on cash is equal to the expected average of the short rate, modulo various small technical effects that I am ignoring for simplicity.)<br /><br />Importantly, there are a number of ways of using the term premium; these different usages imply slightly different definitions for the concept. The reality that the definition of the term premium depends on how we are using it is a subtlety that I rarely see discussed.<br /><ul><li>How much extra return do I demand in order to hold a bond instead of cash? This definition is purely determined by my preferences.</li><li>Given my expectations for the path of short rates, what level of term premium determines the fair value of a bond yield? That is, what is the fair value of the term premium? This fair value can be determined independently of the bond yield observed in the market; the objective is that we can buy or sell on a profitable basis when comparing the market yield to fair value.</li><li>Given my expectations for the path of short rates, what is the level of the term premium implied by the observed bond yield in the market? This market-implied premium -- relative to my rate expectations -- would presumably be an input into investment decisions.</li><li>What do I think other market participants believe is the proper level of the term premium for their own decisions? Why this is interesting may not be immediately obvious. As an example, if I am a market maker, I need to set my prices midway between other market participants, as I need to have two-sided trading flows.
I need to find what I believe is an "average" rate expectation and term premium so that my prices are roughly in the middle of the market.</li></ul><div>I can obviously determine the level of the term premium I am using in each of these cases, since I can always ask myself.</div><h2>Realised (Historical) Term Premia</h2><div>One alternative way of approaching the term premium is to look at historical excess returns for bonds. The problem is that a bond's excess returns are baked in at issuance, and those excess returns are highly auto-correlated over time. More simply, we have been in a mega bond bull market, and <a href="http://www.bondeconomics.com/2013/09/historical-treasury-term-premia-huge.html" target="_blank">so bonds' excess returns have been huge</a>.<br /><br />I think we need to take into account historical excess returns when discussing the forward-looking term premium, but we have to accept that market participants have historically been quite wrong about the direction of interest rates, and these errors were persistent. However, if we are looking at maturities of 2 years and under, these errors should be less significant.<br /><h2>Should Term Premia be Positive?</h2></div><div>Under the classical "returns volatility is bad" approach to finance, one would assume that term premia are positive. (That would certainly match the historical experience.) However, in a world where investors need to match actuarial liabilities, reinvestment risk can easily be less significant than the aversion to returns volatility. That said, it is hard to see how a negative term premium estimate on a 2-year bond is plausible.</div><h2>The Moral Philosophy of Bond Pricing</h2><div>I deliberately used the word "should" when I wrote that investors <i>should</i> use the concept of the term premium. I have made a weak normative statement: investors should be <i>rational </i>when pricing bonds.
(I am using rational as it is usually used in economics and finance, and the rate expectations/term premium approach is what is typically implied by investor rationality.)</div><div><br /></div>If everyone followed this prescription, all market participants would be trading bonds based on valuations derived using rate expectations and the term premium. Under this assumption, it appears to make sense that we could write about the average term premium amongst market participants.<br /><br />However, as I will discuss in the next section, we need to question this assumption. Once we take into account the various other factors that go into investment decisions, investors may no longer transact bonds based on rate expectations and term premium, even if they are not narrowly "irrational." <i>In such a case, we no longer have an "average" term premium (I am using average in a loose sense, not necessarily the arithmetic mean) that describes aggregate market participant behaviour.</i><br /><br />I have never run across any serious discussion of the aggregation problem for term premia (I never bothered searching for such discussions; as a non-academic, that's not my problem). One explanation is that the assumption of rationality is so ingrained that the possibility that people can trade bonds without a view on the term premium was never taken too seriously.<br /><h2>The Average Term Premium Does Not Exist</h2>I will now give a simplified example that highlights the problem with believing that there is an "average" term premium.<br /><br />Imagine that trading one day in the 10-year bond is dominated by four large fund investors (possibly intermediated by dealers that end up with no net positions); assume that all are transacting in roughly equal size. All four investors are behaving in an optimal fashion, based on their situation.
For simplicity, we assume that all trades clear at 4%; we will not worry about the mechanism that determined the market clearing yield.<br /><ul><li><b>Buyer.</b> One investor assumes that the term premium is 0.50%, and the expected average of the short rate is 3.00%. This investor's fair value is 3.50% -- below the 4% market yield -- and so this investor is buying the 10-year.</li><li><b>Seller.</b> One investor assumes that the term premium is 1.25%, and revised up the expected average short rate to 3.25% as the result of new data released that day. With a fair value of 4.50%, this investor sells 10-year bonds.</li><li><b>Buyer. </b>One fund was forced to buy bonds in order to lower the Value at Risk of its aggregate portfolio; the 10-year bond was assumed to have a negative return correlation with the fund's equity position.</li><li><b>Seller.</b> A bond index fund was forced to sell to meet redemptions by households who invest in the fund. Although each household had its own reasons, many were selling to raise cash in order to make tax payments.</li></ul><div>There is no way of going from the observed bond yield to the risk premium. Even for the market participants who had term premium estimates, their estimates did not agree. Of course, two of the funds transacted without a defined term premium in mind. The market clears on the basis of bond yields, not risk premia. (The equity risk premium is in a similar position, but there are some practical differences, as I discuss below.)</div><div><br /></div><div>Pretending that there is an infinite number of investors that we can average out is not a realistic response. The fixed income market is a scale business; trading is dominated by a few entities.<br /><br /></div><div>Things get even worse for the idea of an "average" term premium if we bring in "irrational" investors.</div><h2>"Irrational" Participants</h2><div>There are many bond market participants that will transact in a way that cannot be interpreted based upon term premia and expected short rates.
(The behaviour might be considered rational from the perspective of a more complex utility function; but it will appear irrational from our narrow perspective here.)</div><div><ul><li><b>Are committees rational?</b> Most funds make directional investment decisions using an investment committee; individual portfolio managers normally have limited discretion to take risk. There is no reason for committee members to agree on how to decompose bond yields; they invariably make decisions based on observed market yields. The complexity of group decision making may make it unlikely that we can fit a portfolio allocation decision to observed behaviour.</li><li><b>Technical Traders. </b>There may still be investors that trade bonds based on things like candlestick charts (although such managers seem to be increasingly rare). However, there do seem to be people who trade government bonds based on stories they read on the internet.</li><li><b>Behavioural Finance.</b> <i>&lt;Insert behavioural finance anecdotes here.&gt;</i></li><li><b>Balance sheet driven investors. </b>Central bank reserve managers have been notoriously price insensitive. Many individuals have automatic investment plans; and it is a safe bet that most households do not have any views about the level of the term premium.</li><li><b>Borrowers. </b>We cannot look at just the investors for determining how markets clear. Borrowers also adjust their issuance profile over time, and they have to take into account multiple factors for their choices.</li></ul><h2>Analogy to the Equity Risk Premium</h2></div><div>There are similarities between the equity risk premium and the term premium. In my view, there is a key difference in their behaviour. In order to calculate the equity risk premium, we need a long-term earnings growth (or dividend growth) estimate, which only moves at a low frequency.
The equity risk premium moves at a high frequency to bring the high frequency market data in line with low frequency fundamental data. (The discount rate also moves at a high frequency, but is largely inconsequential for valuation on a day-to-day basis.) Once we decide what earnings growth series we use, we have no difficulty in pinning down an equity risk premium -- if we assume that investors all agree on earnings growth prospects (which is unlikely). </div><div><br /></div><div>When we look at some estimates of the term premium generated by arbitrage-free yield curve models, both the rate expectations and the term premium are moving at a high frequency. It is going to be much more difficult to untangle these time series.</div><div><br /></div><div>I will discuss this frequency issue in greater depth when I comment on the arbitrage-free yield curve estimates of the term premium.</div><h2>Even if You Have a Term Premium, What Do You Do With It?</h2>It might be possible to commission a survey of investors asking what term premium they are assuming in their investment decisions. (This is quite different from how some analysts are using existing survey data; a point I may return to in later articles.)<br /><br />I would have serious doubts about the validity of such a survey; I certainly would not have offered outsiders any peeks at our proprietary investment analytics when I was with an investment firm. The most likely outcome is that the responses would be filled in by junior economists, who would then just grab the latest data points off a term structure model that is in the public domain.<br /><br />However, even if we have access to such data, how do we use it? In the examples of how an individual uses a term premium, the concept makes sense; it offers <i>me </i>guidance on how <i>I</i> should act. It is unclear what information an aggregate term premium would give us -- who is acting, and why?
Unless it can be related to some observable financial or economic outcome, there is nothing that distinguishes one term premium estimate from another.<br /><h2>Concluding Remarks</h2>The term premium is a well-defined concept within our own analysis, although the exact definition depends on how you are using it. However, there is no way of looking at market data to determine an average term premium used by market participants. Therefore, when we look at a time series that is labelled as an average term premium, we should not expect it to be coherent with how any individual would price bonds.<br /><br />(c) Brian Romanchuk 2017