tag:blogger.com,1999:blog-59088308271350608522017-12-14T06:44:19.388-05:00Bond EconomicsBrian Romanchuk's commentary and books on bond market economics.Brian Romanchuknoreply@blogger.comBlogger623125tag:blogger.com,1999:blog-5908830827135060852.post-6461303249537233012017-12-13T09:00:00.000-05:002017-12-13T09:00:18.964-05:00Comments On Short-Dated Breakeven Inflation<div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-Bj9ku8L37AQ/Wi_fvxlDL8I/AAAAAAAADG8/K45c1CYkKXoIZauc4yJNgm5v-1d_ATXhwCKgBGAs/s1600/logo_linkers.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://3.bp.blogspot.com/-Bj9ku8L37AQ/Wi_fvxlDL8I/AAAAAAAADG8/K45c1CYkKXoIZauc4yJNgm5v-1d_ATXhwCKgBGAs/s1600/logo_linkers.png" /></a></div><a href="http://www.bondeconomics.com/2017/12/is-breakeven-inflation-same-thing-as.html" target="_blank">In an earlier article, I commented on the usefulness of breakeven inflation as an inflation forecast</a>. This article continues that line of thought, but with respect to short maturity index-linked bonds. In my view, there is significant market segmentation in the index-linked market when compared to conventional bonds. My belief is that short-dated breakevens are relatively unbiased, but have to be interpreted as a play on oil prices.<br /><br /><a name='more'></a><b>Note: </b><i>This article is fairly loose. In it, I explain why I thought the front end of the TIPS curve was fairly valued when I looked at it in 2006-2008 (pre-Crisis 2008!). Someone with access to the relevant data could bring the argument up-to-date. It is a safe bet that an academic would sneer at the suggested methodology. 
Instead, most academics would apply the standard statistical techniques to the problem -- and get the standard wrong answer.</i><br /><h2>Digression: Market Segmentation</h2>Before we discuss the efficiency of short-dated breakeven inflation, we need to understand why I am breaking out the analysis of this part of the curve in the first place. Although "short-dated" is a relatively vague term, in this case, we can be more precise: under 1-year maturity. What is special about the 1-year point of the curve? That's where bonds drop out of standard bond indices.<br /><br />The first background point to realise is that a significant portion of the bond market is now held by institutional investors who are managing against a bond index. For a non-levered indexed investor, your portfolio is thought of as:<br /><ul><li>the bond index itself;</li><li>levered spread trades that represent your deviations from the index.</li></ul><div>The fact that your departures from the index portfolio are effectively leveraged trades means that you can read exactly the same dealer research as levered investors and apply the same trade ideas. If you do not want to borrow securities, you are limited in how far you can sell securities short (minimum holding of $0), but that's about it.</div><div><br /></div><div>Holding a bond of maturity under a year means that you are effectively entering a spread trade against longer maturity bonds. Since the durations of bonds under one year decay much faster in percentage terms than those of long-dated bonds, your hedging ratios move rapidly. (For example, a 6-month bond loses 50% of its time-to-maturity over 3 months, while a 2-year bond loses 12.5%. Time-to-maturity is a decent proxy for duration.) This makes managing the position a pain, so you normally want to sell those bonds out of your bond index portfolio.</div><div><br /></div><div>In the conventional bond world, this is not a problem. 
Money market funds can generally buy bonds of up to one year maturity, and money market investors are generally desperate to find any way to outperform. As a result, selling from bond funds is easily absorbed.</div><div><br /></div><div>The same does not hold for index-linked bonds. There are no inflation-linked money market funds. It is extremely difficult to imagine scenarios in which anyone needs to buy inflation protection on a one-year horizon (outside of fantasies spun by some economists). Meanwhile, since nobody issues index-linked bonds of under one year maturity, there is no primary market activity to help create a yield curve.</div><div><br /></div><div>By default, leveraged investors have to step up and buy the bonds. And say what you want about leveraged investors, they are not going to make analytical mistakes like buying an index-linked bond because it has an attractive quoted (real) yield. Since they are borrowing in nominal terms to fund the position, they need to know exactly what the forecast nominal return will be.</div><h2>Limited Sample Size</h2><div>One of the problems with analysis of this style is that the sample size is very small. There are no issuance programmes of index-linked Treasury bills; we only have a few bonds crashing through the 1-year maturity barrier.</div><div><ul><li>In Canada, the shortest maturity linker matures in 2021, and so at the time of writing, no Government of Canada Real Return Bond was below 1 year maturity. (There may have been provincial issues.)</li><li>The old U.K. index-linked gilt design was a horror show, and the coupon payment was fixed six months in advance. If you have the price data and associated pricers, this market gives the longest back history. That said, it appears that such bonds would have been completely illiquid, and so the reliability of pricing data would be open to question.</li><li>Since 2008, euro area linker pricing is very sensitive to things like default risk. 
There is a decent sample size, but you would need to be very careful with the data.</li><li>There were a few U.S. TIPS that matured to provide a sample.</li><li>(I am unfamiliar with the linker markets elsewhere, such as Australia and New Zealand.)</li></ul><div>Since the maturing bonds are at one extreme of the linker yield curve, you would need to look at their individual pricing data, and not rely on a fitted curve. Your statistical tests of market efficiency would be purely an analysis of how well the yield curve fitting algorithm extrapolates the curve. Based on my experience, I would have zero confidence in any algorithm's ability to extrapolate a linker curve. (For conventional bonds, you can start pulling in other instruments to pin down short maturities.)</div></div><h2>Is Pricing Efficient?</h2><div>It is extremely common to discuss the "efficiency of pricing" in markets (or market efficiency). <a href="http://www.bondeconomics.com/2017/11/defining-market-efficiency-properly.html" target="_blank">As I discussed earlier, we really should be thinking about the efficiency of investors, and not anthropomorphise markets.</a> In my limited experience (which could easily be out-of-date), the short end of the index-linked market featured efficient investors.</div><div><br /></div><div>If you looked at any strategy analysis, it revolved around comparing the breakeven CPI index fixings to the author's forecasts. During the 2006 to mid-2008 period, those forecasts were pretty close to realised fixings. (I do not remember whether the forecasts from the strategists were close to the survey consensus, but they were probably close.)</div><div><br /></div><div>Under the assumption that this is how the market participants were pricing the bonds in practice, that methodology is theoretically efficient. 
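As an aside, the fixing-comparison exercise is easy to sketch numerically. The yields, CPI base, and forecast below are all hypothetical, and a real-world breakeven calculation would also have to handle indexation lags, CPI seasonality, and day-count conventions, which are ignored here:

```python
# Toy illustration of comparing a market-implied CPI fixing to a
# forecast fixing. All numbers are hypothetical.

def breakeven_rate(nominal_yield, real_yield):
    """Annualised breakeven inflation from a conventional/linker pair."""
    return (1.0 + nominal_yield) / (1.0 + real_yield) - 1.0

def implied_cpi_fixing(base_cpi, breakeven, years):
    """CPI index level the market is implicitly pricing at maturity."""
    return base_cpi * (1.0 + breakeven) ** years

nominal = 0.045          # 1-year conventional yield (hypothetical)
real = 0.020             # 1-year linker real yield (hypothetical)
bei = breakeven_rate(nominal, real)

market_fixing = implied_cpi_fixing(250.0, bei, 1.0)
forecast_fixing = 256.0  # strategist's CPI forecast (hypothetical)

# If the forecast fixing is above the market-implied fixing, the
# breakeven looks cheap relative to that forecast (ignoring carry
# and financing considerations).
print(round(bei, 4), round(market_fixing, 2), forecast_fixing)
```

The point of the comparison is its transparency: no term structure model is required, just a CPI forecast.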
This is in stark contrast to the multitude of valuation methods used to come up with a fair value for the conventional 10-year Treasury yield among strategists and commentators. (The 10-year JGB yield is an even greater source of analytical confusion.) The price action I observed matched the evolution of the presumed weighted average forecast.</div><div><br /></div><div>That was obviously a subjective opinion. However, the reader could look at current pricing, and come to their own opinion in a straightforward fashion; they do not need some academic with a highly questionable affine term structure model to tell them whether the economic breakeven is off market when compared to forecasts. If one wanted a more general answer, there are two standard methods, both of which are unsatisfactory.</div><div><ol><li>One could compare the breakeven inflation rate versus published consensus forecasts. The problem with this is that these consensus forecasts largely represent herding behaviour of street economists, and most investors would not base investment decisions solely upon them. Furthermore, breakeven inflation rates are available in real time, while forecasts reflect slow-moving committee decisions.</li><li>We could compare the economic breakeven to realised inflation. The problem with that approach is that it tests whether market participants are clairvoyants, not whether they are leaving money on the table.</li></ol><div>The second point can be expanded upon. The main driver of CPI inflation in the short term is oil price (technically, gasoline) movements. All useful CPI forecasts are effectively conditional upon oil price movements. This means that short-dated index-linked positions are extremely interesting to fixed income macro investors: this is one of the few ways within fixed income to take a position on anything other than interest rates (or credit spreads).</div></div><div><br /></div><div>Historically, this was not the case. 
Bond investors would take currency risk as a way of macro trading. However, modern portfolio management techniques have put currency risk into the hands of the forex team ("the separation of church and state"). Once you take away credit risk (in the hands of the credit team) and emerging market bonds (in the hands of an EM team), the only other way to generate excitement is insane leverage levels (either via derivatives or borrowing). This means that trading short-dated breakevens as an oil proxy generates trading activity.</div><div><br /></div><div>The side effect of this oil dependence is that the uncertainty of the effect of oil prices on the inflation forecast is probably an order of magnitude larger than any term premium that might exist in the instrument. <i>If we try to see whether there is a bias in realised inflation versus the economic breakeven, all we are doing is testing whether these fixed income investors were correct in their oil forecasts.</i></div><div><br /></div><div>You just need to look at an oil price chart from 2007-2008 to see that a lot of people had to be wrong about oil prices in both directions. There is no particular reason to believe that fixed income investors did a better job forecasting oil than investors in other markets did.</div><div><br /></div><div>Once the Financial Crisis hit, pricing in the index-linked market bore no resemblance to serious inflation forecasts. The reasoning was simple: levered fixed income investors had been bullish on oil, and got trapped in long index-linked positions that <i>everyone</i> knew they could not finance. There was a large "squeeze premium" in inflation-linked yields.</div><div><br /></div><div>The standard tactic in academic finance is to label any departure from fair value as a "term premium." In my view, this is misleading, as most people associate a "term premium" with a risk premium for holding long maturity debt. 
The standard definition really should be labelled as "how far off-market my model's fair value estimate is." In other words, the extreme mispricing of index-linked bonds in the aftermath of the Financial Crisis was not the result of some mysterious move in some "term premium" or "inflation risk premium"; it was just the result of a hard-to-model market squeeze. (Perhaps an agent-based model could quantify such squeezes.)</div><h2>Concluding Remarks</h2><div>My analytical bias is to view short-dated economic breakevens for inflation-linked bonds as being very close to the weighted average forecast level of inflation over the bond's lifetime, where the forecast comes from market participants (and not the herding economist consensus). Any bias in market pricing versus the forecast -- a term premium of some sort -- is going to be swamped by the uncertainty in the oil price forecasts that are embedded in the CPI forecast.</div><div><br /></div><div>However, this lack of bias may not transfer to longer-dated breakevens, which I hope to discuss in a followup article. <i>(This article is a very rough draft of material that may go into a report on breakeven inflation. 
This draft seems to be too subjective, so I expect that it will be heavily rewritten.)</i></div><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com0tag:blogger.com,1999:blog-5908830827135060852.post-69047825546835387212017-12-10T09:00:00.000-05:002017-12-11T07:19:03.126-05:00Equations In Stock-Flow Consistent Models<div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-OEPsOZdIerI/WivqMRROgxI/AAAAAAAADGQ/Ni6HCpqVvAsNcwIkEvpjJ0VyKaOwdniBgCKgBGAs/s1600/ereport04_SFC_6_9.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1067" height="200" src="https://3.bp.blogspot.com/-OEPsOZdIerI/WivqMRROgxI/AAAAAAAADGQ/Ni6HCpqVvAsNcwIkEvpjJ0VyKaOwdniBgCKgBGAs/s200/ereport04_SFC_6_9.png" width="132" /></a></div>I had some communications with a reader, Adam K., who is doing some work on stock-flow consistent (SFC) models. He had some questions about the equations and variables in the Python <i>sfc_models</i> framework -- <a href="http://www.bondeconomics.com/2017/11/an-introduction-to-sfc-models-using_22.html" target="_blank">as described in my latest book</a>.<br /><br />One of the things I noticed late in the formatting stage of the book is that I did not give a detailed explanation of the algorithms that generate the equations. This was not entirely an oversight: I wanted the book to survive updates to the code, and the equation generation algorithms are a target for a major refactoring. This article explains the current situation, and how it developed. The need for an easily extensible equation generation algorithm trumped the desire for formality. 
The structure of SFC models makes extremely formal procedures fairly brittle.<br /><br /><a name='more'></a><h2>Refactoring?</h2>The fact that I am not particularly concerned about the potential for a re-write of the equation generation code -- a core part of the library -- reflects the strength of the tools used to write the framework. The entire code base is wrapped in unit tests, which ensure that the code always generates the same target results, regardless of changes elsewhere. (If we want to change the functioning of a particular function, we need to change its associated tests. The idea is that this will not have a ripple effect on other tests.)<br /><br />In particular, there are end-to-end tests that validate that<i> sfc_models </i>generates the correct final output for a number of (simple) benchmark models from Godley and Lavoie's <i>Monetary Economics</i>. No matter how we generate the equations, the key output variables should line up against known results.<br /><br />Therefore, we can rip apart the equation generation code, and know whether the new version is generating the correct output at every step of the way.<br /><br />(More realistically, I would build a new version of the code, and then develop it until it passes all of the existing tests.)<br /><br />In the absence of these end-to-end tests, any changes to the equation generation code would effectively create an entirely new software package, and then it would take a lot of<i> ad hoc</i> tests to see whether it remains backwards compatible. The development time could be almost as long as the time it took to build the existing package in the first place. 
(A lot of old school software projects were built in this fashion, and this explains the notorious inability of software to be delivered on time.)<br /><h2>Equations/Variables in sfc_models</h2>The problem with SFC models is that they have way too many variables (and hence, equations defining them).<br /><br />Take the classic model SIM, which I have written about extensively; it is the simplest recognisable economic model. The usual <i>sfc_models</i> implementation currently has 32 variables, although that includes some purely decorative variables (two time axis variables, fiscal balance, etc.).<br /><br />However, if we treat the system using standard systems theory, with the input variable being the exogenous government consumption variable, the system collapses to a linear state-space system with a single state variable (the previous period money balance held by the household sector). That is, given the inherited money balance, and the current period level of government consumption, we can calculate every other variable (including the end-of-period money balance, which goes into the next period's calculation).<br /><br />Based on some analysis I have seen, this leads some people with physical sciences or engineering backgrounds into a trap. They do what they always do: strip down the mathematical system to its simplest state-space form, and view that as "the model." This ignores a critical problem with state space models: they are a completely non-robust way of looking at system dynamics.<br /><br />Any deviation of any defining equation will inject new dynamics into the system, presumably adding new state variables. The dimensionality of the state space changes, and there is no way to compare the original dynamics to the new one.<br /><br />Defining the system as circuit elements and plopping the system into a circuit CAD package is not much of a help, even if the CAD package can do frequency domain analysis. 
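To make the single-state-variable claim concrete, here is a minimal sketch of SIM collapsed to its state-space form, using Godley and Lavoie's standard parameter choices (alpha1 = 0.6, alpha2 = 0.4, theta = 0.2). This is an illustration of the collapsed model only, not the <i>sfc_models</i> implementation (which carries all of the variables):

```python
# Model SIM collapsed to a single state variable: the household money
# stock H. Everything else follows from H(-1) and government spending G.

def simulate_sim(g, alpha1=0.6, alpha2=0.4, theta=0.2, periods=60):
    h = 0.0        # the single state variable: money held by households
    path = []
    for _ in range(periods):
        # Collapse C = a1*YD + a2*H(-1), YD = (1 - theta)*Y, Y = C + G
        # into a single expression for national income Y:
        y = (g + alpha2 * h) / (1.0 - alpha1 * (1.0 - theta))
        tax = theta * y
        yd = y - tax
        c = alpha1 * yd + alpha2 * h
        h = h + yd - c           # household saving adds to the money stock
        path.append((y, c, h))
    return path

path = simulate_sim(20.0)
# Income converges toward the known steady state Y* = G / theta.
print(path[0][0], path[-1][0])
```

Note how brittle the collapsed form is: the closed-form expression for Y had to be re-derived by hand from the behavioural equations, and any tweak to those equations forces a new derivation.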
The system will still collapse to the same state-space model, and almost all of the dynamics implied by the equations are thrown out.<br /><br />In summary, we are stuck with having to keep all of the equations if we want to do the analysis properly. The fact that the equations collapse to a state space representation with a single state variable in the baseline model is very useful if you insist on doing equations by hand, as Godley and Lavoie did in the textbook.<i> </i>However, this hand-derived collapsed model may bear no resemblance to the model which results from the slightest tweak. <i>(Doing equations by hand is an old school academic thing across disciplines. The electrical engineering curriculum caught up to the existence of digital computers a couple of decades ago, and they now tend to only inflict that on undergraduates in their first couple of years. If all you have are analog computers and slide rules -- a form of analog computer -- you have to do equations by hand.)</i><br /><h2>How Does <i>sfc_models</i> Build Equations?</h2>Once you realise that you have to set up all the equations, SFC models start to look ugly. The number of equations ends up being very large. Given my aversion to hand calculations, I needed a system to generate the equations. Hence, <i>sfc_models</i> was born.<br /><br />For my purposes, I needed a framework where I could easily change the model structure. One strategy would be to write down an insanely complex model, and then look at what happens when we make simplifying assumptions. Since that is an extremely brittle strategy, I opted for a framework where we can switch around the sectors within a country, and where we can drop in other countries if desired. 
Such a framework could easily be extended towards a "build a model" graphical user interface.<br /><br />If I were stuck with the old school computer languages I learned when I was younger, such a framework would have been implemented with a giant heap of spaghetti code -- <i>"if this sector is in the model, do this..., else ...". </i>Such an implementation would work for the simplest models, but the code complexity would explode as new types of sector behaviour were added. The system would inexorably march towards uselessness.<br /><br />Using object oriented techniques, we can isolate the implementation of each sector, greatly simplifying code structure. New sectors can use existing sectors as initial templates, and changes to one are naturally isolated from others. This makes the project feasible.<br /><br />The next issue is: where does the equation generation logic reside?<br /><ol><li>One strategy is to centralise the equation generation structure. Each sector makes its contribution to the structure, but there is one block of code setting the rules.</li><li>The next is to decentralise the equation generation. There is a central container for all equations, but each sector makes its contribution to the list of equations without worrying about rules.</li></ol><div>Since I had a hard time discerning the formal principles for equation generation, I opted for the decentralised approach. This is why there is no formal structure for equation generation; from a casual glance at the <i>Model</i> object code, the equation generation algorithm looks like a free-for-all.</div><div><br /></div><div>However, such a description is slightly misleading: there are some basic principles in play. Most recognisable economic sectors -- households, businesses, governmental entities -- define their own behaviour using variables that are internal to that sector. 
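The decentralised approach can be sketched in a few lines. The class and method names below are hypothetical stand-ins -- the real <i>sfc_models</i> API differs -- but the principle is the same: each sector appends its own equations to a shared container, with no central rulebook.

```python
# A minimal sketch of decentralised equation generation. Hypothetical
# class/method names; illustrates the principle, not the actual API.

class Sector:
    def __init__(self, name):
        self.name = name

    def generate_equations(self, equations):
        """Each subclass appends its own behaviour; there are no central rules."""
        raise NotImplementedError

class Household(Sector):
    def __init__(self, name, alpha_income=0.6, alpha_wealth=0.4):
        super().__init__(name)
        self.alpha_income = alpha_income
        self.alpha_wealth = alpha_wealth

    def generate_equations(self, equations):
        # Behaviour is defined in terms of the sector's own variables.
        equations.append(
            f"{self.name}__C = {self.alpha_income} * {self.name}__YD"
            f" + {self.alpha_wealth} * {self.name}__F(-1)")

class Model:
    def __init__(self, sectors):
        self.sectors = sectors

    def build(self):
        equations = []
        for sector in self.sectors:
            sector.generate_equations(equations)  # a free-for-all, by design
        return equations

print(Model([Household("HH")]).build())
```

Swapping in a different consumption function is then just a matter of subclassing <code>Household</code> and overriding <code>generate_equations</code>; no central code needs to change.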
For example, a household sector typically has a consumption function that depends upon its own income, and its own stock of financial assets. Occasionally, the sector has to look up variables from elsewhere: perhaps it needs the rate of interest to determine its portfolio allocation decision.</div><div><br /></div><div>Other "sectors" handle the interactions between the economic sectors. (They are defined as "Sector" objects in the code, and at a high level are indistinguishable from economic sectors, but they are not sectors of the economy as recognised by economists.) The most important of these are Market objects: they align supply and demand in each market, and stitch together the cash and commodity flows between sectors. This includes the labour market; it matches up the supply and demand for labour (of a given type; a multi-sector labour market can be implemented), and it ensures that businesses pay the wages, and the household sector gets the wages added to their pre-tax income. The other major object to handle interactions is the TaxFlow object, which manages the taxation rules set by the government.</div><div><br /></div><div>This creates general principles of how sectors are supposed to generate equations. The sectors packaged with <i>sfc_models </i>obey these principles, and act as a template for user extensions. However, users are free to plop in equations as they wish in order to make extensions; they obviously do so at their own risk. This flexibility is useful: it is a lot easier to make a quick test of something by sneaking in a few arbitrary equations than it is to build new sectors. There is no point spending a lot of time building a new sector model if the equations will not work, so doing feasibility analysis is a useful first step.</div><div><br /></div><div>However, these are only software engineering principles; there is no formal structure to the equations. 
Since I hold myself out as a very serious formal mathematician, what's up with that?</div><h2>Classification of Variables</h2><div>One obvious classification is that we should divide variables into state variables versus other variables that can be inferred from the state variables and inputs. In my discussion with Adam K., he argued that if we are looking at flows and stocks, only stock variables should be state variables; we can infer flows from stocks. (Other variables, such as expectational variables and prices, may become state variables.) Unfortunately, this might be true for some models, but not all.</div><br />For example, the amount of financial savings in the period -- a flow -- is equal to the change of the stock of financial assets. We could infer the savings in each period by just looking at the change in the stock, and drop the savings from our state space representation. However, this would not work if for some reason we wanted to work with the previous period's saving; we would need to create a new variable that stores that value for the current period's calculation. <i>(I cannot think of an obvious use for that information, but it is more obvious for some flow variables, such as income.)</i><br /><br />Since we do not know in advance which flow variables we need to reclassify as state variables, we do not gain a lot by treating them differently. In fact, the framework makes no formal distinction between stock and flow variables, since that distinction does not have any operative effect on the solution to the model. (Users could add a decoration to distinguish the variables, but that would only be useful for making a prettier output.) From a systems engineering perspective, my initial instinct would be that we have to strictly divide state variables from other variables. However, as I worked with these models, I came to the view that we need to be flexible when working with the model. 
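The flows-from-stocks point discussed above can be sketched in a couple of lines (the numbers are hypothetical, and this is not framework code):

```python
# A flow (per-period saving) can be recovered as the first difference
# of a stock series. But if behaviour depends on *last* period's
# saving, the model needs to carry it as an extra state variable; it
# cannot be read off the current stock alone.

def flows_from_stock(stock):
    """Per-period saving implied by a financial-asset stock series."""
    return [s1 - s0 for s0, s1 in zip(stock, stock[1:])]

stock = [0.0, 12.0, 22.0, 31.0]   # hypothetical end-of-period asset stocks
saving = flows_from_stock(stock)

# The lagged flow is a distinct series, stored one period later.
lagged_saving = [None] + saving[:-1]
print(saving, lagged_saving)
```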
Once a particular model is frozen mathematically, we can be more formal, but most of the code has to work with the model when the equation structure is still largely arbitrary.<br /><br />For example, a sector may need to use its after-tax income as a variable in various equations (e.g., consumption function). However, it does not know the exact form of the equation that determines its after-tax income until the model is finalised. For instance, it could be in a model without income taxes, and so there is no tax term within the after-tax income equation. This uncertainty is handled in the present framework by the sector just declaring the existence of an after-tax income variable and then using that variable within equations; it lets the framework fill in the missing equation.<br /><br />In fact, we cannot normally distinguish between "inputs" and "state variables" in these models. We can change a parameter value from being fixed for all time to having a shock applied at some time point: we have promoted the variable from being a (constant) endogenous variable to being an exogenous input! Such a change completely alters the state space representation of the model.<br /><br />The only formal distinction between variables (other than exogenous/endogenous) within the framework is that a heuristic is used to prune "decorative" variables at each time step before solving the system equation (but after the model structure is frozen). For example, there are a lot of variables that are of accounting interest -- such as the fiscal deficit -- that may have no effect on behaviour in a particular model implementation. Such variables are pruned. 
However, if we want to model the effect of dimwitted fiscal hawks, we would have behaviour affected by the level of the fiscal deficit, and so we cannot drop the variable from the system before the solution step.<br /><br />The financial interactions between sectors -- mediated by the net holdings of financial assets (F) -- are handled almost entirely by the <i>Market</i> and<i> TaxFlow </i>sectors. (Interactions between sectors are what generate monetary inflows and outflows.) These equations largely cover most of the discussion of stock and flow variables. We could break out these variables as being special, but that would be largely decorative, and not help the model solution.<br /><h2>Behavioural versus Non-Behavioural</h2>If I were to attack the formalisation of the structure, I would classify variables as either behavioural or non-behavioural. Non-behavioural equations would include the accounting identities and stock-flow relationships that SFC modellers love writing about, but also things like production functions and even the definition of the time axis.<br /><br />The behavioural variables are the more interesting. In most cases, we would create new sectors by adding new behavioural rules; the non-behavioural relationships remain the same. For example, we plop a new consumption function into the household sector, and see what happens.<br /><br />To be fair, this is pretty close to the existing behaviour. All you need to do to build a new Sector is to sub-class it, and then change out the behavioural equations you wish. The new sector will inherit all of the non-behavioural relationships from its parent implementation. As such, a formal re-definition might be of mathematical interest, but would have almost no effect on workflow.<br /><h2>A Note on the Solver</h2>Following the suggestion of Adam K., I may look into adding the MINPACK solver from the scipy Python package. 
The user would have the option of switching over to its use, assuming that scipy is installed. I will probably leave my clunky iterative solver in the package so that the user can run basic examples without any external dependencies. (The advantage of the current setup for me is that I know how to use it to detect badly-posed equations; I would need to add debugging hooks for the new solver.)<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com2tag:blogger.com,1999:blog-5908830827135060852.post-36197356923184562152017-12-09T12:25:00.000-05:002017-12-09T18:28:59.792-05:00Bitcoin As A Vindication Of Mainstream Monetary Economics<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-a4AW3CZ7aGk/WiwRae6OsKI/AAAAAAAADGg/VxRf-jfh78URb84GFdkHNftmC8meInu9ACKgBGAs/s1600/logo_money.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://4.bp.blogspot.com/-a4AW3CZ7aGk/WiwRae6OsKI/AAAAAAAADGg/VxRf-jfh78URb84GFdkHNftmC8meInu9ACKgBGAs/s1600/logo_money.png" /></a></div>Say what you want about the backers of Bitcoin, they managed to create a currency unit that acts in the insane fashion that matches the way money works in mainstream economic models. For anyone with any attachment to reality, it offers a very good real-world explanation of why the DSGE approach to economics is inherently useless.<br /><br /><a name='more'></a><h2>Is Bitcoin a Bubble?</h2><div>The major question my readers probably have is: is Bitcoin a bubble? My response is straightforward: what do you think?</div><div><br /></div><div>That said, anyone who wants to short the various crypto-currencies should probably pour themselves a stiff drink, and read between the lines of the following theoretical rant. 
You need to ask yourself: how prudent is it to short something that has a "fair value" that is arbitrarily large?<br /><br />I was a junior analyst during the dot-com era. The most reliable road to ruin was to pay attention to what internet analysts had to say. It's a speculative vehicle. Ignore stories; read charts.</div><h2>What Pins Down The Initial Price Level?</h2>If you are even slightly numerate and look at standard DSGE models <i>(that do not include money in the utility function, see below)</i>, you see that they offer a wonderfully complex characterisation of the relationship between the price level in the initial time period and later time periods. That said, there is absolutely nothing that pins down the initial price level. <i>(The Calvo fairy says that some prices will be set from the previous period, which raises various chicken-and-egg questions.)</i><br /><br />(Relative prices, like wages/goods prices, are pinned down, even in the first period.)<br /><br />If we are in a standard framework, with one (composite) good and money, there is nothing to pin down the exchange rate between money and that good. We know what the rate of change of the good's price will be (the ratio of its price between this period and the next), but that's it.<br /><br />There are a number of ways to deal with this indeterminacy.<br /><ul><li>The central bank magically sets the price level. In other words, every Canadian wakes up every morning, waits for the price level announcement from the Bank of Canada, and then goes off to work. This apparently is the consensus opinion.</li><li>The "money in the utility function" kludge. You stick the household's money balance into the utility function. This pins down the relative value between money and the composite good. That said, there is absolutely no microfoundational story to explain this kludge; it's just an arbitrary term introduced to make it possible to fit the model to data. 
We have no idea where this factor comes from, or how it could change.</li><li>A cash-in-advance constraint. You put an arbitrary limit on how many transactions can be supported by an amount of money. This is pretty much the same thing as assuming that velocity is fixed (or a stable function of something or another). The whole Monetarist nonsense peddled by various hucksters relied upon the apparent stability of velocity in the 1950s.</li><li>The fiscal theory of the price level. You impose a relatively arbitrary governmental budget constraint on your model, you do the mathematics, and you back out that the price level is the level that equates the real value of government debt to the discounted value of future primary surpluses. Although this is the correct mathematical answer, the profession slaps its hands over its ears and goes "la-la-la" when people bring it up. The reason is that the Fiscal Theory of the Price Level makes even stupider predictions than the belief that the central bank arbitrarily sets the price level. For example, the price level was supposed to jump <i>a lot</i> when the Republicans made it clear that they wanted to ram a tax cut through.</li></ul><div>The reader will notice that these explanations are either obviously wrong or trivial (the price level is what it is).</div><h2>Inching Towards the Real World</h2><div>If we start to take into account the various things that are stripped out of DSGE models, we can start to get a better idea of what pins down the price level.</div><div><br /></div><div>The first thing to notice is that most people have debts denominated in the local currency. Let's say you woke up one morning with a house and a car, and a $100,000 mortgage on your house. You have no other fixed ideas about prices. Then imagine that a free market fundamentalist tells you that fiat currency is inherently worthless, and offers you $100,000 for your car. You can then think about: do I value the mortgage or the car more? 
This creates a way of determining the relative value.</div><div><br /></div><div>However, the fact that we live in a <i>capitalist </i>society offers a better idea of how to pin down the value of a currency. The system is dominated by <i>capitalists</i>. What do capitalists do? For various reasons, people want to think that capitalists shuffle around paper claims on equity. That is the wrong answer. The correct answer is the scarily Marxist one: capitalists <strike>exploit</strike> employ workers. </div><div><br /></div><div>In other words, the value of currency is determined by how much labour a unit of currency commands.</div><div><br /></div><div>(It should be noted that this leads to the Modern Monetary Theory arguments about price level determination: the Job Guarantee wage creates a nominal anchor for the currency.)</div><div><br /></div><div>And as I repeatedly note, this breaks down if we start indexing wages to an external unit. A hyperinflation results when the value of the local currency versus the index unit goes to zero.</div><h2>Back to Bitcoin</h2><div>The techno-anarcho-crypto-Libertarians who brought us Bitcoin (and whatever other crypto-currencies you want to think of) have taken free market economics to its logical conclusion. They have created a currency unit that has no nominal anchors for value whatsoever.</div><div><br /></div><div>At the time of writing, almost nobody has debts denominated in Bitcoin. A wage denominated in Bitcoin would probably run afoul of various labour regulations, and I doubt that most <strike>wage slaves</strike> workers would be willing to work for a fixed Bitcoin wage. (Since most people have fixed expenditures that are north of 90% of their after-tax wage, there is not a lot of room for currency volatility.)</div><div><br /></div><div>Therefore, the price of Bitcoin is exactly as free as would be predicted by DSGE models. There is no central bank to magically set the price level. 
There is no government to do the Fiscal Theory of the Price Level thing. Since most non-speculative transactions are just a way to avoid bank intermediation, there is no "Bitcoin-in-advance" constraint. Even if 99.99% of Bitcoin are hoarded, the remaining 0.01% float would be enough to support transactions.</div><div><br /></div><div>The only thing that will bring this wonderful natural experiment to an end is the inevitable march of energy constraints, or an outraged populace with pitchforks and torches forcing the regulators to shut this puppy down.</div><br />But in the meantime, mainstream economists should pat themselves on the back for coming up with models that perfectly explain Bitcoin dynamics. Well done!<br /><h2>Addendum: What if We Treat Bitcoin Like a Commodity?</h2>We could come up with a fair value estimate for Bitcoin if we treat it like a commodity (a parallel that the crypto-gold bugs played up in the marketing of this scheme). The fair value is related to the energy cost of mining Bitcoin -- just like mining gold!<br /><br />If Bitcoin miners' hedging programmes were large enough, they might be able to cap the dollar Bitcoin price at the cost of production. This could be aided by large holders who view that as the fair value. However, this will only work if they are willing to sell, and do not believe their own press releases. (Think about how gold bugs hate the hedging programmes of gold miners.) Even so, the ability of these hedging programmes to hold a lid on prices when faced with a speculative wave is limited by the size of the programmes, which is probably much smaller than the supply of greater fools.<br /><br />This cost of production is less useful on the way down. Unless miners want to double down on losing Bitcoin positions, all they can do is slow down mining. (Stopping mining would be putting a fork into this sucker.)<br /><br />Another possible alternative is that some large actor steps in to create a nominal anchor. 
A real world commodity producer could offer commodities at a fixed Bitcoin price. Alternatively, a cartel of large holders could decide to fix a price, and then use their market power to keep trading near the target. However, such price fixing attempts would be susceptible to runs (and/or various anti-cartel regulations).<br /><br />The problem with offering goods for sale at a fixed BTC price is that your expenses are probably not fixed in BTC terms. It might be possible in some cases, but it would be relatively rare.<br /><br />I will offer an analysis from my own perspective. For example, if I sold my ebooks on my own sales platform, I could offer them at a fixed BTC price. I could do that relatively safely, since my marginal cost of production is near zero. However, I could not do that for paperback editions, since I do have a relatively large dollar (or GBP/EUR) fixed cost of production. Furthermore, putting the book up for a fixed BTC price would mean that I could not post a USD price, as otherwise I would just end up arbing myself. Meanwhile, I would be a sitting duck for Revenue Canada. They could use whatever random BTC quote they can get to make it look like I made a fortune selling the books, and I would end up owing a large CAD-denominated tax bill. (The fact that I probably would have translated the profit at different prices could be chalked up as capital losses -- and I would have no corresponding capital gains to offset them with!) Since I am not a complete idiot, I leave the sales to the online retailers. They can sell the books to customers using crypto-kitties if they wish; they just need to send me my fiat currency-denominated royalties.<br /><br />In any event, if the price stabilises as a result of such forces, it can no longer be treated as a currency using mainstream analysis. The value of Bitcoin would be largely determined by a fairly arbitrary price relationship to an existing fiat currency. 
Unless energy production costs are expressed in Bitcoin, a linkage to energy costs is actually a linkage to the dollar cost of production, and so is not derived from the Bitcoin economy itself.<i> (Although not all energy is produced in the United States, most energy producers have a semi-dollarised economy, or are developed countries whose currencies are relatively stable versus the USD. Compared to the volatility of BTC, energy prices look pretty close to fixed in USD terms.) </i>This is unlike the situation during the Gold Standard, where the cost of mining gold was set in currencies themselves linked to gold.<br /><br />In any event, mainstream macro theory would revert to having no useful predictive powers.<br /><br />(c) Brian Romanchuk 2017<br /><br /><b>Is Breakeven Inflation The Same Thing As A Forecast?</b> (2017-12-06)<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-GksZnK9op3M/WicjcDBKWqI/AAAAAAAADGA/3rrp7hWlZO0qfVhd8hMIS03L03uVWLhqQCKgBGAs/s1600/logo_linkers.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://1.bp.blogspot.com/-GksZnK9op3M/WicjcDBKWqI/AAAAAAAADGA/3rrp7hWlZO0qfVhd8hMIS03L03uVWLhqQCKgBGAs/s1600/logo_linkers.png" /></a></div>One of the difficulties in discussing the index-linked market is pinning down what we mean when we say a certain rate of inflation is priced into the market. The breakeven inflation rate is (roughly) the rate of future inflation for which the nominal bond and the inflation-linked bond have the same total return. Alternatively, we might refer to the breakeven inflation rate as the market expectations for inflation. 
Unfortunately, we could have a situation in which the breakeven rate does not match what market participants forecast for inflation.<br /><br /><a name='more'></a>This article is somewhat loose; I am collecting my thoughts as I start delving into my report on inflation-linked bonds. <a href="http://www.bondeconomics.com/2017/11/defining-market-efficiency-properly.html#more" target="_blank">I am picking up a train of thought that started in this earlier article on market efficiency.</a> As can be seen, there is a lot of overlap with the discussion of the term premium. Unfortunately, this article only introduces the problem, and later article(s) will fit in other pieces of the puzzle.<br /><h2>Breakeven Inflation Definition</h2><a href="http://www.bondeconomics.com/2014/05/primer-what-is-breakeven-inflation.html" target="_blank">I have an earlier primer on breakeven inflation, which gives a longer definition.</a> The usual definition of breakeven inflation is that it is the spread between a conventional (nominal) bond and an index-linked bond of the same maturity. For example, if a 10-year conventional bond yields 3%, and the quoted yield on a 10-year index-linked bond is 1%, the breakeven inflation rate is 2%. (The breakeven inflation rate is quoted in percent, not in basis points, unlike other spreads.) Unless circumstances are very unusual, this spread is extremely close to the rate of future inflation that results in the conventional bond and the inflation-linked bond having the same total return, for bonds with a maturity of five years or beyond.<br /><br />Shorter maturity bonds -- particularly under one year -- can have an economic breakeven that markedly differs from the spread. For short-dated bonds, the seasonality of CPI can cause impressive movements in the annualised breakeven rate. 
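To put rough numbers on both points -- how close the quoted spread is to the true breakeven, and how large the seasonality effect is at short horizons -- here is a quick sketch. It uses a zero-coupon approximation of my own, not the full bond mathematics that my report will use:

```python
# 10-year example from above: nominal yield 3%, index-linked (real) yield 1%.
y_nom, y_real = 0.03, 0.01

# The quoted breakeven is the simple spread; the economic breakeven is the
# inflation rate that equates compounded total returns:
#   (1 + y_real) * (1 + pi) = (1 + y_nom)
spread = y_nom - y_real
pi_econ = (1.0 + y_nom) / (1.0 + y_real) - 1.0
print(f"quoted spread:      {spread:.4%}")   # 2.0000%
print(f"economic breakeven: {pi_econ:.4%}")  # ~1.9802% -- extremely close

# At short maturities, CPI seasonality dominates: a price-level effect
# realised over m months annualises as (1 + factor)**(12/m) - 1.
factor, m = 0.005, 2
seasonal_ann = (1 + factor) ** (12 / m) - 1
print(f"annualised seasonal distortion: {seasonal_ann:.2%}")  # roughly 3%
```

The gap between the spread and the exact breakeven is a couple of basis points at the 10-year point, which is why the spread is a perfectly good approximation at longer maturities; the seasonal distortion at a 2-month horizon swamps it.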
A 0.5% seasonal adjustment factor over 2 months turns into roughly 3% annualised.<br /><br />(My report will have a long, drawn-out analysis of this aspect of breakeven inflation. Economists and market strategists use this spread a lot, and they should have an idea of how good -- or bad -- an approximation of the economic breakeven the spread is.)<br /><br />For the purposes of this article, I am referring to the true economic breakeven rate.<br /><h2>Hating On Expectations</h2>Anyone who has to write about markets loves synonyms. Since you are typically writing variations of the same thing every working week for a good portion of your brief life, you want to at least be able to change up the wording.<br /><br />In many cases, people use "expectations" and "forecasts" interchangeably (I certainly do so myself sometimes). However, the connotations are very different in finance. In financial mathematics, expectations refers to the definition in probabilistic mathematics, which is a probability-weighted average of a random variable.<br /><br />For example, if we are betting $1 on a fair coin toss, our expected return is $0, since we have a 50% chance of winning $1, and a 50% chance of losing $1. <i>(I'd like to thank my reader Jerry Brown for pointing out that my initial draft of this example was wrong; my numbers were for a lottery ticket, not a bet.)</i><br /><br />If we are indifferent to taking risks, the fair value of an instrument is equal to the expected value of its cash flows. (This is referred to as being risk neutral.) In that case, an instrument is trading at fair value if its expected value matches our forecast. And if we are looking at forward interest rates, we see that the forward rate has to match the expected rate if we want to price options without arbitrage possibilities.<br /><br />Many heterodox economists have issues with the notion of market efficiency and rational expectations. One can easily debate the wisdom of market pricing. 
That said, trying to operate as a market maker in fixed income options is going to be a debacle without some version of market efficiency embedded in your pricing algorithm.<br /><br />This carries through to (economic) inflation breakevens. Although inflation option trading was not particularly active when I was involved in the markets, the breakeven rate is going to match the (risk neutral) expectations for future inflation. Using the mathematical definition, the breakeven rate has to match expected inflation.<br /><br />However, this does not have to equal what anyone is forecasting, for reasons I discuss next. This is why I try to avoid the use of "expectations" in this context.<br /><h2>Forecasts Not Matching Expectations?</h2>The economic breakeven does not have to match any market participant's forecast for inflation. All that we need is for some factor to drive observed prices away from the <i>risk neutral</i> expectation.<br /><ul><li>There could be something like a term premium (an inflation risk premium).</li><li>There could be a liquidity premium for one bond.</li><li>(Related to the previous.) It may be possible to finance one bond more cheaply in the repo market. (Since this effect is quantifiable, we could adjust the economic breakeven to take it into account.)</li><li>The tax treatment of index-linked bonds is at a disadvantage versus conventional bonds in most markets. This was a concern in the early days of the index-linked market in the United States. However, the trading of these bonds is dominated by tax-exempt investors, and so the observed tax effects seem to be small in practice. (A taxable investor would need to take them into account when calculating the economic breakeven.)</li><li>There can be institutional factors that create an imbalanced demand in the inflation-linked bond market. (There is a wider participation by borrowers in the conventional bond market, and so it is generally possible to sidestep demand imbalances. 
However, it is still possible, as seen in the U.K. gilt curve after the pension reforms of the 1990s.) Market participants will push their estimate of fair value away from the economic breakeven rate based on their estimate of this supply effect. In practice, this ends up looking like a term premium.</li><li>Market pricing can be gosh darn dysfunctional, as was seen in the Financial Crisis. During the crisis, it was clear that there were many forced sellers of index-linked bonds, and no buyer was willing to take them out of their positions at a "fair" price. Once again, this might be thought of as something like a term premium. </li></ul><br />As can be seen, many of these factors might be thought of as a "term premium" (as that phrase is used within finance, which may not match economic intuition). We have two alternative formulations:<br /><ol><li>there is an inflation premium (which might be either positive or negative); or </li><li>there are term premia embedded in both the conventional bond and the index-linked bond, and the breakeven inflation rate is biased by the difference in term premia.</li></ol><div>In my view, it is a lot easier to think of a differential in term premia rather than an inflation premium.</div><h2>Why Are We Asking This?</h2><div>The topic of term premia (and inflation risk premia) has been the recipient of a large amount of academic research. I am not a particular fan of the affine term structure model estimates of the term premium; they are prey to the garbage-in, garbage-out syndrome.</div><div><br /></div><div>In my view, the complexity of the analysis takes us away from a more fundamental question: why do we care? If we return to first principles, our attitude towards the term premium depends upon who is asking the question.</div><div><ul><li>If you are a long-term investor, you should look at the raw economic breakeven. 
You want to decide which type of bond offers a better return, and you should have your own assessment of risk premia. Subtracting a random term premium from the true economic breakeven biases your subsequent analysis.</li><li>If you are an active fixed income investor, you may need a good estimate of where the market should be trading (conditional on an inflation forecast). You can make an estimate of what the term premium should be based on the current trading environment. Your methodology may change over time.</li><li>If you are an observer of the bond market, you may just want to see what the market is pricing in for inflation over various horizons. Usually, this is being done by economists who want to create a time series estimate of inflation expectations. An algorithm that is consistently applied over time is necessary.</li></ul><div>As can be seen, analysis of the term premium is trickiest for observers of the market. The problem with many estimation techniques is that the term premium estimate is unstable, and it ends up absorbing most of the volatility of market prices. Instead of movements in the raw breakeven being interpreted as changes in inflation expectations, all that is happening is that the term premium is bouncing around for completely unknowable reasons.</div></div><div><br /></div><div>Returning to the original subject of this article, it is clear that breakeven inflation may have a bias relative to the inflation forecasts of market participants. I will have to return to the discussion of the size of this bias in a followup article. My view is that the bias is relatively small out to the 5-year point of the curve, probably not worth worrying about at the 10-year point, and is wildly uncertain at longer maturities.</div><h2>Concluding Remarks</h2><div>We need to be very careful in distinguishing forecasts from raw economic breakevens (forward rates, inflation breakevens). 
I will return to the discussion of the magnitude of the bias in a later article.</div><div><br /></div><br />(c) Brian Romanchuk 2017<br /><br /><b>Robust Control Theory And Model Uncertainty</b> (2017-12-03)<a href="https://www.amazon.com/gp/product/0486469336/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0486469336&linkCode=as2&tag=bondecon09-20&linkId=e3a4c818fc01c807f87b05f11d8d2b2d" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&MarketPlace=US&ASIN=0486469336&ServiceVersion=20070822&ID=AsinImage&WS=1&Format=_SL160_&tag=bondecon09-20" /></a><img alt="" border="0" height="1" src="//ir-na.amazon-adsystem.com/e/ir?t=bondecon09-20&l=am2&o=1&a=0486469336" style="border: none !important; margin: 0px !important;" width="1" />Mainstream mathematical economics has a very strong influence from optimal control theory. <a href="http://www.bondeconomics.com/2017/11/why-parameter-uncertainty-is-inadequate.html" target="_blank">As I discussed previously</a>, optimal control was abandoned as a modelling strategy decades ago by controls engineers; it only survives for path-planning problems, where you are relatively assured that you have an accurate model of the overall dynamics of the system. In my articles, I have referred to robust control theory as an alternative approach. In robust control theory, we are no longer fixated on a baseline model of the system; we incorporate model uncertainty. These concepts are not just the standard hand-waving that accompanies a lot of pop mathematics; robust control theory is a practical and rigorous modelling strategy. 
This article is a semi-popularisation of some of the concepts; there is some mathematics, but readers may be able to skip over it.<br /><br /><a name='more'></a><script type="text/x-mathjax-config"> MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}}); </script> <script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"> </script>It should be noted that I have serious doubts about the direct application of robust control theory to economics. In fact, others have discussed robust control in economics (sometimes under the cooler-sounding name $H_\infty$ control). As such, the examples I give probably have little direct application to economics. Instead, the objective is to explain how we can work with model uncertainty in a rigorous fashion.<br /><br />This article reflects the thinking in robust control theory up until the point I left academia (in 1998). I have not paid attention to subsequent developments, but based on the rather glacial pace of advance in control theory at the time, I doubt that I missed much. It should be noted that there were disagreements about the approach; I was part of the dominant clique, and this article reflects that "mainstream" approach. (I briefly discuss one alternative approach at the end, only because it was actually adopted by economists as a modelling strategy.)<br /><br />For readers with a desire to delve further into robust control theory, the text <i>Feedback Control Theory</i>, by John C. Doyle, Bruce A. Francis, and Allen R. Tannenbaum, was the standard text when I was a doctoral student. There are more recent texts, but the ones that I saw were only available in hardcover.<br /><h2>The Canonical Control Problem</h2>The standard control problem runs as follows.<br /><ol><li>We have a system that we wish to control. By tradition, it is referred to as the <i>plant, </i>and is denoted <i>P</i>. 
We have a baseline model $P_0$, which comes from somewhere -- physics, empirical tests, whatever. (This model is provided by the non-controls engineers.)</li><li>We design a <i>controller </i>-- denoted $K$ -- that is to stabilise the system. It provides a feedback control input $u$ that is used to guide the plant's operation.</li><li>We typically assume that we are analysing the system around some operating point that we can treat as a linear system. (My research was in nonlinear control, and the amount of theory available is much smaller.)</li><li>We often assume that the observed output ($y$) is corrupted by noise ($n$), and there may be disturbances ($d$) with finite energy that also act as an input to the plant.</li></ol><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-T8wwFiEGw-g/WiLpdY9vwwI/AAAAAAAADFY/zW13AsBkS2ECUjL3O3FgPvnvRSc97D_xQCLcBGAs/s1600/robust_control_1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="270" data-original-width="360" src="https://3.bp.blogspot.com/-T8wwFiEGw-g/WiLpdY9vwwI/AAAAAAAADFY/zW13AsBkS2ECUjL3O3FgPvnvRSc97D_xQCLcBGAs/s1600/robust_control_1.png" /></a></div>The diagram above shows the layout of the system, with variables indicated. We assume that each variable is a scalar (not a vector).<br /><br />If the system is linear, time invariant, and discrete time, we can use the <i>z</i>-transform to analyse the system. (In continuous time, we use the Laplace transform.) The z-transform uses the frequency domain to analyse systems; we are not tied to any state-space model.<br /><br />The reason why we use the z-transform is that it turns the operation of a system into a multiplication. 
If $Y(z)$ is the z-transform of $y(k)$, then:<br />$$<br />Y(z) = P_0(z) (D(z) - U(z)).<br />$$<br />(By convention, we use a negative feedback for the control output <i>u</i>.)<br /><br />We can now calculate the transfer function from the disturbance <i>d </i>to the plant output <i>y</i> (ignoring the noise <i>n</i>). The controller sets:<br />$$<br />U(z) = K(z) Y(z).<br />$$<br />Then,<br />$$<br />Y(z) = P_0(z) D(z) - P_0(z) K(z) Y(z).<br />$$<br />We arrive at:<br />$$<br />Y(z) = P_0(z) (1 + P_0(z) K(z))^{-1} D(z).<br />$$<br /><br />The term $P_0(z) (1 + P_0(z) K(z))^{-1}$ is the closed-loop model of the system (the area in dotted lines in the above diagram). If the closed-loop model of the system is stable, standard linear system theory tells us that the closed loop will reject noise and disturbances. The zero point is a stable equilibrium, using the rigorous dynamical system definition of equilibrium (and not the hand-waving metaphysical definition used in economics).<br /><br />The above was standard system theory; optimal control worked within this framework. Any notion of uncertainty was assumed to be handled by either the disturbance or noise.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-WYL7I-95Szw/WiL7aO2wiOI/AAAAAAAADFs/HqLeByxbYKkVwqw0uhhARlz5KtglwsSEACLcBGAs/s1600/robust_control_2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="270" data-original-width="360" src="https://2.bp.blogspot.com/-WYL7I-95Szw/WiL7aO2wiOI/AAAAAAAADFs/HqLeByxbYKkVwqw0uhhARlz5KtglwsSEACLcBGAs/s1600/robust_control_2.png" /></a></div><div class="separator" style="clear: both; text-align: left;">In robust control, we assume that the "true" plant model lies close to our baseline model, but it is not exactly the baseline model. We can express this uncertainty in a number of ways. 
The diagram above shows one standard possibility: the actual plant is equal to the baseline plant, which is locked in a feedback loop configuration with an unknown system $\Delta$.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">We obviously cannot do much analysis if there are no constraints on $\Delta$; the true system could be literally anything. We constrain $\Delta$ so that its gain in the frequency domain is less than or equal to 1. (This is denoted $\| \Delta \|_\infty \leq 1$; that is, the infinity norm is at most one. This is where $H_\infty$ control gets its name.) This characterisation was developed by the late George Zames, a Professor at McGill University.</div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-FLx-ihcKyms/WiL7aHHIUoI/AAAAAAAADFw/ANEgTdQTHXI-P_El2MeS45DmrgDUra4PgCLcBGAs/s1600/robust_control_3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="148" data-original-width="207" src="https://2.bp.blogspot.com/-FLx-ihcKyms/WiL7aHHIUoI/AAAAAAAADFw/ANEgTdQTHXI-P_El2MeS45DmrgDUra4PgCLcBGAs/s1600/robust_control_3.png" /></a></div><br />We can then manipulate the systems to calculate the baseline closed-loop model, and have it in a loop with $\Delta$. We then apply a fixed point theorem -- called the Small Gain Theorem in control theory (also due to George Zames) -- to show that if the infinity norm of the baseline closed-loop model is less than 1, the overall system will be stable. In other words, the controller will stabilise the true plant, for any $\Delta$ in the set of possible perturbations.<br /><br />By contrast, optimal control was highly aggressive in its usage of the baseline model. Almost any perturbation of the system from the assumed model would result in instability. 
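The small-gain check is easy to carry out numerically for a toy system. A sketch (the plant, the controller gain, and all the numbers are invented for illustration; a real design would also shape the uncertainty with frequency weights):

```python
import cmath

# Toy baseline plant P0(z) = 0.5/(z - 0.8) with a proportional controller K = 1.
# The closed loop from d to y is T(z) = P0(z)/(1 + K(z) P0(z)).
a, b, k = 0.8, 0.5, 1.0

def closed_loop_gain(theta: float) -> float:
    """|T(e^{j*theta})|: the closed-loop gain at one point on the unit circle."""
    z = cmath.exp(1j * theta)
    p0 = b / (z - a)
    return abs(p0 / (1.0 + k * p0))

# Approximate the infinity norm by gridding the unit circle.
N = 20000
hinf = max(closed_loop_gain(2.0 * cmath.pi * i / N) for i in range(N))
print(f"||T||_inf ~ {hinf:.3f}")  # ~0.714 for these numbers

# Small Gain Theorem: since ||T||_inf < 1, the loop remains stable for every
# perturbation Delta with ||Delta||_inf <= 1.
print("robust to unit-norm Delta:", hinf < 1.0)
```

For these numbers the closed loop is $T(z) = 0.5/(z - 0.3)$, whose infinity norm is $0.5/0.7 \approx 0.71 < 1$, so stability survives any admissible $\Delta$.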
In other cases, the numerical procedure to determine the optimal control law was itself numerically unstable.<br /><br />The above specification of uncertainty is standard, but is somewhat naive. In practice, we have a rough idea what sort of uncertainty we are up against. We can extend the analysis to allow ourselves the ability to shape the uncertainty in the frequency domain. For example, we usually have a good idea what the steady-state operating gain of a system is, but often have little idea what the high frequency dynamics are. We shape the frequency domain characterisation in such a fashion, and we thus constrain how to design our control laws.<br /><h2>Applications to Economics?</h2>The direct application of control theory is in the design of policy responses, as was done in the Dynamic Stochastic General Equilibrium literature. The difficulty with trying to apply robust control is that we do not really have a good notion of the baseline system. Also, the true models are certainly not linear.<br /><br />Another issue is that the type of uncertainty we face is somewhat different. We know with certainty that accounting identities will always hold. The true uncertainty we face is the behaviour of economic sectors. <a href="http://www.bondeconomics.com/2017/11/the-theoretical-incoherence-of-full.html" target="_blank">I made an initial stab at analysis that exploits this feature in an earlier article</a>. However, I do not see an obvious way to shoehorn that type of model into existing robust control theoretical frameworks.<br /><br />Nonetheless, <a href="http://www.bondeconomics.com/2017/11/why-parameter-uncertainty-is-inadequate.html" target="_blank">the realisation that we can rigorously discuss model uncertainty means that we should not be treating uncertainty via parameter uncertainty</a>.<br /><h2>Hansen and Sargent's Approach</h2>In 2008, Lars Peter Hansen and Thomas J. 
Sargent published the book <i>Robustness</i>, which was an attempt to bring robust control theory to economics. I started reading the book with high expectations, and gave up fairly quickly.<br /><br />Within control theory, there were a number of differing approaches to robust control. Back when I was a junior academic, I would have had to be diplomatic and pretend that they were all equally valid. Since I am no longer submitting papers to control engineering journals, I am now free to write what I think. My view was that those alternative approaches were largely bad ideas, and were only useful for expanding publication counts.<br /><br />One such alternative approach was to apply game theory. Instead of a truly uncertain model, you are facing a malevolent disturbance that knows the weak points of whatever control law you are going to apply. You are forced to use a less aggressive control strategy, so that it is not vulnerable to this interference. You ended up with the same final set of design equations as in robust control, but that was just a lucky accident of linear models. Any nonlinearity destroyed the equivalence of the approaches; that is, an uncertain nonlinear system could not be emulated with a game theory framework. (Since I worked in nonlinear control, I largely managed to ignore that literature. However, I was forced to work through a textbook detailing it as part of a study group, and I hated every minute of it.)<br /><br />Working entirely from memory, I believe that you also largely lost the ability to shape the uncertainty in the frequency domain. That is, the fact that we generally know more about the steady-state characteristics of a system than the high frequency response is lost, since we have a "malevolent actor" that is responding at an extremely high frequency. 
For linear system design, you can cover this up with kludges that allow you to restore the equivalence to standard robust control design equations, but there was no theoretical justification for these kludges from within the game-theoretic framework.<br /><br />Given mainstream economists' love of game theory, it was perhaps not surprising that Hansen and Sargent chose that formalism. You end up with a new variant of DSGE macro, but still without any true model uncertainty. It may be better than the optimal control based DSGE macro, but that's setting the bar very low. I was thinking of writing a review of the book, but I would have been forced to be all academic-y. The resulting review would have been painful to write (and read). It may be that there are more redeeming features to the approach than I saw in my first reading of the book, but I remain skeptical.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com4tag:blogger.com,1999:blog-5908830827135060852.post-23587470840408107732017-12-02T09:10:00.000-05:002017-12-02T09:10:58.420-05:00Republicans Cut Taxes. Sun Rises In The East.<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-nNnbPg8f93M/WiKkguLWA6I/AAAAAAAADFI/XrxsUdinV5Q3ZJNgxD48hHalwIrgfiHowCKgBGAs/s1600/logo_fiscal.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://1.bp.blogspot.com/-nNnbPg8f93M/WiKkguLWA6I/AAAAAAAADFI/XrxsUdinV5Q3ZJNgxD48hHalwIrgfiHowCKgBGAs/s1600/logo_fiscal.png" /></a></div>At the time of writing, it appears that the Republican-controlled Congress will be able to patch together a tax cut bill. What exactly will be in the bill is still unknown; you gotta pass the law to see what's in it. 
Based on the chatter I have seen, it seems likely that the tax cut will not have a major macroeconomic impact, although it might help trigger rate hikes. This cut has raised an interesting debate within the Democratic Party: should they follow the same strategy that has failed for decades, or take an MMT line on the topic?<br /><br /><a name='more'></a><h2>Macro Impact</h2><br />Once again, it is very unclear what tax cut bill will ultimately be passed. The bill is unpopular, and the legislative process around it can only be described as chaotic. (This process is completely foreign to me. In Canada, the budget is written up by presumed adults for the Minister of Finance, and then the budget is rammed down the throat of Parliament.)<br /><br />However, my guess is that the final bill will mainly consist of tax cuts aimed at businesses and the wealthy, with minor tax increases for everyone else.<br /><br />In analysing the bill, the dollar cost is essentially a piece of trivia; what matters is how many jobs would be created. Needless to say, those estimates would largely reflect the biases of the forecaster. My bias is that very few jobs would be created, and so the tax cut is not a big deal.<br /><h2>Effect on Inflation</h2>One of the problems with mainstream analysis is that it relies on highly aggregated models -- a single household sector, with a single unemployment rate. That was the point of failure of the Old Keynesian economists, and the New Keynesians learned the wrong things from the Old Keynesians' failures.<br /><br />If we have a more realistic analysis, there are two offsetting factors in the tax cut.<br /><br /><ol><li>The tax cut is aimed at entities that have a low propensity to consume. The net stimulus would be small.</li><li>However, the stimulus will be hitting the sectors/regions of the economy that were already doing well.
This would cause <i>sectoral</i> inflation.</li></ol><br />My guess is that the effect on measured inflation would be small. The reasoning behind this guess is straightforward. The spending of the ultra-rich does not appear to show up in measured price indices; if it did, we probably would have had higher inflation rates already. Very simply, private jet prices do not have an impact on the CPI. Since policy is aimed at aggregates like the CPI, the observed effects would be very hard to disentangle from the expected effect of the cycle -- inflation rates generally rise during the cycle.<br /><h2>Interest Rate Offset</h2>It is interesting that many liberal mainstream economists are worried about the economy overheating, even though they subscribe to the theory that the central bank will just offset the effect of stimulus. If we were to believe mainstream theory, the central bank would raise rates to counteract the stimulative effect of the tax cut.<br /><br />However, that is exactly what mainstream economists were asking for just a few years ago: we need to get interest rates away from zero, so that we can avoid the "liquidity trap" (their definition).<br /><br />Furthermore, higher rates are exactly what pensions need. Decades ago, most developed countries pinned their hopes on pension plans to provide retirement income. Such plans would obviously run into trouble if we had a long period of negative real rates. However, mainstream economists almost certainly assured people that it was impossible for real interest rates to be negative for a long period of time -- the natural rate of interest is positive! <i>(I do not know whether they said that about pension provision, but they certainly said the negative real rates of the 1970s were an aberration.)</i><br /><h2>"But We're At Full Employment!"</h2>One strand of argument against the tax cut is that we are already at full employment.
Since there is actually no good empirical evidence about what "full employment" represents (using the mainstream definition), this argument is full of holes. Starting in 2010, I had to sit through presentations by economists arguing that the United States was already at full employment.<br /><br />Since there are no reliable estimates of full employment, the argument really boils down to said economist not liking the tax cut.<br /><h2>Probability of a Debt Crisis</h2>The probability of an involuntary default by the United States Federal Government will rise to 0% (from 0%) as a result of this tax cut.<br /><h2>Political Strategy</h2>For decades, the Republicans have followed the strategy of "starve the beast." Cut taxes, and then use the public's irrational fears of debt levels to force through spending cuts. The size of the government (other than defense spending!) is steadily ratcheted lower.<br /><br />This strategy obviously requires lying to the public, which offends many people. I come from a political environment where politicians routinely said one thing in French, and the opposite in English (when language tensions are high); nobody really bats an eyelid. Politicians lie; that's part of their job description. <i>(I would note that I am a Prairie Populist; my favoured politicians did not lie. I just have very low expectations for other politicians.)</i><br /><br />The traditional Democrat response to this strategy has been to emphasise how unsustainable Republican tax cuts are. In the current environment, this will have the predictable result: spending on programmes favoured by the Democrats will be cut by the Republicans, in the name of fiscal responsibility. Since the programmes being cut almost certainly have a higher multiplier than the tax cuts, we end up with a net fiscal drag, with the negative effect aimed at the weaker sectors and regions of the economy.
In other words, the Democrats help legitimise starving the beast.<br /><br />From the perspective of Modern Monetary Theory, the dollar amounts do not matter, only the effect on the real economy. It is very easy to question what purpose is served by cutting taxes on the cohort that has had the greatest income recovery since the financial crisis. The Republicans won the election, and should be expected to enact their preferred agenda. However, they need to be held accountable and forced to justify the effects of that agenda, and not be derailed by arguments over mythical debt crises (which they will win).<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com0tag:blogger.com,1999:blog-5908830827135060852.post-55774350912355732312017-11-29T09:00:00.000-05:002017-11-29T09:48:10.741-05:00Why Parameter Uncertainty Is An Inadequate Modelling Strategy<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-gRnHjfOK9pI/Wh2JXqw0DQI/AAAAAAAADEQ/lLSnwsSI3Z4KPtYNPV6pS2_0YZ4PvQGgQCKgBGAs/s1600/logo_models.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://4.bp.blogspot.com/-gRnHjfOK9pI/Wh2JXqw0DQI/AAAAAAAADEQ/lLSnwsSI3Z4KPtYNPV6pS2_0YZ4PvQGgQCKgBGAs/s1600/logo_models.png" /></a></div>We live in a world of uncertainty. One strategy used in economics is to incorporate the notion of parameter uncertainty: we have the correct model, but the parameters have some random variation from a baseline value. This strategy is highly inadequate, and has been rejected by robust control theory. The belief that we have the correct model was an underlying premise of optimal control theory, and the weakness of this premise in practice explains why optimal control theory was largely abandoned in controls engineering.
(Interestingly enough, it persists in Dynamic Stochastic General Equilibrium (DSGE) models).<br /><br />In this article, I give an example of an abject failure of parameter uncertainty as a notion of model uncertainty.<br /><br /><a name='more'></a><h2>Why Care About This Example?</h2>The example I give here is deliberately simple: it is a model with only a single parameter. It is not a recognisable economic model. However, it is easy for the reader to experiment with this model to validate the failure of parameter uncertainty.<br /><br />Optimal control failed in practice as a result of the general principles illustrated by this example. To be clear, this is not a particular model system that caused them difficulty in practice. I will return to optimal control after I describe the example.<br /><br />One could look at the simplicity of the example and argue that modern economists are far too sophisticated mathematically to make such an error. However, such arguments ring hollow when we consider that the optimal control engineers fell into a similar trap.<br /><br />Firstly, the optimal control engineers were more sophisticated mathematically than modern economists. They developed the mathematics that DSGE macro modellers now use. A major driver in the development of optimal control theory was the path-planning required to get a manned mission to the moon in the 1960s: they were literally rocket scientists.<br /><br />Secondly, they were working on engineering systems. Macro economists constantly complain that they cannot do experiments on their systems of study (except by using DSGE models, of course!). The optimal control engineers had the luxury of doing almost any tests they wanted on the physical systems they studied to determine the dynamics.<br /><br />Even with these advantages, optimal control still failed as a design technique (outside of said path-planning problems).<br /><h2>The Example</h2><i>(This article uses a bit of mathematics.
Equation-averse readers may be able to skip most of them. The figures were generated by running simulations using my sfc_models equation solver. The code is given below, and is available in the development branch of the project on GitHub.) </i><br /><script type="text/x-mathjax-config"> MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}}); </script> <script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"> </script><br /><br />We have an extremely simple system, and we have a baseline model for it, denoted $P_0$. (We use $P$ to stand for "plant," which is the traditional control systems name for such a system, and it is technically an operator from a single discrete-time input to a single output.)<br /><ol><li>We know that the steady-state gain from the input $u$ to the output $x$ is 1. That is, if we have a constant input of $c \in R$, the output converges to $c$.</li><li>We believe that the model lies within a class of models defined by a parameter $a$. The baseline model has $a=0.05$, and we believe that $a \in [0.005, 0.09]$.</li><li>The input $u$ is being set by the controller of the system, and hence has access to the data.</li></ol><div>The parameterised model is defined by: $x[k] = (1-a) x[k-1] + a u[k-1], \forall k > 0, x[0] = 1.$</div><br />(Following electrical engineering tradition, I denote the discrete time index with $[k]$.)<br /><br />We assume that we have access to a great deal of historical data for this system.
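The parameterised model is simple enough to simulate directly. The following is a minimal plain-Python sketch (my own stand-in, not the sfc_models code used to generate the figures), simulating the step response of the plant for a given value of $a$:

```python
# Plain-Python sketch of the parameterised plant
#   x[k] = (1 - a) x[k-1] + a u[k-1],  x[0] = 1,
# simulating the response when the input u steps from 1 to 2.

def step_response(a, num_steps=100, step_time=5):
    """Return x[0..num_steps-1] with u stepped from 1 to 2 at step_time."""
    x = [1.0]
    for k in range(1, num_steps):
        u_prev = 1.0 if (k - 1) < step_time else 2.0
        x.append((1.0 - a) * x[-1] + a * u_prev)
    return x

# The baseline (a = 0.05) and the endpoints of the assumed interval.
for a in (0.005, 0.05, 0.09):
    print(f"a = {a:.3f}: x after 100 steps = {step_response(a)[-1]:.3f}")
```

All three trajectories converge towards 2, but at very different speeds; the slowest model ($a = 0.005$) is still far from its steady state after 100 steps.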
Once we have validated that the steady state gain is equal to one, the resulting linear system has to have the above form (or be related to it by a linear scaling of $x$).<br /><br />For whatever reason, we are certain that the parameter $a$ lies in the interval $[0.005, 0.09]$. We can run simulations of the model with three parameter values (the baseline $a=0.05$, plus the two endpoints of the interval) to get the simulated response of a step rise of the input $u$ from 1 to 2. (Figure below.)<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-r9wo5aII1jc/Wh2K-AA9VuI/AAAAAAAADEk/zxAJBLhWaK0BUtO0gzvRlyrThokK9CopgCLcBGAs/s1600/robust_1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="540" data-original-width="720" height="480" src="https://3.bp.blogspot.com/-r9wo5aII1jc/Wh2K-AA9VuI/AAAAAAAADEk/zxAJBLhWaK0BUtO0gzvRlyrThokK9CopgCLcBGAs/s640/robust_1.png" width="640" /></a></div><br />We can then compare the output of the true system (in red) to these simulations. We see that the true step response is quite close to the baseline model.
(An eagle-eyed reader might spot the problem here, but this would be difficult if I buried the true system response in random noise.)<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-HP90LinRsbk/Wh2K-AGJuAI/AAAAAAAADEg/6ZWBRNgY9moZYR5GHIGqe5Q2lXnft49DwCLcBGAs/s1600/robust_2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="540" data-original-width="720" height="480" src="https://3.bp.blogspot.com/-HP90LinRsbk/Wh2K-AGJuAI/AAAAAAAADEg/6ZWBRNgY9moZYR5GHIGqe5Q2lXnft49DwCLcBGAs/s640/robust_2.png" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: left;">As we can see, the system marches at a leisurely pace from 1 to 2, following the change in the input. However, one could imagine that this slow adjustment would be seen as suboptimal, and we can then speed the response up.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">We want the output $x$ to track a reference signal $r$. We define the tracking error $e[k]$ as $x[k] - r[k]$. We would like the tracking error to have the dynamics: $e[k] = \frac{1}{4} e[k-1].$</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">We can achieve this by setting $u$ so that it cancels out the existing dynamics for the baseline model, and forces $x[k]$ to emulate the above behaviour.
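For the baseline model ($a = 0.05$), the required input can be backed out directly; a quick sketch, assuming for simplicity a constant reference $r[k] = r$. The desired error dynamics $e[k] = \frac{1}{4} e[k-1]$ imply $$x[k] = \frac{1}{4} x[k-1] + \frac{3}{4} r.$$ Equating this with the plant dynamics, $0.95 x[k-1] + 0.05 u[k-1] = 0.25 x[k-1] + 0.75 r$, and solving for the input yields $u[k-1] = (-0.7 x[k-1] + 0.75 r)/0.05$; shifting the time index gives the feedback form of the control law.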
This is achieved by setting a control law:</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">$u[k] = \frac{-0.7 x[k] + 0.75 r[k]}{0.05}.$</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">If we simulate the closed loop responses for our baseline system, and the systems at the extremes of the parameter set, we see that the behaviour is relatively acceptable for all three models.</div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-oM8DBrmEll8/Wh2K-Zz5obI/AAAAAAAADEo/UAwv2ix4NsEQpNp20un85hRDwqnAYV0xACLcBGAs/s1600/robust_3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="540" data-original-width="720" height="480" src="https://1.bp.blogspot.com/-oM8DBrmEll8/Wh2K-Zz5obI/AAAAAAAADEo/UAwv2ix4NsEQpNp20un85hRDwqnAYV0xACLcBGAs/s640/robust_3.png" width="640" /></a></div><br />However, if we apply the control law to the actual system model, we end up with unstable oscillatory behaviour (in red).<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-GImpWpl2j8U/Wh2K995dNeI/AAAAAAAADEc/-smN0U1dI2kkKa4p92ZvozjA5QiY_joSwCEwYBhgL/s1600/robust4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="540" data-original-width="720" height="480" src="https://4.bp.blogspot.com/-GImpWpl2j8U/Wh2K995dNeI/AAAAAAAADEc/-smN0U1dI2kkKa4p92ZvozjA5QiY_joSwCEwYBhgL/s640/robust4.png" width="640" /></a></div><br />In other words, although parameter uncertainty covered the open loop behaviour nicely, actual closed loop behaviour was nothing close to what was implied by the extremes of the parameter set.<br /><br />The reason for this failure is quite familiar to anyone with a background in control engineering -- or taking
showers -- a lag between input and output. In this case, the model is perturbed from the original model by adding two lags to the input signal. This is enough to make the resulting system unstable. The same effect is felt if one is impatient in setting a shower temperature. If you keep setting the dial temperature based on the current water temperature, you will end up repeatedly overshooting between too hot and too cold. You need to give time to allow the water to flow through the pipe to see whether the temperature dial needs further adjustment. <i>(An automatic shower temperature control is a standard control systems engineering project.)</i><br /><br />In summary, systems can behave in ways quite different than predicted solely by parameter uncertainty. Missing dynamics can be fatal.<br /><h2>So What?</h2>The simplicity of this example might make some readers impatient. "That's a lag. We know how to incorporate them into our estimation strategy by adding parameters."<br /><br />Not really. Optimal control engineering did not fail because they did not have enough parameters. I had to use a fairly drastic lag to destabilise the system only because the underlying system (a low-pass filter) is remarkably stable. If the system had oscillatory dynamics of its own, much more subtle perturbations to the model would achieve destabilisation.<br /><br />The optimal control strategy failed because it assumed that the model was known. The methodology was:<br /><ol><li>Take assumed model of the plant.</li><li>Calculate the optimal trajectory (using some objective function).</li><li>Force the system trajectory to follow the optimal trajectory by cancelling out the assumed plant dynamics.</li></ol><div>This procedure was equivalent to determining the inverse of the mathematical operator of the plant, and using that inverse to calculate the target dynamics. 
Unfortunately, cancelling out an operator with an inverse creates severe numerical instability unless the matching is perfect. (Your system matrices end up being ill-conditioned.) This numerical instability made optimal control laws useless in practice.</div><div><br /></div><div>For those interested in the history of optimal control, I want to underline that I do not think that they were misled by parameter uncertainty. When I did my doctorate in the early 1990s, optimal control theory was only studied as a historical curiosity; nobody really cared what their exact thought processes were. Based on my hazy memory of the literature (and the only optimal control textbook I own), there was no formal notion of model uncertainty. Instead, mismatches between models and reality were explained as follows.</div><div><ol><li>Our measurement of outputs was corrupted by random noise.</li><li>Model dynamics included additive random disturbances.</li></ol></div><div>These issues were dealt with by using the Kalman filter to estimate the state despite the noise, and the assumed stability of the system would handle (finite energy) external disturbances. (This should sound familiar -- this is exactly the strategy that DSGE modellers inherited.) As can be seen, this is inadequate. Incorrect dynamics can imply a consistent force driving the actual output away from the theoretical trajectory, a possibility that is not included in random disturbance models.</div><div><br /></div><div>I am unaware of any optimal control theory that dealt with parameter uncertainty. However, it seems likely that engineers would have done basic "sensitivity analyses" where they varied parameters.
They would have done so without the aid of any formal theory to guide them, and would likely have ended up with an approach similar to what I sketched out above (what happens if a parameter is near its assumed limit?).</div><div><br /></div><div>In any event, robust control -- sometimes called $H_\infty$ control -- abandoned the assumption that we know the plant model with certainty. We still have a baseline model, but we want to ensure that we stabilise not only it, but a "cloud" of models that are "close" to its behaviour. There are formal definitions of these concepts, but readers would need to be familiar with frequency domain analysis to follow them.</div><h2>Real World Implications?</h2><div>In the modern era, we are unlikely to see disasters similar to those created by the application of optimal control in engineering. People are aware of the issue of variable lags, and so policy responses are unlikely to be as aggressive as the engineering control laws were. Furthermore, the modern mainstream application of optimal control to policy is in the domain of interest rates. Fortunately, interest rate changes have negligible effect on real economic variables, and so policymakers cannot do a lot of damage.</div><div><br /></div><div>Instead, the applications of the notion of model uncertainty would be more analytical. We might begin to ask ourselves: although we managed to fit some model to historical data, is that fit spurious? How large is the class of models that would yield an equally good fit?
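The article's earlier example illustrates why that class can be uncomfortably large: the baseline plant and the perturbed plant with two extra input lags (the "shower" problem) produce open-loop step responses that are nearly indistinguishable, even though their closed-loop behaviour differs drastically. A minimal plain-Python sketch of the comparison (my own stand-in code, not the sfc_models implementation below):

```python
# Two models that fit the same open-loop data almost equally well:
# the baseline plant, and a variant with two extra input lags
# (the "true" system of the example).  a = 0.05 throughout.

def simulate(num_lags, a=0.05, num_steps=100, step_time=5):
    """Step response (u: 1 -> 2) with num_lags extra delays on the input."""
    x = [1.0]
    for k in range(1, num_steps):
        k_eff = (k - 1) - num_lags  # the input the plant actually sees
        u = 1.0 if k_eff < step_time else 2.0
        x.append((1.0 - a) * x[-1] + a * u)
    return x

base = simulate(num_lags=0)
true = simulate(num_lags=2)
# Maximum open-loop discrepancy between the two models:
print(max(abs(b - t) for b, t in zip(base, true)))
```

The maximum gap between the two trajectories is under 0.1 on a unit-sized step, which would be easy to bury in measurement noise -- yet one of these models is destabilised by the control law that works on the other.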
We just need to keep in mind that varying a parameter is a much weaker condition than having unknown model dynamics.</div><br /><h2>Code </h2><br /><script src="https://gist.github.com/brianr747/1fb39c587147cb59eb8f18abc8c0e8a5.js"></script> <br /><br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com16tag:blogger.com,1999:blog-5908830827135060852.post-78851285009807753162017-11-26T10:25:00.000-05:002017-11-26T10:25:51.977-05:00The Natural Rate Of Interest Is Determined By Indexation Policy<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-3b1GQOojjcM/WhrR7BPMSzI/AAAAAAAADD8/pcPGHTc3WNgytlOooPba51H03fdw7_5CQCKgBGAs/s1600/logo_MMT.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://4.bp.blogspot.com/-3b1GQOojjcM/WhrR7BPMSzI/AAAAAAAADD8/pcPGHTc3WNgytlOooPba51H03fdw7_5CQCKgBGAs/s1600/logo_MMT.png" /></a></div><a href="http://www.bondeconomics.com/2017/11/the-theoretical-incoherence-of-full.html" target="_blank">My previous conjecture about full employment arguments</a> has a wide range of implications. At this point, I am unsure about the generality of the argument; it is clear that there are some classes of models where J* does not exist, but is the class as wide as I think? Under the working assumption that the proof technique is indeed correct, the natural rate of interest looks like it is also in theoretical trouble. In the class of models studied, it is not too hard to demonstrate that the "natural" rate of interest is entirely determined by a fiscal policy decision: indexation policy.<br /><br /><a name='more'></a><h2>An Assertion About a Conjecture</h2><a href="http://www.bondeconomics.com/2017/11/from-j-to-u-what-my-conjecture-is-about.html" target="_blank">As I previously noted, I am way out on a limb here mathematically</a>.
I could either keep my mouth shut while I double-check my work, or double down. I have decided to double down (perhaps I could say that my account was hacked if my proof does not work?).<br /><br />If we look at the (asserted) proof of the conjecture, interest rate aficionados would have noted that I locked nominal rates at zero, pretty much as an afterthought. Even so, we ended up with a situation in which every model that has a steady state ends up with an inflation rate of zero in that steady state. Obviously, that steady state assumption could be challenged; proving its existence would require a lot more effort. That said, most standard classes of economic models do converge to a steady state under such conditions (no investment, growth, disturbances); in fact, pretty much all workhorse DSGE models are based on linearisations around some (unknown) steady state.<br /><br />By applying basic arithmetic, we see that the real (and nominal) rate of interest is zero in every single one of the steady states, for every single model. This by itself should not be disturbing to neoclassical economists; it seems to be a standard result that the natural rate of interest is zero in a no-growth economy.<br /><br />However, if the government wants to torment neoclassical economists, it could institute a policy of indexation of the Job Guarantee wage, and the progressive income tax brackets, with the indexation factor fixed. For example, the government may have read on the internet that a 2% rate of inflation is optimal, and have all nominal values in policy parameters grow by 2% per year.<br /><br />It is clear that the inflation rate can no longer be zero under these circumstances. If there is a steady state with a rate of wage inflation of less than 2%, the average private sector wage would drop below the minimum reservation wage (defined versus the Job Guarantee wage) in finite time.
If the steady state wage inflation were greater than 2%, the average tax rate would rise to 100% (or whatever draconian marginal tax rate is imposed to preserve price stability). (It is clear that the proof of non-explosive wage growth is non-trivial, and I would need to formalise things more to make it acceptable.)<br /><br />Since the nominal interest rate was still fixed at 0%, we end up with a steady state real rate of interest that is less than or equal to -2% (assuming a steady state exists). I am unaware of a rigorous standard definition of the natural rate of interest, but I assume that it would have difficulties with such a thought experiment. A change to fiscal policy parameters -- which may be very difficult to discern from just the time series of spending and taxation -- has moved the real rate of interest.<br /><br />Of course, we can use a statistical procedure to tell us what the average real rate of interest is in a steady state. For example, we know that the observed real rate of interest in any steady state will converge to the 12-month moving average of the real rate of interest. However, that observation tells us nothing about the real rate in any other steady state (unless the steady state value is coincidentally the same!). Therefore, it is clear that if we define the "natural rate of interest" to be the output of a statistical procedure, it will likely exist, but has no predictive power for other steady states.<br /><br />It is clear that this result would contradict the interpretation of most mainstream economic models. The reason for the contradiction is that someone's mathematics is wrong (possibly mine), or it is an artefact of poorly-specified fiscal policy in mainstream models. In mainstream economics, tax rates and spending are indexed to the price level (pure price-taking behaviour), and so it accommodates <i>any</i> steady state inflation rate. 
As my assertion suggests, indexation policy is a key determinant of long-run inflation trends. If we assume that fiscal policy actively destabilises the price level, it is clear via a process of elimination that only monetary policy can control inflation.<br /><h2>Definition Difficulty </h2>We see that it is difficult to come up with a definition of a natural rate that will apply to all of these models. Any attempt to argue that the steady state rate of interest will be constant can be as easily shot out of the water as J*.<br /><br />We then need to say that the observed rate of interest in steady state is equal to some "natural level," but then modified by "other factors." However, if we do not pin down those other factors, the statement is trivial. <i>Any</i> steady state value of <i>any</i> variable is zero plus "other factors."<br /><br />Unfortunately for the attempt to define the natural rate of interest, the value is being determined by changes in the tax rate schedule in these models. At any given point in time, the tax schedule is driven by a pair of vectors (tax rates, bracket levels). We need to somehow incorporate these vector-valued variables into our statistical estimation procedure. It is unclear whether such a procedure exists.<br /><br />We cannot just use the observed average tax rates to replace these vectors. The observed tax rate will rise and fall with the cycle, making it appear that a purely passive tax policy is active. This is misleading, and it would be near impossible to extrapolate historical behaviour to a new policy regime.<br /><h2>Back to the Real World</h2>What's happening in the real world? In the high-inflation era of the 1960-1970s, fiscal policy was actively indexed to inflation. The highest income tax brackets featured a high marginal tax rate, but only fools and the unlucky paid taxes at those rates.
(Entertainers are essentially employees and unable to structure their affairs to avoid taxes, and thus the "hippies against high taxes" sub-genre of popular culture appeared.) For the bulk of the income distribution, spending and tax brackets accommodated inflation.<br /><br />In the current era of price stability, tax brackets and government spending have been stable. Unsurprisingly, we have been able to sit in a steady state where inflation sticks around 2% (outside of Japan, where the steady state has been closer to 0%).<br /><br />In any event, my discussion here is theoretical, and is not meant to be policy advice.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com0tag:blogger.com,1999:blog-5908830827135060852.post-22181984239878838382017-11-24T10:44:00.000-05:002017-11-24T10:44:44.747-05:00From J* To U*: What My Conjecture Is About<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-NUVUeWkcXgI/WhgyIRj4W-I/AAAAAAAADDs/g3Pq8K1cwG0tfLNd6dfQsKYo_rq2nH_1ACKgBGAs/s1600/logo_models.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://1.bp.blogspot.com/-NUVUeWkcXgI/WhgyIRj4W-I/AAAAAAAADDs/g3Pq8K1cwG0tfLNd6dfQsKYo_rq2nH_1ACKgBGAs/s1600/logo_models.png" /></a></div><a href="http://www.bondeconomics.com/2017/11/the-theoretical-incoherence-of-full.html" target="_blank">I have little doubt that my previous article on J* </a>-- a definition that I invented -- was confusing to most of my readers. As I wrote, I reverted to a mathematical style of writing. It is likely that inventing a concept and proving it does not exist is a pastime that would mainly be of interest to mathematicians (and philosophers). However, I have a real-world target in mind: NAIRU.
All we need to do is generalise the proof technique, and we can prove that a similar concept -- U* -- does not exist in the current institutional structure. We can then use that information to annihilate <i>any</i> definition of NAIRU that ends up being equivalent to U*.<br /><br /><a name='more'></a>Why not take on NAIRU directly, a reader might ask? This is because economists are not mathematicians. They use any number of different concepts, and assume that they are the same thing. It is a waste of time trying to prove the incoherence of each of these concepts; we just prove that U* cannot exist, and then prove the equivalence of any particular definition of NAIRU to U* as needed.<br /><br />Obviously, that seems to be a rather grandiose assertion. I could easily be wrong. The most obvious hurdle is that there could be a flaw in my J* non-existence proof. I have thrown it out there, and I am waiting for it to be shot down. A more intelligent approach would have been to approach people privately and get their opinion, but hey, I decided to roll the dice.<br /><br />The next hurdle is generalisation. I have added some comments below (which I added to the original article as well). When I first did the proof, I had very little idea how to generalise it; I just needed to pin down a proof for a single case. However, if we start pulling theoretical tricks out of the nonlinear robust control toolkit, the job might be easily done. If we want to generalise the result, things could get pretty ugly from a mathematical point of view. For example, norms of series will not converge. I dealt with this in my doctoral thesis in an inelegant fashion; mathematicians may have a better way to approach it.<br /><br />If my guess is correct, finding a counter-example model is much tougher than it would appear. It is not enough to pick a model that violates one of the many assumptions I imposed: the model cannot be approximated by any such system in steady state.
Is such a model going to be anything other than pathological?<br /><br />Furthermore, lack of a steady state might not be enough. If we move into the frequency domain, we can just pin down the zero frequency behaviour. If this works, we can handle any system with oscillatory or even chaotic behaviour.<br /><h2>Trivial Definitions of NAIRU</h2>The problem with mainstream economic analysis is the tendency to interchange variables that are the result of statistical methods with theoretical concepts. It is obvious that if we define NAIRU to be the result of some statistical procedure, it exists. The problem is that such a definition is trivial: it has no predictive power.<br /><br />Imagine that I define "NAIRU" to be the 12-month moving average of the unemployment rate. We know that if we take any reasonable definition of steady state, the observed unemployment rate has to converge to my definition of NAIRU. However, it seems clear that this is a trivial observation: this estimate of NAIRU has essentially no predictive power for other steady states.<br /><br />I imagine that the economists producing NAIRU estimates would be very unhappy with the suggestion that their fancy models are equivalent to taking the 12-month moving average of the unemployment rate. However, I see no evidence that their estimates have any more predictive power than that definition.<br /><h2>What is not Implied by this Conjecture</h2>My arguments here are theoretical. I have little doubt that they could be misinterpreted by those who do not understand economic theory. I assume that my readers would not make these particular arguments, but one could conceive that someone somewhere might make comments that are equivalent to the following statements.<br /><ul><li><b>Non-existence of U* implies no trade-off between unemployment and inflation.</b> This conjecture does not imply that we would not see a trade-off between unemployment and inflation.
All the conjecture states is that the trade-off has to include the effect of fiscal policy. We can adjust the unemployment rate (within limits) by changing fiscal policy, without affecting the steady state inflation rate.</li><li><b>Why don't policymakers drive the unemployment rate to zero?</b> That question presupposes that policymakers <i>care</i> about the fate of the unemployed; I see little evidence of such care in the post-1990s world. Secondly, there is no mechanism -- other than the Job Guarantee -- to force the "involuntary" unemployment rate to zero. (If a Job Guarantee were implemented, there would still be residual unemployment from people on jobless benefits, or conducting a job search. Since they could take a Job Guarantee job, such unemployment might be viewed as voluntary. Whatever.) Traditional demand-management techniques used in the 1960s failed, for reasons outlined by Hyman Minsky (and presumably others) at the time. Within the simplistic class of models studied, the "fiscal coherence" assumption does imply that a non-zero lower bound for the observed J or unemployment rate exists: we have a theoretical limit for how low tax rates can be and still preserve price stability. In the real world, we have clearly never flirted with that theoretical lower bound.</li></ul><h2>Remarks on Generalisation</h2><b>Remark. </b>We do not need to assume that economic models lie in the class of models described here. It may only be enough to specify that they can be approximated by such a model in steady state. This is obviously a much wider class of model behaviour. We can formalise the notion of approximating a system by applying the definitions used in robust control theory. (My doctoral thesis is the only reference I can offer off-hand that covers the issues I see.)<br /><br /><b>Remark.</b> The steady state assumption is needed to allow us to apply algebra in the proof.
If we were willing to delve into more advanced mathematics, it seems possible that we could specify conditions in frequency domain terms, and just look at the zero frequency ("DC") component of signals. Since most time series are expected to be bounded away from zero for infinite periods of time, there would be obvious convergence issues for the Fourier transform. We would need to do our analysis on finite intervals, and then take limits. This would be tedious, but the generalisation may result in a more elegant proof.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com3tag:blogger.com,1999:blog-5908830827135060852.post-32631628386794949472017-11-23T09:00:00.000-05:002017-11-24T14:51:34.932-05:00The Theoretical Incoherence Of Full Employment Arguments<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-Ck1S_lc-EuA/WhNWdFDwnXI/AAAAAAAADCQ/iMmZS8aaxcUeu3SfbJHJo8EfA9PY9rg3wCKgBGAs/s1600/logo_MMT.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://4.bp.blogspot.com/-Ck1S_lc-EuA/WhNWdFDwnXI/AAAAAAAADCQ/iMmZS8aaxcUeu3SfbJHJo8EfA9PY9rg3wCKgBGAs/s1600/logo_MMT.png" /></a></div>One quite often runs into arguments that rely on assuming full employment, and then relating that to policy decisions. In my view, such arguments are fundamentally weak; we need to refer to actual model results to discuss policy. In this article, I explain why an attempt to apply a NAIRU argument to a Job Guarantee is misguided. The analysis is unusual: instead of discussing a single model, the behaviour of an entire class of reasonable economic models is analysed. This reflects the attitude towards model uncertainty that animates robust control theory.<br /><br /><b>NOTE:</b> <i>I have added comments about the generalisation of these results.
[2017-11-24] These comments will be repeated in a separate post.</i><br /><a name='more'></a><br />Since my thesis is that full employment arguments are mathematically incoherent, I had little choice but to lapse into a stilted mathematical writing style. My apologies.<br /><br /><i>Also, as I note below, the argument is perhaps more general than just applying to an economy with a Job Guarantee. The economic structure mattered more in my initial pass at the proof, but the logic changed as I cleaned up the proof.</i><br /><h2>The Premise</h2><script type="text/x-mathjax-config"> MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}}); </script> <script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"> </script><br />I assume that the reader is familiar with a description of a Job Guarantee. I denote the percentage of the workforce that is employed by the Job Guarantee as $J$. For simplicity, we assume that the rest of the workforce is employed. In other words, $J$ acts in a similar fashion as the unemployment rate in the current institutional structure.<br /><br />One could imagine the following argument, which is an attempt to apply existing beliefs about full employment to the analysis of a Job Guarantee. We suppose that there is a variable $J^*(t)$ that has the following properties.<br /><ol><li>The variable $J^*(t)$ cannot be affected by policy choices, in particular fiscal policy choices.</li><li>Wage inflation acceleration is given by some function $f(J(t) - J^*(t))$, with $f(0) = 0$ and $f$ is strictly decreasing. That is, inflation accelerates upwards (downwards) if the gap $J(t) - J^*(t)$ is negative (positive). (We see that $J^*$ is an analogue of NAIRU.)</li></ol><div>My conjecture is that such a variable $J^*$ cannot exist for internally consistent, reasonable economic models. 
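To make the premise concrete, here is a toy numerical illustration of property (2), with an invented linear $f$ and an invented value of $J^*$ (a sketch only; nothing in the argument depends on these choices):

```python
# Toy illustration of the accelerationist premise: wage inflation
# acceleration equals f(J - J*), with f(0) = 0 and f strictly decreasing.
# The value of J_STAR and the slope are invented for illustration.
J_STAR = 0.04  # posited policy-invariant Job Guarantee share

def f(gap, slope=-2.0):
    """Strictly decreasing linear rule with f(0) = 0."""
    return slope * gap

# Below J*, inflation accelerates upward; at J*, it is stable; above, downward.
for J in (0.02, 0.04, 0.06):
    print(J, f(J - J_STAR))
```

The conjecture is precisely that no such $J^*$ can be embedded in an internally consistent model of the economies described below, whatever the shape of $f$.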
I cannot prove this conjecture for all possible internally consistent models, but it is possible to demonstrate it for a non-empty set of <i>economies</i>. The argument is that it is possible to extend the set of economies by relaxing simplifying assumptions. I justify this belief below, once the proof technique has been demonstrated.<br /><h2>Theoretical Economies versus Models</h2>To begin, I would note that my analysis here appears to be unusual. We need to distinguish two theoretical concepts: a (theoretical) economy, and economic models. The reader is presumably familiar with economic models; stock-flow consistent models, or dynamic stochastic general equilibrium (DSGE) models would be two classes of models. A theoretical <i>economy</i> is a set of mathematical constraints for which we can find a set of economic models that respect those constraints.<br /><br />I do not have a formalisation of the concept of economy constraints, but the constraints that I have in mind would be things like production functions and national accounting constraints. A particular economic model is a mathematical model which meets those constraints, while at the same time filling in behavioural rules. In other words, an <i>economy</i> is a mathematical description that sets the rules of the game within which behavioural rules live. This distinction does appear in some treatments of stock-flow consistent models, but is arguably absent from many mathematical descriptions of DSGE models: behavioural assumptions are freely substituted for accounting constraints. In order to fit the existing literature within my framework, we need to draw that distinction.<br /><br />One key point is that a theoretical economy has a number of variables that are assumed to exist in all models of that economy. These are various non-behavioural variables: the level of employment, average wages, prices, etc.
These variables are presumably augmented by variables that are used to determine behaviour, which are model-specific. For example, one economic model might use the moving average of price changes as inflation expectations (adaptive expectations). Another model may have an inflation expectation variable that is determined by some complex optimisation procedure. The key point is that both models have to have an actual price level, which maps to the price level of the theoretical economy.<br /><br />Alternatively, we can argue that economic models are defined by four classes of mathematical constraints.<br /><ol><li>Things that have to be true: accounting identities.</li><li>Constraints that define the laws of motion of an economy: production function, etc.</li><li>Plausibility constraints: the money stock cannot be arbitrarily large relative to nominal GDP.</li><li>Behavioural constraints that determine the precise economic trajectory in a simulation.</li></ol>A theoretical economy is defined by the first three types of constraints; particular models fill in the final set.<br /><br />My discussion here is largely informal. I rely on common usage for most of the definitions used. </div><h2>Economy Assumptions</h2><div>I am now pinning down the class of theoretical economies that are to be analysed. Since it is somewhat unclear how to define constraints on economies, I will phrase the constraints as being on economic models themselves.</div><div><br /></div><div>We need to distinguish some theoretical economy variables as being "inputs." These are exogenous variables, or external shocks. Other economy variables are determined by these inputs. If we have a stochastic (random) model, the particular realisation of those random variables has to be fixed. There is a certain awkwardness that arises with shocks to behavioural parameters, which are obviously model-specific. For our purposes here, all behavioural parameters have to be clamped down.
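The economy/model distinction can be sketched in code: the economy supplies the shared non-behavioural variables and law of motion, while each model plugs in its own behavioural closure. Everything below is an invented toy (the law of motion and both expectation rules are hypothetical), not one of the models discussed in the text:

```python
# Toy sketch: one "economy" (a shared actual price level and law of motion),
# two "models" differing only in their behavioural closure (inflation
# expectations). All functional forms are invented for illustration.

def adaptive_expectations(prices):
    """Model A: expected inflation is the average of past inflation."""
    if len(prices) < 2:
        return 0.0
    changes = [prices[i] / prices[i - 1] - 1.0 for i in range(1, len(prices))]
    return sum(changes) / len(changes)

def static_expectations(prices):
    """Model B: expected inflation is simply last period's inflation."""
    return prices[-1] / prices[-2] - 1.0 if len(prices) >= 2 else 0.0

def simulate(expectation_rule, periods=20):
    """The economy: every model must carry an actual price level path."""
    prices = [1.0]
    for _ in range(periods):
        # Invented law of motion: inflation partly follows expectations,
        # plus a constant exogenous shock.
        inflation = 0.5 * expectation_rule(prices) + 0.01
        prices.append(prices[-1] * (1.0 + inflation))
    return prices

# Same economy variables (an actual price level), different behavioural rules.
path_a = simulate(adaptive_expectations)
path_b = simulate(static_expectations)
print(path_a[-1], path_b[-1])
```

The point of the sketch is only that both models expose the same non-behavioural variable (the price level), which is what allows statements about the economy to cover both.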
</div><div><ul><li>The economic models are time invariant. That is, if we apply a time shift to "input" variables, the model output variables are similarly shifted. (A formal definition can be found in the control theory literature.)</li><li>Accounting identities hold in a standard manner.</li><li>Models are of a closed economy, with a fixed workforce and a reasonable private sector production function. There is no investment or capital. The production function can be allowed to vary, but it should have the property that more people working in a period implies greater output.</li><li>There is an implicit policy objective of long-term price level stability, and policy is set consistent with that objective. In particular, the Job Guarantee wage is a fixed value (denoted $W_j$). Income from this wage is not taxed. </li><li>Workers are either working for the private sector, or in the Job Guarantee pool. There are no other welfare programmes in place. Implicitly, this assumes that people do not stop working completely.</li><li>Government (real) consumption is assumed to be a constant within any model scenario.</li><li>Interest rates are pegged at 0%. This is done for simplicity; otherwise we need to worry about interest income effects. The net result is that household sector pre-tax income just equals private sector wage income plus Job Guarantee payments, plus dividends.</li><li>We denote the average private sector wage as $W$. Taxes are paid via a non-regressive tax rate denoted $\tau_W$. (Non-regressive implies that $\tau_{W_1} \geq \tau_{W_0}$ if $W_1 > W_0$; a flat tax qualifies.) Average tax rates always lie in the interval $[0,1)$. The after-tax wage is $(1-\tau_W)W$, and denoted $\hat{W}$.</li><li>Job Guarantee output is not sold in the market. Workers are paid, but all output is given away.
Therefore, total household after-tax income is either saved (as non-interest bearing money) or used to purchase goods produced by the private sector.</li><li>There is a trade-off between paid work and the Job Guarantee that is based on the gap between the fixed Job Guarantee wage and $\hat{W}$. In particular, for any given economic model, there would be a value $\hat{W}_l$ at which the entire work force would choose to work in the Job Guarantee programme. (This value may be model-dependent.)</li><li>Market clearing. We assume that the business sector will produce at least enough goods to meet the demand from the government and households.</li><li>We assume that household holdings of government money cannot be negative, and that they cannot be arbitrarily large versus nominal income. The first condition is reasonable: governments generally do not finance indefinite household deficit spending. The second imposes a behavioural assumption: that we will never see a situation where someone has money holdings that are 1,000,000% of GDP; the presumption is that they would attempt to buy up all available production, causing a price shift. Alternatively, we argue that government liabilities cannot grow to be infinitely large relative to GDP, or that the money multiplier is bounded away from zero.</li><li>We assume that business sector government money holdings cannot be negative, or arbitrarily large versus household money holdings. We cannot have the business sector acting as a black hole in the national accounts (an awkward issue for DSGE model accounting).</li><li>There are upper and lower limits (both positive real numbers) for the ratio of the wage $W$ to the price level of private sector output.</li></ul><div>The above set of assumptions is a mixture of things that have to be true (accounting identities hold) as well as simplifying assumptions.
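In passing, the boundedness assumptions read naturally as runtime plausibility checks on any candidate model simulation. The specific bound values in this sketch are invented placeholders; the assumptions only require that <i>some</i> finite bounds exist:

```python
# Sketch of the plausibility constraints as runtime checks on a model
# simulation. All bound values are hypothetical placeholders.

def plausible(household_money, business_money, nominal_gdp, wage, price,
              money_to_gdp_cap=10.0, wage_price_lo=0.1, wage_price_hi=10.0):
    return (
        # Household money holdings: non-negative, bounded versus income.
        household_money >= 0.0
        and household_money / nominal_gdp <= money_to_gdp_cap
        # Business sector cannot act as a "black hole" in the accounts.
        and business_money >= 0.0
        and business_money <= household_money * money_to_gdp_cap
        # The wage/price ratio is bounded above and below.
        and wage_price_lo <= wage / price <= wage_price_hi
    )

print(plausible(household_money=5.0, business_money=1.0,
                nominal_gdp=10.0, wage=1.0, price=1.0))    # passes
print(plausible(household_money=1e6, business_money=1.0,
                nominal_gdp=10.0, wage=1.0, price=1.0))    # money/GDP blows up
```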
However, none of them should be controversial; they apply to many classes of existing economic models.</div><div><br /></div><div>We then end up with some more poorly-defined assumptions. If I formalised this entire argument, I hope that these assumptions could be given proper definitions. My feeling is that we would need to pin down the properties of the production function (which is currently fairly open-ended) to make some of these definitions more precise.</div><div><br /></div><div><b>Fiscal Coherence.</b> By analogy to the governmental budget constraint of the DSGE literature, I assume that tax rates are sufficiently high to prevent arbitrarily large wages. There exists a wage limit $W_h$ for which the tax take is so large that the private sector is unwilling to work to meet demand. The fact that this is a nominal upper limit arises from the assumption that the tax schedule is fixed in nominal terms, and the Job Guarantee wage is fixed, implying that there is a maximum nominal expenditure on that programme by the government. The exact level of $W_h$ depends upon the behavioural assumptions of a given model. I believe that if we were to formalise this constraint, it is that the marginal tax rate is eventually greater than the supremum of all possible steady state government consumption levels as a percentage of GDP, which implies that imposed taxes must be much greater than government spending for all sufficiently high wage levels.<br /><br /><i>As an example, imagine that the top marginal tax rate is 40%, and there is something in the government consumption function rule that limits government spending to 10% of GDP. Arbitrarily large nominal incomes (GDP) would imply a fiscal surplus of 30% of GDP, which should presumably tamp down inflationary pressures at some point. </i><br /><i><br /></i><b>Remark. 
</b>We can formalise this concept in a brute force fashion by assuming that the <i>average</i> tax rate converges to 1 in a uniform fashion, and by assuming that all models incorporate the assumption that workers will be unwilling to work in the private sector if they are unable to buy the output of their labour. This assumption is stronger than we need, but other formulations would require a more formal definition of the production function.</div><div><br /></div><div><b>Steady State Definition.</b> There is a final (heroic) assumption about the economy entering a steady state. We assume that the economy always enters a steady state in model simulations. In particular, we end up with:</div><div><ul><li>wage inflation converging to a constant;</li><li>$J$ converging to a constant;</li><li>real inventories converging to a constant.</li></ul><div>We then make a theoretical leap and refer to those constants as a steady-state solution. We could re-start the simulation with those values, and they would remain unchanged in the new simulation. If the reader is uncomfortable with this, we would need to use $\delta - \epsilon$ arguments around the limiting values of series, which would be <i>inelegant </i>and tiresome.<br /><br /><h2>Generalisations of These Assumptions</h2>(Added 2017-11-24).<br /><br /><b>Remark. </b>We do not need to assume that economic models lie in the class of models described here. It may only be enough to specify that they can be approximated by such a model in steady state. This is obviously a much wider class of model behaviour. We can formalise the notion of approximating a system by applying the definitions used in robust control theory. (My doctoral thesis is the only reference I can offer off-hand that covers the issues I see.)<br /><br /><b>Remark. </b>The steady state assumption is needed to allow us to apply algebra in the proof.
If we were willing to delve into more advanced mathematics, it seems possible that we could specify conditions in frequency domain terms, and just look at the zero frequency ("DC") component of signals. Since most time series are expected to be bounded away from zero for infinite periods of time, there would be obvious convergence issues for the Fourier transform. We would need to do our analysis on finite intervals, and then take limits. This would be tedious, but the generalisation may result in a more elegant proof.<br /><h2>Proof</h2></div></div><div><div><b>Lemma</b> The growth rate of wages in a steady state is zero.</div></div><div><br /></div><div><b>Proof.</b> By definition, the growth rate of wages in a steady state is some constant $c$. </div><div><br /></div><div>If $c < 0$, then $W(t)$ is constantly declining. There would be some time point $t^*$ for which $W(t^*) < \hat{W}_l$. Since after-tax incomes cannot be greater than pre-tax incomes, this implies the entire work force would prefer to take a Job Guarantee job at $t^*$. That in turn implies zero private sector output. However, there is non-zero demand for goods (government consumption, and the spending of Job Guarantee wages). The market clearing assumption rules out such an outcome; wages cannot fall to such a low level.</div><div><br /></div></div><div>If $c>0$, then $W(t)$ is constantly rising. This implies that $W(t)$ will eventually surpass $W_h$, which is precluded by assumption. The level of $W_h$ depends upon the functions determining the incentive to work, and the tax schedule. (This argument is obviously a short-cut; we could apply a formal fiscal coherence definition to prevent unbounded wage growth.) $\square$<br /><br /></div><div><b>Remark. </b>The previous lemma was a statement of the obvious. The fact that an economy with non-zero government spending (and a JG) cannot spiral into a deflation where nobody is working for the private sector should not raise eyebrows. 
The argument that we cannot have a continued positive inflation rate solely as the result of a progressive tax system is probably controversial in some quarters. However, it is very difficult to see how continued inflation could be sustained if the tax take is twice government spending (for example), and there is essentially no welfare state spending (since Job Guarantee wages are fixed, and inconsequential when compared to private sector wages).</div><div><br /><b>Lemma </b>The growth rate of consumer prices in steady state is zero.<br /><br /><b>Proof.</b> Apply the assumption that the wage/price ratio is bounded. $\square$<br /><br /><b>Remark.</b> The previous lemma is just a way of getting rid of consumer prices in analysis; the only inflation that matters is wage inflation.<br /><br /><b>Lemma </b>In steady state, the fiscal balance has to be zero (as is the household financial balance).<br /><br /><b>Proof.</b> In steady state, we know that government spending and taxes have to be fixed (given the previous results on price and wage stability, as well as the assumption that $J$ and government consumption are constant). Therefore, the fiscal balance is constant. If this constant is non-zero, it implies that the absolute value of household sector money holdings is eventually unbounded relative to household income, which is assumed to be impossible. (The argument for the household sector financial balance being zero is similar.) $\square$<br /><br /></div><div><b>Lemma. </b>The variable $J^*(t)$ has to be a constant $J^* \in \mathbb{R}_+$.</div><div><br /></div><div><b>Proof. </b>We take any steady state solution. By the definition of steady state used here, $J(t)$ is a constant, and wage acceleration is zero for all $t$. By applying the definition of $J^*$, we see that it has to be constant. $\square$<br /><br /><b>Theorem.</b> The variable $J^*$ cannot exist (with the above assumptions in place).<br /><br /><b>Proof. </b>We have shown that $J^*$ has to be constant.
For simplicity, we take a scenario that assumes that government consumption is zero (discussed below). We determine the steady state wage, and denote it $W_0$. Since the household sector (and government sector) exhibit fiscal balance, we have the following relationship:<br />$$<br />(1-J^*)W_0 \tau^0_{W_0} = J^* W_j,<br />$$<br />with $\tau^0_{W_0}$ being the average tax rate associated with average wage $W_0$ for this scenario (index 0).<br /><br />This implies that<br />$$<br />W_0 \tau^0_{W_0} = \frac{J^*}{1-J^*} W_j.<br />$$<br /><br />If we then run a second scenario, we see that:<br />$$<br />W_1 \tau^1_{W_1} = \frac{J^*}{1-J^*} W_j = W_0 \tau^0_{W_0}.<br />$$</div><div><br />We see that if the tax rate $\tau^1_{W_1} > \tau^0_{W_0}$, then $W_1 < W_0$. This means that a rise in tax rates between the two scenarios creates a double whammy for after-tax incomes: the tax rate is higher, and the pretax income is lower.<br /><br />We can then set the tax rate in the second scenario to a completely flat average tax rate $\tau^1_W = \frac{1+\tau^0_{W_0}}{2}$, which lies in $[\frac{1}{2},1)$. We can then verify that the after-tax income $\hat{W}_1 \leq \frac{\hat{W}_0}{2}$. Since we can divide the after-tax wage in half in one iteration, it is clear that we can repeat the process, and drive the after-tax wage below $\hat{W}_l$ in a finite number of iterations. This contradicts the assumption that the after-tax wage remains above that threshold. $\square$<br /><br /><b>Remark. </b>We have just demonstrated that there cannot be a natural rate of $J$ that is independent of fiscal policy settings. In order to achieve such a result, we need to have a model where behaviour is unaffected by things like prices or utility maximisation.<br /><h2>Remarks on the Steady State Assumption</h2></div><div>The steady state assumption is presumably controversial.<br /><br />The fact that it features price level stability should not be surprising.
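The contradiction step in the proof can be verified with a few lines of arithmetic. This is a minimal numerical sketch, assuming flat average tax rates in each scenario, with invented values for $J^*$, $W_j$, and the plausibility threshold $\hat{W}_l$ (the halving property itself does not depend on these choices):

```python
# Numerical check of the theorem's contradiction step (values invented).
# Steady state balance pins W * tau(W) to the constant K = J*/(1-J*) * W_j.
J_star = 0.05   # hypothetical candidate "natural" Job Guarantee share
W_j = 20.0      # fixed Job Guarantee wage (hypothetical units)
K = J_star / (1.0 - J_star) * W_j

tau = 0.30                        # scenario-0 flat average tax rate
after_tax = (1.0 - tau) * K / tau # after-tax wage consistent with balance
W_l_hat = 0.01                    # hypothetical lower plausibility threshold

steps = 0
while after_tax > W_l_hat:
    tau = (1.0 + tau) / 2.0                # next flat rate, lies in [1/2, 1)
    new_after_tax = (1.0 - tau) * K / tau  # balance forces wages lower
    assert new_after_tax <= after_tax / 2.0  # at least halves each iteration
    after_tax = new_after_tax
    steps += 1

print(steps, after_tax)
```

Each pass of the loop reproduces the flat-tax iteration in the proof: fiscal balance forces $W\tau_W$ to stay equal to $K$, so raising the flat rate drives the after-tax wage below the threshold in finitely many steps.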
So long as the marginal tax rate is sufficiently high and spending is not completely indexed to the price level, the fiscal surplus will be highly restrictive. This is only surprising for a modelling tradition that has no variables that act as anchor points for nominal variables.<br /><br />The second line of attack is that wages remain bounded, but the system oscillates or is allegedly "chaotic." For example, a Minsky-ite might argue that economic models should feature endogenous business cycles. However, it is extremely difficult to compare such models: since all scenarios feature business cycles, how can we compare them other than going over the entire time history? If we are looking at "full employment" argumentation, there is always an implicit assumption that we have a static scenario, so we can compare steady state values.<br /><h2>Remarks on Solution Generality</h2>One could look at the host of assumptions used and argue that I have found a special case. However, that underestimates the logic of the argument.<br /><br />If I were to use an "all else equal" argument, it would look like this.<br /><ol><li>We have shown that in a steady state, the fiscal balance is zero.</li><li>Fix an initial steady state solution.</li><li>Raise the tax rate, and "hold all else equal."</li><li>Since "everybody knows" that raising tax rates increases the fiscal balance in steady state, we end up with a steady state with a fiscal surplus, which was shown to be impossible.</li></ol><div>If I were to bury the above argument with enough bloviation, it might appear just as rigorous as most non-mathematical economic analysis. <i>However, the logic is incorrect</i>.</div><div><br /></div><div>If we know the fiscal balance is constrained to be zero, it is possible that other variables jump to new values that allow an unchanged fiscal balance yet with higher tax rates.
Many of the questionable assumptions taken were simplifications to force the adjustment onto a single variable: average wages. (This is why I set government consumption to zero: to eliminate the effect of the price of private sector output on the fiscal balance.) The only way to preserve fiscal balance with a higher tax rate under the given assumptions is to force wages lower, by a comparable amount. We just cudgel the economy with higher taxes until the after-tax wage rate drops below a plausibility threshold value.</div><div><br /></div><div>All we need to do to extend this proof is to see what other variables can change, and then create plausibility limits for their movements. However, it seems likely that we have to start imposing restrictions on the production function, and the proof becomes far more cumbersome. In order to create an elegant proof, we need to find a way of expressing such plausibility assumptions in a clean fashion.<br /><br />That said, it appears clear that finding a counter-example model would be very awkward. One needs to find a model in which fiscal policy settings (consumption, tax rates) must <i>always</i> have no effect on the steady state solution. It seems obvious that this would result in a model that most observers of fiscal policy debates would find implausible. (This may provide an avenue for a more general framing of the plausibility constraints.)</div><div><br /></div><div>Furthermore, the proof technique sidesteps heterodox complaints about equilibrium analysis. There is absolutely no requirement that any two steady states are near to each other in the state space; all that is required is that the system converges to <i>some</i> steady state regardless of initial conditions. There is no temporal relationship between successive steady states; they are the limiting results of completely independent model simulations.
Arguments that a steady state cannot exist are somewhat plausible, but in such an environment, it is exceedingly difficult to compare policy choices.</div><div><br /></div><div>Finally, there are misunderstandings regarding the post-Keynesian usage of the expression stock-flow consistency. As illustrated here, it is not just that model accounting adds up. Instead, it means that stock variables are accounted for, and they are not allowed to reach implausible values. The assumption that the money stock cannot grow in an unbounded fashion is a key reason why $J^*$ cannot exist.</div></div><h2>Remarks on Robust Control</h2>The above argumentation is certainly not using existing robust control theory. However, it reflects the spirit of sensible robust control theory: our analysis should not be tied to a particular model. We instead should try to cover a wide class of models within a single analysis.<br /><br />Applying robust control principles to economics appears to require a re-thinking of the mathematical formalism. We need to distinguish between constraints that must apply, and behavioural constraints. We then need to have a high-level description of what constitutes reasonable economic behaviour. As an example, we see that it would be unreasonable for workers to remain in the private sector if the Job Guarantee wage were ten times the average private sector wage; any model that suggested that outcome should be ruled out of contention.<br /><h2>Other Remarks</h2>This section is a group of observations regarding the mathematical exposition. I am following the mathematical convention that these remarks are stand-alone observations, and there is no narrative arc connecting them. That is, they can be safely read in any order, and issues with one remark do not impinge on the others.<br /><br /><b>Remark. </b>As I have emphasised, this proof is informal, relying on common usage of terms.
Once the argument is formalised, it is unclear whether the existence of a Job Guarantee matters; all that is needed is a lower bound for nominal after-tax wages. If this is indeed correct, this argumentation could be extended to take on the concept of NAIRU in the present institutional structure.<br /><br /><b>Remark. </b>The assumption that government consumption is zero in the theorem is jarring. It was done solely for algebraic simplicity, and I believe it can be relaxed. The problem is the assumption that government purchases are done in a purely price-taking fashion, as is the case in other economic models. The assumption that government spending is completely indifferent to price levels seems obviously incoherent with a belief that price level stability is the primary economic objective. A simple alternative is to impose the condition that total government spending in nominal terms on consumption is constant.<br /><br /><b>Remark. </b>As is to be expected, this model is Chartalist. If we allow the Job Guarantee wage and tax brackets to be indexed to the price level in some fashion, it would not be surprising if steady states featured non-zero inflation rates. Very simply, the argument is that if the objective is price level stability, fiscal policy has to be set in a fashion that is consistent with that objective. Attempting to use monetary policy to counteract an incoherent fiscal policy is a questionable strategy.<br /><br /><b>Remark.</b> One could argue that higher marginal tax rates did not prevent inflation historically. However, that was in an environment where all variables ended up being indexed to realised inflation rates. This behaviour obviously destabilises the system, and is inappropriate for an analysis that presupposes that the objective is price level stability.<br /><br /><b>Remark. </b>The fact that Job Guarantee income is not taxed, but private sector wages are, is used to simplify the proof. 
This unfairness is regrettable.<br /><br /><b>Remark.</b> It is possible to imagine someone arguing that mathematical arguments do not matter: we know that $J^*$ has to exist for some reason. The only response is that this person does not have an internally consistent mathematical model that respects the given behavioural assumptions, particularly the accounting constraints. It is easy to construct models where the inflation rate has to be determined by $J^*$; however, they would feature behaviour that is either impervious to tax rates (and so economic incentives do not matter for employment decisions), or feature the household or business sector generating money balances with an arbitrarily large magnitude. We cannot know which assumption fails until the model is constructed and analysed.<br /><br /><b>Remark.</b> The transition to a constant-growth framework seems straightforward. We need to have the Job Guarantee wage and tax brackets growing at a constant rate, and we would need to use these variables as scaling constants in the analysis. This would increase the amount of algebra required, and obscure the working of the proof.<br /><br /><b>Remark. </b>The time invariance assumption does not seem to appear in the analysis. However, I believe that it is required for the manipulations involving the steady state to work (which are currently informal). We need to be able to cleanly transition from solutions converging to a steady state to a solution of the economic model with constant values. Time invariance makes this process much simpler to work with.<br /><br /><b>Remark.</b> This result should be worrisome to those who believe that they can take a purely empirical approach to macroeconomics. For any model in the class studied, it is clear that we can calculate inflation acceleration and $J$ for all time for a scenario $s$. Furthermore, unless the model has some very unusual dynamics, $J(t)$ and inflation acceleration will both be auto-correlated. 
We can then apply the same statistical techniques used to estimate NAIRU in the real world to this model data. It seems likely that they will produce an estimate for a "natural rate" of $J$ which converges to some value; call this $\hat{J}_s^*$. However, as can be seen from this proof, the estimate $\hat{J}_s^*$ has no predictive value for any other scenarios generated by the same model, <i>for every single model in this class</i>.<br /><br /><b>Remark.</b> The proof is based on a construction that shows we can always raise taxes to drive up $J$ if it is less than 100% (if necessary, by driving the whole labour pool into the Job Guarantee programme). The implication appears to be that we can lower $J$ by cutting taxes. However, that process cannot be repeated indefinitely: sooner or later, we have to hit the theoretical minimum for tax rates (which is presumably model-dependent). One could attempt to rescue "full employment" arguments by assuming that we are always and everywhere at the absolute theoretical minimum for tax rates. However, such a belief is hard to square with observed data. I may expand this discussion in a later article.<br /><br /><b>Remark.</b> One could argue that there is an irreducible minimum value to $J$. Such an argument does not appear to make much sense for a sensible Job Guarantee scheme, but it is more plausible if we translated back to the unemployment rate. There are strong arguments that there will always be a certain part of the workforce who are transitioning between jobs at any given time, and show up as unemployed in the survey. I believe that this minimum level was estimated to be 2% or so. However, the existence of such a minimum tells us nothing about $J^*$: if it is a threshold that cannot be surpassed, we can never observe $J$ below that level. Trying to fit this concept into a NAIRU-like definition is difficult. It is equivalent to saying that accounting identities always hold, and if they do not, it would be inflationary. 
In any event, the theoretical lower bound for tax rates would presumably interact with this barrier to create a theoretical minimum value for $J$; however, if we are in steady state, we will still have price stability.<br /><br /><b>Remark.</b> Neil Wilson made a comment with observations about the non-uniformity of workers. This is a real-world issue. However, non-uniformity implies that wage incomes are not all equal, and it would be possible to have the same average wage rate corresponding to different average income tax levels. That violates the assumptions about the tax rate. However, if we can approximate the more complex model in steady state with a model that meets these assumptions, the conjecture is that the result would still apply.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com10tag:blogger.com,1999:blog-5908830827135060852.post-88090973502566345172017-11-22T09:00:00.000-05:002017-11-22T09:00:15.463-05:00"An Introduction to SFC Models Using Python" Published<div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-clGFKX5vnxc/WhTEhvOxVCI/AAAAAAAADC8/Vck9jRu48YwxSownuSaJyLUMXBJvm7XUACKgBGAs/s1600/ereport04_SFC_6_9.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1067" height="320" src="https://2.bp.blogspot.com/-clGFKX5vnxc/WhTEhvOxVCI/AAAAAAAADC8/Vck9jRu48YwxSownuSaJyLUMXBJvm7XUACKgBGAs/s320/ereport04_SFC_6_9.png" width="213" /></a></div>Stock-Flow Consistent (SFC) models are a preferred way to present economic models in the post-Keynesian tradition. This book gives an overview of the sfc_models package, which implements SFC models in Python. The approach is novel, in that the user only specifies the high-level parameters of the economic model, and the framework generates and solves the implied equations. 
The framework is open source, and is aimed at both researchers and those with less experience with economic models. This book explains to researchers how to extend the sfc_models framework to implement advanced models. For those who are new to SFC models, the book explains some of the basic principles behind these models, and it is possible for the reader to run example code (which is packaged with the software online) to examine the model output.<br /><br /><i>The book is available in ebook and paperback editions.</i><br /><br /><a name='more'></a><h2>Book Description</h2>The book is 128 pages, excluding end matter. It includes some figures, along with some example code. The writing style is different from the author's usual style, as most of the text involves a technical description of the Python code. There are some less technical sections describing the behaviour of stock-flow consistent models.<br /><br />The <i>sfc_models</i> source code is available at: <a href="https://github.com/brianr747/SFC_models">https://github.com/brianr747/SFC_models</a>, or as a download from the Python Package Index. The large examples discussed in the book are packaged with the source code.<br /><br />The book has been released in both paperback and ebook editions. The paperback is available at a number of online retailers; it may take several weeks for it to appear at some distributors. The ebook edition is currently only available for the Kindle (Amazon). The Kindle format has a fixed page layout that replicates the paperback, similar to reading a PDF file. (This means the ebook has some surprising features, such as intentionally blank pages.) Not all Kindle readers may support this format.<br /><br />If the paperback edition is purchased from Amazon, you should be offered the option to get the Kindle edition as well for free. 
It may take a day or two for this feature to show up, as it takes time for their systems to link the two editions.<br /><br />There were formatting problems discovered very late in the production process which have delayed the release of the ebook edition to other retailers (EPUB edition). The EPUB format that is accepted by other retailers features reflowing pages (like my previous ebooks). Reflowing pages are like web pages, in that they reformat according to the size of your viewing screen (very convenient for small reading devices). Unfortunately, this flexibility broke the Python code formatting in the example code. It is unclear when this will be fixed.<br /><br /><b>ISBN Information</b><br /><br /><ul><li>Paperback ISBN 978-0-9947480-9-6</li><li>Kindle ISBN 978-1-775167-0-0</li></ul><iframe align="left" frameborder="0" marginheight="0" marginwidth="0" scrolling="no" src="//ws-na.amazon-adsystem.com/widgets/q?ServiceVersion=20070822&OneJS=1&Operation=GetAdHtml&MarketPlace=US&source=ac&ref=qf_sp_asin_til&ad_type=product_link&tracking_id=bondecon09-20&marketplace=amazon&region=US&placement=0994748094&asins=0994748094&linkId=60ebb45000e10d7423659fdcc8bd4255&show_border=true&link_opens_in_new_window=true&price_color=333333&title_color=0066c0&bg_color=ffffff" style="height: 240px; width: 140px;"> </iframe> <br /><h2>Chapters</h2><div><ol><li>Introduction</li><li>Why Python?</li><li>Model Structure</li><li>Equations and Their Solution</li><li>Closed Economy Models</li><li>Open Economy Models</li><li>Extending the sfc_models Framework</li></ol></div><br /><br /><br />Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com0tag:blogger.com,1999:blog-5908830827135060852.post-79758330619393762052017-11-20T18:14:00.003-05:002017-11-21T19:19:30.493-05:00Kindle Version of SFC Models Book Available...<div class="separator" style="clear: both; text-align: center;"><a 
href="https://3.bp.blogspot.com/-nsicSroQUrw/WhNdnAhoyxI/AAAAAAAADCg/VgVORrN31UAlIi4E8t7Y-1R2uWS2HSb3wCKgBGAs/s1600/logo_SFC.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://3.bp.blogspot.com/-nsicSroQUrw/WhNdnAhoyxI/AAAAAAAADCg/VgVORrN31UAlIi4E8t7Y-1R2uWS2HSb3wCKgBGAs/s1600/logo_SFC.png" /></a></div>I just wanted to let everyone know that I have a Kindle edition of the SFC models textbook available. However, the ebook edition is a "textbook" version of the book -- effectively the same thing as a PDF with fixed pages. Not all Kindle readers will support this format (particularly older ones). I believe that the Kindle store will not sell you the book if your reader does not support it, but I suggest caution. I just glanced at the book on my iPad, and it looked OK (which is unsurprising, since an iPad is set up to render PDF files).<br /><br /><b>UPDATE</b>: The formatting looks OK.<br /><br /><br /><a name='more'></a>Since this edition is a direct representation of the paperback edition, it has features that might be unusual for those who have gotten used to reflowing ebooks, such as blank pages. (Chapters always start on the right-hand page in printed books.)<br /><br />I am unable to use fixed-format pages for my non-Amazon ebook distributor, and so I cannot use the same trick to distribute outside of Amazon. 
I am still attempting to find a solution for that.Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com0tag:blogger.com,1999:blog-5908830827135060852.post-82130232295794288612017-11-19T09:00:00.000-05:002017-11-19T10:47:09.288-05:00On Using NAIRU To Analyse A Job Guarantee<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/--fp159U8mbE/WhCuBqmUlCI/AAAAAAAADB0/GWknm92U-RkEB0BBaxw_svbOiD4PIkEsACKgBGAs/s1600/logo_MMT.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://4.bp.blogspot.com/--fp159U8mbE/WhCuBqmUlCI/AAAAAAAADB0/GWknm92U-RkEB0BBaxw_svbOiD4PIkEsACKgBGAs/s1600/logo_MMT.png" /></a></div>Professor Simon Wren-Lewis wrote "<a href="https://mainlymacro.blogspot.ca/2017/11/some-thoughts-about-job-guarantee.html" target="_blank">Some thoughts about the Job Guarantee</a>," in which he makes an attempt to analyse a Job Guarantee using the NAIRU concept. The analysis suffers from the well-known defects of NAIRU.<br /><br /><a name='more'></a><br /><br />In the article, he argues that a Job Guarantee implementation would cause a one-time upward shock to wages. He argues that this is not "acknowledged" by MMT authors, even though it appears this effect is common knowledge to anyone who has read the MMT literature. As a result, that is a curious argument. However, he then flips to an analysis where the Job Guarantee has no effect on inflation.<br /><br />However, the situation gets more interesting. He writes (note JG = Job Guarantee, IU = involuntary unemployment, and NAIRU = NAIRU):<br /><blockquote class="tr_bq">Suppose we start with an economy with stable inflation, implying unemployment was at the NAIRU, and introduce JG. As this puts upward pressure on inflation because the costs of losing a job are reduced. 
the only way of keeping inflation stable is to deflate demand, which of course would reduce output, labour demand and therefore increase the number of people on JG jobs. So if we were to compare two economies where inflation was stable, one with IU and one with JG, the number of JG jobs would exceed IU in the other economy.</blockquote>This passage would be helped by a clearer mathematical exposition; as it stands, I have to make some guesses as to what he means.<br /><br />My interpretation of his argument is as follows.<br /><br /><ul><li>Let the percentage of the people in the workforce not employed in "regular employment" be denoted as NW. </li><li>In the current institutional structure, the number NW is equal to the unemployment rate, or involuntary unemployment (IU).</li><li>In the current institutional structure, there is a number NAIRU for which inflation will not accelerate only if NW = NAIRU. (It accelerates upwards if NW < NAIRU, and downwards if NW > NAIRU.)</li><li>If we change institutional structure, NAIRU is unchanged.</li><li>If we implement a Job Guarantee, the term NW = (percentage of the workforce in the Job Guarantee) + (percentage of the workforce who refuse to take a Job Guarantee job). (This latter term is presumed to be very small.)</li><li>Because a Job Guarantee job is assumed to be better than unemployment, there is less disciplining effect. Therefore, the new point of inflation stability is NAIRU + <i>k</i>, where <i>k</i> > 0.</li></ul><div>Although this analysis is simple, it runs into obvious problems.</div><div><ul><li>NAIRU is obviously not a constant in the current institutional environment. Any attempt to specify NAIRU as a constant fails statistically.</li><li>We have no idea what determines NAIRU in the current environment. It is estimated by a statistical procedure that tells us what NAIRU would have to be if the gap (NW - NAIRU) explained inflation acceleration. 
This procedure is obviously non-falsifiable; any variable correlated to the business cycle would yield just as useful predictions when the same procedure is applied to it. Correlation does not imply causation, and all that.</li><li>If NAIRU were invariant to institutional structure, it would be the same for all countries. It obviously is not.</li><li>There is no way to estimate <i>k </i>in the absence of a working Job Guarantee. Even if it is positive, is it larger than the errors in the estimate of NAIRU?</li><li>No microfoundations. There is no explanation of why this model would work, other than some vague discussion about the cost of losing a job. </li><li>Incoherent model dynamics. The model implies that if the initial JG pool was too small, the economy would explode in a puff of hyperinflation. Imagine that the JG wage is fixed for all time (0% inflation policy). If NW < NAIRU + k, the model says that (wage) inflation will accelerate upwards. However, the gap between the private sector wage and the JG wage would be expanding, and there would be less incentive to enter the program. The result is that JG spending would decrease. How is a shrinking spending programme that buys a real input at a fixed price going to cause increasing inflation? And if we flip the sign, the result is even more ridiculous. If the JG pool gets too large, the private sector wage falls at an accelerating rate, and so we simultaneously have expanding JG spending, a shrinking labour force, and falling inflation. In other words, the predicted sign from the model appears backwards.</li><li>Critically dependent upon the specification of "potential." If we use potential GDP to estimate the output gap, it is unclear whether implementing a Job Guarantee affects potential private sector GDP. (The accounting for the output of the Job Guarantee programme is going to raise issues.)</li><li>Plausibility is terrible (related to microfoundations). 
In what sense does an excessive number of French-speaking unemployed workers in the Gaspé influence wage determination in (mainly English-speaking) Calgary? The usual state of affairs is that the bulk of the unemployed are in "low pressure" regions, and so it is unclear that we can relate aggregate unemployment to aggregate inflation.</li><li>It ignores existing labour market research. There is a large body of work discussing "hysteresis": long-term unemployed becoming viewed as "unemployable," and no longer mattering for "disciplining" wages. Re-attaching such workers to the job market (which is a major objective of the Job Guarantee) would effectively raise the size of the workforce. It is unclear what might happen to the measured unemployment rate (such people typically are dropped from the workforce in surveys in the current environment), but total private sector workers (and presumably output) would be higher. In other words, the postulated "k" effect is smaller than hysteresis-reduction gains. Since the objective of policy is to raise living standards (and not play games with questionable survey definitions), the Job Guarantee would be an unambiguously superior outcome.</li><li>Is a Job Guarantee job really better than unemployment benefits for many workers? For the upper half of the income distribution, the Job Guarantee wage is as irrelevant as the minimum wage. Meanwhile, given rampant wage inequality, the top half of the distribution rakes in a lot more than half of the total wage bill. In other words, how can the existence of the programme move that whole mass of salaries?</li><li>The analysis is static. If the Job Guarantee wage is high enough to pull employees away from existing jobs, those employers have no choice but to either raise wages or invest to increase productivity. 
With investment higher, potential output will be higher as well, and the estimated NAIRU could end up lower.</li><li>Completely ignores other variables, such as the gap between average private sector wages and the Job Guarantee wage. Can the same level of NW be as inflationary (deflationary) if the Job Guarantee wage is 20% of average private wages instead of 50%?</li></ul></div><br />As an aside, the whole premise of DSGE macro was to avoid analysis like this. We have an attempt to analyse a new policy framework based on a questionable fitting of historical data. Instead, DSGE macro was supposed to replace that with an in-depth analysis of the decision-making process of economic agents, so that behaviour would be invariant to institutional change.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com13tag:blogger.com,1999:blog-5908830827135060852.post-15055194591438322632017-11-15T22:37:00.000-05:002017-11-15T22:37:09.220-05:00How Not To Defend DSGE Macro<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-096oOMkizWY/Wgz8GxlYjqI/AAAAAAAADBk/h7lw9NSesMcjhTGujrLXNQTxd50DaKfBwCKgBGAs/s1600/logo_DSGE.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://4.bp.blogspot.com/-096oOMkizWY/Wgz8GxlYjqI/AAAAAAAADBk/h7lw9NSesMcjhTGujrLXNQTxd50DaKfBwCKgBGAs/s1600/logo_DSGE.png" /></a></div>I used to be quite animated in my dislike for Dynamic Stochastic General Equilibrium (DSGE) models, but my attitude has shifted: I have declared victory and gone home. At present, I only have a somewhat disinterested academic interest in one topic: are DSGE models actually mathematical models? Whether or not that is the case does not really affect the question of whether they are useful. 
The recent attempt at a defence by <a href="http://faculty.wcas.northwestern.edu/~lchrist/research/JEP_2017/DSGE_final.pdf" target="_blank">Christiano, Eichenbaum, and Trabandt</a> confirmed my attitude; it was such a spectacular intellectual failure that it is not worth taking seriously.<br /><br /><a name='more'></a><a href="http://noahpinionblog.blogspot.ca/2017/11/the-cackling-cartoon-villain-defense-of.html" target="_blank">Noah Smith has done an impressive smackdown on the paper on his website here</a>. Noah Smith is probably one of the most sympathetic backers of mainstream macro outside of academia, and even he thought this attempt at defending DSGE macro was ridiculous (the "cackling cartoon villain" defense of DSGE).<br /><br />If we rephrase the arguments of Christiano, Eichenbaum, and Trabandt, they are perhaps reasonable. One could argue that we need to use a modelling strategy similar to the one used by DSGE modellers to account for:<br /><br /><ol><li>a shifting policy environment;</li><li>the macro relationships between all variables.</li></ol><br />The first point is a lot more straightforward than economic arguments would suggest. For example, we cannot blindly rely on some regression model that is fit to data in one economic regime, and then extrapolate those results to a new institutional environment. Although this was allegedly the big insight of the forefathers of DSGE macro, it is an obvious point that I believe most Keynesian economists were aware of. Keynes was not exactly a cheerleader for the early econometric work.<br /><br />The second point -- that we need to take into account all macro relationships, and not reason from a partial analysis -- is the defining characteristic of macroeconomics. 
Keynes' discussion of effects such as the fallacy of composition is what drove the creation of macroeconomics as a sub-discipline in the first place.<br /><br />Although those are reasonable points, it does not mean that DSGE macro actually fulfils those objectives. (One could easily raise doubts about other methodologies, including my preferred stock-flow consistent models.) I cannot hope to settle debates on those lines here.<br /><br />However, the paper by Christiano, Eichenbaum, and Trabandt went completely off the rails when compared to that reasonable line of argument. In the abstract, we read:<blockquote class="tr_bq">One strategy is to perform experiments on actual economies. Unfortunately, this strategy is not available to social scientists. The <b>only </b>[emphasis in original - BR] place that we can do experiments is in dynamic stochastic general equilibrium (DSGE) models.</blockquote>They literally argue that no other economic modelling methodology even exists.<br /><br />I would note that other DSGE researchers were somewhat horrified by the abstract as well. However, it finally explicitly raises a point that was always implicit in the DSGE literature: do they acknowledge the existence of other research programmes? In practice, the answer was that they did not, which is exactly the stance the abstract takes.<br /><br />As an outsider, one can only revise down one's opinion of the academic standards of mainstream economists. We have an intellectual debate in which one side refuses to admit the existence of the debate in the first place. 
I am not an expert on the scientific method, but it seems to me that is not how it is supposed to work.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com3tag:blogger.com,1999:blog-5908830827135060852.post-48825758278576226872017-11-14T11:02:00.000-05:002017-11-14T12:20:23.180-05:00"An Introduction to SFC Models Using Python" Paperback Edition Published<iframe align="left" frameborder="0" marginheight="0" marginwidth="0" scrolling="no" src="//ws-na.amazon-adsystem.com/widgets/q?ServiceVersion=20070822&OneJS=1&Operation=GetAdHtml&MarketPlace=US&source=ac&ref=qf_sp_asin_til&ad_type=product_link&tracking_id=bondecon09-20&marketplace=amazon&region=US&placement=0994748094&asins=0994748094&linkId=60ebb45000e10d7423659fdcc8bd4255&show_border=true&link_opens_in_new_window=true&price_color=333333&title_color=0066c0&bg_color=ffffff" style="height: 240px; width: 140px;"> </iframe>My latest book: <i>An Introduction to SFC Models Using Python </i>is now available as a paperback edition. It is currently available at (some) Amazon book stores, and will show up at other online booksellers (such as Barnes and Noble) over the coming days. Many bookstores would be able to find the book via a special order.<br /><h2><b>Book Description</b></h2>Stock-Flow Consistent (SFC) models are a preferred way to present economic models in the post-Keynesian tradition. This book gives an overview of the sfc_models package, which implements SFC models in Python. The approach is novel, in that the user only specifies the high-level parameters of the economic model, and the framework generates and solves the implied equations. The framework is open source, and is aimed at both researchers and those with less experience with economic models. This book explains to researchers how to extend the sfc_models framework to implement advanced models. 
For those who are new to SFC models, the book explains some of the basic principles behind these models, and it is possible for the reader to run example code (which is packaged with the software online) to examine the model output.<br /><br /><b></b><br /><a name='more'></a><b>Paperback ISBN:</b> 978-0-9947480-9-6<br />The book is 128 pages (excluding end matter).<br /><br />The book contains some figures, and sample code. The code is available online, as the <i>sfc_models </i>Python package.<br /><br />Recommended retail price: $11.95 (USD), GBP 8.49, EUR 10.45 (excluding VAT).<br /><h2>Intended Audience</h2>This book is quite technical, as it mainly describes the Python programming framework. There are some sections describing how SFC models operate, which largely appeared in draft form already on my website. It is mainly aimed at economists who would want to start working with the <i>sfc_models</i> framework to build stock-flow consistent models, or programmers who are interested in learning about SFC models. More casual readers will hopefully find sections of interest, although some of the material would be too technical. It is possible to run the example programs without much programming experience, but it would require installing the software. (Since the Python installation process varies depending upon the operating system, the reader is largely directed to online resources for the exact steps.)<br /><h2>Ebook Editions Delayed</h2>Unfortunately, the ebook editions have some formatting issues that I have been unable to fix right now. I thought they were ready (which is why I pulled the trigger on the paperback edition), but I discovered a fairly small (but vital) problem in the formatting of the Python code in the ebook version. I hope that I will fix the ebook edition by the weekend, but I am not making any promises. Although it is more expensive, the paperback edition has a fixed page layout which handles the code formatting relatively faithfully. 
For readers who are new to Python, the paperback edition is probably the preferred edition, since it gives a better idea of the layout of Python code. (Python is an unusual computer language in that the white space -- code indentation -- matters.) Experienced Python programmers would spot the formatting issues, and know not to format their own code that way.<br /><br />If I cannot resolve the issue easily, I may be forced to sell the ebook as a fixed page layout on Kindle, a format that may not be supported by older readers. For other retailers, I would need to use a PDF version (or fixed-format EPUB), which is generally not supported.<br /><br />I will put out a longer description of the book once I get a better handle on when the ebook editions will be ready for sale. Since the paperback edition is already for sale, I just wanted to put this announcement up.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com2tag:blogger.com,1999:blog-5908830827135060852.post-27447166311219119752017-11-11T15:36:00.000-05:002017-11-11T15:36:12.800-05:00The Difficulty In Defining The Real Yield<div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-A0IUC_PGHQw/WgcmJ5Z9MRI/AAAAAAAADBI/ivOaM9E9YH0RqTF5l4nQW4HumMcrs3IEgCKgBGAs/s1600/logo_linkers.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://3.bp.blogspot.com/-A0IUC_PGHQw/WgcmJ5Z9MRI/AAAAAAAADBI/ivOaM9E9YH0RqTF5l4nQW4HumMcrs3IEgCKgBGAs/s1600/logo_linkers.png" /></a></div>The concept of the <i>real interest rate</i> is relatively old, and often associated with the work of Irving Fisher (the <i>Fisher Equation</i>). We decompose observed nominal interest rates into two components: inflation compensation, and a real rate of interest. 
I prefer to avoid using the term real yield, as the standard definitions cannot be easily related to real-world data, and there are at least five definitions that have been used in practice. In the context of index-linked bonds ("linkers") using the Canadian pricing model, I prefer to call the quoted yield the "indexed yield," and not the "real yield" (which appears to be the market convention).<br /><br /><a name='more'></a><i>(This article is an unedited draft of a section that I hope will make it into a book on inflation-linked bonds. Since I am in the process of trying to finish off the formatting of my book on Python and SFC models, the writing process for this article was rushed.)</i><br /><h2>Possible Definitions</h2>If we denote the nominal interest rate as <i>i</i>, inflation compensation as <i>p</i>, and the real yield as <i>r </i>(putting aside the definitions of those terms until later), the preferred mathematical relationship between these variables is given by:<br /><div style="text-align: center;"><i>1 + i = (1 + r)(1 + p),</i></div><div style="text-align: left;">(the Fisher Equation).</div><div style="text-align: left;"><br /></div><div style="text-align: left;">If the inflation compensation and real yield variables are much smaller than 1, then we end up with the approximate relationship:</div><div style="text-align: center;"><i>i = r + p.</i></div><div style="text-align: left;"><br /></div>(Additionally, one might use this additive definition in some cases, such as yield curve modelling, where we need to have a simpler decomposition in order to fit variables.)<br /><br />In my writing, I will often make statements like "subtract inflation from the nominal rate to get the real rate." This is a shorthand for the slightly more complex relationship implied by the Fisher equation. 
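As a numerical sketch of the gap between the exact decomposition and the additive shorthand (the 5% and 2% figures below are purely illustrative assumptions):

```python
def real_yield_exact(i, p):
    """Back the real yield out of the Fisher equation: 1 + i = (1 + r)(1 + p)."""
    return (1.0 + i) / (1.0 + p) - 1.0

def real_yield_approx(i, p):
    """The additive shorthand: r = i - p."""
    return i - p

# Illustrative inputs (assumed for this example): a 5% nominal yield
# and 2% inflation compensation.
i, p = 0.05, 0.02
r_exact = real_yield_exact(i, p)    # ~0.029412, i.e. about 2.94%
r_approx = real_yield_approx(i, p)  # 0.03 exactly
# The discrepancy is approximately r*p, so the shorthand is harmless at low
# inflation rates and degrades as inflation rises.
print(r_exact, r_approx)
```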
If I write out the equations, the relationship I am using in that context is given exactly.<br /><br />The problem with this definition is that there is no clear way to define <i>inflation compensation</i>, and so there are (at least) five distinct definitions in use that match up with the common usage of real yield. (Since nominal interest rates are very well defined in legal terms, once we come up with a fixed definition of inflation compensation, the real yield definition is fully determined. One could theoretically define the real yield and inflation compensation first, and then the nominal interest rate would be determined by those two definitions. However, it would not necessarily match up to the legal definition of interest rates, and so we would need to translate quoted nominal yields to match this theoretical definition. This is a step that most analysts would want to avoid.)<br /><br />Five competing definitions for real yield/inflation compensation are given next. They are definitions that apply to a particular debt instrument (with a fixed maturity) at a particular date. Note that some of the terms are ambiguous, as will be discussed later. For the first three definitions, the instrument in question is a conventional bond (with a quoted nominal yield); the final two definitions apply only to index-linked bonds.<br /><br /><ol><li>Inflation compensation is equal to the latest available value of the inflation rate, and we then back out the real yield.</li><li>Inflation compensation is equal to the annualised inflation rate over the life of the debt instrument in question; we then back out the real yield. <i>Note that we can only arrive at this measure after the maturity of the debt instrument, and so we cannot use it in real-time.
However, it could be used in historical analysis.</i></li><li>Inflation compensation is equal to the expected value of inflation at the valuation date; we then back out the real yield.</li><li>The instrument in question is a Canadian model inflation-linked bond, and the real yield is the quoted yield on the bond. <i>In this definition, there is no strict definition of the nominal interest rate, nor of inflation compensation.</i></li><li>If the debt instrument in question follows the old model of U.K. inflation-linked gilts, the real and nominal yields are the result of a fairly complicated calculation procedure. This methodology requires the analyst to set an expected rate of future inflation, and then the implied internal rate of return in nominal terms is calculated. Since these bonds are now a curiosity, I will not attempt to discuss this in detail.</li></ol><div>(There is an additional complication regarding nominal interest rate quote conventions, which I am avoiding. In order to fit into the equation, all nominal rates would need to be converted to a clean mathematical convention, and not the various silly quote conventions that are used by market participants.)</div><div><br /></div><div>Historically, inflation-linked bond yields were not available, and economists mainly focused on the first and third definitions (where inflation compensation is either equal to spot historical inflation, or spot expected inflation). For a variety of reasons, the two usages ended up being commingled, and so unless the economist in question specified which definition they were using, the usage was ambiguous. However, even if they specified whether they were using spot inflation or expected inflation, those definitions themselves are ambiguous. I will discuss these in turn.</div><h2>Historical Inflation Is Ambiguous</h2><div>For someone working with a time series database and analytical software, specifying the real yield using historical data seems like a straightforward operation.
You load the time series of the bond yield (as a monthly series), get the time series of inflation, and subtract the inflation rate from the bond yield to get the real rate. (Or use the slightly more complex Fisher Equation to back out the real rate.)</div><div><br /></div><div>Although this might satisfy market economists, it is not good enough for investors, for whom every basis point counts.</div><div><br /></div><div>I will take as an example the data that are publicly available as of the time of writing (Saturday, November 11, 2017) for the United States. The interest rate data come from the Federal Reserve H.15 release. As a result of the Veterans Day holiday, the last H.15 release was on the Thursday, covering the data for Wednesday, November 8.</div><div><ul><li>The effective Federal funds rate (an overnight bank rate) was 1.16%.</li><li>The (fitted) constant maturity 10-year Treasury rate was 2.35%.</li><li>The last Consumer Price Index (CPI) data was released by the Bureau of Labor Statistics on October 13, and corresponds to the month of September 2017.</li></ul><div>Let us step back to the situation on November 8. If you are a market participant, you would have had the yield data available in real time (for traded instruments, not the fitted/averaged rates produced by the Federal Reserve). </div></div><div><br /></div><div>We then want to know: what is the real effective Federal funds rate on that day? A Fed funds agreement is a lending agreement on an overnight basis, that is, from November 8 to November 9.</div><div><br /></div><div>We can immediately see problems with the definition using historical inflation. Our last data point is for the month of September; we will not get November CPI data until several weeks have passed. Furthermore, the CPI data for September is a level for September, not a rate of change. The usual technique is to look at the annual rate of change in the index; in this case, from September 2016 to September 2017.
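(To make the timing mismatch concrete, here is a Python sketch of the two standard calculations; the CPI index levels below are made-up placeholders, not actual BLS data.)

```python
# Hypothetical CPI index levels (illustrative only, not BLS data).
cpi = {"2016-09": 241.4, "2017-08": 245.5, "2017-09": 246.8}

# Annual rate of change: September 2016 to September 2017.
annual = cpi["2017-09"] / cpi["2016-09"] - 1.0

# One-month change, annualised by compounding. This is closer to the
# valuation date, but monthly changes are extremely erratic.
monthly_annualised = (cpi["2017-09"] / cpi["2017-08"]) ** 12 - 1.0

print(f"annual change:      {annual:.2%}")
print(f"1-month annualised: {monthly_annualised:.2%}")
```

With these placeholder index levels, the annual measure is near 2%, while the annualised one-month measure is several percentage points higher -- the sort of gap that makes the monthly measure nearly useless for deflating a yield.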
In other words, we are deflating the returns on a lending operation between November 8, 2017 and November 9, 2017 by the percentage change in the CPI index between September 2016 and September 2017. What sort of information is supposed to be conveyed by that comparison?</div><div><br /></div><div>We can attempt to make the inflation data more comparable in time by looking at a shorter-term rate of change. The closest we can get to the valuation date is to look at the 1-month annualised inflation rate between August 2017 and September 2017. However, that operation runs into the problem that monthly rates of change are extremely erratic, especially if we use the non-seasonally adjusted index data. (The use of the seasonally adjusted CPI poses other challenges; it is generally not used in legal contracts -- like in inflation-linked bond calculations -- for these reasons.)</div><div><br /></div><div>The situation for the 10-year bond is even more challenging. The hypothetical constant maturity 10-year bond has cash flows every six months from November 8, 2017 to November 8, 2027 (with holiday and weekend dates moved according to predetermined rules). What relevance does the change of the CPI index from September 2016 to September 2017 have to those cash flows?</div><h2>Expected Inflation: Also a Mess</h2><div>Many economists would have looked at the previous discussion of the definition using historical inflation and tut-tutted. The correct answer -- allegedly -- is to use expected inflation to stand in for inflation compensation in the Fisher equation.</div><div><br /></div><div>Unfortunately, this alternative is just as problematic as the use of historical inflation.</div><div><br /></div><div>The obvious problem is that we have no idea whose expectations we should use. (I am using expected inflation here as a synonym for <i>forecast</i> inflation, and not the mathematical expectations operator.
The problem with the use of the mathematical definition of expectations is that it is defined in terms of a probability distribution for future inflation outcomes, and so we still have the problem of deciding whose probability distribution to use.)</div><div><br /></div><div>If you wish, you can substitute your own forecast for inflation into the expression. Although I think it is a silly way to approach fixed income investing, using your own forecasts is an entirely reasonable, internally coherent methodology. The problem with the approach is straightforward: the results are only useful to yourself; different individuals/entities will have different inflation forecasts, and so different implied real rates. Furthermore, there is no way of doing historical analysis, unless you have written down your inflation forecasts every day -- and what happens for dates before you started writing down those forecasts?</div><div><br /></div><div>An alternative is to use surveys of expected inflation. This eliminates much of the subjectivity, although it still raises the question of how to analyse data from before the inception of the survey. However, it creates the problem that the people being surveyed may have entirely useless opinions about the path of inflation. Let us take an extreme example: imagine a survey of inflation expectations taken from a sample of kindergarten students. Under what circumstances would we expect such data to provide useful information?</div><div><br /></div><div>However, even if we accept the surveys as being slightly useful, we still have issues with timing. Firstly, we generally do not have inflation survey data available on the same day as the survey was taken. Therefore, we would be comparing the yields on November 8 with a survey taken at some earlier date.
Secondly, in order for the data to be useful, we need to match up the maturity of the debt instrument with the period covered by the survey.</div><div><br /></div><div>Even if we can get a survey that matches the monthly frequency of the CPI data, we have no way of determining the inflation rate between November 8 and November 9 using the monthly CPI. We need a daily frequency price index in order to compare properly to debt instruments. (In hyperinflations, we get this level of granularity by inferring the price level from the exchange value of the currency.) There are no surveys that cover inflation at a daily frequency.</div><div><br /></div><div>We end up in a curious situation. According to conventional economic models, we need to use inflation expectations to determine the real yield. However, in practice, there are no data available that meet the requirements. We have economic theory that literally cannot be applied to real world data.</div><h2>Quoted Inflation-Linked Bond Yields Have a Hidden Twist</h2><div>Using the quoted yields on Canadian-style inflation-linked bonds appears to be the only rigorous way to approach the definition. This makes sense: we need to be able to determine the invoice price on a bond trade to the penny, based on an agreed yield quote. Like nominal interest rates, the contract law ensures clarity of definition. </div><div><br /></div><div>There is a hidden trick to the story: the reason why it works is that we have dropped the other two components of the Fisher Equation. <i>There is no nominal yield quoted on the instrument, and so there is no inflation rate associated with it. </i>If we want to go from the quoted yields on inflation-linked bonds to nominal interest rates and/or inflation compensation, we need to find a nominal interest rate to compare against the yield on the inflation-linked bond.
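As a minimal Python sketch of what such a comparison looks like (the yield levels and function names here are my own, purely for illustration):

```python
def simple_breakeven(nominal_yield, indexed_yield):
    """Quoted-yield spread: the additive approximation to the
    inflation rate at which the two bonds break even."""
    return nominal_yield - indexed_yield


def fisher_breakeven(nominal_yield, indexed_yield):
    """Breakeven backed out from the multiplicative Fisher
    relationship (1 + i) = (1 + r)(1 + p)."""
    return (1.0 + nominal_yield) / (1.0 + indexed_yield) - 1.0


# Hypothetical quotes: a 2.35% nominal bond versus a 0.45% indexed yield.
print(simple_breakeven(0.0235, 0.0045))  # roughly 0.0190 (1.90%)
print(fisher_breakeven(0.0235, 0.0045))  # slightly below 1.90%
```

Neither of these is the "true economic breakeven" -- that requires matching the actual cash flows of the two bonds -- but the simple spread is what is usually quoted.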
This is normally done via breakeven inflation analysis, but there are multiple potential definitions of breakeven inflation.</div><div><br /></div><div>(The old U.K. style inflation-linked bonds did tie together inflation compensation, real yields, and nominal yields. However, they did so in a fairly horrific fashion, using an inflation assumption that causes other problems.)</div><div><br /></div><div><i>[My book will be covering the gory details of rigorous definitions of inflation breakevens. A draft may be published later.]</i></div><h2>Concluding Remarks</h2><div>The concept of a real yield, although ubiquitous in economics, is extremely poorly defined. Furthermore, there is considerable mythology regarding the concept. The safest way to approach index-linked bonds is to drop the use of the term completely. The best replacement I am aware of is indexed yields. One could rightfully complain that this term is problematic: the yield is not the subject of some indexation procedure. Instead, it is a contraction of "quoted yield on index-linked bonds."</div><div><br /></div><div>(As an aside, my awareness of the linguistic minefields associated with the term real yield probably came from some observation by my boss at the time, Gerard MacDonell.)</div><div><br /></div><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com0tag:blogger.com,1999:blog-5908830827135060852.post-30732112299129633432017-11-11T11:31:00.000-05:002017-11-11T11:31:22.602-05:00Python SFC Models Book Finished, But...The good news is that I received the printed proof of my book earlier than I expected, and my first pass examination indicated that it is good to go.<br /><br />The bad news is that I was hoping to figure out the technical problems with the ebook editions, and that work is only half finished.
I will try some fixes on my own this weekend, but I may end up having to deal with technical support in order to get the ebook out.Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com0tag:blogger.com,1999:blog-5908830827135060852.post-48178098823699945312017-11-08T09:00:00.000-05:002017-11-08T09:00:13.166-05:00Defining Market Efficiency Properly<div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-WpLFITFQmGo/WgJi3ixL1wI/AAAAAAAADA0/6B9pAHsLsk87N5fOKXcRyMkAminEZ6tSACKgBGAs/s1600/logo_bond_market.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://3.bp.blogspot.com/-WpLFITFQmGo/WgJi3ixL1wI/AAAAAAAADA0/6B9pAHsLsk87N5fOKXcRyMkAminEZ6tSACKgBGAs/s1600/logo_bond_market.png" /></a></div>The concept of market efficiency has attracted a considerable amount of debate over the decades. The issue is that the definition is problematic; efficiency is more an attribute of the investors in the market, rather than the market itself. If we turn the focus to the role of investors, most of the mysteries associated with the concept evaporate.<br /><br /><a name='more'></a><i>(Note: This article is in a preliminary state. I started thinking about how to write up the role of market efficiency in the context of breakeven inflation rates. I realised that I was unhappy with the existing definitions used, and this article represents my initial thoughts on the matter. It is quite possible that I will again revise my thinking.)</i><br /><h2>Efficiency is not an Absolute Concept</h2>The idea of efficiency has considerable moral overtones in economics: saying something is efficient is the same as saying that something is good. 
I have an engineering background -- where efficiency is a well-defined concept -- and I was always skeptical about those normative associations with the term.<br /><br />One thing to keep in mind is that efficiency is not an absolute concept; it is only defined relative to some context. For example, we could have a highly efficient natural gas furnace, where the efficiency is measured as the percentage of the heat energy that is not lost for the purposes of heating the home. Meanwhile, we can have a step-down transformer that is highly efficient in converting high voltage from the power lines to the household-rated voltage level, where the efficiency is measured in terms of how little power is dissipated in the transformer. However, the furnace is not going to be a very efficient transformer, or vice-versa.<br /><br />We need to think about what we are trying to do. In the case of investments, it is easier to think of inefficiency, rather than efficiency. Very simply, an inefficient investor is an investor that fails to take advantage of an investing strategy that offers abnormally high returns over relatively short investing horizons, and those returns are reliably realised. (Please note that the previous definition deliberately used vague wording; I expand it later.)<br /><br />In other words, the investor is not taking advantage of strategies that will make them a lot of money. Knowing what we know about investor psychology, there are not a lot of institutional investors in that boat.<br /><br />What happens if all investors are “efficient” (actually, not inefficient) by this definition? Well, it implies that there are no get-rich-quick schemes lying around for investors to take advantage of. This is equivalent to the common interpretation of market efficiency by market professionals: it is difficult to meaningfully outperform.<br /><br />Conversely, if we say “the market” is efficient, what exactly is the objective “the market” is attempting to achieve?
Without such an objective, we can hardly say that it is efficiently meeting goals. We do not have this problem when speaking about investors; we politely assume that they are attempting to maximise the returns on their assets. (This would not apply to Ponzi schemes, of course.)<br /><br />The advantage of this definition is that we are explicitly taking into account the institutional structure of investors, and this explains a lot of the supposed mysteries about market efficiency.<br /><h2>The Definition Depends on the Context</h2>The terminology I used was deliberately vague, as it depends on the context. “Short-term” could mean one day for some investors (or even a millisecond), but I would use one to three months as the horizon I am thinking of. “Abnormally high returns” in the current developed market context would be 1% per month, but it would be much higher back in the day when Treasury bill yields were 12%.<br /><br />Furthermore, the definition of “investors” I am using has some fine print attached to it. I exclude entities that are running a business for which the actual security investments are just a sideline.<br /><ul><li>Market makers at investment dealers should expect to generate a high return on equity (or else expect to be looking for new employment). However, those market makers are just the endpoint of an infrastructure that was developed to give them a flow of orders and information. The business of the investing firm is to make money off of the flows; in an ideal world, positions are flat at the end of the day.</li><li>Big commodity traders own infrastructure, and broker flows from producers to consumers. They should be able to generate a higher return on equity than punters gambling on commodity futures.</li><li>Warren Buffett is not just a stock picker; he is the head of a business that buys and sells stakes in firms where he has an effect on strategy.
Meanwhile, his insurance business has the famous float that is used to finance positions.</li></ul>The term “reliable” used in the definition implies that the probability of outperformance is high, and the strategy is not expected to blow sky high if there are problems. Examples of “unreliable” strategies include:<br /><ul><li>Risk assets like high-yield bonds, equities, or even Treasury bonds may be expected to outperform cash on a multi-decade horizon, but there is a high chance of losses in the near term.</li><li>It is possible to earn a slightly higher return investing in risky commercial paper than Treasury bills. However, in order to generate outsized returns on equity (10% per annum, say), considerable leverage would be needed. You are one financial crisis away from having that strategy blow up.</li><li>Selling short-dated options tends to make money in the long run, but it can also blow up in a financial crisis.</li></ul>Another nuance I have in mind (although not obvious from my current phrasing) is that this concept is tied to investing strategies, and not the market pricing at all times. It was possible to find a great many great trades in the depths of the financial crisis, but such anomalies were not repeatable. (Furthermore, almost no investors were able to take advantage of them, which underlines that the investors drive the concept of efficiency, not markets in the abstract).<br /><br />Also, by focusing on strategies, we can end up in a situation where the same market appears to be both efficient and inefficient. Based on my reading of market lore, that was the situation in the 1980s in fixed income. Old school investors attempted to guess the direction of interest rates, with not particularly impressive results (that is, the market appears efficient). However, relative value investors made a fortune taking advantage of the mispricings generated by old school investors (that is, the markets were inefficient). 
Therefore, whether you think a market is “efficiently priced” depends on whether you have a trading strategy that can exploit mispricings.<br /><h2>Does Not Imply Forecasting Perfection</h2>It is rather obvious that an investor who can accurately predict short-term market movements reliably would end up being extremely rich. Since we do not see a lot of evidence of such individuals in the real world, we can probably assume that having perfect forecasting abilities is not an option as a strategy for investors. Therefore, one should not expect that forward pricing should predict actual outturns; all we can hope is that the forwards should not be so far off reality that we can reliably make money off them consistently in the short-term.<br /><h2>Concluding Remarks</h2>By shifting the focus to investors, rather than some abstract concept of the market, we have a much more concrete understanding of what market efficiency really implies. As I turn to writing about this in my breakeven inflation analysis book, I may return to this topic.<br /><div><br /></div><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com3tag:blogger.com,1999:blog-5908830827135060852.post-75236132077690755972017-11-05T09:00:00.000-05:002017-11-05T09:00:01.424-05:00Initial Comments On Zero Rate Policy And Inflation Stability<div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-YUNZola8u_s/Wf3f_lwgHPI/AAAAAAAADAU/jf2ktH_HgKcXD0TSYbFXmORwjqDtyUDdwCKgBGAs/s1600/logo_MMT.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://2.bp.blogspot.com/-YUNZola8u_s/Wf3f_lwgHPI/AAAAAAAADAU/jf2ktH_HgKcXD0TSYbFXmORwjqDtyUDdwCKgBGAs/s1600/logo_MMT.png" /></a></div>This article represents my initial comments on the question of the stability implications of locking interest rates at zero.
Martin Watts, an Australian academic, had an interesting presentation at the first Modern Monetary Theory (MMT) conference (<a href="http://www.mmtconference.org/conference-video/" target="_blank">link to videos of presentations</a>). Although MMT fits within a broad-tent definition of "post-Keynesian" economics, there are still sharp debates with other post-Keynesians. One topic of debate is the effect of permanently locking the policy interest rate at zero, which is a policy advocated by many MMT economists. In my view, this is a debate that is best approached by using stock-flow consistent (SFC) models.<br /><br /><a name='more'></a>One published paper that discusses this topic was written by Louis-Philippe Rochon and Mark Setterfield -- "A Kaleckian model of growth and distribution with conflict-inflation and Post Keynesian nominal interest rate rules."* This article was triggered by my initial reading of the paper; as the reader will discover, I have strong views on how this topic should be approached. I want to outline my thinking, before attempting to explain their methodology.<br /><br />I will summarise the debate as follows. What happens to the economy if the government vows that the policy rate will be locked at 0% for all time? With the loss of monetary policy (as conventionally understood), will it be possible for inflation to be stable?<br /><br />Although we have had experience with central banks keeping rates at low levels for long periods of time, everyone (such as JGB bears) only viewed this as a temporary state of affairs. Since we have limited historical experience of such a policy (such as the days of the Federal Reserve pegging interest rates during and just after World War II), we need to work with economic models to guess what the effects of the policy might be.<br /><h2>Two Channels from Interest Rate Policy to the Economy</h2>There are two ways in which the policy interest rate can affect the economy. 
(Note that private sector interest rates are set as a spread over the policy rate; I am ignoring the movements in that spread for reasons of simplicity here.)<br /><ol><li>Changing interest rates will affect the behaviour of economic agents (somehow).</li><li>Changing the policy interest rate has an effect on the interest paid by the government (and thus interest received by the "non-governmental sector" on governmental liabilities).</li></ol><div>When we look at these two effects, it is clear that we have a much better <i>a priori</i> knowledge of the second effect (under the assumption that the yield curve on governmental liabilities is driven by the policy rate, which is arguably true for the floating currency sovereigns). If we have an estimate of the trajectory of the yield curve as well as the governmental borrowing requirements, we can map out fairly accurately the government's future borrowing costs -- which is someone else's interest income. There are some unusual possibilities to keep in mind. For example, a deranged central bank could buy up most of the governmental debt outstanding, and pay interest on reserves. In such a case, the intra-governmental flows are different, but the net interest received by the non-government sector is the same (except that some of it now comes from the central bank instead of from Treasury bills).</div><div><br /></div><div>(Changing interest rates will also redistribute income within the private sector; this is more complicated to model, but in principle we can do it.)</div><div><br /></div><div>The first effect -- the effect on behaviour -- is obviously more difficult to judge. There are many different ways in which interest rates can affect behaviour.
Unfortunately, there is a tendency to ignore the income effects and focus solely upon the behavioural effects in economic theory (particularly in mainstream economic theory).</div><div><br /></div><div>The role of a properly constructed stock-flow consistent model is to highlight the relationship between behaviour and balance sheets. Both effects are taken into account.</div><h2>What Does Interest Income Tell Us?</h2><div>We can tell a very simple story about the relationship between interest rates and inflation using interest income.</div><div><br /></div><div>We firstly note that private sector entities face a budget constraint: how much they can spend is limited by their existing financial resources and incomes, as well as what they can borrow. (In dynamic stochastic general equilibrium models, it is possible to borrow against the net worth of your three-legged descendants four billion years in the future, but that is not a behaviour we see in the real world.) <i>Roughly speaking, if nominal prices rise, households need a rise in nominal income in order to purchase the same quantity of goods (under the assumption that borrowing capacity will not rise without a rising income).</i></div><div><i><br /></i></div><div>Therefore, in order to sustain inflation, the household sector needs steadily rising nominal incomes. For workers, this is provided by wage inflation. However, not all households earn wages. In particular, some households survive off of interest payments.</div><div><br /></div><div>As a result, a rising interest bill by the government would presumably help sustain an inflation.
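(A toy Python illustration of that income channel -- this is my own sketch, not a full SFC model: the government pays interest on a stock of bills, and the interest payments are themselves deficit-financed, adding to the stock.)

```python
def interest_income_path(bills, policy_rate, years):
    """Annual interest income received by holders of government bills,
    assuming the interest is deficit-financed (it adds to the stock of
    bills rather than being taxed back)."""
    incomes = []
    for _ in range(years):
        interest = bills * policy_rate
        incomes.append(interest)
        bills += interest
    return incomes


# At a permanently locked 0% rate, this income flow is zero forever;
# at a 5% policy rate, nominal interest income grows 5% per year,
# supporting rising nominal spending.
print(interest_income_path(1000.0, 0.00, 3))  # [0.0, 0.0, 0.0]
print(interest_income_path(1000.0, 0.05, 3))
```

In the toy example, the positive-rate path generates a steadily rising nominal income flow to bondholders, which is the mechanism at issue.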
For those who wish for a more formal argument, Godley and Lavoie discuss this in the textbook <i>Monetary Economics</i> in the context of Model PC (which is implemented in the <i>sfc_models </i>framework).</div><h2>Empirical Results</h2><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-HbJlN7OxeOE/Wf39eASSYQI/AAAAAAAADAk/oETfo3QmfoQDUQuftQh4G-Z4UeIu3IMhQCLcBGAs/s1600/c201711105_fedfunds_vs_inflation.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="500" data-original-width="600" src="https://3.bp.blogspot.com/-HbJlN7OxeOE/Wf39eASSYQI/AAAAAAAADAk/oETfo3QmfoQDUQuftQh4G-Z4UeIu3IMhQCLcBGAs/s1600/c201711105_fedfunds_vs_inflation.png" /></a></div><div><br /></div><div><br /></div><div>The figure above shows the historical relationship between the policy rate and inflation in the United States during the period 1965-1995.** In order to make my point <i>really clear, </i>I have added helpful arrows and text to highlight the big trend moves in these variables. <i>(I have used the United States data as it was the easiest for me to chart; we see similar trends over that period in most of the developed countries. Having stared at multi-national interest rate and inflation charts for years, I cannot think of obvious exceptions.)</i></div><div><br /></div><div>As can be seen, the data are consistent with the story suggested by income flows: raising the nominal rate of interest is associated with a rise in the inflation rate.</div><div><br /></div><div>The previous paragraph probably caused considerable distress among a great many people. Pretty much everyone has been indoctrinated by economists into believing the opposite is true (that is, you raise interest rates to suppress inflation). 
Believing that raising interest rates causes higher inflation is enough to get you fired from most central banks.</div><h2>What do we do?</h2><div>It seems clear that we need to be careful about such analysis. We need models that take into account balance sheet effects as well as behavioural effects of interest rates, so that we have an idea of what the trade-offs are.</div><div><br /></div><div>In particular, if we believe that institutional/psychological factors make negative inflation rates difficult to sustain (a belief that has empirical support), a permanent 0% nominal rate of interest has some inherently stabilising properties. Governmental liabilities will tend to have a negative real rate of interest, and drag down the income of bondholders. This helps suppress nominal income growth, and presumably inflation. The question then remains: can the presumed behavioural effects of low interest rates overcome this force? Even if we cannot answer the question from the standpoint of pure theory, we at least have a guidepost to gauge observed data.</div><div><br /></div><div>In other words, we have to work with a relatively complete stock-flow consistent model to look at this question.</div><h2>Concluding Remarks</h2><div>At the time of writing, the <i>sfc_models</i> framework does not directly support the ability to test the effect of changing interest rate rules. However, such an analysis may be one of my first extensions.</div><br /><b>Footnotes:</b><br /><br />* Louis-Philippe Rochon & Mark Setterfield (2012) <i>A Kaleckian model of growth and distribution with conflict-inflation and Post Keynesian nominal interest rate rules</i>, Journal of Post Keynesian Economics, 34:3, 497-520.<br /><br />** Pedants could debate whether we should describe the Fed Funds effective rate as "the policy rate" during the period in question (1965-1995). 
I believe that this terminology is operationally correct, even if the central bank's policy framework had cosmetic changes.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com6tag:blogger.com,1999:blog-5908830827135060852.post-30094992351759849312017-11-01T09:00:00.000-04:002017-11-01T11:21:11.160-04:00Inflation Breakeven Analysis Will Probably Be My Next Book<div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-8uycHjFdvLc/Wfd-ydUBdTI/AAAAAAAAC_0/JlWXrDe8oXw8o_T-YWI_HCsjaFbYNaH8QCKgBGAs/s1600/logo_linkers.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://3.bp.blogspot.com/-8uycHjFdvLc/Wfd-ydUBdTI/AAAAAAAAC_0/JlWXrDe8oXw8o_T-YWI_HCsjaFbYNaH8QCKgBGAs/s1600/logo_linkers.png" /></a></div>I am now in the final stages of formatting <i>An Introduction to SFC Models Using Python</i>. I will publish both the electronic and paperback editions simultaneously once I see that the printed proof is in good shape. (At the time of writing, the electronic book edition has some formatting issues that should be resolved by the time I receive the printed proof.) Although I have some other projects outstanding, I expect to turn to a report on index-linked bonds, with the working title <i>Inflation Breakeven Analysis</i>.<br /><br /><a name='more'></a>My earlier books were aimed at a somewhat general audience with an interest in economics, while these later ones have a more specialised audience. The Python book is aimed at readers with an interest in implementing stock-flow consistent models. The inflation breakeven analysis book is going to be aimed at financial market participants and economists with an interest in inflation-linked bonds (and the implied inflation breakeven rate).
(<a href="http://www.bondeconomics.com/2014/05/primer-what-is-breakeven-inflation.html" target="_blank">Link to primer on the breakeven inflation rate.</a>) Although the book will still mainly be a primer, it will not shy away from some of the technical issues (and equations) that arise.<br /><br />I hope to cover two audiences with the breakeven analysis book:<br /><ol><li>Financial market participants who are familiar with the bond market, but less familiar with index-linked bond mechanics.</li><li>Economists who want to understand the relationship between market participant inflation forecasts and inflation breakevens.</li></ol><div>I will be extending my SimplePricers Python package (<a href="https://github.com/brianr747/SimplePricers">https://github.com/brianr747/SimplePricers</a>); however, I will not be discussing Python coding in the book; the reader can figure out how to run the code if they wish. The SimplePricers package can illustrate the principles of the pricing involved, without wasting spectacular amounts of time worrying about archaic fixed income quote conventions. </div><br />Readers with questions or information about the inflation-linked market are welcome to contact me. I would be particularly interested in sample yield curve data that I could use in calculations, without having to pay data licensing fees (and without breaking any licensing agreements).<br /><h2>Projected Contents</h2><ul><li>What is breakeven inflation? Primer chapter which covers the main points (in case readers quit after one chapter).</li><li>Market overview: major developed markets.</li><li>Inflation-linked bond mechanics.</li><li>Inflation swaps. (Might be merged with the previous...)</li><li>Yield curve calculations.</li><li>What do we know about inflation? 
Covers known facts about the CPI, possibly debunking some "facts" pushed by various economic schools.</li><li>The ugly reality of the indexation calculations (seasonality, etc.).</li><li>Simple breakeven versus true economic breakeven.</li><li>How are index-linked bonds priced? Relative value analysis and factors.</li><li>Breakeven inflation versus market forecasts of inflation.</li><li>Investing strategies. (Since I am not lawyered up, not sure about this chapter.) Since this is a book that I aim to sell on a multi-year horizon, I would not waste time discussing the current state of the market. I could possibly discuss how my thinking evolved during my career.</li><li>Policy considerations. (Not sure about this part.)</li><li>Appendix: Bond pricing basics, and the SimplePricers package. </li></ul><div>Fans of economic theory would mainly be interested in my discussion of inflation forecasts versus breakeven inflation. The discussion of inflation is supposed to be non-controversial, although I may target some urban myths that have sprung up around "inflation" in my "What do we know about inflation?" chapter. I will not attempt to put in a discussion of what determines the rate of inflation; that would be a later report.</div><div><br /></div><div>The contents will probably show up as draft primer articles over the coming months. I have some existing primers, but I would probably need to beef them up.</div><br /><h2>What About sfc_models?</h2>I have to clean up some other projects (hi Alex!), and I think it would take considerable time to follow up a book on SFC models. I need to build various business cycle models, which will take time. 
I would rather write this breakeven inflation book -- which should be the easiest book for me to write -- and slowly build up a set of models that could be used in my next SFC models book.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com3tag:blogger.com,1999:blog-5908830827135060852.post-10739702182344524482017-10-29T14:39:00.000-04:002017-11-02T10:20:41.528-04:00Social Structure And The Determination Of Interest Rates<a href="https://www.amazon.com/gp/product/3319407562/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=3319407562&linkCode=as2&tag=bondecon09-20&linkId=52d525f7e0b6079db31c46a27a6fbcb4" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;" target="_blank"><img border="0" src="//ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&MarketPlace=US&ASIN=3319407562&ServiceVersion=20070822&ID=AsinImage&WS=1&Format=_SL160_&tag=bondecon09-20" /></a><img alt="" border="0" height="1" src="//ir-na.amazon-adsystem.com/e/ir?t=bondecon09-20&l=am2&o=1&a=3319407562" style="border: none !important; margin: 0px !important;" width="1" /><br />In <i>The Reformation in Economics</i>, Philip Pilkington argues that societal structure determines the power of creditors and therefore interest rates. He then attacks mainstream financial and economic theories about interest rate formation. Although I agree that institutions matter for the determination of the power of creditors, I see mainstream theories of interest rate formation as adequate within the current institutional structure of developed countries. (<a href="http://www.bondeconomics.com/2017/07/book-review-reformation-in-economics.html" target="_blank">Link to my review of <i>The Reformation in Economics</i>.</a>)<br /><br /><a name='more'></a><h2>My Summary of Pilkington's Arguments</h2>This article is based on some of the contents of Chapter 9 of the book -- Finance and Investment. 
As an initial disclaimer, I want to emphasise that I am going to summarise some of the points that Philip Pilkington makes there, but I am not attempting to discuss the entirety of his arguments.<br /><br />He is highly dismissive of mainstream economics and finance, and the use of the Efficient Markets Hypothesis with respect to interest rate formation. I agree with some of his criticisms, but I rely upon the Efficient Markets Hypothesis in my analysis of interest rate determination (rate expectations theory). The divergence in views can be viewed as the result of looking at different questions.<br /><br />Firstly, the discussion of interest rates in classical economic theory is utterly worthless. My disagreement with Pilkington on that score is that I think the entire topic should be ignored as an intellectual embarrassment, whereas he argues that "Wicksell is no relic" (page 253). <i>[Update: Typo fixed; thanks to Calgacus.]</i> I come from an academic background where we do not waste time on dead theories; for example, I could find no mention of optimal control in a quick scan of the standard robust control theory textbook <i>Feedback Control Theory</i>, by Doyle, Francis, and Tannenbaum. This is despite the fact that there is an obvious mathematical linkage between optimal and robust control. (The Kalman Filter is one of the few relics left behind from optimal control theory.) 
As an ex-academic, I understand the concerns regarding originality, but at the same time, we cannot cripple our ability to advance economic theory by wasting time worrying about what Wicksell -- or Keynes -- really meant.<br /><br />Modern financial theory argues that we can decompose the interest rate of any instrument into three components (assuming there is no embedded optionality, such as the ability to prepay, convert, call, or put the instrument back to the issuer):<br /><ol><li>The expected "average" of the short-term credit risk-free rate (usually the policy rate) over the maturity of the instrument. (Technically, a geometric average.) </li><li>The term premium for credit risk-free instruments (e.g., Treasury bonds in the United States) associated with the term of the instrument.</li><li>A credit spread.</li></ol><div>(If you want to get finicky, there are second-order effects, such as the effect of being able to fund a bond cheap at a special repo rate, as well as benchmark or liquidity premia. The liquidity premium is a particularly confusing concept in this context, as Philip Pilkington prefers Keynes' liquidity preference theory. 
His concept of a liquidity premium is what I would call the term premium; the liquidity premium under my definition is how much more expensive a benchmark bond is relative to a fitted curve.)</div><div><br /></div><div>In my view, modern mainstream models (i.e., Dynamic Stochastic General Equilibrium) are largely consistent with this version of financial theory, although they contain other elements that are the source of problems (the embedded assumptions about how interest rates affect economic dynamics).</div><div><br /></div><div>Conversely, Philip Pilkington argues that borrowers' interest rates are determined by two factors.</div><div><ol><li>The institutional structure of the economy.</li><li>The liquidity preferences of investors.</li></ol><div>I will discuss these in turn.</div></div><h2>Institutional Factors</h2><div>The modern financial theory decomposition of interest rates makes sense in the modern institutional context, where we have large dedicated fixed income investors and a well-defined bond market. It would probably be of little use in analysing lending in ancient Rome. </div><div><br /></div><div>Pilkington argues that interest rates depend upon the power of creditors. This is arguably true; we no longer have debt slavery or debtors' prisons (although some political groups seem to be sneaking debtors' prisons back under the door). Therefore, I have no argument that the "total cost" of borrowing (when we take into account the risk of being thrown into prison) depends upon institutional factors. However, does this have much to say about market interest rates in the developed economies over the past few decades? It is very hard to see the trajectory of interest rates from the post-war lows, to the early 1980s peak, and back to the current lows as being the result of changes to the power of creditors as a class.</div><div><br /></div><div>He raises the question of loan sharks. 
They can charge exorbitant rates of interest, as their demands are backed by the threat of violence. That said, it is very hard to see the effect of loan sharks on national economic data. (Other illegal activity can leave a mark that is picked up in the national accounts, such as cigarette smuggling in Canada, or the narcotics trade in Vancouver.) I cannot recall any Fortune 500 corporation declaring bankruptcy (or getting its kneecaps broken) as a result of an unfortunate run-in with loan sharks.</div><div><br /></div><div>In other words, power considerations matter for social researchers, but are not something that a fixed income analyst is going to spend a lot of time on.</div><h2>Liquidity Preference</h2><div>Although it may not help my reputation among post-Keynesians, I am not a fan of "liquidity preference" when discussing interest rate formation. </div><div><br /></div><div>My summary of Pilkington's discussion of liquidity preference is that he (like Keynes) is interested in the interest rate faced by the private sector, which it needs to take into account when making financing and capital investment decisions. (He notes that the rate of interest is not too important when making those capital investment decisions.) Investors are taking risk by lending long-term to the private sector, and they need to weigh that risk versus investing in safe "cash" assets. (Cash is not just the economist's "money;" it includes all short-term instruments that are viewed as safe; "money good.") The extra interest demanded rises and falls in line with investors' desires to hold cash.</div><div><br /></div><div>In my view, that is mingling four separate effects:</div><div><ol><li>expected path of short rates changing;</li><li>the (risk-free) term premium changing;</li><li>credit spreads changing;</li><li>ability to finance positions changing. </li></ol></div><div>The final factor (changing financing conditions) is the least familiar, and I will only discuss it briefly. 
During the Financial Crisis, a great many large investors had financed positions using short-term repo financing in any number of instruments. Once the crisis hit, that repo financing disappeared. The most extreme example in (non-euro) sovereign bond space that I can remember was the case of long-dated index-linked gilts: they were trading at LIBOR+150 basis points. That pricing was stupidly cheap, and did not reflect anyone's beliefs about the credit quality of the gilts in question. However, everyone knew that there were a lot of big investors who were stuck with index-linked gilt positions that they could no longer finance, and so pricing was set at stupidly cheap levels. (Since my firm had not traded those instruments before the crisis, we were unlikely to start then. However, if we were to do so, I would certainly have recommended buying them near LIBOR + 125 basis points, even though "fair value" by any of my models was almost certainly below LIBOR.) In summary, positioning can matter for bond pricing, but this is normally a short-term issue that will eventually be worked out.</div><div><br /></div><div>Returning to the more standard decomposition of rates, I believe that we need to firmly distinguish credit spread movements from the movements of the risk-free curve (which includes both rate expectations and the term premium; I will ignore the term premium here in the interests of space). The explanation is straightforward: they can move in opposite directions. For highly-rated instruments (like <i>pfandbriefe</i>), the spread movement is typically not large enough to allow the all-in yield on the credit instrument and the government bond yield to move in opposite directions, but this does happen with credits with wider spreads. If long-term government bond yields are falling, and long-term industrial BBB yields are rising, it is hard to see how a single "liquidity preference" factor can help explain that situation. 
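The decomposition discussed above can be put into a toy calculation. All numbers are invented for illustration (this is not a pricing model); the point is simply that a falling risk-free curve combined with a spread that widens by more pushes the corporate yield up even as the government yield falls.

```python
# Toy decomposition: all-in yield = expected average short rate
#                                   + term premium + credit spread.
# All numbers are illustrative, in percent.

def all_in_yield(expected_avg_short_rate, term_premium, credit_spread=0.0):
    return expected_avg_short_rate + term_premium + credit_spread

# Date 1: before a growth scare.
govt_1 = all_in_yield(2.00, 0.50)        # government yield: 2.50%
corp_1 = all_in_yield(2.00, 0.50, 1.00)  # BBB corporate yield: 3.50%

# Date 2: rate-cut expectations lower the risk-free curve by 0.75%,
# but the credit spread widens by 1.00% -- more than the rate move.
govt_2 = all_in_yield(1.25, 0.50)        # government yield: 1.75%
corp_2 = all_in_yield(1.25, 0.50, 2.00)  # BBB corporate yield: 3.75%

print(govt_2 - govt_1)  # -0.75: government yields fall...
print(corp_2 - corp_1)  # +0.25: ...while corporate yields rise
```

A single "liquidity preference" factor moving all yields together cannot generate the opposite-direction moves in the last two lines.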
</div><div><br /></div><div>Of course, mainstream economic theory is going to be fairly useless in explaining <i>why</i> credit spreads move. That said, the models that incorporate credit spreads do not even try to do so: credit spreads are one of those magical exogenous shocks that allow newer DSGE models to "explain" the Financial Crisis. I subscribe to a fairly crude version of the Efficient Markets Hypothesis: it is difficult to outperform markets consistently. If this hypothesis is correct, we should not expect to find an economic model that can predict credit spreads accurately, and so we should not warp our economic modelling strategies trying to do so.</div><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com5tag:blogger.com,1999:blog-5908830827135060852.post-30610954616949849982017-10-25T16:37:00.000-04:002017-10-29T09:50:18.616-04:00Python SFC Models Book Being Formatted<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-dQLvMUuMe-I/WfD1p9r5S4I/AAAAAAAAC_c/HhKJlQpp7LoSg7xUuC69KWXlwfZ8umlnwCKgBGAs/s1600/logo_blog.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://1.bp.blogspot.com/-dQLvMUuMe-I/WfD1p9r5S4I/AAAAAAAAC_c/HhKJlQpp7LoSg7xUuC69KWXlwfZ8umlnwCKgBGAs/s1600/logo_blog.png" /></a></div>I am now formatting the Python SFC models book. I am trying a new layout technique, so no promises for the delivery time. I was going to do a promotional summary today, but other things popped up. I may do one shortly, but I wanted to just put up an announcement rather than miss publishing on my regular publication day.<br /><br />UPDATE: The initial formatting pass is finished, but there are still some kinks to work out. 
I will be waiting for the printed proof before publishing, and so the publication date will not be immediate.Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com3tag:blogger.com,1999:blog-5908830827135060852.post-6413543995223211702017-10-22T16:07:00.000-04:002017-10-22T16:07:44.609-04:00How I Would Analyse A Job Guarantee<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-irVeP-L0-bw/WezT-fr4dHI/AAAAAAAAC_A/ueXBelStAsAoRAYgV_WVB6gH2nU9xWsXQCKgBGAs/s1600/logo_MMT.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://4.bp.blogspot.com/-irVeP-L0-bw/WezT-fr4dHI/AAAAAAAAC_A/ueXBelStAsAoRAYgV_WVB6gH2nU9xWsXQCKgBGAs/s1600/logo_MMT.png" /></a></div>The Job Guarantee proposal is a core part of the policy analysis of Modern Monetary Theory (MMT). If implemented, it would be expected to cause a structural change in the economy, and so analysis techniques that extrapolate current conditions would be inapplicable. Although this analysis is aimed at the Job Guarantee, the basic principles would be applicable to other measures that cause a structural change in the labour market, such as the Universal Basic Income. (In fact, it's my adaptation of analysis by Hyman Minsky for the forerunner of the basic income proposal, the negative income tax.) The analysis here is a back-of-the-envelope discussion for Canada; it could be adapted to other countries and made more detailed at the analyst's discretion.<br /><br /><a name='more'></a>As an initial disclaimer, the title of this article is a very deliberate choice: it is how I would approach the problem, and does not reflect the state-of-the-art research on the topic by academics in the MMT school of thought. I only have a limited knowledge of their detailed research. 
Furthermore, I discuss a potential implementation in Canada that is based on my political instincts as to what would be a politically sustainable programme; as a result, my views on implementation probably vary from those of the main MMT academics. My political instincts are probably out-of-date, so what I write here should not be taken as a definitive statement on how Canada should approach implementation.<br /><h2>What is the Job Guarantee?</h2><div>The first question we face is: what is the Job Guarantee programme? The one-sentence summary is: the central government guarantees <i>a</i> job for all citizens who are willing to work. There are a lot of hidden complexities in that summary, which tend to trip people up (particularly those who are politically opposed to the programme, who tend to be deliberately obtuse when writing about it).</div><div><br /></div><div>My description here is quite brief; <a href="http://bilbo.economicoutlook.net/blog/?p=23719" target="_blank">this article by Bill Mitchell has more information</a> (from a more-informed source).</div><div><br /></div><div>The demand for such jobs obviously rises in a recession, and so the total wage bill would be counter-cyclical. As a result, the central government (which controls its own currency) is the natural level of government to pay those wages.</div><div><br /></div><div>There are two broad ways the job creation could be managed.</div><div><ol><li>The government has a permanent bureaucracy that manages centres that would take in entrants, and find work for them to do. (I come back to the definition of "the government" later.)</li><li>The government allows municipal governments and charitable non-profits to find and hire workers. (Presumably with a government-supported electronic platform for would-be workers to find such positions; citizens without internet access would be able to use terminals at government-run facilities such as public libraries.) 
The government's role would be to set standards, offer technical support, and have a team of inspectors making sure that wages are paid according to those standards.</li></ol><div>(Of course, a mixture of these two job-delivery modes could be used.)</div></div><div><br /></div><div>The general preference seems to be towards the second option; decentralising the programme keeps it much closer to the conditions on the ground in different regions. In countries like Canada and the United States, that is an important consideration. Furthermore, it creates a much larger body of dedicated citizens who see the programme as useful (every charity that has manual labour needs). This creates a voting bloc that only the most sociopathic free market ideologues would want to mess with.</div><div><br /></div><div>I have not looked recently at the Canadian Constitution, but my instinct is that the programme would be implemented at the provincial level, like the other aspects of the welfare state. The role of the Federal government would be to set standards and offer cash support. Otherwise, it would be difficult to come up with a programme that would keep the provincial governments in both Alberta and Québec happy. This provincial-level implementation would allow even more customisation based on local conditions. The Job Guarantee wage bill would probably replace some existing Federal fiscal transfer spending (which might distress some empire-building provincial politicians).</div><div><br /></div><div>I am going to dodge the issue of what the workers will do; it seems that existing charities have plenty of labour needs. 
However, one basic principle is that the workers would generally not be producing goods or services that would be sold into markets, as that would put them into direct competition with the private sector.* (If the intention were to sell goods and services, various slow-witted individuals would argue that the objective is to "make money," and that when it inevitably "loses money," the programme is a failure. The whole point of the programme is to "lose money" during a downturn, so as to inject income flows in a counter-cyclical fashion.)</div><div><br /></div><div>A key assumption I am making is that the people enrolled in the programme are indeed working, and so these positions would not be viewed as a "money-for-nothing" proposition. This means that people would presumably not just quit private sector employment to work at a Job Guarantee position for lower pay (unless that private sector work was very unattractive relative to the pay offered).</div><h2>How I am Not Analysing the Job Guarantee</h2><div>Such a programme would have a variety of effects on the economy, and could be analysed in many ways. I am going to outline a number of ways I am not doing so here.</div><div><ul><li><b>Aggregated macro analysis.</b> Aggregated macro analysis (such as using a stock-flow consistent model) would have its place in analysing the cyclical properties of a country with such a scheme. However, such analysis would rely on behavioural assumptions about macro behaviour, and we do not have the data to be able to fill in the parameters with confidence.</li><li><b>Mainstream microfoundations. </b>The only justification for the mainstream micro-founded approach to macro is that it is supposed to allow us to judge the effects of policy regime shifts ("Lucas Critique"). 
However, the assumption that everyone is the same (which is a <i>de facto </i>requirement for these models) falls flat on its face when we obviously have two classes of citizens, depending on whether they are in the programme or not. In other words, these mainstream models are even more useless than they normally are for this task.</li><li><b>Small-scale experiments.</b> Small-scale experiments are useful for determining the best implementation practices, but by definition, cannot tell us about the macro effects that I am interested in here. A small-scale experiment cannot cause structural changes to the labour market of the economy, which is exactly what I want to know about.</li></ul><h2>The Analysis Technique</h2></div><div>We need to get our hands on a detailed breakdown of the current wage distribution of Canadian workers across industries. I have looked at this data previously, and have a rough idea of what it looks like, but one could throw as much effort as one wants at getting more details.</div><div><br /></div><div>I am interested in finding what the new "steady state" in the economy is after the programme is implemented, and not trying to guess the quarterly GDP numbers over the next few quarters. The working assumption is that the current wage structure is relatively persistent, and that it would be the baseline for the steady state (in the absence of a Job Guarantee programme). Since the time interval involved is vague, we will assume that all wages are growing at the same rate, and so we work with "current dollar" wages (effectively a base year for a real wage).</div><div><br /></div><div>We know (by just looking at the data) that wage structures change over time. Although it would be nice to be able to capture this, the reality is that the resulting model would be hideously complex and unusable.</div><div><br /></div><div>We now move away from our base case, and introduce a Job Guarantee. The key policy variable is the level of wages paid. 
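As a sketch of the technique, the first-pass calculation looks like the following. The wage buckets are entirely hypothetical (the real exercise would use the detailed wage distribution data described above); the output is only a crude upper bound on programme take-up at a given wage.

```python
# Back-of-the-envelope: what share of workers earn less than a proposed
# Job Guarantee (JG) wage? Hypothetical distribution: each tuple is
# (hourly wage of bucket in dollars, millions of workers in bucket).
wage_distribution = [
    (11.0, 1.5),   # near minimum wage
    (13.0, 1.8),
    (16.0, 2.4),
    (22.0, 3.9),
    (35.0, 4.1),   # middle class and above
]

def share_below(jg_wage, distribution):
    """Fraction of workers earning less than the JG wage -- a crude
    upper bound on potential take-up at that wage level."""
    total = sum(workers for _, workers in distribution)
    below = sum(workers for wage, workers in distribution if wage < jg_wage)
    return below / total

# Scan candidate JG wage levels.
for jg_wage in (9.0, 12.0, 15.0, 18.0):
    print(jg_wage, round(share_below(jg_wage, wage_distribution), 3))
```

The real exercise would refine this by industry, since the interesting question is which sectors face pressure to raise wages or restructure, not just the aggregate head count.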
Depending on that hourly rate, we would expect the effects to vary. </div><div><br /></div><div>In order to avoid getting distracted by numbers, I will refer to the wage as being relative to the minimum wage. As anyone familiar with Canadian politics would guess, that wage varies by province. That creates issues for detailed regional analysis, but not for my back-of-the-envelope analysis.</div><h2>Case: Extremely Low Wages</h2><div>The government could set the wages in the programme at a stupidly low level, like $2 an hour. Only the truly desperate or masochistic would show up. The programme would be an expensive joke, since it would be necessary to set up an infrastructure that is never used.</div><h2>Case: Low Wages</h2><div>The wage level is somewhat of a guess, but if the Job Guarantee wage were set $2/hour below the minimum wage (and the minimum wage stays where it currently is), there would be at least some willingness of the currently-unemployed to take up jobs with the programme. However, the pay level would be so low that it seems unlikely that many would quit existing jobs to work on Job Guarantee jobs.</div><div><br /></div><div>(I will discuss the interaction with the minimum wage in the next section.)</div><div><br /></div><div>However, the wage level is very far below the current wages and salaries of the middle class. If they lose their job, they would take their Employment Insurance benefits and look for a job that matches their existing skills; the Job Guarantee would not be a true option. This is not just obstinacy: a significant portion of Canadian households have fixed costs that consume most of their after-tax income (notably mortgage payments). 
A severe drop in cash flow relatively rapidly leads to personal bankruptcy; it is better to spend your time looking for a job (with a chance of avoiding bankruptcy) than taking a job that guarantees eventual bankruptcy.</div><div><br /></div><div>I am assuming that the Employment Insurance (EI) programme (which is what some marketing genius decided to rebrand the Canadian Unemployment Insurance scheme as) remains in place. For our purposes here, the key attribute of EI unemployment coverage is that it is based upon payments into the programme: it pays larger benefits to those who had a higher wage/salary (up to a capped level). This income replacement feature makes it useful to the middle classes, as it keeps a level of cash flow that helps stave off bankruptcy.</div><div><br /></div><div>Actuaries do a very good job of pretending that the EI programme is an insurance scheme (and not a welfare state programme), and it is useful to the middle class. This means that it is one of the few politically untouchable remaining parts of the Canadian welfare state. I think it would be a disastrous strategy to restructure this programme when implementing the Job Guarantee (although the contribution schedules might be adjusted to account for low income workers taking advantage of the Job Guarantee).</div><div><br /></div><div>My guesstimated impact of the Job Guarantee in this case is that it would be a relatively low impact programme. The number of people in the programme would be relatively modest, and there would be little effect on the existing pattern of employment. It would have a counter-cyclical effect, but it may be smaller than the effect of the existing Employment Insurance scheme.</div><h2>Case: At Minimum Wage</h2><div>The next possibility is that the wage is set at the current minimum wage ("inflation adjusted"). 
Whether or not minimum wage laws would still be needed is a question for debate; I believe they would still be needed to protect employees from abusive employers.</div><div><br /></div><div>My feeling is that the bulk of minimum wage jobs are pretty lousy jobs. Not all of them, certainly. I worked as a dishwasher/short order cook/chef's aide at a country club during my last summer before I entered university, and the job was excellent training for becoming a chef or even setting up your own restaurant. Of course, one would probably have to go to cooking school later, but I was learning the business courtesy of the fact that I spent most of the day working directly with the chef. As a result, if I had continued working instead of going to university, I would have stayed in that job rather than leave for a marginal pay increase elsewhere. That said, the bulk of minimum wage restaurant jobs are without that redeeming feature.</div><div><br /></div><div>As a result, I would expect that this choice of wage would have a marked impact on some sectors of the economy. Many employers would be faced with the choice: either make their minimum wage jobs more appealing, or raise wages above the minimum wage.</div><div><br /></div><div>This is where having the detailed data on the existing wage structure comes in. Depending on the quality of that data, it would presumably be possible to pin down how many workers might consider the shift, and which industries would be most affected.</div><div><br /></div><div>My guesstimate is that the effects would be relatively selective, with some businesses either raising wages (and presumably selling prices), or going out of business. 
One might be able to use the existing empirical literature on the effect of changing the minimum wage to get an estimate of the effect.</div><div><br /></div><div>From a macro perspective, I would argue that this would ultimately be a relative price effect: prices in the affected sectors would rise relative to those of other sectors. Given the high levels of existing wage inequality, the rise in the effective "minimum wage" would not affect many workers. (If one of my analysts had come in to argue that they needed a raise because the minimum wage went up, that would have been met with one of my characteristic eye rolls and sighs.)</div><div><br /></div><div>It could very well be that there would be a one-time increase in the measured price level as a result of the industry restructuring, but there is no reason to believe this translates into steady-state inflation.</div><div><br /></div><div>The number of people within the programme would be uncertain, as we cannot be sure how many low-wage jobs are completely unattractive. However, it would certainly be expected to be fairly large when compared to the existing number of unemployed.</div><h2>Case: Wage Above Existing Minimum Wage</h2><div>If the Job Guarantee wage were well above the existing minimum wage, it is clear that there would be restructuring of the economy. Certain industries (such as fast food restaurants) would have to change their business model.</div><div><br /></div><div>It might be possible that the private sector response would be a broad inflation in wages across the board -- a generalised rise in the price level. If not, there would certainly be a flattening of wage inequality (which might be a desired outcome).</div><h2>Longer Term Analysis</h2><div>If the Job Guarantee wage were relatively high, it would be the <i>de facto</i> minimum wage. 
Employers would have to either pay at least that much, or make their entry-level positions otherwise attractive.</div><div><br /></div><div>Eventually, we would return to a situation that resembles the case where the Job Guarantee wage is below the minimum wage: it would be a relatively unattractive job, and the take-up by citizens would be expected to be small (during an expansion, at least). Even if the introduction of the Job Guarantee flattens wage inequality, it would not be surprising if inequality returned, as nominal wages rise away from the Job Guarantee wage.</div><h2>Price Level Anchor</h2><div>The MMT authors emphasise the importance of the Job Guarantee wage as a nominal price anchor. Since this article is already lengthy, I will have to discuss that aspect at a later date.</div><h2>Concluding Remarks</h2><div>The key policy variable for the Job Guarantee is the wage level. If it comes out below the existing minimum wage, the effect of the programme would be limited. If higher, it would force a one-time restructuring among employers with minimum wage employees. In the longer term, private sector wages would rise above the Job Guarantee wage, and the number of people in the programme would shrink.</div><div><br /></div><div><b>Footnote:</b></div><div><br /></div><div>* They could support government-run operations. For example, the government could set up factories that churn out Canadian-themed souvenirs, such as lumber jackets and toques. These items could then be held in inventory, and sold at national/provincial parks. 
Since the items could be held in inventory for a long time, there would be no worries about the counter-cyclical nature of production.</div><div><br /></div>(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com14tag:blogger.com,1999:blog-5908830827135060852.post-79315218703800489662017-10-18T09:00:00.000-04:002017-10-18T09:00:07.839-04:00Quick Update, Slope Comment<div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-UTHloq3Kvuo/Weay6o_RQXI/AAAAAAAAC-g/Dddl1yXELrs0Ko0SlMkBlgegU9oTU2DeACLcBGAs/s1600/c20171018_tsy_5_10.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Chart: 5-/10-year Treasury Slope" border="0" data-original-height="400" data-original-width="600" src="https://2.bp.blogspot.com/-UTHloq3Kvuo/Weay6o_RQXI/AAAAAAAAC-g/Dddl1yXELrs0Ko0SlMkBlgegU9oTU2DeACLcBGAs/s1600/c20171018_tsy_5_10.png" title="" /></a></div><br />The Treasury yield curve has continued its relentless flattening. The 5-/10-year slope has hit a new post-crisis low. We should not be too surprised by a flattening yield curve during a hiking cycle, but at the same time, the pace of rate hikes has been laughable.<br /><br /><i>Note: I have been tied up with other projects (my SFC models book is ready to be shipped for the final editing pass), so this article will be extremely brief. I just wanted to let readers know that there will be a publishing hiatus until the weekend. (Although this is a free website, I want to keep to a regular writing schedule, particularly on Wednesdays.)</i><br /><br /><a name='more'></a><br />I have not been doing a lot of business cycle watching recently. I could easily be missing something significant. However, I would tend to interpret the yield curve action as a form of capitulation. An investor can only stand being in negative carry positions for so long. 
The flattening in the 5-/10-year curve (for example) is just a writing down of forward rates to levels that are closer to what has been realised over the past half decade. In any event, the curve is still not that flat in absolute terms; it is only near what used to be considered normal in the Japanese curve.<br /><br />As always, the big question is: are we near a recession? Modern recessions are the result of a collapse in misguided fixed investment. (Historically, the inventory cycle was enough to trigger a recession.) One of the advantages of our tepid growth environment is that fixed investment growth has been equally tepid, and so there is less need for it to be slashed. There certainly has been a lot of stupid investor behaviour on display, but this stupidity has largely been an equity investor affair. Creating crypto-currencies generates a lot of headline chatter, but not a whole lot of jobs or fixed investment is being deployed.<br /><br />I may comment later on the fixed income pricing aspects of yield curve analysis, but that will be delayed. 
I first want to write an article about analysing the Job Guarantee, which got sidetracked by money discussions (sigh).<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com0tag:blogger.com,1999:blog-5908830827135060852.post-60798776208514594612017-10-15T10:06:00.001-04:002017-10-15T11:14:14.885-04:00Should We Care About Seigneurage?<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-IHExHW7CTao/WeNcPZTaw9I/AAAAAAAAC-E/Qults5XIhvkXjb-Xh1BW_PNOJI7_tcCSwCKgBGAs/s1600/logo_money.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="70" data-original-width="80" src="https://4.bp.blogspot.com/-IHExHW7CTao/WeNcPZTaw9I/AAAAAAAAC-E/Qults5XIhvkXjb-Xh1BW_PNOJI7_tcCSwCKgBGAs/s1600/logo_money.png" /></a></div>I believe I have a better understanding of Eric Lonergan's arguments regarding whether fiat money is a liability of a state with currency sovereignty. <i>(This discussion does not apply to commodity money, or a state using a money issued by an entity not under its direct control.) </i>If I am correct, I would phrase his argument as: the existing accounting treatment of money is incorrect, since it does not account for seigneurage revenue. (Seigneurage has multiple English spellings; I was using the French spelling on Twitter -- <i>seigneuriage</i>.)<br /><br /><a name='more'></a><h2>Aside: Should We Care About Governmental Accounting?</h2><br />This discussion can easily be sidetracked by philosophical debates regarding the usefulness of governmental financial accounting. To be clear, I do not think that it is very meaningful.<br /><br />Two things are meaningful for a government:<br /><ol><li>How is the government utilising real resources?</li><li>What is the purchasing power of its money? (Although the state can be indifferent to inflation, the voting public is not. 
Furthermore, the private sector may replace the government-issued currency with something else if its purchasing power is too unreliable.)</li></ol><div>Unfortunately, those are complicated questions to answer. Financial accounting offers us a shortcut, assuming the following things are true.</div><div><ol><li>The multiplier on all spending (that generates income in the private sector) and taxes is the same.</li><li>There is a single price level in the economy.</li><li>There is a <i>monotonic</i> relationship between economic growth and the effect on the price level. (Simply, more growth means a higher price level.)</li></ol><div>These are assumptions that are embraced by most mainstream economists (which explains why they are happy looking at governmental financial accounting); in the heterodox view, some or all of these three points are incorrect.</div></div><div><br /></div><div>However, the assumptions are not completely wrong; we can use them to give directionally correct views on what will happen in extreme cases. For example, if the Canadian government handed out $1 million in cash to every Canadian citizen, the purchasing power of the Canadian dollar would collapse. This outcome can be predicted using traditional financial analysis -- the government blew out its financial spending capacity. (Conversely, this would be hard to predict using financial analysis if the "cash" were not a liability.)</div><h2>Seigneurage</h2><div>The idea of <i>seigneurage</i> is that "money" (government <i>liabilities</i> that pay 0% interest) is a low-cost funding source <i>in a world where we assume that interest rates are positive</i>. (Money is the high-cost funding source in a world of negative interest rates!) Note that this is the definition of "money" I am using in this article; I do not care what other whack-job definitions other people can come up with. 
(<a href="https://books2read.com/abolishmoney" target="_blank">Abolish it!</a>)</div><div><br /></div><div>There are two well-known instruments that qualify as "money" using this definition. (Note that this list is not exhaustive; any account payable with no associated interest rate would qualify.)</div><div><ol><li>Currency in circulation -- notes and coins.</li><li>Required reserves held at the central bank that pay 0% interest.</li></ol></div>Please note that the Federal Reserve pays interest on settlement balances held in excess of reserve requirements ("excess reserves"). Although they are part of the "monetary base," they no longer qualify as "money" by this definition, which matters in the context of the discussion of seigneurage.<br /><br />We can then define the <i>estimated one-year seigneurage revenue</i> as the interest saved over one year by replacing interest-paying government debt with "money." (If you insist on sticking excess reserves in your definition of "money," you then need to account for the interest paid on excess reserves, making the exercise more complicated than what is shown here.) Since we do not know exactly what debt instruments would be replaced by "money," our best guess is the interest rate on Treasury bills over the year.<br /><br />Note that the use of the word "revenue" is contentious; it's really an opportunity cost saving. I am just using it as a technical term that follows existing usage; it is definitely not a view about its proper accounting treatment.<br /><br />I do not want to derail this article with a discussion of rate expectations and the choice of discount rates. I have opinions on those areas, but they are a distraction here. We will instead keep everything simple and assume that we are in a situation where all governmental interest rates (and discount rates) are flat at a strictly positive level. 
I will use 1% for simplicity, but any interest rate above 0% gives the same final result.<br /><br />In our 1% world, "money" saves the government 1% per year versus governmental debt. We forecast this to occur for every year going forward ("to infinity"). Let's say we want to capitalise this stream of <i>one-year seigneurage revenue</i>. What is it worth?<br /><br />An instrument that pays 1% per year perpetually is a <i>consol</i> with a 1% coupon. If its quoted yield is 1% (as assumed), its price is $1 for $1 face value (the price of a perpetuity is the annual cash flow divided by the yield).<br /><br /><i> In other words, in a world of a perfectly flat yield curve (with a strictly positive yield), the capitalised seigneurage value of the stock of money is equal to the face value of the stock of money. </i>If we argue that seigneurage revenue can be capitalised as an asset, it would be an asset on the balance sheet of the central bank that matches the liability value of the "money" stock.<br /><br />If one wanted to cancel out the two entries, one might argue that the stock of money is no longer a net liability. However, this operation is an attempt to cancel out known liabilities with a definite face value against a model estimate using highly uncertain input parameters, and so many accountants would scream bloody murder about that.<br /><br /><i>(Furthermore, there is an additional financial saving associated with notes and coin: some of them get destroyed. If we could identify these instruments, they should be written off as liabilities. However, unless the government periodically redeems its currency outstanding, there is no good way to measure this effect. Having currency used overseas -- which is a well-known attribute of the U.S. dollar -- might appear to make the liability disappear. From a functional finance standpoint, it does: such currency should not cause inflation in the United States in the short term. 
However, we can easily imagine policy changes that would cause such currency to return to the territory of the United States, and hence the liability is still hanging over the government.)</i><br /><h2>Does This Work?</h2>In the abstract, attempting to value seigneurage revenue is not objectionable. Any detailed simulation of debt dynamics needs to take this effect into account. For example, any DSGE model that has non-zero money holdings has to add a correction to the so-called inter-temporal governmental budget constraint to account for this. (The simpler DSGE models imply zero money holdings, so this effect is zero.)<br /><br />Attempts to do real-world calculations of this effect probably give smaller numbers than the idealised example above. The explanation is that long-term discount rates have an upward bias relative to the true forecast of the path of short rates: the term premium. The underperformance of Treasury bill returns relative to Treasury bonds is a known empirical regularity, and we use Treasury bond yields for long-term discounting. (Yes, it's internally consistent.)<br /><br />The larger practical problems involve the use of a highly debatable estimate on a balance sheet, and whether "saving interest" is actually a form of revenue that can be capitalised. A bank would not be particularly amused by your attempt to capitalise Boxing Day Sale shopping savings on your loan application.<br /><h2>Unfortunately, No Policy Implications</h2>In any event, there are effectively no policy implications associated with this analysis. Very simply, there is no way to induce the private sector to voluntarily hold instruments yielding 0%. Yes, you can force the banking system to hold excess reserves that pay 0% (which was historically the case in the United States), but that just drives down Treasury bill rates to 0% as well. 
In that case, the one-year seigneurage revenue is $0, and a perpetual stream of $0 cash flows has an NPV of $0, regardless of interest rates.<br /><br />Yes, required reserves (that pay 0%) are a way of getting this cost saving. However, required reserves are just a tax on the formal banking sector, and you end up driving activity to the poorly regulated shadow banking system (which blows itself up, and needs to be bailed out). This net present value analysis is what bank lobbyists would use to value the cost of this tax.<br /><br />The fact that the private sector is willing to hold notes and coin yielding 0% is a way to reduce interest expense, but that willingness is not enough to build a new economic theory around.<br /><br />(c) Brian Romanchuk 2017Brian Romanchukhttps://plus.google.com/112203809109635910829noreply@blogger.com11
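The consol arithmetic in the seigneurage discussion above can be checked numerically. Here is a minimal sketch in Python, assuming a perfectly flat yield curve; the function name and the parameter values (a $100 money stock, 1% and 5% rates) are my own illustrative choices, not real data.

```python
# Sketch of the capitalised-seigneurage argument. All values illustrative.

def capitalised_seigneurage(money_stock: float, rate: float) -> float:
    """Present value of the perpetual interest saving from funding with
    0%-interest "money" instead of debt, discounted at the same flat
    rate (perpetuity: PV = annual cash flow / yield)."""
    annual_saving = money_stock * rate  # one-year seigneurage revenue
    if rate == 0.0:
        # A perpetual stream of $0 cash flows has an NPV of $0.
        return 0.0
    return annual_saving / rate

# Any strictly positive flat rate gives the same answer:
# the capitalised value equals the face value of the money stock.
print(capitalised_seigneurage(100.0, 0.01))  # 100.0
print(capitalised_seigneurage(100.0, 0.05))  # 100.0
# In a 0% rate world, the seigneurage revenue stream is worth nothing.
print(capitalised_seigneurage(100.0, 0.0))   # 0.0
```

The collapse of the result to the face value regardless of the (positive) rate is the point of the argument in the text; the interesting complications (term premia, uncertain input parameters) only appear once the flat-curve assumption is dropped.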