This article is a qualitative overview; I expect to dig deeper into the details of at least one popular variant of this class of models in a later article. One of the difficulties with these models is their popularity: there is a large variety of models published in the literature, which means that I cannot make strong statements about their common structure. From my perspective, the limitations of these models -- mainly of those that rely on market data -- limit my desire to dive too deep into the topic.
Introduction

The basic principle behind any of these models is straightforward: we need to identify time series (either economic or from financial markets) that act in an unusual fashion ahead of recessions. (Variables that react either simultaneously or with a lag to a recession are not particularly useful, as we can just infer a recession probability from activity indicators; at best, we eliminate the data publication lag.)
The key is that the behaviour has to be unusual. For example, we could imagine a (normalised) time series that reliably drops below the threshold value -1 six months ahead of a recession. Unfortunately, this time series is volatile, and it hits -1 every year. Since recent recessions have been about ten years apart, this means that the signal of hitting the threshold happens a great many times without there being a recession -- a false positive.
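The false-positive arithmetic can be made concrete with a minimal sketch. Everything here is invented for illustration: a normalised indicator that dips below -1 once a year, a single recession in the sample, and a six-month signal window.

```python
import numpy as np

# Made-up example: 20 years of monthly data for a normalised indicator
# that dips below -1 once a year. A single recession occurs in the
# sample, starting at (hypothetical) month 216.
months = np.arange(240)
indicator = np.where(months % 12 == 6, -1.2, 0.3)  # yearly dip to -1.2

recession_start = 216
signal_months = np.flatnonzero(indicator < -1.0)

# Count a signal as "true" only if it fires 1-6 months ahead of the
# recession; every other threshold breach is a false positive.
lead = recession_start - signal_months
true_positives = signal_months[(lead >= 1) & (lead <= 6)]
false_positives = signal_months[(lead < 1) | (lead > 6)]

print(len(true_positives), len(false_positives))  # 1 true, 19 false
```

With recessions roughly a decade apart and a signal that fires annually, the indicator is wrong about 95% of the time even though it "reliably" precedes the recession.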
Once you find such variables, you then put them through some statistical technique -- a probit model, say, or data mining methods. The multiplicity of such techniques explains why the literature on these models is so diverse.
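As a toy illustration of the probit approach, the sketch below maps an indicator value (a term spread, say) to a recession probability through the normal CDF. The coefficients a and b are invented for illustration; in a real model they would be estimated by maximum likelihood against the binary recession variable.

```python
from scipy.stats import norm

# Toy probit: P(recession within 12 months) = Phi(a + b * spread).
# The coefficients are made up, not estimated; the negative b encodes
# "flatter/inverted curve -> higher recession odds".
a, b = -0.5, -1.0

def recession_probability(term_spread: float) -> float:
    """Map a yield-curve slope (in percentage points) to a probability."""
    return float(norm.cdf(a + b * term_spread))

print(round(recession_probability(2.0), 3))   # steep curve: 0.006
print(round(recession_probability(-0.5), 3))  # inverted curve: 0.5
```

The model structure is trivial; all of the contentious decisions are in the choice of input variables and the training sample, which is why the limitations below matter more than the mathematics.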
The rest of this article will mainly discuss the limitations of such models. The reason for discussing limitations rather than the advantages of such models is based on my assessment of how users perceive them. People are attracted to mathematical models, and the more complex, the better. The problem is that the mathematical complexity distracts us from the underlying structure of the model.
Training Against Isolated Data Points

One structural issue we face with such models is that they are attempting to match a binary variable -- is the economy in recession? In the early post-World War II decades, recessions were more frequent, and so there are many episodes to fit our indicators against. However, since 1990, recessions have hit the United States roughly once per decade. (The situation is worse for Australia, which has largely managed to avoid recession over this period.) If we accept that structural changes in the economy may change the appropriate choice of indicator variables, we are searching data sets for events that happened only a few times. The obvious risk is that we end up selecting variables that act exactly like they did in the handful of post-1990 recessions -- in effect, looking for an exact repeat of history.
The next issue is that we are sensitive to the methodology used for recession dating. In the United States, I would argue that one theme of the literature is that the NBER recession dates are fairly robust to changes in the methodology used to analyse activity variables (so far).* When we look at other countries, we may find that recession dating is more controversial, which obviously raises questions about the validity of models trained on any particular data set.
For the post-1990 period, we need to be concerned about the nature of recessions. For the United States at least, recent recessions have coincided with some form of financial crisis. Although I believe that there are good theoretical reasons for financial crises to cause recessions, we only need to look at the early post-World War II period to see examples of recessions that happened independently of financial crises. As Minsky observed at the time, the post-war financial system was exceedingly robust, the result of a strong regulatory regime and a cautious private sector (which was deeply scarred by the Great Depression). One could argue that regulators and credit investors did learn something from the Financial Crisis, and so it is entirely possible that we can have a recession without large financial entities blowing themselves up.
Finally, there is the issue of small dips in activity -- technical recessions. I discussed technical recessions in an earlier article. If there is such a dip, is a signal provided by an indicator really a false positive, even if the NBER does not call a recession? This is the inherent problem with such binary signals; if we have a model that predicts growth rates, it should be able to distinguish a dip from a full-fledged recession.
Economic Structural Changes

If we attempt to train our model against a long run of data, we will be covering differing economic regimes. This is not an issue for activity-based models, since recessions are defined by declines in the same set of economic activity variables (employment, etc.). It is an issue for leading indicators, however: in the United States, the secular decline in the manufacturing sector has meant that previously-important manufacturing variables are less useful indicators for aggregate activity.
Even if manufacturing employment dips by 10%, that's now a drop in the bucket compared to employment in the service sector.
Regional Disparity

Differing regions of a country can have quite different economic outcomes, just as differing sectors can. For example, we can track provincial GDP in Canada. It is entirely possible for some provinces to be in recession while others are still expanding; whether aggregate Canadian activity declines is just a question of the weights of the provinces in the aggregate.
This means that indicators related to a particular industry may correctly call a regional recession, but miss on aggregate activity. Once again, we need to ask whether this is truly a false positive.
Financial Market Shenanigans

Many of these forecast models are highly reliant on inputs that are financial market variables. The correct way to interpret them is that the model is giving a probability of recession that is priced into markets -- under the strong assumption that market behaviour matches historical patterns.
If one is involved in financial markets, one needs to be mindful of circular logic. So long as we accept the assumption that behaviour matches previous norms, we should trade the markets involved using the indicator only if our personal view on recession odds differs from what is implied by the markets. The alternative is that we end up trend following: we put on curve flatteners because the model says that there is high probability of recession -- but that probability is based on the previous flattening of the curve.
The remainder of this section will discuss broad asset classes.
Rates (yield curve). The yield curve has been one of the best recession indicators. The reasoning is straightforward: the central bank will normally cut rates when a recession hits, and so the bond market is just pricing in those future cuts. However, the indicator only works if bond market participants are actually able to forecast the recession -- which means that we should be using the models they use to forecast the recession, not the yield curve! One could try to argue that bond market pricing is a mysterious equilibrium process, and that prices are arrived at via supply and demand factors that are independent of what market participants believe. Obviously, individual participants can be dead wrong, so we are looking at a weighted average opinion. That said, I would argue that the belief that yields are determined independently of a market consensus is mystical hogwash (based on my market experience).
There are a few obvious weaknesses to the yield curve (I will probably turn this into a separate article, so I will be brief.)
- Bond market participants could be out to lunch, and so they get the recession forecast wrong (on average). This could either be forecast errors, or the result of non-forecastable uncertainty (as Keynes described it).
- The central bank could be expected to act in an unusual fashion.
- There could be unusual factors affecting the yield curve. The dash for duration in the GBP market because of pension liability matching regulations in the 1990s created a hopelessly distorted curve.
- Proximity of the "zero bound" should tend to keep the curve steep. (Admittedly, some central banks have experimented with negative rates, but they are not expected to get "too negative.")
The final issue is: what "yield curve" do we use? Theoretical yield curves are a continuum, with an infinite number of maturities. Economists pick an arbitrary pair of maturities to create "yield curve" variables to use in these models. Since the curve on the continuum is defined by a few parameters, these slopes will be correlated. However, they are not perfectly correlated. This can allow for a certain amount of data mining.**
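The correlated-but-not-identical point can be illustrated with a toy two-factor curve model. The factor structure and all numbers below are invented: both popular slope definitions load on the same underlying factors, so they co-move strongly without being interchangeable.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented curve model: yield(m) = level + s*exp(-m/24)
#                                 + c*(m/24)*exp(-m/24),
# with maturity m in months, and i.i.d. draws for the two factors.
n = 300
level = 4.0 + rng.normal(0, 0.5, n)
s = rng.normal(0, 1.0, n)  # slope factor
c = rng.normal(0, 1.0, n)  # curvature factor

def curve_yield(m):
    x = m / 24.0
    return level + s * np.exp(-x) + c * x * np.exp(-x)

# Two common "yield curve" variables built from the same curve.
slope_10y_2y = curve_yield(120) - curve_yield(24)
slope_10y_3m = curve_yield(120) - curve_yield(3)

corr = np.corrcoef(slope_10y_2y, slope_10y_3m)[0, 1]
print(round(corr, 2))  # high, but well short of 1
```

Because the slopes are highly correlated but not identical, a researcher can try every maturity pair and report whichever one "worked" in the training sample -- which is exactly the data mining risk flagged above.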
Credit Markets. Various credit spreads -- particularly bank-based spreads -- are popular indicators. The question is whether we are extrapolating the experience of recent recessions forward (as noted earlier). It is entirely possible that the investment grade credit markets (which these indicators are usually based on) can sidestep a recession. For example, large banks have a great many mechanisms to manage the credit losses that they will expect from their small customers. (The United States with its branch banking model does have the property that any downturn will wipe out some small regional banks.) It takes a real team of chowderheads to put a bank into bankruptcy.
Equity Markets. Equity markets are supposed to be discounting an infinite stream of cash flows, not the next few months of activity. Theoretically, the effect of a recession on stock prices should be small. That said, equity markets do tend to swoon ahead of recessions. This makes sense if we believe that equity market pricing is closer to extrapolating the last few data points out to infinity. I am not the person to resolve that debate.
However, equity markets are volatile, and so periods of falling prices are expected to happen periodically. As a result, we should expect some false positive recession signals.***
Furthermore, large equity indices are dominated by multinationals, so it is unclear how much their prospects are tied to the domestic economy. Even if a country avoids a downturn while the major economies are in recession, it is likely that its domestic equity indices will still fall in tandem with their overseas peers.
Commodities. One of the interesting regularities of U.S. recessions is that they tended to be preceded by oil price spikes. (Discussed in Section 5.4 of Interest Rate Cycles: An Introduction.) There might be other examples one could find.
The basic problem with using commodity prices as an indicator is that the markets are global. It is possible that a country's cycle will be independent of the global cycle. Furthermore, China has been a major source of commodity demand, while its domestic economy is somewhat isolated from the rest of the world (if we dis-aggregate its export industries from the rest of the domestic economy).
However, for some commodity producers, commodity prices may be all we need to forecast recessions. The nominal income loss from a price fall may be enough to swamp any other factors in the economy.
Concluding Remarks

This article outlined the inherent limitations of recession forecast models. With these points out of the way, I will be free to dig deeper into some example models.
* For example, see the references given in the previous article on activity-based recession models.
** Using "data mining" in the derogatory sense that it is used in financial markets, and not in the positive sense by "data scientists."
*** As Paul Samuelson quipped, the stock market has predicted nine of the last five recessions.
(c) Brian Romanchuk 2019