A *non-causal* model is a model whose output depends upon future values of its inputs. In the absence of access to a time machine, such a model cannot be implemented directly in the real world. In practice, a non-causal model's output is “revised” as new data points are added to the input series. The result is that we cannot use the latest values of the series to judge the quality of the model's previous “predictions.”

The use of a non-causal model might be acceptable for the analysis of a historical episode, or an earlier economic regime (such as the various Gold Standard periods). Since new data will not arrive, there will be no revisions.

*(Note: this is a sub-section from my manuscript, which I hope was not previously published. I have edited it to be a stand-alone text, but this editing may have been imperfect.)*

## A Non-Causal Business Cycle Model

The figure above shows a simple non-causal model, generated by a technique often referred to as "standardisation": each input series is converted to a new series in which each data point is the number of standard deviations from the series mean. The top panel shows the "standardised" 2-/10-year slope, as well as employment (deviation from trend). The two series are then averaged to create a composite indicator (bottom panel).
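The construction can be sketched in a few lines. This is only an illustration: the random series below stand in for the actual slope and employment data used in the figure, and the function names are my own.

```python
import numpy as np

def standardise(series):
    """Convert a series to z-scores: deviations from the mean, in
    standard-deviation units. Note that the mean and standard deviation
    are computed over the FULL sample, which is what makes this
    transformation non-causal."""
    return (series - series.mean()) / series.std()

# Hypothetical stand-ins for the inputs (the post uses the 2-/10-year
# slope and detrended employment; random data is used here).
rng = np.random.default_rng(0)
slope = rng.normal(1.0, 0.8, size=120)       # stand-in for the yield curve slope
employment = rng.normal(0.0, 2.0, size=120)  # stand-in for employment deviation

# Composite indicator: the simple average of the two standardised series.
composite = (standardise(slope) + standardise(employment)) / 2.0
```

By construction, each standardised series has mean zero and unit standard deviation over the sample, so the composite is centred on zero as well.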

The issue of causality arises because the entire period was used to calculate the mean and standard deviation. This was done to simplify the explanation, but it is effectively cheating: we are using future information to calculate the standardised variable.

In order to create indicators that could be used in real time, we need to use more complicated calculations, where the output at each time period depends solely on data available at that time period. Such calculations are somewhat non-standard, and often do not appear in the statistical packages used by analysts. In the case of standardisation, the usual technique is to calculate the mean and standard deviation over either a moving window (analogous to a moving average), or a “stretching” (expanding) window that runs from the beginning of the analysis period to the latest data point. These are not difficult to implement (and may be built into time series analysis packages), but the calculations are harder to describe in text.
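A minimal sketch of the causal version, using `pandas` (the function name and defaults are my own choices, not a standard library routine):

```python
import pandas as pd

def causal_standardise(series, window=None):
    """Standardise a series using only data available at each date.

    If window is None, a "stretching" (expanding) window from the start
    of the sample is used; otherwise, a moving window of the given
    length. Either way, the mean and standard deviation at each date
    are computed from data observed up to and including that date.
    """
    win = series.expanding(min_periods=2) if window is None else series.rolling(window)
    return (series - win.mean()) / win.std()
```

For example, `causal_standardise(pd.Series([1.0, 2.0, 3.0, 4.0]))` produces `NaN` for the first observation (no history to standardise against), and the final value is measured against the mean and standard deviation of all four points.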

If we look at the above series, we see that the mean and standard deviation of the chosen variables were fairly stable. This means that if we used either a stretching or a moving window calculation, the results would be qualitatively similar to the figure above. It is only during an initial training period that the mean and standard deviation would depart greatly from their full-sample values.

## Trending Series Create Bigger Problems

We can find cases where causality matters much more. The figure above shows what happens when we standardise the 10-year Treasury yield over the period 1970-1982. As is well known, bonds suffered a secular bear market during this period (yields rising). The standardised variable shows the yield trading about one standard deviation below its mean in the early 1970s, which would be interpreted as being quite expensive. However, this expensive valuation is dependent upon knowing the average yield for all of 1970-1982, which means we would have needed to forecast the future path of interest rates to know that they were below average.
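The effect can be demonstrated with a stylised trend. The straight-line yield path below is a stand-in for the actual (far noisier) 1970-1982 data, purely to make the mechanics visible:

```python
import numpy as np
import pandas as pd

# Stylised stand-in for the 10-year yield, 1970-1982: a steady secular
# rise from 6% to 14% over 156 months (the real path was much noisier).
yield_10y = pd.Series(np.linspace(6.0, 14.0, 156))

# Non-causal: full-sample mean and standard deviation (uses future data).
z_full = (yield_10y - yield_10y.mean()) / yield_10y.std()

# Causal: expanding window, using only data observed up to each date.
exp = yield_10y.expanding(min_periods=12)
z_causal = (yield_10y - exp.mean()) / exp.std()

# Early in the sample, the full-sample z-score is deeply negative (the
# yield looks low, i.e., bonds look "expensive"), while the causal
# z-score is positive (the yield is at the top of its observed range).
print(z_full.iloc[12], z_causal.iloc[12])
```

The two calculations give opposite readings at the same date: the non-causal version "knows" yields will keep rising, while a real-time observer in the early 1970s would only see yields at the top of their history to date.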
