
Sunday, March 6, 2016

Discrete Time Models And The Sampling Frequency

Economic data are available in discrete time; that is, they are time series that take the form of samples at a fixed frequency (for example, quarterly, monthly, daily). Similarly, most complex economic models are specified in discrete time. By contrast, models in physics, or things like analog circuits, are defined in continuous time: time series are defined at "all times" (the time axis is a subset of the real line). Economic models have been built in continuous time, but doing so creates many problems. However, one of the issues with discrete time models is the choice of the sampling frequency: too low a frequency means that some dynamics cannot be modelled.

A Very Brief Introduction To Time Domain Sampling


Systems engineers were forced to develop the theory of sampling in the post-war era in response to the rise of digital electronics. All electronics are in fact analog (continuous time), but in a digital circuit, we adopt the convention of ignoring the value of signals except during the short periods when a timing circuit is active. We can therefore get away with treating the system as stepping between discrete points in time.

Chart: Sampling Two Cosine Waves

The chart above illustrates the mathematical operation of sampling. We have decided to sample two time series at a frequency of 1 hertz, that is, every second. The two series are pure cosine waves, with frequencies of 1 and 2 hertz respectively.

The red lines indicate the sample times (t = ..., -1, 0, 1, 2, ...). The sampled discrete time series are defined by the value of the input continuous time series at those time instants.

We immediately see a problem: both sampled series are given by the constant series 1,1,1,... Under sampling, we cannot distinguish these series, nor can we even distinguish them from a pure constant.

A full explanation of this effect requires diving into frequency domain analysis techniques. But to summarise, a sampled series can only reproduce the low frequency components of the input signal's frequency content (its "spectrum"); the range of frequencies that can be reproduced is [0, f/2), where f is the sampling frequency. In other words, with the sampling frequency of 1 hertz in my example, we can only distinguish sinusoids with frequencies less than 0.5 hertz.

Higher frequency components do not disappear; they are "aliased" to a lower frequency. We see this in the example: the two high-frequency sinusoids effectively turn into constant signals, with a frequency of 0.
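To make the aliasing arithmetic concrete, a minimal Python sketch along the lines of the chart (sampling at 1 hertz, with a 0.25 hertz wave added to show a frequency below f/2 surviving):

    import numpy as np

    fs = 1.0                      # sampling frequency (hertz)
    t = np.arange(0, 6) / fs      # sample times: 0, 1, 2, ... seconds

    for f_in in (1.0, 2.0, 0.25):               # input cosine frequencies (hertz)
        samples = np.cos(2 * np.pi * f_in * t)  # value of the cosine at the sample times
        print(f_in, np.round(samples, 3))
    # 1 Hz and 2 Hz both print 1, 1, 1, ... (aliased to a frequency of 0);
    # 0.25 Hz prints 1, 0, -1, 0, ... because it lies below fs/2 and is preserved.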

Effects On Models

The effect of sampling on models is a more complex topic. Since you are throwing out high frequency dynamics, you can end up with ghost dynamics associated with the sampling frequency (or one-half of the sampling frequency); these dynamics correspond to nothing in the original continuous time system.

Luckily for control systems engineers, the physical dynamics of engineering systems are at a much lower frequency than the timing frequency of digital electronics. The fastest moving dynamics in an aircraft airframe are on the order of 10 hertz or so, whereas even 1960s-era digital electronics ran at kilohertz frequencies (thousands of samples per second). The physical components are effectively "standing still" when compared to the electronics, so engineers can get away with some fairly basic approximations when translating a continuous time model to a discrete time model. The high frequency dynamics created by sampling have no interaction with the physical system model, given the large gap in frequency responses.
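As a rough illustration of both points, a minimal Python sketch: discretising the simple continuous time system dx/dt = -x with a forward Euler step, x[n+1] = (1 - Δt)·x[n], tracks the exponential decay when the step is small relative to the one-second time constant, but a large step flips the sign of the discrete coefficient and creates an oscillation at one-half the sampling frequency that has no counterpart in the continuous system.

    import numpy as np

    def euler_decay(dt, steps, x0=1.0):
        # forward Euler discretisation of dx/dt = -x
        a_d = 1.0 - dt              # discrete-time coefficient
        x = [x0]
        for _ in range(steps):
            x.append(a_d * x[-1])
        return np.round(x, 3)

    print(euler_decay(dt=0.1, steps=10))   # smooth decay, close to exp(-t)
    print(euler_decay(dt=1.5, steps=10))   # sign alternates: a "ghost" oscillation
    print(euler_decay(dt=2.5, steps=10))   # the ghost oscillation even grows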

That said, possession of an engineering degree does not mean that you will get it right. I was an anonymous referee on a paper written by three control systems engineering academics which developed a control strategy for a discrete time model. They took a standard control systems laboratory system (an inverted pendulum), sampled the model using a standard approximation, and then applied their control law to that discrete time system.

It looked good (at least to them) in discrete time. However, when I translated what was happening back to continuous time, I found that the tip of the inverted pendulum was whipping around at a significant fraction of the speed of light. Needless to say, I suggested that this was perhaps not the best way of designing control systems.

Returning to economics, we see some problems created by sampling. Economic models are often developed at a quarterly frequency (4 samples per year), which is a lower frequency than quite a bit of economic dynamics. For example, we see a seasonal ripple in many time series (for example, consumer spending around Christmas), which we cannot hope to capture at a quarterly frequency. Importantly, American recessions are often dated as being only several months in length (5-8 months, say), which is hard to translate into a quarterly time series. Within a recession, there is a downward self-reinforcing slide in activity, which is then arrested by the action of the automatic stabilisers. Capturing such differing dynamics in 1-2 time samples seems to be a difficult problem, particularly the action of the automatic stabilisers (which requires a deviation away from trend growth, presumably for more than one time point).

The inability to capture such fleeting dynamics may be one reason why many mainstream models are biased towards assuming that private sector activity is inherently stable. Since unstable dynamics disappear at the standard quarterly frequency, they cannot affect parameter estimates by definition.

The area of theory that faces the greatest problems is overlapping generations (OLG) models. In some classes of these models, a single time point is a "generation," and an individual lives for 2 or 3 periods. This implies a sample period of at least 20 years, which eliminates practically all business cycle dynamics.

I am not a fan of OLG models, as I discussed in earlier articles. Essentially, they are a way to put forth dubious theories to justify austerity policies; since the business cycle is eliminated, it is impossible to properly analyse the effects of fiscal policy and government debt. This is essentially how the mainstream found a way to ignore Functional Finance (which argues the way to gauge the effect of fiscal policy is to look at how it affects the business cycle): completely change the subject.

Not Just For Models

My comments here are not solely aimed at formal "models"; even economy-watchers need to keep this in mind.

The NBER in the United States dates recessions using monthly data. This is certainly a better approach than the "two quarters of declining real GDP" rule that is used in other countries. In a slow growth environment, it may not be that hard to trigger two consecutive declining quarters, even though the slowdown was not material. Quarterly data are often quite rich, but they may only be available with too much of a lag to be useful.

Going above a monthly frequency is also difficult. There are only a handful of series available at higher frequencies (jobless claims, monetary statistics), and they can easily be disrupted by unusual holidays or weather. We need to wait for monthly data -- which naturally tend to smooth out very short disturbances -- in order to confirm what weekly data are telling us.

Information Transfer Economics?

This article is partially a response to claims made by the physicist Jason Smith in the article "More like stock-flow inconsistent."

He summarises his argument as:
TL;DR version: Δ in SFC models [Stock-Flow Consistent models] has units of 1/time and therefore assumes a fundamental time scale on the order of a time step.
As discussed above, any discrete time model has a "fundamental time scale" on the order of one time step; this is not a particular property of SFC models.

He complains that SFC model proponents describe their models as "just accounting," but this mischaracterises how SFC modellers describe their modelling strategy. The initial hope was that the majority of model dynamics would be specified by the accounting relationships, but reality intervened: it was clear a long time ago that "behavioural" relationships matter a lot. Unfortunately, even superficially similar models could end up with quite different dynamics as a result of small changes to behavioural rules. We can use SFC models to gain an understanding of the economy, but we need to be realistic about what can be accomplished with reduced order models.

Smith advances what he calls "information transfer" models, which are largely based on analogies to models in physics. Unfortunately, he buries his discussion in a layer of physics jargon (and jargon of his own), even though there is limited reason to believe that this helps the comprehension of readers who are not physicists. I have attempted to follow his logic, and I have had a hard time discerning the advantages of his suggested methodologies.

He advances continuous time models as being the "true" model for economic dynamics. Since monetary transactions settle at the end of the day, the inherent frequency of transactions is at most daily. Although that is a much higher frequency than quarterly, it is a long way away from a hypothetical continuous time model.

Continuous time models also suffer from other defects.
  • We have very little reason to be certain that solutions to the differential equations exist, or are unique, once we start to incorporate nonlinearities.
  • Transactions, such as transfer payments, take the form of the transfer of non-zero amounts of money at particular times; continuous time models assume that there is a continuous infinitesimal flow between entities. The only way to replicate the large transfers is to invoke dubious concepts like the "Dirac delta function," which is not actually a function; and once we start using these delta "functions," we can no longer clearly characterise what space time series objects inhabit.
  • Very few economic decisions, such as hiring a new worker, can be truly made at a high frequency.
  • Continuous time models require an infinite number of state variables to simulate time delays. Time delays appear throughout economic analysis; even forward-looking agents cannot react instantaneously to new information. (See the sketch after this list.)
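A minimal sketch of the last point, assuming an arbitrary one-year delay: in a sampled simulation, a pure delay is just a buffer of past values, and the number of state variables in that buffer grows without bound as the time step shrinks toward continuous time.

    from collections import deque

    def delayed(series, delay_years, dt):
        # a pure delay in discrete time is a shift register of past samples;
        # its length is the number of state variables the delay requires
        n_states = int(round(delay_years / dt))
        register = deque([0.0] * n_states, maxlen=n_states)
        out = []
        for u in series:
            out.append(register[0])   # the value that entered n_states steps ago
            register.append(u)
        return out, n_states

    for dt in (0.25, 1.0 / 12, 1.0 / 365):
        _, n_states = delayed([1.0] * 5, delay_years=1.0, dt=dt)
        print(dt, n_states)           # 4, 12, 365, ... states needed as dt shrinks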
The most imposing problem facing his suggested continuous time methodology is his embedded belief that we can learn something about aggregate economic behaviour from disaggregated individual transactions. The "microfoundations" belief of mainstream economics argues that we can; but the general failure of the mainstream to develop realistic models based on those microfoundations calls into question that belief. There is little reason to hope that we can capture the properties of macroeconomic aggregates starting from the level of individual transactions.

Once we start to study economic aggregates, we are dealing with aggregated transactions across accounting time intervals. The behavioural relations that we get will end up as approximations, and those approximations may vary based on the sampling frequency.

Concluding Remarks

The sampling frequency of a model matters, and modellers should spend some time questioning the effect of the chosen sampling frequency. That said, it is unclear whether going beyond a monthly frequency would add any value for macroeconomic models. Quarterly frequency models are likely to be acceptable, but they may miss the unstable dynamics around recessions.


(c) Brian Romanchuk 2015

36 comments:

  1. Clear and concise. Accessible. There must be a catch.

    Really though, thanks for this. It clarifies the "PK models debate" for me, considerably.

    Not very much 'information transfer' was going on over there, ironically.

    Marko

    Replies
    1. Thanks.

      Jason Smith had some legitimate complaints about SFC models -- such as the more complex models having too many parameters to be able to fit them to data. However, the people developing those models were aware of that problem, and were not attempting to blindly fit to data. (Conversely, that is exactly what the DSGE modellers are trying to do, so the critique is more valid there.)

      I am not an expert on the entire SFC literature; I am more familiar with what Godley and Lavoie wrote. The way that they are using models is not what Jason Smith expects. The mainstream is attempting to be "scientific" with their models, and uses them in a manner closer to what he expects.

  2. Another in-depth look, this time at sampling theory. Thanks, and thanks for the great example of aliasing.

    You mentioned the Christmas spending peak without any elaboration of the high frequency aspects of an impulse signal (a "shock" in economic terms). By the time that we record the economic effects of a "shock", it has been seriously muffled (or aliased).

    On the other hand, the Christmas shock has some characteristics that should make it available for a calibration tool. First, it is predictable that a seasonal peak will occur. Next, the products sold have been made BEFORE Christmas so the economic effects of making the products should be visible before the actual peak (if the sampling rate is high enough). And third, the Christmas peak represents a sizable transfer between those who plan for the peak (by building inventory) and those who consume more on impulse (the last minute shopping represented by a Christmas peak).

    Off subject, but the Nick Rowe post "Does Saving Count Towards GDP" and the entire comment thread was enlightening to me. GDP is more income-based than I realized, and less a measure of transfers of labor and material. I will tweak my thinking a little because I now think there is less relationship between money supply and GDP than I was assuming.

  3. Brian, I provide a means of changing the sampling period for G&L's SIM model on an embedded interactive spreadsheet here.

    I also made the point you're making here (aliasing) in a comment on Jason's previous post here, which I'll repeat here for convenience:

    Significance of choosing Δt appropriately:

    Starting with a continuous time model:

    w = pi/5

    x'' = (-w^2)*x

    discretizing with Δt = 10 gives x[n] = x[0], for all n (a constant system: i.e. no dynamics), but then you'd totally miss that it's actually a harmonic oscillator.
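    (A minimal numeric check of the above, assuming the initial conditions x(0) = 1 and x'(0) = 0, so that the continuous solution is x(t) = cos(w*t): sampling that solution every 10 time units (exactly one period) returns the same value forever, while a faster sample rate reveals the oscillation.)

    import numpy as np

    w = np.pi / 5                       # the period of cos(w*t) is 2*pi/w = 10
    t_slow = 10.0 * np.arange(5)        # Δt = 10: one sample per full period
    t_fast = 1.25 * np.arange(9)        # Δt = 1.25: eight samples per period
    print(np.round(np.cos(w * t_slow), 3))   # 1, 1, 1, 1, 1 -- looks constant
    print(np.round(np.cos(w * t_fast), 3))   # 1, 0.707, 0, -0.707, -1, ... the oscillation is visible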

  4. Aliasing is visually demonstrated in this 4 minute video showing the wagon wheel effect:

    https://www.youtube.com/watch?v=VNftf5qLpiA

    During an early period of the financial crisis in the United States the Treasury had a special program to sell securities to assist the Federal Reserve in draining excess bank reserves. Based on my research into flow of funds data and papers describing this program the flow of funds data does not capture the peak amount of Treasury assistance under the program. Also I think the mechanism which allows banks to expand the aggregate bank balance sheet without increasing reserves might be determined by studies incorporating data for daylight credit operations of central banks. In other words I suspect high frequency data is necessary to research the interaction between a central bank and the aggregate bank with respect to subtle features of how the float of financial instruments tends to grow over time without necessarily creating net new bank reserves on an overnight basis.

    Replies
    1. The intraday flows in the inter-bank market are insanely huge. In Canada, I believe one day's flows end up being a significant portion of annual GDP. (An MMT author had an article on this; I forget who.) Trying to sample that would be brutal, but it is also unclear how much it matters. At the end of the day, pretty much all the Canadian banks get their settlement balances within a hair of $0.

      But even with end-of-day balances, there can be big movements, such as at year end and month end. Those dynamics would probably disappear at a low sampling frequency.

      At the same time, it is unclear whether this matters outside the banking system. From the perspective of the real economy, all of these inter-bank flows are largely a game played between the banks and the Fed, and it is not going to affect anyone's ability to get financing. (There is an incentive to avoid certain maturities in the commercial paper markets, such as around year end, but that is just a technicality to keep Treasurers busy.)

    2. I agree modern banks are not reserve constrained and the primary reason appears to be that the central bank must ensure intraday payment clearing. This means banks in aggregate create deposits from "thin air" and can borrow back those deposits to settle end of day reserve positions. When markets do not function to allow settlement of end of day positions the central bank provides reserves as lender of last resort. It would be interesting to consider the policies and mechanisms which render an aggregate bank sector "reserve constrained" versus "unconstrained" with respect to expansion of the aggregate bank balance sheet using a fixed pool of reserves. I do not find the MMT logic that Fed/Treasury must add reserves before draining reserves to be compelling although it may be that intraday Fed credit is extended to Primary Dealers to purchase Treasuries at auction, then the sale of Treasuries absorbs the added reserves. However if Treasury can match its spending to bond sales and taxes there is no change in reserves and no need for Fed to add liquidity. Sorry ... this may not be related to frequency of data theme!

    3. The Treasury could operate without coordination with the Fed, but the problem is that would be a bad idea. The Treasury would have to do all the work, and the Fed almost nothing. The Fed would probably have to rely on paying interest on reserves to hit its interest rate target (a corridor system), since it could not engage in large open market operations. Furthermore, since the Treasury cannot control when cheques get cashed, there would still be times when banks get squeezed by mistake.

      In any event, all those details disappear in any simplified model that we hope to solve.

  5. "There is little reason to hope that we can capture the properties of macroeconomic aggregates starting from the level of individual transactions."

    As a reader of Jason's blog, I think that he would agree. He certainly eschews behavioral microfoundations. He assumes that in general the economy is near a state of information transfer equilibrium, recessions being an exception. He does not get there by considering individual transactions.

    But maybe I am wrong. I wondered why he and Ramanan were having at it about discrete vs. continuous time, since the question at issue, I thought, was the stable state reachable in either way. {shrug}

    Replies
    1. But if you are not looking at individual transactions, you do not have any reason to think that things are operating in continuous time.

      I am unsure what they were arguing about originally; I was objecting to comments about the assumptions about time constants.

    2. To be sure, Jason frames arguments using differential equations. But since the results are admittedly only approximate, I view that as a matter of convenience.

    3. I admit I do not understand Jason when he invokes implicit time scales.

  6. Brian, in addition to that previous spreadsheet (call it SIM2) I link to above, I also have a spreadsheet to implement G&L's SIM model in a more traditional SFC manner. Call the traditional SFC one "SIM."

    The two spreadsheets are precisely the same (both matching G&L's results) when the sample period Ts = 1. I call this period "1 year" to give it a name.

    Interestingly there are small differences (with default parameters α1=0.6, α2=0.4, θ=0.2) between the two models when Ts is changed. That's because there's a difference between SIM and SIM2 in how the sampling period is changed. Assume the starting sample period is Ts1 = 1 year, and the new desired sample period is Ts2.

    For SIM I used Ramanan's suggested method of scaling α2 by Ts2/Ts1; something like this:

    α2(Ts2) = (Ts2/Ts1)*α2(Ts1)

    After some initial confusion on my part about what else it was assumed I was doing in SIM (which, I'll add, made perfect sense to assume from Ramanan's point of view), I was able to get this to work pretty well, with some restrictions: in G&L they specify:

    θ < 1 (I assume 0 <= θ as well)
    0 < α2 < α1 < 1

    Thus starting with the above default parameters we have the restriction that Ts2 < 1.5*Ts1 = 1.5 years.

    And again, assuming these default starting parameters, the only problem with this is that the time constant (call it Tc) of the system changes when α2 is changed. It doesn't change very much or by a big percentage, but it does change. It's possible to see a big percentage change in Tc with different valid parameters, but in this case the absolute value of the change is fairly small. Why does it change? Well, I preferred to write both models in a standard state space formulation. I won't bother with the "measurement" equations here (producing Y,T,YD and C), but the dynamics are described by:

    H[n+1] = A1*H[n] + B1*G[n+1]

    Where G is taken as a rate expressed in dollars/year. Doing the algebra reveals:

    A1 = 1 - θ∙α2/(1 - α1 + θ∙α1)
    B1 = 1 - θ/(1 - α1 + θ∙α1)

    Thus with Ts = Ts1, the time constant is Tc1 = Tc(Ts1) = -Ts1/log(A1) years. Which is about 6 years with Ts=Ts1. So what happens to Tc when we change Ts?

    Tc2 = Tc(Ts2) = -Ts2/log(1 - θ∙α2∙(Ts2/Ts1)/(1 - α1 + θ∙α1))

    This ranges from 5.7 years (with Ts=1.5) to 6.5 years (as Ts -> 0).

    Not a big change (13% total). With other valid initial parameter settings though (all of them close to 1) this can be more like a 690% difference in Tc as a function of Ts.
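    (A minimal numeric check: plugging the default parameters into the formulas quoted above reproduces the 5.7 to 6.5 year range.)

    import numpy as np

    alpha1, alpha2, theta = 0.6, 0.4, 0.2
    Ts1 = 1.0                                   # original sample period, in years

    def Tc(Ts2):
        # SIM with alpha2 scaled by Ts2/Ts1, per the formulas above
        A = 1 - theta * alpha2 * (Ts2 / Ts1) / (1 - alpha1 + theta * alpha1)
        return -Ts2 / np.log(A)                 # time constant in years

    for Ts2 in (0.01, 0.25, 1.0, 1.5):
        print(Ts2, round(Tc(Ts2), 2))           # approx 6.5, 6.37, 5.99, 5.72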

    Replies
    1. So *I think* part of what Jason was getting at was this system time constant variation with choice of sample period.

      That's the price for keeping "stock-flow consistent" over all choices of sample periods.

      In SIM2, I take a different route: I try to preserve Tc as a constant, invariant to the choice of Ts, and I accomplish that goal. How do I do it? Rather than scale α2 I do the following:

      A(Ts2) = A2 = exp(a*Ts2) where a = log(A1)/Ts1

      The value a could be used in a continuous time version of the model:

      dh/dt = a*h + b*g(t)

      In terms of A1 and B1

      A2 = A1^(T2/T1)
      B2 = B1∙(A2-1)/(A1-1)

      This works great, except that I no longer have

      (1) ΔH = G - T

      for Ts =/= Ts1. That's because my G is the gov spending RATE in dollars/year and T = rate of tax dollars collected in dollars / year. Multiplying by Ts2/Ts1 doesn't correct the problem (i.e. (Ts2/Ts1)*(G - T)). The real problem is I take T to be samples from a continuous time rate curve (which will have the form G*(1 - exp(at)) for t>0 because G is a step function). In contrast SIM (the SFC approach) does the equivalent of assuming T is constant over each sample period.

      So anyway, perhaps that really doesn't amount to a big deal, but that's the crux of the issue as I see it: you either get true sample period invariance (SIM2) or you get equation (1) invariance (SIM).

      In addition, with SIM2 there's no upper bound on the new sample period like there is with SIM, so starting w/ the default parameters, you can have Ts2 > 1.5 for SIM2. (Once you get Ts2 > 6.5 in SIM, you start to get oscillations).

      Make sense?
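      (A minimal numeric check of the re-discretisation rule described above: with A2 = A1^(Ts2/Ts1), the implied time constant -Ts2/log(A2) works out to the same value for every sample period, using the default parameters.)

      import numpy as np

      alpha1, alpha2, theta = 0.6, 0.4, 0.2
      Ts1 = 1.0
      A1 = 1 - theta * alpha2 / (1 - alpha1 + theta * alpha1)

      for Ts2 in (0.01, 0.25, 1.0, 1.5, 5.0):
          A2 = A1 ** (Ts2 / Ts1)                   # exact re-discretisation of the same pole
          print(Ts2, round(-Ts2 / np.log(A2), 2))  # approx 5.99 every time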

    2. TL;DR version: I have two versions of G&L's SIM model:

      SIM: with stock-flow consistency, but not invariant to the sample period.

      SIM2: not stock-flow consistent, but invariant to the sample period.

      Both produce the exact same results at sample period = 1, and the exact same results in steady state for any sample period.


      It seems to me that you can't have everything.

    3. (actually *I think* you CAN have stock flow consistency that's invariant to the sample period... but you won't match G&L's results at Ts=1 if you do... not unless you resort to something awful like making a continuous time model of zero-order hold versions of the T function... or something equivalent)

    4. I do not have time to look at this right now (time to sleep...), but there would be changes in behaviour as a result of changing the sampling period. In order to keep the output the same with different sampling periods and the "same" parameters, the flows would have to be at a constant rate. Since the flows in the SFC models react to the state, how you sample will affect the results. (That is, monthly flows will not be exactly one-third of quarterly flows when you run a simulation. If flow rates are constant, then monthly would be exactly one-third of quarterly.)

      In order to keep fidelity to a single continuous time model, you would need to recalculate all discrete time parameters each time you change the sampling frequency. (If you start from discrete time, you need to go to the continuous time, and then use the continuous time to calculate a second discrete time; you could not go directly to the discrete time model without introducing approximation errors.) You should be relatively close, so long as there are no dynamics that are being aliased by the sampling operation.

      For models with realistic parameter settings, you would probably start seeing aliasing effects at around the quarterly frequency (my guess). But if you compared weekly and monthly sampled versions of a continuous time model, the discrete time responses should end up looking like the continuous time version, with very small errors.

      What this means is that if you fitted a model to monthly data, you should theoretically redo the fitting if you want to go to quarterly, rather than rely on an approximation. However, since your fittings are not going to be extremely accurate in the first place, that source of error is going to be so small that it is safe to ignore it. This is only an issue for theoretical discussion, where we assume that we have access to extremely accurate models. Since nobody believes that we have those accurate models, most economists are correct in largely ignoring this issue. Electrical engineers actually have accurate models and run into a lot of sampling problems, and so they developed the theory for handling this.

    5. Having thought about it, it comes down to an embedded assumption Jason Smith is making. He assumes that there is an underlying true continuous time system, and that all discrete time models reproduce that underlying continuous time system.

      One could argue that in the real world, there is a fixed set of high frequency transactions, and that we could theoretically change the accounting period, and end up with coherent results.

      But the point is that we are talking about mathematical models, not the real world. There is no assurance that if we start off with a closed form quarterly model, that we can convert it to a coherent monthly closed form model. Since every economic model is going to have mismatches with reality, this inability to change frequencies and have the exact same results is not a big deal. It is to Jason Smith -- because he is imposing analytical assumptions that he makes as a physicist.

      From what I have seen, people have a hard time transitioning from physics training to macroeconomics.

    6. Sure, but like I wrote above: if you make a complicated enough model, you can have everything. In lieu of that you can have a simple yet good continuous time model that's sample period invariant AND stock flow consistent, but that's not what G&L do. Or you can choose: SFC and nearly sample period invariant, or sample period invariant and nearly SFC. It's the modeler's choice.

      In some cases (perhaps) sample period invariance is more important, and in some cases not. It doesn't hurt to be aware of the various trade-offs.

    7. Also the problems I point out above I wouldn't describe as "aliasing." I'd describe them as (inadvertently?) changing the system dynamics in G&L's approach, when changing Ts. Why? Essentially they're only doing "compounding" at the sample times they choose, and ignoring any others. For a lot of situations this doesn't matter too much. For others it can be devastating (e.g. harmonic oscillation or exponential growth).

      200% / year growth compounded once in 1 year = 3x
      200% / year growth compounded continuously = 7.4x
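      (A quick check of the compounding arithmetic, filling in a few intermediate cases.)

      import numpy as np

      rate = 2.0                                   # 200% per year
      for n in (1, 4, 12, 365):                    # compounding events per year
          print(n, round((1 + rate / n) ** n, 2))  # 3.0, 5.06, 6.36, 7.35
      print("continuous", round(np.exp(rate), 2))  # 7.39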

    8. OK, I will write up my comments as a new article. This article did not take on all of his arguments as directly as I should have, partially as I had no way of explaining myself without dragging in a boatload of systems theory. I think I can better explain the intuition with a simple example, and once again, no serious equations required. (It's related to your compounding example.)

      In any event, I still have a massive theoretical objection against his assumption that we need to look at continuous time. That objection makes it hard for me to discuss his arguments -- since I object to his starting point, following his later logic becomes even more difficult.

    9. Brian, the aliasing issue is related to assuming an underlying continuous time problem. That's why I say the problem I discuss above isn't so much an aliasing problem: it's not about sampling an underlying system, it's that an inherently discrete time system (compounding at discrete dates) will have dynamics dependent on the sample times (compounding times). In the simple case of SIM, this means different time constants... and a limitation on alpha2 in terms of alpha1 that prevents some sample times from being considered (make alpha2 too big, and A goes negative in H[n+1] = A*H[n], and you get oscillations at pi, i.e. a pole along the negative real axis).

      I get your objection (I think). I'm not taking a stand one way or another. I'm just trying to point out the problem Jason was getting at *the way I understand it.* I may not be correct. Actually I think the way he expressed it may be more general than sample period invariance, but that's the way I came at it.

      I agree, most of the time it's a minor issue, or a non-issue. And while aliasing is certainly a problem, that's not the core issue he was getting at (although it may be related, in a way I'm not seeing).

      I came at this with a challenge: to meet all three of these requirements (for my own entertainment):

      1. Reproduce G&L's SIM results at Ts=1

      2. Satisfy all of G&L's equations between any two sample points (not necessarily just the ones they used)

      3. Remain sample period invariant.

      My conclusion is that's not going to happen unless you do something ugly like build a continuous time model of a ZOH system. In lieu of that, assuming all the compounding dates happen precisely at G&L's sample times may be just fine for you. From Jason's point of view, I can see where that's a drawback (why would he want sample time dependence?... he's trying to get AWAY from all the messy details of complex human behavior... not get further into the weeds).

    10. Put another way:

      In the SFC modeling world it appears to me it's better to speak of "compounding times" rather than "sample times." The SFC modeler is not necessarily "sampling" an underlying continuous system.

      In other worlds that's not necessarily the case (perhaps growth models, for instance? Where you're actually trying to fit parameters to empirical data?... I don't know, I'm just guessing here).

      You'll notice that Jason spends close to 50% of his time comparing his models with empirical data and making forecasts. THAT is the interesting part of his approach to me: rather than attempting to build "The Matrix" simulation of an economy... he's saying "You're assuming complexity there, where potentially none exists if you look at things from a certain scale." He's also saying "You have to crawl before you can run or walk, and I'm trying to provide a 1st order useful model of the real world... crawling essentially." (Those quotes of mine are paraphrases BTW, based on my understanding of his framework.)

      So I think there's a wide chasm between PKE and SFC modeling and what he's trying to do in general. That's why I was so curious to see his take on PKE ... and I'm disappointed he got derailed on what amounts (in my mind) to a technical issue. Other readers of Jason's blog (like John Handley) were not keen on going there in the first place, and were all too happy to see him abandon the project. (If you haven't seen John's blog BTW, you should check it out: he's a truly amazing 15-year old!)

    11. Here's one big reason why I'm intrigued by Jason's approach: it's not necessarily about all the details of what he's doing... it's because he's one of the few econ bloggers who's ever given me a satisfying answer to this question:

      "Jason, what evidence would convince you that you're wrong?"

      He was able to provide a clear explanation of that for me, and added that he's not interested in adding "epi-cycles" to save his "research project" and would prefer to write a post mortem and abandon the whole framework if the data didn't go his way.

      I've asked that same question to a lot of other people (Scott Sumner, Nick Rowe, Roger Farmer, John Cochrane, Mark Sadowski, Marcus Nunes, Stephen Williamson... and others), and I never got something that clear and straightforward in response. Cochrane actually got quite offended!

      A falsifiable framework in econ! Who knew? (It probably helps that it's not his day job!... a remark on my part that I'm sure Cochrane would identify as a "Bulverism.")

    12. I meant to write "research program" rather than "research project" above. That's the sense in which Jason (and I think Noah Smith too) has used it (i.e. Popper, Kuhn and Lakatos).

    13. Falsifiability is extremely important; this is my main complaint about mainstream macro.

      His model outputs look interesting, but he buries what he is doing under a lot of jargon. You see the charts, but the mathematics behind them are buried in an appendix somewhere. When I tried tracing through his code sample, my reaction was - why are you doing that? Unless he drops the physics analogies - which are inherently meaningless - and just writes down exactly how he calculates his model output, I felt that I did not want to spend my time on it. Mathematics was designed to be easy to understand, without requiring verbal hand waving. Unfortunately, DSGE modellers do exactly the same thing.

    14. Brian, I'm no PhD in physics, nor am I an economist. I'm an engineer, and not a particularly bright one at that. However I was able to understand his paper (link to it in his tag line), at least the 1st part, but I would guess all of it since most of it came straight from his blog.

      I'm not saying you don't have a valid complaint. I had to put a bit of effort in, but once I did, it's actually a very simple idea. I won't try to say it's ALL simple, but one of the core ideas that's used over and over again is very simple, and it's this:

      Given a system in which there are:

      1.) Two process variables, call them X and Y

      2.) Underlying complex "micro-states" which look random from a macro scale, and which give rise to X and Y (think of X and Y as "emergent" from these complex micro-states).

      3.) A plausible communication channel between X and Y

      You can formulate a simple relationship between X and Y based on the idea of natural information equilibrium (see Fielitz & Borchardt's paper on this, linked to in the right hand column of Jason's blog).

      This is not necessarily a superior way to go about things if you know more information. For example, you can use the above to derive the ideal gas law... but that's just an example. We have more knowledge about what's going on with gases, which makes the information transfer (IT) derivation unnecessary. There's no reason to use IT for such a system, except as an example. IT is handy when you DON'T have anything else to go on (except those three conditions I list above). So what does it lead to? Well, without the math, it leads to power laws and exponential laws between X and Y. E.g.

      (X/X0) = (Y/Y0)^k

      You can add the concept of an "abstract price" (or a detector in F&B's terminology):

      P = dX/dY

      I won't list the exponential laws, but they're equally as simple (they represent the case of a "fixed source" rather than a "floating source" in F&B's terminology... or "partial equilibrium" vs "general equilibrium" in Jason's).

      You can solve in terms of P if you'd like:

      P = (k*X0/Y0)*(Y/Y0)^(k-1)

      etc.

      You can chain such "process variables" together, and do other things, but in the end it's just power laws!

      Generally you have to fit the few parameters such models require from data. Sometimes you can derive them from the problem (k for example, which is called the "information transfer index").

      That's a good chunk of all that's going on right there! How do you check? Use FRED. Sometimes Jason's "models" don't check out. He hypothesizes a relationship. He doesn't KNOW that it's true. But it's easy to check.

      There are some that do seem to check out. k (for example) can be expressed as function of time, and he can recreate time series data going back decades in several countries and do out of sample predictions which seem to do markedly better than 40+ parameter Fed DSGE models. In all fairness those DSGE models are attempting to do much more than the limited scope of Jason's models typically shoot for, but Jason keeps his parameter count very low ... 2 or 3 is typical.

      If you actually want to understand, I recommend this post (it even has an early version of a varying k, which does pretty well).

      If you understand that, you can go a long way on Jason's blog. There are many other ideas... so I don't want to sell it short, but the core of a pretty big chunk of it is dead simple.
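      (A quick numeric sketch, with made-up illustrative parameter values, showing that the "abstract price" formula quoted above is just the derivative of the power law.)

      import numpy as np

      X0, Y0, k = 100.0, 50.0, 1.5                # illustrative values, not fitted to anything
      Y = np.linspace(10.0, 200.0, 1000)
      X = X0 * (Y / Y0) ** k                      # (X/X0) = (Y/Y0)^k

      P_numeric = np.gradient(X, Y)               # dX/dY by finite differences
      P_formula = (k * X0 / Y0) * (Y / Y0) ** (k - 1)
      print(float(np.max(np.abs(P_numeric - P_formula))))   # close to zero (largest at the grid edges)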

    15. This comment has been removed by the author.

    16. Also, I just made another version of SIM (SIM3) which isn't completely filled out yet, but looks to satisfy my three objectives above: match G&L at Ts=1, sample period invariance, and satisfy G&L's equations between any two points. It wasn't hard actually. We'll see if it hangs together for the rest of the outputs (Y, YD and C).

    17. Re: the post referenced.
      I looked at it very quickly, and it looks pretty much like what you would get if you regressed the log of the money supply versus the log of GDP.

      Of course, you get some kind of fit; if you plot the money supply as a % of GDP, it used to be stable (pre-QE). Does this tell us about information transfer? Not really. It just tells us that people want to keep their money holdings stable versus their incomes. That is exactly what SFC models predict, and it explains why - without needing to invoke mystical concepts like "information transfer".

      Maybe I am missing something, but it is easy to churn out regression results like that. The problem is that the random regression methodology is frankly terrible out of sample, which anyone who has ever had to maintain such models in real time can tell you.

    18. Brian, I won't question your experience with regressions. I can tell you that he has some good out of sample results. For example, with the price level: fitting his parameters from data between 1955 and 1990, he had a good fit out of sample from 1990 till today (in the US). I'll try to dig up a link.

      But the model makes predictions as well: the k parameter varies very slowly. k for Japan is near 1, and has been for some time. k for the US is a bit above 1. k for Canada is just starting to sink enough below 2 that he's predicted that Canada will start to undershoot their inflation target over the next decade. (k=2 means the QTM holds basically, but k=1 means this is not the case).

      If you're interested, I'm sure he'll dig up the best examples he has. He occasionally puts a bunch of updates in a single post. I can't find the one I was thinking of right now ... but here's a page of some predictions.

      Also, don't think of the "information" in "information transfer" as the meaning content of the symbols. Think of it in the technical Shannon sense. Fielitz & Borchardt define an extension of Shannon's definition and call it the "natural amount" of information (which covers the case of there only being one kind of symbol as well) in their Appendix A. But again, this concept of information doesn't involve interpreting what the information means... no more so than the volume of a gas interprets the meaning of the information transferred to it by applying mechanical work to the gas. It's kind of a generalization of the concept of information.

    19. Also, if you don't like the information transfer angle, there are other ways to see it. For example one of the fundamental differential equations he uses:

      P = dD/dS = kD/S

      is the simplest differential equation you can form that's consistent with the long term neutrality of money (it's homogeneous with degree zero). D=demand and S=supply. Irving Fisher included such an equation in his 1892 thesis (only slightly less general: i.e. w/ k=1).

      Also, economist Gary Becker explored some ideas related to what Jason is doing in the early 1960s, showing that supply and demand curves can be obtained from the feasible space of agent budget choices w/o making any behavioral assumptions other than agents will behave seemingly randomly, and tend to fill the entire space.

      What I find intriguing is the concept of removing human behavior from the picture. Treating humans as mere mindless atoms bouncing around doing all manner of things it's possible for us to do... no need for complex game theory optimizations, micro-foundations, all-knowing rational agents or million parameter behavioral models. So, it'd be nice if he could get some decent 0th order or 1st order results by sweeping all that garbage to the side, wouldn't it? Lol. We'll see I guess.

    20. In Jason's formulation, it's actually UNCOORDINATED agent behavior that's tractable. It's when agents coordinate that information transfer is "non ideal" (e.g. dD/dS > kD/S) and only limited things can be said about it (usually what you can say is things go wrong... it's only the very very rare case when coordination (such as a perfectly realized expectation) improves matters any).

      Sorry for all the comments! It took me a while to get up to speed with the concept, but once I did (parts of it at least) seem so simple! Plus I'm intrigued that the idea is general enough that I might actually be able to find a use for it in a completely unrelated discipline, like one of Jason's readers did (Todd Zorick).

    21. Sorry, an example of non-ideal is p ≡ dD/dS ≤ k D/S

  7. This comment has been removed by a blog administrator.

