
Sunday, March 13, 2016

Models Are Not Frequency Invariant

If we create multiple discrete time models of the same mathematical system, and those models run at different frequencies, the models will produce different outputs when those outputs are converted back to a common frequency. For example, an economic model which runs at a monthly frequency will produce different outputs than a quarterly model, even after the monthly time series are converted to quarterly. This is unfortunate, but it is generally a small problem relative to the other issues economic models face. We only need to worry about this effect if we have an extremely low frequency, as seen in some overlapping generations (OLG) models.

I doubt that there is a whole lot of academic research on this topic within economics. This is entirely reasonable, since it is a relatively unimportant effect. The only case where it needs to be heeded is the previously mentioned case of OLG models.

Even in systems theory, I cannot recall many treatments of this topic. This is despite the fact that real world engineering systems are now largely developed using discrete time digital controls. Once again, my feeling is that the observations I am making here are viewed as relatively obvious -- we know that we are losing information when we sample a continuous time system (which converts it into a discrete time system). The only reason this subject has come up is as a response to the writings of Jason Smith, a physicist, which I first addressed in the article "Discrete Time Models And The Sampling Frequency." I hope that this article answers some of the questions that were raised in readers' comments.

Terminology and Assumptions

Within systems theory, we refer to the concept of a state variable: a vector of time series which captures all of the dynamics of the system. (Other time series can be constructed as functions of the state variable.)

For simplicity, we will assume that the model is time-invariant, and that the state variable is finite dimensional. Imposing these assumptions is not just laziness; otherwise, we are rapidly in a position where we can say very little about the properties of the system we are talking about. Additionally, I will only refer to conversions between monthly and quarterly frequencies; the reader is free to generalise the discussion to other frequency conversions.

Basic Example - Compound Interest

We can convert frequencies for systems that do not have external inputs.

The simplest example to look at is the case of a compound interest model. The state variable consists of a single time series -- the bank balance b(t), which starts out at $100.

The monthly model is generated by having the bank balance grow by 1% per month. Written out:

b(t+1) = (1.01) b(t);  b(0) = 100.

We can then create an equivalent quarterly model. It is a similar model, except that the growth rate per period is larger. The only trick is that we cannot use the approximate relationship that the quarterly interest rate is triple the monthly rate; we need to take into account compounding over three periods. The correct quarterly growth factor is (1.01)^3 = 1.030301 (an interest rate of about 3.03%, not 3%).

The model is:
b(τ+1) = (1.030301) b(τ), b(0) = 100.

Note the two time variables are not on the same calendar time scale. The output at t=3 of the monthly model corresponds to the τ=1 period for the quarterly model.
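To make the correspondence concrete, here is a minimal Python sketch (purely illustrative) that steps both recursions forward and confirms that they agree at the quarter boundaries:

def monthly_balance(months, rate=1.01, b0=100.0):
    # b(t+1) = 1.01 b(t), applied 'months' times
    b = b0
    for _ in range(months):
        b = rate * b
    return b

def quarterly_balance(quarters, rate=1.01 ** 3, b0=100.0):
    # b(tau+1) = 1.030301 b(tau), applied 'quarters' times
    b = b0
    for _ in range(quarters):
        b = rate * b
    return b

for q in range(1, 5):
    print(q, round(monthly_balance(3 * q), 6), round(quarterly_balance(q), 6))
    # the two columns are identical at every quarter boundary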

More General Models

Under the assumptions, we can describe any monthly model with the dynamics:

x(t+1) = f(x(t)), x(0) = x_0.

(Note: I have not figured out how to embed mathematical notation into these posts, since I want to avoid confusing readers with unnecessary mathematics. The "x_0" should be x with a subscript 0. Remember that x(t) is a vector of time series, and not just a single time series.)

If the system is linear, we can write:

x(t+1) = A x(t), x(0) = x_0,
where A is an N x N matrix.

We can generate a quarterly model from the monthly by creating a new dynamic system:

y(τ+1) = g(y(τ)) = f(f(f(y(τ)))), y(0) = x_0.

In the linear case, we have a new dynamics matrix, which is the original matrix A raised to the third power. It is straightforward to verify that y(τ) equals x(3τ); that is, the quarterly model reproduces every third value of the monthly model.
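As a sketch of the linear case (the dynamics matrix below is purely illustrative), we can verify numerically that the quarterly model built from the cube of A tracks the monthly model at every third step:

import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 1.01]])            # illustrative monthly dynamics matrix
x0 = np.array([1.0, 100.0])

A_q = np.linalg.matrix_power(A, 3)     # quarterly dynamics matrix

x, y = x0.copy(), x0.copy()
for quarter in range(1, 5):
    for _ in range(3):                 # three monthly steps
        x = A @ x
    y = A_q @ y                        # one quarterly step
    assert np.allclose(x, y)           # the states coincide at the quarter boundary
print("quarterly model matches the monthly model at every third step")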

A conversion going from quarterly to monthly is more complicated, and cannot always be done. In the linear model case, we need to be able to take the cube root of the matrix A (take the matrix to the power 1/3). There is no guarantee that we will be able to do that operation, in which case we cannot generate a time-invariant linear monthly model.
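If one wants to attempt the reverse conversion numerically, a fractional matrix power is one option. The sketch below (with an illustrative quarterly matrix) uses scipy.linalg.fractional_matrix_power, which returns the principal root; for matrices with negative or complex eigenvalues that root can be complex-valued, which is the failure case described above.

import numpy as np
from scipy.linalg import fractional_matrix_power

A_q = np.array([[1.030301, 0.02],
                [0.0,      0.95]])              # illustrative quarterly dynamics matrix

A_m = fractional_matrix_power(A_q, 1.0 / 3.0)   # candidate monthly dynamics matrix
print(np.allclose(np.linalg.matrix_power(A_m, 3), A_q))   # True: A_m cubed recovers A_q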

Although we may be able to find a new model which matches the output of the original, there is no guarantee that it is in the same class of models. For example, take the class of stock-flow consistent models in the text by Godley and Lavoie: they are specified by a set of behavioural parameters. There is no guarantee that a frequency-converted model can be generated by another choice of those parameters. Moreover, care has to be taken when doing the conversion; if we use approximations, the results will be mismatched. (Using my example compound interest system, that would correspond to using a quarterly growth factor of 1.03 instead of 1.030301.)

External Inputs Mean That We Cannot Do Frequency Conversions

Unfortunately, as soon as we allow for external inputs into the system (exogenous variables in economist jargon), we can no longer do frequency conversions.

Imagine that we modify our compound interest model to allow for deposits or withdrawals, represented by a time series u(t) that is external to the original system. In this case, we start with an initial balance of $0. The new equation is:

b(t+1) = 1.01(b(t) + u(t)), b(0) = 0.

(This equation says that deposits or withdrawals take effect at the beginning of the period, and so they affect the balance at the next time point.)

We now will look at what happens if we set u(t) to be a $100 deposit at one time point, and 0 elsewhere.
  • If we deposit $100 in the first month (t=1), the bank balance at t=3 will be $100*(1.01)^2  =  $102.01. 
  • If we instead make the deposit in the second month, the bank balance will be $101 at t=3.
  • If the deposit is made at t=3, the balance will equal $100.

When we switch to a quarterly frequency, we cannot distinguish between these cases: the monthly cash flows would be aggregated to the same quarterly time series, with an inflow of $100 at τ=1. When simulating the quarterly model, the balance at  τ=1 would have to equal $100, since we have no way of knowing whether the true cash flow arrived slightly earlier, allowing for interest to accumulate.
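Here is a small Python sketch of the three cases (purely illustrative; the bookkeeping convention matches the bullets above, so a deposit made at t=3 is counted in the balance but has not yet earned any interest):

def balance_at_t3(deposit_month, rate=1.01, amount=100.0):
    # b(t+1) = 1.01 (b(t) + u(t)), with b(0) = 0 and a single deposit at deposit_month
    b = 0.0
    u = {deposit_month: amount}
    for t in range(3):                    # compute b(1), b(2), b(3)
        b = rate * (b + u.get(t, 0.0))
    return b + u.get(3, 0.0)              # a deposit at t=3 has not yet compounded

for month in (1, 2, 3):
    print(month, round(balance_at_t3(month), 2))   # 102.01, 101.0, 100.0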

We have lost information as a result of moving to a lower frequency, and there is no way of distinguishing the two different inputs. There is no way of defining the quarterly model to match the output of the monthly model. (This loss of information was the subject of my original article.) The only way to avoid this loss of information is to hide monthly data within the "quarterly" model, which means that it is no longer a true quarterly model.

So What?

Models with different frequencies cannot reproduce each other's outputs exactly; this means that the choice of sampling frequency affects model outputs. This is true for any discrete time model, and is not an artefact of stock-flow consistent (SFC) models, contrary to Jason Smith's claim.

However, the magnitude of the error is driven by the "compounding" that occurs within the high frequency model (when compared to the lower frequency model). So long as there is not a spectacular time scale difference (for example, a sample time period of monthly versus twenty years), these mismatches are going to be smaller than other sources of model error.

What About Continuous Time?

Since there are no continuous time economic series, it makes little sense to insist upon the accuracy of moving from a continuous time model to a discrete time model. A continuous time model is already an approximation of true economic data.

However, if one insists upon starting from continuous time, one needs to consult the literature on the numerical approximation of differential equations. The message of that literature is straightforward: unless we have a closed form solution of the differential equation, any discretisation of the differential equation is only approximately correct. The quality of the fit depends upon the frequency composition of the continuous time system versus the sampling frequency, and the quality of the method of approximation.
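As a simple illustration of that message, a forward Euler discretisation of dx/dt = a x (with an illustrative growth rate) approaches the exact solution exp(a t) only as the step size shrinks; any finite step leaves an error:

import math

a, horizon = 0.12, 1.0                    # illustrative growth rate, one year horizon

for steps in (4, 12, 52, 365):            # quarterly, monthly, weekly, daily steps
    dt = horizon / steps
    x = 1.0
    for _ in range(steps):
        x *= 1.0 + a * dt                 # Euler step: a first-order approximation
    exact = math.exp(a * horizon)
    print(steps, round(x, 6), round(exact - x, 6))   # the error shrinks but never vanishes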

In other words, small errors are inevitable if we create a model starting from a continuous time approximation of an economic system.

(c) Brian Romanchuk 2015

42 comments:

  1. For me the correct size of the time unit is a day. Intra day rules tend to be different to inter day rules.

    You get what can probably be called a 'quantum effect' intra day that you don't get with the longer time units.

    And that suggests that continuous system dynamic models have a big blind spot people need to be careful about.

    1. Agreed; intraday would require inventing new accounting. Even daily is debatable; one can imagine tracking transactions on a daily basis, but economic decisions cannot be made at that high a frequency. When we hired someone, it would take months from the time of the decision before their first day in the office; try tracking that kind of a time delay in a daily model.

  2. Brian, it's easy to make sample period invariant models of underlying systems which compound continuously. Simply use a state transition function of, for example, exp(a*Ts), where Ts is the sample period. The system time constant(s) will remain fixed. For example if 'a' is a scalar rather than a matrix, Tc = time constant = 1/|a| for any sample period Ts.

    As for aliasing, as long as the sample rate 1/Ts >= twice the overall bandwidth of the process, zero information is lost.
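    As a sketch of that claim for a scalar system dx/dt = a*x with no external input (the numbers below are purely illustrative), the transition exp(a*Ts) reproduces the continuous solution exactly at the sample times, whatever Ts is chosen:

    import math

    a, x0 = -0.5, 1.0
    for Ts in (0.25, 1.0, 2.0):
        x = x0
        for _ in range(int(4.0 / Ts)):    # simulate out to t = 4
            x = math.exp(a * Ts) * x      # state transition over one sample period
        print(Ts, round(x, 10), round(x0 * math.exp(a * 4.0), 10))   # all rows agree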

    1. Models without exogenous variables can be converted. Your example falls under the closed form solution case. I cannot think of any economic system which is going to be described by the exponential growth of a matrix; even compound interest features time-varying interest rates. Moreover, any reasonable economic model has exogenous variables.

      As soon as we add those complications, we can no longer reproduce the hypothetical continuous time state variables with perfect accuracy with a fixed sample period. (If we do not have a closed form solution, we can only approximate the continuous time solution with a numerical method, which has an implied sampling frequency, although some methods vary step size based on the equation.)

      There are no non-zero signals on a finite time horizon that have a finite bandwidth. (You need a combination of pure sinusoids starting at t= negative infinity to have a finite bandwidth.) This ensures that there will always be at least some errors introduced by sampling of real world signals.

      For reasonable choices of the sampling period, the errors I discuss are going to be small. But they are non-zero, and they create the sample period dependence within a model.

    2. Re: exogenous variables: sure, that's practically true. Although I do take a stab at formulating G&L's SIM model to accept a wider class of exogenous inputs (the government spending function): which I assume is the sum of a set of steps + a smooth Taylor expandable function.

      And if it's restricted to a rate (like their example), then you just evaluate the integral from -inf to t of exp(a(t-tau))*b*g'*dtau = (exp(a*Ts)-1)*b/a = B (as I'm sure you already know). So all I do is expand that to the other derivatives. Why? for fun. Weird sense of fun I guess. But, yes, in general you'll need some numerical method, because you won't have linearity.

      Also I agree about signals on a finite time horizon, however, depending on the signal, you can sometimes choose an interval to make them arbitrarily small outside a particular band (e.g. a Gaussian which transforms to another Gaussian). When you're -200 dB down outside of some band, you're essentially not making any contribution there.

      BTW, I did work out the outline of a way that G&L could update their alpha2, alpha1 and theta parameters to make that simple model time-invariant (with the restricted set of exogenous inputs G they consider). I describe it here under the paragraph heading called "SIM5." Why? Who cares? Well, again, I did it just for fun. I haven't finished it yet, but it will result in an "alternate" set of equations I've started to tack onto this version (SIM2) of the spreadsheet (with the modifier "alt").

      For example, rather than scaling alpha2 by Ts2/Ts1, it amounts to setting:

      alpha2 = (1 - A1^(T1/T2))*(1 - alpha1*(1 - theta))/theta

      You can then work out new values for theta and alpha1. Once you have a whole new set of parameters at the new sampling period Ts2 which preserve the system time constant, then G&L's accounting equations must be satisfied as well (since you've changed the whole set of parameters together).

      Again, I realize this is probably not helpful for anyone, but I was curious to see if it could be done.

    3. "you just evaluate the integral from -inf to t of exp(a(t-tau))*b*g'*dtau"

      Should be

      "you just evaluate the integral from (n-t)*Ts to n*Ts of exp(a(n*Ts-tau))*b*g'*dtau"

    4. As a quasi-practical example, say you had a system with continuous compounding, and an exogenous input u(t) that could be described over a finite set of contiguous time intervals as a finite Taylor expansion in each interval: then you could sample the response of the system to that input, in a manner invariant to the sampling times, provided the sampling times at least included the start times of each of those intervals.

      Granted, it's "quasi" practical at best. But I do like seeing a path towards arbitrarily small errors.

      See what you make of this if anything (a more recent comment by Jason, trying to explain in a different way to another commentator (Greg) what he sees as "the problem").

      To me "the problem" is straightforward: just a fun intellectual exercise: what's a practical way to get this one model to not care about sample times? So I'm not necessarily of one mind with Jason on this.

    5. I took a look at Jason's comment. He has come up with a similar example, where accounting treatments do not capture the effect of intermediate cash flows.

      But this is beside the point -- if we have quarterly data, by definition, we cannot reconstruct monthly time series. If you want a more troublesome example, take the monthly series 1,-1,0,0.... This series is indistinguishable from 0,0,0,..., quarterly, under accounting definitions. Yet the first non-zero monthly data point will have a non-zero effect on the system, and be only imperfectly cancelled by the second.
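      A sketch using the compound interest system from the post (unit-sized flows, purely illustrative): the monthly input sums to zero over the quarter, yet the end-of-quarter balance is not zero.

      u = {1: 1.0, 2: -1.0}                # monthly flows 1, -1, 0; the quarterly sum is zero
      b = 0.0
      for t in range(3):
          b = 1.01 * (b + u.get(t, 0.0))   # b(t+1) = 1.01 (b(t) + u(t)), b(0) = 0
      print(round(b, 6))                   # 0.0101, not 0: the cancellation is imperfect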

      The only way to deal with this is to hide the monthly information within the "quarterly" model; but this is no longer a model that works with quarterly data, it is a monthly model where you pretend to look at the data every three months. That is not how we build quarterly discrete time models.

      If you wish, develop a daily model, and then downsample it. Since accounting does not really exist for intraday time scales (there is no such thing as intraday interest, for example), you can simulate any economic time series. But you would rapidly discover that the model is unmanageable-- very few transactions settle on the same day (for example, it takes days for cheques to clear). This rapidly becomes a nightmare, which will be familiar if you ever attempted to balance a cheque book. (I never worried about my own cheques, but balancing a cheque book for a small business was an accounting exercise I did in high school.)

    6. Also - I have not had a chance to look at your work. My feeling is that you should not be attempting to do this by hand. For monthly <> quarterly conversions, we know the answer to the conversion -- take the state dynamics to the power 3 or 1/3. Even in the linear case, this creates an ugly expression, which you cannot easily align to create a parameter conversion. I think you need to solve a particular case numerically, and you could then try to back out parameter values for the new system.

    7. Another point - the G&L models have a form of equilibrium embedded within them; variables are solved so that they are coherent within the time period. This means that there is a "fiscal multiplier" within the period; that is, there is a reaction to a change in fiscal policy which affects variables in the period. This means that the models do not have an obvious closed form solution (if there was one, I did not see it, and they did not specify it; maybe the model SIM did, but certainly not the later models). The solution to their models has to be calculated numerically; this was done implicitly in eviews (which their source code uses), or via an iterative procedure in Matlab.

      This in-period equilibrium assumption by definition creates a time period dependence.

      You need to keep in mind that Jason's statement that SFC models are "just accounting" is just a straw man; some ill-informed commenters may have said that, but no one who understands SFC models says that. The in-period equilibrium assumption is a huge behavioural assumption, and creates another dependence upon the time period.

    8. "I think you need to solve a particular case numerically, and you could then try to back out parameter values for the new system."

      For the particular case of SIM, I think it's a closed form solution for theta, alpha1 and alpha2 as a function of (Ts2/Ts1), assuming continuous compounding. Extending this to a broad class of exogenous inputs (government spending functions) is also closed form. I can try it out and see.

    9. I don't know much about this "in-period" equilibrium you mention (a commenter on Jason's blog "Bill" mentioned the same thing, producing an expression for the case of SIM, but didn't tell me how he obtained it). I'm just looking at this from a general sampled system point of view... so I'll have to read G&L.

      Also what about Jason's point (in that comment, and the ones immediately below) expressed by this interchange between he and I:

      Me:
      "Jason, wouldn't these problems affect other discrete time models, for instance the New Keynesian model[s]...?"

      Jason:
      "It wouldn't affect all models because not all finite difference models have this same invariance. NK DSGE models are also written in log linear variables which don't have the same issues."

    10. "For the particular case of SIM, I think it's a closed form solution for theta, alpha1 and alpha2 as a function of (Ts2/Ts1), assuming continuous compounding. Extending this to a broad class of exogenous inputs (government spending functions) is also closed form. I can try it out and see"

      SIM may be too simple for this effect to show up (I have not had a chance to look too carefully at it). It definitely shows up when fiscal policy is introduced.

      When I created my SFC models, I simplified them by eliminating that "in-period" equilibrium, so that I could add complexity in other directions. I think G&L like it because it is similar to other Keynesian models with multipliers. Additionally, it embeds a certain amount of expectations effects into the model; the models allow for "single period rational expectations"; all that is lost is forward-looking expectations. It's a partial answer to the Lucas Critique (if that means anything to you).

    11. This is what Bill wrote:

      "Tom, there are two equilibrations in their SIM model. One occurs across time periods, one occurs within them. The within period equilbrium value of GDP is given by this equation:

      Y∗ = G/(1 − α1 · (1 − θ))

      What happens after within period equilibrium is reached? Apparently nothing, until the next time period.

      The model does not work if the time period is too short for within period equilibration, so there is a minimum time period. It doesn't make much sense to have an extended period within which nothing happens after equilibration, either. So I think that there is an implied time period in the model, we just don't know what it is. ;)"

    12. "Also what about Jason's point (in that comment, and the ones immediately below) expressed by this interchange between he and I:

      Me:
      "Jason, wouldn't these problems affect other discrete time models, for instance the New Keynesian model[s]...?"

      Jason:
      "It wouldn't affect all models because not all finite difference models have this same invariance. NK DSGE models are also written in log linear variables which don't have the same issues.""

      If one insists on creating a continuous time model to replicate a discrete time model, as he does, you get a dependence upon the sample time. No ifs, ands, or buts about it.

      Since the underlying data is discrete time anyway, he is completely and utterly wrong in his critique. The accounting is determined by a sum of discrete cash flows over a time interval -- continuous flows are never, ever, observed in the real world. Since we are summing a sequence of non-zero flows, there is no way that you can fit that to a clean continuous time model, other than by forcing all of the flows to be a combination of Dirac delta functions. In which case, you cannot have nice clean exponential response functions in your dynamics.

      He's right that DSGE models do not have this problem -- they have worse problems. The act of log-linearisation breaks accounting identities. Whatever log-linear models are, they are not models of economic systems that obey accounting identities.

    13. I notice that Bill's expression:

      G/(1 − α1 · (1 − θ))

      is the "G" component (DY*G) of "measurement" Y in my expressions:

      Y[n+1] = CY*H[n] + DY*G[n+1]

      All those are listed at the bottom of this post (SIM). I don't know if that means anything. I suppose Bill found that in G&L's text.

      And yes, the Lucas Critique does mean something to me, but I'm far from an expert on it. Thanks. I'll look up "single period rational expectations" in G&L.

    14. "And yes, the Lucas Critique does mean something to me, but I'm far from an expert on it. Thanks. I'll look up "single period rational expectations" in G&L."
      They definitely do not call it that; that is how I would phrase it. The discussion in section 3.7.2 (pages 80-81 in my copy) covers the notion of how the household needs to know the "equilibrium" value of income for the period in order to decide upon its spending plans. It's been a while since I looked at this, but that equilibrium is what makes the G&L SFC model solutions not immediately obvious.

    15. "Whatever log-linear models are, they are not models of economic systems that obey accounting identities."

      Well, OK. It seems like they might still be something useful though (I'm far from being able to judge!), especially if you can demonstrate they have some empirical validity (I'm not saying you can show that!).

      "continuous flows are never, ever, observed in the real world"

      Well, banks do offer continuous compounding though, don't they? Isn't that a kind of continuous flow?

      "other than by forcing all of the flows to be a comibination of Dirac delta functions. In which case, you cannot have nice clean exponential response functions in your dynamics"

      Well, OK, but a Dirac delta in g' in this model:

      h' = a*h + b*g'

      produces a step in h. I'd say the dynamics of h are still described by an exponential with time constant 1/|a|, regardless of the forcing function g'. The time history of h may not always look exponential, e.g. if g' is a sinusoid with frequency f, then h is too of course. Maybe it's just semantics, but I'd say the dynamics of that system are determined exclusively by a.

      I'm not going to argue with you about what's better: continuous vs discrete, because I have no idea.

      However, if the empirical data is so crappy (Noah Smith has stated repeatedly that "macro data is uninformative") that all that's warranted is a 0th or 1st order model, it seems to me you'd be justified in cutting the parameters to their bare minimum. That is if you're interested in falsifiability. In other words, perhaps macro data CAN be informative if your model is simple enough.

      One of the economists I asked "What would convince you that you're wrong?" (Nick Rowe) also seemed to indicate that the data was too crappy to tell (in the particular case I asked him about).

      Regarding DSGE, Jason asks this in a post today:

      "Is DSGE a framework?"

      Spoiler alert: his first line: "TL;DR = No" Lol.

    16. re: continuous compounding rate. There's a huge number of interest rate conventions within fixed income mathematics; every market has its own compounding convention. The "continuous compounding" rate allows us to convert amongst the different conventions. But under all conventions there is no interest based on "intraday loans"; all interest is based on end-of-day settlement. (And note that the end-of-day might be different for different segments of the fixed income universe...) (Perhaps loan sharks have intraday loans, but I doubt that they worry about their interest rate quote conventions...)

      My point is that other than as an approximation, you cannot see a unit step function as a legitimate flow in a model. There is no circumstance under which an entity continuously pays another entity infinitesimal amounts of money; the flows are discrete jumps that occur during the end-of-day settlement. Feel free to approximate these flows with continuous time flows, but there is no legitimate complaint that a discrete time model does not align exactly with such an *approximation*.

      As for falsifiability, it's a big subject. My complaint about the DSGE framework -- as it is used in practice -- is that it is non-falsifiable. You could search my posts to see why I say that; it's related to how the natural interest rate is calculated. As for post-Keynesian approaches, I believe that the authors are more honest about what can and cannot be done with models.

      One can distill some post-Keynesian views as saying that we cannot expect to be able to predict the economic future. Although that's a fairly depressing negative result, the general failure of mathematical forecasting models is consistent with that view. The only way to falsify the PK view is to come up with an accurate forecasting methodology...

      But we need to be careful about making statements about the quality of the data. The destruction of the Greek economy by austerity policies was predictable; one was not able to pinpoint how big a disaster it was going to be, but one knew it was going to be bad. Therefore, we have a useful recommendation for policy -- don't be idiots like the European policy makers. I certainly think that we can use economic theory to make statements like that. That's the sort of question that we need to answer; generating a probability distribution for the next quarter's GDP result is not actually that useful, but that is what the "scientific" approaches to economics want to answer.

    17. What you describe about a useful model for Greece sounds like a 0th order: thumbs up or down. PKE was not alone in that evaluation, was it?

    18. There were a lot of mainstream people talking about Ricardian Equivalence, and how there would have been no effect from fiscal tightening. Not all agreed, but they were not necessarily actually using what their theories allegedly say.

      Outside the mainstream, some Austrians also think that government spending cuts are good for growth, but otherwise, they tend to have a similar view to PK economists with regards to fiscal policy.

    19. What about people like Simon Wren-Lewis, Paul Krugman and Brad DeLong?... and perhaps Miles Kimball and J.W. Mason? What was their record concerning austerity in Greece? Aren't they all fairly mainstream?

    20. Sure, some of them got it right. But the question is whether that was the result of the theories they follow, or common sense overriding theory. That said, Simon Wren-Lewis is probably the best on fiscal policy.

      From what I saw, a lot of the justifications on mainstream blogs were pretty sketchy ("the magical zero bound changes everything!"). (As for J.W. Mason, I like his work, but I am not sure that I would classify him as mainstream....)

      Some of the mainstream authors are worth reading, but for myself, I find that I read them less and less. I have read enough of their work to know what they are likely to say on a topic, and there is absolutely no chance that they would ever address my concerns about the blind spots in their theory. I would rather advance my own research agenda than attempt to respond to the controversy du jour.

    21. BTW, my model above:

      h' = a*h + b*g'

      if stimulated with a Dirac delta in g' does produce an instantaneous step up, followed by an exponential (rise or decay, depending on whether a > 0 or a < 0, respectively) with time constant 1/|a|. Only if a = 0 does it produce a true step, with an infinite time constant. But whatever it does, it's literally the "impulse response" of the system, and it completely characterizes it. b just plays a scaling role.

    22. I did find a closed form way (SIM6) to adjust both alpha1 and alpha2 for different sampling periods with SIM so as to keep it truly sample period invariant and so that it satisfies all of G&L's accounting expressions at all times. Of course I'm assuming continuous compounding. It's curious to me that the third parameter (theta) did not require adjustment... I wonder if there are a whole set of solutions with different thetas.

      Basically you make these calculations: defining Ts_0 := the original sample period = 1 "period" (I call this period a year):

      First calculate an "original" A_0 and B_0, from G&L's original alpha1 and alpha2 (called alpha1_0 and alpha2_0 respectively) and theta:

      A_0 = 1-theta*alpha2_0/(1-alpha1_0*(1-theta))
      B_0 = 1-theta/(1-alpha1_0*(1-theta))

      Then for the new desired sample period Ts, calculate:
      T_ratio = Ts/Ts_0 = Ts

      Then:
      A = A_0^T_ratio
      B = (B_0*(A-1)/(A_0-1))/T_ratio
      alpha1 = ((1-B)-theta)/((1-B)*(1-theta))
      alpha2 = (1-A)*(1-alpha1*(1-theta))/theta

      You'll find that the time constant doesn't change:
      Original Tc := Tc_0 = -Ts_orig/LN(A_0)
      New Tc = -Ts/LN(A) = Tc_0

      And that all the rest of the parameters (calculated only in terms of alpha1, alpha2 and theta) produce outputs (Y, T, YD and C) satisfying all of G&L's expressions.
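      For reference, here is a direct transcription of that recipe into Python (only a sketch of the calculation described above, with illustrative starting values; it checks that the time constant is unchanged by construction):

      import math

      def rescale_sim_parameters(alpha1_0, alpha2_0, theta, Ts, Ts_0=1.0):
          A_0 = 1 - theta * alpha2_0 / (1 - alpha1_0 * (1 - theta))
          B_0 = 1 - theta / (1 - alpha1_0 * (1 - theta))
          T_ratio = Ts / Ts_0
          A = A_0 ** T_ratio
          B = (B_0 * (A - 1) / (A_0 - 1)) / T_ratio
          alpha1 = ((1 - B) - theta) / ((1 - B) * (1 - theta))
          alpha2 = (1 - A) * (1 - alpha1 * (1 - theta)) / theta
          Tc_0 = -Ts_0 / math.log(A_0)     # original time constant
          Tc = -Ts / math.log(A)           # new time constant; equals Tc_0 by construction
          return alpha1, alpha2, Tc_0, Tc

      print(rescale_sim_parameters(0.6, 0.4, 0.2, Ts=0.5))   # illustrative parameter values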

    23. alpha1_0, alpha2_0, and Ts_0 can be any initial values actually, not just those specified by G&L, and the change to a new sample period Ts should work with the above. However, if you want to match their results then stick with G&L's values.

      I did not attempt to make this work with any other functions for G other than a scaled unit step. You can choose G&L's $20/period to match theirs, or any other. More general G functions addressed here. But even with the spread sheet above, you could make it work for G at a constant rate per period, by typing over the G values in the calculation table: so long as you matched the function precisely: for example originally you may have G be $20/period for period 1 and then $40/period for period 2. If you halved the sample rate to 0.5 periods, then you'd leave G at $20/period (=$10/half-period in the spreadsheet) for the 1st 2 sample periods, and then switch it to $40/period (=$20/half-period in the spreadsheet) for the next two, etc.

    24. I have not had a chance to look at this. Yes, you could match up to a steady G, but what happens when you change G on a monthly basis? The quarterly series only has the sum of the monthly values to work with, which is the effect that I am describing in the article.

      I am now looking at the equilibrium notion within G&L. They have different notions about expectations, and the dynamics are slightly different for each. I believe that you will definitely see some differences if you drop the "perfect foresight" assumption; if you are adapting to errors, the adaptation is quicker on a monthly basis.

    25. Yes, it's a problem if G changes constant levels on a monthly basis and you're sampling quarterly. As you mention in the post you'd essentially have to keep track of each monthly change in the last quarter... which almost amounts to sampling monthly and then down-sampling (I treat a similar case in my SIM4).

      However, if G can be approximated with arbitrary accuracy by a finite Taylor expansion over the last quarter, and you have access to the relevant derivatives... then you can expand my B from a scalar to a row vector and handle that case as well in a sample period invariant way w/o having to do monthly sampling. Contrived? Perhaps... :D

      Or, like you mention, solve this equation numerically for general g' = dg/dt, where g = integral of all government spending. You'd of course replace 0 with t0 and t with t1 (the start and stop times of your sample period, resp.).

    26. Using the Taylor Series to encode information is a way of sneaking higher frequency data into the model. You would not be able to do that if all you have is quarterly data to begin with. A "quarterly" model is one that is applied to quarterly data (only).

      Think of how you would apply this to real world data. Critical data like GDP components and Flow of Funds data are only available at a quarterly frequency. (They are critical for an SFC model, at least.) Even though you have other monthly data, how would you convert the quarterly data to monthly? The errors created by making up your own monthly GDP (relative to a hypothetical "true" monthly GDP) are going to be larger than the errors created by sampling the dynamics.

      The fact that a lot of data is only available at relatively low frequencies (quarterly...) is why I have a hard time worrying about this issue in the first place.

      [OK, Canada has a monthly GDP figure, but it is only a subset of the national accounts; a model builder would need all of the GDP components, like investment, which are only available quarterly.]

    27. Or you could just keep the continuous time model without any extra terms in b. Then you can do an observability and controllability analysis (easy in this case), do pole and zero placement with a fixed feedback law, add noise to the analysis, design a Kalman filter, design a time varying optimal feedback control law for various objective functions or design a robust control law to achieve a minimum performance level over a whole set of plants that could exist due to our uncertainty about the plant. These things could be done in either discrete or continuous time, but the model is simpler in continuous time. ... and I'm sure you already know all that. :D

    28. You need to convert to discrete time in order to make the model coherent to observed data. For toy models that you are not fitting to data, that is not an issue, but sooner or later you need to worry about that.

      You also face a lot of modelling problems in continuous time - for example, a lag creates an infinite dimensional state space. The only real analytical tools that work with infinite dimensional systems are frequency domain methods, which preclude nonlinearities.

      These are not simple systems; there are a lot of deep mathematical issues that are faced by continuous time models. Does a solution exist? Is it unique? Etc. Many of these problems disappear in discrete time. Relying on numerical simulation just hides the discretisation within your software, and you still do not know the properties of the system.

    29. "for example a lag creates an infinite dimensional state space"

      Yes, but as I recall from my robust controls class, a Padé approximation can be useful there.

    30. Also, I took another look at Jason's ΔH = Γ*(G - T) expression, and it's pretty clear what Γ represents in a state space description of the model... which explains his Update 4, part 2 and 3 here in another way. Exemplified by his figure here (with Γ=0.5).

    31. Hi, Tom. :)

      Yes, I did find that equation in the text, along with their explanation of it.

      I haven't been responding because of illness.

  3. Your post, your references and others have raised the issue of frequency recently. A very appropriate issue for economists.

    But a question popped into my mind. Aren't we already treating our economic measurements as if they were continuous? At least those of us who look at a GDP measurement and think "this is the sum of transactions within a period". A sum of transactions between two time points is an integration of that continuous time period.

    We could make the integration period TWO YEARS, which is a longer continuum of the continuous system. Two years would not be two times either GDP number, it would be the sum of the two different numbers.

    We should be wary that we do not treat an integration as if it were a RATE of CONSUMPTION. For example, we should not define one year's GDP number as the "Rate of Consumption" with units of consumption/year. Instead GDP should be defined as "consumption in one year" with units of consumption-year.

    Just thinking.

    1. My argument is that we do not have true continuous time signals in economics; we do not send each other infinitesimal flows over periods of time, rather we send each other non-zero amounts at particular transaction times. This means that we are not integrating variables, but rather summing them. If we wanted to integrate, in order to replicate those lump sum payments, we would need to use monstrosities like the Dirac delta "function". (Whatever it is, it is not a function.)

      Nominal GDP is the sum of the monetary value of production within a year. The units are dollars. For modelling purposes, you could create a variable N in continuous time, so that if you integrated it over a year, it equals GDP. That variable would have units of $/time. Although one might want to think of N as GDP, it would be a technical mistake to call it that; GDP is the measured dollar value of transactions over a period. Within a continuous time model, there would be no particular time series that corresponds to GDP, other than the trailing one-year integral of N -- which would not be used by model calculations.

  4. Another way to think of GDP is as if it were a signal to be filtered. The signal is of a nature that accumulates over the course of a time period. The filter designer chooses the time period. There may or may not be continuous delivery of the signal but because the rate of true delivery is unknown (and perhaps unknowable), it becomes the best we can do.

    Then, as you say, the accumulation of data is always a trailing observation. We have a data point that can be recorded but it is not current in time of actual data arrival--only current as the most recent observed total.

    If we wanted to improve the correlation of data with actual arrival time, we would need to average the data (by dividing by two) and reassign the arrival time earlier by one half period. That estimation would be in obvious error if the current signal was different from the previous signal recorded. In this last case, we could weight the time location by assuming a shape of the true signal, but this would just be an educated guess. [I think this would be the equivalent of assigning the actual signal average strength as the sum of much smaller units, identical to your term N in units of $/time. However, in a signal situation, the smaller units would not be considered as magnitude per time (mag/time). They would simply be a magnitude that would be re-weighted and assigned a time location.]

    This all becomes of great importance in some economic models of the economic system.

    1. To continue the signal--GDP analogy, measuring GDP is like measuring rainfall. There is no need to assign a time period to whatever we collect. We only need to assign the collection to a time period.

    2. With rainfall, the equivalent to GDP would be the amount of rain collected in your trap, in inches. This is only useful if we specify the time; 1 inch in a day is a very different thing than 1 inch in a year. The flow variable is inches/unit of time.

    3. Or rainfall could be measured in weight. Or volume. The unit of time we select is the size of the container.

  5. But it would be better to say:

    Or rainfall could be measured in weight. Or volume. The unit of time we select is the size of the time container.

  6. This comment has been removed by a blog administrator.

