
Wednesday, April 3, 2024

In Defence Of Discrete Time Models

Steve Keen recently wrote “I’m not Discreet, and Neither is Time” in which he discusses the alleged defects of discrete time models as opposed to continuous time ones.

(In discrete time, the model state is defined on a time axis that can be labelled with integers: step 1, step 2, etc. In continuous time, a model's time axis is the real axis. A discrete time model can be thought of as being defined by difference equations, while a continuous time model is normally defined by differential equations.)
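
As a minimal sketch of the distinction (my illustration, with made-up parameter values), the Python fragment below expresses the same decay dynamic both ways: a difference equation stepped at integer times, and a differential equation that a computer can in any case only approximate by chopping the real axis into small steps.

```python
# A minimal sketch, not from the original post; parameter values are invented.

# Discrete time: the state is defined only at steps k = 0, 1, 2, ...
def simulate_difference(x0, a, num_steps):
    """Difference equation: x[k+1] = a * x[k]."""
    x = [x0]
    for _ in range(num_steps):
        x.append(a * x[-1])
    return x

# Continuous time: dx/dt = -lam * x, approximated on a grid of width dt.
# (Note that a numerical solver is itself effectively a discrete time model.)
def simulate_differential(x0, lam, t_end, dt):
    """Forward Euler approximation of dx/dt = -lam * x."""
    x, t = [x0], 0.0
    while t < t_end:
        x.append(x[-1] + dt * (-lam * x[-1]))
        t += dt
    return x

print(simulate_difference(1.0, 0.9, 5))
print(simulate_differential(1.0, 0.1, 5.0, 0.01)[-1])
```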

Steve Keen has different modelling priorities than I do: he wants his models to do different things than I want mine to do, which is a philosophical clash more than a question of discrete time versus continuous. I will therefore not attempt to respond to him point-by-point, and will instead cover more generic issues that overlap his points. I already discussed this in the appendix to my book An Introduction to SFC Models Using Python, where the models are discrete time.

Even If the Real World Is Continuous, the Best Models May Not Be

My undergraduate degree was in Electrical Engineering, and courses covered both continuous time and discrete time analysis and design, as well as the difficult issues of integrating the two types of time axis.

The reason why discrete time is needed is that digital circuits are best modelled as having variables that take quantised values on a discrete time axis. (In most cases, the fact that values are quantised is ignored, and we use a real number to express values.) This is despite the fact that all circuits are ultimately analog. (The Brits write "analogue," but I stick with the American spelling for circuitry; I write "analogue" for the word's other uses.)

A single wire will have a voltage that varies between the ground voltage (nominally 0) and the power supply voltage, which I will label V. Voltages within some tolerance of 0 are treated as a binary 0, while voltages near V are treated as a binary 1. Values outside the tolerance bands around 0 and V are undefined, and are supposed to be avoided.

The circuitry is analog, and so if we look at a continuous time view of the voltage on an oscilloscope, we will see the voltage move fairly smoothly between 0 and V (assuming the circuit is working properly). Since the binary interpretation is undefined during part of that transition, this appears bad. The issue is dealt with by running the circuit off a clock: what matters for the circuit is the value of the voltage when the clock ticks. The design is supposed to be such that the voltage settles correctly close to 0 or V before the next clock tick. If we push the clock rate to the tolerance of the circuitry, the voltage will only just be settling in time before the tick, which poses a risk of errors given the reality that not all chips behave identically.
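
A toy sketch of that clocking logic follows (my invented numbers; real timing analysis is far more involved): an exponentially settling gate output is sampled at clock ticks, and pushing the clock period down towards the settling time lands the sample in the undefined band.

```python
import math

# Toy illustration with invented values; not a real timing model.
V = 1.0          # supply voltage (normalised)
TAU = 2e-9       # settling time constant of 2 ns (made up)
TOL = 0.1 * V    # tolerance band around 0 and V

def voltage(t):
    """Analog view: output settling smoothly from 0 towards V."""
    return V * (1.0 - math.exp(-t / TAU))

def binary_value(v):
    """Digital interpretation: 0, 1, or undefined between the bands."""
    if v <= TOL:
        return 0
    if v >= V - TOL:
        return 1
    return None  # undefined: the clock should never tick here

# A slow clock samples a clean 1; pushing the period towards the settling
# time catches the voltage while it is still in the undefined region.
for period in (10e-9, 5e-9, 1e-9):
    print(period, binary_value(voltage(period)))
```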

The best way to analyse a pure digital circuit is to not worry about the intermediate time values, and to just model the system at the clock ticks. You only look at the continuous time axis when the digital circuit is interacting with an analog one, which has become much rarer than when I graduated in 1990.

There is a direct analogy to economics. All monetary transactions occur at distinct points in time, and accounting invariably uses end-of-period closing values. The closest things to continuous time variables are quotes from electronic markets; but even there, the transactions happen in discrete jumps. Although my knowledge of fixed income is increasingly antiquated, I have only seen fixed income cash flows modelled as (end-of-day) flows.

All Economic Data Are Discrete Time

All observed economic data are discrete time. There will never be continuous time economic data published, particularly when we consider that even one second of continuous time contains an infinite number of data points.

If all data are always discrete time, any relationship between them is by definition a discrete time model.

Conversions Between Continuous Time and Discrete Time Are Tricky

We lose a spectacular amount of information when we move from continuous time to discrete time. Conversions are not one-to-one: an infinite number of continuous time models can give rise to the same discrete time model. This makes discussing relationships between the two difficult for people who have not been trained in the topic (and those with the training are largely electrical engineers).
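
A textbook way to see the many-to-one problem (a standard aliasing example, not from the original post): two different continuous time sinusoids can agree exactly at every sample point, so the sampled data cannot distinguish them.

```python
import math

FS = 10.0            # sampling frequency in samples per second (illustrative)
F1, F2 = 2.0, 12.0   # F2 = F1 + FS, so the two sinusoids alias onto each other

for k in range(5):
    t = k / FS
    s1 = math.sin(2 * math.pi * F1 * t)
    s2 = math.sin(2 * math.pi * F2 * t)
    print(f"k={k}: {s1:+.6f} {s2:+.6f}")  # the two columns are identical
```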

For example, I unfortunately once encountered a physicist who positioned himself as an expert on communication systems. However, this expert was unable to understand the issues involved in converting a linear first order continuous time model to a discrete time one.
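
For the record, the standard conversion for that case looks like the following (a textbook zero-order-hold calculation, not a reconstruction of the dispute in question). The continuous time model dx/dt = a*x + b*u maps exactly to a difference equation if the input u is held constant over each sampling interval.

```python
import math

def discretize_first_order(a, b, dt):
    """Exact zero-order-hold discretisation of dx/dt = a*x + b*u.

    Assumes u is held constant over each sampling interval of width dt.
    Returns (ad, bd) such that x[k+1] = ad * x[k] + bd * u[k].
    """
    ad = math.exp(a * dt)
    bd = (b / a) * (ad - 1.0) if a != 0.0 else b * dt
    return ad, bd

# Example: a stable continuous time system sampled with dt = 0.1.
ad, bd = discretize_first_order(a=-2.0, b=1.0, dt=0.1)
print(ad, bd)  # ad != 1 + a*dt: the naive Euler mapping is only an approximation
```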

Given that observed data will never be continuous, it makes little sense to say that a continuous time model is the “correct” one given the ambiguity in conversions.

Continuous Time Mathematics Is Hard

Doing proofs with continuous time models is hard. Models are allegedly defined by equations using differentials (derivatives), yet we quite often encounter functions that do not have derivatives defined everywhere, or even anywhere, if the function is the result of a random process. And if we want models that embed optimising behaviour, we probably want random processes to show up somewhere: if the model is deterministic, all virtual model inhabitants are completely aware of what will happen for all time.
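
A quick numerical illustration of the nowhere-differentiable point (a standard property of Brownian-motion-style processes, sketched here with invented parameters): the increments of such a process scale like the square root of the time step, so finite-difference "slopes" blow up as the step shrinks.

```python
import random

random.seed(1)

def brownian_increment(dt):
    """Gaussian increment with standard deviation sqrt(dt)."""
    return random.gauss(0.0, dt ** 0.5)

# The finite-difference slope scales like 1/sqrt(dt), so it diverges as
# dt -> 0: the limit that would define a derivative does not exist.
for dt in (1e-2, 1e-4, 1e-6):
    slopes = [abs(brownian_increment(dt)) / dt for _ in range(1000)]
    print(f"dt={dt:g}: mean |slope| ~ {sum(slopes) / len(slopes):.1f}")
```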

As an example of the complexity, we need the Dirac delta "function" to generate jumps in time series. The only way to deal with such objects properly is via measure theory (or the theory of distributions); the delta is not actually a function.
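
To spell out why the delta shows up (standard distribution theory, in my notation): if a time series x(t) has a unit jump at time t_0, we can split it into a smooth part x_c plus a Heaviside step H, and the derivative of the step only exists in the distributional sense:

$$
x(t) = x_c(t) + H(t - t_0), \qquad
\frac{dx}{dt} = \frac{dx_c}{dt} + \delta(t - t_0),
$$

where the delta is characterised by $\int f(t)\, \delta(t - t_0)\, dt = f(t_0)$ for suitable test functions $f$; it is a measure, not a function of $t$.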

Undergraduate teaching skates over this mathematical complexity by the expedient tactic of hand-waving (or bullshit, to use the technical mathematical term), which raises issues as soon as we attempt the analysis in a more complex system. Practitioners sidestep the problem by relying on digital simulations (which are effectively discrete time unless the equations are solved symbolically).

The problem is research. People who use undergraduate mathematics to do applied math only have a chance if they stay in their lane and avoid anything with hidden complexity. Control systems engineers are able to do this by sticking to matrix algebra in their proofs, and not worrying about the details of the differential equations that drove the use of matrices. Stepping out of that comfort zone results in a literature that is filled with "proofs" that do not in fact prove anything, because they skip key issues. This is a problem both for engineers in control theory and for pretty much the entire published DSGE literature.

In control systems, this led to a quiet underground academic war when I was an inmate of educational institutions. The problem is straightforward: does a correct theorem statement with an incorrect proof deserve priority over a later paper that does the mathematics properly? Meanwhile, it was possible to find entire sub-literatures that relied on theorems that were outright incorrect. Although that sounds insane, it is the entirely predictable outcome of the publish-or-perish environment: everybody is so busy publishing papers that nobody has the time (or wants to take the academic political risk) to shred existing papers that probably nobody will read anyway.

This creates a situation in which a competent researcher needs to quietly pretend that large portions of the literature do not exist, even though somebody reading the abstracts would think the papers are pertinent. As one might expect, this can make for a difficult review process. One may note that this is exactly the sort of behaviour that post-Keynesians (including myself!) complain about in their dealings with neoclassicals. (The key difference is that I actually read and understood the papers involved, and could quietly explain my exclusions by pointing out factual problems with the work.)

Agent-Based Hybrid Models

If we wanted a truly accurate model, we would use an agent-based model in which transactions occur in some sequence within an accounting period; once the transactions stop firing, the model does its end-of-period accounting. That end-of-period accounting is what would correspond to measured economic data. A skeleton of this structure is sketched below.
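
This is a minimal sketch of that hybrid structure, with invented agents and transaction logic: events fire in sequence within each period, and only the end-of-period closing values are recorded as the observable time series.

```python
import random

random.seed(42)

# Hypothetical sketch: the agents and the random transfer rule are invented.
def run_period(balances, num_transactions):
    """Fire a sequence of transfers between agents within one period."""
    agents = list(balances)
    for _ in range(num_transactions):
        payer, payee = random.sample(agents, 2)
        amount = min(balances[payer], random.uniform(0.0, 10.0))
        balances[payer] -= amount
        balances[payee] += amount

balances = {"household": 100.0, "firm": 100.0, "government": 100.0}
closing_values = []  # the only "observable" series: end-of-period snapshots
for period in range(4):
    run_period(balances, num_transactions=50)
    closing_values.append(dict(balances))  # end-of-period accounting

for period, snapshot in enumerate(closing_values):
    print(period, {name: round(value, 2) for name, value in snapshot.items()})
```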

Concluding Remarks

For what Steve Keen wants to do, continuous time models may be the best way forward. But if we want to argue fundamentals, since all economic data are discrete time, discrete time models are completely natural.

Email subscription: Go to https://bondeconomics.substack.com/ 

(c) Brian Romanchuk 2024
