Comments on Bond Economics: "Discrete Time Models And The Sampling Frequency"

[Comment removed by a blog administrator (2018-02-17).]

Tom Brown (2016-03-09, 19:19):
Sorry, an example of non-ideal is p ≡ dD/dS ≤ k D/S.

Tom Brown (2016-03-09, 19:13):
In Jason's formulation, it's actually UNCOORDINATED agent behavior that's tractable. It's when agents coordinate that information transfer is "non-ideal" (e.g. dD/dS > kD/S), and only limited things can be said about it (usually what you can say is that things go wrong... it's only the very rare case when coordination, such as a perfectly realized expectation, improves matters any).

Sorry for all the comments! It took me a while to get up to speed with the concept, but once I did, parts of it (at least) seemed so simple! Plus I'm intrigued that the idea is general enough that I might actually be able to find a use for it in a completely unrelated discipline, like one of Jason's readers did (<a href="http://www.nature.com/npp/journal/v40/n1s/full/npp2015326a.html" rel="nofollow">Todd Zorick</a>).

Tom Brown (2016-03-09, 19:02):
Also, if you don't like the information transfer angle, there are other ways to see it. For example, one of the fundamental differential equations he uses:

P = dD/dS = kD/S

is the simplest differential equation you can form that's consistent with the long-term neutrality of money (it's <a href="http://informationtransfereconomics.blogspot.com/2014/02/i-quantity-theory-and-effective-field.html" rel="nofollow">homogeneous of degree zero</a>). Here D = demand and S = supply. Irving Fisher included <a href="http://4.bp.blogspot.com/-WqZ5HxA6JbQ/VpAP1WEwusI/AAAAAAAAIeY/H-t2UXnLXoM/s1600/fisherthesis.png" rel="nofollow">such an equation</a> in his 1892 thesis (only slightly less general, i.e. with k=1).

Also, economist Gary Becker explored some ideas related to what Jason is doing in the early 1960s, showing that <a href="http://mcadams.posc.mu.edu/econ/Becker,%2520Irrational%2520Behavior.pdf" rel="nofollow">supply and demand curves can be obtained from the feasible space of agent budget choices</a> without making any behavioral assumptions other than that agents behave seemingly randomly and tend to fill the entire space.

What I find intriguing is the concept of removing human behavior from the picture. Treating humans as mere mindless atoms bouncing around, doing all manner of things it's possible for us to do... no need for complex game theory optimizations, micro-foundations, all-knowing rational agents, or million-parameter behavioral models. So it'd be nice if he could get some decent 0th-order or 1st-order results by sweeping all that garbage to the side, wouldn't it? Lol. We'll see, I guess.

Tom Brown (2016-03-09, 18:31):
Brian, I won't question your experience with regressions. I can tell you that he has some good out-of-sample results. For example, with the price level: fitting his parameters to data between 1955 and 1990, he had a good fit out of sample from 1990 till today (in the US). I'll try to dig up a link.

But the model makes predictions as well: the k parameter varies very slowly. k for Japan is near 1, and has been for some time. k for the US is a bit above 1. k for Canada is just starting to sink far enough below 2 that he's predicted Canada will start to undershoot its inflation target over the next decade. (k=2 means the QTM basically holds, but k=1 means it does not.)

If you're interested, I'm sure he'll dig up the best examples he has. He occasionally puts a bunch of updates in a single post. I can't find the one I was thinking of right now... but here's <a href="http://informationtransfereconomics.blogspot.com/2014/03/macroeconomic-predictions-for-2016.html" rel="nofollow">a page of some predictions</a>.

Also, don't think of the "information" in "information transfer" as the meaning content of the symbols. Think of it in the technical Shannon sense.
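As an aside on the equation quoted a couple of comments up — P = dD/dS = kD/S — its general solution is the power law D/D0 = (S/S0)^k, which is easy to verify numerically. A minimal sketch (the constants are arbitrary illustrations, not fitted values):

```python
# Check that D(S) = D0 * (S / S0)**k satisfies the information-equilibrium
# ODE dD/dS = k * D / S.  D0, S0, k are made-up illustrative constants.
D0, S0, k = 100.0, 50.0, 1.5

def D(S):
    return D0 * (S / S0) ** k

S = 80.0
h = 1e-6
dD_dS = (D(S + h) - D(S - h)) / (2 * h)   # central-difference derivative
rhs = k * D(S) / S                        # right-hand side of the ODE
assert abs(dD_dS - rhs) < 1e-4
print(dD_dS, rhs)                         # the two sides agree
```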
Fielitz & Borchardt define an extension of Shannon's definition and call it the "natural amount" of information (which also covers the case of there being only one kind of symbol) in <a href="http://arxiv.org/pdf/0905.0610v4.pdf" rel="nofollow">Appendix A</a> of their paper. But again, this concept of information doesn't involve interpreting what the information means... no more so than the volume of a gas interprets the meaning of the information transferred to it by applying mechanical work to the gas. It's a kind of generalization of the concept of information.

Brian Romanchuk (2016-03-09, 17:33):
Re: the post referenced. I looked at it very quickly, and it looks pretty much like what you would get if you regressed the log of the money supply against the log of GDP.

Of course you get some kind of fit; if you plot the money supply as a % of GDP, it used to be stable (pre-QE). Does this tell us about information transfer? Not really. It just tells us that people want to keep their money holdings stable versus their incomes. That is exactly what SFC models predict, and it explains why -- without needing to invoke mystical concepts like "information transfer".

Maybe I am missing something, but it is easy to churn out regression results like that. The problem is that the random-regression methodology is frankly terrible out of sample, as anyone who has ever had to maintain such models in real time can tell you.

Tom Brown (2016-03-09, 17:04):
Also, I just put up <a href="http://banking-discussion.blogspot.com/2016/03/sim3.html" rel="nofollow">another version of SIM (SIM3)</a>, which isn't completely filled out yet, but looks to satisfy my three objectives above: match G&L at Ts=1, sample period invariance, and satisfy G&L's equations between any two points. It wasn't hard, actually.
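Brian's regression point can be made concrete: if the power law quoted earlier holds, then log D = k·log S + const, so an OLS fit on logs recovers the exponent. A toy sketch with synthetic data (nothing here is real money-supply or GDP data; `k_true` is an arbitrary illustration):

```python
import math
import random

random.seed(0)
k_true = 1.3                                   # illustrative exponent only
S = [100.0 + 18.0 * i for i in range(50)]      # synthetic "supply" series
D = [5.0 * s ** k_true * math.exp(random.gauss(0, 0.01)) for s in S]

# Ordinary least squares on logs: log D = k * log S + c
x = [math.log(s) for s in S]
y = [math.log(d) for d in D]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
k_est = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)
print(round(k_est, 3))                         # close to 1.3
```

The fit "works" on any data generated by a power law, which is Brian's complaint: a good in-sample log-log fit by itself doesn't discriminate between explanations.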
We'll see if it hangs together for the rest of the outputs (Y, YD and C).

[Comment removed by the author (2016-03-09, 17:02).]

Tom Brown (2016-03-09, 16:51):
Brian, I'm no PhD in physics, nor am I an economist. I'm an engineer, and not a particularly bright one at that. However, I was able to understand his paper (there's a link to it in his tag line) -- at least the first part, but I would guess all of it, since most of it came straight from his blog.

I'm not saying you don't have a valid complaint. I had to put a bit of effort in, but once I did, it's actually a very simple idea. I won't try to say it's ALL simple, but one of the core ideas that's used over and over again is very simple, and it's this:

Given a system in which there are:

1.) Two process variables, call them X and Y;

2.) Underlying complex "micro-states" which look random from a macro scale, and which give rise to X and Y (think of X and Y as "emergent" from these complex micro-states);

3.) A plausible communication channel between X and Y;

you can formulate a simple relationship between X and Y based on the idea of natural information equilibrium (see Fielitz & Borchardt's paper on this, linked in the right-hand column of Jason's blog).

This is not necessarily a superior way to go about things if you know more information. For example, you can use the above to derive the ideal gas law... but that's just an example. We have more knowledge about what's going on with gases, which makes the information transfer (IT) derivation unnecessary. There's no reason to use IT for such a system, except as an example. IT is handy when you DON'T have anything else to go on (except those three conditions I list above). So what does it lead to? Well, without the math, it leads to power laws and exponential laws between X and Y. E.g.

(X/X0) = (Y/Y0)^k

You can add the concept of an "abstract price" (or a "detector" in F&B's terminology):

P = dX/dY

I won't list the exponential laws, but they're equally simple (they represent the case of a "fixed source" rather than a "floating source" in F&B's terminology... or "partial equilibrium" vs "general equilibrium" in Jason's).

You can solve in terms of P if you'd like:

P = (k*X0/Y0)*(Y/Y0)^(k-1)

etc.

You can chain such "process variables" together, and do other things, but in the end it's just power laws!

Generally you have to fit the few parameters such models require from data. Sometimes you can derive them from the problem (k, for example, which is called the "information transfer index").

That's a good chunk of all that's going on right there! How do you check? Use FRED. Sometimes Jason's "models" don't check out. He hypothesizes a relationship. He doesn't KNOW that it's true. But it's easy to check.

There are some that do seem to check out. k (for example) can be expressed as a function of time, and he can recreate time series data going back decades in several countries and do out-of-sample predictions which seem to do markedly better than 40+ parameter Fed DSGE models. In all fairness, those DSGE models are attempting to do much more than the limited scope Jason's models typically shoot for, but Jason keeps his parameter count very low... 2 or 3 is typical.

If you actually want to understand, I recommend <a href="http://informationtransfereconomics.blogspot.com/2014/03/how-money-transfers-information.html" rel="nofollow">this post</a> (it even has an early version of a varying k, which does pretty well).

If you understand that, you can go a long way on Jason's blog. There are many other ideas... so I don't want to sell it short, but the core of a pretty big chunk of it is dead simple.

Brian Romanchuk (2016-03-09, 16:08):
Falsifiability is extremely important; this is my main complaint about mainstream macro.

His model outputs look interesting, but he buries what he is doing under a lot of jargon. You see the charts, but the mathematics behind them is buried in an appendix somewhere. When I tried tracing through his code sample, my reaction was -- why are you doing that? Unless he drops the physics analogies -- which are inherently meaningless -- and just writes down exactly how he calculates his model output, I felt that I did not want to spend my time on it. Mathematics was designed to be easy to understand, without requiring verbal hand-waving. Unfortunately, DSGE modellers do exactly the same thing.

Tom Brown (2016-03-09, 14:20):
I meant to write <a href="https://en.wikipedia.org/wiki/Research_program" rel="nofollow">"research program"</a> rather than "research project" above. That's the sense in which <a href="https://www.google.com/search?safe=off&q=site%3Ahttp%3A%2F%2Finformationtransfereconomics.blogspot.com%2F+%22research+program%22&oq=site%3Ahttp%3A%2F%2Finformationtransfereconomics.blogspot.com%2F+%22research+program%22&gs_l=serp.3...10040.11755.0.13246.8.8.0.0.0.0.125.729.3j4.7.0....0...1c.1.64.serp..1.0.0.hJeq68RofN8" rel="nofollow">Jason</a> (and I think Noah Smith too) has used it (i.e. Popper, Kuhn and Lakatos).

Tom Brown (2016-03-09, 14:13):
Here's one big reason why I'm intrigued by Jason's approach: it's not necessarily about all the details of what he's doing... it's because he's one of the few econ bloggers who's ever given me a satisfying answer to this question:

"Jason, what evidence would convince you that you're wrong?"

He was able to provide a clear explanation of that for me, and added that he's not interested in adding "epicycles" to save his "research project" and would prefer to write a post-mortem and abandon the whole framework if the data didn't go his way.

I've asked that same question of a lot of other people (Scott Sumner, Nick Rowe, Roger Farmer, John Cochrane, Mark Sadowski, Marcus Nunes, Stephen Williamson... and others), and I never got something that clear and straightforward in response. Cochrane actually got quite offended!

A falsifiable framework in econ! Who knew? (It probably helps that it's not his day job!... a remark on my part that I'm sure Cochrane would identify as a "Bulverism.")

Tom Brown (2016-03-09, 13:30):
Put another way:

In the SFC modeling world, it appears to me it's better to speak of "compounding times" rather than "sample times." The SFC modeler is not necessarily "sampling" an underlying continuous system.

In other worlds that's not necessarily the case (perhaps growth models, for instance? Where you're actually trying to fit parameters to empirical data?... I don't know, I'm just guessing here).

You'll notice that Jason spends close to 50% of his time comparing his models with empirical data and making forecasts. THAT is the interesting part of his approach to me: rather than attempting to build "The Matrix" simulation of an economy... he's saying "You're assuming complexity there, where potentially none exists if you look at things from a certain scale." He's also saying "You have to crawl before you can walk or run, and I'm trying to provide a 1st-order useful model of the real world... crawling, essentially." (Those quotes of mine are paraphrases, BTW, based on my understanding of his framework.)

So I think there's a wide chasm between PKE and SFC modeling and what he's trying to do in general. That's why I was so curious to see his take on PKE... and I'm disappointed he got derailed on what amounts (in my mind) to a technical issue. Other readers of Jason's blog (like <a href="http://ramblingsofanamateureconomist.blogspot.com/" rel="nofollow">John Handley</a>) were not keen on going there in the first place, and were all too happy to see him abandon the project. (If you haven't seen John's blog, BTW, you should check it out: he's a truly amazing 15-year-old!)

Tom Brown (2016-03-09, 13:16):
Brian, the aliasing issue is related to assuming an underlying continuous-time problem. That's why I say the problem I discuss above isn't so much an aliasing problem: it's not about sampling an underlying system, it's that an inherently discrete-time system (compounding at discrete dates) will have dynamics dependent on the sample times (compounding times). In the simple case of SIM, this means different time constants... and a limitation on alpha2 in terms of alpha1 that prevents some sample times from being considered (make alpha2 too big, and A goes negative in H[n+1] = A*H[n], and you get oscillations at pi -- a pole along the negative real axis).

I get your objection (I think). I'm not taking a stand one way or another. I'm just trying to point out the problem Jason was getting at *the way I understand it.* I may not be correct. Actually, I think the way he expressed it may be more general than sample period invariance, but that's the way I came at it.

I agree, most of the time it's a minor issue, or a non-issue. And while aliasing is certainly a problem, that's not the core issue he was getting at (although it may be related, in a way I'm not seeing).

I came at this with a challenge: to meet all three of these requirements (for my own entertainment):

1. Reproduce G&L's SIM results at Ts=1.

2. Satisfy all of G&L's equations between any two sample points (not necessarily just the ones they used).

3. Remain sample period invariant.

My conclusion is that's not going to happen unless you do something ugly like build a continuous-time model of a ZOH system. In lieu of that, assuming all the compounding dates happen precisely at G&L's sample times may be just fine for you. From Jason's point of view, I can see where that's a drawback (why would he want sample time dependence?... he's trying to get AWAY from all the messy details of complex human behavior... not get further into the weeds).

Anonymous (2016-03-09, 13:12):
I admit I do not understand Jason when he invokes implicit time scales.

Anonymous (2016-03-09, 13:10):
To be sure, Jason frames arguments using differential equations. But since the results are admittedly only approximate, I view that as a matter of convenience.

Brian Romanchuk (2016-03-09, 12:16):
OK, I will write up my comments as a new article. This article did not take on all of his arguments as directly as I should have, partially because I had no way of explaining myself without dragging in a boatload of systems theory. I think I can better explain the intuition with a simple example, and once again, no serious equations required. (It's related to your compounding example.)

In any event, I still have a massive theoretical objection to his assumption that we need to look at continuous time.
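Tom's point above about H[n+1] = A*H[n] is easy to see numerically: when the chosen sample period pushes A negative, the state alternates sign every step -- a discrete-time pole on the negative real axis, i.e. oscillation at π rad/sample. A minimal sketch with made-up coefficients (not G&L's actual alpha1/alpha2 values):

```python
# H[n+1] = A * H[n]: for A in (0, 1) the state decays monotonically;
# for A in (-1, 0) it decays while flipping sign every step (oscillation).
def simulate(A, H0=100.0, steps=6):
    H = [H0]
    for _ in range(steps):
        H.append(A * H[-1])
    return H

print(simulate(0.5))    # monotone decay: 100, 50, 25, ...
print(simulate(-0.5))   # sign-alternating decay: 100, -50, 25, -12.5, ...
```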
That objection makes it hard for me to discuss his arguments -- since I object to his starting point, following his later logic becomes even more difficult.

Tom Brown (2016-03-09, 11:45):
Also, I wouldn't describe the problems I point out above as "aliasing." I'd describe them as (inadvertently?) changing the system dynamics in G&L's approach when changing Ts. Why? Essentially, they're only doing "compounding" at the sample times they choose, and ignoring any others. For a lot of situations this doesn't matter too much. For others it can be devastating (e.g. harmonic oscillation or exponential growth):

200%/year growth compounded once in 1 year = 3x
200%/year growth compounded continuously = 7.4x

Tom Brown (2016-03-09, 11:39):
Sure, but like I wrote above: if you make a complicated enough model, you can have everything. In lieu of that, you can have a simple yet good continuous-time model that's sample period invariant AND stock-flow consistent, but that's not what G&L do. Or you can choose: SFC and nearly sample period invariant, or sample period invariant and nearly SFC. It's the modeler's choice.

In some cases (perhaps) sample period invariance is more important, and in some cases not.
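The compounding figures in Tom's comment above check out: 200%/year compounded once gives a factor of (1 + 2) = 3, while compounding n times per year gives (1 + 2/n)^n, which approaches e² ≈ 7.39 as n grows. For example:

```python
import math

r = 2.0   # 200% annual growth rate

def growth(n):
    """Growth factor over one year when compounding n times."""
    return (1 + r / n) ** n

print(growth(1))     # 3.0  (compounded once)
print(growth(4))     # ~5.06 (quarterly)
print(growth(12))    # ~6.36 (monthly)
print(math.exp(r))   # ~7.389, the continuous-compounding limit
```

This is the simplest illustration of why dynamics depend on the compounding (sample) times: the same stated rate produces materially different outcomes at different compounding frequencies.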
It doesn't hurt to be aware of the various trade-offs.

Brian Romanchuk (2016-03-09, 06:35):
Having thought about it, it comes down to an embedded assumption Jason Smith is making. He assumes that there is an underlying true continuous-time system, and that all discrete-time models reproduce that underlying continuous-time system.

One could argue that in the real world there is a fixed set of high-frequency transactions, and that we could theoretically change the accounting period and end up with coherent results.

But the point is that we are talking about mathematical models, not the real world. There is no assurance that if we start off with a closed-form quarterly model, we can convert it to a coherent monthly closed-form model. Since every economic model is going to have mismatches with reality, this inability to change frequencies and get exactly the same results is not a big deal. It is to Jason Smith -- because he is imposing analytical assumptions that he makes as a physicist.

From what I have seen, people have a hard time transitioning from physics training to macroeconomics.

Brian Romanchuk (2016-03-09, 06:25):
The Treasury could operate without coordination with the Fed, but the problem is that would be a bad idea. The Treasury would have to do all the work, and the Fed almost nothing. The Fed would probably have to rely on paying interest on reserves to hit its interest rate target (a corridor system), since it could not engage in large open market operations. Furthermore, since the Treasury cannot control when cheques get cashed, there would still be times when banks get squeezed by mistake.

In any event, all those details disappear in any simplified model that we hope to solve.

Brian Romanchuk (2016-03-08, 22:47):
I do not have time to look at this right now (time to sleep...), but there would be changes in behaviour as a result of changing the sampling period. In order to keep the output the same with different sampling periods and the "same" parameters, the flows would have to be at a constant rate. Since the flows in SFC models react to the state, how you sample will affect the results. (That is, monthly flows will not be exactly one-third of quarterly flows when you run a simulation. If flow rates were constant, then monthly would be exactly one-third of quarterly.)

In order to keep fidelity to a single continuous-time model, you would need to recalculate all discrete-time parameters each time you change the sampling frequency. (If you start from discrete time, you need to go to continuous time, and then use the continuous-time model to calculate a second discrete-time model; you could not go directly to the new discrete-time model without introducing approximation errors.) You should be relatively close, so long as there are no dynamics being aliased by the sampling operation.

For models with realistic parameter settings, you would probably start seeing aliasing effects at around the quarterly frequency (my guess).
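Brian's recipe -- discrete time, to continuous time, then re-discretize -- can be sketched for the simplest possible system, dH/dt = a·H, whose exact discretization over a step Ts is H[n+1] = e^(a·Ts)·H[n]. With the exact map, quarterly and monthly runs agree at common dates; naively reusing a per-period coefficient (Euler-style) does not. All numbers are illustrative:

```python
import math

a = -0.8     # hypothetical continuous-time rate (per year)
H0 = 100.0

def exact_step(Ts):
    """Exact discretization of dH/dt = a*H over a step of length Ts."""
    return math.exp(a * Ts)

# One year simulated at quarterly (Ts=1/4) and monthly (Ts=1/12) steps:
quarterly = H0 * exact_step(0.25) ** 4
monthly = H0 * exact_step(1 / 12) ** 12
assert abs(quarterly - monthly) < 1e-9   # re-derived parameters are consistent

# Naive reuse (Euler-style): apply the rate proportionally per period
# without going through the continuous-time model -- the result now
# depends on the sampling frequency.
euler_q = H0 * (1 + a * 0.25) ** 4
euler_m = H0 * (1 + a / 12) ** 12
print(quarterly, euler_q, euler_m)       # the two Euler runs disagree
```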
But if you compared weekly and monthly sampled versions of a continuous-time model, the discrete-time responses should end up looking like the continuous-time version, with very small errors.

What this means is that if you fitted a model to monthly data, you should theoretically redo the fitting if you want to go to quarterly, rather than rely on an approximation. However, since your fittings are not going to be extremely accurate in the first place, that source of error is so small that it is safe to ignore. This is only an issue for theoretical discussion, where we assume that we have access to extremely accurate models. Since nobody believes that we have those accurate models, most economists are correct in largely ignoring this issue. Electrical engineers actually have accurate models, and run into a lot of sampling problems, and so they developed the theory for handling this.

Brian Romanchuk (2016-03-08, 22:27):
But if you are not looking at individual transactions, you do not have any reason to think that things are operating in continuous time.

I am unsure what they were arguing about originally; I was objecting to comments about the assumptions about time constants.

Tom Brown (2016-03-08, 20:54):
(Actually, *I think* you CAN have stock-flow consistency that's invariant to the sample period... but you won't match G&L's results at Ts=1 if you do... not unless you resort to something awful like making a continuous-time model of zero-order-hold versions of the T function... or something equivalent.)

Tom Brown (2016-03-08, 20:44):
<b>TL;DR version: I have two versions of G&L's SIM model:

<a href="http://banking-discussion.blogspot.com/2016/03/sim.html" rel="nofollow">SIM:</a> with stock-flow consistency, but not invariant to the sample period.

<a href="http://banking-discussion.blogspot.com/2016/03/sim2.html" rel="nofollow">SIM2:</a> not stock-flow consistent, but invariant to the sample period.

Both produce the exact same results at sample period = 1, and the exact same results in steady state for any sample period.</b>

It seems to me that you can't have everything.
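Tom's observation that both versions agree in steady state for any sample period is generic: for a recursion H[n+1] = A(Ts)·H[n] + B(Ts)·G, the fixed point H* = B·G/(1 − A) can be independent of Ts even when the transient path is not. A toy sketch (the parameters are hypothetical, not G&L's):

```python
import math

lam, g, G = 0.5, 0.2, 100.0   # illustrative parameters only

def coeffs(Ts):
    """Exact discretization of dH/dt = -lam*H + g*G over step Ts."""
    A = math.exp(-lam * Ts)
    B = g * (1 - A) / lam
    return A, B

def steady_state(Ts):
    # Fixed point of H[n+1] = A*H[n] + B*G, i.e. H* = B*G / (1 - A).
    A, B = coeffs(Ts)
    return B * G / (1 - A)

# The (1 - A) factors cancel, so H* = g*G/lam for EVERY sample period,
# even though the transient dynamics depend on Ts.
print(steady_state(1.0), steady_state(0.1))   # both equal g*G/lam = 40.0
```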