I ran into an attempted slam on Paul Krugman's 1998 argument that "the internet" would not be a big deal economically. Sadly for the anonymous account who dredged the argument up, Krugman was closer to the truth than the people he was arguing with at the time. This was part of a debate when I started in finance, and related to long-term growth. The linkage to fixed income is via the dreaded r>g fiscal sustainability debate -- what is the long-term real growth rate?
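For readers who have not run into the r versus g debate: the standard debt-dynamics accounting identity is what makes the long-term growth rate matter for fiscal sustainability. The following is a minimal sketch of that identity, with purely illustrative parameter values (not forecasts).

```python
# Sketch of the textbook debt-dynamics identity behind the "r > g" debate.
# All parameter values below are illustrative assumptions, not forecasts.

def next_debt_ratio(d, r, g, primary_deficit):
    """Debt-to-GDP ratio next period: d' = d*(1+r)/(1+g) + primary deficit (as % of GDP)."""
    return d * (1 + r) / (1 + g) + primary_deficit

# With r > g and a balanced primary budget, the ratio drifts upward...
d = 1.0
for _ in range(10):
    d = next_debt_ratio(d, r=0.03, g=0.02, primary_deficit=0.0)

# ...and with g > r, it drifts downward. Hence the fixation on the
# long-term real growth rate g.
```

The point is only that the sign of (r minus g) flips the direction of drift, which is why long-term real growth projections get dragged into fiscal debates.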
At the time, I was largely unimpressed with economists' grasp of the technological issues. My academic background was in control systems engineering, which was the discipline tied to harnessing the power of electronics to better operate mechanical and chemical systems. My view is that there is a certain amount of economic folklore around the subject, and the correct answer is that long-term real growth projections are almost entirely garbage in-garbage out exercises.
Folklore Behind the Productivity Concern
The story behind the concern about productivity is the result of some model that is probably part of Economics 101 (which is entirely typical for mainstream economists). The idea is that output is specified as follows:
Real Output = f(capital, labour hours, productivity).
(Post-Keynesians object to the functional form used, and/or the definition of "capital." I will ignore this here.)
In order to project long-term real growth, we need to project the arguments to the function. By a process of elimination, productivity is the one that is the concern.
- Increasing capital ("capital deepening") is underweighted in discussion. The presumable reason for this is that private sector fixed investment decisions are allegedly the result of optimising behaviour, and cannot be controlled. (More capital investment reduces the share of output going to consumption, and implies greater future depreciation, so capital deepening is self-limiting.) Obviously, Chinese state planners were not listening to the consensus on this one.
- Labour hours are a function of the population and labour utilisation. Since the mainstream assumption is that full capacity utilisation is the norm, the only way to increase labour hours is by growing the population. Since parents are not increasing the number of offspring to help out debt-to-GDP projections, we are stuck with economists arguing for more immigration.
We are left with productivity to wring our hands about.
The Academic Debate
The economist Robert Gordon is the big name in mainstream productivity research in recent decades, publishing The Rise and Fall of American Growth (Amazon affiliate link) in 2016. (The book has sold well, but I have not read it. I have no idea what the heterodox view is on his research. I was familiar with his earlier work when I looked at this debate in the late 1990s.)
Gordon's research was serious, but it was premised on attempts to make official GDP data reflect true productivity trends. The argument in favour of this is straightforward: if we want to debate long-term growth potential, we need to measure productivity accurately. My argument is that this is largely impossible, and we are stuck with hand-waving narratives. Mainstream economists do not like hand-waving narratives.
My discussion here reflects the popular discussion of productivity, at least the late-1990s variant. What bugged me at the time is that economists were bad-mouthing "computers" and the "internet" without grasping the underlying story. The academic debate may have been better than what I encountered in market research.
Since I am supposed to be editing my manuscript, I am not chasing after statistics. The problem is that the story I am telling puts the big structural breaks around World War II - when comprehensive national statistics was in its infancy. To get a good handle on the topics I am discussing, one would need to dig into time series created by economic historians -- which are sometimes controversial.
The Agricultural Revolution
The major productivity story of recent centuries was the revolution in agriculture. We went from a situation where the vast majority of the population were effectively subsistence farmers to one where only a tiny percentage of the population works on farms (in the developed countries).
Since aristocrats were obsessed with increasing the returns on their estates, the interest in raising agricultural yields pre-dates "capitalism" (as far as I can tell).
From the perspective of the effect on aggregate productivity, the gains from agriculture stopped being significant once too small a percentage of the work force remained on farms. Yields are still advancing, but there are no longer enough people working on farms for further reductions in their share of employment to matter much. I would guess that sometime around World War II is where you can draw the line on the chart for the structural break.
The Hydrocarbon Revolution
The next revolution (which helped push the first) is the revolution in the exploitation of the concentrated energy in hydrocarbon sources -- first coal, then oil/natural gas. If we view the rise of industrial capitalism as coinciding with the advent of steam power, this revolution is almost coincident with the capitalist era.
This revolution only played out once diesel locomotives replaced steam, and/or the rise of commercial aviation. Steam locomotives may have been romantic, but they were dangerous (tended to blow up), and they disassembled themselves when running. Very large skilled workforces were needed to rebuild them. The rise of the diesel locomotive meant that those workers were superfluous. (My father was part of that wave; he first worked as an apprentice in a machine shop. He then got an engineering degree, and worked for the railway in an office.)
As my timeline indicates, the hydrocarbon revolution played out roughly in the periods where people start complaining about a productivity slowdown.
The Electrical Engineering Revolution
We can divide electrical engineering into two eras, once again divided by World War II. Before World War II, the emphasis was on the transmission of power over wires, and the related electro-mechanical devices: refrigerators, electric motors, lightbulbs. After World War II, the emphasis shifted towards the transmission of information: digital circuitry, and the rise of communications and control systems as disciplines. (Communications was part of the pre-World War II picture, but was not a big part of the curriculum.)
Therefore, a lot of revolutionary aspects of electrical engineering overlapped the hydrocarbon revolution, and explained the huge burst in new products in the early twentieth century. Thus, they fit in with the high pre-World War II productivity story. However, things look different after World War II.
Although people in the productivity debates focused on "computers," they should have said "personal digital computers." Analog computers (analogue in the U.K.) pre-dated digital computers; the Phillips hydraulic economic models were analog computers, although using water instead of circuitry. Electrical engineers were stuffing control circuitry into every industrial application they could find as soon as the technology existed (after wartime controls were relaxed). All the low-hanging fruit for the industrial applications of electronics was picked in the 1950s-1960s, which coincides with the end of the trend of high productivity by conventional analysis.
The post-1960s era has a different growth characteristic, which I will return to later.
Physical Stuff Productivity: Baseline Is Negative (?)
If we look at the productivity of industrial processes, gains are now coming in a more painful way. We need to increase some technical capacity, which then allows new processes.
- Materials science allows us to build things that were previously impossible. (Wind farm rotors being one example.)
- Miniaturisation of electronics/improved machining technology allows us to build more intricate things.
- Scientific advancements continue, possibly related to better instrumentation (previous point).
For many industrial processes, we are near physical limits of efficiency, so improvements will tend to be marginal.
On top of the increasing difficulty of advancing industrial processes, the extractive industries are fighting depletion. Pretty well everything is on the wrong side of depletion curves -- oil, topsoil, fisheries, minerals, etc. The Economics 101 production function is missing the depletion curve information, which helps explain why things like Peak Oil are not understood. Meanwhile, ignoring this factor understates current productivity.
If we factor in the reality that engineering efficiency cannot go above 100%, sooner or later, depletion curves will win.
New Era Productivity Is Competitive Productivity
I now return to the question of why the new era (post-1960s) is different. I would phrase it as follows: advances are showing up as making firms more competitive, not as producing a greater volume of stuff.
Although I am getting old and filled with nostalgia, my feeling is that many things were not better in The Old Days. Very simply, if we could transport 1970s era products and business models to the present (or 2019, if we want to get away from the pandemic effect), those products and firms would fail horribly. The following is just a selection of things that I can personally compare. One way of framing the question is: how much would I pay for this product or service?
- I would pay exactly $0 to get 1970s era dental care (if 2020 era care is available), or for 1970s cancer care. There are obviously a lot more examples in the medical industry.
- Since I am not interested in repairing my car, nor do I have friends in the scrap metal business, I would pay $0 for most North American-produced automobiles from the 1970s or earlier. (Muscle cars being the only cars of that era of interest.)
- Although I am a fan of retro video games, it would be hard to imagine anyone paying for the equivalent of a Pong home console. The vast majority of video games from the 1990s or earlier would be viewed as unplayable by young people.
- Record stores died for a reason; they exist mainly by selling other products alongside vinyl/CDs. I went into a high end audio store last year to look at turntables, and even those were digital -- which would have caused a riot among deranged audiophiles in the 1980s.
- Pre-flat screen televisions are worthless, judging by the number that are left abandoned by the curb in my neighbourhood.
- Cardboard wargames were popular up until the 1970s, and boardgames had a revival in the 1990s. Those old games were mediocre when compared to the new generation of games.
- North America was a beer wasteland at least until the mid-1980s; the only options were essentially tasteless lagers. (Admittedly, the IPA hipsters have now gone too far the other way.)
- North American bookstores were generally terrible (at least outside university towns or big cities); the big bookstore chains pushed them out of existence for a reason. Those big chains had very large inventories that required good digital record-keeping.
- Building materials have advanced; the demise of copper plumbing is one example.
- The regulators would not allow a financial firm to exist if it did not have relatively recent electronic technology.
- Smart phones (and the associated "app" ecosystem) are a product that is incomparable to anything available in earlier decades.
My argument is that something is only worth what somebody else is willing to pay for it. A significant portion of earlier-era output would have a market value of $0. Stick that $0 into your price index, and suddenly measured productivity jumps.
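A toy numerical illustration of that last point (all the numbers below are invented purely for illustration): if part of the old basket becomes worthless at current prices, the deflator collapses, and the same nominal output registers as more "real" output.

```python
# Hypothetical two-good economy: an "obsolete" product and a "current"
# product, equally weighted. If the obsolete product's current market
# price falls to zero, a crude expenditure-weighted deflator falls, and
# unchanged nominal output shows up as more "real" output. Numbers are
# made up; real statistical agencies use far fancier index formulas.

nominal_output = 100.0

def deflator(weights, price_relatives):
    """Crude weighted-average price index (base period = 1.0)."""
    return sum(w * p for w, p in zip(weights, price_relatives))

base = deflator(weights=[0.5, 0.5], price_relatives=[1.0, 1.0])     # 1.0
zeroed = deflator(weights=[0.5, 0.5], price_relatives=[0.0, 1.0])   # 0.5

print(nominal_output / base)    # 100.0 units of "real" output
print(nominal_output / zeroed)  # 200.0 -- measured real output doubles
```

The mechanics are trivial; the hard part (as discussed below) is that nobody actually knows what price relative to assign to products that no longer have a market.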
With these preliminaries out of the way, I can now rant about how economists discussed "the internet" as if it were something that popped up in the late 1990s, and that they expected to make productivity jump. (Admittedly, Silicon Valley types -- who allegedly knew about technology -- were talking as if that were true.)
The "internet" dates back to ARPANET, which was a going concern by the 1970s. Any electrical engineering department in a university that was part of the backbone (McGill University was one) had been using email for a very long time. I was working from home on my thesis back in the early 1990s, using boring, proven technology.
The only thing that happened in the 1990s is that the HTTP protocol was developed, which just opened the internet to a wider audience. The "revolution" was largely in retail applications, as industrial firms had set up their own networks decades earlier. (Only a complete idiot -- or a current denizen of Silicon Valley -- would open their train traffic controls to foreign hackers.) Retail is a zero-sum business, and so it is not too surprising that there was no jump in productivity. What the communications revolution allowed is greater variety in products. This helps eliminate competitors, but it is not increasing the volume of "stuff."
(The other issue with the greater variety of products is that this means that production runs are smaller. The Cobb-Douglas production function incorrectly states that there are diseconomies of scale; in the real world, economies of scale exist. Greater variety will lower capacity utilisation, and hence lower output per hour, even with the same technical productivity.)
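The production-run point can be sketched numerically. The parameters below (setup time per run, units per hour) are invented for illustration; the mechanism is just that each product variety carries a fixed setup cost.

```python
# Sketch of the smaller-production-runs point, with invented parameters:
# each production run carries a fixed setup time, so spreading the same
# total volume across more product varieties lowers output per hour,
# even though the underlying technical productivity is unchanged.

def output_per_hour(total_units, varieties, units_per_hour=100.0,
                    setup_hours_per_run=5.0):
    """Average units produced per hour, including per-variety setup time."""
    run_hours = total_units / units_per_hour
    setup_hours = varieties * setup_hours_per_run
    return total_units / (run_hours + setup_hours)

# Same 10,000 units of output, more varieties -> lower measured
# labour productivity, with no change in the technology.
few_varieties = output_per_hour(10_000, varieties=2)
many_varieties = output_per_hour(10_000, varieties=20)
assert many_varieties < few_varieties
```

This is only the economies-of-scale half of the story; the measured productivity data cannot distinguish this effect from a genuine technological slowdown.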
The other complaint by economists is about "computers" not helping productivity. This is not surprising. Personal computers were generally used by people in offices. For an industrial firm, anyone who is not either in production or sales is just part of administrative overhead. Giving office workers computers just resulted in them doing different things, but the number of employees did not wither (as was the case for agricultural workers). They probably ended up doing things that made them more competitive versus other firms, but this did not increase output.
There was no revolution in industrial output because the effect of the introduction of electronics happened decades earlier.
It is entirely possible to imagine negative events that greatly reduce productivity. The Peak Oil literature is a great place to start, or an environmental catastrophe. If we put that pessimism aside, the base case looks like continuation of post-1960s trends.
To greatly increase industrial output, if we look at historical analogies, there seem to be two possible stories.
- Some new cheap source of power, or possibly resources (asteroid mining!). I am not holding my breath.
- Some innovations that can displace large numbers of workers, as happened in agriculture. The problem is that we now have a very diverse workforce, and so it is hard to see what developments can hit multiple sectors at the same time. We are stuck with "artificial intelligence" fairy stories being spread by hucksters from Silicon Valley.
Barring a major trend change, there is no reason to expect the churn in products to suddenly stop. We will be in a situation where there is an extremely small overlap between the consumption baskets of 2020 and 2060. This makes the question of what part of nominal growth is "real" versus "inflation" largely unanswerable. As such, we cannot hope to offer much useful input into the "what is the long-term real growth rate?" question.
- The high productivity era that ended around the 1960s was the result of harnessing physical sciences. Unless a comparable revolution can happen, it will not be repeated. That is, trying to extrapolate a "natural rate of productivity growth" is misguided. Attempting to fit political economy narratives onto something driven by engineering is silly. Political economy decisions can shape how production is conducted, but production has to respect engineering limits.
- Advances in technology are now mainly being used to make competitors obsolete. This means that the mix of products and services is continuously changing, making output quantity comparisons questionable.
- Any attempt to discuss fiscal policy in terms of hypothetical long-term growth trends is inherently a waste of time.
My comments about marking old output at $0 may or may not have been poetic exaggeration. However, I can use the subject of books -- obviously of personal interest -- to outline the sort of issues I see.
If we just use "books" as the final product, then we just look at sales volume, and we are done. The immediate problem is quality adjustment. As fiction authors are well aware, any living fiction title is competing against old titles (many of which have gone beyond copyright protection). If the new book is going to sell a decent number of copies, it has to be better (in some sense) than the existing body of work. How exactly are statistical agencies going to measure that (other than creating a lot of jobs for humanities majors)?
An alternative approach is to say that we are not concerned with books as a final product; rather, they are an input to book-selling services.
I can divide the experience in my lifetime (growing up in a small North American city) as follows.
- A scattering of small bookstores, with extremely limited selections. For example, if you were a fan of science fiction, you had to read the "classics" of science fiction (e.g., Foundation, Dune, etc.) because those were the only option other than the small list of current best-sellers. If you knew what book you wanted, you could special order a book, but this was a service that was not heavily used.
- Big chain bookstores (there were two in Canada, which then merged) rolled over most of the existing competition. They had jaw-dropping levels of selection when compared to the old small stores. The rise of the big bookstores unsurprisingly coincided with a rise in the prominence of sub-genres (in science fiction, you have steampunk, cyberpunk, hard science fiction, space opera, etc.; the Wikipedia entry lists 20). Small bookstores had to adapt to survive. (E.g., quaint little bookshops in tourist traps sold curios.) The book-selling service raised its game, and the old service was not able to compete with the new. There was undoubtedly a quality improvement, which is independent of the physical books sold.
- Big internet booksellers then barged into the scene in the first Internet Bubble era (when Krugman made his prediction). Physical books were supplemented with e-books. It is now possible to create a rental arrangement for books, where customers pay by the month -- and publishers are paid by amount read. The ease of search allows for an extremely wide variety of books to be sold (including mine, which never would have been viable in bookstores). There is no doubt that bookselling is a service, and the new service is even more attractive to customers. By implication, there was yet another quality jump, which should be taken into account.
Even if the total number of hours worked were the same, and the volume of books sold unchanged, the service is better, and so productivity should be higher.
How do we measure this? There is no obvious way to do so, and if we figure it out, that does not mean the methodology will map to the evolution of any other product or service. That's why I lean towards qualitative hand-waving.
(c) Brian Romanchuk 2020