Tag Archives: Paul Krugman

In my last post, I explained how the academics behind the job polarisation literature (declining middle class) have given us a framework for understanding the emergence of very clear winners and losers in the modern workplace. Yet most of these scholars have refused to extend their analysis to justify any fear of technology-led mass unemployment.

According to these economists, the disappearing middle class (a result of the death of white-collar routine cognitive work carried out by office employees and blue-collar routine manual work performed by factory employees) will reappear in cognitive non-routine or manual non-routine jobs. In so doing, these academics have generally wasted few opportunities to bash lump-of-labour advocates; that is, those who believe that there exists a fixed pool of jobs that computers are draining away.

Nonetheless, there are cracks in the facade. For example, back in 2003 Paul Krugman (who has acted as a commentator on the job polarisation literature rather than an originator) was rock solid behind the economics profession's consensus position, as can be seen here. But by December 2012 we see a significant U-turn in a New York Times piece called “Rise of the Robots”.

However, I would say that the consensus, while shaky, is still in place. Moreover, for a high-voltage polemic against the lump of labour theory, I recommend you read “Are Robots Taking Our Jobs, or Making Them?” by Ben Miller and Robert Atkinson of the Information Technology and Innovation Foundation. Like all good polemics, the essay assembles all the evidence that supports their thesis of ‘don’t worry, be happy’ and omits any evidence that contradicts it.

Nonetheless, it is a good, comprehensive exposition of the consensus position of the economics profession that has dominated thinking for decades. Further, we can actually take their analysis, but subvert it somewhat to fit the facts of what is actually happening in the job market, and from there think about solutions.

Miller and Atkinson sum up their position thus:

Both history and scholarly analysis have clearly and consistently refuted the notion that increased productivity leads in the moderate to long term to higher unemployment. This is because rising productivity increases overall wealth, and in a competitive economy that increased wealth gets reallocated to create additional demand that requires new workers.

This is a bold statement that I would agree used to be true, but may no longer be valid. But before we look at any data, let’s focus on the mechanism that they claim supports their assertion. The next sentence is key.

A decade or so ago, any suggestion that technology could be a major driver of inequality, let alone unemployment, would generally have been met with contempt by your average economist. Anyone questioning the beneficence of technology would have been accused of succumbing to the ‘lump of labour’ fallacy. Simplistically, the lump of labour fallacy refers to the belief that there exists a fixed pot of labour; accordingly, if a computer eats some of the pot, there is less left for everyone else.

As the derisive name suggests, it’s an idea economists view with contempt, yet the fallacy makes a comeback whenever the economy is sluggish.

And for decades, The Economist magazine, a generally intelligent supporter of free markets and free trade (by which I mean it has enough intellectual curiosity to explore counterarguments), has frothed at the mouth like a rabid Tea Party activist whenever the idea of a fixed supply of labour is raised.

In a very funny parody, Tom Walker of New York’s Monthly Review (who must have too much time on his hands) stitched together sentences containing the words “lump of labour” from more than a decade’s worth of The Economist to create this article here. It perfectly captures the magazine’s angry but condescending tone whenever the issue is raised.

Walker was at the time writing in defence of shorter working hours in the face of rising unemployment, a somewhat tangential topic to where we are going with this post (although I will touch on it in my next), but the point I think he was trying to make was simple: if you believe there is any limit on the amount of work available, The Economist thinks you are an idiot.

But then a strange thing happened. Some mainstream economists started to venture the opinion that technology was making the labour market act weird. They hadn’t quite got to the stage of taking the lump of labour seriously, but they were now prepared to admit that technological progress was a two-edged sword—at least for some workers.

Autor et al. had noticed that the U.S. labour market was changing shape under the influence of the awkward acronym SBTC (skill-biased technical change). Within the skills-based distribution of jobs, employment was growing at the top end and at the bottom end, but hollowing out in the middle. Furthermore, back at the top end, wages were not only rising but also diverging; in other words, inequality among the wealthy, so beautifully spoofed here by The Onion, was actually real. In contrast, no such divergence in fortunes was seen at the bottom.

Accordingly, the new trend the paper tracked was not one that was ‘lifting-all-boats’ technology-led economic growth. Rather, the boats in the Autor et al paper are being thrown all over the place—with some capsizing.

Now, typical of its type, the paper contains an empirical bit looking at the labour-market data, and then a model which tries to make sense of what is going on. In the model, jobs sit in a grid of attributes: work is cognitive or manual, but it is also routine or non-routine. Of course, such a model removes the grey scale of real life, but that is not important for our basic understanding.

Autor and his co-authors then go on, somewhat confusingly, to simplify the workforce even further into three categories rather than four: 1) abstract, which are the cognitive, non-routine jobs; 2) routine, which are both routine cognitive and routine manual jobs; and finally 3) manual, which means manual non-routine jobs. An earlier paper, also co-authored by Autor, has a useful table showing what is going on in the grid:

Moving further into their model, workers are divided into those with tertiary education and those only educated up to high school. The latter can’t perform abstract tasks but can switch between routine and manual tasks. Finally, technology is seen as having two impacts: it mostly substitutes for routine tasks, but mostly complements abstract tasks; that is, it makes abstract workers more productive.
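The task grid and its collapse into three categories can be put in a toy sketch. This is purely illustrative (the names and the mapping below are my own shorthand, not Autor’s formalism):

```python
# Toy sketch of the Autor task grid: four task types collapse into three
# categories, and technology treats each category differently.
TASK_CATEGORY = {
    ("cognitive", "non-routine"): "abstract",
    ("cognitive", "routine"):     "routine",
    ("manual",    "routine"):     "routine",
    ("manual",    "non-routine"): "manual",
}

def technology_effect(category: str) -> str:
    """Technology substitutes for routine tasks and complements abstract ones."""
    if category == "routine":
        return "substitute"   # computers displace this work
    if category == "abstract":
        return "complement"   # computers raise this worker's productivity
    return "neutral"          # non-routine manual: little direct interaction

print(technology_effect(TASK_CATEGORY[("cognitive", "routine")]))  # substitute
print(technology_effect(TASK_CATEGORY[("manual", "non-routine")]))  # neutral
```

The point the sketch makes is that a bank clerk and an assembly-line worker land in the same bucket as far as a computer is concerned, while the software engineer and the gardener sit on either side of it.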

The word complement is key, since the complementarity between technology and routine cognitive and manual labour has been the driver of the explosive growth in living standards since the industrial revolution. Ned Ludd was wrong to smash up two stocking frames in 1779 because his labour, over the course of time, could always be paired with new technology that required a routine cognitive or routine manual human complement.

As Gregory Clark’s book “A Farewell to Alms” demonstrates, this was a win-win situation for the working class. As a result, workers were more ‘pulled’ out of the countryside and into the cities by attractive relative wages than they were pushed out by evil property owners enclosing their land. And while Marx and Engels may have disputed the cause of the migration, they applauded the result; from the Communist Manifesto:

The bourgeoisie has subjected the country to the rule of the towns. It has created enormous cities, has greatly increased the urban population as compared with the rural, and has thus rescued a considerable part of the population from the idiocy of rural life.

At this point, I must stress that Autor has taken every opportunity to distance himself from lump of labour advocates since coming out with the polarisation thesis. From a 2010 paper (here):

This ‘lump of labor fallacy’—positing that there is a fixed amount of labor to be done so that increased labor productivity reduces employment—is intuitively appealing and demonstrably false. Technological improvements create new goods and services, shifting workers from older to new activities. Higher productivity raises incomes, increasing demand for labor throughout the economy. Hence, in the long run technological progress affects the composition of jobs not the number of jobs…

…It is not fallacious, however, to posit that technological advance creates winners and losers.

And in an op-ed piece in the 24 August 2013 New York Times:

Computerization has therefore fostered a polarization of employment, with job growth concentrated in both the highest- and lowest-paid occupations, while jobs in the middle have declined. Surprisingly, overall employment rates have largely been unaffected in states and cities undergoing this rapid polarization. Rather, as employment in routine jobs has ebbed, employment has risen both in high-wage managerial, professional and technical occupations and in low-wage, in-person service occupations.

Hmmm. It is true that jobs have risen in the high-end cognitive occupations, but job growth in the low-wage manual occupations has been minimal, and far too small to absorb the displaced middle. We can see these numbers in a 2013 note from the Federal Reserve Bank of New York by Albanesi et al. Indeed, the job polarisation highlighted by Autor is affecting the aggregate labour market beyond just relative wages. First, both cognitive routine and manual routine jobs have been in structural decline:

And the routine jobs are the first to go in recessions and the last to come back:

And from my previous posts, this chart of labour participation shows the net effect of all these moving parts. In a modern state like the U.S., the unemployed hide where they can, seeking refuge, for example, in disability claims, so the labour participation rate goes down.

Autor’s faith that technology cannot reduce the total number of jobs is thus stymied by the data, at least over the last decade. In his original 2006 article, Autor’s model predicted that the increased productivity of non-routine cognitive jobs would lead to income effects (greater wealth for the cognitive elite) that would in turn create higher demand for non-routine manual jobs.

Nonetheless, this is an empirical observation from old data, not a truth that comes out of the model. What Autor admits is that most cognitive routine and manual routine workers can’t price their labour at a low enough rate to compete with computers. Accordingly, they have to find refuge in work that is not in direct competition with computers (or technology broadly defined).

Looking at the Albanesi charts, however, the size of the non-routine manual job category is far smaller than the routine cognitive and manual job ones. So we have a huge problem of absorption. Nonetheless, to a classical economist, a price exists at which the market will clear. But the dirty little secret of the lump of labour skeptics is that the market may clear at a price that doesn’t provide a liveable wage (as per the Boxer and Napoleon example in Part 2).

Further, under the Autor model, income effects associated with the ever-prospering non-routine cognitive elite could compensate for the cut-throat competition within the non-routine manual sector. As the geeks’ wages rise, the opportunity cost of doing their own washing rather than cranking out computer code grows ever steeper. In the language of economists, we have income elasticity of demand effects (a richer cognitive elite) for non-routine manual labour coupled with price elasticity of demand effects (lower manual wages making non-routine manual workers more attractive to hire).

I would call this the ‘Downton Abbey economy’: a return to an Edwardian-style wealthy elite employing an army of non-cognitives. But how many workers would a modern-day Lord Grantham need to employ to run Downton Abbey in the requisite style? I would guess fewer than a fifth of the original staff, given that technology has eliminated most of the routine jobs that all the scullery maids and manservants used to perform. Yet Autor remains resolutely upbeat. In the New York Times op-ed piece he says this:

The outlook for workers who haven’t finished college is uncertain, but not devoid of hope. There will be job opportunities in middle-skill jobs, but not in the traditional blue-collar production and white-collar office jobs of the past. Rather, we expect to see growing employment among the ranks of the “new artisans”: licensed practical nurses and medical assistants; teachers, tutors and learning guides at all educational levels; kitchen designers, construction supervisors and skilled tradespeople of every variety; expert repair and support technicians; and the many people who offer personal training and assistance, like physical therapists, personal trainers, coaches and guides. These workers will adeptly combine technical skills with interpersonal interaction, flexibility and adaptability to offer services that are uniquely human.

Nonetheless, in other statements by Autor, one can sense some ‘wobble’. In an MIT Technology Review article called “How Technology Is Destroying Jobs” by David Rotman, Autor comes out with this:

“There was a great sag in employment beginning in 2000. Something did change,” he says. “But no one knows the cause.”

So something is going on with total employment, even if he doesn’t admit it is a lump of labour problem for non-cognitive workers in the face of advancing technology.

In the same article, Lawrence Katz, Autor’s co-author in the original 2006 polarisation paper, goes even further. While reiterating that the historical record shows no decrease in jobs over an extended period of time following technological change, Katz confesses that this time could possibly be different:

Katz doesn’t dismiss the notion that there is something different about today’s digital technologies—something that could affect an even broader range of work. The question, he says, is whether economic history will serve as a useful guide. Will the job disruptions caused by technology be temporary as the workforce adapts, or will we see a science-fiction scenario in which automated processes and robots with superhuman skills take over a broad swath of human tasks? Though Katz expects the historical pattern to hold, it is “genuinely a question,” he says. “If technology disrupts enough, who knows what will happen?”

So lump of labour advocates are no longer ignored even if they still don’t get much respect. And if automated processes and robots do “take over a broad swath of human tasks” and radically downsize the job market, what is to be done? I will work through the implications of a lump of labour victory in my next two posts.

As I write this post, the yen has broken below 100 to the U.S. dollar and the Nikkei has closed at a five-year high. So surely Abenomics is working, isn’t it? Well, it is certainly pushing up asset prices. Indeed, if I were still in my old job as a Japanese equity hedge fund manager I would have swung the bat as hard as I could after Bank of Japan Governor Kuroda’s original April 4 announcement. And I would plan to keep swinging the bat well into the future. Indeed, if my risk manager was not having a heart attack by now, I would feel I had not done my job properly.

Both papers are difficult reads for the non-economist, but, as I mentioned in my previous post, the Richmond Fed has made available “A Citizen’s Guide to Unconventional Monetary Policy” for non-specialists that contains the core policy prescription of the two academic papers referred to above. From A Citizen’s Guide, the critical passage is this:

In the Eggertsson and Woodford model, the commitment to making monetary policy “too easy” would only stimulate economic activity if the commitment is viewed by the public as highly credible. That is, markets must believe that the central bank will, in fact, hold rates “too low” in the future simply because it promised to in the past, despite the fact that at that point, it would wish to raise rates to avoid inflation.

Krugman, ever the wordsmith, put this more succinctly:

The way to make monetary policy effective, then, is for the central bank to credibly promise to be irresponsible…

Now we know that asset price inflation operates on a different time scale to consumer price inflation: indeed, Japan’s stock price indices are already up 50% from their December lows, while consumer price inflation has barely budged. Nonetheless, to whatever level asset prices go, Kuroda has to keep his mouth firmly shut to have any chance of changing public perceptions of future inflation. He is not allowed to make Greenspan-type gnomic references to “irrational exuberance”, let alone pull back from Japanese government bond buying. He must drive Japan’s monetary policy as if he were in one of those defective Toyota cars recalled for a faulty accelerator pedal stuck to the floor.

This, of course, is a bubble meister’s charter, since for Kuroda to succeed in changing consumer expectations he must keep the accelerator pedal depressed for years. It is also worth keeping in mind that the Bank of Japan’s newly minted 2% inflation target is only an intermediate goal. As I explained in my last post, what monetary policy is really trying to achieve here is the closure of an output gap, i.e., the difference between where the economy is currently operating and where it could be operating if labour and capital were fully employed.

Moreover, the problem is perceived as one of lack of demand, not supply. The idea is that households won’t spend today because they think goods will get cheaper tomorrow. In effect, even if they hold cash at the bank earning zero, deflation means that they are getting a comfortable real return. The policy goal a la Krugman, Woodford and Eggertsson is to make that real return negative. And the only way to create a negative real return when interest rates are zero is to have inflation. If you can persuade the populace that inflation is barrelling toward them in the future, then they will cut savings and increase consumption now—or so the theory goes.
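The arithmetic behind “deflation gives cash at the bank a comfortable real return” is a one-liner. A minimal sketch, using the Fisher approximation and purely illustrative numbers:

```python
def real_return(nominal_rate: float, inflation: float) -> float:
    """Approximate real return: nominal rate minus inflation (Fisher approximation)."""
    return nominal_rate - inflation

# Cash at the bank earning zero nominal interest:
print(real_return(0.0, -0.01))  # 1% deflation -> +1% real return: saving pays
print(real_return(0.0, 0.02))   # 2% inflation -> -2% real return: spend now
```

With rates pinned at zero, inflation expectations are the only lever left on the real return, which is exactly why the Krugman–Woodford–Eggertsson prescription hinges on persuading households that inflation is coming.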

In addition, if the economy is idling below potential with unused capital and labour, any sudden jump in demand will result in high productivity and economic growth. Growth, in turn, will lead to higher wages and greater government tax receipts. Thus—and this is where the magic of macroeconomics comes in—the act of spending more now results in higher wages and living standards in the future.

Surely, a classic win-win: more consumption and more growth. What’s not to like? Nonetheless, there are a number of problems. First, how smoothly this all works depends to a large degree on the extent of the output gap. An article by Gavyn Davies in The Financial Times takes a look at the difference between output gaps if we just extrapolate past growth and those if we take into account supply-side phenomena (click for larger image) for a number of countries.

Economic growth can be increased either by raising the labour and capital inputs used in production, or by improving the overall efficiency in how these inputs are used together, i.e. higher multifactor productivity (MFP). Growth accounting involves decomposing GDP growth into these three components, providing an essential tool for policy makers to identify the underlying drivers of growth.

Therefore, if I am to be proved wrong in my declaration that Japan is post-growth, Abenomics must be able to boost labour inputs, and/or increase capital inputs and/or improve multifactor productivity (innovation and efficiency). By definition, the Abe agenda must encompass one or more of the three—there are no other means of achieving growth.
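The growth-accounting decomposition described above can be sketched numerically. This is a hedged illustration under a Cobb-Douglas assumption; the capital share and growth rates below are made-up numbers, not Davies's or anyone's actual estimates:

```python
def gdp_growth(labour_growth: float, capital_growth: float,
               mfp_growth: float, capital_share: float = 0.35) -> float:
    """Cobb-Douglas growth accounting: g_Y = a*g_K + (1-a)*g_L + g_MFP."""
    return (capital_share * capital_growth
            + (1 - capital_share) * labour_growth
            + mfp_growth)

# A Japan-style scenario (illustrative): shrinking labour force, modest
# capital deepening -- growth has to come almost entirely from MFP.
g = gdp_growth(labour_growth=-0.005, capital_growth=0.01, mfp_growth=0.01)
print(f"GDP growth: {g:.2%}")
```

The sketch makes the constraint concrete: with labour input falling, the Abe agenda can only deliver growth through the capital and MFP terms of the identity.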

Against this background, Prime Minister Abe has given top billing to monetary stimulus within his ‘three arrow’ policy agenda. He campaigned and won a general election on a pledge to force Japan’s central bank, the Bank of Japan, to adopt a binding 2% inflation target through unlimited monetary easing and thus slay deflation. Moreover, to execute such a strategy, he backed a new BOJ governor, Haruhiko Kuroda, who took office in March. Kuroda, in turn, has executed Abe’s monetary policy agenda with gusto. (For a fascinating article on how Kuroda deftly manoeuvred the BOJ board into unanimously supporting the policy shift, see this Reuters article here.)

In contrast with the speeches of his predecessor, Masaaki Shirakawa, Kuroda’s early utterances have been accompanied by a very thin chart pack dominated by the now famous ‘all the twos’ slide (click for larger image):

These measures will give rise to an extraordinary jump in the monetary base over a two-year period from ¥138 trillion at the end of 2012 to ¥270 trillion at the end of 2014. In fiscal 2012, Japan’s GDP was estimated at approximately ¥475 trillion in nominal terms, so the monetary base is targeted to rise from around 30% of GDP to 55% of GDP.
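A back-of-envelope check of those ratios, holding nominal GDP at the fiscal 2012 figure throughout (the post's 55% presumably allows for some nominal GDP growth by 2014):

```python
# Monetary-base-to-GDP ratios implied by the figures quoted above.
base_2012 = 138   # trillion yen, end of 2012
base_2014 = 270   # trillion yen, target for end of 2014
gdp_2012 = 475    # trillion yen, nominal, fiscal 2012 estimate

print(f"end-2012: {base_2012 / gdp_2012:.0%}")  # about 29%
print(f"end-2014: {base_2014 / gdp_2012:.0%}")  # about 57%
```

The numbers are broadly consistent with the "around 30% to 55%" quoted in the text; either way, the monetary base is set to nearly double relative to the size of the economy in two years.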

By contrast, the action by the Federal Reserve Board in the U.S. looks positively cautious (here), with the monetary base a modest 17% of GDP.

Is foresight knowledge? Nobel prize winner Gabriel Garcia Marquez commences his classic novella “Chronicle of a Death Foretold” with a scene of mourning for the central character Santiago Nasar. The author then takes us back in time to chronicle the events that led up to Nasar’s demise. As the day in question unfolds, one discovers that almost all the occupants of the town where Nasar lived knew of the plot to kill him. Nonetheless, no-one forewarns Nasar: some don’t believe the murderers will carry out their threat, many are distracted by the imminent arrival of a Bishop to their island backwater, a few encourage the perpetrators of the crime, and yet others try to warn Nasar but cannot find him.

Throughout all this, the reader realises the inevitability of the final outcome, but at the same time feels frustrated over the innumerable missed opportunities to prevent the death. Coming as I do from the west country of England, I feel the oppressive fatalism of Thomas Hardy. We understand that what we do is wrong, but we are fated to do it anyway.

Looking at Japan’s burgeoning public debt, I feel that I am peering down on Garcia Marquez’s fictional town. Everyone knows that public sector liabilities are ballooning, but there is no common purpose over what to do. Indeed, most are too occupied with the distractions of popular culture to pay the debt much heed, while some argue that the debt is harmless since the country has remained prosperous through the years over which it has accumulated. A few utter Cassandra-like warnings, but the fact that disaster never strikes blunts the message. (Of course, the accusation of being a Cassandra is a strange smear: it ignores the fact that Cassandra was eventually proved right; her true curse was that no-one believed her.)