Economics for the jilted generation…


Debt ceiling fights, it seems, have become a permanent fixture in American politics. Twice in the last couple of years, the United States has been days away from potentially irrevocable economic damage because Congress refused to raise the debt ceiling and let the Treasury issue more debt. The next debt ceiling fight is slated for March 2014.

But isn’t there a better way to increase a borrowing limit — and one that doesn’t freak out markets, investors, and, well, just about everyone every few months?

The retirement of the baby boom cohorts means that the country’s labor force is likely to be growing far more slowly in the decades ahead than it did in prior decades. The United States is not alone in facing this situation. The rate of growth of the workforce has slowed or even turned negative in almost every wealthy country. Japan leads the way, with a workforce that has been shrinking in size for more than a decade.

With a stagnant or declining labor force, workers will have their choice of jobs. It is unlikely that they will want to work as custodians or dishwashers for $7.25 an hour. They will either take jobs that offer higher pay, or employers will have to raise the pay of these jobs substantially in order to compete for workers.

This means that the people who hire low-paid workers to clean their houses, serve their meals, or tend their lawns and gardens will likely have to pay higher wages. That prospect may sound like a disaster scenario for this small group of affluent people, but it sounds like great news for the tens of millions of people who hold these sorts of jobs. It should mean rapidly rising living standards for those who have been left behind over the last three decades.

Perhaps Mr Baker was thinking of an older example: the Black Death, which killed about half the people in Europe. Many (including me until I looked it up) believe that the resulting shortage of agricultural labour led to soaring real wages for peasants and a redistribution of economic power away from landowners. Recent evidence, however, casts doubt on this hypothesis. While nominal peasant wages did indeed increase in the aftermath of the Black Death, real wages may have actually fallen for decades. That may have helped heavily indebted peasants, but everyone else had to endure punishing declines in their standard of living, not to mention the psychological trauma of surviving such a devastating plague.

In southern England, real wages of building craftsmen (rural and urban), having plummeted with the natural disaster of the Great Famine (1315-21), thereafter rose to a new peak in 1336-40. But then their real wages fell during the 1340s, and continued their decline after the onslaught of the Black Death, indeed into the 1360s. Not until the later 1370s – almost thirty years after the Black Death – did real wages finally recover and then rapidly surpass the peak achieved in the late 1330s.
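The nominal/real distinction doing the work in the passage above can be shown with a toy calculation. The figures below are made up for illustration, not drawn from the medieval wage series quoted:

```python
# Real wages deflate nominal wages by a price index. After the Black Death,
# nominal wages rose, but if prices rose faster for a time, real wages
# could still fall. Illustrative, hypothetical numbers:

def real_wage(nominal_wage, price_index, base_index=100.0):
    """Express a nominal wage in base-year purchasing power."""
    return nominal_wage * base_index / price_index

# Hypothetical pre- and post-plague figures (pence/day, arbitrary index)
before = real_wage(nominal_wage=3.0, price_index=100.0)  # 3.00 in base terms
after = real_wage(nominal_wage=4.0, price_index=150.0)   # ~2.67 in base terms

# Nominal pay is up 33%, yet real pay is down roughly 11%
assert after < before
```

This is the mechanism by which a peasant could be paid more pennies per day and still be poorer.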

To me at least, this seems to suggest that while, all else being equal, a shrinking working-age population might lead to a more competitive labour market, all else is not equal. Employers invest in more capital-intensive processes like automation and robotics to compensate for a lack of workers, or in our globalised world they shift operations somewhere with a stronger labour force (like China today, or perhaps Africa further into the future). Even more simply, a falling population, whether the result of a natural disaster like the Black Death or simply of demographic trends, as in Japan, may lead to an economic depression due to falling demand.

This suggests that Baker’s conclusions are extremely optimistic for labour, and that shrinking populations may be bad news for wages.

There is a meme going around, popularised by the likes of Tyler Cowen, Paul Krugman and Noah Smith, that suggests that the recent fall in worker compensation as a percentage of GDP is mostly due to the so-called “rise of the robots”:

For most of modern history, two-thirds of the income of most rich nations has gone to pay salaries and wages for people who work, while one-third has gone to pay dividends, capital gains, interest, rent, etc. to the people who own capital. This two-thirds/one-third division was so stable that people began to believe it would last forever. But in the past ten years, something has changed. Labor’s share of income has steadily declined, falling by several percentage points since 2000. It now sits at around 60% or lower. The fall of labor income, and the rise of capital income, has contributed to America’s growing inequality.

…

In past times, technological change always augmented the abilities of human beings. A worker with a machine saw was much more productive than a worker with a hand saw. The fears of “Luddites,” who tried to prevent the spread of technology out of fear of losing their jobs, proved unfounded. But that was then, and this is now. Recent technological advances in the area of computers and automation have begun to do some higher cognitive tasks – think of robots building cars, stocking groceries, doing your taxes.

Once human cognition is replaced, what else have we got? For the ultimate extreme example, imagine a robot that costs $5 to manufacture and can do everything you do, only better. You would be as obsolete as a horse.

Now, humans will never be completely replaced, like horses were. Horses have no property rights or reproductive rights, nor the intelligence to enter into contracts. There will always be something for humans to do for money. But it is quite possible that workers’ share of what society produces will continue to go down and down, as our economy becomes more and more capital-intensive.

So, does the rise of the robots really explain the stagnation of wages?

This is the picture for American workers, representing wages and salaries as a percentage of GDP:

But there are two variables in the wages-to-GDP ratio. Nominal wages have actually risen, and have continued to rise on a moderately steep trajectory:

And average wages continue to climb nominally, too. What has actually happened to the wages-to-GDP ratio is not that America’s wage bill has fallen, but that wages have not risen as fast as the other components of GDP (rents, interest payments, capital gains, dividends, etc.). It is not as if wages are collapsing as robots and automation (as well as other factors like job migration to the Far East) ravage the American workforce.
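The arithmetic behind this point is worth spelling out: a ratio can fall steadily even while its numerator rises, provided the denominator grows faster. The figures below are illustrative, not actual BEA data:

```python
# A ratio can fall even while its numerator rises, if the denominator
# grows faster. Illustrative (not actual BEA) figures, indexed to 100:

wages = [100, 120, 145, 175]   # nominal wage bill, rising ~20% per period
gdp = [180, 225, 285, 360]     # nominal GDP, rising ~26% per period

shares = [w / g for w, g in zip(wages, gdp)]

# The wage share falls every period even though wages never fall
assert all(a < b for a, b in zip(wages, wages[1:]))
assert all(a > b for a, b in zip(shares, shares[1:]))
```

A falling wage share, in other words, is consistent with a rising wage bill; the action is in the non-wage components of GDP.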

It is more accurate to say that, beginning around the turn of the millennium, there has been an outgrowth in economic activity that is not yielding wages, coinciding with the new post-Gramm-Leach-Bliley landscape of mass financialisation and the derivatives and shadow banking megabubbles, as well as the multi-trillion-dollar military-industrial spending spree that accompanied the advent of the War on Terror. Perhaps, if we want to know why the overwhelming majority of the new economic activity is not trickling down into wages, we should look less at robots, and more at the financial and regulatory landscape in which Wall Street megabanks pay million-dollar fines for billion-dollar crimes? Perhaps we should look at a monetary policy that dumps new money solely into the financial sector and which has been shown empirically to enrich the richest few far faster than everyone else?

But let’s focus specifically on jobs. The problem with the view that this is mostly a technology shock is summed up beautifully in this tweet I received from Saifedean Ammous:

@azizonomics I wonder how humanity still manages to find jobs after the automation shock of the invention of the wheel.

The Luddite notion that technology might render humans obsolete is as old as the wheel. And again and again, humans have found new ways to employ themselves in spite of the new technology making old professions obsolete. Agriculture was once the overwhelming mainstay of US employment. It is no more:

This did not lead to a permanent depression and permanent and massive unemployment. True, it led to a difficult transition period, the Great Depression in the 1930s (similar in many ways, as Joe Stiglitz has pointed out, to the present day). But eventually (after a long and difficult depression) humans retrained and re-employed themselves in new avenues.

It is certainly possible that we are in a similar transition period today — manufacturing has largely been shipped overseas, and service jobs are being eliminated by improvements in efficiency and greater automation. Indeed, it may prove to be an even more difficult transition than that of the 1930s. Employment remains far below its pre-crisis peak:

But that doesn’t mean that human beings (and their labour) are being rendered obsolete — they just need to find new employment niches in the economic landscape. As an early example, millions of people have begun to make a living online — creating content, writing code, building platforms, endorsing and advertising products, etc. As the information universe continues to grow and develop, such employment and business opportunities will probably continue to flower — just as new work opportunities (thankfully) replaced mass agriculture. Humans still have a vast array of useful attributes that cannot be automated — creativity, lateral thinking & innovation, interpersonal communication, opinions, emotions, and so on. Noah Smith’s example of a robot that “can do everything you can do” won’t exist in the foreseeable future (let alone at a cost of $5) — and any society that could master the level of technology necessary to produce such a thing would probably not need to work (at least in the sense we use the word today) at all. Until then, luckily, finding new niches is something that humans have proven very, very good at.

Famed poll aggregator and sabermetrician Nate Silver is calling the US Presidential race for Obama, in a big way:

Silver’s mathematical model gives Obama an 85% chance of winning. The Presidential election is based on an electoral college system, so Silver’s model rightly looks at state-level polls. And in swing state polls, Obama is mostly winning:

This is slightly jarring, because in national polls, the two candidates are locked together:

So who’s right? Is the election on a knife-edge like the national polls suggest, or is Obama strongly likely to win as Silver’s model suggests?

While the election could easily go either way depending on turnout, I think Silver’s model is predicting the wrong result. In order for that to be the case, the state polling data has to be wrong.

There are a number of factors that lead me to believe that this is the case.

First, Republicans tend to outperform their poll numbers. In 2008, the national average got the national race just about right:

In the end, Obama won the election with 52.9% of the vote, against McCain who came out with 45.7%.

However, polls have historically underestimated Republican support. Except in 2000 (when a November Surprise revelation of a George W. Bush drunk-driving charge pushed Gore 3.2% higher than the final round of polling), Republican Presidential candidates since 1992 have outperformed their final polls by a mean of 1.8 points. Such an outcome for Romney would put him 1.5% ahead in the national polls, and would imperil Obama’s grip on the swing states.
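The back-of-envelope adjustment here is simple to make explicit. The polled national margin below (Obama +0.3) is inferred from the arithmetic in the paragraph, not a quoted figure:

```python
# If Republican candidates beat their final national polls by ~1.8 points
# on average, add that to Romney's polled margin. The starting margin
# (Obama +0.3) is an inference from the text, not a cited poll number.

polled_margin_obama = 0.3   # Obama's national polling lead, in points
gop_outperformance = 1.8    # mean GOP outperformance of final polls, 1992-

adjusted_margin_obama = polled_margin_obama - gop_outperformance
assert round(adjusted_margin_obama, 1) == -1.5  # i.e. Romney ahead by 1.5
```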

Second, the Bradley Effect. The interesting thing about the swing states is that many of them are disproportionately white. The United States is 72% white, but Iowa is 89% white, Indiana is 81% white, Ohio is 81% white, Minnesota is 83% white, Pennsylvania is 79% white, New Hampshire is 92% white, Maine is 94% white and Wisconsin is 83% white. This means that they are particularly susceptible to the Bradley Effect — where white voters tell a pollster they will vote for a black candidate, but in reality vote for a white alternative. In a state in which Obama holds a small lead in state-level polling, only a small Bradley Effect would be necessary to turn it red.

This effect may have already affected Barack Obama in the past — in the 2008 primaries, Obama was shown by the polls to be leading in New Hampshire, but in reality Hillary Clinton ran out the winner. And many national polls in October 2008 showed Obama with much bigger leads than he really achieved at the polls — Gallup showed Obama as 11% ahead, Pew showed Obama as 16% ahead.

A small Bradley Effect will not hurt Obama where he is 7% or 11% or 16% ahead in the polls. But when polls are closer — as they mostly are in the swing states — it becomes more plausible that such an effect could change the course of the race.
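How small an effect would suffice can be sketched with a simplified model: every voter who tells pollsters one thing and votes the other way moves the two-way margin by two points per point of vote share. The leads and misreport rates below are hypothetical:

```python
# Simplified Bradley Effect arithmetic: each misreporting voter swings
# the two-way margin twice (one vote off the leader, one onto the rival).
# All figures below are hypothetical, not poll numbers from the post.

def margin_after_bradley(polled_margin, white_share, misreport_rate):
    """polled_margin: candidate's polled lead, in points.
    white_share: fraction of the electorate that is white.
    misreport_rate: fraction of white voters who misreport to pollsters.
    A crude simplification: the rate is applied to all white voters."""
    swing = 2 * 100 * white_share * misreport_rate
    return polled_margin - swing

# A 2-point lead in a state that is 83% white flips with a misreport
# rate of just over 1.2%:
assert margin_after_bradley(2.0, 0.83, 0.013) < 0  # lead flips
assert margin_after_bradley(2.0, 0.83, 0.010) > 0  # lead survives
```

In the disproportionately white swing states listed above, a misreport rate of barely one voter in a hundred is enough to erase a two-point lead.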

One recent survey found that a majority of Americans (51 percent) now hold “explicit anti-black attitudes” — up from 49 percent in 2008 — and that 56 percent showed prejudice on an implicit racism test.

Finally, polls have tended to overestimate the popularity of incumbent Presidents, especially Democrats. In 1980, polls put Jimmy Carter 3% ahead of his final tally, and in 1996 polls put Bill Clinton 2.8% ahead of his final tally:

Taken together, these difficult-to-quantify factors pose a serious challenge to Silver’s model. While it is fine to build a predictive model on polling data, if the polling data fed into the model is skewed, then any predictions will be skewed. Garbage in, garbage out.
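The "garbage in, garbage out" point can be made concrete with a toy simulation in which every state poll shares one systematic error. The state list, margins, and error sizes below are illustrative assumptions, not a reconstruction of Silver's actual model:

```python
# Toy election simulation: one shared (systematic) polling error hits
# every battleground state, plus independent state-level noise.
# All numbers are illustrative assumptions, not Silver's model.
import random

random.seed(0)

# (electoral votes, polled Obama margin in points) for a toy battleground map
states = [(18, 2.5), (29, 1.5), (13, 1.0), (10, 3.0), (6, 2.0)]
SAFE_OBAMA = 237   # electoral votes assumed safe; the rest go to the rival
NEEDED = 270

def win_prob(systematic_sd, trials=20_000):
    """Estimate the win probability for a given systematic error size."""
    wins = 0
    for _ in range(trials):
        shared = random.gauss(0, systematic_sd)  # one error in every state
        ev = SAFE_OBAMA
        for votes, margin in states:
            if margin + shared + random.gauss(0, 2.0) > 0:  # state noise
                ev += votes
        wins += ev >= NEEDED
    return wins / trials

# With little systematic error, small but consistent leads compound into
# a near-lock; a plausible shared bias sharply cuts the win probability.
print(win_prob(0.5), win_prob(3.0))
```

The point is that a poll-driven model's confidence is only as good as the assumption that state polling errors are small and independent; if the errors are correlated, a near-lock and a toss-up can look identical in the polls.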

I rate Obama’s chance of being re-elected as no better than 50:50. If Silver really rates his chances as 85:15, perhaps he should consider taking bets at those odds.

UPDATE:

Obviously, Silver’s predictive model (and, far more importantly, the state-level polling data) proved even more accurate than in 2008. However, the 2010 British General Election (in which the polls, and therefore Silver, vastly overestimated Liberal Democrat support, leading to an electoral projection that was way off the mark) illustrates that there remain enough issues with the reliability of polling data that Silver’s model (and similar models) will continue to suffer from the problem of fat tails. With solid, transparent and plentiful data (as Taleb puts it, in “Mediocristan”) such models work very, very well. But there remains plenty of scope (as Britain in 2010 illustrates) for polls to be systematically wrong (“Extremistan”). Given the likelihood that every news network will have its own state-level poll aggregator and Nate Silver soundalike on hand come 2016, that might well be a poetic date for the chaotic effects of unreliable polling data to reappear. In the meantime, I congratulate the pollsters for providing Silver with the data necessary to make accurate projections.

Defenders of Cameron’s policies might claim that we are going through a necessary structural adjustment, and that lowered GDP and elevated unemployment are necessary for a time. I agree that a structural adjustment was necessary after the financial crisis of 2008, but I see little evidence that one is actually taking place. The over-leveraged and corrupt financial sector is still dominated by the same large players as before. True, many unsustainable high street firms have gone out of business, but the most unsustainable firms of all (the banks and financial firms that caused the financial crisis and had to be bailed out) have avoided liquidation. The real story here is not a structural adjustment but the slow bleeding out of the welfare state via deep and far-reaching cuts.

I believe countries are better with small governments and a larger private sector. The private sector consists of many, many individuals acting out their subjective economic preferences. This dynamic is largely experimental; businesses come and go, survive, thrive and die based upon their ability to stay liquid and retain a market, and this competition for demand forces innovation. The government sector is centrally directed. Governments do not have to behave like a business, they do not have to innovate or compete, as they have the power to tax and compel. (The exception to this is when governments become overrun by the representatives of private industries and corporations, who then leverage the machinations of the state to benefit corporations. When this occurs and markets become rigged in the favour of certain well-connected competitors, it matters little whether we call such industries “private sector” or “public sector”).

So I am sympathetic to the idea that Britain ought to have a smaller welfare state, and fewer transfer payments than it presently does. But the current and historical data shows very clearly that now is not the time to make such an adjustment. The time to reduce the size of the welfare state is when the economy is booming. This is the time that there is work for welfare claimants to go to. Cutting into a depressed economy might create a strong incentive for the jobless to work, but if there is little or no job creation for the jobless to go to, then what use are cuts? To reduce government deficits? If that’s the case, then why are British government deficits rising even though spending is being reduced? (The answer, of course, is falling tax revenues).
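The parenthetical answer at the end is just accounting: the deficit is spending minus revenue, so cuts can coincide with a rising deficit whenever revenue falls faster than spending. The figures below are illustrative, not actual Treasury numbers:

```python
# Deficit = spending - tax revenue. Spending cuts can coincide with a
# rising deficit if revenue falls faster. Illustrative figures (not
# actual UK Treasury data):

spending = [700, 690, 680]   # spending cut by 10 a year
revenue = [570, 548, 525]    # a weak economy drags revenue down faster

deficits = [s - r for s, r in zip(spending, revenue)]

assert all(a > b for a, b in zip(spending, spending[1:]))  # cuts happening
assert all(a < b for a, b in zip(deficits, deficits[1:]))  # deficit rising
```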

An alternative policy that would reduce unemployment and raise GDP without increasing the size of government would be to force bailed-out banks sitting on huge hoards of cash to offer loans to the jobless to start their own private businesses. The money would be transferred to those who could be out working and creating wealth but cannot get credit through conventional channels, unlike the too-big-to-fail megabanks, which are flush with credit but refuse to increase lending to the wider public. Even if the majority of these businesses were to fail, the scheme would ensure a large boost to spending and incomes in the short run, and the few new businesses that succeed would provide employment and tax revenues for years to come. Once there is a real recovery and solid growth in GDP and in employment, the government can act to decrease its size and slash its debt. Indeed, with growing tax revenues it is probable we would find the deficit shrinking on its own.

Hojjat al-Eslam Ali Shirazi, the representative of Iran’s Supreme Leader Ayatollah Ali Khamenei to the Islamic Republic’s Qods Force, said this week that Iran needed just “24 hours and an excuse” to destroy Israel.

In his first public interview in a year, reported in the Persian-language Jahan News, which is close to the regime, Shirazi said if Israel attacked Iran, the Islamic Republic would be able to turn the conflict into a war of attrition that would lead to Israel’s destruction.

“If such a war does happen, it would not be a long war, and it would benefit the entire Islamic umma [the global community of Muslims]. We have expertise in fighting wars of attrition and Israel cannot fight a war of attrition,” Shirazi said, referring to Iran’s eight-year war of attrition against Iraq.

Such claims are — more or less — inconsequential rubbish. The fact remains that Israel has nuclear weapons and a nuclear second strike while Iran has no such thing, and the Iranian leadership knows this and is extremely unlikely to start a war in which Iran (as Shimon Peres put it) would be the one wiped off the face of the Earth by Israeli plutonium. Yet the facts of military science will do little to stop the hawks of the West sounding off that Iran is irrational, that Iran is cooking up a plan to destroy Israel, and that it must therefore face regime change.

To grasp what is really occurring here we must look at how authoritarian Middle Eastern regimes (or, indeed, authoritarian regimes in general) function. Authoritarian regimes must maintain a cloak of authority. Tyrants do not attempt to look or sound weak; they try to project an aura of invincibility and indefatigability. We saw this during the last Gulf War, where Iraq’s information minister Muhammad Saeed al-Sahhaf — nicknamed Baghdad Bob in the American media — shot off hundreds of absurd statements during the war about how Iraqi troops were crushing the Americans, quite in contrast to the facts on the ground and right up until American tanks were rolling through the streets of Baghdad.

Baghdad Bob was not deluded. He was merely playing his role, and trying to project an aura of regime invincibility — providing propaganda for domestic consumption to keep the Iraqi population loyal to Saddam Hussein. It was a dog and pony show.

Iran’s belligerent rhetoric in this case is also strictly for domestic consumption — fierce rhetoric to keep the Iranian population fearful of the regime. Just like Baghdad Bob, the Iranian propaganda is far-removed from the real facts of the conflict. Whether the Iranian people really believe the regime’s propaganda — especially as the Iranian economy continues to worsen under sanctions — is dubious.

Yet one group of people — the Western neoconservatives, who are looking for another war — are more than happy to buy into the dog and pony “destroy Israel” bullshit.

As Robert Gates noted this week:

Painting a picture of internal political dysfunction in a dangerous world, former Defense Secretary Robert Gates warned Wednesday night that a U.S. or Israeli attack on Iran would have disastrous consequences.

Neither the United States nor Israel is capable of wiping out Iran’s nuclear capability, he said, and “such an attack would make a nuclear-armed Iran inevitable. They would just bury the program deeper and make it more covert.”

Iran could respond by disrupting world oil traffic and launching a wave of terrorism across the region, Gates said.

“The results of an American or Israeli military strike on Iran could, in my view, prove catastrophic, haunting us for generations in that part of the world.”

A regional war in the Middle East could result, potentially sucking in the United States and Eurasian powers like China, Pakistan and Russia. China and Pakistan have both hinted that they could defend Iran if Iran were attacked — and for good reason, as Iran supplies both countries with significant quantities of energy.

Frustratingly, the Iranian regime keep giving the neoconservatives more rope with which to hang themselves — and the West — on a cross of imperial overstretch, debt and blowback.

The YouTube video depicting Mohammed is nothing more than the straw that broke the camel’s back. This kind of violent uprising against American power and interests in the region has been a long time in the making. It is not just the continuation of drone strikes which often kill civilians in Pakistan, Yemen, Somalia and Afghanistan, either. Nor is it the American invasions and occupations of Iraq and Afghanistan. Nor is it the United States and the West’s support for various deeply unpopular regimes such as the monarchies in Bahrain and Saudi Arabia (and formerly Iran). Nor is it that America has long favoured Israel over the Arab states, condemning, invading and fomenting revolution in Muslim nations for the pursuit of nuclear weapons while turning a blind eye to Israel’s nuclear weapons and its continued expansion into the West Bank.

Americans and Europeans are no doubt looking at the protests over the “film”, recalling the even more violent protests during the Danish cartoon affair, and shaking their heads once more at the seeming irrationality and backwardness of Muslims, who would let a work of “art”, particularly one as trivial as this, drive them to mass protests and violence.

Yet Muslims in Egypt, Libya and around the world equally look at American actions, from sanctions against and then an invasion of Iraq that killed hundreds of thousands of Iraqis and sent the country back to the Stone Age, to unflinching support for Israel and all the Arab authoritarian regimes (secular and royal alike) and drone strikes that always seem to kill unintended civilians “by mistake”, and wonder with equal bewilderment how “we” can be so barbaric and uncivilised.

All of these things (and many more) have contributed to Muslim and Arab anger toward the United States and the West. Yet the underlying fact of all of these historical threads has been the United States’ oil-driven foreign policy. Very simply, the United States has for over half a century pursued a foreign policy in the region geared toward maintaining the flow of oil out of the region at any cost — even at the cost of inflaming the irrational and psychopathic religious elements that have long existed in the region.

This is not to defend the barbaric elements who resort to violence and aggression as a means of expressing their disappointment with U.S. foreign policy. It is merely to recognise that you do not stir the hornet’s nest and then expect not to get stung.

And the sad thing is that stirring the hornet’s nest is totally avoidable. There is plenty of oil and energy capacity in the world beyond the Middle East. The United States is misallocating capital by spending time, resources, energy and manpower on occupying the Middle East and playing world policeman. Every dollar taken out of the economy by the IRS to be spent drone-striking the Middle East into the Stone Age is a dollar of lost productivity for the private market. It is a dollar of productivity that the market could have spent increasing American energy capacity and energy infrastructure in the United States, whether in oil, natural gas, solar, wind or hydroelectric.

And this effect can spiral; every dollar spent on arming and training bin Laden and his allies to fight the Soviet Union begot thousands of dollars more in military spending once bin Laden’s mercenaries turned their firepower onto the United States, and the United States chose to spend over ten years (and counting) occupying Afghanistan, rightly known as the graveyard of empires. It is likely that the current uprisings will trigger even more U.S. interventionism in the region (indeed, they already have, as marines have been dispatched to Yemen), costing billions or even trillions of dollars more (especially if an invasion of Iran is the ultimate outcome). This in turn is likely to trigger even fiercer resistance to America from the Islamist elements, and so the spiral continues.

The only way out of this money-sucking, resource-sucking, life-sucking trap that is obliterating the American empire is to swallow pride and get out of the Middle East, and to stop misallocating American resources and productivity on unwinnable wars.