The chart, which is based upon IRS data compiled by economist Emmanuel Saez, shows that (at least in absolute terms) rising inequality hasn’t even benefited the so-called rich. They, like the rest of America, would be better off today if the government policy errors that led to the increasing income disparity had not occurred.

From 20 feet away, anyone can see that something bad happened to the U.S. economy in 1968. Prior to that, America experienced rapid income growth that was widely shared. The incomes of both “the ten percent” and “the ninety percent” increased by 80% in just 20 years. We had prosperity, without rising income inequality.

This 21-year “golden age” then gave way to 14 years of income stagnation, which was also widely shared. Incomes didn’t rise, but neither did income inequality.

Then something good (but not great) happened around 1983 that got incomes growing again, but not nearly as fast as during 1948 – 1968, and at the cost of rapidly widening income inequality.

After that, something bad happened circa 2000, leading to another 12 years of income stagnation for both “the ten percent” and “the ninety percent”. This brings us to the present.

Note that even though, as of 2011, the income gains of “the ten percent” since 1948 have far outpaced those of “the ninety percent” (205% vs. 72%), “the ten percent” would have been much better off in absolute terms if the 1948 – 1968 trend had continued. In that case, the incomes of both groups would have risen to about 270% above their 1948 levels. America’s real GDP (RGDP) would have been 84% (more than $13 trillion) higher in 2012.
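The trend-gap claim is compound-growth arithmetic, and it can be sketched in a few lines. This is a back-of-envelope, not the author’s calculation: the 4.10% golden-age growth rate and the 84% trend gap are taken from the text, while the 44-year span (1968 through 2012) and the framing of the question are assumptions for illustration.

```python
# Back-of-envelope: if trend RGDP (compounding at the golden-age 4.10%)
# ends up 84% above actual RGDP by 2012, what average post-1968 growth
# rate does that imply? Span and framing are illustrative assumptions.
TREND_RATE = 0.0410   # stated 1948-1968 average RGDP growth
YEARS = 44            # 1968 through 2012 (assumed span)
TREND_GAP = 1.84      # trend RGDP / actual RGDP in 2012 ("84% higher")

trend_multiple = (1 + TREND_RATE) ** YEARS         # where trend growth leads
actual_multiple = trend_multiple / TREND_GAP       # where we actually ended up
implied_rate = actual_multiple ** (1 / YEARS) - 1  # average realized growth
print(f"Implied post-1968 average RGDP growth: {implied_rate:.2%}")
```

On these assumptions the implied realized rate comes out near 2.7% — a shortfall of roughly 1.4 points a year that, compounded over four decades, produces the trillions-of-dollars gap cited above.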

In his blog post, Professor Taylor blames “increasing returns to education” for the widening income inequality of the 1983 – 2000 period. This is obviously not true.

Taylor’s “education” theory resembles Tyler Cowen’s assertion in his book, The Great Stagnation, that America’s slowdown in RGDP growth since 1973 was caused by the nation running out of “low-hanging fruit”. Someone should make both of these eminent economists go to the blackboard and write the following sentence 100 times:

“Free markets don’t produce inflection points!”

Hundreds of millions of people do not suddenly change what they are doing unless acted upon by some systemic force. Anytime you see an inflection point (a sudden change in the direction or slope of a line) in a graph that amalgamates the decisions and actions of hundreds of millions of people, you are witnessing the impact of government policies. Let’s see if we can figure out what the government did wrong, then sort of right, then wrong again in 1968, 1983, and 2000 respectively.

The 21-year heyday of Bretton Woods really was a golden age. RGDP growth averaged 4.10%, which was slightly higher than the 3.95% average of the first 157 years of American economic history (1790 – 1947), although at the cost of higher annualized inflation, as measured by the GDP deflator (2.27% vs. 0.72%). The real incomes of both “the ten percent” and “the ninety percent” rose by about 80% during this period.

So, what happened to end America’s era of middle class prosperity? The fiat dollar happened.

In April 1968, the free market price of gold first moved above the official $35/oz Bretton Woods benchmark. This was the beginning of the end of our gold-defined dollar, and the smart money knew it. From this point on, investors would demand higher returns to compensate for the risks of unstable money, and more of society’s precious capital would be lost to the phenomenon called malinvestment.

The next 14 years were bad. Income inequality didn’t increase—Americans stagnated together. RGDP growth fell to 2.54%, and inflation skyrocketed to 6.82%. Both “the ten percent” and “the ninety percent” lost ground during this period, with real incomes falling by about 4% each.

During the 1979 – 1982 period, Federal Reserve Chairman Paul Volcker suppressed the virulent inflation set off by the collapse of Bretton Woods. Unfortunately, he did it by charging sky-high interest rates for borrowing fiat dollars, rather than by returning to a gold-defined dollar.

“Tight money” can stop an inflation (at least as measured by the GDP deflator), but it does not eliminate the costs and risks associated with a fiat dollar. To the economy, “tight money” is not the same thing as “stable money”.

Things got better for everyone during the 1983 – 2000 period, but quite unequally. The incomes of “the ninety percent” rose by about 17%, while those of “the ten percent” shot up by 106%.

This was the best that could be done without actually re-fixing the dollar to gold (or something else real). A fiat dollar creates additional risks for capital investment, thus raising the cost of capital. When the cost of capital goes up, the return on capital must also increase, or the economy will liquidate itself. “The ten percent” owns essentially all of society’s capital. So, a higher cost of capital will inevitably raise the incomes of “the ten percent” relative to the rest of society.
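The mechanism in that paragraph can be put in toy numbers. A minimal sketch, with every figure hypothetical: a GDP of 100, a capital stock of three times GDP, and required returns of 10% vs. 12% are illustrative assumptions, not data from the chart.

```python
# Toy illustration of the cost-of-capital mechanism described above.
# All numbers are hypothetical: GDP of 100, capital stock of 3x GDP,
# and required returns of 10% vs. 12% are assumptions, not data.
GDP = 100.0
CAPITAL = 300.0

def capital_share(required_return):
    """Fraction of total income accruing to the owners of capital."""
    return required_return * CAPITAL / GDP

print(f"{capital_share(0.10):.0%} of income to capital at a 10% required return")
print(f"{capital_share(0.12):.0%} of income to capital at a 12% required return")
```

If “the ten percent” hold essentially all of that capital, the two extra points of required return translate directly into a larger slice of national income for them, with no change in the underlying output.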

Near the end of 1999, the Federal Reserve went off the rails again under Fed Chairman Alan Greenspan. The Fed created a stock market bubble by pumping up the monetary base to ward off the (non-existent) problem of “Y2K”. When January 1, 2000 came and went without incident, the Fed withdrew the extra money too fast, producing a stock market bust.

Greenspan, followed by Ben Bernanke, then fell in behind the “weak dollar” policy of newly elected President George W. Bush. The tenuous monetary stability of the 1983 – 2000 period was lost, with gold prices rising by 258% between December 2000 and February 2008.
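For scale, that 258% rise can be annualized. A quick sketch, where the roughly 7.2-year span is inferred from the December 2000 and February 2008 endpoints rather than stated in the text:

```python
# Annualizing the cited 258% gold-price rise.
# The 7 + 2/12 year span (Dec 2000 to Feb 2008) is inferred from the
# endpoint dates, not given in the text.
RISE = 2.58           # +258% means a 3.58x price multiple
YEARS = 7 + 2 / 12    # Dec 2000 to Feb 2008

annualized = (1 + RISE) ** (1 / YEARS) - 1
print(f"Annualized gold appreciation: {annualized:.1%}")
```

That works out to roughly 19% per year of gold appreciation against the dollar, which is why the period reads as a loss of monetary stability rather than market noise.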

The classic inflation hedge strategy is to buy physical assets with borrowed money. The inflationary environment engineered by the Fed produced a massive bubble in housing. Hundreds of billions of dollars’ worth of precious capital was diverted from nonresidential assets (which produce a 48% GDP return and support jobs) to residential assets (which return 7% and create no jobs). The result was malinvestment on a grand scale.