Friday, March 30, 2012

Japan ran a merchandise trade deficit in 2011! I missed the news when it was announced in January, and could hardly believe it. Japan running large trade surpluses has been one of the few constants of the last several decades, through good times and bad. Here's an illustrative figure from the Daily Yomiuri:

Of course, this event comes with some "buts" attached. The trade deficit largely arose from a onetime event: Japan's horrendous earthquake and tsunami in March 2011, which led industrial production and exports to fall while imports of natural gas rose. In addition, this trade deficit only involves merchandise trade: if one looks at the overall current account balance, which also includes income from foreign investments, Japan still shows a surplus.

But even if Japan returns to merchandise surpluses in 2012 or 2013, the days of perpetual surpluses in Japan seem numbered. If one squints just a bit at the figure above, one can imagine an overall downward trend in those trade surpluses since the late 1990s. At a fundamental level, a trade surplus means that an economy is producing more than it is consuming--and exporting the rest. But Japan is a rapidly aging society with a low birthrate, where the size of the workforce topped out in 1998 and has been shrinking since then. Japan's government forecasts that the total population of the country will decline by one-quarter in the next 40 years, while the share of Japan's population over age 65 will rise from about 23% now to almost 40% by 2050. With this demographic outlook, it seems likely that Japan will become a country that will start to live off some of its vast accumulated savings, consuming more than it produces and running trade deficits, in the not-too-distant future.

For the uninitiated, when economists refer to FX, the abbreviation doesn't mean "special effects" or "Federal Express" or "Fighter, Experimental." It means "foreign exchange." Morten Bech discussed "FX volume during the financial crisis and now" in the March 2012 issue of the BIS Quarterly Review. BIS stands for the Bank for International Settlements, an organization whose members are central banks and some international organizations, and which, among other tasks, holds conferences, collects data, and facilitates some financial transactions.

In particular, BIS produces the Triennial Central Bank Survey of Foreign Exchange and Derivatives Market Activity, which comes out every three years. The most recent version found that FX trading activity averaged $4.0 trillion a day in April 2010. Yes, that's not million or billion per day, or trillion per year, but $4 trillion per day. I'll say a bit more about the implications of that remarkable total in a moment, but Bech's main task is to look at the underlying data for the three-year survey and thus find a way of estimating the FX market at semiannual or even monthly intervals. Bech writes:

"By applying a technique known as benchmarking to the different sources on FX activity, I produce a monthly time series that is comparable to the headline numbers from the Triennial going back to 2004. Taking stock of FX activity during the financial crisis and now I estimate that in October 2011 daily average turnover was roughly $4.7 trillion based on the latest round of FX committee surveys. Moreover, I find that FX activity may have reached $5 trillion per day prior to that month but is likely to have fallen considerably into early 2012. Furthermore, I show that FX activity continued to grow during the first year of the financial crisis that erupted in mid-2007, reaching a peak of just below $4.5 trillion a day in September 2008. However, in the aftermath of the Lehman Brothers bankruptcy, activity fell substantially, to almost as low as $3 trillion a day in April 2009, and it did not return to its previous peak until the beginning of 2011. Thus, the drop coincided with the precipitous fall worldwide in financial and economic activity in late 2008 and early 2009."

Here's a result of Bech's work. The horizontal axis is years; the vertical axis is the size of the FX market. The green line plots the results for 2004, 2007, and 2010 from the Triennial Survey. The red line uses the benchmarking technique to create a semi-annual data series, and the blue line builds on that to create a monthly data series. The vertical lines refer to August 9, 2007, when the first really bad news about the financial crisis hit world financial markets, and September 15, 2008, the middle of what was arguably the worst month of the crisis.

As Bech points out: "The FX market is one of the most important financial markets in the world. It facilitates trade, investments and risk-sharing across borders." In that spirit, his results interest me in several ways.

First, I'm always on the lookout for ways to illustrate the effect of a global financial crisis in ways that don't involve trying to explain interest rate spreads to students. Seeing the size of the foreign exchange market contract by one-quarter or so in late 2008 and early 2009 is a useful illustration. There are four more graphs for illustrating the financial crisis in this blog post of last August 3, and two more in this post of May 17.

Second, it's useful to compare the size of foreign exchange markets at $4.7 trillion per day to the size of world trade. World exports were about $15 trillion for the year of 2010, according to the World Trade Organization. Thus, only a tiny part of the foreign exchange market is involved in financing imports and exports. Instead, by far the most important part of the foreign exchange market involves international financial investing. This insight helps to explain why FX markets are so notoriously volatile: they are a financial market in which international capital is continually rushing in and out of currencies. The volatility suggests that those who are involved in international trade might often do well to lock in future values for foreign exchange in futures and derivatives markets--and of course, part of what makes the FX market so big is the effort by all parties to hedge themselves against large movements in exchange rates.
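The gap between trade flows and FX turnover is easy to quantify with a back-of-the-envelope calculation. A minimal sketch, assuming roughly 250 trading days per year (an assumption, since neither figure in the post specifies one):

```python
# Back-of-the-envelope: how much of daily FX turnover could trade finance explain?
fx_daily = 4.7e12        # estimated FX turnover per day, October 2011 (Bech)
world_exports = 15e12    # world exports for the whole of 2010 (WTO)
trading_days = 250       # assumed number of FX trading days per year

fx_annual = fx_daily * trading_days
trade_share = world_exports / fx_annual
print(f"Annual FX turnover: ${fx_annual / 1e12:,.0f} trillion")  # about $1,175 trillion
print(f"Share explainable by trade: {trade_share:.1%}")          # about 1.3%
```

Even with generous assumptions, financing actual imports and exports accounts for only on the order of 1-2% of FX turnover, which is what supports the claim that the market is overwhelmingly about financial investing and hedging.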

Over time, there does appear to be a tendency for foreign exchange rates to move in the general direction that reflects their purchasing power--the so-called "purchasing power parity" exchange rate. In the Fall 2004 issue of my own Journal of Economic Perspectives, Alan M. Taylor and Mark P. Taylor review "The Purchasing Power Parity Debate," and find that such movements do occur over the long-run, but they proceed slowly, over a period of several years, and in the meantime exchange rates are buffeted by changing investor sentiments and current events.

Thursday, March 29, 2012

We all know that the United States has the highest level of income inequality of any high-income country. Right? But at least according to OECD statistics, this claim is only true if one looks at inequality after taxes and transfers. If one looks at inequality before taxes and transfers, the U.S. economy has less inequality than Germany, Italy, and the United Kingdom, and about the same amount of inequality as France. The OECD data also offers a hint as to why this unexpected (to me, at least) outcome occurs.

Start with the OECD numbers. The OECD uses the Gini coefficient to measure income inequality across high-income countries. For an earlier post with an intuitive explanation and definition of the Gini coefficient, see here. For present purposes, it suffices to say that the Gini coefficient is a way of measuring inequality that theoretically can range from a score of zero for perfect equality, where everyone has exactly the same income, to a score of one for a situation of complete inequality, where one person receives all the income.
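That definition translates directly into a few lines of code. Here's a minimal sketch using the mean-absolute-difference form of the Gini coefficient (the function and the sample incomes are illustrative, not drawn from the OECD data):

```python
def gini(incomes):
    """Gini coefficient via the mean absolute difference:
    G = sum(|x_i - x_j| over all ordered pairs) / (2 * n^2 * mean income)."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_diff / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))       # perfect equality -> 0.0
print(gini([0, 0, 0, 100]))     # one person has it all -> 0.75 (the max is (n-1)/n, approaching 1)
print(gini([20, 30, 50, 100]))  # somewhere in between -> 0.325
```

Note that with a finite sample the "one person receives everything" case tops out at (n-1)/n rather than exactly one; the theoretical maximum of one is the large-population limit.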

Here's a compilation of Gini coefficients from OECD data with the United States in the top row, followed by Canada, France, Germany, Italy, Japan, Sweden, and the United Kingdom. The OECD data for the second column is here, and data for the other columns is available by toggling the "Income and population measures" box at the top. All data is for the latest year available. (Thanks to Danlu Hu for putting together the table.)

As noted above, the U.S. has the highest Gini coefficient of these eight comparison countries if measured after taxes and transfers (second column), but not if measured before taxes and transfers (first column). However, a hint as to why this arises can be found in the last four columns, which break down the Gini coefficients by the working age population and the over-65 population.

When it comes to the working age population, before taxes and transfers, the U.S. level of inequality is third-highest, though virtually tied with the United Kingdom and Italy for the top spot. After taxes and transfers, the U.S. level of inequality among the working age population is clearly the highest.

When it comes to the over-65 population, before taxes and transfers, the U.S. has a far more equal distribution of income than France, Germany, and Italy. I haven't dug down into the data here, but I suspect that these numbers reflect the fact that a much larger share of those over age 65 are still in the labor force in the U.S. economy--which makes the distribution more equal before taxes and transfers.

Taxes and transfers make the over-65 distribution of income far more equal in all eight countries, but the U.S. stands out as by far the least equal, followed by Japan, with both well behind the other six countries.

These patterns are consistent with a finding from an OECD report published last fall called Divided We Stand: Why Inequality Keeps Rising. I blogged about it on December 16, 2011, in "Government Redistribution: International Comparisons." One theme of the report is that the extent of government redistribution across populations is driven much more by the widespread provision of government benefits than by the progressivity of taxation. As the OECD report stated: "Benefits had a much stronger impact on inequality than the other main instruments of cash distribution -- social contributions or taxes. ... The most important benefit-related determining factor in overall distribution, however, was not benefit levels but the number of people entitled to transfers."

Wednesday, March 28, 2012

Robert Allen starts off his article on "Technology and the great divergence: Global economic development since 1820" by asking a classic question: Why have low-income countries been seemingly so slow to adopt the technologies for increased production that exist in high-income countries? The article appears in the January 2012 issue of Explorations in Economic History. At least for now, Elsevier is allowing the article to be freely available here, but many academics will also have access through their libraries.

Some of the possible answers are that cultural factors, perhaps like Weber's "Protestant work ethic," cause some countries rather than others to adopt new technology. Or perhaps institutional factors like a legacy of property rights and representative government make some countries likelier to develop technology. Allen argues a different view: "This paper explores an alternative explanation of economic development based on the character of technological change itself. While the standard view assumes that technological progress benefits all countries, this paper contends that much technological progress has been biased towards raising labor productivity by increasing capital intensity. The new technology is only worth inventing and using in high wage economies. At the same time, the new technology ultimately leads to even higher wages. The upshot is an ascending spiral of progress in rich countries, but a spiral that it is not profitable for poor countries to follow because their wages are low."

Simple examples of this phenomenon abound. It is cost effective to install price scanners in U.S. supermarkets, because it saves the time of cashiers, as well as purchasing and accounting workers behind the scenes. But for a low-income country with much lower wages, saving the time of workers isn't worth such an investment. Multiply this example all across the economy.

Using data on capital per worker and on GDP per worker across countries at different periods of time, Allen estimates a world production function. Here is the evolution of the world production function from 1820 to 1913, and from 1913 to 1990.

These production functions display some common patterns. On the far left, GDP per worker rises in a more-or-less linear way with capital per worker. On the right, at the technological frontier, GDP per worker doesn't rise with capital per worker. Over time, the technological frontier--where additional capital per worker no longer adds to output per worker--keeps rising. For example, the production function flattens out at about $2,000 per worker in 1820, at about $4,500 per worker in 1913, $17,000 per worker in 1965, and $35,000 per worker in 1990. Allen suggests that the technological leaders grow by stages, taking a generation or two to perfect the possibilities of one level of capital per worker before pushing further up the scale.

In this perspective, technology is quite transferable between countries with roughly similar capital-to-worker ratios: for example, this helps to explain the convergence in per capita GDP among high-income economies in recent decades. However, low-income countries find the technology invented by high-income countries inappropriate for their circumstances; indeed, less capital-intensive technology from 50 or 100 years ago often seems more appropriate for them. This perspective also helps to explain why an ultra-high savings rate has so often been an important precursor to rapid growth, first in mid-twentieth-century Japan, then in the East Asian "tiger" economies, and then in China. High savings creates a high capital-to-worker ratio, and thus makes it much more possible to leapfrog forward by adopting technologies closer to the frontier.

Looking ahead, an intriguing question is whether rapidly emerging economies around the world can become their own source of innovation: that is, can they take their high savings rates and draw upon world technological expertise to create a new kind of cutting-edge innovation aimed at their own home market? Can the emerging countries forge their own technological path? The Economist magazine has been predicting for the last couple of years that this process is now underway. For example, the April 15, 2010 issue had a lengthy "Special Report" called "The new masters of management: Developing countries are competing on creativity as well as cost. That will change business everywhere." Here's a flavor of the argument:

"Thirty years ago the bosses of America’s car industry were shocked to learn that Japan had overtaken America to become the world’s leading car producer. They were even more shocked when they visited Japan to find out what was going on. They found that the secret of Japan’s success did not lie in cheap labour or government subsidies (their preferred explanations) but in what was rapidly dubbed “lean manufacturing”. While Detroit slept, Japan had transformed itself from a low-wage economy into a hotbed of business innovation. Soon every factory around the world was lean—or a ruin. ...

"Now something comparable is taking place in the developing world.... Emerging countries are no longer content to be sources of cheap hands and low-cost brains. Instead they too are becoming hotbeds of innovation, producing breakthroughs in everything from telecoms to carmaking to health care. They are redesigning products to reduce costs not just by 10%, but by up to 90%. They are redesigning entire business processes to do things better and faster than their rivals in the West.

"As our special report argues, the rich world is losing its leadership in the sort of breakthrough ideas that transform industries. This is partly because rich-world companies are doing more research and development in emerging markets. Fortune 500 companies now have 98 R&D facilities in China and 63 in India. IBM employs more people in developing countries than in America....

"Even more striking is the emerging world’s growing ability to make established products for dramatically lower costs: no-frills $3,000 cars and $300 laptops may not seem as exciting as a new iPad but they promise to change far more people’s lives. This sort of advance—dubbed “frugal innovation” by some—is not just a matter of exploiting cheap labour (though cheap labour helps). It is a matter of redesigning products and processes to cut out unnecessary costs. In India Tata created the world’s cheapest car, the Nano, by combining dozens of cost-saving tricks."

This scenario suggests that in the future, technological change may not just disseminate gradually from the advanced countries to the rest of the world, as countries build up their capital/labor ratios. Instead, technological change and its effects may also be disseminating from the huge emerging markets back to consumers and firms in high-income countries.

_______________

Added note:

Louis Johnston writes from the College of St. Benedict at St. John's University to tell me that Robert Allen's article is also Chapter 4 of Allen's recent book Global Economic History: A Very Short Introduction (http://amzn.com/0199596654).

Tuesday, March 27, 2012

There are two kinds of news stories about student loans. One group of stories emphasizes the huge total of student loans. Calculations from the New York Fed for the end of 2011 find: "The outstanding student loan balance now stands at about $870 billion, surpassing the total credit card balance ($693 billion) and the total auto loan balance ($730 billion)." The Student Loan Debt Clock, which for illustrative purposes continually updates the total student loan debt outstanding, is on the verge of crossing $1 trillion.

The second group of stories emphasizes the problems of particular students who have large loans and great difficulty paying them back. For example, this New York Times story tells of a New York University graduate (class of 2005) who took out more than $100,000 in loans while completing an interdisciplinary major in religious and women's studies. By 2010, she was earning $22/hour working as a photographer's assistant--and going to night school so that she could defer the loan payments.

Most students are borrowing amounts that are within standard loan guidelines.
"Leaving aside extreme cases, are student borrowing levels assumed by the majority of undergraduate students consistent with their capacity to repay these loans? There is little evidence to suggest that the average burden of loan repayment relative to income has increased in recent years. The most commonly referenced benchmark is that a repayment to gross income ratio of 8 percent, which is derived broadly from mortgage underwriting, is “manageable” while other analysis such as a 2003 GAO study set the benchmark at 10 percent. To put this in perspective, an individual with $20,000 in student loans could expect a monthly payment of about $212, assuming a ten-year repayment period. In order for this payment to accrue to 10 percent of income, the student would need an annual income of about $25,456, which is certainly within the range of expected early-career wages for college graduates. Overall, the mean ratio of student loan payments to income among borrowers has held steady at between 9 and 11 percent, even as loan levels have increased over time ..."

My own guess is that part of what is happening here is that larger loan burdens are being offset by lower interest rates, so the overall ratio of loan payments to income has risen by less than one might otherwise expect.
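The arithmetic in the quoted passage can be checked with the standard amortization formula. A minimal sketch, assuming a 5% annual interest rate (the passage states the ten-year term but not the rate; 5% turns out to reproduce its numbers):

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortization: payment = P * r / (1 - (1 + r)**-n),
    where r is the monthly rate and n the number of monthly payments."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

pay = monthly_payment(20_000, 0.05, 10)       # $20,000 loan, 5% (assumed), 10 years
income_for_10pct = pay * 12 / 0.10            # income at which payments equal 10% of gross
print(f"Monthly payment: ${pay:.2f}")                       # about $212
print(f"Income for a 10% ratio: ${income_for_10pct:,.0f}")  # about $25,456
```

The same function also shows the offsetting effect of rates: at a lower assumed rate, the same payment-to-income ratio supports a noticeably larger loan balance, which is consistent with the guess above about why the overall ratio has held steady.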

The median level of student borrowing isn't excessively high.
"Borrowing among students at the median is relatively modest: zero for students beginning at community colleges, $6,000 for students at four-year public colleges, and $11,500 for students at private nonprofit colleges. Even at the 90th percentile, student borrowing does not exceed $40,000 outside of the for-profit sector. Examples of students who complete their undergraduate degree with more than $100,000 in debt are clearly rare: outside of the for-profit sector, less than 0.5 percent of students who received BA degrees within six years had accumulated more than $100,000 in student debt. The 90th percentile of degree recipients starting at for-profits have $100,000 in debt; so a nontrivial number of students at for-profits accumulate this much debt, but the situation is still far from the norm."

Students thinking about loans should also think seriously about their risk of not finishing a degree--especially at for-profit and less selective institutions.
"Only 55 percent of dependent students who anticipate completing a BA degree actually do so within six years of graduating high school, while more than one-third of them do not complete any postsecondary degree within six years. Similarly, more than half of dependent students who anticipate completing an associate’s degree do not do so within six years of graduating high school ... [A]mong students beginning at four-year colleges, private for-profit colleges have dramatically lower average graduation rates (16 percent) for dependent students than do public (63 percent) or private not-for-profit (68 percent) colleges. In addition, there is substantial variation in graduation rates within each college category, with more-selective colleges typically having higher graduation rates."

Students considering loans should think about the typical employment and pay prospects for that major.
I do think that many students agonize a little too much over their major, while not agonizing enough over the extent to which they are building a skill set. That said, different majors have different payoffs.
Avery and Turner offer some evidence on this point, and in this post of January 11, 2012, I discuss some basic evidence on "For What Majors Does College Pay Off?" In that post, I summarized it this way: "[W]hen looking at unemployment rates, along with the architects, those who majored in humanities or in the arts have relatively high rates, while those who majored in health and education had relatively low unemployment rates. When it comes to income, the highest income levels are for those who majored in engineering, computer science/mathematics, life sciences, social sciences, and business. The lower incomes went to those majoring in arts, education, and psychology/social work."

Students considering loans should consider that any major leads to a widely dispersed range of employment and pay outcomes.
When you look at pay out of college, there is considerable inequality--and the range of inequality has been generally increasing over the last couple of decades. Thus, the median pay is a better guide to expectations than the average. Especially if you have been a middle-range or lower-middle-range student all through high school, it would be unwise to assume that you are likely to be at the top of the income range after graduation.

Students should look to their high school experience for some guidance as to how they will fare in college.

About 60% of high school students go on to college. For the purposes of a quick-and-dirty estimate, let's say that it's the top 60% by academic qualifications. Thus, if you are at, say, the 70th percentile of your high school class, you are in the middle of those going on to college. Given that many of those who go on to college don't finish a degree, being at the 70th percentile of your high school class may mean that you can expect to be ranked in the bottom quarter of those who complete a college degree. Sure, some students will improve dramatically from high school to college, but it's a statistical fact that half of college graduates will be below the median and one-fourth will be in the bottom quarter, and especially if you are advising a large number of high school students, it's unrealistic to tell each of them that they can all end up in the upper part of the college distribution.
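The quick-and-dirty estimate above can be made explicit. A rough sketch, under deliberately simple assumptions: the top 60% of the high school class goes to college, roughly 55% of college-goers finish a degree (completion rates vary widely by institution, as noted earlier), and class rank carries over exactly:

```python
def percentile_within_group(hs_percentile, group_share):
    """Convert a high school percentile into a percentile within the top
    `group_share` of the class, assuming rank carries over exactly."""
    cutoff = 100 * (1 - group_share)   # HS percentile where the group begins
    if hs_percentile < cutoff:
        return None                    # below the group's cutoff
    return 100 * (hs_percentile - cutoff) / (100 - cutoff)

# A 70th-percentile high school student, among the top 60% who go to college:
print(percentile_within_group(70, 0.60))         # -> 50.0, the median college-goer

# If only about 55% of college-goers complete a degree, completers are roughly
# the top 0.60 * 0.55 = 33% of the high school class, and the same student sits
# near the bottom of that group:
print(percentile_within_group(70, 0.60 * 0.55))  # about the 9th percentile of completers
```

The assumptions are crude--rank certainly does not carry over exactly--but the exercise shows how a student who looks well above average in high school can plausibly land in the bottom quarter of college graduates.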

Some students borrow too little: for example, they don't take advantage of the subsidy implicit in the student loans for which they are eligible, or they run up large credit-card debts when it would be much cheaper to borrow through student loans.
"[O]ne in six full-time students at four-year institutions who are eligible for student loans do not take up such loans—thus forgoing the subsidy ... Another possible sign of the underuse of student loans is that a number of students are carrying more-expensive credit card debt when they could instead be borrowing through student loans. Among students who entered college in 2004, 25.5 percent of those who were still enrolled in 2006 and 37.7 percent of those who were still enrolled in 2009 reported that they had credit card debt. But between one-third and one-half of these students (45.6 percent of students with credit card debt in 2006 and 38.5 percent of students with credit card debt in 2009) had not borrowed from the Stafford loan program. Carrying credit card debt without maximizing Stafford borrowing burdens students with unnecessarily inflated interest rates—a choice that can interfere with a student’s ability to finish a degree." In addition, a number of students are working more than 20 hours per week, and at least some of them might have a better chance of finishing their degree if they borrowed more and didn't try to work so many hours.

Clearly, some of this advice would, if taken seriously, discourage some students from taking out loans to attend college. Given the price of higher education, I think that hard choice needs to be faced. Several decades ago, it was a low-risk option to spend a few years working part-time and attending a big public university: if it didn't end in a degree, at least you racked up little or nothing in loans, and you could learn something, have a good time, and grow up a little along the way. But at current prices, that part-time job won't pay the higher education bills at most institutions. Sending a message that all students should try a few years of college, even if it requires taking on tens of thousands of dollars in loans, is borderline irresponsible.

Before students take on a heavy weight of loan burdens that could loom over their financial life for several decades, they need to confront some legitimate questions:

Are you attending a college--especially a for-profit--with a high drop-out rate?

Are you planning on a major (or a set of classes that will build real skills) so that you have good employment prospects?

How strong is your personal motivation for attending classes and finishing a degree?

Does your high school class ranking give you reason to believe that you have the ability to succeed?

If your higher education experience doesn't turn out as you hope, and you don't finish the degree, or you end up with a job that pays substantially less than the median in your field, will you feel OK with the loans you have taken out?

Are you taking out an average amount of loans, so that you will be committing no more than about 10% of your income to repaying them?

Given the growing wage gap between those with a college degree and those without, it will make economic sense for lots of students to borrow, especially at today's rock-bottom interest rates. But with student loans, we're talking about young adults often in their late teens and early 20s making financial decisions that could be with them for decades to come. It's a transaction that should be made with caution and consideration.

Monday, March 26, 2012

Today, the U.S. Supreme Court is scheduled to start hearing the oral arguments over President Obama's signature health-care reform legislation. I'll save you from my personal uninformed blathering about constitutional law--it will doubtless be easy to find such opinionizing elsewhere on the web. But as a backdrop, it seemed useful to note the basic fact that in the U.S., employer-provided health insurance is fading.

Here's a figure based on data from the U.S. Census Bureau (see Table HIB-1 here). The top line shows that employer-based health insurance covered 65% of the U.S. population in 2000, and 55% of the U.S. population in 2010. This decline seems to have accelerated since the start of the Great Recession, but it was well underway already. On the other side, the share of the U.S. population covered by Medicaid has risen from 10% in 2000 to 16% now, and the proportion of Americans without health insurance coverage has risen from 13.6% in 2000 to 16.3% in 2010. Other categories not shown in the figure have changed less. Direct purchase of private health insurance is down a bit, from 10.6% of the population in 1999 to 9.8% in 2010. Medicare and military health insurance have expanded a bit, with Medicare rising from 13.4% of the population in 1999 to 14.5% in 2010, and military health insurance rising from 3.1% of the population in 1999 to 4.2% of the population in 2010.

"The recent experience with employer-sponsored health insurance could be viewed as an acute illness aggravating a chronic condition. The acute illness—the sluggish economy and weak employment situation—likely will resolve at some point. But the underlying chronic condition—rising health care costs—likely will persist. Rising health care costs help explain why employers have become less and less likely to offer employer-sponsored coverage as a fringe benefit. Rising costs also have prompted employers to require workers to contribute a larger share of premiums and shoulder increased patient cost sharing at the point of service through higher deductibles, coinsurance and copayments. If health care cost increases continue to outpace wage increases, more workers are likely to conclude that health coverage is not worth the cost. ...

"There has been vigorous debate about the effects of national health reform on employer-sponsored insurance. The best estimates project that health reform will have little net impact, but estimates vary widely. The debate, however, often misses a key point—employer-sponsored insurance is likely to continue to erode with or without health reform, especially among lower-income families and those employed by small firms. ... Perhaps more central to the long-term future of employer-sponsored insurance is whether the health care delivery and payment system reforms, which are other important components of health reform, succeed in slowing the growth of health care costs and health insurance premiums faced by employers and employees."

It's worth remembering, for those who haven't read the history, that the predominance of employer-provided health insurance in the U.S. economy is an historical accident. Melissa Thomasson offers a nice overview in "From Sickness to Health: The Twentieth-Century Development of U.S. Health Insurance," in the July 2002 issue of Explorations in Economic History, but that's not freely available on-line. However, Thomasson offers a brief overview at the Economic History Association website here.

Thomasson points out that the number of Americans with health insurance went from 15 million in 1940 to 130 million in 1960. Blue Cross/Blue Shield plans began to be established in the 1930s. Then in World War II, the fateful decision was made to encourage employers to provide health insurance, and not to tax individuals on the value of that health insurance they received. Here's Thomasson:

"During World War II, wage and price controls prevented employers from using wages to compete for scarce labor. Under the 1942 Stabilization Act, Congress limited the wage increases that could be offered by firms, but permitted the adoption of employee insurance plans. In this way, health benefit packages offered one means of securing workers. ... [I]n 1949, the National Labor Relations Board ruled in a dispute between the Inland Steel Co. and the United Steelworkers Union that the term "wages" included pension and insurance benefits. Therefore, when negotiating for wages, the union was allowed to negotiate benefit packages on behalf of workers as well. This ruling, affirmed later by the U.S. Supreme Court, further reinforced the employment-based system.

"Perhaps the most influential aspect of government intervention that shaped the employer-based system of health insurance was the tax treatment of employer-provided contributions to employee health insurance plans. First, employers did not have to pay payroll tax on their contributions to employee health plans. Further, under certain circumstances, employees did not have to pay income tax on their employer's contributions to their health insurance plans. The first such exclusion occurred under an administrative ruling handed down in 1943 which stated that payments made by the employer directly to commercial insurance companies for group medical and hospitalization premiums of employees were not taxable as employee income. While this particular ruling was highly restrictive and limited in its applicability, it was codified and extended in 1954. Under the 1954 Internal Revenue Code (IRC), employer contributions to employee health plans were exempt from employee taxable income. As a result of this tax-advantaged form of compensation, the demand for health insurance further increased throughout the 1950s ..."

I have no insight into how the U.S. Supreme Court will rule on the Obama health care legislation. But the U.S. health care system continues to face severe problems: tens of millions of uninsured Americans with their share of the U.S. population rising, rises in health care costs that continually outstrip inflation, and the ongoing decline of employer-provided health insurance, the main mechanism through which a majority of Americans have received their health insurance in the last half-century. Whether the Affordable Care Act is ruled constitutional or not, it's abundantly clear that it won't be a final answer to these issues; indeed, it may well end up being a transitional piece of legislation that needs to be thoroughly revisited and reworked. For example, the Congressional Budget Office recently estimated that under the Affordable Care Act, after taking into account that some firms will expand health insurance coverage and others will contract or not offer it, by 2019-2022, about 3-5 million fewer people on net will have employer-sponsored coverage as a result of the law.

Note: Thanks to Danlu Hu for putting together the figure on health insurance coverage.

Friday, March 23, 2012

Theories of leverage cycles have been around for a while: to name a few examples, in the work of Irving Fisher back in the 1930s, Hyman Minsky in the 1970s, and John Geanakoplos in the last decade or so. Here, I'll offer a quick description of the theory of leverage cycles, and why it makes a plausible explanation for financial crises and at least some recessions. There has been some question about how well the data supported such a story. I'll offer some basic graphs suggesting that the Great Recession in the U.S. economy can be interpreted (at least in part) as a leverage cycle. In addition, in a recent working paper called "When Credit Bites Back: Leverage, Business Cycles, and Crises," Oscar Jorda, Moritz Schularick, and Alan M. Taylor (no relation) present evidence on the importance of leverage cycles based on data from almost 200 recessions in 14 advanced economies between 1870 and 2008. If sharply rising leverage poses systematic macroeconomic hazards, it suggests that central banks and other policy-makers should be paying attention to this variable as the economy evolves.

"Leverage" is the term that economics and finance people use for the extent of borrowing. To illustrate the theory of the "leverage cycle," I'll first use an example from housing markets. Say that the housing market is using a general rule (with a few exceptions) that people need to have a 20% down payment. But over time, housing prices seem to be stable or rising, so that 20% begins to seem overly stringent. More loans get made with a 10% down payment, or no down payment, or subprime mortgages to those who wouldn't have qualified to borrow earlier, and all the way to the infamous NINJA loans, made when the borrower didn't provide any financial information: that is, "No Income, No Job or Assets." The greater ease of borrowing means more purchasing power to buy houses, and the rising price of houses that results makes it seem like even lower down payments make sense. The same logic leads people to increase their leverage in other ways: taking out bigger loans, extending the term of the loan, or choosing mortgages that reset with much higher payments.

But of course, as the down payments fall and leverage increases in these other ways, borrowers become much more vulnerable to a downturn in prices. And when a leverage cycle pops, not only borrowers but those holding the debt, like banks and financial institutions, are vulnerable as well.
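To make that vulnerability concrete, here is a small numerical sketch. The house price, down payments, and price decline are hypothetical illustrations, not figures from the post; the point is only that a thinner down payment turns a modest price drop into negative equity.

```python
# Sketch: how a smaller down payment magnifies a price decline.
# All numbers are hypothetical; amortization is ignored for simplicity.

def equity_after_drop(price, down_payment_rate, price_drop):
    """Home equity remaining after prices fall."""
    loan = price * (1 - down_payment_rate)      # amount borrowed
    new_value = price * (1 - price_drop)        # value after the decline
    return new_value - loan

for dp in (0.20, 0.10, 0.0):
    eq = equity_after_drop(300_000, dp, 0.15)
    status = "underwater" if eq < 0 else "positive equity"
    print(f"{dp:.0%} down, 15% price drop -> equity ${eq:,.0f} ({status})")
```

With a 20% down payment the borrower still has equity after a 15% price decline; with 10% down or less, the same decline leaves the loan worth more than the house.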

Now extend the example of increased borrowing ("leverage") for housing across all sectors of the economy. When the economy is going well, the risk of default looks low, and borrowing expands: that is, more borrowing for housing, for cars, for credit cards, for student loans. More borrowing by businesses and by financial firms. The greater borrowing pushes up the economy for a time, but borrowing can't stay on a rising trend forever. When the bubble bursts, those who have overborrowed still need to make their interest payments. Some will be unable to do so, and many will make the payments but retrench for a time, trying to minimize their borrowing and reduce their debt levels. Just as the climbing leverage in the upward part of the cycle supported an expanding economy, the falling leverage in the downward part of the cycle magnifies the downward effects.

The claim isn't that leverage cycles explain all recessions, but rather that they can help explain why some recessions--often those that also include a financial crash--can turn out to be so severe. The U.S. data on borrowing certainly suggest that the economy went through a leverage cycle. Here are two graphs from FRED, the ever-useful website run by the St. Louis Fed. The first shows total bank credit in proportion to GDP. Total bank credit was about 45% of GDP, give or take a bit, from 1975 through the mid-1990s. But then it starts rising, hitting 50% of GDP by about 2002, and then shooting up to about 67% of GDP by 2009. It has dropped since then, but is still above 60% of GDP. When leverage rises this fast, it has "bubble" written all over it.

A second graph tells a similar story, but this time using total credit market debt owed--that is, including bank debt along with bonds and commercial paper and other forms of borrowing--divided by GDP. One might expect an economy's ratio of bank credit/GDP or total credit/GDP to rise gradually over time, as financial institutions in a country become more developed and sophisticated. But notice that it takes 28 years for total credit market debt to rise from 150% of GDP in 1975 to 300% of GDP in about 2003--and then just six years for it to rise from 300% of GDP to 400% of GDP. Also, notice that in earlier recessions, these measures of leverage flatten out, but don't drop off noticeably. The Great Recession looks like a time when, unlike other recessions in this time period, borrowers and lenders as a group felt a need to pull back dramatically. Indeed, that's one way to illustrate what a "financial crisis" means on a graph.
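A quick back-of-the-envelope calculation, using the approximate figures just cited, shows how much faster the credit/GDP ratio was growing in the later episode:

```python
# Annualized growth rate of the total credit/GDP ratio implied by the
# two episodes: 150% -> 300% over 28 years, then 300% -> 400% over 6.

def annual_growth(start, end, years):
    """Compound annual growth rate between two levels."""
    return (end / start) ** (1 / years) - 1

slow = annual_growth(150, 300, 28)   # 1975 to about 2003
fast = annual_growth(300, 400, 6)    # about 2003 to 2009
print(f"1975-2003: {slow:.1%} per year")   # roughly 2.5% per year
print(f"2003-2009: {fast:.1%} per year")   # roughly 4.9% per year
```

The ratio's growth rate roughly doubled in the six years before the crisis, which is the kind of acceleration the "credit boom" argument points to.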

Jorda, Schularick and Taylor sift through data on nearly 200 recessions in advanced economies from 1870 to 2008. Some involved financial crises; many did not. They write:

"We document a new and, in our view, important stylized fact about the modern business cycle: the credit-intensity of the expansion phase is closely associated with the severity of the recession phase. In other words, we show that a stronger increase in financial leverage, measured by the rate of growth of bank credit over GDP in the boom, tends to lead to a deeper subsequent downturn. Or, as the title of the paper suggests--credit bites back. This relationship between leverage and the severity of the recession is particularly strong when the recession coincides with a systemic financial crisis, but can also be detected in "normal" business cycles."

In particular, they find that when an expansion has been driven by a credit boom, the recessions that follow are more likely to involve a severe drop in lending, which in turn is felt most greatly in a decline in investment:

"In a normal recession the drop in private loans mirrors the drop in real GDP per capita and the amount of leverage appears to have almost no effect. Thus at the six year mark, the cumulated drop is also about 5%. Contrast that with the severe contraction in lending during a financial crisis recession. With average levels of excess leverage, lending activity drops by three times more than in normal times, about 15%. Measured against the decline in output during the same circumstances, the ratio is about 2-to-3. ... [W]here is the drop in lending most acutely felt? ... In normal recessions, the cumulative decline in the investment to GDP ratio is roughly on a par with the decline in output (but since we report the ratio, this naturally means that investment is declining faster than output). These declines are far more dramatic during financial crisis recessions, almost three times as large in magnitude."

A key policy question from the Great Recession is what policy-makers should be looking at. Saying that it should be national policy to make sure that housing prices don't rise too fast or don't fall, or that the stock market won't fall, seems unrealistic and counterproductive in a market-oriented economy. (After all, part of what drives a leverage cycle is a belief that the danger of falling prices is so low.) But data on bank credit and total credit are available on a regular basis. At least a couple of years before the financial crisis first hit in late 2007, it would have been possible for the central bank and financial regulators to take various steps to slow the credit boom. Of course, it would have been politically unpopular at that time for them to do so! But as the economy staggers through a shaky recovery, with unemployment rates predicted to stay above 8% into 2014, maybe serious policy-makers can find the courage to forestall the next credit boom before it leads to such a devastating crash.

As Towse writes: "Governments the world over are looking for evidence on the economic effects of copyright law, the more so since the increased emphasis in government growth policy on the role of the creative industries has led to the justification of copyright as a stimulus to the economy. What they usually get in response to calls for evidence are persuasive statements from stakeholder interest groups that have sufficient funds for lobbying." Here are some lessons that Towse draws from the existing evidence:

Copyright terms are too long
"Almost all economists are agreed that the copyright term is now inefficiently long with the result that costs of compliance most likely exceed any financial benefits from extensions (and it is worth remembering that the term of protection for a work in the 1709 Statute of Anne was 14 years with the possibility of renewal as compared to 70 years plus life for authors in most developed countries in the present, which means a work could be protected for well over 150 years)."

Extending copyright protection retroactively never makes sense
"One point on which all economists agree is that there can be no possible justification for retrospective extension to the term of copyright for existing works since it defies the economic logic of the copyright incentive, something that nevertheless has been enacted on several occasions. ... Perhaps the most notorious case was the CTEA (Sonny Bono or Mickey Mouse) extension in the USA [the Copyright Term Extension Act of 1998] which was also followed up by the European Union, thereby handing out economic rents to the rich and famous of the entertainment world and, more likely, to their descendants."

Copyright is too one-size-fits-all
"[T]he scope of copyright is very broad and nowadays covers many items of no commercial value that were never intended to be commercialized, as is the case with a great deal of material on social-networking sites. This raises the question of the incentive role of the scope of copyright since it offers the same ‘blanket’ coverage for every type of qualifying work. In general, the lack of discrimination in this ‘one-size-fits-all’ aspect of copyright is another subject on which economists are agreed: in principle, the incentive should fit the type of work depending upon the investment required, the potential durability of the work and so on - computer software and operas do not have much in common. This applies as much to the term as to the scope of copyright; some works retain their value over a very long period while others lose it very quickly. The rationale for this lack of discrimination, however, is that ‘individualizing’ incentives would be prohibitively costly both to initiate and to enforce. As it is, copyright is recognized to have become excessively complex and therefore very costly for users and authors."

Copyright will often be managed collectively
"For many rights, such as the public performance right, individual authors and performers cannot contract with all users and the solution is collective rights management. That minimizes transaction costs for both copyright holders and users of copyright material but introduces monopoly pricing and blunts the individual incentive — another trade-off. ... Most economists agree that collective rights management is necessary in those circumstances in order for copyright to be practicable."

Only superstars profit much from copyright
"Research on artists’ total earnings including royalties shows that only a small minority earn an amount comparable to national earnings in other occupations and only ‘superstars’ make huge amounts. Copyright produces limited economic rewards to the ‘ordinary’ professional creator; on the other hand, what the situation would be like absent copyright protection cannot be estimated."

Copyright can encourage protecting rents ahead of actual creativity
"[E]conomists have long had concerns that copyright has a moral hazard effect on incumbent firms, including those in the creative industries, by encouraging them to rely on enforcement of the law rather than adopt new technologies and business models to deal with new technologies. ... It is well-known that creative industries have spent huge amounts of money lobbying governments for increased copyright protection both through strengthening the law and stronger enforcement, not only within national boundaries but also through international treaties."

A Policy Proposal: Renewable Copyright
"Copyright could become more similar to a patent by having an initial term of protection of a work, say of 20 years, renewable for further terms. ... The advantage of this is twofold: it enables a ‘use it or lose it’ regime to function and, more relevant to the economics of copyright, it enables the market to function better in valuing a work (the vast majority of works, as we know, are anyway out of print because they are deemed to have no commercial value while the copyright is still valid); knowing that renewal would be necessary would also alter contractual terms between creators and intermediaries, thereby improving the efficiency of contracting and the prospect of fairer contracts."

In my own Journal of Economic Perspectives, Hal Varian wrote a nice article on "Copying and Copyright" in the Spring 2005 issue. Hal discusses useful insights about the appropriate height, width, and length of copyright, and how the existence of copyright affects pricing decisions. I found especially memorable and amusing his pocket overview of the U.S. history of copyright: that is, ignoring foreign copyrights through much of the nineteenth century, because there were relatively few U.S. authors with an international reputation to protect, and pirating works from the United Kingdom was free. Here's Varian (footnotes omitted):

"The U.S. Copyright Act of 1790 was modeled on the Statute of Queen Anne, and it offered a 14-year monopoly to American authors, along with a 14-year renewal. Note carefully the emphasis on American. Foreign authors’ works were not protected by the American law. In contrast, many other advanced countries, such as Denmark, Prussia, England, France and Belgium, had laws respecting the rights of foreign authors. By 1850, only the United States, Russia and the Ottoman Empire refused to recognize international copyright.

"The advantages of this policy to the United States were quite significant: it had a public hungry for books and a publishing industry happy to provide them. A ready supply of market-tested books was available from England. Publishing in the United States was virtually a no-risk enterprise: whatever sold well in England was likely to do well in the United States.

"American publishers paid agents in England to acquire popular works, which were then rushed to the United States and set in type. Competition was intense, and the first to publish had an advantage of only days before they themselves were subject to competition. As might be expected, this unbridled competition led to very low prices: in 1843, Dickens’s Christmas Carol sold for six cents in the United States and $2.50 in England.

"However, there were some mitigating factors. Publishers sometimes paid well-known English authors for advance copies of their work, since priority was critically important for sales, and, according to Plant (1934), some English authors received more money from American sales, where they held no copyright, than from English sales, where copyright was enforced.

"Throughout the nineteenth century, proponents of international copyright protection lobbied Congress. They advanced five arguments for their position: 1) it was the moral thing to do; 2) it would help stimulate the production of domestic works; 3) it would prevent the English from pirating American authors; 4) it would eliminate ruthless domestic competition; and 5) it would result in better-quality books.

"The rest of the world was far ahead of the United States in copyright coordination. In 1852, Napoleon III issued a decree indicating that piracy of foreign works in France was a crime; he was motivated by the hope of reciprocal arrangements with other European countries. His action led to a series of meetings, culminating in the Bern conventions of 1883 and 1885. The Bern copyright agreement was ratified in 1887 by several nations, including Great Britain, France, Germany and Spain—but not the United States. It was not until 1891 that Congress passed an international copyright act.

"The arguments advanced for the act were virtually the same as those advanced in 1837. However, the intellectual climate was quite different. In 1837, the United States had little to lose from copyright piracy. By 1891, it had a lot to gain from respecting international copyright, the chief benefit being the reciprocal rights granted by the British. On top of this was the growing pride in homegrown American literary culture and the recognition that American literature could only thrive if it competed with English literature on an equal footing. Although the issue was never framed in terms of “dumping,” it was clear that American authors and publishers pushed to extend copyright to foreign authors to limit cheap foreign competition—such as Charles Dickens.

"The only special interest group that was dead opposed to international copyright was the typesetters union. The ingenious solution to this problem was to buy them off: the Copyright Act of 1891 extended protection only to those foreign works that were typeset in the United States! This provision stayed in place until 1976."

Wednesday, March 21, 2012

As Mahmoud Elamin and William Bednar of the Cleveland Fed point out: "Structured finance has been vilified as the culprit behind the worst recession since the Great Depression. Every aspect of its design has been disparaged: faulty underlying loans, bad incentives for originators, dubious AAA ratings and mispriced risks." In the March 2012 issue of the Cleveland Fed's Economic Trends, they update the story by asking: "How Is Structured Finance Doing?"

Start with defining terms: "Structured finance securities are debt instruments collateralized by a securitization pool of loans. The pool’s cash inflow supports the cash outflow to pay the securities off. The securities are divided into multiple tranches characterized by their seniority. The most senior tranche is paid first; the second senior gets paid only after the first senior is paid and so on. Investors buy the tranche that best fits their risk appetites. We look at three products that fall under the general heading of structured finance: mortgage-backed securities (MBS), asset-backed securities (ABS), and collateralized debt obligations (CDO). MBS are backed by mortgages, ABS are backed by assets such as credit card loans, auto loans, student loans, and the like, while CDO are backed by investment grade loans, high-yield loans, other structured finance products, and the like."
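The seniority "waterfall" described in that definition can be sketched in a few lines. The tranche sizes and cash amounts below are hypothetical, chosen only to show the payment order:

```python
# Minimal sketch of a seniority waterfall: available pool cash pays the
# most senior tranche first, then the next, and so on down the stack.
# Tranche sizes and the cash collected are made-up illustrative numbers.

def waterfall(cash, tranche_claims):
    """Allocate pool cash to tranches listed in order of seniority."""
    payouts = []
    for claim in tranche_claims:
        paid = min(cash, claim)   # each tranche gets at most its claim
        payouts.append(paid)
        cash -= paid              # whatever remains flows down the stack
    return payouts

# Pool owes $70 senior, $20 mezzanine, $10 junior, but collects only $80:
print(waterfall(80, [70, 20, 10]))  # -> [70, 10, 0]
```

The $20 shortfall lands entirely on the bottom of the stack: the senior tranche is paid in full, the mezzanine takes a partial loss, and the junior tranche gets nothing.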

What happened in each of these three categories? In the first category, the mortgage market, the total value of mortgage originations dropped off after about 2003. However, the share of mortgage originations that were packaged as securities has continued to rise. Here are a couple of illustrative figures.

Why has the share of mortgages packaged as securities continued to rise? Elamin and Bednar name three possible reasons, but don't try to quantify them: a rise in private demand for such instruments, policies of government-sponsored enterprises like Fannie Mae and Freddie Mac, and the Federal Reserve "quantitative easing" policies, which have involved direct purchase of about $1 trillion in mortgage-backed securities.

The second broad category of securitized finance is asset-backed securities. The biggest categories here are securities backed by auto loans and by credit card loans, with securities backed by student loans as another large category. Issuance of asset-backed securities dropped off by about half after 2006. In addition, the share of total auto-loan debt that is securitized fell from above 40% to 30%, while the share of credit card debt repackaged as asset-backed securities fell from more than 30% to around 15%.

The third category is collateralized debt obligations. This is the category of structured finance most thoroughly implicated in the housing price bubble. Issuance of these securities rose from less than $100 billion in 2003 to about $500 billion in both 2006 and 2007, at the peak of the housing bubble, and since has fallen to near-zero. In addition, these collateralized debt obligations at the peak were largely based on mortgages, especially subprime mortgages. These were the financial instruments that started off with subprime mortgages, and then were divided into tranches. The junior tranches agreed to take the first of any losses that arose. Thus, the senior tranches--seemingly protected by the junior tranches--managed to get AAA credit ratings, and thus regulators let banks hold these "safe assets." When the housing bubble burst and many of these subprime mortgages went sour, the losses leaked into the banking system. Today, CDOs aren't based on housing; instead, what remains of the market mainly involves securitizing investment-grade bonds and high-yield loans.
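The first-loss protection that earned senior tranches their AAA ratings can also be sketched numerically. The tranche sizes and loss amounts here are hypothetical; the point is that the senior tranche is untouched only until losses exhaust the cushion beneath it:

```python
# Sketch of first-loss protection: pool losses hit the junior tranche
# first, and reach the senior tranche only after every tranche below it
# is wiped out. Tranche sizes and losses are made-up illustrative numbers.

def allocate_losses(loss, tranches):
    """Assign pool losses to tranches, most junior first.

    `tranches` is listed senior-first; the result uses the same order.
    """
    hits = []
    for size in reversed(tranches):   # junior absorbs losses first
        hit = min(loss, size)
        hits.append(hit)
        loss -= hit
    return list(reversed(hits))       # report senior-first again

# $70 senior / $20 mezzanine / $10 junior pool with $25 of loan losses:
print(allocate_losses(25, [70, 20, 10]))  # -> [0, 15, 10]
```

With $25 of losses, the $10 junior tranche is wiped out and the mezzanine absorbs the rest, while the senior tranche is untouched; only losses above $30 would reach it. When subprime defaults blew past those cushions, the "safe" senior tranches took losses too.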