Pages

Saturday, February 28, 2015

Much of this week I've been posting figures and snippets of analysis from the 2015 Economic Report of the President, written by the Council of Economic Advisers. Here's one more. Companies that have cash left over after paying their expenses and dividends have several choices: among them, they can use the funds to invest in increasing output or improving efficiency, or they can use the funds to buy back the company's own shares. Here's the pattern between these two choices over time. (Firms can also get funds for investment from other sources, like issuing bonds, so the two lines in the figure below don't need to sum to 100%.)

And here's the explanation from the Council of Economic Advisers:

Nonfinancial corporations spent a lower-than-average share of their internal funds (also known as cash flow) on investment during 2011 to 2013 (see Figure 2-25). Instead, these corporations used a good part of those funds to buy back shares from their stockholders. Share buybacks are similar to dividends insofar as they are a way for corporations to return value to shareholders. They differ, however, with regard to permanence: whereas dividend changes tend to persist, share buybacks are one-time events. (When firms raise investment funds by issuing new equity, the nonfinancial sector aggregate of share buybacks in the figures can be negative, as was common in the 1950s and 1960s.) The decline in the invested share of internal funds from 2011 to 2013, together with the rise in share buybacks, suggests that firms had more internal funds than they thought they could profitably invest. As can be seen in Figure 2-25, the investment outlook appears to have improved in 2014, and the investment share of internal funds has rebounded to near its historical average. Share buybacks, however, remain high.

I'll only add that one of the major conundrums for the U.S. economy during the slow recovery since the Great Recession has been the issue of "Sluggish U.S. Investment" (June 27, 2014). Many firms were earning high profits, but as they saw it, the most productive use of a substantial share of those profits in the last few years apparently was not investment in higher efficiency and output.

Friday, February 27, 2015

It's fairly well-known that U.S. labor force participation--that is, the share of U.S. adults who are classified as either employed or unemployed--has been dropping. But it's not always recognized how the U.S. differs from other high-income economies in this trend, or how far back the trend goes.

The 2015 Economic Report of the President, released last week by the White House Council of Economic Advisers, offers some striking evidence on these points. The top figure shows labor force participation rates for "prime-age males," those in the 25-54 age category. The nice thing about looking at this group is that countries may differ considerably in the extent to which students attend school into their early 20s, or the extent to which people retire in their late 50s and early 60s. Looking at the "prime-age" group leaves these ages out of the picture.

For men, the U.S. was middle-of-the-pack in labor force participation rates of prime-age males in 1990, and now vies with Italy for the lowest level. For women, the U.S. was near the top of the pack in prime-age labor force participation in 1990, but since then has been surpassed by France, Canada, Germany, and the United Kingdom, and is now about even with Japan--which has not historically been known as a country with high labor force participation for women.

The Council of Economic Advisers sums up the cross-country patterns in this way:

Since the financial crisis, U.S. prime-age male participation has declined by about 2.5 percentage points, while the United Kingdom has seen a small uptick and most large European economies were generally stable. Of 24 OECD countries that reported prime-age male participation data between 1990 and 2013, the United States fell from 16th to 22nd. The story is somewhat similar among prime-age females. ... In 1990, the United States ranked 7th out of 24 current OECD countries reporting prime-age female labor force participation, about 8 percentage points higher than the average of that sample. But since the late 1990s, women’s labor force participation plateaued and even started to drift down in the United States while continuing to rise in other high-income countries, as shown in Figure 1-10. As a result, in 2013 the United States ranked 19th out of those same 24 countries, falling 6 percentage points behind the United Kingdom and 3 percentage points below the sample average.

These patterns of decline in US male and female labor force participation go back well before the Great Recession. The share of the male population above the age of 16 in the labor force has been falling for decades. The share of the female population above the age of 16 in the labor force rose steadily in the second half of the 20th century, but leveled out around 2000 and has been falling since.

When combining the cross-country data, the time series data, and the depth of the Great Recession, the report argues that the decline in labor force participation rates in recent years is pretty well explained. The CEA writes:

Between 2007 and 2012 the decline in participation is fully (and at some points more than fully) explained by the aging of the population and standard business-cycle effects. Beginning in 2012, however, the labor force participation rate decline began to exceed what was predicted from aging and cyclical factors. Since late 2013, the labor force participation rate has stabilized and the portion of the decline that was unexplained shrank, albeit slowly, between the second and fourth quarters of 2014 ...

What explains the "residual" factor in the figure below? Part of it is probably due to a gradually lower rate of labor force participation within US age groups (like the evidence on prime-age workers given above), while another part is surely due to the fact that the Great Recession was so severe that it "led to a greater-than-normal cyclical relationship between unemployment and participation."

Whatever the reasons, as the U.S. economy looks ahead to the next few decades, figuring out ways to stabilize and reverse the decline in labor force participation is an important goal of public policy.

Thursday, February 26, 2015

A certain amount of job training happens through the basic experience of showing up for work every day. But in some cases, more specific training is called for, which can be sponsored or provided by an employer, or paid for by the worker. Here's some evidence from the recently released 2015 Economic Report of the President, by the Council of Economic Advisers, showing a decline in employer-provided and on-the-job training in recent decades. This data is collected only irregularly, when the Census Bureau decides to include the "module" with this particular set of questions. But even though the most recent data is from 2008, it seems pretty unlikely that employer-provided and on-the-job training have risen in the aftermath of the Great Recession.

The economic theory of job training is built on a distinction between company-specific skills, which are of much greater use to a specific employer, and general-purpose skills, which can be used with a variety of employers. In an economy where people can move fairly easily between jobs, employers will be willing to pay for company-specific training--that is, training that is mainly relevant to learning about what is done at that particular company, and would be of less use at other companies. If employers know that employees are likely to remain with the firm for a substantial time, then employers become more willing to pay for general-purpose training, because it is more likely to pay off for the firm. Otherwise, people will have to pay for general-purpose training, which could be used at other companies. In some cases, workers may "pay" for their general purpose training by taking a job that offers lower wages, but offers a systematic training program.

Thus, types of skills, who pays for them, and how they are paid for can be sliced various ways. But just looking at the overall pattern, a decline in employer-sponsored and on-the-job training suggests that workers who wish to keep building their skills are getting less support from their employers.

Wednesday, February 25, 2015

Everyone knows that the economic future belongs to those who put technology and innovation to work. One part of the formula for economic success for a high-income economy is active research and development, which then offers spillovers that are typically greater for the national economy than for the world economy as a whole. I've posted here about the arguments for raising R&D spending substantially, and here with a global overview of R&D spending. Dan Steinbock offers some additional perspective in "American Innovation Under Structural Erosion and Global Pressures," a February 2015 report written for the Information Technology and Innovation Foundation.

Here's a picture of the global R&D effort. The horizontal axis shows R&D spending as a share of GDP for various economies. The vertical axis shows scientists and engineers per million people (thus adjusting for population size). The size of the circle for each country is relative to the total number of scientists and engineers: thus, China and India have fewer scientists and engineers per capita, but because of their large populations, the size of their circles is relatively large. The color of the circle shows the region of the world, as given by the key in the upper left of the figure.

Clearly, the U.S. position in global R&D remains fairly strong. But the US lags behind Germany and Japan, among others, in the share of its economy going to R&D. The size of the yellow circles--Japan, South Korea, Taiwan, Singapore, China, Australia, and India--shows that the region with the largest absolute number of scientists and engineers is now the Asia-Pacific area. The U.S. has had a tradition of leading or being close to the lead in most areas of technology. As technological capacities expand around the rest of the world, the U.S. needs to strengthen its ties to the work being done elsewhere, increase the U.S. R&D effort, or both.

Steinbock lays out a number of trends in U.S. R&D. Here are a few that caught my eye. First, the ratio of R&D spending to GDP hasn't moved much in the US since the 1960s, hovering around 2.5% of GDP. However, federal support for R&D has been falling by this measure over time, while the nonfederal share has been rising.

Here's a figure showing federal R&D spending as a share of total federal spending (the red line). For four decades, while pretty much every politician talks up the virtues and importance of R&D spending, the share of the federal budget going in that direction has been sliding gradually lower--including in the last few years. Clearly, when it comes to political clout, researchers are slowly losing the battle.

Where is the US doing its R&D? A greater share is being done within businesses, and funded by businesses. A lesser share is being done within academic, government, and non-profit labs.

Of course, it's highly desirable to have a vibrant R&D effort in the private sector. But it's important to recognize that businesses tend to emphasize the "D" of development, which promises a relatively rapid economic payoff, rather than the "R" of research, where the payoffs for that specific firm are often less clear. The fear is that the US R&D effort is disproportionately heavy on, say, new phone apps and web dating services, rather than on basic research in, say, biomedical and life sciences, materials science, nanotechnology, and energy production and storage.

Tuesday, February 24, 2015

A little while back, I was discussing with a student how or in what ways the U.S. economy would be affected by economic events in China. For a child of the 20th century, like me, the entire discussion was a little surreal. For the first nine-tenths of the 20th century, it would have been irrelevant to the point of ridiculousness even to ask how China's economy would affect the United States. But here in the 21st century, that question will take on increasing importance.

First of all, China's rate of economic growth has slowed a bit, down to a "mere" 7.5% per year. This is down from the boom years of the mid-2000s, but it's similar to China's rate of growth in the 1990s. A quick reminder: at a 7.5% rate of growth, the size of China's economy doubles in a decade.
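The arithmetic behind that reminder is a quick compound-growth calculation: solve (1 + g)^t = 2 for t.

```python
# Doubling time for an economy growing at a constant annual rate g:
# solve (1 + g)^t = 2, which gives t = ln(2) / ln(1 + g).
import math

def doubling_time(g):
    """Years for a quantity to double at annual growth rate g."""
    return math.log(2) / math.log(1 + g)

print(round(doubling_time(0.075), 1))  # 9.6 -- roughly a decade at 7.5%
```

At 7.5% annual growth the doubling time works out to about 9.6 years, which is where the "doubles in a decade" shorthand comes from.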

Along with the slower rate of growth, the other big concern about China's economy is whether it is experiencing its own version of a credit bubble. Here's a figure showing the credit flowing to nonfinancial corporations and households (that is, government debt is not included, and borrowing by the financial sector isn't included). Notice that the vertical axis here is the size of GDP--and the size of China's GDP is roughly similar to that of the United States or that of the euro area. China's economy has seen a remarkable burst of credit in the last six or seven years. Indeed, China stands out from all other emerging economies for its combination of high debt/GDP ratios and a very rapid rise in debt in recent years.

What level of concern is appropriate for this combination of growth slowdown (albeit still to a fairly rapid pace) and credit boom in China? Here's what the Council of Economic Advisers has to say:

China’s economy grew 7.3 percent during the four quarters ended in the fourth quarter of 2014, down from an annualized rate of 9.2 percent in the eight quarters ended in the fourth quarter of 2011 (Figure 2-9). Both the IMF and the World Bank have downgraded their projections for Chinese growth in 2015 to a rate below 7.5 percent, which until recently was thought to be the Chinese authorities’ target rate.

China may face stresses in adapting to a slower rate of expansion. In May, President Xi Jinping reportedly suggested that the Chinese “… must boost our confidence, adapt to the new normal condition based on the characteristics of China’s economic growth in the current phase and stay cool-minded.” One concern is the growth in credit to nonfinancial corporations and households, much of which has been channeled through the so-called shadow banking sector (which undertakes risky bank-like functions, but outside the government-regulated part of the financial sector). As shown in Figure 2-10, credit growth in China since 2008 has increased faster than in many developed countries. An initial surge in 2009 was seen as an aggressive response to the global financial crisis, in line with expansionary policies around the world. The renewed boom in credit since 2012, however, has raised worries about the rapid expansion of the unregulated shadow banking sector and a bubble in real estate prices. The government has responded with a number of policy measures to limit lending activities outside of the traditional banking sector. Property price gains have moderated, however, and prices began to fall in 2014, even in larger, wealthier cities where in the past demand has typically outstripped supply. There is growing concern about overbuilding because contraction in the construction sector would further depress aggregate growth and could cause financial instability.

A further economic slowdown in China would have ramifications for the global economy and, in particular, for low- and middle-income countries. Trade between China and other emerging BRICS economies (Brazil, Russia, India, and South Africa) has expanded since 2000. China is now the top export destination for 15 African countries, 13 Asian economies, and 3 Latin American countries. If demand in China slows, exports to China would decline, broadly dampening emerging-economy growth. Since mid-2011, the other BRICS countries have suffered declining terms of trade (the relative price of a country’s exports compared with its imports). This decline is accounted for in large part by falling prices of commodities and raw materials, to which China’s slowdown is a major contributor.

Monday, February 23, 2015

Inequality of income is different from inequality of wealth. Income refers to what is gained over a time horizon, often a year. Wealth refers to what has been accumulated in the past. It is well-accepted among economists that income inequality has risen in recent decades: here are a couple of my more recent posts on the subject with some US data and some international data. But when it comes to wealth inequality, the data and the theory are much less clear.

Broad interest in the subject of wealth inequality--whether it has risen, is rising and/or will rise in the future--is a big part of what propelled Thomas Piketty's book Capital in the Twenty-First Century to best-seller status. The Journal of Economic Perspectives, where I have labored in the fields as Managing Editor since the launch of the journal in 1986, has a four-paper symposium in the just-released Winter 2015 issue about wealth inequality, with contributions from Daron Acemoglu and James Robinson, Charles Jones, Wojciech Kopczuk, and a response and final word from Piketty. Here are some of the thoughts and insights about wealth inequality that I took away from the symposium.

1) The data on how inequality of wealth has evolved in the past is limited. As Piketty writes:

"[L]ong-run wealth inequality series are available for a much more limited number of countries than income inequality series. In Chapter 12 of my book, I present wealth inequality series for only four countries (France, Britain, Sweden, and the United States), and the data are far from perfect. We do plan in the future to extend the World Top Incomes Database (WTID) into a World Wealth and Income Database (W2ID) and to provide homogenous wealth inequality series for all countries covered in the WTID (over 30 countries). But at this stage, we have to do with what we have."

2) The wealth inequality data from the few countries in Piketty's book tend to show a dramatic fall in the share of wealth held by the top 1% from levels in the late 19th and early 20th century, and then a much more modest rise in wealth inequality in recent decades. Here's a figure from Chad Jones, using Piketty's underlying data, on the share of wealth (not income!) held by the top 1%. Overall, the big pattern is high and rising wealth concentration during the 19th century, wealth concentration falling or flat up through about 1970, and then a rebound in wealth concentration that is modest by these long-term historical standards.

3) The factual argument that US wealth inequality is rising sharply in recent decades--rather than rising only modestly as shown in Piketty's data above--ends up relying on a particular method of calculating wealth inequality. Wojciech Kopczuk goes through this issue in some detail in his contribution to the JEP symposium.

Broadly speaking, there are three ways to measure US wealth inequality in recent decades. One way is to use data from the Survey of Consumer Finances, which is carried out every three years (most recently in 2013) by the Federal Reserve. A second way is to use data from estate taxes over time, which involves figuring out ways to project the wealth of the top 1% of the total population based on those who die and file estate tax returns in a given year. Both of these methods show a modest or minimal rise in US wealth inequality in recent decades.

The third method is to look at the capital income people receive as shown in their tax returns, and use that data to estimate their wealth. For example, if someone reports a certain amount of income from bank interest, then by looking at interest rates in the past year, you can make a solid estimate of how much money (on average) was in their bank account. Emmanuel Saez and Gabriel Zucman have published a working paper using this approach that is getting a lot of attention. It is "Wealth Inequality in the United States since 1913: Evidence from Capitalized Income Tax Data,” published last October as NBER Working Paper 20625.

This method requires some extrapolation. In some cases, wealth doesn't throw off income in a given year: for example, an IRA or 401(k) account doesn't show up as income on your taxes; a gain in the value of your home doesn't show up as income in a given year; a higher value of a business you are running doesn't necessarily show up as income in a given year; and if you own stock but don't get paid dividends, it doesn't show up as income. Indeed, as Kopczuk reports, Saez and Zucman estimate that "capital income on tax returns represents only about one-third of the overall return to capital." Even when a capital asset does throw off some income, it can be tricky to know what interest rate to apply so that you can infer the amount of wealth. It's easy enough to look at interest paid by a bank and infer the size of a bank account. But if you have capital gains from selling stock, or from more complex financial assets, inferring the underlying size of the wealth is trickier. Thus, a variety of extrapolation methods are used, like approximating the value of real estate by the amount of property taxes paid, which shows up as a deduction on a number of income tax forms.
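The core inference of the capitalization method fits in one line: divide observed capital income by an assumed rate of return. The function and numbers below are purely illustrative--a minimal sketch of the idea, not the Saez-Zucman procedure.

```python
# A stylized sketch of the capitalization method: observed capital income
# is divided by an assumed rate of return to infer the stock of wealth
# that generated it. (Illustrative only; actual studies apply different
# rates to different asset classes.)

def capitalize(capital_income, rate_of_return):
    """Infer wealth from the income it throws off in a year."""
    return capital_income / rate_of_return

# Hypothetical example: $500 of bank interest at an assumed 2% deposit
# rate implies roughly $25,000 sitting in the account.
print(capitalize(500, 0.02))  # 25000.0
```

The example also makes the method's sensitivity obvious: halve the assumed rate of return and the inferred wealth doubles, which is one reason the three approaches can disagree.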

During some periods, this "capitalization method" of estimating wealth tracks the results of the survey and estate-tax methods pretty well. In the last few years, however, the capitalization method shows more of a rise in wealth concentration at the top: in the figure above, the top 1% would hold about 40% of total wealth, rather than 30%. Piketty says in his JEP essay that he tends to favor the Saez-Zucman estimates over those presented in his book. Kopczuk says that he tends to favor the survey-based and estate-tax methods. Both agree that measuring wealth and matching up the estimates across these three methods is a lively and unsettled area of research.

4) The form in which wealth is held and generated matters: for example, consider wealth from inheritance. If a rising share of wealth is inherited, then this might be more troubling. However, Kopczuk cites various pieces of evidence for the U.S. that "[t]he importance of inheritances as the source of wealth at top of the wealth distribution peaked in the 1970s and has declined since then."

Or consider wealth from housing. An often-mentioned paper by Odran Bonnet, Pierre-Henri Bono, Guillaume Camille Chapelle, and Étienne Wasmer argues that essentially all of the variation in household wealth in recent decades is due to a rise in housing prices--and in particular, to the fact that owning housing has become more expensive relative to renting. Here's their comment from a short summary they wrote about their research paper:

"The impressive success of Thomas Piketty’s book (Piketty 2014) shows that inequality is a great concern in most countries. His claim that “capital is back”, because the ratio of capital over income is returning to the levels of the end of the 19th century, is probably one of the most striking conclusions of his 700 pages. Acknowledging the considerable interest of this book and the effort it represents, we nevertheless think this conclusion is wrong, due to the particular way capital is measured in national accounts. The author’s claim is actually based on the rise of only one of the components of capital, namely housing capital. Removing housing capital, all other forms of capital exhibit no trend in the recent period. At the beginning of the 21st century, other forms of capital are, relative to income, at much lower levels than at the beginning of the previous century."

5) What about the famous r>g? One quick-and-dirty shorthand description of the dynamics of wealth inequality in Piketty's book is that if the rate of return r being received on capital assets is higher than the growth rate g of the economy as a whole, then wealth inequality is likely to grow. Treating this as Piketty's view has some justification. As Acemoglu and Robinson point out in JEP, Piketty makes comments in his book like: "This fundamental inequality [r > g] will play a crucial role in this book. In a sense, it sums up the overall logic of my conclusions." Piketty refers to r>g as a “fundamental force of divergence" and at another place refers to "fundamental tendencies of capitalist economies." However, in his JEP essay Piketty offers a different tone about r > g than one might expect based on how devoutly the equation was often invoked in discussing the book. Here's Piketty in JEP:

"[T]he way in which I perceive the relationship between r > g and wealth inequality is often not well-captured in the discussion that has surrounded my book—even in discussions by research economists. ... I do not view r > g as the only or even the primary tool for considering changes in income and wealth in the 20th century, or for forecasting the path of income and wealth inequality in the 21st century. Institutional changes and political shocks—which can be viewed as largely endogenous to the inequality and development process itself—played a major role in the past, and will probably continue to do so in the future. In addition, I certainly do not believe that r > g is a useful tool for the discussion of rising inequality of labor income: other mechanisms and policies are much more relevant here, for example, the supply and demand of skills and education. One of my main conclusions is that there is substantial uncertainty about how far income and wealth inequality might rise in the 21st century and that we need more transparency and better information about income and wealth dynamics so that we can adapt our policies and institutions to a changing environment. ...

The gap between r and g is certainly not the only relevant mechanism for analyzing the dynamics of wealth inequality. As I explained in the previous sections, a wide array of institutional factors are central to understanding the evolution of wealth. Moreover, the insight that the rate of return to capital r is permanently higher than the economy’s growth rate g does not in itself imply anything about wealth inequality. Indeed the inequality r > g holds true in the steady-state equilibrium of most standard economic models ..."

In case you didn't catch all that, Piketty is noting that r>g is not useful for discussing income inequality, and does not necessarily lead to wealth inequality, and that the future of wealth inequality is highly uncertain. Instead, Piketty argues in JEP that when the difference between r and g is relatively large, it will tend to exaggerate the effect of other changes that make wealth more unequal. As he writes: "To summarize: the effect of r − g on inequality follows from its dynamic cumulative effects in wealth accumulation models with random shocks, and the quantitative magnitude of this impact seems to be sufficiently large to account for very important variations in wealth inequality."
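That mechanism--r − g amplifying other sources of inequality through cumulative dynamics with random shocks--can be illustrated with a toy simulation. To be clear, this is my own minimal sketch, not a model from the symposium: each household's wealth grows at r − g plus a random shock, and occasionally "resets" to average wealth, a crude stand-in for generational turnover.

```python
# Toy wealth-accumulation model with random shocks (illustrative only).
# Each period, log wealth drifts by r - g plus Gaussian noise; with small
# probability a household resets to average wealth (generational turnover).
# The stationary distribution develops a fatter upper tail as r - g grows.
import numpy as np

def top_share(r_minus_g, n=20_000, periods=200, sigma=0.2,
              reset_prob=0.02, top=0.01, seed=0):
    """Share of total wealth held by the top `top` fraction of households."""
    rng = np.random.default_rng(seed)
    wealth = np.ones(n)
    for _ in range(periods):
        wealth *= np.exp(rng.normal(r_minus_g, sigma, n))
        reborn = rng.random(n) < reset_prob
        wealth[reborn] = 1.0  # new generation starts from average wealth
    cutoff = np.quantile(wealth, 1 - top)
    return wealth[wealth >= cutoff].sum() / wealth.sum()

# With identical random shocks, a wider r - g gap concentrates wealth more.
print(top_share(0.00) < top_share(0.05))  # True
```

The point of the sketch matches Piketty's framing: the shocks, not r − g itself, generate the dispersion, but a larger r − g gap lets lucky runs compound for longer before turnover pulls them back, so the same shocks produce a larger top share.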

6) If r > g isn't the main driver of wealth inequality over time, what is? Some of the answers include the extent of taxes on wealth, the extent to which wealth is saved or consumed, and even the birth and death rates of the population, which affect how long concentrations of wealth will stay together and how many slices they will be divided into when passed to a new generation. There are questions about whether r is the same across the population, or whether those with high levels of wealth will typically be able to get higher returns than those with lower levels of assets. There are questions about the extent to which the new fortunes being created in businesses around the globe will displace earlier fortunes, and whether the new fortunes will be long- or short-lived. There are also events of history like World Wars, and events of politics like surges of populist sentiment. The historical evidence shows that "capitalism," broadly understood, can co-exist with higher and lower levels of wealth inequality, and with rising and falling wealth inequality, so any claim that "capitalism must lead to rising wealth inequality" is clearly incorrect.

In their JEP essay, Acemoglu and Robinson point out that dynamics of inequality over time can lead to quite counterintuitive results unless interpreted in historical context. For example, they point out that measures of income inequality fell as South Africa's apartheid regime came to power early in the 20th century, and then rose after the apartheid regime was ended in the 1990s. But of course, any statement about how the apartheid regime supported greater equality would ignore the political dynamics between groups of whites and between white and nonwhite populations over time.

With regard to the importance of factors that are not r or g, toward the end of his essay in JEP, Piketty writes: "As I look back at my discussion of future policy proposals in the book, I may have devoted too much attention to progressive capital taxation and too little attention to a number of institutional evolutions that could prove equally important, such as the development of alternative forms of property arrangements and participatory governance."

7) Finally, for those whose tastes for all things Piketty were not sated earlier in this post, here are some additional links.

On the next to last page of his book Piketty writes: “It is possible, and even indispensable, to have an approach that is at once economic and political, social and cultural, and concerned with wages and wealth”. One can only agree. But he has not achieved it. His gestures to cultural matters consist chiefly of a few naively used references to novels he has read superficially, for which on the left he has been embarrassingly praised. His social theme is a narrow ethic of envy. His politics assumes that governments can do anything they propose to do. And his economics is flawed from start to finish. It is a brave book. But it is mistaken.

Antoine Dolcerocca and Gokhan Terzioglu recently interviewed Piketty for the Winter 2015 issue of the Potemkin Review. The title of the interview gives a sense of why it is interesting: "Interview: Thomas Piketty Responds to Criticisms from the Left." For a sample, here is part of Piketty's answer to a question about whether, by advocating a global wealth tax, he is downplaying the class struggle and the role of the state:

I think it would be a big mistake to oppose the objective of global progressive taxation of income and wealth with the objective of class struggle and political fight, for at least two reasons. First, making this tax reform possible would require a huge mobilization. This has always been the case in the past. All the big revolutions engendered a big tax reform. Take the French Revolution, the American Revolution, or World War One: although it was not a fiscal revolution initially, through the Bolshevik Revolution, it had a huge impact on the acceptance of a progressive tax regime and more generally social welfare institutions after World War One – and even more so after World War Two. These were fiercely opposed by the elite and by the right just before these shocks, so this shows that we need a big fight and sometimes violent shocks to make progressive tax accepted. It would be a big mistake to think of progressive taxation as a technocratic process that comes quietly from a minister and experts. This is not at all the history of taxation.

The second reason why one should not oppose class struggle and progressive taxation is that progressive taxation in itself is not enough. I think we also need to have new forms of governance and capital ownership. For instance, in the book I mention the difference between the private and social value of capital in corporations taking the example of German capitalism as compared to Anglo-Saxon capitalism, where I describe the role of workers on the boards of corporations. This probably reduces the market value of corporations, but it apparently does not prevent them from producing good cars. Therefore developing new forms of ownership, new forms of sharing of power between those who own capital and those who own their labor, is extremely important to me. Not only in the traditional manufacturing sector, but in many new sectors, such as higher education, media, culture, etc. the shareholder company is not the end of history and this form of organization and capital ownership is certainly not the future. We need progressive taxation of private capital, and at the same time, a new thinking of what capital ownership means and how we organize its owners. But we should not put these two forms of social progress [class struggle vs. progressive taxation] in opposition. They actually are very complementary, because progressive taxation is also a way to produce a regime based on transparency, on information about income and wealth that is necessary for workers’ involvement in the management of companies. If you do not know who owns your company, and if you do not have financial transparency about the wealth, the income, the profits, and the accounts of your company, how can you participate in decision-making? It would be a big mistake if some on the left believed “progressive taxation, that’s a technocratic thing, we don’t really care. We care about revolution, and capital ownership”. That would be a huge intellectual mistake.

Friday, February 20, 2015

If many people start making choices about how to invest their retirement assets, how much of their money will end up in the hands of financial advisers? The answer will of course vary across countries and situations, but Justine Hastings explains the dispiriting outcome in Mexico in "Privatizing Social Security: Lessons from Mexico," appearing in the latest NBER Reporter (2014, Number 4).

When Mexico privatized its Social Security system in 1997, it wanted to avoid a situation where people would make risky investments with their retirement accounts. In fact, the regulations that it set up were so tight that everyone was required to have essentially the same investment. Hastings writes:

"Mexico launched a fully-privatized defined contribution plan in 1997, with 17 participating fund managers which could compete to manage investors’ privatized social security accounts. Given the tight regulations on investment vehicles, fund managers each offered one, essentially homogenous investment product. Investors could choose which firm they wanted to have manage and invest — for a fee — their personal social security account. Despite the large number of competitors selling an essentially homogeneous product, management fees and fund manager profits were high."

Here's some evidence on the fees that emerged. The "initial load" is the amount of each deposit that is immediately paid to the investment adviser. The "annual fee" is then paid each year on the balance in the account. As Hastings explains: "Fund managers charged an average load (a fee taken as a share of account contributions at the time of contribution) of 23 percent and an annual fee on assets under management of 0.63 percent, implying that a 100-peso deposit earning a 5 percent annual real return would only be worth 95.4 pesos after five years."
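Hastings's arithmetic is easy to reproduce. Here's a minimal sketch, assuming the annual fee is simply deducted from the 5 percent real return each year (an approximation, but one that matches her reported figure):

```python
# Effect of Mexican fund-manager fees on a 100-peso deposit
deposit = 100.0
load = 0.23          # 23% taken off the top at the time of contribution
annual_fee = 0.0063  # 0.63% of assets under management per year
real_return = 0.05   # 5% annual real return

balance = deposit * (1 - load)  # only 77 pesos are actually invested
for _ in range(5):
    balance *= 1 + real_return - annual_fee  # net return of ~4.37% per year
print(round(balance, 1))  # -> 95.4
```

The striking part is that the load dominates: even five years of healthy real returns cannot claw back the 23 pesos skimmed at deposit.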

The table also shows that although the investment products themselves were largely identical, companies hired thousands of people to promote their own firms. Thus, the initial loads and annual fees were largely funding sales staff. Hastings considers two main policy choices in response to this situation.

One option would be for the government to run a fund that would barely advertise, and would keep expenses at rock-bottom. However, this might make less difference than you would think. Imagine that all the people who are sensitive to fees and understand that the investment products are identical go to the government accounts. As a result, the private companies can then focus their advertising on those who are not sensitive to fees and do not understand that the investment products are identical.

Mexico's government tried to introduce a "fee index," which combined the load and annual fees into a formula that produced a single number. But the characteristics of the formula meant that many firms shifted to higher loads and lower annual fees--with the actual revenue to the investment advisers not falling. In addition, the single fee failed to capture certain differences across workers, leading some workers to make choices that caused them to pay more.

Or to rephrase this issue, imagine that Americans could invest an amount equal to the expected value of their Social Security benefits, and that U.S. financial advisers could have advertising and sales forces to attract these funds, and could set loads and sales fees. In most contexts, I'm the sort of person who advocates that people should have considerable freedom to make their own choices and live with the results. But it's also true that most people don't follow Warren Buffett's advice that the average investor should put most of their money in a very low fee broad-based stock market index fund. It's also true, as Burton Malkiel points out, that even though the amount of money in US domestic equity mutual funds has risen more than 100-fold in the last three decades, and even though one would think that there would be substantial economies of scale in applying information technology to investing money, the average amount paid as expenses to the fund managers in actively managed funds has stayed the same for three decades.

In short, many people are tempted to believe that the right advisers will help them beat the market on a consistent basis, and financial fees often are not competed down to negligible levels. If the answer is a heavy dose of regulation so that there are very few options available for investing these retirement funds, then the logic for having private companies compete for market share doesn't make a lot of sense.

Thursday, February 19, 2015

Imagine that you can save 100 lives by enacting one of two regulatory policies. The policies have the same cost, which must be paid right now. However, one of the regulatory policies saves the 100 lives in the present, while the other saves 100 lives 50 years from now. In this hypothetical example, the two policies are equal in their costs. Are the policies equal in their benefits, because both policies save 100 lives? Or does saving 100 lives in the present have a different value--a greater value--than saving 100 lives in the future?

This question involves what economists call the "discount rate," which expresses how much future benefits should be "discounted" compared to present benefits of the same size. If your answer to the hypothetical question is that saving the 100 lives 50 years from now has the same value as saving 100 lives right now, you are applying a discount rate of 0%--that is, benefits in the future are not discounted relative to benefits in the present. If your answer is that saving 100 lives now is a greater benefit than saving 100 lives 50 years from now, you are applying a positive discount rate.

Almost all economists argue that a positive discount rate is appropriate. At an intuitive level, having a benefit happen sooner is worth something. Also, there is a level of certainty in saving 100 lives right now, while saving 100 lives in 50 years has some degree of uncertainty as to whether that will happen. In addition, say that the hypothetical example would cost $1 billion in the present. If you invested that money in a safe financial asset that pays 3% per year, then after 50 years of compound interest it would add up to $4.38 billion--which makes the cost-benefit tradeoff 50 years from now look less attractive. A discount rate of zero would mean that we treat all costs and benefits as equivalent, no matter whether they occur in the present, the near-future, the middle-future, or the unimaginably distant future. Thus, the near-certainty of a large asteroid hitting the earth in the next few million years would be treated with the same concern as if we could see the asteroid coming and the event were 10 years away--because the future isn't discounted.
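The compound-interest figure in that example is straightforward to check:

```python
# $1 billion invested at a 3% annual return, compounded for 50 years
principal = 1e9
rate = 0.03
years = 50
future_value = principal * (1 + rate) ** years
print(f"${future_value / 1e9:.2f} billion")  # -> $4.38 billion
```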

But what should the discount rate be? And should the discount rate be a constant value over time, or a declining value over time? Consider what's at stake here. A higher discount rate means that we have more of an orientation to the present, and in particular will treat future benefits as much less important. A lower discount rate means that while we still have an orientation to the present, we are giving greater weight to what happens in the future. For public policy issues that involve spending resources now for a benefit that would occur (at least partly) in the distant future, like some of the risks of climate change, or the chance of an asteroid hitting the earth, it turns out that the choice of discount rate is extremely important.

"In the United States, however, the Office of Management and Budget (OMB) recommends that project costs and benefits be discounted at a constant exponential rate (which, other things equal, assigns a lower weight to future benefits and costs than a declining rate), although a lower constant rate may be used for projects that affect future generations. ... For intragenerational projects, the OMB (2003) recommends that benefit-cost analyses be performed using a discount rate of 7 percent, representing the pretax real return on private investments, and also a discount rate of 3 percent, representing the “social rate of time preference.”

Two points are worth noticing here. First, the U.S. policy has a fixed discount rate over time. Second, the difference between a 7% and a 3% discount rate over a long period of time like a century is enormous. Consider a policy that has a benefit of $100 billion that occurs 100 years in the future. At a discount rate of 7%, it is worth spending $115 million or less in the present to achieve that benefit. (Sometimes it's useful to think of this calculation in reverse: if you invested $115 million at a 7% annual interest rate, you would have approximately $100 billion at the end of a century.) At an annual discount rate of 3%, it would be worth spending up to $5.2 billion in the present to achieve that benefit. Thus, one of the differences between those who would spend many billions of dollars in the present to reduce the risks of climate change in the decades and centuries ahead, and those who would spend only millions of dollars, can be traced to different discount rates.
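The present-value arithmetic behind those two figures can be sketched in a few lines:

```python
# Present value of a $100 billion benefit arriving 100 years from now,
# at the two OMB-recommended discount rates
benefit = 100e9
years = 100
for rate in (0.07, 0.03):
    present_value = benefit / (1 + rate) ** years
    print(f"at {rate:.0%}: ${present_value / 1e6:,.0f} million")
# prints roughly $115 million at 7% and roughly $5,200 million ($5.2 billion) at 3%
```

A 4-percentage-point difference in the rate changes the answer by a factor of about 45, which is why the choice of discount rate dominates long-horizon cost-benefit analysis.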

But what about other countries? The authors write (citations omitted):

"In evaluating public projects, France and the United Kingdom use discount rate schedules in which the discount rate applied today to benefits and costs occurring in the future declines over time. That is, the rate used today to discount benefits from year 200 to year 100 is lower than the rate used to discount benefits in year 100 to the present."

They argue that after taking issues into account like uncertainty about the future, and the fact that the path of benefits recognized in the future will tend to follow a correlated pattern (rather than being random from year to year), a declining discount rate makes sense. You can check the articles for some ways in which such a rate could be estimated from data, but in practice, the decision of what rate to use may be guided by such studies, while ultimately being chosen by a regulator. They conclude:

"We have argued that theory provides compelling arguments for using a declining certainty equivalent discount rate. ... Clearly, policymakers should use careful judgment in estimating a DDR [declining discount rate] schedule, whichever approach is used. Moreover, as emphasized earlier, the DDR schedule should be updated as time passes and more data become available. Establishing a procedure for estimating a DDR for project analysis would be an improvement over the OMB’s current practice of recommending fixed discount rates that are rarely updated."

(Full disclosure: Hall is the Head of Policy Research at Uber Technologies, and Krueger was paid by Uber for his work on this study. However, Krueger's contract also specified that he had “full discretion over the content of the report.” Fuller disclosure: Alan Krueger was Editor of the Journal of Economic Perspectives, and thus my boss, from 1996 to 2002.)

Here is some basic background. The number of Uber "driver-partners" who provided at least four paid trips in the previous month has now reached 150,000. In the context of the U.S. economy, this is about 0.1% of the workforce. But Uber is just one company, and the number is growing quickly.

Here's a picture of Uber's growth by city. Clearly, it's a bigger presence in LA, SF, and NY. But the takeoff growth more recently in Miami and Houston is striking.

Most Uber drivers are part-timers: in most cities other than New York, a majority work less than 15 hours per week as Uber drivers, and only about 13-16% work more than 35 hours per week. (Many have other jobs.) The hourly pay for being an Uber driver seems about the same regardless of how many hours you work.

Hall and Krueger also provide some evidence to put the ongoing battles between current cab drivers and Uber drivers in context. Most cab drivers, for example, work full-time.

Comparing wages between Uber and cab drivers is a little tricky, because the pay records for Uber don't take into account the costs of wear and tear on the car. But at least at a first approximation, it looks as if Uber drivers make more per hour.

More detailed data suggest a number of other demographic differences between Uber drivers and those who work as taxi drivers and chauffeurs. For example, 49% of Uber drivers are age 39 or younger, compared with 28% of taxi drivers/chauffeurs. Of the Uber drivers, 48% have completed a college degree or a postgraduate degree, compared with 19% of taxi drivers/chauffeurs. Of Uber drivers, 14% are women, compared to 8% of taxi drivers/chauffeurs.

Overall, are companies like Uber and what is sometimes called "the sharing economy" contributing to an economy with greater inequality and instability? Or are they providing a flexible option that certain drivers and customers prefer? Hall and Krueger summarize the big picture arguments this way:

"[A]lthough some have argued that the sharing economy is weakening worker bargaining power and responsible for much of the rise in inequality in the United States, the actual effect is much more complicated and less clear. First, there is little evidence of a secular rise in the percentage of workers who are self-employed, independent contractors, or part-time. ... Second, inequality increased dramatically in the United States long before the advent of the sharing economy, and has increased much less in many other countries that, unlike the United States, experienced a sharp rise in part-time work. Third, at least insofar as the advent of ride sharing services like Uber is concerned, the relevant market comparison is to other for-hire drivers, many of whom were independent contractors prior to the launch of Uber. Moreover, the availability of modern technology, like the Uber app, provides many advantages and lower prices for consumers compared with the traditional taxi cab dispatch system, and this has boosted demand for ride services, which, in turn, has increased total demand for workers with the requisite skills to work as for-hire drivers, potentially raising earnings for all workers with such skills. And finally, the growth of Uber has provided new opportunities for driver-partners, who ... seem quite pleased to have the option available."

As noted at the top, this research is co-authored by an Uber employee, so these positive conclusions are not unexpected--but the facts and analysis behind them are still worth consideration. It seems safe to say that the labor market behind the "sharing economy" can't have had much macroeconomic effect so far, because it hasn't been big enough--but that situation could change a few years down the road.

I'll just close by adding that although "the sharing economy" is a widely used term, broadly meant to describe person-to-person sharing of resources, most economists I've run into gag on the term. Companies like Uber drive you where you want to go, for a price. Companies like Airbnb let you stay in someone's home or apartment, for a price. eBay had almost $1 billion in profits last year. This isn't "sharing." It's perhaps better called the "finding productive uses for underutilized capital" economy.

Tuesday, February 17, 2015

U.S. banks are legally required to hold reserves with the Federal Reserve. In August 2008, banks held $1.9 billion in such reserves, just a bit above the legally required minimum. In January 2015, they held $2.6 trillion in such reserves--that is, bank reserves multiplied by a factor of more than 1,300--far above the legally required minimum, which barely moved during this time.
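The multiple quoted above is just the ratio of the two stocks of reserves:

```python
# Growth in U.S. bank reserves held at the Federal Reserve
reserves_aug_2008 = 1.9e9   # $1.9 billion
reserves_jan_2015 = 2.6e12  # $2.6 trillion
factor = reserves_jan_2015 / reserves_aug_2008
print(round(factor))  # -> 1368, i.e. "more than 1,300"
```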

In some broad sense, of course, we all know that this change is related to the extraordinary Federal Reserve quantitative easing policies of the last few years. However, Ben R. Craig and Matthew Koepke provide a useful bank's eye view of the tradeoffs for holding reserves, and how those incentives have changed, in "Excess Reserves: Oceans of Cash," an "Economic Commentary" published by the Federal Reserve Bank of Cleveland (February 12, 2015).

From a bank's eye view, the decision about whether to hold excess reserves above the legally required level involves evaluating a tradeoff: on one side, a need for liquidity; on the other side, the opportunity cost of holding reserves. As Craig and Koepke write: "Banks actively manage their reserves in order to balance their liquidity needs with the opportunity cost of holding reserves instead of interest-bearing assets. That is, banks measure the cost of carrying more reserves by comparing what they might earn by parking the funds in an alternative asset (“forgone interest”) with the cost of last-minute borrowing to cover an unforeseen shortfall in reserves."

Before October 2008, the Federal Reserve paid no interest on excess reserves. If banks didn't have enough on hand for liquidity purposes, it was easy to borrow what was needed for a few days. In this setting, banks tried to hold hardly any reserves above the legally required minimum, and the level of bank reserves was quite stable. Craig and Koepke write:

"From 1959 to just before the financial crisis, the level of reserves in the banking system was stable, growing at an annual average of 3.0 percent over that period. This was about the same as the growth rate of deposits. Moreover, excess reserves as a percent of total reserves in the banking system were nearly constant, rarely exceeding 5.0 percent. Only in times of extreme uncertainty and economic distress did excess reserves rise significantly as a percent of total reserves; the largest such increase occurred in September 2001."

As the Federal Reserve responded to the Great Recession, it cut interest rates sharply in the economy as a whole and also started paying interest on excess reserves. For banks, holding excess reserves now made economic sense. Craig and Koepke explain:

One reason for the increased marginal return of holding reserves is that the Federal Reserve now pays interest on all reserves. Since December 2008, the Federal Reserve has paid interest of 25 basis points on all reserves. Before the crisis, banks commonly parked their cash in the federal funds market for short periods. The interest rate in this market, hovering between 7 and 20 basis points since the crisis, has actually lagged the interest rate paid by the Federal Reserve for excess reserves ... The marginal cost of excess reserves has also declined, when measured by the opportunity cost of other uses for the reserves. Other short-term parking places where banks commonly earned interest have experienced rate drops that make them less favorable. For example, since the Federal Reserve began to pay interest on excess reserves, three-month Treasury bills have yielded less than the Fed pays. Longer-term fixed-rate bonds are not an ideal option either, because they expose the holder to a duration risk. Duration risk is the risk associated with a scenario in which interest rates increase, depressing the price of the bond. Given that interest rates are already very close to zero, the balance of risks is weighted on the side of interest rates increasing rather than decreasing. Consequently, holding cash has the attraction of investing later, when interest rates go up. The risk-adjusted return on alternative assets is low compared to the return on excess reserves, and the benefits are high, giving banks an incentive to hold historically high levels of excess reserves.

In the bigger picture, what does it mean that banks are now holding much more in the way of excess reserves? Here are a few thoughts:

1) The everyday tools of monetary policy are about to shift. The old standard tools of monetary policy taught in every intro econ class--open market operations, altering discount rates, and adjusting reserve requirements--all depend on banks that don't want to hold lots of excess reserves. When banks are holding a few trillion dollars in excess reserves, other policy tools are needed, and the Fed has already announced that monetary policy in the future will be conducted primarily through the rate of interest that it chooses to pay on bank reserves, and secondarily through its new reverse repo facility.

2) The financial balance sheet of the Federal Reserve has changed dramatically. Here's a figure from Craig and Koepke. Notice that in late 2007, the "Reserves" category in liabilities is almost invisible, because banks were not yet holding any substantial quantity of excess reserves. Since then, both bank reserves on the liability side and the combination of US Treasury securities along with Agency debt and mortgage-backed securities on the asset side have grown in tandem. One shorthand way to think about this is that the Federal Reserve is creating money to buy the Treasury securities and the mortgage-backed securities from banks. As a result of these transactions, banks hold a lot of reserves and the Fed holds the debt. These are the mechanics of quantitative easing policies.

3) The Federal Reserve has the policy tools to manage a reduction in bank reserves over time. One occasionally hears a concern that banks will suddenly go on a lending spree, dumping their excess reserves into the economy in a way that could be inflationary or destabilizing in some other way. But if this is a problem, the Fed can simply raise the interest rate it pays banks on excess reserves, and the incentive for banks to put those reserves back into the economy would be reduced. But while it's easy to describe how it could work in theory, central banks don't have much experience with how to manage a gradual reduction in excess reserves--which will presumably need to happen over time as the Fed gradually holds less in Treasury debt and mortgage-backed securities. Craig and Koepke describe the problem this way:

In 1936, US banks’ reserves had accumulated to record levels. Although there had not been a dramatic increase in the levels of loans, the Federal Reserve decided to “play it safe” and reduce the flexibility of the banks’ options for using the cash by increasing the reserve requirement. Banks responded by dramatically reducing their loan portfolios. Milton Friedman and Anna Schwartz argued that this action caused the 1937 recession (A Monetary History of the United States, 1867-1960). So the Federal Reserve has no easy policy choices, particularly in the absence of a large body of accepted theory on how banks can be expected to handle their oceans of cash under changing conditions. Perhaps the best thing to do is what they are doing, that is, to adopt an extremely watchful stance and wait.

Monday, February 16, 2015

Every now and then, you read a story about a very expensive prescription drug that seems to have a real but modest health benefit. A few years back, an anti-melanoma drug called ipilimumab (brand name Yervoy) became available for sale from Bristol-Myers Squibb. The price was $120,000 for a full course of therapy, and the expected gain in life expectancy was four months. Are these examples just a few outliers? Is there a trend toward more expensive drugs? David H. Howard, Peter B. Bach, Ernst R. Berndt, and Rena M. Conti tackle this question in "Pricing in the Market for Anticancer Drugs," which appears in the Winter 2015 issue of the Journal of Economic Perspectives.
(Full disclosure: I've worked as the Managing Editor of JEP since the inception of the journal in 1986. All JEP articles from the most recent issue back to the first are freely available on-line courtesy of the American Economic Association, which funds the journal.)

Howard, Bach, Berndt, and Conti look at the 58 anticancer drugs approved for sale in the U.S. between 1995 and 2013. Before each drug is approved, various clinical trials and studies are done, and these studies provide an estimate of the median expected extension of life as a result of using the drug. Then, based on the market price of the drug when it is announced, it's straightforward to calculate the price of the drug per year of life gained. This figure shows, for the new anti-cancer drugs over the last two decades, how the drug price per year of life gained has been rising over time.

As the best-fit line shows, back in 1995 the new drugs were costing about $54,000 to save a year of life. By 2014, the new drugs were costing about $170,000 to save a year of life. This is an increase of roughly 10% per year.

How can this kind of increase persist? The authors point out that Medicare or some other form of insurance is often paying for these drugs, so patients do not face the prices directly. Thus, one theory involves "reference pricing," in which each new drug is priced a little higher in terms of cost per year of life saved than the previous one. The controversy over what seemed at the time to be extraordinarily high prices for certain drugs 10 or 15 years ago fades, and the views of payers about what is "reasonable" to pay are pushed upward.

An alternative view points out that certain government programs now require that pharmaceutical companies sell discounted drugs to certain buyers (the "340B program"). When government-run health care providers in other countries negotiate prices with U.S. pharmaceutical manufacturers, they also seek to get a discount. For the drug manufacturers, if you know that a substantial part of your market is going to demand a "discount," then you have an incentive to set the initial price higher as a kind of benchmark for future negotiations--especially if you know that third-party payers in the U.S. will grumble and moan about that high price but eventually pay up.

The size of the U.S. market for anti-cancer drugs was $37 billion in 2013. I certainly have no objection to any person paying for these drugs themselves, or for people buying an insurance policy that will pay for these drugs if needed. But I also would have no problem with an insurance company offering a lower-priced policy with the explicit provision that it won't cover these very expensive drugs. And of course, one of the many discomfiting peculiarities of American health care policy is that on one hand we agonize over how to pay for helping those with low incomes receive adequate health insurance, while at the same time having a government program (Medicare) pay an amount that would cover health insurance for multiple families for a year for a drug that adds a few months to life expectancy.

As Fryer tells the story, he was "asked in 2003 to explore the reasons for the social inequality in the United States." He looked at data from the "National Longitudinal Survey of Youth, focusing on people who were then 40 years old." He looked at the raw differences in average outcomes between blacks and whites on a number of dimensions. Then he adjusted the data for the test scores of the 40 year-olds back when they were eighth-graders--essentially, this means comparing blacks and whites who had the same test scores back in eighth grade. Remarkably enough, the wage gap between black and white 40 year-olds essentially disappeared after adjusting for eighth-grade test scores. On average, blacks were less likely to attend college than whites, but after adjusting for eighth grade test scores, blacks were more likely to attend college. A number of substantial black-white differences remained after adjusting for eighth-grade test scores, but the size of such differences was diminished.

Here's one of Fryer's slides from his talk, showing the average black-white differences, and then the differences after adjusting for eighth-grade test scores.

Fryer describes his reaction in this way:

"In two weeks I reported back that achievement gaps that were evident at an early age correlated with many of the social disparities that appeared later in life. I thought I was done. But the logical follow-up question was how to explain the achievement gap that was apparent in 8th grade. I’ve been working on that question for the past 10 years. I am certainly not going to tell you that discrimination has been purged from U.S. culture, but I do believe that these data suggest that differences in student achievement are a critical factor in explaining many of the black-white disparities in our society. It is no longer news that the United States is a lackluster performer on international comparisons of student achievement, ranking about 20th in the world. But the position of U.S. black students is truly alarming. If they were to be considered a country, they would rank just below Mexico in last place among all Organization of Economic Cooperation and Development countries."

The next question is when black students start falling behind. Fryer writes:

When do U.S. black students start falling behind? It turns out that development psychologists can begin assessing cognitive capacity of children when they are only nine months old with the Bayley Scale of Infant Development. We examined data that had been collected on a representative sample of 11,000 children and could find no difference in performance of racial groups. But by age two, one can detect a gap opening, which becomes larger with each passing year. By age five, black children trail their white peers by 8 months in cognitive performance, and by eighth grade the gap has widened to twelve months.

It is remarkable to me that most of the cognitive performance gap for eighth-graders is already apparent for five year-olds. As I've commented on before in "The Parenting Gap for Pre-Preschool" (September 17, 2013), one possible reaction here is to think more seriously about home visitation programs for at-risk children in the first few years of life. Another possible reaction is to think about expanding preschool programs for 3-5 year-olds, but at least some of the evidence on whether these programs have any lasting effect is rather discouraging (although the case is somewhat stronger for focusing preschool programs on at-risk children).

Fryer's approach has been to focus on how to improve school achievement, and as an economist, he began with a charmingly straightforward approach of paying students to read books and to pass tests. Here's a sketch of his longer description of what he did:

As befits an arrogant economist, my first thought was that this will be easy: We just have to change the incentives. ... My solution was to propose that we pay them incentives now to reward good school performance.

Oh my gosh, I wish someone had warned me. No one told me this was going to be so incredibly unpopular. People were picketing me outside my house saying I would destroy students’ love of learning, that I was the worst thing for black people since the Tuskegee experiments. Really? Experimenting with incentives when nothing else seems to work is the equivalent of injecting people with syphilis without informing them? We decided to try the experiment and raised about $10 million. We provided incentives in Dallas, Houston, Washington, DC, New York, and Chicago. We also, just for fun, added a large experiment with teacher incentives just to cover all our bases, to make sure that we had paid everybody for everything. ...

What we learned through this $10 million and a lot of negative press and angry citizens is that kids will respond to incentives—and that incentives to teachers do not have a significant effect on student achievement. They will do exactly what you want them to do. By the way, they don’t do anything extra either. I had this idea that they were going to discover that school is great and to try harder in all of their subjects, even those that do not provide incentives. No. You offer $2 to read a book, and they read a book. They are going to do exactly what you want them to do. That showed me the power, and the limitations, of incentives for kids."

So Fryer and fellow researchers began to study successful charter schools, like the Harlem Children’s Zone led by Geoffrey Canada, as well as some less successful schools. The team spent several years interviewing and videotaping, and came up with five rules to follow to close the academic achievement gap. Here are the five (from the slides accompanying Fryer's talks), with some comments from Fryer.

More Time in School

"Simple. Effective schools just spent more time on tasks. I think of it as the basic physics of education. If your students are falling behind, you have two choices: spend more time in school or convince the high-performing schools to give their kids four-day weekends. The key is to change the ratio. ... In the case of Harlem Children’s Zone’s Promise Academy, students have nearly doubled the amount of time on task compared to students in NYC public schools."

Human Capital Management

"For teachers, it is important that they receive reliable feedback on their classroom performance and that they rigorously apply what they learn from assessments of their students to what they do in the curriculum and the classroom."

Small Group Tutoring

"The third effective practice was what I call tutoring, but which those in the know call small learning communities. It is tutoring. Basically what they do is work with kids in groups of six or fewer at least four days per week."

Data-Driven Instruction and Student Performance Management

"Even low-performing schools know that data are important. When I visited a middling school, they would be eager to show me their data room. What I typically found was wall charts with an array of green, yellow, and red stickers that represented high-, mid-, and low-performing students, respectively. And when I asked what this had led them to do for the red kids, they would say that they hadn’t reached that step yet, but at least they knew how many there were. When I asked the same question in the data rooms of high-performing schools, they would say that they have their teaching calibrated for the three blocks. They would not only identify which students were trailing behind, but would identify the pattern of specific deficiencies and then provide remediation for two or three days on the problem areas. They would also note the need to approach these areas more diligently in future editions of the course."

Culture and Expectations

"The icing on the cake was that effective schools had very, very high expectations of achievement for all students, regardless of their social or economic background. ... The essential finding is that kids will live up or down to our expectations. Of course they are dealing with poverty. Of course 90% of the kids come from households with a single female head. They all have that. That wasn’t news. The question is how are we going to educate them?"

Maybe this five-step approach sounds too commonsensical and simple to work? Fryer's group managed to try out their approach in a group of 20 Houston public schools, including four high schools, with 16,000 students. Here were the results:

"When we began, the black/white achievement gap in the elementary schools was about 0.4 standard deviations, which is equivalent to about 5 months. Over the three years, our elementary schools essentially eliminated the gap in math and made some progress in reading. In secondary schools, math scores rose at a rate that would close the gap in roughly four to five years, but there was no improvement in reading. One other significant result was that 100% of the high school graduates were accepted to a two- or four-year college."
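As a quick sanity check on those magnitudes, the rule of thumb in the quote (0.4 standard deviations is about 5 months of learning) implies a conversion rate that can be applied to other gap sizes. A minimal sketch, with the conversion rate treated as an illustrative assumption rather than a universal constant:

```python
# Convert achievement gaps measured in standard deviations into months of
# learning, using the rule of thumb quoted above: 0.4 SD is about 5 months.
# The conversion rate is an illustrative assumption, not a universal constant.

MONTHS_PER_SD = 5 / 0.4  # roughly 12.5 months of learning per standard deviation

def gap_in_months(gap_in_sd: float) -> float:
    """Translate an achievement gap in standard deviations into months."""
    return gap_in_sd * MONTHS_PER_SD

print(round(gap_in_months(0.4), 1))  # recovers the quoted elementary-school gap
```

By this rate, a gap of a full standard deviation would correspond to roughly 12.5 months of learning, which helps convey why gaps of that size are so hard to close.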

These methods involved a lot of change at the participating schools, including replacing a number of principals and teachers. But the same student body that had been dramatically underperforming was no longer doing so. Fryer draws the hard lesson explicitly. We know many of the changes that can be made to improve low-performing schools dramatically within a few years. The financial costs of these changes are manageable. But the school systems that need to be changed, and many of the people currently working in those systems, are not ready to make the needed changes. He says:

"It is not rocket science. It is not magic. There is nothing special about it.... We are now repeating the experiment in Denver, Colorado, and Springfield, Massachusetts. We actually do know what to do, especially for math. The question is whether or not we have the courage to do it."

Wednesday, February 11, 2015

The Great Recession of 2007-2009 was born in excessive debt. Of course, this statement strips away all manner of important details: the housing price bubble, subprime mortgages, collateralized debt obligations, credit rating agencies, affordable housing mandates, financial institutions betting their companies on being able to roll over huge amounts of very short-term financing every day, lax financial regulation, and more. While the manifestation of excessive debt happened in a particular way in 2007-2009, the general link between excessive debt and economic instability is a familiar story to economists. It's sometimes called a leverage cycle or a financial cycle. The basic notion is that when economic times are good, borrowing and risk-taking can rise to unsustainable levels, so that when times turn bad, the crash is especially hard.

Here in early 2015, how much have the U.S. and world economies deleveraged--that is, reduced their debt? The McKinsey Global Institute tackles this question in a February 2015 report, Debt and (Not Much) Deleveraging, written by Richard Dobbs, Susan Lund, Jonathan Woetzel, and Mina Mutafchieva.

The amount of debt in an economy includes government, corporate, and household debt. Here's a picture of total debt by country. The horizontal axis shows the total debt/GDP ratio as of the second quarter of 2014. The vertical axis shows the change in the debt/GDP ratio from 2007 to 2014. Advanced economies are the yellow dots, while emerging and developing economies are the gray dots.

Several interesting patterns emerge from this figure.

1) The countries with higher debt/GDP ratios (on the horizontal axis) are mostly advanced economies, while the countries with lower debt/GDP ratios are mostly emerging and developing economies. This pattern is somewhat expected. An economy with very little debt is typically one where financial markets are underdeveloped (Nigeria) or ill-functioning (Argentina). As per capita GDP rises in a country, the debt/GDP ratio also tends to rise, reflecting the development of financial markets. As the McKinsey report explains:

Some of the growth in global debt is benign and even desirable. Developing economies have accounted for 47 percent of all the growth in global debt since 2007—and three-quarters of new debt in the household and corporate sectors. To some extent, this reflects healthy financial system deepening, as more households and companies gain access to financial services. Moreover, debt in developing countries remains relatively modest, averaging 121 percent of GDP, compared with 280 percent for advanced economies. There are exceptions, notably China, Malaysia, and Thailand, whose debt levels are now at the level of some advanced economies.

2) Some of the countries with the highest change in debt/GDP ratios (on the vertical axis) are the countries that, if you read the headlines, you would expect to be there: Ireland, Greece, Portugal, Spain.

3) Some of the countries with the highest levels of debt/GDP ratios and also the highest growth in debt are there in part because of their role as global financial hubs, like Ireland and Singapore. The report explains:

As a major business hub, Singapore has the highest ratio of non‑financial corporate debt in the world, at 201 percent of GDP in 2014, almost twice the level of 2007. However nearly two-thirds of companies with more than $1 billion in revenue in Singapore are foreign subsidiaries. Many of them raise debt in Singapore to fund business operations across the region, and this debt is supported by earnings in other countries. Singapore has very high financial-sector debt as well (246 percent of GDP), reflecting the presence of many foreign banks and other financial institutions that have set up regional headquarters there. Ireland has the second-highest ratio of non‑financial corporate debt to GDP in the world—189 percent in 2014. But this mostly reflects the attraction of Ireland’s corporate tax laws, which lure regional (and sometimes global) operations of companies from around the world. Foreign-owned enterprises contribute 55 to 60 percent of the gross value added of all companies in Ireland and, we estimate, at least half of Ireland’s non‑financial corporate debt.

4) Among other high-income countries, Japan's extraordinarily high debt/GDP ratio of 400% stands out. Among the emerging countries, China stands out, both for having one of the highest debt/GDP ratios among the emerging economies (217%) and for having experienced by far the biggest rise in its debt/GDP ratio from 2007 to 2014, at 83 percentage points.

5) Among the advanced economies on the figure, the U.S. economy is one of the lowest in terms of debt/GDP ratio (233%) and also one of the lowest in terms of the rise in debt/GDP ratio from 2007 to 2014 (16 percentage points). When you dig into the underlying statistics a bit more, the U.S. economy had a rise in government debt equal to a 35 percentage point rise in its debt/GDP ratio. However, the U.S. economy also had by far the biggest decline in household debt of any of the countries on the figure, equal to a fall of 18 percentage points in its debt/GDP ratio. The U.S. also had a dramatic fall in financial sector debt, equal to a fall of 24 percentage points in its overall debt/GDP ratio (a close second to Ireland for the biggest fall in this category).
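Since the overall change in the debt/GDP ratio is just the sum of the sector-level changes, the figures quoted for the U.S. can be combined to back out the one sector not reported here. A small sketch, where the corporate-sector change is inferred as a residual (an estimate, since rounding in the published figures makes it only approximate):

```python
# Back out the implied change in U.S. nonfinancial corporate debt from the
# sector figures quoted above (all in percentage points of GDP, 2007-2014).
# The corporate number is a residual inferred here, not reported in the text,
# and rounding in the published figures makes it approximate.

total_change = 16        # change in overall debt/GDP ratio
government = 35          # rise in government debt
household = -18          # fall in household debt
financial = -24          # fall in financial-sector debt

implied_corporate = total_change - (government + household + financial)
print(implied_corporate)  # implied rise in nonfinancial corporate debt
```

The accounting identity is simply total = government + household + financial + corporate, so any four of the five numbers pin down the fifth.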

Of course, the declines in household and financial sector debt in the U.S. economy are part of the reason why the economic recovery has proceeded at such a sluggish pace. But the good news is that after the excessive indebtedness of U.S. households and the financial sector back around 2007, real changes have occurred.

Tuesday, February 10, 2015

A couple of months ago, I posted on how a growing share of US firms were "Opting out of the US Corporate Income Tax" (December 22, 2014), by instead choosing to incorporate in the form of a partnership or an S-corporation in which the business earnings are only taxed through the individual income tax of the owners. Paul Burnham offers an updated figure on these issues at the blog of the Congressional Budget Office.

The vertical axis includes all business receipts. About 90% of business receipts go to firms that are limited liability corporations; the other 10% would include, for example, a sole proprietorship where the owner remains personally liable for debts incurred by the business. Of the business receipts going to limited liability corporations, Burnham reports that a large share are going to "S corporations and limited liability companies, whose profits are taxed only at the individual level. That shift caused the share of business receipts attributed to C corporations to fall from 87 percent in 1981 to 62 percent in 2011. As a result, federal tax revenues are lower than they would otherwise be, but incentives for investment and the efficient allocation of resources are probably greater."

Discussions of corporate income tax and how to reform it would be wise to remember that the share of business income covered by the corporate income tax is 62% and falling.

Monday, February 9, 2015

How much money do colleges and universities have in their endowments? How are they investing the money? What returns are they earning? The National Association of College and University Business Officers does a survey of these questions each year, and some results from its 2014 survey are now available.

What colleges and universities have the largest endowments? Here's a list of the top 40, with Harvard leading the way at $35 billion. The numbers are large for many of these nonprofit institutions. One caveat: The numbers are not adjusted for the number of students at each of these institutions. For example, #1 Harvard has a total enrollment of a little more than 20,000 students, while the University of Texas system enrolls more than 200,000.

How concentrated are the endowment assets? Of the 832 total institutions, the top 10.9% with endowments of more than $1 billion each hold 74% of all endowment assets. Public institutions hold about one-third of endowment assets.
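Those shares are enough to gauge how skewed endowment sizes are, without knowing the dollar total. A back-of-the-envelope sketch using only the figures quoted from the survey:

```python
# Gauge the skew in endowment sizes from the shares quoted above: 832
# institutions, with the top 10.9% (endowments over $1 billion) holding 74%
# of all endowment assets. No dollar total is needed; the ratio of average
# endowment sizes between the two groups follows from the shares alone.

n_total = 832
n_top = round(n_total * 0.109)            # institutions over $1 billion
n_rest = n_total - n_top

avg_top = 0.74 / n_top                    # average size, as a share of total assets
avg_rest = (1 - 0.74) / n_rest

print(n_top, round(avg_top / avg_rest, 1))  # top group's average is ~23x the rest
```

In other words, roughly 91 institutions sit above the $1 billion line, and the average endowment in that group is over twenty times the size of the average endowment among the remaining 741 institutions.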

How do colleges and universities invest their endowment funds? It varies considerably according to the size of the endowment. The institutions with the biggest endowments put well over half of their funds into "alternative strategies," which according to the report is defined as "Private equity (LBOs, mezzanine, M&A funds, and international private equity); Marketable alternative strategies (hedge funds, absolute return, market neutral, long/short, 130/30, and event-driven and derivatives); Venture capital; Private equity real estate (non-campus); Energy and natural resources (oil, gas, timber, commodities and managed futures); and Distressed debt." Institutions with smaller endowments put a much larger share into "domestic equities" in particular. Interestingly, the dividing line here really is by size of endowment: public and private institutions allocate their endowments in very similar ways.

Finally, how did those investments perform in 2014? As one might expect, returns for domestic equities, fixed income, and international equities are all quite similar across endowments of different sizes. But the big endowments over $1 billion get a substantially higher return on the "alternative strategies" than do smaller endowments--which of course explains why they have a large share of assets in this category. In particular, the large endowments get much better returns in the "alternative" categories of private equity and venture capital. One suspects that the better-paid investment professionals at these schools are networked into higher-quality opportunities in private equity and venture capital than are available to schools with smaller endowments.