Thursday, July 31, 2014

My job as Managing Editor of the Journal of Economic Perspectives helps to pay the household bills. (Speed-typing these blog posts is a volunteer activity.) All issues of JEP back to the first issue in 1987 are freely available on-line, courtesy of the American Economic Association. I've been running JEP since that first issue, so I think of this as issue #109. The Summer 2014 issue is now available on-line, although it will take another three weeks or so for paper copies to arrive in the mailboxes of subscribers. I'm sure I'll do some blog posts about specific papers in this issue in the next couple of weeks. But for now, here's the compact table of contents, and after that, a longer list of the papers that includes abstracts.

"The Role of Entrepreneurship in US Job Creation and Economic Dynamism," by Ryan Decker, John Haltiwanger, Ron Jarmin and Javier Miranda

An optimal pace of business dynamics—encompassing the processes of entry, exit, expansion, and contraction—would balance the benefits of productivity and economic growth against the costs to firms and workers associated with reallocation of productive resources. It is difficult to prescribe what the optimal pace should be, but evidence accumulating from multiple datasets and methodologies suggests that the rate of business startups and the pace of employment dynamism in the US economy have fallen over recent decades and that this downward trend accelerated after 2000. A critical factor in accounting for the decline in business dynamics is a lower rate of business startups and the related decreasing role of dynamic young businesses in the economy. For example, the share of US employment accounted for by young firms has declined by almost 30 percent over the last 30 years. These trends suggest that incentives for entrepreneurs to start new firms in the United States have diminished over time. We do not identify all the factors underlying these trends in this paper but offer some clues based on the empirical patterns for specific sectors and geographic regions. Full-Text Access | Supplementary Materials

Entrepreneurship research is on the rise but many questions about the fundamental nature of entrepreneurship still exist. We argue that entrepreneurship is about experimentation; the probabilities of success are low, extremely skewed, and unknowable until an investment is made. At a macro level, experimentation by new firms underlies the Schumpeterian notion of creative destruction. However, at a micro level, investment and continuation decisions are not always made in a competitive Darwinian contest. Instead, a few investors make decisions that are impacted by incentive, agency, and coordination problems, often before a new idea even has a chance to compete in a market. We contend that costs and constraints on the ability to experiment alter the type of organizational form surrounding innovation and influence when innovation is more likely to occur. These factors not only govern how much experimentation is undertaken in the economy, but also the trajectory of experimentation, with potentially very deep economic consequences. Full-Text Access | Supplementary Materials

There is a growing body of evidence that many entrepreneurs seem to enter and persist in entrepreneurship despite earning low risk-adjusted returns. This has led to attempts to provide explanations—using both standard economic theory and behavioral economics—for why certain individuals may be attracted to such an apparently unprofitable activity. Drawing on research in behavioral economics, in the sections that follow, we review three sets of possible interpretations for understanding the empirical facts related to the entry into, and persistence in, entrepreneurship. Differences in risk aversion provide a plausible and intuitive interpretation of entrepreneurial activity. In addition, a growing literature has begun to highlight the potential importance of overconfidence in driving entrepreneurial outcomes. Such a mechanism may appear at face value to work like a lower level of risk aversion, but there are clear conceptual differences—in particular, overconfidence likely arises from behavioral biases and misperceptions of probability distributions. Finally, nonpecuniary taste-based factors may be important in motivating both the decisions to enter into and to persist in entrepreneurship. Full-Text Access | Supplementary Materials

Symposium: Classic Ideas in Development

"The Lewis Model: A 60-Year Retrospective," by Douglas Gollin

The Lewis model has remained, for more than half a century, one of the dominant theories of development economics. This paper argues that the power of the model lies in the simplicity of its central insight: that poor countries contain enclaves of economic activity just as rich countries contain enclaves of poverty; and that a proximate explanation for the difference in income per capita across countries is that there are large differences in the relative sizes of their "modern" and "traditional" sectors. But while the Lewis model contains a powerful and compelling macro narrative, its details have proved somewhat elusive to scholars and students who have followed, and its policy implications are unclear. This paper identifies several key insights of the Lewis model, discusses several different interpretations of the model, and then reviews modern evidence for the central propositions of the model. In closing, we consider the relevance of Lewis for current thinking about development strategies and policies. Full-Text Access | Supplementary Materials

"The Missing 'Missing Middle,'" by Chang-Tai Hsieh and Benjamin A. Olken

Although a large literature seeks to explain the "missing middle" of mid-sized firms in developing countries, there is surprisingly little empirical backing for the existence of the missing middle. Using microdata on the full distribution of both formal and informal sector manufacturing firms in India, Indonesia, and Mexico, we document three facts. First, while there are a very large number of small firms, there is no "missing middle" in the sense of a bimodal distribution: mid-sized firms are missing, but large firms are missing too, and the fraction of firms of a given size is smoothly declining in firm size. Second, we show that the distribution of average products of capital and labor is unimodal, and that large firms, not small firms, have higher average products. This is inconsistent with many models explaining "the missing middle" in which small firms with high returns are constrained from expanding. Third, we examine regulatory and tax notches in India, Indonesia, and Mexico of the sort often thought to discourage firm growth and find no economically meaningful bunching of firms near the notch points. We show that existing beliefs about the missing middle are largely due to arbitrary transformations that were made to the data in previous studies. Full-Text Access | Supplementary Materials

"Informality and Development," by Rafael La Porta and Andrei Shleifer

In developing countries, informal firms account for up to half of economic activity. They provide livelihood for billions of people. Yet their role in economic development remains controversial with some viewing informality as pent-up potential and others viewing informality as a parasitic organizational form that hinders economic growth. In this paper, we assess these perspectives. We argue that the evidence is most consistent with dual models, in which informality arises out of poverty and the informal and formal sectors are very different. It seems that informal firms have low productivity and produce low-quality products; and, consequently, they do not pose a threat to the formal firms. Economic growth comes from the formal sector, that is, from firms run by educated entrepreneurs and exhibiting much higher levels of productivity. The expansion of the formal sector leads to the decline of the informal sector in relative and eventually absolute terms. A few informal firms convert to formality, but more generally they disappear because they cannot compete with the much more-productive formal firms. Full-Text Access | Supplementary Materials

A "poverty trap" can be understood as a set of self-reinforcing mechanisms whereby countries start poor and remain poor: poverty begets poverty, so that current poverty is itself a direct cause of poverty in the future. The idea of a poverty trap has this striking implication for policy: much poverty is needless, in the sense that a different equilibrium is possible and one-time policy efforts to break the poverty trap may have lasting effects. But what does the modern evidence suggest about the extent to which poverty traps exist in practice and the underlying mechanisms that may be involved? The main mechanisms we examine include S-shaped savings functions at the country level; "big-push" theories of development based on coordination failures; hunger-based traps which rely on physical work capacity rising nonlinearly with food intake at low levels; and occupational poverty traps whereby poor individuals who start businesses that are too small will be trapped earning subsistence returns. We conclude that these types of poverty traps are rare and largely limited to remote or otherwise disadvantaged areas. We discuss behavioral poverty traps as a recent area of research, and geographic poverty traps as the most likely form of a trap. The resulting policy prescriptions are quite different from the calls for a big push in aid or an expansion of microfinance. The more-likely poverty traps call for action in less-traditional policy areas such as promoting more migration. Full-Text Access | Supplementary Materials

Symposium: Academic Production

"Page Limits on Economics Articles: Evidence from Two Journals," by David Card and Stefano DellaVigna

Over the past four decades the median length of the papers published in the "top five" economic journals has grown by nearly 300 percent. We study the effects of a page limit policy introduced by the American Economic Review (AER) in mid-2008 and subsequently adopted by the Journal of the European Economic Association (JEEA) in 2009. We find that the imposition of a 40-page limit on submissions led to no change in the flow of new papers to the AER. Instead, authors responded by shortening and reformatting their papers. For JEEA, in contrast, we conclude that the page-limit policy led authors of longer papers to submit to other journals. These results imply that the AER has substantial monopoly power over submissions, while JEEA faces a very competitive market. Evidence from both journals, and from citations to published papers in the top journals, suggests that longer papers are of higher quality than shorter papers, so the loss of longer submissions at JEEA may have led to a drop in quality. Despite a modest impact of the AER's policy on the average length of submissions, the policy had little or no effect on the length of final accepted manuscripts. Full-Text Access | Supplementary Materials

"What Policies Increase Prosocial Behavior? An Experiment with Referees at the Journal of Public Economics," by Raj Chetty, Emmanuel Saez and Laszlo Sandor

We evaluate policies to increase prosocial behavior using a field experiment with 1,500 referees at the Journal of Public Economics. We randomly assign referees to four groups: a control group with a six-week deadline to submit a referee report; a group with a four-week deadline; a cash incentive group rewarded with $100 for meeting the four-week deadline; and a social incentive group in which referees were told that their turnaround times would be publicly posted. We obtain four sets of results. First, shorter deadlines reduce the time referees take to submit reports substantially. Second, cash incentives significantly improve speed, especially in the week before the deadline. Cash payments do not crowd out intrinsic motivation: after the cash treatment ends, referees who received cash incentives are no slower than those in the four-week deadline group. Third, social incentives have smaller but significant effects on review times and are especially effective among tenured professors, who are less sensitive to deadlines and cash incentives. Fourth, all the treatments have little or no effect on rates of agreement to review, quality of reports, or review times at other journals. We conclude that small changes in journals' policies could substantially expedite peer review at little cost. More generally, price incentives, nudges, and social pressure are effective and complementary methods of increasing prosocial behavior. Full-Text Access | Supplementary Materials

Average grades in colleges and universities have risen markedly since the 1960s. Critics express concern that grade inflation erodes incentives for students to learn; gives students, employers, and graduate schools poor information on absolute and relative abilities; and reflects the quid pro quo of grades for better student evaluations of professors. This paper evaluates an anti-grade-inflation policy that capped most course averages at a B+. The cap was binding for high-grading departments (in the humanities and social sciences) and was not binding for low-grading departments (in economics and sciences), facilitating a difference-in-differences analysis. Professors complied with the policy by reducing compression at the top of the grade distribution. It had little effect on receipt of top honors, but affected receipt of magna cum laude. In departments affected by the cap, the policy expanded racial gaps in grades, reduced enrollments and majors, and lowered student ratings of professors. Full-Text Access | Supplementary Materials

"The Research Productivity of New PhDs in Economics: The Surprisingly High Non-success of the Successful," by John P. Conley and Ali Sina Onder

We study the research productivity of new graduates from North American PhD programs in economics from 1986 to 2000. We find that research productivity drops off very quickly with class rank at all departments, and that the rank of the graduate departments themselves provides a surprisingly poor prediction of future research success. For example, at the top ten departments as a group, the median graduate has fewer than 0.03 American Economic Review (AER)-equivalent publications at year six after graduation, an untenurable record almost anywhere. We also find that PhD graduates of equal percentile rank from certain lower-ranked departments have stronger publication records than their counterparts at higher-ranked departments. In our data, for example, Carnegie Mellon's graduates at the 85th percentile of year-six research productivity outperform 85th percentile graduates of the University of Chicago, the University of Pennsylvania, Stanford, and Berkeley. These results suggest that even the top departments are not doing a very good job of training the great majority of their students to be successful research economists. Hiring committees may find these results helpful when trying to balance class rank and place of graduation in evaluating job candidates, and current graduate students may wish to re-evaluate their academic strategies in light of these findings. Full-Text Access | Supplementary Materials

Fair Trade is a labeling initiative aimed at improving the lives of the poor in developing countries by offering better terms to producers and helping them to organize. Although Fair Trade-certified products still comprise a small share of the market--for example, Fair Trade-certified coffee exports were 1.8 percent of global coffee exports in 2009--growth has been very rapid over the past decade. Whether Fair Trade can achieve its intended goals has been hotly debated in academic and policy circles. In particular, debates have been waged about whether Fair Trade makes "economic sense" and is sustainable in the long run. The aim of this article is to provide a critical overview of the economic theory behind Fair Trade, describing the potential benefits and potential pitfalls. We also provide an assessment of the empirical evidence of the impacts of Fair Trade to date. Because coffee is the largest single product in the Fair Trade market, our discussion here focuses on the specifics of this industry, although we will also point out some important differences with other commodities as they arise. Full-Text Access | Supplementary Materials

In this article, we present a simple back-of-the-envelope approach for evaluating whether counterterrorism security measures reduce risk sufficiently to justify their costs. The approach uses only four variables: the consequences of a successful attack, the likelihood of a successful attack, the degree to which the security measure reduces risk, and the cost of the security measure. After measuring the cost of a counterterrorism measure, we explore a range of outcomes for the costs of terrorist attacks and a range of possible estimates for how much risk might be reduced by the measure. Then working from this mix of information and assumptions, we can calculate how many terrorist attacks (and of what size) would need to be averted to justify the cost of the counterterrorism measure in narrow cost-benefit terms. To illustrate this appr oach, we first apply it to the overall increases in domestic counterterrorism expenditures that have taken place since the terrorist attacks of September 11, 2001, and alternatively we apply it to just the FBI's counterterrorism efforts. We then evaluate evidence on the number and size of terrorist attacks that have actually been averted or might have been averted since 9/11. Full-Text Access | Supplementary Materials
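The four-variable break-even logic in that abstract can be made concrete in a few lines. The dollar figures and the risk-reduction fraction below are purely illustrative assumptions of mine, not numbers from the article:

```python
# Back-of-the-envelope break-even test for a counterterrorism measure.
# Losses averted = (baseline attacks per year) x (fraction of risk the
# measure eliminates) x (losses per attack). Break-even requires those
# averted losses to equal the measure's annual cost.

def attacks_needed_to_justify(security_cost, attack_cost, risk_reduction):
    """Baseline attacks per year needed for the measure to break even.

    security_cost  -- annual cost of the measure, in dollars
    attack_cost    -- losses from one successful attack, in dollars
    risk_reduction -- fraction of attack risk the measure eliminates (0-1)
    """
    return security_cost / (attack_cost * risk_reduction)

# Illustrative inputs: a $10 billion/year measure, $500 million in losses
# per attack, and an assumed 50% risk reduction.
needed = attacks_needed_to_justify(10e9, 500e6, 0.5)
print(f"Baseline attacks per year needed for break-even: {needed:.0f}")
```

With these made-up inputs, 40 attacks of that size would have to be on the table each year for the measure to pass the narrow cost-benefit test, which is the kind of calculation the authors run against actual post-9/11 spending.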

Wednesday, July 30, 2014

One of the arguments as to why monetary policy should continue to be loose, both for the Federal Reserve in the United States and for other central banks around the world, is to avoid the risk of a bout of deflation. But is that fear overstated? The Bank for International Settlements offers a discussion, "The costs of deflation: what does the historical record say?", in its 84th Annual Report published last month. (For those not familiar with BIS, it's a Swiss-based international organization that has been around since 1930, and serves as a main forum for consultation and cooperation between central banks.)

When looking at historical episodes of deflation, the BIS offers a simple comparison. Look at the five years before and after an episode of deflation started for a range of countries, and see what happened to growth of real GDP during that time. As a starting point, consider pre-World War I episodes of deflation from 1860-1901 in ten countries: Belgium, Canada, France, Germany, Italy, Japan, Netherlands, Switzerland, United Kingdom, United States. Some of these countries had multiple episodes of deflation during this time: for example, the U.S. economy had episodes of deflation starting in 1866, 1881, and 1891. When you match up the five years before and after the starting point of an episode of deflation with the path of GDP growth, here's what you get. The red line shows the price level (which peaks at time zero, because that's how the figure is constructed), and the blue line shows the path of GDP relative to that time zero. For this time period, the onset of deflation on average doesn't seem to have any effect on economic growth.
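For readers curious about the mechanics, the BIS-style comparison amounts to lining every episode up at its starting year and averaging across episodes. Here is a minimal sketch of that alignment step, using two invented episodes rather than the BIS data:

```python
# Sketch of an event-study alignment: index each deflation episode to its
# starting year ("time zero"), normalize GDP to 100 at that year, and
# average across episodes. The episodes below are invented placeholders.

def align_and_average(episodes, window=5):
    """episodes: list of dicts with 'start' (year) and 'gdp' (year -> level).
    Returns the average GDP index (=100 at time zero) for t = -window..window."""
    averaged = []
    for t in range(-window, window + 1):
        vals = [ep['gdp'][ep['start'] + t] / ep['gdp'][ep['start']] * 100
                for ep in episodes]
        averaged.append(sum(vals) / len(vals))
    return averaged

# Two made-up episodes, one with 3% trend growth and one with 2%.
eps = [
    {'start': 1866, 'gdp': {1866 + t: 100 * 1.03 ** t for t in range(-5, 6)}},
    {'start': 1881, 'gdp': {1881 + t: 100 * 1.02 ** t for t in range(-5, 6)}},
]
path = align_and_average(eps)  # path[5] is time zero, = 100 by construction
```

The question the BIS figure answers is then simply whether the averaged path bends after time zero.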

How about during the early interwar period, meaning the 1920s and early 1930s? Here, the path of GDP growth across the 10 countries in this sample is definitely faster before the start of price deflations rather than after--but it remains positive. BIS writes: "In the early interwar period (mainly in the 1920s), the number of somewhat more costly (“bad”) deflations increased: output still rose, but much more slowly – the average rates in the pre- and post-peak periods were 2.3% and 1.2%, respectively. (Perceptions of truly severe deflations during the interwar period are dominated by the exceptional experience of the Great Depression, when prices in the G10 economies fell cumulatively up to roughly 20% and output contracted by about 10%.)"

Finally, what about more recent deflations from 1990 to 2013? For this time period, the sample now includes episodes of deflation in 13 places: Australia, China, the euro area, Hong Kong SAR, Japan, New Zealand, Norway, Singapore, South Africa, Sweden, Switzerland, and the United States (in the third quarter of 2008). The recent episodes of deflation have been much shorter: thus, the red line showing the price level dips slightly, but then starts rising again after about a year. The path of GDP growth looks much the same in the five years before and after the deflation. As BIS writes: "The deflation episodes during the past two and a half decades have, on average, been much more akin to the good types experienced during the pre-World War I period than to those of the early interwar period ..."

BIS suggests four lessons that can be taken from this exercise.

"First, the record is replete with examples of “good”, or at least “benign”, deflations in the sense that they coincided with output either rising along trend or undergoing only a modest and temporary setback. ..."

"The second important feature of deflation dynamics revealed by the historical record is the general absence of an inherent deflation spiral risk – only the Great Depression episode featured a deflation spiral in the form of a strong and persistent decline in the price level; the other episodes did not. ... The evidence, especially in recent decades, argues against the notion that deflations lead to vicious deflation spirals. In addition, the fact that wages are less flexible today than they were in the distant past reduces the likelihood of a self-reinforcing downward spiral of wages and prices. ..."

"Third, it is asset price deflations rather than general deflations that have consistently and significantly harmed macroeconomic performance. Indeed, both the Great Depression in the United States and the Japanese deflation of the 1990s were preceded by a major collapse in equity prices and, especially, property prices. These observations suggest that the chain of causality runs primarily from asset price deflation to real economic downturn, and then to deflation, rather than from general deflation to economic activity. ..."

"Fourth, recent deflation episodes have often gone hand in hand with rising asset prices, credit expansion and strong output performance. Examples include episodes in the 1990s and 2000s in countries as distinct as China and Norway. There is a risk that easy monetary policy in response to good deflations, aiming to bring inflation closer to target, could inadvertently accommodate the build-up of financial imbalances. Such resistance to “good” deflations can, over time, lead to “bad” deflations if the imbalances eventually unwind in a disruptive manner."

In short, the grim experience of the 1930s and its combination of deflation and Great Depression looks like a special case. The typical deflation is not accompanied by a steep recession. Nothing here argues that deflation is desirable or worth seeking to encourage. But the greater danger of economic recession or depression seems to arise not out of price deflation, but instead when asset price bubbles overinflate, and then burst.

Tuesday, July 29, 2014

Pretty much all current public programs to assist households with low incomes suffer from the same seemingly unavoidable problem: As the household's income rises, the amount of assistance provided by the public program is phased out. For example, a person who earns an extra $100 might find that eligibility for public assistance has been reduced by $50. Economists call this reduction in benefits as income rises a "negative income tax," and it is not unusual for studies to find that certain working poor families face negative income tax rates in the range of 50% of the marginal dollar earned, 60%, or even higher (for examples, see here and here). These high negative income tax rates diminish work incentives for those with low incomes.
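To see the arithmetic behind that $100 example worked out (the 50-cent phase-out per dollar earned is the hypothetical rate from the text):

```python
# Implicit ("negative income tax") marginal rate when benefits phase out
# with earnings. The 50% phase-out rate is the text's hypothetical example.

extra_earnings = 100.0
benefit_phaseout_rate = 0.50            # benefits fall 50 cents per extra dollar

benefit_lost = extra_earnings * benefit_phaseout_rate
net_gain = extra_earnings - benefit_lost
effective_marginal_rate = benefit_lost / extra_earnings

print(f"Earning an extra $100 keeps only ${net_gain:.0f}; "
      f"implicit marginal tax rate = {effective_marginal_rate:.0%}")
```

Stacking an actual payroll or income tax on top of the phase-out is how the studies cited arrive at combined rates of 60% or higher.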

The idea of a universal basic income is that every U.S. citizen would be entitled to a chunk of money each year. This amount would not vary based on income level, or employment, or disability, or age, or any other reason. Specifically, when a low-income person works and earns income, the universal basic income check would not be reduced in any way. The "negative income tax" rate is zero percent.

The idea of a universal basic income raises obvious questions. How much would it be per person? How would it be financed? Are the politics of such a program conceivable? Let's tackle these in turn.

How much? Dolan suggests that we aim at having a universal basic income that reaches the poverty line. The current poverty line for a family of three is roughly $18,000. Thus, in round number terms, the goal might be to have a universal basic income of $6,000 per person, perhaps paid every other week. (One can easily imagine having a number that is a little higher for adults and a little lower for children, or keeping some of the money for children in trust and giving it to them over the first decade or so of adulthood, but let's not make this example too fancy.)

How would this amount be financed? The starting point is that this program would replace programs currently aimed at those with low incomes: for example, it would replace welfare, Food Stamps, the Earned Income Tax Credit and the Child Credit. For the sake of this thought experiment, Dolan suggests not including public policies that help low-income families with education or health (like Medicaid). The other programs for supporting low-income Americans spend roughly $500 billion.

But remember, the universal basic income goes to everyone, not just those with low incomes. Thus, Dolan suggests that a second source of revenue would be to eliminate a number of tax deductions and deferrals, including the mortgage-interest deduction, the deferral of income taxes on retirement, and the deduction of charitable contributions. He also proposes eliminating the personal exemption. (In keeping his argument separate from health care financing issues, Dolan does not propose touching the tax exemption for compensation paid in the form of employer-provided health insurance.) These changes would raise about $1.2 trillion. Most middle-income and upper-middle income families, who either do not itemize deductions at all or don't have that many deductions to itemize, would come out ahead with $6,000 per person rather than using these tax provisions, although those at the highest income levels who make substantial use of these provisions would end up paying more.

Finally, Dolan suggests that Social Security recipients can have a choice: they can either receive the Social Security payments they are already entitled to by law, or they could instead receive the universal basic income. Those receiving the lowest Social Security payments now would benefit from this choice, and the rest of the elderly would be unaffected by the plan.

Taking these steps together, Dolan estimates that the U.S. could fund a universal basic income of $5,800 per person. In addition, consider some of the side benefits. Work incentives for those with low incomes would be greatly improved. The need for government to set up and enforce complicated eligibility rules for those with low incomes would be eliminated. The tax code could be greatly simplified, and many more people could file very simple tax returns. At the most basic level, everyone in the United States would be guaranteed an amount close to the poverty level of income.
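As a rough check on that arithmetic: dividing the two revenue sources above by the U.S. population gets you into the same ballpark. The roughly 316 million population figure (circa 2014) is my round-number assumption; the revenue figures are from the text.

```python
# Rough check of the universal-basic-income financing arithmetic:
# total revenue sources divided by population.

replaced_programs = 500e9     # welfare, Food Stamps, EITC, Child Credit ($)
tax_expenditures = 1.2e12     # eliminated deductions, deferrals, exemption ($)
population = 316e6            # assumed U.S. population, circa 2014

per_person = (replaced_programs + tax_expenditures) / population
print(f"Implied universal basic income: ${per_person:,.0f} per person")
```

This simple division lands near $5,400 per person; the gap to Dolan's $5,800 presumably reflects the Social Security option and other details of his calculation.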

What about the politics of a universal basic income? It's no surprise that many who lean liberal like the idea of guaranteeing a basic income. However, the idea has a reasonable number of conservative and libertarian supporters, who like the idea of a program that addresses the basic concern over helping those with low incomes, but in a clean, clear way that involves much less interference of eligibility rules and phase-ins and phase-outs in people's lives. Dolan claims that there are lively debates over a universal basic income happening behind the scenes between those with very different political persuasions.

The idea of a universal basic income is appealing to me in theory, but I have a hard time believing that once enacted, the U.S. political process would be willing or able to leave it alone. On one side, those who favor higher tax rates for those with high incomes would immediately start trying to figure out ways to claw back payments to those with high incomes. On another side, there would be continual pressures to reinstate programs like Food Stamps, or targeted welfare payments for certain types of families, or favored tax provisions for home-buying or charitable contributions or retirement. There would be continual political pressure to alter the amount of a universal basic income, as well. The U.S. political system does not excel at replacing complexity with simplicity, and then leaving well enough alone.

The decline seems to be part of a generational shift in what people choose to drink. Bentley explains:

"A 2013 ERS [Economic Research Service] study found that while Americans continue to drink about 8 ounces of fluid milk when they drink milk, they are consuming it less frequently than in the past. Americans are especially less apt to drink milk at lunchtime and with dinner. National food consumption surveys reveal that Americans born in the early 1960s drank milk 1.5 times a day as teenagers, 0.7 times a day as young adults, and 0.6 times a day in middle age. In contrast, Americans born in the early 1980s entered their teenage years drinking milk just 1.2 times a day and were drinking milk 0.5 times a day as young adults. Competition from other beverages—especially carbonated soft drinks, fruit juices, and bottled water—is likely contributing to the changes in frequency of fluid milk consumption. In addition, substitutes for cow’s milk (including nut milks, coconut milk, and soy milk) have provided alternatives for consumers."

For the overall category of dairy products, the decline in milk consumption has been partially offset by a rise in cheese consumption: "Over the last four decades, Americans have increased their consumption of cheese, especially Italian varieties such as mozzarella, parmesan, and provolone. In 2012, cheese availability was 33.5 pounds per person, almost triple the amount in 1970 at 11.4 pounds. Availability of Italian cheeses increased to 14.9 pounds per person from 2.1 pounds in 1970. Since 2005, availability of American cheese has remained around 13 pounds per person. The inclusion of cheese in time-saving convenience foods and in commercially manufactured and prepared foods such as frozen pizza, macaroni and cheese, and pre-packaged cheese slices has increased consumption. The popularity of cheese-rich Italian and Tex-Mex cuisines has also contributed to increased cheese consumption."

Nonetheless, with the decline in milk consumption, dairy consumption has been falling over time.

(In case you are wondering what the "availability" of milk means in the figure title, here is Bentley's explanation: "ERS’s [Economic Research Service's] food availability data calculate the annual supply of a commodity available for humans to eat by subtracting measurable nonfood use (farm inputs, exports, and ending stocks) from the sum of domestic supply (production, imports, and beginning stocks). Per capita estimates are determined by dividing the total annual supply of the commodity by the U.S. population for that year. Although these estimates do not directly measure actual quantities ingested, they serve as a proxy for Americans' food consumption over time.")
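The identity Bentley describes is simple accounting, and it can be sketched directly. All the commodity quantities below are invented placeholders, not ERS figures:

```python
# ERS food availability identity: domestic supply minus measurable nonfood
# use, divided by population. All quantities are invented placeholders.

production, imports, beginning_stocks = 900.0, 50.0, 100.0  # million pounds
farm_inputs, exports, ending_stocks = 40.0, 60.0, 110.0     # million pounds
population_millions = 316.0

domestic_supply = production + imports + beginning_stocks
nonfood_use = farm_inputs + exports + ending_stocks

availability = domestic_supply - nonfood_use                # million pounds
per_capita = availability / population_millions             # pounds per person
print(f"Per capita availability: {per_capita:.2f} pounds")
```

As Bentley notes, the result is a proxy for consumption, not a direct measure of what people actually ate.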

Friday, July 25, 2014

An old trick in Washington policy circles is to obscure unwelcome news by releasing it late on Friday afternoon--and if it can be the Friday afternoon of a three-day weekend, so much the better. Many companies seem to have another method of obscuring unwelcome news: they schedule their annual shareholder meeting in a more remote location. Yuanzhi Li and David Yermack present the evidence in "Evasive Shareholder Meetings." An overview of the paper is freely available in the NBER Digest for July 2014. The actual paper is National Bureau of Economic Research Working Paper #19991 (March 2014), and while it is not freely available, many readers will have access through library subscriptions.

Li and Yermack look at data on 10,000 annual shareholder meetings held between 2006 and 2010. One set of companies chooses, in a given year, to hold its shareholder meeting at least 1,000 miles from corporate headquarters. After such meetings, the company is more likely to announce unfavorable quarterly earnings, and it also experiences a stock market return 3.7% below a benchmark for the overall stock market over the following six months.

Similarly, when a company holds its shareholder meeting at least 50 miles from a major airport, its stock underperforms the market benchmarks in the following six months. When companies hold their meeting both more than 50 miles from a major airport and more than 50 miles from corporate headquarters, the company's stock underperforms the market by 6.8% in the six months after the meeting.
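For the curious, a benchmark-adjusted return of the kind behind figures like "underperforms by 6.8%" works roughly like this sketch. The monthly returns below are invented for illustration; the paper's actual event-study methodology is more involved:

```python
def cumulative_return(monthly_returns):
    # Compound simple monthly returns into one holding-period return.
    total = 1.0
    for r in monthly_returns:
        total *= 1 + r
    return total - 1

def benchmark_adjusted(stock_returns, market_returns):
    # Underperformance shows up as a negative adjusted return.
    return cumulative_return(stock_returns) - cumulative_return(market_returns)

# Six months of invented returns for a firm that held a remote meeting:
stock = [-0.01, 0.00, -0.02, 0.01, -0.02, -0.01]
market = [0.01, 0.00, 0.01, 0.00, 0.01, 0.00]
print(round(benchmark_adjusted(stock, market), 3))  # about -0.08, i.e. -8%
```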

Just to be clear, this effect is not rooted in effects from companies that are already known to be performing poorly, or already facing controversy, before the shareholder meeting occurs. Li and Yermack summarize (citations omitted):

We find little evidence that meetings are moved to distant locations when a firm has had a bad year, or when public information suggests that firms should expect confrontation; in fact, analysis of the agendas for the meetings in our sample suggests that companies are more likely to meet near headquarters when they expect hostile shareholder proposals or board elections that may be subject to protest voting. This may occur because the company is more comfortable arranging security, working with law enforcement, and controlling access to the meeting site in its own jurisdiction. Companies may also be relatively unconcerned with the publicity value of controversial agenda items, since these are known by everyone in advance and often have easily predictable outcomes.

Instead, we find that managers schedule long-distance meetings when the firm is experiencing adverse operating performance that is not already known to the market. Company stocks perform very poorly in the aftermath of remote meetings, and part of this result stems from disappointing quarterly earnings announcements following these meetings. By moving the meeting far away, the managers might forestall shareholder or news media questioning that could lead to the early disclosure of adverse news. ...

Scheduling a meeting far from headquarters provides a straightforward opportunity for managers to discourage attendance. Research shows that firms tend to have high ownership in their local communities and that local analysts tend to forecast stock performance better than distant analysts. A long-distance shareholder meeting would inevitably reduce participation from both of these cohorts as well as the local business press, who may be the most knowledgeable people about the company. ...

The poor performance of companies following long-distance meetings suggests that management knows adverse news when choosing the location of these meetings, and it may move them far from headquarters as part of a scheme to suppress negative news for as long as possible. While this motivation seems understandable, it is less obvious why shareholders fail to decode such an unambiguous signal at the time the meeting location is announced.

Of course, now that this research has been published, investors are going to be on the lookout. Companies that schedule shareholder meetings far from corporate headquarters or far from a major airport should at a minimum expect to come under greater scrutiny--and perhaps even see an immediate fall in their stock price, on the presumption that a faraway shareholder meeting is a negative signal about future earnings.

Thursday, July 24, 2014

Much of the sound and fury about U.S. budget deficits involves what should happen in the next year or so. Of course, short-run decisions about red ink do matter, and have a way of bleeding over into long-term decisions. But this focus on the short-term also risks missing the longer-term context of how U.S. government deficits and debt have changed since the Great Recession started in 2007, and where they are headed in the next couple of decades. For this purpose, my go-to starting point is "The 2014 Long-Term Budget Outlook" published this month by the Congressional Budget Office.

Here's an overview of the basic CBO budget forecast, which shows some of the costs of the current path, but as I will explain, is probably too optimistic. This "extended baseline" forecast is based on existing law, including any ways in which the law requires future changes: for example, if current law requires a tax to be changed a few years in the future, that future change is included in this forecast. This figure shows government spending and revenues as a share of GDP. The gap between them doesn't look large, but remember that with US GDP now in the range of $17 trillion, a gap of 1 percentage point is a deficit of $170 billion.
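The arithmetic of that last sentence is simple enough to check directly:

```python
# A spending-revenue gap of 1 percentage point of GDP, with GDP
# around $17 trillion, implies an annual deficit of $170 billion.
gdp = 17e12        # approximate 2014 US GDP, in dollars
gap_share = 0.01   # gap between spending and revenue, as a share of GDP
deficit = gdp * gap_share
print(f"${deficit / 1e9:.0f} billion")  # prints "$170 billion"
```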

I'd draw two lessons from this figure. First, you can see the large jump in accumulated federal debt during the Great Recession, from less than 40% of GDP in 2008 to 74% of GDP in 2014. Federal debt relative to GDP is now at the second-highest level in US history, trailing only the explosion of debt used to finance the fighting of World War II. Second, federal debt in the longer term is projected to rise more slowly in the next few years, but to keep rising to 106% of GDP by 2039--which would be equal to the debt/GDP level in 1946.

Here's a figure showing the government debt/GDP ratio throughout U.S. history. You can see the previous debt/GDP peaks at the Revolutionary War, the Civil War, World War I, the Great Depression, World War II, and the 1980s and early 1990s. According to the CBO, the U.S. government is currently on autopilot to set an unwelcome new record.

The discussion of the effects of large budget deficits often seems to be an argument over the possibility of catastrophic debt overload and the risk of Greek-style or Argentinian-style debt defaults. But arguing over catastrophe misses the real and near-term costs of the higher budget deficits. The CBO lists three of them.

First, the current high level of government debt, and the projections for the next 25 years, mean that the U.S. government lacks fiscal flexibility. Before previous debt spikes, we had started with a relatively low debt/GDP ratio, and so we had room to borrow as needed. But with the debt/GDP ratio already at 74% and rising, we now lack that flexibility. I sometimes say that the U.S. could afford to fight the Great Recession with large budget deficits. But having done so, it would not be nearly as easy to enact a similar deficit boost in the future, should economic or foreign policy considerations warrant it.

Second, the current spending patterns of the U.S. government are starting to crowd out everything except health care, Social Security, and interest payments. The bottom three lines on this graph show rising government spending on major health care programs (like Medicare and Medicaid), on Social Security, and on interest payments on past borrowing. The top dark green line shows spending on everything else the government does, falling steadily as a share of GDP over time. Again, this is the projection based on current law.

Third, large government borrowing means less funding is available for private investment. CBO writes: "Large federal budget deficits over the long term would reduce investment, resulting in lower national income and higher interest rates than would otherwise occur. Increased government borrowing would cause a larger share of the savings potentially available for investment to be used for purchasing government securities, such as Treasury bonds. Those purchases would crowd out investment in capital goods—factories and computers, for example—which makes workers more productive. Because wages are determined mainly by workers’ productivity, the reduction in investment would reduce wages as well, lessening people’s incentive to work."

These long-run dangers don't mean that an abrupt large reduction in budget deficits should happen immediately, when the economy is still struggling to generate a respectable recovery. But it does mean that we should be thinking seriously about small changes for the near future that will phase into larger effects over the next couple of decades.

I said earlier that this scenario is optimistic. I don't mean that the CBO has biased its baseline estimates: indeed, the report goes through in some detail the underlying projections behind these numbers about productivity and economic growth, health care costs and life expectancy, and interest rates (which affect the costs of financing government borrowing). But the baseline estimate, by law and custom, focuses on what is specifically in the law. This seemingly sensible rule offers a temptation to politicians, who can enact spending cuts or tax increases that aren't scheduled to start for five or 10 years. These proposals make the projected deficits look better over the next five or ten years, and then the policies can be changed later.

Thus, CBO calculates an "alternative fiscal scenario," in which it sets aside some of these spending and tax changes that are scheduled to take effect in five years or ten years or never. Remember that the extended baseline scenario projected that the debt/GDP ratio would be 106% by 2039. In the alternative fiscal scenario, the debt-GDP ratio is projected to reach 183% of GDP by 2039. As the report notes: "CBO’s extended alternative fiscal scenario is based on the assumptions that certain policies that are now in place but are scheduled to change under current law will be continued and that some provisions of law that might be difficult to sustain for a long period will be modified. The scenario, therefore, captures what some analysts might consider to be current policies, as opposed to current laws."
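The mechanics behind long-run projections like these can be sketched with the standard debt-dynamics identity. The parameters below are purely illustrative, not the CBO's assumptions:

```python
def project_debt_ratio(d0, r, g, primary_deficit, years):
    """Iterate d_{t+1} = d_t * (1 + r) / (1 + g) + primary_deficit,
    where d is debt/GDP, r the average interest rate on the debt,
    g the growth rate of nominal GDP, and primary_deficit the
    non-interest deficit as a share of GDP."""
    d = d0
    for _ in range(years):
        d = d * (1 + r) / (1 + g) + primary_deficit
    return d

# Illustrative parameters: starting from 74% of GDP, an interest rate
# a bit above the growth rate plus a persistent primary deficit push
# the ratio above 100% of GDP within 25 years.
print(round(project_debt_ratio(0.74, 0.04, 0.035, 0.015, 25), 2))
```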

What changes are assumed in the alternative fiscal scenario? As one example, the category of "Other Non-Interest Spending" in the chart above does not plummet: "Federal noninterest spending apart from that for Social Security, the major health care programs (net of offsetting receipts), and certain refundable tax credits would rise after 2024 to its average as a percentage of GDP during the past two decades—rather than fall significantly below that level, as it does in the extended baseline."

On the tax side, the usual political trick is to have various tax deductions or credits scheduled to expire in the future, which makes the projected deficit appear lower, except that when the time comes for expiration the tax provisions are renewed again. Thus, the baseline revenue estimates rise from 17.6% of GDP in 2014 to 18.3% of GDP by 2024 and 19.4% of GDP by 2034 (and keep rising after that). In the alternative fiscal scenario, tax revenue rises to 18.1% of GDP, which is "slightly higher than the average of 17.4 percent over the past 40 years," notes the CBO--but then tax revenues don't continue to rise above that level.

My own judgment is that the path of future budget deficits in the next decade or so is likely to lean toward the alternative fiscal scenario. But long before we reach a debt/GDP ratio of 183%, something is going to give. I don't know what will change. But as an old-school economist named Herb Stein used to say, "If something cannot go on forever, it will stop."

Wednesday, July 23, 2014

The US unemployment rate has been painfully high in the Great Recession and its aftermath, but the high unemployment rates of the early 1980s look even worse--at least at first glance. However, the 1970s and 1980s were a time when a rising share of the adult population, and especially women, was entering the (paid) labor force, while the last few years are a time when the share of the adult population in the labor force is declining. Indeed, it's been a standard concern in the last few years that the official unemployment rate is not capturing the true pain in the labor market, because the official unemployment rate only includes those who are looking for work--not those who have become discouraged and given up looking. What is the interaction between the unemployment rate and the labor force participation rate telling us?

Here's a figure showing the post-World War II unemployment rate. The monthly peak of the unemployment rate at 10% in October 2009 was the second-highest of any post-World War II recession, behind the peak of 10.8% in November and December 1982. In addition, from December 1979 to August 1987, a period of almost eight years, the monthly unemployment rate exceeded 6%. More recently, the unemployment rate first rose above 6% in August 2008, and as of June 2014 it was at 6.1%. Thus, it's plausible that the current stretch of monthly unemployment above 6% will last a little more than six years--which is dismal, but still with some way to go before matching the high unemployment rates that prevailed during most of the 1980s.

However, this comparison doesn't feel quite fair. After all, during the 1970s and 1980s, the labor force participation rate (that is, the share of adults who either had jobs or were unemployed and looking for jobs) was rising from 60% to about 67%. Since the start of the Great Recession, the labor force participation rate has fallen from 66% down to about 63%. An unemployment rate falling back below 6% was more comforting in the 1980s, with a rising share of adults working, than it would be in 2014, with a falling share of adults working. Here's a figure showing the labor force participation rate in the post World War II period.

To investigate these issues, the Council of Economic Advisers has just published "The Labor Force Participation Rate Since 2007: Causes and Policy Implications." The rise in the labor force participation rate from about 1960 up through the mid-1990s was driven both by the baby boom generation reaching working age, and the dramatic entry of women into the (paid) labor force. The decline since about 2000 is largely because the rising proportion of women in the labor force levelled off, and the aging of the baby boom generation is raising the number of retirees. Here's a figure from the report showing the labor force participation rate for men and women separately.

The CEA also notes that during every recession, the labor force participation rate tends to fall a little, as some people give up looking for jobs and thus aren't counted as officially "unemployed." After presenting its own analysis, and showing that it fits fairly well with previous studies of the subject, the CEA notes: "Up until the beginning of 2012 the [labor force] participation rate was generally slightly higher than would have been predicted based on the aging trend and the standard business cycle effects. But in the last two years, the participation rate has continued to fall at about the same rate even though the unemployment rate has been declining rapidly."

In other words, the drop in the labor force participation rate up through about 2012 was explainable based on the aging of the population and the common patterns during a recession. (Here's a post from April 2012 that refers to a study making this point.) In this telling, the real puzzle is not why the labor force participation rate fell up to 2012, but why it has continued to fall so quickly since 2012. To explain what is different about the period since 2012, compared with the period after previous recessions, the CEA report focuses most heavily on how long-term unemployment has been different in the aftermath of the Great Recession. For example, here's one figure showing the share of the total unemployed who have been out of work more than 27 weeks. In past recessions, this share peaked at about 20% of the total unemployed. In the Great Recession, the share of long-term unemployed peaked at above 40% of all unemployed, and even now remains at historically high levels. (Here's a post from July 2013 on the legacy of long-term unemployment.)

Another measure of long-term unemployment is the average duration of unemployment. Again, remember that only those who are actually looking for a job are counted as officially unemployed, not those who have become discouraged and stopped looking. In the last few recessions, the average length of unemployment peaked at around 20 weeks. In the Great Recession, it peaked at about 40 weeks, and is still at a discomfitingly high 35 weeks.

The CEA report looks at some other potential reasons for why labor force participation has dropped off, like a rising share of the 16-24 year-old age bracket being in school, and other breakdowns by age, gender and disability status. At least to me, the most important bottom line is not to focus on the somewhat sterile argument of who should "really" be counted as unemployed. After all, the Bureau of Labor Statistics does also collect statistics on "discouraged" workers and "marginally attached" workers. Instead, the key message is that the lower unemployment rate includes a much larger than usual share of long-term unemployed. Waiting for the U.S. economy to re-absorb these workers has not been showing good results.

Tuesday, July 22, 2014

Many discussions of "technology" and how it will affect jobs and the economy have a tendency to discuss technology as if it is one-dimensional, which is of course an extreme oversimplification. Erik Brynjolfsson, Andrew McAfee, and Michael Spence offer some informed speculation on how they see the course of technology evolving in "New World Order: Labor, Capital, and Ideas in the Power Law Economy," which appears in the July/August 2014 issue of Foreign Affairs (available free, although you may need to register).

Up until now, they argue, the main force of information and communications technology has been to tie the global economy together, so that production could be moved to where it was most cost-effective. As they write: "Technology has sped globalization forward, dramatically lowering communication and transaction costs and moving the world much closer to a single, large global market for labor, capital, and other inputs to production. Even though labor is not fully mobile, the other factors increasingly are. As a result, the various components of global supply chains can move to labor’s location with little friction or cost."

But looking ahead, they argue that the next wave of technology will not be about relocating production around the globe, but changing the nature of production--and in particular, automating more and more of it. If the previous wave of technology made workers in high-income countries like the U.S. feel that their jobs were being outsourced to China, the next wave is going to make those low-skill workers in repetitive jobs--whether in China or anywhere else--feel that their jobs are being outsourced to robots. Brynjolfsson, McAfee, and Spence write:

Even as the globalization story continues, however, an even bigger one is starting to unfold: the story of automation, including artificial intelligence, robotics, 3-D printing, and so on. And this second story is surpassing the first, with some of its greatest effects destined to hit relatively unskilled workers in developing nations.

Visit a factory in China’s Guangdong Province, for example, and you will see thousands of young people working day in and day out on routine, repetitive tasks, such as connecting two parts of a keyboard. Such jobs are rarely, if ever, seen anymore in the United States or the rest of the rich world. But they may not exist for long in China and the rest of the developing world either, for they involve exactly the type of tasks that are easy for robots to do. As intelligent machines become cheaper and more capable, they will increasingly replace human labor, especially in relatively structured environments such as factories and especially for the most routine and repetitive tasks. To put it another way, offshoring is often only a way station on the road to automation.

This will happen even where labor costs are low. Indeed, Foxconn, the Chinese company that assembles iPhones and iPads, employs more than a million low-income workers -- but now, it is supplementing and replacing them with a growing army of robots. So after many manufacturing jobs moved from the United States to China, they appear to be vanishing from China as well. (Reliable data on this transition are hard to come by. Official Chinese figures report a decline of 30 million manufacturing jobs since 1996, or 25 percent of the total, even as manufacturing output has soared by over 70 percent, but part of that drop may reflect revisions in the methods of gathering data.)

If this prediction holds true, what does this mean for the future of jobs and the economy?

1) Outsourcing would become much less common. After all, if most of the cost of production is embodied in capital like robots and 3D printers, then the advantage to cheap labor becomes minimal. Brynjolfsson, McAfee, and Spence write: "As work stops chasing cheap labor, moreover, it will gravitate toward wherever the final market is, since that will add value by shortening delivery times, reducing inventory costs, and the like."

2) For low-income and middle-income countries like China that have thrived on being the workshops and manufacturing centers of the global economy, their jobs and workforce would experience a dislocating wave of change.

3) Some kinds of physical capital, like robots and 3D printers, are going to plummet in price, with artificial intelligence taking on many more tasks in both manufacturing and services. Especially as robots become capable of building more robots, capital goods will be abundant in a way that will not generate high returns to capital.

4) So if many workers are going to find their jobs disrupted and many makers of capital equipment are going to find themselves in a brutal competitive battle to reduce price and raise capabilities, who does well in this future economy? For high-income countries like the United States, Brynjolfsson, McAfee, and Spence emphasize that the greatest rewards will go to "people who create new ideas and innovations," in what they refer to as a wave of "superstar-based technical change." For the MBA students at MIT and NYU, where these authors are based, this probably qualifies as thrilling news. But for the typical worker, the largely unspoken implication seems fairly grim. If you aren't a superstar entrepreneur, then you are likely to be replaced by a robot, or a lower-paid worker in another country, or you'll have to scramble against all the other non-superstars to find a job in the remainder of the economy.

This final forecast seems overly grim to me. While I can easily believe that the new waves of technology will continue to create superstar earners, it seems plausible to me that the spread and prevalence of many different new kinds of technology offers opportunities to the typical worker, too. After all, new ideas and innovations, and the process of bringing them to the market, are often the result of a team process--and even being a mid-level but contributing player on such teams, or a key supplier to such teams, can be well-rewarded in the market. More broadly, the question for the workplace of the future is to think about jobs where labor can be a powerful complement to new technologies, and then for the education and training system, employers, and employees to get the skills they need for such jobs. If you would like a little more speculation, one of my early posts on this blog, back on July 25, 2011, was a discussion of "Where Will America's Future Jobs Come From?"

Monday, July 21, 2014

Here's a vivid argument, from a newspaper article about 150 years ago, that competitive labor markets are necessarily exploitative and immoral.

Where all men are equals, all must be competitors, rivals, enemies, in the struggle for life, trying each to get the better of the other. The rich cheapen the wages of the poor; the poor take advantage of the scarcity of labor, and charge exorbitant prices for their work; or, when labour is abundant, underbid and strangle each other in the effort to gain employment. Were any man engaged in business in free society to act upon the principle of the Golden Rule--doing unto others as he would that they should do unto him--his certain ruin would be the consequence.

Especially if you are predisposed to feel some sympathy with this line of argument, you will be intrigued to know that it's from a pro-slavery essay that appeared in the Richmond (Virginia) Examiner, July 17, 1861, as quoted in Fighting Words, a 2004 book by Andrew S. Coopersmith (pp. 49-50). Here's the full passage:

Christian morality is impractical in free society and is the natural morality of slave society. Where all men are equals, all must be competitors, rivals, enemies, in the struggle for life, trying each to get the better of the other. The rich cheapen the wages of the poor; the poor take advantage of the scarcity of labor, and charge exorbitant prices for their work; or, when labour is abundant, underbid and strangle each other in the effort to gain employment. Were any man engaged in business in free society to act upon the principle of the Golden Rule--doing unto others as he would that they should do unto him--his certain ruin would be the consequence.

"Every man for himself'" is the necessary morality of such society, and that is the negation of Christian morality. . . On the other hand, in slave society, . . .it is in general, easy and profitable to do unto others as we would that they should to unto us. There is no competition, no clashing of interests within the family circle, composed of parents, master, husband, children and slaves. . . . When the master punishes his child or his slave for misconduct he obeys the golden rule just as strictly as when he feeds and clothes them. Were the parent to set his chidden free at fifteen years of age to get their living in the world, he would be guilty of crime; and as negroes never become more provident or intellectual than white children of fifteen, it is equally criminal to emancipate them. We are obeying the golden rule in retaining them in bondage, taking care of them in health and sickness, in old age and infancy, and in compelling them to labor. . . .

'Tis the interest of masters to take good care of their slaves, and not cheat them out of their wages, as Northern bosses cheat and drive free labourers. Slaves are most profitable when best treated, free labourers most profitable when worst treated and most defrauded. Hence the relation of the master and slave is a kindly and Christian one; that of free laborer and employer a selfish and inimical one. It is in the interest of the slave to fulfill his duties to his master; for he thereby elicits his attachment, and the better enables him to provide for his (the slave's) wants. Study and analyze as long as [you] please the relations of men . . . in a slave society, and they will be found to be Christian, humane and affectionate, whilst those of free society are anti-Christian, competitive and antagonistic.

Of course, the fact that a point of view has some appalling allies doesn't make it incorrect. Most points of view, from all perspectives, have some appalling allies. So the fact that slaveholders saw wage labor as exploitative and slavery as moral certainly doesn't prove that wage labor is not exploitative, at least at certain times and places and situations. But it does suggest that before you condemn labor markets as exploitative--especially labor markets that operate in modern high-income societies with democratic governance--you should consider the alternative social mechanisms over time that determined what jobs people would do and how much they would be paid.

My own views about the value of labor markets are closer to those expressed by Nobel laureate Edmund Phelps in an interview with Howard R. Vane and Chris Mulhearn published in 2009 in the Journal of Economic Perspectives. Phelps said:

I’ve been trying to develop a new justification for capitalism, at least I think it’s new, in which I say that if we’re going to have any possibility of intellectual development we’re going to have to have jobs offering stimulating and challenging opportunities for problem solving, discovery, exploration, and so on. And capitalism, like it or not, has so far been an extraordinary engine for generating creative workplaces in which that sort of personal growth and personal development is possible; perhaps not for everybody but for an appreciable number of people, so if you think that it’s a human right to have that kind of a life, then you have on the face of it a justification for capitalism. There has to be something pretty powerful to overturn or override that.

The questions about when, where, and how paid labor markets can be said to be exploitative aren't going to be settled in a blog post. For a quick sketch of some of the arguments about how working for wages is either an instrumental act divorced from morality or part of a deeper moral engagement with society and nature, my essay on "Economics and Morality" in the June 2014 issue of Finance and Development offers a starting point.

Note: Thanks to Ann Norman, the Assistant Editor and my general co-conspirator at the Journal of Economic Perspectives, for pointing out the slavery quotation.

Friday, July 18, 2014

"The Samuelson Conjecture" sounds a bit like the title of one of those old Robert Ludlum novels (The Bourne Identity, The Parsifal Mosaic, The Aquitaine Progression, The Sigma Protocol, etc.) But among economists, it refers to an article called "Where Ricardo and Mill Rebut and Conﬁrm Arguments of Mainstream Economists Supporting Globalization," written by the great economist Paul Samuelson, and published in the Journal of Economic Perspectives in the Spring 2004 issue. (Full disclosure: I've been Managing Editor of the JEP since the first issue in 1987.)

Samuelson's article won't be a simple read for the uninitiated, but the basic intuition is straightforward. The Samuelson conjecture starts from the basic insight that international trade is based on differences in productivity levels across industries that are driven by some mixture of natural endowments and past investments in education, physical capital, technology, and the legal and financial infrastructure. But what happens if a high-income and low-income economy start out in different places, but then the low-income economy develops in a way that it converges in education, investment, and technology toward the high-income economy? As the difference between the two economies decreases, the potential for gains from trade between the economies can also decrease. Or to bring this meditation abruptly to the real world, as the economy of China becomes more technologically similar to that of the United States, the standard of living in China undoubtedly rises, but the U.S. economy may suffer because the potential for gains from trade is diminished. Or as Samuelson puts it:

[S]ometimes free trade globalization can convert a technical change abroad into a benefit for both regions; but sometimes a productivity gain in one country can benefit that country alone, while permanently hurting the other country by reducing the gains from trade that are possible between the two countries. ... Historically, U.S. workers used to have kind of a de facto monopoly access to the superlative capitals and know-hows (scientific, engineering and managerial) of the United States. All of us Yankees, so to speak, were born with silver spoons in our mouths—and that importantly explained the historically high U.S. market-clearing real wage rates for (among others) janitors, house helpers, small business owners and so forth. However, after World War II, this U.S. know-how and capital began to spread faster away from the United States. That meant that in a real sense foreign educable masses—first in western Europe, then throughout the Pacific Rim—could and did genuinely provide the same kind of competitive pressures on U.S. lower middle class wage earnings that mass migration would have threatened to do. Post-2000 outsourcing is just what ought to have been predictable as far back as 1950. And in accordance with basic economic law, this will only grow in the future 2004–2050 period.

As Samuelson was at some pains to point out, this argument, which shows one possible theoretical outcome of how the gains from trade can change as one country experiences technological development, is not a new one; indeed, I have often pointed out a version of this finding using the simple graphs of an introductory economics class. Samuelson also emphasized in his 2004 article that the argument did not show that blocking imports from China was a good idea; after all, if the issue is how to keep the gains from trade high, acting to reduce trade would be a peculiar way of accomplishing that goal. But for a few months after the article was published, a string of popular press articles made wildly overblown claims that the great Paul Samuelson had proven that globalization and international trade were harming the U.S. economy.
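The intro-textbook version of this logic can be sketched in a few lines of arithmetic. This is purely illustrative: the two goods, the labor forces, and all the productivity numbers below are hypothetical, chosen only to show how the gains from specialization shrink to zero as one country's productivity pattern converges to the other's.

```python
# A minimal sketch of the textbook Ricardian logic. All numbers are
# hypothetical and chosen only for illustration.

def world_output(labor_a, prod_a, labor_b, prod_b, specialize):
    """World output of (good1, good2).

    prod_* = (units of good1 per worker, units of good2 per worker).
    Without trade, each country splits its labor evenly between goods.
    With trade, each country specializes in its comparative-advantage good.
    """
    if not specialize:
        g1 = 0.5 * labor_a * prod_a[0] + 0.5 * labor_b * prod_b[0]
        g2 = 0.5 * labor_a * prod_a[1] + 0.5 * labor_b * prod_b[1]
        return (g1, g2)
    # Opportunity cost of good1, measured in units of good2 forgone:
    oc_a = prod_a[1] / prod_a[0]
    oc_b = prod_b[1] / prod_b[0]
    if oc_a < oc_b:  # country A has comparative advantage in good1
        return (labor_a * prod_a[0], labor_b * prod_b[1])
    return (labor_b * prod_b[0], labor_a * prod_a[1])

us = (2, 8)          # high productivity, tilted toward good2
china_far = (2, 1)   # very different productivity pattern
china_close = (2, 8) # pattern has converged to the US one

# Different patterns: specialization raises world output of good2,
# from (200, 450) without trade to (200, 800) with trade.
print(world_output(100, us, 100, china_far, specialize=False))
print(world_output(100, us, 100, china_far, specialize=True))

# Converged patterns: the autarky point already sits on the world
# production frontier, so specialization adds nothing at all.
print(world_output(100, us, 100, china_close, specialize=False))
print(world_output(100, us, 100, china_close, specialize=True))
```

When the opportunity costs differ, reallocating labor along comparative advantage expands world output; when the productivity patterns are identical, there is no comparative advantage left to exploit, which is the Samuelson worry in its simplest form.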

Theoretical models show what is possible, but empirical evidence shows the size of actual effects. In the American Economic Journal: Macroeconomics, Julian di Giovanni, Andrei A. Levchenko, and Jing Zhang have published "The Global Welfare Impact of China: Trade Integration and Technological Change" (6:3, 153–183). (Unlike the JEP, the AEJ-Macro is not freely available on-line, but many readers will have access through library subscriptions.) The authors conduct a simulation of the world economy based on productivity estimates for many sectors across 75 countries: "We embed these productivity estimates within a quantitative multi-country, multi-sector model with a number of realistic features, such as multiple factors of production, an explicit nontraded sector, the full specification of input-output linkages between the sectors, and both inter- and intra-industry trade, among others."

The authors then focus on two scenarios for productivity growth in China. In one scenario, all of China's industries grow at the same productivity rate, so that the structure of China's economy remains fundamentally different from that of the U.S. In the other scenario, the productivity rate of China's industries grows in an unbalanced way, so that China's economy evolves in such a way that "China's productivity in every sector becomes a constant ratio of the world frontier."

Given the basic idea of international trade--that the economic gains from trade arise when economies with different relative productivity levels across industries trade with each other--one might expect that if productivity patterns in China become more similar to U.S. patterns, then the U.S. economy could suffer from a reduction in the potential gains from trade. But the simulation results from di Giovanni, Levchenko, and Zhang show the opposite result. For most countries, including the U.S., the gains from trade with China are larger when the pattern of China's productivity growth converges to the U.S. pattern. The authors explain:

The sheer size of the Chinese economy and the breathtaking speed of its integration into global trade have led to concerns about the possible negative welfare effects of China’s integration and productivity growth. These concerns correspond to the theoretically possible, though not necessary, outcomes in fully articulated models of international trade, and thus have been taken seriously by economists. However, it is ultimately a quantitative question whether the negative welfare effects of China on its trading partners actually obtain in a calibrated model of the world economy with a realistic production structure, trade costs, and the inherently multilateral nature of international trade. ... With respect to technological change, our results are more surprising: contrary to a well-known [Samuelson] conjecture, the world will actually gain much more in welfare if China’s growth is unbalanced. This is because China’s current pattern of comparative advantage is common in the world, and thus unbalanced growth in China actually makes it more different than the average country.

Of course, one set of empirical simulations is not an unarguable proof, either, and research in this area is sure to continue. But for now, at least, the evidence suggests that Samuelson's (2004) conjecture emphasized a theoretical possibility that does not seem to hold true in a fully articulated quantitative model of world trade, in which the productivity gains in China, in the context of all the countries and trading relationships in the world economy, do redound to the overall benefit of the U.S. economy.

One final thought is that the discussion here focuses on the gains from trade as driven by differences in relative productivity across countries, and these "Ricardian" models of trade have seen a new surge of importance in recent years. However, since the 1980s there have also been several waves of economic analysis focused on other gains from international trade. For example, one approach focuses on gains to consumers from having a variety of products available (for example, a variety of models of cars or smartphones). A second approach looks at how international trade adds to the economic pressures that push less-productive firms out of business and provides benefits to more-productive firms that can sell in world markets. A third approach looks at how trade leads to pressure for innovation, which in turn increases productivity levels.

Thursday, July 17, 2014

One prism for looking at patterns of global trade is to consider trade between regions of the world. The rows of the table show the origin of world merchandise trade according to the region of the country involved; the columns show the destination of that trade. Thus, if you go to the South and Central America row, you can see that countries of this region exported $201 billion in merchandise to each other in 2012, only slightly more than the $186 billion these countries exported to North America and the $172 billion they exported to Asia. At least to me, this suggests that there are substantial possibilities for gains from trade within the countries of South America.

I include this table in each edition of my Principles of Economics text (and if you're teaching an intro econ course, I encourage you to check it out). Here are some lessons I try to highlight:

First, the bulk of world trade involves high-income areas of the world, either as exporters, importers, or both. For example, the single biggest number is the $4,382 billion that countries in Europe sell to each other. The level of trade between North American countries appears low compared to Europe, but this is because the European Union includes 27 nations while the trade in the North American region is between the United States, Canada, and Mexico. Remember, exports from Germany to Sweden count as international trade, but sales from California to New York would not be counted in this table, since they occur within the U.S. economy.

Second, trade between high-income regions and low-income regions is fairly extensive. Perhaps the most vivid example of this point is the $3,012 billion in trade between the nations of Asia. Asia includes high-income economies like Japan and Korea, upper-middle-income countries like China and Thailand, lower-middle-income countries like Indonesia and Vietnam, and low-income countries like Cambodia.

Third, high-income regions are important as markets to the other regions. The countries of Africa, for example, export more than three times as much to the European Union as they do to other countries of Africa. As noted above, countries of Latin America export almost as much to North America as they do to each other.

Finally, trade between some regions is very low; for example, between Africa and Latin America, or between the Middle East and the Commonwealth of Independent States. Indeed, some regions of the world seem economically disconnected from each other. I suspect that in a globalizing world economy, these connections will gradually be created and will grow over time.

Wednesday, July 16, 2014

Want a glimpse of how companies can shift their profits among countries in a way that reduces their tax liabilities? Here's the dreaded "Double Irish Dutch Sandwich" as described by the International Monetary Fund in its October 2013 Fiscal Monitor. The schematic shows the flows of goods and services, payments, and intellectual property. An explanation from the IMF follows, with a few of my own thoughts.

The IMF writes (footnotes omitted):

"So many companies exploit complex [taz] avoidance schemes, and so many countries offer devices that make them possible, that examples are invidious. Nonetheless, the “Double Irish Dutch Sandwich,” an avoidance scheme popularly associated with Google, gives a useful flavor of the practical complexities. Here’s how it works (Figure 5.1):

• Multinational Firm X, headquartered in the United States, has an opportunity to make profit in (say) the United Kingdom from a product that it can for the most part deliver remotely. But the tax rate in the United Kingdom is fairly high. So . . .

• It sells the product directly from Ireland through Firm B, with a United Kingdom firm Y providing services to customers and being reimbursed on a cost basis by B. This leaves little taxable profit in the United Kingdom. Now the multinational’s problem is to get taxable profit out of Ireland and into a still-lower-tax jurisdiction.

• For this, the first step is to transfer the patent from which the value of the service is derived to Firm H in (say) Bermuda, where the tax rate is zero. This transfer of intellectual property is made at an early stage in development, when its value is very low (so that no taxable gain arises in the United States).

• Two problems must be overcome in getting the money from B to H. First, the United States might use its CFC [controlled foreign corporation] rules to bring H immediately into tax. [Note: The "controlled foreign corporation" rules seek to reduce the ability of companies to move profits to another country via a pure paperwork transaction to what is really the same company.] To avoid this, another company, A, is created in Ireland, managed by H, and headquarters “checks the box” on A and B for U.S. tax purposes. This means that, if properly arranged, the United States will treat A and B as a single Irish company, not subject to CFC rules, while Ireland will treat A as resident in Bermuda, so that it will pay no corporation tax. The next problem is to get the money from B to H, while avoiding paying cross-border withholding taxes. This is fixed by setting up a conduit company S in the Netherlands: payments from B to S and from S to A benefit from the absence of withholding on nonportfolio payments between EU companies, and those from A to H benefit from the absence of withholding under domestic Dutch law.

This clever arrangement combines several of the tricks of the trade: direct sales, contract production, treaty shopping, hybrid mismatch, and transfer pricing rules."
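The arithmetic of the sandwich is simpler than the legal machinery. Here is a purely illustrative sketch: the revenue, service cost, royalty, and tax rates below are hypothetical round numbers (not Google's, and not a claim about actual statutory rates), meant only to show why shifting the tax base to a zero-tax jurisdiction is worth the trouble.

```python
# Illustrative arithmetic for the IMF's "Double Irish Dutch Sandwich."
# All figures and rates are hypothetical, for illustration only.

UK_RATE, IRISH_RATE, BERMUDA_RATE = 0.23, 0.125, 0.0

def sandwich_tax(revenue, service_cost, royalty):
    """Total tax paid when UK firm Y is reimbursed on a cost basis,
    Irish firm B books the sales, and the royalty flows (via the
    withholding-free Dutch conduit S) to Bermuda firm H."""
    uk_profit = 0.0  # Y covers its costs exactly, so no UK profit
    irish_profit = revenue - service_cost - royalty
    bermuda_profit = royalty
    return (uk_profit * UK_RATE
            + irish_profit * IRISH_RATE
            + bermuda_profit * BERMUDA_RATE)

def direct_tax(revenue, service_cost):
    """Tax if the full profit were simply booked in the United Kingdom."""
    return (revenue - service_cost) * UK_RATE

# 1000 of UK revenue, 50 of local service costs, 900 royalty to Bermuda:
print(sandwich_tax(1000, 50, 900))  # tax falls only on the small Irish residual
print(direct_tax(1000, 50))         # tax falls on the whole UK profit
```

In this toy example, almost all of the profit ends up as a royalty taxed at zero in Bermuda, with only a thin residual taxed in Ireland, which is the whole point of the structure.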

A few quick thoughts of my own here.

1) The description of this arrangement is not from a muck-raking journalist nor from a lefty lobbying group. When it comes to knowing what happens in the world of international finance, the IMF is a mainstream and establishment source.

2) Under U.S. tax law, these profits would be subject to U.S. tax if and when they are repatriated to the United States as dividends to shareholders. As the IMF writes (citations omitted): "The United States will charge tax when the money is paid as dividends to the parent—but that can be delayed by simply not paying any such dividends. At present, one estimate is that nearly US$2 trillion is left overseas by U.S. companies." Here's a post from about a year ago on the reasons why U.S. firms find it useful to hold liquid assets overseas.

3) There is an ongoing cat-and-mouse game in corporate tax avoidance, in which government tax agencies write regulations, well-paid corporate tax attorneys construct arrangements to pay lower taxes within the rules, the government tax agencies write more regulations, and so on--in a spiraling descent into ever-greater complexity and confusion. As with many pointless and destructive games, the answer is to define the rules for a different game. President Obama has proposed one corporate tax reform, and other proposals are floating around. When the economic incentives to shift profits are powerful, the corporate tax attorneys will find ways to make it happen. Thus, one goal of such reform should be to reduce the underlying incentives for this sort of profit-shifting.