
Monday, April 30, 2012

The U.S. corporate sector has high profits. Interest rates are near historical lows. Those factors would seem to encourage investment, expansion, and hiring. But here we are in 2012, with the official end of the Great Recession nearly three years in the rear-view mirror, and many firms are still holding back. Scott R. Baker, Nick Bloom, and Steven J. Davis have written "Is Policy Uncertainty Delaying the Recovery?" as a policy brief for the Stanford Institute for Economic Policy Research. The underlying research paper and data are available here.

There are lots of theoretical reasons why a high level of uncertainty might cause managers to be hesitant about starting new projects, investing, or hiring workers. But how does one collect data on the level of uncertainty--and in particular, on the level of uncertainty related to economic policy? Baker, Bloom, and Davis mix together three sources of data into a single index: "We construct our index of policy uncertainty by combining three types of information: the frequency of newspaper articles that reference economic uncertainty and the role of policy; the number of federal tax code provisions that are set to expire in coming years; and the extent of disagreement among economic forecasters about future inflation and future government spending on goods and services." Here is their index, where a level of 100 is set arbitrarily to be equal to the average of the index for the 25 years from 1985 up to 2010.
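To make the construction concrete, here is a minimal Python sketch of how three such components might be combined into one index. The equal weights, the mean-100 normalization, and the monthly readings are my illustrative assumptions, not the authors' exact method.

```python
# Illustrative sketch of a composite policy-uncertainty index: scale each
# component so its sample mean is 100, then take a simple average.
# (Equal weights and this normalization are assumptions for illustration.)

def normalize_to_mean_100(series):
    mean = sum(series) / len(series)
    return [100 * v / mean for v in series]

def policy_uncertainty_index(news_counts, expiring_tax_provisions,
                             forecaster_disagreement):
    components = [normalize_to_mean_100(s) for s in
                  (news_counts, expiring_tax_provisions, forecaster_disagreement)]
    # equal-weighted average of the three normalized components, period by period
    return [sum(vals) / len(vals) for vals in zip(*components)]

# hypothetical readings for three periods of each component
index = policy_uncertainty_index([80, 120, 200],   # newspaper-article counts
                                 [10, 12, 20],     # expiring tax provisions
                                 [0.5, 0.6, 1.1])  # forecaster disagreement
print([round(v, 1) for v in index])
```

Normalizing before averaging keeps any one component from dominating the index just because of its units, and by construction the index averages 100 over the sample period, mirroring the authors' choice of a 100 base for 1985-2010.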

Constructing any index like this involves some more-or-less arbitrary choices, so there will always be room for dispute. In addition, while the authors offer some arguments that this index is emphasizing "policy" uncertainty, I suspect that it's picking up other kinds of swings in economic confidence as well.

But for what it's worth, it does seem that the index is spiking at times one might expect: 9/11, the "Black Monday" stock market meltdown in 1987, wars, presidential elections, and the like. In addition, policy uncertainty by this measure has been especially high since 2008, although in early 2012 the measure has fallen back to 2009 levels. When the authors look more closely at the newspaper articles underlying their index, they find that the greatest sources of uncertainty are those related to monetary issues, which includes many steps taken by the Federal Reserve, and tax issues, like whether various tax provisions will be extended or ended.

How much does policy stability matter? As the authors ask (references to figures omitted):

"How much near-term improvement could we expect from a stable, certainty-enhancing policy regime? We use techniques developed by Christopher Sims, one of the two 2011 Nobel laureates in economics, to estimate the effects of economic policy uncertainty. The results for the United States suggest that restoring 2006 (pre-crisis) levels of policy uncertainty could increase industrial production by 4% and employment by 2.3 million jobs over about 18 months. That would not be enough to create a booming economy, but it would be a big step in the right direction."

By the time one takes into account the problems of creating an index to measure policy uncertainty and the problems of blending policy uncertainty into a macroeconomic model, I wouldn't place much confidence in these exact numbers. But at a broader level, the calculations make a strong argument that the effects of policy uncertainty on output and employment have probably been a substantial contributor to the sluggishness of the U.S. economic recovery.

Friday, April 27, 2012

Movies are usually shown to critics before being released to the general public. But about one-tenth of movies are not shown to critics in advance. What do you as a movie-goer infer when a movie isn't released for review? But then, what is an appropriate strategy for movie studios in sending movies out for review, if they recognize what movie-goers like you are likely to infer? And what is the appropriate strategy for movie-goers, if they recognize what the movie studios expect them to infer? You have just crossed the border into the land of strategic game theory. In the most recent issue of the American Economic Journal: Microeconomics, Alexander L. Brown, Colin F. Camerer, and Dan Lovallo sort through the implications and inferences in "To Review or Not to Review? Limited Strategic Thinking at the Movie Box Office" (vol. 4, number 2, pp. 1-26). The journal is not freely available on-line, although many academics will have access to it through a library subscription or their personal membership in the American Economic Association.

The usual starting point for analyzing these kinds of strategic interactions is to consider what would happen if all parties were completely rational. It might seem intuitively obvious that movie studios will send out their better-quality movies to be reviewed, but not send out their lower-quality movies. However, it turns out that if all parties are fully rational, movie studios would release every movie for review. The authors explain the underlying mathematical game theory by offering an illustration along these lines.

Say that the quality of a movie can be measured on a scale between 0 and 100. Now say that studios decide that they will only release their better movies for review: for example, the studios might decide not to release for review any movie with quality below 50. In this situation, when moviegoers see that a movie has not been released for review, they will infer that it has a quality ranging from 0 to 50--on average, a value of 25.

But if consumers are going to assume that all unreviewed movies have a quality value of 25, then it makes sense for the movie studios to release for review all movies with qualities higher than 25, because they suffer diminished profits if a movie has quality of, say, 40, but moviegoers are assuming it's only a 25.

Now movie studios are releasing for review all movies with a quality score over 25, and movie-goers will assume that the remaining movies are between 0 and 25, or an average of 12.5. Given these expectations, it will pay for the studios to release for review all movies with a quality score above 12.5, so that they don't face a situation where a movie with a true quality score of, say, 20 is treated by movie-goers as if it were only a 12.5.

However, now consumers will assume that all unreviewed movies have a quality value below 12.5. And as this cycle of inference and counterinference continues, eventually the movie studios will release all movies (except perhaps the single worst movie, which consumers will then know is the single worst movie) for review.
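The cycle of inference just described is a repeated halving of the review cutoff, which a few lines of Python (a toy version of the argument, not the authors' formal model) make explicit:

```python
# Toy version of the unraveling argument: studios withhold movies below a
# cutoff; rational moviegoers value an unreviewed movie at the midpoint of
# [0, cutoff]; studios then lower the cutoff to that midpoint, and repeat.

def unravel(initial_cutoff=50.0, iterations=30):
    cutoff = initial_cutoff
    for _ in range(iterations):
        cutoff = cutoff / 2  # moviegoers' inferred quality of unreviewed movies
    return cutoff

print(unravel(50.0, 1))  # after one round of inference: 25.0
print(unravel(50.0, 2))  # after two rounds: 12.5
print(unravel())         # after many rounds, the cutoff is essentially zero
```

The cutoff converges to zero, which is the full-rationality prediction: studios end up releasing essentially every movie for review.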

After identifying the purely rational outcome, the next step in this kind of analysis is to look at the underlying assumptions, and to think about which assumptions are most likely to be violated in this setting. The authors emphasize two such assumptions: 1) Consumers are always aware when movies haven't been released for review; and 2) Consumers draw fully rational conclusions when a movie isn't released for review. Of course, in the real world neither of these assumptions holds true. The authors find that of the 1,414 movies that had wide release in the U.S. market from 2000 through 2009, about 11% were not released for review. In addition, that share has been higher in recent years.

Movie-goers who often don't notice that a movie hasn't been released for review, or who don't draw the rational inference when that happens, are likely to end up going to low-quality movies they would not otherwise have attended. As a result, they are more likely to be disappointed in their movie experience when going to an unreviewed movie than to a movie that was released for review. The authors set out to test whether this implication holds true.

To measure what critics think of the quality of a movie, they use data from Metacritic.com, a website that pulls together and averages the ratings of more than 30 movie critics from newspapers, magazines, and websites. To measure what audiences think of a movie, they look at user reviews of movies at the Internet Movie Database (IMDb). They plot a graph with the movie critic ratings on the horizontal axis and the movie-watcher ratings on the vertical axis. Movies that were released for review are solid dots; movies that had a "cold open" without a review from the critics before they were released (although they were reviewed later) are hollow dots. Here is the graph:

What patterns emerge here?

1) Notice that the dots form a generally upward-sloping pattern, which tells you that when the critics tend to rate a movie more highly (on the horizontal axis), moviegoers also tend to rate the movie more highly (on the vertical axis).

2) Cold-opened movies, the hollow dots, tend to have lower quality. "No cold-opened movie has a metacritic rating higher than 67. The average rating for those movies is 30, 17 points below the sample average of 47."

3) The darker straight line is the best-fit line looking only at movies that were screened in advance. The lighter straight line is the best-fit line looking at movies that were not screened in advance. The lighter line is below the darker line. Think about a movie of a certain quality level as defined by the critics: if that movie is reviewed, people are more likely to enjoy that movie than if the movie was not released for early review. This pattern suggests that the reviews are helping people to sort out which movies they would prefer seeing, and that without reviews, people are more likely to end up disappointed.
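To see the mechanics of those two lines, here is a small ordinary-least-squares sketch in Python. The ratings below are numbers I invented for illustration; the actual analysis uses the Metacritic and IMDb data.

```python
# Fit separate best-fit (OLS) lines for reviewed and cold-opened movies:
# critic rating on the x-axis, audience rating on the y-axis.

def best_fit(points):
    """Return (slope, intercept) of the least-squares line through (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    slope = (sum((x - mx) * (y - my) for x, y in points)
             / sum((x - mx) ** 2 for x, _ in points))
    return slope, my - slope * mx

# hypothetical (critic rating, audience rating) pairs
reviewed  = [(20, 4.0), (40, 5.5), (60, 6.8), (80, 8.1)]
cold_open = [(10, 2.5), (25, 3.5), (40, 4.6), (55, 5.8)]

for label, data in [("reviewed", reviewed), ("cold open", cold_open)]:
    slope, intercept = best_fit(data)
    print(f"{label}: audience ~ {slope:.3f} * critic + {intercept:.2f}")
```

In data with the pattern the authors find, the cold-open line sits below the reviewed line: at any given critic-rated quality, audiences of unreviewed movies come away less satisfied.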

After doing statistical calculations to adjust for factors like whether the movie features well-known stars, the size of the production budget, the rating of the movie, the genre of the movie, and other factors, they find: "[C]old opening is correlated with a 10–30 percent increase in domestic box-office revenue, and a pattern of fan disappointment, consistent with the hypothesis that some moviegoers do not infer low quality from cold opening."

So here's some advice you can use: If you're not sure whether a movie has been released for review by critics before it was distributed, find out. If it hasn't been released, think twice about whether you really want to see it. Maybe you do! Or maybe you are not paying enough attention to the signal the movie studio is sending by choosing a cold opening. Here is the authors' explanation, based in part on interviews with studio executives (footnotes omitted):

"[P]roduction budgets and personnel are decided early in the process. The number of theaters which agree to show the film is contracted far in advance of any cold-opening decision. Cold-opening decisions are made after distribution contracts have been signed and according to a major distributor and studio executives “are not a part of the contract.” There are no contracted decision rights about whether to cold open or not. The cold-opening decision is almost always made late in the process. After the film is completed, there is often audience surveying and test screenings. As one senior marketing and public relations (PR) veteran put it, “If a movie is not shown to critics, a decision has been made that the film will not be well received by them … After the PR executives have seen the film, if they believe the film will be poorly reviewed, they will have a heart to heart with the marketing execs and filmmakers about the pros and cons of screening for critics. ...

"A key ingredient in this story is that executives must think some moviegoers are strategically naïve, in the sense that those moviegoers ... will not deduce from the lack of reviews that quality is lower than they think. (Otherwise, the decision to cold open would be tantamount to allowing critics to say the quality is low)."

Here's the pattern of the labor force participation rate. The first figure shows the overall rate. The second figure shows the breakdown by gender: that is, the long-declining labor force participation rate for men, and the labor force participation rate for women, which rose until about 2000 but then flattened out and has recently declined.

How much of the recent sharp decline in the labor force participation rate is the Great Recession, and how much is other factors? "At the turn of the 21st century, labor force participation in the United States reversed its decades-long increase and started trending lower. A more startling development has been the recent sharp decline in the labor force participation rate—from 66.0 percent in 2007 to 64.1 percent in 2011—a far bigger drop than in any previous four-year period. ... This article presents a variety of evidence—including data on demographic shifts, labor market flows, gender differences, and the effects of long-term unemployment—to disentangle the roles of the business cycle and trend factors in the recent drop in participation. Taken together, the evidence indicates that long-term trend factors account for about half of the decline in labor force participation from 2007 to 2011, with cyclical factors accounting for the other half."

What are these long-term trend factors?

1) The baby boom generation (roughly, those born from 1946 up until about 1960) pushed up the labor force participation rate while moving through their prime earning years, and is now starting to pull the rate down as its members head into retirement.

2) Women entered the (paid) labor force in large numbers starting after World War II, which helped drive the overall labor force participation rate higher for decades. But the labor force participation rate for women seemed to top out at around 60%, and has flattened out since then.

3) Young adults in the 16-24 age group have become less likely to work. This group had a labor force participation rate of nearly 70% back in the 1970s and 1980s, but it has now fallen to about 55%. Part of the decline is that more young people are attending at least some college. Another part is that for many of the relatively low-skilled in this age group, the low wages they could earn don't seem to make taking a job worthwhile.

4) The long-term trend of declining male labor force participation rates continues. What are these men doing when they leave the labor force? One doorway out of the labor force for many of them takes the form of applying for disability, which has nearly tripled in the last 10 years from 1 million to 3 million applications per year. For a discussion of "Disability Insurance: One More Trust Fund Going Broke," see this post from August 11, 2011.

A long-term decline in the labor force participation rate isn't good news for long-term economic growth, nor for the long-term solvency of Social Security and Medicare. There are two particular areas worth focusing on.

2) The labor force participation of the elderly has been rising since the mid-1990s, albeit slowly. For example, the labor force participation rate of the 55-64 age group rose from 59.3% in 2000 to 64.9% in 2010; for the over 65 group, the increase was from 12.9% in 2000 to 17.4% in 2010; and for the over-75 group, the rise was 5.3% in 2000 to 7.4% in 2010. As the population ages, we need to think about design of retirement programs and labor force institutions in a way that certainly doesn't penalize--and perhaps can even reward--the decision to work a few more years.

Tuesday, April 24, 2012

One of the most sizzling of all hot-button issues over the last 40 years has been the sharp rise in immigration from Mexico, much of it illegal. Thus, it's intriguing to read the report by Jeffrey Passel, D’Vera Cohn, and Ana Gonzalez-Barrera called "Net Migration from Mexico Falls to Zero—and Perhaps Less," from the Pew Research Center.

They write (footnotes and citations omitted): "The largest wave of immigration in history from a single country to the United States has come to a standstill.... The U.S. today has more immigrants from Mexico alone—12.0 million—than any other country in the world has from all countries of the world. Some 30% of all current U.S. immigrants were born in Mexico. The next largest sending country—China (including Hong Kong and Taiwan)—accounts for just 5% of the nation’s current stock of about 40 million immigrants.... Beyond its size, the most distinctive feature of the modern Mexican wave has been the unprecedented share of immigrants who have come to the U.S. illegally. Just over half (51%) of all current Mexican immigrants are unauthorized, and some 58% of the estimated 11.2 million unauthorized immigrants in the U.S. are Mexican."

Here are two illustrative figures. The first shows the total Mexican-born population in the United States: the total just takes off from about 1970 up through the middle of this decade. The second figure breaks the total down into legal and "unauthorized," and shows that the decline in the unauthorized total actually started back about 2007.

Will immigration from Mexico surge again in the next few years, if and when U.S. employment gradually recovers? I suspect that such immigration may rise again, but much more mildly than in the past. There are a number of reasons, going back several years, why net immigration from Mexico has leveled out or perhaps even turned slightly negative. Start by looking at a graph of annual immigration (that is, not the total Mexican-born population, but the annual flow). It actually peaked back in the late 1990s, and there has been an especially sharp decline going back to about 2004.

What are the causes of this decline? In no particular order, here are some of the longer-term reasons dating back to before the Great Recession hit full force:

1) Border enforcement is way up. "Appropriations for the U.S. Border Patrol within the Department of Homeland Security (DHS)—only a subset of all enforcement spending, but one especially relevant to Mexican immigrants—more than tripled from 2000 to 2011, and more than doubled from 2005 to 2011. The federal government doubled staffing along the southwest border from 2002 to 2011, expanded its use of surveillance technology such as ground sensors and unmanned flying vehicles, and built hundreds of miles of border fencing. ... In spite of (and perhaps because of) increases in the number of U.S. Border Patrol agents, apprehensions of Mexicans trying to cross the border illegally have plummeted in recent years—from more than 1 million in 2005 to 286,000 in 2011—a likely indication that fewer unauthorized migrants are trying to cross. Border Patrol apprehensions of all unauthorized immigrants are now at their lowest level since 1971."

2) Deportations are way up. "As apprehensions at the border have declined, deportations of unauthorized Mexican immigrants–some of them picked up at work sites or after being arrested for other criminal violations–have risen to record levels. In 2010, 282,000 unauthorized Mexican immigrants were repatriated by U.S. authorities, via deportation or the expedited removal process."

3) Mexico's demography is changing, with fewer children per woman and an older population, so the pressures on young men to leave and look for work in the U.S. are much reduced. "In Mexico, among the wide array of trends with potential impact on the decision to emigrate, the most significant demographic change is falling fertility: As of 2009, a typical Mexican woman was projected to have an average 2.4 children in her lifetime, compared with 7.3 for her 1960 counterpart."

4) Mexico's economy was a train wreck for substantial periods of the 1970s and 1980s, and the U.S. economy was an incredible jobs locomotive in the second half of the 1990s in particular. But Mexico's economy is maturing, and the gap between economic opportunities in Mexico and those in the U.S. seems less gaping.

"Mexico today is the world’s 11th-largest country by population with 115 million people and the world’s 11th-largest economy as measured by gross domestic product (World Bank, 2011). The World Bank characterizes Mexico as an “upper-middle income economy,” placing it in the same category as Brazil, Turkey, Russia, South Africa and China. Mexico is also the most populous Spanish-speaking country in the world. ... In the three decades from 1980 to 2010, Mexico’s per capita GDP rose by 22%—from $10,238 in 1980 to about $12,400 in 2010. This increase is somewhat less than the average for all Latin American/Caribbean countries during the same period (33%) and significantly less than the increase in per capita GDP in the United States during this period (66%). Meantime, during this same period, the per capita GDP in China shot up thirteenfold—from $524 in 1980 to $6,816 in 2010. In more recent years, Mexico’s economy, like that of the United States and other countries, fell into a deep recession in 2007-2009. But since 2010 it has experienced a stronger recovery than has its neighbor to the north ..."

5) Prospects for education and health care in Mexico have improved, as well. "For example, 92.4% of all Mexicans ages 15 and older were literate in 2010, up from 83% in 1980. In 2010, the average number of years of education of Mexicans ages 15 and older was 8.6, compared with 7.3 years in 2000. In terms of health care, almost three-in-five (59%) Mexicans in 2000 lacked health care coverage. In 2003, the Mexican federal government created a health care program, Seguro Popular, that provides basic coverage to the uninsured and is free for those living under the poverty line. The share of the Mexican population with access to health care had increased from less than half (41%) in 2000 to slightly more than two-thirds (67%) in 2010, an increase of 26 percentage points."

In short, Mexico in the 1970s and 1980s was demographically top-heavy with teenagers and young adults from large families living in a country with a weak economy and limited prospects for education and health care, right next to a much richer country with a weakly enforced border. A flood of immigration followed. Now, Mexico is on average older, with smaller families, and the prospects for education, health, and finding economic opportunity in Mexico are notably better. Enforcement at the border and within the U.S. economy has ramped up considerably. In that situation, a large resurgence of immigration from Mexico seems unlikely.

The Federal Reserve has set up "swap lines" with other central banks around the world. What are these? Galina Alexeenko, Sandra Kollen, and Charles Davidson offer a nice overview in "Swap Lines Underscore the Dollar's Global Role," in EconSouth from the Atlanta Fed.

The economic issue here is the central role of the U.S. dollar in global economic transactions. As they write, "[O]ne of the major business lines of European banks is providing financing in dollars on a global scale—for trade, purchasing dollar-denominated assets, or syndicating loans to corporations. Banks the world over, in fact, have a great need for dollars because much of the world’s trade, investment, and lending is conducted in U.S. currency." But during an international financial crisis, as various financial markets freeze up, it may be very expensive or even impossible at certain times for banks around the world to get the U.S. dollars they need to carry out transactions. The Federal Reserve's swap lines are a temporary measure to make U.S. dollars available at such times around the world, so that financial instability is less likely to persist and grow.

How does a swap line work? Alexeenko, Kollen, and Davidson explain:

"The swaps involve two steps. The first is literally a swap—U.S. dollars for foreign currency—between the Federal Reserve and a foreign central bank. The exchange is based on the market exchange rate at the time of the transaction. The Fed holds the foreign currency in an account at the foreign central bank, while the other central bank deposits the dollars the Fed provides in an account at the Federal Reserve Bank of New York. The two central banks agree to swap back the money at the same exchange rate, thus creating no exchange rate risk for the Federal Reserve. The currencies can be swapped back as early as the next day or as far ahead as three months.

The second step involves the foreign central bank lending dollars to commercial banks in its jurisdiction. The foreign central bank determines which institutions can borrow dollars and whether to accept their collateral. The foreign central bank assumes the credit risk of lending to the commercial banks, and the foreign central bank remains obligated to return the dollars to the Fed. At the conclusion of the swap, the foreign central bank pays the Fed an amount of interest on the dollars borrowed that is equal to the amount the central bank earned on its dollar loans to the commercial banks. The interest rate on the swap lines is determined by the agreement between the Fed and foreign central banks."
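As a stylized sketch with invented numbers (and ignoring operational details such as how the lending rate is actually set), the round trip for the Fed looks like this:

```python
# Stylized swap line: step 1 exchanges currencies at the market rate, to be
# unwound later at that same locked-in rate; step 2 has the foreign central
# bank lend the dollars on and pass the interest it earns back to the Fed.

def swap_line(dollars, market_rate, lending_rate, term_years):
    foreign_currency_held_by_fed = dollars * market_rate   # step 1: the swap
    interest_earned = dollars * lending_rate * term_years  # step 2: on-lending
    # unwinding at the original rate means no exchange-rate risk for the Fed
    dollars_returned = foreign_currency_held_by_fed / market_rate
    return dollars_returned + interest_earned

# e.g., $10 billion swapped at 0.75 euros per dollar, on-lent at a 1%
# annual rate for a three-month (0.25-year) term
total = swap_line(10e9, 0.75, 0.01, 0.25)
print(f"returned to the Fed: ${total:,.0f}")
```

Whatever the euro does against the dollar during the term, the Fed gets its dollars back plus the interest; the credit risk of the loans to commercial banks stays with the foreign central bank, as the article describes.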

The description helps to clarify why such swap lines don't deserve the label "bailout" or any similarly prejudicial term. The exchange rate for the swap is locked in, and any U.S. dollar loans that are made will pay interest to the Fed. Because the U.S. dollar plays such a central role in global transactions, the Fed is just making sure that a temporary shortfall of dollars in a foreign financial system doesn't make a financial crisis worse.

Here's a timeline for these swap lines. (I found it interesting that these authors differentiate between the "Global Financial Crisis (2007-2008)" and the "European Financial Crisis (2009-current)." I've been trying to sort out in my own mind, without yet reaching firm conclusions, about how to think of these episodes as connected in some ways and separate in others.)

How large have these swap lines been? This graph shows the sharp rise in assets held by the Federal Reserve starting in mid-2008. A fairly substantial portion of these assets (say, $500 billion or so) were held in the form of swap lines at the height of the global financial crisis in late 2008 and early 2009, but these swap lines were ended by February 2010. More recently, you can see on the graph the much smaller swap lines--in the range of $100 billion--established to address the European financial crisis.

Monday, April 23, 2012

The global financial crisis was preceded by a huge run-up in household debt, which in a number of countries helped to fuel a rise in housing prices. When housing prices then deflated, households were left with oversized debt burdens that they couldn't meet. One of the reasons behind the sluggish "recovery" since the Great Recession is that so many households have been struggling to pay down or renegotiate their debts. How to reduce these housing-related debt burdens? Some aggressive policy steps to reduce housing debt are recommended by (to me, at least) an unlikely source: the International Monetary Fund in the April 2012 World Economic Outlook.

Chapter 3 of the report, "Dealing with Household Debt," first rehearses the facts about this cycle of rising debt, housing price bubbles, and then, after the bubble pops, a sluggish recovery. This story is fairly conventional; for example, I posted on "Leverage and the Business Cycle" about a month ago on March 23. What surprised me about the IMF report was the policy recommendations: "[B]old household debt restructuring programs such as those implemented in the United States in the 1930s ... can significantly reduce debt repayment burdens and the number of household defaults and foreclosures. Such policies can therefore help avert self-reinforcing cycles of household defaults, further house price declines, and additional contractions in output."

The IMF uses the U.S. Home Owners' Loan Corporation (HOLC), established in 1933, as its main example of how best to address housing debt--and contrasts it unfavorably with the policy steps the U.S. has taken since 2009. Here's the IMF's description of how the HOLC worked (footnotes, citations, and references to tables and boxes omitted):

"To prevent mortgage foreclosures, HOLC bought distressed mortgages from banks in exchange for bonds with federal guarantees on interest and principal. It then restructured these mortgages to make them more affordable to borrowers and developed methods of working with borrowers who became delinquent or unemployed, including job searches. HOLC bought about 1 million distressed mortgages that were at risk of foreclosure, or about one in five of all mortgages. Of these million mortgages, about 200,000 ended up foreclosing when the borrowers defaulted on their renegotiated mortgages. The HOLC program helped protect the remaining 800,000 mortgages from foreclosure, corresponding to 16 percent of all mortgages. HOLC mortgage purchases amounted to $4.75 billion (8.4 percent of 1933 GDP), and the mortgages were sold over time, yielding a nominal profit by the time of the HOLC program’s liquidation in 1951. The HOLC program’s success in preventing foreclosures at a limited fiscal cost may explain why academics and public figures called for a HOLC-style approach during the recent recession.

A key feature of HOLC was the effective transfer of funds to credit-constrained households with distressed balance sheets and a high marginal propensity to consume, which mitigated the negative effects on aggregate demand discussed above.... Accordingly, HOLC extended mortgage terms from a typical length of 5 to 10 years, often at variable rates, to fixed-rate 15-year terms, which were sometimes extended to 20 years. ... In a number of cases, HOLC also wrote off part of the principal to ensure that no loans exceeded 80 percent of the appraised value of the house, thus mitigating the negative effects of debt overhang discussed above."
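The HOLC figures in the quotation hang together arithmetically, as a quick check shows:

```python
# Consistency check of the HOLC figures quoted above.
mortgages_bought = 1_000_000            # "about one in five of all mortgages"
total_mortgages = 5 * mortgages_bought  # implied total: about 5 million
redefaulted = 200_000                   # renegotiated mortgages that still foreclosed
protected = mortgages_bought - redefaulted
print(protected)                        # 800,000 mortgages protected from foreclosure
print(protected / total_mortgages)      # prints 0.16, the "16 percent of all mortgages"
```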

Here's a figure showing the U.S. housing market in recent years. As the IMF reports: "There were about 2.4 million properties in foreclosure in the United States at the end of 2011, a nearly fivefold increase over the precrisis level, and the “shadow inventory” of distressed mortgages suggests that this number could rise further." The area shaded in blue shows the number of properties in foreclosure. The area shaded in yellow is an estimate of "shadow inventory"--that is, additional properties likely to go into foreclosure. "Shadow inventory indicates properties likely to go into foreclosure based on a number of assumptions. It includes a portion of all loans delinquent 90 days or more (based on observed performance of such loans); a share of modifications in place (based on redefault performance of modified mortgages); and a portion of negative equity mortgages (based on observed default rates)."

Notice that the spike in foreclosures starts at about the same time as housing prices top out, in late 2006, and peaks around early 2009. These numbers don't include the larger number of people-- about 11 million, which is one in every four mortgages in the country--who have "underwater" mortgages where the value of the mortgage exceeds the value of the property.

What policies has the U.S. followed to deal with this foreclosure problem? The IMF reports that the main policy is "the Home Affordable Modification Program (HAMP), the flagship mortgage debt restructuring initiative targeted at households in default or at risk of default." It was adopted in February 2009, and has been revised a number of times since. But so far, the policy hasn't accomplished much. Here's the IMF:

"However, households already in default are excluded from HARP, and the impact on preventing foreclosures is likely to be more limited. HAMP had significant ambitions but has thus far achieved far fewer modifications than envisaged. ... Meanwhile, the number of permanently modified mortgages amounts to 951,000, or 1.9 percent of all mortgages. By contrast, some 20 percent of mortgages were modified by the Depression-era HOLC program, and HAMP’s targeted reach was 3 to 4 million homeowners. By the same token, the amount disbursed ... as of December 2011 was only $2.3 billion, well below the allocation of $30 billion (0.2 percent of GDP)."

Of course, there is a list of reasons for this minimal effect. The program requires the cooperation of creditors and loan servicers, whose participation is voluntary. When mortgages have been bundled together and sold as securities, it's not clear how the renegotiation should work. Tight eligibility rules mean that the unemployed and those who have suffered big drops in income often aren't eligible. The policy typically relied on lower interest rates and longer mortgage terms, but only rarely could it reduce the outstanding principal on a house that had lost value. The reductions in mortgage payments often weren't large, so roughly a third of those who made it through the program ended up defaulting again--which of course reduces anyone's incentive to participate in the first place. Fannie Mae and Freddie Mac, which hold about 60% of outstanding U.S. mortgages, don't participate.

So here we are, six years after the wave of foreclosures started and three years after it peaked, still arguing about whether something substantial ought to be done--and if so, what. Without drilling down into details of the alternative proposals, it seems to me that a modest share of the trillions in federal borrowing in the last few years, along with the trillions of assets that the Federal Reserve has accumulated through its "quantitative easing" policy, might have been better applied to assisting the millions of American households who took out a mortgage and bought a house--implicitly relying on the ability of supposedly better-informed lenders to tell them what they could afford--and then were blindsided by the national downturn in housing market prices.

Friday, April 20, 2012

What jobs do those in the top 1% of the income distribution hold? How have those jobs shifted in recent decades? Jon Bakija, Adam Cole, and Bradley T. Heim have evidence on this question in "Jobs and Income Growth of Top Earners and the Causes of Changing Income Inequality: Evidence from U.S. Tax Return Data." An April 2012 version of their working paper is here; a very similar March 2012 version is here. They have a bunch of interesting tables and analysis: here, I'll give a sampling with two columns from two of their tables about the occupations of the top 1%, and some evidence on how what's driving the top 1% is really the top tenth of 1%.

Here are the occupations of those in the top 1% of the income distribution, in the tax return data, in 1979 and in 2005. What occupations have a smaller share of the top 1%? The share of the top 1% who get their income as "Executives, managers and supervisors (non-finance)" has dropped 5.3 percentage points. The breakdown at the bottom of the table suggests that much of this fall is due to the subcategory of "Executive, non-finance, salaried." The share of the top 1% in the medical profession falls by 1.7 percentage points. The share of the top 1% who are "Farmers & ranchers" falls by 1.5 percentage points.

What occupations comprise a larger share of the top 1%? The share of the top 1% whose occupation is classified as "Financial professions, including management" rises by 5.5 percentage points. The share in "Real Estate" rises by 1.8 percentage points--although surely this gain would be smaller if calculated on data after the drop in housing prices. The share who are lawyers rises by 1 percentage point.

Here's the share of GDP received as income by those in each occupation in the top 1%, again comparing 1979 and 2005. The top line shows that the share of income going to the top 1% roughly doubled over this time. Thus, the interesting question here is whether in some occupations the rise in income was substantially more or less than a doubling. For example, the share of income going to the top 1% in the "Financial Professions, including management" more than tripled, as did the share of income going to those in "Real Estate." The share of income going to "Business operations (nonfinance)" and to "Professors and scientists" almost tripled. On the other side, the share of income in the top 1% going to "Medical" rose by "only" about 50%, and the share of income in the top 1% going to "Farmers & ranchers" declined.

Finally, here's a striking figure that compares income growth from 1979 to 2005 for the top 0.1% to income growth for those in the bottom half of the top 1%: that is, those from the 99th to the 99.5th percentile. The bottom line shows that on average, the growth rate of real income (in this case, excluding capital gains) was 2.4 times as fast for the top 0.1% as it was for those in the 99-99.5 percentiles. For certain occupations like "Executive, non-finance" and "Supervisor, non-finance," the multiple is much higher. The overall pattern here is that while slogans often refer to the top 1%, for most occupations, referring to the top 0.1% might be a more accurate description of where the largest income gains have been occurring.

Thursday, April 19, 2012

Every intro econ class points out that total aggregate demand in the economy is the sum of consumption plus investment plus government plus exports minus imports. It also points out that the "government" category for "government demand" in this equation isn't the total government budget, but rather government spending on purchasing goods and services and paying employees. Parts of the government budget that involve a transfer of funds to consumers are not treated as part of demand by government, but instead are treated as demand by consumers. Daniel Carroll gives the facts behind this distinction in "The Shrinking Government Sector," published in the April 2012 issue of Economic Trends from the Cleveland Fed.
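To make the accounting distinction concrete, here is a toy version of the identity with made-up illustrative numbers (all figures hypothetical, in billions):

```python
# Toy national-accounts example: transfers are NOT counted in G.
# All numbers below are hypothetical, in billions of dollars.

consumption = 700        # household purchases, including spending financed by transfers
investment = 200
gov_purchases = 250      # government demand: goods, services, and employee pay
gov_transfers = 150      # Social Security, unemployment insurance, etc.
exports, imports = 120, 170

# Transfers never enter the identity directly; they appear only insofar
# as households spend them, which is already counted in consumption.
gdp = consumption + investment + gov_purchases + (exports - imports)
print(gdp)  # 1100 -- note gov_transfers is deliberately absent from the sum
```

The point of the sketch is that cutting `gov_transfers` and cutting `gov_purchases` by the same amount look identical in the total budget, but only the latter directly shrinks the "government sector" as measured in GDP.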

First, here's a figure showing government demand or the "government sector" as a share of GDP. Total budgets for federal, state, and local government have been over one-third of GDP. But total government demand for goods and services has actually been falling. Here's Carroll's exposition:

"While it is true that the ratio of government expenditures—including federal, state, and local government—to GDP increased precipitously during the crisis (reaching 21.1 percent in 2009), it has been trending down sharply since. At 19.7 percent as of the fourth quarter of 2011, it has given back 70 percent of its post-crisis increase.

This downward trend is the result of decreasing shares at all levels of government; however, the most significant factor has been cuts at the state and local level. Unlike the federal government share, which currently sits at 15.7 percent, state and local government spending is now nearly 3 percent below its first-quarter 2007 level. Because state and local government accounts for about 60 percent of total government spending, the trend in this component has more weight than the federal component on the overall government share."

Carroll also provides a graph of income transfers by government: again, in the breakdown of GDP into components of aggregate demand, these are allocated to the "consumption" category. Note that this graph isn't directly comparable to the one above, because it starts in 1997 rather than in 1970.

Several intriguing patterns emerge from these graphs:

1) I hadn't known that government spending on goods and services was actually higher in much of the 1970s than it is today, nor that government demand for goods and services had such a big decline in the 1990s. For those who have a vision of government doing things like building roads, providing education and national defense, enforcing laws, and paying for research and development, government is doing less of those things as a share of the economy now than it was a few decades ago.

2) The recent rise in government transfer payments is extraordinarily large: nearly 4% of GDP during the recent recession, or more than 5% of GDP if one compares the peak of the business cycle in 2000 to the trough in 2009 and 2010. For comparison, total defense spending in 2011 was 4.7% of GDP. Thus, just the rise in government transfer payments has been roughly comparable to total defense spending.

3) One way to look at the government budgets is that tax and other revenues pay for transfers, and borrowing pays for all government demand for goods and services. Carroll writes: "[G]overnment as a component of GDP does not include transfers; however, transfers greatly exceed tax revenue and nearly exhaust total revenues. This leaves little funding to pay for government consumption and investment, and so the difference must be borrowed."

What is the "Short Africa"?
"'Africa' in this talk is 'the short Africa': excluding N Africa, Madagascar, Mauritius and South Africa. All these are sharply distinct from the rest of Africa environmentally, agriculturally and economically, and generally well ahead in mean income; poverty reduction; growth; farming (irrigation, fertilizer, seeds); and demographic transition. The short Africa is itself highly diverse, but no more so than is India or China."

What's the demographic and economic challenge for this region?
"Between 1950 and 2012, population in the 'short Africa' rose fivefold. It will more than double again in 2012-50 to 11.3 times its 1950 level. Workforces - people aged 15-65 - are rising faster still, thanks to better child survival and some fall in fertility. In 1985 sub-Saharan Africa had 106 people of prime working age for every 100 dependents. By 2012 there were 120; in 2050 there will be 196. That's a 63% rise in workers-per-dependent from now to 2050 - and a 3.5% rise each year in the number of people aged 15-64. In South and East Asia, a similar rise in workers-per-dependent proved a demographic window of opportunity, contributing about a third of the 'miracle' of growth and poverty reduction - because those extra workers found productive employment: first, in smallholdings, gaining from a green revolution and usually land redistribution; later, in industry and services, as farm transformation released workers. In 'the short Africa', will the swelling ranks of young workers produce Asian miracles - or worsening poverty, unemployment and violent unrest?"
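The demographic ratios in the quotation are internally consistent; a quick check of the quoted figures (all numbers taken from the passage above):

```python
# Working-age adults (15-65) per 100 dependents in sub-Saharan Africa,
# as quoted above for 1985, 2012, and the 2050 projection.
ratio_1985, ratio_2012, ratio_2050 = 106, 120, 196

# Rise in workers-per-dependent between 2012 and 2050.
rise_pct = (ratio_2050 / ratio_2012 - 1) * 100
print(round(rise_pct))  # 63 -- matching the quoted "63% rise ... from now to 2050"
```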

Why smallholder farmers are of central importance.
"Farming will decide in Africa, as it did in Asia. Farms remain the most important income and work source for over 2/3 of the short Africa's economically active - more among the young and the poor. This will change, but not fast."

More land under cultivation isn't the answer.
"Farmers' strategy of feeding themselves by land expansion - forced on them by insufficient public attention to irrigation, fertilizer access and seed improvement - not only failed to maintain living standards: it has run out of steam and is, or is fast becoming, unsustainable in most of Africa. That is, farmland expansion is inducing, or soon will induce, soil depletion that means net farmland loss."

Improvements in irrigation, fertilizer and seeds are a possible answer.
"In 'the short Africa', below 1% of cropland is irrigated (20-25% in S/E/SE Asia in 1965; 35-40% now). Below 2 kg/ha of main plant nutrients - nitrogen, phosphorus, potash - are applied (>150kg/ha in S/E/SE Asia). ... [F]ast yield growth without fertilizers and water-control is bricks without straw."

Summing up.
"'Scientific smallholder intensification' in Africa is no easy path to development. From global evidence, we know it's possible. Is it necessary? Initially, yes. Farm development is only the start of modernization away from agriculture; I'm no agricultural or smallholder fundamentalist. But I'm an income-from-work fundamentalist. 'The short Africa' by 2050 will have 2.3 times today's population - but 3.7 times today's 15-64-year-olds. They need an affordable initial path to workplaces giving income and respect. Otherwise, potential demographic dividend will become demographic disaster. But, with half the people still in severe poverty and States cash-strapped too, what initial path is 'affordable'? One, trodden elsewhere, is scientific intensification of smallholder farms. If there's an alternative, what is it?"

Tuesday, April 17, 2012

The U.S. Treasury has published "The Financial Crisis Response In Charts." The labels on the charts largely tell the following four-part story: 1) There was a deep financial crisis; 2) The government did things; 3) The crisis did not continue; 4) Therefore, what the government did was beneficial and useful and responsible for the recovery. Even those of us who are generally supportive of many of the steps taken during the worst of the financial crisis late in 2008 and early 2009 can spot some logical flaws in that syllogism.

But two of the charts in particular, about the U.S. banking system, caught my eye. The first one is titled: "The financial industry is less vulnerable to shocks than before the crisis." The panels show two lessons: "Banks have added nearly $400 billion in fresh capital as a cushion against unexpected losses and financial shocks. Banks are also less reliant on short-term funding, which can disappear in a crisis and leave them more vulnerable to panics."

The second chart of interest shows that, relative to the U.S. economy, "The U.S. banking system is proportionally smaller than that of other advanced economies." The horizontal axis shows total assets of each country's four largest banks as a share of the economy of their home country; for the United States, the four largest banks by assets are JPMorgan Chase, Bank of America, Citigroup, and Wells Fargo. The vertical axis shows total assets of all commercial banks as a share of GDP. By either measure, U.S. banks are relatively small in international terms.

Of course, this comparison is somewhat misleading. U.S. banks are being compared to the huge U.S. economy, while banks in Belgium and Sweden and Switzerland are being compared to their much smaller national economies, not to overall economy of the European Union. However, the figure still makes a useful point that while the biggest U.S. banks are enormous, by some standards they aren't so large.

Along similar lines, I blogged on December 7, 2011, about "The Rise of Global Banks in Emerging Markets," where I quoted Neeltje van Horen: "In fact, the world’s biggest bank in market value is China’s ICBC. The global top 25 includes eight emerging-market banks. Among these, three other Chinese banks (China Construction Bank, Agricultural Bank of China, and Bank of China), three Brazilian banks (Itaú Unibanco, Banco do Brasil, and Banco Bradesco) and one Russian bank (Sberbank). While excess optimism might have inflated these market values, these banks are large with respect to other measures as well. In terms of assets all these banks are in the top 75 worldwide, with all four Chinese banks in the top 20."

My broader point is that in thinking about the financial system, it's important not to overemphasize just the large U.S. banks. The U.S. financial system is much larger than the banking system, and includes all the ways of borrowing funds like asset-backed securities, commercial paper, and bonds. In addition, there are enormous banks in other countries as well. Moreover, U.S. banks have largely returned to health in terms of holding more assets and being less reliant on short-term financing. Looking ahead, the big issues about stability of the financial system go well beyond the big U.S. banks--although they pose too-big-to-fail issues of their own--and require looking more broadly at international banks and global finance.

Monday, April 16, 2012

I admit to a bias against Trade Adjustment Assistance, because I fear that it plays to an untrue but common stereotype that if only the U.S. economy didn't have to deal with imports, workers would be far more secure. In reality, many factors across the vast U.S. market can cause some workers to lose their jobs: tough domestic competitors, a firm that doesn't keep up with shifts of production processes, shifts in popular tastes for goods and services, poor management decisions, and others. I've never seen a serious argument that most of the workers who lose their jobs in the continual churn of the U.S. labor market do so because of import competition. In my mind, the entire category of unemployed workers--especially in the Long Slump following the Great Recession--can use greater assistance from active labor market policies in moving to new jobs. I don't see why such assistance should be limited to those who can get the U.S. Department of Labor to certify their claim that their jobs were lost specifically because of import competition.

Trade Adjustment Assistance began with the Trade Expansion Act of 1962, although no one was actually ruled eligible for benefits for the first seven years. It has been repeatedly updated and amended over time, especially in recessions and at times when new trade agreements are being discussed, including in 2002, 2009, and again in 2011. It's not a large program. In 2011, TAA included a total of 196,000 participants, about half of whom participated in training programs. The U.S. Department of Labor page describing what kinds of services are provided is here. It's a mixed bag of job search assistance, support for retraining, support for costs of relocation, assistance in paying for transitional health insurance while out of work, and wage subsidies for older workers who end up taking another job at a much reduced wage. The key goal is to help dislocated workers find new jobs.
Training and cash benefits under the TAA will be $826 million in 2012, according to the Congressional Budget Office baseline forecast.

Although I'm not a fan of how Trade Adjustment Assistance is focused on such a limited group, it does offer a testing ground for how these sorts of policies might work. However, results do need to be interpreted with care. Those who are ruled eligible for this program are not a cross-section of American workers across all industries: they tend to be heavily in manufacturing jobs, like steel or textiles or autos, where it is more straightforward to make the argument under the law that their jobs were lost due to foreign competition. But these workers also tend to be older and to have lower education levels compared either with other displaced workers in the U.S. economy or with the U.S. workforce as a whole. They may also be more likely to be in communities that depended on a big manufacturing plant, and that have a limited number of alternative job options. For evidence on these points, a useful starting point is "Does Trade Adjustment Assistance Make a Difference?" by Kara M. Reynolds and John S. Palatucci, in the January 2012 issue of Contemporary Economic Policy. For an earlier look at the subject, Katherine Baicker and M. Marit Rehavi wrote on "Policy Watch: Trade Adjustment Assistance," for the Spring 2004 issue of my own Journal of Economic Perspectives.

The Reynolds and Palatucci paper looks at the 150,000 beneficiaries of Trade Adjustment Assistance in 2007, who on average received benefits worth $5,700 from TAA. They compare "dislocated" workers who lost their jobs and were eligible for TAA benefits to other dislocated workers. Here are their two main findings:

"Unfortunately, we find no statistical evidence that the TAA program improves the average employment outcome of beneficiaries over a comparison group of nonbeneficiary displaced workers with characteristics similar to those workers in the TAA program. Our results imply that while the TAA program may provide an income safety net, it does not help the average displaced worker who is enrolled in the program find new, well-paying employment opportunities. ...

"Upon further examination, however, we find strong evidence that those workers who participate in a TAA-funded training opportunity are more likely to obtain reemployment, and at higher wages, when compared to the TAA beneficiaries who do not participate in training. Specifically, participating in the training component of the TAA program increases the likelihood that the average TAA beneficiary will find new employment by 10–12 percentage points, and reduces the earnings losses of the average worker by 8–10 percentage points, when compared to a group of similar TAA beneficiaries who do not participate in the training component. Although the income support, job and relocation payments, and other TAA benefits may not help workers find new, well-paying employment, training seems to improve employment outcomes for these workers."

Again, it could be hazardous to generalize too broadly from these findings, because those eligible for Trade Adjustment Assistance are not a randomly selected group. For example, it may be that this group is less well-situated to take advantage of job search and placement assistance, but better-situated to benefit from training. But with the unemployment rate above 8% since February 2009, and the Congressional Budget Office forecasting that it won't drop much before 2014, there should be a heightened urgency in figuring out how to help the unemployed find job slots.

Friday, April 13, 2012

How much would raising marginal tax rates on those with high incomes cause their level of income to fall? Emmanuel Saez, Joel Slemrod, and Seth H. Giertz tackle this question in the March 2012 issue of the Journal of Economic Literature in "The Elasticity of Taxable Income with Respect to Marginal Tax Rates: A Critical Review." The JEL paper isn't freely available on-line, although many academics will have access through their libraries, but a 2010 working paper version is available at Saez's website here.

The short answer to the question is 0.25. That is, a plausible mid-range estimate based on the economics research literature is that raising the marginal tax rate by 10% (not 10 percentage points, but 10% above the previous rate) would lead over the long term to a reduction in taxable income of 2.5%. The longer answer is that understanding the implications of the question is difficult. The article itself is on the technical side, but here are some of the key issues.
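To make the elasticity concrete, here is a back-of-the-envelope calculation using the relationship as stated above; the 40% starting rate and $1 million income are made-up illustrative numbers, not figures from the paper:

```python
# Back-of-the-envelope use of the 0.25 elasticity, stated as in the text:
# a 10% proportional rise in the marginal rate cuts taxable income by 2.5%.
# The 40% starting rate and $1 million of taxable income are hypothetical.

elasticity = 0.25
rate_increase = 0.10                  # 10% above the old rate, not 10 points
old_rate = 0.40
new_rate = old_rate * (1 + rate_increase)      # 0.44

income = 1_000_000
new_income = income * (1 - elasticity * rate_increase)
print(round(new_income))              # 975000 -- taxable income falls 2.5%

# The behavioral response erodes part of the mechanical revenue gain:
static_gain = (new_rate - old_rate) * income          # ignores behavior
actual_gain = new_rate * new_income - old_rate * income
print(round(static_gain), round(actual_gain))         # 40000 29000
```

The gap between the static and actual revenue gains is one rough way to see why the elasticity matters for scoring tax proposals.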

When marginal tax rates rise, people will seek to avoid paying at least some of the increase. But how they seek to avoid the higher taxes matters. For example, one possibility is that people react to the marginal tax rate by working fewer hours or by making less entrepreneurial effort. Another possibility is that they find ways to shift taxable income to future years, in which case a decrease in tax revenue now might be offset by an increase in tax revenue later. Yet another possibility is that they find a way to shift the form of income, perhaps by receiving more income in the form of untaxed fringe benefits or in lower-taxed capital gains. People may also react to higher marginal tax rates by taking greater advantage of tax deductions: for example, they may give more to charity.

Economic studies that consider how revenues change in response to past tax changes tend to pick up short-term effects, and the most common short-term effects are probably changes in the timing of taxable income or shifts toward less-taxed forms of income. From society's overall point of view, these changes are not of central importance. In fact, if the problem with higher marginal tax rates is that people are finding ways to avoid paying those higher rates legally, then an obvious answer is to combine the higher marginal tax rates with rules and enforcement to make such legal tax avoidance more difficult. The long-term responses are potentially more worrisome, but also in the nature of things much harder to measure with confidence. Consider the difficulties, for example, of figuring out how a change in higher marginal tax rates might (or might not!) affect incentives to get an additional graduate degree or to start a company. Here's Saez, Slemrod, and Giertz:

"One might expect short-term tax responses to be larger than longer-term responses because people may be able to easily shift income between adjacent years without altering real behavior. However, adjusting to a tax change might take time (as individuals might decide to change their career or educational choices or businesses might change their long-term investment decisions) and thus the relative magnitude of the two responses is theoretically ambiguous. The long-term response is of most interest for policy making although, as we discuss below, the long-term response is more difficult to identify empirically. The empirical literature has primarily focused on short-term (one year) and medium-term (up to five year) responses, and is not able to convincingly identify very long-term responses."

It also seems likely that the economic reaction to higher marginal tax rates is not a single constant number, but may vary for different taxpayers and under different tax regimes (like what is being taxed, and how many opportunities the tax code offers for legally minimizing one's tax burden). For example, it's plausible that higher-income taxpayers have greater incentives and resources to search for legal ways to minimize their tax burden. Indeed, it seems clear that after the 1986 tax reform, which broadened the tax base and reduced top personal income tax rates, there was a large shift in reported income from corporations to the personal income tax, and a vast reduction in personal tax shelters.

The very top incomes were about 60 percent from dividend payments in the early 1960s, and faced top marginal tax rates of about 80%, which suggests that these investors had little control over the form in which their payments were received. But there has been a huge shift, and now top incomes are much more likely to be from partnerships and wage income, which suggests much greater potential for control over the form and timing in which income is received. As they write (citations omitted):

"The difficult question to resolve is to what extent the secular growth in top wage incomes was due to the dramatic decline in top marginal tax rates since the 1960s. This question cannot be resolved solely looking at U.S. evidence. Evidence from other countries on the pattern of top incomes and top tax rates suggests that reducing top tax rates to levels below 50 percent is a necessary—but not sufficient—condition to produce a surge in top incomes. Countries such as the United States or the United Kingdom have experienced both a dramatic reduction in top tax rates and a surge in top incomes, while other countries such as Japan have also experienced significant declines in top tax rates, but no comparable surge in top incomes over recent decades ..."

Among the goals that they suggest for future research are a greater effort to disentangle the different responses to higher tax rates:

"[F]uture research that attempts to quantify the welfare cost of higher tax rates should attempt to measure the components of behavioral responses as well as their sum. It needs to be more attentive to the extent to which the behavioral response reflects shifting to other bases and the extent to which the behavioral response comes from margins with substantial externalities. ... [R]esearchers should be sensitive to the possibility that nonstandard aspects of tax systems and the behavioral response to them—such as salience, information, popular support, and asymmetric response to increases versus decreases—might affect the size of behavioral response."

As I said at the start, the short answer from economic research to the question of how much higher marginal tax rates reduce taxable income is 0.25: a 10% rise in marginal tax rates will tend to reduce taxable income by 2.5%. But given the current state of research, that answer includes considerable uncertainty over its size and its underlying economic meaning.

Thursday, April 12, 2012

Rising life expectancy is a good thing; indeed, Kevin Murphy and Robert Topel estimated in a 2006 paper in the Journal of Political Economy ("The Value of Health and Longevity," 114:5, pp. 871-904) that the 30 years of additional life expectancy gained by an average American during the 20th century was worth $1.3 million per person. But growing life expectancies also put pressure on public and private retirement programs. What if those programs are underestimating how much longevity is likely to rise? The April 2012 Global Financial Stability Report from the IMF tackles this question in Chapter 4: "The Financial Impact of Longevity Risk."

The first step in their argument is to make plausible the claim that government and private retirement plans may well be understating how much longevity is likely to rise (citations and references to exhibits omitted): "The main source of longevity risk is therefore the discrepancy between actual and expected lifespans, which has been large and one-sided: forecasters, regardless of the techniques they use, have consistently underestimated how long people will live. These forecast errors have been systematic over time and across populations. ... In fact, underestimation is widespread across countries: 20-year forecasts of longevity made in recent decades in Australia, Canada, Japan, New Zealand, and the United States have been too low by an average of 3 years. The systematic errors appear to arise from the assumption that currently observed rates of longevity improvement would slow down in the future. In reality, they have not slowed down, partly because medical advances, such as better treatments for cancer and HIV-AIDS, have continued to raise life expectancy ..."

Here are a couple of illustrative figures. The first shows projected life expectancies for the United Kingdom. Starting at the bottom left, the lines show the projected rise in life expectancies at that time. Notice that the projected increases consistently underestimate the actual rise, which is the black line on top.

The table below shows, in the first column, the typical life expectancy at age 65 used by pension funds in a number of countries. The second column shows the current estimates of life expectancy at age 65. Notice that in each case, the number used for future life expectancy in the first column is above the current estimate of life expectancy, which is wise. But also notice that for a number of countries, including the United States, the actual increase in life expectancy since 1990 is substantially larger than the difference between columns 1 and 2.

The IMF asks what would happen if life expectancy by 2050 turns out to be three years longer than currently projected in government and private retirement plans: "[I]f individuals live three years longer than expected--in line with underestimations in the past--the already large costs of aging could increase by another 50 percent, representing an additional cost of 50 percent of 2010 GDP in advanced economies and 25 percent of 2010 GDP in emerging economies. ... [F]or private pension plans in the United States, such an increase in longevity could add 9 percent to their pension liabilities. Because the stock of pension liabilities is large, corporate pension sponsors would need to make many multiples of typical annual pension contributions to match these extra liabilities."

What's to be done? One step could be to build into the benefit formula for public pensions, like Social Security, provisions for an automatic decline in expected benefit levels for future retirees as life expectancy rises. The enormous change that has taken place in private retirement plans, switching from "defined benefit" plans in which the employer promises a stream of future payments to "defined contribution" plans in which the employee has a retirement account with a certain amount in it, acts as a way of transferring longevity risk from employers to workers. Of course, retirees can then protect themselves from longevity risk and outliving their assets by putting a substantial share of their retirement funds in annuities.

There are also proposals for innovative financial assets like "longevity bonds." Say that a pension plan worries that it has underestimated longevity risk, and thus will have to make higher payments than it expects. The plan buys a longevity bond, where the payments that the plan receives from that bond would rise if longevity exceeds certain benchmarks. Because the return on the bond would offset some longevity risk, the buyers of the bond would be willing to accept a lower interest rate than they otherwise would demand. However, no longevity bonds have yet been successfully issued.
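To illustrate the mechanism, here is a stylized sketch of how such a bond's coupon might be structured; the payoff rule and all the numbers are hypothetical, since no such bond has actually been issued:

```python
# Stylized longevity bond: the coupon steps up when a reference cohort's
# realized survival rate exceeds the benchmark assumed at issue, partially
# offsetting a pension plan's extra liabilities. All parameters hypothetical.

def longevity_coupon(notional, base_rate, realized_survival, benchmark_survival):
    """Coupon rises one-for-one with survival above the benchmark rate."""
    excess = max(0.0, realized_survival - benchmark_survival)
    return notional * (base_rate + excess)

# If 82% of the cohort is still alive when the benchmark assumed 78%,
# the coupon on a $100 million notional steps up from $2 million:
print(round(longevity_coupon(100_000_000, 0.02, 0.82, 0.78)))  # 6000000
```

The higher payout when the cohort outlives the benchmark is exactly what offsets the pension plan's larger-than-expected liabilities, which is why the plan would accept a lower base coupon in exchange.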

It's not enough just to set an expectation for how much the population will age, and to plan accordingly. We also have to plan for the historically likely possibility that life expectancies may grow faster than we expect.

Wednesday, April 11, 2012

"Pierre, an expensively attired middle-aged French tourist on his first trip to Toronto, strolls into the bar of his 5-star hotel. The elegant hostess smiles, leads him to a table and beckons her prettiest server to take care of him. They talk, flirt a little and she giggles a bit. When he draws her closer and whispers in her ear, she gasps and runs away.

The hostess frowns, then sends a more experienced waitress to the gentleman's table. They talk, flirt a little and giggle a bit. He whispers in her ear and she too screams, “No!” and walks away quickly.

The hostess is surprised ... . Rather than alienate a high-powered customer, she asks Lucille, her seen-it-all, heard-it-all bartender, to take his order. They talk, flirt a little and Lucille even giggles a bit. When he whispers in her ear, she screams, “NO WAY, BUDDY!” then smacks him as hard as she can and leaves.

The hostess is now intrigued, having seen nothing like this in all her years working in bars. ... Besides, she has to find out what this man wants that makes her girls so angry. ... So she goes to Pierre’s table, wishes him a pleasant evening and tells him she’ll personally take care of his needs. ... They flirt a little, giggle a bit and talk. Pierre leans forward and whispers in her ear, “Can I pay in Euros?”"

"[T]he world's population is migrating from rural areas--accounting for 70% of global population in 1950--to cities--accounting for 70% of global population by 2050 based on United Nations projections. In 2009, the percentage of the planet's population living in urban areas crossed the 50% threshold and by 2037 cities in developing nations will contain half the world's total population. .... "

Here's a figure to illustrate the point. World population is divided into four groups. As recently as the 1990s, more than half of all the world's population lived in rural areas of developing countries. The projections are that in 25 years, more than half of world population will live in urban areas of developing countries.

The coming urbanization will be almost entirely a developing country phenomenon: "Looking at population projections in the world's 15 largest urban agglomerations in 2025, we note that just two--Tokyo and New York City--are in developed countries. Not a single one of the world's 25 fastest projected growing major cities is in a developed country." But even within developing countries, a number of countries are already seeing decelerating urbanization, because their levels of urbanization are already 70-90% of the population, and so there isn't much more room to urbanize. For example, Malaysia, Mexico, Peru, Colombia, Turkey, Czech Republic, Russia and Hungary all have urbanization rates already above 66%, and Argentina, Brazil, Chile, South Korea and Saudi Arabia all have urbanization rates already over 80%.

But another group of countries, mostly in Asia and Africa, but also including Poland, are seeing accelerating urbanization: South Africa, Morocco, Nigeria, Poland, China, Philippines, Indonesia, Egypt, Thailand, Pakistan, Vietnam, Bangladesh, India, Kenya. As the report points out, this list includes six of the eight largest countries by population in the world, and about 55% of world population.

The report argues: "Exploring the relationship between per capita economic growth and urbanization, we find that there is a sweet-spot as countries urbanize (in the range of 30%-50% of total population), accompanied by peak per capita GDP growth." It offers a discussion of each of these countries with accelerating urbanization.

Here, I'll just make the overall point that urbanization creates economies and diseconomies. On one side, it concentrates consumers, workers, and firms in a way that allows a flow of information, goods, and services that feeds specialization, economies of scale, technological development, and economic growth. On the other side, it also concentrates problems of pollution, crime, poor health, poverty, and corruption, and raises needs for physical infrastructure and working institutions. The politics and economics of the future are likely to be hammered out, one issue at a time, for better or worse, in the large cities of developing countries. I discussed some of these broader issues of urbanization in "The Coming Urban World," a post from August 29, 2011.

Tuesday, April 10, 2012

The U.S. federal budget is typically measured on a cash basis: that is, how much tax money came in and how much spending went out. But for a more complete picture of any budget, it is useful to look at an accrual budget: that is, including not just current spending, but what spending has already been committed for the future. The federal government takes a stab at providing an overview of an accrual budget each year in a report from the U.S. Treasury called the Financial Report of the United States Government: the 2011 edition is here.

Here's a graphic showing all the assets and debts of the U.S. government from an accrual perspective. Federal debt held by the public, the usual measure of federal debt, is about $10.2 trillion. "As of September 30, 2011, the Government held about $2.7 trillion in assets, comprised mostly of net property, plant, and equipment ($852.8 billion) and a combined total of $985.2 billion in net loans receivable, mortgage-backed securities, and investments." The big addition here is the $5.8 trillion already owed in employee and veterans' benefits. Taking these legal obligations into account increases the government's liabilities by more than half.

What about future obligations for Social Security and Medicare? Strictly speaking, these are not legal obligations in the same sense as benefits owed to federal employees and to veterans. The U.S. government can and probably will adjust the revenue and spending sides of Social Security and Medicare. That said, the report does offer some estimates of current federal obligations in this area. Here are some numbers for Social Security and the different parts of Medicare. The numbers are "present values" over 75 years (that is, how much money in the present, at an assumed rate of interest, would be equal to the sum over 75 years).

Thus, for example, the present value of all the revenues Social Security is scheduled to receive over the next 75 years is $41.6 trillion, the present value of total expenditures over that time is $50.8 trillion, and the current unfunded gap is $9.2 trillion. The multi-trillion dollar gaps for Part A (Hospital Insurance), Part B (Supplemental Medical), and Part D (Pharmaceuticals) of Medicare are also shown. The total gap is about $33 trillion--which for comparison is about twice as large as the $17.5 trillion in legally obligated liabilities in the chart above.
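The "present value" calculation behind these figures can be sketched in a few lines: each future year's flow is discounted back to the present at an assumed interest rate and then summed. The flows and the 3% rate below are illustrative assumptions, not the Treasury's actual projections.

```python
# Present value of a stream of future flows, discounted at an assumed rate.
# Illustrative only: a flat $1 trillion annual gap over 75 years at a 3% rate.

def present_value(annual_flows, rate):
    """Discount each year's flow back to the present and sum."""
    return sum(flow / (1 + rate) ** t
               for t, flow in enumerate(annual_flows, start=1))

flows = [1.0] * 75            # $1 trillion per year, for 75 years
pv = present_value(flows, 0.03)
print(round(pv, 1))           # about 29.7: far less than the undiscounted 75
```

The point of the exercise: because distant flows are discounted heavily, a 75-year stream of obligations is "worth" much less in present-value terms than its simple sum, which is why these estimates are so sensitive to the assumed interest rate.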

Two main lessons emerge from these figures. First, the debt obligations of the federal government are far larger than the $10 trillion or so in debt owed to the public. Adding what is legally owed in benefits to federal employees and veterans, together with promises already made to Social Security and Medicare, the total amount owed would reach about $50 trillion.

Second, of this $50 trillion in what is owed, about half is because of Medicare. America's health care system is a huge part of what is driving our long-term fiscal problems. Indeed, the numbers in the table probably understate the effect of future rises in health care spending for several reasons. The table shows only Medicare, but Medicaid is also a substantial spending program without any dedicated payroll tax or funding source. Moreover, the estimates of Medicare costs in the table are likely to be far too low. At least, this was the conclusion of Richard S. Foster, Chief Actuary, Centers for Medicare & Medicaid Services, who wrote in a "Statement of Actuarial Opinion" in an appendix to the 2011 Annual Report of the Medicare Trustees:

"[T]he financial projections shown in this report for Medicare do not represent a reasonable expectation for actual program operations in either the short range (as a result of the unsustainable reductions in physician payment rates) or the long range (because of the strong likelihood that the statutory reductions in price updates for most categories of Medicare provider services will not be viable). ... Although the current-law projections are poor indicators of the likely future financial status of Medicare, they serve the useful purpose of illustrating the exceptional improvement that would result if viable means can be found to permanently slow the growth in health care expenditures."

There is a raging argument over whether to attack the federal budget deficits now, or whether to wait until the economy recovers further and the unemployment rate falls. Compared with this picture of current federal debt from an accrual perspective -- a low-ball estimate of $50 trillion in accumulated obligations -- the short-term decisions are relatively small potatoes.

Monday, April 9, 2012

For many people, saying that a little bit of inflation can have good effects is akin to saying that a little bit of leukemia can have good effects. Too much inflation, especially volatile rates of inflation, does operate like sand in the gears of an economy, by making it unclear how much prices throughout an economy are rising or falling in real terms. But when an economy is trying to climb out of a recessionary episode caused by a wave of overindebtedness, and is suffering sustained high unemployment at the same time, a bit of inflation can grease the transition.

When it comes to overindebtedness, a bit of inflation means that past debts can be repaid in inflated dollars. For the millions of homeowners struggling with mortgages that are worth more than the value of their property, as well as others with high debts, a bit of inflation is a breath of fresh air. In the case of wages, standard price theory suggests that when unemployment is high and a large quantity of labor is available, wages should fall--for the same reason that when the quantity of apples at an autumn farmers' market is high, the price of apples will be lower than at other times. But employers are reluctant to cut wages. Doing so decreases the morale of existing workers, and encourages high-productivity workers--who have better outside options--to look for other jobs. In contrast, apples don't get sulky and inefficient when the price of apples declines.

But here's a kicker: Workers strongly dislike cuts in nominal wages, but they are typically less annoyed by cuts in real wages. Here's an intriguing figure from Mary Daly, Bart Hobijn, and Brian Lucking at the San Francisco Fed. The solid thin blue line shows the inflation rate; the thicker red line shows nominal growth in wages; and the dashed black line shows the real wage growth--that is, the growth in the buying power of wages after taking inflation into account.

These data suggest that average real wage growth in the U.S. economy was negative for most of the 1980s and half of the 1990s, then crept into positive territory for a time, before dropping off to negative again in 2011. This graph must be interpreted with care, because it doesn't mean that the average real wage of those who held jobs in 1982 was falling for a decade. The workforce changes over time. In the 1980s, for example, there was a dramatic increase in the number of women entering the (paid) labor market, and many of them took relatively lower-wage work. This factor would tend to reduce the rise in "average" wages in any given year, even if those who were already in the workforce in the late 1970s saw a rise in their wages over that decade. However, the graph also helps to explain how the U.S. economy adjusted from very high unemployment rates in the early 1980s to low unemployment rates by the mid-1990s: in short, average real wages were lower, which encouraged more hiring.
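The relationship the figure illustrates is simple arithmetic: real wage growth is (approximately) nominal wage growth minus inflation, so moderate inflation can deliver real wage cuts without any nominal cut. The numbers below are illustrative, not taken from the Daly, Hobijn, and Lucking data.

```python
# Real wage growth from nominal wage growth and inflation.
# The exact version uses ratios rather than simple subtraction.
# Illustrative numbers: 2% nominal raises during 4% inflation.

def real_wage_growth(nominal_growth, inflation):
    """Exact real growth rate implied by nominal growth and inflation."""
    return (1 + nominal_growth) / (1 + inflation) - 1

g = real_wage_growth(nominal_growth=0.02, inflation=0.04)
print(round(g, 4))  # -0.0192: real wages fall even though nominal wages rose
```

This is the mechanism in the text: workers who would fight a 2% nominal pay cut often accept a 2% raise during 4% inflation, even though the two leave their buying power in roughly the same place.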

Friday, April 6, 2012

Back in 2008, Cass R. Sunstein wrote a book with Richard Thaler called Nudge: Improving Decisions About Health, Wealth, and Happiness. The focus of the book was how to take findings from behavioral economics and apply them to affecting behavior. Thus, since President Obama appointed Sunstein to be the Administrator of the Office of Information and Regulatory Affairs in the Office of Management and Budget, there has been considerable interest in seeing how he might put this approach into effect. In the Fall 2011 issue of the University of Chicago Law Review, Sunstein has written "Empirically Informed Regulation," which discusses his approach and a selection of the policy results.

Sunstein starts this way (footnotes omitted): "In recent years, a number of social scientists have been incorporating empirical findings about human behavior into economic models. These findings offer useful insights for thinking about regulation and its likely consequences. They also offer some suggestions about the appropriate design of effective, low-cost, choice-preserving approaches to regulatory problems, including disclosure requirements, default rules, and simplification. A general lesson is that small, inexpensive policy initiatives can have large and highly beneficial effects." Here are a few examples of the issues and possibilities that he raises for such an approach:

"In the domain of retirement savings, for example, the default rule has significant consequences. When people are asked whether they want to opt in to a retirement plan, the level of participation is far lower than if they are asked whether they want to opt out. Automatic enrollment significantly increases participation."

"For example, those who are informed of the benefits of a vaccine are more likely to become vaccinated if they are also given specific plans and maps describing where to go. Similarly, behavior has been shown to be significantly affected if people are informed, not abstractly of the value of “healthy eating,” but specifically of the advantages of buying 1 percent milk as opposed to whole milk."

"When patients are told that 90 percent of those who have a certain operation are alive after five years, they are more likely to elect to have the operation than when they are told that after five years, 10 percent of patients are dead. It follows that a product that is labeled “90 percent fat-free” may well be more appealing than one that is labeled “10 percent fat.”"

"In some contexts, social norms can help create a phenomenon of compliance without enforcement—as, for example, when people comply with laws forbidding indoor smoking or requiring the buckling of seat belts, in part because of social norms or the expressive function of those laws."

"Many people believe that they are less likely than others to suffer from various misfortunes, including automobile accidents and adverse health outcomes. One study found that while smokers do not underestimate the statistical risks faced by the population of smokers, they nonetheless believe that their personal risk is less than that of the average nonsmoker."

The wave of behavioral economics research seems to me one of the most intriguing and fruitful developments in economics in the last few decades. However, in thinking about its value as a method of improving regulation, I often find myself feeling skeptical. Although there is much to praise in Sunstein's essay and approach to regulation, let me focus here on raising four skeptical questions.

1) How big a deal is this combination of behavioral economics and regulation?

The work on how people's savings patterns are affected by whether they face a default rule seems to me the shining success of behavioral economics as applied to policy. It addresses an issue of first-order importance that cuts across macroeconomics, microeconomics, and social policy: Why do so many people save so little?

However, a number of the other applications seem to me relatively small potatoes. For example, at one point Sunstein lists nine examples of regulations that have been simplified or eliminated. If you add together his estimated cost savings for all nine rules, it's about $1 billion per year. I'm in favor of saving that $1 billion each year! But in the context of federal regulation and the U.S. economy, it's not a large amount.

2) Does behavioral economics imply more regulation, or just offer suggestions for better regulation?

Sunstein clearly takes the second position: "An understanding of the findings outlined above does not, by itself, demonstrate that “more” regulation would be desirable. ... It would be absurd to say that empirically informed regulation is more aggressive than regulation that is not so informed, or that an understanding of recent empirical findings calls for more regulation rather than less. The argument is instead that such an understanding can help to inform the design of regulatory programs."

3) How well can the government apply these lessons?

There are reasons to doubt how well government can apply these insights as it goes about its regulatory tasks. As Sunstein writes: "It should not be necessary to acknowledge that public officials are subject to error as well. Indeed, errors may result from one or more of the findings traced above; officials are human and may also err. The dynamics of the political process may or may not lead in the right direction."

Consider for a moment a seemingly simple policy, like improved disclosure requirements. What rule should be followed? Here's how Sunstein phrases it: "Disclosure requirements should be designed for homo sapiens, not homo economicus (the agent in economics textbooks). In addition, emphasis on certain variables may attract undue attention and prove to be misleading. If disclosure requirements are to be helpful, they must be designed to be sensitive to how people actually process information. A good rule of thumb is that disclosure should be concrete, straightforward, simple, meaningful, timely, and salient."

Just how to apply this perspective in the case of, say, the USDA food pyramid or health warnings on cigarette packages or public information on toxic chemical releases is not going to be straightforward. It made me smile that at the back of Sunstein's paper, there is an appendix about "open and transparent government" that takes 12 pages of bureaucratese to explain what the term means. Disclosure requirements and other regulations are going to be the subject of intense lobbying, and there will be pressure from many parties to make people feel as if their politicians are being public-spirited and responsive, while continuing to conceal relevant costs and tradeoffs.

4) Is overcoming these issues unambiguously beneficial?

An often-unspoken assumption in this literature is that people are always better off if they have better information, or better disclosure rules, or a more accurate perception of risk. This isn't necessarily so. For example, a recent working paper by Jacob Goldin at Princeton's Industrial Relations Center tackles the issue of "Optimal Tax Salience." The paper is technical, but under the math is a basic intuition: if people are unaware that their marginal tax rate is rising, then they will not cut back as much on work effort. In that narrow sense, the costs of higher tax rates would be reduced. Goldin makes a case that having a mixture of taxes that are more and less salient may actually end up being better for society.

There's no reason government shouldn't be able to learn from the management and marketing literature about how to affect people's behavior. If government is going to impose a regulation, it should be designed to work better rather than worse. And yet, the point of departure of behavioral economics is that people don't always know very clearly what they want. People are affected by how questions are framed, by what information they have, by default rules, by how risks are perceived, by whether costs and benefits are immediate or long-term, and by social norms. Most of us recognize that private sector actors try to manipulate our decisions through these factors, and we are rightly skeptical that they are doing so in our own best self-interest.

Thus, I find that I tend to be more comfortable with clear-cut government actions, like readily apparent taxes and subsidies, regulations that set certain standards or forbid certain activities, or default rules where the possibility of opting out is clearly stated. There is some virtue in having government be clunky and apparent in its actions; conversely, a government that views its task as being more subtle and manipulative, affecting choices in ways that people can't easily perceive, seems to me a potential cause for concern. For example, I'm more comfortable with a tax on gasoline or on carbon than I am with government attempting to discourage fossil fuel use by providing the public with what some government agency has decided is the relevant, meaningful, timely, and salient information.