
Friday, September 28, 2012

Back in July 2010, President Obama signed into law the Dodd-Frank Wall Street Reform and Consumer Protection Act. The difficulty with the law has always been that while it was fairly clear on its goals, it did not specify how to reach those goals--instead turning over that task to current and newly-created regulatory agencies. If you're looking for an update on how the law is proceeding, a good starting point is the Third Quarter 2012 issue of Economic Perspectives, published by the Federal Reserve Bank of Chicago, which has six articles on the Dodd-Frank legislation.

Douglas D. Evanoff and William F. Moeller offer an overview of the goals and approach of the law in their opening piece (footnotes and citations omitted):

"The stated goals of the act were to provide for financial regulatory reform, to protect consumers and investors, to put an end to too-big-to-fail, to regulate the over-the-counter (OTC) derivatives markets, to prevent another financial crisis, and for other purposes. ... Implementation of Dodd–Frank requires the development of some 250 new regulatory rules and various mandated studies. There is also the need to introduce and staff a number of new entities (bureaus, offices, and councils) with responsibility to study, evaluate, and promote consumer protection and financial stability. Additionally, there is a mandate for regulators to identify and increase regulatory scrutiny of systemically important institutions. ... Two years into the implementation of the act, much has been done, but much remains to be done."

How are those rules coming along? The law firm of Davis Polk & Wardwell publishes a regular Dodd-Frank report. The September 2012 edition summarizes:

"As of September 4, 2012, a total of 237 Dodd-Frank rulemaking requirement deadlines have passed. This is 59.5% of the 398 total rulemaking requirements, and 84.6% of the 280 rulemaking requirements with specified deadlines.

"Of these 237 passed deadlines, 145 (61.2%) have been missed and 92 (38.8%) have been met with finalized rules. Regulators have not yet released proposals for 31 of the 145 missed rules.

"Of the 398 total rulemaking requirements, 131 (32.9%) have been met with finalized rules and rules have been proposed that would meet 135 (33.9%) more. Rules have not yet been proposed to meet 132 (33.2%) rulemaking requirements."
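Since these figures are simple share arithmetic on counts of rules, they can be reproduced directly. Here is a minimal check using the counts quoted in the report:

```python
# Reproduce the Davis Polk percentages from the underlying counts.
total_rules = 398          # total Dodd-Frank rulemaking requirements
with_deadlines = 280       # requirements with specified deadlines
deadlines_passed = 237
missed, met = 145, 92      # outcomes for the passed deadlines

assert missed + met == deadlines_passed
print(round(100 * deadlines_passed / total_rules, 1))      # 59.5
print(round(100 * deadlines_passed / with_deadlines, 1))   # 84.6
print(round(100 * missed / deadlines_passed, 1))           # 61.2
print(round(100 * met / deadlines_passed, 1))              # 38.8

finalized, proposed, not_proposed = 131, 135, 132
assert finalized + proposed + not_proposed == total_rules  # categories exhaust the total
print(round(100 * finalized / total_rules, 1))             # 32.9
```

The counts add up exactly, which is a useful sign that the report's three categories (finalized, proposed, not yet proposed) are mutually exclusive and exhaustive.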

The July 2012 Davis Polk update--marking the two-year anniversary of the legislation--offers some additional detail: "The two years since Dodd-Frank’s passage have seen 848 pages of statutory text expand to 8,843 pages of regulations. Already at almost a 1:10 page ratio, this staggering number represents only 30% of required rulemaking contained within Dodd-Frank, affecting every area of the financial markets and involving over a dozen Federal agencies."

It's important to recognize that writing a new regulation isn't as simple as, well, just writing it. Instead, there is often first an in-house study, followed by a draft regulation, which is then opened to public comment, after which it can be revised, and eventually at some point a new regulation is created. It's not unusual for a regulation to get dozens or hundreds of detailed public comments.

This blizzard of evolving rules has to create considerable uncertainty in the financial sector. Matthew Richardson discusses the complexities of one particular issue in his contribution to the Chicago Fed publication. He picks one example: the problem that many banks made very low-quality subprime mortgage loans. What does the Dodd-Frank legislation do about this basic issue? As he describes, the act: 1) Sets up a Consumer Financial Protection Bureau in Title X to deal with misleading products; 2) Imposes particular underwriting standards for residential mortgages; 3) Requires firms performing securitization to retain at least 5 percent of the credit risk; and 4) Increases regulation of credit rating agencies. Each of these tasks requires detailed rulemaking. And as Richardson points out, "with all of these new provisions, the act does not even address what we at NYU Stern consider to be a primary fault for the poor quality of loans—namely, the mispriced government guarantees in the system that led to price distortions and an excessive buildup of leverage and risky credit."

I'm skeptical of anyone who has strong opinions about the Dodd-Frank legislation, because here we are more than two years later, less than halfway toward figuring out what rules the legislation will actually put in place. Wayne A. Abernethy of the American Bankers Association is one of the authors in the Chicago Fed symposium. Yes, he is speaking for the bankers' point of view. But his judgement about the overall process seems fair to me:

"At least in the financial regulatory history of the United States, there has never been anything like it. I have seen no definitive count of the number of regulations that the Dodd–Frank Act calls forth. The numbers seem to range between 250 and 400—numbers so large that they are numbing. It all defies hyperbole. The Fair and Accurate Credit Transactions Act, adopted in 2003, astonished the financial industry with more than a dozen significant new regulations to be written. ...

"One of the most common criticisms of Dodd–Frank implementation has been a lack of order and coordination in the regulatory process. Instead, the Dodd–Frank Act has succeeded in replacing the financial crisis with a regulatory crisis. ... As agencies are grappling with impossible rulemaking tasks, most of them are also engaged in major structural reorganizations and shifts in the areas of responsibility. ... Nothing like this has ever been tried before in the history of the United States. Writing 400 financial regulations of the highest significance and the greatest complexity in a couple of years has clearly been too much to expect. ... Getting on with the work to end our self-inflicted regulatory crisis should be among the highest priorities."

I'm someone who believes that financial regulation needed shaking up. Many of the broad goals of the Dodd-Frank legislation make sense to me: rethinking bank regulation to deal with macroeconomic risk, not just the risk of an individual institution going broke; figuring out better ways to shut down even large financial institutions when needed; better regulation of certain financial instruments like credit default swaps and repo agreements; a closer look at technologies that allow ultra-high-speed financial trading; and others.

The Dodd-Frank legislation is almost not a law in the conventional meaning of the term, because it mostly isn't about actual specific activities that are prohibited. Instead, it's about handing over the difficult problems to regulators and telling them to fix them. I'm not sure there was an easy alternative to this regulatory approach: the idea of Congress trying to debate, say, appropriate regulation of the over-the-counter swaps market is not an encouraging thought. But stating a goal is not the same as solving a problem. The passage of Dodd-Frank, in and of itself, didn't solve any problems.

The starting point is to look at labor income relative to the size of the economy. The top line in the figure shows labor income as a share of GDP, as measured in the national income and product accounts from the U.S. Bureau of Economic Analysis. The lower line in the figure shows the ratio of compensation to output for the nonfarm business sector, as measured by the U.S. Bureau of Labor Statistics. The measures are not identical, nor would one expect them to be, but they show the same trend: that is, with some ups and downs as the economy has fluctuated, the labor share of income has been falling for decades, and is now at an historically low figure.

This fact lies behind much of the rise in inequality of incomes over this time. The income that is not being earned by labor is being earned by capital--and capital income is much more concentrated than is labor income. Jacobson and Occhino offer an intriguing figure that measures the inequality of labor income and the inequality of capital income. The measure used here is a Gini coefficient, which "ranges between 0 and 1, with 0 indicating an equal distribution of income and 1 indicating unequal income." (Here's an earlier post with an explanation of Gini coefficients.)
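For readers who want the mechanics behind that measure, the Gini coefficient has a simple closed form for a sorted list of incomes. This is a minimal sketch, not code from the underlying paper; the function name and sample data are invented for illustration:

```python
def gini(incomes):
    """Gini coefficient: 0 = perfectly equal; values near 1 = highly unequal."""
    xs = sorted(incomes)
    n = len(xs)
    # Standard formula for sorted data: G = 2*sum(i*x_i) / (n*sum(x)) - (n+1)/n
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2 * weighted / (n * sum(xs)) - (n + 1) / n

print(gini([40, 40, 40, 40]))   # 0.0 -- everyone earns the same
print(gini([10, 20, 30, 100]))  # 0.4375 -- income concentrated at the top
```

The same function applied to labor incomes and to capital incomes separately would show the pattern Jacobson and Occhino describe: capital income produces the higher coefficient.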

The figure has two main takeaways. First, labor income has become more unequally distributed over time, but since the early 1990s, the big shift in income inequality is because capital income is more unequally distributed. Second, capital income tends to rise during booms and to fall in recessions. Thus, it seems plausible that the inequality of capital income has dropped in the last few years, during the Great Recession and its aftermath, but will rise again as economic growth recovers.

What has caused the long-run decline of the labor share of income? Jacobson and Occhino explain it this way: "[W]e begin by looking at what determines the labor share in the long run. The main factor is the technology available to produce goods and services. In competitive markets, labor and capital are compensated in proportion to their marginal contribution to production, so the most important factor behind the labor and capital shares is the marginal productivities of labor and capital, which are determined by technology. In fact, one important cause of the post-1980 long-run decline in the labor share was a technological change, connected with advances in information and communication technologies, which made capital more productive relative to labor, and raised the return to capital relative to labor compensation. Other factors that have played a role in the long-run decline in the labor share are increased globalization and trade openness, as well as changes in labor market institutions and policies."

There is no particular reason to believe that these trends will continue--or that they won't. But the declining share of income going to labor suggests the importance of finding ways to increase the marginal product of labor, especially for workers of low and medium skills, perhaps by focusing on the kind of training and networking that might help them make greater use of the advances in information and communication technology to improve their own productivity.

"Neutral taxation of owner-occupied housing would call for taxing its imputed rental value, but allowing a full mortgage interest deduction." For those not indoctrinated into the jargon, the idea here is that when you live in a house that you own, you are--in a way--renting that house to yourself. Thus, you are in effect paying rent to yourself, and paying mortgage expenses. In a pure income tax, you would pay income tax on the income you receive from your owned-and-rented-to-yourself property ("imputed rental value"), but you would be able to deduct from taxation the costs of that property--namely, the interest paid on the mortgage.

This logic may seem counterintuitive to many homeowners! But another way to think about it is that a pure income tax should not favor owning over renting. (That is, the decision to favor owning is a political policy decision that has costs and benefits, but it's not part of a pure income tax.) Thus, if I buy a house and rent it out, or if I buy the same house and live in that house, my income tax bill should look the same.
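A small numeric example makes the neutrality point concrete. The tax rate and dollar amounts below are hypothetical, chosen only for illustration:

```python
TAX_RATE = 0.25             # hypothetical flat income tax rate
market_rent = 24_000        # what the house would rent for per year
mortgage_interest = 15_000  # annual interest paid on the mortgage

# Case 1: buy the house and rent it out to a tenant.
landlord_taxable = market_rent - mortgage_interest

# Case 2: buy the same house and live in it, under a *pure* income tax
# that taxes imputed rent and allows the interest deduction.
owner_taxable = market_rent - mortgage_interest

print(landlord_taxable * TAX_RATE)  # 2250.0
print(owner_taxable * TAX_RATE)     # 2250.0 -- identical, as neutrality requires

# Current U.S. treatment: imputed rent untaxed, interest still deductible,
# so the house generates negative taxable income for the owner-occupier.
print((0 - mortgage_interest) * TAX_RATE)  # -3750.0, a tax subsidy to owning
```

Under the pure income tax the two choices face the same tax bill, so the tax code is silent on whether to own or rent; under the current treatment, owning is subsidized.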

But practical difficulties surface immediately, of course. How would "imputed rental income" be calculated? Grigg and Thornton write: "Taxing imputed rents has generally proved impracticable, however, although several countries have at one time or another done so. Belgium taxes imputed rent, but the value was last reviewed in 1975 and has been indexed to inflation since 1990, resulting in imputed rents generally below their market counterparts, especially for old houses. In the Netherlands, imputed income is calculated as a percentage (up to 0.55 percent) of a property’s market value. Norway abolished its tax on imputed rents, based on property values, in 2005, and Sweden followed in 2007. While property values provide a readily observable basis for taxing imputed rents, they are likely to represent an imprecise measure of the returns to housing. An alternative is to use house prices and average price-to-rent ratios to estimate imputed rents, but this requires regular updating."

The administrative task of figuring out an appropriate imputed rent in the enormous and diverse U.S. economy may be impractical. But then, if the gains from imputed rental income are not included in the income to be taxed, there is an argument for not allowing the deductibility of mortgage interest, either. The authors write: "As imputed rent taxation is thus generally unattractive on administrative grounds, tax neutrality could be better approximated by phasing out mortgage interest deductibility." Indeed, countries like Denmark and France give only very limited mortgage interest deductions.

The U.S. tax treatment of housing is very generous by the standards of OECD countries. We don't tax imputed rental income: doing so would raise $337 billion in taxes over the next five years, according to Office of Management and Budget estimates. We do let mortgage interest be deductible for first and second homes up to $1 million, which reduces income tax revenues by $606 billion over the next five years. In addition, we have various provisions so that capital gains in housing values are untaxed, which reduces income taxes by an estimated $171 billion over the next five years.

But the issues go well beyond costs to the government in a time when we need to be scrutinizing the spending and tax sides of the federal budget to find a trajectory toward smaller budget deficits over the medium and long term. Tax breaks for housing create economy-wide distortions in the allocation of investment across sectors. The authors explain: "The marginal effective tax rate on housing investment in the U.S. is currently only 3½ percent, as compared to 25½ percent for business investment in equipment, structures, land and inventories. This discourages investment in productive assets, to the detriment of long-run economic growth."

Of course, it's never wise to make dramatic changes to tax policies affecting the housing market, because the existing tax policies are part of the conditions of demand and supply in the current market. The still-shaky U.S. housing market doesn't need another sudden shock. But the example of the United Kingdom shows how the mortgage interest deduction can be gradually phased out: set a ceiling on the total amount of the deduction, and then over time, reduce that ceiling in real terms and reduce the tax rate that can be applied to the deduction. Grigg and Thornton write:

"The UK experience offers a lesson in how the mortgage interest deduction can be gradually phased out. Until 1974, mortgage interest tax relief (MITR) in the UK was available for home loans of any size. In that year a ceiling of £25,000 was imposed. In 1983, this ceiling was increased to £30,000, below the rate of both general and house price inflation. From 1983 onwards, the ceiling remained constant, steadily reducing its real value. Beginning in 1991, this erosion of the real value of MITR was accelerated by restricting the tax rate at which relief could be claimed, to the basic 25 percent rate of tax in 1991, and then to 20 percent in 1994, 15 percent in 1995 and 10 percent in 1998. These ceilings on the size of loans and restrictions on the tax rate at which relief could be claimed chipped away at the value of the tax deduction, paving the way for its complete abolition in 2000 ..."
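To see how a frozen nominal ceiling erodes on its own, here is a back-of-the-envelope sketch. The constant 5 percent annual inflation rate is an assumption chosen for illustration, not the UK's actual inflation history:

```python
# A nominal ceiling frozen in 1983 loses real value every year inflation runs.
ceiling = 30_000    # GBP, the ceiling frozen from 1983 onward
inflation = 0.05    # assumed constant annual inflation, for illustration only

real_value = float(ceiling)
for year in range(1983, 1999):   # 16 years of erosion, 1983-1998
    real_value /= (1 + inflation)

print(round(real_value))  # roughly 13,743: under half the original purchasing power
```

The mechanism is just compounding in reverse: with no legislative action at all, the ceiling's real bite shrinks by the inflation rate each year, which is why freezing it was a quiet way to phase the deduction out.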

Similarly, one could phase in limits on the special treatment for capital gains in housing, limiting it to primary residences and to a maximum amount.

I'm acutely aware that, given the fall in U.S. housing prices over the last few years, many homeowners would love to see housing prices soar again. But the U.S. economy and U.S. households have now absorbed most of the pain of the housing price decrease. The goal over the medium term should be to make the housing market less tax-favored. It would benefit the U.S. economy to focus less on housing and more on investments that generate future economic growth. Most of the tax benefits of housing go to those with well above-average incomes--since these are the people who are living in bigger houses and itemizing deductions. The additional revenue from reducing the favored tax treatment of housing can be part of a package to reduce marginal tax rates and trim future budget deficits.

Tuesday, September 25, 2012

How should African elephants be protected from poachers? Just passing laws that elephants should not be harmed is clearly an insufficient policy, because many governments across Africa have limited ability to enforce such laws. Thus, two complementary policies are often suggested. One is to encourage local people who live near Africa's elephants to help protect them and their habitat by making the elephants a valuable economic resource. For example, local people may see economic benefits from tourists who come to see the elephants. A second proposal is for other countries to ban imports of ivory--or at least to ban imports that are not certified as coming from elephants that were killed as part of a sustainable wildlife management plan.

But these policies aren't working to protect elephants. Bryan Christy provides a journalistic overview of the situation in "Ivory Worship," appearing in the October 2012 issue of National Geographic. The entire article is a great read, with all sorts of detail about ivory poaching in Africa and markets for ivory in the Philippines, China, Thailand, and elsewhere. Here, I'll just offer a smattering of quotations scattered throughout the article on some of the main points that relate to the policy choices on how best to protect elephants.

"Elephant poaching levels are currently at their worst in a decade, and seizures of illegal ivory are at their highest level in years. ... Still, according to Kenneth Burnham, official statistician for the CITES program to monitor illegally killed elephants, it is “highly likely” that poachers killed at least 25,000 African elephants in 2011. The true figure may even be double that."

"[A] global ivory trade ban was adopted in 1989. ... At the time of the ivory ban, Americans, Europeans, and Japanese consumed 80 percent of the world’s carved ivory. ... African ivory brought into a country before 1989 may be traded domestically. And so anyone caught with ivory invokes a common refrain: “My ivory is pre-ban.” Since no inventory was ever made of global ivory stocks before the ban, and since ivory lasts more or less forever, this “pre-ban” loophole is a timeless defense."

"Not all countries agreed to the [ivory] ban. Zimbabwe, Botswana, Namibia, Zambia, and Malawi entered “reservations,” exempting them from it on the grounds that their elephant populations were healthy enough to support trade. In 1997 CITES held its main meeting in Harare, Zimbabwe, where President Robert Mugabe declared that elephants took up a lot of space and drank a lot of water. They’d have to pay for their room and board with their ivory. Zimbabwe, Botswana, and Namibia made CITES an offer: They would honor the ivory ban if they were allowed to sell ivory from elephants that had been culled or had died of natural causes. CITES agreed to a compromise, authorizing a one-time-only “experimental sale” by the three countries to a single purchaser, Japan. In 1999 Japan bought 55 tons of ivory for five million dollars. Almost immediately Japan said it wanted more, and soon China would want legal ivory too...."

"In a 2002 report China warned CITES that a main reason for China’s growing ivory-smuggling problem was the Japan experiment: “Many Chinese people misunderstand the decision and believe that the international trade in ivory has been resumed.” Chinese consumers thought it was OK to buy ivory again. ... By 2004 China had forgotten its concerns and petitioned CITES to buy ivory.... In July 2008 the CITES secretariat endorsed China’s request to buy ivory, a decision supported by Traffic and WWF. Member countries agreed, and that fall Botswana, Namibia, South Africa, and Zimbabwe held auctions at which they collectively sold more than 115 tons of ivory to Chinese and Japanese traders."

"[I]t also meant, according to CITES, that China could now do its part for law enforcement by flooding its domestic market with the low-priced, legal ivory. This would drive out illegal traders, who CITES had heard were paying up to $386 for a pound of ivory. Lower prices, CITES’s Willem Wijnstekers told Reuters, could help curb poaching. Instead the Chinese government did the unexpected. It raised ivory prices. ... China also devised a ten-year plan to limit supply and is releasing about five tons into its market annually. The Chinese government, which controls who may sell ivory in China, wasn’t undercutting the black market—it was using its monopoly power to outperform the black market. Applying the secretariat’s logic that low prices and high volumes chase out smugglers, China’s high prices and restricted volumes would now draw them in. The decision to allow China to buy ivory has indeed sparked more ivory trafficking, according to international watchdog groups and traders I met in China and Hong Kong. And prices continue to rise. ... By all accounts, China is the world’s greatest villain when it comes to smuggled ivory. In recent years China has been implicated in more large-scale ivory seizures than any other non-African country. ..."

"The genie cannot be returned to her bottle: The 2008 legal ivory will forever shelter smuggled ivory. There is one final flaw in the CITES decision to let China buy ivory. To win approval, China instituted a variety of safeguards, most notably that any ivory carving larger than a trinket must have a photo ID card. But criminals have turned the ID-card system into a smuggling tool. In the ID cards’ tiny photographs, carvings with similar religious and traditional motifs all look alike. A recent report by the International Fund for Animal Welfare found that ivory dealers in China are selling ivory carvings but retaining their ID cards to legitimize carvings made from smuggled ivory. The cards themselves now have value and are tradable in a secondary market. China’s ID-card system, which gives a whiff of legitimacy to an illegal icon, is worse than no system at all."

In short, Bryan Christy's essay makes a plausible case that a ban on imported ivory has some possibility of reducing incentives for elephant poaching. His article doesn't address the question of how much tourists coming to look at elephants can provide an economic incentive to protect them. But he makes a strong prima facie case that trying to have regular large sales of legally harvested ivory is a fiasco, more likely to encourage and facilitate additional smuggling than to undercut it.

Monday, September 24, 2012

If you don't have a bank account, then you pay extra for many day-to-day financial transactions. Need a check cashed? Lots of non-bank places will do that--for a fee. Need a money order to pay a bill? Lots of non-bank places do that--for a fee. Need a loan? Payday loans and rent-to-own stores and pawnshops are available--for a fee. Of course, banks have fees, too, but the unbanked typically pay a lot more for basic financial transactions. In addition, those who live in the cash economy often find it harder to save for an emergency, and are highly vulnerable to losing a substantial part of their assets if their cash is stolen or lost.

The FDIC does an annual survey in partnership with the U.S. Census Bureau to find out more about the "unbanked," who lack any deposit account at a banking institution, and the "underbanked," who have a bank account but also rely on providers of "alternative financial services" like payday loans, pawnshops, non-bank check cashing and money orders, and the like. The results of the 2011 survey have now been released in "2011 FDIC National Survey of Unbanked and Underbanked Households."
From the start of the report, here are some bullet-points (footnotes omitted):

"• 8.2 percent of US households are unbanked. This represents 1 in 12 households in the nation, or nearly 10 million in total. Approximately 17 million adults live in unbanked households. ...
• 20.1 percent of US households are underbanked. This represents one in five households, or 24 million households with 51 million adults....
• 29.3 percent of households do not have a savings account, while about 10 percent do not have a checking account. About two-thirds of households have both checking and savings accounts.
• One-quarter of households have used at least one AFS product in the last year, and almost one in ten households have used two or more types of AFS products. In all, 12 percent of households used AFS products in the last 30 days, including four in ten unbanked and underbanked households."
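As a rough consistency check, the shares and counts in these bullet points imply a total household count in line with Census figures of roughly 115-120 million U.S. households at the time. A minimal sketch:

```python
# Back out the implied number of U.S. households from the survey shares.
underbanked_households = 24e6   # "24 million households" from the bullets
underbanked_share = 0.201       # 20.1 percent

total_households = underbanked_households / underbanked_share
print(round(total_households / 1e6, 1))   # ~119.4 million households implied

unbanked = 0.082 * total_households
print(round(unbanked / 1e6, 1))  # ~9.8 million -- "nearly 10 million in total"
print(round(1 / 0.082))          # ~12 -- hence "1 in 12 households"
```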

The survey provides considerable detail about the unbanked and the underbanked. For example, about 30% of the unbanked don't use any of the "alternative financial services"--and thus are living in something close to a pure cash economy. Nearly half of unbanked households had a bank account at some point in the past, and nearly half report that they are likely to have a bank account in the future.

Some people will prefer to live in a non-bank world. I suspect that a substantial number of them are in the underground economy, staying under the government's radar and avoiding taxes. About 5.5% of those in the survey report that they can't open a bank account because of identification, credit, or banking history problems. But there are also a substantial number of the unbanked who have notions about bank accounts that are misleading or false: like a belief that they don't have enough money to open a bank account or in some way wouldn't "qualify" to open an account. Many of the unbanked also like the convenience and speed of dealing with nonbank firms that cash checks or give instant loans, and they are familiar with these firms in their neighborhoods.

But I fear that many of the unbanked dramatically underestimate the size of the fees that they pay for dealing with these alternative financial service providers, and have little notion of the programs at many banks that are designed to provide services to those who will tend to have low balances.

Thursday, September 20, 2012

The Congressional Budget Office calculates "potential GDP," which is the amount that the economy would produce at full employment. During a recession, actual economic output is below potential GDP; during an extreme economic boom, like the dot-com boom of the late 1990s, the economy can for a time have output greater than potential GDP. Here's a graph showing potential GDP in blue and actual GDP in red, both in real dollars from 1960 through mid-2012, generated by the ever-useful FRED website of the Federal Reserve Bank of St. Louis.

The graph does usefully show the depth of the current recession, and other recessions, as well as how actual GDP climbs above potential GDP in the dot-com boom of the late 1990s, as well as during the guns-and-butter period of the late 1960s and the housing boom of the mid-2000s. But you do have to squint a bit to make it all out! And your eye can be fooled when judging the depth of recessions, because the graph shows the gaps in absolute levels, not in percentage terms. Thus, when GDP was much lower back in the 1960s, the absolute gap may appear small, but the percentage gap could be larger.
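To make the absolute-versus-percentage point concrete, here is a minimal sketch; the dollar figures are invented for illustration, not FRED data:

```python
def output_gap_pct(actual_gdp, potential_gdp):
    """Output gap as a percent of potential GDP (negative = below potential)."""
    return 100.0 * (actual_gdp - potential_gdp) / potential_gdp

# Hypothetical figures in trillions of real dollars:
print(round(output_gap_pct(3.8, 4.0), 2))    # 1960s-scale economy: $0.2T shortfall -> -5.0
print(round(output_gap_pct(15.5, 16.0), 2))  # 2012-scale economy: $0.5T shortfall -> -3.12
```

The second economy's absolute shortfall is more than twice as large in dollars, yet its percentage gap is smaller, which is exactly why the second graph below rescales everything relative to potential GDP.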

So here's a graph based on the same data that shows the percentage amount by which actual GDP was above or below potential GDP in the years from 1960 up through mid-2012.

A few themes jump out from looking at the data in this way:

1) If the Great Recession is measured according to how far the economy had fallen below potential GDP, it is actually quite similar to the effects of the double-dip recession in the early 1980s.

2) If the Great Recession is measured by the size of the drop, relative to potential GDP, it is about 9 percentage points of GDP (from an actual GDP 1 percent above potential GDP to an actual GDP that is 8 percent below potential GDP). The total size of this drop isn't all that different--although the timing is different--from the years around the double-dip recession of the 1980s, the years around the recession of 1973-75, and the recession of 1969-1970.

3) The recovery from the early 1980s recessions was V-shaped, while the recovery from the Great Recession is more gradual. But this change isn't new. The recoveries from all the recessions before the early 1980s were reasonably V-shaped, while the recoveries after the 1990-91 and 2001 recessions were more U-shaped, as well.

4) The most red-hot time for the U.S. economy in this data, in the sense that the economy was running unsustainably ahead of potential GDP for a time, was what is sometimes called the "guns-and-butter" period of the late 1960s and early 1970s, when the federal government spent on both social programs and the military at the same time. In the dot-com boom, the economy was also well above potential GDP. The U.S. economy was also unsustainably above potential GDP during the housing boom around 2005-6, but it wasn't as white-hot a period of economic boom as these others.

Wednesday, September 19, 2012

Why has the economic recovery been so sluggish? One set of possible explanations is rooted in the idea of economic uncertainty: in other words, the financial meltdowns of 2008 and 2009, along with major legislation affecting health care and the financial sector, along with ongoing disputes over budget and tax policy, along with the shakiness of the euro area, have all come together to create a situation where businesses are reluctant to invest and hire, and consumers are reluctant to spend.

Of course, the difficulty with explanations that evoke "uncertainty" is that, from an empirical point of view, measuring uncertainty can be like trying to nail jello to a wall: it's messy, and when you're finished, you can't be confident that you've accomplished much. Sylvain Leduc and Zheng Liu describe their efforts to measure uncertainty and connect it with macroeconomic outcomes in "Uncertainty, Unemployment, and Inflation," an "Economic Letter" written for the Federal Reserve Bank of San Francisco.

For starters, they use a two-part measure of uncertainty. One part is the "consumer confidence" survey that has been carried out since 1978 by the University of Michigan, now in partnership with Thomson/Reuters. They write: "Since 1978, the Michigan survey has polled respondents each month on whether they expect an “uncertain future” to affect their spending on durable goods, such as motor vehicles, over the coming year. Figure 1 plots the percentage of consumers who say they expect uncertainty to affect their spending." The other ingredient is the VIX index, which measures the volatility of the Standard & Poor's 500 stock market index: that is, it's not just measuring whether the stock market is rising or falling, but rather measuring whether the jumps in either direction are relatively large or small. As they point out: "The VIX index is a standard gauge of uncertainty in the economics literature." Here's a graph showing these two measures of uncertainty, with time periods of recession shaded.

These two measures of uncertainty don't always move together. For example, in the late 1990s during the dot-com boom, consumer uncertainty looked low, but volatility in the stock market made the VIX index high. Conversely, in the aftermath of the 1990-91 recession, consumer uncertainty looked high, but uncertainty in the stock market as measured by the VIX index was toward the bottom of its range. However, during the Great Recession, both kinds of uncertainty spiked.

The authors work with data on these movements in uncertainty, comparing them with the actual macroeconomic data announced before and after the surveys, in an effort to figure out how much uncertainty by itself made the recession worse. They write:

"We calculate what would have happened to the unemployment rate if the economy had been buffeted by higher uncertainty alone, with no other disturbances. Our model estimates that uncertainty has pushed up the U.S. unemployment rate by between one and two percentage points since the start of the financial crisis in 2008. To put this in perspective, had there been no increase in uncertainty in the past four years, the unemployment rate would have been closer to 6% or 7% than to the 8% to 9% actually registered.

"While uncertainty tends to rise in recessions, it’s not the case that it always plays a major role in economic downturns. For instance, our statistical model suggests that uncertainty played essentially no role during the deep U.S. recession of 1981–82 and its following recovery. This is consistent with the view that monetary policy tightening played a more important role in that recession. By contrast, uncertainty may have deepened the recent recession and slowed the recovery because monetary policy has been constrained by the Fed’s inability to lower nominal interest rates below zero..."

Back in April, I described another attempt to measure economic uncertainty in "Is Policy Uncertainty Delaying the Recovery?" In that study, Scott R. Baker, Nick Bloom, and Steven J. Davis created an index of economic uncertainty based on three different factors: newspaper articles that refer to economic uncertainty and the role of policy; the number of federal tax code provisions that are set to expire; and the extent of disagreement among economic forecasters. Their central finding: "The results for the United States suggest that restoring 2006 (pre-crisis) levels of policy uncertainty could increase industrial production by 4% and employment by 2.3 million jobs over about 18 months." This is roughly the same magnitude of effect as Leduc and Liu find, although the two sets of researchers use different data for measuring uncertainty and different approaches for connecting uncertainty to macroeconomic outcomes.
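The general recipe behind a composite index like theirs — standardize each component series and take a weighted combination — can be sketched as follows. The normalization and equal weights here are illustrative; they are not the actual weights Baker, Bloom, and Davis use.

```python
import statistics

def composite_uncertainty_index(components, weights=None):
    """Combine several uncertainty proxies into one index.

    `components` maps a name (e.g. news coverage, expiring tax
    provisions, forecaster disagreement) to an equal-length time
    series. Each series is standardized to mean 0, sd 1, then the
    index is the weighted sum at each date.
    """
    n = len(next(iter(components.values())))
    if weights is None:
        weights = {k: 1 / len(components) for k in components}
    z = {}
    for name, series in components.items():
        mu = statistics.mean(series)
        sd = statistics.pstdev(series) or 1.0  # guard against constant series
        z[name] = [(x - mu) / sd for x in series]
    return [sum(weights[k] * z[k][t] for k in z) for t in range(n)]
```

Standardizing first matters: it keeps a component measured in counts (expiring tax provisions) from swamping one measured in percentage points (forecaster disagreement).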

It's hard to believe that economic uncertainty will drop a lot before Election Day 2012. But one way or another, it seems likely to decline after that.

Tuesday, September 18, 2012

Aaron Steelman conducted an "Interview" with John List that appears in the most recent issue of Region Focus from the Federal Reserve Bank of Richmond (Second/Third Quarter 2012, pp. 32-38). Full disclosure: John List is one of the co-editors of my own Journal of Economic Perspectives. But in his research, John is probably best-known for being a leader in the area of taking randomized controlled experiments out of the laboratory and moving them into the field.

Here's an early example from List's work:

"So let’s go through an example whereby I think I can convince you that I am in a natural environment and that I’m learning something of importance for economics. I first got interested in charitable fundraising in 1998 when a dean at the University of Central Florida asked me to raise money for a center at UCF. ... Many charities have programs where they will match a donor’s gift. So your $100 gift means that the charity will get $200 after the match. Interestingly, however, when you go and ask those charities if matching works they say, “Of course it does, and a 2-to-1 match is much better than a 1-to-1 match, and a 3-to-1 match is better than either of them.” So I asked, “What is your empirical evidence for that?” They had none. Turns out that it was a gut feeling they had.

"I said, well, why don’t you do field experiments to learn about what works for charity? ... So what we are going to do is partner with them in one of their mail solicitations. Say they send out 50,000 letters a month. We will then randomize those 50,000 letters that go directly to households into different treatments. One household might receive a letter that says, “Please give to our charity. Every dollar you give will be matched with $3 from us.” Another household might receive the exact same letter, but the only thing that changes is that we tell them that every dollar you give will be matched by $2. Another household receives a $1 match offer. And, finally, another household will receive a letter that doesn’t mention matching. So you fill these treatment cells with thousands of households that don’t know they’re part of an experiment. We’re using randomization to learn about whether the match works. That’s an example of a natural field experiment — completed in a natural environment and the task is commonplace.

"I didn’t learn that 3-to-1 works better than 2-to-1 or 1-to-1. Empirically, what happens is, the match in and of itself works really well. We raise about 20 percent more money when there is a match available. But, the 3-to-1, 2-to-1, and 1-to-1 matches work about the same."
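The design List describes — randomizing a mailing list into treatment cells and comparing average giving across cells — can be sketched in a few lines. The treatment names and helper functions below are hypothetical, for illustration only; they are not List's actual code or data.

```python
import random

def assign_treatments(households, treatments, seed=0):
    """Randomly assign each household to one treatment cell.

    Randomization is what balances the unobservables across cells,
    so differences in average giving can be read as causal effects.
    """
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    return {hh: rng.choice(treatments) for hh in households}

def cell_means(assignment, gifts):
    """Average gift per treatment cell (households who gave nothing count as 0)."""
    totals, counts = {}, {}
    for hh, arm in assignment.items():
        totals[arm] = totals.get(arm, 0.0) + gifts.get(hh, 0.0)
        counts[arm] = counts.get(arm, 0) + 1
    return {arm: totals[arm] / counts[arm] for arm in totals}
```

With thousands of households per cell, comparing `cell_means` across the no-match, 1:1, 2:1, and 3:1 arms is exactly the comparison List reports: the match itself raised giving, but the match ratio did not matter.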

How does List respond to the concern that we are unlikely to learn much of interest from these kinds of experiments, because the real world is just too messy for cause and effect to be discerned?

"So I come along, and I say we really need to use the tool of randomization, but we need to use it in the field. Here’s where the skepticism arose using that approach: People would say, “You can’t do that, because the world is really, really messy, and there are a lot of things that you don’t observe or control. When you go to the marketplace, there are a lot of reasons why people are behaving in the manner in which they behave. So there’s no way — you don’t have the control — to run an experiment in that environment and learn something useful. The best you can do is to just observe and take from that observation something of potential interest.

"That reasoning stems from the natural sciences. Consider the example with the chemist: If she has dirty test tubes her data are flawed. The rub is that chemists do not use randomization to measure treatment effects. When you do, you can balance the unobservables — the “dirt” — and make clean inference. As such, I think that economists’ reasoning on field experiments has been flawed for decades, and I believe it is an important reason why people have not used field experiments until the last 10 or 15 years. They have believed that because the world is really messy, you can’t have control in the same way that a chemist has control or a biologist might have control. ...

"When I look at the real world, I want it to be messy. I want there to be many, many variables that we don’t observe and I want those variables to frustrate inference. The reason why the field experiments are so valuable is because you randomize people into treatment and control, and those unobservable variables are then balanced. I’m not getting rid of the unobservables — you can never get rid of unobservables — but I can balance them across treatment and control cells. Experimentation should be used in environments that are messy; and I think the profession has had it exactly backwards for decades. They have always thought if the test tube is not clean, then you can’t experiment. That’s exactly wrong. When the test tube is dirty, it means that it’s harder to make proper causal inference by using our typical empirical approaches that model mounds and mounds of data."

And here's List arguing that many institutions, including education, should be continually involved in new natural field experiments, so that we can do a better job of figuring out what actually works.

"I think in many ways, it’s harder to overturn entrenched thinking in parts of the nonprofit, corporate, and public sectors, where many things are not subject to empirical testing. For instance, why don’t we know what works in education? It’s because we have not used field experiments across school districts. Each school district should be engaged in several experiments a year, and then in the end the federal government can say, “Here’s what works. Here’s a new law.” It’s unfair to future generations to pass along zero information on what policies can curb criminal activities, what policies can curb teen pregnancy, what are the best ways to overcome the racial achievement gap, why there aren’t more women in the top echelon of corporations. We don’t know because we don’t understand, we haven’t engaged in feedback-maximization. There needs to be a transformation, and I don’t know what it’s going to take. I mean, are we going to be sitting here in 50 years and thinking, “If we only knew what worked to help close the achievement gap, if we only knew how to do that”?

"I hope my work in education induces a sea change in the way we think about how to construct curricula. Right now, we are doing a lot of work on a prekindergarten program in Chicago Heights and in a year or two I think that we will be able to tell policymakers what will help kids — and how much it will help them. But unless people adopt the field experimental approach more broadly, it will be a career that’s not fulfilled in my eyes."

Monday, September 17, 2012

Last week the Bureau of the Census released its annual report that estimates the poverty rate in the previous year, and I posted about "What the Official Poverty Rate is Missing," with my discussion focusing on various government anti-poverty programs that don't show up in the measure of income and the problems involved in measuring poverty by income rather than by using consumption. The same day, the Brookings Institution held a conference for various well-informed folks to react to the Census report, such as Ron Haskins, Richard Burkhauser, Gary Burtless, Isabel Sawhill, Kay Hymowitz, and Wendell Primus. Many of them took an approach broadly similar to my own--that is, slicing and dicing the numbers to figure out the trends and patterns and strengths and weaknesses of the data.

But I was taken by the comments of Ralph Smith, senior vice-president of the Annie E. Casey Foundation, who focused on what the poverty numbers mean in a more human sense for the prospects of children. The quotation is taken from the uncorrected transcript of the event, which is posted here. Smith said:

"There’s an antiseptic quality about the charts and graphs and the PowerPoint that feels to me as if it misses the issue and misses the reality of the lives of the people and the families about whom we speak. ... I just can’t get to the point where I’m so captured by the data that I miss what these numbers mean for the lives and futures of the families about whom we speak, about the material conditions in which they live, about the aspirations they could hold onto for their kids and for the next generation.

"And I will confess a discomfort as I think about the one million children who despite these not-quite-so-bad numbers will be born into poverty next year. One million new entrants into poverty, and what we can predict now. And what we can certify on the day they are born is that more than 50 percent of them will spend half their childhoods in poverty. Twenty-nine percent of them will live in high poverty communities. Ten percent of them will be born low birth weight, a key indicator of cognitive delays and problems in school. Only 60 percent of them will have access to health care that meets the criteria for having a medical home. By age three, fewer than 75 percent of them will be in good or excellent health, and they’ll be three times more likely than their more affluent peers to have elevated blood lead levels.

"More than 50 percent of them will not be enrolled in pre-school programs and by the time they enter kindergarten, most of them will test 12 to 14 months below the national norms in language and pre-reading skills. Nearly 50 percent of them will start first grade already two years behind their peers. During the early grades, these children are more likely to miss more than 20 days of school every year starting with kindergarten, and that record of chronic absence will be three times that of their peers. When tested in fourth grade, 80 percent of these children will score below proficient in reading and math. We know now that 22 percent of them will not graduate from high school, and that number rises to 32 percent for those who spend more than half of their childhood in poverty. And to no one’s surprise, these sad statistics and deplorable data get even worse for children of color and children who live in communities of concentrated poverty. ...

"[T]his report brings bad news about a predictably bleak future in this the land of opportunity. ... We don’t spend as much as we need to, but until we do better with what we have, we’re not going to make the case for what we need. And we don’t care as much as we say we do because some kids matter more than others and some kids matter not at all. And I think these million kids are the kids who might matter not at all. And so when I see the numbers, I must admit that I flinch and I think they ought to as well because for these children, the numbers that matter most to their futures and to ours are one, the income of their parents, and two, the zip code of their homes. ...

"The view that I agree with most is the one that recognizes that persistent poverty is the challenge of our time. Like the world wars, the Great Depression, civil rights, persistent poverty is worthy of an engaged national as well as federal government. ... Imagine that in 2015 candidates as they stumped in Iowa and New Hampshire and North Carolina had to confront the issue of persistent poverty and had to talk about it. And imagine that in 2016 there was a debate where a reporter would even ask a question about it, but where candidates would feel compelled to articulate their position. Most of us in this room, as good and as smart as we are, we cannot imagine that happening."

I'll just add that it has been remarkable to me during the last few years of sustained high unemployment and families under stress, how much our national political discussion has focused on the merits of different tax levels for those with high incomes, and how little our national political discussion has focused in any concrete way on how to assist the poor, and in particular on how to alter the trajectory of life for children living in poverty.

First, just as a matter of getting the facts straight, CEO pay relative to household income did spike back in the go-go days of the dot-com boom in the late 1990s, but it has been relatively lower since then. Kaplan argues that there are two valid ways to measure executive pay. One measure looks at actual pay received, which he argues is useful for seeing whether top executives are paid for performance. The other measure looks at "estimated" pay, which is the amount that pay packages would have been expected to be worth at the time they were granted. This calculation requires putting a value on stock options, restricted stock grants, and the like, and estimating what these were worth at the time the pay package was given. Kaplan argues that this measure is the appropriate one for looking at what corporate boards meant to do when they agreed on a compensation package.

Here's one figure showing actual average and median pay totals for S&P 500 CEOs from 1993 to 2010. Average pay is above median pay, which tells you that there are some very high-paid execs at the top pulling up the average. Also, average CEO pay spikes when the stock market is high, as in 2000 and around 2006 and 2007. Median realized pay seems to have crept up over time.
Here's a figure showing estimated pay--that is, the value of the pay packages when they were granted. But this time, instead of showing dollar amounts, this graph shows average and median CEO pay as a multiple of median household income. Average pay again spikes at the time of the dot-com boom. Kaplan emphasizes that estimated CEO pay is on average lower than in 2000 and that the median hasn't risen much. My eye is drawn to the fact that median pay for CEOs goes from something like 60 times median household income back in 1993 to about 170 times median household income by 2010.

An obvious question is whether these pay levels are distinctive for CEOs, or whether they are just one manifestation of widening income inequality across a range of highly-paid occupations. Kaplan makes a solid case that it is the latter. For example, here's a graph showing the average pay of the top 0.1% of the income distribution compared with the average pay of a large company CEO. Again, the story is that CEO pay really did spike in the 1990s, but by this measure, CEO pay relative to the top 0.1% is now back to the levels common in the 1950s.

Kaplan also points out that the pay of those at the top of other highly-paid occupations has grown dramatically as well, like lawyers, athletes, and hedge fund managers. Here's a figure showing the pay of top hedge fund managers relative to that of CEOs in the last decade. Kaplan writes: "The top 25 hedge fund managers as a group regularly earn more than all 500 CEOs in the S&P 500. In other words, while public company CEOs are highly paid, other groups with similar backgrounds and talents have done at least equally well over the last fifteen years to twenty years. If one uses evidence of higher CEO pay as evidence of managerial power or capture, one must also explain why the other professional groups have had a similar or even higher growth in pay. A more natural interpretation is that the market for talent has driven a meaningful portion of the increase in pay at the top."

Kaplan also compiles evidence that CEOs of companies with better stock market performance tend to be paid more than those with poor stock market performance, and that CEOs have shorter job tenures. He writes: "Turnover levels since 1998 have been higher than in work that has studied previous periods. In any given year, one out of 6 Fortune 500 CEOs lose their jobs. This compares to one out of 10 in the 1970s. CEOs can expect to be CEOs for less time than in the past. If these declines in expected CEO tenures since 1998 are factored in, the effective decline in CEO pay since then is larger than reported above. ... And the CEO turnover is related to poor firm stock performance ..."

To me, Kaplan makes a couple of especially persuasive points: the run-up in CEO salaries was especially extreme during the 1990s, and less so since then (depending on how you measure it); and the run-up in CEO salaries reflects the rise in inequality across a wider swath of professions. While I believe the arguments that job tenure can be shorter for the modern CEO, especially if a company isn't performing well, it seems to me that most former CEOs don't plummet too many percentiles down the income distribution in their next job, so my sympathy for them is rather limited on that point.

In this paper, Kaplan doesn't seek to address the deeper question of why the pay for those at the very top, CEOs included, has risen so dramatically. While the demand for skills at the very top of the income distribution is surely part of the answer, I find it hard to believe that these rewards for skill increased so sharply in the 1990s--just coincidentally during a stock market boom. It seems likely to me that cozy institutional arrangements for many of those at the very top of the income distribution--CEOs, hedge fund managers, lawyers, and athletes and entertainers--also play an important role.

Thursday, September 13, 2012

Yesterday the U.S. Bureau of the Census released its annual report on "Income, Poverty, and Health Insurance Coverage in the United States: 2011," this year written by Carmen DeNavas-Walt, Bernadette D. Proctor, and Jessica C. Smith. One finding is that the official U.S. poverty rate barely budged from 2010 to 2011, which if not positive news, is at least non-negative news. Here's a figure showing the number of people in poverty and the poverty rate since 1959:

One set of problems is clear: some of the largest government programs to help those in poverty have zero effect on the officially measured poverty rate. For example, food stamps are technically a noncash benefit (even if in many ways they are similar to receiving cash), so they are not counted in the definition of income used for calculating the poverty rate. The Earned Income Tax Credit operates through the tax system, so it is not covered in the definition of "money income before taxes" used to measure poverty. The same is true of the child credit given through the tax code. Medicaid assistance for those with low incomes is not cash assistance, so it doesn't reduce the measured poverty rate, either.

The fact that many anti-poverty programs have no effect on officially measured poverty is no secret. The Census report itself carefully notes: "The poverty estimates in this report compare the official poverty thresholds to money income before taxes, not including the value of noncash benefits. The money income measure does not completely capture the economic well-being of individuals and families, and there are many questions about the adequacy of the official poverty thresholds. Families and individuals also derive economic well-being from noncash benefits, such as food and housing subsidies, and their disposable income is determined by both taxes paid and tax credits received." As an example, the report points out that if EITC benefits were included as income, the number of children in poverty would fall by 3 million.

But there is a deeper issue with the official poverty rate, which is that it is measured on the basis of income, not on the basis of consumption. In a given year, a household's level of income and its level of consumption don't always match up. It's easy to imagine, especially in the last few years of sustained high unemployment, that some households had low income in a given year but were able to draw on past savings, or perhaps to borrow on credit cards or home equity. Measured by income, such households appear to be in poverty; but measured by consumption (and especially if they own their own homes), they would not appear quite as badly off.

Meyer and Sullivan look at how those classified as "poor" would differ under a poverty rate based on consumption versus one based on income. They emphasize that the poverty rate can be the same whether it is based on consumption or on income: it's just a matter of where the official poverty line of consumption or income is set. Thus, the poverty rate is not automatically higher or lower because it is based on one or the other. (For those who care about these details, the official poverty measure looks at income as measured by the Current Population Survey, while Meyer and Sullivan look at consumption as measured by the Consumer Expenditure Survey.)

Meyer and Sullivan offer a fascinating comparison: they look at 25 characteristics that seem intuitively related to household well-being: total consumption; total assets; whether the household has health insurance; whether it owns a home or a car; how many rooms, bedrooms, and bathrooms are in the living space; whether the living space has a dishwasher, air conditioner, microwave, washer, dryer, television, or computer; whether the head of household is a college graduate; and others. It turns out that if one looks at poverty by income and by consumption, with the poverty rates set to be equal in both categories, 84% of the people are included in either definition. But those who are "poor" by the consumption definition of poverty are worse off in 21 of the 25 categories of household well-being.
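The mechanics of "set the poverty rates to be equal, then compare who gets flagged" can be illustrated with a toy calculation. This is a sketch of the idea, not Meyer and Sullivan's actual procedure or data; the function names and the overlap statistic are illustrative.

```python
def threshold_for_rate(values, rate):
    """Cutoff such that roughly `rate` share of observations fall at or below it."""
    s = sorted(values)
    k = max(0, int(round(rate * len(s))) - 1)
    return s[k]

def overlap_share(income, consumption, rate):
    """Of everyone flagged poor by either measure, what share is flagged by both?

    The income and consumption poverty lines are each set so that the
    two measures produce the same overall poverty rate -- the same
    apples-to-apples setup described above.
    """
    t_inc = threshold_for_rate(income, rate)
    t_con = threshold_for_rate(consumption, rate)
    inc_poor = {i for i, v in enumerate(income) if v <= t_inc}
    con_poor = {i for i, v in enumerate(consumption) if v <= t_con}
    union = inc_poor | con_poor
    return len(inc_poor & con_poor) / len(union) if union else 0.0
```

If income and consumption ranked households identically, the overlap would be 100%; the interesting finding is precisely that, in real data, the two measures flag partly different people.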

Why is this? In part, it's because the total value of consumption includes the funds received from food stamps, the Earned Income Tax Credit, and so on. Some of those who fall below the poverty line when these are not considered, as in the official income-before-taxes poverty measure, rise above the poverty line when these are included. In addition, consumption poverty better captures those who lack other resources to fall back on: households whose income is temporarily low enough to fall below the poverty line, but who have other ways to keep their consumption from falling as much, don't show up as falling below a consumption-based poverty line.

Setting a poverty line is a political decision, not a law of nature. Some decisions will always be second-guessed, and those who care about the details of what happens with different poverty lines can go to the Census Bureau website and construct alternative measures of the poverty rate based on different measures of income or different ways of defining poverty. But that said, it seems downright peculiar to have an official income-based measure of poverty that isn't affected at all by several of the largest anti-poverty programs. And it seems peculiar to base our official measure of poverty on income, when the fundamental concept of poverty is really about having a shortfall of consumption.