
Thursday, April 30, 2015

What's the share of women in the top 1% of the earnings distribution? How has it been changing over time? Fatih Guvenen, Greg Kaplan, and Jae Song tackle this question in "The Glass Ceiling and The Paper Floor: Gender Differences among Top Earners, 1981–2012," published as NBER Working Paper No. 20560 in October 2014. (NBER working papers are not freely available online, but many readers will have access through library subscriptions.)

Here's a figure showing the share of women in the top 0.1% and the next 0.9% of the earnings distribution from 1981-2012. The dashed lines show the data for a given one-year period, with the darker line showing the top 0.1% and the lighter line showing the next 0.9%. The solid lines show the data if you average over a five-year period. The solid lines fall below the dashed lines, which suggests that women are less likely than men to sustain a position in the top 0.1% or the next 0.9% over time, but the difference is not large. The trend is clearly upward. For example, based on one-year data, women were about 6% of the top 0.1% in earnings back in 1982, and were about 18% of this top 0.1% by 2012. As the authors write: "The glass ceiling is still there, but it is thinner than it was three decades ago."

A different way of slicing up the same data is to look at the proportion of men to women in each group at any given time (basically, this is the reciprocal of the figure above).
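Since that figure is just the flip side of the shares, the conversion is simple arithmetic. Here's a quick sketch using the rough one-year shares cited above (the function name is my own):

```python
# Convert the female share of a top earnings group into a male-to-female ratio.
def male_female_ratio(female_share):
    """Given the fraction of a group who are women, return men per woman."""
    return (1 - female_share) / female_share

# Women were roughly 6% of the top 0.1% in the early 1980s...
print(round(male_female_ratio(0.06), 1))  # roughly 15.7 men per woman
# ...and roughly 18% of the top 0.1% by 2012.
print(round(male_female_ratio(0.18), 1))  # roughly 4.6 men per woman
```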

The authors explore this data in considerably more detail. Here are some other facts about the study and highlights that caught my eye.

1) Many studies of the top of the earnings or income distribution look at tax data. In contrast, this study uses data on earnings as reported to the Social Security Administration: more technically, it's a representative 10% sample drawn from the Master Earnings File at the SSA. In the data, people are identified only by anonymous numbers, but because of those numbers, it's possible to track the same people over time--which is why it's possible to look at average income over five years in the figure above. The Social Security data also includes data on the industry of the employer. The data is limited to people between the ages of 25 and 60, who had annual earnings of at least $1,885 as measured in inflation-adjusted 2012 dollars. The measure of earnings here includes wages, bonuses, and any stock options that are cashed in.

2) How much in earnings does it take to be in the top 1% or the top 0.1%? In 2012, it's $291,000 to reach the top 1% of earnings, and $1,018,000 to reach the top 0.1%. Interestingly, these thresholds for the top income levels rose sharply in the 1980s and the 1990s, but have been pretty steady since then. Also, the thresholds don't vary much acor

3) Along with the "glass ceiling" that limits women from entering the highest-paying jobs, there is also discussion in this literature of a "paper floor," which is the pattern that women who are highly-paid in one year are more likely than men to drop out of the top earnings group in the following year. However, the "paper floor" seems to have changed over time. The authors write:

This high tendency for top earners to fall out of the top earnings groups was particularly stark for females in the 1980s -- a phenomenon we refer to as the paper floor. But the persistence of top earning females has dramatically increased in the last 30 years, so that today the paper floor has been largely mended. Whereas female top earners were once around twice as likely as men to drop out of the top earning groups, today they are no more likely than men to do so. Moreover, this change is not simply due to females being more equally represented in the upper parts of the top percentiles; the same paper floor existed for the top percentiles of the female earnings distribution, but this paper floor has also largely disappeared.

4) The reason behind more women moving into the top earnings group can be traced to younger cohorts of women having more members in that group--not to increases in earnings for the older cohorts of women workers.

Entry of new cohorts, rather than changes within existing cohorts, account for most of the increase in the share of females among top earners. These new cohorts of females are making inroads into the top 1 percent earlier in their life cycles than previous cohorts. If this trend continues, and if these younger cohorts exhibit the same trajectory as existing cohorts in terms of the share of females among top earners, then we might expect to see further increases in the share of females in the overall top 1 percent in coming years. However, this is not true for the top 0.1 percent. At the very top of the distribution, young females have not made big strides: the share of females among the top 0.1 percent of young people in recent cohorts is no larger than the corresponding share of females among the top 0.1 percent of young people in older cohorts.

5) The persistence of earners who are in the top 0.1% for successive years seems to be increasing, not decreasing.

Throughout the 1980s and 1990s, the probability that a male in the top 0.1 percent was still in the top 0.1 percent one year later remained at around 45%, but by 2011 this probability had increased to 57%. When combined with our finding that the share of earnings accruing to the top 0.1 percent has leveled off since 2000, this implies a striking observation about the nature of top earnings inequality: despite the total share of earnings accruing to the top percentiles remaining relatively constant in the last decade, these earnings are being spread among a decreasing share of the overall population. Top earner status is thus becoming more persistent, with the top 0.1 percent slowly becoming a more entrenched subset of the population.

6) The industry mix of those at the very top is shifting toward finance.

[R]egarding the characteristics of top earners, we find that the dominance of the finance and insurance industry is staggering, for both males and females: in 2012, finance and insurance accounted for around one-third of workers in the top 0.1 percent. However, this was not the case 30 years ago, when the health care industry accounted for the largest share of the top 0.1 percent. Since then, top earning health care workers have dropped to the second 0.9 percent where, along with workers in finance and insurance, they have replaced workers in manufacturing, whose share of this group has dropped by roughly half.

Wednesday, April 29, 2015

The US economy has considerable strengths: its enormous internal market, its well-educated population, its scientific and research establishment, its physical and legal infrastructure, and more. In addition to these, the US economy has been known for its flexibility in creating new companies and jobs. But by those measures of flexibility, problems have been accumulating for a couple of decades. I just recently discovered that the Business Dynamics Statistics program at the U.S. Census Bureau has a handy-dandy figure and chart generator, which I recommend playing with.

For those not familiar with it, the BDS is a longitudinal database of business establishments and firms going back to 1976. "Longitudinal" means that it covers the same firms over time, so a researcher can investigate patterns of firms that have just started, or firms that are five years old, and so on. It looks at the rates at which firms start up and shut down. And it looks at whether firms of different sizes and ages are adding jobs or subtracting jobs.

One pattern which comes out of the data is a long-run decline in the birth rate of firms in the US economy. The figure below shows the latest BDS data, released in September 2014 and covering the years up through 2012. (The numbers on the vertical axis can be read as percentages: that is, the number of firms with age zero in 2012 was 11% of the total number of firms in 2012.) As I've pointed out in earlier posts on "The Decline of US Entrepreneurship" (August 4, 2014) and "New Business Establishments: The Shift to Existing Firms" (August 26, 2014), this downward trend in business births worsened during the Great Recession, but it had been on a downward trend for a couple of decades before that.

One sometimes hears politicians claim that, for the first time, the exit rate of firms is exceeding the entry rate. It's true that the exit rate of firms exceeded the birth rate for several years during the Great Recession, and that this had not happened during the previous two recessions. However, it did happen for a year back during the 1981-82 recession. And the birth rate was again exceeding the exit rate by 2012. Of course, these caveats do not contradict the overall pattern that the birth rate of firms has been declining over time.

Not surprisingly, as the birth rate of new firms has gradually declined, so has the job creation rate. The BDS data illustrates this point. The "job creation rate" sums up the number of jobs added at all establishments that added to their total number of jobs in a given year, and divides by total jobs. Conversely, the "job destruction rate" adds up the number of jobs subtracted at all establishments that reduced their number of jobs in a year, and divides by the total number of jobs. Notice that this method of counting "job creation" and "job destruction" is an underestimate of the amount of churn in the US labor market, because if a certain establishment shuts down a bunch of jobs in one area, but opens up an equivalent number of jobs in a completely different area, then it is unlikely the workers will be able to transfer from one area to the other--but from the standpoint of the BDS data, this establishment neither created new jobs nor destroyed existing ones.
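The two rate definitions can be sketched in a few lines. The establishment-level job changes and the employment base below are invented numbers, not BDS data:

```python
# Each entry is one establishment's net change in jobs over the year.
# Netting within an establishment is exactly why this measure understates
# churn: offsetting gains and losses inside one establishment cancel out
# before they are ever counted.
net_job_changes = [120, -40, 0, 35, -75, 60]
total_jobs = 2000  # hypothetical total employment base

created = sum(c for c in net_job_changes if c > 0)     # 120 + 35 + 60 = 215
destroyed = sum(-c for c in net_job_changes if c < 0)  # 40 + 75 = 115

job_creation_rate = created / total_jobs
job_destruction_rate = destroyed / total_jobs
print(f"creation: {job_creation_rate:.2%}, destruction: {job_destruction_rate:.2%}")
# creation: 10.75%, destruction: 5.75%
```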

Even by this understated measure of job churn, it used to be that the number of new jobs created each year back in the late 1970s and into the mid-1980s was often about 20% of the total number of jobs. But this share has sagged over time, and during the Great Recession job creation fell almost to 10% of the existing jobs in a given year. Meanwhile, the rate of job destruction seems to hover around 15% of existing jobs in a given year--a little higher or lower depending on the state of the macroeconomy.

Research using the BDS data shows that new firms play an outsized role in creating new jobs, both in the first year of the new firms, and also--even after attrition--in the few new firms that really take off as job creators. Looking ahead, part of the evaluation process for every new rule affecting business should take a look at how it affects incentives to start new firms and to hire. The US economy can't afford to take its entrepreneurism and labor market flexibility for granted.

Tuesday, April 28, 2015

Economists tend to believe that taxing consumption makes more sense than taxing income. When you tax people on income, and then tax them on the return earned from saving, you discourage saving and investment. After all, income that is saved, along with any return on that income, will be consumed eventually. But the US relies much less on consumption taxes than any other high-income country.

Consumption taxes come in two main forms. One is a value-added tax, which in economic terms is essentially similar to a sales tax. The functional difference is that a sales tax is collected from consumers at the point of purchase, while a value-added tax is collected from producers according to the value they add along the production chain. The value-added tax thus raises the prices paid by consumers in the same way as a sales tax. The other form of consumption tax is a tax on specific kinds of consumption; in most countries the biggest consumption taxes on specific items are those relating to energy (especially oil), alcohol, and tobacco. The OECD lays out the evidence in its December 2014 report, "The Distributional Effects of Consumption Taxes in OECD Countries."
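To see the economic equivalence of the two collection methods, here is a minimal numeric sketch; the four-stage production chain, the values added at each stage, and the 20% rate are all invented for illustration:

```python
RATE = 0.20  # illustrative VAT / sales tax rate

# Value added at each stage of a hypothetical chain:
# farmer -> miller -> baker -> retailer
value_added = [40, 30, 20, 10]

# VAT: each producer remits tax on its own value added.
vat_collected = sum(RATE * v for v in value_added)

# Sales tax: collected once from the consumer, on the final retail price.
final_price = sum(value_added)            # 100
sales_tax_collected = RATE * final_price  # 20.0

# The two methods raise the same revenue from the same final price.
print(vat_collected, sales_tax_collected)
```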

The US is at rock-bottom when it comes to consumption taxes. That's in part because, unlike 160 other countries around the world and all of the other countries in this figure, the US doesn't have a value-added tax. In addition, the US taxes on gasoline are lower than in most of these other countries. The OECD average in this figure shows that these countries as a group get about one-third of their tax revenue from consumption taxes: the US gets about half that proportion of its taxes from consumption taxes. A standard VAT rate in countries like Germany, France, Italy, and the United Kingdom is about 20%.

Perhaps the main concern with a value-added tax is that it is regressive: that is, because those with low income tend to consume a higher share of their income than those with higher incomes, those with lower incomes pay the VAT on a higher share of their income, too. Proponents of a VAT point out that this perspective is short-term, and that when viewed over a lifetime, a VAT will look somewhat more egalitarian. But even in the perspective of one or a few years, there are three ways to keep a consumption tax from being overly regressive--although only two of those ways work especially well.

The most common method that seeks to prevent a consumption tax from being regressive is to exempt certain products from the tax. Most countries have a lower VAT rate on food products, and on certain other products, often including health care, education, newspapers and books, restaurant meals, hotels, water supply, children's clothing, and others. The main problem with this approach is one that also arises for sales taxes in US states, which often have a similar list of exemptions. In an attempt to help those who are poor--who are perhaps 10-20% of the population in these countries--the VAT is reduced for everyone. If the goal is to help the poor, it works much better to, well, help the poor by offering income rebates or food stamps or other income-tested programs. The OECD report sums up the problem this way:

[M]ost, if not all of the reduced VAT rates that are introduced for the distinct purpose of supporting the poor--such as reduced rates on food, water supply and energy products--do have the desired progressive effect. However, despite this progressive effect, these reduced VAT rates are still shown to be a very poor tool for targeting support to poor households: at best, rich households receive as much aggregate benefit from a reduced VAT rate as do poor households; at worst, rich households benefit vastly more in aggregate terms than poor households. Furthermore, reduced rates introduced to address social, cultural and other non-distributional goals--such as reduced rates on books, restaurant food and hotel accommodation--often provide so large a benefit to rich households that the reduced VAT rate actually has a regressive effect.

Another approach to the regressive nature of a VAT is to focus not on the tax side, but on the spending side. Previous research by the OECD suggests that precisely because the US relies less than other countries on consumption taxes, the US tax code is already highly progressive compared to most other countries. However, patterns of US government spending are aimed less at the poor than in other countries, so the overall result is that the US redistributes less than other countries. A Congressional Budget Office report a few years ago pointed out that while the amount of redistribution happening through the federal tax code hasn't changed much over time, the amount of redistribution happening through federal spending has been declining. In other words, perhaps the goal should be to worry less about focusing taxes on those with high incomes, and instead to work on focusing government spending on those with lower incomes.

Finally, it's quite possible to design a consumption tax that is progressive. It looks a lot like an income tax, except that instead of taxing people based on their income, you tax them on income minus saving--which is the definition of consumption. It would actually be fairly straightforward for the US to turn its income tax into a full consumption tax. It's already true that a high proportion of saving--certain retirement accounts, increases in home equity, capital gains--isn't taxed when the gains occur. The US would have a consumption tax if all saving was deducted from income before taxes were applied.
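A minimal sketch of that idea: a progressive tax applied to income minus saving. The brackets and rates here are invented for illustration, not a proposal from any actual tax code:

```python
def consumption_tax(income, saving):
    """Progressive tax on consumption, defined as income minus saving."""
    base = income - saving
    tax = 0.0
    # (lower bound, upper bound, marginal rate) -- hypothetical brackets
    brackets = [(0, 50_000, 0.10),
                (50_000, 150_000, 0.25),
                (150_000, float("inf"), 0.40)]
    for lower, upper, rate in brackets:
        if base > lower:
            tax += (min(base, upper) - lower) * rate
    return tax

# Two earners with the same income: the heavier saver owes less.
print(round(consumption_tax(200_000, 80_000)))  # 22500 (tax base of 120,000)
print(round(consumption_tax(200_000, 10_000)))  # 46000 (tax base of 190,000)
```

Because saving is deducted before the tax applies, the return to saving is untaxed, which is exactly the incentive argument made above.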

However, an outright VAT for the US seems highly unlikely. The last time the subject was even broached, back in April 2010, the US Senate quickly bestirred itself for an 85-13 nonbinding resolution that the US should not adopt a value-added tax. Economists have an old joke about a value-added tax which goes like this: "Democrats oppose a VAT because it's regressive and Republicans oppose a VAT because it's potentially a money machine for government. However, the US will enact a VAT as soon as Republicans recognize that it's regressive and Democrats recognize that it's a money machine for the government."

For more discussion of consumption taxation, with some emphasis on the US exception, a useful starting point is an article in the Winter 2007 issue of the Journal of Economic Perspectives by James R. Hines, "Taxing Consumption and Other Sins" (21:1, pp. 49-68).

Monday, April 27, 2015

Everyone knows that online technologies have the potential to disrupt existing arrangements in higher education, perhaps in extreme ways. But how far has the disruption proceeded? And what are the main barriers ahead?

I. Elaine Allen and Jeff Seaman have been doing annual surveys on these issues for 12 years. Their latest is "Grade Level: Tracking Online Education in the United States," published by the Babson Survey Research Group and Quahog Research Group, LLC. (Accessing the report may require free registration.) As Allen and Seaman point out, the "National Center for Education Statistics' Integrated Postsecondary Education Data System (IPEDS) added 'distance' education to the wealth of other data that they collect and report on US higher education institutions." Allen and Seaman had been collecting data from a sample of over 600 colleges and universities, but the IPEDS data is mandatory for all institutions of higher education. I recommend the full report, but here are a few results that caught my eye.

Allen and Seaman provide evidence that over 70% of degree-granting institutions, and over 95% of those with the largest enrollments, now have distance-learning online options. Many of these institutions say that distance learning is crucial to the future of their institutions. Over 5 million students are currently taking at least one distance-learning class, although the rate of growth of students taking such classes seems to have slowed in recent years. On the other hand, faculty seem increasingly leery of online education, and many of them do not seem especially willing to embrace it. Many students are finding that completing online courses on their own is difficult. Skepticism about MOOCs, or "massively open online courses," is on the rise.

In the BSRG survey, chief academic officers are asked for their own perceptions of how learning outcomes differ for online learning and face-to-face classes. The share that think online learning has inferior outcomes is falling. Most answer "same," and a growing share--approaching 20%--says that online learning outcomes are superior.

These same chief academic officers are more likely to favor "blended" courses, which combine elements of online learning with some face-to-face component, over pure online courses. In the graph below, the light blue bar on the bottom is equal for both bars, because it's the share of those who say that learning outcomes are the same in online and blended courses. Those who think one or the other is superior are then shown in the gray and orange bars on top.

However, it's worth noting that "perceptions of the relative quality of both online and blended instruction have shown the same small decline for each of the past two years." Some of the issue seems to be that faculty members are often not supportive of online learning.

Another problem is the concern that students taking such courses often don't finish, and need additional support or self-discipline to make it through.

What about the most extreme version of online education, the MOOCs or massively open online courses? Here, the bloom seems to be off the rose. "The portion of academic leaders saying that they do not believe MOOCs are sustainable increased from 26.2% in 2012 to 28.5% in 2013, to 50.8% in 2014. A majority of all academic leaders now state that they do not think the MOOCs are a sustainable method for offering courses."

From the viewpoint of chief academic officers, about 8% of their institutions offer a MOOC, and an additional 5% are planning to offer one. But the main reasons given for offering a MOOC are things like increasing institution visibility, driving student enrollment, and experimenting with new pedagogy. In a way, MOOCs are being treated as loss-leaders.

Perhaps there is a chicken-and-egg problem here: if more faculty embraced on-line learning and MOOCs, then they would work for more students. But maybe the problems of online education and MOOCs are in some ways embedded, and the issue is how to create a hybrid structure that builds on what online education can do well, without pretending that (at least in the current state of artificial intelligence) higher education can be automated.

After all, there's been a primitive version of a massively open course available for quite some time now. It's called a textbook, or a public library. Students could in theory learn everything they need in this way, but few have the energy and directedness and stamina to do so on their own. Perhaps the online version of courses in higher education will be so pedagogically wonderful that lots of students who couldn't learn the material on their own from a textbook or a library will be able to do so, but I'm dubious. The challenge for higher education will be how to combine what online learning can do well (presenting examples in livelier and different ways, repetition, limited but immediate feedback) with the strengths of the human touch, which includes support from other students, along with a mixture of teaching assistants and faculty members. I'm sure there's no one all-purpose formula. But some formulas will work better than others.

Here's a figure showing the basic question. Real US manufacturing output is the solid blue line, where the 1990 level is set equal to 100, and changes are shown relative to that level. The air pollutants shown are carbon monoxide (CO), nitrogen oxides (NOx), particulate matter (PM2.5 and PM10 refer to particulate matter that is either less than 2.5 microns or less than 10 microns), sulfur dioxide (SO2), and volatile organic compounds (VOCs).

How could this happen? Shapiro and Walker lay out the four most common hypotheses (citations omitted):

Research suggests at least four possible explanations of these substantial improvements in U.S. air quality. First, U.S. manufacturing trade has grown substantially. When dirty industries like steel or cement move abroad, total U.S. pollution emissions may fall. Second, federal and state agencies require firms to install increasingly stringent pollution abatement technologies. Some research, for example, directly attributes national changes in air quality to the Clean Air Act and to other environmental regulations. Third, ... Americans may gradually choose to spend less on heavy manufactured goods and more on services and cleaner goods. Finally, if manufacturers use fewer dirty inputs each year to produce the same outputs, then annual productivity growth could improve air quality.

The authors use plant-level data that includes the value of shipments, production costs, and pollutants emitted. They lay out a detailed industry-level model of firms and the choices they make about reducing pollution, and then estimate the model with data. As is standard for any exercise like this, the model is a useful (and indeed, an unavoidable) way of organizing the data and suggesting connections, and the detailed results will depend to some extent on the specific model chosen. They also do some specific analysis of the extent to which emissions of specific pollutants changed when tighter rules were put into effect. Readers with the technical skills can dig through the paper and evaluate it for themselves. Here, I'll stick to some big-picture insights.

The decline in pollution from US manufacturing doesn't seem to be due to a change in what was produced: instead, it's mostly a case of lower pollution to produce specific products. They write: "[C]hanges in the scale of manufacturing output or changes to the composition of products produced cannot explain trends in pollution emissions from U.S. manufacturing between 1990 and 2008. Instead, changes in emissions over this time period were almost exclusively driven by decreased pollution per unit output for narrowly defined products." This insight suggests that while it is certainly true that global supply chains are shifting parts of manufacturing around the world, and that American consumers are spending money on services, these factors are not causing the fall in pollution from US manufacturing production.

Instead, Shapiro and Walker find strong evidence that environmental regulations are what made most of the difference. "[W]e find that the increasing stringency of environmental regulation explains 75 percent or more of the 1990-to-2008 decrease in pollution emissions from U.S. manufacturing. ... [W]e find that changes in U.S. productivity have had small effects on U.S. pollution emissions at the economy-wide level."

An obvious follow-up question here is: What about carbon dioxide and the potential effects on climate change? In their analysis of air pollution regulation, Shapiro and Walker estimate that the increase in regulatory costs on the six pollutants mentioned above was the equivalent of imposing a pollution tax that more than doubled between 1990 and 2008. However, we have not imposed either such rules or explicit taxes on carbon dioxide, and thus it's not a big surprise that CO2 emissions have not dropped in sync with the other emissions. The authors write: "We also measure implicit tax rates for CO2, a pollutant which largely has not been regulated. While our inferred tax rates for most pollutants more than doubled between 1990 and 2008, the implicit tax rate for CO2 hardly changed over this time period."

Of course, this sort of study doesn't analyze costs and benefits: that is, it doesn't show that the costs of these new pollution rules were worthwhile, nor does it analyze whether some alternative form of regulation might have been able to achieve the same reduction in pollution at lower cost. However, the fact that it is possible to substantially reduce air pollution while still having manufacturing output rise has broader implications for holding down pollution in the US and around the world. Readers might also be interested in earlier posts on "Costs of Air Pollution in the United States" (November 7, 2011), "Air Pollution: World Biggest Health Hazard" (April 1, 2014), and "Other Air Pollutants: Soot and Methane" (June 28, 2012).

Thursday, April 23, 2015

I've pointed out in the past that Americans tend to be less supportive of international trade than people in other high-income countries, and that people in high-income countries tend to be less supportive of trade than those in lower-income countries. Of course, foreign trade (as measured by the export/GDP ratio, for example) is relatively less important in the US economy, with its enormous internal markets, than in most other countries around the world. Thus, a working hypothesis would be that countries with lower incomes and/or more exposure to foreign trade are more likely to see it in overall positive terms.

However, a recent Gallup poll conducted in February suggests that although Americans became more likely to see trade as a "threat" than as an "opportunity" from about 2001-2008, since the Great Recession of 2007-2009 a rising share of Americans have started viewing foreign trade as an opportunity. Here's a graph from Gallup:

Interestingly, the Gallup poll also suggests that the rise in Americans' support for foreign trade has been driven mostly by a shift in the pro-trade beliefs of (self-described) Democrats and Independents, not a shift in the beliefs of Republicans. Indeed, Gallup finds that Republicans were more likely than Democrats to see trade as an opportunity in the early 2000s, but now Democrats are more likely to hold such beliefs.

I welcome the overall shift toward a more positive view of foreign trade among Americans. As I've argued on this blog before, the next few decades seem likely to be a time when the most rapid economic growth is happening outside the high-income countries of the world, and finding ways for the US economy to connect with and participate in that rapid growth could be an important driver of US economic growth in the decades ahead. In a broad sense, US attitudes over foreign trade mirror the behavior of the US trade deficit: that is, when the US trade deficit was getting worse in the early 2000s, the share of those viewing trade as a "threat" was rising, but at about the same time that the US trade deficit started declining, the share of those viewing trade as an "opportunity" started to rise.

However, I feel considerable uncertainty over how to interpret these findings. For example, it's not clear to me why Democrats and Independents are shifting their opinions about trade more strongly than Republicans. This shift doesn't seem to reflect the political divisions in Congress, where it seems that Republicans are more often the ones to be pushing for agreements to reduce trade barriers and Democrats are more likely to be opposing them.

Wednesday, April 22, 2015

Discussions of the economic situation in Europe and the eurozone during the last few years have a heavy emphasis on government debt and central bank policy. But from the time when the Treaty of Rome back in 1957 created the European Economic Community, better known as the "Common Market," a primary goal of the European project has been to create a single market. The notion is that goods, services, people, and capital are supposed to be able to move freely across borders. Mario Mariniello, André Sapir and Alessio Terzi offer an overview of how far we have travelled on "The long road towards the European single market," published in March 2015 as Bruegel Working Paper 2015/01.

Back in 1993, the European Commission announced the "completion" of the single market project. Economic theory suggests the single market should improve productivity and thus increase the standard of living. Here's a comparison between productivity in the EU-15 (the 15 countries that were in the EU through the 1990s, before the addition of a wave of new EU members in 2004) and US productivity. The vertical line at 1993 doesn't suggest that the SMP, the single market project, has given Europe much of a productivity boost.

Mariniello, Sapir, and Terzi rehearse the reasons why economic theory suggests a single market should help productivity growth. For example, a single market and lower barriers to trade mean that firms face more competition, which should encourage innovation. The cross-fertilization of ideas and methods across Europe should also favor innovation. Firms would be operating in a larger European market rather than a smaller national market, so they can expand to take advantage of economies of scale. Workers can move to regions where the economy is expanding and job opportunities are more plentiful. Financial integration would help to allocate investment capital to projects with higher rates of return and greater competitiveness. However, the authors also point out that when empirical researchers have tried to quantify these effects, they haven't found much--as the figure above implies.

What went wrong? The authors argue that the single market is far from complete along a number of dimensions. Here is their "non-exhaustive" list of remaining major barriers:

Insufficient mutual recognition. Official tariffs between EU countries are essentially zero. But even more than a half-century after the Treaty of Rome, regulations for products and services still vary considerably across EU countries, so the costs of doing business across borders remain high.

Poor quality of implementation of directives. An OECD report emphasized that efforts to harmonize regulations across Europe and promote competition have often been accompanied by "regulatory creep," adding to the costs of such regulation, as well as by "gold-plating," which is implementation that goes beyond the requirements and thus makes the business climate more complex rather than simpler.

Public Procurement. They cite evidence that public procurement is 16% of GDP in the EU countries, but only 3% of contracts are actually cross border, and only 20% of bids end up being published.

Service Sectors. National-level rules and regulations are especially strong in a number of service industries, including retail trade, professional services (doctors, lawyers, accountants), and network industries (telecommunications).

Infrastructure not integrated. Infrastructure is still planned and funded mostly at the national level, including both infrastructure for transportation and for communication. The national emphasis tends to be on moving and communicating within the country, not across borders.

Free movement of people. Only about 3% of the EU workforce actually works in another country. Some of the issue is language, culture, and family ties. But another part is that countries often have rules that discourage working elsewhere, like a lack of eligibility for retirement and unemployment programs, lack of recognition of professional qualifications, and lack of coordination on personal taxes that can lead to being double-taxed in two countries on the same income.

Differences in regulation across countries. There is no common set of banking regulations across the EU countries. The 28 countries have 28 different tax codes. Environmental standards and consumer protection standards vary by country.

Of course, this outcome is unsurprising from a political economy viewpoint. On one side, EU countries make strong statements about their commitment to the European project, and at some level they mean it. But when push comes to shove in domestic politics, there remains a strong desire for national control over all sorts of regulations, prodded by national-level interest groups that like limited competition and public monopolies and also play a large role in the electoral fortunes of national politicians.

Tuesday, April 21, 2015

The US House of Representatives voted on April 16 to repeal the estate tax. The bill seems unlikely to become law. The US Senate seems unlikely to pass such legislation, because it would be filibustered by Democrats, and President Obama has said he would veto the measure if it passed through Congress. But did you know that Sweden, with its well-earned reputation for egalitarian social policy, abolished its century-old inheritance tax in 2004? Indeed, Portugal also abolished its inheritance tax in 2004, as did Austria in 2008 and Norway in 2014.

For context, here's a figure that offers an illustration of Sweden's inheritance tax over time. The top tax rate rose sharply through much of the 20th century, topping out at 65%. Perhaps more interesting, the horizontal axis gives a sense of the income level where those top rates kicked in. For example, in 1985, although the very top tax rate applied only to those with inheritance levels at roughly 30 times annual income of a worker, many of those with inheritances that were only 10 times the annual income of a worker or less also could end up paying inheritance tax.

Of course, Sweden is not extraordinary in having some high top-level inheritance taxes. Here's a figure with some international comparisons of countries, including the United States, where the top rate of inheritance taxes exceeded 60% over time. However, notice that in all of those countries the peak inheritance tax rates are in the past--often several decades in the past. Top rates for inheritance taxes have been coming down.

Of course, each country has its own distinctive politics and history that go into the decisions about inheritance taxes. What caused Sweden to abolish the tax in 2004? Here are some of the reasons from Henrekson and Waldenström, with my own thoughts about how the arguments apply in a US context.

1) In Sweden, a relatively large share of inheritances owed some inheritance tax. The authors report: "In the last year of the tax, the exemption level was a mere one-quarter of an annual production worker’s income (SEK 70,000), and the top marginal rate was reached at an inherited amount of just over two times the annual income of a production worker." This is probably the main difference with US inheritance tax, which has high exemptions. For example, in 2015 only estates of more than $5.4 million will even potentially need to pay any estate tax at all. Historically, only about 1 in 700 deaths in the US results in paying any estate tax.

2) Sweden had built a number of "safety valves" into its inheritance tax. For example, "owners of unlisted business equity"--often those inheriting a family business--had strict limits on what they would pay. Apparently, someone inheriting a family business in Sweden under no circumstances paid an inheritance tax of more than 9%. There were prominent examples of how wealthy Swedish families (Wallenberg, Söderberg) had created foundations to shield family wealth from the inheritance tax. There were a variety of ways to reduce the estate tax owed: for example, family holdings could be loaded up with debt, to reduce their current value. Money could be passed between generations through real estate holdings, which faced different tax rules, or by purchasing large life insurance policies, because in Sweden life insurance payments were tax-free for the beneficiary. Of course, one of the easiest ways to pass wealth between generations is to do so before death, by hiring the next generation into the family business at an exorbitant salary. Obviously, when those with high wealth levels have many ways to avoid or minimize an estate tax, while still passing along wealth to future generations, such a tax ends up looking unfair and pointlessly symbolic. A globalizing economy surely adds to the ways in which wealth can be accumulated and passed on in other nations, where the estate tax rules may be more lenient.

3) The authors point out that in Sweden and in other countries, the tax funds raised by estate taxes are often around 1-2% or even less of total government revenues. Thus, whether you think that dropping the estate and inheritance tax is wise or unwise, it makes much less difference to the government bottom line than many other tax changes.

As regular readers know, I'm concerned about the extent to which economic inequality is transmitted between generations. But in the modern US economy, inherited fortunes don't matter in the same way that they did, say, back in 19th century Europe. Instead, I think the transmission of economic inequality in the modern economy happens primarily because parents who have the resources provide all kinds of extra support and opportunities to their children. I find it hard to get much excited over the current debates about the US estate tax, which are essentially arguments over whether the labyrinthine rules of the tax code will make it harder or easier for a tiny slice of the very wealthiest households--together with their high-priced estate lawyers--to pass along resources after death. I worry a lot more about all the children growing up whose families don't have the resources and connections to provide a special added turbo-boost to their development, and how their life opportunities are being shaped and determined.

Monday, April 20, 2015

Empirical research in economics is being revolutionized (and no, that word is not too strong) by two major new sources of data: administrative data and private sector data. Liran Einav and Jonathan Levin explain in "Economics in the age of big data," which appears in Science magazine, November 7, 2014 (vol 346, issue 6210; the "Review Summary" is p. 715 and the "Review" article itself is pp. 1243089-1 to 1243089-6). Science is not freely available online, but many readers will have access through library subscriptions.

To grasp the magnitude of the change, you need to know that two or more decades ago, economists had only a few main sources of data: data produced by the government for public consumption, like the economic statistics from the Bureau of Economic Analysis or the Bureau of Labor Statistics, the surveys from the US Census, and a few other major surveys. Sometimes, economists also constructed their own data by working in library archives or carrying out their own surveys. For example, as an undergraduate back around 1980, I remember doing basic empirical exercises where you wrote programs (stored on punch cards!) to find correlations between GDP, unemployment, interest rates, and car sales. I remember as a graduate student in the early 1980s compiling data on the miles-per-gallon of new cars, which involved collecting the annual paper brochures from the US Department of Transportation and then inputting the data to a computer file (no more punchcards by then!). As Einav and Levin put it: "Even 15 or 20 years ago, interesting and unstudied data sets were a scarce resource."

One of the major new sources of data is "administrative data," which is data collected by the government in the course of administering various programs. As Einav and Levin point out, some of the most prominent results in empirical economics in recent years are based on administrative data.

For example, the evidence that most of the rise in income inequality is at the very top of the income distribution is based on IRS tax data. Evidence on wide variation in health care spending, how people and providers react to different health insurance provisions, and the use of certain health care treatments across states (thus implying that some health care providers in some states may be overdiagnosing or underdiagnosing) is often based on administrative data from Medicare and Medicaid. Evidence on how teachers can affect student academic achievement is based on a combination of student test scores and the patterns of how teachers are assigned to classrooms.

Of course, the use of administrative data for research raises legitimate privacy issues. But just to be clear, it's the existence of this data in government hands that raises the privacy concerns in the first place. Before the administrative data is received by researchers, it is "anonymized" so that it should be impossible to identify individuals. Einav and Levin sum up:

The potential of administrative data for academic research is just starting to be realized, and substantial challenges remain. This is particularly true in the United States,where confidentiality and privacy concerns, as well as bureaucratic hurdles, have made accessing administrative data sets and linking records between these data sets relatively cumbersome. European countries such as Norway, Sweden, and Denmark have gone much farther to merge distinct administrative records and facilitate research. ... However, even with today’s somewhat piecemeal access to administrative records, it seems clear that these data will play a defining role in economic research over the coming years.

The other major source of new data comes from private efforts, either by firms or by researchers. Your credit card company, your insurance company, your cable access provider, and many other firms have a lot of information about your life and your preferences. They are already doing in-house research on this data, but in some cases, they are pairing with research economists to work on anonymized forms of the data. For example, Einav and Levin have done research with eBay data on how Internet sales taxes affect online shopping.

Some companies are taking the next step and publishing data. Einav and Levin write:

Already the payroll service company ADP publishes monthly employment statistics in advance of the Bureau of Labor Statistics, MasterCard makes available retail sales numbers, and Zillow generates house price indices at the county level. These data may be less definitive than the eventual government statistics, but in principle they can be provided faster and perhaps at a more granular level, making them useful complements to traditional economic statistics.

Similarly, Google publishes a "Flu Trends" list which seeks to provide early warning of flu outbreaks, faster than the Centers for Disease Control statistics, by using data from search queries.

Researchers can create their own data sets by "scraping" the web: that is, by writing programs that will download data from various websites at regular intervals. One of the best-known of these projects is the Billion Prices Project run by Alberto Cavallo and Roberto Rigobon at MIT. Every day, their program downloads detailed data on prices and product characteristics for hundreds of thousands of products from websites all over the world. For a sense of the findings that can emerge from this kind of study, here's one graph showing the US price level as measured by the Billion Prices Project and the official Consumer Price Index. They are fairly close. Next look at the price level from the Billion Prices Project and the official measure of inflation in Argentina. It's strong publicly available evidence that the government in Argentina is gaming its inflation statistics.
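The mechanics of scraping are simple in outline. Here's a minimal sketch in Python, assuming a hypothetical retailer page that marks prices with class="price"; the page structure and the canned HTML snippet are invented for illustration, and a real scraper would fetch live pages on a schedule rather than use a hard-coded string:

```python
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    """Collect the text of every element tagged class="price"."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs.
        if ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            # Strip the currency symbol and store the numeric price.
            self.prices.append(float(data.strip().lstrip("$")))
            self.in_price = False

# In a real scraper this HTML would come from downloading a page
# (e.g. with urllib.request) once a day; a canned snippet stands in here.
sample_page = """
<ul>
  <li><span class="name">Coffee, 1 lb</span><span class="price">$8.99</span></li>
  <li><span class="name">Tea, 20 bags</span><span class="price">$4.49</span></li>
</ul>
"""

scraper = PriceScraper()
scraper.feed(sample_page)
print(scraper.prices)  # the day's observed prices, ready to append to a panel
```

Run daily over thousands of sites, the appended observations become exactly the kind of high-frequency price panel the Billion Prices Project builds its indices from.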

Finally, many more economists are creating their own data by carrying out their own social experiments and surveys.

The new sources of data are changing the emphasis of published economic research. If you go back about 30 years, the majority of papers appearing in top research journals were theoretical--that is, they contained either no data or a few bits of illustrative data. Now, about 70% of the papers in top economics journals are primarily empirical and data-based. Einav and Levin offer evidence that for empirical papers (not including experiments designed by the researcher), only about 5-10% of the papers used administrative or private data back in 2006-2008, but by 2013 and 2014, the share of empirical papers using administrative and private data was nearly half. The tools for collecting and using this administrative and private-sector data are different in many ways from what economists have traditionally done. Careers, reputations, and eventually even Nobel prizes will be built on this body of work.

As a starter, here's the overall pattern of global military spending since the late 1980s. Since the Great Recession hit, global military spending has dropped a bit (as measured in inflation-adjusted dollars).

Here's a table showing the 15 countries with the highest level of military spending. The US accounts for about one-third of all global military spending; to put it another way, US military spending is roughly equal to the next seven nations on the list combined. Of course, it's worth remembering that military spending doesn't buy the same outcomes in all countries; for example, the pay of a soldier in India or China is considerably lower than that in the United States. It's also interesting to me that US military spending as a share of GDP is higher than for most of the other countries in the top 15, with the exception of Russia, Saudi Arabia, and UAE. And it's thought-provoking to compare, say, military spending in China to that in Japan and South Korea, or military spending in Russia to that in Germany and France.

Finally, here's a list of the countries where military spending is more than 4% of their GDP. As the report notes: "A total of 20 countries—concentrated in Africa, Eastern Europe and the Middle East—spent more than 4 per cent of their GDP on the military in 2014, compared to 15 in 2013. Only 3 of the 20 countries are functioning democracies, and the majority were involved in armed conflict in 2013–14 or had a recent history of armed conflict."

I won't editorialize here, except to note that when countries devote very high levels of GDP to military spending, they are putting their money behind a belief that using or threatening military force is in their near future.

Thursday, April 16, 2015

The natural disasters that cause the highest levels of insurance losses are only rarely the same as the natural disasters that cause the greatest loss of life. Why should that be? Shouldn't a bigger disaster affect both property and lives? The economics of natural disasters (and yes, there is such a subject) offers an answer. But first, here are two lists from the Sigma report recently published by Swiss Re (No 2, 2015).

The first list shows the 40 disasters that caused the highest insurance losses from 1970 to 2014 (where the size of losses has been adjusted for inflation and converted into 2014 US dollars). The top four items on the list are: Hurricane Katrina, which hit the New Orleans area in 2005 (by far the largest in terms of insurance losses); the 2011 Japanese earthquake and tsunami; Hurricane Sandy, which hit the New York City area in 2012; and Hurricane Andrew, which blasted Florida in 1992. The fifth item is the only disaster on the list that wasn't natural: the terrorist attacks of September 11, 2001.

Now consider a list of the top 40 disasters over the same time period from 1970 to 2014, but this time they are ranked by the number of dead and missing victims. The top five on this list are the Bangladesh storm and flood of 1970 (300,000 dead and missing); China's 1976 earthquake (255,000 dead and missing), Haiti's 2010 earthquake (222,570 dead and missing), the 2004 earthquake and tsunami that hit Indonesia and Thailand (220,000 dead and missing), and the 2008 tropical cyclone Nargis that hit the area around Myanmar (138,300 dead and missing). Only two disasters make the top 40 on both lists: the 2011 Japanese earthquake and tsunami, and Japan's Great Hanshin earthquake of 1995.

The reason why there is so little overlap between the two lists is of course clear enough: the effects of a given natural disaster on people and property will depend to a substantial extent on what happens before and after the event. Are most of the people living in structures that comply with an appropriate building code? Have civil engineers thought about issues like flood protection? Is there an early warning system so that people have as much advance warning of the disaster as possible? How resilient is the infrastructure for electricity, communications, and transportation in the face of the disaster? Was there an advance plan before the disaster on how support services would be mobilized?

In countries with high levels of per capita income, many of these investments are already in place, and so natural disasters have the highest costs in terms of property, but relatively lower costs in terms of life. In countries with low levels of per capita income, these investments in health and safety are often not in place, and much of the property that is in place is uninsured. Thus, a 7.0 earthquake hits Haiti in 2010, and 225,000 die. A 9.0 earthquake/tsunami combination hits Japan in 2011--and remember, earthquakes are measured on a base-10 exponential scale, so a 9.0 earthquake has 100 times the shaking power of a 7.0 quake--and less than one-tenth as many people die as in Haiti.
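The magnitude arithmetic can be checked directly. A short sketch: the factor of 100 refers to ground-motion amplitude, which grows tenfold per whole magnitude step, while the energy released grows faster still, by roughly a factor of 10^1.5 per step:

```python
# Earthquake magnitude scales are logarithmic: each whole step is a
# tenfold increase in measured ground-motion amplitude.
def amplitude_ratio(m1, m2):
    return 10 ** (m1 - m2)

# Energy released grows as roughly 10**1.5 per magnitude step.
def energy_ratio(m1, m2):
    return 10 ** (1.5 * (m1 - m2))

print(amplitude_ratio(9.0, 7.0))  # 100.0 -- the "100 times the shaking power"
print(energy_ratio(9.0, 7.0))     # about 1000 -- the energy gap is larger still
```

So the Japan 2011 quake shook roughly 100 times harder, and released on the order of 1,000 times more energy, than the Haiti 2010 quake, yet killed less than one-tenth as many people.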

Natural disasters will never go away, but with well-chosen advance planning, their costs to life and property can be dramatically reduced, even (or perhaps especially) in low-income countries. For an overview of some economic thinking in this area, a starting point is my post on "Economics and Natural Disasters," published November 2, 2012, in the aftermath of Hurricane Sandy.

In one of the great ironies, the great economist Milton Friedman--known for his pro-market, limited government views--helped to invent government withholding of income tax. It happened early in his career, when he was working for the U.S. government during World War II. Of course, the IRS opposed the idea at the time as impractical. Friedman summarized the story in a 1995 interview with Brian Doherty published in Reason magazine. Here it is:

"I was an employee at the Treasury Department. We were in a wartime situation. How do you raise the enormous amount of taxes you need for wartime? We were all in favor of cutting inflation. I wasn't as sophisticated about how to do it then as I would be now, but there's no doubt that one of the ways to avoid inflation was to finance as large a fraction of current spending with tax money as possible.

In World War I, a very small fraction of the total war expenditure was financed by taxes, so we had a doubling of prices during the war and after the war. At the outbreak of World War II, the Treasury was determined not to make the same mistake again.

You could not do that during wartime or peacetime without withholding. And so people at the Treasury tax research department, where I was working, investigated various methods of withholding. I was one of the small technical group that worked on developing it.

One of the major opponents of the idea was the IRS. Because every organization knows that the only way you can do anything is the way they've always been doing it. This was something new, and they kept telling us how impossible it was. It was a very interesting and very challenging intellectual task. I played a significant role, no question about it, in introducing withholding. I think it's a great mistake for peacetime, but in 1941-43, all of us were concentrating on the war.

I have no apologies for it, but I really wish we hadn't found it necessary and I wish there were some way of abolishing withholding now."

[In commemoration of US federal income taxes being due today, April 15, here is an updated version of a post that appeared on April 15, 2013, with the table and numbers updated to reflect the information in the proposed federal budget released by the White House Office of Management and Budget in February of this year.]

In the lingo of government budgets, a "tax expenditure" is a provision of the tax code that looks like government spending: that is, it takes tax money that the government would otherwise have collected and directs it toward some social priority. Each year, the Analytical Perspectives volume that is published with the president's proposed budget has a chapter on tax expenditures.

Here's a list of the most expensive tax expenditures, although you probably need to expand the picture to read it. The provisions are ranked by the amount that they will reduce government revenues over the next five years. It includes all provisions that are projected to reduce tax revenue by at least $10 billion in 2015.

Here are some reactions:

1) The monetary amounts here are large. Any analysis of tax expenditures is always sprinkled with warnings that you can't just add up the revenue costs, because a number of these provisions interact with each other in different ways. With that warning duly noted, I'll just point out that the list of items here would add up to about $1 trillion in 2014.

2) It is not a coincidence that certain areas of the economy that get enormous tax expenditures have also been trouble spots. For example, surely one reason that health costs have risen so far, so fast, relates to the top item on this list, the fact that employer contributions to health insurance and medical care costs are not taxed as income. If they were taxed as income, and the government collected an additional $206 billion in revenue, my guess is that such plans would be less lucrative. Similarly, one of the reasons that Americans spend so much on housing is the second item on the list, that mortgage interest on owner-occupied housing is deductible from taxes. Without this deductibility, the housing price bubble of the mid-2000s would have been less likely to inflate. Just for the record, I have nothing personal against either health care or homeownership! Indeed, it's easy to come up with plausible justifications for many of the items on this list. But when activities get special tax treatment, there are consequences.

3) Most of these tax expenditure provisions have their greatest effect for those with higher levels of income. For example, those with lower income levels who don't itemize deductions on their taxes get no benefit from the deductibility of mortgage interest or charitable contributions or state and local taxes. Those who live in more expensive houses, and occupy higher income tax brackets, get more benefit from the deductibility of mortgage interest. Those in higher tax brackets also get more benefit when employer-paid health and pension benefits are not counted as income.

4) These tax expenditures offer one possible mechanism to ease America's budget and economic woes, as I have argued on this blog before (for example, here and here). Cut a deal to scale back on tax expenditures. Use the funds raised for some combination of lower marginal tax rates and deficit reduction. Such a deal could be beneficial for addressing the budget deficit, encouraging economic growth, raising tax revenues collected from those with high income levels, and reducing tax-induced distortions in the economy. You may say I'm a dreamer, but I'm not the only one. After all, a bipartisan deal to broaden the tax base and cut marginal rates was passed in 1986, when the president and the Senate were led by one party while the House of Representatives was led by the other party.

[In commemoration of US federal income taxes being due today, April 15, here's a repeat of a post from April 15, 2014, on the subject of government filling out your taxes for you.]

As Americans hit that annual April 15 deadline for filing income tax returns, they may wish to contemplate how it's done in Denmark. Since 2008, in Denmark the government sends you a tax assessment notice: that is, either the refund you can receive or the amount you owe. It includes an on-line link to a website where you can look to see how the government calculated your taxes. If the underlying information about your financial situation is incorrect, you remain responsible for correcting it. But if you are OK with the calculation, as about 80% of Danish taxpayers are, you send a confirmation note, and either send off a check or wait to receive one.

This is called a "pre-filled" tax return. As discussed in OECD report Tax Administration 2013: Comparative Information on OECD and Other Advanced and Emerging Economies: "One of the more significant developments in tax return process design and the use of technology by revenue bodies over the last decade or so concerns the emergence of systems of pre-filled tax returns for the PIT [personal income tax]." After all, most high-income governments already have data from employers on wages paid and taxes withheld, as well as data from financial institutions on interest paid. For a considerable number of taxpayers, that's pretty much all the third-party information that's needed to calculate their taxes. The OECD reports:

"Seven revenue bodies (i.e. Chile, Denmark, Finland, Malta, New Zealand, Norway, and Sweden) provide a capability that is able to generate at year-end a fully completed tax return (or its equivalent) in electronic and/or paper form for the majority of taxpayers required to file tax returns while three bodies (i.e. Singapore, South Africa, Spain, and Turkey) achieved this outcome in 2011 for between 30-50% of their personal taxpayers. [And yes, I count four countries in this category, not three, but so it goes.] In addition to the countries mentioned, substantial use of pre-filling to partially complete tax returns was reported by seven other revenue bodies -- Australia, Estonia, France, Hong Kong, Iceland, Italy, Lithuania, and Portugal. [And yes, I count eight countries in this category, not seven, but so it goes.] Overall, almost half of surveyed revenue bodies reported some use of prefilling ..."

"Around two-thirds of taxpayers take only the standard deduction and do not itemize. Frequently, all of their income is solely from wages from one employer and interest income from one bank. For almost all of these people, the IRS already receives information about each of their sources of income directly from their employers and banks. The IRS then asks these same people to spend time gathering documents and filling out tax forms, or to spend money paying tax preparers to do it. In essence, these taxpayers are just copying into a tax return information that the IRS already receives independently. The Simple Return would have the IRS take the information about income directly from the employers and banks and, if the person's tax status were simple enough, send that taxpayer a return prefilled with the information. The program would be voluntary. Anyone who preferred to fill out his own tax form, or to pay a tax preparer to do it, would just throw the Simple Return away and file his taxes the way he does now. For the millions of taxpayers who could use the Simple Return, however, filing a tax return would entail nothing more than checking the numbers, signing the return, and then either sending a check or getting a refund. ... The Simple Return might apply to as many as 40 percent of Americans, for whom it could save up to 225 million hours of time and more than $2 billion a year in tax preparation fees. Converting the time savings into a monetary value by multiplying the hours saved by the wage rates of typical taxpayers, the Simple Return system would be the equivalent of reducing the tax burden for this group by about $44 billion over ten years."

Most of this benefit would flow to those with lower income levels. The IRS would save money, too, from not having to deal with as many incomplete, erroneous, or nonexistent forms.
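As a back-of-envelope check on the quoted figures, suppose taxpayers' time is valued at roughly $10 per hour; that wage is my assumption for illustration, not a number from the study:

```python
hours_saved = 225e6   # hours of taxpayer time saved per year (quoted)
fees_saved = 2e9      # tax-preparation fees saved per year (quoted)
wage = 10.0           # assumed dollars-per-hour value of taxpayers' time

annual_savings = hours_saved * wage + fees_saved
ten_year = 10 * annual_savings
print(f"${ten_year / 1e9:.1f} billion over ten years")
# lands in the low $40-billion range, consistent with the quoted $44 billion
```

In other words, the quoted $44 billion over ten years is roughly what you get by valuing the saved hours at a typical taxpayer's wage and adding the avoided preparation fees.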

For the U.S., the main practical difficulty that prevents a move to pre-filling is that with present arrangements, the IRS doesn't get the information about wages and interest payments from the previous year quickly enough to prefill income tax forms, send them out, and get answers back from people by the traditional April 15 timeline. The 2013 report of the National Taxpayer Advocate has some discussion related to these issues in Section 5 of Volume 2. The report does not recommend that the IRS develop pre-filled returns. But it does advocate the expansion of "upfront matching," which means that the IRS should develop a capability to tell taxpayers in advance, before they file their return, about what third parties are reporting to the IRS about wages, interest, and even matters like mortgage interest or state and local taxes paid. If taxpayers could use this information when filling out their taxes in the first place, then at a minimum, the number of errors in tax returns could be substantially reduced. And for those with the simplest kinds of tax returns, the cost and paperwork burden of doing their taxes could be substantially reduced.

Tuesday, April 14, 2015

So many important aspects of the US and world economy turn on developments in information and communications technology and their effects. These technologies have been driving productivity growth, but will they keep doing so? These technologies have been one factor creating the rising inequality of incomes, as many middle managers and clerical workers found themselves displaced by information technology, while a number of high-end workers found that these technologies magnified their output. Many other technological changes--like the smartphone, medical imaging technologies, decoding the human genome, or various developments in nanotechnology--are only possible based on a high volume of cheap computing power. Information technology is part of what has made the financial sector larger, as the technologies have been used for managing (and mismanaging) risks and returns in ways barely dreamed of before. The trends toward globalization and outsourcing have gotten a large boost because information technology made it easier to coordinate economic activity across long distances.

In turn, the driving force behind information and communications technology has been Moore's law, which can be understood as the proposition that the number of components packed onto a computer chip would double every two years, implying a sharp fall in the costs and rise in the capabilities of information technology. But the capability of making transistors ever-smaller, at least with current technology, is beginning to run into physical limits. IEEE Spectrum has published a "Special Report: 50 Years of Moore's Law," with a selection of a dozen short articles looking back at Moore's original formulation of the law, how it has developed over time, and prospects for the law continuing. Here are some highlights.
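To get a feel for what biennial doubling implies, here is a minimal sketch (my own, not from the report) that compounds a strict two-year doubling over the 50 years since Moore's 1965 article. The starting count of 64 components is an illustrative assumption, not a figure from the report; the point is the growth factor, which doesn't depend on the starting value.

```python
# Illustrative compounding of Moore's law: components double every 2 years.
# The base count of 64 components in 1965 is an assumption for illustration.
def components(year, base_year=1965, base_count=64, doubling_years=2):
    """Component count implied by a strict two-year doubling."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

# Fifty years of doubling every two years is 25 doublings: a factor of
# 2**25, roughly 33.5 million, regardless of the starting count.
growth_factor = components(2015) / components(1965)
print(f"Growth factor over 50 years: {growth_factor:,.0f}")  # 33,554,432
```

Even a modest-sounding doubling period compounds into seven to eight orders of magnitude over five decades, which is why intuition about exponential growth so often fails.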

It's very hard to get an intuitive sense of the exponential power of Moore's law, but Dan Hutcheson takes a shot at it with a few well-chosen sentences and a figure. He writes:

In 2014, semiconductor production facilities made some 250 billion billion (250 x 10^18) transistors. This was, literally, production on an astronomical scale. Every second of that year, on average, 8 trillion transistors were produced. That figure is about 25 times the number of stars in the Milky Way and some 75 times the number of galaxies in the known universe. The rate of growth has also been extraordinary. More transistors were made in 2014 than in all the years prior to 2011.
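Hutcheson's per-second figure follows directly from the annual total; here's a quick back-of-the-envelope check (mine, not from the article):

```python
# 250 billion billion transistors produced in 2014, spread over the year.
annual_transistors = 250e18
seconds_per_year = 365 * 24 * 3600          # 31,536,000 seconds
per_second = annual_transistors / seconds_per_year
print(f"{per_second:.1e} transistors per second")  # about 7.9e12, i.e. ~8 trillion
```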

Here's a figure from Hutcheson showing the trends of semiconductor output and price over time. Notice that both axes are measured as logarithmic scales: that is, they rise by powers of 10. The price of a transistor was more than a dollar back in the 1950s, and now it's a billionth of a penny.
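The slope of that price line implies a remarkably steady rate of decline. As a rough calculation of my own, with assumed endpoints read loosely off the figure (about $1 per transistor around 1955, and a billionth of a penny, i.e. $10^-11, around 2014):

```python
# Assumed endpoints, roughly read off the figure; these are illustrative.
price_start, year_start = 1.0, 1955         # about $1 per transistor
price_end, year_end = 1e-11, 2014           # a billionth of a penny = $1e-11
years = year_end - year_start

# Constant annual rate of decline connecting the two endpoints.
annual_factor = (price_end / price_start) ** (1 / years)
print(f"Average annual price decline: {1 - annual_factor:.0%}")  # about 35%
```

Eleven orders of magnitude over six decades works out to roughly a 35 percent price drop every year, sustained for sixty years; few if any other products have a comparable record.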

The engineering project of making the components on a computer chip smaller and smaller is beginning to get near some physical limits. What might happen next?

Chris Mack makes the case that Moore's law is not a fact of nature; instead, it's the result of competition among chip-makers, who viewed it as the baseline for their technological progress, and thus set their budgets for R&D and investment to keep up this pace. He argues that as technological constraints begin to bind, the next step will be combining more capabilities on a single chip.

I would argue that nothing about Moore’s Law was inevitable. Instead, it’s a testament to hard work, human ingenuity, and the incentives of a free market. Moore’s prediction may have started out as a fairly simple observation of a young industry. But over time it became an expectation and self-fulfilling prophecy—an ongoing act of creation by engineers and companies that saw the benefits of Moore’s Law and did their best to keep it going, or else risk falling behind the competition. ...

Going forward, innovations in semiconductors will continue, but they won’t systematically lower transistor costs. Instead, progress will be defined by new forms of integration: gathering together disparate capabilities on a single chip to lower the system cost. This might sound a lot like the Moore’s Law 1.0 era, but in this case, we’re not looking at combining different pieces of logic into one, bigger chip. Rather, we’re talking about uniting the non-logic functions that have historically stayed separate from our silicon chips.

An early example of this is the modern cellphone camera, which incorporates an image sensor directly onto a digital signal processor using large vertical lines of copper wiring called through-silicon vias. But other examples will follow. Chip designers have just begun exploring how to integrate microelectromechanical systems, which can be used to make tiny accelerometers, gyroscopes, and even relay logic. The same goes for microfluidic sensors, which can be used to perform biological assays and environmental tests.

Andrew Huang makes the intriguing claim that a slowdown in Moore's law might be useful for other sources of productivity growth. He argues that when the power of information technology is increasing so quickly, there is an understandably heavy focus on adapting to these rapid gains. But if gains in raw information processing slow down, there would be room for more focus on making the devices that use information technology cheaper to produce, easier to use, and cost-effective in many ways.

Jonathan Koomey and Samuel Naffziger point out that computing power has become so cheap that we often aren't using what we've got--which suggests the possibility of efficiency gains in energy use and computer utilization:

Today, most computers run at peak output only a small fraction of the time (a couple of exceptions being high-performance supercomputers and Bitcoin miners). Mobile devices such as smartphones and notebook computers generally operate at their computational peak less than 1 percent of the time based on common industry measurements. Enterprise data servers spend less than 10 percent of the year operating at their peak. Even computers used to provide cloud-based Internet services operate at full blast less than half the time.

I'm sure I'll return to the disputes over secular stagnation before too long, but for now, I just want to lay out some of the evidence documenting the investment slowdown. Such a slowdown is troublesome both for short-term reasons, because demand for investment spending is part of what should be driving a growing economy forward in the short run, and also for long-term reasons, because investment helps build the productivity growth that raises the standard of living in the future. The IMF asks "Private Investment: What's the Hold-Up?" in Chapter 4 of the World Economic Outlook report published in April 2015. Here's a summary of some of the IMF conclusions:

The sharp contraction in private investment during the crisis, and the subsequent weak recovery, have primarily been a phenomenon of the advanced economies. For these economies, private investment has declined by an average of 25 percent since the crisis compared with precrisis forecasts, and there has been little recovery. In contrast, private investment in emerging market and developing economies has gradually slowed in recent years, following a boom in the early to mid-2000s.

The investment slump in the advanced economies has been broad based. Though the contraction has been sharpest in the private residential (housing) sector, nonresidential (business) investment—which is a much larger share of total investment—accounts for the bulk (more than two-thirds) of the slump. ...

The overall weakness in economic activity since the crisis appears to be the primary restraint on business investment in the advanced economies. In surveys, businesses often cite low demand as the dominant factor. Historical precedent indicates that business investment has deviated little, if at all, from what could be expected given the weakness in economic activity in recent years. ... Although the proximate cause of lower firm investment appears to be weak economic activity, this itself is due to many factors. ...

Beyond weak economic activity, there is some evidence that financial constraints and policy uncertainty play an independent role in retarding investment in some economies, including euro area economies with high borrowing spreads during the 2010–11 sovereign debt crisis. Additional evidence comes from the chapter’s firm-level analysis. In particular, firms in sectors that rely more on external funds, such as pharmaceuticals, have seen a larger fall in investment than other firms since the crisis. This finding is consistent with the view that a weak financial system and weak firm balance sheets have constrained investment.

I'll just add a couple of figures that caught my eye. Here's the breakdown on how much investment levels have fallen below previous trend in the last six years across advanced economies. The blue bars show the decline relative to forecasts made in spring 2004; the red dot shows the decline relative to forecasts made in spring 2007. These don't differ by much, which tells you that the spring 2004 forecasts of investment were looking fairly good up through 2007. The dropoff in investment for the US economy is in the middle of the pack.

It's also interesting to note that the cost of equipment has been falling over time, driven in substantial part by the fact that a lot of business equipment involves a large dose of computing power, and the costs of computing power have been falling over time. Indeed, one of the arguments related to secular stagnation is that the decline in investment spending might in part be driven by the fact that investment equipment is getting cheaper over time, so firms don't need to spend as much to buy it. An alternative view might hold that as the price of business equipment falls, firms should be eager to purchase more of it. Again, I won't dig into those arguments here, but the pattern itself is food for thought.

Friday, April 10, 2015

Here's a figure showing annual sales of cameras over time, with smartphones included. The gray bars are analog cameras (CIPA stands for the Camera & Imaging Products Association, which collects this data). Compact digital cameras are the blue bars. Smaller categories of digital cameras include D-SLR, which stands for "digital single-lens reflex" camera, and mirrorless, which are cameras with interchangeable lenses.

Specialists still need specialized cameras, but for basic personal and business use, the camera as a separate tool is dying. I suspect that extremely cheap and easy imaging, along with technology that can recognize and "read" those images, will change the way we manage our personal memories, our sharing of experiences with others, our record-keeping, and all the paperwork of society in dramatic and often unexpected fashions.