Technology Policy Institute Blog

Archive for June, 2012

Jeffrey Katz, the CEO of price-comparison site Nextag, is an outlier from the virtually unanimous view that the Internet should remain unregulated. In an op-ed in today’s Wall Street Journal, Mr. Katz takes the position that Google should be turned into a public utility, although he doesn’t use that terminology.

The op-ed is aimed at European Competition Commissioner Joaquin Almunia, who has set a July 2 deadline for Google to respond to the EU’s antitrust concerns. Commissioner Almunia will make a big mistake and risk serious damage to the Internet if he follows any part of Mr. Katz’s advice.

Mr. Katz is nostalgic for the old days. Maybe he should get into a different, slower-moving industry. He laments the fact that Google doesn’t work the way it “used to work.” It now promotes its own (according to Mr. Katz) “less relevant and inferior” products. Google used to highlight Nextag’s services, because they “were better – and they still are. But Google’s latest changes are clearly no longer about helping users.”

In the U.S., antitrust authorities are skeptical about complaints from competitors and, hopefully, Mr. Almunia will be as well. Indeed, there is no evidence that Google has engaged in the type of exclusionary practices that were the focus of the Microsoft case, for example. It is true that both Google and Bing sometimes favor their own specialized search results. Understandably, Mr. Katz doesn’t like this. But both search engines have discovered this is a service their users do like.

The scope of Mr. Katz’s proposed remedy is astounding:

“Google needs to be transparent about how its search engine operates.” Presumably that means making Google’s algorithm, and the changes that occur continually, public. Perhaps Mr. Katz would like a forum where Nextag could express its views on Google’s algorithm changes before they are implemented. That would certainly speed innovation along.

“When a competitor’s service is the best response for the user, Google should highlight it instead of its own service.” Who determines the “best response”? Does Mr. Katz want a say?

“Google should provide consumers with access to unbiased search results.” Who determines what is “unbiased” and how is it even defined?

“Google should grant all companies equal access to advertising opportunities regardless of whether they are considered a competitor.” “Equal access” is a defining feature of public utility regulation. It has no meaning in the absence of price regulation. Is Mr. Katz suggesting price regulation for advertising on Google?

There is a large literature on public utility regulation that people tend to forget. Suffice it to say, the experience overall was not beneficial for consumers. That is why there has been a worldwide movement toward regulatory liberalization over the last few decades. If regulating traditional industries was difficult, regulating an Internet company like Google, and a product like a search engine, in a pro-efficiency, pro-consumer manner would be far more complex – basically, impossible.

In the U.S., public officials and various other stakeholders are in the process of preparing for the international telecommunications negotiations at the December ITU meeting in Dubai, with the goal of keeping the Internet unregulated. This argument becomes more difficult to make if we are in the process of doing the opposite.

Fundamentally, Mr. Katz wants Google to work “the way it used to work.” That is not a recipe for innovation. Hopefully, the authorities will see his recommendations for what they are – the self-interested proposals of a competitor – and discount them accordingly.

By June 2011, nearly one-third of American households relied solely on wireless voice service, with lower-income households more likely to be wireless-only. This information doesn’t come from the FCC, as you might expect. Instead, it comes from the twice-yearly National Health Interview Survey, conducted by the U.S. Census Bureau for the Centers for Disease Control and Prevention (CDC).[1] The example highlights three points policymakers should take to heart for data collection relevant to telecommunications:

The FCC does not always produce the most relevant telecommunications data.

Careful, representative surveys—not population counts, which the FCC uses for measuring voice and broadband markets—are usually the most effective and efficient way to gather data.

Policymaking agencies like the FCC can obtain relevant data from other agencies like the U.S. Census that specialize in data collection but have no vested interest in any particular policy outcome.

Counting telephones began at the turn of the 20th Century

The U.S. Census began to collect data on telephones as they became an increasingly important part of American life. In 1922 the Bureau noted, “The census of telephones has been taken quinquennially since 1902, and statistics of telephones were compiled and published in the decennial censuses of 1880 and 1890.”[2]

The FCC has largely continued this tradition, attempting to count each line or connection for communications technologies. (Some—not me, of course—might say delays in producing some reports indicate a desire to revert to the quinquennial release schedule).

Maintaining a consistent approach to data-gathering has certain advantages, such as facilitating comparisons over time. However, that advantage diminishes as it becomes less clear what, exactly, we should measure and as market changes make any particular count less relevant.

Counting is inefficient and misses the most important data

Most economic and social policy is based on surveys conducted by agencies such as the Census Bureau and the Bureau of Labor Statistics. We rely on surveys because gathering information on an entire population is typically not feasible. For the constitutionally mandated decennial census, for example, the Census Bureau spends about $11 billion and hires about one million temporary workers.[3] By contrast, in a non-census year, the Census Bureau spends about $1 billion on all its data collection efforts.[4] Additionally, surveys make it possible to gather data about particular groups and estimate the likelihood that different measures truly reflect the actual population.
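To see why a sample can stand in for a full count, consider the sampling error of a survey-estimated proportion. The sketch below is purely illustrative: it assumes a simple random sample (real surveys like the NHIS use more complex clustered, weighted designs), and the sample size is a hypothetical round number, not the NHIS’s actual one.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for a
    proportion estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1.0 - p) / n)

# Hypothetical example: a ~31.6% wireless-only share estimated from
# a sample of 30,000 households (an assumed, illustrative sample size).
p_hat, n = 0.316, 30_000
print(f"{p_hat:.1%} +/- {margin_of_error(p_hat, n):.2%}")
```

Even with a sample that is a tiny fraction of the roughly 120 million U.S. households, the estimate is precise to about half a percentage point, which is more than adequate for policymaking.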

The FCC attempts to count all lines, connections, and other factors related to telecommunications by requiring companies to provide certain data. Large firms spend significant resources providing these data. Small firms often do not have the resources to provide this information, and the FCC’s skilled data staff then must spend enormous time and effort trying to gather this information from firms who either will not or cannot respond.

The result is that the FCC has the least reliable count data in precisely the topical and geographic areas that it needs data for sound decisionmaking. For example, counts of broadband connections provide some measure of the intersection of supply (availability) and demand, but not good information on either separately. The counts provide no information on how those connections are used nor on how they break down demographically.[5]

This telecommunications counting fetish has spread to other parts of the government, as well. The National Broadband Map is based on the same flawed premise: the belief that the best dataset comes from observing every detail of every broadband connection. The effort cost about $350 million and still apparently yields inaccurate results in rural areas where policymakers want to direct resources.[6]

The FCC Should Stop Counting and Start Contracting with the Census Bureau to Do Surveys

Nearly all other areas of economic policy are informed by surveys, many of which are conducted monthly to provide real-time information to markets and policymakers. Nothing in particular about telecommunications requires a total population count rather than survey data.

Additionally, there is no reason why the FCC itself should be responsible for data collection. The U.S. Census Bureau is much better equipped to design and implement surveys. It is not uncommon for Census to do survey work for (and funded by) other agencies. In addition to the CDC survey mentioned above, Census also does surveys for the Department of Justice,[7] the National Center for Education Statistics,[8] and State Library Agencies[9] to name a few.

Embracing surveys conducted by other agencies would have several advantages:

Surveys are almost certain to be cheaper than counts both to the government and to the private sector.

Surveys of users, rather than counts submitted by providers, are more likely to yield data not influenced by providers’ incentives to game the data collection process to their own benefit.

Data collection by outside agencies would reduce any inherent conflict of interest the FCC might face when gathering data related to its agenda.

Surveys by other agencies, of course, are not a silver bullet for obtaining better and more timely data. They can be done poorly. And the FCC should remain involved. As the expert agency it should largely determine the questions it needs answered and the type of information necessary for policymaking and provide the resources necessary to do it. Additionally, the FCC needs the ability to compel data from regulated companies for specific decisions when necessary.

Today, unfortunately, surveys are being subjected to attacks by Congressional Republicans, who want to reduce the ability of the U.S. Census Bureau to collect data.[10] These attacks have been roundly and correctly criticized by conservative and liberal commentators alike, who note that these data are crucial to good policymaking.[11]

Despite the Congressional statistical ignorance du jour, surveys by agencies expert in data collection will yield far better data at lower cost than today’s methods. Hopefully the FCC will take note and begin to move our ability to study telecommunications out of the 19th Century.

[5] It is possible to merge geographic counts with demographic data from the Census, but even this approach would be more effective if done in a way that explicitly incorporates connections to the Current Population Survey.