Privacy

To little fanfare, the Ninth Circuit Court of Appeals issued an opinion Monday morning in FTC v. AT&T Mobility. The case has important, if murky, implications for the future jurisdictional lines between the Federal Trade Commission (FTC) and the Federal Communications Commission (FCC), casting doubt on which body will be responsible for protecting consumers and competition across a fairly large swath of the tech and telecom industries. While some reactions were overblown, just how far the fallout spreads is not yet clear. Regardless, the case warrants attention from policymakers on Capitol Hill, at the FCC, and elsewhere.

Back in 2014, the FTC brought an enforcement action against AT&T, suing the company for misleading customers about rate limiting or “throttling” of grandfathered unlimited data plans. AT&T defended itself by essentially saying, “Hey FTC, you don’t regulate us, the FCC regulates us,” pointing to what is called the “common carrier exemption” within section 5 of the FTC Act, which says that the FTC does not have jurisdiction over common carriers.

The question for the Ninth Circuit boiled down to whether the FTC’s common carrier exemption is “status-based” (triggered by a company’s status as a common carrier, regardless of the activity at issue) or “activity-based” (applying only to the company’s common-carrier activities).

Digital trade issues continue to grow in importance to the U.S. economy as people and businesses find new and innovative ways to use data and technology to deliver more goods and services via the Internet. However, the growth in entrepreneurship and innovation so vastly enabled by digital technologies is increasingly threatened by a growing range of digital trade barriers. On July 13, the U.S. House Committee on Ways and Means Subcommittee on Trade held an important hearing on the growing significance of digital trade to the U.S. economy, the rise of these digital trade barriers, and the ways in which U.S. trade policy, including through the Trans-Pacific Partnership (TPP), can help remove existing—and prevent future—barriers. ITIF Founder and President Robert Atkinson testified, alongside representatives from IBM, the Internet Association, PayPal, and Fenugreen (a tech startup). This post captures a few of the key takeaways.

Digital trade benefits a large segment of the U.S. economy and its workforce. Digital trade and data flows often go unrecognized (they are hard to see) for the important role they play in helping U.S. companies and workers, whether from firms big or small.

Sci-Hub has garnered some support in the online piracy debate because the business model used by scientific publishing firms has clearly not caught up to the digital age and is in need of reform. The firms commonly charge as much as $35 for a digital copy of a journal article. Yet an annual subscription to a top journal, such as The Lancet, costs $233 for both digital access and a print copy. Assuming four journal articles per weekly issue, that means publishers charge roughly 31 times more for a single digital article than the per-article price implied by a subscription, even though the marginal cost of a digital copy is essentially zero. While
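The “31 times” figure follows directly from the prices cited above. A quick back-of-the-envelope check, using the text’s stated assumption of four articles per weekly issue across 52 issues a year:

```python
# Back-of-the-envelope check of the "31 times" claim, using the
# prices cited in the text. The four-articles-per-issue figure is
# the text's own assumption, not a measured value.
single_article_price = 35.00   # price of one digital article
annual_subscription = 233.00   # annual subscription (digital + print)
articles_per_year = 4 * 52     # 208 articles per year, assumed

implied_per_article = annual_subscription / articles_per_year
ratio = single_article_price / implied_per_article

print(f"Implied per-article price: ${implied_per_article:.2f}")  # ~$1.12
print(f"Single-article markup: {ratio:.0f}x")                    # ~31x
```

A subscription works out to roughly $1.12 per article, so the $35 single-article price is about a 31-fold markup.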

In 2014, Europe’s highest court ruled that Europeans have the ability to request that search engines remove links from queries associated with their names if those results are irrelevant, incorrect, or outdated. As a result of this ruling, Google agreed to delist search results from country code level domains—such as Google.fr for France—to remove offending results for European users, without affecting the rest of its users worldwide. Earlier this month, Google expanded its practice so that it now will delist offending results from all Google search domains, including Google.com, for all European users, based on geo-location signals, such as IP addresses. So a user in France would not see delisted URLs even if they visit Google.com instead of Google.fr. France is now saying that this is insufficient and Google must take down offending material for all users visiting any of its domains worldwide.
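Google has not published its implementation, but the behavior the post describes (filtering results by the searcher’s inferred location rather than by the domain visited) can be sketched as a simple filter. Everything below, including the DELISTED table, the URLs, and the visible_results function, is hypothetical and purely illustrative:

```python
# Hypothetical sketch of geo-based delisting. A URL delisted under a
# request granted in one country is hidden from users geolocated there,
# no matter which Google domain (google.com, google.fr, ...) they use.
# All names and URLs here are illustrative, not Google's actual system.

# Map of delisted URL -> country code that granted the removal request.
DELISTED = {
    "http://example.com/old-story": "FR",
}

def visible_results(results, user_country):
    """Drop URLs that have been delisted for the user's inferred country."""
    return [url for url in results if DELISTED.get(url) != user_country]

results = ["http://example.com/old-story", "http://example.com/news"]

# A user geolocated to France sees the filtered list on any domain:
print(visible_results(results, "FR"))  # ['http://example.com/news']

# A user outside Europe (here, the United States) still sees every result:
print(visible_results(results, "US"))
```

Under the position CNIL is pressing, the country check would disappear entirely: a URL delisted anywhere would be hidden from every user worldwide.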

Last week, the French privacy authority, the Commission Nationale de l’Informatique et des Libertés (CNIL), fined Google €100,000 ($112,000) for failing to remove links associated with French right-to-be-forgotten requests from its global search index. France is trying to force its domestic policies on the rest of

The Federal Trade Commission (FTC) hosted the first annual PrivacyCon in January 2016, an event designed to highlight the latest research and trends for consumer privacy and data security. The FTC’s stated goal was to bring together “whitehat researchers, academics, industry representatives, consumer advocates, and government regulators” for a lively discussion of the most recent privacy and security research. Unfortunately, not only did the event not reflect the diversity of perspectives on these issues, but the whole event seemed to be orchestrated to reinforce the FTC’s current regulatory strategy.

First, the “data security” side of this discussion was almost non-existent in the agenda. Of the 19 presentations, only 3 were about security. Given that the FTC has been flexing its regulatory muscle on corporate cybersecurity practices, this was a missed opportunity to delve into important cybersecurity research that could inform future oversight and investigations.

Second, the FTC mostly selected papers that jibed with its current enforcement agenda. As Roslyn Layton, a visiting fellow at the American Enterprise Institute, noted recently, of over 80 submissions that the FTC received for PrivacyCon, it selected 19 participants to give presentations with

The Pew Research Center released a survey last week that investigated the circumstances under which many U.S. citizens would share their personal information in return for getting something of perceived value. In the survey, Pew set up six hypothetical scenarios about different technologies—including office surveillance cameras, health data, retail loyalty cards, auto insurance, social media, and smart thermostats—and asked respondents whether the tradeoff they were offered for sharing their personal information was acceptable.

To be sure, some of the questions that Pew asked described one-sided tradeoffs that could have tainted the findings. Nevertheless, the overall results reveal that the Privacy Panic Cycle, the usual trajectory of public fear followed by widespread acceptance that often accompanies new technologies, is still going strong for many technologies.

The Privacy Panic Cycle explains how privacy concerns about new technologies flare up in the early years but recede over time as people use, understand, and grow accustomed to these technologies. For example, when the portable Kodak camera first came out, it caused a big privacy panic, but today most people carry around camera-equipped phones in their pockets and do not give it a second thought.

Earlier this month, the Electronic Frontier Foundation (EFF) launched a “Spying on Students” campaign to convince parents that school-supplied electronic devices and software present significant privacy risks for their children. This campaign highlights a phenomenon known as the privacy panic cycle, where advocacy groups make increasingly alarmist claims about the privacy implications of a new technology, until these fears spread through the news media to policymakers and the public, causing a panic before cooler heads prevail, and people eventually come to understand and appreciate innovative new products and services.

When it comes to privacy, EFF has a history of such histrionics. The organization has accused desktop printers of violating human rights, spread misinformation about the effectiveness of CCTV cameras, escalated confrontations around the purported abuse of RFID, cried foul over online behavioral advertising, and much more. These claims, even if overblown and ultimately disproved by experience, generate headlines and allow EFF to spread fear, ploughing the ground for harmful regulation or even technology bans.

EFF’s newly launched “Spying on Students” campaign is yet another example of this tendency to put fear ahead of fact. EFF

ITIF’s latest report—“The Privacy Panic Cycle: A Guide to Public Fears About New Technologies”—analyzes the stages of public fear that accompany new technologies. Fear begins to take hold when privacy advocates make outsized claims about the privacy risks associated with new technologies. Those claims then filter through the news media to policymakers and the public, causing frenzies of consternation before cooler heads prevail, people come to understand and appreciate innovative new products and services, and everyone moves on. This phenomenon has occurred many times—from the portable Kodak camera in 1888 to the commercial drones of today. And yet, even though such privacy claims routinely fail to materialize, the cycle continues to repeat itself with many new technologies.

The privacy panic cycle breaks down into four stages:

In the “Trusting Beginnings” stage, the technology has not been widely deployed and privacy concerns are minimal. This stage ends when privacy fundamentalists, a term coined by the noted privacy researcher Alan Westin, begin raising the alarm, creating a “Point of Panic.”

In the “Rising Panic” stage, the media, policymakers, and others join the privacy fundamentalists in exacerbating public fears.

The recent announcement that Verizon Communications Inc. intends to acquire AOL Inc. generated a surprising amount of media coverage, and unfortunately some groups are using the news as an excuse to push for expanded privacy regulations that would stifle innovation and competition in the burgeoning mobile ecosystem.

By telecom standards, this is not a huge transaction. At $4.4 billion, it is a full order of magnitude smaller than either the AT&T-DirecTV deal or the ill-fated Comcast-Time Warner Cable merger. And Verizon’s purchase of the 45% stake Vodafone had in Verizon Wireless was almost 30 times larger. Nevertheless, reporters flocked to the story, perhaps drawn by potential jokes about promotional CDs or the opportunity to poke fun at the 2 million Americans who remain AOL dial-up subscribers.

More likely, interest in the deal was driven by its implications for the business Verizon wants to become. AOL is well known for its content properties, such as The Huffington Post and TechCrunch, but its growth is now in online ad sales—especially video ads. The nation’s leading wireless company is looking down the road and seeing mobile video (presumably sprinkled with advertisements) as the future.

History is riddled with examples of attempts to achieve one outcome that led to the opposite result. In May, the European Court of Justice (ECJ) ruled that Europeans have the “right to be forgotten”: the ability to request that search engines remove links from queries associated with their names if those results are irrelevant, inappropriate, or outdated. Just as Prohibition famously increased alcohol consumption, it would seem the “right to be forgotten,” while intended to increase online privacy, may actually have the opposite effect, both by cataloging shameful information and by incentivizing individuals to publicize the very materials people want forgotten.

Since the decision, Google has scrambled to meet Europe’s demands by creating an online form to process removal requests and hiring new personnel to handle compliance. When individuals want information removed about themselves, they must submit verification of their identity, provide the URLs to be removed, and justify why they should be taken down. Google then verifies that the submitted information is accurate and meets the criteria for removal. Then, if the company decides to take the link down, it notifies the website where the content was posted of

Disclaimer:

Views expressed on this blog do not necessarily represent the views of any other author or organization affiliated with this site. ITIF sponsors this blog but does not endorse or necessarily agree with the views of non-ITIF contributors. Views expressed by ITIF employees do reflect the views of ITIF, but not of any other author or organization.