from the throttled,-blocked,-hamstrung-and-hindered dept

FCC boss Ajit Pai has made no secret of his disdain for net neutrality. Or, for that matter, his general disregard for the consumer-protection authority granted to the agency he's supposed to be in charge of. Pai had already stated that his "solution" to the perceived injustice that is net neutrality is to replace the government's existing, hard net neutrality rules with "voluntary commitments" by the likes of AT&T, Comcast and Verizon. From there, he hopes to leave any remaining regulatory enforcement to the under-funded and over-extended FTC (we've explained why this is a notably bad idea here).

Pai clarified his plans a little during a speech today in Washington, DC at an event hosted by FreedomWorks (which, not coincidentally, takes funding from the giant ISPs Pai is clearly eager to help). According to Pai, the FCC will issue a Notice of Proposed Rule Making tomorrow to begin the process of rolling back Title II and killing net neutrality. The FCC will then vote on the proposal on May 18, according to the agency head. That means there will be a full public comment period (that's where you come in) ahead of a broader vote to kill the rules later this year.

Pai's full speech (pdf) was packed with conflations, half-truths, and statements that have been repeatedly, painstakingly debunked over the course of the last decade. Among them: the ongoing claim that net neutrality rules weren't necessary -- because incumbent ISPs had done nothing wrong:

"Nothing about the Internet was broken in 2015. Nothing about the law had changed. And there wasn’t a rash of Internet service providers blocking customers from accessing the content, applications, or services of their choice."

Pai apparently "forgot" the time that AT&T intentionally blocked iPhone users from using FaceTime unless they signed up for significantly more expensive mobile data plans. Or that time MetroPCS blocked all access to video on its introductory plans to drive users to costlier plans if they wanted the "full internet experience." Or that time a small ISP named Madison River decided to block a competing VoIP provider. Or that time AT&T, Verizon, and T-Mobile blocked their users from using Google Wallet to help prop up their own mobile payment services. Or the longstanding allegations that Comcast, Verizon, AT&T and others intentionally let their peering points get congested to kill settlement-free peering and force content and transit providers to pay an additional toll.

The idea that net neutrality rules are arbitrary and unnecessary is a joke, and if you still don't believe consumers and startups need some kind of regulatory protection from giant (and ever-growing) broadband duopolists like Comcast, the joke's on you. And it's notably unfunny.

Pai, like most of the ISP allies in favor of gutting the rules, simply refuses to be proven wrong -- no matter what the actual data shows. For years now, Pai has cited broadband industry-funded studies that try to claim that net neutrality rules severely hampered broadband investment, despite zero objective evidence that's actually the case. But this being the post-truth era, Pai was quick to trot out the "Title II and neutrality killed investment" canard to the immense joy of the crowd of attending lobbyists, think tankers and other loyal ISP allies:

"So what happened after the Commission adopted Title II? Sure enough, infrastructure investment declined. Among our nation’s 12 largest Internet service providers, domestic broadband capital expenditures decreased by 5.6 percent, or $3.6 billion, between 2014 and 2016, the first two years of the Title II era. This decline is extremely unusual. It is the first time that such investment has declined outside of a recession in the Internet era."

It never happened. What did happen: some telecom industry-funded think tanks cherry-picked data to make it appear that investment had foundered, then repeated the fabrication they'd created, apparently believing that repetition forges truth. But if you spoke privately to most ISPs, they'd tell you they saw no investment reduction under Title II. ISPs don't oppose net neutrality and Title II because they make investing harder; they oppose them because they prevent ISPs from abusing the uncompetitive shitshow that is the broadband last mile.

What's abundantly clear here is that net neutrality opponents have zero problem with lying to achieve one, singular goal: maximizing the income of large broadband providers to the detriment of consumers, competition, startups and the health of the internet. And Pai poured it on exceptionally thick during his speech at FreedomWorks, claiming that gutting oversight of some of the most anti-competitive and least liked companies in America will somehow magically improve broadband competition, create jobs, expand internet access, and more:

"Without the overhang of heavy-handed regulation, companies will spend more building next-generation networks. As those networks expand, many more Americans, especially low-income rural and urban Americans, will get high-speed Internet access for the first time. And more Americans generally will benefit from faster and better broadband.

Second, it will create jobs. More Americans will go to work building these networks. These are good-paying jobs, laying fiber, digging trenches, and connecting equipment to utility poles. And established businesses and startup entrepreneurs alike will take advantage of the networks that they build to create even more jobs."

Doesn't that sound lovely? Except it's not happening. If the claim that Title II and net neutrality stifled investment was bullshit, the narrative that removing these regulations magically creates jobs and competition is just as fantastical. If anything, turning a blind eye to duopolists like Comcast and Verizon as they abuse the lack of broadband competition to make life harder for streaming competitors (something they're already doing) will have the opposite impact on existing and emerging internet markets. And if protecting ISP revenues is the top priority (and let's not fool ourselves that it isn't), actually fixing the industry's competitive shortcomings will never be on Pai's radar.

The problem Pai faces now is two-fold. One, net neutrality has broad bipartisan support, and those consumers are certain to give him an earful during the public comment period that will begin after the May 18 vote. If Pai isn't familiar with the concepts of backlash and overreach, he may want to bone up on some history. Two, Pai will need to show the courts that the market has changed dramatically enough since the FCC's June 2016 win over ISPs to justify a massive reversal of the rules. If he can't, his entire effort will be struck down.

As a lawyer, Pai knows this, which is why I still think he's playing a game of good cop, bad cop. Under this plan, Pai saber-rattles for a few months about his intent to kill net neutrality, at which point the GOP shows up with some "compromise" legislation (likely this summer) that claims to codify net neutrality into law, but is worded in such a way (by the ISP lawyers who will inevitably write it) that the loophole-riddled "solution" is worse than no rules at all. If I were to guess, the legislation will come from Senator John Thune, who attempted to derail the 2015 net neutrality rules using a similar strategy.

Neutrality opponents' hubris could easily backfire, though. After all, every time ISPs have tried to kill net neutrality, the end result has been more stringent protections (as we saw when Verizon sued to overturn the FCC's flimsy 2010 rules, only to get... tougher rules). That said, this fight may still be harder than previous battles. With Google and Netflix likely to be less active (they're large enough now that they apparently think they no longer need to worry), the onus is going to be on grassroots activists, debate-fatigued consumers and startups to carry the load this time around.

from the that's...-a-problem dept

For a few years now, we've written about various local governments and their pointless wars against Airbnb, which are often driven by lobbying from the big hotels. Different governments take different approaches, but Miami apparently has an incredibly restrictive regulation that effectively bars short term rentals entirely. And the mayor has been pushing to make things even worse. Since the current law is only enforced in response to complaints, mayor Tomas Regalado is pushing a plan to more proactively hunt down homeowners who offer short term rentals on Airbnb.

And here's where things get... sketchy. There was a hearing and a vote about this plan recently, and a bunch of Miami homeowners went to City Hall to speak out against this plan. Of course, in order to speak before the Miami commissioners considering this, they had to first identify themselves. The commissioners, apparently unswayed by these homeowners or by Airbnb itself, voted 3 to 2 to move forward with the plan (and also threatened to sue Airbnb directly...). But perhaps most ridiculous of all, the city is now looking to go after the homeowners who spoke at City Hall. After all, they identified themselves as homeowners using Airbnb:

“We are now on notice for people who did come here and notify us in public and challenge us in public,” said City Manager Daniel Alfonso. “I will be duly bound to request our personnel to enforce the city code.”

That sounds an awful lot like punishing Miami residents for speaking out on a matter of public interest. Yes, you can argue that they were admitting to breaking current local ordinances, but it certainly feels pretty sketchy to then directly target them. It basically broadcasts the fact that no one is allowed to present the other side, or to describe how short term rentals might be beneficial to the city.

from the bad-idea dept

Crisis management must be a full-time job at Uber. I've argued in the past that some of the attacks on the company are greatly exaggerated, but it keeps running into crisis after crisis -- many of them avoidable. The latest is a big scoop in the NY Times about how Uber has a special program called Greyball (a play on "blackball," get it?) that helped it determine if regulators were trying to get rides and then avoid sending a car. Here are the basics from the article by Mike Isaac:

One technique involved drawing a digital perimeter, or “geofence,” around the government offices on a digital map of a city that Uber was monitoring. The company watched which people were frequently opening and closing the app — a process known internally as eyeballing — near such locations as evidence that the users might be associated with city agencies.

Other techniques included looking at a user’s credit card information and determining whether the card was tied directly to an institution like a police credit union.

Enforcement officials involved in large-scale sting operations meant to catch Uber drivers would sometimes buy dozens of cellphones to create different accounts. To circumvent that tactic, Uber employees would go to local electronics stores to look up device numbers of the cheapest mobile phones for sale, which were often the ones bought by city officials working with budgets that were not sizable.
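The geofence-plus-"eyeballing" heuristic described above is simple enough to sketch. Here's a minimal, hypothetical illustration in Python -- the coordinates, thresholds, and function names are all invented for the example; the NY Times report doesn't disclose Uber's actual parameters or code:

```python
from math import radians, sin, cos, asin, sqrt
from collections import defaultdict

# Hypothetical values -- illustrative only, not Uber's real configuration.
GOVERNMENT_OFFICES = [(45.523, -122.676)]  # (lat, lon) of a city agency
GEOFENCE_RADIUS_KM = 0.5                   # "digital perimeter" around each office
EYEBALL_THRESHOLD = 20                     # app opens inside a fence before flagging

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

app_opens = defaultdict(int)  # user_id -> count of opens inside any geofence

def record_app_open(user_id, lat, lon):
    """Count an app open only if it happened inside an office geofence."""
    for office_lat, office_lon in GOVERNMENT_OFFICES:
        if haversine_km(lat, lon, office_lat, office_lon) <= GEOFENCE_RADIUS_KM:
            app_opens[user_id] += 1
            break

def is_flagged(user_id):
    """A user who repeatedly 'eyeballs' the app near an office gets flagged."""
    return app_opens[user_id] >= EYEBALL_THRESHOLD
```

A flagged account would then be served fake or no cars. The point of the sketch is how little signal this takes: a radius check and a counter, combined with the credit-card and cheap-phone heuristics the article describes.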

In response, Uber has claimed that the program was designed to greylist "terms of service violators", but if that's the case it can just kick them off the service and tell them they violated the ToS. From the report, it seems clear that even if the program was used for ToS violators, it was also used against regulators.

I've certainly been vocal about the fact that I think city and state regulations limiting Uber/Lyft and the like are generally bad ideas. What may have started out as a good idea to prevent cabbies taking advantage of riders has turned into quite a corrupt system used to limit competition and artificially inflate prices. I think that the idea behind Uber and Lyft and similar services is super powerful. But, that doesn't mean the company should get a pass for this kind of stuff.

Directly building an app to avoid regulators just looks really, really shady, and it's going to come back to haunt you (just ask Zenefits or Volkswagen). And while the article claims that the tool might be a CFAA violation, I don't see how that's possible, unless it involved even more nefarious activities under the hood (none of what's revealed in the article would seem to qualify as a CFAA violation, even under the really stretched interpretations of the CFAA that we've seen).

The bigger question, honestly, is why do this kind of stuff? I'll never understand why companies feel the need to take the shadiest route possible, when they could have just gone with the upfront path of explaining why what they're doing is so useful and powerful, and fighting for it, rather than trying to play silly games. Yes, you can make arguments about how they're trying to grow rapidly, and yes, (as we've discussed) these local regulators are often a nuisance for bad reasons. But this kind of stuff is clearly going to bounce back and create problems later on. Just fight these fights head on, without playing shady games that undermine basically everything else about your business.

from the benvenuto-al-registro-dei-captatori dept

As Techdirt has just reported, even though encryption is becoming more widespread, it's still not much of a problem for law enforcement agencies, despite some claims to the contrary. However, governments around the world are certainly not sitting back waiting for it to become an issue before acting. Many have already put in place legal frameworks that allow them to obtain information even when encryption is used, predominantly by hacking into a suspect's computer or mobile phone. In the US, this has been achieved with controversial changes to Rule 41; in the UK, the Snooper's Charter gives the government there almost unlimited powers to conduct what it coyly calls "equipment interference."

One of the main tools for carrying out surveillance in this way is the trojan -- code that is placed surreptitiously on a suspect's system to allow it to be monitored and controlled by the authorities in real time over the Internet. There are clearly huge risks and problems with this approach, something that a legislative proposal from the Civic and Innovators parliamentary group in Italy tries to address, as explained by Fabio Pietrosanti and Stefano Aterno on Boing Boing. The draft law is the result of nearly two years' work by a group of experts from many fields:

a former speaker of the Parliament, civil rights activists, law enforcement officers, computer forensics researchers, prosecutors, law professors, IT security experts, anti-mafia and anti-terrorism departments and politicians.

Perhaps that breadth explains why the ideas are really pretty good, for once. The underlying principle is that a government trojan is only allowed to operate in ways that have been explicitly authorized by an Italian judge's signed warrant. For example:

A Telephone Wiretapping Warrant is required to listen to a WhatsApp call.

A Remote Search and Seizure Warrant is required to acquire files on remote devices.

An Internet Wiretapping Warrant is required to record web browsing sessions.

The same kind of warrant that would be required for planting a physical audio surveillance bug is required to listen to the surrounding environment with the device’s microphone.

Those kinds of legal safeguards are welcome, but they are not enough on their own. Also needed are stringent technical controls that will limit the harm and risk of introducing government malware onto a system. The working group has addressed this too with a series of innovative requirements for trojan surveillance programs:

a. The source code must be deposited to a specific authority and it must be verifiable with a reproducible build process (like the Tor Project and Debian Linux are doing)

b. Every operation carried on by the trojan or through its use must be duly documented and logged in a tamper proof and verifiable way, using cryptographic time-stamping and digital signing, so that its results can be fairly contested by the defendant during the inter partes hearing [that is, with everyone involved present].

c. The trojan, once installed, shall not lower the security level of the device where it has been activated

d. Once the investigation has finished, the trojan must be uninstalled or, otherwise, detailed instruction on how to self-remove it must be provided.

e. Trojan production and uses must be traceable by establishing a National Trojan Registry with the fingerprint of each version of the software being produced and deployed.

f. The trojans must be certified, with a yearly renewal of the certification, to ensure compliance with the law and technical regulation issued by the ministry.
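Requirement (b) above -- tamper-proof, verifiable logging -- is usually built as a hash chain with signed entries, so that altering any past entry invalidates everything after it. A minimal sketch in Python, with a hypothetical HMAC signing key standing in for what the proposal envisions as trusted time-stamping and digital signatures held by an authority:

```python
import hashlib
import hmac
import json
import time

# Hypothetical key -- in the proposal's scheme, signing material would be
# held by a designated authority, not embedded in the trojan itself.
SIGNING_KEY = b"judicial-authority-secret"

def append_entry(log, operation):
    """Append an operation; each entry commits to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"ts": time.time(), "op": operation, "prev": prev_hash}
    payload = json.dumps(body, sort_keys=True).encode()
    body["hash"] = hashlib.sha256(prev_hash.encode() + payload).hexdigest()
    body["sig"] = hmac.new(SIGNING_KEY, body["hash"].encode(), "sha256").hexdigest()
    log.append(body)
    return body

def verify_log(log):
    """Recompute the chain; any edited entry breaks its hash and every later link."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"ts": entry["ts"], "op": entry["op"], "prev": entry["prev"]}
        payload = json.dumps(body, sort_keys=True).encode()
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(prev_hash.encode() + payload).hexdigest():
            return False
        expected_sig = hmac.new(SIGNING_KEY, entry["hash"].encode(), "sha256").hexdigest()
        if not hmac.compare_digest(entry["sig"], expected_sig):
            return False
        prev_hash = entry["hash"]
    return True
```

This is the property that lets a defendant's experts "fairly contest" the results at an inter partes hearing: the log can be independently re-verified, and after-the-fact edits are detectable.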

It's a remarkable list of technical and operational requirements that are surely unique in their attempt to minimize the key dangers of implanting clandestine surveillance software. Of course, it would be better if the use of government malware were avoided completely, and other methods were adopted. But realistically, the police and intelligence agencies around the world will be pushing hard for legislation to allow them to infect people's computers and mobiles in this way, not least if encryption does become more of a problem.

Given that trojans will be used, whether we like it or not, it's far better to constrain them as much as possible through well-thought-out rules such as those drawn up by the Italian parliamentary group. Let's hope their proposals are adopted without significant amendments by the Italian parliament so that they can be used as a template for similar laws in other jurisdictions.

from the beep-boop dept

Questions about how we approach our new robotic friends once the artificial intelligence revolution really kicks off are not new, nor are calls for developing some sort of legal framework to govern how humanity and robots ought to interact with one another. For the better part of this decade, in fact, some have been advocating that robots and AI be granted certain rights along the lines of what humanity, or at least animals, enjoy. And, while some of its ideas haven't been stellar, such as a call for robots to be afforded copyright for anything they might create, the EU has been talking for some time about developing policy around the rights and obligations of artificial intelligence and its creators.

In a new report, members of the European Parliament have made it clear they think it’s essential that we establish comprehensive rules around artificial intelligence and robots in preparation for a “new industrial revolution.” According to the report, we are on the threshold of an era filled with sophisticated robots and intelligent machines “which is likely to leave no stratum of society untouched.” As a result, the need for legislation is greater than ever to ensure societal stability as well as the digital and physical safety of humans.

The report looks into the need to create a legal status just for robots which would see them dubbed “electronic persons.” Having their own legal status would mean robots would have their own legal rights and obligations, including taking responsibility for autonomous decisions or independent interactions.

It's quite easy to make offhand remarks about all of this being science fiction, but this isn't without sense. Something like the artificial intelligence humanity has imagined for a century is going to exist at some point and, with recent advances suggesting that may come sooner rather than later, it only makes sense that we discuss how we're going to handle its implications. After all, technology like this is likely to impact our lives in significant and varied ways, from our jobs and employment, to our interactions with our electronic devices, not to mention warfare.

I think the most interesting philosophical and moral questions surround these MEPs' call to grant robots and AI the designation of "electronic persons." The call has largely focused on saddling robotic "life" with many of the obligations humanity endures, such as tax obligations and being under the jurisdiction of humanity's legal system. But personhood can't come only with obligations; it must also come with rights. And there would be something strange in recognizing a robot's "personhood" while at the same time making use of its output or labor. The specter of slavery begins to rear its head at this point, brought on only by that very designation. Were they electronic "beasts," for instance, the question of slavery wouldn't arise outside of the fringe.

The MEPs report does also deal with the potential danger from AI and robots in its call for designers to "respect human frailty" when developing and programming these machine-lives. And here the report truly does delve into science fiction, but only out of deference to great literature.

Things descend slightly into the realms of science fiction when the report discusses the possibility of the machines we build becoming more intelligent than us posing “a challenge to humanity's capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny.”

However, to stop us getting to this point the MEPs cite the importance of rules like those written by author Isaac Asimov for designers, producers, and operators of robots which state that: “A robot may not injure a human being or, through inaction, allow a human being to come to harm”; “A robot must obey the orders given by human beings except where such orders would conflict with the first law” and “A robot must protect its own existence as long as such protection does not conflict with the first or second laws.”

While some might laugh this off, this too is sensible. There is simply no reason to refuse to have a discussion about how a life, or a simulacrum of life, that is created by humanity, might pose a danger to that humanity, either at the level of the individual or the community.

But what strikes me most about all of this is how the EU seems to be out in front of this issue, while any discussion in the Americas has been either muted or occurring behind closed doors. If this is a public discussion worth having in the EU, it is certainly one worth having here too.

from the policy-fight! dept

In highly regulated private industries the law means what it says – right up until a regulator decides that it doesn't. For that reason Uber, a company with a reputation for aggressively challenging legal norms, must have been particularly frustrated when the California Department of Motor Vehicles decided to publicly rebuke it for complying with the law of the Golden State.

The crux of the issue is that Uber decided to move forward with deploying some of its vehicles equipped with automated technologies onto California's roads without a permit that, the California DMV believes, Uber must first obtain before rolling out.

In a statement, the DMV said that it has a "permitting process in place" through which twenty manufacturers have obtained permits. Then, so as to leave no doubt about its position on the matter, it stated that "Uber shall do the same."

Now, whether the new Volvo XC90s equipped with Uber's technologies are "autonomous vehicles" as a matter of perception or regulatory definition is up for debate. Different people have different ideas about what fits that mold. But, when it comes to whether the DMV should take action to slow Uber's work, the question turns from one of perception to one of law and textual interpretation.

California, by way of the DMV, has chosen to define an autonomous vehicle in regulation as a vehicle equipped with technology "...that has the capability of operating or driving the vehicle without the active physical control or monitoring of a natural person...." Thus, the factual question that confronted Uber before it made its decision to deploy the vehicles in California was simple: "is this vehicle capable of driving without being monitored or controlled by a driver?"

For all of their impressive capabilities, it is a matter of public record that Uber's vehicles often require human intervention. By extension, those vehicles require constant monitoring by a human driver. On that basis, Uber likely thought that, while not toeing the industry line, its vehicles do not meet the definitional threshold necessary to trigger the state's autonomous vehicle testing regulations.

Of course, whatever regulatory history points to a different intent, one that tracks with the DMV's argument, is no doubt informative and interesting as a matter of historical record, but it should not overcome the obvious strictures of the regulation as written.

In the meantime, the DMV has sent Uber a cease and desist letter. While the merits of regulation are often a matter of debate, the even application of the plain language of the law should not be. Unfortunately, it appears that Uber, by dint of its reputation, is facing unwanted "special treatment" by its regulator. Worse, the DMV may be expanding the reach of its regulations after the fact. If that's the case, and certainty is lost, so too will be the very definitional purpose of the DMV's regulations – to make regular.

from the excellent-Chinese-culture-and-socialist-core-values dept

Techdirt has been covering China's relentless clampdown on every aspect of the online world for some time, culminating in the new "cybersecurity" law that's just been passed. But if you think the Chinese authorities are now done, you'd be wrong. They are branching out into an entirely new field -- cinema -- with a law that the official Xinhua News Agency calls "the first of its kind in China":

The top legislature on Monday adopted a film industry law, promising harsh punishment for firms that fabricate box office earnings, data or information.

That makes it sound like it is mostly about regulating the commercial activities of China's cinema industry. And it's true that there are some measures designed to prevent fraud, apparently something of a problem in the country:

Film distributors and theaters will have all their illegal earnings confiscated and be fined up to 500,000 yuan (about 73,800 U.S. dollars) if they falsify ticket sales data, according to the law adopted at the National People's Congress (NPC) Standing Committee bimonthly session after a third reading.

If their illegal earnings exceed 500,000 yuan, the fine will be up to five times their illegitimate earnings.

They may also be hit with an operating suspension or have their business certificates revoked in serious cases, according to the new law.

But the meat of the legislation is probably to be found in the following aspects:

The law specified that actors, directors and other staff should be "excellent in both moral integrity and film art," maintain self-discipline and build a positive public image.

...

The [government] media watchdog is also establishing a "professional ethics committee," aiming to guide organizations and people in the radio, film and media circles to practice "core socialist values."

And it's not just the actors who must be on their best behavior under the new law:

China will support the making of films championing excellent Chinese culture and socialist core values.

Chinese groups can cooperate with overseas counterparts in film shooting, excluding overseas organizations and individuals that engage in "activities damaging China's national dignity, honor and interests, or harming social stability or hurting national feelings," the law said.

Since China is now the world's second-largest film market according to Xinhua, there will probably be plenty of Western companies that will be interested in co-productions. But the new rules mean that the Chinese government's interest in a film's storyline is now quite explicit, and that anything that "hurts national feelings" is a definite no-no. That probably means more discreet compromises of the kind recently seen in the film Doctor Strange, where a Tibetan Ancient One mysteriously turned into a Celtic Ancient One.

from the this-probably-isn't-such-a-good-idea dept

The National Highway Traffic Safety Administration earned plaudits from across the tech sphere for its recently released safety guidelines for self-driving cars.

With the NHTSA looking to offer guidance to this emerging industry, the agency issued a set of rules that largely just asks manufacturers to report on how they are following the guidelines. The 15-point checklist is vague on quite a few details, but that isn't necessarily a tremendous problem so long as the standards remain voluntary, which they purport to be. To many, this approach struck a good overall balance between oversight and flexibility.

Regulatory ambiguity can, however, turn out to be a real nightmare when standards are mandatory. Vague rules can leave even the best-intentioned firms at a loss as to how to proceed. Given how much of a premium will be placed on consumer confidence in a market as revolutionary and potentially transformative as autonomous vehicles, it's crucial that manufacturers be able to comply with whatever standards the federal government promulgates.

That's why it's essential to pay close attention to an underappreciated part of the NHTSA guidelines -- the opportunities they afford federal regulators to coordinate with the states on oversight that, in practice, will be anything but voluntary. Indeed, the early signs from the first of what will be many proposed state rules to follow in the wake of the NHTSA guidelines suggest that compulsory standards are exactly what we're going to get.

First up are proposed rules from the California Department of Motor Vehicles, recently revised in response to the NHTSA guidelines. The revised draft of California's model regulations is far more permissive than the original version the agency promulgated late last year, a set of changes that were celebrated by various observers, even me.

But look closely at the updated DMV proposal, and you'll find a requirement that manufacturers obtain a state permit certifying that any and all vehicle tests are conducted in accordance with the NHTSA’s "Vehicle Performance Guidance for Automated Vehicles." Thus, in the nation's largest testing jurisdiction, the NHTSA standards are already set to be made mandatory.

This is not to say the federal government doesn't have a role to play in oversight of self-driving cars. The feds are better situated to oversee the development of safety standards, and the door should be open to refining those standards. But coordinating with the states to turn those standards into a set of de facto binding obligations smacks of underground rule-making.

The California DMV might be complicit in this collusion, but it can't be faulted for deferring to federal authority. Were the NHTSA’s safety standards clearer -- an undertaking that presents risks and problems of its own -- California’s approach wouldn't actually be a problem. The fact that the federal guidelines are so vague in so many of the details means that we can't really know either that manufacturers will be able to comply with California's rules or that the state will be able to enforce them.

For now, state regulators should use their discretion to be as liberal as possible about what sorts of vehicle testing comport with the NHTSA safety guidelines. Over the longer term, what we need is for states like California to communicate to the NHTSA that it's up to the agency to make absolutely clear what does and does not count as compliance.

It's broadly understood how overly restrictive regulations can dampen innovation, but regulatory ambiguity can be just as bad. For regulators, the clock is ticking. It's up to both the NHTSA and state agencies like the California DMV to bring the clarity this new market needs.

But not every tool used to remove content comes in a form that can be contested by the general public. Some of these tools are the result of private agreements with private entities -- agreements in which users have no say. The EFF calls it "Shadow Regulation."

Examples include agreements between copyright holders and Internet companies that give copyright holders the ability to effectively delete users' content from the Internet, as well as agreements on other topics, such as hateful speech and terrorism, that can be used to stifle lawful speech. Unlike laws, such agreements (sometimes also called codes, standards, principles, or guidelines) aren't developed with public input or accountability. As a result, users who are affected by them are often completely unaware that they even exist.

Even those who are aware of these agreements have few options for changing them, because users aren't a party to these private deals. They tend to cover multiple companies, so shaming or boycotting a single company isn't an option. And asking regulators to step in might not be possible either, because these agreements often have the active support of government officials who see them as a cheap and easy alternative to regulation.

It may be difficult to battle these agreements, but there's nothing to be lost by exposing their inner workings to those affected by them. The EFF names a few examples: the "six strikes" infringement notification system, Europe's hate speech code of conduct, and the MPAA's "Trusted Notifier" program, which requires domain name registries to disable domains accused of infringement.

But the reach of these private agreements extends much further than that. Pretty much every intermediary standing between hosted content and those seeking to view it has options at its disposal for disappearing content should it be pressured to do so. The EFF highlights each link in the chain between site visitors and hosted content, showing how each has been affected by shadow regulation.

ISPs, payment providers, certificate authorities, and search engines are all forms of internet connective tissue that can be severed at any time, with few recourse options for those whose content has been removed. These private agreements aren't just wielded by private entities like the MPAA and RIAA. They're also exploited by censorious governments to stifle criticism or reporting that's at odds with the official government line.

This is the introduction to a series of posts by the EFF, which will more closely examine each of these "weak links." Just as importantly, the EFF is hoping to provide readers with the information they need to fight back against this unofficial, often-opaque form of speech regulation.

from the but-i-want-my-flying-car! dept

If you listen to some entrepreneurs and investors, the flying car -- a longstanding staple of science fiction -- is right around the corner. Working prototypes exist. At least two companies are already taking orders for the vehicles, with deliveries promised next year.

The last decade has seen the introduction of practical consumer videoconferencing, voice recognition, drones, self-driving cars and many other items that once were found only in science fiction stories. It therefore might seem plausible that practical flying cars are around the corner. They aren't. Indeed, massive safety, infrastructure and technology problems make them a near impossibility.

The first concern is safety. While flying on a commercial airline is always safer than driving the same distance oneself, it's an entirely different story if one looks at per-trip fatality rates. The Department of Transportation estimates that Americans take about 350 billion car trips per year and experience about 30,000 fatal accidents; that's roughly one fatal accident per 11 million trips. By contrast, there are roughly 35 million scheduled air flights around the world each year. Over the past decade, the number of commercial aviation incidents that have proved fatal has averaged 17 annually. That works out to about one fatal incident per 2 million commercial flights.
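The comparison above is easy to verify with some quick division. A back-of-the-envelope sketch, using the article's rough figures (not authoritative statistics):

```python
# Rough per-trip fatality rates, using the estimates cited in the text.
car_trips_per_year = 350e9      # ~350 billion U.S. car trips annually
car_fatal_accidents = 30_000    # ~30,000 fatal car accidents annually
air_flights_per_year = 35e6     # ~35 million scheduled flights worldwide
air_fatal_incidents = 17        # ~17 fatal incidents per year (decade average)

trips_per_car_fatality = car_trips_per_year / car_fatal_accidents
flights_per_air_fatality = air_flights_per_year / air_fatal_incidents

# One fatal car accident per ~11.7 million trips;
# one fatal aviation incident per ~2.1 million flights.
print(f"{trips_per_car_fatality / 1e6:.1f} million car trips per fatal accident")
print(f"{flights_per_air_fatality / 1e6:.1f} million flights per fatal incident")
```

By this crude measure, a commercial flight is several times more likely than a car trip to end in a fatal incident, which is the point the paragraph is making.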

We see these fatalities every year, despite pilots' years of intense training, planes' extensive safety equipment requirements, regular maintenance checks and airlines' need to maintain sterling reputations for safety. All of these provide far more safeguards than anything that applies to cars on the road.

It's true that some factors might make flying cars safer than commercial jetliners. They would travel at lower speeds and altitudes, for instance. But there's no practical way to subject them to the same safety and training standards imposed on commercial airplanes if they are to become anything like a consumer product. Indeed, the per-trip fatality rate for private planes is very likely already higher than that of commercial airliners, though no comprehensive worldwide statistics are available. Safety advocates could make a plausible case for banning flying cars on these grounds alone.

Even if one thinks these risks are acceptable -- and they probably are, given the potential advantages of flying cars -- that doesn't solve the even greater infrastructure and technological problems. The current working models of flying cars need runways to take off and land. Bringing them into regular use would require runways just about everywhere, without obviating the need for parking lots. The world's busiest airport, Atlanta's Hartsfield-Jackson, accommodates slightly fewer than 2,500 aircraft movements each day on its five runways and 4,700 acres. Any sizable office building would need its own version of Hartsfield-Jackson if people were to commute to work in their flying cars. The space to build facilities that size simply doesn't exist anywhere near any city of any size.

New technologies could theoretically obviate the need for runways. One Japanese team has shown off a modified lightweight drone supposedly capable of taking off and landing vertically, like a helicopter. But making such vehicles practical would require breakthroughs that appear to be decades away. Existing helicopters and military "jump jets" still require a significant amount of space to land, are even noisier than commercial jets and drink huge amounts of fuel. As such, they're not really used for routine travel: commercially produced helicopters have existed since the 1940s, yet they aren't currently used for scheduled commercial service anywhere in the United States. Technological breakthroughs could eventually solve these problems, but it's unlikely that a few years of flying-car development will overcome problems that have bedeviled helicopter designers for more than seven decades.

While the promised 2017 deliveries of working flying cars seem unlikely, it's far from impossible that a commercially produced civilian airplane with the kinds of retractable wings and safety equipment that would allow it to be driven on highways might make it to market within the next decade. But widely available flying cars, more likely than not, will remain firmly in the realm of science fiction.