When running for election, Rouhani made numerous public proclamations that Internet filtering doesn't work, given the ability to use VPNs to bypass most filters. Rouhani also took things one step further, admitting that such censorship only cultivates broad distrust between the public and the government (who knew?). Now that Rouhani's in office, Iran is apparently taking baby steps toward sensibility by moving away from the wholesale blocking of websites toward what it calls more selective "smart filtering" of content:

"Presently, the smart filtering plan is implemented only on one social network in its pilot study phase and this process will continue gradually until the plan is implemented on all networks," Communications Minister Mahmoud Vaezi said, according to official news agency IRNA. "Implementing the smart filtering plan, we are trying to block the criminal and unethical contents of the Internet sites, while the public will be able to use the general contents of those sites," Vaezi told a news conference.

What kind of "criminal and unethical" content are we talking about? While the program is only being trialed for Instagram at the moment, such "smart" filtering includes blocking Instagram accounts like @RichkidsofTehran, which featured photos of young rich Iranians flaunting their wealth. Meanwhile, although Reuters suggests the program could involve lifting outright bans on Twitter, Facebook and YouTube, several regional reports state that those bans are going nowhere, suggesting this isn't as dramatic a step forward as some had hoped.

Obviously, concern persists that Iran will continue to make cognitively incoherent decisions when it comes to filtering out entirely harmless content or political commentary, and that the country will continue its war on VPNs and other circumvention techniques. On the bright side, former president Mahmoud Ahmadinejad's plans for Iran to build its own Internet appear to have been put on hold for the moment while the country works out the kinks of its still not-particularly-smart Internet filtering efforts. The complete smart filtering program (whatever it actually winds up looking like) is expected to be fully implemented by June 2015.

from the you-had-one-job... dept

Friend (and frequent Techdirt contributor) Derek Kerton passed along a screenshot of his own recent experience trying to follow a Techdirt link at the Toronto airport and having it blocked:

The block here is clearly not directed at Techdirt, but rather at Google's Feedproxy service -- which was formerly Feedburner, a company Google bought years ago. Many, many, many sites that have RSS feeds use Google's service as it makes it much easier to manage your RSS feed and to do some basic analytics on it.

In this case, it appears that Air Canada has (for reasons unknown) wasted good money on a company called "Datavalet," which offers "Guest Access Management" for companies that offer WiFi access to customers. Datavalet proudly highlights Air Canada and famed Canadian donut chain Tim Hortons among its customers.

And yet, despite its sole business apparently being building systems to let people access the internet, Datavalet's tech geniuses can't figure out that Google's RSS feed service is not, in fact, an "Anonymizer" but merely a system for hosting RSS feeds.

These sorts of stupid false positives are not at all uncommon in the filtering business -- and Datavalet is not alone in stupidly filtering out and blocking access to things it should totally allow. This story just demonstrates, once again, the ultimate stupidity and futility of trying to block internet access. No matter how well-meaning you might be, you're going to do it wrong and you're going to block plenty of legitimate content, including (in this case) tons of well-known news publishers who rely on Google's Feedproxy service to serve up links to RSS readers, Twitter, Facebook and more.

from the always-broken...-always dept

The "Great Firewall of Britain" claims another victim. "Voluntary" (as in: under the threat of legislation) internet filtering by ISPs has blocked UK citizens from connecting to the website of one of the oldest computer hacking groups in existence.

A significant portion of British citizens are currently blocked from accessing the Chaos Computer Club's (CCC) website. On top of that, Vodafone customers are blocked from accessing the ticket sale to this year's Chaos Communication Congress (31C3).

The post goes on to note that while these filters are faulty and suffer from overbroad content flagging, they can be easily bypassed simply by using the site's IP address: 213.73.89.123. It also points out that this blockage could possibly be deliberate, rather than due to the inherent technical limitations of poorly-designed web filters.

However, it may very well be that the CCC is considered "extremist" judged by British standards of freedom of speech.

Could be. Governments tend to treat all hackers as criminals, no matter how much the standard definition deviates from government officials' misconceptions. The Chaos Computer Club, despite being Europe's largest hacker community, is not composed of criminals. But it has engaged in several acts that would make it less popular with various governments, including reverse engineering "lawful access" malware used by German law enforcement (which included installations on school computers), uncovering a government backdoor in Skype and filing a criminal complaint against the German government for its massive domestic surveillance programs.

As it stands now, it appears that only Three is currently blocking the main CCC website. The Open Rights Group's "Blocked" website indicates that Virgin Media and Vodafone had both blocked the site until recently, but appear to have removed CCC from their blacklists on Nov. 27th and Dec. 8th, respectively.

Blocking the CCC is just another demonstration of how internet filters don't work. The filter fails on multiple levels, going overboard with the blocking while simultaneously allowing users to bypass the system using nothing more than an IP address. The end result is the UK's passive-aggressive filter-by-proxy, one that hangs the threat of regulatory legislation over the heads of ISPs while signing off on "will this do?" filtering.
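The bypass works because filters like these typically match only on the hostname in a request, so a connection addressed directly to the server's IP never hits the blocklist. Here's a minimal sketch of that failure mode; the blocklist contents and matching logic are illustrative assumptions, not any ISP's actual implementation:

```python
# Toy model of a hostname-based blocklist, illustrating why a block
# like the CCC's can be trivially bypassed with a raw IP address.
# (Illustrative sketch only -- not any ISP's real filtering code.)

BLOCKLIST = {"ccc.de", "events.ccc.de"}

def is_blocked(request_target: str) -> bool:
    """A naive filter that only matches blocklisted hostnames."""
    return request_target.lower() in BLOCKLIST

# The filter catches the domain name...
print(is_blocked("ccc.de"))         # True
# ...but the same server reached by its IP sails right through.
print(is_blocked("213.73.89.123"))  # False
```

Any filter that inspects only DNS names or URL strings shares this weakness, which is part of why these systems are so easy to route around.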

An opening anecdote details the porn-fueled formative years of Gabe Deem -- now a youth counselor who runs "reboot" programs for other porn-addled teens. This recounting concludes with the following paragraph:

“Ultimately it desensitized me and rewired my brain to my computer screen to the point where, in real life, I couldn’t feel anything in an intimate situation,” he said in an interview. “My generation was told growing up that porn was cool because it was ‘sex positive.’ But what can be more ‘sex negative’ than being unable to perform in bed?”

Deem did what any concerned young adult would in his situation: he self-diagnosed.

He Googled his symptoms and found a name for the condition: Porn-induced erectile dysfunction.

[A] former corporate attorney with degrees from Brown and Yale who writes books about the unwelcome effects of evolutionary biology on intimate relationships and the striking parallels between recent scientific discoveries and traditional sacred-sex texts…

So, on one hand, we have a closed, self-sustaining ecosystem promoting the idea that porn use can create erectile dysfunction. On the other hand, we have actual psychology. This is McLaren's opening salvo, the one that's supposed to sway the uncertain onto her side of the issue -- and one that doesn't hold up under scrutiny.

But it gets worse.

Porn-induced erectile dysfunction is now well documented by the mainstream medical community.

Dr. Oz devoted a show to the topic last year, and just a few months ago, researchers at Cambridge University found that porn addicts’ brains have similar responses to pleasure cues as the brains of alcoholics or drug addicts.

And as for the research, it only points to addicts' addictions triggering similar pleasurable responses. Almost anything can be consumed up to the point that it becomes "too much of a good thing," but that's no reason to demand the proprietor (as it were) control the end user's actions. But that's what McLaren does.

First, she offers up her own comparably pristine past as a shocking contrast to today's routine debasement.

While my generation learned to do sex by reading the dirty bits of Sweet Valley High novels and fumbling around sweatily in our parent’s basements, this generation will have learned to do sex by watching semi-violent six-ways involving hairy men and vajazzled strippers squealing on dirty linoleum floors.

Look at the language McLaren uses. There's more to her advocacy than a concern for the young men and women of the world. Her sense of shame has been violated by proxy and she's projecting it all over the Globe and Mail's editorial pages. "Hairy." "Dirty." "Squealing." "Six-ways."

That's followed by this passage, which is extremely jarring in its cognitive dissonance.

[T]he solution is surprisingly simple: The Internet is public space and we need to police it. We built it. We own it. It’s where we live and where our kids are growing up. We should be applying the same standards of decency to the Internet as we do anywhere else.

This sounds like a plea for personal responsibility and more attentive parenting. It's your house and your internet. Police it as you see fit. Use any number of third-party products to filter content if you need to (not that they'll work any better than those pushed by governments). Apply your preferred "standards of decency" to your actions and those of your children.

That's what it sounds like. But it isn't.

No, this problem can't be solved by personal actions. It needs to be forced on those who provide the connection. By the government.

In the U.K., Prime Minister David Cameron recently strong-armed the major Internet service providers into applying automatic porn filters to all mobile and broadband connections in the country... The service providers resisted heavily at first, claiming such controls were a matter of parental responsibility and tantamount to censorship, but after the government made it clear it would legislate if necessary, the ISPs relented. Unsurprisingly, the move has proved hugely popular, particularly among parents.

First, she presents the ISPs "relenting" as if it were some sort of equitable compromise rather than the only response that would prevent further government meddling. What was "strong-armed" into place was preferable to the amount of damage that could conceivably be done by a handful of legislators operating under the influence of moral panic.

Second, it is not hugely popular. It just is. The "mandatory" is always more "popular" than the truly optional. Add to this the additional (if minor) hurdle of opting out of "voluntary" internet filtering. When you make something "opt out," most people will take the path of least resistance and stick with the pre-selected default. Something strong-armed into pseudo-policy by a determined government is never "popular." It takes a very special kind of mind (and predisposition) to portray it that way.

McLaren wraps up her post by strongly suggesting Canadian ISPs be given the same mandate: filter or else. Make Canada every bit as ineffectively censorious as the UK, because Mehmet Oz, "porn-induced erectile dysfunction" internet circle jerks, and the "pornification of our children" demand it. (Yes. Actual quote.) But also do it to rid McLaren's Canada of the ultimate, unspeakable obscenities: "dirty floors," "hairy men" and "squealing porn stars."

from the wonder-how-that-happened... dept

We've been covering the efforts by Hollywood studios to push extreme, draconian new copyright laws down in Australia, where their interests are being helped along by Attorney General George Brandis, who has a cozy relationship with Hollywood but cannot produce any evidence that he's ever met with consumer advocates. Brandis pushed out his proposal earlier this year, and it was basically Hollywood's wishlist, exactly as many expected. The Australian government has been accepting "comments" on the proposal, and there have been some interesting submissions. Perhaps most interesting was that the Media, Entertainment and Arts Alliance (MEAA), a union representing a combination of journalists and entertainers, put in a comment supporting the extreme proposal for an internet censorship regime via filtering. You can understand why some of the more shortsighted folks on the "entertainer" side of the union might support such a policy, but the idea that a journalists' union would do so as well seems... troubling.

The MEAA proposal said that it "strongly supports the proposal" and even talks up (incorrectly) how useful similar censorship efforts have been in the UK. However, it appears that many MEAA members quite reasonably freaked out upon discovering that their own union was advocating "strongly" in favor of censorship and internet filters -- because hours later, MEAA withdrew its comments, saying the whole thing was all a big mistake:

It was never our intention to make a submission which could in any way be interpreted as supporting an internet filter.

We have previously campaigned against Government proposals for an internet filter and will continue to do so, as we also continue to campaign against data retention.

That's funny, because in the submission itself they directly talked about how amazingly awesome such a filter has been in the UK. Huh. It's almost as if someone simply took some talking points from Hollywood without any real understanding of the deeper issues of what they were supporting, and submitted them -- only to realize afterwards that they were a media union endorsing out-and-out censorship.

from the due-process-means-something dept

For years now, the entertainment industry's dream has been that it should be able to point to certain websites, say "bad website," and have the rest of the world make that website disappear. The fact that the industry has a dreadful history of falsely accusing perfectly legitimate sites of infringement -- leading to bogus takedowns that destroy perfectly legitimate businesses, either by shutting them down without any evidence or by dragging them into extended legal battles -- apparently never enters the equation. The entertainment industry still insists that when it points to a site and says "bad," that site should immediately be forced to disappear.

Furthermore, the industry seems to believe that everyone else has a legal responsibility to carry out its wishes once it declares a site as bad. It thinks hosts should take down sites, search engines should stop linking to them, advertisers should block ads, registrars should pull domain names and ISPs should block access. You'd think that maybe actually adapting to new technologies and giving people more of what they want might be a more compelling strategy, but the legacy entertainment industry prefers demanding that everyone else go out of their way to protect the legacy industry's obsolete business model, without the industry itself doing anything more than pointing at sites (often incorrectly).

The latest battleground for this appears to be Austria, where an "anti-piracy" group representing the movie industry, VAP, has sued four local ISPs (UPC, Drei, Tele2 and A1) for failing to block access to two sites based entirely on VAP's say-so. That is, there is no court case or court order saying that the ISPs need to block Movie4K.to and Kinox.to. It's entirely VAP saying so (it also asked the ISPs to block The Pirate Bay, but for some reason left that one out of the eventual lawsuit). When the ISPs quite reasonably asked, "Uh, where's the court order?", VAP -- rather than going to court against the sites in question -- sued the ISPs. That seems tremendously questionable.

Either way, it appears that IFPI is jumping in on the action too. You may recall just a few weeks ago the head of the IFPI in Austria, Franz Medwenitsch, foreshadowed this move by claiming that it's not censorship to block sites that he, personally, doesn't like. It's only censorship when you block sites he likes, you see? And he doesn't like The Pirate Bay. So, he's going to sue ISPs to block it and other sites as well.

The idea that entire websites should be completely inaccessible based entirely on the say so of a particular industry (with a very long history of attacking any new innovation) seems immensely worrisome if you believe in an open internet or basic principles of innovation. Giving the legacy film or music industry a "veto" on innovation merely by declaring a site "rogue" is a very dangerous idea. The ISPs are right to stand up and demand more than just those industry groups' say so, and hopefully courts will agree.

My understanding is that there was once a theory that America's public universities were havens of free speech and political thought, and centers for the exchange of ideas. I must admit that this seems foreign to me. I've always experienced universities primarily as group-think hubs centered around college athletics. That said, if universities still want to claim to be at the forefront of ideas and thought, they probably shouldn't be censoring the hell out of what their students can access on the internet.

Northern Illinois University enacted an Acceptable Use Policy that goes further than banning torrents, also denying students access to social media sites and other content the university considers “unethical” or “obscene.” A discussion on the ban was brought to Reddit by user darkf who discovered the new policy while trying to access the Wikipedia page for the Westboro Baptist Church from his personal computer in his dorm room. The student received a filter message categorizing the page as “illegal or unethical.” It seems possible to continue to the webpage, but the message warns that all violations will be reviewed.

While sites that only potentially violate the policy, such as the Wikipedia page for the stupidest church in America, are still accessible after the warning, other sites NIU has deemed offensive, defamatory, or threatening are blocked outright. These, for some reason, include pornography sites. They also include social media sites like Facebook and LinkedIn, the latter of which seems like an especially odd choice since it's primarily a job networking site -- exactly the kind of thing you'd think a university would want its students to be using. Granted, this usage policy applies to staff as well as students, but that's the entire problem with a catchall filtering system like this: you block too much good along with the "bad."

But where this really goes off the rails is NIU's apparent attempt to stifle political discussion on their campus.

Perhaps one of the most controversial of the terms is the restriction on political activities such as surveying, polling, material distribution, vote solicitation and organization or participation in meetings, rallies and demonstrations, among other activities...Isn’t it obvious that an institute of higher learning should be the last place to put a huge block in the information pathway?

It's not just obvious; it seems like the antithesis of what a public university ought to be doing. Forget the social media and pornography sites for a moment. Turning the filters up to the point where Wikipedia pages are blocked is insane. That site is a go-to resource for, well, everyone, but probably especially for students. And the ban on political activism and traffic suggests NIU is turning a blind eye to the important role that universities have always played in political thought and activism.

Shame on NIU for trying to strangle the internet access their students rely on as they learn and become adults.

from the which-way-should-we-nudge dept

Zeynep Tufekci has a really great post talking about how much algorithmic filtering plays a role in how we view the world -- with a specific focus on what's happening in Ferguson, Missouri. As more than a few people have pointed out, much of the public discussion about the mess in Ferguson was happening on Twitter -- while it seemed eerily absent from Facebook (and the mainstream media at first...):

And then I switched to the non-net-neutral Internet to see what was up. I mostly have a similar composition of friends on Facebook as I do on Twitter.

Nada, zip, nada.

No Ferguson on Facebook last night. I scrolled. Refreshed.

She notes that eventually the story did break through on Facebook, but not until the next morning when Facebook's algorithm finally caught up to the idea that something important was happening.

This morning, though, my Facebook feed is also very heavily dominated by discussion of Ferguson. Many of those posts seem to have been written last night, but I didn’t see them then. Overnight, “edgerank” –or whatever Facebook’s filtering algorithm is called now — seems to have bubbled them up, probably as people engaged them more.

But, as she notes, it's entirely possible that Facebook's algorithm wouldn't have ever found it important if the story wasn't gaining more and more attention on Twitter. And, of course, even as the story was being told on Twitter, there are questions about whether or not Twitter's algorithms suppressed some of it as well. "#Ferguson" only very briefly trended nationally, though it did trend in certain local markets.

So, there were fewer chances for people not already following the news to see it on their “trending” bar. Why? Almost certainly because there was already national, simmering discussion for many days and Twitter’s trending algorithm (said to be based on a method called “term frequency inverse document frequency”) rewards spikes… So, as people in localities who had not been talking a lot about Ferguson started to mention it, it trended there though the national build-up in the last five days penalized Ferguson.

As she points out: Algorithms have consequences.
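To see how a spike-rewarding metric can bury a sustained story, here's a toy trending score loosely in the spirit of the tf-idf-style method the quote attributes to Twitter. The formula and numbers are illustrative assumptions on my part, not Twitter's actual algorithm:

```python
# Toy "trending" score that rewards sudden spikes in mention volume
# relative to a term's historical baseline. (Illustrative assumption
# only -- Twitter's real trending algorithm is not public.)

import math

def trending_score(recent_mentions: int, baseline_mentions: int) -> float:
    """Higher when recent volume spikes relative to past volume."""
    return recent_mentions / math.log(2 + baseline_mentions)

# A brand-new spike scores high...
new_spike = trending_score(recent_mentions=1000, baseline_mentions=10)
# ...while the same recent volume atop days of steady national
# discussion scores far lower.
sustained = trending_score(recent_mentions=1000, baseline_mentions=50000)

print(new_spike > sustained)  # True
```

Under a metric like this, Ferguson trending locally but not nationally is exactly what you'd expect: localities with no prior discussion register a spike, while the days-long national conversation counts as a high baseline that suppresses the score.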

This is not unlike Eli Pariser's idea of the "filter bubble" and the notion that companies may be effectively nudging you in ways that may not actually be that great. Frankly, that argument is a little strained, since it suggests that everyone lives only within these bubbles and doesn't do things that expose them further, but there is a valid point at its core worth exploring.

Tufekci notes, however, that this is also why net neutrality is so important. Because without it, not only do you have to worry about internet services determining what's important to you, but about the broadband infrastructure as well. And both will be focused on whatever enables them to profit the most. She points out the example of locals live-streaming what the police in Ferguson were doing -- including when the police announced over loudspeakers to "turn off their cameras" (a fairly chilling request in its own right). And she ponders what happens to those live streams on a non-neutral network:

But I’m not quite sure that without the neutral side of the Internet—the livestreams whose “packets” were as fast as commercial, corporate and moneyed speech that travels on our networks, Twitter feeds which are not determined by opaque corporate algorithms but by my own choices—we’d be having this conversation.

Obviously, there are lots of other issues at play in Ferguson that go well beyond the internet and things like net neutrality. But they are related. Discussions of those issues -- race, police brutality, police militarization, free speech, etc. -- are all enabled and enhanced by the internet and what it enables... and what it stifles. If the police could have kept this story from getting attention, it's likely that (1) there would have been even more abuse and (2) all of those other discussions wouldn't be happening. Who knows if many of those discussions will be able to create real change, but you at least need to have the discussion to start the process of change. And if the technology is getting in the way of that, through non-neutral networks or algorithms that ignore important events like this, it seems like a problem worth solving, if only to speed up all those other important conversations as well.

from the more-porn-for-the-rest-of-us! dept

To call the UK's institution of ISP-level web filters "stupid" isn't just being blithely dismissive. For one, they don't work. They block the wrong stuff. They let offensive stuff in. They're easily circumvented. They're advance scouts for government censorship. The only people who think web filtering is a good thing are those with the power to turn pet projects into national laws.

Broadband customers are overwhelmingly choosing not to use parental-control systems foisted on ISPs by the government - with take-up in the single digits for three of the four major broadband providers…

Only 5% of new BT customers signed up, 8% opted in for Sky and 4% for Virgin Media. TalkTalk rolled out a parental-control system two years before the government required it and has had much better take-up of its offering, with 36% of customers signing up for it.

Those pushing for filters would have you believe it's something the public has been clamoring for to help them protect their children from the many evils of the internet. In reality, hardly anyone appears to care all that deeply about hooking up to a pre-censored connection.

There's more than simple unpopularity going on here. The numbers skew low for several reasons. At this point, the rollout isn't 100% complete, and the choice isn't being offered to every new customer (something that becomes a requirement in 2015). Virgin Media (somewhat ironically) has been hooking customers up with the filthiest internet: techs for that company have only been presenting the "unavoidable choice" to a little over a third of its new signups. Other ISPs' techs have been more thorough, presenting new customers with the option nearly every time.

Many service providers say it's also possible the filtering has been activated post-installation (Ofcom's report only tracks filtering enabled at the time of install) or that customers are already using device-based filters.

Despite all of these factors, I wouldn't expect adoption numbers to rise much. People generally don't like the government telling them what they can and can't access. Illegal content is already blocked at ISP level (as well as by several search engines), so what's being added is nothing more than a governmental parent to watch over citizens' shoulders as they surf the web. Those with children would probably prefer to run an open pipe and filter content at the device level. Not everyone in a household needs to be treated like a child, which is exactly what these filters (and their proponents) do.

Beyond that, activating a web filter goes against human nature, especially the exertion of free will and the general avoidance of embarrassment. Most people view themselves as "good" and uninterested in the long list of internet vices (porn being the most popular). But even if they truly believe they'd never view this content, they'd rather have it arrive unfiltered than be forced to approach their ISP weeks (or minutes…) later like a bit-starved Oliver Twist and ask, "Please, sir. May I have some porn?"

from the the-audacity-of-egotistical-self-interest dept

We just wrote about the UK's filtering systems blocking access to 20% of the world's top 100,000 sites, even though only about 4% of those host the porn Prime Minister David Cameron seems so obsessed with blocking. Also noted in that story was the fact that many "pirate sites" are being blocked at ISP level via secret court orders.

Speaking recently at the IP Summit in London, former Senator turned MPAA boss Chris Dodd pronounced his love for forcing ISPs to block and filter websites accused of aiding copyright infringement. Despite the fact that filters can be easily bypassed by anyone with a modicum of technical knowledge and often block legitimate content (a report this week suggests a massive swath of legitimate websites are blocked by UK filters), Dodd believes filters are the "most effective tools anywhere in the world" for fighting piracy.

The UK’s Internet Watch Foundation (IWF) maintains a blocklist of URLs that point to sexual child abuse and criminally obscene adult content. Over in New Zealand the Department of Internal Affairs maintains DCEFS, the Digital Child Exploitation Filtering System. Both are run in cooperation with the countries’ ISPs with the sole aim of keeping the most objectionable material away from public eyes…

According to a RadioLIVE report, in order to prevent copyright infringement the studios requested access to the DCEFS child abuse filtering system.

After obtaining government permission, Hollywood hoped to add their own list of sites to DCEFS so that by default subscribers to New Zealand’s main ISPs would be prohibited from accessing torrent and other file-sharing type sites.

So, in hopes of protecting their business model, studios tried to add file sharing sites to a list of child pornography sites. Not one of them seemed to realize how wrong it was to equate their companies' profitability with the sexual abuse of children. Whatever level of entitlement these companies have risen to in the past, they've vastly exceeded it with this maneuver. Studios may secretly believe copyright infringement is (very subjectively) as damaging as child pornography, but they've never made it this explicitly clear.

Fortunately, ISPs and the Kiwi government pushed back, unwilling to be complicit in the studios' most insensitive act of self-preservation yet. Unfortunately for Dodd and his charges, the studios will have to make do with secret court orders and default web filters that still allow end users to flip the "hide file sharing sites" switch to "off."

The studios believe they should have root access to government-ordained web blocking. In the interest of not making the situation worse than it already is, this should never be granted. Various governments have already included protection for the copyright industries in some of their web-targeted "for the children" legislation. Giving studios the go-ahead to tamper with child porn blacklists would just stretch the definition of "children" to include major Hollywood studios -- entities full of full-grown adults with enough power and money to protect them from anything.