If — the argument ran — IF countries were transparent about what they censored, IF there was no overblocking (the literature’s jargon for collateral damage), IF it was done under a formal (local) legal framework, IF there was the right of appeal to correct inadvertent errors, IF … and doubtless a whole raft more of “IFs” that a proper effort to develop a BCP would establish. IF… then perhaps censorship would be OK.

I spoke against the notion of a BCP from the audience at the time, and after some reflection I see no reason to change my mind.

There will be many more subtle arguments — much as there will be more IFs to consider — but I can immediately see two insurmountable objections.

The second, and I think the most telling, objection is that it will reinforce the impression that censoring the Internet can actually be achieved, whereas the evidence piles up that it just isn’t possible. All of the schemes for blocking content can be evaded by those with technical knowledge (or access to tools written by others with that knowledge). Proxies, VPNs, Tor, packet fragmentation, ignoring TCP resets… the list of evasion technologies is endless.
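To make the evasion point concrete, here is a minimal, purely illustrative sketch (every hostname and the filter itself are invented for the example) of how a naive substring blocklist is defeated by trivial means:

```python
# Toy illustration (hypothetical names throughout): a naive blocklist
# matched by substring, and two trivial evasions that slip past it.
import base64

BLOCKLIST = {"blocked.example.com"}  # hypothetical blocked hostname

def naive_filter(url: str) -> bool:
    """Block any URL that mentions a listed hostname verbatim."""
    return any(host in url for host in BLOCKLIST)

direct = "http://blocked.example.com/page"

# Percent-encoding one character of the hostname defeats the string
# match, though many clients decode it before the DNS lookup anyway.
encoded = "http://blocked.example.%63om/page"

# Tunnelling via a proxy hides the target hostname from the filter
# entirely (here the real URL travels base64-encoded in a parameter).
proxied = ("http://proxy.example.net/fetch?u="
           + base64.urlsafe_b64encode(direct.encode()).decode())

print(naive_filter(direct))   # the direct request is caught
print(naive_filter(encoded))  # the encoded one is not
print(naive_filter(proxied))  # nor is the proxied one
```

Real national filtering systems are of course more sophisticated than a substring match, but the arms race has the same shape: each blocking mechanism invites a corresponding workaround.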

One of the best ways of spreading data to multiple sites is to attempt to remove it, and every few years some organisation demonstrates this again. Although ad hoc replication doesn’t necessarily scale, there are plenty of schemes in the literature for doing it on an industrial scale.

So, in my view, a BCP will merely be used by the wicked as a fig-leaf for their activity, and by the ignorant to prop up their belief that it’s actually possible to block the content they don’t believe should be visible. A BCP is a thoroughly bad idea, and should not be further considered.

Re: The second, and I think the most telling, objection is that it will reinforce the impression that censoring the Internet can actually be achieved, whereas the evidence piles up that it just isn’t possible.

I disagree. Anyone who uses a popup blocker, disables Flash, or uses a spam filter knows that the Internet can be censored and this can work well.

The real technical issue is that filters don’t scale: as the audience protected by centralised filtering grows, you inevitably get more over-blocking, because anything that offends anyone gets banned.
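This scaling argument can be illustrated with a toy model (all numbers are invented): if a central filter honours every user’s objections, the blocked fraction of the catalogue grows steadily with the size of the audience:

```python
# Toy model of over-blocking: a central filter blocks the union of
# everything any user in the audience objects to. Numbers are made up.
import random

random.seed(0)
CATALOGUE = range(10_000)  # hypothetical pool of distinct sites

def blocked_fraction(num_users: int, objections_per_user: int = 5) -> float:
    """Fraction of the catalogue blocked when every complaint is honoured."""
    blocked = set()
    for _ in range(num_users):
        # each user objects to a few randomly chosen sites
        blocked.update(random.sample(CATALOGUE, objections_per_user))
    return len(blocked) / len(CATALOGUE)

for audience in (10, 100, 1_000, 10_000):
    print(f"{audience:>6} users -> {blocked_fraction(audience):.1%} blocked")
```

With independent random objections the blocked fraction climbs towards 100% as the audience grows — a crude caricature, but it captures why a filter acceptable to ten users becomes unbearable for ten thousand.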

Pop-up blockers don’t prevent those pretty JavaScript floating ads so beloved of the national newspaper sites; disabling Flash will stop Flash, but not other technologies; and anyone who uses a spam filter knows full well that a fair amount of junk still gets through.
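The spam-filter point is easy to demonstrate with a deliberately naive keyword filter (the word list and messages are invented for the example):

```python
# Toy keyword spam filter, and the obfuscation that slips past it.
SPAM_WORDS = {"viagra", "lottery"}  # hypothetical keyword list

def is_spam(message: str) -> bool:
    """Flag a message if any word matches the keyword list exactly."""
    return any(word in SPAM_WORDS for word in message.lower().split())

print(is_spam("Cheap viagra here"))  # caught
print(is_spam("Cheap v1agra here"))  # a one-character change evades it
```

Production spam filters are statistical rather than keyword-based, but the same cat-and-mouse dynamic applies: senders mutate their messages until some fraction gets through.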

So, no, I don’t think any of these technologies works, except in so far as if you switch off your computer then you can’t access the Internet at all… hang on, that’s my mobile phone beeping because its download has finished…

giafly indirectly raises a point: is blocking potentially harmful content (in order to protect resources etc.) censorship in the accepted meaning of the word, or self-protection?

That point aside, history shows that censorship eventually fails from the internal pressures of operating a system of ever-increasing lies and half-truths against the external, supportable reality.

The eventual result is frequently the demise of the censoring organisation and its replacement with one that is (for a while) more acceptable to the populace.

To date the longest-running censors are, I believe, the various organisations that deal with people’s faiths, where there is no question of provable truth or falsehood, just belief. And even these organisations have a tendency to either die out with time or change their behaviour.

So the question should be: “why build a framework for something that is destined to implode?”

I can only assume that the proposers believe there is some short-term gain to be made, and that is what should be rooted out and robustly dealt with.

>> it will reinforce the impression that censoring the Internet can actually be achieved

It’s actually quite possible depending on what your standards are. There are plenty of governments content with censoring “subversive” content 90% of the time for 90% of the population, the logic being that dissidents aren’t a problem until you have a critical mass of them. This is especially true for apathetic populations.

You also have to consider that Internet censorship is not occurring in a vacuum. Social norms, other state restrictions, and propaganda are all at work as well. Depending on how those play out, an otherwise ineffective censorship regime may succeed.

The second argument, about encouraging something that may be unachievable, does not seem to be a very strong objection. People don’t stop working for world peace, though many people would argue that that might be unachievable. I think your first reason, the ideological one, can and should stand on its own. Simply put, censorship, and keeping information away from people, should not be encouraged.

Censorship is NOT a Best Practice. It is impractical and ineffective – like trying to take pee out of a pool.

What needs to be developed is a Best Practice for Responding to Objectionable Content.

This would help admins and governments everywhere, who sometimes mistakenly assume that censoring is a good way to respond. This Best Practice would provide guidance on the most appropriate and effective responses for various types of content and objections.