Censorware or child protection?

Today's news that
Internet Service Providers are going to offer to "block" pornography
may be an extreme example of "spin", designed to satisfy morally
outraged MPs: but it could still pose significant dangers.

ISPs are discussing
what they call “Active Choice”: that is, to insist that adults are given a
yes/no choice about installing or using parental controls when they set up a
new broadband connection.

Now, there is a world
of difference between offering sensible child safety, and trying to persuade
adults to live with layers of censorship.

The devil is therefore
in the detail, and in how “options” are presented. Will adults be asked if they
need parental controls, or if they want “adult content” switched on?

We will oppose
anything designed to induce adults to live with censorware which would
inevitably deny citizens access to commentary, health and medical advice.

The idea that the
government would try to persuade citizens to live behind private content blocks
is extremely sinister. It would provide an easy way for any morally outraged
group to argue to extend an unofficial, and apparently consensual, method of
censorship.

ISPs need to exercise some
leadership. They need to make it clear what they are proposing, and how
parental controls are likely to be presented. They need to make this a
conversation about parental controls and child safety: and to keep it clear of
a separate debate about adults accessing pornography. ISPs do have
responsibilities: and one very important one is to avoid becoming the arbiters
of Internet content. It is this responsibility that underpins our freedom of speech in a
digital world.

Comments (17)

I don't disagree with many of your aims and comments; however, I think your assertion that any pornographic blocks "would inevitably deny citizens access to commentary, health and medical advice" is over-stepping the mark, and leaving it in your blog post in my view weakens the rest of the points.

There's nothing inevitable about pornographic blocks extending to other areas, either through mistakes on the part of the blocking technology deployed or through the slippery-slope argument about blocks being extended. The IWF has been running for 15 years without the slippery-slope problems predicted.

If you are worried about access to your favourite sites, feel free to download K9, see the 80 or so categories, set it to log only and surf away and look at the logs to see any errors of categorisation. Errors can be reported and changed via the Blue Coat web site.

As a quick example of the flexibility of policies: a single URL can be in up to 4 categories - so, for example, "NUDITY" and "HEALTH" - and commercial systems such as the ISPs' would allow policies to be set on multiple categories, such as "Block NUDITY if NOT also HEALTH".
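The kind of multi-category policy described above can be sketched in a few lines. This is a minimal illustration only: the category names, URLs and data structure below are invented for the example, not Blue Coat's actual feed or API.

```python
# Sketch of multi-category policy evaluation: a URL can carry several
# category labels, and a policy can block one category unless another
# overriding category is also present ("Block NUDITY if NOT also HEALTH").
# All categories and URLs here are illustrative, not a real vendor feed.

CATEGORIES = {
    "http://example.org/art-gallery": {"NUDITY"},
    "http://example.org/std-advice": {"NUDITY", "HEALTH"},
    "http://example.org/news": {"NEWS"},
}

def is_blocked(url, block, unless=frozenset()):
    """Block if the URL matches any blocked category and none of
    the overriding ('unless') categories."""
    cats = CATEGORIES.get(url, set())
    return bool(cats & block) and not (cats & unless)

# Policy: block NUDITY, but allow it when the page is also HEALTH.
assert is_blocked("http://example.org/art-gallery", {"NUDITY"}, {"HEALTH"})
assert not is_blocked("http://example.org/std-advice", {"NUDITY"}, {"HEALTH"})
assert not is_blocked("http://example.org/news", {"NUDITY"}, {"HEALTH"})
```

The point of the "unless" set is exactly the over-blocking trade-off discussed in this thread: without it, sexual health advice tagged both "NUDITY" and "HEALTH" would be swept up by a blanket nudity block.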

Thanks Nigel. On the accuracy of filters, I agree that they do generally try to categorise and avoid unwarranted blocking. But in the "default blocking" model, a level must be chosen that either over-blocks for the average adult, or under-blocks for the average 12-year old child. For instance, STD sexual advice might be inappropriate for that child, but appropriate for an adult. Apologies that I haven't fully explained this point.

On mission creep, I would say two things: the IWF's remit is under constant pressure to enlarge. The remit has been enlarged, albeit in relatively limited ways, such as to "extreme" pornography. The IWF have been clear that their mission would be undermined by further extension of their remit. But politicians have a long record of trying to get them to do so.

A much vaguer system, aimed at "protecting" people from "unwanted" general adult content would be much easier to target. In a default blocking model, you would merely need to ask the ISP to set a default to block another category, like extremism or suicide sites, that are already listed in their system.

The technology the IWF have helped ISPs to introduce - such as Cleanfeed - is absolutely under pressure to be applied to new types of content. The latest example, of course, is copyright blocking, such as the Newzbin2 case; and the pressure from rights holders for new website blocking powers, including in the Digital Economy Act.

BTW, the repeal of those powers is being debated today: if you have time, please ring your MP and ask them to attend the debate!

The devil is in the details. Though the technologies the ISPs use aren't likely themselves to have "default blocking" modes, if an ISP offers a one-size-fits-all service where an account holder has to opt in or out of a single policy for all users in their house (and outside?), then the likelihood of take-up will diminish very fast.

In a household with adults and children (and the children of different ages, of course), there is unlikely to be a one-size-fits-all perfect solution, so if the government and ISPs want a powerful and flexible offering, there should be policies down to the individual device or user. As some of the ISPs didn't want to offer this in the first place, offering a service that hardly anyone uses parks the problem for them.

Parents of children aged 5 or 10 or 15 will probably want different access for those particular children - that's why my personal recommendation is for the parents to take responsibility themselves - though for greater publicity the end-device services could be publicised and even supplied/supported by the ISPs if they are interested.

Nigel, it's already been shown that the equivalent censorship on cellphones has blocked sites which provide support to LGBT minors and similar: sites which are of use and comfort at a minimum, and absolutely necessary to some.

The whole point of censorship like this is that it isn't the law doing the categorising; it is an arbitrary group of people, who naturally impose their own beliefs and bigotries on such censorship. Immediately a site which has gay, lesbian or transgender content, for example, is swept up in the block. So support sites which provide confused minors with help and advice are no longer accessible to them.

It is a sad fact that some parents still have a major issue with LGBT children, who as a result suffer violence or are thrown out of their home when they come out. These sites provide a lifeline, and I for one don't believe any guarantees that they 'will not be affected', because experience shows they typically are.

Basically, introducing censorship is a major, major step. Once you've gone from no censorship to some censorship, going from some censorship to a little more censorship, because some group or government wants to block things they don't like, becomes far easier.

This isn't needed. There are already laws that cover extreme porn, pedophilia etc. They are sufficient!

Thanks for your comment. Just as clarification: I am not advocating central censorship, and like you I believe that parents should have the knowledge, technology and interest to implement what they feel is appropriate in their situation, so hopefully we are at one on the legal point.

What I am trying to do is make sure that the technology that delivers the policies isn't maligned by out-of-date or incorrect statements.

For us (Blue Coat), we have an LGBT category; it has nothing to do with adult content or pornography etc. - it is completely separate. So it all comes down to who implements the controls (as Jim said), and to the fact that one size does not fit all.

Personally, I would like ISPs to educate their users, us to educate our friends and families and more technologies given to consumers free of charge for them to make their own decisions.

When ISPs do offer this service within their network, they should be completely transparent about which categories of content they consider in or out of any particular block. The problem that has been raised here and on other pages today is "over-blocking": sadly, all too often, even though the technology itself can identify URLs with great accuracy, the options chosen by ISPs sweep a set of categories up into one overall block - pleasing no-one.

To go off at a tangent, one useful tool that could be implemented would be to block malware and phishing sites - perhaps if that was an option, more people would sign up. On the other hand, we shouldn't underestimate the equipment needed to achieve that (and the cost that in the end the user would have to bear).

Default blocking on mobiles is a prime example of how it goes wrong. A one-size-fits-all approach is bound to over-block. And the results are scary because, as Cyberspice says, sites people rely on, particularly in areas of sexuality and user self-help, look very similar, to the filters, to pornographic content.

The IWF's 2008 block of a Wikipedia page is a case in point: what the block also did was to stop UK users from *editing* Wikipedia. Cleanfeed pushes a whole domain's traffic through a check-and-filter process for the blocks, meaning that all users at an ISP viewing Wikipedia appeared to come from a single IP address. Wikipedia then blocked that IP address, as it appeared to be spam-editing Wikipedia.

In fact, other incidents have occurred, we have heard, but they've been resolved between services and the IWF. The real problem is that the IWF model - targeted at a small class of content which very few people wish to access - does not work easily for other types of content. That does not stop it being suggested as an approach. It was specifically cited in the recent debates about new copyright blocking, by rights holders and ministers.

Cleanfeed blocks down to the individual URL and does not block a complete IP address or TLD unless it is the TLD that is in the IWF list.

Perhaps that wasn't always the case and, as I said, some ISPs did block a complete IP address at the time of the Wikipedia debacle.

Note the text below from the IWF web site:

IWF’s role in this blocking initiative is restricted to the compilation and provision of a list: the blocking solution is entirely a matter for the company deploying the list. Our list is designed and provided for blocking specific URLs only. Any decision to convert or adapt the list to block whole domains may lead to the overblocking of legitimate content and is not supported by the IWF.

The IWF list might be designed for specific URLs only, but the current implementations of blocking using it tend to feed end users via a single proxy server (or a small number of them). Anyone using BT appeared to be visiting Wikipedia from a small number of addresses, and those addresses were blocked as behaving like either spam or an attack on the system.

Any entry in the IWF list (or a similar blocking technology) that includes a high-volume site causes this or similar problems (there have been multiple incidents, but they are usually resolved quickly or only impact small constituencies). Expanding this to a 'porn blocking' function would have a similar impact, as Wikipedia contains information that some would classify as 'porn'.
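A minimal sketch of the two-stage, Cleanfeed-style design discussed in this thread may help show why the side effect occurs. The hosts, URLs and addresses below are illustrative only, and the real system is considerably more involved:

```python
# Sketch of a Cleanfeed-style two-stage filter (simplified, illustrative).
# Stage 1: if a request's host is on a coarse suspect list, reroute it
# through a shared proxy. Stage 2: the proxy blocks only exact URLs on
# the block list and forwards everything else. Because forwarded requests
# leave via the proxy's address, every customer visiting that host appears
# to the destination site as a single IP address.

from urllib.parse import urlparse

SUSPECT_HOSTS = {"en.wikipedia.org"}                              # stage 1: coarse host list
BLOCKED_URLS = {"http://en.wikipedia.org/wiki/SomeBlockedPage"}   # stage 2: exact URLs only
PROXY_IP = "203.0.113.7"                                          # illustrative proxy address

def route(url, client_ip):
    """Return (verdict, source_ip_seen_by_destination) for one request."""
    host = urlparse(url).hostname
    if host not in SUSPECT_HOSTS:
        return ("direct", client_ip)     # most traffic is untouched
    if url in BLOCKED_URLS:
        return ("blocked", None)         # exact URL match only
    return ("proxied", PROXY_IP)         # rest of the domain passes,
                                         # but from the proxy's address

# Two different customers fetching an allowed Wikipedia page both
# appear to the destination as the proxy's single address:
assert route("http://en.wikipedia.org/wiki/Main_Page", "198.51.100.1")[1] == PROXY_IP
assert route("http://en.wikipedia.org/wiki/Main_Page", "198.51.100.2")[1] == PROXY_IP
```

This is exactly the failure mode in the Wikipedia incident: the URL matching itself was precise, but funnelling a high-volume domain through a shared proxy collapsed thousands of users into one apparent address.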

The ISPs really have no business meddling with what goes through their network. The Internet isn't designed to work like that, and implementing network level filtering is a bodge at best and a technical nightmare at worst. I'm speaking from experience here, as I implemented an optional content filter for an ISP I was formerly employed by.

The responsibility here lies with parents to monitor their children's Internet usage and educate them as to what may or may not be appropriate. In reality, though, it's not going to be possible to shield children from all types of objectionable content, so the control must be put into parents' hands, and locally installed filtering is the best option. At least parents then have the power to unblock sites they're happy for their children to see.

Neither the State nor the ISPs have any responsibility to babysit the nation's children, and nor should they try to.

Is there some reason that I am currently unaware of as to why the IWF cannot simply work directly on the OS and skip the ISP altogether?

Surely it makes sense to simply create a login for your child on the computer that has all the necessary settings available to the parent. There is no reason why the current generation of smartphones cannot have this workaround installed also.

Why do we need to have an ISP spoon feed parental responsibility when the parents should be taking it upon themselves to protect their children?

I too share the worry of this blog post regarding the way the parental settings are communicated upon installation. As a Community Manager myself, I am all too aware of the flaws and holes in filters and restrictions - whether that be the child still gaining access to the banned content, or the adult being denied content without their knowledge. It is very naive to simply dismiss this possibility, especially given the expansive nature of the internet.

Jim, for some time now TalkTalk have offered their clients their "HomeSafe" service. This allows the account holder to voluntarily have network-level filters applied to their web traffic that can block designated categories of content, including pornography, self-harm sites and gambling sites, along with others. If the user can choose to switch these filters on or off, then how can this be objected to as censorship?

Since the launch in around May 2011, they have seen over 100,000 people take up the service, with pornography being one of the most popularly applied filter categories (after self harm / suicide sites).

I do not see how this is to be considered censorship if offered as standard by ISPs as long as the account holder has control over whether the filter is on or off.

Hi Peter, I agree: if users are opting in, and are not coerced, then there is nothing wrong with such filtering. This is a fine line. Offering a service is very different to saying ISPs will “force customers to specify if they want to view explicit sites”. There is both a potential privacy issue (why should people be forced to enter themselves into a database of "porn viewers"?) and an access issue (blocking content generally comes with some unwanted consequences, which a user should understand beforehand).

From what I understand, HomeSafe has no database of who has the filters turned on or off. If so, then whether the filters are on as standard and the user can go in and turn them off, or the reverse is true, there should not be a problem (from a personal point of view, I'd rather have them on and have to act to switch them off).

As for unwanted consequences, these apply to any filtering programme, many of which incorrectly block some sites. In addition, most paid-for services do not tell you which sites they are choosing to filter on your behalf, so that is a kind of paid-for censorship you give them control over. I have heard reports of a company using the filters to block its competitors' sites!

Having just done an image search for crowd scenes on the London tube and being presented with some highly sexualised content at work as a result, personally I think filters definitely have a place.

The opening point by Nigel, that there is nothing "inevitable" about a slippery slope, is itself quite telling, as many have noticed and commented. Even if over-blocking is not inevitable, that is neither reassuring nor fool-proof; and the reassurance that technological solutions can be improved and fixed is self-defeating, since the systems are human-generated and reflect the many biases and errors that we harbour. As noted by some commentators here, the case of cell phones and 'central' censoring is ample proof, as is state-monitored censoring of online content in a number of jurisdictions. Isn't it a remarkable turn of events - a sad reflection, truly - that the effort is to emulate some of these regimes? In short, ISPs shouldn't have any particular business in filtering.
