I'm a Fellow at the Adam Smith Institute in London, a writer here and there on this and that and strangely, one of the global experts on the metal scandium, one of the rare earths. An odd thing to be but someone does have to be such and in this flavour of our universe I am. I have written for The Times, Daily Telegraph, Express, Independent, City AM, Wall Street Journal, Philadelphia Inquirer and online for the ASI, IEA, Social Affairs Unit, Spectator, The Guardian, The Register and Techcentralstation. I've also ghosted pieces for several UK politicians in many of the UK papers, including the Daily Sport.

Do We Want Google To Have This Much Power? Child Pornographer Caught By Gmail Scan

A slightly disturbing piece of news: a child pornographer has been caught as a result of Google scanning his emails for known images of child abuse. We’ve all known (or at least we should all have known) that Google does scan our Gmail, for that’s how the company serves us up ads to go with it (and I’ll admit to a certain delight at clicking on the ads of my competitors that sometimes turn up on business emails). But that this scanning is now linked into the law enforcement network is perhaps, from a public policy point of view, rather more worrying.

Technology giant Google has developed state of the art software which proactively scours hundreds of millions of email accounts for images of child abuse.

The breakthrough means paedophiles around the world will no longer be able to store and send vile images via email without the risk of their crimes becoming known to the authorities.

Details of the software emerged after a 41-year-old convicted sex offender was arrested in Texas for possession of child abuse images.

Police in the United States revealed that Google’s sophisticated search system had identified suspect material in an email sent by a man in Houston.

Child protection experts were automatically tipped off and were then able to alert the police, who swooped after requesting the user’s personal information from Google.

This worries me for several reasons, not least that we seem to be getting ever closer to Bentham’s Panopticon.

I start out by being rather out of step with the current war on child pornography in the first place. I am, like all right thinking people, against child abuse itself but that’s not quite the same thing as being against such imagery. For we’ve good evidence that the expansion of the availability of pornography itself in recent decades is one of the things that has led to the decrease in the incidence of sex crimes. As economists put it, the two are substitutes for each other, not complements, nor is one an incitement (or an excitement) to the other. And this also seems to be true of child pornography:

Pornography continues to be a contentious matter with those on the one side arguing it detrimental to society while others argue it is pleasurable to many and a feature of free speech. The advent of the Internet with the ready availability of sexually explicit materials thereon particularly has seemed to raise questions of its influence. Following the effects of a new law in the Czech Republic that allowed pornography to a society previously having forbidden it allowed us to monitor the change in sex related crime that followed the change. As found in all other countries in which the phenomenon has been studied, rape and other sex crimes did not increase. Of particular note is that this country, like Denmark and Japan, had a prolonged interval during which possession of child pornography was not illegal and, like those other countries, showed a significant decrease in the incidence of child sex abuse.

Greater availability of child porn leads to a decrease in what we should arguably be concentrating upon: child abuse. Urges are satisfied by images rather than actions, which, in the absence of any method of removing those sexually attracted to children from the population, or of removing that urge from them, seems to be a welcome reduction in harm. So I’m not in line with the current general thinking that any and every incidence of child pornography is something that must be stamped out by any method possible and damn the consequences.

But it is the consequences of this that do worry rather more. Yes, as above, we know that Google has been scanning our Gmail in order to serve us ads. But this newer link into the criminal justice system is worrying. For of course such a tactic is not going to remain applied to one crime and one crime only. We can easily enough imagine extensions of it to mentions perhaps of drugs, or drug dealing. Or, in countries with a rather lower level of protection for civil liberties than our own, to certain buzzwords to do with the politics of those countries. And don’t forget that Google is subject to the laws of the countries in which it operates. If, say, China requires that emails that contain “Tiananmen Square” be reported to the Chinese security agencies then Google will have to comply or not do business in that country (and I think Google has declined to do some work inside China on that basis, and Yahoo has not so declined).

The point being that, having now shown that it can monitor email for proof of one crime, the company will come under ever greater pressure from a number of sources to apply the same monitoring techniques to evidence of other crimes. And there are plenty of places around the world where it is criminal to do things we regard as being a lot more benign than child pornography. Even places where it’s illegal to do things that we consider to be basic human rights. It seems to me inevitable that this technique, however just people may think it to be for this one particular crime, is going to expand out to many others.

Of course, one can simply not use Gmail, one can also simply not do anything that might constitute a crime, but it’s still worrying to me that our communications are to be so vetted.

Update: A number of commenters have made the point that the very existence of child pornography means that a child must have been abused in its production. I’m afraid that this simply isn’t true. Here is the relevant law for the US:

Images of child pornography are not protected under First Amendment rights, and are illegal contraband under federal law. Section 2256 of Title 18, United States Code, defines child pornography as any visual depiction of sexually explicit conduct involving a minor (someone under 18 years of age). Visual depictions include photographs, videos, digital or computer generated images indistinguishable from an actual minor, and images created, adapted, or modified, but appear to depict an identifiable, actual minor.

Entirely computer-generated images, and actors made up to look underage, also qualify as child pornography.

Not that this was the point of the original post of course: that was the concern that the same tactics and techniques might be used to chase other, more minor, crimes at some point in the future.

Comments

Unless everyone is divorcing the same person and everyone is sending the same email to each other, hash matching isn’t going to do that. Flat slope, uphill climb. We do have to remember it started with ad content tracking and went on to killing spam using the same tech, and stamping out CP from the internet in general is fine. It isn’t like Google is offering a free service where anything goes.

Microsoft and all the big tech firms have active investigations to chase down individuals who cause harm on the internet through highly illegal and disruptive activities. How many people are going to defend automated robotic spammer accounts?

I must say that it appears that you didn’t think this through. How do you suppose these children end up in the pornographic pictures in the first place? Do you believe they are there by choice, or are they being manipulated and abused? I believe it’s the latter. ALL forms of child porn should be stamped out and anyone who has it should be prosecuted to the full extent of the law.

The definition of CP in the UK may include software-generated images, but the watchlists probably only include actual victims’ images and the hashes of the known copies in circulation.

Google chases after spammers, scammers, and malware authors in emails. And every tech company has active investigation teams chasing down illegal computer activities globally, and they already use the data stored in our accounts to find people or see where an attack is coming from. It is a common tactic to hijack or use massive numbers of free accounts, so those accounts are having their privacy blown wide open because their data hashes match known watchlists for spam/malware.

Obviously spam and malware detection is critical, as is the offensive legal approach, and CP is far worse because of the actual human cost, so what is the problem here, exactly?

We should be clear that nothing is actually reading the emails. It’s the same as saying that spamming out a computer virus and having Google send the authorities after you is an invasion of your privacy.

And because in your country (in which, BTW, 600+ child pornography criminals were caught just a few months ago, and in which the BBC and your government appear to have a particularly pernicious infestation) they “include” items made “in software”, that makes your argument correct? Are you crazy? Do you even have any children? What would you think if someone abducted your children, or those of family or friends, and made child pornography with them? Would you tell your family or friends it’s ok because it keeps other children from being abused???

Please note that software generated images are also illegal in the US. So it’s not just in “my country”. As to your other point, that’s really where the problem comes in, isn’t it? If images reduce actual abuse (which is what peer reviewed science tells us does happen) then should we allow images and reduce abuse or not? Do note that I don’t actually decide one way or the other above. Only ask that question: what is it that we want to do?

Mr. Worstall, if you had conducted even a small amount of research (simple Google searches), you would find that the US Supreme Court (Ashcroft v. Free Speech Coalition) struck down the very provisions of the US statute that you cite above, which indicated that the abuse of an actual child is not necessary for material to be considered child pornography. As a result, only material involving an “actual child” is prosecutable in the US.

Also, as someone who has been working this crime type for over a decade I can tell you with first hand experience that child abuse material is not a replacement for the actual abuse of a child as you would indicate in your article.

You do realize that any files that are released onto the Internet are there to stay, right? Therefore, arresting people for possession of such images is counterproductive. You can arrest 600+ people in 2014 and 200+ people in 2015, but in the end people will always be arrested for having the same images every other perp has.

You really are not doing anything to stop the flow of illegal images. It’s the equivalent of whack-a-mole. Governments create/manufacture crimes that did not exist before in order to gain public support (like in this discussion). I see many people shunning the writer for merely questioning the methods used by a tech giant.

I bet you that 20 years from now nothing will have changed; people will still be arrested for having images or videos dating back to the 1980s. All because there is a faulty theory that people who look also abuse, which ironically cannot be applied to any other crime.

Child pornography is a whole different ball game; some state it’s a supply/demand market based on bartering, which is true in some instances but not so much in others. It won’t be long now until people realize enough is enough and demand a change to laws that were created out of hysteria and mass ignorance.

You’re making up fake information: the image matched a known picture of a nude child because it was a child abuse image entered into a database that hashes material child predators make of real children. They do not go around searching Google Images to add images to their database.

So please note it’s only actual images of real children that are affected, and rightly so. This person was also found creeping away from his registered home; he took a job at a restaurant and had been secretly recording families and their children for some time. Eventually, even with his images, he would have done it again, as he was already known to the police due to his previous actions.

The study you reference is basically the BS statistics/lies type, while Google just matched an identical copy of a real child abuse image, and some people, including you, misunderstand that even though Google is evil, this isn’t an evil part of Google.

Not really: with hash-based matching you can easily make it impossible to upload the identical image to any public website. The same system is used to block spam and malware. While it is correct that you can never completely get rid of it, do you really want me to turn off the anti-spam/malware/AV system I run at my campus? I’m not sure those studies are very correct according to the statistics I get in reality.

It is all about barriers to entry and roadblocks to suppress an activity that is negative for society. If we disabled learning anti-spam/malware filters, the internet would once again be totally awash with spam comments, spam emails, spam this, spam that. Even this commenting system has anti-spam matching, and while you can never get rid of spammers, filters work very well in keeping things at a level where people can actually use a service without being swamped by garbage.
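For what it’s worth, a minimal sketch of the kind of exact, hash-based filtering being described here, assuming the watchlist is simply a set of SHA-256 digests of known files; the digest shown is a placeholder, and real systems (PhotoDNA-style perceptual hashing, for instance) are considerably more robust to re-encoding, so treat this purely as an illustration:

```python
# Illustrative only: a toy upload filter that rejects byte-for-byte copies of
# watchlisted files. KNOWN_BAD_HASHES and its contents are placeholders, not
# any real database.
import hashlib

KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder digest
}

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def allow_upload(data: bytes) -> bool:
    """Accept the upload unless its digest exactly matches a watchlisted one."""
    return sha256_hex(data) not in KNOWN_BAD_HASHES
```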

Wow, one doesn’t read many pro child pornography articles online. I’m surprised this got by your editor. In the best light, it comes across as deeply ignorant. At least, I hope that’s what’s going on here, your ignorance, and you’re not trying to excuse or minimize your own illegal practices. You do realize that pictures come from somewhere, right? So a picture of a child being abused means that a child was actually abused for this person’s sick desires, and that buying and trading in those pictures is directly connected to the abuse: pedophiles are the market which people sell to by raping children and then hawking the pics. The least bad outcome is that the abused have to deal with pictures of themselves being raped spread across the globe for the rest of their lives. You might want to consider re-thinking your support of child abuse, bud.

“and you’re not trying to excuse or minimize your own illegal practices.”

Correct, I’m not.

“So a picture of a child being abused means that a child was actually abused for this person’s sick desires”

That’s not necessarily true. Images that were created entirely in software, 3D renderings for example, without the use of a single human being at all, can be and still are defined as child pornography.

Which is where it all gets rather difficult in both a legal and a moral sense. What is it that we actually want to reduce? The volume of child pornography? Or the number of sexual assaults on children?

That is just an excuse: if you manage to get your computer-generated CP onto a high-value watchlist meant to root out people far worse than malware authors and spammers, then you’re probably also sending out other, non-computer-generated images.

These types of filters don’t catch one-off images; they detect massive sharing of digitally exact copies. They could easily determine whether you’re actually a CP spammer, malware spammer, phishing spammer, or spammer spammer. But they can’t and don’t just detect “thought crimes” or anything even remotely like what you’re trying to imply.

The content-aware advertising filters do read the emails more closely, but even those filters don’t actually understand anything they are reading. (They are far more imprecise, and if you want you can make them do whatever you want by messing around with the personalization settings. I tweak them to do research for me by artificially altering the interest subjects.)

Image matching by hashed values is an exact filter: no one is going to be accidentally labeled a CP spammer unless they are one.
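To illustrate why an exact hash match is so unlikely to flag anything but a verbatim copy, here is a small sketch assuming plain SHA-256 rather than whatever scheme Google actually uses, so take it only as an analogy: flipping a single bit of a file produces a completely unrelated digest, which is why a watchlist of digests only ever matches byte-for-byte copies of files it already contains.

```python
# Sketch: a one-bit change to a file yields an entirely different SHA-256 digest.
import hashlib

original = b"...raw bytes of some image file..."
altered = original[:-1] + bytes([original[-1] ^ 0x01])  # flip one bit in the last byte

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(altered).hexdigest())
# The two digests bear no resemblance to each other, which is why hash-based
# watchlists cannot "drift" onto merely similar images.
```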

You walk a dangerous line. While privacy is a big concern, and privacy of private communication is desirable, the world has changed and the expectation of privacy has decreased significantly. Privacy is still available via courier and encryption, for those that want it or have the resources to pay for it.

That said, the argument that the availability of pornography has led to a decrease in sex crimes has merit. I support the first amendment right of people to look at what images they want to look at. But I draw the line at child pornography. Those images are of exploited children. What happened to them was criminal. The further distribution of those images is criminal and an affront to the child being exploited. If pedophiles are sexually aroused by pornographic images of children, and are more likely to exploit a child themselves if images are not available, then so be it. Are you advocating sacrificing the privacy of one victim to prevent another victim? Sexual crimes against children should result in the chemical or physical castration of the perpetrator, not the provision of explicit material of victims to soothe their own need to exploit others.

Public policy moves too slowly. And since the tech industry loves change, why not just extend malware filtering to cover known, non-computer-generated CP image sets?

You are ignoring the fact that these filters are likely built from libraries seized from convicted child abusers, and the filter only catches the leftover people from what are often CP distribution rings. A normal individual is unlikely to end up with a portion of a known CP dataset randomly or by accident.

The NSA and CIA are pretty dumb and wasteful; there is no way to actually process the entire internet at once. And by tapping basically the entire internet they get far more data than they can ever process. So even if they did try to detect CP in the raw data captures, it would be very difficult and far less efficient than Google just checking images against a hash table, which is both precomputed and extremely efficient and accurate.

You’ve slightly missed the point I’m making. I’m not worried about this use of the technology: I’m worried about the future uses to which this technology is going to be put.

Things do have a tendency to slip, you know. Drug asset confiscation laws were at first only going to be about real drug dealers having their profits taken from them. Now they’re applied to just about anyone found carrying significant cash. There really is a slippery slope.

Image-based hashing can only match known images. What type of doomsday privacy scenario are you thinking about? Even hundreds of images of the same person will never have the exact same image hash unless one is a massively distributed copy, at which point it probably is not personal anymore.

If this is a slippery slope, we already started at the bottom (ad-based content tracking) and are clawing our way back up by using the same tech to instead kill spam, malware, CP, phishing attacks, fake donations, …

And if you’re going to say Google is bad with content tracking, why exactly are there so many external domains on news sites, many related to ad tracking, user tracking, …? (The internet is funded by ads; it is pretty obvious.)

And people know the internet is free because we basically “sell” our info in exchange for “free” services. But smart people can also withhold as they choose, which is why I selectively give permission to sites.