In July of 2018, I was informed by Google Adsense that specific content on my site would have “restricted ad serving” and that I needed to visit the policy centre on Adsense to find out why. There was no link to this centre, by the way, and it took me a while to figure out that I had to go to Adsense > Settings > Policy, where I saw this:

My friend pointed out that it is ridiculously damaging to moderate content (or, at least in this case, revenue) by “casting a wide net based solely on the presence of key words,” and she’s quite right. Now, I did attempt to get Google to reconsider, but consistent with my past experiences of their censorship and draconian approach, they don’t give a damn if you aren’t ‘big.’ And even then, important people get slapped by Google all the time.
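The problem with that wide net is easy to demonstrate: a filter that only checks for the presence of key words cannot tell a health article from smut. A minimal sketch of the idea, where the word list and sample pages are invented for illustration:

```python
# A naive keyword filter of the kind described above: it flags any page
# containing a listed word, with no sense of context. The word list and
# sample pages are invented for illustration.
BANNED_WORDS = {"breast", "sex"}

def flag_page(text: str) -> bool:
    """Return True if any banned word appears anywhere in the text."""
    words = text.lower().split()
    return any(word.strip(".,!?") in BANNED_WORDS for word in words)

# A medical sentence gets flagged exactly like explicit content would:
print(flag_page("October is breast cancer awareness month."))   # True
print(flag_page("Captain Kirk orders wine from the computer."))  # False
```

The filter has no way to distinguish the two cases; only a human reviewing the flag can.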

In 1964, there was a landmark case in the US, Jacobellis v. Ohio, about whether the state of Ohio could, consistent with the First Amendment, ban the showing of the Louis Malle film The Lovers (Les Amants), which the state had deemed obscene. But the reason the case became so well known was not its subject matter.

In fact, the standard remained quite fragmented until the 1973 Miller v. California decision, which replaced the earlier requirement that material be “utterly without redeeming social importance” to count as smut (i.e. obscene). The Miller test addresses this with a check for “serious literary, artistic, political, or scientific value” – the SLAPS test, and yes, the acronym is hilarious.

No, everyone knows about the first case because of the following quote by Justice Potter Stewart:

I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.

When I was very young, maybe six, my father did a talk about artificial intelligence with a slide of Captain Kirk ordering things from the ship’s computer. It stuck with me, which Dad finds amusing, and I’ve often reflected back on it as an understanding of what an AI can and cannot do.

The ship’s computer on Star Trek can do a great many things, but it cannot make ‘decisions’ for a person. In the end, a human always has to decide what to do with the variables, what they mean, and how they should be used. Kirk has to ask the computer to chill the wine, for example, and if he doesn’t specify a temperature, the computer will go back to what some other human (or more likely Mr. Spock) has determined is the optimal temperature.

AIs don’t exist. As useful as I find digital assistants like Siri and Alexa, I know they aren’t intelligent and they cannot make decisions. They can follow complex if/then/else patterns, but they lack the ability to innovate. What happens if Kirk just asks for ‘white wine, chilled’? What vintage will he receive? At what temperature?

To a degree, this is addressed by how Captain Picard orders his tea: “Tea, Earl Grey, hot.” But someone had to teach the system what ‘hot’ meant, and what it meant to Jean-Luc rather than to Riker, who probably never drank any tea. Still, Picard has learned to specify that he wants Earl Grey tea, and he wants it hot. There’s probably some poor tech boffin in the belly of Starfleet who had to enter the optimum temperature for each type of tea. Certainly my electric kettle has a button for ‘black tea,’ but it also tells me that’s 174 degrees Fahrenheit.

My end result with Google was that I had to set that specific page never to show ads. Ever. Because Google refused to have a human take a look and say, “Oh, it’s the image. Remove that and you’re fine.” A human could have looked at the image, recognized it isn’t pornography, and flagged the page as clean.

What we have is a limitation in the system, wherein there is no human check, which results in me getting annoyed and Google being a monolithic annoyance. Basically, Google has automated the system to their specifications, and then, instead of putting humans on the front lines to validate the results, they let it go.
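The difference between the two approaches is small in structural terms: instead of acting on every automated flag, flagged items go into a queue for a person to decide. A sketch of that human-in-the-loop shape, with all names and logic invented for illustration:

```python
# Two moderation pipelines: the fully automated one described above,
# and the same system with a human review step before any action.
# All names and logic here are invented for illustration.
from collections import deque

review_queue: deque[str] = deque()

def auto_moderate(page: str, flagged: bool) -> str:
    """Google-style: the machine's flag is the final word."""
    return "ads restricted" if flagged else "ads served"

def moderate_with_human(page: str, flagged: bool) -> str:
    """A flag only queues the page; a person makes the actual decision."""
    if flagged:
        review_queue.append(page)
        return "pending human review"
    return "ads served"

print(auto_moderate("my-article", flagged=True))        # ads restricted
print(moderate_with_human("my-article", flagged=True))  # pending human review
print(list(review_queue))                               # ['my-article']
```

The automation still does the cheap, scalable part; the queue just keeps a person between the flag and the consequence.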

Have you ever visited a forum or a chat site where everyone is acting like decent people to each other? Humans did that. A human sat down, purged the site of vile content, and had to sit and read it all to be sure. They pushed back on trolls and other problematic people, all to help you.

Don’t believe me? Okay, do you remember the WordPress plugin WangGuard by José Conti? He shut the service down in 2017 because running it was giving him a mental breakdown. The plugin worked so well because he, a human being, evaluated the content.

WangGuard worked in two ways. The first was an algorithm that I had been perfecting for seven years, one that kept improving as the sploggers evolved, so it was always ahead of them. The second part was human: I reviewed many things, among them sploggers’ websites, to see their content, improve the algorithm, and make sure it worked correctly, both when a site was blocked and when it was not. The great secret of WangGuard was this second part; without it, WangGuard would not have become what it was.