The Chronicle blogger does not like this. Predictably, Scribe, when given a sample word that can only have a small and quirky corpus to draw upon (e.g. “hermeneutics”), suggests weird, only arguably grammatical completions. (Though not necessarily less grammatical than the sorts of articles which contain the word “hermeneutics”.)

It is of a piece with posts I read earlier today snarking on Google Instant, or with the torrent of condemnation I have seen about the inaccuracy of Google metadata. (Passim. Seriously.)

And all of this criticism that Google is not doing these things correctly is making me cranky, because I think that it’s missing the point. Because what if the point is not, say, providing research-grade metadata every time? (Crucial when you need it, but generally, people don’t.) What if the point is not even providing correct autocompletions? What if the point is saying — we have data. We have computational powers on a scale incomprehensible only a decade or two ago. What can we do with that?

And if that’s the point — if that’s the game being played — then the way to win it isn’t by being correct: it’s by pushing the boundaries. The way to win is asking — then operationalizing — questions you don’t know the answer to, like Peter Norvig said, flipping the coin, seeing when you come up heads.

One of the striking things about Google is that they flip those coins very publicly, so we all get to see when they come up tails. And then they get criticized for coming up tails, and the criticism rolls off their back and they merrily steamroller along, because the criticism is missing the point. It’s like criticizing a fencer for not intercepting a touchdown pass. Meanwhile, armed only with complaints, you get skewered in a fraction of a second.

There are some fascinating criticisms out there of Google’s errors. The fiasco over privacy and Buzz, say. Blundering into the unknown means stirring up complex systems in unexpected ways, and the anatomization of that fail can be infinitely intriguing.

But fundamentally, twenty years from now, our understanding of what access to information means will be transformed by how we have internalized the lessons of Google’s bold failures. It will not be transformed by complaints that fencers are bad at football. Even if they are.

“We didn’t think about default privacy settings for Google Buzz” is not so fine.

There has to be some give and take that goes on here. But yeah, I generally agree: if you’re willing to admit you made errors and have the capacity to rapidly change, then absolutely, go forth and try lots of things that’ll turn out to be errors. 🙂

Yeah, this is why I’m interested in criticisms which come from an out-of-the-box direction — not “you’re playing a traditional metadata game wrong” (when they’re not even playing that game at all) but “the game you are playing turns out to interact with these things from TOTALLY OTHER BOXES, and [here’s why that’s conceptually challenging/here’s why you needed to think about that box before you launched/etc.]”.

Those privacy concerns are really serious and interesting. The sociological argument is also interesting: Google employee culture is a small and unrepresentative fraction of all the culture in the world, yet they produce tools for the entire world, so blind spots are both likely and problematic.

Definitely not advocating that people stop snarking on Google. But I would like higher-quality snark.

“What if the point is saying — we have data. We have computational powers on a scale incomprehensible only a decade or two ago. What can we do with that?”

From the point of view of a cog in the very big machine, that’s exactly the point. No one’s figured out yet how people do some of this stuff (though in the case of book metadata we’ve apparently figured out that they do a pretty lousy job of it), so naturally there are going to be some missteps in trying to get machines to do it for us.

By contrast, when we behave in a tone-deaf manner towards people—and force them to put up with our tone-deafness—by all means, dig in.