Three or four years ago I described what I called the “book findability” problem. The audience was a group of confident executives trying to squeeze money from an old-school commercial database model. Here’s how the commercial databases worked in 1979.

. . . . . .

What prompted the write-up was the complete and utter failure of indexing services to make any attempt to locate, index, and provide pointers to books regardless of form. The baloney about indexing “all” information is shown to be a toothless dragon. The failure of the Google method, and the flaws of Amazon, the Library of Congress, and the commercial database providers, are evident.

ROBERT STEELE: This is a hugely important commentary. The problem is even worse when one looks at the additional complicating factors: material published in hard copy only (gray literature), material in the deep web (not in full text and without a persistent URL), and material in languages other than English. The standard finding is that 1% of written papers are processed; 1% of what NSA collects is processed; 1% of what a standard embassy or corporation collects is indexed and processed. We are a largely ignorant world, because the top-down “because we know best” model is corrupt, ignorant, and divorced from the public interest.

Needed is a new open source everything model that breaks completely from the West’s top-down secrecy rule of “we don’t need no stinkin’ accountability.” I expect this model to emerge in South America, Asia, and Africa.