If Gorman were looking at Web 2.0 and wondering how print culture could aspire to that level of accessibility, he would be doing something to bridge the gap he laments. Instead, he insists that the historical mediators of access “…promote intellectual development by exercising judgment and expertise to make the task of the seeker of knowledge easier.” This is the argument Catholic priests made to the operators of printing presses against publishing translations of the Bible — the laity shouldn’t have direct access to the source material, because they won’t understand it properly without us. Gorman offers no hint as to why direct access was an improvement when created by the printing press then but a degradation when created by the computer. Despite the high-minded tone, Gorman’s ultimate sentiment is no different from that of everyone from music executives to newspaper publishers: Old revolutions good, new revolutions bad.

In Web 2.0: The Sleep of Reason, Gorman rambles over the landscape of authority, truth, and web 2.0 like a lost puppy, not quite sure where he’s supposed to be going, but sure he has a destination. And that destination is TRUTH. I believe that he has no idea what he is talking about re: Web 2.0, and that his article clearly illustrates the significance of his misunderstanding.

Let’s begin with some examinations of his quotes, shall we? The opening paragraph is a doozy:

The life of the mind in the age of Web 2.0 suffers, in many ways, from an increase in credulity and an associated flight from expertise. Bloggers are called “citizen journalists”; alternatives to Western medicine are increasingly popular, though we can thank our stars there is no discernable “citizen surgeon” movement; millions of Americans are believers in Biblical inerrancy—the belief that every word in the Bible is both true and the literal word of God, something that, among other things, pits faith against carbon dating; and, scientific truths on such matters as medical research, accepted by all mainstream scientists, are rejected by substantial numbers of citizens and many in politics.

I suppose we’d be better off, Michael, if journalists were required to get a governmental approval pass before they could write? The US has a long history of “citizen journalism”…if Thomas Paine were alive today, he’d have a blog.

And to equate the social movement inherent in Web 2.0 with creationism and alternative medicine is not only a category mistake of the largest sort, it is also just insane. It isn’t that there is a “flight from expertise”, Mike…it’s that we are re-defining “expert”. You sound like the Catholic loyalists railing against the Protestant movement…only the priests are allowed to talk to God! Bibles will only be printed in Latin!

The fact that information changes forms or source has no effect on its Truth. Truth judgments arise because the information itself is reflective of the world at large, testable and reproducible in the case of claims about the world (scientific claims) and verifiable in the case of claims about information itself. The goddamn source of the information has absolutely no bearing on the truth of it. None. Zero. Nada. Zilch.

Ah, but Mike has a bit about that:

Print does not necessarily bestow authenticity, and an increasing number of digital resources do not, by themselves, reflect an increase in expertise. The task before us is to extend into the digital world the virtues of authenticity, expertise, and scholarly apparatus that have evolved over the 500 years of print, virtues often absent in the manuscript age that preceded print.

The reason that the “scholarly apparatus” evolved isn’t because of some desire to desperately produce only the best knowledge…it evolved because of economic pressures. In print, not everything can exist. Print costs money, and in the world of the academic the things we put our financial faith in, mostly, are things that pass the “scholarly test” of peer review. We have to have some limiting process because there is only so much money, NOT because the process itself is holy.

In the digital world, money is often the least of the concerns of information production. That simply means that we have to critically examine each piece of information as it lies within the web of knowledge, and draw coherence lines between the pieces. But we don’t want to get bogged down in the old way of doing things just because it worked in print. Digital is different, and demands different processes and analysis.

The structures of scholarship and learning are based on respect for individuality and the authentic expression of individual personalities. The person who creates knowledge or literature matters as much as the knowledge or the literature itself. The manner in which that individual expresses knowledge matters too.

Ummm…no? After holding up the Scientific Method so often in his article, you’d think he’d understand it a bit more. The point of the scientific method is to eliminate the person and make it about the knowledge, writ pure. The person does not matter, cannot matter when it comes to the expression of the knowledge…keep in mind, we aren’t talking about the native intelligence necessary to invent or have insight. We’re talking about the information itself.

This is a rambling, nearly incoherent piece of writing when you try to connect logical lines between his arguments. He moves from comparing Web 2.0 to Creationism, to how his research on Goya done via print is the best way to do it, to comparisons between Web 2.0 and Maoism, to finally accusations of antihumanism.

Like this:

This would be the Super Secret Project #2 that I’ve been alluding to for a few weeks now. Full press release and information available on LITABlog, and much, much more to come on the official Showcase site.

Why do this? Well, the guiding hands of BIGWIG (Michelle Boule, Karen Coombs, and myself) had grown increasingly frustrated at the formal requirements for “official” ALA presentations, especially as they relate to technology. A paper-based, formally structured, face-to-face conference is just not the right answer for the majority of librarians anymore. I have taken part in multiple virtual conferences (HigherEdBlogcon and Five Weeks to a Social Library), and I prefer them for actual content to the sorts of things that ALA puts on. That isn’t to say that F2F isn’t valuable…it’s just a different measure of value. Witness that we included F2F as something that enhances the content of the presentations, but I would argue no more than having open communication channels virtually. It’s all about conversation…that’s the heart of the social web.

Combine the above with the ridiculous timeline needed for presentation topics…12-18 months out for a technology presentation? I can list at least 4 things that have happened in the last month that would be interesting. Trying to predict what might be interesting in technology in 12 months is a losing game, and it does nothing to actually serve either librarians or our patrons. We gave our presenters a deadline of a week before the conference to give us their content…a week. It is possible to be timely and flexible with this stuff, if it’s done well.

Join us in the experiment! Follow the conversations on the wiki, join us at ALA to meet what we think are the cream of the crop of current library technology people. We’ve got movers & shakers, we’ve got OCLC award winners, we’ve got radical metadata pirates and the guy who made LibraryThing. Why wouldn’t you want to come along for the ride?

Like this:

A new “search engine” went live this week calling itself Mahalo. How does it distinguish itself from the big guns of search (Google, Yahoo, Ask, MSN)?

Mahalo is the world’s first human-powered search engine powered by an enthusiastic and energetic group of Guides. Our Guides spend their days searching, filtering out spam, and hand-crafting the best search results possible. If they haven’t yet built a search result, you can request that search result. You can also suggest links for any of our search results.

Yep, they are human-indexing the web! We can disregard the “first human-powered search engine” bit, since they aren’t a search engine (they appear to be an index, with a search on top) and they clearly aren’t the first in any case (Yahoo started out exactly the same way, and the Librarian’s Internet Index is the same thing done by information professionals).

In their FAQ, they handily tell you their selection criteria. Here are the couple that stood out to me:

Sites they will not link to:

… sites of unknown origin (i.e. we cannot establish who operates the site).
… sites which have adult content or hate speech.

Establishment of “who operates” the site on the Internet? Really? Does a nom de plume count? How about a site whose authors must remain anonymous for political reasons? And that’s setting aside the longstanding legal precedent that anonymity in speech is necessary for free speech. (see: McIntyre v. Ohio or Talley v. California)

Restricting Adult Content and Hate Speech makes it sound like those are two very clear categories. I’m always wary of groups who feel like they should be the ones making content decisions…one of the reasons I’m so happy to be a librarian.

They will link to:

… sites that are considered authorities in their field (i.e. Edmunds for autos, Engadget for consumer electronics, and the New York Times for news).

I swear on a stack of pancakes, I will get off my ass this year and write that article that’s been rattling around in my head about how Authority as a criterion for ANYTHING is old and busted.

The Library Salary Database includes aggregated data from 10,631 actual salaries for six librarian positions in 1053 public and academic libraries.

The site itself, however, says:

The Library Salary Database has current aggregated salary data for 68 library positions from more than 35,000 individual salaries of actual employees in academic and public libraries in the United States.

So which is it? 6 positions, or 68? I’m certainly not paying to find out! Jenifer kindly clarifies in the comments…

As unclear as the actual sources may be, no one disputes that the data they are aggregating is collected from their own constituents. Who else is reporting this, if not ALA members? So the ALA is collecting the info, and then selling it back to us. For an annual rate of $150!!!!!

This is yet another of the absolutely insane things that come out of ALA. I might understand charging outside interests for the information, but this should be free for members. Then again, I think that the ALA should be operating in a far more open and free manner than it has for years (some of you might remember my Master’s Paper, which, flawed as I admit portions are, spoke strongly against the locking up of ALA content).

I’ve not talked at length about my individual issues with the organization yet, but if I could be a LITA member without being an ALA member, you can bet I’d go there. ALA as a whole is overgrown and needs a good weeding.

Now that I think about it, sets of facts really aren’t copyrightable. Anyone out there with the ability to scrape this database and produce a free version? I’ll pony up the $30 for a month’s access if it frees the data behind the scenes.

Seriously, I’m certain this is the future of the catalog. Not just the specific tools, but the idea of leveraging one set of data against another set using easily modified and extensible tools. It’s many-pieces-loosely-joined for the OPAC, and it’s brilliant.

I particularly love the tag browser, as well as the similar books links. Leveraging the LibraryThing data is a wonderful way to start this, but eventually libraries will need a way to share in a P2P system rather than having a central storehouse. We need to be sharing our data in a P2P format, with always-on trickle-and-compare running, updating the tag clouds and recommendations. If we just managed to collect the click-through data of our catalogs, we could manage to put together some pretty robust recommendations, all driven by scholarly activity.
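To make the click-through idea concrete, here is a minimal sketch of what building “people who viewed this also viewed…” recommendations from catalog click data could look like. This is purely illustrative: the function names, and the assumption that click logs arrive as (session, record) pairs, are my own, not any OPAC’s actual API.

```python
from collections import defaultdict
from itertools import combinations

def build_co_views(clicks):
    """Count how often pairs of records are viewed in the same session.

    `clicks` is an iterable of (session_id, record_id) pairs, a
    hypothetical stand-in for catalog click-through logs.
    """
    sessions = defaultdict(set)
    for session_id, record_id in clicks:
        sessions[session_id].add(record_id)

    co_views = defaultdict(lambda: defaultdict(int))
    for records in sessions.values():
        # Every pair of records seen in one session counts as one co-view.
        for a, b in combinations(sorted(records), 2):
            co_views[a][b] += 1
            co_views[b][a] += 1
    return co_views

def recommend(co_views, record_id, n=3):
    """Return the n records most often co-viewed with record_id."""
    ranked = sorted(co_views[record_id].items(),
                    key=lambda kv: kv[1], reverse=True)
    return [rec for rec, _ in ranked[:n]]
```

A real system would need weighting, decay over time, and the P2P trickle-and-compare syncing described above, but the core co-occurrence counting really is this simple.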

Like this:

I just had to laugh at one of the more recent posts on the ACRLblog about questioning the standard spiel of authority in Information Literacy instruction. Mark Meola says:

This is very simple advice yet I seldom see it recommended outright in the checklists. It’s a tricky balancing act, but in our drumbeat for students to “use authoritative sources” let’s not forget to recommend questioning authority.

Indeed, that is the focus of an entire class that I do, using the sources on this slide (also, up for many years).

Information evaluation without reliance on authority is being taught, and I maintain it is the way it should be taught. Authority is the thing we used to have to use as an explanation, back when actual verification wasn’t possible except for those willing to spend weeks/months/years doing so. We relied on the magical word “authority” in the same way we relied on phlogiston and ether. And just like those, authority is just an explanatory shortcut that is no longer needed.

Like this:

Check it out! Our very own Jessamyn West gets on BoingBoing, and is called an “Internet Folk Hero” by Cory Doctorow…I’ve always been a huge fan of Jessamyn, and happy to call her a friend, but my “proud to know” radar just went ballistic!