E-Resources: What Could Be Better? | Not Dead Yet

When I recently did my annual Best New Reference Databases list (it’ll be in the March 1 issue of LJ) I included one outstanding new re-release: the August 2012 update of JSTOR from ITHAKA. I reviewed the file in the 11/1/12 LJ and was impressed by the new main search page, as well as by the ease of searching, finding, and manipulating records. ITHAKA deserves credit for making the file so much better, rather than resting on its laurels and on the ubiquity JSTOR enjoys among researchers.

As I was working on the Best list, it struck me that the state of e-resources is truly a mixed bag with respect to their discovery, access, and usability. Some of that is because continuing, substantive development of search interfaces is a variable thing: not all companies keep trying to improve their products, as ITHAKA has. Some of it is because so many different products are being brought to market by so many different entities that it’s hard for librarians, let alone the individual researcher, to keep track of them all. But I think some of it is due to the fact that A) it’s not really possible to “standardize” e-products, because they need to be able to deliver different things differently, depending upon the searcher’s need, and B) it’s not really all that easy to search online successfully and effectively.

Google and Google Scholar are very popular for several reasons, not least of which is that they’re so simple to use: a single search box and immediate results. Possibly 23,000,000 results, but immediate, anyway. Their popularity is proof of just how much (mostly newbie) researchers don’t know about, dislike, or dread using many library-based databases. And I have to admit that back in the early days of electronic resources, I was trepidatious about so-called “end-user searching,” because it seemed obvious that if researchers were doing the online searching themselves, they were likely to experience a great deal of frustration, not getting what they wanted out of the databases. That, of course, was in the dark ages of the 1980s, when we were often searching using commands and tags, but my concerns continue to be borne out today. Undergraduate researchers now look at me like I’ve got two heads if I talk about subject headings or descriptors, unless I can get them to pay attention long enough to see what a difference using those antediluvian information appendages can make to the quality of their search results. I try to do this as fast as possible, since so many students can barely sit still long enough for me to sign into a database. Frankly, I don’t explain what I’m doing much of the time when I’m helping a student researcher, because they don’t want to hear it. They want to see the full text of the perfect article onscreen right now, and if I can’t deliver that, what good am I to them, anyway?

Then there are the wonderful students who want you to show them exactly what you did to get the results you got out of the database. All goes well until you get to the part that took you 20+ years to learn about how information works (and doesn’t work) and how you have to tease it out of a zillion online items. And trying to explain the bare facts of that would take so long the student would have graduated by the time you finished.

The notion that giving students the ability to search online themselves will make them good researchers is predicated on the flawed premise that they know what they’re doing in an online database, or that they can “pick it up” in a matter of minutes. This idea is a load of nonsense. Post-baby-boom researchers may know how to mark up a web page in HTML within seconds, but they’re not going to grasp the complete underpinnings that govern sophisticated search systems in a trice. It takes extensive online experimentation and education to coax what you really want out of that computer.

The part of the searching equation that hasn’t happened yet, because it is so hard to do, is getting online systems to the point of employing sufficient artificial intelligence to bring into play what takes humans years to learn: a combination of knowledge and technique that encompasses a huge range of subjects and technologies. Discovery systems haven’t gotten us there yet, not by a long shot. And the more I see of current-day online technology, the more heartened I am about job security for librarians.

In the meantime, we’re all struggling with how to get these online resources to our users. Given how much they cost, and how large a chunk of library budgets they account for, it’s a shame (not to say a scandal) that so many library researchers still don’t know what we actually have for them to use. (There’s also the related problem: researchers who use these wares a lot but don’t know that the library provides them, and so think they don’t need the library anymore because “they can get it all online.” Don’t get me going on this or my blood pressure will soar.) As one means of helping to fix that problem, my colleague Marie Kennedy and I have recently finished the book Marketing Your Library’s Electronic Resources: A How-To-Do-It Manual for Librarians. It’s due out in March 2013, and I hope it helps ameliorate at least one part of the e-resource problem facing us all.

Meanwhile, I’m on the lookout for new databases and re-releases that might help to make “that miracle occur” for every researcher. I’d love to hear about candidates if you have any.

About Cheryl LaGuardia

Cheryl LaGuardia always wanted to be a librarian, and has been one for more years than she's going to admit. She cracked open her first CPU to install a CD-ROM card in the mid-1980s, pioneered e-resource reviewing for Library Journal in the early '90s (picture calico bonnets and prairie schooners on the web...), won the Louis Shores / Oryx Press Award for Professional Reviewing, and has been working for truth, justice, and better electronic library resources ever since. Reach her at claguard@fas.harvard.edu, where she's a Research Librarian at Harvard University.


Comments

Cheryl, you are dead on in your comments, and I see nothing to quibble with. I think, however, that librarians tend to assume that enabling students to optimize complex databases is all our responsibility (as in, librarians and professors are each in their own silos and professors don’t care about research, so we librarians have to bravely and tragically fill the gap). The real problem is that, though we are deep into the information age in which our graduates need to be skilled information handlers, academia has not yet awakened to the fact that skilled information handling requires education, not brief training. I have argued in several publications that information literacy – from understanding the nature of resources available to formulating good research problems and finding/evaluating resources well – needs to be at the foundation of education, having equal billing with content. Learning how to handle information well (including optimizing databases) is more akin to learning a new language than learning how to mark up a web page. We librarians know that. Our fellow academics need to get on board if they truly want to educate their students for the information age.

Dear William,
I agree heartily with you about librarians and faculty needing to be partners in making students able to access, assess, and synthesize information, and I think we’re going to be seeing academic institutions relying increasingly on librarians’ expertise with the increase in distance ed and MOOCs. The nature of distance ed will require effective online information instruction and coaching, and there’s one of our natural roles.
Thanks so much for writing, and hope you’ll continue to read and comment,
Cheryl