This article was based on some work we did for this presentation at the Virtual Reference Summit in 2008. From the conclusion:

It is easy to let the technology be a barrier to teaching and learning. It is easy to assume, in the absence of visual cues, that patrons who come to us via virtual reference services are not interested in learning how to search for themselves. Facilitating exploratory search via virtual reference does not depend on new technology, it depends on policies, reference interview skills, and perhaps most important, attitudes that are geared towards looking for opportunities to put the patron in control of his or her learning. New technology features or tools might make this switch easier or more successful, but in the absence of an instruction-focused attitude there is no technology that will make instruction simpler, more effective, or more prevalent.

Open Access Mandate

In this case, the open access mandate didn’t really influence our behavior, but it probably pushed some things up a little higher on the priority list, and made it more important to follow up on things that we would have wanted to do anyway. It influenced the choice of venue – RSR is published by Emerald, a RoMEO green publisher. When we weren’t sure which version to archive, the mandate pushed us to communicate more actively with the journal editor for clarity.

“Every university or public library centers in its reference room. This is the room for the first task of research, the bringing of books together. Among the thousands of books on the shelves, perhaps a score are needed for the particular topic. Which are these? Perhaps half of them require but a few minutes and the others demand extensive reading. Which is which? The object of the reference room is to answer these questions so far as possible in advance. (emphasis added)….

Its function then is two-fold: first to facilitate that preliminary survey which gives general notions and boundaries and shows how a particular investigation might be limited; secondly to facilitate comparison on particular points.”

There’s something about spring term that’s always crazy. Last week was my last presentation obligation of the term – the WILU conference in Montreal. WILU is one of my favorite conferences, based on the one time I’ve been before, and luckily we presented on Tuesday, so I was able to enjoy most of it without imminent presentation pressure looming over my head.

Kate and I presented on some very early findings from a research project we have been working on for the last several months – examining stories that instruction librarians tell. I told Kate at the end that if I ended up blogging about this presentation at this early stage, it would be to write something up about how incredibly valuable it can be to present on research in the early stages, even in the very early stages.

Basically, the segment of the research that we presented on at WILU was drawn from an online survey where we asked instruction librarians to share some stories. Our interest is … epistemological. We were hoping to identify some themes that would suggest what we “know” as instruction librarians and professionals, as well as some ideas of what we talk about, worry about, and feel proud about when it comes to our professional practice. This work was primarily intended to inform another round of story-gathering, done as interviews, but we were also hoping that these preliminary results would be interesting on their own.

ETA: it was brought to my attention that some more information about the kinds of stories we gathered might be useful. This is the slide listing the prompts we used to elicit work stories. They’re adapted from a chapter in this book.

So beyond the obvious benefit of a deadline and potential audience forcing you into your data to figure out what it might say early on, presenting even those early findings was a really positive experience. For one, other people are as interested in the story method as we are, which is awesome. For another, a whole room full of other pairs of eyes is a fabulous thing. Kate and I started the conversations that started this project talking about this conversation between Kate and me and some others (and further framed into reflective practice talk by Kirsten at Into the Stacks), though I don’t think it has stayed there. There has definitely been research-question creep along the way.
We started the project thinking about theory/practice, as is obvious from the conversation linked above. And we made the connection to reflective practice based on that as well – based on the idea that scholarship represents another way of knowing what we know, and thinking about ways that scholarship can inform and push our reflections on practice. And we got a great question about whether it makes sense to conflate scholarship with theory in this context, especially when, as another commenter mentioned, much of the LIS literature isn’t clear about any theoretical frameworks the author used. A really useful question to think about, given that creep in the project’s scope – and also exactly the kind of question I can never answer on the spot.

Theory vs. practice is useful shorthand, especially in short sessions like these were. And I do think that including non-theory-generating scholarship in the initial conversations that sparked the project reflected some of the ambivalence we were seeing. As I said at the time, I really don’t think all of that ambivalence is tied up in “if the scholarship in librarianship was more useful, or more rigorous, or more scholarly, or better-written, or more theoretically grounded, I would totally use it.” I also think that Schon’s Reflective Practitioner allows these things to be discussed together as well, not because he conflates them, but because he sets the Reflective Practitioner in contrast to both the pure theorist and the applied scientific researcher:

As one would expect from the hierarchical model of professional knowledge, research is institutionally separate from practice, connected to it by carefully defined relationships of exchange. Researchers are supposed to provide the basic and applied science from which to derive techniques for diagnosing and solving the problems of practice. Practitioners are supposed to furnish researchers with problems for study and with tests of the utility of research results.

Schon argues that this hierarchical model of professional knowledge has dominated the way we understand, and teach, professional practice – and it is in both the development of grounding theory (basic, disciplinary knowledge) and the development of a body of rigorous, scientific applied knowledge for problem-solving that the practitioners, and the practitioner’s unique ways of knowing, are left out of the equation.

Which is a long way of saying that the initial connections we were making still have value for me mentally when thinking about these questions, but I’m not sure we want to stop there. All of this begs the question of whether thinking about these questions, and thinking about the stories, with a clearer distinction between theory and practice in mind might be more useful. I think maybe it would be. On the one hand, clarity is good, and a lack of clarity in prior discussions might actually suggest the need for more clarity all by itself. But the conversation brought a couple of additional thoughts to the forefront, neither of which was really clear until the mental presenting-dust settled.

Here and there along the way, I’ve been thinking about the real-world information literacy literature and its connection to this discussion. One reason to not discuss it in our 30 minutes was the fact that some of what I have read in that literature recently (as relates to real-world information literacy in professional contexts) examines the differences between the ideal knowing captured in our professional texts/ training/ theories and the real-world/ tacit/ experiential knowing that comes with actually dealing with the uncertainty of practice. The connections to our original questions probably seem clear, but I wasn’t comfortable calling the peer-reviewed literature our abstract, ideal text-based knowing in the same way as the firefighter’s manuals were understood in this article, for example.
Which on one level is part of the subject of our next steps with this project – figuring out what our abstract, ideal, text-based knowing IS in instruction librarianship. But on another level it points to the problems with conflating theory and scholarship – parsing them out more clearly, I think, would make the connections to this body of literature more useful.

Related to this comes the question of our training (or lack thereof) as instruction librarians, in LIS education and after that. Between us, we saw several sessions about professional development for new librarians, which dovetailed with conversations we’d had about the distinction between the stuff we read related to information literacy in grad school and most of the stuff in the literature today. Kate mentioned that the articles she read in library school instruction classes weren’t the articles about practice, but about theory. I didn’t take a specific instruction class, but I would say the same was true at my school, and was definitely true in the learning theory class that I took. I think following up on that question usefully will also require parsing that discussion more clearly.

So thanks to all of the people who participated in this great (for us) conversation, and we’ll be contacting people soon for the next round of work on the project.

Final lesson from WILU? I’m still useless when I try to speak from notes. Not necessarily the speaking part, though it is definitely not natural for me, but more the actually using the information in my notes part. I tried in this talk, not throughout but just in one moment at the end, and I still made a total mess of the process. I walk away from them, get lost, talk past where I am in the notes, and leave things out anyway. It’s weird that speaking from notes is as much a learned skill as speaking without them is, but it totally is. I think I blame high school debate, and I suspect it’s too late for me now.

This article is talking about sponsored trials – research that is sponsored by drug companies, that finds that the drug in question works:

Overall, studies funded by a company were four times more likely to have results favourable to the company than studies funded from other sources. In the case of the five studies that looked at economic evaluations, the results were favourable to the sponsoring company in every case. The evidence is strong that companies are getting the results they want, and this is especially worrisome because between two-thirds and three-quarters of the trials published in the major journals—Annals of Internal Medicine, JAMA, Lancet, and New England Journal of Medicine—are funded by the industry (Egger M, Bartlett C, Juni P. Are randomised controlled trials in the BMJ different? BMJ. 2001;323:1253.)

Which has been a topic of conversation for a while, but why stop there? If the drug companies can create a bunch of the research, why don’t they create the journals too? Just create a journal. Don’t pretend that it’s reporting knowledge for the public good, don’t make it so the public can even find it, don’t make it so the doctors can even find it – don’t index it in Medline, don’t even put a website up.

See, I talked briefly here a while back about my frustration with people like Andrew Keen and Michael Gorman when they accept uncritically the idea of traditional media gatekeepers serving a quality-control or talent-identifying role, without acknowledging that the corporate media makes many decisions that are not based on a mission of guaranteeing quality or identifying genius.

And Kate and I talk frequently about how traditional methods of scholarly publishing are not intended to guarantee quality in terms of identifying the best articles, or even the most true or accurate articles, but that those methods are instead intended to create a body of knowledge that supports further knowledge creation.

We’ve managed to fill presentations about peer review pretty easily without focusing on the corporatization of scholarly publishing — there’s a lot of discussion of this corporatization in open access conversations already, and a lot of confusion that comes up about the implications of open access for peer review. Sometimes it seems like every open access conversation in the broader higher education world gets bogged down by misunderstandings about peer review. So it has seemed true that drawing this artificial, but workable, line between what we are talking about and what we’re not just makes it easier to keep our focus on peer review itself.

But man – it might be just too artificial. Maybe we can’t talk about peer review at all anymore without talking about the future of a system of knowledge reporting that is almost entirely dependent upon the volunteer efforts of scholars and researchers, almost entirely dependent upon their professionalism and commitment to the quality of their disciplines, in a world where ultimate control is passing away from those scholars’ and researchers’ professional societies and into the hands of corporate entities whose decisions are driven not by commitment to quality, knowledge creation, or disciplinary integrity, but by the bottom line.

We’ve been focusing on “why pay attention to scholarly work and conversations going on on the participatory web” mostly in terms of how these things help us give our students access to scholarly material, how they help our students contextualize and understand scholarly debates, how they lay bare the processes of knowledge creation that lie under the surface of the perfect, final-product article you see in scholarly journals. And all of those things are important. But I think we’re going to have to add that “whistleblower” aspect — we need to pay attention to scholars on the participatory web so they can point out where the traditional processes are corrupt, and where the gatekeepers are making decisions that aren’t in the interests of the rest of us.

I think a lot about peer review, but it’s almost all about the journals side of things – the related-but-not-the-same issues of open access and peer review. And by that I mean what is called “editorial peer review,” to distinguish it from peer review in the grants/funding world – a kind of peer review that is probably much more important to a lot of people than the journals-specific kind.

But a couple of recent notes about the other kind of peer review jumped out at me and connected – what do these, taken together, suggest about how we – beyond higher ed, as a society and a culture – value knowledge creation? Or maybe what I mean is: what do they suggest about how we should value knowledge creation?

First, there’s this note today from Female Science Professor. She’s responding to an article in Slate, but it’s the piece she’s responding to that I am interested in here as well – the amount of time that faculty in different disciplines (and in different environments) spend writing proposals to get funding for their research. The Slate article includes a quote suggesting that med school faculty at Penn spend half their time writing grant proposals. That number has increased, it goes on to suggest, because of the effort to get in on stimulus funding.

The comments, with a few exceptions, suggest that the 50% number is not out of line in that environment.

(quoting the abstract of an article in Accountability in Research) Using Natural Science and Engineering Research Council Canada (NSERC) statistics, we show that the $40,000 (Canadian) cost of preparation for a grant application and rejection by peer review in 2007 exceeded that of giving every qualified investigator a direct baseline discovery grant of $30,000 (average grant).

Obviously, there are stark differences in scope and scale between these disciplines. Also obviously, the process of writing grant proposals isn’t entirely divorced from the goal of knowledge creation – the researcher undoubtedly benefits from going through the process, and the project benefits from the work done on the proposals – in some ways.

In other ways, they are undoubtedly a distraction, and the process becomes more about the process than about the knowledge creation. No solutions offered here, not even a coherent articulation of a problem – it just makes you wonder what it says about us when, within the knowledge creation process itself, the problems and issues of getting funding take precedence over those connected to the direct experience of creating new knowledge.

What is it about spring term that it always ends up being overloaded? Sometimes it is travel, and this term definitely has its share of that – informational visits to the University of Minnesota and the University of Wisconsin-Madison, WILU in Montreal, and an insane 30-hour total trip across 3 time zones for a college reunion. But unlike other travel-crazy terms, this time around it’s the presentations that have me feeling that “there’s always something more to be working on” feeling.

Then a few days later, Kate and I are going to do a version of the Peer Review 2.0 talk as a professional development workshop for community college libraries in Seattle.

Given budget realities for all of us, I would expect that this form of presentation and sharing will only become more important, so I am excited to try it out. But I’m also nervous. I’ve definitely sat in on online presentations where the content and/or presentation style didn’t translate very well. And it’s not something you can practice, or at least I haven’t figured out how; every practice feels even more artificial than practicing a traditional presentation in front of the mirror.

(Not that I’d know anything about that – I am a practice-while-driving type of presenter)

So yeah, if you’ve ever sat in on a really great webcast presentation, or a really bad one, I’d love to hear what works and what doesn’t.