Inclusion and exclusion criteria are important to determine at the beginning of the study: they provide an organized structure for the work ahead, help to minimize bias in the selection of articles, and can be verified by readers who want to confirm that the authors adhered to the criteria they chose. There is also the problem of "publication bias": studies with positive results tend to be published more often than studies with negative results, and negative studies often end up in less prominent journals that are not properly indexed in major databases.

In our example we knew right away that we wanted to steer clear of articles about implementing an electronic reference service or establishing such a service. We also wanted to avoid reviews and book reviews, since they are not original studies. Some articles examined the demographics of their users, for example, how many female versus male patrons used the IM service, or their ages; we felt this was not relevant to the main focus of our study, user satisfaction. Finally, we knew we would not be able to read articles in languages other than English, so those were excluded as well. With the inclusion criteria we tried to define clear parameters that helped us identify the initial group of articles to examine, and in fact they helped a great deal.

For this part of the process a tool was needed. Different researchers approach this in different ways: some look for existing tools, while others devise their own questions better suited to their topics.

Each of the four sections contains five to eight questions. For example, the population section asks whether the study population is representative of all users, actual and eligible; whether the inclusion/exclusion criteria are clearly outlined; whether the sample size is adequate; whether the choice of population is free of bias; and so on. Answering these questions can be difficult: we spent a lot of time doing it, first on our own and then together, discussing the articles over and over.

6.
What is Evidence Based Librarianship (EBL)? History
- Gained traction in medical fields in the 1990s and spread to the social sciences after that
- Medical librarians were the first to bring this approach to LIS research
- Increasingly used in the social sciences and information/library science
Sources: Booth and Brice, ix.

7.
Don’t we ALREADY use “evidence”?
- Evidence is “out there, somewhere”
- Disparate locations: many different journals, many different researchers
- Evidence is not summarized, synthesized, or readily available
- No formal, systematized, concerted effort to determine whether there is a real pattern or just our general sense of things

10.
Systematic Reviews: When Are They Useful?
- Too much information in disparate sources
- Too little information, hard to find all of the research
- Help achieve consensus on debatable issues
- Plan for new research
- Provide teaching/learning materials

Research Questions
- Research question formulation
- Description of the parties involved in the studies (librarians and patrons, for example)
- What was being studied (effectiveness of instructional mode, for example)
- The outcomes and how they can be compared
- What data should be collected for this purpose (e.g., student surveys or pre/post tests)

16.
Our Research Questions
1. What is the level of satisfaction of patrons who utilize digital reference?
2. What are the measures researchers use to quantify user satisfaction and how do they compare?

21.
Working with Results
- 93 articles were selected based on the inclusion/exclusion criteria
- Full text was obtained and read by both authors independently to determine whether at least one variable pertaining to user satisfaction was present; the results were then compared

25.
Critical Appraisal Process
- 24 articles were subjected to critical appraisal
- Each question from Glynn’s tool was answered (yes, no, unclear, or N/A) and the results were calculated
- 12 research papers were selected and subjected to the systematic review
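As a rough illustration of how such checklist answers can be turned into a result, the sketch below scores an article as the fraction of “yes” answers among the applicable questions. The function name, the exclusion of “N/A” answers from the denominator, and the 75% validity cut-off are assumptions for illustration, not a description of the authors’ actual calculation.

```python
# Hypothetical sketch: scoring critical-appraisal checklist answers.
# Assumptions (not from the slides): "n/a" answers are excluded from
# the denominator, and a 75% "yes" rate is treated as the validity
# cut-off.

def appraisal_score(answers):
    """Return the fraction of 'yes' answers among applicable questions."""
    applicable = [a for a in answers if a != "n/a"]
    if not applicable:
        return 0.0
    return applicable.count("yes") / len(applicable)

article = ["yes", "yes", "no", "unclear", "n/a", "yes"]
score = appraisal_score(article)  # 3 "yes" out of 5 applicable = 0.6
print(f"{score:.0%} yes -> {'valid' if score >= 0.75 else 'not valid'}")
```

Running this on the sample answers prints "60% yes -> not valid", since only three of the five applicable questions were answered "yes".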

26.
Analysis (Findings of Review)
Settings and general characteristics:
- Multiple instruments in a single article
- 9 unique journals
- US based
Methods and timing of data collection:
- 7 paper surveys
- 3 pop-up surveys
- 3 transcript analyses

31.
Lessons Learned
Lessons about user satisfaction with electronic reference:
- Overall pattern of users being satisfied, regardless of methodology or questions asked
- Measurement of user satisfaction is contingent upon context
- Researchers most often try to connect user satisfaction to another variable; satisfaction was the sole focus of only one article

32.
Lessons Learned
Lessons about library research:
- The extensive amount of qualitative research makes performing systematic reviews challenging
- Inconsistency in the methodologies used in original research makes a systematic review challenging; meta-analysis is more often than not impossible
- Common pitfalls in LIS research affect the quality of the published articles

33.
Lessons Learned
Benefits of undertaking a systematic review:
- Sharpens literature-searching skills, benefiting both librarians and the patrons who need this kind of research
- The researcher gains the ability to critically appraise research
- The practice of librarianship is strengthened by basing decisions on a methodological assessment of evidence

34.
Systematic Reviews and EBL: Impact on the Profession
Formal gathering and synthesis of evidence may:
- Affirm our intuitive sense about the patterns in current research
- Refine, clarify, and enhance our understanding of a current problem in librarianship
- On occasion, provide surprising results!