I’m three months into my first year as an academic librarian, and it has been a whirlwind. Conversations with many of my LIS friends confirm that the transition to professional librarianship presents invigorating ups as well as exhausting downs. Something I have been trying to focus on is embracing the ups and moving quickly and gracefully past the downs (with a little reflection). In that spirit, I’d like to share the best “up” I’ve found so far as an Information Literacy Librarian.

If you have the opportunity, use your personal experience in the classroom. I know that this is incredibly scary. Being vulnerable as a (new!) instructor is terrifying, and balancing vulnerability with expertise can be a challenge. Yet Maria Accardi recently gave a brilliant keynote on library burnout in which she argued, “I think to truly see each other, to respect and care for the souls of students, means aligning the emotionally vulnerable parts of your self to the corresponding parts of the student” (p. 13). Moments of vulnerability in the classroom, while intimidating, can foster unbelievably rich and meaningful dialogue. Students have even approached me after class to ask about a specific part of the testimony I shared, which has led to subsequent conversations about their own research. I’m still working out exactly why this happens, but a recent Twitter conversation sparked some ideas.

I so appreciate April’s observation that sharing personal experience creates a stronger connection between experience and learning. Accardi adds that students are whole people in the classroom and that they “bring with them all of the things that make them human—their stories, their beliefs, their filters, their talents, their challenges, their emotional baggage, everything” (p. 12). Why can’t librarians be whole people too? Why can’t we bring the same baggage into the classroom? And doesn’t being “whole” make us more approachable? Doesn’t it make research more approachable?

I believe that it does. So how do you even start to integrate more personal experience into your teaching? Many of the tactics I have tried stem from an intensive research project I’m currently doing: I’m completing my first peer-reviewed article for In the Library with the Lead Pipe, and I have found that this provides rich testimony for many different research issues.

For example, I recently asked students to articulate what their research process looks like. They spent a few minutes drawing their process, from the time a research project is assigned to the time they turn it in. We then tried to combine their ideas into one complex research process on the board. Because I was going through my own research process at the time, I used this opportunity to challenge them with trials I had faced. I asked the students questions like “but what happens if you’re tracking down citations and you suddenly realize someone has already written the paper you’re writing?” and “how is research continually part of the writing process?,” often providing tangible examples from my article along the way. Before we knew it, the board was covered in arrows, illustrating the iteration necessary to do quality research. After the class, the professor came to my office to thank me. She said she thought the activity might have been the first time her students had had to articulate exactly what their process looks like, and that it would definitely help them become more thoughtful researchers. I also believe that it made iteration and revision “okay” and maybe even reduced some library anxiety.

I have also used my experience with Lead Pipe to facilitate conversations about how peer review works; about blind, open, and more collaborative forms of peer review; and about the time it takes to complete vetting processes. This often sparks a more thoughtful and nuanced conversation about the pros and cons of peer review, moving students away from the peer-reviewed-equals-good, popular-sources-equals-bad framing.

I have also plugged our citation management system, Zotero, into these conversations. I have a single-spaced, twenty-five-page document of notes and draft citations for my article (no, this is, unfortunately, not a joke). I might risk compromising my “expertise” with students by sharing this fact and letting them know that I wish I had used Zotero at the beginning of my project. Again, it is definitely nerve-wracking to be vulnerable in this moment. But I think it makes me more human and illustrates to students that research is a continual learning process, even for librarians.

Sharing your experience can be as simple as sharing tidbits about how you approach research. How do you figure out what the scholarly conversation is? What tools do you use to start your research? Do these change after you know the important scholars or disciplines for your topic? For example, I often share that one of my favorite ways of entering the scholarly conversation is by reading more about my general topic area and then finding claims I’d like to challenge or push back on and doing citation tracking from there. You can even reflect on the research you did in undergrad or graduate school. How did you use class readings to guide your thesis development? How did you organize your research? The point is not to show that you’re perfect. The point is to show that imperfect research can be successful too and that librarians can help guide students through this process because we’ve been there.

This work is not always easy. I have definitely noticed that sharing personal experience in the classroom can be harder or easier depending on class dynamics, faculty involvement, or even student level. The reality is that it is difficult to build trust in the classroom when the space sometimes doesn’t even feel like your own. I hope to keep brainstorming how sharing personal experience can go beyond the one-shot session; for example, I am currently thinking through how I might use some of this testimony in my research consultations with students.

There are generally two types of research in the LIS field. One, the rarer of the two, is capital-R Research: typically evidence- or theory-based and generalizable. The other, more prevalent, is lowercase-r research: typically anecdotal, immediate, and written in the style of “how we did it good.” The latter has historically been a defining quality of LIS research and receives much criticism, but because librarianship is a professional field, both theory and practice require documentation. Gorman (2004) notes how value and need have contributed to a mismatch in what is published, “[leading to] a gap in the library journal literature between arid and inaccessible reports of pure research and naive ‘how we did it good’ reports.” These concerns have implications both within and outside the field: first, those within the field place less value on LIS research and may have lower confidence and higher anxiety when it comes to publishing; second, those outside the field may take LIS research, and librarians, less seriously as we work to attain greater equality with faculty on campus. Understanding these implications, and how human subjects research and the Institutional Review Board (IRB) fit into social sciences research, can help frame our own perceptions of what we do in LIS research.

What is the IRB? IRB regulations developed in the wake of the revelation of Nazi experimentation on humans during WWII, as well as the U.S. government’s infamous Tuskegee study, in which black men with syphilis were allowed to go untreated so that researchers could examine the progression of the disease. All U.S. academic and research institutions that receive federal funding for research must convene an IRB to review and monitor research on human subjects and ensure that it remains ethical, with no undue risk to participants. There are three levels of IRB review: exempt, expedited, and full. A project is assigned its level based on the amount of risk to the subject and the types of data collected (informational, biological, etc.) (Smale 2010). For example, a project that requires drawing blood from participants under 18 would probably receive a full review, while an anonymous online survey asking adults about their preferences for mobile communication devices would likely be exempt. It’s worth noting that many of the guidelines for IRB review are more relevant to biomedical and behavioral science research than to humanities and social science research (for more discussion of these issues, see George Mason University history professor Zachary Schrag’s fascinating Institutional Review Blog).

Practically speaking, what is the process of going through IRB approval like for LIS researchers? We’ve both been through the process — here’s what we’ve learned.

Maura’s Experience

I’ve gone through IRB approval for three projects during my time as a library faculty member at New York City College of Technology, City University of New York (CUNY). My first experience was the most complex of the three: my research partner and I sought IRB approval for a multiyear study of the scholarly habits of undergraduates. Our project involved interviews with students and faculty at six CUNY campuses about how students do their academic work, all of which were recorded and transcribed. We also asked students to photograph and draw objects, locations, and processes related to their academic work. While we did collect personal information from our participants, we are committed to keeping them anonymous, and the risk involved for participants in our study was deemed low. The IRB classified our research as expedited, which required an application for continuing review in each year that we were actively collecting data. Once we finished the interviews and moved to analysis (and writing) only, we were able to secure an exempt approval, which lasts for three years before it must be renewed.

The other two projects I’ve sought IRB approval for — one a solo project and one with a colleague — were both survey-based. One involved a web-based survey of members of a university committee my colleague and I co-chaired; the other, a paper survey of students in several English classes in which I’d used a game for library instruction. Participation in the surveys was voluntary and respondents were anonymous. Both surveys were classified exempt by the IRB: the information we collected in both cases was participants’ opinions, and each study was found to pose little risk.

Comparing my experiences with IRB approval to those I’ve heard about at other colleges and universities, my impression is that my university’s approach to the IRB requirement is fairly strict. It seems that any study or project undertaken with the intent to publish is considered capital-R Research, and that the process of publishing the work confers on it the status of generalizable knowledge. Last year a few colleagues and I met with the chair of the college’s IRB committee to seek clarification, and we learned that interviews and surveys of library patrons conducted solely for program improvement do not require IRB approval, as they’re not considered to produce generalizable knowledge. However, the IRB committee frowns on requests for retroactive approval, which could put us in a bind if we ever decide that the results of a program improvement initiative are worth publishing.

Nicole’s Experience

At the University of Arizona (UA), I am in the process of researching the impact of digital badges on student motivation for learning information literacy skills in a one-credit course offered by the library. I detailed our most recent meeting with our IRB representative on my blog: after officially filing for IRB approval and months of back-and-forth, it was clarified that we did not actually need IRB approval in the first place. As mentioned above, each institution’s IRB policies and procedures are different. According to the acting director of the UA’s IRB office, our university is on the more progressive end of interpreting research and its federal definition. Previous directors were more in line with the rest of the country in being very strict: if a researcher was so much as talking with a student, IRB approval should be obtained. Because the office is constantly inundated with research studies, a majority of which would be considered exempt or even little-r research, it is a misuse of its time to oversee studies that pose essentially no risk. A burgeoning trend is to develop a board composed of representatives from different departments to oversee their own exempt studies; when the acting director met with library faculty recently, she suggested we nominate two librarians to serve on this board so that we would have jurisdiction over our own exempt research, to the benefit of all parties.

Initially, because my research study would examine student success in the course through grades and assessments, as well as students’ own evaluation of their motivation and achievement, we had understood that to publish these findings we would need IRB approval, since we are working with human subjects. Our IRB application was approved and our study was classified as exempt, meaning it is so low-risk that it requires very little oversight. All we would need to do is follow guidelines for students to opt in to our study (not opt out), obtain consent to look at FERPA-related and personally identifiable information, and update the Board if we modify any research instruments (surveys, assessments, communications to students about the study). We found out, however, that we did not actually need to apply for IRB approval in the first place, because we are not setting out to produce generalizable knowledge. This is where “research” and “Research” come into play. We are in fact doing “research”: studying our own program (our class) for program evaluation. Because we are not claiming that our findings apply to, say, all information literacy courses across the country, we are not producing generalizable “Research.” As our rep clarified, this does not mean our research is not real; it just means that, according to the federal definition (which governs all Institutional Review Boards), it is not within their jurisdiction. Another way to look at this is to ask whether the research is replicable: because our study is specific to the UA and this particular course, another librarian at another university who attempted to replicate it would not be guaranteed the same results.

With our revised status we can go more in depth in our study and do better research. What does “better” mean, though? In this sense, it means contending with fewer restrictions when looking for trends. Because we are doing program evaluation in our own class, we don’t need to anonymize data, request opt-ins, or submit revised research instruments for approval before proceeding; the intent of the research is to improve and evaluate the course (which in turn improves the institution). Essentially, according to our rep, we can do whatever we want however we want, so long as it’s ethical. Although we would not be claiming our research is generalizable, readers of any published results could still consider how the information might apply to them. The research might have implications for others’ work, but because it is so specific, it doesn’t provide replicable data that cuts across the board.

LIS Research: Revisiting Our Role

As both of our experiences suggest, the IRB requirement for human subjects research can be far from straightforward. Before the review process even begins, most institutions require researchers to complete a training course that can take as long as ten hours. Add in the complexity of the IRB application and the length of time approval can take (especially when revisions are needed), and many librarians may hesitate to engage in research involving human subjects because they are reluctant to go through the IRB process. Conversely, librarians might be overzealous in applying for IRB approval when it is not needed at all. Given the perceived lower respect that comes with publishing program evaluation or research skewed toward anecdotal evidence, LIS researchers might attempt big-R Research when it does not fit the data they are actually assessing.

What implications can this have for librarians, particularly those on the tenure track? The expectation in LIS is to move away from little-r research and be on the same level as other faculty on campus engaging in big-R Research, but this might not be possible. If other IRB offices follow the trend of the more progressive UA, many more departments (not just the library) may not need IRB oversight, or will oversee themselves via a campus-based board that reviews exempt studies. As the acting IRB director at the UA pointed out to library faculty, publication should not be the criterion for assuming generalizability and seeking IRB approval; intent should be: what are you trying to learn or prove? If it’s to compare your program with others, suggest improvements across the board, or make broad statements, then yes, your study would be generalizable and replicable, and it is considered human subjects research. If, on the other hand, you are improving your own library services or evaluating a library-based credit course, the results are local to your institution and will vary if replicated. Just because a study does not need IRB approval does not mean it is any less important; it simply does not fall under the federal definition of research. Evidence-based research should be the goal, rather than only striving for research generalizable to all, and anecdotal research has its place in exploring new ideas and experimental processes. Perhaps instead of focusing on anxiety over how our research is classified, we need to re-evaluate our understanding of the IRB and our profession’s overall self-confidence in our role as researchers.

TL;DR — The Pros and Cons of IRB for Library Research

Pros: allows researchers to make generalizable statements about their findings; covers your bases if you move from program evaluation to generalizable research at a later stage; there seems to be more prestige in engaging in big-R Research; journals might prefer big-R Research and could press researchers for generalizable findings

Cons: limits researchers’ ability to drill down into data without written consent from all subjects involved (which can be difficult with an opt-in procedure in a class); training and paperwork required to obtain approval can be extremely time-intensive; researchers must regularly update the IRB with any modifications to research design or measurement instruments