The containers for scholarly information are evolving, and the impact is felt everywhere. The containers have to change, because the traditional publishing modes create too much friction. Traditional textbooks cost too much money, and don’t take advantage of adaptive learning. Scholarly publication cycles are too long and too slow for the pace of researchers. Formats imposed in the age of print have proved to be sub-optimal for the reproducibility of science. Intellectual property regimes too frequently inhibit what you can do with materials needed for teaching, and are too complex for everyday use. Key contributions to the advancement of knowledge (tenure-worthy contributions) are swimming happily outside the journal article and monograph pool. Discovery technologies have us thinking about the presentation of knowledge in new ways (data! simulations!), and learning science pushes that envelope further as we endeavor to share knowledge in efficient ways for learners at all levels.

A couple of years ago, remarks at a “libraries and MOOCs” event by Christian Terwiesch (Wharton School professor and co-director of the Mack Institute for Innovation Management, among other things) provoked my thinking about libraries and the information ecosystem. His comments, paraphrased years later (and apologies if I don’t get it just right), made three important points:

Teaching and learning requires an ecosystem of quality information

Academic libraries developed a good ecosystem for quality information in the world of print resources and face-to-face university teaching

Academic libraries need to be deeply involved in new ecosystem development to keep step with changes in higher ed

In this post, I’m focusing my attention on information and information services that support teaching & learning. (As opposed to thinking about research collections, preservation or any of the other aspects of the ecosystem.) And I’m thinking about MOOCs and new business models for global education.

If we think inside the box of current constraints and traditional activities, when we approach new forms of teaching like MOOCs, we come up with services like checking and clearing copyrights, or the probably unsustainable activity of faculty or librarians searching for open educational resources (OER) that match the MOOC needs. Why do we approach it this way? Because the traditional ecosystem around a course involved the set reading list.

How do you get yourself outside the box, outside the frame of how we did/do things in a world of mostly local teaching and locally owned print materials?

I got some help thinking about this in a recent webinar on personalized education, particularly from the presentation by Drew Paulin, Manager, Learning Design and Innovation, Sauder School of Business, University of British Columbia. I’m not going to re-create his very insightful remarks – instead, drawing heavily on his remarks, this is my take on how they apply to the problem space I’m exploring.

The problem space looks like this: In the F2F/print resources world, we have been used to Fixed Content – the reading list, the reserves, the recommended readings. We optimized for the F2F/print context – fixed content results in high quality resources, with high relevance and applicability to the course.

There were no real down sides (or so we thought). Access issues didn’t arise except in the single book–multiple readers situation, which we solved by putting materials on reserve.

But once we leave the local campus and go into a more porous education system (MOOCs, lifelong learning, etc) we see the disadvantages of the Fixed Content paradigm.

In open education environments, you immediately confront what I call the student variability problem (wide differences in prior knowledge and skill levels among students). Paulin and others also refer to the anomalous state of knowledge (you don’t know what you don’t know). Both matter.

The student variability problem also existed in F2F classrooms–and it was exacerbated by fixed content (some materials pitched at a level too difficult for some students). Coping with variable skills often involved: narrow university admissions for certain types of students, pre-requisites for certain courses, faculty office hours for struggling students, and TA-led review sessions. And, we accepted that some students wouldn’t learn as much.

But the solutions of the F2F world for student variability don’t work so well for the more porous education models that will emerge with MOOC platforms and changed business models. The way Paulin put it, you have to get the resources to perform for you. He used terms I didn’t fully grok, but the ideas I came away with include getting the resources to solve problems via:

personalization for prior knowledge and skill levels, and

customization for student goals (familiarity? mastery? certificate?).

In the case of customization, we recognize that people need to learn for different kinds of reasons — a researcher who will perform statistical research needs different levels of mastery than an administrator who wants better skills to evaluate research and be more evidence-based in practice.

Maybe the highly fenced-in, pre-selected single set of resources is not going to be the answer for the course of the future…

If we turn to OER for the answer, we find lots of good things. The access issue (openly available stuff, not just locally available for some tiny set of students) is addressed. And there are abundant resources available via OER that can solve the personalization issue (what I can understand based on my prior knowledge) and even to some extent the customization issue (fits my goals) — but!

OER exist as a giant mosh pit and it’s hard to find stuff. How do I, a student, go after materials appropriate to my prior skills and knowledge, and my learning goals? What should I read first? We lose the pre-selection for high quality and applicability. We lose efficiency for the student. There’s going to be a lot of labor involved, it seems, to get the student matched up with the right resources in order to efficiently learn the course materials. Students themselves are not good at finding appropriate materials much of the time because of the anomalous knowledge problem – they don’t know what they don’t know, so they don’t know what to look for. If we think we’ll need experts to find materials to address personalization and customization, that seems like an overwhelming amount of expert intervention when we are talking about thousands of students. But if we don’t take up the particular advantages of OER — for personalized content, for customized content — then we are simply using the old paradigm of fixed content to teach to the middle. In a new environment that has new affordances, surely we can do better than tread water?!

The Fixed Content approach has its advantages and disadvantages; the OER mosh pit also has pros and cons. What’s the new optimal ecosystem?

How should we solve these problems:

efficiency – getting users to the right materials (for them!) quickly

high quality resources

high applicability resources, just the right resources for the course topics

access – stuff people are allowed to use or can access based on their affiliations and location

personalization – aimed at my prior knowledge and skills, helps me forward from where I am

customization – is geared toward my own goals for learning

Thinking about the problem space in this way gets me to imagine how linked data and computational methods (rather than unscalable human interventions) can bring the right materials to users. Metadata, crowdsourcing, linked data, recommendation systems and feedback loops, connecting user profiles (“I live in Bangalore”) with access metadata, and so many other options are available to us. If users contributed information about themselves and their affiliations, it could be linked to data already existing in global library databases, and layered with ways to improve the accuracy of initial recommendations.
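As a toy sketch of that idea, here is one way resource metadata and a learner profile could drive a first-pass recommendation. Everything below is invented for illustration (the titles, the numeric skill levels, the goal labels, and the ranking rule); a real system would draw on the linked data sources described above rather than a hand-typed list.

```python
# Hypothetical OER metadata: each item tagged with a difficulty level
# (1 = introductory, 3 = advanced) and the learning goal it best serves.
resources = [
    {"title": "Intro to Statistics", "level": 1, "goal": "familiarity"},
    {"title": "Regression in Depth", "level": 3, "goal": "mastery"},
    {"title": "Reading Research Critically", "level": 2, "goal": "familiarity"},
]

def recommend(resources, learner_level, learner_goal):
    """Rank resources: matching goal first, then closest to the learner's level."""
    return sorted(
        resources,
        key=lambda r: (r["goal"] != learner_goal, abs(r["level"] - learner_level)),
    )

# An administrator-type learner: moderate prior skill, aiming for familiarity.
picks = recommend(resources, learner_level=2, learner_goal="familiarity")
print(picks[0]["title"])  # prints "Reading Research Critically"
```

The interesting design question is exactly the one the post raises: where do `level` and `goal` come from at scale? Crowdsourced tags, publisher metadata, and feedback loops would all feed that ranking key.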

Even though I’m not directly connected with life sciences, I pored over a recent article in PLoS * while I reflected on that question. The authors surveyed life science researchers to determine the kinds of influences that promote or constrain data sharing, and how life science researchers themselves perceive influences on their own data sharing practices.

I found this comparison interesting — 65% of survey respondents overall said NIH policies positively influenced sharing, and 39% said they were positively influenced to share data as a result of formal instruction. You would expect the NIH influence to be high — after all, that’s how researchers get funded, and these national policies are widely discussed. For that reason I would expect the formal instruction percentage to be lower in comparison with the NIH figure. In fact, that formal instruction has a positive effect on data sharing 39% of the time surprised me – it seems like a relatively strong positive response and makes me wonder what is going on in that formal instruction. And, given that formal instruction is hard to scale, I wonder where the researchers who engage in it are getting it. Some comes from libraries, I know, but where else are researchers going for the info they need? How does library-based instruction compare with the other instruction researchers are getting (at conferences? in discipline-based programs?), and is there room for more coordination and collaboration? What do researchers most need to know in making decisions about data sharing? And for the 57% for whom formal instruction did not influence data sharing, or the 4% who reported it had a negative influence on their data sharing, why was that?

Part of the answer to formal training having a negative effect is described in the study — there are institution-based technology and material transfer agreements that impede willingness to share on many levels. Formal instruction may be informing researchers of requirements that seem onerous.

Life science researchers, like anybody else I suppose, base their communication practices — in this case data sharing — on social values that predicate sharing on mutual behavior. If a scientist doesn’t share their information with others or seems excessively self-interested, in return colleagues will refrain from sharing information with that person. In addition, there are the usual tradeoffs about protecting your own career status by withholding data until you can reap expected benefits from your research.

Other key factors in sharing data are well discussed in the article – including the infrastructure available via open data repositories, the bureaucratic costs of complying with policies and guidelines, the low level of consequences for non-compliance in various ways, and so on.

The “getting scooped” problem is one I’m fairly sure libraries can play a role in, via researcher networks and institutional repositories that can help researchers publicize and report on their research as a work in progress, thereby kind of staking a claim and relying on social norms to keep others from free riding. No one really wants to spend their dissertation time doing research someone else is already doing, so openness platforms designed to help prevent duplication of effort and prevent scooping/free riding would be a good service to researchers. Another fruitful area would seem to be in the design of repositories so that metadata schemes can be extended as data sets are re-purposed. And of course, developing tools like the DMP Tool, which help provide researchers with information they need to do their work and reduce the bureaucratic cost of compliance, is another kind of information service libraries can provide.

I’ve been involved at various times in helping to build an organizational “culture of assessment” so I was interested to read Wendy Weiner’s take on what is involved. She writes from the standpoint of the whole university.

The fifteen elements needed to achieve a culture of assessment are the following: clear general education goals, common use of assessment-related terms, faculty ownership of assessment programs, ongoing professional development, administrative encouragement of assessment, practical assessment plans, systematic assessment, the setting of student learning outcomes for all courses and programs, comprehensive program review, assessment of co-curricular activities, assessment of overall institutional effectiveness, informational forums about assessment, inclusion of assessment in plans and budgets, celebration of successes, and, finally, responsiveness to proposals for new endeavors related to assessment.

On the way to being fully data-driven, I think it is important for a culture of assessment to know what to do with feedback – anecdotal comments and stories that involve users. How to systematically and appropriately use feedback (a kind of qualitative data) along with other qualitative and quantitative data seems like a very good organizational effectiveness skill. It’s a bit like critically evaluating a resource — given the way the feedback is gathered, that tells you something about how to use it appropriately.

And finally, for building a culture of assessment, I would re-state the professional development element as “planned professional development” that has its own set of goals and assessment attached to it.

I can’t get that Jimmy Buffett song outta my head, and that seems like a harsh punishment for walking past Margaritaville to the CVS for some yogurt and bottled water….

Yep, there I was with my vendor tote bag and my “fit over” sunglasses, rubbing elbows with all the glitzy people who came to Las Vegas for a really different experience!

But ALA has always been a great place to run into former colleagues and catch up. (These guys had never heard of the Library of Congress…my dinner pal was astonished but hey, they were centuries too early, no?)

ARL Liaison Programs Discussion

One of the prompts for me to attend ALA this year was the opportunity to participate in an ARL-sponsored discussion about liaison programs. The discussion was started at ALA-midwinter in Philly, and continued in Las Vegas.

The facilitators asked the participants–all liaison supervisors–to discuss two questions.

What can I do to improve liaison work at my institution?

What can ARL do to support the development of liaison programs?

We got put into the right frame of mind for the discussion by a presentation from 3 subject liaisons who discussed some of the challenges they face. The liaisons didn’t agree on everything, but points were made about how collections work (especially tasks that should be automated) can trump liaison engagement work in the eyes of supervisors or library leadership. They also mentioned how often incentives for engagement work are lacking, especially at institutions or in the professional marketplace where published articles still earn the most reputation for librarians trying to build their careers. Another theme was how many low-impact or administrative tasks are still part of the job (or have become part of the job) of public service librarians, fragmenting their days into an hour here or half an hour there. Outreach has a positive connotation, but outreach that is low-level “infotainment,” or participating in this and that to show how friendly the library is, got mentioned as a low-impact but time-consuming expectation placed on liaison librarians by their managers. These were vivacious, thoughtful, creative and hard-working folks – they wanted structural change from their institutions to do their liaison work better. It was the perfect kind of thought-provoking event, where you return home and more fully answer some questions for yourself. In the next two weeks, I will be thinking about these questions:

What is the vision, what is the preferred mission statement, for the liaison program we want? How do we arrive at that vision institutionally?

What are the challenges in achieving that vision? Although some of the info we need has to do with challenges the liaisons face now, that’s only part of the picture.

What are the propitious conditions that allow great liaison work to flourish? (Positive inquiry.)

What are ways to move the dial, to push the organization in the direction of liaison work that has the kind of impact we defined in our vision?

It was broadly agreed that given the way liaisons generally work, not enough is known institutionally about how effort is being expended, how resources (time, attention, expertise) are allocated. If we want to move the dial in a certain direction, we need to understand our starting place. In my breakout group, I talked about some kind of dashboard so we can see what is happening and adjust (as opposed to an end-of-year report, which becomes the occasion for praise or criticism, after the fact, in the annual review). In fact, my suggestion was more along the lines of a heat map, because all disciplines are not the same. It may be appropriate for the history librarian to be doing lots and lots of instruction, but the philosophy liaison to be doing very different things. Each liaison might generate “hot spots” in their activity dashboards in different areas.
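The heat-map idea can be sketched as a simple roll-up of an activity log by liaison and activity type. The liaison names, categories, and counts below are all invented for illustration; a real dashboard would sit on top of whatever activity-tracking data the library actually collects.

```python
from collections import defaultdict

# Hypothetical activity log: (liaison, activity_type) pairs as they happen.
activity_log = [
    ("history", "instruction"), ("history", "instruction"),
    ("history", "consultation"),
    ("philosophy", "collection_review"), ("philosophy", "consultation"),
]

# Roll up into a nested count: heatmap[liaison][activity] -> tally.
heatmap = defaultdict(lambda: defaultdict(int))
for liaison, activity in activity_log:
    heatmap[liaison][activity] += 1

print(heatmap["history"]["instruction"])  # prints 2
```

Each liaison’s row of tallies is their “hot spots”: the history librarian’s instruction count can run hot while the philosophy liaison’s heat concentrates elsewhere, and neither pattern is wrong on its face.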

Supporting Globalization at Your Institution – Discussion Group – heads of public services

This was too rapid-fire for me to take notes. The speakers were from NYU and UIUC. Globalizing the university seems to be a priority for universities everywhere, and these libraries were committing attention and thought to supporting that university priority. The takeaway for me was that they had both approached this kind of support systematically and proactively. These campuses had very different landscapes in terms of globalization issues, and in neither case was there a neatly tied up, unified approach at the university level. Rather, there were a lot of stakeholders and a lot of different experiments and programs being launched. (Very similar to the data visualization situation, see that post below.) So the first step was getting a good picture of the landscape. Who’s playing, what are their goals, what are they doing? All those questions help the library see where best to contribute. What made me happy was that not once did they mention making a research guide or a list of useful resources or anything like that. They weren’t trying to bolt something on to the outside of the effort, they were looking for ways to facilitate the actual globalizing work of the campus community.

Nicole Vasilevsky is a great speaker (Oregon Health & Science University). By the third day of a conference, when you’ve gone madly from session to session, her presentation was a beautiful moment of intellectual repose, because you never had to struggle to understand her point. Ahhhh.

My notes will not do her justice, but plenty of what she said resonated with things I learned about at the 2013 Research Data Alliance gathering.

She and her group had some research questions:

How to make science more reproducible?

How can we educate researchers so that their data will be more reusable and reproducible?

How can we use data to generate new hypotheses and make new connections?

Q. #1

Nicole explained that reproducible science involves providing good metadata about resources used in your lab experiments. Her analogy was cooking – you might copy the recipe of a famous chef, but if your ingredients weren’t of the same quality as the chef’s, your results may vary… Verifying results in science means using the same exact resources (antibodies, model organisms, etc.)

Another issue is the methodology for an experiment. In many scientific journals there are length restrictions on this part of an article, so even if a researcher intends to fully describe the methodology, they may be prevented by publishing practices.

She suggested we take a look at the comments at this twitter hashtag:

#overlyhonestmethods

In their study, they took journal articles from the biomedical literature, across several domains, looking at 200 papers from journals with various impact factors. Across all those articles, only about 50% of resources (antibodies, cell lines, organisms, knockdown reagents, etc.) were identifiable, even when journal guidelines had stringent requirements for including this information — which suggests the guidelines weren’t being enforced.

Evidently they looked at lab notebooks too, which are often meticulous. Where labs are doing a good job tracking the info, they aren’t getting that info into the publications (vendors, catalog numbers, stable unique identifiers, etc).

Tools to help researchers are emerging. Unique identifiers for resources are available in some places – e.g., biosharing.org. But in experiments, resources could also be software and tools. There needs to be more registry-like oversight of the identifiers and controlled vocabularies that are needed.

So, one of the projects her group is now working on is the Resource Identification Initiative, promoting unique RRIDs (Research Resource IDs). Along the lines of Force11 principles, RRIDs should be machine readable, free to generate and access, and used consistently across publishers and journals. To aid in discovery, RRIDs should be used in methods sections and as keywords in published articles. Even though this is a very recent project, RRIDs are getting used. Where they are being used, they are correctly used about 90% of the time.
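As a small illustration of why machine-readable identifiers matter, here is a sketch that pulls RRIDs out of a methods section with a regular expression. The pattern is an assumption based on published examples like “RRID:AB_2313773”; real RRID tooling may accept more variants than this.

```python
import re

# Assumed RRID shape: "RRID:" followed by an uppercase prefix, an
# underscore, and an alphanumeric accession (e.g. RRID:AB_2313773).
RRID_PATTERN = re.compile(r"RRID:\s?([A-Z]+_[A-Za-z0-9]+)")

def extract_rrids(text):
    """Return the unique resource IDs cited in a block of text, in order."""
    seen = []
    for match in RRID_PATTERN.finditer(text):
        rrid = match.group(1)
        if rrid not in seen:
            seen.append(rrid)
    return seen

# Invented example sentence in the style of a methods section.
methods = ("Cells were stained with anti-GFAP (RRID:AB_2313773) and "
           "imaged in ImageJ (RRID:SCR_003070).")
print(extract_rrids(methods))  # prints ['AB_2313773', 'SCR_003070']
```

The point of the sketch is the contrast with the study’s finding above: when a resource is cited as free text (“a rabbit anti-GFAP antibody”), no script can recover it; with a consistent identifier, a one-line pattern can.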

Q. #2

In their effort to help educate researchers, her boss entered a contest called the one-page challenge: What would you do with $1000 in order to…..

They won, and used the money to fund a Data Management Happy Hour to advertise their workshops and consultations and other services, and to talk with researchers about their data. It seems like part of the reason it was a success (besides the wine) had to do with being very open about how everyone is learning how to do this better. They had a giveaway where people shared badly managed data sets or visualizations, got people laughing, and used the mistakes to make points about better practices and establish themselves as useful consultants with relevant library services.

They also had a data wrangling open house for grad students, who are less immediately concerned with the use and re-use of data or reproducible science — they are really focused on getting through school and graduating. In order to do that, they need to be efficient and avoid mistakes in their data management practices, so Nicole and her colleagues involved grad students in organizing and promoting a data wrangling workshop.

Q. #3

Making new connections via data was the third part of the presentation. I learned, finally, the difference between an ontology and a controlled vocabulary — it’s not complicated, it just requires a clear explainer. (A controlled vocabulary is an agreed-upon list of terms; an ontology goes further and formally defines the relationships among those terms.) CTSA Connect is the project Nicole reviewed as making the connections alluded to in her third question. CTSAconnect is explained on their own website, and it sounds like VIVO:

CTSAconnect aims to integrate information about research activities, clinical activities, and scientific resources by creating a semantic framework that will facilitate the production and consumption of Linked Open Data about investigators, physicians, biomedical research resources, services, and clinical activities. The goal is to enable software to consume data from multiple sources and allow the broadest possible representation of researchers’ and clinicians’ activities and research products. Current research tracking and networking systems rely largely on publications, but clinical encounters, reagents, techniques, specimens, model organisms, etc., are equally valuable for representing expertise. http://www.ctsaconnect.org/

Nicole and others have been working on the VIVO Integrated Semantic Framework (VIVO-ISF) ontology suite. The general idea as I understand it is to have a semantic framework for describing relationships among all the entities that are interesting to researchers trying to stay up-to-date in their fields. So there needs to be an ontology for resources as well as an ontology for people – a framework for revealing the relationships that are important about these kinds of entities.
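As a toy illustration of that semantic-framework idea (this is not VIVO-ISF itself; all the names and predicates below are invented), relationships among researchers and resources can be represented as subject-predicate-object triples and then queried:

```python
# Invented triples linking a researcher to resources and expertise.
triples = [
    ("ex:drSmith", "rdf:type", "ex:Researcher"),
    ("ex:drSmith", "ex:usedResource", "ex:antibody_AB123"),
    ("ex:antibody_AB123", "rdf:type", "ex:Antibody"),
    ("ex:drSmith", "ex:hasExpertiseIn", "ex:Immunology"),
]

def objects_of(triples, subject, predicate):
    """All objects linked to a subject by a given predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# "Which resources has this researcher used?" falls out of the same
# structure as "what is their expertise?" -- just a different predicate.
print(objects_of(triples, "ex:drSmith", "ex:usedResource"))
# prints ['ex:antibody_AB123']
```

That is the appeal of Linked Open Data for the expertise problem the post describes: clinical encounters, reagents, and specimens become first-class nodes, so expertise can be inferred from more than publication lists.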