Lib-Value is a three-year study, funded by a grant from IMLS, with the ultimate goal of understanding how to measure value and return on investment (ROI) in all aspects of academic libraries. The project is a collaborative effort of ARL with the University of Tennessee, the University of Illinois at Urbana-Champaign, and George Washington University.

This workshop is designed to provide potential and current participants with vital information on the LibQUAL+® service. The one-hour webcast provides practical information on administering a survey, helps participants interpret and analyze the data, and shares best practices in using the results.

Key members of the LibQUAL+® team, Martha Kyrillidou and David Green, hosted the webcast. Our guest presenters were:
- Sandra Phoenix, Executive Director of the HBCU Library Alliance,
- Carla Stoffle, Dean, University Libraries and Center for Creative Photography, University of Arizona, and
- Chestalene Pintozzi, Director of Project Management & Assessment, University of Arizona

Evaluation is a profession composed of persons with varying interests, potentially encompassing but not limited to the evaluation of programs, products, personnel, policy, performance, proposals, technology, research, theory, and even of evaluation itself. These principles are broadly intended to cover all kinds of evaluation. For external evaluations of public programs, they nearly always apply. However, it is impossible to write guiding principles that neatly fit every context in which evaluators work, and some evaluators will work in contexts in which a guideline cannot be followed. The Guiding Principles are not intended to constrain such evaluators when this is the case. However, such exceptions should be made for good reason (e.g., legal prohibitions against releasing information to stakeholders), and evaluators who find themselves in such contexts should consult colleagues about how to proceed.

The SUSHI Reports Registry provides a listing of the standard report names and releases for COUNTER reports that should be used when implementing the schema. It also includes a registry of non-COUNTER reports that have been developed to work with the SUSHI protocol.
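To illustrate how the registry's report names come into play, the sketch below builds the core of a SUSHI report request for a COUNTER report. The element names follow the general shape of the SUSHI schema, but the requestor ID, customer ID, and dates are placeholder assumptions, and a real client would wrap this payload in a SOAP envelope addressed to the vendor's SUSHI endpoint.

```python
# Sketch of a SUSHI ReportRequest body. All identifiers and dates below are
# hypothetical; report Name and Release should come from the Reports Registry.
from xml.sax.saxutils import escape


def build_report_request(requestor_id, customer_id, report, release, begin, end):
    """Return the inner XML of a SUSHI ReportRequest (SOAP wrapper omitted)."""
    return f"""<ReportRequest>
  <Requestor><ID>{escape(requestor_id)}</ID></Requestor>
  <CustomerReference><ID>{escape(customer_id)}</ID></CustomerReference>
  <ReportDefinition Name="{escape(report)}" Release="{escape(release)}">
    <Filters>
      <UsageDateRange><Begin>{begin}</Begin><End>{end}</End></UsageDateRange>
    </Filters>
  </ReportDefinition>
</ReportRequest>"""


print(build_report_request("example-library", "customer-001", "JR1", "4",
                           "2013-01-01", "2013-12-31"))
```

Swapping in a registry entry's report name and release is all that changes between requests, which is what makes a shared registry of names useful to implementers.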

The COUNTER Auditing requirements are needed to ensure that the usage reports provided by vendors are in line with the COUNTER principles of credibility, consistency and compatibility. For this purpose COUNTER has defined specific audit test-scripts for each of the COUNTER required usage reports. As the majority of vendors will work with their own auditor, the test-scripts ensure that each auditor follows an identical procedure and measures results in the same way.

COUNTER provides an international, extendible Code of Practice for E-resources that allows the usage of online information products and services to be measured in a credible, consistent and compatible way using vendor-generated data. Release 4 is an integrated Code of Practice covering journals, databases, books, reference works and multimedia content. It replaces both Release 3 of the Code of Practice for Journals and Databases and Release 1 of the Code of Practice for Books and Reference Works. The deadline for its implementation is 31 December 2013. After this date only those vendors compliant with Release 4 will be considered to be COUNTER compliant. Release 4 contains the following new features...

We asked six experts to reflect on their areas of expertise in evaluation and respond to two questions: (1) Looking through the lens of your unique expertise in evaluation, how is evaluation different today from what it was 10 years ago? and (2) In light of your response, how should evaluators or evaluation adapt to be better prepared for the future?

Our first task was to identify one trenchant research question to guide the project. The question we developed was, What do students really do when they write their research papers? Between the assignment of a research paper and the finished, submitted product was a black box that largely concealed the processes undertaken by the student. We wanted to take a peek into that box to see what we could find. We felt that this question accurately reflected our ignorance of student work habits while providing a manageable focus for our information-gathering activities. We took a general approach, avoiding presuppositions. We wanted to begin our project by exploring students’ practices; we did not set out to prove a point. Our initial aim was to be able to describe in detail how students actually write their research papers. This would enable the library staff to develop new ways to help students meet faculty expectations for research papers and become adept researchers.

The development of a library assessment plan requires a substantial commitment on the part of the library director. The director must provide strong leadership since assessment will require resources -- money for staff to be released from their regular duties, money for surveys and other data collection efforts, and so forth. Even if the staff are excited about assessment, it will be difficult, if not impossible, to develop an assessment plan and see it through to fruition without the wholehearted support of the director and other top administrative staff.

In order to gain familiarity with the conceptual and practical foundations of these standards and their applications to extended cases, the JCSEE strongly encourages all evaluators and evaluation users to read the complete book, available for purchase at http://www.sagepub.com/booksProdDesc.nav?prodId=Book230597& and referenced as follows:

In August 2008, the Duke Teaching and Learning Center “Link” opened in the Perkins Library Lower Level. The technology-enhanced classrooms and group study spaces have been particularly designed to accommodate and encourage collaborative work and project-based learning activities. This project represents the culmination of years of planning and has been influenced by several recently renovated prototype spaces. As Duke prepares to undertake a significant amount of classroom construction and renovation over the next decade, this project represents a significant opportunity for evaluation and assessment to inform the many academic space planning decisions that lie ahead.

How do we quantify... the contributions the library makes in return to the university? Research libraries are not used to assigning a monetary value to the use of their collections, services and expertise, although public libraries have been moving in this direction in the past few years. Borrowing some of the methods public libraries use, RAU has calculated dollar values for some core library transactions. This is only an illustration and is by no means an exhaustive list of the ways the library contributes to the university. It cost $56,678,222 to maintain Cornell's 20 libraries in 2008/2009. If CUL did not exist, the university would have had to pay $90,648,785 last year to secure services that are comparable to the use that the Cornell community makes of the library.
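The calculation behind figures like these can be sketched in a few lines. Only the two Cornell totals come from the text above; the function name and the per-dollar framing are illustrative, not RAU's published method.

```python
# Illustrative ROI calculation using the Cornell figures quoted above.
def roi_ratio(value_of_services: float, operating_cost: float) -> float:
    """Dollars of comparable-service value returned per dollar spent."""
    return value_of_services / operating_cost


CUL_COST_2008_09 = 56_678_222    # cost to maintain Cornell's 20 libraries
CUL_VALUE_2008_09 = 90_648_785   # cost of securing comparable services elsewhere

ratio = roi_ratio(CUL_VALUE_2008_09, CUL_COST_2008_09)
print(f"${ratio:.2f} of comparable-service value per $1.00 invested")
```

On these numbers the ratio works out to roughly $1.60 returned per dollar spent, which is the kind of headline figure public-library valuation studies typically report.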

This conceptual piece presents a framework to aid libraries in gaining a more thorough and holistic understanding of their users and services. Through a presentation of the history of library evaluation, a measurement matrix is developed that demonstrates the relationship between the topics and perspectives of measurement. These measurements are then combined into evaluation criteria, which different participants in the library system use for decision-making. By implementing this framework for holistic measurement and cumulative evaluation, library evaluators can gain a more holistic knowledge of the library system and library administrators can be better informed in their decision-making processes.

Conducting evaluations of programs that are useful to decision makers is the hallmark of successful evaluation. Appropriate program implementation and operation are critical to this work. A strategy that can be used to determine the extent to which a program is ready for full evaluation is known as evaluability assessment. Initially developed by Wholey (1979), evaluability assessment (EA) seeks to gain information from important documents and input from stakeholders concerning the content and objectives of the program. Outcomes from EA include clear objectives, performance indicators, and options for program improvement. Wholey (1979) recommended EA as an initial step in evaluating programs, increasing the likelihood that evaluations will provide timely, relevant, and responsive findings for decision makers.

Although academic libraries have a long tradition of program assessment, in the past the results have been more meaningful internally than externally. Recent changes in the conceptualization of libraries’ role in higher education and advances in measurement tools will likely provide answers to different questions, particularly the relationship of library services and resources to student learning and success.

Read: Executive Summary & Survey Questions and Responses. Survey results indicate that while a modest number of libraries in the 1980s and earlier engaged in assessment activities beyond annual ARL statistics gathering, the biggest jump in activity occurred between 1990 and 2004. The overwhelming majority of responses indicate the impetus was service driven and user centered and came from within the library itself rather than from an outside source. Respondents’ top impetus for beginning assessment activities (63 respondents or 91%) was the desire to know more about their customers. Based on responses to a question about their first assessment activities, over half began with a survey, almost all of which were user surveys.

The University of Minnesota Libraries received support from the Andrew W. Mellon Foundation to develop a multi-dimensional model for assessing support for scholarship in the context of a large research campus. The project team explored discipline-specific needs for facilities, information content, services, tools, and expertise in the humanities and social sciences. The goal was to develop a model for bringing greater coherence to these distributed resources through physical and virtual means, and also a research support environment that could be modeled, prototyped, and evaluated. The study is also being used to assist the academic leadership in understanding how libraries can promote the physically boundless nature of inquiry and information use.

During a year-long research project with 12 companies at the leading edge of performance measurement, we devised a "balanced scorecard," a set of measures that gives top managers a fast but comprehensive view of the business. The balanced scorecard includes financial measures that tell the results of actions already taken. And it complements the financial measures with operational measures on customer satisfaction, internal processes, and the organization's innovation and improvement activities, operational measures that are the drivers of future financial performance.
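The four groups of measures described above can be sketched as a simple data structure. This is a toy illustration, not taken from Kaplan and Norton's article: every measure name and number below is invented, and the sketch assumes each measure is one where higher values are better.

```python
# Toy representation of a balanced scorecard's four perspectives.
# All measures and targets here are invented for illustration.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Measure:
    name: str
    value: float
    target: float

    def on_track(self) -> bool:
        # Assumes higher is better for this measure.
        return self.value >= self.target


@dataclass
class BalancedScorecard:
    financial: List[Measure] = field(default_factory=list)
    customer: List[Measure] = field(default_factory=list)
    internal_process: List[Measure] = field(default_factory=list)
    innovation: List[Measure] = field(default_factory=list)


card = BalancedScorecard(
    financial=[Measure("return on capital (%)", 9.0, 8.0)],
    customer=[Measure("customer satisfaction score", 8.1, 8.5)],
    internal_process=[Measure("on-time delivery (%)", 96.0, 95.0)],
    innovation=[Measure("new-service revenue share (%)", 22.0, 25.0)],
)
for group in ("financial", "customer", "internal_process", "innovation"):
    for m in getattr(card, group):
        print(group, m.name, "on track" if m.on_track() else "behind")
```

Grouping operational measures alongside financial ones in a single view is the point of the scorecard: the operational measures flag problems before they show up in the financial results.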

Purposeful data results from an expressed purpose in combination with an adequate method. Data gathering is an essential part of online user studies, and every method has its areas of application and its limitations: quantitative surveys are limited in their ability to detect causal relations; with qualitative interviews, broad generalizations are risky. In library and information science, user research is a domain in which we gather large amounts of data. But is our data really "purposeful"? As early as 1972, Frank Heidtmann (pp. 36-37) criticized both our use of inadequate research techniques and, regardless of their appropriateness, our inaccurate and invalid application of them.

Goals
- Increase knowledge and skills of library staff about social science research methods and best practices for studying user behavior
- Provide a forum for discussing important findings from major studies of library user behavior and implications for our services
- Foster collaboration among librarians to conduct user studies
- Build a support structure and network for librarians interested in conducting user studies
- Support at least one user study that results in a report suitable for publication via the library web site and/or local event by June 2010

In the rapidly changing information environment, libraries have to demonstrate that their services have relevance, value, and impact for stakeholders and customers. To deliver effective and high quality services, libraries have to assess their performance from the customer point of view. Moving to an assessment framework will be more successful if staff and leaders understand what is involved in organizational culture change. This paper describes the new paradigm of building a culture of assessment, and places it in the framework of organizational culture change, utilizing a learning organization and systems thinking approach.

The ACRL publication Value of Academic Libraries: A Comprehensive Research Review and Report is a review of the quantitative and qualitative literature, methodologies and best practices currently in place for demonstrating the value of academic libraries, developed for ACRL by Megan Oakleaf of the iSchool at Syracuse University. The primary objective of this comprehensive review is to give academic librarians a clearer understanding of what research about the performance of academic libraries already exists, where gaps in this research occur, and which best practices and measures correlated to performance are most promising.

IMLS defines outcomes as benefits to people: specifically, achievements or changes in skill, knowledge, attitude, behavior, condition, or life status for program participants (“visitors will know what architecture contributes to their environments,” “participant literacy will improve”). Any project intended to create these kinds of benefits has outcome goals. Outcome-based evaluation, “OBE,” is the measurement of results. It identifies observations that can credibly demonstrate change or desirable conditions (“increased quality of work in the annual science fair,” “interest in family history,” “ability to use information effectively”). It systematically collects information about these indicators, and uses that information to show the extent to which a program achieved its goals.
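The indicator-tallying step described above can be sketched as follows. This is an illustrative tabulation, not an IMLS-prescribed procedure; the function name and the simplified met/not-met data model are assumptions, and real OBE data would usually capture richer observations than a boolean.

```python
# Illustrative tally of outcome indicators: each observation records whether
# one participant showed the desired change for a named indicator.
from collections import defaultdict


def summarize_indicators(observations):
    """observations: iterable of (indicator_name, met_target: bool) pairs.

    Returns, per indicator, the share of participants who met the target.
    """
    met = defaultdict(int)
    total = defaultdict(int)
    for indicator, ok in observations:
        total[indicator] += 1
        met[indicator] += int(ok)
    return {name: met[name] / total[name] for name in total}


data = [
    ("participant literacy improved", True),
    ("participant literacy improved", False),
    ("participant literacy improved", True),
    ("uses information effectively", True),
]
for indicator, share in summarize_indicators(data).items():
    print(f"{indicator}: {share:.0%} of participants met the target")
```

Comparing these shares against the outcome goals set at the start of the program is what lets OBE "show the extent to which a program achieved its goals."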