Annual Conference Reports

Volunteer Reporters Cover ALCTS Continuing Education Events

ALCTS members who attended the ALA Annual Conference 2007 in Washington, D.C., provided these summary reports. We thank the volunteers who covered a program or preconference sponsored by ALCTS or one of its units. Their efforts enable the rest of us to benefit from these sessions. We regret that volunteers were not available to report on all the preconferences and programs.

Basic Library of Congress Classification

This first offering of the ALCTS/PCC Workshop on Fundamentals of Library of Congress Classification provided quite a mental workout. Participants galloped through the 328-page manual (not including the exercise and answer pages) along with the four well-versed instructors, all of whom were part of the group that developed the course. The instructors were Steven Arakawa, Yale University; Lois Chan, School of Library and Information Science, University of Kentucky; Paul Frank, Library of Congress; and Lori Robare, University of Oregon. Their varied backgrounds added to the strength of the preconference. Attendees also had the good fortune of access to the specialized knowledge of Mary Kay Pietris, LC Cataloging Policy and Support Office (CPSO).

Instructors soon discovered the need to adapt presentations to fit the two days available. Time was reserved for exercises and group review of answers following each major topic. The first day began with an overview of the classification schedules, the history of their development and continued maintenance, and the various tools used to create call numbers. The good news regarding the tools is that the two major subject cataloging manuals, for shelflisting and classification, will be merged into one volume by the end of the year. The structure of the LC schedules is main classes (disciplines), divided into subclasses, which are further divided by form, place, time, and subtopics. The elements of LC call numbers include class numbers, book numbers, year, and additions to call numbers. The topic of MARC coding was demonstrated; the art of assigning a book number (Cuttering), however, was explored in some depth. The group forged on through general principles of assigning LC class numbers, assigning the closest number, and the importance of shelflisting using the principle of literary warrant (that which is already established as precedent in a given shelflist may supersede that which is in the LC schedules). Because the basic structure of LC classification is by discipline, the various aspects of a topic are not all grouped together.
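The call number elements described above can be illustrated with a short sketch. The parser below is deliberately simplified and hypothetical, for illustration only; real LC call numbers have many more variations (date letters, additional cutters, volume designations) than this pattern handles.

```python
import re

# Simplified, illustrative pattern for an LC call number's major elements.
# Real call numbers are more varied; this is not a production parser.
CALL_NO = re.compile(
    r"^([A-Z]{1,3})"                          # class letters (discipline/subclass)
    r"\s*(\d+(?:\.\d+)?)"                     # class number, possibly decimal
    r"\s*(\.?[A-Z]\d+(?:\s\.?[A-Z]\d+)?)?"    # one or two cutter (book) numbers
    r"\s*(\d{4})?$"                           # year
)

def parse_call_number(text):
    """Split an LC call number string into its major elements."""
    m = CALL_NO.match(text.strip())
    if not m:
        return None
    letters, number, cutters, year = m.groups()
    return {"class_letters": letters, "class_number": number,
            "cutters": cutters, "year": year}

print(parse_call_number("HB171.5 .S63 2007"))
# {'class_letters': 'HB', 'class_number': '171.5', 'cutters': '.S63', 'year': '2007'}
```

Here "HB" is the subclass, "171.5" the class number, ".S63" the Cutter (book) number, and "2007" the year.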

The second day of the workshop was devoted to the most challenging, yet also most used, schedules: H, N, and P, and the application of accompanying tables. This made for stimulating hands-on exercises. There are special types of materials to consider when classifying, such as juvenile materials, textbooks, collected conference papers, etc. Decisions to classify according to format or subject area are generally based on literary warrant.

Schedule H

Published in 1910, H includes sixteen subclasses, with seven subclasses reserved for sociology and seven for economics. The instructor’s general advice to read the scope notes first is particularly important. More than one table is often needed.

Schedule N

This schedule, devoted to the fine arts, was first published in 1910. It was influenced by the Dewey Decimal Classification, the library of the Art Institute of Chicago, and the Berlin Kunstmuseum. N is itself also a subclass, focused on the visual arts, while subclass NX covers the arts in general. The scope notes should be consulted for the difference between the arts in general and the visual arts. The order of precedence is unique to class N, and it is sometimes contradicted by the instructions. The order of precedence is: 1) country number; 2) genre by nationality or period; 3) genre (in general); 4) special topics. Examples were used to illustrate the order of precedence. Assigning a call number based on country of origin can be challenging, as artists may reside in several countries over the course of their lifetimes.

Schedule P: Language and literature

The development of this schedule began in 1909 and was completed in 1948. It includes nineteen subclasses and thirteen different schedules and tables. Language and literature are in the same subclass, with the exception of literature in the major Western European languages (English, American, Spanish, French, German, etc.), which are represented by their own subclasses. There is extensive use of tables in the P schedule. Form and genre prevail over topic in class P. Issues of nationality and national language pose a special problem in this classification. Vladimir Nabokov was cited as an example of an author who wrote in Russian and English and lived in the United States. The structure of an author's literary number was explained.

Local Policy Decisions

One should consider users' needs and the benefits of following standards rather than local practice (which will cost time and money in the long run), and remember to document local decisions. The decision to classify and shelflist hinges on whether the item will be paged or browsed. There is no need to classify if the material is not integrated into the general collection or is in remote storage. Depending on the availability of time and staff, bibliographies and atlases can be classified by subject. The following questions were raised: Will all children's materials go to PZ? Should series be classed together or separately? Copy cataloging decisions should be made concerning obsolete call numbers. Should one tolerate split collections if call numbers change? In conclusion, one should weigh costs and benefits, users' needs, and workflow efficiency, and consider political issues.

Participation in the SACO Program

The workshop concluded with a presentation on the SACO program. Training is not required, and libraries may join independently using
the online proposal application. Once a proposed classification number is accepted, it appears in the weekly list and Classification Web. A new class number can be proposed when the specific concept is not covered in an existing schedule.

The preparation that went into this session was impressive. The sample exercises were quite helpful, and letting attendees work on them communally if desired helped considerably. It was an enormous amount to cover in two days, and it would be difficult to decide what might be cut from future sessions. It will be interesting to see the next iteration of this course.

What They Don’t Teach in Library School: Competencies, Education, and Employer Expectations for a Career in Cataloging

The purpose of this preconference was not only to help new catalogers bridge the gap between what they learn in library school and what is expected of them when actually working in the field, but also to recruit members for the new “Task Force on Competencies and Education for a Career in Cataloging.” The mission of the Task Force is to assess the current state of education and employment in cataloging and to recommend new programs that promote continuing education and training in the profession.

The Task Force will serve as an umbrella organization for three new initiatives:

A Cataloging Education Fellows Program to promote cataloging education, educational programs such as workshops, and internship opportunities;

A program to connect cataloging practitioners and employers with library educators to build upon the ALCTS/Committee on Education, Training, and Recruitment for Cataloging (CETRC) Mentoring Program, link catalogers and employers with educators in order to provide better internship and practicum opportunities, and establish a lecture series to discuss current trends and future developments in cataloging;

A clearinghouse of cataloging resources, including training courses, documentation, terminology, and tools, as well as links to cataloging-related Internet resources.

Randy Call, Director of Technical Services, Detroit Public Library, described “Cataloger Competencies for Public Libraries.” Two key points were that new librarians generally do not have enough training to step into full-time cataloging positions, and that the institution expects degreed librarians to assume non-cataloging duties such as leadership roles, special projects, and professional involvement while maintaining productivity levels.

Beacher Wiggins, Director for Acquisitions and Bibliographic Access, Library of Congress, discussed “Managing a Shortage of Catalogers: A Research Library Perspective.” “Blended” or “hybrid” positions are becoming the norm. These changes will enable professional cataloging staff to offer solutions on how to describe bibliographic elements that are covered by cataloging rules or that require interpretation. A long-term goal is to give support staff responsibility for descriptive cataloging.

Karen Calhoun, Vice President, OCLC WorldCat and Metadata Services, OCLC, presented “On Competencies for Catalogers.” Calhoun reiterated the previous observations that professional catalogers are being down-sized, and expected to assume more responsibilities. The focus of cataloging is evolving as the Generation X and Millennial users’ needs and expectations change. Additionally, fewer LIS programs are offering cataloging and fewer new librarians are choosing cataloging as a career.

Janet Swan Hill, Associate Director for Technical Services, University of Colorado-Boulder Libraries, discussed “The Brick Wall: Recruiting People to a Career in Cataloging.” Hill noted that in addition to fewer LIS programs offering cataloging, catalogers are “invisible” to patrons. Patrons consider “librarians” to be reference and circulation staff. In order to counteract these hurdles, practicums and mentorships are essential.

Brian E.C. Schottlaender, University Librarian, University of California at San Diego, described “What They Don’t Teach in Library School: Employers’ Expectations for Cataloging Recruits.” The skill sets of librarians will evolve as users’ needs and expectations change.

Matthew Beacom, Metadata Librarian, Yale University Libraries, presented “Training Issues Managers Face.” Beacom also focused on skill sets. Employees need proper training and good workplace morale in order to be more productive because productivity equals success.

Sylvia D. Hall-Ellis, Associate Professor, Library and Information Science Program, University of Denver, discussed “Cataloging Education: A New Emphasis on the Library and Information Science Curriculum.” Hall-Ellis focused upon new avenues for cataloging education within LIS programs. The evolution of cataloging and the convergence of technologies present ongoing challenges. The most effective way to meet these challenges is through communication, mentorships, and research.

There were two breakout sessions during the preconference. The morning session focused on competencies in cataloging; the resources available for continuing education in cataloging; and how institutions can promote continuing education in cataloging. The afternoon session focused on the design of a Cataloging Education Fellows Program to recruit, educate, and train the next generation of faculty members; the construction of a practitioner-library educator partnership for teaching cataloging, classification, metadata, mixed-media information, etc.; and the implementation of a marketing strategy to connect potential users of and contributors to the Clearinghouse of Cataloging Resources.

Cataloging Pre-Twentieth Century Cartographic Resources

Hallie Pritchett, University of Georgia

The magnificent Thomas Jefferson building at the Library of Congress was the setting for this two-day session, co-sponsored by the Map and Geography Round Table (MAGERT), the Association for Library Collections and Technical Services (ALCTS), the Rare Books and Manuscripts Section (RBMS), and the Government Documents Round Table (GODORT). With thirty-seven participants, primarily map librarians and/or catalogers and special collections catalogers, this preconference addressed the issues of cataloging pre-twentieth century cartographic resources via instruction and hands-on exercises.

Following opening remarks by John Hébert, Chief of the Geography and Map Division, Library of Congress, Nancy Kandoian, New York Public Library, examined the differences between cataloging cartographic resources and textual materials, focusing on description and access points. Seanna Tsung, Library of Congress, discussed the differences between cataloging early and modern maps, such as those related to early printing and publishing practices and to cartographic information and its presentation. Tsung then turned her attention to describing the primary mapmaking techniques used prior to 1900, including woodcut, engraving, and lithography. Carolyn Kadri, University of Texas at Arlington, ended the morning's instruction session by discussing the vocabulary used to describe early maps and their features. Next was a tour of the Geography and Map Division, which offered participants a behind-the-scenes look at the world's largest collection of cartographic materials and resources. Kadri and Kandoian spent the first part of the afternoon session on description, specifically the chief source of information, title/statement of responsibility, and publication data. The day ended with a session by Kandoian on research, often a necessary part of the process of cataloging pre-twentieth century maps and atlases, and the various reference materials and resources useful in doing such research.

The second day began with a session by Kandoian on the mathematical data associated with maps, emphasizing those issues particular to early cartographic materials. Tsung discussed the issues specific to cataloging early atlases, including creating analytic records for atlas plates and maps in books as a means of providing both greater access to such resources and a record of their existence should they ever be separated from their original container. The afternoon session, led by Tsung and Kadri, first addressed copy-specific notes and manuscript maps, then dealt with map reproductions, including the distinctions between facsimiles and photocopies as well as scanned images. The day ended with a session by Deborah Leslie, Folger Shakespeare Library, on how to transcribe the early letter forms and symbols often found in pre-twentieth century works.

This preconference provided a great deal of interesting and useful information about pre-twentieth century cartographic resources in general, particularly for those librarians who primarily work with materials other than maps and atlases. The speakers were very knowledgeable and enthusiastic about their individual specialties and quite eager to help participants learn and understand the finer points of map and atlas cataloging. Participants were given several opportunities throughout the preconference to apply what they learned by cataloging copies of early maps and atlas plates and discussing their results. By the end of this two-day session, participants had gained the knowledge, skills and tools necessary to successfully catalog their library’s pre-twentieth century cartographic resources.

ALCTS 101

M. Dina Giambi, University of Delaware

ALA inaugurated 101 programs at the 2007 Annual Conference, hosted by the ALA divisions, to assist first-time conference attendees. “Your ALCTS Experience: An Open House” was held on Friday, June 22, 7-9 p.m. The event was organized by the ALCTS Membership Committee under the leadership of Rebecca Ryder, Head, Preservation Services, University of Kentucky, who served as the mistress of ceremonies. Formal remarks to the group of about sixty were offered by M. Dina Giambi, Assistant Director for Library Technical Services, University of Delaware and incoming ALCTS President-Elect; Bruce Johnson, Acting Assistant Chief, Cataloging Distribution Service, Library of Congress and ALCTS President; and Pamela Bluh, Associate Director for Technical Services & Administration, Thurgood Marshall Law Library, University of Maryland, School of Law and ALCTS President-Elect. Attendees were encouraged to volunteer for the many opportunities offered by ALCTS to serve on committees, interest groups, and discussion groups.

New ALCTS members and several ALCTS/SAGE Library Support Staff Travel Grant recipients were present, along with active members who informally provided advice about how to navigate the exhibits, select and organize a conference schedule, etc.

An impromptu feature of the event was the testimonials offered by a number of ALCTS veterans who recounted some of their early ALCTS experiences. Judith Hopkins, Associate Librarian, State University of New York at Buffalo (retired), who was celebrating her fiftieth year as a librarian, implored the audience to become professionally active, ending her comments with an enthusiastic “You'll have a ball!” Hopkins has been the listowner or co-listowner of AUTOCAT, the library cataloging and authorities discussion group, since 1993.

Library of Congress Working Group on the Future of Bibliographic Control

José-Marie Griffiths, chair of the working group and dean of the School of Information and Library Science, University of North Carolina at Chapel Hill, gave an overview of the group's function and of the three public meetings held this year.

Sally Smith, King County Library System in Seattle, spoke in more detail about the first meeting, held in Mountain View, California on March 8. This meeting focused on “Users and Uses of Bibliographic Data.” In particular, the group discussed users and uses outside traditional libraries. “There needs to be more cooperation between different library types,” Smith said.

Greta de Groat, Stanford University, also commented on the first meeting. She criticized the working group for focusing on increasing cataloging services, rather than cutting costs, and wondered if there was a way to provide more services, while still keeping costs down.

The next speaker was Diane Dates Casey, Governors State University. Casey discussed the second meeting, held in Chicago on May 9, which dealt with “Structures and Standards in Bibliographic Data.” She stressed the importance of authority control, and the need to expose others outside the library world to the standards we have established.

Michael Norman, University of Illinois Urbana-Champaign, reported on the Chicago meeting from the perspective of an audience member. He spoke of the necessity of interoperability in bibliographic data, and suggested automation in cataloging as a way to cut costs.

The third public meeting, scheduled for July 9, will address “Economics and Organization of Bibliographic Data.” Griffiths announced that a report will be drafted following that meeting. The report will be made available for public comment in October, and delivered to the Library of Congress in early November. More information can be found on the group's website.

Informing the Future of MARC

Arlene Klair, University of Maryland Libraries

This program presented findings from the MARC Content Designation Utilization (MCDU) project team, which is providing empirical data on how catalogers actually use MARC. These findings can inform future directions for MARC and cataloging practices. The presenters were William E. Moen and Shawne D. Miksa, School of Library and Information Sciences, University of North Texas, and Sally H. McCallum, Chief, Network Development and MARC Standards Office, Library of Congress.

Beginning with a review of the purpose of MARC, Moen demonstrated how MARC experiences constant growth: contrast the two hundred seventy-eight fields in use in 1972 with the current two thousand seventy-four. In an examination of over fifty-six million records, the MCDU project found that seven tags and ten subfields occur in every record. In records created by LC, seventeen fields/subfields account for 80 percent of usage; in records created by others, thirty-six fields/subfields account for 80 percent of usage. The study also identified commonly used format-specific fields and subfields.
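As a rough sketch of this kind of analysis (using hypothetical, simplified records consisting only of tag lists, not real MARC 21 data as MCDU examined), one can count which tags occur in every record:

```python
from collections import Counter

# Hypothetical, simplified records: each is just a list of MARC tag strings.
# A real analysis would parse full MARC 21 records.
records = [
    ["008", "040", "100", "245", "260", "300", "650"],
    ["008", "040", "245", "260", "300", "490", "830"],
    ["008", "040", "110", "245", "260", "300", "650", "650"],
]

# Count in how many records each tag appears at least once.
presence = Counter(tag for rec in records for tag in set(rec))

# Tags present in every record -- analogous to finding the small core
# of tags that occurs in all records examined.
core = sorted(t for t, n in presence.items() if n == len(records))
print(core)  # ['008', '040', '245', '260', '300']
```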

The project also compared MARC use against the National Level, Minimal Level, PCC Core, and CONSER record standards, as well as the frequency with which each standard is used. Less than 1 percent of the contributed records in WorldCat are coded PCC Core or CONSER, while for LC records the figure is 7 percent.

The team also examined which commonly occurring MARC fields and subfields support the functions of the various user tasks for FRBR entities. The results raised as many questions as they answered. Is it known how many content designators are needed to support a task? Does a higher percentage of content designators mean stronger support? Should one focus on the high impact and visibility of a few fields? What the study cannot show is how often elements are assigned when they are applicable.

The project is rich in data that cannot be fully conveyed in a brief report. Complete information, including the presentation and handouts, is available on the MARC Content Designation Utilization Inquiry and Analysis web site. MCDU intends to make the software used to manipulate MARC data sets available so that others can decompose additional record sets.

Sally McCallum noted that the granularity of MARC is seen as costly to apply. Simplification of MARC is not easy, as special groups clamor for more specific elements to be added or retained. MARC is rich but does need streamlining, since the same information resides in multiple places. Yet it also lacks elements found in other metadata standards, such as hierarchy support (links to URLs, etc.), rights administration, and preservation information. MARC is, nonetheless, still viable: it has longevity, it is used in communities with very different standards, and billions of records exist worldwide. MARCXML is downloaded ten times a day, and many crosswalks exist. The future of MARC may reside in leveraging its strengths, eliminating redundancies, and adding the elements it lacks. Active engagement by the audience showed intense interest in all the issues presented, which bodes well for the future of MARC.
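As a minimal sketch of what a crosswalk does (the tag-to-element pairs below follow the general shape of the LC MARC-to-Dublin-Core mapping, but are abbreviated and simplified for illustration):

```python
# A few illustrative MARC-tag-to-Dublin-Core mappings; the full LC
# crosswalk is far more detailed and operates at the subfield level.
MARC_TO_DC = {
    "100": "creator",
    "245": "title",
    "260": "publisher",   # subfield $b in the full mapping
    "650": "subject",
    "856": "identifier",  # subfield $u (URL) in the full mapping
}

def crosswalk(record):
    """Map a simplified {tag: value} MARC record to Dublin Core pairs."""
    return [(MARC_TO_DC[tag], value)
            for tag, value in record.items() if tag in MARC_TO_DC]

print(crosswalk({"245": "Cataloging today", "100": "Doe, J.", "300": "200 p."}))
# [('title', 'Cataloging today'), ('creator', 'Doe, J.')]
```

Note that fields with no target element (here, the 300) simply drop out, which is one reason crosswalked records are lossy compared to full MARC.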

Reflections on Cataloging Leadership

Sherab Chen, Ohio State University

Beth Picknally Camden, program moderator, introduced the five panel speakers: Sheila Intner, William Garrison, Regina R. Reynolds, Matthew Beacom, and Janet Swan Hill. People gathered to talk, to listen, and to share their experiences and concerns on questions such as: How should we be mentoring potential leaders? What development paths could younger librarians follow to become the next generation of leaders? In a time of rapid technological change, of changes within the practice of cataloging and classification, and of looming retirements of senior cataloging librarians, these are serious questions put to those in leadership positions or on track to become future leaders.

The question posed to the panel was "How does my experience guide other librarians?" To paraphrase the first respondent, Sheila Intner: librarians should say "yes" to leadership activities such as teaching and training, do research that benefits their work life, and publish the results. They should also strive toward credentials that increase job effectiveness and serve as a way to gain recognition and respect.

William Garrison strongly disagrees with the idea that "there won't be catalogers anymore." For him, "cataloging is not a dying art." Probably the most important lesson he learned, he said, was that he is a professional, that he chose to be a professional and the life of a librarian, and that this brings an added responsibility to give back to the profession.

Regina Reynolds confessed that her library career "got off to a very bad start" when she "tried to arrange books in a small technical library by LC card number" (thinking they were the call numbers!). She later went to library school and finally "learned to control the raging serials." She translates a catalog librarian's leadership development into a set of verbs with interesting proverbs, including Learn, Visualize, Serve, Control, and Enjoy.

“We need to make good use of our budgets, our money, but we should base our main value on those values of public good. 'Not everything that counts can be counted; not everything that can be counted counts.' (Einstein)”

“Up to the present, the librarian has been principally concerned with the book as a thing, as a material object, but from now on he must pay his attention to the book as a living function. He must become a policeman, a master of the raging book.” (José Ortega y Gasset, the Spanish philosopher, speaking to a group of librarians in 1934, well before e-books, e-serials, and the entire menagerie of web resources)

"'You achieve success in your field when you don't know whether what you are doing is work or play.' (Warren Beatty) Our future cataloging leaders need to see the fun, the enjoyment and the satisfaction that can come from organizing the information. Having fun and even just playing around with ideas and projects, these bring about creative solutions and innovation, and certainly our future calls to her all the creativity that our profession can ask of her."

Matthew Beacom felt fortunate to have started his career at Yale University Library among a large number of professionals who were generous with their time and knowledge, turning him into a cataloger (he did not arrive there as one). The first thing he did in his talk was to thank the people who gave him the career he has had so far, and who gave him an environment in which to grow. "In an institution filled with colleagues who are generous, it was their time and their knowledge that gave me knowledge, support, and opportunity. And I find myself now in the room as one of those people who needs to be generous, to be the one who is willing to share his ideas, willing to look at someone who I don't know very well, and see the potential in that person, and try to think what opportunity I can provide to this person for him to grow." When people come to him, he models his behavior on that of his own predecessors. He emphasizes that "it's not just to pay back those who helped you, but you just need to start helping people."

Before sharing her career experiences, Janet Swan Hill identified a number of points for leadership development. To paraphrase her key points: first, while we cannot have control over everything, if there is something over which you do have control and are capable of achieving, you should set your mind to it. Second, never tell yourself to sit down and shut up; instead, speak up. Third, find yourself in a place where interesting people are working or interesting things are happening, and try to be with people who can put you into contact with useful people. Take advantage of these interesting things, and develop a wide circle of acquaintances. More importantly, take an interest in things that extend beyond a narrow specialization. Finally, be willing to speak in public. Writing, publishing, and speaking up are the keys to successful career development.

A question and discussion session followed. Attendees asked the speakers for advice regarding a first professional position. Public librarians raised concern that they do not get the same support for professional development as is given to academic librarians. There were also discussions on maintaining balance between job responsibilities, professional activities, and personal interests, and on how to integrate all of these things (research, works and services) into one package. Speakers also shared their thoughts and experience in balancing family life and professional development while serving in a leadership role.

New Developments in Form/Genre Access

Where We are, Where We are Heading, and Where We Want to Be

Brian McCafferty, Wabash College

Three speakers addressed an SRO audience on the subject of form/genre headings, headings that indicate what things are rather than what they are about. Robert L. Maxwell, Brigham Young University, discussed broad issues concerning tagging and indexing of form/genre headings, the lack of authority records, and the complications in using MARC authorities for topical headings that are also used as form/genre headings. He discussed the many form/genre thesauri developed for special materials or areas of study, the lack of coordination among these thesauri, and the practice of using multiple thesauri as sources for form/genre headings. He then related form/genre headings to the FRBR model, illustrating in detail how form/genre headings apply to FRBR Group 1 entities.

Adam Schiff, University of Washington Libraries (UW), addressed practical considerations in implementing form/genre headings, including indexing, cataloging policies, and authority records. The UW catalog has a separate, actively used index for form/genre headings and form subdivisions. Form/genre headings are accepted in most copy cataloging, but some categories of materials (artists' books, audiobooks, newspapers, etc.) are handled in a separate workflow. LC authority records for topical headings are edited for use as form/genre authorities, and a limited number of form/genre thesauri have been adopted. Problems arise from the current inability to control incoming headings from copy cataloging, discrepancies among different thesauri, and protecting changes from the actions of the authority vendor.

Geraldine Ostrove, Library of Congress, discussed form/genre activities at LC. The major project, known as the X55 Initiative, is focusing on headings for vocal music. Many existing topical music subject headings are form/genre headings. This project is examining existing authority records and the LC classification ranges for vocal music as the basis for creating form/genre authority records. New headings are being created where no related topical headings currently exist. LC's Cataloging Distribution Service hopes to distribute form/genre music headings by September 2007, and LC has committed funding and personnel to this project.

Digital Asset Management: Implications for Preservation

Joanna Burgess, Reed College (Portland, Oregon)

Three speakers examined key factors related to the preservation of digital assets, including specific strategies and best practices being implemented by their institutions and in the digital preservation community at large. Practical guidelines offered included the following:

Assess the potential long term value of the resources being considered for digitization

Streamline efforts by selecting collections that have already been processed or cataloged

Do not cut corners with substandard scanning that will likely have to be redone

Leave ample time for creation of metadata, as well as quality control checks

Document choice points and decisions

Start small and leave yourself room to make mistakes and learn

Discussing preservation metadata for digital repositories, Robin Wendler, Metadata Analyst, Harvard University, emphasized the significance of timely and appropriate metadata for long-term control over digital resources. Unlike descriptive metadata, most preservation metadata must be captured at the time of the object's creation. An appropriate schema should be established; one option is PREMIS (PREservation Metadata Implementation Strategies), a flexible, core preservation metadata element set. Digital preservation efforts should be automated wherever feasible.

Most importantly, accurate rendering of a digital object’s format is critical in providing access to the resource over time: two emerging tools that facilitate maintenance and sharing of such technical metadata are JSTOR/Harvard Object Validation Environment (JHOVE) for automating format validation, and the Global Digital Format Registry (GDFR), a collaborative environment that will facilitate sharing and accessing information about digital formats.

Lastly, Joseph JaJa, Professor of Electrical and Computer Engineering, University of Maryland, outlined a suite of tools that are being developed at his institution called Approach to Digital Archiving and Preservation Technology (ADAPT). These include the PAWN (Producer-Archive Workflow Network) application, which provides a framework for consistent packaging of data between content producers and harvesters, an Auditing Control Environment (ACE), and the FOCUS (FOrmat CUration Service) registry for information and services related to formats.

Mentoring for Success: You Can Do It. ALCTS Can Help

Lia Hemphill, Nova Southeastern University

Rhonda Maker, Rutgers University, moderated an exceptional program on the importance of mentoring. Her opening remarks traced the etymology of the word mentor, which goes back to a character in Homer’s Odyssey and has come to mean a person who is willing to share knowledge and to be a faithful and wise guide. Mentoring can be formal or informal. The program speakers were Shoshanna Kaufmann, Queens College, and Priscilla Williams, University of Florida, who shared their mentoring experiences. The key points emphasized were that mentoring requires structure, a timetable, and planning to succeed. Mentoring can be accomplished in person or remotely. The mentee is not only a student, but can also be a working professional or a tenure-track faculty member. The most critical component of a successful relationship is matching the mentee with an appropriate mentor.

Shoshanna Kaufmann designed a mentoring program to assist tenure track professionals as well as library science students.

Priscilla Williams discussed her mentoring experience as a member of the ALCTS CCS Committee for the Education, Training and Recruitment for Cataloging (CETRC) Mentoring Program. CETRC has
a formal mentoring program to encourage professionals to consider cataloging as a career.

The presenters answered several questions about mentoring and encouraged all the attendees to become mentors.

Each vendor representative described how his/her company uses metadata to provide electronic resources (e-resources). MetaPress normalizes and validates metadata provided by publishers and creates full-text and access rights indexes. Ex Libris, Inc. offers SFX, a link resolver that searches for patterns in metadata to create links. Google Scholar aims to provide a single place to find scholarly material by searching through metadata. Jill Emery provided a librarian’s perspective. She described the need for more administrative metadata to effectively manage e-resources.
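Link resolvers such as SFX typically receive citation metadata as an OpenURL (the ANSI/NISO Z39.88 standard) and match it against a knowledge base of the library’s holdings. As a rough sketch of the first half of that exchange, the hypothetical helper below assembles an OpenURL-style query string from journal citation elements; the resolver address is invented for illustration.

```python
from urllib.parse import urlencode

def build_openurl(resolver_base, **citation):
    """Assemble an OpenURL 1.0-style query for a journal article.

    `resolver_base` is a hypothetical institutional resolver address;
    the rft.* keys follow the key/encoded-value journal format of
    ANSI/NISO Z39.88.
    """
    params = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    }
    # Prefix each citation element with "rft." (the referent), as the
    # standard's key/encoded-value convention requires.
    params.update({f"rft.{k}": v for k, v in citation.items()})
    return resolver_base + "?" + urlencode(params)

# Example: a link to an article-level citation.
link = build_openurl("https://resolver.example.edu/sfx",
                     jtitle="Library Resources & Technical Services",
                     volume="51", issue="3", spage="180", date="2007")
```

The resolver’s job is then the reverse: parse these keys, look the citation up in its knowledge base, and present the user with appropriate full-text links.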

Several trends in e-resources emerged as the speakers made their presentations. User behavior and expectations continue to change. Users were described as “information snacking.” Linking thus needs to be more granular, enabling users to spend less time searching for information. Users have a harvesting behavior; they do not read the full text online but rely on metadata to identify text to read later. Users have no brand loyalty—scholarly sources must compete with all the rest. Content indexing is being done by third party search engines. Publishers face the challenge of helping users keep track of the source of their “snack” of content. Users connect to the web to answer specific questions; they expect a simple user interface with results ranked within less than a second. This means a linking service becomes the vital element of a library’s electronic resources services. More metadata provides more data to link. New sources of metadata include the social web (folksonomies, ratings, reviews) and usage analytics (most popular, most cited).

There is a growing use of standards to create the metadata for this linking environment (e.g. NISO is working on expressing licensing terms in metadata). As this program demonstrates, there are many different players collaborating to make the links between e-resources that users expect.

Continuing Resources Cataloging Committee Update Forum

Brian Falato, University of South Florida

Three librarians answered questions that were prepared in advance on the topics of title-level versus article-level access for journals, library use of metadata from non-library suppliers, and the CONSER standard record for serials cataloging. The speakers were David Bade, Monographic Cataloger, University of Chicago; Diane Boehr, Director of Cataloging, National Library of Medicine; and Helen Heinrich, Cataloging Coordinator, California State University-Northridge.

Bade stated that title-level access is still important, particularly for more obscure serials, because it may be the only information available to assist researchers. Use of machine-generated metadata without human intervention presupposes that the data is always accurate, although this may not be true. His fear is that the CONSER standard record, while intended as a “floor” to which catalogers can add for their own individual catalogs, will be mandated as a “ceiling” by management, thus discouraging catalogers from adding anything to the standard record, even if needed.

Boehr argued that patrons are more interested in article-level information than title-level, and encouraged librarians to work with abstracting/indexing services to ensure the quality of article-level information that these services could provide. NLM receives article-level metadata in XML format from publishers for use in its Medline indexing service. She said quality standards are high, but corrections will not be made to metadata that is incorrect if it does not affect patron access. Boehr foresees the day when the CONSER standard record will not be seen as a separate entity, but rather the default standard for use in serials cataloging.

Heinrich believes that a fusion of title-level and article-level information is needed. She pointed to OCLC’s Common Data Format, and said federated searching of heterogeneous resources accomplishes some of what is needed now. Heinrich encouraged use of metadata from Serials Solutions or MarcIt, because it will provide at least minimal information on all the e-journal titles available through aggregators. She approved of the CONSER standard record for its less-cluttered, user-friendly look, but said its future depends on whether the journal as such endures and on the continued importance of title-level access.

RDA Update Forum

Melissa DeFino, Rutgers University

Library of Congress’ Beacher Wiggins began the session with a broad overview of
Resource Description and Access (
RDA). He explained the Committee of Principals’ role in overseeing the Joint Steering Committee (JSC) for AACR2 and RDA.

Marjorie Bloss, RDA Project Manager, described
RDA’s online functionality. Users will have the option to view content in three levels of detail: full, concise, or custom. They can also choose among various interfaces, such as “Search/Browse,” “SmartSheet,” or “Step-by-Step.”

John Attig, ALA representative to the JSC, explained that
RDA is “a metadata schema, an application profile, and a content standard like
AACR.” It was based not only on
AACR, but also on
FRBR and
FRAD. He presented an outline of
RDA, noting that it is composed of two main parts: description and access, and access point control.
RDA will also bring some significant rule changes. Most abbreviations will be eliminated. The rule of three will no longer affect the primary access point, and the compiler of a work will be acceptable as a primary access point.

Diane Hillmann, CC:DA Liaison for the Dublin Core Metadata Initiative (DCMI), reported on a meeting held at the British Library in London on April 30, 2007. The purpose of this meeting was to examine
RDA in relation to other metadata standards. It was agreed that RDA and the DCMI should collaborate in the future.

The program closed with a question and answer session with the audience. Most questions reflected the audience’s concerns that
RDA is not simplifying existing cataloging standards.

Collecting for Institutional Repositories: All the News that's Fit to Keep

E. Giltrud, Catholic University of America

This program articulated views from the perspective of three universities that were early adopters of digital institutional repositories. While each approached the Who, the What, and the How from a different perspective, the content, format, depositors, copyright issues, and lessons learned were remarkably similar. A repository is not a “build it and they will come” project: marketing, promotion, faculty “buy in,” visibility, serendipity, flexibility, and networking were key factors for success.

Susan Gibbons, Associate Dean, Public Services and Collection Development, University of Rochester (UR), discussed the lessons learned from the soft launch in early 2004 of UR Research, built on the open-source DSpace platform. UR spent months crafting a framework of policies. “The What” became any “faculty supported content.” “The Who” are teaching and research faculty and their proxies, which include librarians. “The How” relates to supported formats that ensure preservation. Copyright is handled through non-exclusive rights to distribute the works.

Jim Ottaviani, Coordinator of the University of Michigan’s Deep Blue Project, approached the repository with the University’s pre-populated digital works. The accepted What is the broadest spectrum of scholarly works, computer programs, and “director’s cut” material. The Who is broader in scope and adds administrative units. The How relates to permanence and safe storage of deposited works. After buying back copyrights, they found that a work was cited 25 percent more often when it could be found in Google Scholar.

George Porter, EAS Librarian, Information Services (Library), Caltech, discussed the Collection of Open Digital Archives (CODA), which takes a knowledge management perspective for the institution’s knowledge management data warehouse. The Electronic Theses Database was the starting point. With the addition of “grey literature,” “faculty sponsored content” became a key component, as is the non-exclusive right to preserve and distribute the work. Blended into the strategic objectives and the accreditation self-study, CODA represents the scholarly communication of the community.

For the bibliography, please see Collection Development & Electronic Resources Committee.

Bringing Order to Chaos: Managing Metadata for Digital Collections

Brian McCafferty, Wabash College

Jane Greenberg, School of Information and Library Science, University of North Carolina at Chapel Hill, described DRIADE (Digital Repository of Information and Data for Evolution). DRIADE is addressing the need to aggregate metadata for data repositories serving evolutionary science. Both large and small data repositories exist in this interdisciplinary field, containing data sets collected by individuals over time and often unique in structure. DRIADE will be a one-stop destination for data deposit and searching, supporting acquisition, preservation, resource discovery, and reuse. Most of the major journal publishers in this field will participate, and DRIADE will provide access to data for experiment replication, information exchange, and authentication.

Ann Caldwell, Brown University Library, described the development of project management software used to track digitization projects. This system allows for considerable granularity in assigning authorizations, permits tracking of projects through the digitization/metadata process, extends quality control efforts, and documents all aspects of digitization (project specifications and ownership, equipment and software used, and all actions related to an item by date, operator, etc.). It will also enhance the Library’s ability to record and collect technical data.

Erin Stalberg, Head of Cataloging Services, University of Virginia (UVA), discussed the mainstreaming of metadata creation in the cataloging department. She emphasized that catalogers understand structured data from their experience with MARC and have extensive experience with workflow, training, and staffing issues. At UVA the goal is “one workflow,” but the reality is microworkflows even for some traditional materials. The key to successfully mainstreaming metadata creation is to recruit catalogers based on abilities and interest, designing training that relates to specific projects, and forming support groups for these catalogers that allow them to take charge, claim territory, and be perceived as experts.

M. Claire Stewart, Northwestern University, is project coordinator for digital projects. She described two projects using Fedora for managing digital content along with technical, administrative, descriptive, and rights metadata. She illustrated how this system incorporates standards and tools such as JHOVE, METS, MODS, DC, MIX, and PREMIS to bundle data for individual projects. Issues raised by this approach include determining appropriate structures for all metadata, finding tools for efficient generation of metadata, the need to maintain metadata directly in Fedora, and establishing the trustworthiness of the Northwestern repository.

Karen Mokrkzycki, University of California - Santa Cruz

This was the final program in a series of audio preservation programs sponsored by the ALCTS Preservation and Reformatting Section (PARS). Tom Clareson, Program Director for New Initiatives at PALINET, introduced the panel.

The presentations addressed new technology, digital audio preservation issues and standards, new and continuing funding sources, and working with vendors. The audience received a CD containing all presentations from the entire series of programs (2005–2007), which is available from Safe Sound Archive (email
georgeblood@safesoundarchive.com).

George Blood, Safe Sound Archive, reviewed the current state of audio preservation standards (outdated) and best practices (abundant), and highlighted the work of the European Broadcast Union for its development of the Broadcast Wave Format (.bwf), which will provide a body of useful experience for standards and best practices development. He presented in detail the characteristics, advantages, challenges and typical solutions for an audio Preservation Set, which includes a Preservation Master, a Use and Access Copy, and a Web-Accessible Copy, noting that the Preservation Master is the most important copy to manage. For each copy of the set, selection of physical format, technical specifications, and rights, restrictions and infrastructure issues were addressed. In discussing selection of CD-ROM for the Use and Access Copy, due to its greater reliability for playback, the speaker noted that with a reasonable level of care, CD-ROMs may last between twenty-five and fifty years, with carrier obsolescence more likely to happen before format obsolescence and data deterioration. The question of how long an item may be played is critical. Blood noted that the preservation challenges are captured in the definitions recently drafted by the ALCTS PARS Working Group on Defining Digital Preservation: access to and accurate rendering of authenticated content over time “regardless of the challenges of media failure and technological change.” He indicated that digital makes migration a permanent way of life. In defining our preservation strategies, examining lifecycle cost over multiple migrations is important, and institutions must assess their ability to support this.
Blood noted that help is on the way: the Association for Recorded Sound Collections Technical Committee Transitional Repository Subcommittee is developing recommendations for best practices for digital preservation strategies that will reflect the nature of the source material, collection size, and institutional support for lifecycle management.

Digital Stewardship

Joyce Ray’s (IMLS) presentation was entitled “Digital Stewardship: Sound and Audiovisual Media.” She began by noting that the Heritage Health Index identified the existence of over 46 million sound recordings and more than 40 million moving images, with more than 50 percent of these in unknown or at-risk condition. She then provided a comprehensive overview of the IMLS response, a Conservation Initiative entitled “Connecting to Collections.” This initiative includes a National Conservation Summit on June 27-28, 2007, with representation from libraries, museums and archives from all fifty states, four conservation forums to take place in 2008 through June 2009 in various locations around the country, the development of a Conservation Bookshelf which libraries, museums and archives can apply to receive, and new Statewide Conservation Planning Grants with awards of up to $40,000. Other opportunities for IMLS funding include National Leadership Grants for Libraries and Museums, which include categories for Research and Demonstration Projects, projects in Building Digital Resources (both digitization and tools development), and Library and Museum Collaborations. Examples of projects with a digital preservation component were presented. 
These include the Florida Center for Library Automation (development of the DAITSS software application for preservation repositories, freely available); Alabama Commission on Higher Education (state-based networked repository based on the LOCKSS model); University of California Santa Barbara (digitization of more than 6,000 wax cylinder recordings); University of Denver (development of a shared infrastructure for audio resources for the Collaborative Digitization Program); University of North Carolina at Chapel Hill (online access to digitally reformatted films and preservation of original films offline); and University of Illinois at Urbana-Champaign (audio-visual self-assessment tool for preservation of and access to endangered recorded sound and moving image collections).

For the first time, collaborative planning grants of $30,000 are also available to enable project teams from more than one institution to work together to plan a collaborative project in any of the three categories. Guidelines for all grant programs are on the IMLS website. A good source of current information is the free IMLS online publication
Primary Source.

Ray shared many important perspectives on digital stewardship which grew out of the WebWise 2007 conference “Stewardship in the Digital Age: Managing Museum and Library Collections for Preservation and Use.” Conference speakers noted the disappearance of analog processes and materials from the marketplace, and emphasized the need for continual migration, redundancy, active file management, and making decisions at the outset for the long term rather than the short term. IMLS is working with NISO on the third edition of the online publication
“A Framework of Guidance for Building Good Digital Collections.” In summary, IMLS recognizes that preservation of digital content involves reformatting and storing in trusted digital repositories and that collaboration among institutions is essential to achieve economies of scale and benefits of shared expertise.

Working with Vendors to Save Sound

Tara Kennedy (Yale) led the audience through a review of audio conversion project preparations. Her presentation was entitled “Saving Sound: Working with Vendors.” Kennedy began by noting the Heritage Health Survey results that indicate that more than 40 percent of the nation’s sound recordings lack backup copies and are at risk of loss due to decay, disaster, or obsolescence. Her presentation included a review of things to know at the start of an audio conversion project, what to ask and expect with a Request for Information (RFI), and the questions and process for the Request for Proposal (RFP). The following needs to be known before vendors are approached. Understand how a proposed audio project fits in with other library projects and initiatives, and be clear about the scope and purpose of the project, whether it is for preservation, access, broadcasting, or a combination of these. Define expectations for the final product that are realistic for the age and quality of the original. Know what is on the recording and what you are willing to pay for digitization, audio clean-up, labeling, metadata, and re-housing of the originals. What copyright and privacy issues exist for the material? How will the project be funded? Most importantly, do the institutional capabilities and resources exist to support digital audio? The RFI process allows a clear description of the project to be presented, and requests information from vendors about their relevant experience; the specific procedures, methods, and equipment they will use; on-site security and transport of items; quality control; and costs for conversion, metadata creation, and other processing for each file type created. Finally, the RFP uses information gained from the RFI responses to develop a request for a price quote based on a very specific project proposal and timeline.
The RFP will, in addition, specify who the project liaison will be, how communication will be conducted during the project, and provide other very specific information such as how long the price quote will be valid in the event the project is delayed, staging of the project, etc. Kennedy described the benefits to be gained from staging the project work, and also recommended doing a pilot project with the selected vendor to measure work performance and ability to produce the desired product, and to assess the client vendor working relationship for the full project.

The panel was asked to consider how e-books are distributed, how that fits with library workflows, and what kind of support services are needed to facilitate the adoption of e-books by libraries and users.

National University and e-books

Jeffrey Earnest opened the panel discussion with an overview of National University’s unusual profile. It is a newer institution, geared toward adult learners, most of whom already have full time employment. Seventy percent of the course offerings are online. National has been an early adopter of e-books, working with numerous providers, including niche publishers. Earnest discussed decision points in the process of acquiring e-books, including interface, access versus ownership models, and print plus online availability. He noted the challenges of dealing with many licenses, the issues of coordinating the availability of the electronic book, the delivery of a catalog record, and the library’s internal workflow in cataloging, acquisitions, and systems operations.

Individual e-books are selected much like print books, with additional consideration of the interface and access models factored into the decision. Earnest noted that ownership was their preferred model, but there are always concerns about perpetual access and whether the library can support access and delivery of content locally should a publisher cease trading. He stated that publisher packages of e-books are a good approach to acquiring content. He also pointed out the multiple interfaces represent a challenge in successful marketing to users.

e-books in Approval Plans

Michael Levine-Clark discussed the potential for including e-books in approval plans. He described the University of Denver’s current approach as a hodgepodge of electronic reference materials, e-books from small providers, and retrospective collections such as Early English Books Online. The Colorado Alliance, in which the University of Denver participates, is developing a shared purchase plan designed to decrease duplication of monographs across the consortium. At present, this is focusing on print monographs, but the expectation is that it will extend to e-books. He noted that users are treating e-books as a discovery tool to lead them to the printed book. He hopes to see a mix of formats, but believes that a different pricing structure is needed; electronic often costs as much as, or more than, print. Publishers also tend to split their offerings among multiple providers. Publishers seem reluctant to commit to e-books, making backlist titles available but delaying access to frontlist titles. He argues that early access to the frontlist will serve as a marketing tool and drive print sales.

Publishers are beginning to provide access to more frontlist titles, but there are still delays, which have a negative impact on including e-books in approval plans. He believes that publishers need to consider e-books in the manner used for print books, and make them easily available through traditional library vendors. Notification of forthcoming titles needs to be available as it is for print. In conclusion, Levine-Clark stated that it is possible to integrate e-books with print books in an approval plan, but there needs to be comprehensive coverage of the publishers, and none of the major book vendors’ products are quite at that point.

e-books and the Revenue Stream

Brian Weese commented that librarians are ahead of publishers in thinking about e-books, and retailers are “the caboose.” A recent conference hosted by O’Reilly confirmed his sense that now is the time to move into e-books. Publishers do not really know how e-books are used; there are many models, but no clear direction, and much hand-wringing as a result. Island Press is evaluating its relationships with e-book providers. The O’Reilly model, in which current content free on the web drives users to purchase the print book, is one with which they will experiment. The challenge is how to make content available electronically without disrupting the revenue stream. Another model is the bundling of print with electronic, but again, there is insufficient data to develop a coherent business model. Weese noted that there is a lot of fear among publishers, but very little knowledge or hard data. In addressing the question of the disintegration of the book, he commented that publishers have not determined how to pay royalties at the chapter level. He mentioned that the Caravan project, in which six nonprofit publishers are making multiple formats available simultaneously, went live in May 2007; it is too soon to gather any data from the project. Ingram is supporting the project, and several more publishers have joined in. Each publisher is offering about five titles. The program is retailer-based, but it is hard to imagine how this can succeed. He closed by characterizing the current situation as inertia: publishers cannot figure out what to do next.

Assessing the Status of the e-book

Rich Rosy summarized the growth of e-books over the past seven years, noting usage is up, but echoing Weese’s comment that he does not know how they are used. Rosy noted that publishers participating in the Google Book project believe it has increased print sales. He offered the example of the National Academy Press, whose content is free on the web and available through NetLibrary, and which still has strong print sales. He believes the rights issues are changing and chapter sales will be next. He agreed with Levine-Clark that e-books need to be part of approval plans. The simultaneous availability of print and electronic is changing the workflow of book production. Increasingly, electronic is coming first, allowing publishers to move to a print-on-demand model. Publishers are beginning to make decisions about electronic earlier in the production cycle, and are beginning to include this in advertising. Platforms are another issue—are they too complex, and what is really needed? Usage has increased, but is being counted differently (at the page or chapter level). A lot of this change is being driven by libraries in response to patron needs. He believes the model for books will be different from journals in that print will continue to exist, and electronic will be supplemental.

Leslie Lees commented that answers to the questions about business models and usage are both contextual and transitional. Trial and error is possible in libraries where there is high tolerance for experimentation and change. Publishers are less tolerant of experimentation that may disrupt revenue, but clearly they are changing and making more frontlist titles available. There is still a lot of analysis to be done. A survey sent to 2,500 libraries asking them to consider business models and requirements for e-books generated a 20 percent response. Forty-five percent of the respondents indicated that the key drivers in selecting e-books, in order of importance, are depth of the collection, price, currency of content, and access models. Respondents found the models confusing, with too many options; the preferred models are either subscription or purchase. Sixty percent of the responses indicated no duplication between print and electronic. Discovery is critical; publishers and libraries both need to acknowledge that web navigation extends beyond particular products or OPACs and consider how best to integrate services.

A lively question period followed the panel presentations. October Ivins noted that publishers may need to pay to provide their content on various platforms, and this influences pricing. Weese responded that the Caravan project was trying to price all formats at nearly the same rate, but project funding makes this artificial. He noted that there are two important standards in the e-book equation: the reader and portability of content. Rosy pointed out that PDF is now the standard, but XML flows are becoming a new standard. Peter Allison noted the increased importance of table of contents information for making print versus e-book purchase decisions, stating that NetLibrary no longer offers this metadata. Kim Stanley asked if subscription agents offered e-book collections. The general sense was not, or not yet, but there was agreement that e-book collections are often much more like subscriptions than like print book purchases. Linda Brown reiterated the importance of frontlist titles and asked how e-books will affect the availability of print.

Rosy mentioned Lightning Source as an example of increasing print-on-demand access. Will Wakeling asked the panelists to speculate on pricing of e-books, noting that the constant space pressure on libraries and the desire for e-books as relief from this pressure could lead to premium pricing. Lees responded that the print model was clear, but e-book pricing is more diffuse. There are many unknowns, but Rosy stated that they are working with publishers to understand the price point.

There were more questions and comments to be made, but the allotted time was up and the chair closed the meeting noting that the next forum’s topic was suggested by the Janus Conference: What is the core?

Virginia Taffurelli, Science, Industry and Business Library, New York Public Library

Peter Morville, president of Semantic Studios and co-founder of the Information Architecture Institute, spoke to a standing-room-only audience, which is quite impressive considering that the session competed with other ALA programs, including one featuring Julie Andrews. In his welcoming remarks, Bruce Johnson, ALCTS President, quoted Robert F. Kennedy, “May we live in interesting times.”

Peter Morville opened his remarks with the statement “Information that’s hard to find will remain information that is hardly found.” Interestingly, the first edition of Morville’s book,
Information Architecture for the World Wide Web, did not contain a definition of information architecture (IA). Later editions contained four definitions, but some people argue that the definition of IA is still not clear. Morville demonstrated several websites to illustrate different design styles. Using illustrations from Jesse James Garrett’s book,
The Elements of User Experience, Morville explained the “underlying relationships among various elements.”

Website designers should create experiences that are valuable, desirable, accessible, credible, findable, and usable. Usefulness is the most important element. According to Morville, “Users must be able to find our websites, find their way around our websites, and find information despite our websites.” The Google website is a good example of bubble-up technology. The number of hits affects relative placement of websites on a search results screen. The key is “location, location, location.” Morville used other websites to demonstrate other elements. He asked, “How do we break down walls for users who live in Google and don’t know about our databases? You can’t find our articles on Google.”

Getting on Board

Morville commented that we must transition from fear to enthusiasm for the openness of Web 2.0. We have one foot in the past and one in the future. The products we develop today may take six to eighteen months to be launched. Therefore, we are developing the legacy systems of tomorrow.

Ambient findability is the “ability to find anyone or anything from anywhere at anytime.” Perfect findability is unattainable. Compare today with the Middle Ages, when books were chained to tables. Morville quoted Dilbert: “Information is gushing toward your brain like a fire hose aimed at a teacup.” Yet in today’s information age, many people in developing countries are still starving for information.

Morville then discussed several new websites and devices available today, such as Microsoft’s Surface, Neighboroo.com, the Cisco wireless location appliance, RFID chips, and other products utilizing global positioning, wireless technologies, mash-ups, and social tagging. We must remember that one size does not fit all. Referring to Chris Anderson’s The Long Tail, Morville stated that we need to continue to explore algorithms, but we also need to explore best bets so our users can discover our resources.

In conclusion, Morville told the story of the three stone cutters. When asked what they were doing, the first one replied, “I am making a living.” The second replied, “I am doing the best job of stone cutting in the county.” The third one responded, “I am building a cathedral.” Libraries are cathedrals of knowledge that lift us up.

The ensuing question and answer period, as well as the size of the audience, indicated that this is a hot topic in today’s rapidly changing environment. Web 2.0 technology is ubiquitous, and as librarians we need to stay abreast of these changes and learn how to use them to best advantage.
Morville's presentation slides can be found online.

Technical Services 2.0: Using Social Software for Collaboration

Katharine Farrell, Princeton University Library

The ALCTS Acquisitions Section Technology Committee sponsored a panel discussion entitled “Technical Services 2.0” on Monday, June 25. Attendance at the session exceeded expectations, with an audience of 400. The panel, moderated by Rick Lugg of R2 Consulting, consisted of Matt Barnes, R2 Consulting (Eds. Note: Mr. Barnes was not employed by R2 Consulting at the time he was invited to participate on the panel), Beth Picknally Camden, University of Pennsylvania, and Elizabeth Winter, Georgia Institute of Technology.

Lugg set the frame for the panel with his opening remarks, noting that Wayne State University had just posted a position for a “blog and wiki” librarian and citing an article in D-Lib Magazine about the University of Washington’s initiative to embed links to its content in Wikipedia articles. The Web 2.0 dynamic puts the consumer in control. How does this extend to libraries?

Overview of Web 2.0/Library 2.0 Characteristics

Barnes opened the panel by providing a rapid-fire overview of Web 2.0/Library 2.0 characteristics. At the Charleston Conference in November 2006, there was much talk about Web 2.0/Library 2.0, but everyone had a different definition of what that meant. Tim O’Reilly defines Web 2.0 as the move to the web as a platform on which you can build. Tim Berners-Lee holds that Web 2.0 is simply jargon. Barnes offered a long list of examples of Web 2.0 applications, such as multimedia sharing, social tagging, social bookmarking, blogs (including audio blogging), RSS syndication, and more, providing examples of each: YouTube, Technorati, Geotagging, NING. He cited O’Reilly’s rules for the development of applications: build applications to harness web potential, knowing that things will happen which cannot be anticipated; treat software not as an artifact but as a process of engagement; open your data and services to use by others; and think of applications not as client or server, but as devices in between. He also referred to various virtual environments such as Second Life.

Barnes noted that Web 2.0 is a way to solve problems in light of an organization’s mission and goals, but there is no single or correct solution, only a state of continuous beta. He cautioned attendees: get ready for Web 3.0, the semantic web.

Library 2.0 and Tech Services

How does this relate to libraries? Library 2.0 is the initiative to make library space, both virtual and physical, more open to community needs. What does this mean on the technical services side of the library equation? There are a number of possibilities, such as opening the online catalog to patron tagging and review, supporting social networking between staff and patrons, social bookmarking, and using RSS feeds to push data to patrons’ devices. Systems currently on different platforms need to be unified to make access and activity seamless, with the ease of use common to many commercial web applications. For example, is your AV collection as easy to access as YouTube? Can your patrons create mash-ups using data from your library or from other sources?

Penn, Social Bookmarking and Tagging

Following Barnes’ overview, Beth Picknally Camden discussed the University of Pennsylvania’s social bookmarking initiative, PennTags. It was developed locally using Ajax, JavaScript, and wetware, and there are currently about 900 users on the network. She noted that it differs from Flickr and other such sites in its academic focus. It allows users to tag articles and online catalog records, create annotations, and manage bibliographies. The genesis of PennTags was a request from a film studies professor seeking a better way for students to create annotated bibliographies. The team that developed the resulting application included a catalog librarian. Picknally Camden began to experiment with the program as a way of learning more about social bookmarking. She noted that realizing others could see what she was doing shaped her tagging behavior. She also pointed out that she found it interesting to see what others were marking and what they were reading. She has begun to use this as a kind of current awareness tool with staff, using tagging as a way to make others aware of trends she sees. PennTags includes an advanced bookmarking feature that allows users to annotate citations, which she has found useful as a way to remind herself or others why she wanted to read a particular article or book.

Picknally Camden then described an ongoing project to catalog databases that uses tag clouds and annotations rather than a spreadsheet; it is more flexible and less static. She also mentioned plans for using tagging in their institutional repository. Twenty-four percent of the tags in the system are for records in the online catalog. As a result of this finding, the default display in the online catalog has been changed to include the subject headings. While tags are personal and may be meaningless to others, analysis of the tags indicates different levels of granularity. Catalogers provide subject headings for the whole book, but a user may want to tag at the chapter level. She used the example of the tag Hollywood, which in a subject heading simply indicates a place, being used to refer to the concept of commercial movie making. She posed the question of whether tagging could inform subject analysis, suggesting the potential to bring out new aspects of a work and to supply cross-references or updated terms that are in general use long before they appear in a structure like the Library of Congress Subject Headings. She noted that there is not yet a critical mass of data or users in PennTags, and closed with a quote from David Weinberger to the effect that folksonomies will not change taxonomy.

Interdepartmental Library Communication by Wiki and Other 2.0 Tools

Elizabeth Winter discussed the Georgia Institute of Technology (GT) e-journals Wiki and ancillary applications that support e-resource management and communication between technical and public services. She began by talking about play versus productivity, noting that technical services staff are traditionally concerned with counting and producing and need to take more time to ‘play’. She listed the core competencies of Web 2.0 companies as control of unique data, trusting users as co-developers, harnessing collective intelligence and the development of lightweight interface models. These same directions drove the development of the Georgia Tech Toolkit. The motto is “throw it out there and see what sticks.”

The GT journals Wiki was developed in response to the problem of high-volume activity and communication in a complex workflow environment. A group of staff identified a Wiki as a possible solution to allow users to share information, track progress, and reduce the burdensome volume of email. Winter trained staff in the techniques of editing a Wiki, using the management of the renewal process as a driver, and thereby provided collection development staff with information about activities of interest to them. Using the Wiki allowed staff to push email notifications, and to share and track information on troubleshooting access problems with e-resources. One real benefit has been building staff confidence in working with new technologies. GT is also using Google Spreadsheet in conjunction with the Wiki. This mash-up of the two applications replaces a static spreadsheet, with its associated versioning and permissions issues, as a way of managing cancellation projects between collections staff and technical services staff.

Winter also described the use of instant messaging as a staff tool. A pilot was conducted with about thirty staff as a more efficient way to manage interdepartmental communication. Prior to rollout of the pilot, the team assembled an FAQ and provided basic training. GT has also used a Wiki in acquisitions to collect and collaborate on documentation; it runs on PmWiki software and is password protected. There was a learning curve, but learning created a sense of ownership and satisfaction among the staff, which ultimately resulted in greater productivity.

Q&A Follow-up

After the panel presentations, Lugg facilitated a question and answer session with the audience.

Questions and comments for Winter included: Why use Google Spreadsheet over a shared folder on a network drive? There had been issues of access using the network drive. Were there concerns about data security on an external server that might dictate the choice of wiki platform? She did not think that was an issue at this time. Zoe Stewart Marshall noted that her presentation at the LITA Forum had been referenced, and misrepresented, in Ms. Winter’s remarks; she stated that play has no expectation of result, which is quite different from experimentation with a known technology with the intent of developing a tool for staff use.

Questions and comments for Picknally Camden included: Are tags used in catalog records if there are no LC subject headings? The answer was “no.” Are there plans to map tags to subject headings? No, and that seems contrary to the nature of tagging. There is no substitute for controlled vocabulary. Tagging is more like a mob mentality. Lugg noted that a published comparison between Wikipedia and Encyclopedia Britannica seemed to confirm the notion of collective intelligence rather than mob mentality. Picknally Camden asked the audience if anyone was using collaborative tools to make connections between tags and controlled vocabulary.

Further questions about PennTags elicited the information that tags are permanent, but when users leave the institution they may no longer have access to their tags. Tags are a layer on top of the catalog and do not become part of the catalog record. It is not known whether users actually borrow the titles they tag, and there has not been a real advertising campaign to make the capability known. PennTags seems to have the potential to develop as a readers’ advisory tool.