Remediation of Near-Match Data: Processing Bibliographic Records for Migration to a New ILS

Margaret "Annie" Glerum, Florida State University Libraries

In the summer of 2017, Florida's 40 public universities and colleges will merge into a single ILS, a project overseen by the Florida Academic Libraries Services Cooperative (FALSC). As chair of the Cataloging/Authorities Working Group of the FALSC ILS Implementation Team, the presenter outlines automated processes for analyzing and remediating data in 500 fields, standardizing "near-match" strings to minimize unnecessary duplication of equivalent information when university and college bibliographic records are merged. The first step is to flip any truly local data in 500 fields to 590 fields. A report of system numbers and 500 fields is then loaded into OpenRefine, which clusters the data so that the preferred version of each note can be chosen. Instructions on using OpenRefine to identify local notes and standardize general notes will be provided to each university and college that wishes to remediate its own data.
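The clustering step described above relies on OpenRefine's key-collision method, which groups strings that reduce to the same normalized "fingerprint." A minimal sketch of that idea in Python (the sample system numbers and notes are invented for illustration; OpenRefine itself performs additional normalization, such as accent folding):

```python
import re
from collections import defaultdict

def fingerprint(note: str) -> str:
    """OpenRefine-style fingerprint key: lowercase, strip punctuation,
    then sort and deduplicate the remaining tokens."""
    tokens = re.sub(r"[^\w\s]", "", note.lower()).split()
    return " ".join(sorted(set(tokens)))

def cluster_notes(rows):
    """Group (system number, 500 note) pairs whose notes collide on
    the same fingerprint key; keep only clusters of near-matches."""
    clusters = defaultdict(list)
    for sysno, note in rows:
        clusters[fingerprint(note)].append((sysno, note))
    return {key: pairs for key, pairs in clusters.items() if len(pairs) > 1}

# Hypothetical report of system numbers and 500-field notes.
rows = [
    ("000001", "Includes bibliographical references."),
    ("000002", "Includes Bibliographical References"),
    ("000003", "Title from cover."),
]
near_matches = cluster_notes(rows)
```

Here the first two notes collide on one fingerprint, so a cataloger would pick a preferred version for the whole cluster, while the third note stands alone.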

The National Institute of Standards and Technology (NIST) Research Library is a federal library located in Gaithersburg, Maryland. Its mission is to support and enhance the research activities of the NIST scientific and technological community through a comprehensive program of knowledge management. To fulfill this mission, the Library makes available proprietary databases, journals, and e-books, as well as agency content such as the NIST Digital Archives (NDA), oral histories, photo collections, NIST Museum objects, and NIST-authored technical publications. The Library also supports the publication and digitization of the agency's Journal of Research of NIST and NIST Technical Series publications. The Library's challenge has been to make all of its content as accessible and discoverable as possible through a "one-stop shop" single-search interface. Our solution was to implement a discovery layer, which brought obstacles of its own, including metadata mapping, cataloging inconsistencies, and unclean data caused by legacy practices. We used tools such as MarcEdit, XSLT scripting, and ILS vendor APIs in our data manipulation. In addition to launching our discovery layer, we realized that our ERM needed extensive clean-up, and that while legacy practices had evolved through the years, our workflows had not. We decided to investigate how other libraries were using their ERM as the basis for technical services workflows and which practices we could adopt. As a result of these changes, we anticipate increased discovery and use of our proprietary resources and agency content, and we hope to see increased impact through more frequent citation of NIST-authored content, raising the agency's profile in the scientific community.
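The kind of field-level clean-up described above (unclean data from legacy practices) is often a matter of normalizing whitespace and stray trailing punctuation before records are mapped into a discovery layer. A minimal, illustrative sketch of one such rule, not the Library's actual MarcEdit or XSLT workflow:

```python
import re

def clean_field(value: str) -> str:
    """Normalize a legacy field value for discovery-layer ingest:
    collapse runs of whitespace, then drop a dangling trailing
    separator (semicolon, colon, comma, or slash)."""
    value = re.sub(r"\s+", " ", value).strip()
    value = re.sub(r"\s*[;:,/]\s*$", "", value)
    return value

# Hypothetical legacy values with inconsistent spacing and punctuation.
cleaned = [clean_field(v) for v in ["  Title  from cover :", "Reprint ;", "Includes index."]]
```

Real workflows layer many such rules (and preserve punctuation that is meaningful, like a period closing an abbreviation), but each rule is this small.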

Doing Similar with Less

Rob Nunez, Kenosha Public Library

After the financial recession of 2007, the Kenosha Public Library restructured its staff to become leaner and more efficient; however, not all procedures and practices changed. The Collection Services team shrank from a staff of more than 20 to 9 overnight but continued to operate in the same fashion. When I was hired as the new department head, change soon came to the department. In this presentation I will cover how, as the new Head of Collection Services, I worked with staff to streamline workflows, created training opportunities, leveraged APIs and reports to automate tedious tasks, and used basic project management techniques to help ensure smooth transitions.