Author Archive

One important theme of today’s presentations at code4lib was the use of a community-based approach to provide solutions. There was also an interesting breakout session led by Lyrasis on the importance of open source solutions in libraries and why they are becoming so popular.

In order to develop a digital exhibit that would aggregate digital collections originally in different formats, the University of Notre Dame decided on a community-based approach and joined the Hydra Framework community. The community includes Stanford, the University of Virginia, DuraSpace, MediaSpace, and Blacklight. The Hydra Framework is a shared code base from which each Hydra community member benefits. It provides developers with a set of tools that facilitate the rapid development of scalable applications. Notre Dame’s digital exhibit architecture includes Fedora Commons as the repository, Apache Solr for indexing, and Blacklight with ActiveFedora as the interface.

The Chicago Underground Library also used a community-based approach to collect and catalog the history of the city of Chicago. They collected every piece of print material imaginable, including hand-made artist books, university press publications, and self-published poetry books. They also gathered information about each individual contributor so users can trace items back to them. They have accumulated over 2,000 publications so far, and their future goal is to expand the collection to include audio and video.

A representative from Lyrasis led a breakout session on how their organization can help libraries achieve their goals, and on why there is so much interest in open source solutions and what is driving that enthusiasm. It was interesting to find out that no attendee thought the decision to embrace open source rather than vendor-provided solutions was solely financial; everyone agreed it was more about the independence and flexibility that open source software provides. A long discussion followed on the costs involved in implementing open source software. Overall, the group found that open source solutions are definitely worth it.

Before commenting on today’s topics, I thought I would say a few things about the pre-conference session on “Solr: what’s new?” that I attended yesterday.

Although other interesting pre-conference sessions were offered concurrently, I chose the one on Solr because of the important role its SolrMarc utility plays in VuFind record indexing. If you wonder how this works: basically, SolrMarc reads records from an exported Voyager MARC load file, extracts information from various fields, and adds that information to the VuFind Solr index. I wanted to learn more about Solr and see whether any new updates could move our VuFind implementation to the next level. The pre-conference was rather technical. Erik Hatcher from Lucid Imagination was the presenter, and he talked about how drastically Solr has continued to improve over time. New features in development include SolrCloud, which relies on ZooKeeper, a centralized service for maintaining configuration information, to give programmers shared, centralized configuration and core management. He also talked about pivot/grid/matrix/tree faceting, a hierarchical way of providing facets that branch out into sub-facets to further narrow a search. Another feature Solr has improved is date faceting, which will be visible in our upcoming VuFind upgrade.
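To make the date-faceting idea concrete, here is a minimal sketch of how a client might assemble such a Solr query. The core name, field name, and base URL are hypothetical, and the parameters follow Solr's classic `facet.date` syntax (newer Solr versions replace this with range faceting):

```python
from urllib.parse import urlencode

def build_date_facet_query(base_url, q="*:*"):
    """Assemble a Solr select URL that facets on a publication-date field.

    Uses Solr's classic facet.date parameters; field and URL names
    here are illustrative, not from any particular VuFind install.
    """
    params = {
        "q": q,
        "facet": "true",
        "facet.date": "publishDate",            # hypothetical date field
        "facet.date.start": "NOW/YEAR-10YEARS",  # last ten years...
        "facet.date.end": "NOW",
        "facet.date.gap": "+1YEAR",              # ...bucketed by year
        "wt": "json",
    }
    return base_url + "?" + urlencode(params)

url = build_date_facet_query("http://localhost:8983/solr/biblio/select")
print(url)
```

Solr would answer such a request with per-year counts that a discovery layer like VuFind can render as a clickable date facet.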

The actual conference started today, and Erik has already blogged about all the important subjects, so I will focus on what Demian had to say about VuFind.

The idea of centralizing code introduced by Erik Hatcher was also embraced by Demian Katz when he talked about the redesign goals for VuFind. He is aiming to centralize MARC-specific code to facilitate replacement. To be a little controversial, Demian stated that “MARC must die”, meaning that library data is not limited to MARC but also includes other data types that are becoming more and more popular. He expressed pride that the upcoming release of VuFind (VuFind 1.1) will provide, among other things, full Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) server support, which will enable harvesting metadata into a directory for further manipulation. Here, Demian offered an answer to the question “where is my data?”: grow the toolkit so that it can obtain records from remote sources, process harvested files, and index arbitrary XML records. According to Demian, understanding record drivers gives the programmer a lot of control over VuFind.
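As a rough illustration of what an OAI-PMH harvest involves (the record below is entirely made up, and real harvesters would also handle resumption tokens), a client requests `ListRecords` from the server and parses the XML response:

```python
import xml.etree.ElementTree as ET

# A tiny, invented fragment of an OAI-PMH ListRecords response.
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:example.org:1</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Sample Record</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def extract_records(xml_text):
    """Return (identifier, title) pairs from a ListRecords response."""
    root = ET.fromstring(xml_text)
    out = []
    for rec in root.findall(".//oai:record", NS):
        ident = rec.findtext("oai:header/oai:identifier", namespaces=NS)
        title = rec.findtext(".//dc:title", namespaces=NS)
        out.append((ident, title))
    return out

records = extract_records(SAMPLE)
print(records)  # [('oai:example.org:1', 'Sample Record')]
```

Harvested records like these could then be written to a directory and fed to an indexer, which is the workflow the OAI-PMH server support is meant to enable.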

This blog post gives an overview of my first experience at LITA. It also highlights some of the important points of the conference and shows why the topics discussed made LITA Forum 2010 in Atlanta, GA, a wonderful experience.

I had the opportunity to attend the preconference on Virtualize IT, presented by Maurice York, Head of Library Information Technology at North Carolina State University. Among the highlights of his talk was a great illustration of the latest trends in IT:

Virtualization, illustrated by a suitcase, is defined as one piece of hardware doing the work of several machines (virtual machines). It separates local operations from physical resources.

Cloud, illustrated by a plug, is defined as a set of pooled resources and services available over the internet, offering the user elasticity, scalability, high availability, low upfront cost, and pay-for-what-you-use pricing.

Grid, illustrated by a power station, is defined as a massively parallel and distributed computing entity: geographically dispersed, loosely coupled computers acting like a supercomputer or mainframe.

Maurice also talked about his library’s positive experience with thin clients, which are currently deployed on public workstations in their library and are in the process of being rolled out to staff computers throughout the library. He emphasized some of the great benefits of using thin clients and pointed out that only a single image needs to be changed to see the result on computers across the organization. Other benefits are centralized patching, virtualized applications (no conflicts), live migration and failover, the ability to create and destroy virtual machines in a blink, and access to images and applications from any device (lending).

Green IT was also one of the important topics that Maurice introduced. According to him, mobile devices (iPhones, iPads, Blackberries, etc.) draw more power than any other electronic devices on campus because they are everywhere, need to be constantly charged, and almost every student has one. He then pointed out that one server emits as much carbon dioxide (CO2) in a year as a sport utility vehicle that gets 15 mpg. Maurice believes that in about 10 to 20 years, “computing” will mean “virtualization”: as the world realizes that we are running out of power, and since power “cannot be stored”, accommodating the ever-increasing demand will require one piece of hardware to perform the work of several machines, which is what virtualization is all about.

Other ways of addressing this eventual power shortage include adopting zero-carbon-emission strategies like wind, solar, geothermal, and solar thermal. He also recommended moving data centers to “where the power is” via cloud computing, smart grids, and cyberinfrastructure, pointing to the notions of “follow the wind”, Infrastructure as a Service, and trade offsets.

In order for libraries to keep up with future technology requirements, Maurice prescribed a basic assessment of one’s institution, covering application suites, staff computing, networking services, power, data, and facilities, as all of these require particular attention. To him, in order to efficiently sustain users’ demand for computing resources, library IT support providers need to understand the infrastructure they are running even if they do not manage it directly. This includes knowing their facilities people.

One of the final activities of the preconference involved working in small groups to design a make-believe, ideal library. There were several groups, each with a topic to discuss. One group had to design a library facility; another was tasked with networking in the library. My group was asked to focus on applications. This was interesting, as I was the one who had to speak on behalf of the group and share our discussion with all of the conference attendees. Our group decided to go open source on all applications. We also decided to host our services in the Amazon cloud, use Evergreen as our ILS, VuFind/XC/Blacklight as our discovery interface, Ubuntu as our operating system, MySQL/Solr/MongoDB for data storage, Apache Tomcat as our web server, and Agile as our development model. We also agreed to deploy only thin-client stations throughout the library and to rely on SSL, LDAP, VPN, and PKI to configure security in the cloud.

As you can tell, the preconference was worth it and was an opportunity to network with others who are moving in the direction of virtualization, whether with a private cloud as at NCSU, or with a public cloud, which we are more in favor of at the ZSR Library.

My ultimate great experience was the presentation that Erik, Kevin, and I gave on Making your IT skills virtual during the main conference last Friday. I agree with Susan when she said that the presentation was successful. Since this was my first presentation at LITA (my first professional presentation), I felt extremely overwhelmed, but I had the opportunity to demonstrate my understanding of the part of the topic that I covered and responded to multiple questions that clarified in more depth what it takes to succeed in cloud computing. Our message was well received, and several people lined up after the session for one-on-one discussions with each of us.

This experience, coupled with all the great sessions that I attended, contributed to making this first LITA experience a milestone in my efforts to fulfill my scholarship requirements.

The VuFind 2.0 conference officially started this morning at around 9am, with participants discussing ways to bring VuFind to the next level in a version 2.0. The discussion focused on highlighting trends and goals for the future development of the software.

The group recognized that a collective effort from the different libraries using VuFind will be crucial to improving the software. Words like collaboration, community, and knowledge commons came up frequently.

Villanova University has been able to merge VuFind with Summon, a Serials Solutions web-scale discovery service, to improve findability and discovery by offering users the ability to search articles (using Summon) and books (using VuFind) more efficiently and see the results on the same page.

The conference attendees expressed interest in social metadata, where tags from different systems would match each other. Here, the example of the Drupal-driven Social OPAC was given and raised significant interest. Developers in Australia have been able to include a tagging feature in VuFind: when something is tagged, it appears in a harvesting interface called The Fascinator.

There was a breakout session on authority data and linked data, where we talked about the useful features of VIVO, an open-source semantic web application that brings together, in one site, publicly available information on researchers across institutions, and VIAF, a project developed by OCLC that aims to link libraries’ national authority records and make that information available on the Web.

The Code4Lib Conference started yesterday, and Cathy Marshall, Senior Researcher at Microsoft’s Silicon Valley Lab, gave the keynote speech. She talked about personal digital archiving and shared some insights on how, with digitization, people can now hold on to everything digital. I found her talk pretty interesting, as it made me wonder whether I should keep everything. Then I looked at the files and folders on my work computer, which I have had for about six months now, and realized that I have saved everything I came across; my Outlook inbox is so cluttered right now because I have kept every email I have received so far. In addition, I even downloaded files from my old computer that I had kept for several years and have never used since the first time I saved them. Now I ask myself: am I a digital hoarder? Maybe I am, to a certain extent. Cathy also talked about how people initially react with horror when they lose data stored on a computer hard drive but almost feel good about the loss later on, once they realize they just got rid of unwanted digital artifacts.

Jeremy Frumkin from the University of Arizona and Terry Reese from Oregon State University introduced Cloud4Lib, an open digital library platform. The idea is to leverage implementations of tools like Evergreen, Koha, VuFind, LibraryFind, DuraSpace, DSpace, Blacklight, Tellico, Moai, SolrMarc, Greenstone, Pymarc, and Fedora Commons so that they benefit more than one community. Doing so will enable libraries to collaboratively build and use common infrastructure. Ultimately, not just the one institution that installs a product will benefit, but the whole community of users, since development efforts enhance the entire platform.

For Terry, teamwork is key in leveraging implementations. The work should extend to the entire library, and a wiki could help as a collaborative space. He recommended Amazon (S3, web servers, EC2 instances) as a collaborative workspace. Jeremy added that breakout sessions and discovery across platforms could further enhance this process.

Ross Singer talked about a linked library data cloud, which can be implemented by building a linked data service on top of MARCXML. He mentioned how RDF and SPARQL could be used to provide users with useful information when they look up a URI. He actually used this concept to embed RDF into a VuFind instance to link to external data.
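As a very rough sketch of the idea (the record URI, fields, and values below are all invented, and this is not Ross's actual implementation), a service might expose a few MARC-derived fields as RDF triples in Turtle, so that looking up the record's URI yields linked data:

```python
def record_to_turtle(record_uri, fields):
    """Serialize MARC-derived title/creator fields as Turtle triples.

    Uses Dublin Core Terms predicates; the URI and values are
    purely illustrative.
    """
    prefixes = "@prefix dc: <http://purl.org/dc/terms/> .\n\n"
    triples = []
    if "title" in fields:
        triples.append(f'    dc:title "{fields["title"]}"')
    if "creator" in fields:
        triples.append(f'    dc:creator "{fields["creator"]}"')
    # Join the triples with Turtle's ';' predicate-list separator.
    return prefixes + f"<{record_uri}>\n" + " ;\n".join(triples) + " .\n"

turtle = record_to_turtle(
    "http://example.org/record/1",  # hypothetical record URI
    {"title": "Linked Data and Libraries", "creator": "Doe, Jane"},
)
print(turtle)
```

A SPARQL endpoint loaded with triples like these is what would let a client ask structured questions about a record instead of screen-scraping the OPAC.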

Harrison Dekker, Data Librarian at UC Berkeley, described the role of the cloud as a replacement for the desktop and mentioned that this may help users work smarter. He also talked about rApache, an Apache module that embeds an instance of the R interpreter in each Apache process.

Karen Coombs recommended ways to improve library user interfaces with OCLC web services. She talked about cross-listing print and electronic records and suggested using OpenURL resolvers to aggregate records and add a link to the library interface that displays print availability at other libraries. She also talked about http://librarywebchic.net, a web interface that accepts an OCLC number and a zip code and returns a map of libraries that hold the searched item.

Jennifer Bowen from the University of Rochester gave an overview of the eXtensible Catalog, a tool for taking control of library metadata and websites. At Rochester, they used NCIP and OAI to connect the eXtensible Catalog to the ILS, and they developed a user interface offering facets, a customizable search interface, and metadata tools for automated processing of large batches of metadata.

Anjanette Young, Systems Librarian, and Jeff Sherwood, Programmer, from the University of Washington talked about matching dirty data. Just like us at the ZSR Library, they use DSpace as a repository for electronic theses and dissertations. They use Pymarc for dirty-data matching, matching MARC bibliographic data with the corresponding authors. A great thing about Pymarc is that its latest release can convert records from MARC-8 encoding to Unicode (UTF-8).
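Their exact matching approach wasn't detailed, but as a minimal illustration of the dirty-data problem, normalized fuzzy comparison of author names (here using the standard library's difflib rather than Pymarc, with made-up names) might look like:

```python
import difflib
import re

def normalize(name):
    """Lowercase, strip punctuation, and collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", name.lower())).strip()

def best_match(dirty_name, candidates, cutoff=0.8):
    """Return the candidate most similar to dirty_name, or None."""
    target = normalize(dirty_name)
    best, best_score = None, 0.0
    for cand in candidates:
        score = difflib.SequenceMatcher(None, target, normalize(cand)).ratio()
        if score > best_score:
            best, best_score = cand, score
    return best if best_score >= cutoff else None

# Hypothetical example: a messy thesis author string matched
# against a clean authority list.
authors = ["Smith, John A.", "Smyth, Joan", "Jones, Mary"]
match = best_match("smith, john a", authors)
print(match)  # Smith, John A.
```

In a real workflow the candidate list would come from MARC 100 fields read with Pymarc, and the cutoff would be tuned against known matches.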

Erik and I drove down to Asheville this morning for the Code4Lib conference, which is being hosted at the Renaissance Marriott hotel. We got into town around 9:15, and after checking in and dropping off our luggage, we attended the pre-conference on Koha that Erik signed up for. I signed up for the Blacklight pre-conference, which was scheduled for 1:30pm, so I had a lot of time to kill. I am sure Erik will enjoy writing about Koha, so I am going to focus on Blacklight.

Naomi Dushay, Jessie Keck, and Bess Sadler presented on Blacklight, UVa’s open source, faceted, next-generation discovery tool based on Solr. It is a plugin for the Rails framework and offers features including faceted browsing, keyword searching, relevancy ranking, and display of content types. The great thing about Blacklight is its highly configurable Ruby on Rails interface. In addition, Blacklight can index and retrieve several kinds of XML documents, including EAD, TEI, and GDMS.

I installed Blacklight on an Ubuntu virtual machine running in VirtualBox on my Vista laptop. The installation was pretty straightforward except for a few configurations here and there. Blacklight can serve the same purpose as VuFind (http://searchworks.stanford.edu/) but can also be used for searching and displaying digital objects, as at the Northwest Digital Archive. The coolest feature Blacklight offers is the ability to write your own application helpers using existing helper methods included in the Blacklight application. This allows back-end users to write plain text and see it converted into HTML suitable for display in a browser.

First of all, please allow me to thank the ZSR Library’s leaders, in particular my supervisor Erik Mitchell, Susan Smith, Wanda Brown, and Lynn Sutton, for giving me the opportunity to attend such an informative and rewarding conference on Library Resource Management Systems, so beautifully organized by NISO.

NISO, the National Information Standards Organization, provides information professionals, publishers, and software developers with information industry standards that allow them to work together. Its goal is to eliminate barriers to the discovery, retrieval, management, and preservation of published content.

For clarity, I have structured this report into two main parts: “Day One” for the first day of the forum and “Day Two” for the second.

Day One:

The first day started off with a continental breakfast that allowed conference attendees to get acquainted and comfortable with one another. It was an occasion to start networking with some of the participants, and that is how I got to meet Grace Liu, Systems Librarian from the University of Windsor, who later gave a presentation on her institution’s experience migrating from Voyager to Conifer.

From Grace’s presentation, I learned that her library, the Leddy Library, switched from Voyager to Conifer because of highly problematic Voyager upgrades. The ILS industry has not developed the structures necessary to meet the increasing expectations of libraries and users. In fact, the needs and complexity of information management have grown beyond the ability of vendors’ integrated library systems to respond efficiently to users’ demands. In addition, it is now clear that vendors have not adequately invested in their ability to provide quick support to libraries facing an overwhelming need for information processing. Vendor ILS back-ends, as they are now and have always been, cannot sustain patrons’ current need for an inclusive ILS that supports not only processes like acquisitions, cataloging, ILL, and circulation but also friendly, social features similar to Google, Facebook, Wikipedia, and Amazon. The open source community has so far provided more advanced integrated library systems, and Evergreen is one of them.

The Conifer project understands the power of collaboration and includes three other universities, all members of the Ontario Council of University Libraries. This group believes in the importance of in-house support, which is why the Leddy Library has two systems librarians and four technicians who respond quickly to fix bugs and maintain Conifer’s vital systems. They use:

Verde for ERM, shared with 6 other institutions

SFX as a context-sensitive link resolver, shared with 19 other academic organizations (some universities have more than one library)

Evergreen as the ILS, shared with 3 other institutions

The day-one keynote, Toward Service-Oriented Librarianship, was given by Oren Beit-Arie, Chief Strategy Officer, Ex Libris, Inc. He talked about growing interdisciplinary activity and changes in scholarly communication models. He mentioned that new models are taking advantage of networking technology, extending the traditional benefits of print journals while facilitating the exchange of findings and the preservation of the scholarly record. The technology models are, among others:

Computing as a service (cloud computing)

Open interface

Service oriented architecture

Mobile systems

Semantic web

He also said that Moore’s Law is still at work, and technological limitations are therefore going away, making possible today what would not have been possible before. For him, new forms of scholarship imply new forms of librarianship, and to get there we need to focus on collaboration. He added that doing so could generate savings that libraries could put toward neglected activities like teaching. In addition, he mentioned that Ex Libris interviews produced a to-do list prescribed by users. I talked about this in a previous post (NISO Forum Day One).

Oren emphasized that open platforms are really important now, as they give users the ability to build collaborations. He also talked about the Digital Library Federation (DLF), a consortium of libraries and related agencies pioneering the use of electronic-information technology to extend collections and services, and its ILS Discovery Interfaces (ILS-DI). Continuing, Oren mentioned other initiatives like the Open Publication Distribution System (openpub) and the need for new methods of interoperability, and pointed out three areas of focus:

Traditional: doing the same things differently by utilizing the wisdom of the cloud (network level, cloud computing). Here, Oren thinks there are great opportunities in some traditional URM tasks through “rethinking”: streamlining the production and supply chain of bibliographic metadata, lowering costs, and increasing utility and productivity. However, he noted the need to go beyond the traditional by including more “granular items” and new types like research data, data sets, and complex distributed data, and he recommended looking into OAI-ORE. Although Oren does not believe there will be one single index that does it all, he supports user-side aggregated indexes like the new discovery tools, which enable indexing of article data along with availability and affordability.

Transitional: Oren advocates new support for library tasks and wants to leverage the capabilities of network deployment to improve support for traditional activities. He gave examples like usage-driven collection development, content selection, integration with vendor systems, and shared purchases. He also talked about Flickr Commons and appreciated users’ ability to share photographs; he even pointed out that the Library of Congress has updated records based on information found on Flickr. Oren added that we have to meet users where they are and gave the example of the Ex Libris URM Dissemination Control. He concluded that we need to focus on the unique (the institution) and integrate the global.

Transformational: this involves scholarly communication, mining usage data to enhance library services, and recommender services like BibTip, LibraryThing, and the bX service. Oren also talked about the MESUR project’s search for validation, and about long-term digital preservation through sustaining the digital. Finally, Oren emphasized the library’s role as middleware (between publisher and user).

Thomas Wall, University Librarian at Boston College, who was previously scheduled to present, received (as we were told) an invitation from his provost to attend a budget meeting, so Bob Gerrity, the Associate University Librarian, gave the speech on What Do Libraries Want to Achieve with Their Library Systems? instead.

I learned from his presentation that their institution uses the Ex Libris Aleph ILS and MetaLib for federated search. They are doing all they can to accommodate users looking for a one-stop-shopping solution, using overlay services, add-ons, and widgets to add value to current systems and local developments. In addition, their library has developed a feature to help new students understand call numbers and find items on the shelves quickly.

Kevin Kidd, Library Application and Systems Manager at Boston College, presented on Project Aerie, a next generation of service-oriented librarianship. According to Kevin, the purpose of this project was to create a framework or portal delivering online library services that take the users (students, faculty, and staff) into account, and to decouple those services from the Aerie framework so they can be reused in other environments to meet the library’s overall service goals. For him, it is all about services, data, and library resources in the network environment. Traditionally, we have cataloged, collected, and provided access to information and knowledge. Now we have the internet, and the primary problem to solve is no longer access. He believes libraries can now do the following:

Filter information and help patrons make information choices

Provide resources where and when they are likely to be needed, which he called contextualization.

He concluded that to do this, libraries should:

Organize online information to help our decision systems

Provide resources utilizing web 2.0 applications

Systematically acquire and prepare data to facilitate all of the above

After a quick break, Judi Briden, Digital Librarian for Public Services, University of Rochester, River Campus Libraries presented on User Perspectives: How our Patrons Interact with Our Services.

Judi talked about a usability test of the OPAC. From that experience, users are not sure what the OPAC is: results are not obvious, it is unclear where items are, and users assume libraries just don’t have anything on their topics. I believe the results of this usability test are pretty much what motivated us at the ZSR Library to seek a more user-oriented front end to our catalog. After studying students, they realized that students would rather use a system they are already familiar with. I believe our choice of VuFind was clever, as using VuFind will not be a difficult task for a student already familiar with Google and Amazon. Judi also talked about the XC (eXtensible Catalog) user research, referenced the project’s preliminary report, and pointed out that XC’s focus is user research centered on the OPAC and on solving known problems for casual, non-expert users.

John Culshaw, Professor and Associate Director for Administrative Services, University of Colorado at Boulder Libraries presented on Build it or Buy it.

John talked about ILS platforms. He mentioned CARL‘s public access catalog and said that their library implemented PAC in 1984 and migrated to INNOPAC (an Innovative Interfaces integrated library tool) in 1994. Another critical service they use is LUNA‘s INSIGHT digital library, which allows users to build, manage, and share digital collections no matter how big they might be. John noted that the reason they buy is functionality: they need stable, consistent systems, Millennium continues to meet their needs, and open source ILS platforms cannot fully support those needs at this time. He also spoke of their positive experience with partnerships; he thinks they work well and that they have a strong user community. He gave the example of ENCORE, which their library (with other partner libraries) uses to connect users to the trusted resources the library collects. He pointed out that the Encore selection was due to the following reasons:

Seamless integration with Millennium platform

Real-time circulation information:

No need to unload and reload

Implementation was straight forward

Invested in LDAP solution

Based on evaluation, ENCORE was found to be best suited to their needs

Single sign-on electronic resource data

John also commented on their campus environment, saying that his campus also advocates the “buy it” approach: it uses PeopleSoft as its SIS and tested Sakai but went with WebCT (the Blackboard Learning System). John touched on finances and staffing and noted that his institution lacks the people to be able to give back to the community, so they face challenges acquiring the human resources to manage their rich materials. Their university belongs to the Association of American Universities (AAU), and he couldn’t help saying that, compared to their peers in the association, they are really behind in staffing. He concluded his presentation by saying that their strategy is to continue to buy and to maintain strong partnerships with vendors.

A panel discussion on Open Source Systems: What is Working? What is Progressing? was led by Tim McGeary, Team Leader, Library Technology, Lehigh University, and Andrew Nagy, Senior Discovery Services Engineer, Serials Solutions.

Tim talked about the OLE project and noted that they finished publishing their project report this summer and that the project will start in February next year. They are looking to partner with the FLUID Project. He referred to other projects but held back from disclosing them at this time. He concluded by saying that the OLE project will support the ILS Discovery API and other discovery interfaces but will not develop a discovery interface of its own.

Andrew gave a history of VuFind. The idea behind the interface was to integrate with integrated library systems, authenticate via SIP2, and interoperate with major ILS platforms. VuFind uses the MARC import tool SolrMarc, which indexes large collections of MARC data. He noted that the project is a done project but that the VuFind community is growing and the leadership needs to grow as well. In September 2009 there were seven team members and about ten volunteers, later narrowed down to seven. Members are voted in by the community and renewed each year. Andrew also mentioned that many institutions have tweaked and modified VuFind, and work is being done to regroup and merge changes back to trunk. He also noted that they have begun gathering statistics from VuFind sites to evaluate VuFind performance. He said that, according to Amazon, only 2% of users use facets for browsing.

As I mentioned in a previous post, I asked some questions based on our experience with VuFind, and his answer was that server configuration can cause indexing to take too long. He has seen institutions where indexes of millions of records complete in a very short time, so he is pretty certain that our VuFind troubles are related to server configuration rather than programming.

Annette talked about her institution’s experience with open source, in particular with the LibX browser plug-in and edition builder for libraries. They are using web services and widgets like MAJAX and MAJAX2, and Google Books resources like TICTOCLOOKUP. She stated that open source software can work with vendor systems to enhance existing OPACs and link users to vendor systems. Then she pointed out some of the challenges in setting up OPACs in LibX: they have difficulties knowing how to ask a system for information (request-syntax issues). They believe that the following must be documented by the vendor or reverse engineered:

The Document Type Definition for III Millennium needs to be configured

Figuring out settings for catalogs takes time (time that could be spent developing new features)

Requires auto-detection and fingerprinting

More JavaScript code from the ILS makes things more difficult

Non-disclosure agreements have a chilling effect on development

Standards could be better, for example OpenURL syntax (NISO Z39.88), as could emerging services like widgets (MAJAX2) and mash-ups used to combine information from various online sources into new or existing web environments.
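To make the OpenURL point concrete, here is a minimal sketch of assembling a NISO Z39.88-2004 (OpenURL 1.0) query string for a journal article. The resolver hostname and all citation values are placeholders I invented for illustration, not a real service or a real article.

```python
from urllib.parse import urlencode

# Hedged sketch: build an OpenURL 1.0 (NISO Z39.88-2004) query string.
# The resolver URL and citation values below are invented placeholders.
params = {
    "url_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.genre": "article",
    "rft.jtitle": "Journal of Examples",
    "rft.volume": "27",
    "rft.issue": "1",
    "rft.spage": "30",
}
openurl = "https://resolver.example.edu/openurl?" + urlencode(params)
print(openurl)
```

Part of Annette’s complaint is visible even in this toy example: the standard fixes the key-value syntax, but what a given link resolver or OPAC actually accepts still has to be documented or reverse engineered.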

Continuing, Annette found that getting information out of the ILS needs some work. She criticized the fact that most vendors provide no API or service at all, and that some that do provide services don’t provide enough, while libraries are eager for standards covering the functionality a vendor can offer. She expressed disappointment in some existing standards that do not define holdings. She also argued that standards must define not just functionality but also syntax.

The discussion was pretty interesting, and the panelists all recognized the importance of open source software and its ability to address certain features that vendors do not provide. According to Tolin, SirsiDynix is investing in virtualization to better satisfy users, because there are servers out there that are underused, which is a waste for the owning institution. He said that they are in the process of moving most of their products to virtual servers in order to apportion server use and thereby save money for the customer. The panelists added that they leverage open source in their products and embed open source software in their technologies, which they offered as proof of their strong support for open source. However, they pointed out that a system that does not integrate acquisitions, cataloging, and circulation will of course be less expensive, but at the end of the day it is not going to meet libraries’ needs. They noted that libraries have to make the decision based on whether or not open source is meeting their business needs. According to Galen, big open source applications have a good community of support that reacts quickly to solve technical problems. For Carl, Ex Libris opened its APIs to offer flexibility to the user and is committed to supporting open source. Overall, the group thought vendors and open source have to work together, and libraries owe it to themselves to exercise due diligence with their vendors.

Day Two:

The keynote presentation of day two was given by Rachel Bruce, Program Director, Information Environment, JISC.

She talked about investing in a time of disruptive change. JISC thinks about content as a utility, with the web as its mode of distribution. In the UK, institutions are actively collaborating and sharing resources, and they have started bringing in new processes to improve services within their libraries, including more electronic resources. According to her, the problem with managing and sharing research data is that some people think it is something only for scientists. She found that it is important for us to learn about what types of resources we need. She also talked about open source and open education resources, with OpenWetWare, OpenSpires, and myExperiment among the entities involved. It is all about sharing these resources on the open web, and they now anticipate web 3.0. She said we need to keep in mind that the speed of young people’s web searching means that little time is spent evaluating information for relevance, accuracy, or authority: they want it quickly, and they want it now. She referred to young people as “Generation Y” and found that they think and process information fundamentally differently from their predecessors. For her, the future of libraries is that, for scientific research, the library is probably becoming obsolete. There is a need for purchasing, which should be done nationally by specialists, but most of the rest will be web based. She also talked about the digital divide and thought it needs to be addressed effectively. She noted that we are in a perfect storm of technological change and don’t really know which way to turn, as there are a lot of directions that we haven’t even explored yet. She summarized that this is the UK situation, and they don’t really know where to turn.
Continuing, she said that this radical change is total chaos for everyone: the current system is so brittle, and the alternatives so speculative, that there is no hope for a simple and orderly transition from state A to state B. Chaos is the UK’s lot; the best that can be done is to identify the various forces at work shaping the possible futures. She added that they have noticed that the use of LibraryThing has not exceeded the use of WorldCat. They believe libraries can do much more to open up their metadata for reuse; OCLC and TALIS already offer platforms that enable library data to be reused. She talked about connectedness, platforms, and the network effect, and pointed out that we need to work at the network level. She mentioned that further insights are gained by analyzing borrowing histories, facilitated by the use of library cards, and that four library vendors hold 80% of the market. She thinks we should make our assets available to the world through linked data and be resource oriented. In the UK, they are even supporting making government data part of the linked data for the whole world to see! When I asked how they do their filtering so that sensitive data is not disclosed to the general public, she said that they have people who do that filtering. She also mentioned sustainable scholarship and said that TALIS is very present in the UK.
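To illustrate what “opening up metadata as linked data” can look like in practice, here is a minimal sketch that serializes a catalog record as RDF Turtle using Dublin Core terms. The record URI, title, and creator are invented for illustration; a real implementation would draw these from the catalog and use an RDF library rather than string formatting.

```python
# Hedged sketch: expose one catalog record as linked data (RDF Turtle)
# with Dublin Core terms. All values below are invented placeholders.
record = {
    "uri": "http://catalog.example.org/record/12345",
    "title": "An Example Title",
    "creator": "Doe, Jane",
}
turtle = (
    "@prefix dc: <http://purl.org/dc/terms/> .\n\n"
    "<{uri}>\n"
    '    dc:title "{title}" ;\n'
    '    dc:creator "{creator}" .\n'
).format(**record)
print(turtle)
```

The point of the resource-oriented approach she described is that each record gets a stable URI, so anyone on the web can link to it and reuse the metadata.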

According to Ivy, the Digital Library Federation’s Electronic Resource Management Initiative (DLF ERMI) provides tools for managing the license agreements, related administrative information, and internal processes associated with collections of licensed electronic resources. She talked about ONIX for Publication Licenses (ONIX-PL) and the License Expression Working Group (LEWG), which is sponsored by NISO, DLF, and PLS. She encouraged libraries to adopt the Shared E-Resource Understanding (SERU) in order to reduce costs for library and publisher and thereby speed up access for users at subscribing institutions. She mentioned post-ERMI standards such as COUNTER, for counting online usage of networked e-resources; SUSHI, which automates the request and response model for harvesting electronic resource usage data using a web service framework; KBART (Knowledge Bases and Related Tools), a joint effort of NISO and the UK Serials Group; CORE; and I2 (Institutional Identifiers).
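As a quick illustration of the kind of data COUNTER standardizes (and SUSHI harvests), here is a sketch that sums monthly full-text request counts per journal, the way a JR1-style report aggregates them. The journal titles and counts are invented.

```python
# Hedged sketch: COUNTER-style usage data is essentially monthly full-text
# request counts per journal. Titles and numbers below are invented.
usage = {
    "Journal of Examples": {"2009-01": 120, "2009-02": 95},
    "Sample Studies Quarterly": {"2009-01": 40, "2009-02": 55},
}

# Per-title totals, as a JR1-style report would aggregate them.
totals = {title: sum(by_month.values()) for title, by_month in usage.items()}
print(totals)
```

The value of SUSHI is that it automates retrieving this data from each vendor instead of having staff download spreadsheets platform by platform.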

MacKenzie Smith, Associate Director for Technology, MIT Libraries, and Diane C. Mirvis, Associate Vice President for Information Technology and CIO, Magnus Wahlstrom Library, University of Bridgeport, addressed The Library System in a Broader Context: Interaction with Other Library Systems.

MacKenzie Smith presented on Integrating Library Resource Management Systems into Campus Infrastructure for Research and Education. She clearly expressed frustration with their resource management system: it is so big, and includes so many components that she thinks could stand on their own, that breaking it up would make things much simpler. They use DSpace to archive theses. She thinks we should build on and harmonize bibliographic data models, define new conceptual data models, and focus on the data in a data-oriented architecture where the web is the architecture. She anticipates data processing becoming much more complicated in the future.

Diane C. Mirvis presented on Considering a New Information Topology. According to her, the University of Bridgeport (UB) is a content-driven university. She believes it is important to rethink how work is being done by their scholars and students, and that information is everywhere and owned by everyone. UB has not embraced open source very much; they like their partnerships with their vendors. They use Voyager, Primo discovery, MetaLib, and SFX (all from Ex Libris), as well as institutional repositories.

Kat Hagedorn, HathiTrust Special Projects Coordinator, University of Michigan presented on Seamless Sharing: NYU, HathiTrust, ReCAP and the Cloud Library.

She talked about the Cloud Library and said that it is not cloud computing, although it has some similarities; it grows out of the necessity and desire to share resources: leveraging shared investment and reducing local costs. It also involves multiple digital and print repositories that can now move into a cloud and become a shared network resource. She stated that the needed infrastructure includes:

Understand preservation and make it available for collection development

Understand what consumers need, emphasize quick access to information, and make print and digital libraries part of the cloud library

Outside collections.

She noted some perceived needs: they already have ILL, including document delivery, but they also need to make what exists outside their repository, and is not currently accessible, accessible. She also listed their partners in the pilot.

She shed some light on what should motivate an institution to become a member of a large consortial system; such systems provide:

A better patron experience: one set of credentials gets you everything, and a patron who needs two books should be able to use the same mechanism to acquire both

Strategic benefits: reduction of redundant systems and workflows, leadership opportunities, and partnership with OCLC, a standards-based solution positioned for long-term viability that brings disparate services together.

However, she cautioned, this change requires training and communication. She also talked about Summit, the union catalog and browsing system with 9.2 million titles representing 28.7 million items. In addition, she found consortial workflow and fairness important:

Load balancing ensures all institutions benefit/contribute

After using automatic load balancing for two months, 86% of membership has seen improvement

According to her, in the future, with a networked ILS, there will be no need to catalog the same record several times, and networked circulation allows easy formation of arbitrary groups, so many things need not be so different at different institutions.

The end of the day two also included a group activity that I talked about in a previous post (Day Two at NISO).

The NISO forum was such a wonderful learning experience for all participants and I believe we all left Boston with a lot of new knowledge that will certainly help us improve processes at our respective institutions.

Day two at NISO was as intense as day one, with a lot of information to digest within just one day. Everybody was pretty relaxed (and exhausted) and comfortable with each other, as we had all gotten well acquainted the day before. Today’s talk was largely about collaborative work and consortial partnership. Vendors believe that by joining a consortium, libraries can pool their voices to acquire resources, significantly reducing costs for each member.

The most exciting part of the day was when we were divided into groups to discuss what libraries want in library resource management systems. We also answered interesting questions that we then shared with the whole conference. Below are some of the questions and the answers we came up with:

What did your group talk most about?

Talked a lot about ILS and discovery tools

In this area, which features are needed?

Standards and interoperability and real time sharing of data

Need to be able to move data, like patron info and financial info, into and out of the ILS and other systems

Getting data out is not difficult, but functionalities like patron renewal need some work

Interoperability is very important, but when you talk about it across different products it does not mean the same thing to people

Need to have our system authenticate with other systems.

Would be interested in something like a data warehouse that would house all library data resources and around which other systems would revolve

The ability to get data in and out through APIs based on standards and interoperability

We don’t know if it is any cheaper to join consortia like WorldCat Local

Would like vendors to open up their products a little more to their customers

Need for systems to talk to each other.

Name one specific bottleneck in the area. What is causing the problems in getting work done, in training people? What is the one thing holding you back?

Proprietary systems are the big bottleneck holding us back in this area.
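The group’s wish for standards-based API access to ILS data can be sketched in code. The endpoint path, header names, and token below are hypothetical, invented purely to show the shape of such an interface; no real vendor API is implied.

```python
# Hypothetical sketch of the standards-based API access the group asked for:
# requesting patron data from an ILS over HTTP. The endpoint path, headers,
# and token are invented for illustration and do not describe a real vendor.
def build_patron_request(base_url: str, patron_id: str, token: str):
    """Return the URL and headers for a (hypothetical) patron lookup."""
    url = f"{base_url}/patrons/{patron_id}"
    headers = {
        "Authorization": f"Bearer {token}",  # placeholder auth scheme
        "Accept": "application/json",        # ask for machine-readable data
    }
    return url, headers

url, headers = build_patron_request("https://ils.example.edu/api", "p123", "secret")
print(url)
```

If every vendor exposed even this much in a documented, standard way, the “data warehouse” idea above would be far easier to build, since other systems could pull library data on demand instead of relying on batch exports.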

It was a good feeling when my group selected me as its note taker and speaker, as each group was asked to make this selection.

Just like yesterday, I have a lot of information to share and don’t be surprised to see more posts coming your way during the weekend. I now have to go catch a flight back home.

The first day of the NISO forum on Library Resource Management Systems went very well and touched on some of the different means that libraries are now using to manage their resources to respond to the changes in the library environment.

After a great continental breakfast, where I started networking with Grace Liu, Systems Librarian at the Leddy Library at the University of Windsor, Canada, the day began with a great welcome introduction from Todd Carpenter, Managing Director of NISO. During his introduction, Todd quickly pointed out that every organization is different, and that this should be taken into consideration when planning any implementation, whether open source or commercial. He did not forget to thank Ex Libris and EBSCO for sponsoring the event.

The keynote presentation, titled Toward Service-Oriented Librarianship, was given by Oren Beit-Arie, Chief Strategy Officer at Ex Libris. He pointed out that collaboration between libraries is the way to go, as it can generate savings that libraries can put toward other important needs that have been neglected. Continuing, he noted that, based on some Ex Libris interviews, libraries want to:

Meet users’ needs by providing a single interface for discovery and delivery of library and institutional data

Do more with less by consolidating traditional and “digital library” workflows

Oren also mentioned some framework changes across physical content management systems like Aleph, Millennium, Voyager, and Unicorn; electronic content management systems like SFX, Verde, III ERM, and MetaLib; and digital content management systems like DigiTool, CONTENTdm, Fedora, and DSpace. In addition, he noted that libraries will benefit significantly from integrating these systems in a much more modern way using new frameworks: Unified Resource Discovery (URD), which allows for local, remote, and deep search, with URD as a decoupled front end serving as a single entry point for discovery and delivery of all material types, alongside Unified Resource Management (URM) systems for selection, patron management, access rights, etc.

The most exciting part of the day was when it turned into something of a dialog between Andrew Nagy and me, as I asked him some questions after his informative presentation on VuFind. A couple of my questions were as follows:

Most of our students like VuFind, but our staff and faculty, not so much. What improvements do you think could be made so that VuFind satisfies students as well as faculty and staff?

We implemented VuFind on a server with 1 gigabyte of memory allocated to VuFind. Lately, as the number of VuFind users has increased (I believe), VuFind has been causing the server to crash. What solution would you suggest for resolving this situation, and how much memory should be allocated to VuFind?

He responded by saying that there isn’t much that can be done to satisfy faculty and staff, as VuFind was developed to be a more modern tool for better resource discovery, including social features that students may be more used to. Consequently, it is understandable when users who are not very open to experimenting with new technologies express dissatisfaction with VuFind. He also mentioned that my boss (Erik Mitchell) had contacted him about the VuFind server crashing issue, which according to him was due to a bad server configuration, and that the issue is now resolved.

I still have a lot more information to share but it will be tomorrow as today was a busy day.

I was so relieved when my flight from Greensboro touched down in Washington at about 11:30 this morning. I went through the worst turbulence ever on that flight, and when it was over, I had to do it all over again from Washington to Boston; fortunately, that leg went smoothly.

After collecting my luggage at the Boston airport, I took a shuttle to the nearest train station, where I caught a train to my hotel. After spending a couple of minutes trying to get the elevator moving, I was told that I needed to use my room key card to activate it before pressing the button for my floor. This was something new to me, and the receptionist was kind enough to say that it happens all the time.

After dropping off my luggage and making sure I got online OK with my laptop, I went to have an overdue lunch at about 3 pm at a Mexican grill called Qdoba. The burrito was great! I then decided to go for a walk to discover the city. The afternoon was beautiful and the outside temperature just comfortable for a wonderful promenade. In about five minutes, I made it to the Boston Public Garden, where I took some beautiful shots. Unfortunately, I won’t be able to share them until I get back home, because I forgot the USB cable needed to transfer the pictures.

I am now going to hit the gym and get ready for tomorrow. I think I just saw Andrew Nagy and other folks who are presenting tomorrow in the hotel lobby. I will write a real post tomorrow, as it is going to be an exciting day.