How do students and employees at an institution access resources off campus? In my experience, many library staff members will say that many people use Google Scholar, but that students really should start on the library homepage because that's the best way to access library resources. Is this really an answer librarians should be satisfied with? Are those the confines we want to limit people to? Have you ever tried to access scholarly material the way a student or faculty member would?

I recently attended a talk at ENUG by Rich Wenger entitled “IP Filtering is Dead. What’s next?” which touched heavily on this topic. Rich mentioned a fascinating video on the Scholarly Kitchen blog that presents a real-life use case of the stumbling blocks faced by students and researchers, and shows what a poor user experience that can be. It’s worth watching, especially if you’ve never tried to access articles off campus. To address some of these issues by taking an entirely new approach to access, Rich talked about an exciting collaboration forming between subscribers and vendors called RA21. I look forward to future directions and outcomes from this adventure.

Meanwhile, I’ve been frustrated myself with accessing resources off campus, which is particularly agitating since I know the systems quite well and am still hitting friction and pain points in accessing material my library subscribes to.

So earlier this year I took a stab at making access slightly easier and discovered something called LibX, as well as a GettingThingsTech blog post about different proxy redirect options. This was great information, and I diligently set out to try many of these options. Essentially, these options let you click an add-on or bookmark and have the current page redirect through your institution’s proxy. I found that the Chrome extension works great, but it’s only available on larger-screen devices, which isn’t useful on phones. I won’t install outdated add-ons, so the Firefox and Safari options were a no-go. However, the Zotero customization is something I recommend to everyone who actively uses Zotero.

As for the LibX add-on for Chrome and Firefox, I successfully set one up for my institution and, while it works and has many possible features like a direct search of your discovery service or catalog, I wouldn’t recommend most libraries tinker with this option. It took me a few hours to get the configuration correct, and I think there are better options that require fewer clicks.

I continued my search and discovered a bookmarklet option described by UCSF Libraries that uses a bookmark with JavaScript in the URL field to redirect pages through your proxy server. If you have JavaScript enabled on your device (and you probably do unless you’ve turned it off), this is a fairly simple option that works on any browser and any device. I simplified the directions and made them a little more browser agnostic before sharing with some faculty. The response so far has been a resounding appreciation for the simplicity of this workaround. So an acceptable, if not preferable, option has been found that makes for a slightly more seamless user experience, but the journey will continue.
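As a sketch of how this kind of bookmarklet works (the proxy hostname below is a made-up placeholder, not UCSF’s or any real proxy; substitute your own EZproxy-style prefix), the JavaScript in the bookmark’s URL field simply rewrites the current page’s address:

```javascript
// Prepend an EZproxy-style "login?url=" prefix to a URL so the request
// routes through the library proxy. The hostname here is a placeholder.
const PROXY_PREFIX = "https://login.proxy.example.edu/login?url=";

function proxify(url) {
  return PROXY_PREFIX + url;
}

// The bookmarklet is the same logic inlined into a javascript: URL:
// javascript:void(location.href="https://login.proxy.example.edu/login?url="+location.href)

console.log(proxify("https://www.jstor.org/stable/12345"));
```

One click on the bookmark reloads whatever article page you’re on through the proxy, which then prompts for your institutional login if needed.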

Can the library community make this user experience better? I know we can, it’ll just take some time, collaboration, and imagination.

If you use Google Analytics, do you ever wonder what the “Search Terms” in the dashboard could mean? One library-centric (and probably highly contentious) use is to capture search terms used in online databases. In the broad sense, these databases are any search entity you may have, such as journal finders, discovery layers, ILS catalogs, etc. If you can add Google Analytics code to the website and the search terms are retained in the URL, you can almost certainly capture search terms. (Note: there are most definitely other ways to gather search terms, but I’m sticking to this version for now.)

In your Google Analytics account, go to the Admin for the account/website you want to capture search terms for. Under View, click View Settings. This is where you can adjust the basic settings regarding what you want captured. You’ll need to turn on Site search Tracking and supply the query parameter your search tool puts in the URL; you can find it by running a search and looking at the resulting address.
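To see what GA’s site-search setting is actually pulling out of the URL, here’s a small sketch (the parameter name q and the catalog URL are just examples; check your own search tool’s result URLs for the real parameter):

```javascript
// Pull a search term out of a results-page URL, the same way GA's
// site-search setting does. "q" is an example parameter name; your
// search tool may use "query", "bquery", or something else entirely.
function searchTerm(resultUrl, param = "q") {
  return new URL(resultUrl).searchParams.get(param);
}

console.log(searchTerm("https://catalog.example.edu/search?q=open%20access"));
```

If the term never appears in the URL (some tools POST the query instead), this approach won’t see it, which is one reason other capture methods exist.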

After a long hiatus from tinkering with library technology due to chairing a classroom renovation committee and doing the backend work of a book inventory project, I finally got to some of my sidelined to-do lists.

Google Analytics (GA) is one popular tool for tracking website usage; however, the default setup only tracks usage within the domain listed in the settings. For a library with lots of links to external resources like catalogs, journal finders, and databases, the default setting can feel lacking. It’s also only so interesting to know that x number of people visited your site, spent at most a few minutes on your homepage, and left. Event Tracking solves that dilemma.

At first, Event Tracking looks like a lot of coding, but it doesn’t have to be. The Google Developers pages directed me to some GA code on GitHub called autotrack. You add the autotrack JavaScript file to your web server and a few lines of code to the GA snippet already on your website. If you don’t already have GA, GA gives you the basic code to insert when setting up a site, and the additional autotrack lines get inserted into that.

Right now, I’ve added outboundFormTracker to track our LibAnswers search box, eventTracker to track our EDS search box (it wasn’t acting like a form for autotrack, so I did have to add a small amount of code to the submit input element), and outboundLinkTracker to track everything else. So far, the heaviest usage from our homepage is EDS and our database list. I look forward to seeing what is and isn’t really used over the summer and into the fall semester.
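For a sense of how few lines this takes, here’s a sketch of the autotrack requires dropped into an existing analytics.js snippet (the tracking ID and script path are placeholders; the plugin names are autotrack’s):

```javascript
// Existing GA setup line -- the tracking ID is a placeholder.
ga('create', 'UA-XXXXXXX-1', 'auto');

// autotrack plugins: require each one before sending the pageview.
ga('require', 'outboundLinkTracker');  // clicks on links leaving your domain
ga('require', 'outboundFormTracker');  // form submits that post off-site
ga('require', 'eventTracker');         // events declared via ga-* attributes in HTML

ga('send', 'pageview');

// Then load the autotrack file you placed on your web server, e.g.:
// <script async src="/js/autotrack.js"></script>
```

This is a configuration fragment, not standalone code; it assumes the standard analytics.js snippet is already on the page above it.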

A quick tip for those starting to use the ISO protocol with ILLiad, especially if you are hosted by OCLC. It turns out that if you edit any of the ISO keys in ILLiad’s customization manager, the ISO server needs to be restarted. At the moment, it doesn’t automatically restart on a scheduled basis.

So after you edit those ISO keys, submit a ticket to OCLC to restart the ISO server.

I recently received a message that a site I manage for a library organization was about to exceed its bandwidth allotment. There is a small user group of webmasters within this organization, and bandwidth limits on the WordPress sites occasionally make the email discussion lists. The typical suggestions are:

1. ensure you have a robots.txt file

2. install a WordPress plugin like WP Super Cache that caches pages

3. ask for more bandwidth

The first two suggestions were put in place ages ago, and I did end up pursuing the third, an option I was grateful to have. However, I knew there had to be another way to reduce my sudden spike in usage, since it was not attributable to more visitors.

In fact, my spike seemed to correlate with some PDF files that I put in a post. I considered that action routine and almost trivial at the time, but WordPress created a “preview” of each PDF that caused the entire file to download every time the page was opened. Since some of these files displayed on the homepage, this greatly increased our bandwidth usage.

Lesson learned:

Host slides, large PDFs, videos, and large photos elsewhere (the function is your friend)

One of my final summer projects before campus descended into controlled chaos was integrating our institutional repository records from BePress into our discovery layer from EBSCO. As usual, I learned some interesting tidbits along the way.

To get started, bepress has some good information about harvesting records from their system: Digital Commons and OAI-PMH: Harvesting Repository Records. Much of this resource is about using the Open Archives Initiative Protocol for Metadata Harvesting. What’s neat about this protocol is that anyone with an internet connection can obtain the metadata and contents in a consistent format. I know this doesn’t sound impressive, but ask me whether it was easier to ask EBSCO to add our institutional repository or our catalog of books to the discovery layer and, hands down, the institutional repository wins. Some of the steps involved in extracting and displaying our library catalog included: extracting MARC records in specific formats; uploading the file to an FTP site; and converting that file to a format ready for our discovery layer based on lots of field mappings that are specific to our library.

On the other hand, our institutional repository metadata and contents can be viewed by using this link: http://digitalcommons.esf.edu/do/oai/?verb=ListRecords&metadataPrefix=oai_dc. The link is essentially the base URL for the repository with a few “commands” attached. The same can be said for obtaining several other details about the content, including the field abbreviations found in the <setSpec> field in the previous link. The setSpec details can be viewed by adding /do/oai/?verb=ListSets&metadataPrefix=oai_dc to the base repository URL. Check out the OAI-PMH documentation for more possibilities.
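The pattern of “base URL plus a few commands” can be sketched as a tiny helper (the helper name is mine, and the /do/oai/ path is the Digital Commons convention from the links above; other OAI-PMH repositories use their own endpoint path):

```javascript
// Build an OAI-PMH request URL from a repository base URL, a verb, and a
// metadata prefix. The "/do/oai/" endpoint path is Digital Commons specific.
function oaiUrl(base, verb, metadataPrefix = "oai_dc") {
  return `${base}/do/oai/?verb=${verb}&metadataPrefix=${metadataPrefix}`;
}

console.log(oaiUrl("http://digitalcommons.esf.edu", "ListRecords"));
console.log(oaiUrl("http://digitalcommons.esf.edu", "ListSets"));
```

Swapping the verb is all it takes to ask the repository a different question, which is exactly what makes the protocol easy for vendors to harvest against.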

So in theory, the metadata fields from OAI-PMH repositories should be the same and people/vendors/groups who want to use that information in different interfaces can create a method that is easy to replicate.