2) “Hot or not” score for each blog — using a top secret formula (which I might patent as “BiblioBlogRank”!), for each day’s blog posts, points are added to or subtracted from that blog’s overall score. Points are gained for using words which have seen a recent increase in usage, but are lost for using words that are declining in usage. For reasons that even I’m not too sure about, Slaw is today’s hottest blog and TangognaT is the least!
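The real BiblioBlogRank formula is (deliberately!) unpublished, so here’s only a minimal sketch of the general idea: each word in a post contributes its recent usage trend to the blog’s score, positive for rising words and negative for declining ones. The function name and the trend values are my own invention.

```python
# Hypothetical sketch of a "hot or not" blog score. The real
# BiblioBlogRank formula isn't public; this just illustrates
# rewarding rising words and penalising declining ones.

def score_post(words, usage_trend):
    """usage_trend maps a word to its recent change in usage
    (positive = increasing, negative = declining)."""
    return sum(usage_trend.get(w, 0) for w in words)

# Made-up trend data for illustration only.
trend = {"folksonomy": 3, "mashup": 2, "barcode": -1}
post = ["folksonomy", "mashup", "barcode", "library"]
print(score_post(post, trend))  # 3 + 2 - 1 + 0 = 4
```

Summing a post’s scores day by day would then produce the running “hot or not” total for each blog.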

As promised/threatened just before Christmas, the new version of HotStuff is now up and running: www.daveyp.com/hotstuff/

It’s still early days, so it’ll be a week or two before it really starts to pick up on the hot new topics in the biblioblogosphere. So far, it’s sucked in just under 1,000 blog posts and found nearly 17,000 unique words.

Each day, it’ll create a new Word of the Day blog post using a word that’s seen a sizeable increase in usage in the previous few days. Today’s word was “skills”.
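A simple way to pick such a word is to compare each word’s recent count against its count over the previous few days and take the biggest riser, subject to a minimum-usage floor so that rare words don’t dominate. The thresholds and counts below are illustrative assumptions, not HotStuff’s actual parameters.

```python
from collections import Counter

def word_of_the_day(recent_counts, previous_counts, min_uses=5):
    """Return the word with the biggest rise in usage versus the
    previous period (thresholds here are illustrative only)."""
    best, best_rise = None, 0
    for word, now in recent_counts.items():
        rise = now - previous_counts.get(word, 0)
        if now >= min_uses and rise > best_rise:
            best, best_rise = word, rise
    return best

# Made-up counts: "skills" has jumped, "rss" is too rare to count.
recent = Counter({"skills": 9, "catalogue": 6, "rss": 2})
previous = Counter({"skills": 2, "catalogue": 5, "rss": 2})
print(word_of_the_day(recent, previous))  # skills
```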

You can also search for specific words (e.g. Dewey, LCSH or cool) or view keyword clouds for specific blogs (e.g. “Walt at Random” or “Tame the Web”). There’s also a keyword cloud that pulls everything together to show the most frequently used words from all the blogs.
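A keyword cloud is essentially a word-frequency table mapped onto font sizes. As a rough sketch (the scaling scheme and pixel range are my assumptions, not what HotStuff actually uses), linear interpolation between a minimum and maximum size does the job:

```python
def cloud_sizes(counts, min_px=10, max_px=32):
    """Map word frequencies to font sizes for a simple keyword
    cloud, linearly scaling between min_px and max_px."""
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1  # avoid division by zero if all equal
    return {w: min_px + (c - lo) * (max_px - min_px) // span
            for w, c in counts.items()}

# Illustrative counts only.
sizes = cloud_sizes({"library": 40, "dewey": 10, "lcsh": 25})
print(sizes["library"], sizes["dewey"])  # 32 10
```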

Once again — if you’d like your RSS/Atom feed added, just leave a comment (and the same goes if you’d like your feed removed!). You can see a list of the current feeds on Bloglines: www.bloglines.com/public/liblogs

After killing off Hot Stuff due to a server upgrade, I find that I’m kinda missing it!

So, I’ve decided to have a second stab at the problem and this time the code is much cleaner and faster. In particular, I’m using Bloglines to handle fetching all of the feeds and then grabbing the new posts via the Bloglines API.

It’s too early for the code to start spotting new keywords and topics yet, so it’ll be early in the new year before it launches fully. In the meantime, feel free to check that your favourite library/librarian blogs are included in the list of sites I’m pulling content from: http://www.bloglines.com/public/liblogs.

Please post a comment with the URL of any blogs you’d like including!

I’m hoping to make the new code a little more visual, so expect to see things like these…

About 90 minutes ago, I had the pleasure of doing a short presentation to the JISC TILE Project’s “Sitting on a gold mine” workshop in London. Unfortunately I wasn’t able to present in person, so we had a go doing it all via a video conferencing link. As far as I can tell, it seemed to go okay!

Our Repository Manager was keen to try putting something non-standard into the repository and twisted my arm into recording the audio… and I’d forgotten how much I hate hearing my own voice!!!

Anyway, as soon as SlideShare starts playing ball, I’ll have a go at uploading and syncing the audio track. Otherwise, here’s a copy of the PowerPoint: “Can You Dig It?: A Systems Perspective” and you can hear the audio by clicking on the Flash player below…

I’m very proud to announce that Library Services at the University of Huddersfield has just done something that would have perhaps been unthinkable a few years ago: we’ve just released a major portion of our book circulation and recommendation data under an Open Data Commons/CC0 licence. In total, there’s data for over 80,000 titles derived from a pool of just under 3 million circulation transactions spanning a 13 year period.

I would like to lay down a challenge to every other library in the world to consider doing the same.

This isn’t about breaching borrower/patron privacy — the data we’ve released is thoroughly aggregated and anonymised. This is about sharing potentially useful data to a much wider community and attaching as few strings as possible.

I’m guessing some of you are thinking: “what use is the data to me?”. Well, possibly of very little use — it’s just a droplet in the ocean of library transactions and it’s only data from one medium-sized new University, somewhere in the north of England. However, if just a small number of other libraries were to release their data as well, we’d be able to begin seeing the wider trends in borrowing.

The data we’ve released essentially comes in two big chunks:

1) Circulation Data

This breaks down the loans by year, by academic school, and by individual academic courses. This data will primarily be of interest to other academic libraries. UK academic libraries may be able to directly compare borrowing by matching up their courses against ours (using the UCAS course codes).
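To give a feel for what that breakdown involves, here’s a minimal sketch of aggregating anonymised loan transactions into per-year, per-course counts. The field names (`year`, `ucas_course`) and sample values are my assumptions for illustration, not the actual schema of the released files.

```python
from collections import Counter

def loans_by_course(transactions):
    """Aggregate anonymised loan transactions into counts keyed
    by (year, UCAS course code). Field names are assumptions,
    not the released data's real schema."""
    totals = Counter()
    for t in transactions:
        totals[(t["year"], t["ucas_course"])] += 1
    return totals

# Made-up sample transactions.
tx = [{"year": 2008, "ucas_course": "C800"},
      {"year": 2008, "ucas_course": "C800"},
      {"year": 2007, "ucas_course": "L500"}]
print(loans_by_course(tx)[(2008, "C800")])  # 2
```

Because the counts are keyed by shared UCAS course codes rather than anything institution-specific, two UK libraries could compare their aggregates directly.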

2) Recommendation Data

This is the data which drives the “people who borrowed this, also borrowed…” suggestions in our OPAC. This data had previously been exposed as a web service with a non-commercial licence, but is now freely available for you to download. We’ve also included data about the number of times the suggested title was borrowed before, at the same time, or afterwards.
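The core of a “people who borrowed this, also borrowed…” service is a co-occurrence count: how many borrowers have both titles in their history. A minimal sketch (borrower histories here are invented ISBNs, and the real pipeline also tracks whether the co-borrowing happened before, at the same time, or afterwards, which this omits):

```python
from collections import defaultdict
from itertools import combinations

def co_borrow_counts(borrower_histories):
    """Count how often each pair of titles appears in the same
    (anonymised) borrower history -- the raw material for
    'people who borrowed this, also borrowed...' suggestions."""
    pairs = defaultdict(int)
    for titles in borrower_histories:
        # sorted(set(...)) gives each unordered pair a canonical key
        for a, b in combinations(sorted(set(titles)), 2):
            pairs[(a, b)] += 1
    return pairs

# Made-up borrower histories (ISBNs chosen for illustration).
histories = [["0747532699", "0261103253"],
             ["0747532699", "0261103253", "0552996009"],
             ["0747532699", "0552996009"]]
pairs = co_borrow_counts(histories)
print(pairs[("0261103253", "0747532699")])  # 2
```

Ranking the counter-parties for a given title by these counts yields the suggestion list.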

I mentioned that the data is a subset of our entire circulation data — the criterion for inclusion was that the relevant MARC record must contain an ISBN and borrowing must have been significant. So, you won’t find any titles without ISBNs in the data, nor any books which have only been borrowed a couple of times.

So, this data is just a droplet — a single pixel in a much larger picture.

Now it’s up to you to think about whether or not you can augment this with data from your own library. If you can’t, I want to know what the barriers to sharing are. Then I want to know how we can break down those barriers.

I want you to imagine a world where a first year undergraduate psychology student can run a search on your OPAC and have the results ranked by the most popular titles as borrowed by their peers on similar courses around the globe.

I want you to imagine a book recommendation service that makes Amazon’s look amateurish.

I want you to imagine a collection development tool that can tap into the latest borrowing trends at a regional, national and international level.

Sounds good? Let’s start talking about how we can achieve it.

FAQ (OK, I’m trying to anticipate some of your questions!)

Q. Why are you doing this?
A. We’ve been actively mining circulation data for the benefit of our students since 2005. The “people who borrowed this, also borrowed…” feature in our OPAC has been one of the most successful and popular additions (second only to adding a spellchecker). The JISC TILE Project has been debating the benefits of larger scale aggregations of usage data and we believe that would greatly increase the end benefit to our users. We hope that the release of the data will stimulate a wider debate about the advantages and disadvantages of aggregating usage data.

Q. Why Open Data Commons / CC0?
A. We believe this is currently the most suitable licence to release the data under. Restrictions limit (re)use and we’re keen to see this data used in imaginative ways. In an ideal world, there would be services to harvest the data, crunch it, and then expose it back to the community, but we’re not there yet.

Q. What about borrower privacy?
A. There’s a balance to be struck between safeguarding privacy and allowing usage data to improve our services. It is possible to have both. Data mining is typically about looking for trends — it’s about identifying sizeable groups of users who exhibit similar behaviour, rather than looking for unique combinations of borrowing that might relate to just one individual. Setting a suitable threshold on the minimum group size ensures anonymity.
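In practice, that threshold is just a filter applied before release: any co-borrowing pair observed fewer times than the minimum group size is dropped, so no published suggestion can be traced back to one or two individuals. The threshold value below is illustrative, not the one actually used for the released data.

```python
def anonymise(pair_counts, min_group=5):
    """Drop any co-borrowing pair seen fewer than min_group times,
    so released aggregates can't identify individual borrowers.
    min_group=5 is an illustrative threshold, not the real value."""
    return {pair: n for pair, n in pair_counts.items() if n >= min_group}

# Made-up counts: the rare pair is suppressed before release.
raw = {("titleA", "titleB"): 12, ("titleA", "titleC"): 2}
print(anonymise(raw))  # {('titleA', 'titleB'): 12}
```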

Anyway, unless I get run over by a bus, later on this week I’m going to post something fairly big — well, it’s about 90MB which perhaps isn’t that “big” these days — that I’m hoping will get a lot of people in the library world talking. What I’ll be posting will just be a little droplet, but I’m hoping one day it’ll be part of a small stream …or perhaps even a little river.