Both libraries have a strong track record of digital library innovation of different kinds. The University of Virginia is a leader in digital humanities, and NCSU, in particular, has gained a reputation for creating user-friendly Web interfaces to library services and resources.

University of Virginia

Scholars’ Lab

UVa Libraries is home to the Scholars’ Lab – a service which supports and enables the use of technology in humanities scholarship by the postgraduate and research community at the University. There are three strands to the service:

– a ‘walk-in’ facility where students can use high-end computers and applications (GIS and statistical applications, for example) with access to specialist help (provided by experienced fellow students employed by the Library)

– a programme of workshops and training opportunities. In particular, the Scholars’ Lab runs a graduate fellowship programme in which about six lucky students each year are trained and supported to work together on a particular project – developing valuable technical and ‘soft’ skills (including project management) in the process.

– a research and development team of Web developers – from a research background – who work with academic staff on the development of specific projects.

Scholars’ Lab

This all adds up to an impressive service. The Lab benefits from some endowment funding and, unusually, the research and development team is funded from the core Library budget – not from short term, grant funding.

Recent work includes the creation of Neatline – a platform for creating digital exhibits as overlays on maps, with timelines. This is just the sort of thing we wanted to try out as part of our Jisc-funded Manufacturing Pasts project – but we ran out of time and didn’t have a suitable platform. Neatline is built on Omeka – a content management system created at George Mason University. Both are open source and, if you have – or can get – access to a LAMP server (which I eventually did for Manufacturing Pasts), it doesn’t sound too difficult to try them out …
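For the record, getting Omeka going on a LAMP server looks – at least on paper – to be mostly a matter of unzipping a release into the web root, creating a MySQL database and pointing Omeka’s db.ini at it. The values below are placeholders, and file details can vary between Omeka versions:

```ini
; db.ini in the Omeka root directory – all values are placeholders
[database]
host     = "localhost"
username = "omeka_user"
password = "change-me"
dbname   = "omeka_db"
```

Visiting the site’s /install page then completes the setup, and Neatline is added afterwards as a plugin.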

SHANTI

UVa is also home to SHANTI – the Sciences, Humanities and Arts Network of Technological Initiatives. SHANTI isn’t part of the Library but is based in the ‘main’ library – the Alderman Library – as is the Scholars’ Lab.

SHANTI provides practical support and guidance for researchers who want and need to use information technology to carry out research – but who aren’t ‘techies’. The resources it has created include a Knowledge Base – which includes a suite of software tools – many of which anyone can use.

Digital Media Lab

The Digital Media Lab is part of the Library and provides an impressive range of resources and support for use of multimedia by the University community – including creating videos, large scale data visualisation, a ‘telepresence’ lab and use of video clips for teaching. A lot of the technology is Mac based.

The Lab is based on a newly refurbished floor of one of the site libraries which is gradually being redeveloped as the ‘learning and teaching’ library (this redevelopment also includes the provision of social learning spaces).

Digital Media Lab

The Lab has its origins in the University’s audiovisual service, which became part of the Library many years ago. It has evolved to become more about the creative use of technology in learning and teaching than the simple provision of hardware and software as such – although, clearly, the two need to go together.

While many libraries provide high-end, ‘self-service’ multimedia facilities, providing an expert, staffed service of this kind is unusual.

Staffing

Compared to many UK university libraries – and certainly compared to us – UVa Library is big – with 220 staff, 11 libraries and a complex structure.

That much is predictable for a major, American university library. But some aspects of how the Library is organised are less obvious.

The people I met were from a diverse range of backgrounds – the outcome of a decision over ten years ago to seek applications for vacancies from both formally qualified librarians and other relevant professions.

Learning commons floor of the undergraduate library

The head of the Scholars’ Lab is an ‘academic’. The Deputy University Librarian comes from an IT background (she joined the Library to head up its technical services, originally). The head of the learning and teaching focussed library is a learning technologist. A recent appointee to the University’s very impressive Special Collections Library has a background – amongst other things – in the rare books business.

Some recent senior retirements and resignations have led to a decision to ‘flatten’ the structure – removing some second tier posts and bringing the managers of some of these specialist, newer services into the management team.

NCSU

So, what has enabled NCSU to sustain this consistent record of service development and success?

Staffing and culture

Like UVa, NCSU Library is a big department – with about 220 staff and a budget of about $20m. The University has 35,000 students.

Again, like UVa, the Library has quite a complex staffing structure.

One notable thing about this structure is that there is both a Library IT team and a Digital Library Initiatives (DLI) team. Unusually, also, the Library IT team runs both servers and storage for the Library – not just Library-specific applications (this wasn’t the case at UVa Library where – like us – they see servers and storage as clearly being part of the central IT infrastructure).

I met some members of the DLI team. As the team name suggests, their focus is on developing and implementing new services – such as the Library Course Tools service noted above.

The team has existed for about 12 years – and grew out of a small service which the Library had created to support use of GIS and geospatial data.

One of the DLI team’s mobile apps

Most members of the team are librarians – who have become skilled Web Developers during the course of their careers. As librarians, they understand the context within which they are working and the services that are being provided – and this understanding combined with the technical skills clearly makes for a powerful combination. This is also true of some of the members of the Library IT team – with the person responsible for the specification and installation of the very extensive IT facilities in the new Hunt Library (below) being a qualified librarian with an Arts background (who then developed a specialism in IT).

NCSU has a ‘Library fellowship’ programme – a number of two-year, fixed-term posts open to newly qualified library professionals. Postholders are based in a ‘home’ department and also work on a project. Some of these projects are very significant. For example, one Library Fellow is developing a Web-based application for browsing the contents of items in the Hunt Library’s new ‘bookBot’ (see below).

About 50 people have been through this programme since it started. Interestingly, many members of the DLI team originally joined the Library through this route – so, it clearly seems to have worked as a way of attracting capable, highly motivated people who – crucially – are looking for on-going opportunities to learn and develop on the job.

I was interested to find out how the DLI team communicates with other teams. The picture that was painted was of lots of horizontal communication, i.e. between teams. Ideas for service development are as likely to emerge this way as from the ‘top down’. They said this works because individuals take responsibility for making communication with their colleagues work – they don’t wait for a ‘manager’ to do it for them. Later on I spoke to a member of staff in a public services, student-facing role – who sang the praises of the DLI team – so she clearly saw them as student-focussed and helping her to do her job.

There is still structured, organised decision-making, because there needs to be. But they have a pragmatic, straightforward process for specifying and agreeing projects that are going to be resourced – taking a ‘two sides of A4’ approach to make sure objectives, timescales, responsibilities etc. are clear (something we have tried to do consistently in recent years).

Hunt Library

The Hunt Library is a major development for the Library, the University and the evolving concept of what a ‘university library’ is and what it is for.

Hunt Library entrance area, North Carolina State University

The Hunt Library opened in January 2013. It cost $110m and, so, represents a huge investment by the University (and its primary funder the state of North Carolina).

It joins the University’s other primary library – the Hill Library – which dates from the 1970s and is a ‘traditional’ ‘book tower’ library – lots of shelves, lots of floors, lots of single study spaces (although in 2011 the entrance floor of the Hill Library was totally redesigned in ‘learning commons’ mode).

The Hunt Library is on the university’s technology park – which is also the home of its large Engineering and Textiles teaching programmes and research (NCSU is – largely – a science and technology institution).

There are a lot of things about the Hunt Library that you would expect in a modern library.

– lots of natural daylight

– lots of social learning space of different kinds (including 100 group study rooms!)

– high-quality interior design and fit-out

– a single, integrated service point and staff ‘roving’ to provide help at point of need

What you can create with a 3D printer

Where the Hunt Library is really different is in the scale of the IT facilities it provides. These go way beyond access to desktop PCs/Macs and wireless networking to include:

– lending of a huge range of equipment – and accessories – including laptops (of different kinds), high-end filming and photography equipment, storage devices etc.

– a data visualisation lab with very high resolution screens

– 3D printing

– a ‘creative’ multimedia lab which includes the creation of virtual environments

– a gaming lab

The technical facilities aren’t just used by the engineering students etc. but also by their Arts and Social Sciences departments (they do exist).

The bookBot at the Hunt Library

The Hunt Library is also about books – but most of these – 1.5m – are stored away in an automated, high capacity, racked storage system (the bookBot!). Users request items through the Catalogue and they are delivered to the Hunt Library service point within about 5 minutes (some staff intervention is required). This system cost about $4m to install.

What about staffing such a facility?

No new money was available to staff this library – so existing staff have been allocated between the Hunt Library and the Hill Library. Students are employed to help at the Hunt service point and with the bookBot. There are 4 people on duty ‘front of house’ at most – this is between 10.00am and 4pm. So, lean front of house – which reminds me of the Information Commons at Sheffield. The Library is open 24 hours – with staffed services continuing overnight (two Library staff employed for the purpose and a student helper – a model they already had at the Hill Library).

While use of some of the high end facilities is by appointment with specialist staff, most of the facilities can be used directly by students and they have found that students have needed very little ‘training’ to use them.

So what?

So, you visited UVa and NCSU – so what?

This clearly was a great opportunity for me personally as I have long wanted to see something of the large, North American university libraries in action (because, one way or another, what happens in the North American academic world has a huge influence on us and we are almost entirely dependent on library systems and resources provided largely for the North American market).

Data visualisation lab at the Hunt Library

But there are also some specific questions which I think we could realistically ask ourselves based on the experience of these libraries – despite the fact that they are clearly much larger and much better resourced than we are (although they don’t necessarily support very many more students than us).

How do we provide the Web development expertise – focussed on library services and context – which we are going to need to develop our Web services further? (not everyone may agree with me on this – but I see this as absolutely essential and I don’t think that the advent of cloud based services reduces the need. We are still going to need to integrate services and build services which draw on disparate, underlying services. That is a large part of the ‘added value’ that we can offer our users);

What opportunities do we have/can we create to attract technically able, highly motivated, early career professionals and then develop them on the job?

How do we improve access to generic software tools/solutions for digital scholarship/humanities projects at Leicester – including exploiting the tools identified/created by SHANTI, George Mason University and others? (there is a Web developer need here as well – currently the subject of a bid to the University’s Research Infrastructure Fund which Simon Dixon and Dan Porter-Brown have put together).

In August last year the Library launched a new website; to inform its creation, Selina Lock, Mark Harrison and I ran a couple of user testing sessions on the original website and on an alpha version of the new one. The redesign was prompted by the University moving to a new content management system, and an Information Architect supported sessions to discuss the content and organisation of material. The focus of the user testing sessions was therefore on navigation around the website and on terminology.

The new (red) and original (blue) homepages side by side

The participants

15 participants took part in the testing on the original website, and 5 participants in the testing on the alpha website (where we experienced significantly lower take-up and turnout to the tests). A mix of taught and research students and staff were recruited to participate in the testing, and there was a good spread of subject disciplines represented as well.

In order to get a little background information, each participant completed a questionnaire. This indicated that around half of the participants had not been introduced to the library website by their Information (subject) Librarian. From this we could conclude that a significant proportion of users would be approaching the website without having received formal training in its use.

Navigating the websites

Wherever possible the participants were paired, and encouraged to discuss the tasks they were given aloud. These discussions were recorded. In both the original and alpha website tests tasks involved navigating around the website to find certain pieces of information, such as a book, information on a company, a PIN reminder and a journal article. These tasks were identified as common user activities by the librarians.

Major themes arising from the participants’ navigation and discussion of the website were:

– Library-specific terminology could be confusing, e.g. terms like ‘catalogue’

– Users were uncertain about which system listed particular resources, e.g. the difference between the catalogue vs. e-journal lists vs. databases

– Information was sought in context, e.g. PIN reminder information was expected to be by the PIN entry box

– Information was expected to be consistently located across different systems, e.g. the library homepage link always at the top left

– Users could often end up in a ‘dead end’, e.g. searching for information within the catalogue which was actually in a different system, and never returning to the main website

– Users often abandoned the website when stuck, choosing to send an email or contact library staff instead

It was therefore concluded that terminology needed to be improved on the new library website, and that more consideration needed to be given to where information was located on each page. The improved structuring of the website did seem to have improved the locating of information for some tasks in the alpha website testing.

Terminology

One of the alpha website mock-ups, with one set of terminology options

In the alpha testing, users were also presented with two alternative mock-ups of the alpha website, with different terms on each. Half of the groups saw version A first and then commented on version B as an alternative; the other half saw version B first and then commented on version A.

Some terms were more debated than others. A particular issue involved differentiating between the new discovery system, which allowed searching by journal article title, and the existing journal title search, with participants putting the article title and journal title in both boxes interchangeably.

The expertise of the users had a significant impact on preferences for some terms. Novice users preferred the label ‘Books’ for the catalogue search, but the more experienced users (particularly academic staff) felt that this oversimplified the diverse range of materials in the catalogue.

Where an explicit preference arose, this term was included in the new version of the website (now visible at www.le.ac.uk/library). However, even the current solutions to the most debated terms (the Articles, Journals A-Z and Books searches) continue to confuse a proportion of users.

Conclusions

The main conclusions arising from the studies were the following:

– Don’t assume that library website users will have had training: a proportion of users will always be approaching the site ‘blind’.

– Terminology is an issue, and some terms present bigger challenges than others. Describing searches is a particular problem because of the multiple search interfaces available. This problem isn’t yet entirely solved by discovery systems (which don’t always offer journal title searches, for example).

– Information needs to be consistently presented on different parts of the website, and help needs to be contextually located, near its point of use.

– The differences between library systems (catalogue, journal title search, discovery systems, databases) are not instinctively understood by users. Furthermore, once a user enters one particular system, it will often serve as a dead end: users rarely return to the homepage to look for an alternative route to information.

Many of these conclusions are directly applicable to the design of library websites in general, and the outcomes were consistent with more general guidelines for website usability.

I was able to get to the third and final day of this year’s LILAC Conference 2011 (Librarians’ Information Literacy Annual Conference), held in London, with the final day at the LSE. I’ve put down the main points I picked up from some of the sessions I attended.

Does information literacy have a future? Geof Walton & Alison Pope.

Perhaps it’s a sign of the times – with people concerned about their future in an economic climate of cuts – that this session was so well attended. Geof Walton modelled a session on enquiry-based learning by giving us all a set of questions to discuss in small groups and report back on.

It was a discursive session that covered a lot of ground; here is a selection of the types of issue that the groups came up with:

– How do we manage the expectations and perceptions about the library and information held by various groups – from students to academics/researchers to admin staff?
– How do we make more connections to get more timely training/teaching into students’ courses?
– Information literacy as a birthright, related to literacy in general – being able to read. It’s not a luxury but a life skill.
– Need to be able to demonstrate the positive outcomes.
– Teach alongside academics so they can contextualise information literacy skills.

Geof Walton emphasised the need for research-informed teaching and enquiry-based learning. Information literacy is the scaffolding for enquiry, and it can blend with technology-supported learning.

Information Literacy beyond 2.0. Peter Godwin
Peter Godwin had trouble getting any sound for his video clips, but that didn’t matter as he is direct and entertaining enough without needing to resort to videos. He favours big global themes, and here are a few he mentioned:

– Web 2.0 is old now, but actually no one knew what it was. It’s settled down but not gone away, and we are all influenced by it. Students don’t know what Web 2.0 is, although they experience and use it themselves all the time.
– We are heading for an increasingly mobile and social world and that won’t change. Our job is to accommodate that.
– There are early adopters and slow adopters. People don’t change quickly. We can watch the early adopters and learn from their mistakes.
– The nerds are a minority. Most young people use tools but don’t have a techie understanding of them.
– The younger generation are not good at sharing – and neither are academics/researchers or librarians. We need to reallocate the time we have and change the way we behave and work.
– Only when you try to write something for Wikipedia do you realise how difficult it is.

He had some engaging thoughts on information literacy – for instance, that it has been ‘pampered’ by its attachment to academia – and he suggested we should be thinking of it in the context of transliteracy. This made me think that information literacy as we know it is based almost entirely on textual information rather than visual or audio information. We are dealing with increasingly multimedia information – from the familiar, such as video, to emerging technologies, for instance Mike Matas’ ‘A next-generation digital book’ and augmented/virtual reality. New media is in perpetual development, but on a day-to-day basis our students need help dealing with old media and communication tools. Perhaps the gap between the two is where we come in at present.

This session was led by a faculty member who is not a librarian: Lana Ivanitskaya, an academic in industrial/work psychology. She designs tests – such as personality tests – and has to assess them.

Her first point was that competencies are not just knowledge and skills but also attitudes and beliefs. If you only focus on the skills you will miss a lot. Students’ own knowledge of their skills gaps is a familiar scenario for librarians: first-year students (often) think there is nothing you can teach them, while PhD students seem to have the opposite attitude. Lana Ivanitskaya described the RRSA (Research Readiness Self-Assessment) online survey, which includes tasks such as evaluating websites and applying knowledge. The survey includes ‘soft’ questions which assess the students’ beliefs as well as their results, and they have found this is very predictive of their level of attainment.

The RRSA survey also found some interesting differences between students and experts in information skills. The experts were better, and the students overestimated their skills; in fact, the more expert they were, the more the experts underestimated their skills.

Lana stated that students still find how to do research hard and are not taught how to do it. She compared the number and quality of references cited in student papers between those who had completed the RRSA and those that had gone through library information literacy training. She found that the impact of library teaching was three times better than the RRSA, but that the students preferred doing the RRSA and were more willing to do it.

So the message? Lana wondered if we should focus more on online training. Without seeing in detail what the RRSA or the library training consisted of, it’s hard to say, of course. Perhaps it’s down to the old messages of getting to the students at the right time and place and using the right voice.

Knotworking as a means to strengthen information skills of research groups. Elija Nevalainen & Kati Suvalahit.

Finding new ways that work to connect with colleagues across campus isn’t always easy. At the University of Helsinki they had success using ‘Knotworking’, a way of working developed by one of their academics, Professor Yrjö Engeström. The process brings together different groups from across the organisation to work more quickly and less hierarchically than in team structures. ‘Knots’ are formed to find solutions to specific problems, and the problem they wanted to address was how to re-engage with researchers.

Here is my summary of what they found:

– Research groups think information literacy is a good thing but they have no time for it; it’s best aimed at Masters students.
– Information skills still important to research groups are: bibliographic tools, searching databases, current awareness, obtaining material you can’t get locally, establishing networks of contacts, organising references, consulting library staff.

Interestingly, the librarians learnt that their changing role put them in the same boat as the researchers, and they learnt a lot about the researchers from this project. The project also had the unexpected effect of gelling the researchers together as a group. It reinforced the value of personal networks and of working with user groups. Working with researchers as equals also had a beneficial effect on the library staff, who developed greater confidence in working on emerging subjects and services in which they don’t yet have expertise. These themes are not new, of course, but success in developing a change in culture is something often dreamed of but not realised.

UKeIG ran a very informative workshop on mobile access to information resources, given by Martin White on 13th April 2011.

This was good timing for me as we are currently working on a new Library Web site. ‘Mobile’ has obviously been on our minds although it has also been clear from early on that we would not be able to address the delivery of mobile services within the immediate project which needs to be completed by the end of August this year.

We also need to think about the whole of our Library ‘Web presence’ – the formal Web site but also the Web interface to the Catalogue and the associated Library user account, our Open URL Resolver, Summon (which we are implementing at the moment) and our digital institutional repository.

Then there is what publishers and other information providers are doing and this varies widely.

While I already realised that the options in developing mobile services are not clear cut, this workshop underlined the point!

Do you make your Web site ‘mobile friendly’ so that it is usable on smartphones and other mobile devices or do you create a bespoke Web site for mobile devices?

Do you provide mobile or Web apps? Mobile apps deliver specific functionality and are designed to run on the operating system of a specific device – Android, Symbian or another. Web apps also offer specific functionality but are designed to run in a Web browser – so they have the advantage of running on any device, but that device must be connected to the Internet for the app to work.

What sort of mobile device are you trying to cater for anyway? Smartphones? Tablets? Or both? Tablets and smartphones are very different propositions given the much larger screens which tablets have.

What do your customers want to be able to do from a mobile device?

Not surprisingly, this is the most important question of all and it is certainly a question that we currently do not know the answer to.
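Going back to the first of those questions – mobile-friendly site vs bespoke mobile site – a common first step is crude server-side user-agent detection to decide which version to serve. A minimal sketch (the token list is illustrative only, and real detection needs to be far more thorough):

```python
# Crude server-side check for deciding whether to serve a mobile-specific site.
# The token list is illustrative only – a real detection library is far
# more thorough.
MOBILE_TOKENS = ("iphone", "android", "blackberry", "windows phone", "symbian")

def is_mobile(user_agent: str) -> bool:
    """Return True if the User-Agent string looks like a small mobile device."""
    ua = user_agent.lower()
    # Treat iPads (and tablets generally) as 'desktop' – their larger screens
    # cope well with the full site.
    if "ipad" in ua:
        return False
    return any(token in ua for token in MOBILE_TOKENS)

print(is_mobile("Mozilla/5.0 (iPhone; CPU iPhone OS 4_3 like Mac OS X)"))  # True
print(is_mobile("Mozilla/5.0 (iPad; CPU OS 4_3 like Mac OS X)"))           # False
```

Of course, this only decides *which* site to serve – it says nothing about what the mobile version should contain, which is the harder question.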

Having recently started testing access to our Web interfaces from an iPad and from smartphones, I can say the ‘user experience’ certainly differs radically between devices.

Our existing and new Web sites are perfectly usable on an iPad, as are our Catalogue and Summon.

But the ‘experience’ of using an iPhone and the HTC Wildfire – the devices we have tested to date – is very different.

This is not surprising given the tiny screens and rather fiddly touchscreen keypads involved.

While our new Web site works *technically* on these phones, the interface is so compressed that it is virtually unusable. Yes, you can move about the screen and expand different parts of the page, but it is a laborious process. Entering usernames and passwords to log into library accounts and resources is even more tedious than it is on a desktop.

My own experience and this workshop have led me to the following conclusions at this stage:

– don’t even think (if you were tempted) of making all your content and services available from small mobile devices (although you can do a lot more with tablets)

– understanding the context and motivation of the user is key. Where are they going to be using their mobile device and what are they going to want to do with it? (And here you also need to really think about what it is actually practical to do with one of these small devices – an alerting service for new journal articles sounds a useful application, but reading a journal article?)

– start again when defining and designing the content and services to provide (this doesn’t necessarily mean a completely different site as bespoke stylesheets can be used for mobile devices)

– keep it simple!

– do not try to support everything – you will not be able to

– remember what is really different about ‘mobile’ and play to the advantages that these differences offer. One of the real biggies here is that if a mobile device has GPS capability (as smartphones do) you have the option of providing location specific information to the user
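To make that last point concrete: given a device’s coordinates, a mobile service could point the user at their nearest library site. A sketch using the standard haversine formula – the branch names and coordinates below are made up for illustration:

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical branch coordinates – purely illustrative, not real data.
BRANCHES = {
    "Main Library": (52.6210, -1.1240),
    "Clinical Sciences Library": (52.6270, -1.1390),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def nearest_branch(lat, lon):
    """Name of the branch closest to the given position."""
    return min(BRANCHES, key=lambda name: haversine_km(lat, lon, *BRANCHES[name]))
```

The same lookup could drive opening hours, PC availability or directions for whichever site the user is actually standing near.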

We need to start with some user research to find out what our students and staff would want and find *easy and convenient* to use from small, mobile devices.

One of my fellow workshop participants was from another university library and they are a bit further ahead than us in their thinking on this. They plan to start with quick, ‘look up’ type information such as opening times and PC availability and they have some user feedback which supports this.

Note that these are all sites with the commercial imperative and income to get it right.

The American Chemical Society and Nature were mentioned amongst publishers who see major uses for ‘mobile’ – in providing alerting services to newly published research, for example.

Note these publishers’ use of ‘apps’. An NHS librarian amongst us noted the demand they were getting from some clinical staff who originated from the States to be ‘apped up’ to access content available via local subscriptions – suggesting what might be an emerging need, i.e. support with installing the appropriate apps. Although how far can you take this?

Finally, we were pointed to an article in issue 64 (July 2010) of Ariadne which outlines how mobile delivery of information and services is being treated as an integral part of developing Birmingham’s new ‘central’ Library – the Library of Birmingham.

Last week I had the pleasure of attending the above-named UKeIG course. The day consisted of a full programme jam-packed with useful information, knowledge and anecdotes, all provided by Professor Charles Oppenheim in his usual engaging manner.

The morning focussed on IPR issues, both in relation to the ‘rights holder(s)’ of user generated content produced via Web 2.0 applications, but also the incorporation into such content of different types of 3rd party material, which of course is a completely separate but equally important issue.

Charles helpfully directed us to the Web2Rights materials, which I have found useful in the past for their flowcharts and diagnostic tools, and tested us with a number of scenarios. As my fellow participants and I were from a wide range of copyright/IPR/Web 2.0 technology backgrounds, it was interesting to see that we were all fairly consistent in our responses and approaches to the issues raised.

The most interesting sessions of the day for me were those covering the issues of defamation and data protection. The increasing adoption of Web 2.0 technologies as part of educational engagement means content generators need to be aware of UK defamation law, and what can constitute libel, even if said in jest.

Whilst many of us know the basics of the Data Protection Act (DPA), it might come as a surprise to those who have embraced cloud computing that personal data such as that covered by the act should not be moved outside the EEA, unless the recipient country has an ‘adequate’ level of protection themselves, and that data held in a ‘cloud’ is often moved around the world, albeit temporarily, to maximise system efficiency.

It was a day that provided much food for thought, and I think it would be very easy to get weighed down in the detail and the intricacies of the Acts. However, in the first instance I think I shall just draw up some guidelines to include in my training materials!

I attended a JISC workshop on use of user activity data on 14th July. The purpose of the workshop was to present some recent JISC funded and other work in this area and to consult on what additional work JISC might usefully fund in the future.

The presentations and discussion largely focussed on user activity data gathered by library management systems such as search histories and circulation data. Other types of activity data would include usage statistics for electronic journals at both title and article level and activity data generated within VLEs.

So, what might you want to use such activity data for?

Well, as the title of the workshop suggests, one use is to gain ‘business intelligence’ of different kinds. So, in a library context, to see what resources are being used and by whom. This is, of course, very topical in a challenging economic environment where difficult decisions need to be made about what to spend money on and what not.

Another possible use is to exploit activity data to enhance the ‘user experience’ in different ways. The best known example of this is probably Amazon’s ‘customers who bought this item, also bought these items’. In a library context, this could become ‘people who borrowed this item, also borrowed these items’.
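The ‘people who borrowed this item, also borrowed these items’ idea boils down to counting co-occurrences in circulation records. A minimal sketch of the approach in Python – the loan records, borrower IDs and item IDs here are invented purely for illustration, not drawn from any real library system:

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical anonymised loan records: (borrower_id, item_id) pairs.
loans = [
    ("u1", "maps101"), ("u1", "gis200"),
    ("u2", "maps101"), ("u2", "gis200"), ("u2", "stats150"),
    ("u3", "maps101"), ("u3", "stats150"),
]

# Group the items each borrower took out.
items_by_user = defaultdict(set)
for user, item in loans:
    items_by_user[user].add(item)

# Count how often each pair of items was borrowed by the same person.
co_counts = defaultdict(Counter)
for items in items_by_user.values():
    for a, b in combinations(sorted(items), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def recommend(item, n=3):
    """Items most often borrowed alongside `item`."""
    return [other for other, _ in co_counts[item].most_common(n)]

print(recommend("maps101"))
```

A production recommender would of course need proper anonymisation and thresholds to suppress rare (and potentially identifying) item pairs, but the core counting really is this simple.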

The best known (only?) university library in the UK which has been using activity data in this sort of way is that at Huddersfield – and the workshop included a presentation from Huddersfield’s Library Systems Manager, Dave Pattern. This illustrated that borrowing of unique titles has increased at Huddersfield since the ‘recommender’ features were introduced – suggesting that students are benefitting from a new means of locating related resources.

The JISC funded MOSAIC project has been exploring how user activity data can be extracted from library management systems and combined with data from student record systems to provide recommendations along the lines of ‘Economics students who borrowed this item, also borrowed these items’.

And Ex Libris now have a recommender service called bX which uses activity data taken from OpenURL resolver log files.

Issues which arose during the discussion included:

Why haven’t more libraries done what Huddersfield has done? Possible reasons touched upon included lack of data (some systems do not log the necessary activity data), limited access to technical skills, competing priorities, concerns about data protection and other legal issues.

Why would a senior manager commit time and resources to exploiting user activity data? Business drivers might include usage analysis to demonstrate/assess value for money; improving students’ ability to find relevant resources, therefore enhancing the student experience and improving performance and retention rates.

Who ‘owns’ user activity data and who should manage it? Issues here include user consent, trust and the purposes to which data is put.

One thing which the workshop underlined for me is that there are many kinds of user activity data which can be used for many different purposes. This diversity of data and potential purposes can complicate the discussion at times.

Discussion suggested that what may be needed now is some concrete use cases, addressing real needs, which provide more evidence of practical benefits in an HE context. And that’s on the recommender system side of things.

On the business intelligence side of things, the case for committing staff time and resources to the analysis of user activity data seems to me easier to make at the institutional level in current circumstances.

Last but not least, JISC is currently funding development of a Usage Statistics Portal – which will provide libraries with a means of accessing e-journal usage data in one place. That certainly is addressing the ‘business intelligence’ aspect of user data.

I went to my first Mashed Library event, Mash Oop North, in Huddersfield in July 2009, had a fantastic time, and was pleased to go back to Liver and Mash in Liverpool in May this year. The Mashed Library events unfold in a relatively informal unconference format, with lots of discussion of ideas and ways of quickly and easily implementing mash-ups in library and information services.

This post won’t be so much a reflection on the event as a collection of tools and ideas which I found inspiring, and hope to come back to over time. Hopefully there’ll be something to inspire others too.

The Worldcat Basic API is available free for up to 1,000 queries a day (assuming non-commercial use) and, given a query, can return a list of books held in OCLC’s comprehensive Worldcat Catalogue. The list is returned in RSS or Atom format, and can be formatted according to a number of standard citation styles. Given the query limit, I’d be wary of using it long-term on an academic library site, but there are further options available to those subscribing to OCLC services.
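Since the API can return results as an Atom feed, pulling the titles out is straightforward with the standard library. A minimal sketch – the feed fragment below is entirely made up for illustration; a real response would come from an HTTP request to OCLC’s documented endpoint, carrying your API key:

```python
import xml.etree.ElementTree as ET

# A made-up fragment in the Atom format the API can return;
# a real response would be fetched over HTTP with an OCLC-issued key.
sample_atom = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Search results</title>
  <entry><title>Cataloguing and Classification</title></entry>
  <entry><title>The Organization of Information</title></entry>
</feed>"""

# Atom elements live in this XML namespace.
ATOM = "{http://www.w3.org/2005/Atom}"

def entry_titles(atom_xml):
    """Extract the title of each <entry> from an Atom feed."""
    root = ET.fromstring(atom_xml)
    return [e.find(ATOM + "title").text for e in root.findall(ATOM + "entry")]

print(entry_titles(sample_atom))
```

From a list like that it’s a short step to rendering results on a library web page or into a citation list.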

Unfortunately, we were lacking a reliable wireless signal on the day, so weren’t able to develop much on site. The second day, however, moved on to a wider variety of applications, so I was able to take notes and experiment later. Again, here’s a selected few:

Tony Hirst from the Open University spoke about gathering data on use of library websites (e.g. via Google Analytics), and segmenting users into groups by types of behaviour. Gathering behavioural data definitely sounds like something I’ll need to think about in our forthcoming redesign of the library website as part of the team moving the site to the University’s new content management system, Plone.

Julian Cheal from the University of Bath demonstrated some ways of using RFID. I’ve long had a bee in my bonnet about the limited uses (issue and return) we have for RFID in libraries considering we’re one of the biggest users of the technology, and it was interesting to see demonstrations of library cards generating prompts and information as users entered the library or carried out library-related activities.

Lastly, John McKerrell talked about using maps in mash-ups. Maps are something I’ve seen used quite a lot on library websites, but only occasionally do these services go far beyond embedding Google Maps. Services which particularly stood out were Mapstraction – which allows web developers to switch quickly and easily between different map services, Get Lat Lon – which is a quick and easy way of finding latitude and longitude values for a given location, and OpenStreetMap – a free, collaboratively-edited map.

While I’ve not jumped in and used any of these services straight away, both the Mashed Library events I’ve been to have really opened my eyes to the wide variety of options available to me for using and integrating data on the web. You may see a few of these services turning up on the library website as we get further down the line with the Plone rollout! To finish the post, here’s a video of Liver and Mash, which I think catches the atmosphere and creativity of the event pretty well.

Like Gareth, I pick up lots of useful information and links to new reports via twitter now rather than by other routes.

When using these technologies it is important to be human: respond to people, don’t just broadcast, share things.

The best use of Web 2.0 comes when you allow it to overlap your personal, workplace and professional lives, but if you’re not comfortable with this level of engagement it can still be useful when used only in work hours.

Talked about embedding library resources & links into the VLE so students “don’t have to remember where to go” to get stuff. Student feedback suggested that they often forget how to use resources between years/terms/f2f sessions.

Often need to ‘sell’ the resources/need to embed to the academics, but once in a few courses then get a snowball effect due to good student & course team feedback.

Embeds all his teaching resources as well as core library resources.

Sustainability: think about the time/workload required, timescales and the tools needed. E.g. a previous HTML editor wasn’t up to the job so he now uses Wimba Create. Their approach is to use a repository and link all courses to one version of the core resources page, so it is easily updated in one place by more than one person.

Updating: design so it only needs updating once or twice a year.

This initiative has led to more visibility, embedding of f2f sessions, more liaison with academics and more enquiries.

Currently a trial and only being done by Science team.

Just about to start using TalisAspire for reading lists.

Approx 1-2 weeks of time needed to build resources & embed them.

Moved subject-based library pages within the VLE and linked out to other types of library pages.

Wanted to identify 80% of ‘start’ points for 80% of tasks that 80% of users do 80% of the time by asking 130 students where they look for resources.

Student feedback was that their start points for university work/resources were Google, OPAC, reading list, Blackboard VLE, databases, library homepage or student homepage (in that order of preference).

The first place they go when sitting at a university PC: Uni email, Google, Blackboard, Facebook.

Key library services: ejournals, renew books, search resources.

They do not use the library homepage as anything other than a gateway & don’t read library news.

Happy to use search tools but unsure of finding the right search tools in the first place.

Customer journey mapping of tasks such as finding an article from a reading list showed very convoluted routes to get there! Hope the MyLibrary tool will help get them there quicker.

Can put MyLibrary button in variety of places they use frequently such as Facebook and VLE.

Got a quick overview of the new RSC interface – they are very keen for librarian feedback, either via their survey or as beta testers. Also a quick look at ChemSpider, an excellent, free chemical structure resource.

Since I stopped working at places that used Netscape, I’ve only really used Internet Explorer. Recently, though, I have come into contact with other things (OK, I admit that one of the “other things” is also IE, but bear with me!)

Firstly, I teach one student group that does everything on laptops with Linux, Open Office and Firefox. The Library has one such machine (to help enquiry staff see things as Firefox users see them), and it has been very handy for me! We looked at PubMed, Web of Science and RefWorks in a recent session and had no difficulties (apart from me downloading documents and then not being able to find them).

Then, my home laptop has Internet Explorer 8 (I installed this as part of a regular Windows Update). It has tabs, like IE7. But it has various features that can suggest related sites or search results to you – an enhanced search box which suggests sites to you, and buttons that suggest sites related to the one that you are looking at. I don’t have the enhanced search box, and I haven’t yet got on very well with the button, so more research needed. Right clicking things also allows you to quickly send Google Mail email, or blog about things.

The thing that caught my eye, though, about IE8 is the “InPrivate” facility. This opens a new browser window which does not record any history of what you have looked at. As long as the “InPrivate” logo is at the start of the address bar, it will not predict sites as you type into that bar, and going to favorites and history will list nothing. According to Microsoft’s IE8 webpages, this is aimed at people who want to check personal email on a shared PC (in an internet cafe, perhaps), or who want to order presents online. I can imagine that it might enable all sorts of things to go on, and it does not seem to be possible to turn it off.

Last of all, Google Chrome. I recently installed Real Player on my laptop and was offered this, so thought “why not”. I have looked at library webpages and it seems fine, and very quick, which is one of its selling points. But when I tried to access the RSPB Big Garden Birdwatch site to report results, it hung and nothing happened. More experimentation needed. I would be interested to hear (or read!) of others’ experience.

We may have users who have these various browsers, so it is good to see them, and hopefully to remember that the browser may be playing a part in any problems we are trying to troubleshoot. It is fair enough to have a preferred one for University purposes, perhaps, but we need to be aware of the wider browsing world.

I’ve just been reading about a case study of the University of Westminster, which it is claimed could save £1 million by using the Google Apps education edition, so that all its students and staff use Google Docs, email etc. At the same time I was reading about a new pay-per-view approach to accessing research papers, launched in October 2009 via the search engine DeepDyve (which specialises in scientific, technical and medical papers). Users of this model can read, but not download, papers as often as they wish for a 24-hour period for $0.99 each. Publishers such as Oxford University Press, Sage, Taylor & Francis and Wiley-Blackwell can be found there. There are also subscription services: for $9.99 a month users get 20 articles, and for $19.99 they can read unlimited articles. The search engine also includes open access papers, which can of course be viewed for free.

The search engine offers the usual services of email alerts / RSS feeds and, interestingly, you are invited to copy paragraphs of text into the search box: “No need to come up with the perfect 2-3 words. Simply paste an interesting article into the search bar and click!”. DeepDyve have recently partnered with CiteULike so their users can also rent articles directly from DeepDyve.
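Out of curiosity, the pricing arithmetic is easy to sketch. A quick comparison of the three options at the prices quoted above – this simplistically assumes the 20-article plan simply can’t be used beyond 20 articles a month:

```python
def cheapest_plan(articles_per_month):
    """Pick the cheapest option for a given monthly volume, using the
    quoted prices: $0.99 per article pay-per-view, $9.99 for 20
    articles a month, $19.99 for unlimited reading."""
    options = {
        "pay-per-view": 0.99 * articles_per_month,
        "20-article plan": 9.99 if articles_per_month <= 20 else float("inf"),
        "unlimited": 19.99,
    }
    return min(options, key=options.get)

for n in (5, 15, 25):
    print(n, cheapest_plan(n))
```

On these numbers, pay-per-view wins up to 10 articles a month, the $9.99 plan from 11 to 20, and unlimited beyond that.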

Whilst this is probably aimed at researchers outside the conventional channels for accessing research literature, I can imagine that lots of postgraduate students and academics might be tempted to pay 61p on an occasional basis just to save the trouble of filling in and signing the forms which give them free access to journals via traditional document supply. Then again, perhaps signing up for an account is just as much hassle. I wonder what take-up they will have, and what new publishing models could be coming soon?

00:32 Indeed, the physical space of the library isn’t the be-all and end-all anymore. Nor has it been, to be honest, in all the years I’ve been a professional.

00:45 Who are these people? On-screen names might have been a good idea – most of these talking heads haven’t got immediate recognition factor (I know if I’d been on there no one would know who I was without a caption!)

01:12 First mention of Google. Is this the library of the future? These two guys I will say are pretty typical of most of my students.

01:25 Oh that’s who she is, Director of Oxford Libraries. Would have been useful to know earlier.

01:39 Yep, mobile devices are the future (and indeed present) of an increasing number of students accessing information. How many of our information resources we provide are m-compatible? Indeed hands up those of you who have access to mobile devices comparable to the students to test them out? Thought so…

02:12 Technology as enabler not driver? I think it’s a bit of both personally. 24/7 global access is a real demand, and usually satisfied I’d say. 24/7 support on the other hand…

02:40 Really warming to Sarah Thomas (Oxford). Never met her, but she seems an insightful individual.

02:53 Oh now you suggest technology is a catalyst for change as well.

03:00 Technology lets you work smarter, but you have to change to make use of it. Yep, agree, old paradigms just don’t hold in Library 2.0.

03:20 Popular themes for libraries of the future. First talking head still talking about the library as a physical space, I think less and less that the space will be so crucial. But that’s only opinion. But a fair point raised about study space, rather than storage space as a crucial continuing role.

03:58 The library will be like a bee hive? Filled with workers, and drones thrown out to die when their purpose is through? Not quite the enabling metaphor I’d have hoped for. I don’t think bees show excitement, more a work ethic.

04:25 Sounds like the DWL fulfils many of these criteria for a future library, which is quite heartening.

05:06 Libraries as contributors to knowledge base. Nothing new, this is what we’ve been doing for years, exposing our catalogues, websites and information and making sure the metadata is discoverable. Certainly the repository is doing this!

05:13 What does the future hold for the librarians? Early retirement somewhere hot would be nice.

05:29 The old-fashioned librarian is a “detail oriented, highly introspective individual”. Erm, not me then. Ah, but the modern librarian is entrepreneurial, enthusiastic and more outward-looking. Yeah, that’s me, clearly I’m future-proofed. But what do we do with all the old librarians who don’t meet this specification? Retrain?

05:55 Loss of face to face contact with users. Sad but true, hence the need to engage with them through other channels. Blogs, twitter etc.

06:28 Academic image and card catalogue juxtaposed. Surely no one is using those in academia anymore?

06:39 This video brought to you by JISC and the number 9.

07:12 Libraries need to change the way they work and support learning, teaching and research. Ah, but many of us are already. Good to hear about levels of investment from JISC though towards this end.

07:51 The soundtrack hardly screams modern with its classical violins.

08:16 Global environment, but no mention of potential competitors for library services. Whither Google University and the like. I think there are some big sharks out there that we need to be aware of, ready to pounce unless we’re more mobile/adaptable and promoting the real USPs that we libraries and librarians offer to our fee paying users.

08:29 This year long JISC campaign and debate, don’t recall engaging in it myself. Or is this the start of the debate, discuss!

08:56 Libraries are happening places. Groovy man.

09:12 Agree, libraries need to act now and plan to meet the future challenges.

Well that was well worth watching, despite my misgivings at the start. Quite a bit of food for thought, even if most of the conclusions and points raised were hardly news to me. So the debate has begun. But at what level will it happen? Since all these talking heads were either very senior librarians or students, I didn’t see a lot of input from those of us exploring, experimenting and adapting technologies and techniques. Then again, I am blogging about this – so maybe I am starting to kick into the debate.