JISC's Learning Registry Node Experiment at Mimas

JLeRN Reporting: Skills and Capacity in the UK for the Learning Registry

This is one of a series of posts from the JLeRN Experiment, forming sections of what will be our Final Report. Please send feedback and discussion in the comments section, as your own blog post response, via Twitter (hashtags #jlern #learningreg) or on the OER-Discuss and Learning Registry email lists. To see all the posts, click here.

JLeRN Draft Report: 3. Skills and Capacity in the UK

Who has skills and expertise?

As far as we can tell, only a small number of JISC and JISC CETIS staff, the JLeRN team, and the few dozen people who have attended Learning Registry or JLeRN events are familiar with the Learning Registry concepts and use cases. See the previous report section on Appetite and Demand for a sense of who has been following and participating in this work. There are doubtless other lurkers out there whom we haven’t met (yet) – feel free to make yourselves known in the blog comments!

Direct technical experience with the Learning Registry in the UK HE sector consists of (and please let me know if I’ve missed anyone!):

JLeRN developer Nick Syrotiuk;

The ENGrich team at Liverpool University, and their local technical staff who have worked with them setting up a Learning Registry node;

Julian Tenney and his colleagues who work on Xerte at Nottingham University, who took an early interest in the Learning Registry when it started;

Pat Lockley, formerly of Oxford University, now maintaining an interest in his own time outwith his current post; see his blog post for JLeRN describing and giving access to four Learning Registry tools he has developed.

We have unfortunately lost the expertise of one key JISC CETIS expert: John Robertson; and two key developers: JLeRN’s Bharti Gupta, and Jorum’s Steven Cook (who worked on developing a CakePHP Datasource for Jorum to use with the Learning Registry). All of these people had become skilled in various aspects of the Learning Registry and all have now moved outwith JISC’s community.

Where the gaps are

As the aforementioned Appetite and Demand blog post notes, we’ve had enough interest and use cases coming in to make us feel the involvement we’ve had at Mimas has been worthwhile. Feedback from the institutions and projects working with, or dipping a toe into, JLeRN and the Learning Registry has mostly consisted of requests for: easier entry to and navigation of the existing Learning Registry documentation; simple Web apps and APIs for exploring, publishing to, searching and extracting from a Learning Registry node; and tools for capturing and collating paradata.
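To illustrate the kind of lightweight tooling being asked for, a node’s public REST endpoints can be queried with nothing more than the standard library. Below is a minimal sketch in Python, assuming a node reachable at the base URL shown (a placeholder) and a `/slice` endpoint taking `any_tags` and `from` parameters, as we understand the public Learning Registry API to work; treat the parameter names as assumptions to be checked against the current documentation.

```python
import json
import urllib.parse
import urllib.request

def slice_url(base_url, tags, from_date=None):
    """Build a URL for a Learning Registry /slice query.

    `any_tags` matches documents carrying any of the given tags;
    `from` restricts results by timestamp. Both parameter names are
    assumptions based on the public API documentation.
    """
    params = {"any_tags": ",".join(tags)}
    if from_date:
        params["from"] = from_date
    return base_url.rstrip("/") + "/slice?" + urllib.parse.urlencode(params)

def fetch_slice(url):
    """Fetch and decode one page of slice results (makes a network call)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Building the query URL needs no network access (placeholder node URL):
url = slice_url("http://lr-node.example.org", ["engineering", "oer"],
                from_date="2012-01-01")
print(url)
```

A simple Web app for exploring a node is essentially a thin layer over requests like this one, which is why the feedback above asks for conveniences rather than new infrastructure.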

JLeRN has done a little to move tools forward: we have the JLeRN Node Explorer by Nick Syrotiuk; Pat Lockley’s tools, to which we have facilitated access through presentation slots and this blog; and we have tested using an OAI-PMH utility, developed in the U.S., to extract Jorum metadata and publish it to a node with its keywords converted into Learning Registry tags. We have kept our own node up to date with software releases, and experimented with installing nodes on Ubuntu and Windows. We never found the capacity to try out a cloud-hosted node – our American colleagues use Amazon Web Services, but we could never get to the bottom of how to manage the costs within a UK university funding environment. Nor was there much call for experimenting with the Learning Registry’s more complex architectural proposals, namely linking nodes into networks and communities.
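The keyword-to-tag conversion just described can be sketched roughly as follows: take a harvested Dublin Core record’s dc:subject keywords and wrap the metadata in a Learning Registry resource_data envelope whose `keys` field carries them as tags. The field names follow the Resource Data Description document format as we understand it; the sample record, submitter string and URLs are invented for illustration, not taken from the actual utility.

```python
import json

def to_lr_envelope(record):
    """Wrap a harvested OAI-PMH Dublin Core record in a Learning
    Registry resource_data document, converting dc:subject keywords
    into Learning Registry tags via the `keys` field.

    Field names follow our reading of the Resource Data Description
    spec; submitter details and TOS URL are placeholders.
    """
    return {
        "doc_type": "resource_data",
        "doc_version": "0.23.0",
        "resource_data_type": "metadata",
        "active": True,
        "identity": {
            "submitter_type": "agent",
            "submitter": "OAI-PMH harvester (example)",
        },
        "TOS": {"submission_TOS": "http://www.learningregistry.org/tos"},
        "resource_locator": record["identifier"],
        "keys": record.get("subjects", []),   # dc:subject -> LR tags
        "payload_placement": "inline",
        "payload_schema": ["oai_dc"],
        "resource_data": record["dc_xml"],    # the raw DC metadata payload
    }

# Invented sample record, shaped as a harvester might return it:
record = {
    "identifier": "http://repository.example.ac.uk/handle/42",
    "subjects": ["engineering", "open educational resources"],
    "dc_xml": "<oai_dc:dc>...</oai_dc:dc>",
}
envelope = to_lr_envelope(record)
print(json.dumps(envelope, indent=2))
```

Publishing is then, in principle, a single HTTP POST of `{"documents": [envelope]}` to the node’s `/publish` endpoint.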

What would be needed for the Learning Registry to move to the next level in the UK?

“[…] those who recognise the gaps in paradata provision are interested in the Learning Registry at a strategic level, but this invariably raises an accompanying problem of expectations management. In describing the problem area the Learning Registry is aimed at, and the innovative approach being taken, it can be easy to latch onto this work as the next big thing that will solve all problems, without recognising that there is (a) a lot of work to be done, and (b) that work would require a well-supported infrastructure, both technically and in terms of communities, shared vocabularies, and so on.”

The JLeRN team have found that working with the Learning Registry code and specifications has been, to date, relatively straightforward. There is a steep initial learning curve in becoming familiar with the documentation and concepts, but the CouchDB-based software itself is easy to install and manage. As Pat Lockley has said, it is “state of the art”.

However, one obvious weak spot is the limited vocabulary of the Learning Registry Paradata Specification. It throws into sharp relief the fact that any scalable implementation of the Learning Registry as an architecture will bring back some very familiar requirements: communities will need to agree on vocabularies, degrees of openness, licensing, shared content standards, and so on. The Learning Registry, as a technical investigation, has chosen to push these concerns back out to the community, thereby avoiding more years of technical discussion and testing around interoperability – but that does not mean the concerns have evaporated. Moreover, JLeRN has asked David Kay of Sero Consulting to prepare a brief on where the Learning Registry sits in the broader context of the Web, linked data, research data management, activity data and libraries; there may be parallel developments going on, and we need to keep an eye on them.

What it does mean is that, if we are to close the gap between strategic enthusiasm for the Learning Registry’s potential wins and the small-scale use case and prototype testing phase we are in, we will need a big push, backed by a clear understanding that we will be walking into some of the same minefields we have trodden, cyclically, for the past however many decades. And it is by no means clear yet that the will is there, either in the community or at the strategic level.
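To make the earlier vocabulary point concrete, a paradata assertion in the activity style of the Learning Registry Paradata Specification looks roughly like the following actor/verb/object structure. The specific actor and verb values here are our own invented terms – and it is precisely terms like these that communities would need to standardise for paradata from different sources to be aggregated meaningfully.

```python
import json

# A paradata "activity" in the rough shape used by the Learning
# Registry Paradata Specification: who did what to which resource.
# The actor objectType and verb action are invented vocabulary terms;
# agreeing on such vocabularies is the community work discussed above.
activity = {
    "activity": {
        "actor": {
            "objectType": "educator",         # invented vocabulary term
            "description": ["engineering lecturer"],
        },
        "verb": {
            "action": "taught",               # invented vocabulary term
            "date": "2012-05-01/2012-06-01",  # period the assertion covers
        },
        "object": {
            "id": "http://example.org/resource/42",  # hypothetical resource
        },
    },
}
print(json.dumps(activity, indent=2))
```

Two nodes can exchange documents like this one without ever agreeing on what “taught” means – which is exactly why the interoperability concerns above remain open.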