JISC's Learning Registry Node Experiment at Mimas

The Story So Far?

I’ve recently had a number of interesting and informative conversations in my attempts to pin down the nature of the Learning Registry (LR) and its potential significance within and beyond its US origins. I thought it might be interesting to summarise some of the headline ‘findings’ ahead of the Mimas JLeRN workshop on 22nd October, all of which are of course subject to learning more on the day. These are best presented as a sort of historical narrative …

2) Whilst those challenges may have had particular priority in the minds of DoD and DoE stakeholders, they were symptomatic of the much-discussed difficulty of describing learning resources in ways that will enable potential uses and users (from course designers to learners) elsewhere; to put it bluntly, there is no consensus after all these years that enables us to homogenize / harmonize learning resource metadata – it’s like a muddy pond containing fish, plant life, shopping trolleys, industrial byproducts, children swimming, others fishing … it feels like a random ‘mess’, not an ecosystem.

3) Paradata (i.e. usage data with context) may be a vital part of the jigsaw – allowing resources to become increasingly ‘well-described’ on the basis of their utilization (Who, Where, When, How, etc…); however, a format is only as good as the quality of the data itself, particularly regarding the use (or not) of consistent and persistent identifiers to ‘link’ paradata.
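To make the identifier point concrete, a paradata assertion can be sketched in an activity-stream style. The field names and URI below are illustrative, not the Learning Registry specification; the point is that a persistent identifier in the ‘object’ slot is what allows separate assertions about the same resource to be linked at all.

```python
# A sketch of a paradata assertion (illustrative field names, not the LR spec).
# The persistent URI in "object" is what lets assertions be linked.
assertion = {
    "actor": {"objectType": "teacher", "description": "secondary maths"},
    "verb": {"action": "taught", "date": "2011-10-01"},
    # Hypothetical persistent identifier for the resource:
    "object": "http://example.org/resources/quadratic-equations",
    "content": "Used with a year 10 class alongside dynamic geometry software",
}

def about_same_resource(a, b):
    """Two assertions can only be aggregated if their identifiers match."""
    return a["object"] == b["object"]
```

Without a consistent, persistent identifier the `about_same_resource` test fails even when two assertions describe the very same resource – which is the quality problem the point above is gesturing at.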

4) Paradata has the potential to be more powerful at scale (introducing statistical reliability and exposing the long tail of resources and of usage) and may therefore benefit from the ability to network datasets across the community (subject, national, international).

5) The LR project developed an ‘approach’ (model, architecture…) that addresses the key issues of mess (see 2), context (see 3) and scale (see 4). In my simple terms, the LR approach proposes that the mess is addressed by a flexible approach to data attributes (anything goes), context is evidenced by paradata, and scale is enabled by orchestration between networked nodes.

6) In organizational and temporal terms, LR is only a project and therefore the potential has yet to be realized; however, it is already valuable that practitioners intuitively recognise both the relevance of this response and the possibilities it may open up.

7) In solution terms, LR is only a configuration of plumbing, of machines talking to machines (that’s all it set out to be) and therefore it is emphatically down to the community (who’s that?) to build both local and larger scale services, interfaces and applications on top.

8) In technology terms, LR made some choices at a moment in time (e.g. to use the CouchDB NoSQL database); whilst these are far from outdated, it may be that it is the approach that is more significant going forward than the architecture or the code.

9) The potential of learning paradata raises issues about the divergence (or is it convergence, glass half full?) of approaches to usage data / activity streams; in one dimension this data represents part of the personal learning record (“I did this”), whilst ‘at the other end of the triple’ (thanks, Phil) it is about the resource (“It was used in this way”); that sounds exciting until we dig deeper into issues of storage / retrieval, privacy / access and more.
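The ‘two ends of the triple’ can be sketched as the same usage event read in both directions (again with purely illustrative identifiers):

```python
# One usage event as an (actor, verb, object) triple.
# "learner:4021" is a hypothetical learner identifier - and is exactly
# the privacy-sensitive part mentioned above.
event = ("learner:4021", "completed", "http://example.org/resources/quadratic-equations")
actor, verb, obj = event

# Read from the actor end: part of a personal learning record ("I did this").
learner_view = f"{actor} {verb} {obj}"

# Read from the object end: evidence about the resource ("It was used in this way").
resource_view = f"{obj} was {verb} by {actor}"
```

The storage / retrieval and privacy questions follow directly: the learner-end reading wants the event kept with (and controlled by) the individual, while the resource-end reading wants it aggregated and openly queryable.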

10) At this point in the history (Is that an end point? I think it was Simon Schama who asserted that the French Revolution is still ongoing), we are faced with a familiar dilemma concerning the investment relationship between rapid innovation (technology and tools are always moving on), embedding in the community and reliable productisation … Is the target audience too narrow and the governance too uncertain to deliver the power of the LR approach? Is there a wider value in the LR approach that would bring critical mass and a sustainable trajectory?