I posted a while ago about my upcoming move to the USA and today is my last working day at CETIS. Later in the week I’ll be getting on a plane to head over to Seattle and continue job-hunting on the West Coast. I’m looking forward to the challenge and new opportunities but will miss my colleagues and all those I’ve worked with (in particular all those who have been involved in UKOER). I’ve really enjoyed working with the programmes and thinking through the differences, pitfalls, and opportunities openness affords. I’ll look forward to hearing in due course how the Phase 3 and Rapid Innovation projects develop. I am very grateful for the opportunity I’ve had to work for CETIS and have been privileged to work with some great people.

No matter how carefully planned a move may be, there are always unfinished fragments – the paper you’ll need to shelve for a while, the draft blog posts which either never quite got finished or never got beyond the record of a neat idea. As a way of rounding off my time with CETIS I wanted to mention some of the more recent blog posts which never quite made it (with apologies if this is more waffle than normal). [Rowin’s blogged about all of our most popular and favourite published posts of 2011].

Identity, Emigration, and Learning
This post tried to draw together some of the thoughts and conversations I’ve had in the past year about culture, identity, and the opportunities for reflection and learning they have presented. My reflections are still in progress, but I’m grateful for how Twitter, conferences, and ds106radio have triggered opportunities and conversations which have helped me think through what we’re doing. In particular I’d have to single out @easegill for conversations about change, kids, and songs of emigration.

Kickstarter, innovation, and project bids
A post on my experiences with Kickstarter as a backer. This post stalled because it became more about boardgames than about funding (and as much as I love games, that is not what I’m paid to write about). The part which was of wider interest was thinking about good pitches vs good delivery, and the importance of post-funding communication. It is a bit different for projects in the JISC world but, as Lorna and Amber are fond of saying, “as a project you can’t fail if you’re open about the process”. It also might have talked about the increasing ‘subversion’ or ‘success’ of Kickstarter as it is increasingly used by commercial companies (in boardgames at least) to fund their first print run and remove some of their risk – prompting a reflection on a) to what extent institutions are actually capitalising on JISC funding to fund institutional change, and b) how hard it is to make the funding process lightweight and attract interest from those who don’t regularly bid.

Next Generation Interfaces
This was another post inspired somewhat by games, but more about the radical shifts that we’re experiencing in consumer technology and the impact they will have on innovation. In themselves cameras and motion sensing are nothing new, but the availability of technology like the Kinect in a relatively inexpensive consumer format (and its upcoming mainstream release for Windows) is likely to create affordances for all sorts of innovation. Yes, it’s gimmicky and fiddly, but I also think it’s going to be one of those quiet leaps forward that changes how computers integrate into life and education.

copyright, zero cost reproduction, non-rivalrous goods
Prompted by a debate over a university sticking a copyright licence on a digitised copy of a work which was in the public domain. I agreed and disagreed with this post all over the place – the intellectual content is in the public domain, but the recreation and availability of a given version is a different discussion. I want to say the digitised copy should be free and public domain, but I know first hand what it costs to make such things, the relative lack of support for doing so, and the genuine creative work that can go into it (and how tightly institutions hold the IPR of their offline special collections [which is incidentally why this whole discussion was a bit odd]). I could make tangential points for and against this and wish it could be open, but in the wider discussion (not the original post) I found the lack of appreciation of the work and costs involved to be an unhelpful simplification.

opened11
Given how much I’ve written about OER and UKOER, lots could be said about my unfinished posts relating to openness, but frequently it was simply that someone else said it first, said it well, and it was better to point to their work than to finish a post saying the same thing.

Looking forward

This blog will continue to exist here but I’ve also copied it onto kavubob.wordpress.com and hope to continue it there.

Update: For clarity, this is a piece of documentation for a specific group rather than a “regular” blog post. It may be of wider interest but it makes a number of contextual assumptions…

This document assumes that you have some familiarity with the intent of the Learning Registry (LR) http://www.learningregistry.org/ and that you are interested in contributing information about your resources. It lists a few things to consider before you get into the detail of the how-to guide. More extensive information is available from the Learning Registry document collection. This document draws on that documentation (by the US Dept of Ed, SRI International, and others) and feedback from the LR development team. Its primary audience is those in the UK community thinking about contributing metadata/paradata/resources. It’s intended to help technical managers get a quick overview of the issues in contributing to the Learning Registry test node and the forthcoming experimental node at MIMAS.

Preparing your data

The primary purpose of the LR record is to indicate the existence, location, and owner of a resource and its related metadata and paradata. The LR allows you to submit full or partial metadata, and to (optionally) include the resource itself as payload. The more metadata you submit, the more discoverable your resources become. You can optionally include some basic information about resources to support filtering and browsing, and you can opt to include original records in the data rather than referring to them. The LR does not care what metadata formats you use (though data consumers who discover your information through the LR might…).
Contributors submit/push data about their resources to a node, which distributes that data to other nodes in the system. The LR will not itself harvest or gather information about your resources; you need to actively contribute it.
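To make the push model concrete, here's a rough sketch in Python of wrapping some metadata in a resource data envelope and posting it to a node's publish service. The field names and the /publish endpoint reflect my reading of the public LR documentation at the time of writing, and all the URLs and names are placeholders – treat the details as assumptions and check them against the current LR spec and your node's own documentation.

```python
# Sketch of the LR push model: build a "resource_data" envelope and POST it
# to a node's /publish service. Field names and endpoint are assumptions
# based on the public LR documentation -- verify against your node's docs.
import json
import urllib.request

def make_envelope(resource_url, metadata, submitter):
    """Wrap metadata about one resource in an LR resource_data envelope."""
    return {
        "doc_type": "resource_data",
        "doc_version": "0.23.0",          # spec version assumed here
        "resource_data_type": "metadata", # or "paradata", "resource"
        "active": True,
        "TOS": {"submission_TOS": "http://www.learningregistry.org/tos"},
        "identity": {"submitter": submitter, "submitter_type": "agent"},
        "resource_locator": resource_url, # where the resource actually lives
        "payload_placement": "inline",    # metadata carried in the envelope
        "payload_schema": ["DC 1.1"],     # whatever format you actually use
        "resource_data": metadata,
    }

def publish(node_url, envelopes):
    """POST one or more envelopes to a node (the active 'push' step)."""
    body = json.dumps({"documents": envelopes}).encode("utf-8")
    req = urllib.request.Request(
        node_url.rstrip("/") + "/publish",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)  # node returns a JSON status response

# Hypothetical example resource and submitter:
envelope = make_envelope(
    "http://example.ac.uk/oer/widget-course",
    {"title": "Intro to Widgets", "creator": "Example University"},
    "example-university",
)
# publish("http://test.node.example.org", [envelope])  # needs a node agreement
```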

However, there are issues of local practice that you may want to consider prior to sharing your data. In particular: how are you identifying your resources (e.g. do they have cool URIs?), and how are you exposing any usage/activity data which you have about those resources (the paradata format developed alongside the Learning Registry might be useful)?
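For illustration, a usage/activity statement in that paradata format is, roughly, a JSON "activity" with an actor, a verb, and an object. The field names below follow my reading of the draft paradata specification and the values are invented, so treat this as a sketch rather than an authoritative record:

```python
# Illustrative LR paradata "activity" statement. Field names are assumptions
# drawn from the draft paradata spec; the resource URL and figures are made up.
import json

usage = {
    "activity": {
        "actor": {
            "objectType": "teacher",         # who did it (as a class of user)
            "description": ["biology"],      # free-text descriptors
        },
        "verb": {
            "action": "downloaded",          # e.g. viewed / downloaded / shared
            "measure": {"measureType": "count", "value": 42},
            "date": "2011-09-01/2011-09-30", # period the count covers
        },
        "object": "http://example.ac.uk/oer/cell-division-slides",
        "content": "42 downloads of the cell division slides in September",
    }
}

# Paradata travels as JSON, so serialise it before wrapping in an envelope:
paradata_json = json.dumps(usage)
```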

Mechanisms for LR deposit

Contributors have to create a signature (OpenPGP key pair) for themselves on the LR (anonymous contribution is not permitted). This is a relatively simple self-registration process and will let users interact with the LR test node. However, contributors should note that to contribute to the live LR they will need an agreement with a given live node which has opted to accept their signed data.

The LR uses JSON rather than XML and offers a number of approaches to publishing data.

Policy and License Issues

Please note the LR requests that any data you create or publish to the LR is clearly licensed. A Creative Commons Zero (CC0) or Attribution (CC BY) licence is a good option [recommended by the Learning Registry, JISC, and UK Discovery]. You should ensure that you have the rights to assign this licence if it is not already assigned, and that the data you publish conforms with appropriate data protection and privacy laws: whatever data you submit to the LR is likely to move between legal jurisdictions.

“The responsibility of acquiring books was the libraries and you might therefore think of extending the libraries role to…educational resources in general” p 21 (Nikoi, S. (2010) “Open Transferable Technology enabled Educational Resources (OTTER) project: Stakeholder Views on Open Educational Resources” Research Report, University of Leicester)

About a month ago I tweeted that CETIS and CAPLE had a visiting scholar working with us for this semester. I’d like to take this opportunity to introduce Gema more comprehensively, mention some of the work she’s doing while she’s with us, and plug our survey investigating the role of academic libraries in OER efforts (you’ll remember I ran a study in this area last year).

Gema Bueno de la Fuente is a visiting scholar from the Library and Information Science Department, University Carlos III of Madrid, Spain, where she works as an assistant professor teaching in several undergraduate and graduate programmes, e.g. the Master in e-Learning Production and Management. She received her PhD in Library Science in 2010 with the dissertation “An Institutional Repository of Educational Content (IREC) Model: management of digital teaching and learning resources in the university library”. Her main research interests are digital teaching and learning materials, open content, digital repositories, and e-learning systems, with a special focus on the library’s role in these areas, mainly related to metadata, vocabularies, and some specific standards.

While she’s at Strathclyde she’s working on two projects, one looking at OERs and Libraries and one looking at institutional practice in managing learning materials. She’s primarily working with myself and Stuart Boon (one of the lecturers in CAPLE). The survey Gema has created and introductory email are copied below:
“The Centre for Academic Practice & Learning Enhancement (CAPLE) and Centre for Educational Technology and Interoperability Standards (CETIS) at the University of Strathclyde are conducting a study about the involvement of the Library as an organizational unit, and of individual librarians and other information science specialists, in OER initiatives. OER (Open Educational Resources) are “digitised materials offered freely and openly for educators, students and self-learners to use and reuse for teaching, learning and research” (OECD, 2007).

This survey (http://www.surveygizmo.com/s3/669964/OER-Libraries-and-librarians) is open to any institution or initiative dealing with OER and/or open content for learning and teaching in a Higher Education context. This includes the creation and release of OER, its dissemination and promotion, the implementation of learning repositories or other management and publishing systems, the aggregation of open educational content, etc. Projects focused solely on open educational practice are outside the intended scope of this survey.

The survey should be answered by an individual OER initiative team member with an overview of current activity and the team composition and profiles. The survey instrument has 15 questions and the estimated time for completion is 15-20 minutes.

No personal data will be required, but you will be able to provide some basic information about your type of organization and OER initiative purpose and objectives if you wish. Participating organisations will be listed in the study report but responses are not connected to individual participants.

The results will be published in a report through JISC CETIS Open Educational Resources web page. If you want to receive a free PDF copy of the final report, please provide your email address at the end of the survey (your email will not be published or held beyond distribution of the survey results).

The survey is open for your feedback until Friday, November 4, 2011.

Thank you in advance for participating in this study, your contribution is very valuable to us.

An initial look at UKOER without the collections strand (C). This is a post in the UKOER 2 technical synthesis series.

[These posts should be regarded as drafts for comment until I remove this note]

In my earlier post in this series on the collections strand (C), I presented a graph of the technical choices made by just that part of the programme looking at the issue of gathering static and dynamic collections. As part of that process I realised that, although the collections strand reflects a key aspect of the programme, and part of the direction I hope future UKOER work will take, a consideration of the programme omitting the technical choices of Strand C might be of interest.

The graphs below are also the ones which compare most directly with the work of UKOER 1, which didn’t have a strand focused on aggregation.

Platform related choices in UKOER2 excluding the collections strand

Standards related choices in UKOER2 excluding the collections strand

I’m hesitant to over-analyse these graphs and think there’s a definite need to consider the programme as a whole, but I will admit that a few things about these graphs give me pause for thought.

WordPress as a platform vanishes

RSS and OAI-PMH see equal use

the proportional use of repositories increases a fair bit (when we consider that a number of the other platforms are being used in conjunction with a repository)

Reflections

Now, in a sense, the above graphs fit exactly with the observation at the end of UKOER 1 that projects used whatever tools they had readily available. However, compared to the earlier programme it feels like there are fewer outliers – the innovative and alternative technical approaches which projects used and which either struggled or shone.

Speculating on this: it might be because institutions are seeking to engage with OER release as part of their core business and so are using tools they already have; it might be that most of the technically innovative bids ended up opting for Strand C; or I could be underselling how much technical innovation is happening around core institutional technology (for example ALTO’s layering of a web CMS on top of a repository).

To be honest I can’t tell whether I think this trend towards stable technical choices is good or not. Embedded is good, but my worry is that there’s a certain inertia around institutional systems which are very focused on collecting content (or worse, just collecting metadata) and which may lose sight of why we’re all so interested in openly licensed resources (see Amber Thomas’ OER Turn and its comments for a much fuller discussion of why we fund content release and related issues; for reference, I think open content is good in itself but is only part of what the UKOER programmes have been about).

Notes:

The projects have been engaged in substantive innovative work in other areas; my comments are purely about technical approaches to managing and sharing OER.

When comparing these figures to the UKOER 1 graphs it’s important to remember the programmes had different numbers of projects and different foci; a direct comparison of the data would need more careful consideration than comparing the graphs I’ve published.

Working with CETIS is arguably among the most interesting jobs in ed tech. In the past 5 years I’ve worked with great people and interesting projects, I’ve constantly been exposed to stimuli which challenge, stretch, and (mostly) expand my knowledge and abilities. In the Repositories Research team, and more recently in CETIS support for the UK Open Educational Resource programmes I’ve had the opportunity to learn from, think about, share, and participate in a lot of great work.

UKOER 2 has just finished and things are rapidly coming together for UKOER 3 which looks like it is going to be awesome (and you know how little I use that word).

I am, however, moving on.

I’m not going anywhere quickly but all being well in about 3 months my paperwork will be done and some time between January and July I’ll be emigrating to the States.

No, I don’t know what I’m going to do next, but we decided that it’s time to give the US a go.

I hope to keep working in ed tech, open education, and repositories though I may consider moving back towards more digital library like stuff depending on what prospects come up. I’m going to start looking in Seattle (my wife’s hometown) and expand my search from there (advice welcome).

In the meantime it’s time to finish the tech synthesis of UKOER 2 and get ready for UKOER 3.

Are you wondering what to do with your OER next? Are you wondering how to keep the ball rolling in your institution and share some more educational resources openly? Are you looking for a tangible way to get your open content used? Or perhaps looking for a way to turn your OER into something a little more tangible for your CV?

Well, this might be your lucky day…

If your OER is transformable into a textbook (or is already a textbook) and is entirely licensable as CC BY content (either already CC BY or you’re the rights holder and are willing to licence it as such), the Saylor Foundation would like to hear from you. There’s a $20,000 award for any textbook they accept for their curriculum.

What technology is being used to aggregate open educational resources? What role can the subject community play in resources discovery? This is a post in the UKOER 2 technical synthesis series.

[These posts should be regarded as drafts for comment until I remove this note]

In the UKOER 2 programme Strand C funded “Projects identifying, collecting and promoting collections of OER and other material around a common theme” with the aim “…to investigate how thematic and subject area presentation of OER material can make resources more discoverable by those working in these areas” (UKOER 2 call document). The projects had to create what were termed static and dynamic collections of OER. The intent of the static collection was that it could in some way act as an identity, focus, or seed for the dynamic collection. Six projects were funded: CSAP OER, Oerbital, DelOREs, Triton, EALCFO, Open Fieldwork and a range of approaches and technologies was taken to making both static and dynamic collections. The projects are all worth reading about in more detail – however, in this context there are two possible general patterns worth considering.

Technology

Overview of technical choices in UKOER 2 Strand C

The above graph shows the range of technology used in the strand. Although a lot could (and should) be said about each project individually, when their choices are viewed in aggregate the following technologies see the widest use.

Graph of technologies and standards in use by 50% or more of Strand C projects

Although aspects of the call might have shaped the projects’ technical choices to some extent, a few things stand out:

the focus on RSS/Atom feeds and tools to manipulate them

reflection: this matches the approach taken by many of the other aggregators and discovery services for OER and other learning materials, as well as the built-in capabilities of a number of the platforms in use [nb “syndicated via RSS/Atom” was a programme requirement]

a relative lack of a use of OAI-PMH

reflection: is this indicative of how few content providers and aggregators in the learning materials space consume or output OAI-PMH?

substantial use or investigation of WordPress and custom databases (with PHP frontends)

reflection: are repositories irrelevant here because they don’t offer easy ways to add plugins or aggregate others’ content (or are there other factors which make WordPress and a custom database more appealing)?

Community

One of the critical issues for all of these projects in the creation of these collections has been the role of community; for some of the strand projects the subject community played a crucial role in developing the static collection which then fed, framed, or seeded the dynamic collection, for other projects the subject community formed the basis of contributing resources to the dynamic collection.

Although the projects had to be “closely aligned with relevant subject or thematic networks – for example Academy Subject Centres, professional bodies and national subject associations”, I find it striking that many of the projects made those defined communities an integral part of their discovery process and not just an audience or defining domain.

Reflections on community

I’m hoping someone else is able to explore the role of community in discovery services more fully (if not I’ll try to come back to this) but I’ve been struck by the model used by some projects in which a community platform is the hook leading to resource discovery. It’s the opposite end of the spectrum to Google – to support discovery you create a place and content accessible and relevant to a specific subject domain. The place you create both hosts new content created by a specific community and serves as a starting point to point to further resources elsewhere (whether those pointers are links, learning pathways, or tweaked plugin searches run on aggregators or repositories). This pattern mirrors any number of thriving community sites (typically?) outside of academia that happily coexist in Google’s world providing specialist sources of information and community portals (for example about knitting, cooking, boardgames).

What it doesn’t mirror is trying to entice academics to use a repository… [I like repositories and think they’re very useful for some things, but this, and the examples of layering CMSs on top of repositories, increasingly makes me think that on their own they aren’t a great point of engagement for anybody…]

What technical protocols are projects using to share their resources? And how are they planning on representing their resources in Jorum? This is a post in the UKOER 2 technical synthesis series.

[These posts should be regarded as drafts for comment until I remove this note]

Dissemination protocols

Dissemination protocols in use in the UKOER 2 programme

The chosen dissemination protocols are usually already built into the platforms in use by projects; adding or customising an RSS feed is possible but often intricate, and adding an OAI-PMH feed is likely to require substantial technical development. DelOREs investigated existing OAI-PMH plugins for WordPress but didn’t find anything usable within their project.

As will be discussed in more detail when considering Strand C, RSS is not only the most supported dissemination protocol; from the programme’s evidence, it is also the most used in building specialist discovery services for learning and teaching materials. The demand for an OAI-PMH interface for learning resources remains unknown. [debate!]

Jorum representation

Methods of uploading to Jorum chosen in UKOER 2 programme

The statistics on Jorum upload method denote expressions of intent – projects and Jorum are still working through these options.

Currently RSS upload to Jorum (along with all other forms of bulk upload) is set up to create a metadata record, not deposit content.

Three of the projects uploading via RSS are using the EdShare/EPrints platform (this platform was successfully configured to deposit metadata in bulk via RSS into Jorum in UKOER phase 1).

Jorum uses RSS ingest as a one-time process – as I understand it, it does not revisit the feed for changes or updates [TBC]

As far as I know PORSCHE are the only project who have arranged an OAI-PMH-based harvest; experimental OAI-PMH upload to Jorum is under investigation as part of an independent project. [Thanks to Nick Shepherd for the update on this HEFCE-funded work: see the comments; more information is available on the ACErep blog.]

How are projects tracking the use of their OER? What tools are projects using to work with their OER collections? This is a post in the UKOER 2 technical synthesis series.

[These posts should be regarded as drafts for comment until I remove this note]

Analytics

Analytics and tracking tools in use in the UKOER 2 programme

As part of their thinking around sustainability, it was suggested to projects that they consider how they would track and monitor the use of the open content they released.

Most projects have opted to rely on tracking functionality built into their chosen platform (where present). The tools listed in the graph above represent the content-tracking or web traffic analysis tools being used in addition to any built-in features of platforms.

AWStats, Webalizer, and Piwik are all in (trial) use by the TIGER project.

Tools

Tools used to work with OER and OER feeds in the UKOER 2 programme

These tools are being used by projects to work with collections of OER, typically by aggregating or processing RSS feeds or other sources of metadata about OER. Some of the tools are in use for indexing or mapping, others for filtering, and others to plug collections or search interfaces into a third-party platform. The tools are mostly in use in Strand C of the programme, but widgets, Yahoo Pipes, and Feed43 have a degree of wider use.
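To give a flavour of the kind of feed processing these tools do, here's a minimal Python sketch that parses an RSS feed and keeps only the items matching a subject keyword – roughly the sort of filtering projects used Yahoo Pipes or Feed43 for. The feed content is an invented example, and real feeds would of course be fetched over HTTP rather than embedded.

```python
# Minimal RSS filtering sketch: parse a feed and keep items whose title or
# description mentions a keyword. The sample feed below is invented.
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example OER feed</title>
  <item><title>Cell biology slides</title>
    <link>http://example.ac.uk/oer/1</link>
    <description>Open slides on cell division</description></item>
  <item><title>Intro to statistics</title>
    <link>http://example.ac.uk/oer/2</link>
    <description>Worked examples and datasets</description></item>
</channel></rss>"""

def filter_items(rss_xml, keyword):
    """Return (title, link) pairs for feed items matching the keyword."""
    keyword = keyword.lower()
    matches = []
    for item in ET.fromstring(rss_xml).iter("item"):
        title = item.findtext("title", "")
        desc = item.findtext("description", "")
        if keyword in title.lower() or keyword in desc.lower():
            matches.append((title, item.findtext("link", "")))
    return matches

print(filter_items(SAMPLE_RSS, "biology"))
# -> [('Cell biology slides', 'http://example.ac.uk/oer/1')]
```

A real aggregator would add scheduling, de-duplication across feeds, and re-publication of the filtered results as a new feed, but the core transformation is as simple as this.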

The listing in the above graph for widgets covers a number of technologies including some use of the W3C widget specification.
The Open Fieldwork project made extensive use of coordinate and mapping tools (more about this in a subsequent post).