
I’ve been struggling to find a way to mark the passing of UKOLN, or at least UKOLN as we knew it (I’m not sure whether the remaining rump is still called UKOLN; the website has not been updated with much if any information about the changes that occurred on 1 August, as of this writing). I enjoyed the tweet-sized memories yesterday under the #foreverukoln hashtag. The trouble is, any proper marking of UKOLN needs more than a tweet, more than a post, more even than a book. And any less proper marking risks leaving out people who should be thanked.

But, I can’t just leave it unmarked. So you’ll have to accept that this records just some of the things I’ve appreciated from UKOLN, and names just some of the many people from UKOLN who have helped and supported me. If you’re left out, please blame my memory and not any ill intent, but also note this doesn’t attempt to be comprehensive.

So here’s the first thing. I’ve found in my store of ancient documents the text of the draft brochure for the eLib Programme, written in 1995 or 1996 (some of you will remember its strange square format and over-busy blue logo). Right at the bottom it says:

Now (currently at least), if you click on that link it will still work, redirecting you to http://www.ukoln.ac.uk/services/elib/. There have been multiple changes to the UKOLN website over the years, and they have always maintained the working links. I don’t know most of the people who did this (though Andy Powell and Paul Walk both had something to do with it), but my heartfelt thanks to them. Those readers who work anywhere near Library management or Library systems teams: PLEASE demand that prior URIs continue to work, when getting your websites updated!

The first phase of the eLib programme had around 60 projects, many of them 3 year projects. As we moved towards the second and third phases, the numbers of projects dropped, and it was clear that the UK’s digital library movement was losing many people with hard-won experience in this new world. (In fact, we were mainly losing them to the academic Libraries, so it was not necessarily a Bad Thing.) I remember trying to persuade JISC that we needed a few organisations with greater continuity, so we wouldn’t always have new project staff trying to learn everything from the ground up. Whether they listened or not, over the years UKOLN provided much of that continuity.

Another backroom group has also been hugely important to me. Over the 15 years I was working with them, UKOLN staff organised countless workshops and conferences for eLib, for JISC and for the DCC. These staff were a little better publicly known, as they staffed the welcome desks and communicated personally with many delegates. They were always professional, courteous, charming, and beyond helpful. I don’t remember all the names; I thank them all, but remember Hazel Gott from earlier days, and Natasha Bishop and Bridget Robinson in more recent times.

A smaller group with much higher visibility would be the Directors of UKOLN. Lorcan Dempsey was an inspired appointment as Director, and his thoughtful analyses did much to establish UKOLN as a force to be reckoned with. I’d never met anyone who read authors like Manuel Castells for fun. I was a simple-minded, naïve engineer, and being in 4-way conversations with Lorcan, Dan Greenstein of the AHDS, and John Kelleher of the Tavistock Institute, larded with long words and concepts from Social Science and Library Science, sometimes made my brain hurt! But it was always stimulating.

When Lorcan moved on, the role was taken by Liz Lyon, whom I had first met as project coordinator of the PATRON project at the University of Surrey. A very different person, she continued the tradition of thoughtful analyses, and promoted UKOLN and later the DCC tirelessly with her hectic globetrotting presentations. She was always a great supporter of and contributor to the DCC, and I have a lot to thank her for.

One of the interesting aspects of UKOLN was the idea of a “focus” person. Brian Kelly made a huge impact as UK Web Focus until just yesterday, and though our paths didn’t cross that often, I always enjoyed a chat over a pint somewhere with Brian. Paul Miller, if I remember right, was Interoperability Focus (something to do with Z39.50?), before moving on to become yet another high-flying industry guru and consultant!

That reminds me that one of my favourite eLib projects was MODELS (MOving to Distributed Environments for Library Services, we were big on acronyms!), which was project managed by Rosemary Russell, comprising a series of around 11 workshops. The second MODELS workshop was also the second Dublin Core workshop, so you can see it was at the heart of things. Sadly at the next workshop I coined the neologism “clumps” for groups of distributed catalogues, and nobody stopped me! We chased around a Z39.50 rabbit hole for a few years, which was a shame, but probably a necessary trial. Later workshops looked at ideas like the Distributed National Electronic Resource, information architectures, integrated environments for learning and teaching, hybrid environments, rights management and terminologies. And the last workshop was in 2000! Always huge fun, the workshops were often chaired by Richard Heseltine from Hull, who had a great knack for summarising where we’d got to (and who I think was involved directly in UKOLN oversight in some way).

Rachel Heery also joined UKOLN to work on an eLib project, ROADS, looking at resource discovery. She had a huge impact on UKOLN and on many different areas of digital libraries before illness led to her retirement in 2007 and sadly her death in 2009. The UKOLN tribute to her is moving.

UKOLN did most of the groundwork on eLib PR in the early days, and John Kirriemuir was taken on as Information Officer. I particularly remember that he refused to use the first publicity mugshot I sent; he told me over the phone that when it opened on his PC someone in the office screamed, and they decided it would frighten small children! I think John was responsible for most of the still-working eLib website (set up in 1995, nota bene Jeff Rothenberg!).

Ariadne has become strongly identified with UKOLN, but was originally suggested by John MacColl, then at Abertay, Dundee and now St Andrews, and jointly proposed by John and Lorcan as a print/electronic parallel publication. John Kirriemuir worked on the electronic version in the early days, I believe, later followed by Philip Hunter and Richard Waller, both of whom also worked on IJDC (as also did Bridget Robinson). Ariadne is a major success; I am sure there are many more who worked on making her so, and my thanks and congratulations to all of them.

Most recently I interacted with UKOLN mostly in terms of the DCC. As well as Liz and those working on IJDC, Alex Ball, Michael Day, Manjula Patel and Maureen Pennock made major contributions, and wrote many useful DCC papers.

Last but by no means least, we tend to forget to thank the office staff behind the scenes. I don’t remember most names, my sincere apologies, but you were always so helpful to me and to others, you definitely deserve my thanks.

… and to so many more UKOLN staff over the years, some of whom I should have remembered and acknowledged, and some of whom I didn’t really know: thanks to you from all of us!

I’ve spent the last few months looking at the JISC data management planning projects. It’s been very interesting. Data management planning for research is still comparatively immature, and so are the tools that are available to support it. The research community needs more and better tools at a number of levels. Here are my thoughts… what do you think?

At group or institution level, we need better “maturity assessment” tools. This refers to tools like:

Some of the existing tools seem rather ad hoc, as if they had developed from casual beginnings unrelated to the scale of the tasks now facing researchers and institutions. It is perhaps now time for a tool assessment process involving some of the stakeholders, to help map the landscape of potential tools, and to use this to plot the development (or replacement) of existing tools.

For example, CARDIO and DAF, I’m told, are really tools aimed at people acting in the role of consultants, helping to support a group or institutional assessment process. It might be helpful if they could be adjusted to be more self-assessment-oriented. The DAF resource also really needs to be brought up to date and made internally consistent in its terminology.

Perhaps the greatest lack here is a group-oriented research data risk-assessment tool. This could be as simple as a guide-book and a set of spreadsheets. But going through a risk assessment process is a great way to start focusing on the real problems, the issues that could really hurt your data and potentially kill your research, or those that could really help your research and your group’s reputation.

We also need better DMP-writing tools, i.e. better versions of DMPonline or DMP Tool. The DCC recognises that DMPonline needs enhancement, and has written in outline about what they want to do, all of which sounds admirable. My only slight concern is that the current approach, with templates for funders, disciplines and institutions to reflect all the different nuances, requirements and advice, sounds like a combinatorial explosion (though I may have misunderstood this). It is possible that the DMP Tool approach might reduce this combinatorial explosion, or at least parcel elements of it out to the institutions, making it more manageable.

The other key thing about these tools is that they need better support. This means more resources for development and maintenance. That might mean more money, or it might mean building a better Open Source partnership arrangement. DMPonline does get some codebase contributions already, but the impression is that the DMP Tool partnership model has greater potential to be sustainable in the absence of external funding, which must eventually be the situation for these tools.

It is worth emphasising that this is nevertheless a pretty powerful set of tools, and potentially very valuable to researchers planning their projects and institutions, departments etc trying to establish the necessary infrastructure.

“PDF is almost a de facto standard when it comes to exchanging documents. One of the best things is that always, on each machine, the page numbers stay the same, so it can be easily cited in academic publications etc.

But de facto standard is also opening PDFs with Acrobat Reader. So the single company is making it all functioning fluently.

However, thinking in longer perspective, say 50 years, is it a good idea to store documents as PDFs? Is the PDF format documented good enough to ensure that after 50 years it will be relatively easy to write software that will read such documents, taking into account that PDF may be then completely deprecated and no longer supported?”

I tried to respond, but fell foul of Stack Exchange’s login/password rules, which mean I’ve created a password I can’t remember. And I was grumpy because our boiler isn’t working AFTER it’s just been serviced (yesterday, too), so I was (and am) cold. Anyway, I’ve tried answering on SE before and had trouble, and I thought I needed a bit more space to respond. My short answer was going to be:

“There are many many PDF readers available implemented independently of Adobe. There are so many documents around in PDF, accessed so frequently, that the software is under constant development, and there is NO realistic probability that PDF will be unreadable in 50 years, unless there is a complete catastrophe (in which case, PDF is the least of your worries). This is not to say that all PDF documents will render exactly as now.”

Let’s backtrack. Conscious preservation of artefacts of any kind is about managing risk. So to answer the question about whether a particular preservation tactic (in this case using PDF as an encoding format for information) is appropriate for a 50-year preservation timescale, you MUST think about risks.

Frankly, most of the risks for any arbitrary document (a container for an intellectual creation) have little to do with the format. Risks independent of format include:

whether the intellectual creation is captured at all in document form,

whether the document itself survives long enough and is regarded as valuable enough to enter any system that intends to preserve it,

whether such a system itself can be sustained over 50 years (the economic risks here being high),

not to mention whether in 50 years we will still have anything like current computer and internet systems, or electricity, or even any kind of civilisation!

So, if we are thinking about the risks to a document based on its format, we are only thinking about a small part of the total risk picture. What might format-based risks be?

whether the development of the format generally allows backwards compatibility

whether the format is widely used

whether tools to access the format are closed and licensed

whether tools to access the format are linked to particular computer systems environments

whether various independent tools exist

how good independent tools are at creating, processing or rendering the format

and no doubt others. By the way, the impacts of these risks all differ; you have to think about them for each case.

So let’s see how PDF does… no, hang on. There are several families within PDF. There’s the “bog-standard” PDF. There’s PDF/A up to v2. There’s PDF/A v3. There are a couple of other variants, including one for technical engineering documents. Let’s just think about “bog-standard” PDF: Adobe PDF 1.7, technically equivalent to ISO 32000-1:2008:

the format was proprietary but openly documented; it is now an open standard

it is the subject of an ISO standard, out of the control of Adobe (this might have its own risks, including the lack of openness of ISO standards, and the future development of the standard)

it allows, but does not require DRM

it allows, but does not require the inclusion of other formats

PDF is very complex and allows the creation of documents in many different ways, not all of which are useful for all future purposes (for example, the characters in a text can be in completely arbitrary order, placed by location on the page rather than textual sequence)

PDF has generally had pretty good backwards compatibility

the format is extremely widely used, with many billions of documents worldwide, and no sign of usage dropping (so there will be continuing operational pressure for PDF to remain accessible)

many PDF creating and reading tools are available from multiple independent tool creators; some tools are open source (so you are not likely to have to write such tools)

PDF tools exist on almost all computer systems in wide use today

some independent PDF tools have problems with some aspects of PDF documents, so rendering may not be completely accurate (it’s also possible that some Adobe tools will have problems with PDFs created by independent tools). Your mileage may vary.

So, the net effect of all of that, it seems to me, is that provided you steer clear of a few obvious hurdles (particularly DRM), it is reasonable to assume that PDF is perfectly fine for preserving most documents for 50 years or so.
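As a tiny illustration of how a format-based risk check can start, here is a minimal sketch (the function name and logic are my own, not from any preservation tool) that reads the version declared in a PDF’s header. Every well-formed PDF opens with a comment line such as `%PDF-1.7`, which is a cheap first-pass signal when auditing a collection:

```python
def pdf_header_version(data):
    """Return the version declared in a PDF header (e.g. '1.7'), or None.

    Note this is only the *declared* version: a file labelled 1.7 may
    still use features from later extensions, so treat the result as a
    first-pass signal, not a full conformance check.
    """
    if not data.startswith(b"%PDF-"):
        return None
    # The version runs from just after '%PDF-' to the first whitespace.
    tokens = data[5:13].split()
    if not tokens:
        return None
    try:
        return tokens[0].decode("ascii")
    except UnicodeDecodeError:
        return None
```

In practice you would call this on the first few bytes of each file (`pdf_header_version(open(path, "rb").read(16))`) and flag anything that returns `None` for closer inspection.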

A month or so ago I got an email from the Open Rights Group, asking me to write to a minister supporting the idea of retaining the Postcode database as Royal Mail is privatised, and making it Open. The suggested text was as follows:

“Dear [Minister of State for Business and Enterprise]
“We live in an age where location services underpin a great chunk of the economy, public service delivery and reach intimate aspects of our lives through the rise of smartphones and in-car GPS. Every trip from A to B starts and ends in a postcode.
“In this context, a national database of addresses is both a critical national asset and a natural monopoly, which should not be commercially exploited by a single entity. Instead, the Postcode Address File should be made available for free reuse as part of our national public infrastructure. The postcode is now an essential part of daily life for many purposes. Open availability would create re-use and mashup opportunities with an economic value far in excess of what can be realised from a restrictive licence.
“I am writing to you as the minister responsible to ask for a public commitment to:
“1) Keep the Postcode Address File (PAF) under public ownership in the event of the Royal Mail being privatised.
“2) Release the PAF as part of a free and open National Address Dataset.”

A few days ago I got a response. I think it must be from a person, as the writer managed to mis-spell my name (not likely to endear him or her to me!).

“Dear Mr Rushbridge,

“Thank you for your email of 6 February to the Minister for Business and Enterprise, Michael Fallon MP, regarding the Postcode Address File (PAF).

“I trust you will understand that the Minister receives large amounts of correspondence every day and regretfully is unable to reply to each one personally. I have been asked to reply.

“The Government’s primary objective in relation to Royal Mail is to secure a sustainable universal postal service. The postcode was developed by Royal Mail in order to aid delivery of the post and is integral to Royal Mail’s nationwide operations. However, we recognise that postcode data has now become an important component of many other applications, for example sat-navs.

“In light of PAF’s importance to other users, there is legislation in place to ensure that PAF must be made available to anyone who wishes to use it on terms that are reasonable. This allows Royal Mail to charge an appropriate fee whilst also ensuring that other users have access to the data. The requirement is set out in the Postal Services Act 2000 (as amended by the Postal Services Act 2011) and will apply regardless of who owns Royal Mail. It is this regulatory regime, and not ownership of Royal Mail, that will ensure that PAF continues to be made available on reasonable terms. Furthermore, Ofcom, the independent Regulator, has the power to direct Royal Mail as to what ‘reasonable’ terms are. Ofcom are currently consulting on the issue of PAF regulation and more information can be found on their website at: http://www.ofcom.org.uk.

“On the question of a National Address Register, the UK already has one of the most comprehensive addressing data-sets in the world in the form of the National Address Gazetteer (NAG). The NAG brings together addressing and location data from Ordnance Survey, Local Authorities and Royal Mail; the Government is committed to its continuation as the UK’s definitive addressing register.

“The Government is similarly committed to ensuring that the NAG is used to its full benefit by both public and private sector users, and keeps pricing and licensing arrangements under review with the data owners. Alongside our commitment to the NAG, the Government is continuing to consider the feasibility of a national address register.

“I trust you will find this information helpful in explaining the position on this subject.

“Yours sincerely,

“BIS MINISTERIAL CORRESPONDENCE UNIT”

So, that’ll be a “No” then. But wait! Maybe there’s a free/open option? No such luck! From Royal Mail’s website, it looks like £4,000 for unlimited use of the entire PAF (for a year?), or £1 per 100 clicks. You can’t build an open mashup on that basis. Plus there’s a bunch of licences to work out and sign.

What about the wonderful National Address Gazetteer? It’s a bit hard to find out, as there seem to be multiple suppliers, mainly private sector. Ordnance Survey offers AddressBase via their GeoPlace partnership, which appears [pdf] to cost £129,950 per year plus £0.008 per address for the first 5 million addresses! So that’s not exactly an Open alternative, either!

Now I’m all for Royal Mail being sustainable. But overall, I wonder how much better off the whole economy would be with an Open PAF than with a closed one?

Terminology in this area is confusing, and is used differently in different projects. For the purposes of a report I’m writing, unless otherwise specified, we will use terminology in the following way:

Data management is the handling and care of data (in our case research data) throughout its lifecycle. Data management will thus potentially involve several different actors.

Data management plans refer to formal or informal documents describing the processes and technologies to be deployed in data management, usually for a research project.

Data deposit refers to placing the data in a safe location, normally distinct from the environment of first use, where it has greater chance of persisting, and can be accessed for re-use (sometimes under conditions). Often referred to as data archiving.

Data re-use refers to use made of existing data either by its creators, or by others. If re-use is by the data creators, the implication is that the purpose or context has changed.

Data sharing is the process of making data available for re-use by others, either by data deposit, or on a peer to peer basis.

Data sharing plans refer to the processes and technologies to be used by the project to support data sharing.

Some JISCMRD projects made a finer distinction between data re-use and data re-purposing. I couldn’t quite get that. So I’m balancing on the edge of an upturned Occam’s Razor and choosing the simpler option!

David duChemin, a Humanitarian Photographer from Vancouver, wrote a blog post [duC13] at the start of 2013 (in the “New Year Resolution” season) entitled “Planning is just guessing. But with more pie charts and stuff”. He writes:

“Planning is good. Don’t get me wrong. It serves us well when we need a starting point and a string of what ifs. I’m great at planning. Notebooks full of lists and drawings and little check-boxes, and the only thing worse than planning too much is not planning at all. It’s foolish not to do your due-diligence and think things through. Here’s the point it’s taken me 4 paragraphs to get to: you can only plan for what you’ll do, not for what life will do to you.”

OK he doesn’t really think planning is just guessing; in the post he’s stressing the need for flexibility, but also pointing out that planning (however flawed) is better than not planning.

That blog post is part of what inspired me to write this. Another part is a piece of work that I’m doing that seems to have gone on forever. It seems like a good idea to put this up and see what comments I get that might be helpful.

Planning to manage the data for your research project is not the same thing as filling in a Checklist, or running DMP Online. The planning is about the thinking processes, not about answering the questions. The short summary of what follows is that planning your research data management is really an integral part of planning your research project.

So when planning your research data management, what must you do?

First, find out what data relevant to your planned research exists. You traditionally have to do a literature search; just make sure you do a data search as well. You need to ensure you’re aware of all relevant data resources that you and your colleagues have locally, and data resources that exist elsewhere. Some of these will be tangentially referenced in the literature you’ve reviewed. So the next step is to work out how you can get access to this data and use it if appropriate. It doesn’t have to be open; you can write to authors and data creators requesting permission (offering a citation in return). Several key journals have policies requiring data to be made available, if you need to back up your request.

The next step, clearly, is to determine what data you need to create: what experiments to run, what models, what interviews, what sources to transcribe. This is the exciting bit, the research you want to do. But it should be informed by what exists.

Now before planning how you are actually going to manage this data, you need to understand the policies and rules under which you must operate, and (perhaps even more important) the services and support that are available to you. Hidden in the policies and rules will be requirements for your data management (data security, privacy, backup, continued availability, etc). Hidden in the services and support will be some that will be very useful to you, and will save you time and diverted resources (institutional backup services, institutional data repositories, etc). As suggested above, these services and support could come from your group, your institution, your discipline, your scientific society, or your invisible college of colleagues around the world.

So now you can plan to manage your data. You may need to address many issues:

Identification, provenance and version control: how to connect associated datasets with the experimental events and sources from which they derived, and the conditions and circumstances associated.

Storage: how and where to store the data, so that you and your colleagues (who may be in other institutions and/or other countries with different data protection regimes) can work on it conveniently but securely. Issues like data size, rate of data creation, rate of data update may all be relevant here. Data backup! Encryption for sensitive data taken off-site. Access control. Annotation. Documentation.

Processing: how will you analyse and process your data, and how will you store the results. Back to provenance and version control!

Sharing: How to make data available to others, and under what conditions. Where will you deposit it? With what associated information to make it usable? Depends on the data of course, and issues such as data sensitivity. May also depend on data size etc. Which data to share? Which data to report?

That’s not everything but it’s the core. When you’ve done the basic planning at this sort of level, you can get down to writing the Plan! At this point the specific requirements of research funder and institution will come into play, and tools like DCC DMP Online will be useful. They may even remind you of key issues you had forgotten or ignored, or local services you (still) didn’t know about.

At this point you don’t know whether your research will be funded, so there is a limit to the amount of effort you should put into this. NERC wants a very much simplified one-page outline data management plan; it may be more sensible to have a 2 or 3-page plan covering the stuff above, and condense down (or up) as required by your funder.

But you’re still only at the first stage of your research data management planning! If you are lucky enough to get your project funded, there will be a project initiation phase, when you gather the resources (budget, staff, equipment, space). Effectively you’re going to build the systems and establish the protocols that will deliver your research project. At this point you should refine your plan, and add detail to some elements you were able to leave rather vague before. Now you’re moving from good intentions to practical realities. And given that life does throw unexpected events at you (staff leaving, IT systems failing, new regulations coming in), you may need to do this re-planning more than once. Keep them all! They are Records that could be useful to you in the future. In a near-worst case, they could form part of your defence against accusations of research malpractice!

My point is, this isn’t so much good research data management planning, as good planning for your research.

So what’s the Digital Object Identifier for, really? I thought it was a permanent identifier so that we could link from one article to the articles it references in a pretty seamless fashion. OK, not totally seamlessly, since a DOI is not a URI, but all we have to do is stick http://dx.doi.org/ on the front of a DOI, and we’re there. So we should end up with an almost seamless worldwide web of knowledge (not Web of Knowledge™, that’s someone’s proprietary business product).
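That prefixing step is trivial to mechanise; a sketch, with a function name of my own invention:

```python
def doi_to_url(doi):
    """Turn a bare DOI into a resolvable link by prefixing the
    dx.doi.org resolver, as described above."""
    return "http://dx.doi.org/" + doi.strip()
```

For example, `doi_to_url("10.1257/mic.2.2.222")` gives `http://dx.doi.org/10.1257/mic.2.2.222`. (These days the preferred resolver prefix is https://doi.org/, but either will get you there.)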

Obviously the Publishers must play a large part in making this happen. They support the DOI system through their membership of Crossref, and supplying the metadata to make it work. And sometimes they remember that when they transfer a journal from one publisher or location to another, they can fix the resulting mess simply by changing the redirect inside the DOI system. (And sometimes they forget, but that’s another story.)

And of course, these big, toll-access, subscription-based Publishers trumpet all the Added Value that their publishing processes put onto the articles that we write and give to them (and referee for them, and persuade our libraries to buy for them, and…). So obviously that Added Value will extend to ensuring that all references have DOIs where available? A pretty simple thing to add in the copy-editing stage, I would have thought.

Except that they don’t. They display few if any DOIs in their reference lists of “their” articles. In fact my limited, non-scientific evidence-collecting suggests to me that they probably do the opposite to Adding Value: remove DOIs from manuscripts submitted to them. OK, I have no direct evidence of the removal claim, but I reckon there is pretty good circumstantial evidence.

I don’t have a substantial base of articles to work from (not being affiliated with a big library any more), but I’ve had a scan of the reference sections of several recent articles from a selection of publishers. What do I see?

Yes, there’s a DOI in the reference I used. Mendeley picked that DOI up automatically from the paper. If I use that paper in a reference, the DOI will be included by Mendeley. This presumably also happens with EndNote and other reference managers. (Here’s me inserting a citation for (Shotton, Portwin, Klyne, & Miles, 2009) from EndNote… yes, there it is, down the bottom with a big fat DOI in it.) (This is part of my circumstantial evidence for Value Reduction by Publishers! We give them DOIs, they take them away.)
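Picking DOIs out of reference text mechanically, as Mendeley and EndNote evidently do, needs nothing more exotic than a regular expression. Here is an illustrative sketch (the pattern is a common heuristic, not Crossref’s official grammar, and the helper name is mine):

```python
import re

# Heuristic DOI pattern: the '10.' directory indicator, a 4-9 digit
# registrant code, a '/', then the registrant-assigned suffix.
DOI_PATTERN = re.compile(r'\b10\.\d{4,9}/[^\s"<>]+')

def find_dois(text):
    """Return DOI-like strings found in a block of reference text,
    with common trailing punctuation stripped."""
    return [m.rstrip(".,;)") for m in DOI_PATTERN.findall(text)]
```

So a publisher’s copy-editing pipeline could flag any reference for which `find_dois` returns nothing, rather than silently dropping the identifiers authors supply.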

Anyway, looking at that Nature editorial, there are no DOIs in the reference list. Reference 7 is:

Maybe that article (and all of the others) doesn’t have a DOI? Same trick with Google, we don’t get there straight away, we get to another search, for articles with the word “perspective” in that journal… which does get us to the right place. And yes, the article does have a DOI (10.1257/mic.2.2.222). Let’s try this article; surely Nucleic Acids Research is one of the good guys?

Do the latest OA publishers do any better? Sadly, IJDC appears not to show DOIs in references. I couldn’t see any in references in the most recent PLoS ONE article I looked at (Grieneisen and Zhang, 2012). Nor Carroll (2011) in PLoS Biology. But yes, definitely some DOIs in references in Lister, Datta et al (2010) in PLoS Computational Biology.

What about the newest kid on the block? You know, the cheap publisher who’s going to lead to the downfall of the scholarly world as we know it? Yes! The wonderful article by Taylor and Wedel (2013) in PeerJ has references liberally bestowed with DOIs!

When I tweeted my outrage about this situation, someone suggested it’s just the publishers simply following the style guides. WTF?

Publishers! You want us to believe you are adding value to our articles? Then use the Digital Object Identifier system. Keep the DOIs we give you, and add the DOIs we don’t!

PS At one stage in preparing for this post I tried copying reference lists from PDFs and pasting them into Word. You should try it some time. It’s an absolute disaster, in many cases! Which is NOT the fault of PDF, it is the fault of the system used to create the PDF… ie the Publisher’s system. Added Value again?

EDIT: As the comments below suggest, my post is generally true insofar as PDF versions of articles are concerned, although even there some publishers (e.g. BioMedCentral) do incorporate a hidden clickable link behind the reference (in BMC’s case to PubMed rather than the DOI). Several publishers have MUCH better behaviours in their HTML versions, with both explicitly visible DOIs and clickable versions of references. Sadly, HTML has no agreed container format, and is next to useless for storing articles for later reference, so it is most likely that the articles you store and use on your computer will be the sort of stunted PDFs I describe here. I still claim: this is not good enough.