I’ve learned a lot during the process, including how to add .srt captions to videos, organise and manage working groups across an 18-hour time difference, and wrangle metadata in Figshare.

Many people were involved in bringing all this together: members of the Meerkat, Giraffe, and Eagle working groups, ORCID Ambassadors, colleagues in the Community Team and Development Team at ORCID, film stars of the future who participated in the WhyORCID? video, people who worked on translations, and those who are now spreading the word about these new materials across the world…

To mark the occasion and thank everyone who contributed, I hosted two (for different time zones) virtual launch parties, and here is the order of service:

Education & outreach launch party menu

Thanks to my colleague Gabi for the artwork and the inspired drinks list 🙂 I’m off to enjoy something suitably alcoholic before starting work on phase two of this project tomorrow…

After all this time telling other people about the benefits of ORCID, I was very pleased to be able to interact with several integrations this week! I agreed to do some peer review for a journal and was able to use my ORCID credentials at several stages in the process:

1. Logging in to ScholarOne Manuscripts via ORCID is a breeze (I would otherwise have had to create a new account and manually enter data into a form)

2. Authorising Wiley (ScholarOne Manuscripts) to read and update my record means reduced data entry for me, and the information they push to update my record is validated, not simply self-asserted


3. Authorising Publons to get my ORCID iD


4. When the peer review process is complete and the article is published, this will show up on my ORCID record under the Peer Review section. And all I had to do was grant permission for this to happen…
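Under the hood, each of these permission-granting steps is an OAuth flow: the integration sends you to ORCID’s authorisation endpoint, you sign in and approve the requested scope, and ORCID hands back a code the site exchanges for a token. As a minimal sketch (the client ID and redirect URI below are placeholders, not real credentials; the endpoint and scope names are the ones ORCID documents):

```python
from urllib.parse import urlencode

# ORCID's documented OAuth authorisation endpoint; the scope controls what the
# integration may do, e.g. "/authenticate" (get the iD), "/read-limited"
# (read the record), "/activities/update" (push validated data to the record).
AUTHORIZE_ENDPOINT = "https://orcid.org/oauth/authorize"

def orcid_authorize_url(client_id: str, redirect_uri: str,
                        scope: str = "/authenticate") -> str:
    """Build the URL an integration sends a user to when asking for permission."""
    params = {
        "client_id": client_id,        # issued to the member integration by ORCID
        "response_type": "code",       # standard OAuth authorisation-code flow
        "scope": scope,
        "redirect_uri": redirect_uri,  # where ORCID sends the user afterwards
    }
    return AUTHORIZE_ENDPOINT + "?" + urlencode(params)

# Hypothetical example: an integration asking to read and update a record
print(orcid_authorize_url("APP-EXAMPLE123",
                          "https://example.org/orcid/callback",
                          scope="/read-limited /activities/update"))
```

Granting permission once is all the manual work involved; everything after that happens machine-to-machine with the token.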

On 26th September, I attended Crossref LIVE in London, one of their series of local outreach meetings. These are aimed at “the whole community, welcoming publishers, librarians, researchers, funders, technology providers, and members alike to share their thoughts and to find out more about who we are and how to get the most out of Crossref”.

Organization Identifier Working Group – we’ve got persistent identifiers for objects (DOIs) and for people (ORCID iDs), and this is the next piece of the puzzle. Organisation identifiers are a different type of problem from person identifiers – organisations can merge, split, go bankrupt and reinvent themselves, and their legal entity names often differ from their common names (e.g. “Harvard University”, formerly “Harvard College”, formally “The President and Fellows of Harvard College”) – so the data model is harder.

Outputs from this project will be open. The organisational iD dataset will contain around 75,000 iDs – much smaller than the datasets Crossref, ORCID, and DataCite are used to! The most likely use case is providing a controlled vocabulary for organisation names (institution lookup).

Simple comparison widget – shows importance of including country name

No hierarchy info, so a departmental entry can’t be linked to the top-level entry for an organisation. The widget can compare affiliation matching – search across grid.ac, Open ISNI, Wikidata and others and see results from each. It even works for collegiate universities, e.g. “Jesus College” will find matches for both Oxford and Cambridge.

Why didn’t this project use Ringgold as a starting point? Ringgold is proprietary and didn’t give Crossref the whole dataset, but there is some representation of it in Open ISNI.

Many publishers didn’t know what metadata they were giving Crossref, as they had outsourced that responsibility. Crossref’s participation reports show how much is being contributed in terms of funding information, award numbers, license URLs, reference lists, full-text URLs, ORCID iDs, and Crossmark data. Publishers are encouraged to engage with data sharing.

Metadata 2020 is a collaboration that advocates for richer, connected, reusable, and open metadata for all research outputs, which will advance scholarly pursuits for the benefit of society.

Richer: Richer metadata fuels discoverability and innovation.

Connected: Connected metadata bridges the gaps between systems and communities.

Have you ever followed a link only to find a 404 error instead of the page you wanted? This is called link rot. Where content “lives” on the internet can be unstable for a number of reasons, such as removal of content, website restructuring, and changes to domain names (hello SAGE Journals and Oxford Academic (OUP) who both have migrations in progress at the moment). Alongside link rot, trust and authority control (establishment and maintenance of consistent forms of terms) can be difficult to establish on the web.

A persistent identifier (PI or PID) is a long-lasting reference to a document, file, web page, or other object. Using PIDs helps to combat problems of link rot and authority control. There are different sorts of PIDs depending on the type of entity being referred to:

A digital object identifier (DOI) (such as those registered with Crossref) is used to uniquely identify objects, particularly electronic documents such as journal articles. Loyal readers of this blog may remember Crossref as the sponsor of 23 Things Oxford back in 2010 🙂

Bibliographic identifiers such as ISBN and ISSN have been in use since the 1970s. As an ISBN is to a book, a DOI is to an article, and an ORCID iD is to a person.
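The family resemblance goes beyond the analogy: like an ISBN, these identifiers have a deterministic structure, including a check digit that catches typos at data entry. As a toy illustration of the standard ISBN-13 algorithm (not anyone’s production code):

```python
def isbn13_check_digit(first12: str) -> str:
    """Compute the ISBN-13 check digit: digits weighted alternately 1 and 3."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(first12))
    return str((10 - total % 10) % 10)

# The ISBN 978-0-306-40615-7 really does end in 7:
print(isbn13_check_digit("978030640615"))  # → 7
```

A single mistyped digit changes the weighted sum, so the check digit no longer matches and the typo is caught before the record is saved.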

ORCID iDs identifying researchers and adding Linked Data goodness

Your ORCID iD connects with your ORCID Record that can contain links to your research activities, affiliations, awards, other versions of your name, and more. You control this content and who can see it. Sign up for an ORCID iD today!
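An ORCID iD is itself a structured identifier: 16 characters, the last of which is a checksum (ISO/IEC 7064 MOD 11-2, per ORCID’s own documentation), which is why some iDs end in an “X”. A small sketch of verifying that final character:

```python
def orcid_checksum_char(base15: str) -> str:
    """ISO 7064 MOD 11-2 check character over the first 15 digits of an ORCID iD."""
    total = 0
    for digit in base15:
        total = (total + int(digit)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    """Check the hyphenated 16-character form, e.g. 0000-0002-1825-0097."""
    digits = orcid.replace("-", "")
    if len(digits) != 16 or not digits[:15].isdigit():
        return False
    return orcid_checksum_char(digits[:15]) == digits[15]

# ORCID's well-known example iD passes the check:
print(is_valid_orcid("0000-0002-1825-0097"))  # → True
```

The example iD above is the one ORCID uses in its own documentation; everything else in the sketch is illustrative.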

1. The Systematic Review – is the social sciences librarian involved? If not, why not?

Alan spoke of his experience of working with academics doing systematic reviews to inform national policy. He found that the academics only searched one database (Medline) and did not use synonyms, broader/narrower keywords, or related terms when searching. He and a colleague wrote a paper about this, to try to find out why the academics’ research skills were so poor.

His paper identified weaknesses in the systematic review process, e.g. academics ignoring all grey literature on the grounds that it wasn’t peer-reviewed.

Home Office guidelines for systematic review focus on synthesis of findings, not search strategies. Alan’s work shows that key UK information is being systematically excluded in favour of information from the big-name US databases.

The University of Reading has developed a re-purposeable resources toolkit – the “Academic integrity toolkit” – aimed at academics. It’s meant to be bite-sized and incorporated into teaching, not just given out to students for them to read (/ignore). They are considering publishing it as an Open Educational Resource; for now, guest access to their Blackboard can be arranged. Contact details here.

Results of research

Crucial to go beyond formatting and show role of correct referencing in academic writing

Many students failed to engage with skills training

Students report lack of consistency and difficulty in finding guidance

4. Identifiers for Researchers and Data: Increasing Attribution and Discovery

Identifiers such as DOIs uniquely identify research objects. DOIs are assigned by both DataCite and Crossref – I think the difference is that DataCite mints DOIs for things that aren’t articles (datasets and other research objects), whereas Crossref assigns DOIs for articles. ARK = Archival Resource Key, a URL scheme for creating persistent identifiers.
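Both schemes are “actionable” by prepending a resolver: DOIs resolve via doi.org, and ARKs via the n2t.net resolver. A tiny illustrative helper (the identifier values below are just examples):

```python
def resolver_url(identifier: str) -> str:
    """Turn a bare persistent identifier into a resolvable URL."""
    if identifier.startswith("10."):      # all DOIs begin with the "10." prefix
        return "https://doi.org/" + identifier
    if identifier.startswith("ark:"):     # ARKs resolve through the n2t.net resolver
        return "https://n2t.net/" + identifier
    raise ValueError(f"unrecognised identifier scheme: {identifier!r}")

print(resolver_url("10.1000/182"))            # the DOI of the DOI Handbook itself
print(resolver_url("ark:/13030/tf5p30086k"))  # an example ARK
```

Because the resolver redirects to wherever the object currently lives, the link in a citation stays stable even when the content moves – which is exactly the link-rot problem PIDs were designed to solve.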