The Open Economics Working Group is inviting PhD students and academics with relevant experience and research focus to participate in the first Open Economics Workshop, which will take place on December 17-18, 2012 in Cambridge, UK.

The aim of the workshop is to build an understanding of the value of open data and open tools for the Economics profession, the obstacles to opening up information, and the role of greater openness in broadening understanding of and engagement with Economics among the wider community, including policy-makers and society.

The workshop is designed to be a small, invite-only event with a round-table format allowing participants to share and develop ideas together. For more information please see the website.

The event is being organized by the Centre for Intellectual Property and Information Law at the University of Cambridge and the Open Economics Working Group of the Open Knowledge Foundation, and is funded by the Alfred P. Sloan Foundation. More information about the Working Group can be found online.

To apply for participation, please fill out the application form and send us a CV at economics@okfn.org.

Europeana's interest in Linked Open Data

Europeana aims to provide the widest possible access to European cultural heritage, published on a massive scale through digital resources by hundreds of museums, libraries and archives. This includes empowering other actors to build services that contribute to such access. Making data openly available to the public and private sectors alike is thus central to Europeana’s business strategy. We are also trying to provide a better service by making available richer data than that very often published by cultural institutions: data in which millions of texts, images, videos and sounds are linked to other relevant resources, such as persons, places and concepts.

Europeana has therefore been interested for a while in Linked Data, as a technology that facilitates these objectives. We entirely subscribe to the views expressed in the W3C Library Linked Data report, which shows the benefits (but also acknowledges the challenges) of Linked Data for the cultural sector.

Europeana’s first toe in the Linked Data water

Last year, we released a first Linked Data pilot at data.europeana.eu. It was an exciting moment and our first opportunity to experiment with Linked Data.

We were able to deploy our prototype relatively easily, and the whole experience was extremely valuable from a technical perspective. In particular, this was the first large-scale implementation of Europeana’s new approach to metadata, the Europeana Data Model (EDM). This model enables the representation of much richer data than the current format used by Europeana in its production service. First, our pilot could use EDM’s ability to represent several perspectives on a cultural object. We have used it to distinguish the original metadata our providers send us from the data that we add ourselves. The Europeana data indeed includes enrichments that are created automatically and are not checked by professional data curators. For trust purposes, it is important that data consumers can see the difference.

We could also better highlight part of Europeana’s added value as a central point for accessing digitized cultural material, in direct connection with the above-mentioned enrichment. Europeana employs semantic extraction tools that connect its objects with large multilingual reference resources available as Linked Data, in particular Geonames and GEMET. This new metadata allows us to deliver a better search service, especially in a European context. With the Linked Data pilot we can explicitly point to these resources, in the same environment in which they are published. We hope this will help the entire community to better recognize the importance of these sources and to continue providing authority resources in interoperable Linked Data formats, for example using the SKOS vocabulary.
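To make the two ideas above more concrete, here is a minimal sketch in Python (using the rdflib library) of how provider metadata and an automatic Europeana enrichment might be kept apart as separate proxies on the same object, with the enrichment pointing to a Geonames resource. The URIs, property choices and example values are illustrative assumptions rather than the exact structure served at data.europeana.eu.

    # Sketch: provider metadata vs. Europeana enrichment as two ORE proxies.
    # Namespaces follow EDM/ORE conventions; all resource URIs are made up.
    from rdflib import Graph, Namespace, URIRef, Literal

    ORE = Namespace("http://www.openarchives.org/ore/terms/")
    DC = Namespace("http://purl.org/dc/elements/1.1/")
    DCT = Namespace("http://purl.org/dc/terms/")

    g = Graph()
    g.bind("ore", ORE)
    g.bind("dc", DC)
    g.bind("dcterms", DCT)

    cho = URIRef("http://data.example.org/item/123")                   # the cultural object
    provider_proxy = URIRef("http://data.example.org/proxy/provider/123")
    europeana_proxy = URIRef("http://data.example.org/proxy/europeana/123")

    # Provider perspective: the metadata exactly as delivered to Europeana.
    g.add((provider_proxy, ORE.proxyFor, cho))
    g.add((provider_proxy, DC.title, Literal("Vue de Paris")))
    g.add((provider_proxy, DCT.spatial, Literal("Paris")))             # free text, as supplied

    # Europeana perspective: an automatic enrichment linking to Geonames.
    g.add((europeana_proxy, ORE.proxyFor, cho))
    g.add((europeana_proxy, DCT.spatial, URIRef("http://sws.geonames.org/2988507/")))

    print(g.serialize(format="turtle"))

Keeping each perspective on its own proxy means a data consumer can always tell curated provider metadata apart from automatically generated links.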

If you are interested in further lessons learnt from a technical perspective, we have published them in a technical paper presented at the Dublin Core conference last year. Among the less positive aspects, data.europeana.eu is still not part of the production system behind the main europeana.eu portal. It does not come with the guarantee of service we would like to offer for the Linked Data server, though the provision of data dumps is not affected by this.

Making progress on Open Data

Another downside is that data.europeana.eu publishes data only for a subset of the objects our main portal provides access to. We started with 3.5 million objects out of a total of 20 million. These were selected after a call for volunteers, to which only a few providers responded. Additionally, we could not release our metadata under fully open terms. This was clearly an obstacle to the re-use of our data.

The new version concerns an even smaller subset of our collections: as of February 2012, data.europeana.eu contains metadata on 2.4 million objects. But this must be considered in context. The qualitative step of fully open publication is crucial to us. And over the past year, we have run an active campaign to convince our community to open up their metadata, allowing everyone to make it work harder for the benefit of end users. The metadata currently served at data.europeana.eu comes from data providers who reacted early and positively to our efforts. We trust we will be able to make metadata available for many more objects in the coming year.

In fact, we hope that this Linked Open Data pilot can contribute to our Open Data advocacy message. We believe such technology can encourage third parties to develop innovative applications and services, stimulating end users’ interest in digitized heritage. This would of course help to convince more partners to contribute metadata openly in the future. Alongside the new pilot we have released an animation that conveys exactly this message; you can view it here.

One of the key problems in natural science research is the lack of effective collaboration. A lot of research is conducted by scientists from different disciplines, yet cross-discipline collaboration is rare. Even within a discipline, research is often duplicated, which wastes resources and valuable scientific potential. Furthermore, without a common framework and context, research that involves animal testing often becomes phenomenological, and little or no general knowledge can be gained from it. The peer-reviewed publishing process is also not very effective in stimulating scientific collaboration, mainly due to the loss of an underlying machine-readable structure for the data and the duration of the process itself.

If research results were more effectively shared and re-used by a wider scientific community – including scientists with different disciplinary backgrounds – many of these problems could be addressed. We could hope to see a more efficient use of resources, an accelerated rate of academic publications, and, ultimately, a reduction in animal testing.

Effectopedia is a project of the International QSAR Foundation. Effectopedia itself is an open knowledge aggregation and collaboration tool that provides a means of describing adverse outcome pathways (AOPs)1 in an encyclopedic manner. Effectopedia defines an internal organizational space which helps scientists with different backgrounds to know exactly where their knowledge belongs and aids them in identifying both the larger context of their research and the individual experts who might be actively interested in it. Automated notifications, triggered when researchers create a causal link between parts of a pathway, simultaneously put them in contact with fellow researchers interested in the same topic who may have a different background or perspective on the subject. Effectopedia allows the creation of live scientific documents which are instantly open for focused discussion and feedback, whilst giving credit to the original authors and reviewers involved. The review process is never closed, and if new evidence arises it can be presented immediately, allowing the information in Effectopedia to remain current while keeping track of its complete evolution.

How can this better coordinate scientific research?

The type of knowledge needed in Effectopedia requires a paradigm shift in the way research is conducted: from phenomenological to more hypothesis-driven. Instead of testing an individual chemical with results often applicable only in the context of the specific experimental design, Effectopedia aims to define the conditions under which certain knowledge can be transferred to other species, levels of biological organization, exposure routes, exposure durations, chemicals and so on. A key element in making this approach work is providing a common framework where scientists with different backgrounds can contribute and encode their knowledge. Since the adoption of a new multidisciplinary data representation format could be quite challenging, an ontology-enhanced natural language interface for encoding this knowledge is envisioned.

Entering the results of one’s research in a machine-readable format could enable a sharing platform with a holistic concept, uniting scientists from different academic backgrounds in a multidisciplinary research framework. The data formats used by chemists, biologists and toxicologists may differ, but a unified data format can provide “a common language” in science. Additionally, this could serve as a platform for students in the natural sciences to download the datasets of previous research in order to learn about working with and analysing data.

How can this enable wider access to scientific research?

Many universities in the developing world and in transitional economies are not able to pay the full subscription fees to access published work, which keeps them out of mainstream, up-to-date scientific discussions. Experienced scientists with relevant expertise who are retired or no longer affiliated with an academic institution may also be able to contribute their knowledge to an open online encyclopaedia. Making this encyclopaedia semantic would also allow the development of multi-lingual support, enabling academic exchange between scientists who don’t even speak the same language.

How could this reduce animal testing?

The simplest way Effectopedia is envisioned to help reduce unnecessary animal testing is by providing a centralized repository for open, easily searchable access to existing knowledge. Effectopedia could also aid the design of alternative test methods by providing statistical information about which tests are most used and most needed. Once established, the adverse outcome pathways themselves could become an invaluable tool for discriminating when a simple chemical or assay-based test can be used to predict whole-animal effects for one or more species, eliminating the need for actual testing.
Much of the research which has already been done on cells and animals could be extrapolated to other species, as many have evolved in similar ways and therefore have similar functions. Therefore, cross-species extrapolation could reduce the need for further animal testing, as scientists could use information about research that has already been done.

I am interested. How can I get involved?

If you come from the chemical, pharmaceutical, flavoring, cosmetics or many other industries, the products your company creates are often subject to regulation. A system like Effectopedia could dramatically reduce the cost of developing and registering new products without compromising your competitive edge. The ability to store your data on your own in-house server in the same format as the public data allows you to mix the two during the product research phase and then submit them for registration using the same format. Without spending additional resources you can also create a better public image by publishing information that no longer provides any competitive advantage or that you are required to publish by governmental policies. You can also reduce the cost of building your in-house knowledge management system by concentrating your efforts on the custom, company-specific modules, while sharing the cost and resources of developing the common modules with all involved stakeholders.

Large organizations like the OECD and WHO and agencies like the EPA have already expressed interest in the development of Effectopedia as a tool for the development and documentation of adverse outcome pathways that meet the standards for regulatory acceptance. If you or your organization see the benefit of such a system, please contact us to see how you can help establish Effectopedia as an open standard for the representation of adverse outcome pathways.

If you are a scientist, you can help us form the initial expert panels creating the guidelines for all future adverse outcome pathway contributions. Your feedback is also valuable to us in these beta stages of development, so please feel free to contact us with any comments, ideas and suggestions.

Adverse outcome pathways (AOPs) describe the molecular interactions of a chemical with biological systems and the biological response models that document how molecular effects lead to adverse effects at many levels of biological organization. ↩

BibSoup is here! And it’s going to revolutionise how you work with bibliographic metadata.

The team has been coding and blogging and bugfixing for a while now on the BibServer software, and we’ve mentioned in passing that our own instance has been up and running under the name of BibSoup… Now we are officially launching for beta fun, and asking the community to come and have a go and let us know what you think.

Share your collection with the world

If you are already used to managing your bibliography in tools like Zotero, Bibsonomy, Mendeley – or even in text editors using BibTeX or RIS – then you already have a collection and a management strategy, and probably even a way to collaborate with other people on it. But then, what do you do with it? At BibSoup you can upload it and create an interactive web page to share with anyone.

Search and faceted browse

Once your collection is up on BibSoup, you can search across all the records and filter by any value in your collection. You can also customise the view of your collection to show the most useful filters by default, or perform advanced searches using powerful query syntax.

Graph the vital statistics

Want to know which year most of your publications were published? Which author is most common in your collection? Which journal publishes the most relevant articles? Just use the visualise option to create a great bubble or bar chart based on one of your filter values, and quickly find and click the one you want.

Improve your collection

You can use your BibServer display to quickly spot errors in your records; then you can export and make improvements. Soon you will also be able to edit directly in the BibServer, and share collection management with others.

Use your records on the internet

Once your collection is in BibServer, it is even possible to use the content directly as references within pages that you create on the web. There is a full RESTful API, and every page returns JSON (just add .json to the URL, or &format=json if the URL already has parameters).
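As a quick illustration, here is how one might pull a collection down as JSON using Python’s requests library. The collection URL below is a hypothetical example; the “.json” suffix is the trick described above, but the exact response structure and field names are assumptions, so treat this as a sketch rather than documented behaviour.

    # Sketch: fetching a BibSoup collection as JSON (hypothetical URL and fields).
    import requests

    url = "http://bibsoup.net/someuser/somecollection.json"  # append .json for machine-readable output

    response = requests.get(url, timeout=30)
    response.raise_for_status()
    collection = response.json()

    # Print a title-like field from each record, if present.
    for record in collection.get("records", []):
        print(record.get("title", "<untitled>"))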

Export as BibJSON

BibServer uses a particular flavour of JSON that we call BibJSON; JSON formats are really useful for using on the web, and can quite easily be exported and used for fun things like the visualisations we demonstrate.
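For a feel of the format, here is a rough sketch of a single BibJSON-flavoured record expressed as a Python dictionary. The field names follow common BibJSON conventions, but since those conventions are still being developed, treat the keys (and the invented bibliographic details) as illustrative rather than normative.

    import json

    # A made-up record in a BibJSON-like shape; every value here is invented.
    record = {
        "type": "article",
        "title": "An Example Article Title",
        "author": [{"name": "Doe, Jane"}, {"name": "Smith, John"}],
        "year": "2011",
        "journal": {"name": "Journal of Examples"},
        "identifier": [{"type": "doi", "id": "10.1234/example.5678"}],
    }

    # BibJSON is plain JSON, so a record round-trips directly through json.dumps.
    print(json.dumps(record, indent=2))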

Learn more

We are continuing to work hard on the code – making the records editable is our current focus, along with further development of the BibJSON conventions.

Upload from BibTeX or RIS sources is available, which means you can import data from most major bibliographic tools already; you can even use the parsers programmatically if you like. We would be happy to talk to anyone interested in developing more parsers too.

The aim of this project is to show how Open Bibliography enables scholarship; to show our community what we are missing if we do not commit to Open Bibliography; and to show that Open Bibliography is a fundamental requirement of a community committed to discovery and dissemination of ideas.

We are building on the large collections – and the goodwill of the providers of those collections – that we received last year, to demonstrate how the content of such collections can be useful to the individual or small group, by enabling them to identify and share records of value to them (earlier BibServer work was funded with partial support from U.S. National Science Foundation Award 0835773).

We hope you like BibSoup and find all this useful; do add your collections so we can share them with the rest of the world, too. Also if you would like your own BibServer, go ahead and download the code, or contact us for help / support options.

Since finally blogging about OpenPhilosophy.org last month I’ve been thinking about how one could make a generic open source platform that could be used to power it, and other things like it. Enter ‘TEXTUS’:

TEXTUS is an open source platform for working with collections of texts and metadata. It enables users to transcribe, translate, and annotate texts, and to manage associated bibliographic data.

Here’s the rationale:

The combination of freely available digital copies of public domain works, open bibliographic data and open source tools has the potential to revolutionise research in the humanities. However, there are currently numerous obstacles which mean that they are often under-utilised by scholars and students in teaching and research:

From classic literary and cultural works, to letters, drafts, notes, and other historical documents, there is a huge amount of freely available public domain material that is highly relevant to scholars and students engaged in research in the humanities. But these works can be difficult to find, difficult to work with, and works by a given author may be scattered across a variety of locations. Search results may be confusing or unclear. Automated Optical Character Recognition of texts may be inaccurate or incomplete. The metadata for a work may be unclear, and the provenance and rights status of a given digital edition may be unknown. It is not always clear how to cite passages from digital editions of public domain works.

Over the past few years, libraries and other cultural heritage organisations have been releasing open data about works they hold. This has the potential to be a rich resource for scholars interested in building scholarly bibliographies and working with large collections of texts. While there are a growing number of tools and services for working with bibliographic data, many researchers may not know how to use these, and online bibliographies may not link through to digital copies of public domain works which are available online.

There are a growing number of open source tools for transcribing, translating and annotating texts. However, many of these are one-off projects and it may not be clear how to deploy the tools in relation to a given text or collection of texts.

Here’s what it would do:

The TEXTUS platform will enable users to:

Transcribe texts from images, PDFs or other non-machine readable sources.

View texts and translations side by side – and create new translations of texts for use in teaching or research.

Annotate texts, and share annotations with groups of users, or with the public.

Curate, share and export collections of bibliographic metadata (scholarly references), including metadata associated with texts published on the platform.

Here’s a peek under the hood:

TEXTUS builds on and utilises existing best of breed open source components and software packages such as:

Annotator – an open-source JavaScript tool to enable annotations to be added to any webpage

BibServer – which includes numerous tools, services and standards for working with bibliographic metadata

How can we encourage more galleries, libraries, archives and museums (GLAM institutions) to open up their holdings – including metadata about their collections, and digital copies of works which have entered the public domain?

Following on from my post on opening up the public domain in July, we’re organising a workshop in Warsaw to bring together key stakeholders who are working alongside GLAM institutions to open up material for the public to reuse. It is just before the Creative Commons Global Summit. You can register and find further information here.

We want to make it easier for everyone to find and reuse works which are in the public domain. At the heart of this vision is opening up material from cultural heritage organizations around the world for all to reuse.

Join us in Warsaw before the CC Global Summit for a kickoff workshop that will bring together stakeholders to begin a global project to encourage GLAM institutions to make their metadata and digital copies of public domain works freely and openly available for all to reuse without restriction.

During the day we would like to:

Articulate a vision for opening up material held by GLAM institutions

Create a roadmap and a strategy for how key stakeholders can work together towards the realisation of this vision

Look at how key projects and initiatives fit together

Create an outline for a collaborative initiative involving key stakeholders – including a basic website, work plan, a strategy for funding, community building and dissemination

Put together a role description for a full time evangelist

Create a package of materials to use as the basis for animations on this topic

This guest post is written by Carl Grant, chief librarian at Ex Libris and past president of Ex Libris North America, in answer to some questions that Adrian Pohl, coordinator of the OKFN Working Group on Open Bibliographic Data, posed at the beginning of July in response to Ex Libris’ announcement of an “Expert Advisory Group for Open Data”. It is cross-posted on the OKFN blog and openbiblio.net.

The Ex Libris announcement in June 2011 that we were forming an “Expert Advisory Group for Open Data” has generated much discussion and an equal number of questions. Many of the questions bring to light the ever-present tensions and dynamics that exist between the various sectors and advocates of open data and systems. It also raises ongoing questions about how the goals of openness can be reasonably and properly achieved, and in what timeframe, particularly when it involves companies, products and data structures that have roots in proprietary environments.

For those who are not part of the Ex Libris community, allow me to define some of the Ex Libris terminology involved in this discussion:

Alma: The Ex Libris next-generation, cloud-based library management service package that supports the entire suite of library operations—selection, acquisition, metadata management, digitization, and fulfillment—for the full spectrum of library materials.

Community Zone: A major component of Alma that includes the Community Catalog (bibliographic records) and the Central Knowledgebase and Global Authorities Catalog. This zone is where customers may contribute content to the Community Catalog and in so doing, agree to allow users to share, edit, copy and redistribute the contributed records that are linked to their inventory.

Community Zone Advisory Group: A group of independent experts and Alma early adopters advising Ex Libris on policies regarding the Community Catalog.

Taking into consideration the many emails and conversations we’ve had around the topic, this original set of questions seems to be of shared interest:

These are good questions, so let’s work our way through this list.

Q: What is this working group actually about? About open licensing, open formats, open vocabularies, open source software or all of them?

A: The Community Zone Advisory Group is tasked with creating high-level guidelines to govern the contribution, maintenance, use and extraction of bibliographic metadata placed in the Community Catalog of Alma. As such, this group is also advising us on suggested licenses to use. Given that each Alma customer will have a local, private catalog that need not be shared, we’ve taken the position that we want to promote the most open policies possible on data that libraries contribute to the Community Catalog. Much of the discussion centers around what approach would be best for libraries and lead to the clearest terms of use for Alma users.

We’ve said to the group that we’ll leave it to them to determine whether the group will need to exist beyond the time this original charge is completed. We fully expect that we will have similar groups, if not this same one, advise us on other data that we plan to place in the Community Zone in the future.

Q: Does the working group only cover openness inside Ex Libris’ future metadata management system, e.g. openness between paying members in a closed ecosystem, or will it address openness of the service within a web-wide context?

A: This is a more complicated question because it is really two interlaced questions: first, whether the data is open; second, whether the system is open.

We are most assuredly making the bibliographic metadata open and the answer to the next question provides more detail on how we’re approaching this.

As for the systems holding the data, we are planning on being open, but this is a place where we clearly must wait until this new system is up and running well for our paying customers before we open the Community Catalog up to others. Even then, we’ll want to closely monitor the impact on bandwidth, compute cycles and data to determine any costs associated with this use, and how best to proceed and participate in making the Community Catalog open to larger communities in a web-wide context.

The goal is to work with our customers and the community to set achievable goals that move us down that path while factoring practicality into ideology. However, in the first deployment, our primary constituents will be institutions who have adopted Alma.

Q: Will Ex Libris push towards real open data for the Alma Community Zone? That would mean: 1) using open (web) standards like HTTP, HTML and RDF as a data model, 2) conforming to the Principles on Open Bibliographic Data through open licensing, and 3) providing APIs and the possibility to download data dumps.

A: Specifically:

Open standards and data formats are core to our design of Alma. They serve as our mechanisms of data exchange and the basis of our data structures when appropriate. Not all standards will be the basis of our internal functionality, however. For example, we’re building the infrastructure for RDF as a data exchange mechanism, but it does not fundamentally underpin our data structure, just as MARC21 binary format is not the root of our bibliographic record structure. When appropriate and possible, we are implementing these standards for data exchange based on libraries’ needs.

We are currently examining the open licenses to determine whether we can utilize them given the other data we’d planned for the Community Zone. Currently, our Alma agreements include language that largely replicates the key terms of the Open Data Commons PDDL license for customer-contributed records in the Community Catalog.

We will be providing APIs and plan to support downloads, but again, as we move forward, we plan to do this in a phased approach so that we can monitor the infrastructure and human resource demands associated with adoption. As noted above, our first priority will be providing service to our existing Alma users. This means that in the first release, providing Alma institutions with full access to their own data (in the form of APIs and data dumps) is where we’re focusing our attention.

In the final analysis, we feel our approach is really quite supportive of librarianship and quite open. We’re balancing the competing needs of our stakeholders, but it is important to note that Ex Libris is not artificially implementing restrictions that limit open data. Once libraries join the Alma community, there are no limits on their ability to manage their own projects or collaboration. Where we have the resources, we’re helping promote this open approach. We’ll be allocating our resources to provide best-in-class service to our customers and, at the same time, in a closely monitored and managed approach, continuing to expand access to larger communities.

We think all of this combined is a powerful statement about how proprietary and open approaches can beneficially co-exist and thus will help to move libraries forward in substantive ways.

Ultimately I think we should help to rally existing stakeholders from around the world behind a simple vision, and encourage them to work together to realise it.

At its simplest this vision is:

We want to make it easier for everyone to find and reuse works which are in the public domain.

Donning my ‘Massively Ambitious’ hat, this has two main parts:

Gathering open information (‘metadata’) about every public domain work in the world – including books, recordings, films, paintings, photographs, and so on. In practice this means opening up existing collections of metadata from galleries, libraries, archives and museums – and combining this with information from other sources (e.g. DBpedia, crowdsourced data) to make sure it is complete and accurate.

Publishing an open digital copy of every public domain work in the world. This means encouraging existing publishers to publish digital copies of works in a way that leaves everyone free to reuse them (i.e. discouraging copyfraud and legal/technical restrictions on reuse).

To pursue these things I would propose that the Open Knowledge Foundation should:

Start a high-profile campaign to encourage institutions to open up their metadata and digital copies of public domain works in the right way, getting sign-on from prominent stakeholders in this area (e.g. Creative Commons, Internet Archive, Wikimedia Foundation, exemplary institutions, etc.). I’ve started discussing this with various people already – including the idea of 1-2 short animations to promote open metadata and open digital copies of works!

Create a ‘proof of concept’ project to show users what we want to do, and to encourage stakeholders to collaborate towards building this. Like an all-singing, all-dancing, elegant, beautiful and feature-heavy version of PublicDomainWorks.net. Less focused on hard-core data integration/exposure, and more focused on interesting and useful front-end features for ordinary users (e.g. lets me create a library of my favourite books, a list of my favourite paintings, get material from multiple sources, etc). Ultimately we need to build shared infrastructure for the discovery of public domain works – that brings together and builds on the amazing collections and services that are already out there – from Open Library and Europeana to Project Gutenberg and Wikimedia Commons. We want open infrastructure that federates content – and allows it to be easily integrated, embedded and reused in lots of different contexts. This is a good way to demonstrate what we want to do!

The library, archives and museums (LAM) community is increasingly interested in the potential of Linked Open Data to enable new ways of leveraging and improving our digital collections, as recently illustrated by the first international Linked Open Data in Libraries, Archives and Museums (LOD-LAM) Summit in San Francisco. The Linked Open Data approach combines knowledge and information in new ways by linking data about cultural heritage and other materials coming from different museums, archives and libraries. This not only allows for the enrichment of metadata describing individual cultural objects, but also makes our collections more accessible to users by supporting new forms of online discovery and data-driven research.

But as cultural institutions start to embrace Linked Open Data practices, the intellectual property rights associated with their digital collections become a more pressing concern. Cultural institutions often struggle with rights issues related to the content in their collections, primarily because these institutions often do not hold the (copy)rights to the works in their collections. Instead, copyrights often rest with the authors or creators of the works, or with intermediaries who have obtained these rights from the authors, so that cultural institutions must get permission before they can make their digital collections available online.

However, the situation with regard to the metadata used to describe these cultural collections — individual metadata records and collections of records — is generally less complex. Factual data are not protected by copyright, and where descriptive metadata records or record collections are covered by rights (either because they are not strictly factual, or because they are vested with other rights such as the European Union’s sui generis database right), it is generally the cultural institutions themselves who are the rights holders. This means that in most cases cultural institutions can independently decide how to publish their descriptive metadata records — individually and collectively — allowing them to embrace the Linked Open Data approach if they so choose.

As the word “open” implies, the Linked Open Data approach requires that data be published under a license or other legal tool that allows everyone to freely use and reuse the data. This requirement is one of the most basic elements of the LOD architecture. And, according to Tim Berners-Lee’s 5-star scheme, the most basic way of making data available online is to make it ‘available on the web (whatever format), but with an open licence’. However, there is still considerable confusion in the field as to what exactly qualifies as “open” and “open licenses”.

While there are a number of definitions available such as the Open Knowledge Definition and the Definition of Free Cultural Works, these don’t easily translate into a licensing recommendation for cultural institutions that want to make their descriptive metadata available as Linked Open Data. To address this, participants of the LOD-LAM summit drafted ‘a 4-star classification-scheme for linked open cultural metadata’. The proposed scheme (obviously inspired by Tim Berners-Lee’s Linked Open Data star scheme) ranks the different options for metadata publishing — legal waivers and licenses — by their usefulness in the LOD context.

In line with the Open Knowledge Definition and the Definition of Free Cultural Works, licenses that impose restrictions on the ways the metadata may be used (such as ‘non-commercial only’ or ‘no derivatives’) are not considered truly “open” licenses in this context. This means that metadata made available under a more restrictive license than those proposed in the 4-star system above should not be considered Linked Open Data.

According to the classification there are 4 publishing options suitable for descriptive metadata as Linked Open Data, and libraries, archives and museums trying to maximize the benefits and interoperability of their metadata collections should aim for the approach with the highest number of stars that they’re comfortable with. Ideally the LAM community will come to agreement about the best approach to sharing metadata so that we all do it in a consistent way that makes our ambitions for new research and discovery services achievable.

Finally, it should be noted that the ranking system only addresses metadata licensing (individual records and collections of records) and does not specify how that metadata is made available, e.g., via APIs or downloadable files.