Linked data

In computing, linked data (often capitalized as Linked Data) is a method of publishing structured data so that it can be interlinked and become more useful through semantic queries. It builds upon standard Web technologies such as HTTP, RDF and URIs, but rather than using them to serve web pages for human readers, it extends them to share information in a way that can be read automatically by computers. This enables data from different sources to be connected and queried.[1]

Tim Berners-Lee, who coined the term, outlined four principles of linked data in his 2006 design note "Linked Data":[2]

Use URIs to name (identify) things.

Use HTTP URIs so that these things can be looked up (interpreted, "dereferenced").

Provide useful information about what a name identifies when it is looked up, using open standards such as RDF, SPARQL, etc.

Refer to other things using their HTTP URI-based names when publishing data on the Web.
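The second principle, dereferencing, amounts to an ordinary HTTP request with content negotiation: the client asks for RDF rather than an HTML page. A minimal sketch in Python, using a real DBpedia identifier for illustration (the actual network fetch is omitted so the example stays self-contained):

```python
import urllib.request

def build_dereference_request(uri: str) -> urllib.request.Request:
    """Prepare an HTTP GET that asks the server for machine-readable RDF
    (Turtle preferred, RDF/XML as a fallback) instead of a web page."""
    return urllib.request.Request(
        uri,
        headers={"Accept": "text/turtle, application/rdf+xml;q=0.9"},
    )

req = build_dereference_request("http://dbpedia.org/resource/Berlin")
print(req.get_header("Accept"))  # → text/turtle, application/rdf+xml;q=0.9
# A client would then call urllib.request.urlopen(req) and parse the
# returned triples with an RDF library.
```

Servers that follow the linked data conventions respond to such a request with a description of the named thing in the requested RDF serialization.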

Tim Berners-Lee gave a presentation on linked data at the TED 2009 conference.[3] In it, he restated the linked data principles as three "extremely simple" rules:

All kinds of conceptual things, they have names now that start with HTTP.

If I take one of these HTTP names and I look it up...I will get back some data in a standard format which is kind of useful data that somebody might like to know about that thing, about that event.

When I get back that information it's not just got somebody's height and weight and when they were born, it's got relationships. And when it has relationships, whenever it expresses a relationship then the other thing that it's related to is given one of those names that starts with HTTP.
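The third rule can be sketched in a few lines: the data comes back as relationships (RDF triples), and the things related are themselves named with HTTP URIs, so each relationship points to another name that can in turn be looked up. The URIs and facts below are illustrative, modeled on DBpedia's naming:

```python
# Triples as (subject, predicate, object); every term is an HTTP URI.
triples = [
    ("http://dbpedia.org/resource/Tim_Berners-Lee",
     "http://dbpedia.org/ontology/birthPlace",
     "http://dbpedia.org/resource/London"),
    ("http://dbpedia.org/resource/London",
     "http://dbpedia.org/ontology/country",
     "http://dbpedia.org/resource/United_Kingdom"),
]

def objects_of(subject, predicate, data):
    """Follow one relationship: everything the subject links to via it."""
    return [o for s, p, o in data if s == subject and p == predicate]

birthplaces = objects_of(
    "http://dbpedia.org/resource/Tim_Berners-Lee",
    "http://dbpedia.org/ontology/birthPlace",
    triples,
)
print(birthplaces)  # → ['http://dbpedia.org/resource/London']
```

Because the object of the first triple is itself an HTTP name, a client can dereference it and discover further relationships, which is what makes the data "linked".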

A diagram produced by the Linked Open Data Cloud project, which was started in 2007, shows which Linking Open Data datasets were connected as of August 2014. Some datasets include copyrighted data that is made freely available.[11]

The goal of the W3C Semantic Web Education and Outreach group's Linking Open Data community project is to extend the Web with a data commons by publishing various open datasets as RDF on the Web and by setting RDF links between data items from different data sources. In October 2007, the datasets consisted of over two billion RDF triples, which were interlinked by over two million RDF links.[12][13] By September 2011 this had grown to 31 billion RDF triples, interlinked by around 504 million RDF links. A detailed statistical breakdown was published in 2014.[14]
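An RDF link between datasets is itself just a triple; a common predicate for this purpose is owl:sameAs, which asserts that two URIs identify the same thing. A minimal sketch, with URIs illustrative of DBpedia and GeoNames naming conventions:

```python
# owl:sameAs links connect identifiers for the same entity across datasets.
OWL_SAME_AS = "http://www.w3.org/2002/07/owl#sameAs"

links = [
    ("http://dbpedia.org/resource/Berlin", OWL_SAME_AS,
     "http://sws.geonames.org/2950159/"),
]

def linked_uris(uri, triples):
    """All URIs asserted, in either direction, to identify the same thing."""
    same = {o for s, p, o in triples if p == OWL_SAME_AS and s == uri}
    same |= {s for s, p, o in triples if p == OWL_SAME_AS and o == uri}
    return same

print(linked_uris("http://dbpedia.org/resource/Berlin", links))
```

Following such links is how a semantic query that starts in one dataset can pull in descriptions of the same entity from another.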

There are a number of European Union projects involving linked data. These include the Linked Open Data Around-The-Clock (LATC) project,[15] the PlanetData project,[16] the DaPaaS (Data-and-Platform-as-a-Service) project,[17] and the Linked Open Data 2 (LOD2) project.[18][19][20] Data linking is one of the main goals of the EU Open Data Portal, which makes available thousands of datasets for anyone to reuse and link.

DBpedia – a dataset containing extracted data from Wikipedia; it contains about 3.4 million concepts described by 1 billion triples, including abstracts in 11 different languages

FOAF – a dataset describing persons, their properties and relationships

GeoNames – a dataset providing RDF descriptions of more than 7,500,000 geographical features worldwide

UMBEL – a lightweight reference structure of 20,000 subject concept classes and their relationships derived from OpenCyc, which can act as binding classes to external data; also has links to 1.5 million named entities from DBpedia and YAGO

Wikidata – a collaboratively created linked dataset that acts as central storage for the structured data of its Wikimedia sister projects