Some context in the era of Linked Data

Introduction

Knowledge Graphs (KGs) are currently on the rise. In their latest Hype Cycle for Artificial Intelligence (2018), Gartner highlighted: “The rising role of content and context for delivering insights with AI technologies, as well as recent knowledge graph offerings for AI applications have pulled knowledge graphs to the surface.” We can roughly di […]

Knowledge graphs are essential for any information architecture built upon semantics and AI. The Linked Data Life Cycle provides guidelines for data governance within the semantic web framework. The post Knowledge Graphs – Connecting the Dots in an Increasingly Complex World appeared first on Semantic Web Company.

With the UN predicting that more than 70 percent of the world’s population will live in urban areas by 2050, the development of sustainable smart cities is a rising need. Cities are now capable of collecting and analyzing enormous amounts of data to automate processes, improve service quality, and make better decisions. This opens ...

Drupal is one of the favourite enterprise content management systems. Government and non-governmental organizations in particular embrace this open source platform to build advanced digital experiences. Over the last few years, we have been developing several PoolParty semantic technology features and modules that integrate natively into Drupal. In this blog post, […]

In our recent endeavor to import the Google Product taxonomy into PoolParty in different languages, we encountered some challenges that needed to be addressed. The first challenge was that the Google Product taxonomy is in Excel (XLS) format, and for each language there is a separate file. The second challenge was how to align ...


Most information professionals already know: separation of content and presentation helps to manage and deliver complex information. This can only be done by using enriched structured content. Some call this intelligent content.

But why exactly is metadata per document (some call it “tagging”) not enough?

Here is a very brief slide deck that explains the difference between the traditional approach and the graph-based approach, which develops not only a metadata layer separated from the content layer, but also a knowledge layer on top of it.

As a long-term member of the Linked Data community, which has evolved from W3C’s Semantic Web, the latest developments around Data Science have become more and more attractive to me due to their complementary perspectives on similar challenges. Both disciplines work on questions like these:

How to extract meaningful information from large amounts of data?

How to connect pieces of information to other pieces in order to generate ‘bigger pictures’ of sometimes complex problems?

How to visualize complex information structures in a way that decision-makers benefit from it?

Two complementary approaches

When taking a closer look at the approaches taken by these two ‘schools of advanced data management’, one aspect becomes obvious: both try to develop models in order to ‘codify and calculate the data soup’.

While Linked Data technologies are built on top of knowledge models (‘ontologies’), which first of all describe data in distributed environments like the web, Data Science methods are mainly based on statistical models. One could say: ‘Causality and Reasoning over Distributed Data’ meets ‘Correlation and Machine Learning on Big Data’.

Graph databases are key to success

In contrast to this supposed contradiction, correlations and complementarities between these two disciplines prevail. Both approaches seek solutions to overcome the problem of rigid data structures, which can hardly adapt to the needs of dynamic knowledge graphs. Whenever relational databases cannot fulfill requirements for performance and simplicity due to the complexity of database queries, graph databases can be used as an alternative.

Thus, both disciplines make use of these increasingly popular database technologies: while Linked Data can be stored and processed by standards-based RDF stores like Virtuoso, MarkLogic, GraphDB or Sesame, the most popular graph databases for Data Scientists are mainly based on the property graph model, for example Titan or Neo4j. Some vendors, like Bigdata, even support both graph models.

The property graph model better serves the needs of graph data analysts (e.g. for social network analysis or real-time recommendations).

RDF graph databases are great when distributed information sources should be linked to each other and mashed together (e.g. for Dynamic Semantic Publishing or for context-rich applications).
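To make the contrast between the two graph models concrete, here is a minimal, purely illustrative Python sketch. All names and data are hypothetical, and plain tuples and dicts stand in for what an RDF store or a property graph database would manage:

```python
# RDF model: everything, including "attributes", is a (subject, predicate, object) triple.
rdf_triples = {
    ("ex:alice", "rdf:type",   "foaf:Person"),
    ("ex:alice", "foaf:name",  '"Alice"'),
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob",   "rdf:type",   "foaf:Person"),
    ("ex:bob",   "foaf:name",  '"Bob"'),
}

# Property graph model: nodes and edges carry key/value properties directly.
pg_nodes = {
    "alice": {"label": "Person", "name": "Alice"},
    "bob":   {"label": "Person", "name": "Bob"},
}
pg_edges = [("alice", "KNOWS", "bob", {"since": 2014})]

# A simple pattern match ("who does Alice know?") works on either model:
rdf_friends = [o for (s, p, o) in rdf_triples
               if s == "ex:alice" and p == "foaf:knows"]
pg_friends = [dst for (src, rel, dst, props) in pg_edges
              if src == "alice" and rel == "KNOWS"]

print(rdf_friends)  # ['ex:bob']
print(pg_friends)   # ['bob']
```

The sketch also hints at why each model fits its typical use case: triples use global identifiers (URIs), which makes merging distributed sources trivial, while property graphs keep attributes local to nodes and edges, which suits fast traversal and analytics.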

Connect both approaches and combine methods

I can see at least two options where methods from Data Science will benefit from Linked Data technologies and vice versa:

Machine learning algorithms benefit from the linking of various data sets by using ontologies and common vocabularies, as well as reasoning, which leads to a broader data basis with (sometimes) higher data quality.

Questions on the use of Linked Data in businesses

We want to learn more about the opinions of various stakeholders working in different industry verticals on the status of Linked Data technologies. The main question is: is Linked Data perceived as mature enough to be used on a large scale in enterprises? The results will contribute to the development of the Linked Data market by reporting how enterprises currently think.

The SEMANTiCS conference celebrated its 10th anniversary this September in Leipzig. And this year’s edition opened a new chapter for the Semantic Web in Europe – a marketplace for the next generation of semantic technologies was born.

The challenges in implementing linked data technologies in enterprises are not limited to technical issues. Projects like these also deal with organisational hurdles, for instance the development of employee skills in knowledge modelling and the implementation of a linked data strategy that foresees a cost-effective and sustainable infrastructure of high-quality, linked knowledge graphs. SKOS can play a key role in enterprise linked data strategies due to its relative simplicity, combined with its ability to be mapped to and extended by other controlled vocabularies, ontologies, entity extraction services and linked open data.

Whilst librarians, taxonomists, and specialists in the fields of text mining and entity extraction have started to embrace SKOS, ‘ontologists’ from the artificial intelligence community in particular still remain sceptical about the capabilities of SKOS.

With the latest release of PoolParty Thesaurus Server, a full-blown ontology management facility has been introduced which can now be used to extend the expressivity of SKOS knowledge models. For instance, a SKOS concept can additionally be typed as any other kind of resource, and thereby schemas of additional relations and attributes can be applied to the concept.
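As an illustration of that idea, the following sketch uses plain Python tuples to stand in for RDF triples; the identifiers (`ex:Vienna`, `geo:City`, `geo:population`) are hypothetical, not taken from any actual PoolParty project. A SKOS concept receives a second type, which is what allows schema-specific attributes to be attached to it:

```python
# A SKOS concept, extended with an additional ontology type and a custom attribute.
triples = [
    # Plain SKOS thesaurus entry
    ("ex:Vienna", "rdf:type",       "skos:Concept"),
    ("ex:Vienna", "skos:prefLabel", '"Vienna"@en'),
    ("ex:Vienna", "skos:broader",   "ex:Austria"),
    # Ontology extension: the same concept also becomes a geo:City,
    # so schema-specific attributes can be applied to it
    ("ex:Vienna", "rdf:type",       "geo:City"),
    ("ex:Vienna", "geo:population", '"1900000"^^xsd:integer'),
]

# The resource carries both types at once; SKOS tooling and ontology-aware
# tooling each see the facet they understand.
types = [o for (s, p, o) in triples if s == "ex:Vienna" and p == "rdf:type"]
print(types)  # ['skos:Concept', 'geo:City']
```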

PoolParty’s philosophy is to support users with Simple Knowledge Organization System (SKOS) models first, letting them grow incrementally through various mechanisms like ontologies, text corpus analysis or linked data enrichment. All of these can be combined nicely. Users benefit from a step-by-step approach, not being overwhelmed by an all-encompassing one from the very first step. Learn more >>>

The ‘document’ has for ages been the most prominent metaphor for presenting information, as well as the predominant information carrier. With the rise of the Semantic Web, information has been broken down into tiny pieces, which can be put into various contexts dynamically.

This principle can be applied to tackle some of the most important challenges faced by publishers nowadays: the most efficient reuse of media assets and personalisation of information services.

In a workshop I will moderate at this year’s Publishers’ Forum (Berlin, May 5-6), you will find out why semantic web principles and linked data technologies are the key to ‘Dynamic Semantic Publishing’. Attendees will learn from best practices and get an overview of state-of-the-art technologies.

2014 is only a couple of days old. I have some expectations and visions for the new year with regard to linked data and its next evolutionary steps.

Smart data will receive a lot of attention: big data is the wave on which this topic surfs.

Trust and provenance of data have been discussed for a while and frequently mentioned as an important step for linked data to be accepted, especially by enterprises. W3C’s PROV ontology was just a first step in this direction. More specifications and implementations will follow this year.

Automatic quality checks for several types of linked data will become a matter of course (similar to test automation in software development). One example is qSKOS, which is provided as a web service for everyone interested in controlled vocabularies like taxonomies or thesauri.
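One such vocabulary check can be sketched in a few lines. The data and identifiers below are made up, and this mimics only one kind of quality issue that tools like qSKOS report: ‘orphan’ concepts that have no semantic relation to any other concept.

```python
# Hypothetical mini-thesaurus: three concepts, one hierarchical relation.
concepts = {"ex:Vienna", "ex:Austria", "ex:Atlantis"}
relations = [
    ("ex:Vienna", "skos:broader", "ex:Austria"),
]

# A concept counts as linked if it appears on either end of any relation.
linked = {s for (s, p, o) in relations} | {o for (s, p, o) in relations}

# Orphan concepts are candidates for review: they may be typos,
# leftovers, or simply not yet integrated into the hierarchy.
orphans = sorted(concepts - linked)
print(orphans)  # ['ex:Atlantis']
```

Real checkers cover many more issues (missing labels, cyclic hierarchies, disconnected clusters), but they all follow this same pattern of querying the vocabulary graph for suspicious structures.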

The LOD cloud as we know it won’t be updated anymore: the periodic updates of the LOD cloud diagram will stop in 2014, as the image would be far too big. Instead, several domains will generate their own LOD clouds, each with a couple of central hubs in the middle (see also: The LOD cloud is dead, long live the trusted LOD cloud). These connected sub-hubs will represent the overall LOD cloud in the future. DBpedia will remain at the centre.

Linked Data “killer applications” will be established: automatic linking of structured and unstructured information based on RDF could become a killer application for Linked Data technologies. Take a look at two example applications in the areas of medicine and clean energy which make use of this principle: true semantic search becomes possible (the two demos won’t work properly behind a firewall due to some software libraries they use).

The year of semantic web standards: the Open Government Data movement will finally arrive at the point where standards-based technologies like linked data become the obvious solution to the more or less chaotic collections of open data that have accumulated in recent years.

Enterprise Linked Data: more and more integrations of linked data technologies like Semantic SP into enterprise platforms like SharePoint will become available as products on the software market.

SEMANTICS 2014 will take place in September in Germany and will be a great event. More to come soon.

ISWC 2014 will take place in October at beautiful Lake Garda (Italy) and will be a great event, too.

I am looking forward to meeting some of you once again, and also to meeting some new linked data aficionados!