Open Knowledge Extraction (OKE) Open Challenge 2017-2018

Challenge Motivation

After the successful organization of the OKE challenge by the HOBBIT project (https://project-hobbit.eu/) at ESWC 2017, HOBBIT is proud to announce that OKE will be launched in September as an open challenge. The OKE Open Challenge will ensure continuous participation and system evaluation.
A monetary prize of at least 250€ will be provided.
Stay tuned to get ready to participate! Express your interest!

Challenge Overview

The aim of this challenge is to test the performance of knowledge extraction systems on aspects that are relevant to the Semantic Web, including precision, recall, and runtime. The challenge will test the systems against data derived from several datasets and will comprise the following tasks:

Task 1 Focused Named Entity Identification and Linking

Task 2 Broader Named Entity Identification and Linking

Task 3 Focused Musical Named Entity Recognition and Linking

We hereby invite system developers to participate in the aforementioned tasks. To ensure that the system results are comparable, we will provide the HOBBIT benchmarking platform for the generation of the final results to be included in the system publications. A specification of the hardware on which the benchmarks will be run will be released in due course.

Prizes

The winner of the challenge will get a prize of at least 250€. Further prizes are being organized and will be announced on the challenge website.

Technical requirements for participation

Each participant must provide their system as a Docker image. This image has to be uploaded to the HOBBIT GitLab (it is possible to use a private repository, i.e., the system will not be visible to other people). In general, the uploaded Docker image can contain either a) the system itself or b) a web service client that forwards requests to a system hosted by you. Note that we highly recommend the first option, since a web service client will not enable you to take part in scenario B of the tasks.
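As an illustration, a self-contained system (option a) could be packaged roughly as follows. All names below (base image, jar file, entry point) are placeholders, not prescribed by the platform:

```dockerfile
# Hypothetical Dockerfile for a self-contained system (option a).
FROM openjdk:8-jre
# Copy the packaged system into the image.
COPY target/my-oke-system.jar /app/system.jar
# The entry point starts the system adapter that talks to the
# HOBBIT platform when the container is launched by the benchmark.
CMD ["java", "-jar", "/app/system.jar"]
```

The built image would then be pushed to your (possibly private) repository on the HOBBIT GitLab so the platform can pull and run it.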

Implementing the API

To be able to benchmark your system, it needs to implement our NIF-based API (e.g., using a wrapper). There are several ways this can be achieved.
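For orientation, a NIF document pairs the full input text (a nif:Context) with offset-anchored annotation strings. The sketch below builds such a document in Python; the offset-based "#char=b,e" URI scheme follows NIF 2.0, but the property set is simplified and the exact vocabulary used by the benchmark may differ:

```python
# Hypothetical sketch of a NIF document builder (simplified vocabulary).
NIF = "http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#"
ITSRDF = "http://www.w3.org/2005/11/its/rdf#"

def make_nif_document(base, text, mentions):
    """Serialize `text` and its entity `mentions` as NIF Turtle.

    mentions: list of (begin, end, entity_uri) with character offsets.
    """
    end = len(text)
    ctx = f"{base}#char=0,{end}"
    lines = [
        f"@prefix nif: <{NIF}> .",
        f"@prefix itsrdf: <{ITSRDF}> .",
        "",
        # The context carries the full text of the document.
        f"<{ctx}> a nif:Context, nif:String ;",
        f'    nif:isString "{text}" ;',
        '    nif:beginIndex "0" ;',
        f'    nif:endIndex "{end}" .',
    ]
    for b, e, uri in mentions:
        lines += [
            "",
            # Each mention is anchored to the context by offsets
            # and linked to a knowledge base resource.
            f"<{base}#char={b},{e}> a nif:String, nif:Phrase ;",
            f"    nif:referenceContext <{ctx}> ;",
            f'    nif:anchorOf "{text[b:e]}" ;',
            f'    nif:beginIndex "{b}" ;',
            f'    nif:endIndex "{e}" ;',
            f"    itsrdf:taIdentRef <{uri}> .",
        ]
    return "\n".join(lines)
```

A system's response to the benchmark is essentially such a document: the input context plus the mentions (and links) the system found.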

1st possibility: GERBIL compatible APIs

If your system already implements a NIF-based API that is compatible with the GERBIL benchmarking framework, you do not have to implement anything in addition. You only need to provide a Docker image of your system that implements the same API as your original web service, together with an adapted version of the system meta data file (see below).

2nd possibility: Usage of the provided Java interfaces

In this approach, you implement an annotator whose annotator.annotate(document) method adds the named entities to the given document. If your system is not already compatible with GERBIL, we recommend this approach.

3rd possibility: Direct implementation of the API

If you want to use a different language to implement our NIF-based API, you need to implement the API of a system that can be benchmarked in HOBBIT. Every message on the task queue will be a single NIF document. The response of your system has to be sent to the result queue. Your system won't receive data through the data queue.
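Conceptually, the message flow looks like the loop below. Python's queue.Queue stands in for the platform's actual queues here; the real transport and the termination signal are defined by the HOBBIT platform, not by this sketch:

```python
# Illustrative sketch of the direct-API message flow: one NIF document
# per task-queue message, one annotated document per result-queue message.
import queue

def annotate(nif_doc: str) -> str:
    """Stub annotator: a real system would add entity annotations."""
    return nif_doc + "\n# annotated"

def run_system(task_queue: "queue.Queue[str]",
               result_queue: "queue.Queue[str]") -> None:
    """Consume NIF documents until a None sentinel arrives."""
    while True:
        doc = task_queue.get()
        if doc is None:  # stand-in for the platform's end-of-data signal
            break
        result_queue.put(annotate(doc))
```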

The URI of the system is used as an identifier – it does not have to be dereferenceable. The system is defined as a system instance with a label and a description. The last two lines are very important, since they define the Docker image that is used to run the system and the API the system implements. Please note that Tasks 1, 2 and 3 share the same API.

As described on the wiki page of the system meta data file, it is possible to define several instances of a single system. Feel free to use this feature to adapt your system to the three different tasks.
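For illustration, a meta data file with two instances of the same image might look roughly like this. All URIs, labels, image names, and the API IRI below are placeholders; the exact vocabulary and the API identifier to use are given in the HOBBIT platform documentation:

```turtle
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
@prefix hobbit: <http://w3id.org/hobbit/vocab#> .

<http://example.org/MySystem-task1> a hobbit:SystemInstance ;
    rdfs:label   "MySystem (Task 1)" ;
    rdfs:comment "MySystem configured for focused NE identification and linking" ;
    hobbit:imageName "git.project-hobbit.eu:4567/alice/my-system" ;
    hobbit:implementsAPI <http://example.org/oke/API> .

<http://example.org/MySystem-task3> a hobbit:SystemInstance ;
    rdfs:label   "MySystem (Task 3)" ;
    rdfs:comment "MySystem configured for musical NE recognition and linking" ;
    hobbit:imageName "git.project-hobbit.eu:4567/alice/my-system" ;
    hobbit:implementsAPI <http://example.org/oke/API> .
```

Both instances point at the same Docker image; the platform can then run each instance against the matching task.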

Important Dates

Training data set and small test data (to communicate with HOBBIT) release: middle of August

Evaluation: evaluation results will be released every two weeks from the end of September onwards