benchmark Big Data platforms by being the first distributed benchmarking platform for Linked Data.

First, we offer an open-source evaluation platform that can be downloaded and executed locally. Second, we offer an online instance of the platform for running public challenges and for ensuring that even users without the required infrastructure are able to run the benchmarks they are interested in.

Underlying Principles

The HOBBIT benchmarking platform ensures that:

Users can test systems with the HOBBIT benchmarks without having to worry about finding standardized hardware.

New benchmarks can be easily created and added to the platform by third parties.

The evaluation can be scaled out to large datasets and distributed architectures.

The publication and analysis of the results of different systems can be carried out in a uniform manner across the different benchmarks.
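To illustrate the third-party extensibility point, a benchmark component for a containerized platform such as HOBBIT would typically be packaged as a Docker image. The following Dockerfile is only a hedged sketch: the base image, file names, and entry point are hypothetical and do not show the platform's actual integration API.

```dockerfile
# Hypothetical Dockerfile for a third-party benchmark component.
# Assumes the benchmark is a self-contained Java application;
# image contents and paths are illustrative only.
FROM openjdk:8-jre
COPY target/my-benchmark.jar /app/benchmark.jar
CMD ["java", "-jar", "/app/benchmark.jar"]
```

Once built and published to an image registry, such a container could be registered with the platform alongside metadata describing the benchmark, so that other users can select and run it without any local setup.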

The first version of the platform was released in February 2017. It offers the main features of an evaluation platform and has been enhanced continuously since then. The release of the second version is planned for February 2018. For this release, we are mainly focusing on the usability of the platform as well as on additional features that benchmark implementations can use (e.g., shared volumes or hardware statistics).