Fifth Workshop on Big Data Benchmarking

The Fifth Workshop on Big Data Benchmarking (5th WBDB) will be held on August 5-6, 2014 in Potsdam, Germany.

The objective of the WBDB workshops is to make progress toward the development of industry-standard, application-level benchmarks for evaluating hardware and software systems for big data applications.

To be successful, a benchmark should be:

Simple to implement and execute;

Cost effective, so that the benefits of executing the benchmark justify its expense;

Timely, with benchmark versions keeping pace with rapid changes in the marketplace; and

Verifiable, so that results of the benchmark can be validated via independent means.

Based on discussions at the previous big data benchmarking workshops, two benchmark proposals are currently under consideration. The first, called BigBench (to appear in the ACM SIGMOD 2013 conference), is based on extending the Transaction Processing Performance Council's Decision Support benchmark (TPC-DS) with semi-structured and unstructured data and with new queries targeted at those data. The second is based on a Deep Analytics Pipeline for event processing (see http://cc.readytalk.com/play?id=1hws7t).

Topics

To make progress towards a big data benchmarking standard, the workshop will explore a range of issues including:

Implementation options: Alternative implementation technologies, such as SQL, NoSQL, and the Hadoop software ecosystem, including different implementations of HDFS.

Workload: Representative big data business problems and corresponding benchmark implementations. Specification of benchmark applications that represent the different modalities of big data, including graphs, streams, scientific data, and document collections.

Hardware options: Evaluation of new hardware options, including different types of HDD, SSD, and main memory; large-memory systems; and new platform options such as dedicated commodity clusters and cloud platforms.

Metrics for efficiency: Measuring the efficiency of a solution, e.g. based on costs of acquisition, ownership, energy, and/or other factors, while encouraging innovation and avoiding benchmark escalations that favor large, inefficient configurations over small, efficient ones.

Early implementations: Experiences with early implementations of the Deep Analytics Pipeline or BigBench, and lessons learned in benchmarking big data applications.

Enhancements: Proposals to augment these benchmarks, e.g. by adding more data genres (such as graphs) or by incorporating a range of machine learning and other algorithms, are encouraged.