Splice Machine integrates Hadoop and Apache Spark in Version 2.0

December 10, 2015

Today's ever-growing data volumes have given rise to a variety of new database options, each with its own particular strengths and features.

Splice Machine 2.0, the latest version of the company's RDBMS, integrates the open-source Apache Spark engine into the existing Hadoop-based architecture, creating a flexible hybrid SQL database that lets businesses perform transactional and analytical workloads at the same time.

"Most in-memory systems require you to store all data in memory," said Monte Zweben, Splice Machine's CEO, in an interview last month. "Such technologies can become prohibitively expensive as data volumes grow. We're doing just compute in memory -- you can store data elsewhere," he said.
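Zweben's distinction can be illustrated with a toy sketch (generic Python, unrelated to Splice Machine's actual implementation): holding an entire dataset in memory costs RAM proportional to the data, while streaming it from disk and keeping only the running aggregate in memory costs a constant amount, no matter how large the data grows.

```python
import os
import tempfile

# Write a sample "on-disk" dataset: one numeric value per line.
datafile = os.path.join(tempfile.mkdtemp(), "values.txt")
with open(datafile, "w") as f:
    for i in range(100_000):
        f.write(f"{i}\n")

# Store-all-in-memory approach: the entire dataset must fit in RAM.
def sum_all_in_memory(path):
    values = [int(line) for line in open(path)]  # materializes every row
    return sum(values)

# Compute-in-memory approach: stream rows from disk, keeping only the
# running aggregate in RAM -- memory use stays constant as data grows.
def sum_streaming(path):
    total = 0
    with open(path) as f:
        for line in f:
            total += int(line)
    return total

result = sum_streaming(datafile)
print(result)  # 4999950000
```

Both functions return the same answer; only the second keeps its memory footprint independent of data volume, which is the property that lets compute-in-memory systems leave the bulk of the data on cheaper storage.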

The new, flexible hybrid database enables businesses to perform simultaneous OLAP and OLTP workloads, a combination traditional RDBMSes such as Oracle and MySQL typically handle in separate systems.

With separate processes and resource management for Hadoop and Spark, the Splice Machine RDBMS can ensure that large, complex analytical-processing queries do not overwhelm time-sensitive transactional ones. For example, users can set custom priority levels for analytical queries to ensure that important reports are not blocked behind a massive batch process that consumes all cluster resources.
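The effect of such priority levels can be sketched with a toy in-memory scheduler (a generic Python illustration; the query names and priority scheme are hypothetical, not Splice Machine's actual interface). A high-priority report submitted after a massive batch job still runs first when resources free up:

```python
import heapq
import itertools

class QueryScheduler:
    """Toy priority scheduler illustrating custom priority levels for
    queries (hypothetical illustration, not Splice Machine's API)."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order

    def submit(self, name, priority):
        # Lower number = higher priority.
        heapq.heappush(self._heap, (priority, next(self._counter), name))

    def next_query(self):
        # Pop the highest-priority (lowest-numbered) pending query.
        _, _, name = heapq.heappop(self._heap)
        return name

scheduler = QueryScheduler()
scheduler.submit("nightly-batch-aggregation", priority=9)   # huge OLAP batch
scheduler.submit("executive-dashboard-report", priority=1)  # important report
scheduler.submit("point-lookup-update", priority=0)         # OLTP query

order = [scheduler.next_query() for _ in range(3)]
print(order)
# ['point-lookup-update', 'executive-dashboard-report', 'nightly-batch-aggregation']
```

Even though the batch aggregation was submitted first, the time-sensitive transactional query and the important report are dispatched ahead of it, which is the behavior the article describes.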

"The result is performance between 10 and 20 times better than what's offered by traditional relational database management systems, at as little as one-fourth the cost," the company said.

"Splice Machine 2.0 is particularly well-suited for use in applications including digital marketing, operational data lakes, data warehouse offloads and the Internet of Things," it added.