Yale researchers create database-Hadoop hybrid

Yale University researchers on Monday released an open-source parallel database that they say combines the data-crunching prowess of a relational database with the scalability of next-generation technologies such as Hadoop and MapReduce.

HadoopDB was announced on Monday by Yale computer science professor Daniel J. Abadi on his blog.

Abadi and his students built HadoopDB from existing components: the open-source PostgreSQL database, the Apache Hadoop distributed data-processing framework, and Hive, the Hadoop data-warehouse project originally developed inside Facebook Inc.

Data processing is likewise split: part of it is done in Hadoop and part in "different PostgreSQL instances spread across many nodes in a shared-nothing cluster of machines," wrote Abadi.
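That split can be illustrated with a toy sketch (an illustration of the general pattern, not HadoopDB's actual code): each "node" owns an independent local database shard, the SQL portion of a query is pushed down to each node's engine, and a MapReduce-style step merges the partial results. Here SQLite stands in for PostgreSQL, and the table and data are invented for the example.

```python
# Toy sketch of the hybrid execution model: local SQL per node,
# MapReduce-style merge across nodes. SQLite stands in for PostgreSQL.
import sqlite3

def make_shard(rows):
    """Create one shared-nothing 'node': an independent local database."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    return conn

# Data is partitioned across nodes; no node sees another node's rows.
shards = [
    make_shard([("east", 10), ("west", 5)]),
    make_shard([("east", 7), ("west", 3)]),
]

# "Map" phase: push the SQL down to each node's local database engine.
partials = [
    shard.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"
    ).fetchall()
    for shard in shards
]

# "Reduce" phase: merge the per-node partial aggregates.
totals = {}
for partial in partials:
    for region, subtotal in partial:
        totals[region] = totals.get(region, 0) + subtotal

print(totals)  # {'east': 17, 'west': 8}
```

The point of the pattern is that the relational engine does what it is good at (indexed scans, local aggregation) on each node, while the MapReduce layer handles distribution and the final combine.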

"In essence, it is a hybrid of MapReduce and parallel DBMS technologies," he continued. But unlike already-developed projects and vendors such as Aster Data, Greenplum or Hive, HadoopDB "is not a hybrid simply at the language/interface level. It is a hybrid at a deeper, systems implementation level."

By combining the best of both approaches, HadoopDB can achieve the fault tolerance of massively parallel data infrastructures such as MapReduce, where a server failure has little effect on the overall grid. And it can perform complex analyses almost as quickly as existing commercial parallel databases, claims Abadi.

In an e-mail, Abadi said that his current research doesn't repudiate the previous paper, but comes to the strong conclusion that as databases continue to grow, systems such as HadoopDB will "scale much better than parallel databases."

Though built with PostgreSQL, HadoopDB can swap in other databases as its engine. Abadi's team has already used MySQL successfully, he said, and plans to try columnar databases such as Infobright and MonetDB to improve performance on analytical workloads.

"Although at this point this code is just an academic prototype and some ease-of-use features are yet to be implemented, I hope that this code will nonetheless be useful for your structured data analysis tasks!" Abadi said.