Monday, May 11, 2015

High Performance Computing Cluster (HPCC) is a distributed processing framework akin to Hadoop, except that it runs programs written in its own Domain Specific Language (DSL) called Enterprise Control Language (ECL). ECL is great, but occasionally you will want to call out to other languages for the heavy lifting. For example, you may want to leverage an NLP library written in Java.

Additionally, HPCC typically operates against data residing on filesystems akin to HDFS. And just like with HDFS, once you move beyond log file processing and static data snapshots, you quickly develop a desire for a database backend.

In fact, I'd say this is a general industry trend: HDFS->HBase, S3->Redshift, etc. Eventually, you want to decrease the latency of analytics (to near zero). To do this, you set up some sort of distributed database, capable of supporting both batch processing as well as data streaming/micro-batching. And you adopt an immutable/incremental approach to data storage, which allows you to collapse your infrastructure and stream data into the system as it is being analyzed (simplifying everything in the process).

But I digress. As a step in that direction...

We can leverage the Java Integration capabilities within HPCC to support User Defined Functions in Java. Likewise, we can leverage the same facilities to add additional backend storage mechanisms (e.g. Cassandra). More specifically, let's have a look at the streaming capabilities of HPCC/Java integration to get data out of an external source.

Let's first look at vanilla Java integration.

If you have an HPCC environment set up, the Java integration starts with the /opt/HPCCSystems/classes path. You can drop classes and jar files into that location, and the functions will be available from ECL. Follow this page for instructions.

If you run into issues, go through the troubleshooting guide on that page. The hardest part is getting HPCC to find your classes. For me, I ran into a nasty JDK version issue: by default, HPCC was picking up an old JDK version on my Ubuntu machine. Since it was using an old version, HPCC could not find the classes compiled with the "new" JDK (1.7), which resulted in the cryptic message, "Failed to resolve class name". If you run into this, pull the patch I submitted to fix this for Ubuntu.

Once you have that working, you will be able to call the Java from ECL using the following syntax:
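
For example, a minimal sketch (JavaCat.add1 stands in for whatever class and method you dropped into /opt/HPCCSystems/classes; the string after the colon is the standard JNI signature):

IMPORT java;

// classname.methodname:(JNI argument types)JNI return type
INTEGER add1(INTEGER val) := IMPORT(java, 'JavaCat.add1:(I)I');

OUTPUT(add1(10));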

This is pretty neat, and as the documentation suggests, you can return XML from the Java method if the data is complex. But what do you do if you have a TON of data, more than can reside in memory? Well, then you need Java streaming to HPCC. ;)

Instead of returning the actual data from the imported method, we return a Java Iterator. HPCC then uses the Iterator to construct a dataset. The following is an example Iterator.
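
Here is a minimal sketch (RowService, RowBean, and getNames are illustrative names, not part of the HPCC API):

// RowBean.java -- a bean whose public members map to the ECL record's fields
public class RowBean {
    public String name;
    public RowBean(String name) { this.name = name; }
}

// RowService.java -- exposes the method imported from ECL
import java.util.Arrays;
import java.util.Iterator;

public class RowService {
    // Rather than returning the data itself, return an Iterator;
    // HPCC drains it, turning each bean from next() into a dataset row.
    public static Iterator<RowBean> getNames() {
        return Arrays.asList(new RowBean("foo"), new RowBean("bar")).iterator();
    }
}

On the ECL side, the consuming code might look like this:

IMPORT java;

rowrec := RECORD
  STRING name;
END;

DATASET(rowrec) getNames() := IMPORT(java, 'RowService.getNames:()Ljava/util/Iterator;');

OUTPUT(getNames());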

After the IMPORT statement, we define a record type called rowrec. On the following line, we import the UDF and type the result as a DATASET that contains rowrecs. The names of the fields in rowrec must match the names of the member variables on the Java bean. HPCC will use the iterator, populating the dataset with the value returned by each call to next(). The final line of the ECL outputs the results.

I've committed all of the above code to a GitHub repository, with some instructions on getting it running. Have fun.

Stay tuned for more...
Imagine combining the Java streaming capabilities outlined here with the ability to stream data out of Cassandra as detailed in my previous post. The result is a powerful means of running batch analytics using Thor against data stored in Cassandra (with data locality!)... (possibly enabling ECL jobs against data ingested via live real-time event streams! =)

I'm working on a mechanism that will allow HPCC to access data stored in Cassandra with data locality, leveraging the Java streaming capabilities from HPCC (more on this in a followup post). More specifically, we want to allow people to write functions in ECL that will execute on all nodes in an HPCC cluster, using collocated Cassandra instances as the source of their data.

To do this, however, we need a couple of things. If you remember, Cassandra data is spread across multiple nodes using a token ring. Each host is assigned one or more slices of that token ring. Each slice has a token range, with a start and an end. Each partition/row key hashes into a token, which determines which node gets that data. (Typically, Murmur3 is used as the hashing algorithm.)
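
You can see this hashing at work directly in CQL; assuming the hypothetical test_table (with partition key id) used in the queries below:

SELECT token(id), id FROM test_table;

The token returned for each row is what determines which node owns it.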

Thus, to scan over the data that is local to a node, we first need to determine the token ranges owned by that node, and then page over the data in each of those ranges.

First, let's look at how we determine the ranges for the local node. You could query the system tables directly (SELECT * FROM system.local), but if you are using a connection pool (via the Java driver), it is unclear which node will receive that query. You could also query the information from the system.peers table, using your IP address in the WHERE clause, but you may not want to configure that IP address on each node. Instead, I was able to lean on the CQL java-driver to determine the local host:
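
Roughly like this (a minimal sketch against the DataStax java-driver 2.1 API; the class name LocalTokenRanges and its method names are illustrative, not part of the driver):

import java.net.NetworkInterface;
import java.util.HashSet;
import java.util.Set;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.Metadata;
import com.datastax.driver.core.TokenRange;

public class LocalTokenRanges {

    // Find the Host whose address is bound to one of this machine's
    // network interfaces -- that Host is the local node.
    public static Set<TokenRange> localRanges(Cluster cluster, String keyspace) throws Exception {
        Metadata metadata = cluster.getMetadata();
        Host local = null;
        for (Host host : metadata.getAllHosts()) {
            if (NetworkInterface.getByInetAddress(host.getAddress()) != null) {
                local = host;
                break;
            }
        }
        // Collect the ranges replicated to the local node, unwrapping any
        // range that wraps around the end of the ring (see below).
        Set<TokenRange> ranges = new HashSet<TokenRange>();
        for (TokenRange range : metadata.getTokenRanges(keyspace, local)) {
            ranges.addAll(unwrapTokenRanges(range));
        }
        return ranges;
    }

    private static Set<TokenRange> unwrapTokenRanges(TokenRange range) {
        return new HashSet<TokenRange>(range.unwrap());
    }
}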

The code is very straightforward, with the exception of the call to unwrapTokenRanges. When you ask for the token ranges for a host, the driver gives you the ranges as stored, but CQL does NOT handle wrapped ranges. For example, let's assume we had a global token space of [-16...16]. Our host may have token ranges of [-10...-3], [12...14] and [15...-2]. For the first two, you can issue the following CQL queries (notice that token ranges are start-exclusive and end-inclusive):

SELECT token(id), id, name FROM test_table WHERE token(id)>-10 AND token(id)<=-3;
SELECT token(id), id, name FROM test_table WHERE token(id)>12 AND token(id)<=14;

However, you CANNOT issue the following CQL:

SELECT token(id), id, name FROM test_table WHERE token(id)>15 AND token(id)<=-2;

That range wraps around the end of the ring. To accommodate this, the java-driver provides a convenience method called unwrap(), which splits a wrapping range into a set of token ranges that are usable in CQL queries.
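
In the toy token space above, unwrapping [15...-2] effectively gives you pieces you can cover with two separate queries:

SELECT token(id), id, name FROM test_table WHERE token(id)>15;
SELECT token(id), id, name FROM test_table WHERE token(id)<=-2;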

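With the unwrapped ranges in hand, the per-range scan might look like this (again a sketch against the hypothetical test_table; the driver fetches additional pages transparently as you iterate the ResultSet):

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.TokenRange;

public class RangeScanner {

    // Scan only the slice of test_table whose partition keys hash into
    // the given (start-exclusive, end-inclusive) token range.
    public static void scanRange(Session session, TokenRange range) {
        Statement stmt = new SimpleStatement(
                "SELECT token(id), id, name FROM test_table" +
                " WHERE token(id) > ? AND token(id) <= ?",
                range.getStart().getValue(), range.getEnd().getValue());
        stmt.setFetchSize(1000); // page size, not a row limit
        ResultSet rows = session.execute(stmt);
        for (Row row : rows) { // paging happens under the covers
            System.out.println(row.getLong(0) + " : " + row.getString("name"));
        }
    }
}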
The above code issues a select statement and pages over the results, scanning the portion of the table that falls within the token range.

If we throw a loop on top of all of this to go through each token range and scan it, we have a means of executing a distributed processing job that uses only the local portions of the Cassandra tables as input.
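
Tying it together, the whole local scan is just a loop over the unwrapped ranges (same hypothetical helpers, and a hypothetical test_keyspace):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.TokenRange;

public class LocalScan {
    public static void main(String[] args) throws Exception {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("test_keyspace");
        // Each iteration scans one slice of the table owned by this node;
        // together the slices cover exactly the local portion of the data.
        for (TokenRange range : LocalTokenRanges.localRanges(cluster, "test_keyspace")) {
            RangeScanner.scanRange(session, range);
        }
        cluster.close();
    }
}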