
H2O.ai Blog

November 8th, 2014

Hacking Algorithms into H2O: KMeans

By: H2O.ai

This is a presentation of hacking a simple algorithm into the new dev-friendly branch of H2O, h2o-dev.

This is one of three “Hacking Algorithms into H2O” blogs. All blogs start out the same – getting the h2o-dev code and building it. They are the same until the section titled Building Our Algorithm: Copying from the Example, and then the content is customized for each algorithm. This blog describes the algorithm K-Means.

What is H2O-dev?

As I mentioned, H2O-dev is a dev-friendly version of H2O, and is soon to be our only version. What does “dev-friendly” mean? It means:

Fully integrated into IdeaJ: You can right-click debug-as-junit any of the junit tests and they will Do The Right Thing in your IDE.

Fully gradle-ized and maven-ized: Running gradlew build will download all dependencies, build the project, and run the tests.

These are all external points. However, the code has undergone a major revision internally as well. What worked well was left alone, but what was… gruesome… has been rewritten. In particular, it’s now much easier to write the “glue” wrappers around algorithms and get the full GUI, R, REST and JSON support on your new algorithm. You still have to write the math, of course, but there’s not nearly so much pain on the top-level integration.

At some point, we’ll flip the usual H2O github repo to have h2o-dev as our main repo, but at the moment, h2o-dev does not contain all the functionality in H2O, so it is in its own repo.

Building H2O-dev

I assume you are familiar with basic Java development and how github repos work – so we’ll start with a clean github repo of h2o-dev:

But faster yet will be IDE-based builds. There’s also a functioning Makefile setup for old-schoolers like me; it’s a lot faster than gradle for incremental builds.

While that build is going, let’s look at what we got. There are 4 top-level directories of interest here:

h2o-core: The core H2O system – including clustering, clouding, distributed execution, distributed Key-Value store, the web, REST and JSON interfaces. We’ll be looking at the code and javadocs in here – there are a lot of useful utilities – but not changing it.

h2o-algos: Where most of the algorithms lie, including GLM and Deep Learning. We’ll be copying the Example algorithm and turning it into a K-Means algorithm.

h2o-web: The web interface and JavaScript. We will use jar files from here in our project, but probably not need to look at the code.

h2o-app: A tiny sample Application which drives h2o-core and h2o-algos, including the one we hack in. We’ll add one line here to teach H2O about our new algorithm.

Within each top-level directory, there is a fairly straightforward maven’ized directory structure:

src/main/java - Java source code
src/test/java - Java test code

In the Java directories, we further use water directories to hold core H2O functionality and hex directories to hold algorithms and math:

Running H2O-dev Tests in an IDE

Then I switched to IDEAJ from my command window. I launched IDEAJ, selected “Open Project”, navigated to the h2o-dev/ directory and clicked Open. After IDEAJ opened, I clicked the Make project button (or Build/Make Project or ctrl-F9) and after a few seconds, IDEAJ reports the project is built (with a few dozen warnings).

Let’s use IDEAJ to run the JUnit test for the Example algorithm I mentioned above. Navigate to the ExampleTest.java file. I used a quick double-press of Shift to bring up the generic project search, then typed some of ExampleTest.java and selected it from the picker. Inside the one obvious testIris() function, right-click and select Debug testIris(). The testIris code should run, pass pretty quickly, and generate some output:

Ok, that’s a pretty big pile of output – but buried in it is some cool stuff we’ll need to be able to pick out later, so let’s break it down a little.

The yellow stuff is H2O booting up a cluster of 1 JVM. H2O dumps out a bunch of stuff to diagnose initial cluster setup problems, including the git build version info, memory assigned to the JVM, and the network ports found and selected for cluster communication. This section ends with the line:

This tells us we formed a Cloud of size 1: one JVM will be running our program, and its IP address is given.

The lightblue stuff is our ExampleTest JUnit test starting up and loading some test data (the venerable iris dataset with headers, stored in the H2O-dev repo’s smalldata/iris/ directory). The printout includes some basic stats about the loaded data (column header names, min/max values, compression ratios). Included in this output are the lines Start Parse and Done Parse. These come directly from the System.out.println("Start Parse") lines we can see in the ExampleTest.java code.

Finally, the green stuff is our Example algorithm running on the test data. It is a very simple algorithm (finds the max per column, and does it again and again, once per requested _max_iters).

Building Our Algorithm: Copying from the Example

Now let’s get our own algorithm framework in place to start playing with. Because H2O-dev already has a KMeans algorithm, that name is taken… but we want our own. (Besides just doing it ourselves, there are some cool extensions to KMeans we can add, and sometimes it’s easier to start from a clean[er] slate.)

So this algorithm is called KMeans2 (not too creative, I know). I cloned the main code and model from the h2o-algos/src/main/java/hex/example/ directory into h2o-algos/src/main/java/hex/kmeans2/, and also the test from the h2o-algos/src/test/java/hex/example/ directory into h2o-algos/src/test/java/hex/kmeans2/.

Then I copied the three GUI/REST files in h2o-algos/src/main/java/hex/schemas with Example in the name (ExampleHandler.java, ExampleModelV2.java, ExampleV2) to their KMeans2* variants.

I also copied the h2o-algos/src/main/java/hex/api/ExampleBuilderHandler.java file to its KMeans2 variant. Finally I renamed the files and file contents from Example to KMeans2.

I also dove into h2o-app/src/main/java/water/H2OApp.java and copied the two Example lines and made their KMeans2 variants. Because I’m old-school, I did this with a combination of shell hacking and Emacs; about 5 minutes all told.

At this point, back in IDEAJ, I navigated to KMeans2Test.java, right-clicked debug-test testIris again – and was rewarded with my KMeans2 clone running a basic test. Not a very good KMeans, but definitely a start.

What’s in All Those Files?

What’s in all those files? Mainly there is a Model and a ModelBuilder, and then some support files.

A model is a mathematical representation of the world, an effort to approximate some interesting fact with numbers. It is a static, concrete, unchanging thing, completely defined by the rules (algorithm) and data used to make it.

A model-builder builds a model; it is transient and active. It exists as long as we are actively trying to make a model, and is thrown away once we have the model in hand.

In our case, K-Means is the algorithm – so that belongs in the KMeans2ModelBuilder.java file, and the result is a set of clusters (a model), so that belongs in the KMeans2Model.java file.

We also split Schemas from Models – to isolate slow-moving external APIs from rapidly-moving internal APIs: as a Java dev you can hack the guts of K-Means to your heart’s content – including the inputs and outputs – as long as the externally facing V2 schemas do not change. If you want to report new stuff or take new parameters, you can make a new V3 schema – which is not compatible with V2 – for the new stuff. Old external V2 users will not be affected by your changes (you’ll still have to make the correct mappings in the V2 schema code from your V3 algorithm).

One other important hack: K-Means is an unsupervised algorithm – no training data (no “response”) tells it what the results “should” be. So we need to hack the word Supervised out of all the various class names it appears in. After this is done, your KMeans2Test probably fails to compile, because it is trying to set the response column name in the test, and unsupervised models do not get a response to train with. Just delete the line for now:

parms._response_column = "class";

At this point we can run our test code again (still finding the max-per-column).

The KMeans2 Model

The KMeans2 model, in the file KMeans2Model.java, should contain what we expect out of K-Means: a set of clusters. We’ll represent a single cluster as an N-dimensional point (an array of doubles). For our K clusters, this will be:

public double _clusters[/*K*/][/*N*/]; // Our K clusters, each an N-dimensional point

Inside the KMeans2Model class, there is a class for the model’s output: class KMeans2Output. We’ll put our clusters there. The various support classes and files will make sure our model’s output appears in the correct REST and JSON responses and gets pretty-printed in the GUI. There is also the left-over _maxs array from the old Example code; we can delete that now.

To help assay the goodness of our model, we should also report some extra facts about the training results. The obvious thing to report is the Mean Squared Error, or the average squared error each training point has against its chosen cluster:

public double _mse; // Mean Squared Error of the training data

And finally a quick report on the effort used to train: the number of iterations training actually took. K-Means runs in iterations, improving with each iteration. The algorithm typically stops when the model quits improving; we report how many iterations it took here:
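The declaration itself is presumably a one-liner like the following (the field name _iters matches the model._output._iters used in the logging code later; the int type is my assumption):

```java
public int _iters; // Number of iterations the training actually took
```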

Now, let’s turn to the input to our model-building process. These are stored in the class KMeans2Model.KMeans2Parameters. We already inherit an input training dataset (returned with train()), possibly a validation dataset (valid()), and some other helpers (e.g. which columns to ignore if we do an expensive scoring step with each iteration). For now, we can ignore everything except the input dataset from train().

However, we want one more parameter for K-Means: K, the number of clusters. Define it next to the left-over _max_iters from the old Example code (which we might as well keep, since that’s a useful stopping condition for K-Means):
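Something like the following (the field name _K matches how the test code sets parms._K below; the int type is my assumption):

```java
public int _K; // Number of clusters requested
```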

A bit on field naming: I always use a leading underscore _ before all internal field names – it lets me know at a glance whether I’m looking at a field name (stateful, can be changed by other threads) or a function parameter (functional, private). The distinction becomes interesting when you are sorting through large piles of code. There’s no other fundamental reason to use (or not) the underscores. External APIs, on the other hand, generally do not care for leading underscores. Our JSON output and REST URLs will strip the underscores from these fields.

To make the GUI functional, I need to add my new K field to the external input Schema in h2o-algos/src/main/java/hex/schemas/KMeans2V2.java:
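The fragment presumably resembled the following; h2o-dev schemas expose fields through an @API annotation, though the exact help text here is my guess (note the leading underscore is dropped on the external field):

```java
// In KMeans2V2's parameters schema: expose K to the GUI/REST/JSON layer.
@API(help = "Number of clusters K")
public int K;
```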

The KMeans2 Model Builder

Let’s turn to the K-Means model builder, which includes some boilerplate we inherited from the old Example code, and a place to put our real algorithm. There is a basic KMeans2 constructor which calls init:

public KMeans2( ... ) { super("KMeans2",parms); init(false); }

In this case, init(false) means “only do cheap stuff in init”. Init is defined a little ways down and does basic (cheap) argument checking. init(false) is called every time the mouse clicks in the GUI and is used to let the front-end sanity-check parameters as people type. In this case “only do cheap stuff” really means “only do stuff you don’t mind waiting on while clicking in the browser”. No running K-Means in the init() call!

Speaking of the init() call, the one we got from the old Example code limits our _max_iters to between 1 and 10 million. Let’s add some lines to check that K is sane:
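The check presumably mirrored the Example's range test on _max_iters. Here is a self-contained sketch of the logic (the bounds are my choice; in the real init() you would record the problem through the builder's validation machinery rather than throw):

```java
// Sanity check on K: one cluster is degenerate, and an absurdly large K
// is almost certainly a typo. A sketch only - real h2o-dev builders
// report validation errors instead of throwing.
public class CheckK {
  public static void checkK(int K) {
    if (K < 2 || K > 10_000_000)
      throw new IllegalArgumentException("_K must be between 2 and 1e7, got " + K);
  }
}
```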

Immediately when testing the code, I get a failure because the KMeans2Test.java code does not set K and the default is zero. I’ll set K to 3 in the test code:

parms._K = 3;

In the KMeans2.java file there is a trainModel call that is used when you really want to start running K-Means (as opposed to just checking arguments). In our case, the old boilerplate starts a KMeans2Driver in a background thread. Not required, but for any long-running algorithm, it is nice to have it run in the background. We’ll get progress reports from the GUI (and from REST/JSON) with the option to cancel the job, or inspect partial results as the model builds.

The class KMeans2Driver holds the algorithmic meat. The compute2() call will be called by a background Fork/Join worker thread to drive all the hard work. Again, there is some brief boilerplate we need to go over.

First up: we need to record Keys stored in H2O’s DKV (Distributed Key/Value store), so a later cleanup, Scope.exit();, will wipe out any temp keys. When working with Big Data, we have to be careful to clean up after ourselves – or we can swamp memory with Big Temps.

Scope.enter();

Next, we need to prevent the input datasets from being manipulated by otherthreads during the model-build process:

_parms.lock_frames(KMeans2.this);

Locking prevents situations like accidentally deleting or loading a new dataset with the same name while K-Means is running. Like the Scope.exit() above, we will unlock in a finally block. While it might be nice to use Java locking, or even JDK 5.0 locks, we need a distributed lock, which is not provided by JDK 5.0. Note that H2O locks are strictly cooperative – we cannot enforce locking at the JVM level like the JVM does.

Next, we make an instance of our model object (with no clusters yet) and place it in the DKV, locked (e.g., to prevent another user from overwriting our model-in-progress with an unrelated model).

Also, near the file bottom is a leftover class Max from the old Example code. Might as well nuke it now.

The KMeans2 Main Algorithm

Finally we get to where the Math is!

K-Means starts with some clusters, generally picked from the dataset population, and then optimizes the cluster centers and cluster assignments. The easiest (but not the best!) way to pick clusters is just to pick points at (pseudo) random. So ahead of our iteration/main loop, let’s pick some clusters.
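A minimal self-contained sketch of that simplest initialization – pick K data rows at (pseudo) random as the starting centers (class and method names are mine; the real code works over H2O Frames, not raw arrays):

```java
import java.util.Random;

// Simplest cluster initialization: pick K rows at random as starting
// centers. Not the best scheme (k-means++ and friends do better), but
// it is the easy start described in the text. Duplicate picks are
// possible with this naive approach.
public class InitClusters {
  public static double[][] pickRandom(double[][] data, int K, long seed) {
    Random rnd = new Random(seed);
    double[][] clusters = new double[K][];
    for (int k = 0; k < K; k++)
      clusters[k] = data[rnd.nextInt(data.length)].clone();
    return clusters;
  }
}
```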

My KMeans2 now has a leftover loop from the old Example code running up to some max iteration count. This sounds like a good start to K-Means – we’ll need several stopping conditions, and max-iterations is one of them.

I removed the “compute Max” code from the old Example code in the loop body. Next up, I see code to record any new model (e.g. clusters, mse) and save the results back into the DKV, bump the progress bar, and log a little bit of progress:

// Fill in the model
model._output._clusters = ????? // we need to figure these out
model.update(_key); // Update model in K/V store
update(1); // One unit of work in the GUI progress bar
StringBuilder sb = new StringBuilder();
sb.append("KMeans2: iter: ").append(model._output._iters);
Log.info(sb);

The KMeans2 Main Loop

And now we need to figure out what to do in our main loop. Somewhere between the loop-top isRunning check and the model.update() call, we need to compute something to update our model with! This is the meat of K-Means – for each point, assign it to the nearest cluster center, then compute new cluster centers from the assigned points, and iterate until the clusters quit moving.

Anything that starts out with the words “for each point” when you have a billion points needs to run in parallel and scale out to have a chance of completing fast – and this is exactly what H2O is built for! So let’s write code that runs scale-out for-each-point… and the easiest way to do that is with an H2O Map/Reduce job – an instance of MRTask. For K-Means, this is an instance of Lloyd’s basic algorithm. We’ll call it from the main loop like this, and define it below (extra lines included so you can see how it fits):

Basically, we just called some not-yet-defined Lloyds code, computed some cluster centers by computing the average point from the points in the new cluster, and copied the results into our model. I also printed out the Mean Squared Error and row counts, so we can watch the progress over time. Finally we end with another stopping condition: stop if the latest model is really not much better than the last model. Now class Lloyds can be coded as an inner class to the KMeans2Driver class:
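The serial logic that the Lloyds MRTask distributes can be sketched as a self-contained class (illustration only: the field names _sums and _rows follow the discussion of the reduce step, everything else – class name, method shape, plain arrays instead of Chunks – is mine):

```java
import java.util.Arrays;

// One Lloyd's iteration, serially: assign each row to its nearest
// cluster, accumulating per-cluster coordinate sums and row counts.
// In h2o-dev this runs as an MRTask: map() accumulates _sums/_rows
// over its Chunks, and reduce() adds two partial results element-wise.
public class LloydsSketch {
  double[][] _sums; // per-cluster sum of assigned points (the "results")
  long[]     _rows; // per-cluster count of assigned points

  void run(double[][] data, double[][] clusters) {
    int K = clusters.length, N = clusters[0].length;
    _sums = new double[K][N];
    _rows = new long[K];
    for (double[] point : data) {
      int best = 0;
      double bestDist = Double.MAX_VALUE;
      for (int k = 0; k < K; k++) {       // nearest center by squared distance
        double dist = 0;
        for (int i = 0; i < N; i++) {
          double d = point[i] - clusters[k][i];
          dist += d * d;
        }
        if (dist < bestDist) { bestDist = dist; best = k; }
      }
      for (int i = 0; i < N; i++) _sums[best][i] += point[i];
      _rows[best]++;
    }
  }

  // New cluster centers: the average of the points assigned to each cluster.
  double[][] centers() {
    double[][] out = new double[_sums.length][];
    for (int k = 0; k < _sums.length; k++) {
      out[k] = Arrays.copyOf(_sums[k], _sums[k].length);
      for (int i = 0; i < out[k].length; i++)
        out[k][i] /= Math.max(1, _rows[k]); // guard against empty clusters
    }
    return out;
  }
}
```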

A Quick H2O Map/Reduce Diversion

This isn’t your Hadoop-Daddy’s Map/Reduce. This is an in-memory super-fast map-reduce… where “super-fast” generally means “memory bandwidth limited”, often 1000x faster than the usual hadoop-variant – MRTasks can often touch a gigabyte of data in a millisecond, or a terabyte in a second (depending on how much hardware is in your cluster – more hardware is faster for the same amount of data!)

The map() call takes data in Chunks – where each Chunk is basically a small array-like slice of the Big Data. Data in Chunks is accessed with basic at0 and set0 calls (vs accessing data in Vecs with at and set). The output of a map() is stored in the Lloyds object itself, as a Plain Olde Java Object (POJO). Each map() call has private access to its own fields and Chunks, which implies there are lots of instances of Lloyds objects scattered all over the cluster (one such instance per Chunk of data… well, actually one instance per call to map(), but each map call is handed an aligned set of Chunks, one per feature or column in the dataset).

Since there are lots of little Lloyds running about, their results need to be combined. That’s what reduce does – combine two Lloyds into one. Typically, you can do this by adding similar fields together – often array elements are added side-by-side, similar to a saxpy operation.

This also means that any objects created or initialized in the constructor are copied about and shared – generally read-only – in all the little Lloyds that are running about. Objects made in the map() calls are private to that instance – and lots are getting made and must be reduced. Hence we do not set _sums and _rows in the constructor – these are our results – they will be created new and empty in the map call instead.

All code here is written in a single-threaded style, even as it runs in parallel and distributed. H2O handles all the synchronization issues.

A Quick Helper

A common op is to compute the distance between two points. We’ll compute it as the squared Euclidean distance (squared so as to avoid an expensive square-root operation):
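The helper presumably looked something like this – a minimal self-contained sketch (class and method names are my choice). Since we only compare distances to find the nearest cluster, skipping the square root changes no outcome:

```java
// Squared Euclidean distance between two N-dimensional points.
// Squared to avoid the square-root: for nearest-cluster comparisons
// the ordering is identical either way.
public class Distance {
  public static double distance(double[] a, double[] b) {
    double sum = 0;
    for (int i = 0; i < a.length; i++) {
      double d = a[i] - b[i];
      sum += d * d;
    }
    return sum;
  }
}
```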