Apache Jena Elephas

Apache Jena Elephas is a set of libraries that provide basic building blocks for writing Apache Hadoop based applications that work with RDF data.

Historically there has been no serious support for RDF within the Hadoop ecosystem, and what support has existed has
often been limited and task specific. These libraries aim to be as generic as possible, providing the necessary
infrastructure that lets developers create their application specific logic without worrying about the
underlying plumbing.

Apache Jena Elephas is published as a set of Maven modules via its Maven artifacts. The source for these libraries
may be downloaded as part of the source distribution. These modules are built against the Hadoop 2.x APIs; no
backwards compatibility with the 1.x APIs is provided.

The core aim of these libraries is to provide the basic building blocks that allow users to start writing Hadoop applications that
work with RDF. They are mostly fairly low level components, but they are designed to be used as building blocks so that users and developers
can focus on actual application logic rather than on the low level plumbing.

Firstly, at the lowest level, they provide Writable implementations that allow the basic RDF primitives - nodes, triples and quads -
to be represented and exchanged within Hadoop applications. This support is provided by the Common library.
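For instance, a Jena Triple can be wrapped so that it is usable as a Hadoop key or value. The following is a minimal sketch; the class and package names (`TripleWritable` in `org.apache.jena.hadoop.rdf.types`) follow the Common library's types package:

```java
import org.apache.jena.graph.NodeFactory;
import org.apache.jena.graph.Triple;
import org.apache.jena.hadoop.rdf.types.TripleWritable;

public class TripleWritableExample {
    public static void main(String[] args) {
        // Build a simple triple with the Jena core API
        Triple t = Triple.create(
                NodeFactory.createURI("http://example.org/subject"),
                NodeFactory.createURI("http://example.org/predicate"),
                NodeFactory.createLiteral("object"));

        // Wrap it so it can be exchanged between Hadoop tasks via the
        // standard Writable serialisation machinery
        TripleWritable writable = new TripleWritable(t);
        System.out.println(writable.get());
    }
}
```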

Secondly, they provide support for all the RDF serialisations which Jena supports as both input and output formats, subject to the specific
limitations of those serialisations. This support is provided by the IO library in the form of standard InputFormat and
OutputFormat implementations.

There is also a set of basic Mapper and Reducer implementations provided by the Map/Reduce library, which contains code
that enables various common Hadoop tasks such as counting, filtering, splitting and grouping to be carried out on RDF data. Typically these
will be used as a starting point for building more complex RDF processing applications.

Finally there is an RDF Stats Demo, a runnable Hadoop job JAR file that demonstrates using these libraries to calculate
a number of basic statistics over arbitrary RDF data.

To get started you will need to add the relevant dependencies to your project; the exact dependencies necessary will depend
on what you are trying to do. Typically you will need at least the IO library and possibly the Map/Reduce library:
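For example, assuming the standard Elephas module artifact IDs (replace `x.y.z` with the Jena version you are targeting):

```xml
<!-- Elephas Dependencies -->
<dependency>
  <groupId>org.apache.jena</groupId>
  <artifactId>jena-elephas-io</artifactId>
  <version>x.y.z</version>
</dependency>
<dependency>
  <groupId>org.apache.jena</groupId>
  <artifactId>jena-elephas-mapreduce</artifactId>
  <version>x.y.z</version>
</dependency>
```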

Our libraries depend on the relevant Hadoop libraries, but since these are typically provided by the Hadoop cluster those dependencies are marked as provided and are thus not transitive. This means that you will typically also need to add the following additional dependencies:

```xml
<!-- Hadoop Dependencies -->
<!-- Note these will be provided on the Hadoop cluster hence the provided scope -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.6.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-common</artifactId>
  <version>2.6.0</version>
  <scope>provided</scope>
</dependency>
```

You can then write code to launch a Map/Reduce job that works with RDF. For example, let us consider an RDF variation of the classic Hadoop
word count example. In this example, which we call node count, we do the following:
- Take in some RDF triples
- Split them up into their constituent nodes i.e. the URIs, Blank Nodes & Literals
- Assign an initial count of one to each node
- Group by node and sum up the counts
- Output the nodes and their usage counts

We will start with our Mapper implementation. This simply takes in a triple and splits it into its constituent nodes, then
outputs each node with an initial count of 1:
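A sketch of such a mapper follows. The `NodeCountMapper` class name is our own; the `TripleWritable` and `NodeWritable` types are assumed from the Common library:

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.jena.graph.Triple;
import org.apache.jena.hadoop.rdf.types.NodeWritable;
import org.apache.jena.hadoop.rdf.types.TripleWritable;

public class NodeCountMapper
        extends Mapper<LongWritable, TripleWritable, NodeWritable, LongWritable> {

    private static final LongWritable ONE = new LongWritable(1);

    @Override
    protected void map(LongWritable key, TripleWritable value, Context context)
            throws IOException, InterruptedException {
        // Split the triple into its constituent nodes and emit each one
        // with an initial count of 1
        Triple t = value.get();
        context.write(new NodeWritable(t.getSubject()), ONE);
        context.write(new NodeWritable(t.getPredicate()), ONE);
        context.write(new NodeWritable(t.getObject()), ONE);
    }
}
```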

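A driver that wires the node count job together might look like the following sketch. `NodeCountMapper` and `NodeCountReducer` are hypothetical names for the mapper and a standard sum reducer; the format and type class names are assumptions based on the IO and Common libraries:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.jena.hadoop.rdf.io.input.TriplesInputFormat;
import org.apache.jena.hadoop.rdf.io.output.ntriples.NTriplesNodeOutputFormat;
import org.apache.jena.hadoop.rdf.types.NodeWritable;

public class NodeCountDriver {

    // A standard sum reducer, exactly as in word count but keyed on nodes
    public static class NodeCountReducer
            extends Reducer<NodeWritable, LongWritable, NodeWritable, LongWritable> {
        @Override
        protected void reduce(NodeWritable key, Iterable<LongWritable> values,
                Context context) throws IOException, InterruptedException {
            long total = 0;
            for (LongWritable v : values) {
                total += v.get();
            }
            context.write(key, new LongWritable(total));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration(true);
        Job job = Job.getInstance(config);
        job.setJarByClass(NodeCountDriver.class);
        job.setJobName("RDF Triples Node Usage Count");

        // Our mapper and reducer
        job.setMapperClass(NodeCountMapper.class);
        job.setReducerClass(NodeCountReducer.class);

        // Input and output formats plus the key/value types
        job.setInputFormatClass(TriplesInputFormat.class);
        job.setOutputFormatClass(NTriplesNodeOutputFormat.class);
        job.setMapOutputKeyClass(NodeWritable.class);
        job.setMapOutputValueClass(LongWritable.class);
        job.setOutputKeyClass(NodeWritable.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```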
Configuring this job is really no different from configuring any other Hadoop job: we simply point at the relevant input and output formats and provide our mapper and reducer. Note that here we use the TriplesInputFormat, which can handle RDF in any Jena supported format; if you know your RDF is in a specific format it is usually more efficient to use a more specific input format. Please see the IO page for more detail on the available input formats and the differences between them.
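For example, if the input is known to be N-Triples, the auto-detecting format can be swapped for a syntax-specific one. This is a sketch; the class name is assumed from the IO library's per-syntax packages:

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.jena.hadoop.rdf.io.input.ntriples.NTriplesInputFormat;

public class NTriplesJobConfig {
    // Configure the job for N-Triples specifically rather than relying on
    // the generic TriplesInputFormat's format auto-detection
    public static void useNTriples(Job job) {
        job.setInputFormatClass(NTriplesInputFormat.class);
    }
}
```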

We recommend that you next take a look at our RDF Stats Demo which shows how to do some more complex computations by chaining multiple jobs together.