Running with Cloudera and Hortonworks

Spark can run against all versions of Cloudera’s Distribution Including Apache Hadoop (CDH) and
the Hortonworks Data Platform (HDP). There are a few things to keep in mind when using Spark
with these distributions:

Compile-time Hadoop Version

The tables below list the corresponding SPARK_HADOOP_VERSION code for each CDH/HDP release. Note that
some Hadoop releases are binary compatible across client versions. This means the pre-built Spark
distribution may “just work” without you needing to compile. That said, we recommend compiling with
the exact Hadoop version you are running to avoid compatibility errors.
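For example, building against a CDH 4 MapReduce v1 release would pass the version code from the tables below to the build. This is a sketch, assuming the sbt-based assembly build described in Spark's building instructions; the cdh4.2.0 patch level is illustrative, so substitute your cluster's actual release:

    # Sketch: compile Spark against a specific CDH Hadoop client version.
    # The cdh4.2.0 patch level is illustrative; use your cluster's release.
    SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly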

CDH Releases

Release                  Version code
CDH 4.X.X (YARN mode)    2.0.0-cdh4.X.X
CDH 4.X.X                2.0.0-mr1-cdh4.X.X
CDH 3u6                  0.20.2-cdh3u6
CDH 3u5                  0.20.2-cdh3u5
CDH 3u4                  0.20.2-cdh3u4

HDP Releases

Release    Version code
HDP 1.3    1.2.0
HDP 1.2    1.1.2
HDP 1.1    1.0.3
HDP 1.0    1.0.3
HDP 2.0    2.2.0

Linking Applications to the Hadoop Version

In addition to compiling Spark itself against the right version, you need to add a Maven dependency on that
version of hadoop-client to any Spark applications you run, so they can also talk to the HDFS version
on the cluster. If you are using CDH, you also need to add the Cloudera Maven repository.
This looks as follows in SBT:

    libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "<version>"

    // If using CDH, also add the Cloudera repo
    resolvers += "Cloudera Repository" at "https://repository.cloudera.com/artifactory/cloudera-repos/"
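As a minimal sketch of how an application linked this way talks to HDFS, the snippet below counts the lines of a file stored on the cluster. The master URL, namenode host, port, and path are placeholders rather than values from this guide:

    import org.apache.spark.SparkContext

    object HdfsLineCount {
      def main(args: Array[String]) {
        // Placeholder master URL and HDFS path; replace with your cluster's values.
        val sc = new SparkContext("spark://master:7077", "HdfsLineCount")
        val lines = sc.textFile("hdfs://namenode:8020/user/hadoop/input.txt")
        println("Line count: " + lines.count())
        sc.stop()
      }
    }

If the hadoop-client version here does not match the HDFS version on the cluster, the read can fail with protocol or RPC version mismatch errors, which is why the dependency must track the cluster's release.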

Where to Run Spark

Spark can be deployed in a few ways relative to an existing Hadoop cluster:

- Using a dedicated set of Spark nodes in your cluster. These nodes should be co-located with your Hadoop installation.

- Running on the same nodes as an existing Hadoop installation, with a fixed amount of memory and cores dedicated to Spark on each node (see the sketch after this list).

- Running Spark alongside Hadoop using a cluster resource manager, such as YARN or Mesos.

These options are the same whether you use CDH or HDP.
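For the second option, one way to carve out a fixed slice of each node is through the standalone-mode worker settings. This is a sketch, assuming Spark's standalone deploy mode and spark-env.sh; the values are illustrative:

    # In $SPARK_HOME/conf/spark-env.sh on each node (standalone mode).
    # Values are illustrative; size them to what Hadoop can spare.
    export SPARK_WORKER_CORES=4     # cores dedicated to Spark on this node
    export SPARK_WORKER_MEMORY=8g   # memory dedicated to Spark on this node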

Inheriting Cluster Configuration

If you plan to read and write from HDFS using Spark, there are two Hadoop configuration files that
should be included on Spark’s classpath:

- hdfs-site.xml, which provides default behaviors for the HDFS client.

- core-site.xml, which sets the default filesystem name.

The location of these configuration files varies across CDH and HDP versions, but
a common location is /etc/hadoop/conf. Some tools, such as Cloudera Manager, create
configurations on the fly, but offer a mechanism to download copies of them.

There are a few ways to make these files visible to Spark:

- You can copy these files into $SPARK_HOME/conf, and they will be included in Spark's
classpath automatically.

- If you are running Spark on the same nodes as Hadoop and your distribution includes both
hdfs-site.xml and core-site.xml in the same directory, you can set HADOOP_CONF_DIR
in $SPARK_HOME/conf/spark-env.sh to that directory.
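For example, on a node where the client configuration lives under /etc/hadoop/conf (the common location mentioned above), the setting would look roughly like this:

    # In $SPARK_HOME/conf/spark-env.sh; adjust the path for your distribution.
    export HADOOP_CONF_DIR=/etc/hadoop/conf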
