Quick Start

This tutorial provides a quick introduction to using Spark. We will first introduce the API through Spark’s
interactive shell (in Python or Scala),
then show how to write applications in Java, Scala, and Python.
See the programming guide for a more complete reference.

To follow along with this guide, first download a packaged release of Spark from the
Spark website. Since we won’t be using HDFS,
you can download a package for any version of Hadoop.

Interactive Analysis with the Spark Shell

Basics

Spark’s shell provides a simple way to learn the API, as well as a powerful tool to analyze data interactively.
It is available in either Scala (which runs on the Java VM and is thus a good way to use existing Java libraries)
or Python. Start it by running the following in the Spark directory:

./bin/spark-shell

Spark’s primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from Hadoop InputFormats (such as HDFS files) or by transforming other RDDs. Let’s make a new RDD from the text of the README file in the Spark source directory:

scala> val textFile = sc.textFile("README.md")

We can then run transformations and actions on this RDD. For example, to count how many lines contain "Spark":

scala> textFile.filter(line => line.contains("Spark")).count() // How many lines contain "Spark"?
res3: Long = 15

Or, if you prefer Python, start the Python shell by running:

./bin/pyspark

As in the Scala shell, let’s make a new RDD from the text of the README file in the Spark source directory:

>>> textFile = sc.textFile("README.md")

RDDs have actions, which return values, and transformations, which return pointers to new RDDs. Let’s start with a few actions:

>>> textFile.count()  # Number of items in this RDD
126

>>> textFile.first()  # First item in this RDD
u'# Apache Spark'

Now let’s use a transformation. We will use the filter transformation to return a new RDD with a subset of the items in the file.

>>> linesWithSpark = textFile.filter(lambda line: "Spark" in line)

We can chain together transformations and actions:

>>> textFile.filter(lambda line: "Spark" in line).count()  # How many lines contain "Spark"?
15

More on RDD Operations

RDD actions and transformations can be used for more complex computations. Let’s say we want to find the line with the most words:
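
In the Scala shell, one way to express this is with map and reduce; a sketch, where the exact value printed depends on the contents of README.md:

scala> textFile.map(line => line.split(" ").size).reduce((a, b) => if (a > b) a else b)
res4: Int = 15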

This first maps a line to an integer value, creating a new RDD. reduce is called on that RDD to find the largest line count. The arguments to map and reduce are Scala function literals (closures), and can use any language feature or Scala/Java library. For example, we can easily call functions declared elsewhere. We’ll use the Math.max() function to make this code easier to understand:
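
A sketch of that version in the Scala shell (with the same caveat about the printed value):

scala> import java.lang.Math
import java.lang.Math

scala> textFile.map(line => line.split(" ").size).reduce((a, b) => Math.max(a, b))
res5: Int = 15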

We can also combine the flatMap, map and reduceByKey transformations to compute the per-word counts in the file as an RDD of (String, Int) pairs, and then use the collect action to bring the word counts back to our shell:
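
A sketch in the Scala shell; collect returns the counts as an Array of (word, count) pairs:

scala> val wordCounts = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey((a, b) => a + b)

scala> wordCounts.collect()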

In Python, the same computations look much alike. Finding the line with the most words again first maps each line to an integer value, creating a new RDD, and reduce is then called on that RDD to find the largest line count. The arguments to map and reduce are Python anonymous functions (lambdas),
but we can also pass any top-level Python function we want.
For example, we’ll define a max function to make this code easier to understand; both versions are shown below:
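
A sketch in the Python shell (the values printed depend on the contents of README.md):

>>> textFile.map(lambda line: len(line.split())).reduce(lambda a, b: a if (a > b) else b)
15

>>> def max(a, b):
...     if a > b:
...         return a
...     else:
...         return b
...

>>> textFile.map(lambda line: len(line.split())).reduce(max)
15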

Similarly, we can combine the flatMap, map and reduceByKey transformations to compute the per-word counts in the file as an RDD of (string, int) pairs, and then use the collect action to bring the word counts back to our shell:
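
A sketch in the Python shell; collect returns the counts as a list of (word, count) tuples:

>>> wordCounts = textFile.flatMap(lambda line: line.split()).map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)

>>> wordCounts.collect()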

Caching

Spark also supports pulling data sets into a cluster-wide in-memory cache. This is very useful when data is accessed repeatedly, such as when querying a small “hot” dataset or when running an iterative algorithm like PageRank. As a simple example, let’s mark our linesWithSpark dataset to be cached:
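
A sketch in the Python shell, reusing the linesWithSpark RDD defined earlier:

>>> linesWithSpark.cache()

>>> linesWithSpark.count()  # the first action computes the RDD and caches it
15

>>> linesWithSpark.count()  # later actions read from the in-memory cache
15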

It may seem silly to use Spark to explore and cache a 100-line text file. The interesting part is
that these same functions can be used on very large data sets, even when they are striped across
tens or hundreds of nodes. You can also do this interactively by connecting bin/spark-shell or
bin/pyspark to a cluster, as described in the programming guide.

Self-Contained Applications

Now say we wanted to write a self-contained application using the Spark API. We will walk through a
simple application in Scala (with SBT), Java (with Maven), and Python.

We’ll create a very simple Spark application in Scala. So simple, in fact, that it’s
named SimpleApp.scala:

/* SimpleApp.scala */
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "YOUR_SPARK_HOME/README.md" // Should be some file on your system
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}

This program just counts the number of lines containing ‘a’ and the number containing ‘b’ in the
Spark README. Note that you’ll need to replace YOUR_SPARK_HOME with the location where Spark is
installed. Unlike the earlier examples with the Spark shell, which initializes its own SparkContext,
we initialize a SparkContext as part of the program.

We pass the SparkContext constructor a
SparkConf
object which contains information about our
application.

Our application depends on the Spark API, so we’ll also include an sbt configuration file,
simple.sbt which explains that Spark is a dependency. This file also adds a repository that
Spark depends on:
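
A minimal simple.sbt along these lines would work; the Scala and Spark version numbers are placeholders that should match your Spark release, and the resolver shown is an example of such a repository:

name := "Simple Project"

version := "1.0"

scalaVersion := "2.10.4"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0"

resolvers += "Akka Repository" at "http://repo.akka.io/releases/"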

For sbt to work correctly, we’ll need to lay out SimpleApp.scala and simple.sbt
according to the typical directory structure. Once that is in place, we can create a JAR package
containing the application’s code, then use the spark-submit script to run our program.
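
A sketch of the layout and the commands involved; the jar path depends on your project name and Scala version:

$ find .
.
./simple.sbt
./src
./src/main
./src/main/scala
./src/main/scala/SimpleApp.scala

# Package a jar containing your application
$ sbt package

# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
  --class "SimpleApp" \
  --master local[4] \
  target/scala-2.10/simple-project_2.10-1.0.jar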

This example will use Maven to compile an application jar, but any similar build system will work.

We’ll create a very simple Spark application, SimpleApp.java:

/* SimpleApp.java */
import org.apache.spark.api.java.*;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function;

public class SimpleApp {
  public static void main(String[] args) {
    String logFile = "YOUR_SPARK_HOME/README.md"; // Should be some file on your system
    SparkConf conf = new SparkConf().setAppName("Simple Application");
    JavaSparkContext sc = new JavaSparkContext(conf);
    JavaRDD<String> logData = sc.textFile(logFile).cache();

    long numAs = logData.filter(new Function<String, Boolean>() {
      public Boolean call(String s) { return s.contains("a"); }
    }).count();

    long numBs = logData.filter(new Function<String, Boolean>() {
      public Boolean call(String s) { return s.contains("b"); }
    }).count();

    System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);
  }
}

This program just counts the number of lines containing ‘a’ and the number containing ‘b’ in a text
file. Note that you’ll need to replace YOUR_SPARK_HOME with the location where Spark is installed.
As with the Scala example, we initialize a SparkContext, though we use the special
JavaSparkContext class to get a Java-friendly one. We also create RDDs (represented by
JavaRDD) and run transformations on them. Finally, we pass functions to Spark by creating classes
that implement org.apache.spark.api.java.function.Function. The
Spark programming guide describes these differences in more detail.

To build the program, we also write a Maven pom.xml file that lists Spark as a dependency.
Note that Spark artifacts are tagged with a Scala version.
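
A minimal pom.xml might look like the following; the group and artifact names for the application itself are placeholders, and the Spark version and the _2.10 Scala suffix should match your Spark release:

<project>
  <groupId>edu.berkeley</groupId>
  <artifactId>simple-project</artifactId>
  <modelVersion>4.0.0</modelVersion>
  <name>Simple Project</name>
  <packaging>jar</packaging>
  <version>1.0</version>
  <dependencies>
    <dependency> <!-- Spark dependency -->
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.2.0</version>
    </dependency>
  </dependencies>
</project>

With this in place, mvn package produces the application jar, which can then be run with spark-submit as in the Scala example.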

Now we will show how to write an application using the Python API (PySpark).

As an example, we’ll create a simple Spark application, SimpleApp.py:

"""SimpleApp.py"""
from pyspark import SparkContext

logFile = "YOUR_SPARK_HOME/README.md"  # Should be some file on your system
sc = SparkContext("local", "Simple App")
logData = sc.textFile(logFile).cache()

numAs = logData.filter(lambda s: 'a' in s).count()
numBs = logData.filter(lambda s: 'b' in s).count()

print "Lines with a: %i, lines with b: %i" % (numAs, numBs)

This program just counts the number of lines containing ‘a’ and the number containing ‘b’ in a
text file.
Note that you’ll need to replace YOUR_SPARK_HOME with the location where Spark is installed.
As with the Scala and Java examples, we use a SparkContext to create RDDs.
We can pass Python functions to Spark, which are automatically serialized along with any variables
that they reference.
For applications that use custom classes or third-party libraries, we can also add code
dependencies to spark-submit through its --py-files argument by packaging them into a
.zip file (see spark-submit --help for details).
SimpleApp is simple enough that we do not need to specify any code dependencies.
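
We can run this application with the bin/spark-submit script; a sketch, where the local[4] master URL is just an example:

# Use spark-submit to run your application
$ YOUR_SPARK_HOME/bin/spark-submit \
  --master local[4] \
  SimpleApp.py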