new SparkContext(config: SparkConf)

config

a Spark Config object describing the application configuration. Any settings in
this config override the default configs as well as system properties.

Value Members

final def !=(arg0: Any): Boolean

Definition Classes

AnyRef → Any

final def ##(): Int

Definition Classes

AnyRef → Any

final def ==(arg0: Any): Boolean

Definition Classes

AnyRef → Any

def addFile(path: String, recursive: Boolean): Unit

Add a file to be downloaded with this Spark job on every node.
The path passed can be either a local file, a file in HDFS (or other Hadoop-supported
filesystems), or an HTTP, HTTPS or FTP URI. To access the file in Spark jobs,
use SparkFiles.get(fileName) to find its download location.

A directory can be given if the recursive option is set to true. Currently directories are only
supported for Hadoop-supported filesystems.
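
For illustration, a minimal sketch (assumes an existing SparkContext named sc; the HDFS path and file name are hypothetical):

import org.apache.spark.SparkFiles

sc.addFile("hdfs://namenode/config/lookup.csv")

sc.parallelize(1 to 4).foreach { _ =>
  // Each executor resolves its own local copy of the downloaded file.
  val localPath = SparkFiles.get("lookup.csv")
  // ... read localPath with ordinary file APIs ...
}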

def addFile(path: String): Unit

Add a file to be downloaded with this Spark job on every node.
The path passed can be either a local file, a file in HDFS (or other Hadoop-supported
filesystems), or an HTTP, HTTPS or FTP URI. To access the file in Spark jobs,
use SparkFiles.get(fileName) to find its download location.

def addJar(path: String): Unit

Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
The path passed can be either a local file, a file in HDFS (or other Hadoop-supported
filesystems), an HTTP, HTTPS or FTP URI, or local:/path for a file on every worker node.
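
A brief sketch (the jar paths are hypothetical; the local:/ form refers to a file that is already present on every worker node):

sc.addJar("/opt/jobs/udfs.jar")
sc.addJar("local:/opt/spark/extra/native-wrappers.jar")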

def addSparkListener(listener: SparkListenerInterface): Unit

:: DeveloperApi ::
Register a listener to receive up-calls from events that happen during execution.

Annotations

@DeveloperApi()

def appName: String

def applicationAttemptId: Option[String]

def applicationId: String

A unique identifier for the Spark application.
Its format depends on the scheduler implementation, e.g.
'local-1433865536131' for a local Spark application, or
'application_1433865536131_34483' when running on YARN.

Create and register a CollectionAccumulator, which starts with an empty list and accumulates
inputs by adding them into the list.

def defaultMinPartitions: Int

Default min number of partitions for Hadoop RDDs when not given by user.
Notice that we use math.min so the "defaultMinPartitions" cannot be higher than 2.
The reasons for this are discussed in https://github.com/mesos/spark/pull/718
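
In other words, the value is capped at 2. A sketch of the relationship (in terms of defaultParallelism below, not necessarily the literal source):

def defaultMinPartitions: Int = math.min(defaultParallelism, 2)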

def defaultParallelism: Int

Default level of parallelism to use when not given by user (e.g. parallelize and makeRDD).

Smarter version of hadoopFile() that uses class tags to figure out the classes of keys,
values and the InputFormat so that users don't need to pass them directly. Instead, callers
can just write, for example:
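
(A sketch: assumes an existing SparkContext sc, a path string, and the old-API org.apache.hadoop.mapred.TextInputFormat.)

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapred.TextInputFormat

val rdd = sc.hadoopFile[LongWritable, Text, TextInputFormat](path, 2)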

Note: Because Hadoop's RecordReader class re-uses the same Writable object for each
record, directly caching the returned RDD or directly passing it to an aggregation or shuffle
operation will create many references to the same object.
If you plan to directly cache, sort, or aggregate Hadoop writable objects, you should first
copy them using a map function.

Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf given its InputFormat and other
necessary info (e.g. file name for a filesystem-based dataset, table name for HyperTable),
using the older MapReduce API (org.apache.hadoop.mapred).

conf

JobConf for setting up the dataset. Note: This will be put into a Broadcast.
Therefore if you plan to reuse this conf to create multiple RDDs, you need to make
sure you won't modify the conf. A safe approach is always creating a new conf for
a new RDD.

inputFormatClass

Class of the InputFormat

keyClass

Class of the keys

valueClass

Class of the values

minPartitions

Minimum number of Hadoop Splits to generate.

Note: Because Hadoop's RecordReader class re-uses the same Writable object for each
record, directly caching the returned RDD or directly passing it to an aggregation or shuffle
operation will create many references to the same object.
If you plan to directly cache, sort, or aggregate Hadoop writable objects, you should first
copy them using a map function.
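
To make the parameters concrete, a hedged sketch (the input path is hypothetical; assumes an existing SparkContext sc):

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapred.{FileInputFormat, JobConf, TextInputFormat}

// Create a fresh JobConf for this RDD, per the note on conf above.
val jobConf = new JobConf()
FileInputFormat.setInputPaths(jobConf, "/data/events")

val records = sc.hadoopRDD(
  jobConf,
  classOf[TextInputFormat],
  classOf[LongWritable],
  classOf[Text])

// Copy out of the reused Writable before caching or shuffling.
val lines = records.map { case (_, text) => text.toString }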

def hashCode(): Int

Definition Classes

AnyRef → Any

def initializeLogIfNecessary(isInterpreter: Boolean): Unit

Attributes

protected

Definition Classes

Logging

final def isInstanceOf[T0]: Boolean

Definition Classes

Any

def isLocal: Boolean

def isStopped: Boolean

returns

true if context is stopped or in the midst of stopping.

def isTraceEnabled(): Boolean

Attributes

protected

Definition Classes

Logging

def jars: Seq[String]

def killExecutor(executorId: String): Boolean

Request that the cluster manager kill the specified executor.

Note: This is an indication to the cluster manager that the application wishes to adjust
its resource usage downwards. If the application wishes to replace the executor it kills
through this method with a new one, it should follow up explicitly with a call to
{{SparkContext#requestExecutors}}.

def killExecutors(executorIds: Seq[String]): Boolean

Request that the cluster manager kill the specified executors.

Note: This is an indication to the cluster manager that the application wishes to adjust
its resource usage downwards. If the application wishes to replace the executors it kills
through this method with new ones, it should follow up explicitly with a call to
{{SparkContext#requestExecutors}}.
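
A brief sketch (the executor IDs are hypothetical; the request is a hint to the cluster manager, not a guarantee):

sc.killExecutors(Seq("3", "7"))
// To replace the killed executors, follow up explicitly, e.g. sc.requestExecutors(2).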

final def ne(arg0: AnyRef): Boolean

Get an RDD for a given Hadoop file with an arbitrary new API InputFormat
and extra configuration options to pass to the input format.

Note: Because Hadoop's RecordReader class re-uses the same Writable object for each
record, directly caching the returned RDD or directly passing it to an aggregation or shuffle
operation will create many references to the same object.
If you plan to directly cache, sort, or aggregate Hadoop writable objects, you should first
copy them using a map function.

conf

Configuration for setting up the dataset. Note: This will be put into a Broadcast.
Therefore if you plan to reuse this conf to create multiple RDDs, you need to make
sure you won't modify the conf. A safe approach is always creating a new conf for
a new RDD.

fClass

Class of the InputFormat

kClass

Class of the keys

vClass

Class of the values

Note: Because Hadoop's RecordReader class re-uses the same Writable object for each
record, directly caching the returned RDD or directly passing it to an aggregation or shuffle
operation will create many references to the same object.
If you plan to directly cache, sort, or aggregate Hadoop writable objects, you should first
copy them using a map function.
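
A hedged sketch of passing extra options through conf (the path and split-size value are hypothetical; assumes an existing SparkContext sc):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

// Create a fresh Configuration for this RDD, per the note on conf above.
val conf = new Configuration()
conf.set("mapreduce.input.fileinputformat.split.maxsize", "134217728")

val records = sc.newAPIHadoopFile(
  "/data/logs",
  classOf[TextInputFormat],
  classOf[LongWritable],
  classOf[Text],
  conf)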

final def notify(): Unit

final def notifyAll(): Unit

Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and
BytesWritable values that contain a serialized partition. This is still an experimental
storage format and may not be supported exactly as is in future Spark releases. It will also
be pretty slow if you use the default serializer (Java serialization),
though the nice thing about it is that there's very little effort required to save arbitrary
objects.

Note: avoid using parallelize(Seq()) to create an empty RDD. Consider emptyRDD for an
RDD with no partitions, or parallelize(Seq[T]()) for an RDD of T with empty partitions.

Note: Parallelize acts lazily. If seq is a mutable collection and is altered after the call
to parallelize and before the first action on the RDD, the resultant RDD will reflect the
modified collection. Pass a copy of the argument to avoid this.
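
A small sketch of the laziness caveat (assumes an existing SparkContext sc):

val data = scala.collection.mutable.ArrayBuffer(1, 2, 3)
val rdd = sc.parallelize(data.toList)  // pass an immutable copy, not the buffer itself
data += 4                              // does not affect rdd
rdd.count()                            // 3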

Update the cluster manager on our scheduling needs. Three bits of information are included
to help it make decisions.

numExecutors

The total number of executors we'd like to have. The cluster manager
shouldn't kill any running executor to reach this number, but,
if all existing executors were to die, this is the number of executors
we'd want to be allocated.

localityAwareTasks

The number of tasks in all active stages that have locality
preferences. This includes running, pending, and completed tasks.

hostToLocalTaskCount

A map of hosts to the number of tasks from all active stages
that would like to run on that host.
This includes running, pending, and completed tasks.
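
A hedged sketch (host names and counts are hypothetical):

sc.requestTotalExecutors(
  numExecutors = 8,
  localityAwareTasks = 4,
  hostToLocalTaskCount = Map("host-a" -> 2, "host-b" -> 2))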

Version of sequenceFile() for types implicitly convertible to Writables through a
WritableConverter. For example, to access a SequenceFile where the keys are Text and the
values are IntWritable, you could simply write

sparkContext.sequenceFile[String, Int](path, ...)

WritableConverters are provided in a somewhat strange way (by an implicit function) to support
both subclasses of Writable and types for which we define a converter (e.g. Int to
IntWritable). The most natural thing would've been to have implicit objects for the
converters, but then we couldn't have an object for every subclass of Writable (you can't
have a parameterized singleton object). We use functions instead to create a new converter
for the appropriate type. In addition, we pass the converter a ClassTag of its type to
allow it to figure out the Writable class to use in the subclass case.

Note: Because Hadoop's RecordReader class re-uses the same Writable object for each
record, directly caching the returned RDD or directly passing it to an aggregation or shuffle
operation will create many references to the same object.
If you plan to directly cache, sort, or aggregate Hadoop writable objects, you should first
copy them using a map function.

def setCallSite(shortCallSite: String): Unit

Set the thread-local property for overriding the call sites
of actions and RDDs.

def setCheckpointDir(directory: String): Unit

Set the directory under which RDDs are going to be checkpointed. The directory must
be an HDFS path if running on a cluster.

def setJobDescription(value: String): Unit

def setJobGroup(groupId: String, description: String, interruptOnCancel: Boolean = false): Unit

Assigns a group ID to all the jobs started by this thread until the group ID is set to a
different value or cleared.

Often, a unit of execution in an application consists of multiple Spark actions or jobs.
Application programmers can use this method to group all those jobs together and give a
group description. Once set, the Spark web UI will associate such jobs with this group.

// In the main thread:
sc.setJobGroup("some_job_to_cancel", "some job description")
sc.parallelize(1 to 10000, 2).map { i => Thread.sleep(10); i }.count()
// In a separate thread:
sc.cancelJobGroup("some_job_to_cancel")

If interruptOnCancel is set to true for the job group, then job cancellation will result
in Thread.interrupt() being called on the job's executor threads. This is useful to help ensure
that the tasks are actually stopped in a timely manner, but is off by default due to HDFS-1208,
where HDFS may respond to Thread.interrupt() by marking nodes as dead.

def setLocalProperty(key: String, value: String): Unit

Set a local property that affects jobs submitted from this thread, such as the Spark fair
scheduler pool. User-defined properties may also be set here. These properties are propagated
through to worker tasks and can be accessed there via
org.apache.spark.TaskContext#getLocalProperty.

These properties are inherited by child threads spawned from this thread. This
may have unexpected consequences when working with thread pools. The standard Java
implementation of thread pools has worker threads spawn other worker threads.
As a result, local properties may propagate unpredictably.
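
A brief sketch (the pool name is hypothetical and assumes a fair-scheduler pool with that name is configured):

sc.setLocalProperty("spark.scheduler.pool", "nightly-batch")

// Inside a task, the value can be read back:
// org.apache.spark.TaskContext.get().getLocalProperty("spark.scheduler.pool")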

final def wait(arg0: Long): Unit

Read a directory of text files from HDFS, a local file system (available on all nodes), or any
Hadoop-supported file system URI. Each file is read as a single record and returned in a
key-value pair, where the key is the path of each file, the value is the content of each file.
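
A minimal sketch (the directory is hypothetical; each element is a (path, content) pair):

val files = sc.wholeTextFiles("hdfs://namenode/data/reports")
val sizes = files.map { case (path, content) => (path, content.length) }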

Create an org.apache.spark.Accumulable shared variable, with a name for display in the
Spark UI. Tasks can add values to the accumulable using the += operator. Only the driver can
access the accumulable's value.

Create an org.apache.spark.Accumulator variable of a given type, with a name for display
in the Spark UI. Tasks can "add" values to the accumulator using the += method. Only the
driver can access the accumulator's value.
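
A brief sketch of this older accumulator API (the name and predicate are illustrative; assumes an existing SparkContext sc):

val badRecords = sc.accumulator(0, "bad records")
sc.parallelize(1 to 100).foreach { i =>
  if (i % 10 == 0) badRecords += 1   // tasks can only add to the accumulator
}
badRecords.value                     // 10, readable only on the driver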