Also note that the collections library was carefully designed to include several implementations of
each of the three basic collection types. These implementations have specific performance
characteristics which are described
in the guide.

The base trait for all combiners.
A combiner supports incremental collection construction just like
a regular builder, but also implements an efficient merge operation on two
builders via the combine method. Once the collection is constructed, it may be
obtained by invoking the result method.

For best performance, the complexity of the combine method should be less than
linear in the number of elements. The result method need not be a constant-time
operation, and may be performed in parallel.
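
The combine/result split can be sketched with a toy combiner. The names below are hypothetical and simplified, not the real scala.collection.parallel.Combiner API: combine links chunk lists instead of copying elements, so its cost is proportional to the number of chunks rather than the number of elements, and result copies the elements exactly once at the end.

```scala
import scala.collection.mutable.{ArrayBuffer, ListBuffer}

// Toy combiner sketch (hypothetical API): elements are appended into chunks,
// and `combine` merges two combiners by linking their chunk lists.
class ToyCombiner[T] {
  private val chunks = ListBuffer(ArrayBuffer.empty[T])

  def +=(elem: T): this.type = { chunks.last += elem; this }

  // Sub-linear in the number of elements: only the chunk lists are linked.
  def combine(that: ToyCombiner[T]): ToyCombiner[T] = {
    chunks ++= that.chunks
    this
  }

  // `result` copies the elements once; in a real implementation this final
  // copy could itself be performed in parallel.
  def result(): List[T] = chunks.toList.flatten
}

val left = new ToyCombiner[Int]
left += 1
left += 2
val right = new ToyCombiner[Int]
right += 3
right += 4
val merged = left.combine(right).result()  // List(1, 2, 3, 4)
```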

It can be used with the default execution context implementation in the
scala.concurrent package. It internally forwards the call to either a
fork/join-based task support or a thread-pool-executor-based one, depending on
what the execution context uses.

By default, parallel collections are parameterized with this task support
object, so parallel collections share the same execution context backend
as the rest of the scala.concurrent package.

As an optimization, it internally checks whether the execution context is the
standard implementation based on fork/join pools, and if it is, creates a
ForkJoinTaskSupport that shares the same pool and forwards requests to it.

Otherwise, it uses an execution-context-exclusive Tasks implementation to
divide the tasks into smaller chunks and execute operations on them.

This is a base trait for Scala parallel collections. It defines behaviour
common to all parallel collections. Concrete parallel collections should
inherit this trait and ParIterable if they want to define specific combiner
factories.

Parallel operations are implemented with divide and conquer style algorithms that
parallelize well. The basic idea is to split the collection into smaller parts until
they are small enough to be operated on sequentially.

All of the parallel operations are implemented as tasks within this trait. Tasks rely
on the concept of splitters, which extend iterators. Every parallel collection defines:

def splitter: IterableSplitter[T]

which returns an instance of IterableSplitter[T], which is a subtype of Splitter[T].
Splitters have a method remaining to check the number of elements remaining,
and a method split that divides the elements the splitter iterates over into
disjoint subsets:

def split: Seq[Splitter]

which splits the splitter into a sequence of disjoint subsplitters. This is typically a
very fast operation which simply creates wrappers around the receiver collection.
This can be repeated recursively.
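
The splitter-driven divide and conquer can be sketched with a toy splitter. This is a hypothetical, simplified stand-in for the real Splitter[T] API: split creates wrappers over disjoint halves of the same underlying array without copying, and the recursion stops once pieces fall below a sequential threshold.

```scala
// Toy splitter sketch (hypothetical API): an iterator over a slice of an
// array that can split itself into disjoint sub-iterators in O(1).
class ToySplitter(xs: Array[Int], from: Int, until: Int) extends Iterator[Int] {
  private var i = from
  def hasNext: Boolean = i < until
  def next(): Int = { val x = xs(i); i += 1; x }

  def remaining: Int = until - i

  // Creates wrappers around the receiver's data; no elements are copied.
  def split: Seq[ToySplitter] = {
    val mid = i + remaining / 2
    Seq(new ToySplitter(xs, i, mid), new ToySplitter(xs, mid, until))
  }
}

// Divide and conquer: split until pieces are small enough to process
// sequentially. (Here the recursion runs on one thread; in the real
// library each branch would be a task scheduled on a pool.)
def parSum(s: ToySplitter, threshold: Int = 2): Int =
  if (s.remaining <= threshold) s.sum
  else s.split.map(parSum(_, threshold)).sum

val data = Array(1, 2, 3, 4, 5, 6, 7, 8)
val total = parSum(new ToySplitter(data, 0, data.length))  // 36
```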

Method newCombiner produces a new combiner. Combiners are an extension of builders.
They provide a method combine which combines two combiners and returns a combiner
containing elements of both combiners.
This method can be implemented by aggressively copying all the elements into the new combiner
or by lazily binding their results. It is recommended to avoid copying all of
the elements for performance reasons, although that cost might be negligible depending on
the use case. Standard parallel collection combiners avoid copying when merging results,
relying either on a two-step lazy construction or specific data-structure properties.

Methods:

def seq: Sequential
def par: Repr

produce the sequential or parallel implementation of the collection, respectively.
Method par just returns a reference to this parallel collection.
Method seq is efficient - it will not copy the elements. Instead,
it will create a sequential version of the collection using the same underlying data structure.
Note that this is not the case for sequential collections in general - they may copy the elements
and produce a different underlying data structure.

The combination of methods toMap, toSeq or toSet along with par and seq is a flexible
way to change between different collection types.
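
The conversion half of this can be shown with the standard sequential conversions alone. The values below are illustrative; with the scala-parallel-collections module on the classpath the same chains work with par and seq interleaved, e.g. xs.par.map(...).seq.

```scala
// Switching between collection types via toMap/toSeq/toSet.
val pairs = Seq("a" -> 1, "b" -> 2, "a" -> 3)

val asMap = pairs.toMap        // later binding for "a" wins: Map(a -> 3, b -> 2)
val asSet = pairs.toSet        // deduplicated pairs (all three are distinct here)
val back  = asMap.toSeq.sortBy(_._1)  // Seq(("a", 3), ("b", 2))
```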

Since this trait extends the GenIterable trait, methods like size must also
be implemented in concrete collections, while iterator forwards to splitter by
default.

Each parallel collection is bound to a specific fork/join pool, on which dormant worker
threads are kept. The fork/join pool contains other information such as the parallelism
level, that is, the number of processors used. When a collection is created, it is assigned the
default fork/join pool found in the scala.parallel package object.
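
The parallelism level mentioned above can be inspected and set through the JDK fork/join API directly. This is a self-contained illustration of the underlying pool, not the scala.parallel package object itself:

```scala
import java.util.concurrent.ForkJoinPool

// By default a pool's parallelism tracks the available processors.
val cores = Runtime.getRuntime.availableProcessors()
val defaultLevel = ForkJoinPool.commonPool().getParallelism  // typically cores - 1

// A pool with an explicit parallelism level can also be constructed.
val pool = new ForkJoinPool(2)
val level = pool.getParallelism  // 2
pool.shutdown()
```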

Parallel collections are not necessarily ordered in terms of the foreach
operation (see Traversable). Parallel sequences have a well defined order for iterators - creating
an iterator and traversing the elements linearly will always yield the same order.
However, bulk operations such as foreach, map or filter always occur in undefined orders for all
parallel collections.

Existing parallel collection implementations provide strict parallel iterators. Strict parallel iterators are aware
of the number of elements they have yet to traverse. It's also possible to provide non-strict parallel iterators,
which do not know the number of elements remaining. To do this, the new collection implementation must override
isStrictSplitterCollection to false. This will make some operations unavailable.

To create a new parallel collection, extend the ParIterable trait, and implement size, splitter,
newCombiner and seq. Having an implicit combiner factory requires extending this trait in addition, as
well as providing a companion object, as with regular collections.

Method size is implemented as a constant time operation for parallel collections, and parallel collection
operations rely on this assumption.

The higher-order functions passed to certain operations may contain side-effects. Since implementations
of bulk operations may not be sequential, this means that side-effects may not be predictable and may
produce data-races, deadlocks or invalidation of state if care is not taken. It is up to the programmer
to either avoid using side-effects or to use some form of synchronization when accessing mutable data.
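
The danger, and one form of synchronization that avoids it, can be sketched with plain threads (parallel-collection workers behave analogously). Incrementing an unsynchronized var from several threads can lose updates; an AtomicInteger stays correct:

```scala
import java.util.concurrent.atomic.AtomicInteger

// Four threads each perform 1000 increments. With a plain `var count`,
// `count += 1` is a read-modify-write and updates can be lost; the atomic
// increment below is safe and always yields 4000.
val safe = new AtomicInteger(0)
val threads = (1 to 4).map { _ =>
  new Thread(() => (1 to 1000).foreach(_ => safe.incrementAndGet()))
}
threads.foreach(_.start())
threads.foreach(_.join())
val count = safe.get()  // 4000
```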


A template trait for mutable parallel maps. This trait is to be mixed in
with concrete parallel maps to override the representation type.

Parallel sequences inherit the Seq trait. Their indexing and length computations
are defined to be efficient. Like their sequential counterparts,
they always have a defined order of elements. This means they will produce resulting
parallel sequences in the same way sequential sequences do. However, the order
in which they perform bulk operations on elements to produce results is not defined and is generally
nondeterministic. If the higher-order functions given to them produce no side effects,
then this won't be noticeable.

This trait defines a new, more general split operation and reimplements the split
operation of the ParallelIterable trait using the new split operation.

A template trait for parallel sets. This trait is mixed in with concrete
parallel sets to override the representation type.

A trait implementing the scheduling of a parallel collection operation.

Parallel collections are modular in the way operations are scheduled. Each
parallel collection is parameterized with a task support object which is
responsible for scheduling and load-balancing tasks to processors.

A task support object can be changed in a parallel collection after it has
been created, but only during a quiescent period, i.e. while there are no
concurrent invocations of parallel collection methods.
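
What a task support object does can be sketched with a plain JDK executor. The class below is a hypothetical toy, not the real TaskSupport API: it owns a pool, schedules one task per chunk of work, and collects the results.

```scala
import java.util.concurrent.{Callable, Executors}
import scala.jdk.CollectionConverters._

// Toy task-support sketch (hypothetical API): schedules independent chunks
// of work on a fixed-size thread pool and waits for all of their results.
class ToyTaskSupport(parallelism: Int) {
  private val pool = Executors.newFixedThreadPool(parallelism)

  def executeAll[T](chunks: Seq[() => T]): Seq[T] = {
    val callables = chunks.map(c => new Callable[T] { def call(): T = c() })
    // invokeAll blocks until every task has completed, preserving order.
    pool.invokeAll(callables.asJava).asScala.toSeq.map(_.get())
  }

  def shutdown(): Unit = pool.shutdown()
}

val ts = new ToyTaskSupport(parallelism = 2)
val partialSums = ts.executeAll(Seq(() => (1 to 50).sum, () => (51 to 100).sum))
val total = partialSums.sum  // 5050
ts.shutdown()
```

In the real library, a collection's scheduling backend is replaced by assigning to its tasksupport field, for example with a ForkJoinTaskSupport wrapping a fresh fork/join pool.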