Any number of objects. If an object is a dask collection, it is computed
and the result is returned. By default, Python built-in collections are
also traversed to look for dask objects (for more information see the
traverse keyword). Non-dask arguments are passed through unchanged.

traverse : bool, optional

By default, dask traverses built-in Python collections looking for dask
objects passed to compute. For large collections this can be
expensive. If none of the arguments contain dask objects, set
traverse=False to avoid this traversal.

get : callable, optional

A scheduler get function to use. If not provided, the default is
to check the global settings first, and then fall back to
the collection defaults.

optimize_graph : bool, optional

If True [default], the optimizations for each collection are applied
before computation. Otherwise the graph is run as is. This can be
useful for debugging.
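The traversal behavior described above can be sketched with `dask.delayed` (the `inc` function here is an illustrative placeholder, not part of the API):

```python
import dask
from dask import delayed
from dask.delayed import Delayed

@delayed
def inc(x):
    return x + 1

# A plain dict containing dask objects: compute() traverses it by default
data = {"a": inc(1), "b": inc(2)}
(computed,) = dask.compute(data)
# computed == {"a": 2, "b": 3}

# With traverse=False the dict is not inspected, so it passes through
# unchanged and still holds lazy Delayed objects
(untouched,) = dask.compute(data, traverse=False)
assert isinstance(untouched["a"], Delayed)
```
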

Returns equivalent dask collections that all share the same merged and
optimized underlying graph. This can be useful if converting multiple
collections to delayed objects, or to manually apply the optimizations at
strategic points.

Note that in most cases you shouldn’t need to call this method directly.

Parameters:

*args : objects

Any number of objects. If an object is a dask collection, its graph is
optimized and merged with the graphs of all other dask objects before an
equivalent dask collection is returned. Non-dask arguments are passed
through unchanged.
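A minimal sketch of this behavior using `dask.delayed` (the `inc` function is illustrative):

```python
import dask
from dask import delayed

@delayed
def inc(x):
    return x + 1

a = inc(1)
b = inc(a)  # b depends on a

# optimize() returns equivalent *lazy* collections whose graphs have been
# merged and optimized together; nothing is computed yet
a2, b2 = dask.optimize(a, b)

# Computing the optimized collections gives the same results as the originals
results = dask.compute(a2, b2)
# results == (2, 3)
```
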

This turns lazy Dask collections into Dask collections with the same
metadata, but now with their results fully computed or actively computing
in the background.

For example, a lazy dask.array built up from many lazy calls will now be a
dask.array of the same shape, dtype, chunks, etc., but with all of
those previously lazy tasks either computed in memory as many small numpy
arrays (in the single-machine case) or running asynchronously in the
background on a cluster (in the distributed case).

This function behaves differently if a dask.distributed.Client exists
and is connected to a distributed scheduler. In that case, this function
returns as soon as the task graph has been submitted to the cluster,
but before the computations have completed. Computations continue
asynchronously in the background. When used with the single-machine
scheduler, this function blocks until the computations have finished.

When using Dask on a single machine you should ensure that the dataset fits
entirely within memory.

Parameters:

*args : Dask collections

get : callable, optional

A scheduler get function to use. If not provided, the default
is to check the global settings first, and then fall back to
the collection defaults.

optimize_graph : bool, optional

If True [default], the graph is optimized before computation.
Otherwise the graph is run as is. This can be useful for debugging.
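On a single machine, the blocking behavior described above can be sketched with `dask.delayed` (the `inc` function is an illustrative placeholder):

```python
import dask
from dask import delayed

@delayed
def inc(x):
    return x + 1

lazy = inc(10)

# With the single-machine scheduler, persist() blocks until the result is
# computed and held in memory
(persisted,) = dask.persist(lazy)

# persisted has the same Delayed interface as lazy, but its graph now
# contains the concrete value, so a subsequent compute() is essentially free
result = persisted.compute()
# result == 11
```
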