Changed behaviour for delayed(nout=0) and delayed(nout=1):
delayed(nout=1) no longer falls back to the nout=None behaviour, and
delayed(nout=0) is also enabled, i.e. functions that return
tuples of length 1 or 0 are now handled correctly. This is especially
handy if functions with a variable number of outputs are wrapped by
delayed, e.g. the trivial example
delayed(lambda *args: args, nout=len(vals))(*vals)
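The variable-outputs pattern above can be sketched as follows (vals and the lambda are illustrative, not from the release itself):

```python
from dask import compute, delayed

vals = (1, 2, 3)

# nout tells delayed how many outputs to expect, so the result of the
# call can be unpacked into one Delayed object per output.
a, b, c = delayed(lambda *args: args, nout=len(vals))(*vals)

print(compute(a, b, c))  # (1, 2, 3)
```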

DataFrames now enforce knowing full metadata (columns, dtypes) everywhere.
Previously we would operate in an ambiguous state when functions lost dtype
information (such as apply). Now all dataframes always know their dtypes
and raise errors asking for that information when they are unable to infer
it (which they usually can). Some internal attributes like _pd and
_pd_nonempty have been moved.

The internals of the distributed scheduler have been refactored to
transition tasks between explicit states. This improves resilience,
reasoning about scheduling, plugin operation, and logging. It also makes
the scheduler code easier to understand for newcomers.

Improve scheduler performance in the many-fast-tasks case (important for
shuffling).

Improve work stealing to be aware of expected function run-times and data
sizes. This drastically increases the breadth of algorithms that can be
efficiently run on the distributed scheduler without significant user
expertise.

All dask-related projects (dask, distributed, s3fs, hdfs, partd) are now
building conda packages on conda-forge.

Change credential handling in s3fs to only pass around delegated credentials
when a secret/key is explicitly given. The default is now to rely on managed
environments. The old behaviour can be restored by explicitly providing a
keyword argument. Anonymous mode must be explicitly declared if desired.

All S3/HDFS data ingest functions like db.from_s3 or
distributed.s3.read_csv have been moved into the plain read_text and
read_csv functions, which now support protocols, e.g.
dd.read_csv('s3://bucket/keys*.csv').

This release improves coverage of the pandas API; among other things
it adds nunique, nlargest, and quantile. It fixes encoding issues
when reading non-ASCII CSV files, brings performance improvements and
bug fixes for resample, and makes read_hdf more flexible with globbing,
among many other changes. It also contains various bug fixes in
dask.imperative and dask.bag.

The array and dataframe collections now create graphs with deterministic
keys. These tend to be longer (hash strings) but should be consistent
between computations. This will be useful for caching in the future.
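The determinism can be checked directly: building the same expression twice yields the same key names (a small illustration, not from the release itself):

```python
import dask.array as da

# Two independently built but identical expressions hash to the same
# graph keys, so their results could be shared or cached.
a = (da.ones((4,), chunks=2) + 1).sum()
b = (da.ones((4,), chunks=2) + 1).sum()

print(a.name == b.name)  # True
```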