These modules are intended to be imported qualified, to avoid name
clashes with Prelude functions, e.g.

import Data.IntSet (IntSet)
import qualified Data.IntSet as IntSet
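
A minimal sketch of the qualified style, assuming the containers package is available:

```haskell
import Data.IntSet (IntSet)
import qualified Data.IntSet as IntSet

main :: IO ()
main = do
  let s = IntSet.fromList [3, 1, 2] :: IntSet
  -- Qualified names avoid clashing with Prelude's filter, map, null, etc.
  print (IntSet.member 2 s)                      -- True
  print (IntSet.toAscList (IntSet.insert 4 s))   -- [1,2,3,4]
```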

The implementation is based on big-endian Patricia trees. This data
structure performs especially well on binary operations like union
and intersection. However, my benchmarks show that it is also
(much) faster on insertions and deletions when compared to a generic
size-balanced set implementation (see Data.Set).
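
For illustration, a small sketch of the binary operations mentioned above, plus difference:

```haskell
import qualified Data.IntSet as IntSet

main :: IO ()
main = do
  let a = IntSet.fromList [1, 2, 3, 4]
      b = IntSet.fromList [3, 4, 5, 6]
  print (IntSet.toAscList (IntSet.union a b))         -- [1,2,3,4,5,6]
  print (IntSet.toAscList (IntSet.intersection a b))  -- [3,4]
  print (IntSet.toAscList (IntSet.difference a b))    -- [1,2]
```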

Additionally, this implementation places bitmaps in the leaves of the tree.
Each bitmap has the natural size of a machine word (32 or 64 bits); this
greatly reduces the memory footprint and execution times for dense sets,
i.e. sets where many values lie close to each other. The asymptotics are
not affected by this optimization.

Many operations have a worst-case complexity of O(min(n,W)).
This means that the operation can become linear in the number of
elements, with a maximum of W -- the number of bits in an Int
(32 or 64).

O(1). Decompose a set into pieces based on the structure of the underlying
tree. This function is useful for consuming a set in parallel.

No guarantee is made as to the sizes of the pieces; an internal, but
deterministic, process determines this. However, it is guaranteed that the
pieces returned will be in ascending order (all elements in the first subset
less than all elements in the second, and so on).

Note that the current implementation does not return more than two subsets,
but you should not depend on this behaviour because it can change in the
future without notice. Also, the current version does not continue
splitting all the way to individual singleton sets -- it stops at some
point.
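
In the containers package this operation is exported as splitRoot. A small sketch of the two guarantees described above (the pieces re-unite to the original set, and they arrive in ascending order); the number and sizes of the pieces are implementation-dependent, so they are not checked here:

```haskell
import qualified Data.IntSet as IntSet

main :: IO ()
main = do
  let s      = IntSet.fromList [0 .. 100]
      pieces = IntSet.splitRoot s
  -- Re-uniting the pieces recovers the original set.
  print (IntSet.unions pieces == s)
  -- The pieces arrive in strictly ascending order.
  print (and (zipWith (\x y -> IntSet.findMax x < IntSet.findMin y)
                      pieces (tail pieces)))
```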

Debugging

O(n). The expression (showTreeWith hang wide set) shows
the tree that implements the set. If hang is
True, a hanging tree is shown; otherwise a rotated tree is shown. If
wide is True, an extra-wide version is shown.
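
A sketch of rendering a small set; note that in recent versions of containers, showTreeWith lives in Data.IntSet.Internal rather than Data.IntSet, and the exact rendering is version-dependent, so no output is shown:

```haskell
-- Assumes containers >= 0.5.9, where the debug helpers are in the
-- Internal module; older versions exported them from Data.IntSet.
import Data.IntSet.Internal (fromList, showTreeWith)

main :: IO ()
main =
  -- hang = True requests the hanging layout; wide = False the compact one.
  putStr (showTreeWith True False (fromList [1, 4, 5, 7]))
```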