The above example is really three examples, depending on whether or
not one includes rank 3 in list_b, and whether or not a
synchronize is included in lib_call. This example illustrates
that, despite contexts, subsequent calls to lib_call with the
same context need not be safe from one another (colloquially, this
hazard is called ``back-masking''). Safety is restored if the
MPI_Barrier is added. This demonstrates that libraries must be
written carefully, even with contexts. When rank 3 is excluded,
the synchronize is not needed to achieve safety from back-masking.
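The hazard can be shown with a toy model of one process's incoming messages; this is a Python sketch, not real MPI, and the per-sender FIFO, the `recv_any` helper, and the payload strings are all invented for illustration:

```python
from collections import deque

# Toy inbox for one process: one FIFO per sender, mimicking MPI's
# pairwise ordering of messages within a context.
inbox = {}

def deliver(src, msg):
    inbox.setdefault(src, deque()).append(msg)

def recv_any():
    """Like a receive with MPI_ANY_SOURCE: match the first available
    message from ANY sender, regardless of which invocation sent it."""
    for src, q in inbox.items():
        if q:
            return src, q.popleft()
    raise RuntimeError("no message available")

# Invocation 1 of the library expects a message from rank 1 but posts a
# wildcard receive.  With no barrier between invocations, rank 2 has
# raced ahead into invocation 2, so its message is in flight first.
deliver(2, "invocation-2 payload")
deliver(1, "invocation-1 payload")

src, msg = recv_any()
assert msg == "invocation-2 payload"   # back-masking: wrong invocation

# With an MPI_Barrier between the invocations, rank 2 could not send
# its invocation-2 message until every rank had finished invocation 1,
# so the wildcard receive would only ever see invocation-1 traffic.
```

The wildcard receive is the weak point: it matches on availability, not on which library invocation a message belongs to.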

Algorithms like ``reduce'' and ``allreduce'' have strong enough source
selectivity properties that they are inherently safe (no back-masking),
provided that MPI provides basic guarantees. So are multiple calls to a
typical tree-broadcast algorithm with the same root or different roots (see
[(ref Skj91rev)]). Here we rely on two guarantees of MPI: pairwise ordering of
messages between processes in the same context, and source selectivity ---
deleting either feature would make back-masking possible, and additional
synchronization would then be required to prevent it.
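These two guarantees can be exercised in a toy tree broadcast; the binary-tree layout, the per-pair FIFO channels, and the `bcast_twice` helper below are assumptions of this sketch, not MPI definitions:

```python
from collections import deque

N = 8
chan = {}  # one FIFO per (src, dst) pair: MPI's pairwise-ordering guarantee

def send(s, d, m): chan.setdefault((s, d), deque()).append(m)
def recv(s, d):    return chan[(s, d)].popleft()   # source-selective receive

def parent(r):   return (r - 1) // 2
def children(r): return [c for c in (2 * r + 1, 2 * r + 2) if c < N]

def bcast_twice(a, b):
    """Two back-to-back binary-tree broadcasts from rank 0 with NO
    barrier in between.  Each rank forwards both rounds before the next
    rank runs, so round-2 messages are already in flight while round-1
    receives are still pending -- yet nothing mixes, because every
    receive names its source and per-pair message order is preserved."""
    got = {0: [a, b]}
    for r in range(N):
        if r != 0:
            got[r] = [recv(parent(r), r), recv(parent(r), r)]
        for c in children(r):
            send(r, c, got[r][0])
            send(r, c, got[r][1])
    return got

result = bcast_twice("A", "B")
assert all(v == ["A", "B"] for v in result.values())
```

Deleting either guarantee breaks the argument: without pairwise ordering, a rank could pop round-2 data during round 1; without source selectivity, it could match a message meant for a different subtree.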

Algorithms that attempt non-deterministic broadcasts, or other calls that
include wildcard operations, will not generally have the good properties of the
deterministic implementations of ``reduce,'' ``allreduce,'' and ``broadcast.''
Such algorithms would have to utilize monotonically increasing tags
(within a communicator scope) to distinguish successive invocations.
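One way such tags could work is sketched below; the `Comm` class, its `next_tag` allocator, and the `recv_any_source` helper are hypothetical, illustrating only the idea of matching wildcard-source receives on a per-invocation tag:

```python
import itertools

# Hypothetical per-communicator tag allocator: each collective invocation
# draws the next tag, so even a wildcard-source receive can reject
# messages that belong to a different invocation.
class Comm:
    def __init__(self):
        self._tags = itertools.count(1000)  # base value is arbitrary
    def next_tag(self):
        return next(self._tags)

comm = Comm()
t1 = comm.next_tag()   # tag for invocation 1
t2 = comm.next_tag()   # tag for invocation 2; always larger

# Both invocations' messages are in flight at once.
inflight = [(3, t2, "round 2 data"), (1, t1, "round 1 data")]

def recv_any_source(tag):
    """Wildcard on source, selective on tag: only messages carrying the
    requested invocation's tag can match."""
    for i, (src, t, payload) in enumerate(inflight):
        if t == tag:
            return inflight.pop(i)
    raise RuntimeError("no matching message")

src, t, payload = recv_any_source(t1)
assert payload == "round 1 data"   # the round-2 message cannot back-mask
```

The tag restores the selectivity that the wildcard source argument gave up.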

All of the foregoing presumes ``collective calls'' implemented with
point-to-point operations. MPI implementations may or may not implement
their collective calls using point-to-point operations. These algorithms are
used to illustrate the issues of correctness and safety, independent of how
MPI implements its collective calls. See also the section Formalizing the
Loosely Synchronous Model.