The key problem with protecting shared data with a mutex is that there
is no easy way to associate the mutex with the data. It is thus relatively
easy to accidentally write code that fails to lock the right mutex -
or even locks the wrong mutex - and the compiler will not help you.

Moreover, managing the mutex lock also clutters the source code, making
it harder to see what is really going on.

The use of synchronized_value solves both these problems - the mutex
is intimately tied to the value, so you cannot access it without a lock,
and yet access semantics are still straightforward. For simple accesses,
synchronized_value behaves like a pointer-to-T; for example:

Both forms of pointer dereference return a proxy object rather than a
real reference, to ensure that the lock on the mutex is held across the
assignment or method call, but this is transparent to the user.

The pointer-like semantics work very well for simple accesses such as
assignment and calls to member functions. However, sometimes you need
to perform an operation that requires multiple accesses under protection
of the same lock, and that's what the synchronize() method provides.

By calling synchronize() you obtain an strict_lock_ptr object that holds
a lock on the mutex protecting the data, and which can be used to access
the protected data. The lock is held until the strict_lock_ptr object
is destroyed, so you can safely perform multi-part operations. The strict_lock_ptr
object also acts as a pointer-to-T, just like synchronized_value does,
but this time the lock is already held. For example, the following function
adds a trailing slash to a path held in a synchronized_value. The use
of the strict_lock_ptr object ensures that the string hasn't changed
in between the query and the update.

Though synchronized_value works very well for protecting a single object
of type T, nothing that we've seen so far solves the problem of operations
that require atomic access to multiple objects unless those objects can
be combined within a single structure protected by a single mutex.

One way to protect access to two synchronized_value objects is to construct
a strict_lock_ptr for each object and use those to access the respective
protected values; for instance:

This works well in some scenarios, but not all -- if the same two objects
are updated together in different sections of code then you need to take
care to ensure that the strict_lock_ptr objects are constructed in the
same sequence in all cases, otherwise you have the potential for deadlock.
This is just the same as when acquiring any two mutexes.

To be able to use the deadlock-free lock algorithms, we need to use
unique_lock_ptr instead, which models the Lockable concept.

While the preceding takes care of deadlock, access to the synchronized_value
through a unique_lock_ptr requires a lock that is not enforced by the
interface. An alternative, on compilers whose standard library supports
movable std::tuple, is to use the free synchronize function, which
locks all the mutexes associated with the synchronized values and returns
a tuple of strict_lock_ptr.

Copies the underlying value within a scope protected by both mutexes.
The mutexes themselves are not copied. The locks are acquired in a way
that avoids deadlock; for example, there is no problem if one thread assigns
a=b while another assigns b=a.

Returns:

*this

Throws:

Any exception thrown by value_type& operator=(value_type const&)
or mtx_.lock().

If the synchronized_value object involved is const-qualified, then you'll
only be able to call const methods through operator->. So, for example,
vec->push_back("xyz") won't compile if vec is const-qualified. The locking
mechanism relies on the assumption that const methods don't modify their
underlying data.

The synchronize() factory makes it easier to hold the lock for a whole
scope. As discussed, operator-> can only lock for the duration of a single
call, so it is insufficient for complex operations. With synchronize()
you lock the object for an entire scope and can access the object directly
inside that scope.

Queues provide a mechanism for communicating data between components of
a system.

The existing deque in the standard library is an inherently sequential
data structure. Its reference-returning element access operations cannot
synchronize access to those elements with other queue operations. So, concurrent
pushes and pops on queues require a different interface to the queue structure.

Moreover, concurrency adds a new dimension for performance and semantics.
Different queue implementations must trade off uncontended operation cost,
contended operation cost, and element order guarantees. Some of these trade-offs
will necessarily result in semantics weaker than a serial queue.

Locking queues can by nature block waiting for the queue to be non-empty
or non-full.

Lock-free queues will have some trouble waiting for the queue to be
non-empty or non-full. Such queues cannot define waiting operations
such as pull (and push for bounded queues). That is, they could have
blocking operations (presumably emulated with a busy wait) but not
waiting operations.

Threads using a queue for communication need some mechanism to signal
when the queue is no longer needed. The usual approach is to add an additional
out-of-band signal. However, this approach suffers from the flaw that
threads waiting on either full or empty queues need to be woken up
when the queue is no longer needed. Rather than require an out-of-band
signal, we chose to directly support such a signal in the queue itself,
which considerably simplifies coding.

To achieve this signal, a thread may close a queue. Once closed, no
new elements may be pushed onto the queue. Push operations on a closed
queue will either return queue_op_status::closed (when they have a
queue_op_status return type), set the closed parameter if they have one,
or throw sync_queue::closed (when they do not). Elements already on
the queue may still be pulled off. When a queue is both empty and closed,
pull operations will either return queue_op_status::closed (when they have
a status return type), set the closed parameter if they have one, or throw
sync_queue::closed (when they do not).

For cases where blocking for mutual exclusion is undesirable, there are
non-blocking operations. The interface is the same as that of the try
operations, but they are also allowed to return queue_op_status::busy
when the operation cannot complete without blocking.