Now suppose that instead of invoking square_root() from the current thread, you make an asynchronous call:

std::future<double> f=std::async(square_root,-1);
double y=f.get();

It would be ideal if the behavior were exactly the same. And in fact it is. If the function call invoked as part of std::async throws an exception, the exception is stored in the future in place of a stored value, the future becomes ready, and a call to get() throws the stored exception again. (Note: The standard leaves it unspecified whether it is the original exception object that is rethrown or a copy; different compilers and libraries make different choices on this matter.)

The same happens if you wrap the function in a std::packaged_task: if the wrapped function throws an exception when the task is invoked, the exception is stored in the future in place of the result.

Naturally, std::promise provides the same facility, with an explicit function call. If you wish to store an exception rather than a value, you call the set_exception() member function rather than set_value(). This would typically be used in a catch block for an exception thrown as part of the algorithm, to populate the promise with that exception:

If the type of the exception is known in advance, you can instead store a freshly constructed exception with std::make_exception_ptr(), rather than catching and forwarding one. This is much cleaner than using a try/catch block, and it should be used in preference: not only does it simplify the code, but it also gives the compiler more opportunities to optimize your code.

Another way to store an exception in a future is to destroy the std::promise or std::packaged_task associated with the future without calling the set functions on the promise or invoking the packaged task. In either case, the destructor of the std::promise or std::packaged_task will store a std::future_error exception with an error code of std::future_errc::broken_promise in the associated state. When you create a future, you make a promise to provide a value or exception, and by destroying the source of that value or exception without providing one, you break that promise. If the destructor didn't store anything in the future in this case, waiting threads could potentially wait forever.

Up until now, all of the examples have used std::future. However, std::future has its limitations, not the least of which is that only one thread can wait for the result. If you need to wait for the same event from more than one thread, then you need to use std::shared_future instead.

Waiting from Multiple Threads

Although std::future handles all the synchronization necessary to transfer data from one thread to another, calls to the member functions of a particular std::future instance are not synchronized with each other. If you access a single std::future object from multiple threads without additional synchronization, then you have a data race and undefined behavior. This is by design: std::future models unique ownership of the asynchronous result, and the one-shot nature of get() makes such concurrent access pointless anyway: only one thread can retrieve the value, because after the first call to get() there's no value left to retrieve.

If the design of your concurrent application requires that multiple threads can wait for the same event, then don't despair; std::shared_future allows exactly that. While std::future is only movable, so ownership can be transferred among instances but only one instance can refer to a particular asynchronous result at a time, std::shared_future instances are copyable, so you can have multiple objects referring to the same associated state.

Now, with std::shared_future, member functions on an individual object are still unsynchronized, so to avoid data races when accessing a single object from multiple threads, you must protect accesses with a lock. The preferred method is to copy the object, giving each thread access to its own copy. Accesses to the shared asynchronous state from multiple threads are safe if each thread accesses that state through its own std::shared_future object.

One potential use of std::shared_future is for implementing parallel execution of something akin to a complex spreadsheet. Each cell has a single final value that may be used by formulas in other cells. The formulas for calculating the results of the dependent cells can use a std::shared_future to reference the first cell. If all the formulas for the individual cells are executed in parallel, tasks that can proceed to completion will do so, while those that depend on others will block until their dependencies are ready. This allows the system to make maximum use of available hardware concurrency.

Instances of std::shared_future that reference a particular asynchronous state are constructed from instances of std::future that reference that state. Since std::future objects don't share ownership of the asynchronous state with other objects, ownership must be transferred into std::shared_future using std::move, leaving std::future in an empty state, as if it had been default-constructed:

When the std::future is an rvalue, for example the temporary returned directly by get_future() on a std::promise<std::string>, the transfer of ownership is implicit: the std::shared_future<> is constructed from an rvalue of type std::future<std::string>.

std::future has an additional feature to ease the use of std::shared_future with auto, the new facility for automatically deducing the type of a variable from its initializer. std::future's share() member function creates a new std::shared_future and transfers ownership to it directly. This can save a lot of typing and makes code easier to change:

In this case, the type of sf is deduced to be std::shared_future<std::map<SomeIndexType, SomeDataType, SomeComparator, SomeAllocator>::iterator>, which is rather a mouthful. If the comparator or allocator is changed, then you need to change only the type of the promise; the type of the future is automatically updated to match.

Conclusion

Synchronizing operations among threads is an important part of writing an application that uses concurrency. If there is no synchronization, after all, the threads might as well be written as separate applications. There are many design strategies and language features to help manage synchronization. I hope this article will inspire you to consider futures when you choose among them.

Anthony Williams is a UK-based developer and consultant. He has been the maintainer of the Boost Thread library since 2006, and is the developer of the just::thread implementation of the C++11 thread library from Just Software Solutions Ltd. This article is adapted from material that appears in his book, C++ Concurrency in Action: Practical Multithreading (Manning Publications, 2012).

