You can implement the same functionality for your program in 10 minutes. And all you need
is the Boost.ProgramOptions library.

If you have been programming in Java, C#, or Delphi, you will definitely miss the ability to
create containers with the Object value type in C++. The Object class in those languages is
a base class for almost all types, so you can assign (almost) any value to it at any time.

Just imagine how great it would be to have such a feature in C++!

C++03 unions can only hold extremely simple types called Plain Old Data (POD) types. For
example, in C++03 you cannot store a std::string or a std::vector in a union.

Are you aware of the concept of unrestricted unions in C++11? Let me tell you about it
briefly. C++11 relaxes the requirements for unions, but you have to manage the construction
and destruction of non-POD types yourself: you must call the in-place
constructors/destructors and remember which type is stored in the union. A huge amount of
work, isn't it?

Can we have an unrestricted-union-like variable in C++03 that manages the object's lifetime
and remembers its type?

Imagine that you are creating a wrapper around some SQL database interface. You decided
that boost::any will perfectly match the requirements for a single cell of a database
table. Some other programmer will be using your classes, and their task will be to get a
row from the database and compute the sum of the arithmetic values in that row.

This is what such a code would look like:

Imagine a function that does not throw an exception but has to return either a value or an
indication that an error has occurred. In Java or C#, such cases are handled by comparing
the returned reference with null: if it is null, an error has occurred. In C++, returning
a pointer from a function confuses library users and usually requires dynamic memory
allocation (which is slow).

Let's play a guessing game! What can you tell about the following function?

char* vector_advance(char* val);

Should the returned value be deallocated by the caller? Does the function attempt
to deallocate the input parameter? Should the input parameter be zero-terminated, or
should the function assume that the input parameter has a specified width?

Now, let's make the task harder! Take a look at the following line:

char ( &vector_advance( char (&val)[4] ) )[4];

Do not worry. I also scratched my head for half an hour before I understood what is
happening here: vector_advance is a function that accepts and returns a reference to an
array of four elements. Is there a way to write such a function clearly?

There is a very nice present for those who like std::pair: Boost has a library called Boost.Tuple.
It is just like std::pair, but it can also work with triples, quads, and even
bigger collections of types.

If you work with the standard library a lot and use the <algorithm> header, you definitely
write a lot of functional objects. In C++14, you can use generic lambdas for that; in C++11,
you only have non-generic lambdas. In earlier versions of the C++ standard, you can
construct functional objects using adapter functions such as bind1st, bind2nd, ptr_fun,
mem_fun, and mem_fun_ref, or you can write them by hand (because the adapter functions look
scary). Here is some good news: Boost.Bind can be used instead of the ugly adapter functions,
and it provides a much more human-readable syntax.

However, the example from earlier is not very portable. It does not work when RTTI is
disabled, and it does not always produce a nice human-readable name. On some platforms,
code from earlier will output just i or d.

Things get worse if we need a type name without stripping the const , volatile , and references...

One of the greatest features of the C++11 standard is rvalue references. This feature allows
us to modify temporary objects, "stealing" resources from them. As you can guess, the C++03
standard has no rvalue references, but using the Boost.Move library you can write portable
code that emulates them and, even better, actually get started with move semantics.

You have almost certainly encountered situations where a class owns some
resources that must not be copied for technical reasons:

The C++ compiler in the preceding example generates a copy constructor and an assignment
operator, so a potential user of the descriptor_owner class is able to do the
following awful things:

But such a workaround won't allow us to use descriptor_owner in STL or Boost containers.
And by the way, it looks awful!

C++11 added a bunch of cool new algorithms to the <algorithm> header, and C++14 has even
more. If you're stuck with a pre-C++11 compiler, you have to write those from
scratch. For example, to output the characters with code points from 65 to 125, you
would have to write the following on a pre-C++11 compiler:

Sometimes we are required to dynamically allocate memory and construct a
class in that memory. And, that's where the troubles start. Have a look at the following code:

We cannot deallocate p at the end of the while loop because it may still be used by the
threads that run the process functions, and the process functions cannot delete p because
they do not know whether other threads are still using it.

We already saw how to manage pointers to a resource in the Managing pointers to classes
that do not leave scope recipe. But when we deal with arrays, we need to call delete[]
instead of a simple delete; otherwise, the behavior is undefined and resources leak. Have
a look at the following code:

We continue coping with pointers, and our next task is to reference count an array. Let's
take a look at a program that gets some data from the stream and processes it in different
threads. The code to do this is as follows:

#include <cstring>
#include <boost/thread.hpp>
#include <boost/bind.hpp>

void do_process(const char* data, std::size_t size);

void do_process_in_background(const char* data, std::size_t size) {
    // We need to copy the data, because we do not know
    // when it will be deallocated by the caller.
    char* data_cpy = new char[size];
    std::memcpy(data_cpy, data, size);

    // Starting a thread of execution to process the data.
    boost::thread(boost::bind(&do_process, data_cpy, size))
        .detach();

    // We cannot delete[] data_cpy here, because
    // do_process may still be working with it.
}

Consider the situation where you are developing a library that has its API declared in
header files and its implementation in source files. This library must have a function that
accepts any functional object. Take a look at the following code:

We are continuing with the previous example, and now we want to pass a pointer to a function
to our process_integers() method. Shall we add an overload for function pointers,
or is there a more elegant way?

We are continuing with the previous example, and now we want to use a lambda function with
our process_integers() method.

There are cases when we need to store pointers in a container. Examples include:
storing polymorphic data in containers, forcing fast copying of data in containers, and
strict exception-safety requirements for operations with the data in containers. In such
cases, the C++ programmer has the following choices:

* Store pointers in containers and take care of their destruction using operator
delete. Such an approach is error-prone and requires a lot of typing.

* Store smart pointers in containers. For C++03, you'd have to use std::auto_ptr, but the
std::auto_ptr class is deprecated and must not be used in containers. For C++11,
you'd use std::unique_ptr. This solution is a good one, but it cannot be used in C++03, and you still need to
write a comparator functional object.

* Use Boost.SmartPtr in the container. This solution is portable, but you still need to write comparators, and it adds
performance penalties (the atomic counter requires additional memory,
and its increments/decrements are not as fast as non-atomic operations).

If you have dealt with languages such as Java, C#, or Delphi, you have obviously used the
try{} finally{} construct, or scope(exit) in the D programming language. Let me
briefly describe what these language constructs do.

When a program leaves the current scope via return or exception, code in the finally or
scope(exit) blocks is executed. This mechanism is perfect for implementing the RAII
pattern as shown in the following code snippet:

This is not a very common case in programming, but when such mistakes happen, it is not
always easy to see how to work around them. Some people try to do it by changing the
order of logger_ and the base-type initialization:

It won't work as they expect because direct base classes are initialized before nonstatic
data members, regardless of the order of the member initializers.

Converting strings to numbers in C++ depresses a lot of people because of how inefficient
and user-unfriendly it is. Let's see how the string "100" can be converted to int:

#include <sstream>

std::istringstream iss("100");
int i;
iss >> i;
// And now the 'iss' variable will get in the way
// until the end of the scope.
// It is better not to think about how many unnecessary operations,
// virtual function calls, and memory allocations occurred
// during that conversion.

The C functions are not much better:

#include <cstdlib>

char* end;
int i = std::strtol("100", &end, 10);
// Did it convert the whole value to int, or did it stop somewhere
// in the middle?
// And now the 'end' variable gets in the way.
// By the way, we wanted an integer, but strtol returns long
// int... Did the converted value fit into int?

In this recipe we will continue discussing lexical conversions, but now we will be converting
numbers to strings using Boost.LexicalCast . And as usual, boost::lexical_cast
will provide a very simple way to convert the data.

You might remember situations where you wrote something like the following code:

void some_function(unsigned short param);
int foo();
// Somewhere in code
// Some compilers may warn that int is being converted to
// unsigned short and that there is a possibility of losing
// data
some_function(foo());

Usually, programmers just silence such warnings by explicitly casting to the unsigned
short datatype, as demonstrated in the following code snippet:

There is a feature in Boost.LexicalCast that allows users to use their own types in
lexical_cast . This feature just requires the user to write the correct std::ostream
and std::istream operators for their types.

Imagine that some programmer designed an awful interface as follows (this is a good example
of how interfaces should not be written):

Our task is to write a function that eats bananas and throws an exception if something
other than a banana comes along (eating pigeons is gross!). If we dereference the value
returned by the try_produce_banana() function without checking it, we are in danger
of dereferencing a null pointer.


It is a common task to parse small texts, and such situations are always a dilemma: shall we
use third-party professional parsing tools such as Bison or ANTLR, or shall we try to
write the parser by hand using only C++ and the STL? The third-party tools are good for
handling the parsing of complex texts, and it is easy to write parsers using them, but they
require additional tools to generate C++ or C code from their grammars and add more
dependencies to your project. Handwritten parsers are usually hard to maintain, but they
require nothing except a C++ compiler.

Let's start with a very simple task to parse a date in ISO format as follows:

YYYY-MM-DD

The following are the examples of possible input:

2013-03-01
2012-12-31 // (woo-hoo, it's almost a new year!)

In the previous recipe we were writing a simple parser for dates. Imagine that some time
has passed and the task has changed. Now we need to write a date-time parser that will
support multiple input formats plus zone offsets. So now our parser should understand the
following inputs:

But this is a bad solution. The BufSizeV and sizeof(value) values are known at compile
time, so we can potentially make this code fail compilation if the buffer is too small, instead
of having a runtime assert (which may not trigger during debugging if the function is not
called, and may even be compiled out in release mode, so very bad things may happen).

It's a common situation: we have a templated class that implements some functionality.
Have a look at the following code snippet:

Now the question: how do we make the compiler automatically choose the correct class for a
specified type?

We continue working with Boost metaprogramming libraries. In the previous recipe, we saw
how to use enable_if_c with classes, now it is time to take a look at its usage in template
functions. Consider the following example.

Initially, we had a template function that works with all the available types:

And we have the same function optimized for sizes of 1, 4, and 8 bytes. How do we rewrite
the process function so that it can dispatch calls to the optimized versions?

We need to implement a type trait that returns true if the std::vector type is passed to it
as a template parameter.

Imagine that we are working with classes from different vendors that implement different
sets of arithmetic operations and have constructors from integers. And we want to
make a function that increments the value by one when any such class is passed to it. Also,
we want this function to be efficient! Take a look at the following code:

In C++11, we can use the auto keyword instead of ??? and that will work. Is there a way to
do it in C++03?

On modern multi-core machines, to achieve maximal performance (or just to provide a good
user experience), programs usually must use multiple threads of execution. Here is a
motivating example in which we need to create and fill a big file in a thread that draws
the user interface:

This 'Oops!' is not written there by accident. It will be a surprise for some people, but
there is a big chance that shared_i won't be equal to 0:

shared_i == 19567

And it gets even worse when a common resource holds some non-trivial classes:
segmentation faults and memory leaks may (and will) occur.
We need to change the code so that only one thread modifies the shared_i variable at any
single moment of time, and so that all the processor and compiler optimizations that break
multithreaded code are bypassed.

In the previous recipe, we saw how to safely access a common resource from different
threads. But in that recipe, we were doing two system calls (in locking and unlocking the
mutex) just to get the value from an integer:

This looks lame and slow! Can we make the code from the previous recipe better?

Let's call the functional object that takes no arguments a task.

typedef boost::function<void()> task_t;

Now, imagine a situation where we have threads that post tasks and threads that execute
posted tasks. We need to design a class that can be safely used by both types of thread. This
class must have methods for getting a task (or blocking and waiting for a task until it is posted
by another thread), checking and getting a task if we have one (returning an empty task if no
tasks remain), and a method to post tasks.

Imagine that we are developing an online service. We have a map of registered users,
with some properties for each user. The map is accessed by many threads but is very rarely
modified. All operations with it are done in a thread-safe manner by acquiring a unique
lock on a mutex.

But any operation, even getting/reading a resource, results in waiting on a locked mutex;
therefore, this class will become a bottleneck very soon.

Can we fix it?

Let's take a glance at the Creating a work_queue class recipe. Each task there can be
executed in one of many threads, and we do not know which one. Imagine that we want to
send the results of an executed task over some connection.

* Have a single connection for all the threads and guard it with a mutex
(which is also slow)

* Have a pool of connections, get a connection from it in a thread-safe manner
and use it (a lot of coding is required, but this solution is fast)

* Have a single connection per thread (fast and simple to implement)

So, how can we implement the last solution?

Sometimes, we need to kill a thread that has eaten too many resources or is just executing
for too long. For example, a parser works in a thread (and actively uses Boost.Thread),
but we already have the required amount of data from it, so parsing can be stopped. All we
have is:

Note the return read_defaults(); line. There may be situations when the server does not
respond because of networking issues or some other problem. In those cases, we attempt to
read the defaults from a file:

// Executes for a long time.
std::vector<std::string> read_defaults();

From the preceding code we get a problem: the server may be unreachable for some
noticeable time, and for all that time we'll be rereading the file on each act call. This
significantly affects performance.

So, we have to read and store the data in the current instance in a thread-safe manner on
the first remote-server failure, and not read it again on subsequent failures. There are
many ways to do that, but let's look at the most correct one.

For the next few paragraphs, you'll be one of the people who write games. Congratulations,
you can play at work!

You're developing a server and you have to write code for exchanging loot between two
users:

Each user action may be processed concurrently by different threads on the server, so you
have to guard the resources with mutexes. A junior developer tried to deal with the
problem, but his solution does not work:

The issue in the preceding code is the well-known ABBA deadlock problem. Imagine that
thread 1 locks mutex A and thread 2 locks mutex B. Now thread 1 attempts to lock the
already locked mutex B, and thread 2 attempts to lock the already locked mutex A. As a
result, the two threads block each other forever, since each needs a resource held by the
other thread in order to proceed.

Now, if user1 and user2 call exchange_loot for each other concurrently, then we may end
up in a situation where user1.exchange_loot(user2) has locked
user1.loot_mutex_ and user2.exchange_loot(user1) has locked
user2.loot_mutex_. From that point, user1.exchange_loot(user2) waits forever in an attempt
to lock user2.loot_mutex_, and user2.exchange_loot(user1) waits forever in an attempt
to lock user1.loot_mutex_.

First of all, let's take care of the class that will hold all the tasks and provide methods for their
execution. We were already doing something like this in the Creating a work_queue class
recipe, but some of the following problems were not addressed:

* A task may throw an exception that leads to a call to std::terminate

* An interrupted thread may not notice interruption but will finish its task and interrupt
only during the next task (which is not what we wanted; we wanted to interrupt the
previous task)

* Our work_queue class was only storing and returning tasks, but we need to add
methods for executing existing tasks

* We need a way to stop processing the tasks

It is a common task to check something at specified intervals; for example, we need to check
some session for an activity once every 5 seconds. There are two popular solutions to such
a problem:

* The bad solution creates a thread that does the checking and then sleeps for 5
seconds. This is a lame solution that eats a lot of system resources and scales
badly.

* The right solution uses system-specific APIs for manipulating timers
asynchronously. This is a better solution that requires some work and is not
portable, unless you use Boost.Asio.

Receiving or sending data over the network is a slow operation. While packets are being
received by the machine, and while the OS verifies them and copies the data into the
user-specified buffer, multiple seconds may pass.

We could do a lot of work instead of just waiting! Let's modify our tasks_processor class
so that it is capable of sending and receiving data in an asynchronous manner. In
nontechnical terms: we ask it to receive at least N bytes from the remote host and, after
that is done, to call our functor, and by the way, not to block on this call. Readers who
know about libev, libevent, or Node.js will find a lot of familiar things in this recipe.

Server-side code that works with a network often looks like a sequence in which we first
get a new connection, read data, process it, and then send back the result. Imagine that we
are creating some kind of authorization server that must process a huge number of requests
per second. In that case, we need to accept, receive, and send asynchronously, and process
tasks in multiple threads.

In this recipe, we'll see how to extend our tasks_processor class to accept and process
incoming connections, and, in the next recipe, we'll see how to make it multithreaded.

Now it is time to make our tasks_queue process tasks in multiple threads. How hard could
this be?

Sometimes there is a requirement to process tasks at specified time intervals. Compared
to the previous recipes, where we were processing tasks in the order of their appearance
in the queue, this is a big difference.

Consider an example where we are writing a program that connects two subsystems, one of
which produces data packets and the other writes modified data to the disk (something like
this can be seen in video cameras, sound recorders, and other devices). We need to process
data packets one by one, smoothly with the least jitter, and in multiple threads.

Our previous tasks_queue was bad at processing tasks in a specified order, so how can we solve this?

In multithreaded programming, there is an abstraction called a barrier. It stops the
threads of execution that reach it until the requested number of threads are blocked on it.
After that, all the threads are released and continue with their execution.

For example, we want to process different parts of the data in different threads and then send the data:

The data_barrier.wait() method blocks until all the threads fill the data. After that,
all the threads are released; the thread with the index 0 will compute data to be sent using
compute_send_data(data), while others are again waiting at the barrier.

Looks lame, doesn't it?

Processing exceptions is not always trivial and may consume a lot of time. Consider a
situation in which an exception must be serialized and sent over the network. This may take
milliseconds and a few thousand lines of code. After the exception is caught, it is not
always the best time and place to process it.

Can we store exceptions and delay their processing?

When writing server applications (especially for Linux), catching and processing
signals is required. Usually, all the signal handlers are set up at server start and do not
change during the application's execution.

The goal of this recipe is to make our tasks_processor class capable of processing signals.

This is a pretty common task. We have two non-Unicode or ANSI character strings:

We need to compare them in a case-insensitive manner. There are a lot of ways to do
that; let's take a look at Boost's.

Let's do something useful! It is a common case that a user's input must be checked using
some regular expression. The problem is that there are many regular expression syntaxes,
and expressions written in one syntax are not handled well by the others. Another
problem is that long regexes are not easy to write.

So in this recipe, we are going to write a program that supports different regular expression
syntaxes and checks that the input strings match the specified regexes.

My wife enjoyed the Matching strings using regular expressions recipe very much. But she
wanted more and told me that I'll get no food until I improve that recipe so that it can
replace parts of the input string according to a regex match.

OK, here it comes. Each matched subexpression (a part of the regex in parentheses) gets a
unique number starting from 1; that number is used to create a new string.

The printf family of functions is a threat to security. It is very bad design to allow
users to supply their own strings as the type and format specifiers. So what do we do when
a user-defined format is required? How shall we implement the
std::string to_string(const std::string& format_specifier) const; member function
of the following class?

class i_hold_some_internals {
int i;
std::string s;
char c;
// ...
};

Situations where we need to erase something in a string, replace a part of the string, or
erase the first or last occurrence of some substring are very common. The standard library
allows us to do most of this, but it usually involves writing too much code.

There are situations when we need to split a string into substrings and do something
with those substrings. In this recipe, we want to split a string into sentences, count the
characters and whitespace, and, of course, we want to use Boost and be as efficient as
possible.

This recipe is the most important recipe in this chapter! Let's take a look at a very common
case, where we write some function that accepts a string and returns a part of the string
between character values passed in the starts and ends arguments:

Do you like this implementation? In my opinion, it looks awful. Consider the following call to it:

between_str("Getting expression (between brackets)", '(', ')');

In this example, a temporary std::string variable is constructed from "Getting
expression (between brackets)". The character array is long enough, so there is a big
chance that dynamic memory allocation is performed inside the std::string constructor and
the character array is copied into it. Then, somewhere inside the between_str
function, a new std::string is constructed, which may also lead to another dynamic memory
allocation and a copy.

So, this simple function may, and in most cases will:

* Call dynamic memory allocation (two times)

* Copy string (two times)

* Deallocate memory (two times)

Can we do better?

There are situations when it would be great to work with all the template parameters as if
they were in a container. Imagine that we are writing something such as Boost.Variant:

The preceding code is the place where all the following interesting tasks start to happen:

* How can we remove constant and volatile qualifiers from all the types?

* How can we remove duplicate types?

* How can we get the sizes of all the types?

* How can we get the maximum size of the input parameters?

All these tasks can be easily solved using Boost.MPL.

The task of this recipe is to modify the content of one boost::mpl::vector type
depending on the content of a second boost::mpl::vector type. We'll call
the second vector the vector of modifiers; each of those modifiers can be one of the
following types:

// Make unsigned
struct unsigne; // No typo: 'unsigned' is a keyword, we cannot use it.
// Make constant
struct constant;
// Otherwise we do not change type
struct no_change;

So where shall we start from?

Many good features were added in C++11 to simplify metaprogramming. One such
feature is the alternative function syntax, which allows deducing the result type of a
template function. Here is an example:

In this recipe, we'll try to make our own higher-order metafunction named coalesce,
which accepts two types and two metafunctions. The coalesce metafunction applies
the first type parameter to the first metafunction and compares the resulting type
with the boost::mpl::false_ type. If the resulting type is boost::mpl::false_,
it returns the result of applying the second type parameter to the second metafunction;
otherwise, it returns the first result type:

Lazy evaluation means that the function is not called until we really need its result.
Knowledge of this recipe is highly recommended for writing good metafunctions. The
importance of lazy evaluation will be shown in the following example.

Imagine that we are writing a metafunction that accepts a function Func, a parameter
Param, and a condition Cond. The resulting type of that metafunction must be a fallback
type if applying Cond to Param returns false; otherwise, the result must be Func applied
to Param:

This metafunction is a place where we cannot live without lazy evaluation, because it may
be impossible to apply Func to Param when Cond is not met. Such an attempt would always
result in a compilation failure, and Fallback would never be returned.

This recipe and the next one are devoted to a mix of compile time and runtime features.
We'll be using the Boost.Fusion library and see what it can do.

Remember that we were talking about tuples and arrays in the first chapter? Now, we want
to write a single function that can stream elements of tuples and arrays to strings.

This recipe will show a tiny piece of the Boost.Fusion library's abilities. We'll be splitting
a single tuple into two tuples, one with arithmetic types and the other with all other types.

Most of the metaprogramming tricks that we saw in this chapter were invented long before
C++11. Probably, you've already heard about some of that stuff.

How about something brand new? How about implementing the previous recipe in C++14
with a library that puts the metaprogramming upside down and makes your eyebrows go
up? Fasten your seatbelts, we're diving into the world of Boost.Hana .

For the past two decades, C++ programmers have used std::vector as the default
sequence container. It is a fast container that does not do many allocations and stores
elements in a CPU-cache-friendly way; and because the container stores its elements
contiguously, functions like std::vector::data() allow interoperation with pure C functions.

But we want more! There are cases when we know the typical number of elements to be stored
in the vector, and we can improve its performance by totally eliminating the memory
allocations for that case.

Imagine that we are writing a high-performance system for processing bank transactions.
A transaction is a sequence of operations that must all succeed, or all fail if at least
one of them fails. We know that 99% of transactions consist of 8 or fewer operations, and
we wish to speed things up:

Here's a question: what container should we use to return a sequence from a function if we
know that the sequence never has more than N elements, and N is not big? For example, how
should we write the get_events() function that returns at most five events?

#include <vector>
std::vector<event> get_events();

The std::vector<event> allocates memory, so the code from earlier is not a good
solution.

#include <boost/array.hpp>
boost::array<event, 5> get_events();

boost::array<event, 5> does not allocate memory, but it constructs all five
elements. There's no way to return fewer than five elements.

The boost::container::small_vector<event, 5> does not allocate memory for five or
fewer elements and allows us to return fewer than five elements. However, the solution is
not perfect, because it is not obvious from the function's interface that it never returns
more than five elements.

It is a common task to manipulate strings. Here, we'll see how the operation of string
comparison can be done quickly using some simple tricks. This recipe is a trampoline for
the next one, in which the techniques described here will be used to achieve
constant-time-complexity searches.

So, we need to make some class that is capable of quickly comparing strings for equality.

In the previous recipe, we saw how string comparison can be optimized using hashing.
After reading it, the following question may arise: can we make a container that will cache
hashed values to use faster comparison?

The answer is yes, and we can do much more. We may achieve almost constant search,
insertion, and removal times for elements.

A few times a year, we need something that can store and index a pair of values.
Moreover, we need to get the first part of the pair using the second, and the second part
using the first. Confused? Let me show you an example. We create a vocabulary class.
When the users put values into it, the class must return identifiers, and when the users put
identifiers into it, the class must return values.

To be more concrete, users put login names into our vocabulary and wish to get unique
identifiers out of it. They also wish to get all the logins for an identifier.

Let's see how it can be implemented using Boost.

In the previous recipe, we made some kind of vocabulary, which is good when we need to
work with pairs. But what if we need much more advanced indexing? Let's make a
program that indexes persons:

We will need a lot of indexes, for example, by name, ID, height, and weight.

Nowadays, we usually use std::vector when we need a nonassociative, nonordered
container. This is recommended by Andrei Alexandrescu and Herb Sutter in the book
C++ Coding Standards. Even those users who did not read the book usually use
std::vector. Why? Well, std::list is slower and uses many more resources than
std::vector. The std::deque container is very close to std::vector, but does not store its
values contiguously.

If we need a container where erasing and inserting elements does not invalidate iterators,
then we are forced to choose a slow std::list.

But wait, we may assemble a better solution using Boost!

After reading the previous recipe, some of the readers may start using fast pool allocators
everywhere, especially for std::set and std::map. Well, I'm not going to stop you from
doing that, but at least let's take a look at an alternative: flat associative containers. These
containers are implemented on top of the traditional vector container and store their values
in sorted order.

I'm guessing you've seen a bunch of ugly macros used to detect the compiler on which the
code is compiled. Something like this is a typical practice in the C world:

Now, try to come up with a good macro to detect the GCC compiler. Try to make that
macro usage as short as possible.

Take a look at the following recipe to verify your guess.

Some compilers support extended arithmetic types, such as 128-bit floats or
integers. Let's take a quick glance at how to use them via Boost.

We'll be creating a function that accepts three parameters and returns the product of
those parameters. If the compiler supports 128-bit integers, then we use them. If the compiler
supports long long, then we use it; otherwise, we need to issue a compile-time error.

Some companies and libraries have specific requirements for their C++ code, such as
successful compilation without RTTI.

In this small recipe, we'll not just detect disabled RTTI, but also write a Boost-like library
from scratch that stores information about types and compares types at runtime, even
without typeid.

Chapter 4, Compile-time Tricks, and Chapter 8, Metaprogramming, were devoted to
metaprogramming. If you tried to use techniques from those chapters, you may
have noticed that writing a metafunction can take a lot of time. So it may be a good idea
to experiment with metafunctions using more user-friendly methods, such as C++11
constexpr, before writing a portable implementation.

In this recipe, we'll take a look at how to detect constexpr support.

C++11 has very specific logic when user-defined types (UDTs) are used in standard library
containers. Some containers use move assignment and move construction only if the move
constructor does not throw exceptions or there is no copy constructor.

Let's see how we can assure the compiler that our move_nothrow class has a non-throwing
move assignment operator and a non-throwing move constructor.

Almost all modern languages have the ability to make libraries: collections of classes and
functions that have a well-defined interface. C++ is no exception to this rule. We have two
types of libraries: runtime (also called shared or dynamic) and static. But writing libraries is
not a simple task in C++. Different platforms have different methods for describing which
symbols must be exported from a shared library.

Let's take a look at how to manage symbol visibility in a portable way using Boost.

Boost is being actively developed, so each release contains new features and libraries. Some
people wish to have libraries that compile for different versions of Boost and also want to
use some of the features of the new versions.

Let's take a look at the boost::lexical_cast change log. According to it, Boost 1.53 has
a lexical_cast(const CharType* chars, std::size_t count) function overload.
Our task for this recipe will be to use that function overload for new versions of Boost, and
work around that missing function overload for older versions.

There are standard library functions and classes to read and write data to files. But before
C++17, there were no functions to list files in a directory, get the type of a file, or get access
rights for a file.

Let's see how such shortcomings can be fixed using Boost. We'll write a program that lists
the names, write accesses, and types of files in the current directory.

In these lines, we attempt to write something to file.txt in the dir/subdir directory.
The attempt will fail if there is no such directory. The ability to work with filesystems is
necessary for writing good, working code.

In this recipe, we'll construct a directory and a subdirectory, write some data to a file, and
try to create a symlink. If the symbolic link's creation fails, we erase the created entities. We
should also avoid using exceptions as a mechanism of error reporting, preferring some kind
of return codes.

Let's see how that can be done in an elegant way using Boost.

Here's a tricky question: we want to allow users to write extensions to the functionality of
our program, but we do not want to give them the source code. In other words, we'd like to
say, "Write a function X and pack it into a shared library. We may use your function along with
the functions of some other users!"

You meet this technique in everyday life: your browser uses it to allow
third-party plugins, your text editor may use it for syntax highlighting,
games use dynamic library loading for downloadable content (DLC)
and for adding gamers' content, and web pages are returned by servers that
use modules/plugins for encryption/authentication, and so forth.

What are the requirements for a user's function, and how can we call that function at some
point without linking our program against the user's shared library?

When reporting errors or failures, it is more important to report the steps that led to the
error than the error itself. Consider this naive trading simulator:

int main() {
    int money = 1000;
    start_trading(money);
}

All it reports is a line:

Sorry, you're bankrupt!

That's a no-go. We want to know how it happened and what steps led to
bankruptcy!

Okay. Let's fix the following function and make it report the steps that led to bankruptcy:

Sometimes, we write programs that communicate with each other a lot. When programs are
run on different machines, using sockets is the most common technique for communication.
But if multiple processes run on a single machine, we can do much better!

Let's take a look at how to make a single memory fragment available from different
processes using the Boost.Interprocess library.

In the previous recipe, we saw how to create shared memory and how to place some objects
in it. Now it's time to do something useful. Let's take an example from the Creating a
work_queue class recipe and make it work for multiple processes. At the end of this
example, we'll get a class that can store different tasks and pass them between processes.

It is hard to imagine writing core C++ classes without pointers. Pointers and references
are everywhere in C++, and they do not work in shared memory! So if we have a structure like
the following in shared memory and assign the address of some integer variable in shared memory
to pointer_, we won't get the correct address in another process that attempts to use
pointer_ from that instance of with_pointer:

struct with_pointer {
    int* pointer_;
    // ...
    int value_holder_;
};

How can we fix that?

All around the Internet, people are asking "What is the fastest way to read files?". Let's
make our task for this recipe even harder: "What is the fastest and most portable way to
read binary files?"

Nowadays, plenty of embedded devices still have only a single core. Developers write for
those devices, trying to squeeze maximum performance out of them.

Using Boost.Threads or some other thread library for such devices is not effective. The
OS will be forced to schedule threads for execution, manage resources, and so on, as the
hardware cannot run them in parallel.

So, how can we force a program to switch to the execution of a subprogram while waiting
for some resource in the main part? Moreover, how can we control the time of the
subprogram's execution?

Some tasks require representing data as a graph. Boost.Graph is a library that was
designed to provide a flexible way of constructing and representing graphs in memory. It
also contains a lot of algorithms that work with graphs, such as topological sort, breadth first
search, depth first search, and Dijkstra's shortest paths.

Well, let's perform some basic tasks with Boost.Graph!

Making programs that manipulate graphs was never easy because of issues with
visualization. When we work with standard library containers such as std::map and
std::vector, we can always print the container's contents and see what is going on inside.
But when we work with complex graphs, it is hard to visualize the content in a clear way;
a textual representation is not human friendly because it typically contains too many vertices
and edges.

In this recipe, we'll take a look at the visualization of Boost.Graph using the Graphviz
tool.

I know of many examples of commercial products that use incorrect methods for getting
random numbers. It's a shame that some companies still use rand() in cryptography and
banking software.

Let's see how to get a fully random uniform distribution using Boost.Random that is
suitable for banking software.

Some projects require specific trigonometric functions, a library for numerically solving
ordinary differential equations, and facilities for working with distributions and constants. All
those parts of Boost.Math would be hard to fit even in a separate book, and a single recipe
definitely won't be enough. So let's focus on very basic everyday functions that work with
float types.

We'll write a portable function that checks an input value for infinity and not-a-number (NaN)
values and changes the sign if the value is negative.

This recipe and the next one are devoted to auto-testing using the Boost.Test library,
which is used by many Boost libraries. Let's get hands-on with it and write some tests for
our own class.

Writing auto tests is good for your project. However, managing test cases is hard when the
project is big and many developers work on it. In this recipe, we'll take a look at how to run
individual tests and how to combine multiple test cases in a single module.

Let's pretend that two developers are testing the foo structure declared in the foo.hpp
header, and we wish to give them separate source files to write tests in. In that case, the
developers won't interfere with each other and can work in parallel. However, the default
test run must execute the tests of both developers.

I've left you something really tasty for dessert: Boost's Generic Image Library, or just
Boost.GIL, which allows you to manipulate images without worrying too much about
image formats.

Let's do something simple and interesting with it. For example, let's make a program that
negates any picture.
