The text output stream sink backend is the most generic backend provided
by the library out of the box. The backend is implemented in the basic_text_ostream_backend
class template (with the text_ostream_backend
and wtext_ostream_backend
convenience typedefs provided for narrow and wide character support). It
supports formatting log records into strings and putting them into one or
several streams. Each attached stream gets the same formatting result, so
if you need to format log records differently for different streams, you
will need to create several sinks, each with its own formatter.

The backend also provides a feature that may come in useful when debugging
your application. With the auto_flush
method one can tell the sink to automatically flush the buffers of all
attached streams after each log record is written. This will, of course,
degrade logging performance, but in case of an application crash there
is a good chance that the last log records will not be lost.

void init_logging()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    // Create a backend and attach a couple of streams to it
    boost::shared_ptr< sinks::text_ostream_backend > backend =
        boost::make_shared< sinks::text_ostream_backend >();
    backend->add_stream(
        boost::shared_ptr< std::ostream >(&std::clog, boost::null_deleter()));
    backend->add_stream(
        boost::shared_ptr< std::ostream >(new std::ofstream("sample.log")));

    // Enable auto-flushing after each log record written
    backend->auto_flush(true);

    // Wrap it into the frontend and register in the core.
    // The backend requires synchronization in the frontend.
    typedef sinks::synchronous_sink< sinks::text_ostream_backend > sink_t;
    boost::shared_ptr< sink_t > sink(new sink_t(backend));
    core->add_sink(sink);
}

Although it is possible to write logs into files with the text
stream backend, the library also offers a special sink backend with
an extended set of features suitable for file-based logging. These features
include:

Log file rotation based on file size and/or time

Flexible log file naming

Placing the rotated files into a special location in the file system

Deleting the oldest files in order to free more space on the file system

File rotation is implemented by the sink backend itself. The file name
pattern and rotation thresholds can be specified when the text_file_backend
backend is constructed.

void init_logging()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    boost::shared_ptr< sinks::text_file_backend > backend =
        boost::make_shared< sinks::text_file_backend >(
            keywords::file_name = "file_%5N.log",
            keywords::rotation_size = 5 * 1024 * 1024,
            keywords::time_based_rotation = sinks::file::rotation_at_time_point(12, 0, 0)
        );

    // Wrap it into the frontend and register in the core.
    // The backend requires synchronization in the frontend.
    typedef sinks::synchronous_sink< sinks::text_file_backend > sink_t;
    boost::shared_ptr< sink_t > sink(new sink_t(backend));

    core->add_sink(sink);
}

In the snippet above, the file_name parameter specifies the file name
pattern, rotation_size makes the file rotate upon reaching 5 MiB in size,
and time_based_rotation additionally rotates it every day at noon,
whichever comes first.

Note

The file size at rotation can be imprecise. The implementation counts
the number of characters written to the file, but the underlying API
can introduce additional auxiliary data, which increases the log
file's actual size on disk. For instance, it is well known that Windows
and DOS operating systems treat new-line characters specially: each
new-line character is written as the two-byte sequence 0x0D 0x0A instead
of a single 0x0A. Other platform-specific character translations are
also known.

Time-based rotation is not limited to time points only. The following
options are available out of the box:

Time point rotations: rotation_at_time_point
class. This kind of rotation takes place whenever the specified time
point is reached. The following variants are available:

Every day rotation, at the specified time. This is what was presented
in the code snippet above:

sinks::file::rotation_at_time_point(12, 0, 0)

Rotation on the specified day of every week, at the specified
time. For instance, this will make file rotation happen every
Tuesday, at midnight:

sinks::file::rotation_at_time_point(date_time::Tuesday, 0, 0, 0)

If the rotation time is midnight, it can be omitted:

sinks::file::rotation_at_time_point(date_time::Tuesday)

Rotation on the specified day of each month, at the specified
time. For example, this is how to rotate files on the 1st of
every month:

sinks::file::rotation_at_time_point(gregorian::greg_day(1), 0, 0, 0)

As with weekdays, midnight is implied:

sinks::file::rotation_at_time_point(gregorian::greg_day(1))

Time interval rotations: rotation_at_time_interval
class. With this predicate the rotation is not bound to any time points
and happens as soon as the specified time interval since the previous
rotation elapses. This is how to make rotations every hour:

sinks::file::rotation_at_time_interval(posix_time::hours(1))

If none of the above applies, one can specify a custom predicate for
time-based rotation. The predicate should take no arguments and return
bool (returning true indicates that rotation should take place). The
predicate is called for every log record being written to the file.
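As an illustration, here is a minimal hand-written predicate. The class name and the rotate-every-N-records policy are invented for this sketch; all the backend requires is a nullary function object returning bool:

```cpp
#include <cassert>

// A hypothetical custom rotation predicate: requests rotation
// once every N log records, regardless of time
class rotate_every_n_records
{
public:
    explicit rotate_every_n_records(unsigned int n) : m_n(n), m_count(0) {}

    // Called by the backend for every record written to the file;
    // returning true triggers rotation
    bool operator() ()
    {
        if (++m_count >= m_n)
        {
            m_count = 0;
            return true;
        }
        return false;
    }

private:
    unsigned int m_n;
    unsigned int m_count;
};
```

Such a predicate would be installed at construction time, e.g. as keywords::time_based_rotation = rotate_every_n_records(1000).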

The log file rotation takes place on an attempt to write a new log record
to the file. Thus the time-based rotation is not a strict threshold,
either. The rotation will take place as soon as the library detects that
the rotation should have happened.

The file name pattern may contain a number of wildcards, like the one you
can see in the example above. Supported placeholders are:

Current date and time components. The placeholders conform to the ones
specified by Boost.DateTime
library.

File counter (%N)
with an optional width specification in the printf-like
format. The file counter will always be decimal, zero filled to the
specified width.

A percent sign (%%).

A few quick examples:

Template                         Expands to
file_%N.log                      file_1.log, file_2.log...
file_%3N.log                     file_001.log, file_002.log...
file_%Y%m%d.log                  file_20080705.log, file_20080706.log...
file_%Y-%m-%d_%H-%M-%S.%N.log    file_2008-07-05_13-44-23.1.log, file_2008-07-06_16-00-10.2.log...

Important

Although all Boost.DateTime
format specifiers will work, there are restrictions on some of them,
if you intend to scan for old log files. This functionality is discussed
in the next section.

The sink backend allows hooking into the file rotation process in order
to perform pre- and post-rotation actions. This can be useful to maintain
log file validity by writing headers and footers. For example, the
init_logging function can be modified to write logs into XML files; the
final version of the function, shown at the end of this section,
demonstrates this.

After being closed, the rotated files can be collected. In order to do
so, one has to set up a file collector by specifying the target directory
where the rotated files will be placed and, optionally, size thresholds.
For example, we can modify the init_logging
function to place rotated files into a distinct directory and limit the
total size of the files. Let's assume the following function is called by init_logging with the constructed sink:
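A sketch of such a function, using the file collector API; the target directory name and the size thresholds here are illustrative:

```cpp
void init_file_collecting(boost::shared_ptr< file_sink > sink)
{
    sink->locked_backend()->set_file_collector(sinks::file::make_collector(
        keywords::target = "logs",                      // the target directory for rotated files
        keywords::max_size = 16 * 1024 * 1024,          // maximum total size of stored files, in bytes
        keywords::min_free_space = 100 * 1024 * 1024    // minimum free space on the drive, in bytes
    ));
}
```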

The max_size and min_free_space parameters are optional; if a parameter
is not specified, the corresponding threshold is not taken into account.

One can create multiple file sink backends that collect files into the
same target directory. In this case the strictest thresholds are combined
for this target directory. The files from this directory will be erased
without regard to which sink backend wrote them, i.e. in strict
chronological order.

Warning

The collector does not resolve log file name clashes between different
sink backends, so if the clash occurs the behavior is undefined, in general.
Depending on the circumstances, the files may overwrite each other or
the operation may fail entirely.

The file collector provides another useful feature. Suppose you have run
your application 5 times and you have 5 log files in the "logs" directory.
The file sink backend and file collector provide a scan_for_files
method that searches the target directory for these files and takes them
into account, so when it comes to deleting files, these files are not
forgotten. What's more, if the file name pattern in the backend involves
a file counter, scanning for older files allows updating the counter to
the most recent value. Here is the final version of our init_logging
function:

void init_logging()
{
    // Create a text file sink
    boost::shared_ptr< file_sink > sink(new file_sink(
        keywords::file_name = "%Y%m%d_%H%M%S_%5N.xml",
        keywords::rotation_size = 16384
    ));

    // Set up where the rotated files will be stored
    init_file_collecting(sink);

    // Upon restart, scan the directory for files matching the file_name pattern
    sink->locked_backend()->scan_for_files();

    sink->set_formatter(
        expr::format("\t<record id=\"%1%\" timestamp=\"%2%\">%3%</record>")
            % expr::attr< unsigned int >("RecordID")
            % expr::attr< boost::posix_time::ptime >("TimeStamp")
            % expr::xml_decor[ expr::stream << expr::smessage ]
    );

    // Set header and footer writing functors
    namespace bll = boost::lambda;
    sink->locked_backend()->set_open_handler(bll::_1 << "<?xml version=\"1.0\"?>\n<log>\n");
    sink->locked_backend()->set_close_handler(bll::_1 << "</log>\n");

    // Add the sink to the core
    logging::core::get()->add_sink(sink);
}

There are two methods of file scanning: the scan that matches file names
against the file name pattern (the default) and the scan that assumes
that all files in the target directory are log files. The former imposes
certain restrictions on the placeholders that can be used within the file
name pattern: only the file counter placeholder and the following
Boost.DateTime placeholders are supported: %y, %Y, %m, %d, %H, %M, %S, %f.
The latter scanning method, in its turn, has its own drawbacks: it does
not allow updating the file counter in the backend, and it is considered
more dangerous, as it may result in unintended file deletion, so be
cautious. The all-files scanning method can be enabled by passing it as
an additional parameter to the scan_for_files
call:

// Look for all files in the target directory
backend->scan_for_files(sinks::file::scan_all);

While the text stream and file backends are aimed at storing all log
records in a single stream or file, this backend serves a different
purpose. Assume we have a banking request processing application, and we
want logs related to every single request to be placed into a separate
file. If we can associate some attribute with the request identity, then
the text_multifile_backend
backend is the way to go.

void init_logging()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    boost::shared_ptr< sinks::text_multifile_backend > backend =
        boost::make_shared< sinks::text_multifile_backend >();

    // Set up the file naming pattern
    backend->set_file_name_composer(
        sinks::file::as_file_name_composer(
            expr::stream << "logs/" << expr::attr< std::string >("RequestID") << ".log"));

    // Wrap it into the frontend and register in the core.
    // The backend requires synchronization in the frontend.
    typedef sinks::synchronous_sink< sinks::text_multifile_backend > sink_t;
    boost::shared_ptr< sink_t > sink(new sink_t(backend));

    // Set the formatter
    sink->set_formatter(
        expr::stream
            << "[RequestID: " << expr::attr< std::string >("RequestID")
            << "] " << expr::smessage);

    core->add_sink(sink);
}

As you can see, we used a regular formatter
to specify the file naming pattern. Now, every log record with a distinct
value of the "RequestID" attribute will be stored in a separate
file, no matter how many different requests are being processed by the
application concurrently. You can also find the multiple_files example in the
library distribution, which shows a similar technique to separate logs
generated by different threads of the application.

If using formatters is not appropriate for some reason, you can provide
your own file name composer. The composer is a mere function object that
accepts a log record as a single argument and returns a value of the text_multifile_backend::path_type type.
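For illustration, a hand-written composer might look as follows. The class name and the "unknown" fallback file name are invented for this sketch, and the attribute value is fetched with extract_or_default:

```cpp
// A hypothetical hand-written file name composer: one file per request
struct request_file_composer
{
    typedef sinks::text_multifile_backend::path_type result_type;

    result_type operator() (logging::record_view const& rec) const
    {
        // Compose the path from the "RequestID" attribute value,
        // falling back to "unknown" if the attribute is missing
        std::string id = logging::extract_or_default< std::string >(
            "RequestID", rec, std::string("unknown"));
        return result_type("logs") / (id + ".log");
    }
};
```

It would then be installed with backend->set_file_name_composer(request_file_composer()).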

Note

The multi-file backend has no knowledge of whether a particular file
is going to be used again or not. That is, if a log record has been written
into file A, the library cannot tell whether there will be more records
that belong in file A. This makes it impossible to implement
file rotation or removal of unused files to free space on the file system.
Users will have to implement such functionality themselves.

The syslog backend, as its name suggests, provides support for the syslog
API that is available on virtually any UNIX-like platform. On Windows there
exists at least one
public implementation of the syslog client API. However, in order to provide
maximum flexibility and better portability, the library offers built-in
support for the syslog protocol described in RFC
3164. Thus, on Windows only the built-in implementation is supported,
while on UNIX-like systems both the built-in and the system API based
implementations are supported.

The backend is implemented in the syslog_backend
class. The backend supports formatting log records, and therefore requires
thread synchronization in the frontend. The backend also supports severity
level translation from the application-specific values to the syslog-defined
values. This is achieved with an additional function object, level mapper,
that receives a set of attribute values of each log record and returns
the appropriate syslog level value. This value is used by the backend to
construct the final priority value of the syslog record. The other component
of the syslog priority value, the facility, is constant for each backend
object and can be specified in the backend constructor arguments.

Level mappers can be written by library users to translate the application
log levels to the syslog levels in the best way. However, the library provides
two mappers that fit this need in the obvious cases. The direct_severity_mapping
class template provides a way to map values of some integral attribute directly
to syslog levels, without any value conversion. The custom_severity_mapping
class template adds some flexibility and allows mapping arbitrary values
of some attribute to syslog levels.

In any case, one example is better than a thousand words.

// Complete sink type
typedef sinks::synchronous_sink< sinks::syslog_backend > sink_t;

void init_native_syslog()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    // Create a backend
    boost::shared_ptr< sinks::syslog_backend > backend(new sinks::syslog_backend(
        keywords::facility = sinks::syslog::user,
        keywords::use_impl = sinks::syslog::native
    ));

    // Set the straightforward level translator for the "Severity" attribute of type int
    backend->set_severity_mapper(sinks::syslog::direct_severity_mapping< int >("Severity"));

    // Wrap it into the frontend and register in the core.
    // The backend requires synchronization in the frontend.
    core->add_sink(boost::make_shared< sink_t >(backend));
}

void init_builtin_syslog()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    // Create a new backend
    boost::shared_ptr< sinks::syslog_backend > backend(new sinks::syslog_backend(
        keywords::facility = sinks::syslog::local0,
        keywords::use_impl = sinks::syslog::udp_socket_based
    ));

    // Setup the target address and port to send syslog messages to
    backend->set_target_address("192.164.1.10", 514);

    // Create and fill in another level translator for "MyLevel" attribute of type string
    sinks::syslog::custom_severity_mapping< std::string > mapping("MyLevel");
    mapping["debug"] = sinks::syslog::debug;
    mapping["normal"] = sinks::syslog::info;
    mapping["warning"] = sinks::syslog::warning;
    mapping["failure"] = sinks::syslog::critical;
    backend->set_severity_mapper(mapping);

    // Wrap it into the frontend and register in the core.
    core->add_sink(boost::make_shared< sink_t >(backend));
}

In the snippets above, the facility parameter specifies the logging
facility, while use_impl selects whether the native syslog API or the
built-in socket-based implementation should be used.

Please note that all syslog constants, as well as level extractors, are
declared within a nested namespace syslog.
The library will not accept (and does not declare in the backend interface)
the native syslog constants, which are actually macros.

Also note that the backend will default to the built-in implementation
and user logging facility,
if the corresponding constructor parameters are not specified.

Tip

The set_target_address
method will also accept DNS names, which it will resolve to the actual
IP address. This feature, however, is not available in single threaded
builds.

Windows API has an interesting feature: a process, being run under a debugger,
is able to emit messages that will be intercepted and displayed in the
debugger window. For example, if an application is run under the Visual
Studio IDE it is able to write debug messages to the IDE window. The basic_debug_output_backend
backend provides a simple way of emitting such messages. Additionally,
in order to optimize application performance, a special
filter is available that checks whether the application is being
run under a debugger. Like many other sink backends, this backend also
supports setting a formatter in order to compose the message text.

The usage is quite simple and straightforward:

// Complete sink type
typedef sinks::synchronous_sink< sinks::debug_output_backend > sink_t;

void init_logging()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    // Create the sink. The backend requires synchronization in the frontend.
    boost::shared_ptr< sink_t > sink(new sink_t());

    // Set the special filter to the frontend
    // in order to skip the sink when no debugger is available
    sink->set_filter(expr::is_debugger_present());

    core->add_sink(sink);
}

Note that the sink backend is templated on the character type. This type
defines the Windows API version that is used to emit messages. Also, debug_output_backend and wdebug_output_backend convenience typedefs
are provided.

Windows operating system provides a special API for publishing events related
to application execution. A wide range of applications, including Windows
components, use this facility to provide the user with all essential information
about computer health in a single place - an event log. There can be more
than one event log. However, typically all user-space applications use
the common Application log. Records from different applications or their
parts can be selected from the log by a record source name. Event logs
can be read with a standard utility, an Event Viewer, that comes with Windows.

Although it looks very tempting, the API is quite complicated and intrusive,
which makes it difficult to support. The application is required to provide
a dynamic library with special resources that describe all events the application
supports. This library must be registered in the Windows registry, which
pins its location in the file system. The Event Viewer uses this registration
to find the resources and compose and display messages. The positive feature
of this approach is that since event resources can describe events differently
for different languages, it allows the application to support event internationalization
in a quite transparent manner: the application simply provides event identifiers
and non-localizable event parameters to the API, and it does the rest of
the work.

In order to support both the simplistic approach "it just works"
and the more elaborate event composition, including internationalization
support, the library provides two sink backends that work with event log
API.

The basic_simple_event_log_backend
backend is intended to encapsulate as much of the event log API as possible,
leaving interface and usage model very similar to other sink backends.
It contains all resources that are needed for the Event Viewer to function
properly, and registers the Boost.Log library in the Windows registry in
order to populate itself as the container of these resources.

Important

The library must be built as a dynamic library in order to use this backend
flawlessly. Otherwise event description resources are not linked into
the executable, and the Event Viewer is not able to display events properly.

The only thing the user has to do to add Windows event log support to an
application is to provide the event source and log names (which are optional
and can be automatically suggested by the library) and set up an appropriate
filter, formatter and event severity mapping.

Having done that, all logging records that pass to the sink will be formatted
the same way they are in the other sinks. The formatted message will be
displayed in the Event Viewer as the event description.

The basic_event_log_backend
allows more detailed control over the logging API, but requires considerably
more scaffolding during initialization and usage.

First, the user has to build their own library with the event resources (the
process is described in MSDN).
As part of this process, one has to create a message file that describes
all events. For the sake of example, let's assume the following contents
were used as the message file:
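The actual message file ships with the library's event_log example; the following abbreviated sketch merely illustrates the format. The symbolic names match the identifiers used in the code snippets of this section, while the message texts and numeric values are invented:

```
SeverityNames=(Info=0x0:MY_SEVERITY_INFO
               Warning=0x1:MY_SEVERITY_WARNING
               Error=0x2:MY_SEVERITY_ERROR)

MessageId=0x1
Severity=Warning
SymbolicName=LOW_DISK_SPACE_MSG
Language=English
The drive %1 is running low on free space (%2 Mb is recommended).
.

MessageId=0x2
Severity=Error
SymbolicName=DEVICE_INACCESSIBLE_MSG
Language=English
The drive %1 is not accessible.
.

MessageId=0x3
Severity=Info
SymbolicName=SUCCEEDED_MSG
Language=English
The operation finished successfully in %1 seconds.
.
```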

After compiling the resource library, the path to this library must be
provided to the sink backend constructor, among other parameters used with
the simple backend. The path may contain placeholders that will be expanded
with the appropriate environment variables.

Like the simple backend, basic_event_log_backend
will register itself in the Windows registry, which will enable the Event
Viewer to display the emitted events.

Next, the user will have to provide the mapping between the application
logging attributes and event identifiers. These identifiers were provided
in the message compiler output as a result of compiling the message file.
One can use basic_event_composer
and one of the event ID mappings, like in the following example:

// Create an event composer. It is initialized with the event identifier mapping.
sinks::event_log::event_composer composer(
    sinks::event_log::direct_event_id_mapping< int >("EventID"));

// For each event described in the message file, set up the insertion string formatters
composer[LOW_DISK_SPACE_MSG]
    // the first placeholder in the message
    // will be replaced with contents of the "Drive" attribute
    % expr::attr< std::string >("Drive")
    // the second placeholder in the message
    // will be replaced with contents of the "Size" attribute
    % expr::attr< boost::uintmax_t >("Size");

composer[DEVICE_INACCESSIBLE_MSG]
    % expr::attr< std::string >("Drive");

composer[SUCCEEDED_MSG]
    % expr::attr< unsigned int >("Duration");

// Then put the composer to the backend
backend->set_event_composer(composer);

As you can see, one can use regular formatters
to specify which attributes will be inserted instead of placeholders in
the final event message. Aside from that, one can specify mappings of attribute
values to event types and categories. Suppose our application has the following
severity levels:
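The original enumeration is not shown in this excerpt; a minimal definition consistent with the mapping code of this section would be:

```cpp
// A hypothetical application severity level enumeration;
// the enumerator names match those used in the mapping code below
enum severity_level
{
    normal,
    warning,
    error
};
```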

Then these levels can be mapped onto the values in the message description
file:

// We'll have to map our custom levels to the event log event types
sinks::event_log::custom_event_type_mapping< severity_level > type_mapping("Severity");
type_mapping[normal] = sinks::event_log::make_event_type(MY_SEVERITY_INFO);
type_mapping[warning] = sinks::event_log::make_event_type(MY_SEVERITY_WARNING);
type_mapping[error] = sinks::event_log::make_event_type(MY_SEVERITY_ERROR);
backend->set_event_type_mapper(type_mapping);

// Same for event categories.
// Usually event categories can be restored by the event identifier.
sinks::event_log::custom_event_category_mapping< int > cat_mapping("EventID");
cat_mapping[LOW_DISK_SPACE_MSG] = sinks::event_log::make_event_category(MY_CATEGORY_1);
cat_mapping[DEVICE_INACCESSIBLE_MSG] = sinks::event_log::make_event_category(MY_CATEGORY_2);
cat_mapping[SUCCEEDED_MSG] = sinks::event_log::make_event_category(MY_CATEGORY_3);
backend->set_event_category_mapper(cat_mapping);

Tip

As of Windows NT 6 (Vista, Server 2008), it is not necessary to specify
event type mappings. This information is available in the message definition
resources and need not be duplicated in the API call.

Now that initialization is done, the sink can be registered into the core.

// Create the frontend for the sink
boost::shared_ptr< sinks::synchronous_sink< sinks::event_log_backend > > sink(
    new sinks::synchronous_sink< sinks::event_log_backend >(backend));

// Set up filter to pass only records that have the necessary attribute
sink->set_filter(expr::has_attr< int >("EventID"));

logging::core::get()->add_sink(sink);

In order to emit events it is convenient to create a set of functions that
will accept all needed parameters for the corresponding events and announce
that the event has occurred.

BOOST_LOG_INLINE_GLOBAL_LOGGER_DEFAULT(event_logger, src::severity_logger_mt< severity_level >)

// The function raises an event of the disk space depletion
void announce_low_disk_space(std::string const& drive, boost::uintmax_t size)
{
    BOOST_LOG_SCOPED_THREAD_TAG("EventID", (int)LOW_DISK_SPACE_MSG);
    BOOST_LOG_SCOPED_THREAD_TAG("Drive", drive);
    BOOST_LOG_SCOPED_THREAD_TAG("Size", size);
    // Since this record may get accepted by other sinks,
    // this message is not completely useless
    BOOST_LOG_SEV(event_logger::get(), warning) << "Low disk " << drive
        << " space, " << size << " Mb is recommended";
}

// The function raises an event of inaccessible disk drive
void announce_device_inaccessible(std::string const& drive)
{
    BOOST_LOG_SCOPED_THREAD_TAG("EventID", (int)DEVICE_INACCESSIBLE_MSG);
    BOOST_LOG_SCOPED_THREAD_TAG("Drive", drive);
    BOOST_LOG_SEV(event_logger::get(), error) << "Cannot access drive " << drive;
}

// The structure is an activity guard that will emit an event upon the activity completion
struct activity_guard
{
    activity_guard()
    {
        // Add a stop watch attribute to measure the activity duration
        m_it = event_logger::get().add_attribute("Duration", attrs::timer()).first;
    }

    ~activity_guard()
    {
        BOOST_LOG_SCOPED_THREAD_TAG("EventID", (int)SUCCEEDED_MSG);
        BOOST_LOG_SEV(event_logger::get(), normal) << "Activity ended";
        event_logger::get().remove_attribute(m_it);
    }

private:
    logging::attribute_set::iterator m_it;
};

Now you are able to call these helper functions to emit events. The complete
code from this section is available in the event_log example in the library
distribution.