Multiple calls to logging.getLogger('someLogger') return a reference to the
same logger object. This is true not only within the same module, but also
across modules as long as it is in the same Python interpreter process. It is
true for references to the same object; additionally, application code can
define and configure a parent logger in one module and create (but not
configure) a child logger in a separate module, and all logger calls to the
child will pass up to the parent. Here is a main module:

import logging
import auxiliary_module

# create logger with 'spam_application'
logger = logging.getLogger('spam_application')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)

logger.info('creating an instance of auxiliary_module.Auxiliary')
a = auxiliary_module.Auxiliary()
logger.info('created an instance of auxiliary_module.Auxiliary')
logger.info('calling auxiliary_module.Auxiliary.do_something')
a.do_something()
logger.info('finished auxiliary_module.Auxiliary.do_something')
logger.info('calling auxiliary_module.some_function()')
auxiliary_module.some_function()
logger.info('done with auxiliary_module.some_function()')

Loggers are plain Python objects. The addHandler() method has no minimum
or maximum quota for the number of handlers you may add. Sometimes it will be
beneficial for an application to log all messages of all severities to a text
file while simultaneously logging errors or above to the console. To set this
up, simply configure the appropriate handlers. The logging calls in the
application code will remain unchanged. Here is a slight modification to the
previous simple module-based configuration example:
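A sketch of that modification, assuming a logger named 'simple_example' as in the basic tutorial:

import logging

logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(ch)
logger.addHandler(fh)

# 'application' code
logger.debug('debug message')
logger.info('info message')
logger.warning('warn message')
logger.error('error message')
logger.critical('critical message')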

Notice that the ‘application’ code does not care about multiple handlers. All
that changed was the addition and configuration of a new handler named fh.

The ability to create new handlers with higher- or lower-severity filters can be
very helpful when writing and testing an application. Instead of using many
print statements for debugging, use logger.debug: Unlike the print
statements, which you will have to delete or comment out later, the logger.debug
statements can remain intact in the source code and remain dormant until you
need them again. At that time, the only change that needs to happen is to
modify the severity level of the logger and/or handler to debug.

Let’s say you want to log to console and file with different message formats and
in differing circumstances. Say you want to log messages with levels of DEBUG
and higher to file, and those messages at level INFO and higher to the console.
Let’s also assume that the file should contain timestamps, but the console
messages should not. Here’s how you can achieve this:

import logging

# set up logging to file - see previous section for more details
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                    datefmt='%m-%d %H:%M',
                    filename='/tmp/myapp.log',
                    filemode='w')
# define a Handler which writes INFO messages or higher to the sys.stderr
console = logging.StreamHandler()
console.setLevel(logging.INFO)
# set a format which is simpler for console use
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# tell the handler to use this format
console.setFormatter(formatter)
# add the handler to the root logger
logging.getLogger('').addHandler(console)

# Now, we can log to the root logger, or any other logger. First the root...
logging.info('Jackdaws love my big sphinx of quartz.')

# Now, define a couple of other loggers which might represent areas in your
# application:

logger1 = logging.getLogger('myapp.area1')
logger2 = logging.getLogger('myapp.area2')

logger1.debug('Quick zephyrs blow, vexing daft Jim.')
logger1.info('How quickly daft jumping zebras vex.')
logger2.warning('Jail zesty vixen who grabbed pay from quack.')
logger2.error('The five boxing wizards jump quickly.')

Sometimes you have to get your logging handlers to do their work without
blocking the thread you’re logging from. This is common in Web applications,
though of course it also occurs in other scenarios.

A common culprit which demonstrates sluggish behaviour is the
SMTPHandler: sending emails can take a long time, for a
number of reasons outside the developer’s control (for example, a poorly
performing mail or network infrastructure). But almost any network-based
handler can block: Even a SocketHandler operation may do a
DNS query under the hood which is too slow (and this query can be deep in the
socket library code, below the Python layer, and outside your control).

One solution is to use a two-part approach. For the first part, attach only a
QueueHandler to those loggers which are accessed from
performance-critical threads. They simply write to their queue, which can be
sized to a large enough capacity or initialized with no upper bound to its
size. The write to the queue will typically be accepted quickly, though you
will probably need to catch the queue.Full exception as a precaution
in your code. If you are a library developer who has performance-critical
threads in your code, be sure to document this (together with a suggestion to
attach only QueueHandlers to your loggers) for the benefit of other
developers who will use your code.

The second part of the solution is QueueListener, which has been
designed as the counterpart to QueueHandler. A
QueueListener is very simple: it’s passed a queue and some handlers,
and it fires up an internal thread which listens to its queue for LogRecords
sent from QueueHandlers (or any other source of LogRecords, for that
matter). The LogRecords are removed from the queue and passed to the
handlers for processing.

The advantage of having a separate QueueListener class is that you
can use the same instance to service multiple QueueHandlers. This is more
resource-friendly than, say, having threaded versions of the existing handler
classes, which would eat up one thread per handler for no particular benefit.

An example of using these two classes follows:

import logging
import queue
from logging.handlers import QueueHandler, QueueListener

que = queue.Queue(-1)  # no limit on size
queue_handler = QueueHandler(que)
handler = logging.StreamHandler()
listener = QueueListener(que, handler)
root = logging.getLogger()
root.addHandler(queue_handler)
formatter = logging.Formatter('%(threadName)s: %(message)s')
handler.setFormatter(formatter)
listener.start()
# The log output will display the thread which generated
# the event (the main thread) rather than the internal
# thread which monitors the internal queue. This is what
# you want to happen.
root.warning('Look out!')
listener.stop()

Let’s say you want to send logging events across a network, and handle them at
the receiving end. A simple way of doing this is attaching a
SocketHandler instance to the root logger at the sending end:

import logging, logging.handlers

rootLogger = logging.getLogger('')
rootLogger.setLevel(logging.DEBUG)
socketHandler = logging.handlers.SocketHandler('localhost',
                    logging.handlers.DEFAULT_TCP_LOGGING_PORT)
# don't bother with a formatter, since a socket handler sends the event as
# an unformatted pickle
rootLogger.addHandler(socketHandler)

# Now, we can log to the root logger, or any other logger. First the root...
logging.info('Jackdaws love my big sphinx of quartz.')

# Now, define a couple of other loggers which might represent areas in your
# application:

logger1 = logging.getLogger('myapp.area1')
logger2 = logging.getLogger('myapp.area2')

logger1.debug('Quick zephyrs blow, vexing daft Jim.')
logger1.info('How quickly daft jumping zebras vex.')
logger2.warning('Jail zesty vixen who grabbed pay from quack.')
logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the socketserver
module. Here is a basic working example:

import pickle
import logging
import logging.handlers
import socketserver
import struct


class LogRecordStreamHandler(socketserver.StreamRequestHandler):
    """Handler for a streaming logging request.

    This basically logs the record using whatever logging policy is
    configured locally.
    """

    def handle(self):
        """
        Handle multiple requests - each expected to be a 4-byte length,
        followed by the LogRecord in pickle format. Logs the record
        according to whatever policy is configured locally.
        """
        while True:
            chunk = self.connection.recv(4)
            if len(chunk) < 4:
                break
            slen = struct.unpack('>L', chunk)[0]
            chunk = self.connection.recv(slen)
            while len(chunk) < slen:
                chunk = chunk + self.connection.recv(slen - len(chunk))
            obj = self.unPickle(chunk)
            record = logging.makeLogRecord(obj)
            self.handleLogRecord(record)

    def unPickle(self, data):
        return pickle.loads(data)

    def handleLogRecord(self, record):
        # if a name is specified, we use the named logger rather than the one
        # implied by the record.
        if self.server.logname is not None:
            name = self.server.logname
        else:
            name = record.name
        logger = logging.getLogger(name)
        # N.B. EVERY record gets logged. This is because Logger.handle
        # is normally called AFTER logger-level filtering. If you want
        # to do filtering, do it at the client end to save wasting
        # cycles and network bandwidth!
        logger.handle(record)

class LogRecordSocketReceiver(socketserver.ThreadingTCPServer):
    """
    Simple TCP socket-based logging receiver suitable for testing.
    """

    allow_reuse_address = True

    def __init__(self, host='localhost',
                 port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                 handler=LogRecordStreamHandler):
        socketserver.ThreadingTCPServer.__init__(self, (host, port), handler)
        self.abort = 0
        self.timeout = 1
        self.logname = None

    def serve_until_stopped(self):
        import select
        abort = 0
        while not abort:
            rd, wr, ex = select.select([self.socket.fileno()],
                                       [], [],
                                       self.timeout)
            if rd:
                self.handle_request()
            abort = self.abort

def main():
    logging.basicConfig(
        format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
    tcpserver = LogRecordSocketReceiver()
    print('About to start TCP server...')
    tcpserver.serve_until_stopped()

if __name__ == '__main__':
    main()

First run the server, and then the client. On the client side, nothing is
printed on the console; on the server side, you should see something like:
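Illustrative output follows; the relative times in the first column will differ from run to run:

About to start TCP server...
   59 root            INFO     Jackdaws love my big sphinx of quartz.
   59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
   69 myapp.area1     INFO     How quickly daft jumping zebras vex.
   69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
   69 myapp.area2     ERROR    The five boxing wizards jump quickly.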

Note that there are some security issues with pickle in some scenarios. If
these affect you, you can use an alternative serialization scheme by overriding
the makePickle() method and implementing your alternative there, as
well as adapting the above script to use your alternative serialization.

Sometimes you want logging output to contain contextual information in
addition to the parameters passed to the logging call. For example, in a
networked application, it may be desirable to log client-specific information
in the log (e.g. remote client’s username, or IP address). Although you could
use the extra parameter to achieve this, it’s not always convenient to pass
the information in this way. While it might be tempting to create
Logger instances on a per-connection basis, this is not a good idea
because these instances are not garbage collected. While this is not a problem
in practice when the number of Logger instances is bounded by the
level of granularity you want to use in logging an application, it could
become hard to manage if the number of Logger instances is
effectively unbounded.

An easy way in which you can pass contextual information to be output along
with logging event information is to use the LoggerAdapter class.
This class is designed to look like a Logger, so that you can call
debug(), info(), warning(), error(),
exception(), critical() and log(). These methods have the
same signatures as their counterparts in Logger, so you can use the
two types of instances interchangeably.

When you create an instance of LoggerAdapter, you pass it a
Logger instance and a dict-like object which contains your contextual
information. When you call one of the logging methods on an instance of
LoggerAdapter, it delegates the call to the underlying instance of
Logger passed to its constructor, and arranges to pass the contextual
information in the delegated call. Here’s a snippet from the code of
LoggerAdapter:

def debug(self, msg, *args, **kwargs):
    """
    Delegate a debug call to the underlying logger, after adding
    contextual information from this adapter instance.
    """
    msg, kwargs = self.process(msg, kwargs)
    self.logger.debug(msg, *args, **kwargs)

The process() method of LoggerAdapter is where the contextual
information is added to the logging output. It’s passed the message and
keyword arguments of the logging call, and it passes back (potentially)
modified versions of these to use in the call to the underlying logger. The
default implementation of this method leaves the message alone, but inserts
an 'extra' key in the keyword arguments whose value is the dict-like object
passed to the constructor. Of course, if you pass an 'extra' keyword
argument in the call to the adapter, it will be silently overwritten.

The advantage of using ‘extra’ is that the values in the dict-like object are
merged into the LogRecord instance’s __dict__, allowing you to use
customized strings with your Formatter instances which know about
the keys of the dict-like object. If you need a different method, e.g. if you
want to prepend or append the contextual information to the message string,
you just need to subclass LoggerAdapter and override process()
to do what you need. Here’s an example script which uses this class, which
also illustrates what dict-like behaviour is needed from an arbitrary
‘dict-like’ object for use in the constructor:

import logging

class ConnInfo:
    """
    An example class which shows how an arbitrary class can be used as
    the 'extra' context information repository passed to a LoggerAdapter.
    """

    def __getitem__(self, name):
        """
        To allow this instance to look like a dict.
        """
        from random import choice
        if name == 'ip':
            result = choice(['127.0.0.1', '192.168.0.1'])
        elif name == 'user':
            result = choice(['jim', 'fred', 'sheila'])
        else:
            result = self.__dict__.get(name, '?')
        return result

    def __iter__(self):
        """
        To allow iteration over keys, which will be merged into
        the LogRecord dict before formatting and output.
        """
        keys = ['ip', 'user']
        keys.extend(self.__dict__.keys())
        return keys.__iter__()

if __name__ == '__main__':
    from random import choice
    levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
    a1 = logging.LoggerAdapter(logging.getLogger('a.b.c'),
                               {'ip': '123.231.231.123', 'user': 'sheila'})
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
    a1.debug('A debug message')
    a1.info('An info message with %s', 'some parameters')
    a2 = logging.LoggerAdapter(logging.getLogger('d.e.f'), ConnInfo())
    for x in range(10):
        lvl = choice(levels)
        lvlname = logging.getLevelName(lvl)
        a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

You can also add contextual information to log output using a user-defined
Filter. Filter instances are allowed to modify the LogRecords
passed to them, including adding additional attributes which can then be output
using a suitable format string, or if needed a custom Formatter.

For example, in a web application, the request being processed (or at least,
the interesting parts of it) can be stored in a thread-local
(threading.local) variable, and then accessed from a Filter to
add information from the request - say, the remote IP address and the remote
user's username - to the LogRecord, using the attribute names 'ip' and
'user' as in the LoggerAdapter example above. In that case, the same format
string can be used to get similar output to that shown above. Here's an example
script:

import logging
from random import choice

class ContextFilter(logging.Filter):
    """
    This is a filter which injects contextual information into the log.

    Rather than use actual contextual information, we just use random
    data in this demo.
    """

    USERS = ['jim', 'fred', 'sheila']
    IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

    def filter(self, record):
        record.ip = choice(ContextFilter.IPS)
        record.user = choice(ContextFilter.USERS)
        return True

if __name__ == '__main__':
    levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
    a1 = logging.getLogger('a.b.c')
    a2 = logging.getLogger('d.e.f')

    f = ContextFilter()
    a1.addFilter(f)
    a2.addFilter(f)
    a1.debug('A debug message')
    a1.info('An info message with %s', 'some parameters')
    for x in range(10):
        lvl = choice(levels)
        lvlname = logging.getLevelName(lvl)
        a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

Although logging is thread-safe, and logging to a single file from multiple
threads in a single process is supported, logging to a single file from
multiple processes is not supported, because there is no standard way to
serialize access to a single file across multiple processes in Python. If you
need to log to a single file from multiple processes, one way of doing this is
to have all the processes log to a SocketHandler, and have a separate
process which implements a socket server which reads from the socket and logs
to file. (If you prefer, you can dedicate one thread in one of the existing
processes to perform this function.) This section
documents this approach in more detail and includes a working socket receiver
which can be used as a starting point for you to adapt in your own
applications.

If you are using a recent version of Python which includes the
multiprocessing module, you could write your own handler which uses the
Lock class from this module to serialize access to the file from
your processes. The existing FileHandler and subclasses do not make
use of multiprocessing at present, though they may do so in the future.
Note that at present, the multiprocessing module does not provide
working lock functionality on all platforms (see
http://bugs.python.org/issue3770).
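A minimal sketch of that idea follows. LockingFileHandler is a hypothetical class, not part of the standard library; for the lock to be shared, it must be created in the parent process before the workers are spawned:

import logging
import multiprocessing

class LockingFileHandler(logging.FileHandler):
    """
    A hypothetical FileHandler variant which serializes emission across
    processes using a multiprocessing.Lock shared by all processes.
    """
    def __init__(self, filename, lock, mode='a'):
        logging.FileHandler.__init__(self, filename, mode)
        self.lock_mp = lock  # don't clobber self.lock, the thread lock

    def emit(self, record):
        with self.lock_mp:  # only one process writes at a time
            logging.FileHandler.emit(self, record)

# Usage sketch: create the lock in the parent, before spawning workers.
# lock = multiprocessing.Lock()
# handler = LockingFileHandler('app.log', lock)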

Alternatively, you can use a Queue and a QueueHandler to send
all logging events to one of the processes in your multi-process application.
The following example script demonstrates how you can do this; in the example
a separate listener process listens for events sent by other processes and logs
them according to its own logging configuration. Although the example only
demonstrates one way of doing it (for example, you may want to use a listener
thread rather than a separate listener process – the implementation would be
analogous) it does allow for completely different logging configurations for
the listener and the other processes in your application, and can be used as
the basis for code meeting your own specific requirements:

# You'll need these imports in your own code
import logging
import logging.handlers
import multiprocessing

# Next two import lines for this demo only
from random import choice, random
import time

#
# Because you'll want to define the logging configurations for listener and workers, the
# listener and worker process functions take a configurer parameter which is a callable
# for configuring logging for that process. These functions are also passed the queue,
# which they use for communication.
#
# In practice, you can configure the listener however you want, but note that in this
# simple example, the listener does not apply level or filter logic to received records.
# In practice, you would probably want to do this logic in the worker processes, to avoid
# sending events which would be filtered out between processes.
#
# The size of the rotated files is made small so you can see the results easily.
def listener_configurer():
    root = logging.getLogger()
    h = logging.handlers.RotatingFileHandler('mptest.log', 'a', 300, 10)
    f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
    h.setFormatter(f)
    root.addHandler(h)

# This is the listener process top-level loop: wait for logging events
# (LogRecords) on the queue and handle them, quit when you get a None for a
# LogRecord.
def listener_process(queue, configurer):
    configurer()
    while True:
        try:
            record = queue.get()
            if record is None:  # We send this as a sentinel to tell the listener to quit.
                break
            logger = logging.getLogger(record.name)
            logger.handle(record)  # No level or filter logic applied - just do it!
        except (KeyboardInterrupt, SystemExit):
            raise
        except:
            import sys, traceback
            print('Whoops! Problem:', file=sys.stderr)
            traceback.print_exc(file=sys.stderr)

# Arrays used for random selections in this demo

LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING,
          logging.ERROR, logging.CRITICAL]

LOGGERS = ['a.b.c', 'd.e.f']

MESSAGES = [
    'Random message #1',
    'Random message #2',
    'Random message #3',
]

# The worker configuration is done at the start of the worker process run.
# Note that on Windows you can't rely on fork semantics, so each process
# will run the logging configuration code when it starts.
def worker_configurer(queue):
    h = logging.handlers.QueueHandler(queue)  # Just the one handler needed
    root = logging.getLogger()
    root.addHandler(h)
    root.setLevel(logging.DEBUG)
    # send all messages, for demo; no other level or filter logic applied.

# This is the worker process top-level loop, which just logs ten events with
# random intervening delays before terminating.
# The print messages are just so you know it's doing something!
def worker_process(queue, configurer):
    configurer(queue)
    name = multiprocessing.current_process().name
    print('Worker started: %s' % name)
    for i in range(10):
        time.sleep(random())
        logger = logging.getLogger(choice(LOGGERS))
        level = choice(LEVELS)
        message = choice(MESSAGES)
        logger.log(level, message)
    print('Worker finished: %s' % name)

# Here's where the demo gets orchestrated. Create the queue, create and start
# the listener, create ten workers and start them, wait for them to finish,
# then send a None to the queue to tell the listener to finish.
def main():
    queue = multiprocessing.Queue(-1)
    listener = multiprocessing.Process(target=listener_process,
                                       args=(queue, listener_configurer))
    listener.start()
    workers = []
    for i in range(10):
        worker = multiprocessing.Process(target=worker_process,
                                         args=(queue, worker_configurer))
        workers.append(worker)
        worker.start()
    for w in workers:
        w.join()
    queue.put_nowait(None)
    listener.join()

if __name__ == '__main__':
    main()

A variant of the above script keeps the logging in the main process, in a
separate thread:

import logging
import logging.config
import logging.handlers
from multiprocessing import Process, Queue
import random
import threading
import time

def logger_thread(q):
    while True:
        record = q.get()
        if record is None:
            break
        logger = logging.getLogger(record.name)
        logger.handle(record)

def worker_process(q):
    qh = logging.handlers.QueueHandler(q)
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    root.addHandler(qh)
    levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
              logging.CRITICAL]
    loggers = ['foo', 'foo.bar', 'foo.bar.baz',
               'spam', 'spam.ham', 'spam.ham.eggs']
    for i in range(100):
        lvl = random.choice(levels)
        logger = logging.getLogger(random.choice(loggers))
        logger.log(lvl, 'Message no. %d', i)

if __name__ == '__main__':
    q = Queue()
    d = {
        'version': 1,
        'formatters': {
            'detailed': {
                'class': 'logging.Formatter',
                'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
            }
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'level': 'INFO',
            },
            'file': {
                'class': 'logging.FileHandler',
                'filename': 'mplog.log',
                'mode': 'w',
                'formatter': 'detailed',
            },
            'foofile': {
                'class': 'logging.FileHandler',
                'filename': 'mplog-foo.log',
                'mode': 'w',
                'formatter': 'detailed',
            },
            'errors': {
                'class': 'logging.FileHandler',
                'filename': 'mplog-errors.log',
                'mode': 'w',
                'level': 'ERROR',
                'formatter': 'detailed',
            },
        },
        'loggers': {
            'foo': {
                'handlers': ['foofile']
            }
        },
        'root': {
            'level': 'DEBUG',
            'handlers': ['console', 'file', 'errors']
        },
    }
    workers = []
    for i in range(5):
        wp = Process(target=worker_process, name='worker %d' % (i + 1), args=(q,))
        workers.append(wp)
        wp.start()
    logging.config.dictConfig(d)
    lp = threading.Thread(target=logger_thread, args=(q,))
    lp.start()
    # At this point, the main process could do some useful work of its own
    # Once it's done that, it can wait for the workers to terminate...
    for wp in workers:
        wp.join()
    # And now tell the logging thread to finish up, too
    q.put(None)
    lp.join()

This variant shows how you can apply configuration for particular loggers
- for example, the foo logger has a special handler which stores all events in
the foo subsystem in the file mplog-foo.log. This will be used by the logging
machinery in the main process (even though the logging events are generated in
the worker processes) to direct the messages to the appropriate destinations.

Sometimes you want to let a log file grow to a certain size, then open a new
file and log to that. You may want to keep a certain number of these files, and
when that many files have been created, rotate the files so that the number of
files and the size of the files both remain bounded. For this usage pattern, the
logging package provides a RotatingFileHandler:

import glob
import logging
import logging.handlers

LOG_FILENAME = 'logging_rotatingfile_example.out'

# Set up a specific logger with our desired output level
my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)

# Add the log message handler to the logger
handler = logging.handlers.RotatingFileHandler(
              LOG_FILENAME, maxBytes=20, backupCount=5)
my_logger.addHandler(handler)

# Log some messages
for i in range(20):
    my_logger.debug('i = %d' % i)

# See what files are created
logfiles = glob.glob('%s*' % LOG_FILENAME)
for filename in logfiles:
    print(filename)

The result should be 6 separate files, each with part of the log history for the
application:
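logging_rotatingfile_example.out
logging_rotatingfile_example.out.1
logging_rotatingfile_example.out.2
logging_rotatingfile_example.out.3
logging_rotatingfile_example.out.4
logging_rotatingfile_example.out.5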

The most current file is always logging_rotatingfile_example.out,
and each time it reaches the size limit it is renamed with the suffix
.1. Each of the existing backup files is renamed to increment the suffix
(.1 becomes .2, etc.), and the oldest backup is discarded, so that no more
than backupCount files (here, 5) are kept in addition to the current one.

Obviously this example sets the size limit much too small, as an extreme
example. You would want to set maxBytes to an appropriate value.

When logging was added to the Python standard library, the only way of
formatting messages with variable content was to use the %-formatting
method. Since then, Python has gained two new formatting approaches:
string.Template (added in Python 2.4) and str.format()
(added in Python 2.6).

Logging (as of 3.2) provides improved support for these two additional
formatting styles. The Formatter class has been enhanced to take an
additional, optional keyword parameter named style. This defaults to
'%', but other possible values are '{' and '$', which correspond
to the other two formatting styles. Backwards compatibility is maintained by
default (as you would expect), but by explicitly specifying a style parameter,
you get the ability to specify format strings which work with
str.format() or string.Template. Here’s an example console
session to show the possibilities:
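A session along the following lines (the timestamps shown are illustrative and will differ):

>>> import logging
>>> root = logging.getLogger()
>>> root.setLevel(logging.DEBUG)
>>> handler = logging.StreamHandler()
>>> bf = logging.Formatter('{asctime} {name} {levelname:8s} {message}',
...                        style='{')
>>> handler.setFormatter(bf)
>>> root.addHandler(handler)
>>> logger = logging.getLogger('foo.bar')
>>> logger.debug('This is a DEBUG message')
2010-10-28 15:11:55,341 foo.bar DEBUG    This is a DEBUG message
>>> logger.critical('This is a CRITICAL message')
2010-10-28 15:12:11,372 foo.bar CRITICAL This is a CRITICAL message
>>> df = logging.Formatter('$asctime $name ${levelname} $message',
...                        style='$')
>>> handler.setFormatter(df)
>>> logger.debug('This is a DEBUG message')
2010-10-28 15:13:06,924 foo.bar DEBUG This is a DEBUG message
>>> logger.critical('This is a CRITICAL message')
2010-10-28 15:13:11,494 foo.bar CRITICAL This is a CRITICAL message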

Logging calls (logger.debug(), logger.info() etc.) only take
positional parameters for the actual logging message itself, with keyword
parameters used only for determining options for how to handle the actual
logging call (e.g. the exc_info keyword parameter to indicate that
traceback information should be logged, or the extra keyword parameter
to indicate additional contextual information to be added to the log). So
you cannot directly make logging calls using str.format() or
string.Template syntax, because internally the logging package
uses %-formatting to merge the format string and the variable arguments.
There is no way of changing this while preserving backward compatibility, since
all logging calls which are out there in existing code will be using %-format
strings.

There is, however, a way that you can use {}- and $- formatting to construct
your individual log messages. Recall that for a message you can use an
arbitrary object as a message format string, and that the logging package will
call str() on that object to get the actual format string. Consider the
following two classes:
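A sketch of the pair; the names BraceMessage and DollarMessage are conventional, not part of the logging package:

class BraceMessage:
    def __init__(self, fmt, *args, **kwargs):
        self.fmt = fmt
        self.args = args
        self.kwargs = kwargs

    def __str__(self):
        return self.fmt.format(*self.args, **self.kwargs)

class DollarMessage:
    def __init__(self, fmt, **kwargs):
        self.fmt = fmt
        self.kwargs = kwargs

    def __str__(self):
        from string import Template
        return Template(self.fmt).substitute(**self.kwargs)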

Either of these can be used in place of a format string, to allow {}- or
$-formatting to be used to build the actual “message” part which appears in the
formatted log output in place of “%(message)s” or “{message}” or “$message”.
It’s a little unwieldy to use the class names whenever you want to log
something, but it’s quite palatable if you use an alias such as __ (double
underscore – not to be confused with _, the single underscore used as a
synonym/alias for gettext.gettext() or its brethren).

The above classes are not included in Python, though they’re easy enough to
copy and paste into your own code. They can be used as follows (assuming that
they’re declared in a module called wherever):

While the above examples use print() to show how the formatting works, you
would of course use logger.debug() or similar to actually log using this
approach.

One thing to note is that you pay no significant performance penalty with this
approach: the actual formatting happens not when you make the logging call, but
when (and if) the logged message is actually about to be output to a log by a
handler. So the only slightly unusual thing which might trip you up is that the
parentheses go around the format string and the arguments, not just the format
string. That’s because the __ notation is just syntax sugar for a constructor
call to one of the XXXMessage classes.

Every logging event is represented by a LogRecord instance.
When an event is logged and not filtered out by a logger’s level, a
LogRecord is created, populated with information about the event and
then passed to the handlers for that logger (and its ancestors, up to and
including the logger where further propagation up the hierarchy is disabled).
Before Python 3.2, there were only two places where this creation was done:

Logger.makeRecord(), which is called in the normal process of
logging an event. This invoked LogRecord directly to create an
instance.

makeLogRecord(), which is called with a dictionary containing
attributes to be added to the LogRecord. This is typically invoked when a
suitable dictionary has been received over the network (e.g. in pickle form
via a SocketHandler, or in JSON form via an
HTTPHandler).

This has usually meant that if you need to do anything special with a
LogRecord, you've had to do one of the following.

Create your own Logger subclass, which overrides
Logger.makeRecord(), and set it using setLoggerClass()
before any loggers that you care about are instantiated.

Add a Filter to a logger or handler, which does the
necessary special manipulation you need when its
filter() method is called.

The first approach would be a little unwieldy in the scenario where (say)
several different libraries wanted to do different things. Each would attempt
to set its own Logger subclass, and the one which did this last would
win.

The second approach works reasonably well for many cases, but does not allow
you to e.g. use a specialized subclass of LogRecord. Library
developers can set a suitable filter on their loggers, but they would have to
remember to do this every time they introduced a new logger (which they would
do simply by adding new packages or modules and doing

logger = logging.getLogger(__name__)

at module level). It’s probably one too many things to think about. Developers
could also add the filter to a NullHandler attached to their
top-level logger, but this would not be invoked if an application developer
attached a handler to a lower-level library logger – so output from that
handler would not reflect the intentions of the library developer.

In Python 3.2 and later, LogRecord creation is done through a factory, which
you can set with setLogRecordFactory() (and interrogate with
getLogRecordFactory()). This approach allows a custom factory to
control all aspects of LogRecord creation. For example, you could return a
subclass, or just add some additional attributes to the record once created,
using a pattern similar to this:
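import logging

old_factory = logging.getLogRecordFactory()

def record_factory(*args, **kwargs):
    record = old_factory(*args, **kwargs)
    record.custom_attribute = 0xdecafbad  # the attribute name and value are illustrative
    return record

logging.setLogRecordFactory(record_factory)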

This pattern allows different libraries to chain factories together, and as
long as they don’t overwrite each other’s attributes or unintentionally
overwrite the attributes provided as standard, there should be no surprises.
However, it should be borne in mind that each link in the chain adds run-time
overhead to all logging operations, and the technique should only be used when
the use of a Filter does not provide the desired result.

You can use a QueueHandler subclass to send messages to other kinds
of queues, for example a ZeroMQ 'publish' socket. In the example below, the
socket is created separately and passed to the handler (as its 'queue'):

import json     # for serializing records portably
import zmq      # using pyzmq, the Python binding for ZeroMQ
from logging.handlers import QueueHandler

ctx = zmq.Context()
sock = zmq.Socket(ctx, zmq.PUB)  # or zmq.PUSH, or other suitable value
sock.bind('tcp://*:5556')        # or wherever

class ZeroMQSocketHandler(QueueHandler):
    def enqueue(self, record):
        data = json.dumps(record.__dict__)
        self.queue.send(data)

handler = ZeroMQSocketHandler(sock)

Of course there are other ways of organizing this, for example passing in the
data needed by the handler to create the socket:
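One such sketch, keeping the imports and enqueue() logic from the previous snippet:

class ZeroMQSocketHandler(QueueHandler):
    def __init__(self, uri, socktype=zmq.PUB, ctx=None):
        self.ctx = ctx or zmq.Context()
        socket = zmq.Socket(self.ctx, socktype)
        socket.bind(uri)
        QueueHandler.__init__(self, socket)

    def enqueue(self, record):
        data = json.dumps(record.__dict__)
        self.queue.send(data)

    def close(self):
        self.queue.close()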

The following working example shows how logging can be used with multiprocessing
using configuration files. The configurations are fairly simple, but serve to
illustrate how more complex ones could be implemented in a real multiprocessing
scenario.

In the example, the main process spawns a listener process and some worker
processes. The main process, the listener and the workers each have their own
logging configuration (the workers all share the same configuration), giving
three separate configurations in all. We can see logging in the main process,
how the workers log to a QueueHandler and how the listener implements a
QueueListener and a more complex logging configuration, and arranges to
dispatch events received via the queue to the handlers specified in the
configuration. Note that these configurations are purely illustrative, but you
should be able to adapt this example to your own scenario.

Here’s the script - the docstrings and the comments hopefully explain how it
works:

import logging
import logging.config
import logging.handlers
from multiprocessing import Process, Queue, Event, current_process
import os
import random
import time

class MyHandler:
    """
    A simple handler for logging events. It runs in the listener process and
    dispatches events to loggers based on the name in the received record,
    which then get dispatched, by the logging system, to the handlers
    configured for those loggers.
    """
    def handle(self, record):
        logger = logging.getLogger(record.name)
        # The process name is transformed just to show that it's the listener
        # doing the logging to files and console
        record.processName = '%s (for %s)' % (current_process().name, record.processName)
        logger.handle(record)

def listener_process(q, stop_event, config):
    """
    This could be done in the main process, but is just done in a separate
    process for illustrative purposes.

    This initialises logging according to the specified configuration,
    starts the listener and waits for the main process to signal completion
    via the event. The listener is then stopped, and the process exits.
    """
    logging.config.dictConfig(config)
    listener = logging.handlers.QueueListener(q, MyHandler())
    listener.start()
    if os.name == 'posix':
        # On POSIX, the setup logger will have been configured in the
        # parent process, but should have been disabled following the
        # dictConfig call.
        # On Windows, since fork isn't used, the setup logger won't
        # exist in the child, so it would be created and the message
        # would appear - hence the "if posix" clause.
        logger = logging.getLogger('setup')
        logger.critical('Should not appear, because of disabled logger ...')
    stop_event.wait()
    listener.stop()

def worker_process(config):
    """
    A number of these are spawned for the purpose of illustration. In
    practice, they could be a heterogeneous bunch of processes rather than
    ones which are identical to each other.

    This initialises logging according to the specified configuration,
    and logs a hundred messages with random levels to randomly selected
    loggers.

    A small sleep is added to allow other processes a chance to run. This
    is not strictly needed, but it mixes the output from the different
    processes a bit more than if it's left out.
    """
    logging.config.dictConfig(config)
    levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
              logging.CRITICAL]
    loggers = ['foo', 'foo.bar', 'foo.bar.baz',
               'spam', 'spam.ham', 'spam.ham.eggs']
    if os.name == 'posix':
        # On POSIX, the setup logger will have been configured in the
        # parent process, but should have been disabled following the
        # dictConfig call.
        # On Windows, since fork isn't used, the setup logger won't
        # exist in the child, so it would be created and the message
        # would appear - hence the "if posix" clause.
        logger = logging.getLogger('setup')
        logger.critical('Should not appear, because of disabled logger ...')
    for i in range(100):
        lvl = random.choice(levels)
        logger = logging.getLogger(random.choice(loggers))
        logger.log(lvl, 'Message no. %d', i)
        time.sleep(0.01)

def main():
    q = Queue()
    # The main process gets a simple configuration which prints to the console.
    config_initial = {
        'version': 1,
        'formatters': {
            'detailed': {
                'class': 'logging.Formatter',
                'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
            }
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'level': 'INFO',
            },
        },
        'root': {
            'level': 'DEBUG',
            'handlers': ['console']
        },
    }
    # The worker process configuration is just a QueueHandler attached to the
    # root logger, which allows all messages to be sent to the queue.
    # We disable existing loggers to disable the "setup" logger used in the
    # parent process. This is needed on POSIX because the logger will
    # be there in the child following a fork().
    config_worker = {
        'version': 1,
        'disable_existing_loggers': True,
        'handlers': {
            'queue': {
                'class': 'logging.handlers.QueueHandler',
                'queue': q,
            },
        },
        'root': {
            'level': 'DEBUG',
            'handlers': ['queue']
        },
    }
    # The listener process configuration shows that the full flexibility of
    # logging configuration is available to dispatch events to handlers however
    # you want.
    # We disable existing loggers to disable the "setup" logger used in the
    # parent process. This is needed on POSIX because the logger will
    # be there in the child following a fork().
    config_listener = {
        'version': 1,
        'disable_existing_loggers': True,
        'formatters': {
            'detailed': {
                'class': 'logging.Formatter',
                'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
            },
            'simple': {
                'class': 'logging.Formatter',
                'format': '%(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
            }
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'level': 'INFO',
                'formatter': 'simple',
            },
            'file': {
                'class': 'logging.FileHandler',
                'filename': 'mplog.log',
                'mode': 'w',
                'formatter': 'detailed',
            },
            'foofile': {
                'class': 'logging.FileHandler',
                'filename': 'mplog-foo.log',
                'mode': 'w',
                'formatter': 'detailed',
            },
            'errors': {
                'class': 'logging.FileHandler',
                'filename': 'mplog-errors.log',
                'mode': 'w',
                'level': 'ERROR',
                'formatter': 'detailed',
            },
        },
        'loggers': {
            'foo': {
                'handlers': ['foofile']
            }
        },
        'root': {
            'level': 'DEBUG',
            'handlers': ['console', 'file', 'errors']
        },
    }
    # Log some initial events, just to show that logging in the parent works
    # normally.
    logging.config.dictConfig(config_initial)
    logger = logging.getLogger('setup')
    logger.info('About to create workers ...')
    workers = []
    for i in range(5):
        wp = Process(target=worker_process, name='worker %d' % (i + 1),
                     args=(config_worker,))
        workers.append(wp)
        wp.start()
        logger.info('Started worker: %s', wp.name)
    logger.info('About to create listener ...')
    stop_event = Event()
    lp = Process(target=listener_process, name='listener',
                 args=(q, stop_event, config_listener))
    lp.start()
    logger.info('Started listener')
    # We now hang around for the workers to finish their work.
    for wp in workers:
        wp.join()
    # Workers all done, listening can now stop.
    # Logging in the parent still works normally.
    logger.info('Telling listener to stop ...')
    stop_event.set()
    lp.join()
    logger.info('All done.')

if __name__ == '__main__':
    main()

RFC 5424 requires that a
Unicode message be sent to a syslog daemon as a set of bytes which have the
following structure: an optional pure-ASCII component, followed by a UTF-8 Byte
Order Mark (BOM), followed by Unicode encoded using UTF-8. (See the relevant
section of the specification.)

In Python 3.1, code was added to
SysLogHandler to insert a BOM into the message, but
unfortunately, it was implemented incorrectly, with the BOM appearing at the
beginning of the message and hence not allowing any pure-ASCII component to
appear before it.

As this behaviour is broken, the incorrect BOM insertion code is being removed
from Python 3.2.4 and later. However, it is not being replaced, and if you
want to produce RFC 5424-compliant messages which include a BOM, an optional
pure-ASCII sequence before it and arbitrary Unicode after it, encoded using
UTF-8, then you need to do the following:

Attach a Formatter instance to your SysLogHandler instance, with a
format string such as 'ASCII section\ufeffUnicode section'. The Unicode
code point U+FEFF, when encoded using UTF-8, will be encoded as a UTF-8
BOM – the byte-string b'\xef\xbb\xbf'.

Replace the ASCII section with whatever placeholders you like, but make sure
that the data that appears in there after substitution is always ASCII (that
way, it will remain unchanged after UTF-8 encoding).

Replace the Unicode section with whatever placeholders you like; if the data
which appears there after substitution contains characters outside the ASCII
range, that’s fine – it will be encoded using UTF-8.

The formatted message will be encoded using UTF-8 encoding by
SysLogHandler. If you follow the above rules, you should be able to produce
RFC 5424-compliant messages. If you don’t, logging may not complain, but your
messages will not be RFC 5424-compliant, and your syslog daemon may complain.
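Putting those rules together, a minimal sketch follows; the '/dev/log' address is an assumption for a local syslog socket and is platform-dependent:

import logging
from logging.handlers import SysLogHandler

sh = SysLogHandler(address='/dev/log')  # assumed local syslog socket
# %(name)s fills the pure-ASCII section; \ufeff becomes the UTF-8 BOM when
# SysLogHandler encodes the formatted message using UTF-8.
sh.setFormatter(logging.Formatter('%(name)s\ufeff%(message)s'))
logger = logging.getLogger('myapp')  # keep this name pure ASCII
logger.addHandler(sh)
logger.warning('température élevée')  # arbitrary Unicode after the BOM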

Although most logging messages are intended for reading by humans, and thus not
readily machine-parseable, there might be circumstances where you want to output
messages in a structured format which is capable of being parsed by a program
(without needing complex regular expressions to parse the log message). This is
straightforward to achieve using the logging package. There are a number of
ways in which this could be achieved, but the following is a simple approach
which uses JSON to serialise the event in a machine-parseable manner:
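A sketch of the idea (the field names passed to the message are arbitrary):

import json
import logging

class StructuredMessage:
    def __init__(self, message, **kwargs):
        self.message = message
        self.kwargs = kwargs

    def __str__(self):
        return '%s >>> %s' % (self.message, json.dumps(self.kwargs))

_ = StructuredMessage   # optional, to improve readability

logging.basicConfig(level=logging.INFO, format='%(message)s')
logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))

If run, this prints something like:

message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"}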

Note that the order of items might be different according to the version of
Python used.

If you need more specialised processing, you can use a custom JSON encoder,
as in the following complete example:

from __future__ import unicode_literals

import json
import logging

# This next bit is to ensure the script runs unchanged on 2.x and 3.x
try:
    unicode
except NameError:
    unicode = str

class Encoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, set):
            return tuple(o)
        elif isinstance(o, unicode):
            return o.encode('unicode_escape').decode('ascii')
        return super(Encoder, self).default(o)

class StructuredMessage(object):
    def __init__(self, message, **kwargs):
        self.message = message
        self.kwargs = kwargs

    def __str__(self):
        s = Encoder().encode(self.kwargs)
        return '%s >>> %s' % (self.message, s)

_ = StructuredMessage   # optional, to improve readability

def main():
    logging.basicConfig(level=logging.INFO, format='%(message)s')
    logging.info(_('message 1', set_value=set([1, 2, 3]), snowman='\u2603'))

if __name__ == '__main__':
    main()

When the above script is run, it prints:

message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]}

Note that the order of items might be different according to the version of
Python used.