Description

Tracking ticket for work to integrate Python's logging module into Django, providing logging hooks at various points in the framework as well as a Django-style interface for enabling / disabling / configuring logging. See also LoggingProposal.

I know there's an open issue about where to place setup code in Django that should run only once, so I assume this placement of the logging configuration code is just temporary because no better mechanism exists? As a user, I'd prefer to control exactly when and how logging gets set up, rather than have it done automagically for me. If you feel that automagic configuration is what the majority of users want, you could still add a setting called e.g. AUTOLOG, defaulting to True if you like, which performs the logging setup automatically when true but can be overridden when more precise control is wanted. In fact, if more granularity over logging settings is wanted, you could have a settings dict, e.g. LOGGING_CONFIG = {"enabled": True, "automatic": True}, and so on.
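If that route were taken, the gating could look something like this minimal sketch (the LOGGING_CONFIG setting, its keys, and the setup_logging function are all hypothetical names from the suggestion above, not anything in the actual patch):

```python
import logging

# Hypothetical settings values; in Django these would live in settings.py.
LOGGING_CONFIG = {"enabled": True, "automatic": True}

def setup_logging(config):
    """Attach a handler to the 'django' logger if logging is enabled."""
    if not config.get("enabled", False):
        return False
    logging.getLogger("django").addHandler(logging.StreamHandler())
    return True

# Automatic setup only happens when the user has asked for it;
# with "automatic": False the user calls setup_logging() themselves.
if LOGGING_CONFIG.get("automatic", True):
    configured = setup_logging(LOGGING_CONFIG)
```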

Re. the change in db/backends/__init__.py:

You might want to allow control, via settings.py, over whether a CursorLoggingWrapper is returned rather than a plain cursor. There might be some petrol-heads for whom speed is paramount ;-)
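For illustration, a sketch of such a wrapper gated by a hypothetical setting (the DISABLE_SQL_LOGGING name and the make_cursor helper are made up; the real patch wraps Django's own cursor, shown here with sqlite3 so the sketch runs standalone):

```python
import logging
import sqlite3

logger = logging.getLogger("django.db.backends")

class CursorLoggingWrapper:
    """Proxy that logs each SQL statement before delegating to the real cursor."""

    def __init__(self, cursor):
        self.cursor = cursor

    def execute(self, sql, params=()):
        logger.debug("SQL: %s params=%r", sql, params)
        return self.cursor.execute(sql, params)

    def __getattr__(self, name):
        # Everything else (fetchone, fetchall, ...) passes straight through.
        return getattr(self.cursor, name)

# Hypothetical setting: petrol-heads can opt out of the wrapper entirely.
DISABLE_SQL_LOGGING = False

def make_cursor(connection):
    cursor = connection.cursor()
    return cursor if DISABLE_SQL_LOGGING else CursorLoggingWrapper(cursor)
```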

Re. the change in "/dev/null", presumably this is your new django.utils.log:

I've proposed on python-dev a change to Python logging to add a dictConfig() function to the logging.config module. The start of the thread is here (though some of the posts don't appear in this thread because of broken email clients) and the draft of the schema I'm proposing is here. It would be good if we could align the schemas for the LOGGING dict.

I might be going about the initialisation the wrong way then. Here's what I want to achieve: without the user needing to do anything at all (including modifying a settings.py created before logging was added to Django), I want ALL messages to any logger beneath "django" to be silently swallowed, no matter what the severity. My understanding is that the correct way to do this is to hook up a NullHandler and set "propagate = False" on the "django" logger as early as possible. Is there a better way of achieving that?

No, you're right about the best way to achieve what you want - i.e. by adding the NullHandler and setting propagate to False on the "django" logger, you will indeed ensure that anything logged to "django.*" disappears into the ether. That'll be fine from a backward compatibility point of view - users who have old settings files shouldn't see any unexpected logging messages.
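A minimal sketch of that arrangement (logging.NullHandler is in the standard library from Python 2.7/3.1; for older versions the patch would need to bundle its own copy):

```python
import logging

# Swallow everything logged under "django": a NullHandler plus
# propagate = False means events neither reach the root logger's
# handlers nor trigger "no handlers could be found" warnings.
django_logger = logging.getLogger("django")
django_logger.addHandler(logging.NullHandler())
django_logger.propagate = False

# Demonstration: record anything that reaches the root logger.
seen = []
class Recorder(logging.Handler):
    def emit(self, record):
        seen.append(record)

logging.getLogger().addHandler(Recorder())
logging.getLogger("django.request").error("boom")
# The event stops at "django" and never reaches the root handlers.
```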

The other question is what different modes of initialisation of logging might be wanted by different users. If a user wants Django to configure logging for them automatically, they can define a dict bound to LOGGING in settings.py. This covers the simplest use case - users don't need to do anything else. As for the schema of the dict, it would be nice to align it with what I'm proposing on python-dev, linked above. That schema differs from what Ivan Sagalaev suggested on the django-developers thread, but it has to cover functionality of the logging module (shared handlers, filters, formatters and so on) which wasn't covered by Ivan's overview but will be needed in certain cases.
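To make the alignment concrete, here's a sketch of what a PEP 391-style LOGGING dict might look like, using the key names from the dictConfig() schema (the exact handler, formatter and level choices are illustrative only):

```python
import logging
import logging.config

# Sketch of a settings.LOGGING dict aligned with the PEP 391 schema;
# the key names follow logging.config.dictConfig (Python 2.7+/3.2+).
LOGGING = {
    "version": 1,
    "formatters": {
        "simple": {"format": "%(levelname)s %(name)s: %(message)s"},
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "simple",
        },
    },
    "loggers": {
        "django": {
            "handlers": ["console"],
            "level": "INFO",
            "propagate": False,
        },
    },
}

logging.config.dictConfig(LOGGING)
```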

In a more complex use case, say the user wants to set up logging using their own code, but this needs to run as early as possible and be called just once. For this, I suggested a callback mechanism here, on your thread about "Best place for code that processes stuff from settings.py once". There, the user could define callbacks for logging or anything else which needs to be done just once. Their logging setup callback can use any mechanism to configure logging: programmatically using the logging getLogger/addHandler APIs, by loading a dict from YAML or JSON, or by invoking the dict config mechanism with a dict obtained from some other source - including a literal dict in settings.py not called LOGGING ;-)

One more point about handlers. Because propagate is set to False for the "django" logger, in order for users to see django events, they will explicitly have to add appropriate handlers to the "django" logger. These can be the same handlers as they e.g. attach to the root logger, or completely different handlers.

A fairly common usage pattern is to attach console and file handlers to the root logger and nowhere else (but that wouldn't work for Django events, as I've explained above). Another common pattern is to attach console and file handlers to the root, plus additional handlers (e.g. SMTP handlers, or file handlers pointing to files which store only errors) for particular severities (e.g. ERROR or CRITICAL) and/or particular areas of the application.
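That second pattern might be sketched like this, with console handlers standing in for the file and SMTP handlers for brevity; note that the handler objects attached to "django" can be the very same objects attached to the root:

```python
import logging

root = logging.getLogger()
console = logging.StreamHandler()
root.addHandler(console)

# Errors-only handler: same idea as an SMTPHandler, or a FileHandler
# pointing at a file that collects only ERROR and above.
errors_only = logging.StreamHandler()
errors_only.setLevel(logging.ERROR)
root.addHandler(errors_only)

# Because "django" does not propagate, it needs handlers of its own;
# here we reuse the handlers already attached to the root logger.
django_logger = logging.getLogger("django")
django_logger.propagate = False
django_logger.addHandler(console)
django_logger.addHandler(errors_only)
```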

Of course, knowledgeable users can, if they wish, set the "django" logger's propagate flag to True in their logging setup code, which means that they then don't need to attach handlers specifically to the "django" logger, as events will propagate up to the root logger.

I've added a Launchpad branch which is an up-to-date branch of Django trunk (as at today - 13 May 2010), with updated logging functionality (including a copy of dictConfig, from my project of the same name which is a standalone version of the new PEP 391 logging configuration functionality, usable with Python 2.4, 2.5 and 2.6). I've inserted some logging statements in app loading, SQL execution and request handling (including handling of uncaught and other exceptions).

There's also an example project which uses the Django logging API to configure logging in settings.py, via a PEP 391-compatible configuration dictionary (the initialisation code is only called once, even though settings.py is imported at least twice).
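The run-once behaviour can be achieved with a simple module-level guard; this is only an illustrative sketch, not the branch's actual mechanism:

```python
import logging

_configured = False

def configure_logging():
    """Idempotent setup: safe even if settings.py is imported twice."""
    global _configured
    if _configured:
        return False
    logging.getLogger("django").addHandler(logging.NullHandler())
    _configured = True
    return True

# Simulate settings.py being imported twice: only the first call
# actually does anything.
first = configure_logging()
second = configure_logging()
```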

I've made a working logger by just using a simple encapsulation of a database table with two main fields, level and text; this has the advantage of being instantly available in the Django admin. What am I missing with this approach?

@anon - Nothing in particular; if you want to log to database, that's certainly an option with an appropriate handler.
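As a sketch of such a handler (the log_entry table and column names are made up; in Django this would more likely write through a model than raw SQL):

```python
import logging
import sqlite3

class DatabaseHandler(logging.Handler):
    """Minimal handler writing level and text to a table, roughly the
    two-field scheme described above."""

    def __init__(self, connection):
        logging.Handler.__init__(self)
        self.connection = connection
        self.connection.execute(
            "CREATE TABLE IF NOT EXISTS log_entry (level TEXT, text TEXT)"
        )

    def emit(self, record):
        self.connection.execute(
            "INSERT INTO log_entry (level, text) VALUES (?, ?)",
            (record.levelname, self.format(record)),
        )
        self.connection.commit()

conn = sqlite3.connect(":memory:")
logger = logging.getLogger("demo.db")
logger.propagate = False
logger.addHandler(DatabaseHandler(conn))
logger.error("something went wrong")
```

Using a dedicated connection for the handler, as here, also sidesteps the rollback problem: log rows written on the application's own connection would vanish when a failed transaction rolls back.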

My only concern would be the extra database load that this logging technique would introduce, but that isn't something that you can declare as universally bad; it entirely depends on your load pattern and the amount of logging you intend to do.

If you're using database transactions in your app, the logging will seem to be working fine, until you actually hit an error... The errors will be successfully written out to your log table, but then your transaction manager comes along and rolls everything back because of the failed transaction. ;)