Please tell me how your log4net can deadlock if it is only acquiring one lock. There is no chance of deadlock in that case.

In the case where multiple locks are used, the simple solution might be just to disable the logger lock if you can. The logger lock should only be used to serialize output to a non-thread-safe resource.

The real problem here is likely poor documentation in log4net on how to set up logging to a device that implements its own thread-safe locking, plus the lack of any discussion of this exact problem, which should have been recognized before log4net was ever released.

If you have to perform any post-mortem debugging on remote systems, you had better be logging as much as you need to clearly recreate the sequence of events that led up to the crash.

Also, log4net is already thread-safe due to its own internal locking. Even from your description, I can’t imagine a scenario where those internal locks could affect your app. Why on earth would you add your own nested log-locking mechanisms on top of that? You’re asking for obscure deadlocks…

Why on earth would you add your own nested log locking mechanisms on top of that?

We didn’t. Referencing one object caused that object (hey, database object abstraction magic!) to reference another object, which wrote to the logs. It’s extremely timing-sensitive; it would only happen about once a day on a fairly heavily loaded public website.

I agree on most things, especially that too much logging hurts productivity.

In the example, I think a major flaw is that INFO seems to be in the wrong position; according to most logging tools it should sit between DEBUG and WARNING/ERROR. The things that are logged at the INFO level now should normally be logged at the TRACE level, which generally sits where INFO is in the example.

Naturally no items that would only interest developers should be logged at the INFO, WARN or ERROR levels.

FATAL Level
Unrecoverable Errors (the application will end after this log)

ERROR Level
Recoverable Errors (the application is not going to stop)

NORMAL Level
Here is the main trick. I like to show what is logged at this level to the end user, in a form he can read, so he can see what the application is doing. It is as if the application could tell the user what it is doing.

Sometimes logging can be useful as a tool to prove to the customer that he’s doing something wrong.

Our programs interface to large PBXs and a host of other telephone-related systems. Looking into the log, peering over the various binary exchanges between the systems, and being able to show the customer that it is, in fact, his PBX that is doing something wrong has saved us hours and days of unbillable support time.

I’d say that in any situation where you’re communicating between different systems, log what’s happening. If you don’t, you’ll have no clue which system actually malfunctioned.

I think it’s safe to add that logging can be viewed like the set of requirements for a software project: start simple and increase complexity if need be, rather than anticipating that this or that info would be useful in a log (similarly, the user would “probably want” those additional features!).

The problem I’ve seen is that people are off creating new logging frameworks: things way more complex than they need to be, with lots of little moving parts but not a lot of sense… and there are so many of them that none of them get tested.

It’s yet another reinvention of the wheel. On Unix-like systems, we’ve got syslog. Three calls: openlog, syslog, closelog. One call to open early on, one call to close nicely, and just syslog() whenever you want. The man page, while pithy, is usually enough to figure out the levels.

There is nothing worse than poorly implemented logging code! We log a lot, as we are doing data transportation and ETL. We treat logging as the cross-cutting concern it is and apply it transparently using AOP. Logging code does NOT belong in your core logic, and only poor or lazy developers haven’t sorted themselves out.