A Piece of My Mind, One Byte at a Time

logging

Some time ago I wrote a comparison of lua-nginx-module’s per-request context table and per-worker shared memory dictionaries. Silly me: that examination of ngx.ctx usage was pretty naive. Of course, constantly performing the magic lookup to reach the table is expensive. What happens when we localize the table, do our work, and shove it back into the ngx API at the end of our handler?
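The pattern in question can be sketched in OpenResty Lua like this (a minimal illustration; the field names and loop are my own, not taken from the original benchmark):

```lua
-- Localize ngx.ctx once: the first access triggers the module's
-- expensive metamethod-backed lookup, after which we hold a plain
-- Lua table reference.
local ctx = ngx.ctx

-- Do all per-request work against the local table; these are
-- ordinary table accesses with no magic lookup involved.
for i = 1, 100 do
    ctx.hits = (ctx.hits or 0) + 1
end

-- Shove the table back into the ngx API once, at the end of the
-- handler, so later phases still see the updated context.
ngx.ctx = ctx
```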

Last year I took a look at generating ModSecurity audit logs as JSON data, rather than the module’s native format (which is… err… difficult to parse). My initial approach was incomplete, needlessly introduced additional dependencies, and leaked memory like a sieve; I ended up abandoning it to work on FreeWAF. Some new use cases have come up that would benefit from more structured ModSecurity audit logs, so I’ve revamped the patch to emit JSON data using a saner approach.

There are already plenty of articles on optimizing Nginx performance, so I’m not going to focus on that here. Instead, I’m going to demonstrate a quick-and-dirty method of measuring traffic without the biggest performance drag in a standard Nginx setup: on-disk logging.
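One way to sketch that idea, assuming lua-nginx-module is available (the shared dict name, key, and stats endpoint here are hypothetical, for illustration only):

```nginx
# Count requests in a shared memory zone instead of writing an
# access log to disk. All names below are illustrative.
http {
    access_log off;               # drop on-disk logging entirely
    lua_shared_dict traffic 1m;   # shared across all worker processes

    server {
        location / {
            log_by_lua_block {
                -- bump a counter once per request; incr fails if the
                -- key doesn't exist yet, so seed it on first use
                local dict = ngx.shared.traffic
                local newval = dict:incr("requests", 1)
                if not newval then
                    dict:set("requests", 1)
                end
            }
        }

        location = /stats {
            content_by_lua_block {
                ngx.say(ngx.shared.traffic:get("requests") or 0)
            }
        }
    }
}
```

Because the dictionary lives in shared memory, the counter survives across requests and workers without ever touching the disk.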

ModSecurity audit logs are atrocious. There have been a few attempts to make parsing audit data more palatable: BitsOfInfo recently wrote up a proof of concept for working through audit logs with Logstash, and the AuditConsole project from JWall provides a more comprehensive approach to collecting log data, but neither solution addresses the inherent problem of how messy native audit log data is.

As part of my Master’s thesis I need a way to efficiently parse, sort, and store audit logs; building my own parser, while doable, would be a waste of time, so I’ve forked the ModSecurity project on GitHub and built a patch that implements JSON logging directly in the application, replacing the old data log structures entirely. There’s still a long way to go, but the initial patch seems stable and produces valid JSON.
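For a rough sense of the payoff, a JSON audit entry from such a patch might look something like the sketch below. The field names and values here are hypothetical, not the patch’s actual schema; the point is that each transaction becomes one self-describing object instead of a multi-part text blob:

```json
{
  "transaction": {
    "time": "20/Apr/2015:10:00:00 -0500",
    "transaction_id": "hypothetical0001",
    "remote_address": "203.0.113.10"
  },
  "request": {
    "request_line": "GET /index.html HTTP/1.1",
    "headers": { "Host": "example.com" }
  },
  "response": {
    "status": 403,
    "headers": { "Content-Type": "text/html" }
  },
  "audit_data": {
    "messages": ["Access denied with code 403"]
  }
}
```

A structure like this can be consumed directly by tools such as Logstash without any bespoke parsing of the native multi-part format.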