With a default configuration of PostgreSQL, the information in the PostgreSQL logs is generally pretty sparse: some warnings and the occasional typo in psql. However, once you start tweaking the configuration parameters a bit and logging more things like query times, the logs get a lot bigger. Having PostgreSQL generate over 10GB of logs per day is not unheard of if you set log_min_duration_statement low enough on a busy server. You can analyze logs of that size with pgBadger pretty easily, but to get the real value, you want to look at the logs over the course of time: things like how the query load compares to the same time last week, last month, or last year. If you save the daily reports from pgBadger, you can look at them over time, but that’s a manual process. Another option is to put your logs into Hadoop and use MapReduce to analyze the information over time.

The Hadoop ecosystem has a service built for exactly that purpose called Flume. It lets you set up a Flume agent that will listen for the logs and then push them into Hadoop. Setting everything up is pretty simple.

First, you’ll want to set up PostgreSQL to send its logs to syslog. In postgresql.conf, you’ll set:

log_destination = 'syslog'
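
A couple of related settings are worth changing at the same time so the messages are easy to pick out on the syslog side and actually include the slow queries. A sketch, assuming the LOCAL0 facility and a 250 ms threshold (both values are illustrative, adjust to taste):

syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'
log_min_duration_statement = 250    # log any statement running 250 ms or longer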

Then you’ll want to set up your syslog server to forward messages to Flume. Depending on your syslog server, you’ll add a rule like this to rsyslog.conf:

*.* @127.0.0.1:5140
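
The *.* selector forwards everything rsyslog sees, PostgreSQL or not. If you gave PostgreSQL its own facility (say, LOCAL0) in postgresql.conf, you can forward just the PostgreSQL messages instead; a single @ forwards over UDP, while @@ would use TCP:

local0.* @127.0.0.1:5140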

Finally, you’ll need to start Flume where it will listen for syslog messages and push the messages into an HDFS sink.
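
A minimal flume.conf sketch for that agent, assuming a UDP syslog source on port 5140 and an HDFS directory of /pg_logs (the agent, source, channel, sink, and path names here are all illustrative):

agent.sources = pg-syslog
agent.channels = mem
agent.sinks = hdfs-out

# Listen for the UDP syslog messages rsyslog forwards
agent.sources.pg-syslog.type = syslogudp
agent.sources.pg-syslog.host = 127.0.0.1
agent.sources.pg-syslog.port = 5140
agent.sources.pg-syslog.channels = mem

# Buffer events in memory between the source and the sink
agent.channels.mem.type = memory
agent.channels.mem.capacity = 10000

# Write the events into HDFS as plain text
agent.sinks.hdfs-out.type = hdfs
agent.sinks.hdfs-out.channel = mem
agent.sinks.hdfs-out.hdfs.path = /pg_logs
agent.sinks.hdfs-out.hdfs.fileType = DataStream
agent.sinks.hdfs-out.hdfs.writeFormat = Text

Then start the agent with something like:

flume-ng agent --conf conf --conf-file flume.conf --name agent

Once it’s running, the log lines land in /pg_logs on HDFS, ready for MapReduce jobs to chew on.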