Installation

However, just installing Graphite is not enough. In order for it to work, three configuration steps are necessary:

1. Initialize database for Graphite-web
2. Configure Carbon
3. Install and configure web server

Initialize database for Graphite-web

Graphite-web, a UI for collected metrics, needs its own database for storing users, their profiles and other data that web applications normally store. Though it can use MySQL, PostgreSQL or SQLite, SQLite is the one enabled by default, so we’ll stick to it – one less config file to edit.

Graphite’s syncdb command will create a new database, but chances are the web server won’t be able to write to it – most likely the server runs under its own user account with read-only access to the database file. For the sake of simplicity, we will nuke the problem by granting write access to everybody. This should do it:

```shell
graphite-manage syncdb
chmod a+w /var/lib/graphite/graphite.db
```

Configure Carbon

Carbon consists of several daemons with different responsibilities, but only one of them is actually required for accepting and storing the data – carbon-cache.

Firstly, we need to enable it. Head to the Carbon config file /etc/default/graphite-carbon and put ‘true’ next to CARBON_CACHE_ENABLED:

```shell
CARBON_CACHE_ENABLED=true
```

Then, start the daemon:

```shell
service carbon-cache start
```

That should be enough.

Install and configure web server

Graphite-web doesn’t have its own web server, so we have to install one separately. Apache with the WSGI module (Graphite-web is written in Django) will do.

```shell
apt-get install apache2 libapache2-mod-wsgi
```

Apache comes with an overly optimistic “It works!” web site configured, which we have to replace with something more useful, like Graphite. Fortunately, it’s fairly easy to do.
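On Debian/Ubuntu the graphite-web package ships a ready-made Apache site config, so a rough sketch of the swap could look like this (the config path varies between versions, so treat it as an assumption):

```shell
# Copy the vhost config shipped with graphite-web (path is distro-specific)
cp /usr/share/graphite-web/apache2-graphite.conf /etc/apache2/sites-available/graphite.conf

# Retire the default "It works!" site and enable Graphite instead
a2dissite 000-default
a2ensite graphite
service apache2 reload
```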

Viewing data

Behold! If the gods of programming haven’t forsaken you, opening 127.0.0.1 in a browser will greet you with the Graphite-web UI.

The left-hand side has a tree with all known data sources. Graphite comes with a few metrics of its own, so even a brand-new installation will have something to show. Clicking on data sources selects or deselects them, so it’s fairly easy to start getting something useful.

There’s also a Dashboards page where we can combine several graphs on one screen.

What’s really cool is Graphite-web’s Render URL API. You can build a URL with a data source name and render parameters and receive PNG, PDF or SVG graphs in response. There are also non-graphical response formats, like JSON or CSV. A URL will look something like this:
http://127.0.0.1/render?target=collectd.cpu-*.cpu-system
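For example, a render request could be put together and fetched from the command line like this (the target name just reuses the example above; the from/format parameters are standard render API options):

```shell
# Build a render URL: "target" selects metrics, "format" picks the output type
base="http://127.0.0.1/render"
url="${base}?target=collectd.cpu-*.cpu-system&format=json&from=-1h"

# Fetch the last hour of data as JSON (swap format for png, svg or csv)
curl -s "$url" || true  # fails unless Graphite is actually running locally
```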

Finally, Graphite-web also has a concept of events. If your releases come with a payload of extra bugs and degraded performance counters, a release event can be registered in Graphite, so it can be seen in the context of other monitoring data. New features, exceptions, or anything else that has a timestamp can be an event.
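Events can be registered with a plain HTTP POST to Graphite-web’s /events/ endpoint; the event name and tags below are made up for illustration:

```shell
# A hypothetical release event: "what" is the title, "tags" make it searchable
payload='{"what": "Release 1.2.3", "tags": "release", "data": "deployed by CI"}'

# POST it to Graphite-web (the timestamp defaults to the time of the request)
curl -s -X POST "http://127.0.0.1/events/" -d "$payload" || true
```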

Feeding in the data

Graphite doesn’t collect metrics by itself. It accepts them via Carbon, but there must be somebody else feeding in that data. Fortunately, that ‘somebody’ can be a lot of things.

Data sources

Firstly, our old friend collectd can write metrics to Graphite via its write_graphite plugin. It’s not alone, and there are other tools that can do the same – for instance, the Carbonator Windows Service or ssc serv.
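A minimal write_graphite section in collectd.conf might look like this (the host and port assume Carbon’s defaults; the node name is arbitrary):

```
LoadPlugin write_graphite

<Plugin write_graphite>
  <Node "local-graphite">
    Host "127.0.0.1"
    Port "2003"
    Protocol "tcp"
    Prefix "collectd."
  </Node>
</Plugin>
```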

Alternatively, Carbon can connect to AMQP compatible message queue, like RabbitMQ or ActiveMQ, and receive data from there.

Finally, we can send metrics in plain text from the command line using e.g. the netcat utility:

Plaintext protocol

```shell
echo "hostname.cpu.cpu-total 100 `date +%s`" | nc -q0 127.0.0.1 2003
```

Data pre/post processing

It might look weird that Graphite has a whole dedicated service that just forwards data from a TCP socket to storage. Why not write directly to the database? The thing is, Carbon is not just a dumb socket listener. It can do stuff.

Carbon can redirect some portions of the data to other listeners, thus distributing data among peers or even creating a replica of what it has.

It can also do some math on the data it receives before sending it to the database. For instance, my server has 4 cores and produces 8 separate feeds of CPU data. Most of the time I’d rather see, and probably store, the average value, and Carbon can do that.
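As a sketch, averaging per-core CPU feeds into a single metric could be expressed in aggregation-rules.conf like this (note this file is read by the carbon-aggregator daemon rather than carbon-cache, and the metric names are illustrative):

```
# output_template (frequency_in_seconds) = method input_pattern
hostname.cpu.average.<metric> (10) = avg hostname.cpu.cpu-*.<metric>
```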

I could continue with the ability to rewrite metric names or keep white/blacklists of senders, but you probably already got the message: Carbon is not dumb, and it deserves to be a separate service.

Storing data

All the data that comes into Carbon and passes its rules and aggregation steps ends up in the database – Whisper. You can choose other storages, like Ceres, InfluxDB or something built on top of Cassandra or HBase, for greater availability. But Whisper is the default one.

In many ways Whisper is very similar to RRDtool. It has a fixed-size flat-file structure and a similar concept of archives, but Whisper is noticeably slower than RRDtool, so the obvious question is: why does it still exist? Well, it can write data at irregular time intervals, while RRDtool cannot. That seems to be the main reason today.

Like RRDtool, Whisper can store the same data at different precisions for different time windows. While a single archive instruction in RRDtool holds both the retention rate and the aggregation rules (e.g. RRA:AVERAGE:0.5:10:60), in Whisper these are two different things: retention rates go into the storage-schemas.conf file, and aggregation rules go into storage-aggregation.conf.

xFilesFactor=0.5 has the same meaning as RRDtool’s xff parameter in a round-robin archive (RRA:AVERAGE:0.5...) – “if more than half of the aggregated values are undefined, the result should also be undefined”.
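Put together, a sketch of the two files might look like this (the pattern and retention values are just examples):

```
# storage-schemas.conf: keep 10-second points for a day,
# one-minute points for a week, ten-minute points for a year
[default]
pattern = .*
retentions = 10s:1d,1m:7d,10m:1y

# storage-aggregation.conf: average values when rolling up,
# unless more than half of them are undefined
[default]
pattern = .*
xFilesFactor = 0.5
aggregationMethod = average
```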

Summary

Graphite is a tool that can receive, store and display time series data. It sounds like a definition of RRDtool, but in fact they are very different. They tackle a similar problem from different angles and with different scale in mind. What’s special about Graphite is that its ‘store’ and ‘display’ components can easily be replaced with alternatives. If you’re not happy with graphite-web (and you probably won’t be), you could use something like Grafana instead. If Whisper is not scalable enough, it could be replaced with InfluxDB or something else. This is the quality of software that I like the most: it works out of the box, but you are free to change it in the way that works best for you.