The answer to your question is somewhat complex. First, it depends on exactly which RRD file you look at; Zenoss does not create them all with exactly the same structure. Second, each Zenoss RRD file actually contains several archives (the one I just looked at, the system uptime for a Linux server, had seven: four for averages and three for maximum values; we'll ignore the maximum-value ones for this discussion). Each of these archives spans a different period of time. The first one just stores the data points as they come in. That happens every five minutes (specified as 300 seconds for the RRD "step" size). The database I looked at has 600 records in each of the archives, so the first archive can store data for 50 hours (600 x 300 seconds = 180,000 seconds). After 50 hours, each new data point overwrites the oldest data point in that archive; it's a circular buffer.

But that's not the whole story. The second archive in that same database file averages six of the input data points to create each one of its data points. So each data point in that archive gets added every 30 minutes instead of every five minutes. That archive has 600 records, so it covers 12.5 days.

The next archive averages 24 data points for each of its data points: 5 minutes x 24 x 600 = 72,000 minutes = 1,200 hours = 50 days. The last archive averages 288 points for each of its points, so it spans 600 days, or almost two years.
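The span arithmetic above can be sketched in a few lines of Python. The archive layout (step, pdp_per_row, rows) is taken from the example file described above; your own RRD files may differ.

```python
# Sketch: how much wall-clock time each RRA covers, given the base step,
# the number of primary data points consolidated per row, and the row count.
# Values below match the example file discussed above; yours may differ.

STEP = 300  # base RRD step in seconds (one primary data point every 5 minutes)

def archive_span_seconds(step, pdp_per_row, rows):
    """Seconds of history an RRA holds before it wraps around."""
    return step * pdp_per_row * rows

# The four AVERAGE archives from the example: (pdp_per_row, rows)
archives = [(1, 600), (6, 600), (24, 600), (288, 600)]

for pdp, rows in archives:
    days = archive_span_seconds(STEP, pdp, rows) / 86400
    print(f"pdp_per_row={pdp:>3}  rows={rows}  span={days:g} days")
```

Running this prints spans of roughly 2.08 days (50 hours), 12.5 days, 50 days, and 600 days, matching the figures worked out above.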

When Zenoss requests data from the RRD, the RRD software looks to see which archives cover the time span being requested. If there is more than one, it picks the one with the least amount of time between samples. In general, the greater the time span you request, or the farther back in time you go, the further down the list of archives you have to go to find one that covers it. The data returned from such an archive has poorer time resolution than the archives that average fewer data points, but those finer-grained archives cannot cover the requested time span.
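That selection rule can be modeled as follows. This is a sketch of the behavior described above, not Zenoss or rrdtool code; the archive list is the example file's AVERAGE archives.

```python
# Sketch (not Zenoss/rrdtool source): pick the finest-resolution archive
# whose retained history covers the requested look-back window.

STEP = 300  # base step in seconds, as in the example file

# (pdp_per_row, rows) for the AVERAGE archives from the example file
ARCHIVES = [(1, 600), (6, 600), (24, 600), (288, 600)]

def pick_archive(lookback_seconds, archives=ARCHIVES, step=STEP):
    """Return (pdp_per_row, rows) of the finest archive covering the window,
    or None if even the coarsest archive is too short."""
    covering = [a for a in archives if step * a[0] * a[1] >= lookback_seconds]
    if not covering:
        return None
    # finest resolution = fewest primary data points consolidated per row
    return min(covering, key=lambda a: a[0])

print(pick_archive(24 * 3600))   # last day  -> the raw archive (1, 600)
print(pick_archive(7 * 86400))   # last week -> the 30-minute archive (6, 600)
```

Asking for the last day returns the raw archive; asking for the last week skips it (it only holds 50 hours) and falls through to the 30-minute archive, just as described above.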

So, for the RRD file that I looked at, the answer to your question is 600 days for the coarsest-resolution data. You can look at your own files with "rrdtool info <filename>". To determine how often input data points are expected, look for something like "step=300", which tells you the base RRD step size in seconds. Then, for the round robin archives, look at "rra[<num>].pdp_per_row" to see how many of those points are averaged (or fed into a min or max function; see "rra[<num>].cf" for the function being used) to create each point in the archive. Take the biggest pdp_per_row for that data source, multiply it by the step size, and then multiply that by the number of records in the archive (rra[<num>].rows); the result is the time span in seconds of the longest-duration archive before new data starts overwriting old data.
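The calculation from "rrdtool info" output can be scripted. The sample text below imitates only the fields mentioned above (step, rra[N].rows, rra[N].pdp_per_row); real "rrdtool info" output contains many more fields, so treat this parser as a sketch.

```python
# Sketch: compute the longest archive's time span from `rrdtool info`-style
# output. sample_info imitates only the fields discussed above; real output
# has more fields, which this parser simply ignores.
import re

sample_info = """\
step = 300
rra[0].cf = "AVERAGE"
rra[0].rows = 600
rra[0].pdp_per_row = 1
rra[3].cf = "AVERAGE"
rra[3].rows = 600
rra[3].pdp_per_row = 288
"""

def longest_span_seconds(info_text):
    step = int(re.search(r"^step = (\d+)", info_text, re.M).group(1))
    rows = {m.group(1): int(m.group(2)) for m in
            re.finditer(r"^rra\[(\d+)\]\.rows = (\d+)", info_text, re.M)}
    pdp = {m.group(1): int(m.group(2)) for m in
           re.finditer(r"^rra\[(\d+)\]\.pdp_per_row = (\d+)", info_text, re.M)}
    # biggest pdp_per_row x step x rows, per archive, take the maximum
    return max(step * pdp[i] * rows[i] for i in rows)

print(longest_span_seconds(sample_info) / 86400, "days")  # 600.0 days
```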

Also notice that there are two ways to define the contents of an rrd file:

First, an SNMP data point in a template can be assigned a custom Create Command, which consists of RRA lines (see link above). Second, if you leave that setting blank, the system will use the system-wide default RRD create command, which is found at Advanced > Collectors > localhost > Edit > Default RRD Create Command.

I want to provide some more detailed information from something that I wrote up. It should make calculating this RRD stuff easier.

Default RRD settings

RRDTool does data consolidation, not aggregation. You can read about how it does this under the RRA section of http://oss.oetiker.ch/rrdtool/doc/rrdcreate.en.html. You can define what RRAs Zenoss creates by setting the "Default RRD Create Command" under the Edit tab for your performance monitor.

Of course the more of these archives you create, and the more "rows" you put in them, the larger each RRD file will be on the disk. This will in turn use more cache memory to remain up to date, and thus give you less monitoring capacity per Zenoss collector.

Example RRD setups

Here are some other examples of RRD archives one might choose to set up:

RRA:AVERAGE:0.5:1:8640 > average on a single data point, stored 8640 times = 30d (this is the as-collected data)
RRA:AVERAGE:0.5:6:2880 > 30min average for 60d
RRA:AVERAGE:0.5:12:1872 > 60min average for 90d
RRA:AVERAGE:0.5:288:600 > 1 day average for 2 years
RRA:MAX:0.5:1:8640 >
RRA:MAX:0.5:12:1872 > same as above, just max instead of average
RRA:MAX:0.5:288:600 >

File size: 198k. At 100,000 data points, performance data will consume 18.9GB.

RRA:AVERAGE:0.5:1:25920 > average on a single data point, stored 25920 times = 90d (this is the as-collected data)
RRA:AVERAGE:0.5:6:2880 > 30min average for 60d
RRA:AVERAGE:0.5:12:1872 > 60min average for 90d
RRA:AVERAGE:0.5:288:600 > 1 day average for 2 years
RRA:MAX:0.5:1:25920 >
RRA:MAX:0.5:12:1872 > same as above, just max instead of average
RRA:MAX:0.5:288:600 >

File size: 468k. At 100,000 data points, performance data will consume 38.8GB.

RRA:AVERAGE:0.5:1:4032 > average on a single data point, stored 4032 times = 14d (this is the as-collected data)
RRA:AVERAGE:0.5:12:1440 > 60min average for 60d
RRA:AVERAGE:0.5:288:180 > 1 day average for 6mo
RRA:AVERAGE:0.5:2016:52 > 1 week average for 1 year
RRA:AVERAGE:0.5:8064:60 > 1 month average for 5 years
RRA:AVERAGE:0.5:96768:5 > 1 year average for 5 years
RRA:MAX:0.5:1:4032 >
RRA:MAX:0.5:12:1440 > same as above, just max instead of average
RRA:MAX:0.5:288:180 >
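The file-size figures quoted in the first two setups can be roughly reproduced. An RRD stores about 8 bytes per row per data source, plus a header; the header size used here (1500 bytes) is an assumption for illustration, since it varies by rrdtool version and data-source count, so treat this as an estimate.

```python
# Rough sketch of the file-size arithmetic behind the figures above.
# Assumptions: ~8 bytes per row per data source, plus an assumed ~1500-byte
# header (the real header size varies by rrdtool version).

def estimate_rrd_bytes(rra_rows, data_sources=1, header=1500):
    """Approximate on-disk size of an RRD file, in bytes."""
    return header + 8 * data_sources * sum(rra_rows)

# Row counts from the first example setup above
rows = [8640, 2880, 1872, 600, 8640, 1872, 600]
size = estimate_rrd_bytes(rows)
print(f"{size / 1024:.0f} KB per file")  # lands close to the quoted 198k
print(f"{size * 100_000 / 1024**3:.1f} GB for 100,000 data points")
```

This reproduces the ~198k-per-file figure and comes out near the 18.9GB quoted for 100,000 data points in the first setup.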