netdata supports backends for archiving the metrics, or providing long-term dashboards,
using Grafana or other tools.

Since netdata collects thousands of metrics per server per second, which would easily congest any backend
server when several netdata servers are sending data to it, netdata allows sending metrics at a lower
frequency, by resampling them.

So, although netdata collects metrics every second, it can send to the backend servers averages or sums every
X seconds (though it can also send them per second, if you need it to).
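As a sketch of what this resampling looks like (illustrative only; netdata does this internally, this is not its actual code):

```python
# Illustrative sketch of resampling per-second samples into one value
# per update window, before sending it to a backend.

def resample(samples, mode):
    """Collapse a window of per-second samples into a single value."""
    if mode == "average":
        return sum(samples) / len(samples)
    if mode == "sum":
        return sum(samples)
    raise ValueError(f"unknown mode: {mode}")

# Ten per-second collections, sent as a single value every 10 seconds:
window = [3, 5, 4, 6, 5, 7, 4, 6, 5, 5]
print(resample(window, "average"))  # one gauge value for the window
print(resample(window, "sum"))      # total volume for the window
```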

Metrics are sent to opentsdb as prefix.chart.dimension, with the tag host=hostname.

For JSON document DBs, metrics are sent to the document DB, JSON formatted.

Prometheus is described on the prometheus page, since it pulls data from netdata.

Only one backend may be active at a time.

Netdata can filter metrics (at the chart level), to send only a subset of the collected metrics.

Netdata supports three modes of operation for all backends:

as-collected sends to backends the metrics as they are collected, in the units in which they are collected.
So, counters are sent as counters and gauges are sent as gauges, much like all data collectors do.
For example, to calculate CPU utilization in this format, you need to know how to convert kernel ticks to percentage.

average sends to backends normalized metrics from the netdata database.
In this mode, all metrics are sent as gauges, in the units netdata uses. This abstracts data collection
and simplifies visualization, but you will not be able to copy and paste queries from other sources to convert units.
For example, CPU utilization percentage is calculated by netdata, so netdata will convert ticks to percentage and
send the average percentage to the backend.

sum or volume: the sum of the interpolated values shown on the netdata graphs is sent to the backend.
So, if netdata is configured to send data to the backend every 10 seconds, the sum of the 10 values shown on the
netdata charts will be used.
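To make the difference between the three modes concrete, here is a toy illustration (the tick rate and the counter values are made up for the example, they are not netdata's actual internals):

```python
# Toy illustration of the three backend modes for a CPU-ticks counter.
# The tick rate (100 ticks/sec) is an assumption made for this example.
TICKS_PER_SEC = 100

# Raw counter readings, one per second (monotonically increasing).
counter = [1000, 1060, 1130, 1180, 1250]

# as-collected: the raw counter value is sent unchanged; the backend
# must compute rates and convert ticks to a percentage itself.
as_collected = counter[-1]

# netdata's own charts show the per-second rate as a percentage:
rates = [(b - a) / TICKS_PER_SEC * 100 for a, b in zip(counter, counter[1:])]

# average: the mean of the chart values over the update window (a gauge).
average = sum(rates) / len(rates)

# sum / volume: the sum of the chart values over the update window.
volume = sum(rates)

print(as_collected, average, volume)
```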

Time-series databases generally recommend collecting the raw values (as-collected). If you plan to invest in building your monitoring around a time-series database and you already know (or are willing to learn) how to convert units and normalize the metrics in Grafana or other visualization tools, we suggest using as-collected.

If, on the other hand, you just need long-term archiving of netdata metrics and you plan to work mainly with netdata, we suggest using average. It decouples visualization from data collection, so it will generally be a lot simpler. Furthermore, if you use average, the charts shown in the backend will match exactly what you see in netdata, which is not necessarily true for the other modes of operation.

This code is designed not to slow down netdata, regardless of the speed of the backend server.

destination = host1 host2 host3 ... accepts a space-separated list of hostnames,
IPs (IPv4 and IPv6) and ports to connect to.
Netdata will use the first available one to send the metrics.

The format of each item in this list is: [PROTOCOL:]IP[:PORT].

PROTOCOL can be udp or tcp. tcp is the default and the only one supported by the current backends.

IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6).
For IPv6 you have to enclose the IP in [] to separate it from the port.

PORT can be a number or a service name. If omitted, the default port for the backend will be used
(graphite = 2003, opentsdb = 4242).

Example IPv4:

destination = 10.11.14.2:4242 10.11.14.3:4242 10.11.14.4:4242

Example IPv6 and IPv4 together:

destination = [ffff:...:0001]:2003 10.11.12.1:2003

When multiple servers are defined, netdata will try the next one when the first fails. This allows
you to load-balance different servers: list your backend servers in a different order on each netdata.

netdata also ships nc-backend.sh,
a script that can be used as a fallback backend to save the metrics to disk and push them to the
time-series database when it becomes available again. It can also be used to monitor / trace / debug
the metrics netdata generates.

data source = as collected, or data source = average, or data source = sum, selects the kind of
data that will be sent to the backend.

hostname = my-name, is the hostname to be used for sending data to the backend server. By default
this is [global].hostname.

prefix = netdata, is the prefix to add to all metrics.

update every = 10, is the number of seconds between sending data to the backend. Netdata will add
some randomness to this number, to prevent stressing the backend server when many netdata servers send
data to the same backend. This randomness does not affect the quality of the data, only the time at which
they are sent.

buffer on failures = 10, is the number of iterations (each iteration is [backend].update every seconds)
to buffer data, when the backend is not available. If the backend fails to receive the data after that
many failures, data loss on the backend is expected (netdata will also log it).

timeout ms = 20000, is the timeout in milliseconds to wait for the backend server to process the data.
By default this is 2 * update_every * 1000.
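Putting the options above together, a [backend] section in netdata.conf might look like this (the values are illustrative, and type = graphite is just an example, use the backend you actually run):

```
[backend]
    enabled = yes
    type = graphite
    destination = 10.11.14.2:2003 10.11.14.3:2003
    data source = average
    prefix = netdata
    update every = 10
    buffer on failures = 10
    timeout ms = 20000
```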

send hosts matching = localhost * includes one or more space-separated patterns, using * as wildcard
(any number of times within each pattern). The patterns are checked against the hostname (the localhost
is always checked as localhost), allowing us to filter which hosts will be sent to the backend when
this netdata is a central netdata aggregating multiple hosts. A pattern starting with ! gives a
negative match. So to match all hosts named *db* except hosts containing *slave*, use
!*slave* *db* (so, the order is important: the first pattern matching the hostname will be used - positive
or negative).

send charts matching = * includes one or more space separated patterns, using * as wildcard (any
number of times within each pattern). The patterns are checked against both chart id and chart name.
A pattern starting with ! gives a negative match. So to match all charts named apps.*
except charts ending in *reads, use !*reads apps.* (so, the order is important: the first pattern
matching the chart id or the chart name will be used - positive or negative).
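The first-match-wins behavior of these patterns can be sketched like this (a simplified illustration using Python's fnmatch; netdata's simple patterns are its own implementation):

```python
from fnmatch import fnmatch

def simple_pattern_match(patterns, name):
    """First pattern that matches wins; a '!' prefix means negative match."""
    for pattern in patterns.split():
        negative = pattern.startswith("!")
        if fnmatch(name, pattern.lstrip("!")):
            return not negative
    return False  # no pattern matched at all

# '!*reads apps.*': all charts named apps.* except those ending in reads
print(simple_pattern_match("!*reads apps.*", "apps.cpu"))     # True
print(simple_pattern_match("!*reads apps.*", "apps.preads"))  # False
```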

send names instead of ids = yes | no controls the metric names netdata should send to the backend.
netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read
by the system and names are human friendly labels (also unique). Most charts and metrics have the same
ID and name, but in several cases they are different: disks with device-mapper, interrupts, QoS classes,
statsd synthetic charts, etc.

host tags = list of TAG=VALUE defines tags that should be appended on all metrics for the given host.
These are currently only sent to opentsdb and prometheus. Please use the appropriate format for each
time-series db. For example opentsdb likes them like TAG1=VALUE1 TAG2=VALUE2, while prometheus likes
them like tag1="value1",tag2="value2". Host tags are mirrored with database replication (streaming of metrics
between netdata servers).
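For example, assuming an opentsdb backend, the host tags option might look like this (the tag names and values are illustrative):

```
[backend]
    type = opentsdb
    host tags = datacenter=dc1 rack=a2

# for prometheus, the same option would use prometheus label syntax:
#    host tags = datacenter="dc1",rack="a2"
```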

Backend latency, the time the backend server needed to process the data netdata sent.
If a re-connection was involved, this includes the connection time.
(This chart has been removed, because it only measured the time netdata needed to hand the data
to the O/S - since the backend servers do not acknowledge reception, netdata has no means
to measure this properly.)

Backend operations, the number of operations performed by netdata.

Backend thread CPU usage, the CPU resources consumed by the netdata thread that is responsible
for sending the metrics to the backend server.