Local netdata (slave), without any database or alarms, collects metrics and sends them to
another netdata (master).

The my-netdata menu shows a list of all “databases streamed to” the master. Clicking one of those links allows the user to view the full dashboard of the slave netdata. The URL has the form http://master-host:master-port/host/slave-host/.
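For example, assuming a master reachable at master-host:19999 and a slave named web01 (both hypothetical names), the slave's dashboard URL can be composed like this:

```shell
# Hypothetical names: adjust MASTER and SLAVE for your setup.
MASTER="http://master-host:19999"
SLAVE="web01"

# The slave's full dashboard (and its API) is served by the master
# under the /host/<slave-host>/ prefix.
echo "${MASTER}/host/${SLAVE}/"
```

The same prefix also works for API calls, e.g. ${MASTER}/host/${SLAVE}/api/v1/info.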

Alarms for the slave are served by the master.

In this mode the slave is just a plain data collector. It spawns all external plugins, but instead
of maintaining a local database and accepting dashboard requests, it streams all metrics to the master. This reduces the memory footprint significantly, to between 6 MiB and 40 MiB, depending on the enabled plugins. To reduce memory usage as much as possible, refer to running netdata in embedded devices.

Local netdata (slave), with or without a database, collects metrics and sends them to another
netdata (proxy), which may or may not maintain a database, and which forwards them to another
netdata (master).

Alarms for the slave can be triggered by any of the involved hosts that maintains a database.

Any number of daisy-chained netdata servers is supported, each with or without a database and
with or without alarms for the slave metrics.

[web].mode = none disables the API (netdata will not listen on any port).
This also disables the registry (there cannot be a registry without an API).
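A minimal sketch of that setting in netdata.conf:

```
[web]
    # disable the API and the dashboard; netdata will not listen on any port
    mode = none
```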

accept a streaming request every seconds limits how often a master netdata will accept new streaming requests from the slaves. 0 sets no limit; 1 means at most one new streaming connection per second. If the limit is hit, you may see error log entries “… too busy to accept new streaming request. Will be allowed in X secs”.

A new file is introduced: stream.conf (to edit it on your system, run /etc/netdata/edit-config stream.conf). This file holds the streaming configuration for both the
sending and the receiving netdata.

API keys are used to authorize the communication of a pair of sending-receiving netdata.
Once the communication is authorized, the sending netdata can push metrics for any number of hosts.

You can generate an API key with the command uuidgen. API keys are just random GUIDs.
You can use the same API key on all your netdata, or use a different API key for any pair of
sending-receiving netdata.
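For example (uuidgen ships with util-linux; the fallback below assumes a Linux kernel, which can also supply a random UUID):

```shell
# Generate a random GUID to use as an API key.
API_KEY="$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)"

# Sanity-check the 8-4-4-4-12 hex layout of a GUID.
if echo "$API_KEY" | grep -Eq '^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$'; then
    echo "valid GUID: $API_KEY"
fi
```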

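The receiver configuration discussed next is a stream.conf section keyed by the API key. A minimal sketch, using a placeholder key:

```
[11111111-2222-3333-4444-555555555555]
    # accept metrics from slaves that authenticate with this API key
    enabled = yes
```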
The above is the receiver configuration of a single host, at the receiving end. MACHINE_GUID is
the unique id of the netdata generating the metrics (i.e. the netdata that originally collects
them, stored in /var/lib/netdata/registry/netdata.public.unique.id). So, metrics for netdata A keep
the same MACHINE_GUID, no matter how many other netdata they pass through.

allow from settings are netdata simple patterns: string matches
that use * as wildcard (any number of times) and a ! prefix for a negative match.
So: allow from = !10.1.2.3 10.* will allow all IPs in 10.* except 10.1.2.3. The order is
important: left to right, the first positive or negative match is used.
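The first-match semantics can be sketched with shell globbing (match_allow_from is a hypothetical helper for illustration, not part of netdata):

```shell
# Check an IP against a list of simple patterns, left to right.
# A '!'-prefixed pattern denies on match; a plain pattern allows on match;
# if nothing matches, the default is deny.
match_allow_from() {
    ip="$1"; shift
    for pat in "$@"; do
        case "$pat" in
            !*) case "$ip" in ${pat#!}) echo deny;  return;; esac ;;
            *)  case "$ip" in $pat)     echo allow; return;; esac ;;
        esac
    done
    echo deny
}

match_allow_from 10.1.2.3 '!10.1.2.3' '10.*'   # the negative pattern matches first
match_allow_from 10.9.9.9 '!10.1.2.3' '10.*'   # falls through to the 10.* allow
```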

Auto-scaling is probably the most trendy service deployment strategy these days.

Auto-scaling detects the need for additional resources and boots VMs on demand, based on a template. Soon after they start running the applications, a load balancer starts distributing traffic to them, allowing the service to grow horizontally to the scale needed to handle the load. When demand falls, auto-scaling starts shutting down VMs that are no longer needed.

What a fantastic feature for controlling infrastructure costs! Pay only for what you need for the time you need it!

In auto-scaling, all servers are ephemeral; they live for just a few hours. Every VM is a brand new instance of the application, automatically created from a template.

So, how can we monitor them? How can we be sure that everything is working as expected on all of them?

- zero configuration: all ephemeral servers should have exactly the same configuration, and nothing should have to be configured on any system for each of the ephemeral nodes. We shouldn’t care whether 10 or 100 servers are spawned to handle the load.

- self-cleanup: nothing should need to be done to clean up the monitoring infrastructure from the hundreds of nodes that may have been monitored over time.

netdata used to be self-contained, so that all these functions were handled entirely by each server. The changes we made allow each netdata to be configured independently for each function. So, each netdata can now act as:

- a self-contained system, much like it used to be.

- a data collector that collects metrics from a host and pushes them to another netdata (with or without a local database and alarms).

- a proxy that receives metrics from other hosts and pushes them immediately to other netdata servers. netdata proxies can also be store-and-forward proxies, meaning that they are able to maintain a local database for all metrics passing through them (with or without alarms).

- a time-series database node, where data are kept, alarms are run and queries are served to visualise the metrics.

API keys are just random GUIDs. Use the Linux command uuidgen to generate one. You can use the same API key for all your slaves, or you can configure one API key for each of them. This is entirely your decision.

We suggest using the same API key for each ephemeral node template you have, so that all replicas of the same ephemeral node will have exactly the same configuration.

I will use this API_KEY: 11111111-2222-3333-4444-555555555555. Replace it with your own.

On the master, edit /etc/netdata/stream.conf (to edit it on your system run /etc/netdata/edit-config stream.conf) and set these:

[11111111-2222-3333-4444-555555555555]
    # enable/disable this API key
    enabled = yes

    # one hour of data for each of the slaves
    default history = 3600

    # do not save slave metrics on disk
    default memory mode = ram

    # alarm checks, only while the slave is connected
    health enabled by default = auto

stream.conf on master, to enable receiving metrics from slaves using the API key.

If you used many API keys, you can add one such section for each API key.

When done, restart netdata on the master node. It is now ready to receive metrics.

On each of the slaves, edit /etc/netdata/stream.conf (to edit it on your system run /etc/netdata/edit-config stream.conf) and set these:

[stream]
    # stream metrics to another netdata
    enabled = yes

    # the IP and PORT of the master
    destination = 10.11.12.13:19999

    # the API key to use
    api key = 11111111-2222-3333-4444-555555555555

stream.conf on slaves, to enable pushing metrics to master at 10.11.12.13:19999.

Using just the above configuration, the slaves will push their metrics to the master netdata, but they will still maintain a local database of the metrics and run health checks. To disable both, edit /etc/netdata/netdata.conf and set:

[global]
    memory mode = none

[health]
    enabled = no

netdata.conf configuration on slaves, to disable the local database and health checks.

Keep in mind that setting memory mode = none also forces [health].enabled = no, since health checks require access to a local database. You can, however, keep the database and disable health checks only, if you prefer. Either way, all the metrics are sent to the master server, which can handle the health checking itself ([health].enabled = yes).

The file /var/lib/netdata/registry/netdata.public.unique.id contains a random GUID that uniquely identifies each netdata. This file is automatically generated by netdata the first time it is started, and remains unaltered forever.

If you are building an image to be used for automated provisioning of auto-scaled VMs, it is important to delete that file from the image, so that each instance of your image generates its own.
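The cleanup step can be sketched as follows; since touching /var/lib/netdata requires a real installation, the example rehearses the removal in a scratch directory:

```shell
# The real file lives at /var/lib/netdata/registry/netdata.public.unique.id;
# rehearse the removal in a temporary directory.
REG_DIR="$(mktemp -d)"
echo "11111111-2222-3333-4444-555555555555" > "$REG_DIR/netdata.public.unique.id"

# In an image-build script you would run:
#   rm -f /var/lib/netdata/registry/netdata.public.unique.id
rm -f "$REG_DIR/netdata.public.unique.id"

[ ! -e "$REG_DIR/netdata.public.unique.id" ] && echo "identity file removed"
```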

A proxy is a netdata that receives metrics from one netdata and streams them to another netdata.

netdata proxies may or may not maintain a database for the metrics passing through them.
When they maintain a database, they can also run health checks (alarms and notifications)
for the remote host that is streaming the metrics.

To configure a proxy, configure it as a receiving and a sending netdata at the same time,
using stream.conf.
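A proxy's stream.conf therefore contains both a sending and a receiving section. A minimal sketch (the destination address and API key are placeholders):

```
# stream.conf on the proxy

[stream]
    # sending side: push everything received to the final master
    enabled = yes
    destination = 10.11.12.13:19999
    api key = 11111111-2222-3333-4444-555555555555

[11111111-2222-3333-4444-555555555555]
    # receiving side: accept metrics from the slaves using this API key
    enabled = yes
```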

The sending side of a netdata proxy connects to and disconnects from the final destination of the
metrics, following the same pattern as the receiving side.