Keep in mind, this isn't just pretty charts; it has alarms too! One day I was using BitTorrent to download a bunch of Linux ISOs and mistakenly saving them to a small SSD, and it gave me an alert that at the current data ingestion rate, my drive would be full in 5 hours! Very cool.
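Alarms like that are defined in netdata's health configuration (the health.d/*.conf files). A minimal sketch of a disk-usage alarm; the thresholds here are illustrative, not the stock values netdata ships with:

```
# Minimal sketch of a netdata health template (health.d/*.conf syntax).
# Thresholds are illustrative, not the shipped defaults.
template: disk_space_usage
      on: disk.space
    calc: $used * 100 / ($used + $avail)
   units: %
   every: 1m
    warn: $this > 80
    crit: $this > 95
    info: percentage of the disk that is in use
```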

Seems like a neat product, just trying to figure out the best process for deployment. Sadly it looks to be Linux-only; it would be more useful if it covered more platforms.

I plan to install it on all my servers (I just have it on a few for testing right now) and then set up nginx to point to the "main" netdata server for authentication and SSL.
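Roughly what I have in mind; a minimal nginx sketch, assuming netdata listens on its default port 19999, and where the hostname, certificate paths, and htpasswd file are placeholders:

```
# Hypothetical reverse proxy: nginx terminates SSL and enforces basic
# auth, while netdata itself only listens on localhost:19999.
server {
    listen 443 ssl;
    server_name monitor.example.com;                 # placeholder hostname

    ssl_certificate     /etc/nginx/ssl/monitor.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/monitor.key;

    auth_basic           "netdata";
    auth_basic_user_file /etc/nginx/.htpasswd;       # created with htpasswd

    location / {
        proxy_pass http://127.0.0.1:19999;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```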

But the individual machines will all still be exposed, sending out their data to anything that asks for it. You could at least use the firewall to lock down whom they will talk to.
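For example, a rough iptables sketch; 10.0.0.5 stands in for the "main" proxy host, and 19999 is netdata's default port:

```
# Allow only the central proxy host to query netdata; drop everyone else.
iptables -A INPUT -p tcp --dport 19999 -s 10.0.0.5 -j ACCEPT
iptables -A INPUT -p tcp --dport 19999 -j DROP
```

(Alternatively, if everything goes through the proxy anyway, netdata can be bound to 127.0.0.1 in netdata.conf, which avoids exposing the port at all.)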

Most centralized monitoring solutions are only good at presenting statistics of past performance (i.e. they cannot be used for real-time performance troubleshooting).

Netdata has a different approach:

data collection happens per second

thousands of metrics per server are collected

data do not leave the server they are collected on

netdata servers do not talk to each other

your browser connects to all the netdata servers directly (illustrated below)

Using netdata, your monitoring infrastructure is embedded in each server, significantly limiting the need for additional resources. netdata is blazingly fast and very resource-efficient, utilizing spare capacity that already exists on each server. This allows the monitoring infrastructure to scale out.
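To make "your browser connects to all the netdata servers" concrete: each instance answers REST queries itself, so the dashboard pulls JSON straight from every box with no collector in between. A quick sketch against netdata's /api/v1/data endpoint (the hostname and chart are illustrative):

```
# Ask one netdata server directly for the last 60 seconds of CPU data,
# one point per second, as JSON. No central collector is involved.
curl "http://server1.example.com:19999/api/v1/data?chart=system.cpu&after=-60&points=60"
```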

However, the netdata approach introduces a few new issues that need to be addressed, one being keeping track of the netdata servers we have installed, i.e. the URLs our netdata servers are listening on.

To solve this, netdata utilizes a central registry. This registry, together with certain browser features, allows netdata to provide unified cross-server dashboards. For example, using the latest git version of netdata, when you jump from server to server using the my-netdata menu, several session settings (like the currently viewed charts and the current zoom and pan state of the charts) are propagated to the new server, so that the new dashboard comes up with exactly the same view.
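For anyone who wants to self-host that registry instead of using the public one, it is a section in netdata.conf; a minimal sketch, with registry.example.com standing in for whichever instance you pick to act as the registry:

```
# On the one server acting as the registry:
[registry]
    enabled = yes
    registry to announce = http://registry.example.com:19999

# On every other netdata server (they only announce themselves):
[registry]
    enabled = no
    registry to announce = http://registry.example.com:19999
```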

Yes, I've read that. It didn't explain why I'd want this. It sounds like netdata is just "giving up" rather than providing a solution. It's just a nice, graphical top command? I've got that with top already. What is this getting me that I don't already have? Centralization is the key goal. The last thing I want to do is expose and secure every machine individually; that's a pain and a risk. I guess I'm missing the "here is the good part" about it, other than the nice-looking display.

I did see that you can tie this INTO a central system, but once you have that central system, what is the purpose of netdata?

How do I see my servers that are not on the Internet, for example? Let's say I have 1,000 servers; how do I view them? The purpose of a central console is that I have one place, one secured place, to go view them. If each machine has its own dashboard, do I have to put every one of them out on the Internet so that I can view them?

I can't even think of one server (that I have) where I would need this level and speed of real-time data, let alone 1,000. I know such servers are out there, but I know I don't have any. SNMP and collectors already give me way more info than I can use and allow for central monitoring. This seems like a super-niche product that could have security implications.

On Wall St. we needed this from time to time, but we wouldn't want it running or exposed normally. Very rare, even there; literally only 0.1% of servers. For normal needs, what we need is roughly: