Metrics are collected from each node in the cluster so that administrators can use the data to monitor the cluster. In
general, the collectd service collects metrics every 10 seconds. The exception is volume metrics which are collected every
10 minutes.

The YARN application metrics that are collected through JMX have the metric name syntax mapr.rm.<metric_name>, and the
metric values are aggregated across all queues. However, you can configure collectd to create a filter for each queue.
Alternatively, you can use the REST API queue metrics (mapr.rm_queue.<metric_name>), which are set up for filtering by
queue by default.
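For example, once queue metrics are flowing into OpenTSDB, you can filter them by queue tag through the OpenTSDB HTTP
API. In the sketch below, the metric name (mapr.rm_queue.apps_pending) and the queue tag key are illustrative
assumptions; substitute the names used in your cluster:

    # Query a queue-level metric for one queue via the OpenTSDB HTTP API.
    # Metric name and tag key are assumptions; adjust to match your cluster.
    curl "http://<tsdb_node>:4242/api/query?start=1h-ago&m=sum:mapr.rm_queue.apps_pending{queue=default}"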

The collectd service uses an embedded JVM when it gathers metrics from the CLDB, Node Manager, Resource Manager, Drill,
and HBase. You can edit the Plugin java section of collectd.conf to configure limits on the collectd virtual memory
footprint.
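For example, a minimal sketch of the Plugin java section; the heap and metaspace limits shown are illustrative
assumptions, not recommended values:

    # In collectd.conf: cap the memory used by the embedded JVM.
    # Both values below are placeholders; size them for your nodes.
    <Plugin java>
      JVMArg "-Xmx128m"
      JVMArg "-XX:MaxMetaspaceSize=64m"
    </Plugin>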

Every 60 seconds, the collectd service uses a MapR plugin to gather the following topology metrics on each node in the
cluster. Use these metrics to understand disk utilization across a topology or rack. By default, these metrics include
all racks and topologies associated with the cluster. However, you can use tags to specify which racks or topologies
to include. Note: Racks and topologies can span multiple nodes, and one rack can be associated with multiple topologies.
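For example, assuming the rack name is attached to each data point as a rack tag, an OpenTSDB query can restrict a
topology metric to a single rack. The metric name below is a placeholder:

    # Average disk utilization for one rack, queried over the last hour.
    # Metric name (mapr.disk.used_percent) and tag key (rack) are assumptions.
    curl "http://<tsdb_node>:4242/api/query?start=1h-ago&m=avg:mapr.disk.used_percent{rack=rack1}"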

Every 10 seconds, the collectd service uses a MapR plugin to gather Resource Manager metrics on the active Resource Manager.
Collectd gathers metrics on the Resource Manager JVM process, YARN applications, and nodes that are managed by the Resource
Manager. The method used to gather the metrics differs based on the metric type.
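As one illustration, node and application counts are available from the standard YARN REST endpoint on the active
Resource Manager; whether collectd uses this exact endpoint for a given metric type is an assumption here:

    # Cluster-level metrics from the Resource Manager's REST API.
    # Port 8088 is the default YARN web UI port; adjust if yours differs.
    curl "http://<resource_manager>:8088/ws/v1/cluster/metrics"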

Using the REST API, each collectd service aggregates and writes metrics to one OpenTSDB node at a time. If an OpenTSDB
node becomes unavailable, collectd can fail over metric aggregation and storage to another OpenTSDB node. All
OpenTSDB nodes write to tables in the mapr.monitoring volume.
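For example, a minimal sketch of a write_tsdb section in collectd.conf that targets one OpenTSDB node, assuming the
stock write_tsdb plugin is in use; the host name is a placeholder:

    # In collectd.conf: write metrics to an OpenTSDB node over HTTP.
    <Plugin write_tsdb>
      <Node>
        Host "<tsdb_node>"
        Port "4242"
      </Node>
    </Plugin>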

Fluentd collects log events from each node in the cluster and stores them in a centralized location so that administrators
can search the logs when troubleshooting issues in the cluster. The process that fluentd uses to parse and send log events
to Elasticsearch differs based on the formatting of log events in each log file.
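For example, a minimal fluentd sketch that tails one log file and forwards parsed events to Elasticsearch. The file
path, tag, host name, and regular expression are all illustrative assumptions about one possible log format:

    # Tail a service log and parse each line into structured fields.
    <source>
      @type tail
      path /opt/mapr/logs/cldb.log          # placeholder path
      pos_file /var/log/fluentd/cldb.pos    # remembers the read position
      tag mapr.cldb
      <parse>
        @type regexp
        expression /^(?<logtime>\S+ \S+) (?<level>\S+) (?<message>.*)$/
      </parse>
    </source>

    # Forward all mapr.* events to Elasticsearch.
    <match mapr.**>
      @type elasticsearch
      host <elasticsearch_node>
      port 9200
    </match>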

Administration of MapR-DB is done primarily via the command line (maprcli) or with the MapR Control System (MCS).
Regardless of whether a MapR-DB table is used for binary files or JSON documents, the same types of commands are used
with slightly different parameter options. MapR-DB administration is associated with tables, columns and column families,
and table regions.
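For example, a few representative maprcli table commands; the table path is a placeholder, and the exact options
available depend on your MapR version:

    # Create a JSON table, then inspect its column families and regions.
    maprcli table create -path /tables/mytable -tabletype json
    maprcli table cf list -path /tables/mytable
    maprcli table region list -path /tables/mytable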