Monitor performance metrics in the OpsCenter Dashboard. Real-time and historical performance metrics are available at
different granularities: cluster-wide, per node, per table (column family), or storage tier.

OpsCenter manages multiple DataStax Enterprise clusters with a single install of the central opscenterd server. Administer
your clusters using the options available from the Cluster Actions menu. Generate reports from the Help menu.

Cluster metrics monitor cluster performance at a high level. Cluster metrics are aggregated across all nodes in the cluster.
OpsCenter tracks a number of cluster-wide metrics for read performance, write performance, memory, and capacity.

Pending task metrics track requests that have been received by a node but are waiting to be processed. An accumulation
of pending tasks on a node can indicate a potential bottleneck in performance and should be investigated.

Table (formerly column family) metrics allow drilling down and locating specific areas of application workloads that are
the source of performance issues. If you notice a performance trend at the OS or cluster level, viewing table metrics
can provide a more granular level of detail.

Configure alert thresholds for Cassandra cluster-wide, table, and operating system metrics in the Alerts area of OpsCenter.
This proactive monitoring feature is available for DataStax Enterprise clusters.
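
For automation, alert rules can also be created through the OpsCenter HTTP API rather than the Alerts UI. The sketch below is a minimal, hedged example: the host, port, cluster name, endpoint path, and payload field names are assumptions for illustration; consult the OpsCenter API reference for the exact alert-rules resource and schema.

```python
# Minimal sketch of creating an alert rule via the OpsCenter HTTP API.
# ASSUMPTIONS: the opscenterd host/port, cluster name, endpoint path, and
# payload field names below are illustrative, not authoritative; check the
# OpsCenter API reference for the exact alert-rules resource and schema.
import requests

OPSCENTER = "http://opscenter-host:8888"   # hypothetical opscenterd address
CLUSTER = "MyCluster"                      # hypothetical cluster name

rule = {
    "metric": "os-disk-usage",   # metric key from this reference
    "threshold": 90,             # alert when disk usage exceeds 90%
    "duration": 300,             # condition must hold for 5 minutes
    "enabled": True,
}

resp = requests.post(f"{OPSCENTER}/{CLUSTER}/alert-rules", json=rule)
resp.raise_for_status()
print(resp.json())
```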

OpsCenter Metrics Tooltips Reference

Comprehensive reference of performance metrics available in OpsCenter.

Metrics are available to add to any graph. View descriptions of any metric by hovering over
a metric in the Add Metric dialog, or by hovering over a graph legend.

The following list of metric descriptions available in tooltips is provided for your convenience:

Write Requests [write-ops]

The number of write requests per second on the coordinator nodes, analogous to client writes. Monitoring the number of requests over a given time period reveals system write workload and usage patterns.
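
For trend analysis outside the Dashboard, the same per-second counters can be pulled over HTTP. The sketch below assumes a metrics endpoint of the form /{cluster}/metrics/{node_ip}/{metric-key}; the exact path and parameters are defined in the OpsCenter API reference.

```python
# Minimal sketch of fetching one hour of the write-ops metric for a node.
# ASSUMPTIONS: host, port, cluster name, node IP, and the endpoint shape are
# illustrative; the metric key "write-ops" is taken from this reference.
import time
import requests

OPSCENTER = "http://opscenter-host:8888"
CLUSTER = "MyCluster"
NODE_IP = "10.0.0.1"

end = int(time.time())
start = end - 3600                                  # last hour
params = {"start": start, "end": end, "step": 60}   # 60-second resolution

url = f"{OPSCENTER}/{CLUSTER}/metrics/{NODE_IP}/write-ops"
print(requests.get(url, params=params).json())
```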

Write Request Latency (percentiles) [write-histogram]

The min, median, max, 90th, and 99th percentiles of client write latency. The time period starts when a node receives a client write request, and ends when the node responds back to the client. Depending on consistency level and replication factor, this may include the network latency from writing to the replicas.

Write Failures [write-failures]

The number of write requests on the coordinator nodes that fail due to errors returned from replicas.

Write Timeouts [write-timeouts]

The number of server write timeouts per second on the coordinator nodes.

Write Unavailable Errors [write-unavailables]

The number of write requests per second on the coordinator nodes that fail because not enough replicas are available.

Read Requests [read-ops]

The number of read requests per second on the coordinator nodes, analogous to client reads. Monitoring the number of requests over a given time period reveals system read workload and usage patterns.

Read Request Latency (percentiles) [read-histogram]

The min, median, max, 90th, and 99th percentiles of client read latency. The time period starts when a node receives a client read request, and ends when the node responds back to the client. Depending on consistency level and replication factor, this may include the network latency from requesting the data's replicas.

Read Failures [read-failures]

The number of read requests on the coordinator nodes that fail due to errors returned from replicas.

Read Timeouts [read-timeouts]

The number of server read timeouts per second on the coordinator nodes.

Read Unavailable Errors [read-unavailables]

The number of read requests per second on the coordinator nodes that fail because not enough replicas are available.

Non Heap Committed [nonheap-committed]

Allocated memory guaranteed for the Java nonheap.

Non Heap Max [nonheap-max]

Maximum amount of memory that the Java nonheap can grow to.

Non Heap Used [nonheap-used]

Average amount of Java nonheap memory used.

Heap Committed [heap-committed]

Allocated memory guaranteed for the Java heap.

Heap Max [heap-max]

Maximum amount of memory that the Java heap can grow to.

Heap Used [heap-used]

Average amount of Java heap memory used.

JVM CMS Collection Count [cms-collection-count]

Number of concurrent mark sweep garbage collections performed per second.

JVM ParNew Collection Count [par-new-collection-count]

Number of ParNew garbage collections performed per second. ParNew collections pause all work in the JVM but should finish quickly.

JVM CMS Collection Time [cms-collection-time]

Average number of milliseconds spent performing CMS garbage collections per second.

JVM ParNew Collection Time [par-new-collection-time]

Average number of milliseconds spent performing ParNew garbage collections per second. ParNew collections pause all work in the JVM but should finish quickly.

JVM G1 Old Collection Count [g1-old-collection-count]

Number of G1 old generation garbage collections performed per second.

JVM G1 Old Collection Time [g1-old-collection-time]

Average number of milliseconds spent performing G1 old generation garbage collections per second.

JVM G1 Young Collection Count [g1-young-collection-count]

Number of G1 young generation garbage collections performed per second.

JVM G1 Young Collection Time [g1-young-collection-time]

Average number of milliseconds spent performing G1 young generation garbage collections per second.
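
The JVM itself exposes garbage collection statistics as cumulative counters (CollectionCount and CollectionTime on its GarbageCollector MXBeans), so per-second figures like the ones above are typically derived by differencing two samples. A small sketch of that arithmetic, with made-up sample values:

```python
# Illustrative sketch: deriving a per-second GC rate from cumulative counters.
# The JVM's GarbageCollectorMXBeans expose cumulative CollectionCount and
# CollectionTime; a per-second figure is the delta between two samples
# divided by the sampling interval. The sample values below are made up.

def per_second(previous, current, interval_seconds):
    """Rate of change of a monotonically increasing counter."""
    return (current - previous) / interval_seconds

# Cumulative ParNew collection count sampled 60 seconds apart:
print(per_second(15_200, 15_230, 60))       # 0.5 collections per second

# Cumulative ParNew collection time (ms) sampled 60 seconds apart:
print(per_second(842_000, 842_900, 60))     # 15.0 ms of GC time per second
```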

Data Size [data-load]

The live disk space used by all tables on a node.

Total Bytes Compacted [total-bytes-compacted]

Number of bytes compacted per second.

Total Compactions Completed [actual-total-compactions-completed]

Number of compaction tasks completed per second.

Total Compactions [total-compactions-completed]

Number of SSTable scans per second that could result in a compaction.

Compactions Pending [pending-compaction-tasks]

Estimated number of compactions required to achieve the desired state. This includes the pending queue to the compaction executor and additional tasks that may be created from their completion.

Task Queues [all-pending]

Aggregate of the thread pools' pending queues, useful for identifying where work is backing up internally. This does not include pending compactions (which include an estimate of work outside the task queue) or the hinted handoff queue, which can be in a constant state of activity.

Dropped Messages: All [all-dropped]

Aggregate of all messages that have been dropped server-side due to not having been processed before their respective timeout.

Dropped Messages: Counter Mutations [dropped-counter-mutations]

A counter mutation was seen after the timeout (write_request_timeout_in_ms), so it was thrown away. The client might have timed out before the write met the required consistency level, but it might also have succeeded. Hinted handoff and read repair should resolve inconsistencies, but a repair can ensure it.

Dropped Messages: Mutations [dropped-mutations]

A mutation was seen after the timeout (write_request_timeout_in_ms), so it was thrown away. The client might have timed out before the write met the required consistency level, but it might also have succeeded. Hinted handoff and read repair should resolve inconsistencies, but a repair can ensure it.

Dropped Messages: Reads [dropped-reads]

A local read request was received after the timeout (read_request_timeout_in_ms), so it was thrown away; the request would already have either been completed and sent to the client or returned as a timeout error.

Dropped Messages: Ranged Slice Reads [dropped-ranged-slice-reads]

A local ranged read request was received after the timeout (range_request_timeout_in_ms), so it was thrown away; the request would already have either been completed and sent to the client or returned as a timeout error.

Dropped Messages: Read Repairs [dropped-read-repairs]

A read repair mutation was seen after the timeout (write_request_timeout_in_ms), so it was thrown away. Because the read repair timed out, the node remains in an inconsistent state.

TP: Flushes Pending [pending-flushes]

Number of memtables queued for the flush process. A flush sorts and writes the memtables to disk.

TP: Gossip Tasks Pending [pending-gossip-stage]

Number of gossip messages and acknowledgments queued and waiting to be sent or received.

TP: Internal Responses Pending [pending-internal-response-stage]

Number of pending tasks from internal tasks, such as nodes joining and leaving the cluster.

TP: Manual Repair Tasks Pending [pending-anti-entropy-stage]

Repair tasks pending, such as handling the Merkle tree transfer after the validation compaction.

TP: Cache Cleaning Pending [pending-cache-cleanup-stage]

Tasks pending to clean row caches during a cleanup compaction.

TP: Post Flushes Pending [pending-memtable-post-flush]

Tasks related to the last step in flushing memtables to disk as SSTables. Includes removing unnecessary commitlog files and committing Solr-based secondary indexes.

TP: Migrations Pending [pending-migration-stage]

Number of pending tasks from system methods that modified the schema.

TP: Misc. Tasks Pending [pending-misc-stage]

Number of pending tasks from infrequently run operations, such as taking a snapshot or processing the notification of a completed replication.

TP: Read Repair Tasks Pending [pending-read-repair-stage]

Number of read repair operations in the queue waiting to run.

TP: Request Responses Pending [pending-request-response-stage]

Number of pending callbacks to execute after a task on a remote node completes.

TP: Validation Executor Pending [pending-validation-executor]

Pending tasks to read data from SSTables and generate a Merkle tree for a repair.

TP: Compaction Executor Pending [pending-compaction-executor]

Pending compactions that are known. This metric could deviate from "pending compactions," which includes an estimate of tasks that these pending tasks might create after completion.

Completed tasks to calculate the ranges according to bootstrapping and leaving nodes.

KeyCache Hits [key-cache-hits]

The number of key cache hits per second. This will avoid possible disk seeks when finding a partition in an SSTable. This metric only applies to SSTables created by DSE versions earlier than 6.0.

KeyCache Requests [key-cache-requests]

The number of key cache requests per second. This metric only applies to SSTables created by DSE versions earlier than 6.0.

KeyCache Hit Rate [key-cache-hit-rate]

The percentage of key cache lookups that resulted in a hit. This metric only applies to SSTables created by DSE versions earlier than 6.0.

RowCache Hits [row-cache-hits]

The number of row cache hits per second.

RowCache Requests [row-cache-requests]

The number of row cache requests per second.

RowCache Hit Rate [row-cache-hit-rate]

The percentage of row cache lookups that resulted in a hit.
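
The hit rate metrics are simply the ratio of hits to requests over the sampling interval; for example:

```python
# Illustrative sketch: a cache hit rate is hits divided by requests.
def hit_rate(hits_per_sec, requests_per_sec):
    """Percentage of cache lookups served from the cache."""
    if requests_per_sec == 0:
        return 0.0
    return 100.0 * hits_per_sec / requests_per_sec

# e.g. 450 row cache hits out of 500 row cache requests per second:
print(hit_rate(450, 500))   # 90.0
```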

Native Clients [native-connections]

The number of clients connected using the native protocol.

Read Repairs Attempted [read-repair-attempted]

Number of read requests where the number of nodes queried possibly exceeds the consistency level requested in order to check for a possible digest mismatch.

Asynchronous Read Repairs [read-repaired-background]

Corresponds to a digest mismatch that occurred after a completed read, outside of the client read loop.

Synchronous Read Repairs [read-repaired-blocking]

Corresponds to the number of times there was a digest mismatch within the requested consistency level and a full data read was started.

TBL: Local Writes [cf-write-ops]

Local write requests per second. Local writes update the table's memtable and appends to a commitlog.

TBL: Local Write Latency (percentiles) [cf-local-write-latency]

The min, median, max, 90th, and 99th percentile of the response times to write data to a table's memtable. The elapsed time from when the replica receives the request from a coordinator and returns a response.

TBL: Local Reads [cf-read-ops]

Local read requests per second. Local reads retrieve data from a table's memtable and any necessary SSTables on disk.

TBL: Local Read Latency (percentiles) [cf-local-read-latency]

The min, median, max, 90th, and 99th percentile of the response time to read data from the memtable and sstables for a specific table. The elapsed time from when the replica receives the request from a coordinator and returns a response.

TBL: Live Disk Used [cf-live-disk-used]

Disk space used by live SSTables. Obsolete SSTables are not included.

TBL: Total Disk Used [cf-total-disk-used]

Disk space used by all of a table's SSTables, including obsolete ones waiting to be garbage collected.

TBL: SSTable Count [cf-live-sstables]

Total number of SSTables for a table.

TBL: SSTables per Read (percentiles) [cf-sstables-per-read]

The min, median, max, 90th, and 99th percentile of how many SSTables are accessed during a read. Includes SSTables that undergo Bloom filter checks, even if no data is read from the SSTable.

TBL: Partition Size (percentiles) [cf-partition-size]

The min, median, max, 90th, and 99th percentile of the size (in bytes) of partitions of this table.

TBL: Cell Count (percentiles) [cf-column-count]

The min, median, max, 90th, and 99th percentile of how many cells exist in partitions for this table.
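
These per-table distributions (SSTables per read, partition size, cell count) can be cross-checked directly on a node with nodetool tablehistograms, which prints the same percentile breakdown. A minimal sketch, assuming nodetool is on the PATH and using hypothetical keyspace and table names:

```python
# Minimal sketch: cross-checking table percentiles with nodetool.
# "nodetool tablehistograms <keyspace> <table>" prints percentiles for
# SSTables per read, read/write latency, partition size, and cell count.
# The keyspace and table names here are hypothetical.
import subprocess

result = subprocess.run(
    ["nodetool", "tablehistograms", "my_keyspace", "my_table"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```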

Operating system load average. One minute value parsed from /proc/loadavg on Linux systems.

OS: Disk Usage (%) [os-disk-usage]

Percentage of disk space used by Cassandra at a given time.

OS: Disk Free [os-disk-free]

Free space on a specific disk partition.

OS: Disk Used [os-disk-used]

Disk space used by Cassandra at a given time.

OS: Disk Read Throughput [os-disk-read-throughput]

Average disk throughput for read operations.

OS: Disk Write Throughput [os-disk-write-throughput]

Average disk throughput for write operations.

OS: Disk Throughput [os-disk-throughput]

Average disk throughput for read and write operations.

OS: Disk Read Rate [os-disk-read-rate]

Rate of reads per second to the disk.

OS: Disk Writes Rate [os-disk-write-rate]

Rate of writes per second to the disk.

OS: Disk Latency [os-disk-await]

Average completion time of each request to the disk.

OS: Disk Request Size [os-disk-request-size]

Average size of read requests issued to the disk.

OS: Disk Request Size [os-disk-request-size-kb]

Average size of read requests issued to the disk.

OS: Disk Queue Size [os-disk-queue-size]

Average number of requests queued due to disk latency issues.

OS: Disk Utilization [os-disk-utilization]

CPU time consumed by disk I/O.

OS: Net Received [os-net-received]

Speed of data received from the network.

OS: Net Sent [os-net-sent]

Speed of data sent across the network.

OS: Net Sent [os-net-sent-win]

Speed of data sent across the network.

OS: Net Received [os-net-received-win]

Speed of data received from the network.

Speculative Retries [speculative-retries]

Number of speculative retries for all column families.

TBL: Speculative Retries [cf-speculative-retries]

Number of speculative retries for this table.

Stream Data Out - Total [stream-out-total]

Data streamed out from this node to all other nodes, for all tables.

Stream Data In - Total [stream-in-total]

Data streamed in to this node from all other nodes, for all tables.

Hint Creation Rate [hint-creation-rate]

Rate at which new individual hints are stored on this node, to be replayed to peers.

TBL: Bloom Filter Off Heap [cf-bf-offheap]

Total off heap memory used by bloom filters from all live SSTables in a table.

TBL: Index Summary Off Heap [cf-index-summary-offheap]

Total off heap memory used by the index summary of all live SSTables in a table.

TBL: Compression Metadata Off Heap [cf-compression-data-offheap]

Total off heap memory used by the compression metadata of all live SSTables in a table.

TP: Memtable Reclaims Pending [memtable-reclaim-pending]

Waits for current reads to complete and then frees the memory formerly used by the obsoleted memtables.

TP: Memtable Reclaims Active [memtable-reclaim-active]

Waits for current reads to complete and then frees the memory formerly used by the obsoleted memtables.

TP: Memtable Reclaims Completed [completed-memtable-reclaim]

Waits for current reads to complete and then frees the memory formerly used by the obsoleted memtables.

TBL: Memtable Off Heap [cf-memtable-offheap]

Off heap memory used by a table's current memtable.

TBL: Total Memtable Heap Size [cf-all-memtables-heapsize]

An estimate of the space used in JVM heap memory for all memtables. This includes ones that are currently being flushed and related secondary indexes.

TBL: Total Memtable Live Data Size [cf-all-memtables-livedatasize]

An estimate of the space used for 'live data' (off-heap, excluding overhead) for all memtables. This includes ones that are currently being flushed and related secondary indexes.

TBL: Total Memtable Off-Heap Size [cf-all-memtables-offheapsize]

An estimate of the space used in off-heap memory for all memtables. This includes ones that are currently being flushed and related secondary indexes.

In-Memory Percent Used [in-memory-percent-used]

The percentage of memory allocated for in-memory tables currently in use.

TBL: Partition Count [cf-row-size]

Approximate number of partitions. This may be inexact because duplicates in memtables and SSTables are both counted, and there is a very small error percentage inherent in the HyperLogLog data structure.

Write Request Latency [write-latency-legacy]

Deprecated. The median response time (in milliseconds) of client writes. The time period starts when a node receives a client write request, and ends when the node responds back to the client. Depending on consistency level and replication factor, this may include the network latency from writing to the replicas.

Read Request Latency [read-latency-legacy]

Deprecated. The median response time (in milliseconds) of client reads. The time period starts when a node receives a client read request, and ends when the node responds back to the client. Depending on consistency level and replication factor, this may include the network latency from requesting the data's replicas.

View Write Latency (percentiles) [view-write-histogram]

The min, median, max, 90th, and 99th percentiles of the time from when a base mutation is applied to the memtable until CL.ONE is achieved on the asynchronous write to the table's materialized views. An estimate of the lag between base table mutations and materialized view consistency.

View Write Successes [view-replicas-success]

Number of view mutations sent to replicas that have been acknowledged.

View Write Pending [view-replicas-pending]

Number of view mutations sent to replicas for which the replica's acknowledgement has not yet been received.

TP: Hint Dispatcher Pending [pending-hint-dispatcher]

Pending tasks to send the stored hinted handoffs to a host.

TP: Hint Dispatcher Active [active-hint-dispatcher]

Up to max_hints_delivery_threads tasks, each dispatching all hinted handoffs to a host.

TP: Hint Dispatcher Completed [completed-hint-dispatcher]

Number of tasks to transfer hints to a host that have completed.

TP: Index Management Pending [pending-secondary-index-management]

Any initialization work when a new index instance is created. This may involve costly operations such as (re)building the index.

TP: Index Management Active [active-secondary-index-management]

Any initialization work when a new index instance is created. This may involve costly operations such as (re)building the index.

TP: Index Management Completed [completed-secondary-index-management]

Any initialization work when a new index instance is created. This may involve costly operations such as (re)building the index.

TBL: Tombstones per Read (percentiles) [cf-tombstones-per-read]

The min, median, max, 90th, and 99th percentile of how many tombstones are read during a read.

TBL: Local Write Latency [cf-write-latency-legacy]

Deprecated. Median response time to write data to a table's memtable. The elapsed time from when the replica receives the request from a coordinator and returns a response.

TBL: Local Read Latency [cf-read-latency-legacy]

Deprecated. Median response time to read data from the memtable and SSTables for a specific table. The elapsed time from when the replica receives the request from a coordinator and returns a response.

The min, median, max, 90th, and 99th percentiles of client reads on this table. The time period starts when a node receives a client read request, and ends when the node responds back to the client. Depending on consistency level and replication factor, this may include the network latency from requesting the data's replicas.

TBL: Coordinator Read Requests [cf-coordinator-read-ops]

The number of read requests per second for a particular table on the coordinator nodes. Monitoring the number of requests over a given time period reveals table read workload and usage patterns.

Cells Scanned (percentiles) [cells-scanned-during-read]

The min, median, max, 90th, and 99th percentile of how many cells were scanned during a read.

TBL: Cells Scanned (percentiles) [cf-cells-scanned-during-read]

The min, median, max, 90th, and 99th percentile of how many cells were scanned during a read.

TIER: Total Disk Used [cf-tier-size]

Disk space used by a table's SSTables in this tier.

TIER: sstables [cf-tier-sstables]

Number of SSTables in a tier for a table.

TIER: Max Data Age [cf-tier-max-data-age]

Timestamp in local server time that represents an upper bound on the newest piece of data stored in the SSTable. When a new SSTable is flushed, it is set to the time of creation. When an SSTable is created from compaction, it is set to the maximum across all merged SSTables.

Graph: Adjacency Cache Hits [graph-adjacency-cache-hit]

Number of hits against the adjacency cache for this graph.

Graph: Adjacency Cache Misses [graph-adjacency-cache-miss]

Number of misses against the adjacency cache for this graph.

Graph: Index Cache Hits [graph-index-cache-hit]

Number of hits against the index cache for this graph.

Graph: Index Cache Misses [graph-index-cache-miss]

Number of misses against the index cache for this graph.

Graph: Request Latencies [graph-request-latencies]

The min, median, max, 90th, and 99th percentile of request latencies during the period.

Rate of coordinated reads to a node where that node did not choose itself as a replica for the read request.

Hints on Disk [hints-on-disk]

The number of hints currently stored on disk, to be replayed to peers.

Hint Replay Success Rate [hint-replay-success-rate]

Rate of successful individual hint replays to peers. If one or more individual hints fail to replay in a batch, the successful hints in that batch will be replayed again and double counted in this metric.

Hint Replay Error Rate [hint-replay-error-rate]

Rate of failed individual hint replays. Replay of a single hint can fail more than once if retried.

Hint Replay Timeout Rate [hint-replay-timeout-rate]

Rate of timed out individual hint replays. Replay of a single hint can timeout more than once if retried.

Hint Replay Received Rate [hint-replay-received-rate]

Rate of successful individual hints replayed to this node, from other peers.

Node Messaging Latency [cross-node-latency]

The min, median, max, 90th, and 99th percentiles of the latency of messages between nodes. The time period starts when a node sends a message and ends when the current node receives it.

Datacenter Messaging Latency [cross-dc-latency]

The min, median, max, 90th, and 99th percentiles of the message latency between nodes in the same or different destination datacenter. This metric measures how long it takes a message from a node in the source datacenter to reach a node in the destination datacenter. Selecting a destination node within the source datacenter yields lower latency values.

NodeSync: Data Repaired [nodesync-data-repaired]

Bytes of data that were inconsistent and needed synchronization.

NodeSync: Data Validated [nodesync-data-validated]

Bytes of data checked for consistency.

NodeSync: Repair Data Sent [nodesync-repair-data-sent]

Total bytes of data transferred between all nodes during synchronization.

NodeSync: Objects Repaired [nodesync-objects-repaired]

Number of rows and range tombstones that were inconsistent and needed synchronization.

NodeSync: Objects Validated [nodesync-objects-validated]

Number of rows and range tombstones checked for consistency.

NodeSync: Repair Objects Sent [nodesync-repair-objects-sent]

Total number of rows and range tombstones transferred between all nodes during synchronization.

NodeSync: Processed Pages [nodesync-processed-pages]

Number of pages (internal groupings of data) processed.

NodeSync: Full In Sync Pages [nodesync-full-in-sync-pages]

Number of processed pages that were not in need of synchronization.

NodeSync: Full Repaired Pages [nodesync-full-repaired-pages]

Number of processed pages that were in need of synchronization.

NodeSync: Partial In Sync Pages [nodesync-partial-in-sync-pages]

Number of in-sync pages for which responses were received from only some of the replicas.

NodeSync: Partial Repaired Pages [nodesync-partial-repaired-pages]

Number of repaired pages for which responses were received from only some of the replicas.

NodeSync: Uncompleted Pages [nodesync-uncompleted-pages]

Number of processed pages that did not receive enough responses to perform synchronization.

NodeSync: Failed Pages [nodesync-failed-pages]

Number of processed pages for which an unknown error prevented proper synchronization completion.

NodeSync TBL: Data Repaired [nodesync-tbl-data-repaired]

Bytes of data that were inconsistent and needed synchronization.

NodeSync TBL: Data Validated [nodesync-tbl-data-validated]

Bytes of data checked for consistency.

NodeSync TBL: Repair Data Sent [nodesync-tbl-repair-data-sent]

Total bytes of data transferred between all nodes during synchronization.

NodeSync TBL: Objects Repaired [nodesync-tbl-objects-repaired]

Number of rows and range tombstones that were inconsistent and needed synchronization.

NodeSync TBL: Objects Validated [nodesync-tbl-objects-validated]

Number of rows and range tombstones checked for consistency.

NodeSync TBL: Repair Objects Sent [nodesync-tbl-repair-objects-sent]

Total number of rows and range tombstones transferred between all nodes during synchronization.

A materialized view mutation was seen after the timeout (write_request_timeout_in_ms), so it was thrown away. The client might have timed out before the write met the required consistency level, but it might also have succeeded. Hinted handoff and read repair should resolve inconsistencies, but a repair can ensure it.

Dropped Messages: Lightweight Transactions [dropped-lwt]

A lightweight transaction was seen after the timeout (write_request_timeout_in_ms), so it was thrown away. The client might have timed out before it met the required consistency level, but it might also have succeeded. Hinted handoff and read repair should resolve inconsistencies, but a repair can ensure it.

Dropped Messages: Hinted Handoffs [dropped-hints]

A hinted handoff was seen after the timeout (write_request_timeout_in_ms), so it was thrown away. Repairing the data or using NodeSync should resolve data inconsistencies.

Dropped Messages: Truncate Operations [dropped-truncates]

A truncate operation was seen after the timeout (truncate_request_timeout_in_ms), so it was thrown away.

Dropped Messages: Snapshot Requests [dropped-snapshots]

A snapshot request was seen after the timeout (request_timeout_in_ms), so it was thrown away. The snapshot should be retried.

Dropped Messages: Schema Changes [dropped-schemas]

A schema change was seen after the timeout (request_timeout_in_ms), so it was thrown away. Schema agreement may not have been reached immediately, but this will eventually resolve itself.