Pivotal Cloud Ops Tools

For Cloud Foundry, Pivotal Cloud Ops uses several monitoring tools. The Datadog Config repository provides an example of how the Pivotal Cloud Ops team uses a customized Datadog dashboard to monitor the health of its open-source Cloud Foundry deployments.

Key Inputs for Platform Monitoring

BOSH VM and PCF Component Health Metrics

Most monitoring service tiles for PCF come packaged with the Firehose nozzle necessary to extract the BOSH and CF metrics used for platform monitoring. Nozzles are programs that consume data from the Loggregator Firehose. They can be configured to select, buffer, and transform data, and to forward it to other apps and services.

The nozzles gather the component logs and metrics streaming from the Loggregator Firehose endpoint. For more information about the Firehose, see Loggregator Architecture.
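To illustrate the select/buffer/transform/forward pattern a nozzle implements, here is a simplified, self-contained sketch. The `Envelope` type and all names below are assumptions for illustration only; a real nozzle would use the Loggregator client libraries rather than this stand-in:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

# Hypothetical stand-in for a Loggregator Firehose envelope.
@dataclass
class Envelope:
    origin: str      # emitting component, e.g. "rep"
    name: str        # metric name, e.g. "CapacityRemainingMemory"
    value: float

def run_nozzle(stream: Iterable[Envelope],
               select: Callable[[Envelope], bool],
               transform: Callable[[Envelope], Envelope],
               forward: Callable[[List[Envelope]], None],
               batch_size: int = 2) -> None:
    """Select, buffer, and transform envelopes, then forward them in batches."""
    buffer: List[Envelope] = []
    for env in stream:
        if not select(env):          # select: drop envelopes we don't care about
            continue
        buffer.append(transform(env))  # transform: e.g. unit conversion, tagging
        if len(buffer) >= batch_size:  # buffer: forward in batches, not one-by-one
            forward(buffer)
            buffer = []
    if buffer:                       # flush any remaining envelopes
        forward(buffer)

# Usage: keep only "rep" metrics, convert MB to GB, collect the output.
sent: List[Envelope] = []
run_nozzle(
    [Envelope("rep", "CapacityRemainingMemory", 4096.0),
     Envelope("router", "latency", 12.0)],
    select=lambda e: e.origin == "rep",
    transform=lambda e: Envelope(e.origin, e.name, e.value / 1024),
    forward=sent.extend,
)
```

In a real deployment, `forward` would ship the batch to a monitoring service such as Datadog instead of appending to a list.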

As of PCF v2.0, both BOSH VM Health metrics and Cloud Foundry component metrics stream through the Firehose by default.

PCF component metrics originate from the Metron agents on their
source components, then travel through Dopplers to the Traffic
Controller.

The Traffic Controller aggregates both metrics and log messages
system-wide from all Dopplers, and emits them from its Firehose
endpoint.

The following topics list high-signal-value metrics and capacity scaling indicators in a PCF deployment: Key Performance Indicators and Key Capacity Scaling Indicators.

Continuous Functional Smoke Tests

PCF includes smoke tests, which are functional unit and integration tests on all major system components. By default, whenever an operator upgrades to a new version of PAS, these smoke tests run as a post-deploy errand.

Pivotal recommends additional higher-resolution monitoring by the execution
of continuous smoke tests, or Service Level Indicator tests, that measure user-defined features and check them against expected levels.

See the Metrics topic in the Concourse documentation for how to set up Concourse to generate custom component metrics.

Warning and Critical Thresholds

To properly configure your monitoring dashboard and alerts, you must establish what thresholds should drive alerting and red/yellow/green dashboard behavior.

Some key metrics have more fixed thresholds, with similar threshold numbers recommended across different foundations and use cases. These metrics tend to revolve around the health and performance of key components that can impact the performance of the entire system.

Other metrics of operational value are more dynamic in nature. This means that you must establish a baseline and yellow/red thresholds suitable for your system and its use cases. You can establish initial baselines by watching key metric values over time and noting a starting threshold that divides acceptable from unacceptable system performance and health.
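One common way to derive starting yellow/red thresholds from observed history is to use high percentiles of the metric's past values. The percentile choices below are an assumption for illustration, not a Pivotal-prescribed formula:

```python
def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile of observed samples (0 < p <= 100)."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

def initial_thresholds(samples: list) -> dict:
    """Derive starting yellow/red thresholds from a metric's history."""
    return {
        "yellow": percentile(samples, 95),  # warn above typical peaks
        "red": percentile(samples, 99),     # alert near the worst observed values
    }

# Example: 100 latency samples of 1..100 ms.
thresholds = initial_thresholds([float(v) for v in range(1, 101)])
```

These values are only a baseline; as the next section notes, they should be revisited as the system and its usage evolve.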

Continuous Evolution

Effective platform monitoring requires continuous evolution.

After you establish initial baselines, Pivotal recommends that you continue to refine your metrics and tests to maintain the appropriate balance between early detection and reducing unnecessary alert fatigue. The dynamic measures recommended in Key Performance Indicators and Key Capacity Scaling Indicators should be revisited on occasion to ensure they are still appropriate to the current system configuration and its usage patterns.