Introducing Prometheus support for Datadog Agent 6

If you’ve configured your application to expose metrics to a Prometheus backend, you can now send that data to Datadog. Starting with version 6.1.0 of the Datadog Agent, you can monitor Prometheus metrics alongside all the other data collected by Datadog’s built-in integrations and custom instrumentation libraries. In this post, we’ll take a look at how the new Prometheus integration works.

What is Prometheus?

Prometheus is an open source monitoring system for timeseries metric data. Prometheus provides a dimensional data model—metrics are enriched with metadata known as labels, which are key-value pairs that add dimensions such as hostname, service, or data center to your timeseries. Labels are equivalent to Datadog tags and allow you to categorize, filter, and aggregate your metrics by any attribute that is important to you. Applications expose monitoring data to Prometheus using a text-based exposition format that encodes the name of the metric, the metric value corresponding to a given timestamp, and any associated labels. The Prometheus exposition format forms the basis of the new OpenMetrics standard.

Below is an example of a Prometheus metric counting HTTP requests to a server, with labels denoting the HTTP request type, response code, and environment:
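For instance, a counter in the exposition format might look like the following (the metric name, label values, and sample value here are illustrative):

```
# HELP http_requests_total Total number of HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200",env="production"} 1027
```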

Why monitor Prometheus metrics with Datadog?

Datadog strives to make instrumentation as easy as possible. As members of the Cloud Native Computing Foundation, we are dedicated to open source and committed to providing you with the ability to seamlessly monitor all your systems with our integrations and support for common data formats. Whether your applications send metrics using the StatsD protocol, via JMX, or using one of our open and extensible instrumentation libraries, you can start monitoring your data with minimal changes to your infrastructure and services. We have added the Prometheus exposition format to our supported data types to make it just as easy to monitor your applications that are already set up to report metrics to Prometheus or to another OpenMetrics backend.

Datadog + Prometheus

Datadog collects Prometheus metrics by running a customizable Agent check that scrapes your endpoints for any exposed metrics.

Configuring the Datadog Agent

To start collecting these metrics, you will need to edit the Agent’s conf.yaml.example configuration file for the Prometheus check. You can find it in your host’s /etc/datadog-agent/conf.d/prometheus.d/ directory.
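As a minimal sketch, the configuration might look like the following (the endpoint URL and namespace are placeholders for your own values; see the example config file for the full set of options):

```yaml
init_config:

instances:
    # Placeholder endpoint; point this at your application's metrics route.
  - prometheus_url: http://localhost:9090/metrics
    # Prepended to every metric collected by this check.
    namespace: myapp
    metrics:
      # Map a Prometheus metric to a custom Datadog metric name...
      - prometheus_target_interval_length_seconds: target_interval_length
      # ...or use wildcards to collect everything matching a pattern.
      - prometheus_http_*
```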

This basic setup includes the Prometheus endpoint, a namespace that will be prepended to all collected metrics, and the metrics you want the Agent to scrape. You can use * wildcards to pull in all metrics that match a given pattern, and you can also map existing Prometheus metrics to custom metric names. In the example above, the Prometheus metric prometheus_target_interval_length_seconds will appear in Datadog as myapp.target_interval_length.

Save the file as conf.yaml, then restart the Agent. If configured properly, you should see the prometheus check under Running Checks when you run the Agent’s status command:
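For example, on a Linux host that might look like this (the restart command depends on your service manager):

```
sudo systemctl restart datadog-agent
sudo datadog-agent status
```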

For a comprehensive list of options, take a look at this example config file for the generic Prometheus check.

Visualizing your data

Once the Agent is configured to collect Prometheus metrics, you can use them to build comprehensive Datadog graphs, dashboards, and alerts.

Our new integration works with the Prometheus data model by mapping labels to Datadog tags automatically for all collected metrics. With this integration, you can filter and aggregate your data using any dimension you care about.
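To make the label-to-tag mapping concrete, here is an illustrative, simplified parser for a single exposition-format sample. This is a sketch, not the Agent's actual implementation, which handles the full format (escaping, timestamps, histogram buckets, and so on):

```python
import re

def parse_prometheus_sample(line):
    """Parse one Prometheus exposition-format sample into its metric
    name, Datadog-style tags, and value. Illustrative sketch only."""
    match = re.match(r'([a-zA-Z_:][a-zA-Z0-9_:]*)\{([^}]*)\}\s+(\S+)', line)
    name, raw_labels, value = match.groups()
    # Each Prometheus label key-value pair becomes a key:value tag.
    tags = [
        '{}:{}'.format(key, val.strip('"'))
        for key, val in (pair.split('=', 1) for pair in raw_labels.split(','))
    ]
    return name, tags, float(value)

sample = 'http_requests_total{method="post",code="200",env="production"} 1027'
name, tags, value = parse_prometheus_sample(sample)
```

Each resulting tag (such as `method:post`) can then be used to filter and aggregate the metric in Datadog, just like a tag from any other integration.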

In addition to pulling in application metrics, Datadog provides a service check that monitors the health of your Prometheus endpoint.

Going further: Building custom checks

If you need more control than the Agent's generic Prometheus check provides, you can create a custom check. Custom checks are useful when you need to include additional pre-processing logic around the metrics you collect. Check out the advanced usage guide for more information on how to create a custom Prometheus check.

Monitoring Kubernetes clusters and containerized services

Prometheus is often deployed alongside Kubernetes and other cloud-native technologies. Building on our extensive support for Kubernetes monitoring, Datadog integrates seamlessly with Kubernetes components that expose metrics via the Prometheus exposition format, such as kube-dns. And Datadog’s Autodiscovery feature, which continually monitors dynamic containers and services, allows you to automate the collection of Prometheus metrics in your cluster. Autodiscovery tracks containerized services by applying configuration templates that are attached to those services via Kubernetes pod annotations. Similar to the configuration of the Agent check above, you can include the Prometheus URL, namespace, and metrics in an annotation to enable the Datadog Agent to start collecting metrics from that service whenever it starts up in your cluster:
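As a sketch, such a pod annotation might look like the following (the container name, port, and namespace are placeholders; the `%%host%%` template variable is resolved by Autodiscovery at runtime):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    # The "myapp" segment must match the container name in the pod spec.
    ad.datadoghq.com/myapp.check_names: '["prometheus"]'
    ad.datadoghq.com/myapp.init_configs: '[{}]'
    ad.datadoghq.com/myapp.instances: |
      [{
        "prometheus_url": "http://%%host%%:9090/metrics",
        "namespace": "myapp",
        "metrics": ["*"]
      }]
spec:
  containers:
    - name: myapp
      image: myapp:latest
```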

With customizable Prometheus checks and Datadog’s Autodiscovery feature, you can proactively monitor all the containerized services in your cluster.

A word about Prometheus and custom metrics

Note that Prometheus metrics monitored by the Agent fall under the custom metric category and are therefore subject to certain limits. Read up on custom metric allowances to learn more.

Get started with Prometheus metrics in Datadog

Datadog makes it easy to gather all the monitoring data you need by integrating with a wide variety of platforms, applications, programming languages, and data formats. Setting up Datadog’s new integration with Prometheus allows you to monitor your instrumented applications alongside all the infrastructure components, applications, and services that you have already integrated with Datadog.