Custom DataDog Metrics with Kubernetes Daemonsets

Our backend is running in AWS, and we package, deploy, and manage our microservices in AWS via Docker and Kubernetes.

Without too much trouble, we set up basic metrics for Kubernetes in DataDog by creating a DaemonSet as described in DataDog's documentation. Soon we were able to create a DataDog dashboard with information on CPU and memory usage of pods, disk space, network traffic, and so on.

Custom Metrics — Not So Easy

We wanted custom metrics to track our own product specific info. Things like: how many robots are active in our system; how many are licensed; how many websocket messages are we processing from robots each minute; how many anomalies have been detected across all robots via our predictive maintenance feature.

The documentation on custom metrics says that you only need three lines of code. Those three lines do work, as long as your setup is very simple. But if you are running the DataDog agent as a DaemonSet in a Kubernetes cluster on AWS, how to configure the StatsD client, and which host and port to send the metrics to, is not well documented.
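The missing piece is plumbing: the agent listens for DogStatsD traffic on UDP port 8125, so the DaemonSet has to expose that port on each node, and each application pod needs the IP of the node it is running on so it can reach the local agent. A sketch of the two relevant fragments (the STATSD_HOST variable name is our own choice, not anything DataDog requires):

```yaml
# Excerpt from the agent DaemonSet container spec:
# expose the DogStatsD port on every node.
ports:
  - containerPort: 8125
    hostPort: 8125
    protocol: UDP
---
# Excerpt from an application pod spec: inject the node's IP
# via the Downward API so the app can find its local agent.
env:
  - name: STATSD_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
```

You will also need to tell the agent to accept traffic from other containers (for the DataDog agent image this is the DD_DOGSTATSD_NON_LOCAL_TRAFFIC environment variable).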

We finally got a custom metric working from our web app to DataDog, and this post is to help others needing to do the same. We write our microservices primarily in Scala, but the major steps apply to any language.

Making Custom Metrics Work

Start by adding the StatsD client to your build. We use sbt, so we add this to our build.sbt:

libraryDependencies += "com.datadoghq" % "java-dogstatsd-client" % "2.3"

Next, we created a helper trait so we can add a custom metric to any class simply by mixing in the trait:
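A minimal sketch of such a trait, using the java-dogstatsd-client API (the trait and method names, the "kuka" prefix, and the STATSD_HOST environment variable are our assumptions; adapt them to your own setup):

```scala
import com.timgroup.statsd.NonBlockingStatsDClient

trait DataDogMetrics {
  // STATSD_HOST is an assumed env var carrying the node's IP,
  // injected via the Kubernetes Downward API; 8125 is the
  // default DogStatsD port exposed by the agent DaemonSet.
  protected def statsdHost: String =
    sys.env.getOrElse("STATSD_HOST", "localhost")

  // "kuka" is the metric prefix: a metric named "awesome_data.added"
  // will show up in DataDog as kuka.awesome_data.added.
  private lazy val statsd =
    new NonBlockingStatsDClient("kuka", statsdHost, 8125)

  def increment(metric: String): Unit =
    statsd.incrementCounter(metric)

  def gauge(metric: String, value: Double): Unit =
    statsd.recordGaugeValue(metric, value)
}
```

Any class that mixes in the trait can then report metrics with a single call, e.g. `class RobotService extends DataDogMetrics { increment("awesome_data.added") }`.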

Now, after deploying all this code, go to DataDog and you will see a custom metric in your dropdown with the prefix you chose (we use "kuka"). For us it shows up in the long list as kuka.awesome_data.added, and we can choose which type of visualization we want for the data.