Aggregated Logging in Tectonic

Tectonic does not preconfigure any particular aggregated logging stack. Instead, it provides several example logging configurations that can be customized for site requirements. The recommended setup uses Fluentd to collect logs on each node and forward them to a log storage backend. The Tectonic examples use Elasticsearch for log storage, but Elasticsearch can be replaced with any destination Fluentd supports through an output plugin. For a list of Fluentd plugins, see http://www.fluentd.org/plugins/all.
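To illustrate the shape of such a pipeline, the fragment below is a hedged sketch of a Fluentd output section using the fluent-plugin-elasticsearch plugin; the `elasticsearch.logging.svc` host name is an assumption and should be replaced with your own Elasticsearch endpoint:

```
<match **>
  @type elasticsearch
  # Hypothetical in-cluster service name; substitute your backend's address.
  host elasticsearch.logging.svc
  port 9200
  logstash_format true
  # Buffer records to disk and retry, so delivery survives brief outages.
  <buffer>
    @type file
    path /var/log/fluentd-buffers/kubernetes.buffer
    flush_interval 5s
    retry_forever true
  </buffer>
</match>
```

Swapping the storage backend means replacing only this `<match>` block with the corresponding output plugin's configuration.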

If you want to run these examples locally, all of the files mentioned are available in the Tectonic Docs repo.

Elasticsearch

This setup does not cover configuring or deploying Elasticsearch or Kibana. If you are looking for a starting point, https://github.com/pires/kubernetes-elasticsearch-cluster is a good reference, with examples of deploying an Elasticsearch cluster on Kubernetes following best practices.

If you want to customize your Elasticsearch output configuration or look at examples using different storage destinations, see the customizing log destination document.

Deploying Fluentd

First, create the logging namespace that will hold all of the logging resources:

$ kubectl create ns logging

Then set up the service account and RBAC roles that Fluentd needs to query Kubernetes for metadata about the container logs it is watching:
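The original manifests are not reproduced in this chunk. The sketch below shows the typical shape, assuming the logging namespace created above and a stock Fluentd DaemonSet image; the resource names, labels, and image are illustrative, not the exact Tectonic examples. The ClusterRole grants read access to pods and namespaces, which the Kubernetes metadata filter uses to enrich log records:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
- apiGroups: [""]
  resources: ["pods", "namespaces"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccountName: fluentd
      containers:
      - name: fluentd
        # Example image; pick a tag matching your Fluentd and backend versions.
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        volumeMounts:
        # Container logs on each node live under these host paths.
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/lib/docker/containers
```

Running Fluentd as a DaemonSet ensures exactly one collector pod per node, each reading that node's container log files from the host filesystem.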

Once all the pods are ready, everything should be functioning. To double-check, run kubectl logs on one of the Fluentd pods and confirm there are no errors and that Fluentd is sending logs to your configured destination.

(Optional) Deploy Prometheus to monitor Fluentd

Tectonic includes the prometheus-operator in installations by default. This operator can be used to create additional instances of Prometheus to monitor your own applications.

If you wish to enable Prometheus monitoring of your Fluentd pods, run the following commands:
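The original command listing is not included in this chunk. As a hedged sketch, monitoring Fluentd with the prometheus-operator typically means exposing Fluentd's metrics endpoint (provided by fluent-plugin-prometheus, which listens on port 24231 by default) through a Service, then pointing a ServiceMonitor at it. The names and labels below are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fluentd-metrics
  namespace: logging
  labels:
    app: fluentd
spec:
  selector:
    app: fluentd
  ports:
  - name: metrics
    # Default port of the fluent-plugin-prometheus HTTP endpoint.
    port: 24231
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: fluentd
  namespace: logging
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  endpoints:
  - port: metrics
```

A Prometheus instance whose serviceMonitorSelector matches the `app: fluentd` label will then begin scraping the Fluentd pods.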