Install Tectonic on Azure with Terraform

Following this guide will deploy a Tectonic cluster within your Azure account.

Generally, the Azure platform templates adhere to the standards defined by the project conventions and generic platform requirements. This document describes the implementation details specific to the Azure platform.

Alpha: These modules and instructions are currently considered alpha. See the platform life cycle for more details.

Prerequisites

Go

Terraform

Tectonic Installer includes and requires a specific version of Terraform. This is included in the Tectonic Installer tarball. See the Tectonic Installer release notes for information about which Terraform versions are compatible.

DNS

Several methods of providing DNS for your Tectonic installation are supported:

Azure-provided DNS

This is Azure's default DNS implementation. For more information, see the Azure DNS overview.

To use Azure-provided DNS, tectonic_base_domain must be set to "" (empty string).
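For example, in a terraform.tfvars file (a sketch; the variable name is the one defined by the installer's Azure variable set):

```hcl
// Use Azure-provided DNS by leaving the base domain empty.
tectonic_base_domain = ""
```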

DNS delegation and custom zones via Azure DNS

To configure a custom domain and the associated records in an Azure DNS zone (e.g., ${cluster_name}.foo.bar):

The custom domain must be specified using tectonic_base_domain.

The domain must be publicly discoverable. The Tectonic installer uses the created record to access the cluster and complete configuration. See the Microsoft Azure documentation for instructions on how to delegate a domain to Azure DNS.

An Azure DNS zone which matches tectonic_base_domain must be created prior to running the installer. The full resource ID of the DNS zone must then be referenced in tectonic_azure_external_dns_zone_id.
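As a sketch, the zone can be created and its resource ID looked up with the Azure CLI (the resource group and domain names below are placeholders):

```shell
# Create the DNS zone matching tectonic_base_domain (placeholder names).
az network dns zone create --resource-group example-rg --name foo.bar

# Retrieve the full resource ID for tectonic_azure_external_dns_zone_id.
az network dns zone show --resource-group example-rg --name foo.bar \
  --query id --output tsv
```

The returned ID has the form /subscriptions/&lt;subscription-id&gt;/resourceGroups/example-rg/providers/Microsoft.Network/dnszones/foo.bar, and is the value to set as tectonic_azure_external_dns_zone_id.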

Tectonic Account

Register for a Tectonic Account, which is free for up to 10 nodes. You must provide the cluster license and pull secret during installation.

Currently, the load balancer is configured with a public IP address. Future work is planned to convert this to an internal load balancer.

Master nodes

Master node VMs are managed by the templates in modules/azure/master-as.

Node VMs are created as an Availability Set (stand-alone instances, deployed across multiple fault domains)
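An Availability Set that spreads stand-alone instances across fault domains can be declared roughly as follows (a sketch only; the names, variables, and domain counts are illustrative, not the module's actual values):

```hcl
# Hypothetical Availability Set for the master nodes.
resource "azurerm_availability_set" "masters" {
  name                = "${var.cluster_name}-masters"
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  # Spread VMs across separate racks/hosts so a single hardware
  # failure or platform update cannot take down all masters at once.
  platform_fault_domain_count  = 2
  platform_update_domain_count = 5

  managed = true
}
```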

Master nodes are fronted by one load balancer for the API and one for the Ingress controller.

The API LB is configured with SourceIP session stickiness, to ensure that TCP (including SSH) sessions from the same client land reliably on the same master node. This allows for provisioning the assets and starting bootkube reliably via SSH.
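The SourceIP persistence setting corresponds to the load_distribution attribute on an Azure load balancer rule. A minimal sketch, assuming hypothetical resource names and an HTTPS API port:

```hcl
# Hypothetical API load-balancing rule with SourceIP session stickiness.
resource "azurerm_lb_rule" "api" {
  name                           = "api"
  resource_group_name            = "${var.resource_group_name}"
  loadbalancer_id                = "${azurerm_lb.api.id}"
  protocol                       = "Tcp"
  frontend_port                  = 443
  backend_port                   = 443
  frontend_ip_configuration_name = "api"

  # "SourceIP": all connections from a given client IP are sent
  # to the same backend master node.
  load_distribution = "SourceIP"
}
```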

Worker nodes

Worker node VMs are managed by the templates in modules/azure/worker-as.

Node VMs are created as an Availability Set (stand-alone instances, deployed across multiple fault domains)

Worker nodes are not fronted by an LB and don't have public IP addresses. They can be accessed through SSH from any of the master nodes.
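For example, a worker can be reached by hopping through a master node (addresses are placeholders; "core" is the default user on Container Linux nodes):

```shell
# Forward the local SSH agent to the master, then hop to the worker.
ssh -A core@<master-public-ip>
ssh core@<worker-private-ip>

# Or in a single step with ProxyJump:
ssh -J core@<master-public-ip> core@<worker-private-ip>
```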