AWS: Installation requirements

The following tools and access rights are required to use Tectonic Installer with an Amazon Web Services (AWS) account.

Tectonic License and Pull Secret

A public AWS Route 53 Hosted Zone identifier. Public Route 53 DNS resolution is required for controller-worker TLS communication. Choose a domain or subdomain and configure Route 53 as its name service. Tectonic creates 2 subdomains in this Hosted Zone during provisioning.
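If you need to look up the Hosted Zone identifier, the AWS CLI can list zones by name; the domain example.com below is a placeholder:

```shell
# List hosted zones matching a domain. The "Id" field contains the
# Hosted Zone identifier (after the /hostedzone/ prefix).
aws route53 list-hosted-zones-by-name --dns-name example.com
```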

Add your user's ARN, found on the IAM user detail page, to the trusted entities for the tectonic-installer role. To do so, click the Trust Relationships tab, then the Edit Trust Relationship button to open the trusted entities JSON editor, and add a new statement for your user's ARN.

The example Trust Relationship below has been edited to add the ARN of a user named tectonic:
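A sketch of the edited trust policy; the account ID 123456789012 and the user name tectonic are placeholders for your own values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/tectonic"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```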

Use the SECRET_ACCESS_KEY, ACCESS_KEY_ID, and SESSION_TOKEN to authenticate in the installer.
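One way to obtain these temporary credentials, assuming the AWS CLI is configured and the role ARN below is replaced with your own:

```shell
# Assume the tectonic-installer role. The response includes
# AccessKeyId, SecretAccessKey, and SessionToken under "Credentials".
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/tectonic-installer \
  --role-session-name tectonic-install

# Export the returned values so subsequent tools pick them up.
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...
```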

If building the Tectonic cluster using the CLI directly, you can configure Terraform to perform the STS assume-role operation automatically on every run. Terraform retrieves and uses the temporary credentials each time, so you don't have to refresh them manually when they expire.

To enable Terraform to perform the assume-role operation, edit the file platforms/aws/main.tf and change the provider "aws" { ... } block to include the following configuration:
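As a sketch, with a placeholder account ID in the role ARN, the block might become:

```hcl
provider "aws" {
  # ... keep the existing settings (region, etc.) ...

  # Perform sts:AssumeRole on every Terraform run; temporary
  # credentials are retrieved and refreshed automatically.
  assume_role {
    role_arn     = "arn:aws:iam::123456789012:role/tectonic-installer"
    session_name = "tectonic-installer"
  }
}
```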

You can then run Terraform using an unprivileged user that only has permissions to assume the tectonic-installer role.

SSH key

The final step of the Tectonic install requires an SSH key and access to standard utilities like ssh and scp. Setting up a new key on AWS should take less than 5 minutes.

Tectonic uses AWS S3 to store all credentials, using server-side AES encryption for storage, and TLS encryption for upload/download. Any pod run in the system can query the AWS metadata, get node AWS credentials, and pull down cluster credentials from AWS S3. CoreOS plans to address this issue in a later release.

First, create a key.

Open a new terminal. Check if you already have a key by running ls ~/.ssh/. If you've previously created a key, you may see a file like id_rsa.pub. If you'd like to use this key, skip to upload the key to AWS below.

Type ssh-keygen --help to validate that the OpenSSH utilities are installed. If you cannot find the binaries on your system, consult your distribution's documentation.
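To create a new key pair, a typical invocation looks like the following; the comment string is a placeholder:

```shell
# Generate a 4096-bit RSA key pair. You will be prompted for a
# file location and an optional passphrase.
ssh-keygen -t rsa -b 4096 -C "you@example.com"
```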

Follow the prompts on screen to finish creating your keypair. If you chose the default file name and location, your key should be in $HOME/.ssh/id_rsa.pub. Otherwise, the key-pair is in your current directory.

Next, upload the key to AWS.

Sign in using your IAM user or temporary credentials.

Go to Services > Compute > EC2.

Use the region pulldown menu to select the same region chosen for the Tectonic installation.

On the left navigation under Network & Security, click Key Pairs.

Click Import Key Pair. Follow the displayed instructions to import your public key file, whose name should end in .pub.
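Alternatively, the same import can be done with the AWS CLI; the key name tectonic and the key path below are placeholders:

```shell
# Import an existing public key as an EC2 Key Pair.
# (On AWS CLI v1, use file:// instead of fileb://.)
aws ec2 import-key-pair \
  --key-name tectonic \
  --public-key-material fileb://$HOME/.ssh/id_rsa.pub
```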

Access

To access the cluster, two ELB-backed services are exposed. Both are accessible over the standard TLS port (443).

Install Tectonic

With temporary credentials and an SSH key, you'll be ready to install Tectonic. Head over to the install doc to get started.

Subnet/VPC requirements

The following table summarizes the high-level networking features required to install Tectonic into new or existing VPCs, with or without public access to cluster services.

|              | Public-facing cluster            | Internal cluster                        |
|--------------|----------------------------------|-----------------------------------------|
| New VPC      | Installer creates public subnets | Select 'internal' in Tectonic Installer |
| Existing VPC | 2 subnets, connected to an IGW   | Create 2 subnets, establish a VPN       |

Configuring a public cluster

Subnets for Controllers must have an attached and routed Internet Gateway.

Subnets for Workers must be able to route requests to the Controller subnets and must have an associated route table that specifies a default gateway.

The route tables should be explicitly attached to their subnets.

Configuring an internal cluster

Subnets for Controllers and Workers must be able to route requests to each other and must have an associated route table that specifies a default gateway.

The route tables should be explicitly attached to their subnets.

You must have VPN access to the subnet, as it does not offer an inbound connection to the Internet.

Tectonic installer must be able to:

resolve DNS records in the Route 53 hosted zone used by the installer

establish a TCP connection with the Tectonic Ingress ELB
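Both conditions can be checked from the installer host; the hostnames below are placeholders for records in your hosted zone:

```shell
# Verify that records in the Route 53 hosted zone resolve.
dig +short tectonic.example.com

# Verify a TCP connection to the Tectonic Ingress ELB on port 443.
nc -zv tectonic.example.com 443
```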

If you are experiencing issues with an install involving VPC-internal components, you may find the troubleshooting section useful.

Using an existing VPC

By default, Tectonic Installer creates a new AWS Virtual Private Cloud (VPC) for each cluster. Advanced users can choose to use an existing VPC instead. An existing VPC must have an Internet Gateway. Tectonic Installer will not create an Internet Gateway in an existing VPC.

An existing VPC for a public cluster must have a public subnet for controllers, and a private subnet for workers. An existing VPC for an internal cluster must have 2 private subnets, one each for controllers and workers.

Public subnets have a default route to the Internet Gateway and should auto-assign public IP addresses. Private subnets have a default route to a default gateway, such as a NAT Gateway or a Virtual Private Gateway.

The DHCP Options Set attached to the VPC must use an AWS private domain name. In the us-east-1 region, the AWS private domain name is ec2.internal; other regions use region.compute.internal.

When using an existing VPC, tag its subnets with the kubernetes.io/cluster/my-cluster-name = shared tag. The shared value marks resources shared between multiple clusters, which should not be destroyed when any individual cluster is destroyed. If this tag is not specified, AWS ELB integration with Tectonic may not be able to use the VPC subnets.
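For example, with the AWS CLI; the subnet ID and cluster name below are placeholders:

```shell
# Tag a subnet as shared with the cluster named my-cluster-name.
aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster-name,Value=shared
```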