Frequently Asked Questions

Every instance that shares the same tag is addressable using the same domain name. This makes discovery of those instances easy, and the domain name for those instances won’t change as you scale the number of instances up and down. See the global DNS example for more information.

While Triton CNS makes it easy to support scaling applications, it does not trigger the automatic scaling of applications.

Triton CNS’s DNS systems are designed for high availability. In Triton Compute Service, the CNS replicas are deployed across multiple availability zones in the US and Europe to ensure name resolution will survive even failures of multiple entire data centers. The operator guide will include details for private cloud users to achieve high availability as well.

Triton Container Name Service is designed for maximum convenience and ease of use. To support that, nodes are considered “healthy” at an infrastructure level. Newly provisioned instances will appear in DNS when the Triton infrastructure marks them as “running” and will be removed from DNS if the infrastructure detects that they have stopped or are restarting.

The following list of conditions defines when an instance, whether a container or a VM, is reported in DNS. The rules are applied strictly in this order, so a rule lower in the list overrides those above it:

By default, instances are enabled for CNS

Instances belonging to users who lack the triton_cns_enabled flag, or who are not approved for provisioning, are disabled

Instances that have the triton.cns.disable tag set to any value other than false are disabled

Instances that are marked as destroyed are disabled

Instances disabled by these steps are listed in neither individual instance records nor service records. Instances that remain enabled are always listed in instance records.

Additional logic applies to decide whether instances are listed in service records:

Instances are by default listed in all services contained in their triton.cns.services tag

Instances with the triton.cns.status metadata set to any value other than up are taken out of services

Instances on a compute node that is not running, has not answered a heartbeat in 60 seconds, or has booted up in the last 120 seconds, are taken out of services
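Taken together, the rules above can be sketched as follows. This is an illustrative model only, not Triton's implementation; the dictionary shapes and key names (running, secs_since_heartbeat, and so on) are hypothetical:

```python
# Illustrative sketch of the CNS listing rules; data shapes are hypothetical.

def listed_in_instance_records(inst, user):
    """True if the instance appears in individual instance records."""
    # Owner must have the triton_cns_enabled flag and be approved to provision.
    if not (user.get("triton_cns_enabled") and user.get("approved_for_provisioning")):
        return False
    # A triton.cns.disable tag set to anything but false disables the instance.
    if str(inst.get("tags", {}).get("triton.cns.disable", "false")).lower() != "false":
        return False
    # Destroyed instances are disabled.
    if inst.get("destroyed", False):
        return False
    return True

def listed_in_service_records(inst, user, compute_node):
    """True if the instance also appears in service records."""
    if not listed_in_instance_records(inst, user):
        return False
    # triton.cns.status metadata set to anything but "up" removes the
    # instance from services, while its instance records remain.
    if inst.get("metadata", {}).get("triton.cns.status", "up") != "up":
        return False
    # The compute node must be running, have heartbeated within 60 seconds,
    # and have been up for at least 120 seconds.
    if (not compute_node["running"]
            or compute_node["secs_since_heartbeat"] > 60
            or compute_node["secs_since_boot"] < 120):
        return False
    return True
```

Note how an instance marked down stays in its instance records while disappearing from service records, matching the distinction drawn above.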

Stopped containers are removed from service address records within seconds, but address records cached downstream may continue to direct traffic to the stopped instance until their TTL expires. Some clients, including some web browsers, automatically retry requests against all of the IPs in an address record, and those clients will not experience any downtime. If a container restarts on Triton, it restarts with the same IP address, further minimizing any downtime.

Consider an application with five instances behind a single DNS name, and poorly behaved clients that do not retry other IPs in a multi-address record. If one of those instances stops, up to 20% of traffic from those clients may be directed to the stopped instance until the cached record's TTL expires.
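The back-of-the-envelope math generalizes: with N instances behind one DNS name and clients that never retry another IP, a single stopped instance receives roughly 1/N of their traffic until cached records expire.

```python
# Fraction of traffic a non-retrying client may send to stopped instances
# while stale address records remain cached downstream.
instances = 5
stopped = 1
stale_traffic = stopped / instances
print(f"{stale_traffic:.0%} of traffic may hit the stopped instance")  # 20%
```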

Triton users can tell Triton CNS to take an instance out of all service records in DNS, making it possible to do planned changes with no loss of traffic. By removing containers from DNS before stopping them, you give them time to complete any requests that are in progress or that arrive during the DNS TTL.

Inside Docker containers and container-native Linux infrastructure containers, use the following command:
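The command itself is missing from this copy of the FAQ. Based on the triton.cns.status metadata key described earlier, a sketch using the SmartOS mdata-put tool would look like this; treat the exact paths as assumptions:

```
# Take this instance out of all CNS service records:
/native/usr/sbin/mdata-put triton.cns.status down   # inside Docker containers
mdata-put triton.cns.status down                    # inside infrastructure containers

# Return it to service once healthy:
mdata-put triton.cns.status up
```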

Code inside an instance can monitor the health of the services that instance provides and remove it from DNS if it detects those services are unhealthy. See marking instances for maintenance. However, Triton CNS cannot detect a failure of the healthcheck itself, and instances remain marked as healthy by default. Please see additional caveats.

Triton CNS will report instances in DNS that are in a “running” state and not explicitly marked for maintenance. Because instances are considered “healthy” by default, they may be discoverable in DNS even if the service(s) they provide are not operating correctly.

Further, Triton CNS is designed to support global DNS services with DNS TTL times that work well for discovery on the larger internet, but are longer than is likely acceptable for connections between components within a data center.

Because of those limitations, DNS-based discovery is not ideal between components of an application, such as between the application and its database, or between a front-end proxy and the application. Alternative discovery mechanisms, such as Consul with ContainerPilot, may be a better fit for connections between application components inside a single data center. Please see our blueprint for automating this discovery method on Triton.

A common usage is to point a CNAME to Triton CNS in your normal DNS provider (see example), but it is also possible to add a DNAME entry that will map all names within the hierarchy of the DNAME to the specified Triton CNS UUID and data center. In other words, a CNAME is for individual host records while a DNAME is for an entire domain name sub-tree.

For example, you could add DNAME records like the following to the DNS zone example.net:
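The records themselves were not reproduced in this copy of the FAQ. The fragment below is a hypothetical sketch of such entries: the account UUID and data center name are placeholders, and the triton.zone name layout is assumed from Triton Compute Service conventions.

```
; Hypothetical fragment of the example.net zone.
; CNAME: one host name -> one CNS service name.
www.example.net.   IN CNAME  myservice.svc.<account-uuid>.us-east-1.triton.zone.
; DNAME: an entire sub-tree -> the CNS hierarchy for one account and data center,
; so myservice.svc.east.example.net resolves via
; myservice.svc.<account-uuid>.us-east-1.triton.zone.
east.example.net.  IN DNAME  <account-uuid>.us-east-1.triton.zone.
```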

Triton CNS is a public nameserver and the DNS names it generates are resolvable on the public internet by any working nameserver with recursive resolution. There is no need to set specific resolv.conf entries or take other steps to take advantage of Triton CNS' name servers.
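For instance, an ordinary resolver query from any internet-connected host should work; the name below is a placeholder following the triton.zone convention:

```
dig +short A myservice.svc.<account-uuid>.us-east-1.triton.zone
```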

You can also disable CNS just for individual instances in your account, even if you have enabled it for all others, by setting the triton.cns.disable tag for that instance (also see marking nodes for maintenance). This tag overrules all other rules in the CNS engine, guaranteeing that instance will not be listed in CNS for any reason.
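Assuming the node-triton CLI, setting that tag could look like this (the instance name is a placeholder):

```
triton instance tag set my-instance triton.cns.disable=true
```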

Yes. Triton CNS is one of a number of Triton components that is IPv6-ready. Triton CNS can serve both A and AAAA records (including AAAA records for statically assigned IPv6 addresses). Joyent is committed to full IPv6 compatibility for all Triton components. In true open source fashion, the work plan for that is published on Github and we welcome pull requests to speed up that process.

Triton CNS is built for easy use in private data centers. Operators can select their preferred base domain names for internal and external use. CNS is also built to integrate with your existing DNS infrastructure for Private Cloud use, including full support for secondary nameservers running ISC BIND and other third-party software.

Existing Triton DataCenter installations can be upgraded to support Triton CNS. Please contact your support representative for details and requirements for this upgrade. Full documentation for deploying and managing Triton CNS in private data centers will be available with the general availability launch.

Triton CNS is designed to fit the majority of use cases with great simplicity at no added cost, but it’s not perfect for every use case. For those situations, Joyent has partnered with Brocade to offer the Brocade Virtual Traffic Manager (formerly Steelapp) in our public cloud. Please contact the Joyent sales team for pricing and support options.

Triton CNS can be used in a similar way to --link in Docker, but can be used to connect any type of Triton infrastructure, including Docker containers, infrastructure containers, and VMs. Additionally, some may prefer Triton CNS’ ability to set service names separately from container names, and appreciate Triton CNS as a universal DNS that connects all instances in their account.

An advantage of Triton CNS over --link is that DNS lookups for a service will always return the current set of instances registered for that service, even if those instances have changed over time. Docker's --link feature, however, does not respond to changes in instances, making it difficult to scale services or replace instances over the life of an application.

The implementation of --link in Docker on Triton differs from stock Docker in that it has always supported connections between containers running on different compute nodes, and that will continue to work without any change in behavior or implementation. Docker Inc. has recently changed the implementation and behavior of --link in the Docker daemon, but those changes do not affect --link on Joyent's Triton. However, --link in Docker (on Triton or elsewhere) cannot link Docker containers to infrastructure containers or VMs, limiting its usefulness in applications that are not fully Dockerized.

Triton CNS is completely optional, however, and can be turned off if it is undesirable for any reason.