Joyent CloudAPI

CloudAPI is one of the public APIs for a Triton cloud: it allows end users of
the cloud to manage their accounts, instances, networks, images, and to
inquire about other relevant details. CloudAPI provides a single view of
docker containers, infrastructure containers and hardware virtual machines
owned by the user.

This is the reference documentation for the CloudAPI that is part of Joyent's
Triton stack. This guide provides descriptions of the APIs available, as well
as supporting information -- such as how to use the software development kits
(SDKs) and command line interfaces (CLIs), and where to find more information.

Triton also provides a Docker API, which Docker clients can use, but which this
documentation does not cover. For more information about Triton, visit
Joyent Triton.

Conventions

Any content formatted as follows is a command-line example that you can run from
a shell:

sdc-listmachines

All other examples and information are formatted like so:

GET /my/machines HTTP/1.1

Introduction to CloudAPI

What is CloudAPI?

CloudAPI is one of the two public APIs you can use to interact with Triton.
Using CloudAPI, you can:

Create and manage containers and hardware virtual machines (collectively known as instances)

Manage your account credentials

Create custom analytics for monitoring your infrastructure

Create and modify virtual private networks for your instances

Manage snapshots of instances

Manage sub-users and their permissions using RBAC

And more! Oh yes!

While CloudAPI provides visibility into Docker containers, the regular
Docker CLI should be used
for provisioning and managing Docker containers; Triton provides an endpoint
that represents the entire datacenter as a single DOCKER_HOST, which Docker
clients can communicate with. Refer to Joyent's
Docker documentation for more information.

How do I access CloudAPI?

If you don't want to write any code, use one of the two CLIs. The CLIs let you
use command-line tools to perform every action available in the SDK and REST
API.

There are two CLIs available for calling CloudAPI: node-triton and node-smartdc.
node-triton is newer and easier to use, while node-smartdc is more stable and
complete, but both CLIs are supported. These docs will provide examples for
both, although node-triton will be omitted where it does not yet support the
relevant functionality.

Getting Started

If you choose to use node-triton or node-smartdc, be aware that they both
require Node.js.

You can get Node.js from nodejs.org as source code, and as
precompiled packages for Windows, Macintosh, Linux and Illumos distributions.
Alternatively, on a *nix system you can usually install Node.js using a
package manager as well (e.g. pkgsrc, brew, apt-get, yum). You'll need at
least Node.js v0.10; npm (Node.js's package manager) comes bundled with it.

Once you've installed Node.js, to install node-triton invoke:

npm install -g triton

or, to install node-smartdc:

npm install -g smartdc

You will probably want to install json as
well. It is a tool that makes it easier to work with JSON-formatted output. You
can install it like this:

npm install -g json

In all cases above, the -g switch installs the tools globally, usually in
/usr/local/bin, so that you can use them easily from the command line. Omit
this switch if you'd rather the tools be installed in your home hierarchy, but
you'll need to set your PATH appropriately.

Generate an SSH key

Both CLIs require an SSH key to communicate with CloudAPI, as well as for
logging in to many instances.

If you haven't already generated an SSH key (required to use both SSH and HTTP
Signing), run the following command:

ssh-keygen -b 2048 -t rsa

This will prompt you for a location to save the key. You should probably just
accept the defaults, as many programs (SSH and the CloudAPI CLIs) will first
look for a file called ~/.ssh/id_rsa. Before running the above command, ensure
that ~/.ssh/id_rsa does not already exist; overwriting it may have unintended
consequences.

Set Up your CLI

You need to set the following environment variables in order to interact with
CloudAPI using either node-triton or node-smartdc:

An example for SDC_URL is https://us-west-1.api.joyent.com. Each
datacenter in a cloud has its own CloudAPI endpoint; a different cloud that uses
Triton would have a different URL.

In this document, we'll use api.example.com as the SDC_URL endpoint; please
replace it with the URL of your datacenter(s). Note that CloudAPI always uses
SSL/TLS, which means that the endpoint URL must begin with https.

You can quickly get your key fingerprint for SDC_KEY_ID by running:

ssh-keygen -l -f ~/.ssh/id_rsa.pub | awk '{print $2}' | tr -d '\n'

where you replace ~/.ssh/id_rsa.pub with the path to the public key you want
to use for signing requests.

You can set environment variables for the following flags so that you don't have
to type them for each request (e.g. in your .bash_profile). All the examples in
this document assume that these variables have been set:
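
For example, a .bash_profile snippet using the node-smartdc variable names
might look like the following (the endpoint and account login shown are
placeholders; node-triton also reads these SDC_ variables):

```shell
# Placeholder values -- replace with your own endpoint and account login.
export SDC_URL=https://api.example.com
export SDC_ACCOUNT=jill
# Fingerprint of the SSH key used to sign requests
export SDC_KEY_ID=$(ssh-keygen -l -f ~/.ssh/id_rsa.pub | awk '{print $2}')
```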

Provision a new instance

To provision a new instance, you first need to get the ids for the image and
package you want to use as the base for your instance.

An image is a snapshot of a filesystem and its software (for some types of
container), or a disk image (for hardware virtual machines). You can get the
list of available images using the triton image list or sdc-listimages
commands; see the ListImages section below for a detailed
explanation of these commands.

A package is a set of dimensions for the new instance, such as RAM and disk
size. You can get the list of available packages using the
triton package list or sdc-listpackages commands; see the
ListPackages section below for a detailed explanation of these
commands.

You can use the --name flag to name your instance; if you do not specify a
name, Triton will generate one for you. --image is the id of the image
you'd like to use as the new instance's base. --package is the id of the
package to use to set instance dimensions. For the triton command, you can
also pass the name of the image or the package instead of their id.
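
Putting this together (the image and package names below are hypothetical;
substitute ids or names from your own datacenter):

triton instance create --name=test-instance base-64 g4-highcpu-1G

or

sdc-createmachine --name=test-instance --image=<image id> --package=<package id>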

When you provision a new instance, the instance will take time to be initialized
and booted; the state attribute will reflect this. Once the state attribute is
"running", you can log in to your new instance (assuming it's a Unix-based
instance) with the following:

ssh-add ~/.ssh/<key file>
ssh -A root@<new instance IP address>

These two commands set up your SSH agent (which has some magical properties,
so you need to handle your SSH keys less often), and log you in as the admin
user on an instance. Note that the admin user has password-less sudo
capabilities, so you may want to set up some less privileged users. The SSH
keys on your account will allow you to log in as root or admin on your new
instance.

An alternative to using SSH directly is:

triton ssh <name of instance>

Now that we've done some basics with an instance, let's introduce a few
concepts:

Images

By default, SmartOS images should be available to you for use. Your Triton
cloud may have other images available as well, such as Linux or Windows images.
The list of available images can be obtained with:

triton image list

or

sdc-listimages

Packages

Packages are the Triton name for the dimensions of an instance (how much CPU will
be available, how much RAM, disk and swap, and so forth). Packages are provided
so that you do not need to select individual settings, such as RAM or disk size.

Managing SSH keys

For instances which don't have a brand of kvm or bhyve (see
triton instance list -o id,brand or sdc-listmachines), you can manage the
SSH keys that allow logging into the instance via CloudAPI. For example, to
rotate keys:

triton key add --name=my-other-rsa-key ~/.ssh/my_other_rsa_key.pub

or

sdc-createkey --name=my-other-rsa-key ~/.ssh/my_other_rsa_key.pub

The --name option sets the name of the key. If you don't provide one,
CloudAPI sets it to the name of the file; in this case my_other_rsa_key.pub.

To use the new key, you will need to update the environment variables:
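
For example, assuming the new key's public half is saved at
~/.ssh/my_other_rsa_key.pub, point SDC_KEY_ID at the new key's fingerprint:

export SDC_KEY_ID=$(ssh-keygen -l -f ~/.ssh/my_other_rsa_key.pub | awk '{print $2}')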

At this point you could delete your other key from the system; see
Cleaning Up for a quick example.

You cannot manage the SSH keys of instances with a brand of kvm or bhyve.
Hardware virtual machines are static, and whatever keys were in your account at
instance creation time are used, provided the OS inside the virtual machine is
a *nix.

Creating Analytics

Now that you have a container up and running, and you've logged in and done
whatever it is you thought was awesome, let's create an instrumentation to
monitor performance. Analytics are one of the more powerful features of
Triton, so for more information, be sure to read
Appendix B: Cloud Analytics.

Where 1 is the id you got back from sdc-createinstrumentation. You should
be able to run this a few times and see the changes. This is just a starting
point, for a full discussion of analytics, be sure to read
Appendix B: Cloud Analytics.

Cleaning up

After going through this Getting Started section, you should now have at least
one SSH key, one instance and one instrumentation. The rest of the commands
assume you have json installed.

Deleting Instrumentations

Before cleaning up your instances, let's get rid of the instrumentation we
created:
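
For example, using node-smartdc and the instrumentation id 1 from earlier:

sdc-deleteinstrumentation 1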

Deleting keys

Finally, you probably have one or two SSH keys uploaded to Triton after going
through the guide, so to delete the one we set up:

triton key delete id_rsa

or

sdc-deletekey id_rsa

RBAC: Users, Roles & Policies

Starting at version 7.2.0, CloudAPI supports Role Based Access Control (RBAC),
which means that accounts can have multiple users and roles
associated with them.

While the behaviour of the main account remains the same,
including the SSH keys associated with it, it's now possible to have
multiple Users subordinate to the main account. Each of these
users has a different set of SSH keys. Both the users and their
associated SSH keys have the same format as the main account object (and the
keys associated with it).

It's worth mentioning that the login for an account's users must be unique
only among the users of that account, not globally. We could have an account
with login "mark", another account "exampleOne" with a user with login "mark",
another account "exampleTwo" with another user with login "mark", and so
forth.

The rules in policies are used for the access control of an account's users.
These rules use Aperture as the
policy language, and are described in detail in the next section.

Our recommendation is to limit each policy's set of rules to a tightly scoped
collection, and then add one or more of these policies to each role. This makes
it easy to reuse existing policies across one or more roles, allowing
fine-grained definition of each role's abilities.

Rules definition for access control

You should refer to the
Aperture documentation for the
complete details about the different possibilities when defining new rules.
This section will only cover a limited set strictly related to CloudAPI's usage.

In the case of CloudAPI, <principal> will always be the user performing the
HTTP request. Likewise, <resource> will always be the URL of that request,
for example /:account/machines/:instance_id.

We add one or more roles to a resource to explicitly define the active roles a
user trying to access a given resource must have. Therefore, we don't need to
specify <principal> in our rules, since it'll always be defined by the
role-tags of the resource the user is trying to get access to. For the same
reason, we don't need to specify <resource> in our rules.

Therefore, CloudAPI's Aperture rules have the format:

CAN <actions> WHEN <conditions>

By default, the access policy will DENY any attempt made by any account user
to access a given resource, unless:

that resource is tagged with a role

that role is active

that role has a policy

that policy contains a rule which explicitly GRANTS access to that resource

For example, a user with an active role read, which includes a policy rule
like CAN listmachines and getmachine will not get access to resources like
/:account/machines or /:account/machines/:instance_id unless these resources
are role-tagged with the role read too.

Additionally, given that the <actions> included in the policy rule are just
listmachines and getmachine, the user will be able to retrieve an instance's
details provided by the GetMachine action, but will not be able
to perform any other instance actions (like StopMachine).
However, if the role has a rule including that <action> (like StopMachine), or
the user has an additional role which includes that rule, then the user can
invoke that action too.

As an aside, the active roles of a user are set by the default_members
attribute in a role. If three different roles contain the "john" user (amongst
others) in their default-members list, then the "john" user will have those
three roles as active roles by default. This can be overridden by passing in
?as-role=<comma-separated list of role names> as part of the URL, or adding a
--role flag when using a node-smartdc command; provided that each role contains
that user in their members list, then those roles are set as the
currently-active roles for a request instead.
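
For example, to make a single request with only the hypothetical role ops
active:

sdc-listmachines --role=ops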

For more details on how Access Control works for both CloudAPI and Manta,
please refer to Role Based Access Control
documentation.

An important note about RBAC and certain reads after writes

CloudAPI uses replication and caching behind the scenes for user, role and
policy data. This implies that API reads after a write on these particular
objects can be up to several seconds out of date.

For example, when a user is created, CloudAPI returns both a user object
(which is up to date), and a location header indicating where that new user
object actually lives. Following that location header may result in a 404 for
a short period.

As another example, if a policy is updated, the API call will return a policy
object (which is up to date), but GETing that URL again may temporarily return
an outdated object with the old details.

For the time being, please keep in mind that user, role and policy
creation/updates/deletion may potentially take several seconds to settle. They
have eventual consistency, not read-after-write.

API Introduction

CloudAPI exposes a REST API over HTTPS. You can work with the REST API by
either calling it directly via tooling you already know about (such as curl, et
al), or by using the CloudAPI CLIs and SDKs from Joyent. The node-triton
CloudAPI SDK & CLI is available as an npm module, which you can install with:

npm install triton

Alternatively, there is the more stable and feature-complete node-smartdc:

npm install smartdc

Although node-triton has fewer features -- for now -- it will continue to
receive the most development effort and future support. node-smartdc is in
maintenance.

The rest of this document will show all APIs in terms of the raw HTTP
specification, the CLI commands, and sometimes the node-smartdc SDK.

Issuing Requests

All HTTP calls to CloudAPI must be made over TLS, and requests must carry at
least two headers (in addition to standard HTTP headers): Authorization and
Api-Version. The details are explained below. In addition to these headers,
any requests requiring content must be sent in an acceptable scheme to
CloudAPI. Details are also below.

Content-Type

For requests requiring content, you can send parameters encoded with
application/json, application/x-www-form-urlencoded or
multipart/form-data. Joyent recommends application/json. The value of the
Accept header determines the encoding of content returned in responses.
CloudAPI supports application/json response encodings only.

For HTTP Signature Authentication, only RSA signing mechanisms are supported,
and your keyId must be equal to the path returned from a
ListKeys API call. For example, if your Triton login is demo,
and you've uploaded an RSA SSH key with the name foo, an Authorization
header would look like:
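
Assuming that login and key name, the header would be along these lines (the
base64-encoded signature is elided):

Authorization: Signature keyId="/demo/keys/foo",algorithm="rsa-sha256" <signature>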

The default value to sign for CloudAPI requests is simply the value of the HTTP
Date header. For more information on the Date header value, see
RFC 2616. All requests to
CloudAPI using the Signature authentication scheme must send a Date header.
Note that clock skew will be enforced to within 300 seconds (positive or
negative) from the value sent.

Full support for the HTTP Signature Authentication scheme is provided in both
CloudAPI SDKs; an additional reference implementation for Node.js is available
in the npm http-signature module, which you can install with:

npm install http-signature

Using cURL with CloudAPI

Since cURL is commonly used to script requests to web
services, here's a simple Bash function you can use to wrap cURL when
communicating with CloudAPI:
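
The function below is a sketch rather than an official wrapper; it assumes
$SDC_URL and $SDC_ACCOUNT are set as described earlier, and that ~/.ssh/id_rsa
is the private half of a key uploaded to Triton under the name id_rsa:

```shell
# Sketch of a cURL wrapper for CloudAPI using HTTP Signature auth.
# Assumes: $SDC_URL and $SDC_ACCOUNT are set, and ~/.ssh/id_rsa is the
# private half of a key uploaded to Triton under the name "id_rsa".
cloudapi() {
  local now signature
  now=$(date -u "+%a, %d %h %Y %H:%M:%S GMT")
  # Sign the Date header value with the RSA key; base64-encode on one line
  signature=$(echo -n "$now" | openssl dgst -sha256 -sign ~/.ssh/id_rsa \
      | openssl enc -e -a | tr -d '\n')
  curl -sS -i \
      -H "Accept: application/json" \
      -H "Accept-Version: ~8" \
      -H "Date: $now" \
      -H "Authorization: Signature keyId=\"/$SDC_ACCOUNT/keys/id_rsa\",algorithm=\"rsa-sha256\" $signature" \
      --url "$SDC_URL$1"
  echo
}
```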

You may need to alter the path to your SSH key in the above function, as well
as the name under which its public key is saved in Triton.

With that function, you could just do:

cloudapi /my/machines

CloudAPI HTTP Responses

CloudAPI returns all response objects as application/json encoded HTTP bodies.
In addition to the JSON body, all responses have the following headers:

| Header | Description |
| ------ | ----------- |
| Date | When the response was sent (RFC 1123 format) |
| Api-Version | The exact version of the CloudAPI server you spoke with |
| Request-Id | A unique id for this request; you should log this |
| Response-Time | How long the server took to process your request (ms) |

If there is content, you can expect:

| Header | Description |
| ------ | ----------- |
| Content-Length | How much content, in bytes |
| Content-Type | Formatting of the response (almost always application/json) |
| Content-MD5 | An MD5 checksum of the response; you should check this |

HTTP Status Codes

Your client should check for each of the following status codes from any API
request:

| Code | Description | Details |
| ---- | ----------- | ------- |
| 400 | Bad Request | Invalid HTTP Request |
| 401 | Unauthorized | Either no Authorization header was sent, or invalid credentials were used |
| 403 | Forbidden | No permissions to the specified resource |
| 404 | Not Found | Resource was not found |
| 405 | Method Not Allowed | Method not supported for the given resource |
| 406 | Not Acceptable | Try sending a different Accept header |
| 409 | Conflict | Most likely invalid or missing parameters |
| 413 | Request Entity Too Large | You sent too much data |
| 415 | Unsupported Media Type | Request was encoded in a format CloudAPI does not understand |
| 420 | Slow Down | You're sending too many requests too quickly |
| 449 | Retry With | Invalid Version header; try with a different Api-Version string |
| 500 | Internal Error | An unexpected error occurred; see returned message for more details |
| 503 | Service Unavailable | Either there's no capacity in this datacenter, or it's in a maintenance window |

Error Responses

In the event of an error, CloudAPI will return a standard JSON error response
object in the body with the scheme:

{
"code": "CODE",
"message": "human readable string"
}

Where the code element is one of:

| Code | Description |
| ---- | ----------- |
| BadRequest | You sent bad HTTP |
| InternalError | Something went wrong in Triton |
| InUseError | The object is in use and cannot be operated on |
| InvalidArgument | You sent bad arguments or a bad value for an argument |
| InvalidCredentials | Authentication failed |
| InvalidHeader | You sent a bad HTTP header |
| InvalidVersion | You sent a bad Api-Version string |
| MissingParameter | You didn't send a required parameter |
| NotAuthorized | You don't have access to the requested resource |
| RequestThrottled | You were throttled |
| RequestTooLarge | You sent too much request data |
| RequestMoved | HTTP Redirect |
| ResourceNotFound | What you asked for wasn't found |
| UnknownError | Something completely unexpected happened! |

Clients are expected to check HTTP status code first, and if it's in the 4xx
range, they can leverage the codes above.

API Versions

A CloudAPI endpoint has two relevant version values: the code version and the
"API version". The former includes the full major.minor.patch version value
of the deployed server and, as of CloudAPI v8.3.0, is available in the "Server"
header of all responses:

Server: cloudapi/8.3.1

The API version is only changed for major versions, e.g. API version "8.0.0"
is used for all 8.x code versions. (Older CloudAPI v7 would bump the API version
at the minor version level.)

All requests to CloudAPI must specify an acceptable API version
range via the 'Accept-Version' (or
for backward compatibility the 'Api-Version') header. For example:
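
For example, to accept any 8.x version of the API:

Accept-Version: ~8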

For new applications using CloudAPI SDKs, it is recommended that one explicitly
accept a particular major version, e.g. Accept-Version: ~8, so that
future CloudAPI backward incompatible changes (always done with a major
version bump) don't break your application.

The triton tool uses
Accept-Version: ~9||~8 by default. Users can restrict the API version via the
triton --accept-version=RANGE ... option. The older sdc-* tools from
node-smartdc use ~8||~7 by default, and users can restrict the API
version via the SDC_API_VERSION=RANGE environment variable or the
--api-version=RANGE option to each command.

9.1.0

Added Clone Image. This can be used to create your own copy
of an image owned by another account that has been shared with you (via
triton image share).

[Backward incompatible] Shared images will no longer be provisioned by default
when an Accept-Version of ~9 or higher is used. You will need to
explicitly add the allow_shared_images param to CreateMachine (which is what
triton create --allow-shared-images does). Older versions of the
CreateMachine interface will allow the provisioning of shared images.

9.0.0

New object-based format for Roles: the "members" and "policies" properties
are now arrays of objects describing their values, rather than arrays of
strings as they were before. The "default_members" array is replaced by the
"default" boolean property on the objects. You can still elect to use the old
interface by using a lower Accept-Version than 9.0.0.

8.11.0

Added a new API method to the plugin interface: modifyProvisionNetworks. This
can be used to modify network arguments sent in the vmapi provision call.

8.10.0

GetImage now includes information about the brand requirements in the
requirements.brand member of the returned JSON.

8.9.0

The plugin interface has changed. preProvision/postProvision hooks have been
replaced with allowProvision/postProvision and an expanded API. This is a
change invisible to CloudAPI REST consumers.

8.8.0

CreateMachine now takes brand from the package's brand parameter if brand is
not specified by the image, and ensures that package and image brand
requirements do not conflict.

Fixed some bugs in the brand handling for packages.

8.7.0

CreateMachine no longer accepts the brand field for specifying the brand
of the instance to create.

8.6.0

CreateMachine now accepts the brand field for specifying the brand of the
instance to create. This is currently only useful when provisioning a
virtualmachine in a datacenter that supports both kvm (default) and bhyve.

Added Deletion Protection. Setting
deletion_protection to true when creating or updating an instance will stop
both DeleteMachine and SDC Docker from destroying the
instance. This remains true until that attribute is set to false.

8.5.0

CreateMachine and AddNic now accept specifying a network
object instead of just a network UUID. The network object
extends functionality by allowing a machine to be provisioned with specific
IPs. It's also now possible to add a NIC to an instance with a specific IP.
It's worth noting that it's still possible to pass in just the UUID of a
network, however you cannot mix the new and old formats in the same request.
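
As a sketch (field names follow the 8.5.0 network object interface; the uuid
and IP below are hypothetical), the networks parameter of CreateMachine could
look like:

{
  "networks": [
    {
      "ipv4_uuid": "a9c130da-e3ba-40e9-8b18-112aba2d3ba7",
      "ipv4_ips": ["10.0.1.50"]
    }
  ]
}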

8.4.0

8.3.0

CreateMachine supports a new affinity field for specifying affinity rules.
Affinity rules (inspired by Docker Swarm affinity filters) allow a more
powerful mechanism for controlling server placement of instances.
This deprecates the locality field for "locality hints" on CreateMachine.
Limitation: Affinity rules currently do not properly consider concurrent
provisions (see TRITON-9).

8.2.1

GetMachine works with machines that do not have a package or a network. Such
machines cannot be created through CloudAPI, so this isn't applicable to most
people unless they have an operator do this for them. ListMachines no longer
breaks for such machines either.

8.2.0

This version adds support for {{shortId}} tags in the 'name' parameter when
creating a machine using CreateMachine. Any
instances of {{shortId}} in the name will be replaced with the shortened
version (first 8 characters) of the machine's id.
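
For example, a name template such as the following (image and package names
are hypothetical):

triton instance create --name=www-{{shortId}} <image> <package>

would yield an instance with a name along the lines of www-83bdd9f2.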

8.1.1

It's now possible to query packages using wildcards. See the
ListPackages section.

8.1.0

This version should have no visible API changes, but updates a lot of the
libraries that CloudAPI depends on, as well as the Node.js major version.
Any visible differences from 8.0.0 are bugs, though it's possible some might
have crept through.

8.0.0

Instance/machine objects (from GetMachine, ListMachines) now have a brand
attribute, which is more granular than the existing type (now deprecated).
They also have a docker boolean attribute, which indicates whether the instance
is a Docker container.

[Backward incompatible] This version also makes a breaking change to the
attribute type on images. In API versions 7 and earlier, <image>.type
was either "virtualmachine" (for zvol images) or "smartmachine" for other
image types. In version 8, <image>.type is the untranslated type value
from the image in the IMGAPI.

[Backward incompatible] ListDatasets and GetDataset have been removed.
Use ListImages and GetImage, respectively.

[Backward incompatible] The long deprecated support for API version 6.5
has been dropped. The default attribute on package objects is deprecated,
since it only had meaning in 6.5.

7.3.0

7.2.0

RBAC v1 has been made available on the CloudAPI interface. Accounts can create
users, rules can be created and combined to make policies, policies and users
can be associated together using roles, and role tags can be applied to
CloudAPI resources.

Firewall rules include information regarding rules being global or not
(global attribute), and will optionally include a human-readable
description attribute for the rules (which can be modified except for global
rules).

7.0.0

HTTP signature auth.

Account

You can obtain your account details and update them through CloudAPI, although
login cannot be changed, and password cannot be retrieved.

Keys

This part of the API is the means by which you operate on your SSH/signing keys.
These keys are needed in order to log in to instances over SSH, as well as to
sign requests to this API (see the HTTP Signature Authentication scheme outlined
in Appendix C for more details).

Currently CloudAPI supports uploads of public keys in the OpenSSH format.

Note that while it's possible to provide a name attribute for an SSH key in
order to use it as a human-friendly alias, this attribute's presence is
optional. When it's not provided, the SSH key fingerprint will be used as the
name instead.

Keys can optionally be submitted along with a hardware attestation certificate
signed by a trusted hardware manufacturer, which will be validated and
processed. Keys generated in hardware devices which require some form of
multi-factor authentication to sign requests (e.g. the device requires a PIN or
Touch input) are marked by this mechanism and may be specially treated by
Triton and Manta as providing a kind of 2-factor authentication (depending on
administrator policy).

For the following routes, the parameter placeholder :key can be replaced with
either the key's name or its fingerprint. It's strongly recommended to use the
fingerprint when possible, since the name attribute does not have any
uniqueness constraints.

ListKeys(GET/:login/keys)

Lists all public keys we have on record for the specified account.
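
The CLI equivalents are:

triton key list

or

sdc-listkeys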

Inputs

None

Returns

Array of key objects. Each key object has the following fields:

| Field | Type | Description |
| ----- | ---- | ----------- |
| name | String | Name for this key |
| fingerprint | String | Key fingerprint |
| key | String | Public key in OpenSSH format |
| attested | Boolean | Indicates if the key has a hardware device attestation |
| multifactor | Array[String] | Lists any additional factors required to use (if attested) |

Possible multifactor values:

| Value | Meaning |
| ----- | ------- |
| pin | Input of a PIN or password is required for key use |
| touch | Touch input (not authenticated -- i.e. not a fingerprint) is required for key use |

UpdateUser(POST/:account/users/:id)

Update a user's modifiable properties.

Note: Password changes are not allowed using this endpoint; there is an
additional endpoint (ChangeUserPassword) for password
changes so it can be selectively allowed/disallowed for users using policies.

Role Tags

SetRoleTags(PUT/:resource_path)

Sets the given role tags to the provided resource path. resource_path
can be the path to any of the CloudAPI resources described in this document:
account, keys, users, roles, policies, user's ssh keys, datacenters, images,
packages, instances, analytics, instrumentations, firewall rules and networks.

For each of these you can set role tags either for an individual resource or
for the whole group; i.e., you can set role tags for all the instances using:
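
A sketch of such a request (the role name test is hypothetical, and the
authentication headers are abbreviated):

PUT /my/machines HTTP/1.1
Host: api.example.com
Accept-Version: ~8
Content-Type: application/json

{"role-tags": ["test"]}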

| Field | Type | Description |
| ----- | ---- | ----------- |
| requirements | Object | Contains a grouping of various minimum requirements for provisioning an instance with this image. For example 'password' indicates that a password must be provided |
| requirements.max_ram | String | Indicates the maximum RAM requirement that must be provided in the VM manifest to provision a VM based on this image |
| requirements.max_memory | String | Indicates the maximum RAM requirement that must be provided in the VM manifest to provision a VM based on this image |
| requirements.min_ram | String | Indicates the minimum RAM requirement that must be provided in the VM manifest to provision a VM based on this image |
| requirements.min_memory | String | Indicates the minimum RAM requirement that must be provided in the VM manifest to provision a VM based on this image |
| requirements.brand | String | Indicates which brand has to be used in the VM manifest to provision a VM based on this image |
| homepage | String | The URL for a web page with more detailed information for this image |
| files | Array | An array of image files that make up each image. Currently only a single file per image is supported |
| files[0].compression | String | The type of file compression used for the image file. One of 'bzip2', 'gzip', 'none' |
| files[0].sha1 | String | SHA-1 hex digest of the file content. Used for corruption checking |
| files[0].size | Number | File size in bytes |
| published_at | ISO8601 date | The time this image has been made publicly available |
| owner | String | The UUID of the user who owns this image |
| public | Boolean | Indicates if this image is publicly available |
| state | String | The current state of the image. One of 'active', 'unactivated', 'disabled', 'creating', 'failed' |
| tags | Object | An object of key/value pairs that allows clients to categorize images by any given criteria |
| eula | String | URL of the End User License Agreement (EULA) for the image |
| acl | Array | Access Control List. An array of account UUIDs given access to a private image. The field is only relevant to private images |
| error | Object | If state=="failed", resulting from CreateImageFromMachine failure, then there may be an error object of the form {"code": "<string error code>", "message": "<string desc>"} |
| error.code | String | A CamelCase string code for this error, e.g. "PrepareImageDidNotRun". See GetImage docs for a table of error.code values |
| error.message | String | A short description of the image creation failure |

Possible error.code values:

| error.code | Details |
| ---------- | ------- |
| PrepareImageDidNotRun | This typically means that the target hardware virtual machine (e.g. Linux) has old guest tools that pre-date the image creation feature. Guest tools can be upgraded with installers at https://download.joyent.com/pub/guest-tools/. Other possibilities are: a boot time greater than the five-minute timeout, or a bug or crash in the image-preparation script |
| VmHasNoOrigin | Origin image data could not be found for the instance. Typically this is for an instance migrated before image creation support was added |
| NotSupported | Indicates an error due to functionality that isn't currently supported. One example is that custom image creation of an instance based on a custom image isn't currently supported |

ExportImage(POST/:login/images/:id?action=export)

Exports an image to the specified Manta path. The caller must own both the
image and the corresponding Manta path prefix in order to export it. Both the
image manifest and the image file will be exported, and their filenames will
default to the following format when the specified Manta path is a directory:

<manta_path>/NAME-VER.imgmanifest
<manta_path>/NAME-VER.zfs.FILE-EXT

Here NAME is the image name, VER is the image version, and FILE-EXT is the file
extension of the image file. For example, exporting a foo-1.0.0 image to
/user/stor/cloudapi would create /user/stor/cloudapi/foo-1.0.0.imgmanifest and
/user/stor/cloudapi/foo-1.0.0.zfs.FILE-EXT.

By contrast, if the basename of the given prefix is not a directory, then
"MANTA_PATH.imgmanifest" and "MANTA_PATH.zfs[.EXT]" are created, which allows
foo-1.0.0 to be exported under a custom name.
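The filename rules above can be sketched as follows. This is an illustrative reconstruction of the documented naming behavior, not the server implementation; the "zfs.gz" extension is an assumed example, since the actual FILE-EXT depends on the image file's compression.

```python
def export_paths(manta_path, name, version, file_ext="zfs.gz", is_directory=True):
    """Derive the manifest and image-file paths ExportImage would use.

    When manta_path is a directory, filenames default to NAME-VER.*;
    otherwise manta_path itself is used as the basename."""
    if is_directory:
        base = "%s/%s-%s" % (manta_path.rstrip("/"), name, version)
    else:
        base = manta_path
    return ("%s.imgmanifest" % base, "%s.%s" % (base, file_ext))
```

For example, `export_paths("/user/stor/cloudapi", "foo", "1.0.0")` yields the `/user/stor/cloudapi/foo-1.0.0.*` pair, while passing `is_directory=False` with a custom path gives the image a custom basename.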

ImportImageFromDatacenter(POST/:login/images?action=import-from-datacenter)

This will copy the image with the given id from the source datacenter into this
datacenter. The copied image will retain all fields (e.g. id, published_at)
of the original image. All incremental images in the origin chain will also be
copied.

CloneImage(POST/:login/images/:id?action=clone)

Creates an independent copy of the source image. The login account must be on
the source image ACL to be able to make an image clone.

The resulting cloned image will have the same properties as the source image,
but the cloned image will have a different id, it will be owned by the login
account and the image will have an empty ACL.

All incremental images in the image origin chain that are not operator images
(i.e. are not owned by admin) will also be cloned, though all cloned incremental
images will have state disabled so that they are not visible in the default
image listings.

Inputs

None.

Returns

A cloned image object. See GetImage docs for the image fields
returned.

Packages

Packages are named collections of resources that are
used to describe the dimensions of either a container or a hardware virtual
machine. These resources include (but are not limited to) RAM size, CPUs, CPU
caps, lightweight threads, disk space, swap size, and logical networks.

ListPackages(GET/:login/packages)

Provides a list of packages available in this datacenter.

Inputs

The following are all optional inputs:

Field

Type

Description

name

String

The "friendly" name for this package

memory

Number

How much memory will be available (in MiB)

disk

Number

How much disk space will be available (in MiB)

swap

Number

How much swap space will be available (in MiB)

lwps

Number

Maximum number of light-weight processes (threads) allowed

vcpus

Number

Number of vCPUs for this package

version

String

The version of this package

group

String

The group this package belongs to

When any values are provided for one or more of the aforementioned inputs, the
retrieved packages will match all of them.

When querying, wildcards (i.e. '*') are allowed for string fields. For example,
to list all packages with a name that starts with "foo", give "foo*" as the
package name.
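Filter inputs become ordinary query parameters on the ListPackages URL. The sketch below, with an assumed helper name, shows how a client might build such a URL; wildcard characters are percent-encoded in transit.

```python
from urllib.parse import urlencode

def list_packages_url(login, **filters):
    """Build a ListPackages URL from optional filter fields.

    Wildcards like 'foo*' are allowed for string fields and are
    percent-encoded by urlencode."""
    qs = urlencode(filters)
    return "/%s/packages%s" % (login, "?" + qs if qs else "")
```

For example, `list_packages_url("my", name="foo*", memory=1024)` produces `/my/packages?name=foo%2A&memory=1024`, and all returned packages must match every filter given.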

Machines

Infrastructure and Docker containers are lightweight, offering the most
performance, observability and operational flexibility. Hardware-virtualized
machines are useful for non-SmartOS or non-Linux stacks.

ListMachines(GET/:login/machines)

Lists all instances we have on record for your account. If you have a large
number of instances, you can filter using the input parameters listed below.
Note that deleted instances are returned only if the instance history has not
been purged from Triton.

You can paginate this API by passing in offset and limit. HTTP responses
will contain the additional headers x-resource-count and x-query-limit. If
x-resource-count is less than x-query-limit, you're done, otherwise call the
API again with offset set to offset + limit to fetch additional instances.
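The pagination loop described above can be sketched generically. Here `fetch` stands in for any CloudAPI list call (a hypothetical callable, not part of the API); stopping when a page comes back shorter than `limit` mirrors comparing x-resource-count against x-query-limit.

```python
def list_all(fetch, limit=1000):
    """Page through a CloudAPI list endpoint using offset/limit.

    `fetch` takes offset and limit keyword arguments and returns one
    page of results as a list."""
    offset, results = 0, []
    while True:
        page = fetch(offset=offset, limit=limit)
        results.extend(page)
        if len(page) < limit:
            # Fewer results than the limit means we reached the end.
            return results
        offset += limit
```

The same pattern applies to ListNetworkIPs and the other list endpoints that honor offset and limit.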

Note that there is a HEAD /:login/machines form of this API, so you can
retrieve the number of instances without retrieving a JSON describing the
instances themselves.

Inputs

Field

Type

Description

type

String

(deprecated) The type of instance (virtualmachine or smartmachine)

brand

String

(v8.0+) The type of instance (e.g. lx)

name

String

Machine name to find (will make your list size 1, or 0 if nothing found)

image

String

Image id; returns instances provisioned with that image

state

String

The current state of the instance (e.g. running)

memory

Number

The current size of the RAM deployed for the instance (in MiB)

tombstone

Boolean

Include destroyed and failed instances available in instance history

limit

Number

Return a max of N instances; default is 1000 (which is also the maximum allowable result set size)

offset

Number

Return instances starting at this offset (use together with limit to page through results)

tag.$name

String

An arbitrary set of tags can be used for querying, assuming they are prefixed with "tag."

docker

Boolean

Whether to only list Docker instances, or only non-Docker instances, if present. Defaults to showing all instances.

credentials

Boolean

Whether to include the generated credentials for instances, if present. Defaults to false

Note that if the special input tags=* is provided, any other input will be
completely ignored and the response will return all instances with any tag.

Be aware that in the case of instances created with vmadm directly (i.e. not
through CloudAPI), ips, networks, primaryIp and package may be in a different
format than expected. The ips array can contain the value "dhcp", not just
IP strings, the networks array can contain null values for networks that
CloudAPI was unable to determine (e.g. as a result of a "dhcp" IP), primaryIp
too can have the value of "dhcp", and the package string can be empty instead of
a UUID. Unless your operators are bypassing CloudAPI and creating instances
directly, it is unlikely you need to concern yourself with this caveat.

Returns

An array of instance objects, which contain:

Field

Type

Description

id

UUID

Unique id for this instance

name

String

The "friendly" name for this instance

type

String

(deprecated) The type of instance (virtualmachine or smartmachine)

brand

String

(v8.0+) The type of instance (e.g. lx)

state

String

The current state of this instance (e.g. running)

image

String

The image id this instance was provisioned with

memory

Number

The amount of RAM this instance has (in MiB)

disk

Number

The amount of disk this instance has (in MiB)

metadata

Object[String => String]

Any additional metadata this instance has

tags

Object[String => String]

Any tags this instance has

created

ISO8601 date

When this instance was created

updated

ISO8601 date

When this instance's details were last updated

docker

Boolean

Whether this instance is a Docker container, if present

ips

Array[String]

The IP addresses this instance has

networks

Array[String]

The network UUIDs of the nics this instance has

primaryIp

String

The IP address of the primary NIC of this instance. The "primary" NIC is used to determine the default gateway for an instance. Commonly it is also on an external network (i.e. accessible on the public internet) and hence usable for SSH'ing into an instance, but not always. (Note: In future Triton versions it will be possible to have multiple IPv4 and IPv6 addresses on a particular NIC, at which point the current definition of primaryIp will be ambiguous and will need to change.)

GetMachine(GET/:login/machines/:id)

Gets the details for an individual instance. Deleted instances are returned
only if the instance history has not been purged from Triton.

Inputs

Field

Type

Description

credentials

Boolean

Whether to include the generated credentials for instances, if present. Defaults to false.

Returns

Field

Type

Description

id

UUID

Unique id for this instance

name

String

The "friendly" name for this instance

type

String

(deprecated) The type of instance (virtualmachine or smartmachine)

brand

String

(v8.0+) The type of instance (e.g. lx)

state

String

The current state of this instance (e.g. running)

image

String

The image id this instance was provisioned with

memory

Number

The amount of RAM this instance has (in MiB)

disk

Number

The amount of disk this instance has (in MiB)

metadata

Object[String => String]

Any additional metadata this instance has

tags

Object[String => String]

Any tags this instance has

created

ISO8601 date

When this instance was created

updated

ISO8601 date

When this instance's details were last updated

docker

Boolean

Whether this instance is a Docker container, if present

ips

Array[String]

The IP addresses this instance has

networks

Array[String]

The network UUIDs of the nics this instance has

primaryIp

String

The IP address of the primary NIC of this instance. The "primary" NIC is used to determine the default gateway for an instance. Commonly it is also on an external network (i.e. accessible on the public internet) and hence usable for SSH'ing into an instance, but not always. (Note: In future Triton versions it will be possible to have multiple IPv4 and IPv6 addresses on a particular NIC, at which point the current definition of primaryIp will be ambiguous and will need to change.)

Be aware that in the case of instances created with vmadm directly (i.e. not
through CloudAPI), ips, networks, primaryIp and package may be in a different
format than expected. The ips array can contain the value "dhcp", not just
IP strings, the networks array can contain null values for networks that
CloudAPI was unable to determine (e.g. as a result of a "dhcp" IP), primaryIp
too can have the value of "dhcp", and the package string can be empty instead of
a UUID. Unless your operators are bypassing CloudAPI and creating instances
directly, it is unlikely you need to concern yourself with this caveat.

CreateMachine(POST/:login/machines)

Allows you to provision an instance.

If you do not specify a name, CloudAPI will generate a random one for you. If
you have enabled Triton CNS on your account, this name will also be used in
DNS to refer to the new instance (and must therefore consist of DNS-safe
characters only).

Your instance will not initially be available for login (Triton must provision
and boot it); you can poll GetMachine for its status. When the
state field is equal to running, you can log in. If the instance is of a
brand other than kvm or bhyve, you can usually use any of the SSH keys
managed under the keys section of CloudAPI to log in as any POSIX user
on the OS. You can add/remove keys over time, and the instance will
automatically work with that set.

If the instance has a brand of kvm or bhyve and runs a UNIX-derived OS (e.g.
Linux), you must have keys uploaded before provisioning; that entire set of
keys will be written out to /root/.ssh/authorized_keys in the new instance,
and you can SSH in using one of those keys. Changing the keys under your
account over time will not affect a running hardware virtual machine in any
way; those keys are statically written at provisioning-time only, and you will
need to manually manage them on the instance itself.

If the image you create an instance from is set to generate passwords for you,
the username/password pairs will be returned in the metadata response as a
nested credentials object.

More generally, the metadata keys can be set either at the time of instance
creation, or after the fact. You must either pass in plain-string values, or a
JSON-encoded string. On metadata retrieval, you will get a JSON object back.
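The string-or-JSON convention above can be handled with a small decoding helper on the client side. The helper and the sample values are illustrative (not part of CloudAPI); values that parse as JSON are decoded, and plain strings pass through unchanged.

```python
import json

def get_metadata_value(metadata, key):
    """Return a metadata value, decoding it if it is a JSON-encoded string."""
    value = metadata[key]
    try:
        return json.loads(value)
    except (ValueError, TypeError):
        # Not valid JSON: treat it as a plain-string value.
        return value

# Illustrative metadata as a client might receive it; the credentials
# value is a JSON-encoded string holding the nested username/password object.
metadata = {
    "credentials": '{"root": "s8v9kuht5e", "admin": "mf4bteqhpy"}',
    "color": "blue",
}
```

With this helper, `get_metadata_value(metadata, "credentials")` yields a dict while `get_metadata_value(metadata, "color")` stays a plain string.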

Networks can be specified using the networks attribute. It is possible to have
an instance attached to an internal network, an external network, or both. If
the networks attribute is absent from the input, the instance will be attached
to one externally-accessible network (i.e. assigned a public IP) and to one of
the account's internal/private networks. If the account owns or has access to
multiple private networks, it is important to include the desired network(s) in
the request payload instead of letting the system assign the network
automatically.

Be aware that CreateMachine does not return IP addresses or networks. To
obtain the IP addresses and networks of a newly-provisioned instance, poll
GetMachine until the instance state is running.

Typically, Triton will allocate the new instance somewhere reasonable within the
cloud. See affinity rules below for options on controlling
server placement of new instances.

When Triton CNS is enabled, the DNS search domain of the new VM will be
automatically set to the suffix of the "instance" record that is created for
that VM. For example, if the full CNS name of the new VM would be
"foo.inst.35ad1ec4-2eab-11e6-ac02-8f56c66976a1.us-west-1.triton.zone", its
automatic DNS search path would include
"inst.35ad1ec4-2eab-11e6-ac02-8f56c66976a1.us-west-1.triton.zone". This can
be changed later within the instance, if desired.

Inputs

Field

Type

Description

name

String

Friendly name for this instance; default is the first 8 characters of the machine id. If the name includes the string {{shortId}}, any occurrences of that token within the name will be replaced by the first 8 characters of the machine id.
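The {{shortId}} templating described above amounts to a simple substitution, sketched here (the function name is illustrative; the machine id shown is just an example UUID):

```python
def resolve_name(name, machine_id):
    """Replace every {{shortId}} token in a requested instance name with
    the first 8 characters of the machine id."""
    return name.replace("{{shortId}}", machine_id[:8])
```

For example, requesting the name `web-{{shortId}}` for a machine with id `35ad1ec4-2eab-11e6-ac02-8f56c66976a1` yields `web-35ad1ec4`.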

Network objects

As of CloudAPI v8.5.0 the networks parameter to CreateMachine takes an array of
network objects to add flexibility and more control. It is also still possible
to pass in an array of network UUID strings instead of the new network object
format.

At a minimum the network object must contain an ipv4_uuid parameter, which is
the UUID of the network you wish the machine to have a NIC on. In addition, you
may pass in an ipv4_ips property, an array made up of a single IP on
that network's subnet.

When specifying an ipv4_ips array, the ipv4_uuid cannot be the UUID of a
network pool, or a public network.
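A network object for the CreateMachine networks array can be assembled as below. The helper name is illustrative, and the UUID and IP shown are placeholders; the constraint that ipv4_ips holds exactly one IP follows the description above.

```python
def network_arg(ipv4_uuid, ipv4_ip=None):
    """Build one entry for the CreateMachine `networks` array (CloudAPI
    v8.5.0+ network object format).

    When ipv4_ip is given, ipv4_uuid must not refer to a network pool
    or a public network (per the documented restriction)."""
    net = {"ipv4_uuid": ipv4_uuid}
    if ipv4_ip is not None:
        # ipv4_ips is an array containing a single IP on the network's subnet.
        net["ipv4_ips"] = [ipv4_ip]
    return net
```

Passing plain network UUID strings in the array remains valid, so clients can mix the two forms during a migration to the object format.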

Affinity rules

As of CloudAPI v8.3.0 an "affinity" field can be specified with CreateMachine.
It is an array of "affinity rules" to specify rules (or hints, "soft rules") for
placement of the new instance.

By default, Triton makes a reasonable attempt to spread all containers (and
non-Docker containers and VMs) owned by a single account across separate
physical servers.

Affinity rules are of one of the following forms:

instance<op><value>
container<op><value>
<tagName><op><value>

<op> is one of:

==: The new instance must be on the same node as the instance(s) identified
by <value>.

!=: The new instance must be on a different node to the instance(s)
identified by <value>.

==~: The new instance should be on the same node as the instance(s)
identified by <value>. I.e. this is a best-effort or "soft" rule.

!=~: The new instance should be on a different node to the instance(s)
identified by <value>. I.e. this is a best-effort or "soft" rule.

<value> is an exact string, simple *-glob, or regular expression to match
against instance names or IDs, or against the named tag's value. Some examples:

# Run on the same node as instance silent_bob.
triton instance create -a instance==silent_bob ...
# Run on a different node to all instances tagged with 'role=database'.
triton instance create -a 'role!=database' ...
# Run on a different node to all instances with names starting with "foo".
triton instance create -a 'instance!=foo*' ...
# Same, using a regular expression.
triton instance create -a 'instance!=/^foo/' ...
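The three <value> forms (exact string, *-glob, /regex/) can be distinguished as sketched below. This is a client-side illustration of the documented matching semantics, with an assumed helper name, not the placement engine itself.

```python
import fnmatch
import re

def value_matches(value, candidate):
    """Check whether an affinity rule <value> matches a candidate
    instance name/id or tag value.

    /.../ delimiters mean a regular expression, a '*' makes it a
    simple glob, and anything else is an exact string match."""
    if len(value) > 1 and value.startswith("/") and value.endswith("/"):
        return re.search(value[1:-1], candidate) is not None
    if "*" in value:
        return fnmatch.fnmatchcase(candidate, value)
    return candidate == value
```

So `instance!=foo*` and `instance!=/^foo/` exclude the same nodes for instances whose names begin with "foo".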

Locality hints

(Deprecated in CloudAPI v8.3.0.)

You may want this instance to be placed on the same server as another
instance you have, or have it placed on an entirely different server from your
existing instances so that you can spread them out. In either case, you can
provide locality hints to CloudAPI.

UUIDs provided should be the ids of instances belonging to you. If there is only
a single UUID entry in an array, you can omit the array and provide the UUID
string directly as the value to a near/far key.

strict defaults to false, meaning that Triton will attempt to meet all the
near and/or far criteria but will still provision the instance when no
server fits all the requirements. If strict is set to true, the creation of
the new instance will fail if the affinity criteria cannot be met.
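A locality hint payload can be assembled as follows. The helper name is illustrative; the near/far/strict key names follow the description above, and near/far accept either a single UUID string or an array of them.

```python
def locality_hint(near=None, far=None, strict=False):
    """Build the (deprecated) `locality` input for CreateMachine.

    near/far may each be a UUID string or a list of UUID strings of
    instances belonging to you; strict=True makes provisioning fail
    when the criteria cannot be met."""
    hint = {"strict": strict}
    if near:
        hint["near"] = near
    if far:
        hint["far"] = far
    return hint
```

With `strict=False` (the default), Triton treats the criteria as best-effort and still provisions when no server satisfies them all.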

User-script

The special value metadata.user-script can be specified to provide a custom
script which will be executed by the instance right after creation, and on every
instance reboot. This script can be specified using the command-line option
--script, which should be an absolute path to the file you want to upload to
the instance.

StopMachine(POST/:login/machines/:id?action=stop)

Allows you to shut down an instance. POST to the instance name with an action
of stop.

CreateMachineSnapshot(POST/:login/machines/:id/snapshots)

Allows you to take a snapshot of an instance. Once you have one or more
snapshots, you can boot the instance from a previous snapshot.

Snapshots are not usable with other instances; they are a point-in-time snapshot
of the current instance. Snapshots can also only be taken of instances that are
not of brand 'kvm' or 'bhyve'.

Since instances use a copy-on-write filesystem, snapshots take up
increasing amounts of space as the filesystem changes over time. There is a
limit to how much space snapshots are allowed to take. Plan your snapshots
accordingly.

AddMachineTags(POST/:login/machines/:id/tags)

Set tags on the given instance. A pre-existing tag with the same name as one
given will be overwritten.

Note: This action is asynchronous. You can poll on ListMachineTags to wait for
the update to be complete (the triton instance tag set -w,--wait option does
this).

Inputs

Tag name/value pairs. Input data is typically sent as an application/json POST
body; however, query params or an application/x-www-form-urlencoded body also
work. Tag values may be strings, numbers or booleans.

Deletion Protection

If you want to decrease the risk of accidental instance destruction, it is
possible to make instance destruction (e.g. through
DeleteMachine) a two-step process.

Instances that have the attribute deletion_protection set to boolean true
cannot be deleted, either through CloudAPI or SDC Docker. In order to delete
such an instance, the above attribute needs to be set to false first.

CLI Commands

Analytics

It is strongly recommended that before you read the API documentation for
Analytics, you first read through
Appendix B: Cloud Analytics. Most supporting
documentation and explanation of types and interactions are described there.

DescribeAnalytics(GET/:login/analytics)

Supports retrieving the "schema" for instrumentations which can be created using
the analytics endpoint.

Inputs

None

Returns

A large object that reflects the analytics available to you.

Each of the items listed below is an object; the keys in each are what can be
used. For example, in 'modules', you'll get a mapping of module names to their
descriptions. Each field also has a type, which determines how to label it, as
well as whether the field is numeric or discrete.

Fields are either numeric or discrete based on the "arity" of their type.

Numeric fields

In predicates, values of numeric fields can be compared using numeric equality
and inequality operators (=, <, >, etc).

In decompositions, a numeric field yields a numeric decomposition (see
"Numeric decompositions" above).

Discrete fields

In predicates, values of discrete fields can only be compared using string
equality.

In decompositions, a discrete field yields a discrete decomposition (see
"Discrete decompositions" above).

Note that some fields look like numbers but are used by software as identifiers,
and so are actually discrete fields. Examples include process identifiers,
which are numbers, but don't generally make sense comparing using inequalities
or decomposing to get a numeric distribution.

Types

Types are used with both metrics and fields for two purposes: to hint to clients
at how to best label values, and to distinguish between numeric and discrete
quantities.

GetInstrumentationHeatmap(GET/:login/analytics/instrumentations/:id/value/heatmap/image)

Retrieves metadata and a base64-encoded PNG image of a particular
instrumentation's heatmap.

Inputs

Field

Type

Description

height

Number

Height of the image in pixels

width

Number

Width of the image in pixels

ymin

Number

Y-Axis value for the bottom of the image (default: 0)

ymax

Number

Y-Axis value for the top of the image (default: auto)

nbuckets

Number

Number of buckets in the vertical dimension

selected

Array

Array of field values to highlight, isolate or exclude

isolate

Boolean

If true, only draw selected values

exclude

Boolean

If true, don't draw selected values at all

hues

Array

Array of colors for highlighting selected field values

decompose_all

Boolean

Highlight all field values (possibly reusing hues)

Returns

Field

Type

Description

bucket_time

Number

Time corresponding to the bucket (Unix seconds)

bucket_ymin

Number

Minimum y-axis value for the bucket

bucket_ymax

Number

Maximum y-axis value for the bucket

present

Object

If the instrumentation defines a discrete decomposition, this property's value is an object whose keys are values of that field and whose values are the number of data points in that bucket for that key

Fabrics

CloudAPI provides a way to create and manipulate a fabric. On the fabric you can
create VLANs, and then under that create layer three networks.

A fabric is the basis for building your own private networks that cannot be
accessed by any other user. It represents the physical infrastructure
that makes up a network; however, you don't have to cable or program it. Every
account has its own unique fabric in every datacenter.

On a fabric, you can create your own VLANs and layer-three IPv4 networks. You
can create any VLAN from 0-4095, and you can create any number of IPv4 networks
on top of the VLANs, with all of the traditional IPv4 private address spaces
-- 10.0.0.0/8, 192.168.0.0/16, and 172.16.0.0/12 -- available for use.

You can create networks on your fabrics to create most network topologies. For
example, you could create a single isolated private network that nothing else
could reach, or you could create a traditional configuration where you have a
database network, a web network, and a load balancer network, each on their own
VLAN.

ListFabricVLANs(GET/:login/fabrics/default/vlans)

Inputs

None

Returns

An array of VLAN objects that exist on the fabric. Each VLAN object has the
following properties:

Example Response

Networks

CloudAPI provides a way to get details on public and customer-specific networks
in a datacenter. This also includes all of the networks available in your
fabric. Your fabric networks are exclusive to your account. All other networks
may be usable by other tenants.

ListNetworks(GET/:login/networks)

List all the networks which can be used by the given account. If a network was
created on a fabric, then additional information will be shown:

Inputs

None

Returns

An array of network objects. Networks are:

Field

Type

Description

id

UUID

Unique id for this network

name

String

The network name

public

Boolean

Whether this is a public or private (rfc1918) network

fabric

Boolean

Whether this network is created on a fabric

description

String

Description of this network (optional)

Each object returned may be an individual network, or a network pool. A network
pool is a logical grouping of one or more networks that share the same
routability characteristics. See AddNic for the behavior of
provisioning with a network pool. This also means that the network id(s)
returned by GetMachine or GetNic will not be in the list of networks returned
by ListNetworks if the instance was originally provisioned using a pool.

If the network is on a fabric, the following additional fields are included:

ListNetworkIPs(GET/:login/networks/:id/ips)

List a network's IPs. On a public network only IPs owned by the user will be
returned. On a private network all IPs that are either reserved or allocated
will be returned.

Note that not every network from ListNetworks will work. Some
UUIDs are for pools which are not supported at this time. However, every
network UUID from GetMachine and GetNic will work, as
they are UUIDs for a specific network.

The reserved field determines if the IP can be used automatically when
provisioning a new instance. If reserved is set to true, then the IP will not
be given out.

The managed field in the IP object tells you if the IP is managed by Triton
itself. Examples of this are the gateway and broadcast IPs on a network.

If the IP is associated with an instance then owner_uuid will be shown as
well, so that on shared private networks it is clear who is using the IP. The
belongs_to_uuid field will tell you which instance owns the IP if any, and
will only be present if that instance is owned by you.

You can paginate this API by passing in offset and limit. HTTP responses
will contain the additional headers x-resource-count and x-query-limit. If
x-resource-count is less than x-query-limit, you're done, otherwise call the
API again with offset set to offset + limit to fetch additional instances.

Inputs

Field

Type

Description

limit

Number

Return a max of N IPs; default is 1000 (which is also the maximum allowable result set size)

GetNetworkIP(GET/:login/networks/:id/ips/:ip_address)

Get a network's IP. On a public network you can only get an IP owned by you. On
a private network you can get an IP owned by any of the network's shared
owners; however, the belongs_to_uuid field will be omitted if you do not own
the instance the IP is associated with.

UpdateNetworkIP(PUT/:login/networks/:id/ips/:ip_address)

Update a network's IP to toggle the reserved flag. If reserved is set to
true the IP will not be given out automatically at provision time. You cannot
update an IP on a public network. On private networks you can update an IP that
is already in use by an instance owned by you, or an IP that is not yet in use
as long as it's within the network's subnet.

AddNic(POST/:login/machines/:id/nics)

Creates a new NIC on an instance. It also returns the Location in the headers
where the new NIC lives in the HTTP API. If a NIC already exists for that
network, a 302 redirect will be returned instead of the object.

As of CloudAPI v8.5.0, AddNic now accepts a network object
for the network parameter. It's still possible to pass in just the network UUID
string instead of using the new network object format.

If the input network uuid is a network pool (a logical grouping of one or more
networks that share the same routability characteristics) then a NIC will be
provisioned from one of the associated networks. The network UUID returned will
always be the UUID of the actual network assigned, not the UUID of the pool.

NICs do not appear on an instance immediately, so the state of the new NIC can
be checked by polling GetNic. While the NIC is provisioning, it will
have a state of 'provisioning'. Once the NIC is active on the instance the
NIC will have a state of 'running'. If the provision fails, the NIC will be
removed and GetNic will start returning 404.

Volumes

_The API endpoints documented in this section are considered experimental. There
is no guarantee on backward compatibility for them. Breaking changes can and
will be made to them. They are available only for CloudAPI services running in
datacenters for which NFS volumes support has been explicitly enabled. By
default, it is disabled._

Volume objects

Volumes are represented as objects that share a common set of properties:

Name

Type

Description

id

String

The UUID of the volume itself

owner_uuid

String

The UUID of the volume's owner. In the case of an NFS shared volume, the owner is the user who created the volume

name

String

The volume's name. It must be unique for a given user. It must match the regular expression /^[a-zA-Z0-9][a-zA-Z0-9_\.\-]+$/. The maximum length for a volume's name is 256 characters. Trying to create or update a volume with a name longer than 256 characters will result in an error

type

String

Identifies the volume's type. There is currently one possible value for this property: tritonnfs. Additional types may be added in the future, and they can all have different sets of type specific properties

created

String

A timestamp that indicates the time at which the volume was created

state

String

creating, ready, deleting, deleted or failed. Indicates in which state the volume currently is. failed volumes are still persisted to Moray for troubleshooting/debugging purposes. See the section Volumes state machine for a diagram and further details about the volumes' state machine

networks

Array of string

A list of network UUIDs that represents the networks on which this volume can be reached

Deletion and usage semantics

A volume is considered to be "in use" if its refs property is a non-empty
array. When a container which mounts shared volumes is created and becomes
"active", it is added as a "reference" to those shared volumes.

A container is considered to be active when it's in any state except failed or
destroyed -- in other words in any state that can transition to running.

For instance, even if a _stopped_ machine is the only remaining machine that
references a given shared volume, it won't be possible to delete that volume
until that machine is _deleted_.

Deleting a shared volume when there's still at least one active machine that
references it will result in an error.

A shared volume can be deleted if its only users are mounting it using something
other than Triton APIs (e.g., by using the mount command manually from within
a VM).

Volumes state machine

ListVolumes (GET /:login/volumes)

_Available only for CloudAPI services running in datacenters for which NFS
volumes support has been explicitly enabled. By default, it is disabled._

By default (if the request doesn't include the state and predicate
parameters), volumes in state === 'failed' are not included in the response.

Input

Param

Type

Description

name

String

Allows filtering volumes by name

predicate

String

URL-encoded JSON string representing an object that can be used to build a LDAP filter. This LDAP filter can search for volumes on arbitrary indexed properties. More details below

size

String

Allows filtering volumes by size, e.g. size=10240

state

String

Allows filtering volumes by state, e.g. state=failed

type

String

Allows filtering volumes by type, e.g. tritonnfs

Searching by name

name is a string containing either a full volume name, or a partial volume
name prefixed and/or suffixed with a * character. For example:

foo

foo*

*foo

*foo*

are all valid name= searches which will match respectively:

the exact name foo

any name that starts with foo such as foobar

any name that ends with foo such as barfoo

any name that contains foo such as barfoobar
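The four name= forms can be checked with a small matcher, sketched here with an assumed helper name to illustrate the documented semantics:

```python
def name_filter_matches(pattern, name):
    """Apply ListVolumes name= semantics: a leading and/or trailing '*'
    makes the match suffix/prefix/substring; otherwise it is exact."""
    leading = pattern.startswith("*")
    trailing = pattern.endswith("*")
    core = pattern.strip("*")
    if leading and trailing:
        return core in name        # *foo*: name contains foo
    if trailing:
        return name.startswith(core)  # foo*: name starts with foo
    if leading:
        return name.endswith(core)    # *foo: name ends with foo
    return name == core               # foo: exact match
```

This reproduces the examples above: `foo*` matches `foobar`, `*foo` matches `barfoo`, and `*foo*` matches `barfoobar`.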

Searching by predicate

The predicate parameter is a JSON string that can be used to build an LDAP
filter to search on the following indexed properties:

name

billing_id

type

state

tags

Important: when using a predicate, the same parameter cannot be found in both
the predicate and the non-predicate query parameters. For example, if a
predicate includes the name field, passing the name= query parameter is an
error.
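Building the predicate query parameter amounts to serializing the predicate object to JSON and URL-encoding it. The sketch below assumes a typical `{"eq": [field, value]}` / `{"and": [...]}` predicate shape, which this document does not spell out, so treat the operator names as illustrative.

```python
import json
from urllib.parse import quote, unquote

def predicate_param(predicate):
    """Serialize a predicate object into the URL-encoded JSON string
    expected by the ListVolumes predicate= query parameter."""
    return quote(json.dumps(predicate))

# Illustrative predicate: failed volumes of type tritonnfs.
p = predicate_param({"and": [{"eq": ["type", "tritonnfs"]},
                             {"eq": ["state", "failed"]}]})
```

Remember the restriction above: a field used inside the predicate (e.g. name) must not also appear as a plain query parameter on the same request.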

CreateVolume (POST /:login/volumes)

_Available only for CloudAPI services running in datacenters for which NFS
volumes support has been explicitly enabled. By default, it is disabled._

Input

Param

Type

Mandatory

Description

name

String

No

The desired name for the volume. If missing, a unique name for the current user will be generated. The maximum length of a volume name is 256 characters; trying to create a volume with a name longer than 256 characters will generate an error

networks

Array

No

A list of UUIDs representing networks on which the volume is reachable. These networks must be fabric networks owned by the user sending the request

Output

A volume object representing the volume being created. When the response is
sent, the volume and its resources have not yet been created, and its state
is creating. Users need to poll the newly-created volume with the
GetVolume API to determine when it's ready to use (its state transitions to
ready).

If the creation process fails, the volume object has its state set to failed.
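
A client-side polling loop for the creating → ready/failed transition could
be sketched like this; get_volume is a hypothetical stand-in for an actual
GetVolume call:

```python
import time

def wait_for_volume(get_volume, volume_id, interval=1.0, timeout=600):
    """Poll GetVolume until the volume leaves the 'creating' state.

    get_volume is a placeholder for a real CloudAPI GetVolume request;
    it must return a volume object with a 'state' property.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        volume = get_volume(volume_id)
        if volume["state"] == "ready":
            return volume
        if volume["state"] == "failed":
            raise RuntimeError("volume %s failed to create" % volume_id)
        time.sleep(interval)
    raise TimeoutError("volume %s still creating after %ss" % (volume_id, timeout))
```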

GetVolume (GET /:login/volumes/:id)

_Available only for CloudAPI services running in datacenters for which NFS
volumes support has been explicitly enabled. By default, it is disabled._

GetVolume can be used to get data from an already-created volume, or to
determine when a volume being created is ready to be used.

Polling instance audit

There are some cases where polling for an instance state change will not work,
either because there is no state change for the requested action (e.g.
"rename"), or because the state change is short-lived, making the transition
easy to miss (e.g. "reboot").

In such cases, consider polling the instance's historical list of actions,
available through Machine Audit: wait for the desired action to appear in
that list, and check there whether it succeeded. Taking the reboot example
from the previous section, this is how we could check for a reboot:
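
As a sketch, assuming MachineAudit returns a list of action records (most
recent first) with action and success fields (the field names and the "yes"
value are assumptions about the audit record format), a client could check
whether the latest reboot succeeded like this:

```python
def reboot_succeeded(audit_records):
    """Scan a machine's audit list (most recent first) for the latest
    reboot and report whether it succeeded.

    The 'action' and 'success' field names, and the 'yes' value, are
    assumptions about the audit record format.
    """
    for record in audit_records:
        if record.get("action") == "reboot":
            return record.get("success") == "yes"
    return False  # no reboot found in the audit list yet
```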

Appendix B: Cloud Analytics

Cloud Analytics (CA) provides deep observability for systems and applications in
a Triton cloud. The CA service enables you to dynamically instrument
systems in the cloud to collect performance data that can be visualized in
real-time (through the portal), or collected using the API and analyzed later by
custom tools. This data can be collected and saved indefinitely for capacity
planning and other historical analysis.

Building blocks: metrics, instrumentations, and fields

A metric is any quantity that can be instrumented using CA. For example:

Disk I/O operations

Kernel thread executions

TCP connections established

MySQL queries

HTTP server operations

System load average

Each metric also defines which fields are available when data is collected.
These fields can be used to filter or decompose data. For example, the Disk I/O
operations metric provides the fields "hostname" (for the current server's
hostname) and "disk" (for the name of the disk actually performing an
operation), which allows users to filter out data from a physical server or
break out the number of operations by disk.

When we create an instrumentation, the system dynamically instruments the
relevant software and starts gathering data. The data is made available
immediately in real-time. To get the data for a particular point in time, you
retrieve the value of the instrumentation for that time:
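
For example, an instrumentation's raw value can be retrieved with a request
along these lines (the path follows CloudAPI's analytics resources; treat
the exact form as illustrative):

```
GET /my/analytics/instrumentations/:id/value/raw HTTP/1.1
```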

To summarize: metrics define what data the system is capable of reporting.
Fields enhance the raw numbers with additional metadata about each event that
can be used for filtering and decomposition. Instrumentations specify which
metrics to actually collect, what additional information to collect from each
metric, and how to store that data. When you want to retrieve that data, you
query the service for the value of the instrumentation.

Values and visualizations

We showed above how fields can be used to decompose results. Let's look at that
in more detail. We'll continue using the "FS Operations" metric with the
field "optype".

Scalar values

Suppose we create an instrumentation with no filter and no decomposition. Then
the value of the instrumentation for a particular time interval might look
something like this:

    {
      "start_time": 1308789361,
      "duration": 1,
      "value": 573,
      ...
    }

In this case, start_time denotes the start of the time interval in Unix time,
duration denotes the length of the interval in seconds, and value denotes
the actual value. This means that 573 FS operations completed on all
systems for a user in the cloud between times 1308789361 and 1308789362.

Discrete decompositions

Now suppose we create a new instrumentation with a decomposition by execname.
Then the raw value might look something like this:
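
An illustrative raw value for such an instrumentation might be the following
(the program names and per-program counts are invented for this example, and
chosen to sum to the 573 total shown above):

```json
{
  "start_time": 1308789361,
  "duration": 1,
  "value": {
    "ls": 5,
    "cat": 10,
    "mysqld": 558
  }
}
```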

We call the decomposition by execname a discrete decomposition because the
possible values of execname ("ls", "cat", ...) are not numbers.

Numeric decompositions

It's useful to decompose some metrics by numeric fields. For example, you might
want to view FS operations decomposed by latency (how long the operation
took). The result is a statistical distribution, which groups nearby
latencies into buckets and shows the number of disk I/O operations that fell
into each bucket. The result looks like this:
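
An illustrative value might look like the following; per the value-arity
description below, a numeric decomposition is encoded as an array of arrays
mapping bucket ranges to scalars. The latency buckets (in nanoseconds) and
counts here are invented for the example:

```json
{
  "start_time": 1308789361,
  "duration": 1,
  "value": [
    [[10000, 19999], 12],
    [[20000, 29999], 13],
    [[30000, 39999], 22]
  ]
}
```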

As we will see, this data allows clients to visualize the distribution of I/O
latency, and then highlight individual programs in the distribution (or whatever
field you broke it down along).

Value-related properties

We can now explain several of the instrumentation properties shown previously:

value-dimension: the number of dimensions in returned values, which is
the number of decompositions specified in the instrumentation, plus 1.
Instrumentations with no decompositions have dimension 1 (scalar values).
Instrumentations with a single discrete or numeric decomposition have
dimension 2 (vector values). Instrumentations with both a discrete and a
numeric decomposition have dimension 3 (vector of vectors).

value-arity: describes the format of individual values

scalar: the value is a scalar value (a number)

discrete-decomposition: the value is an object mapping discrete keys to
scalars

numeric-decomposition: the value is either an object (really an array of
arrays) mapping buckets (numeric ranges) to scalars, or an object mapping
discrete keys to such an object. That is, a numeric decomposition is one
which contains at the leaf a distribution of numbers.

The arity serves as a hint to visualization clients: scalars are typically
rendered as line or bar graphs, discrete decompositions are rendered as stacked
or separate line or bar graphs, and numeric decompositions are rendered as
heatmaps.

Predicate Syntax

Predicates allow you to filter out data points based on the fields of a
metric. For example, instead of looking at FS operations for your whole
cloud, you may only care about operations with latency over 100ms, or on a
particular instance.

Predicates are represented as JSON objects using a LISP-like syntax. The
primary goal of the predicate syntax is to be easy to construct and parse
automatically, making it easier for people to build tools that work with
predicates.
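
For example (the operator names follow the LISP-like syntax described above;
latency is expressed in nanoseconds here, so 100000000 corresponds to 100ms):

```json
{
  "and": [
    { "eq": ["execname", "mysqld"] },
    { "gt": ["latency", 100000000] },
    { "or": [
      { "eq": ["hostname", "host1"] },
      { "eq": ["hostname", "host2"] },
      { "eq": ["hostname", "host3"] }
    ]}
  ]
}
```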

This predicate could be used with the "logical filesystem operations" metric to
identify file operations performed by MySQL on instances "host1", "host2", or
"host3" that took longer than 100ms.

Heatmaps

Up to this point we have been showing raw values, which are JSON
representations of the data exactly as gathered by Cloud Analytics. However, the
service may provide other representations of the same data. For numeric
decompositions, the service provides several heatmap resources that generate
heatmaps, like this one:

Like raw values, heatmap values are returned using JSON, but instead of
specifying a value property, they specify an image property whose contents
are a base64-encoded PNG image. For details, see the API reference. Using the
API, it's possible to specify the size of the image, the colors used, which
values of the discrete decomposition to select, and many other properties
controlling the final result.

Heatmaps also provide a resource for getting the details of a particular heatmap
bucket, which looks like this:
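
A sketch of such a response is shown below. The property names here are
hypothetical (consult the CA API reference for the actual format); the
latency range is expressed in nanoseconds, and the counts match the example
discussed next:

```json
{
  "bucket_time": 1308865185,
  "bucket_ymin": 10000,
  "bucket_ymax": 20000,
  "present": {
    "ls": 5,
    "cat": 57
  }
}
```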

This example indicates the following about the particular heatmap bucket we
clicked on:

the time represented by the bucket is 1308865185

the bucket covers a latency range between 10 and 20 microseconds

at that time and latency range, program ls completed 5 operations and
program cat completed 57 operations.

This level of detail is critical for understanding hot spots or other patterns
in the heatmap.

Data granularity and data retention

By default, CA collects and saves data each second for ten minutes. So if you
create an instrumentation for FS operations, the service will save the
per-second number of FS operations going back for the last ten minutes. These
parameters are configurable using the following instrumentation properties:

granularity: how frequently to aggregate data, in seconds. The default is
one second. For example, a value of 300 means to aggregate every five
minutes' worth of data into a single data point. The smaller this value, the
more space the raw data takes up. granularity cannot be changed after an
instrumentation is created.

retention-time: how long, in seconds, to keep each data point. The default
is 600 seconds (ten minutes). The higher this value, the more space the raw
data takes up. retention-time can be changed after an instrumentation is
created.

These values affect the space used by the instrumentation's data. For example,
all things being equal, the following all store the same amount of data:

10 minutes' worth of per-second data (600 data points)

50 minutes' worth of per-5-second data

25 days' worth of per-hour data

600 days' worth of per-day data
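
A quick check confirms that each configuration above stores the same 600
data points (retention-time divided by granularity, both in seconds):

```python
# (retention-time seconds, granularity seconds) for each configuration
configs = [
    (10 * 60, 1),             # 10 minutes of per-second data
    (50 * 60, 5),             # 50 minutes of per-5-second data
    (25 * 24 * 3600, 3600),   # 25 days of per-hour data
    (600 * 24 * 3600, 86400), # 600 days of per-day data
]

data_points = [retention // granularity for retention, granularity in configs]
print(data_points)  # → [600, 600, 600, 600]
```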

The system imposes limits on these properties so that each instrumentation's
data cannot consume too much space. The limits are expressed internally as a
number of data points, so you can adjust granularity and retention-time to match
your needs. Typically, you'll be interested in either per-second data for live
performance analysis, or an array of different granularities and retention-times
for historical usage patterns.

Data persistence

By default, data collected by the CA service is only cached in memory, not
persisted to disk. As a result, transient failures of the underlying CA service
instances can result in loss of the collected data. For live performance
analysis, this is likely not an issue, since the likelihood of a crash is low
and the data can probably be collected again. For historical data being kept
for days, weeks, or even months, it's necessary to persist data to disk. This
can be specified by setting the persist-data instrumentation property to
"true". In that case, CA will ensure that data is persisted at approximately
the granularity interval of the instrumentation, but no more frequently than
every few minutes. (For that reason, there's little value in persisting an
instrumentation whose retention time is only a few minutes.)

Transformations

Transformations are post-processing functions that can be applied to data when
it's retrieved. You do not need to specify transformations when you create an
instrumentation; you need only specify them when you retrieve the value.
Transformations map values of a discrete decomposition to something else. For
example, a metric that reports HTTP operations decomposed by IP address supports
a transformation that performs a reverse-DNS lookup on each IP address so that
you can view the results by hostname instead. Another transformation maps IP
addresses to geolocation data for displaying incoming requests on a world map.

Each supported transformation has a name, like "reversedns". When a
transformation is requested for a value, the returned value includes a
transformations object with keys corresponding to each transformation (e.g.,
"reversedns"). Each of these is an object mapping keys of the discrete
decomposition to transformed values. For example:
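
An illustrative transformed value might look like this (the IP address and
hostname are placeholders, and the exact shape of the surrounding value
object is abbreviated):

```json
{
  "value": {
    "192.0.2.10": 57
  },
  "transformations": {
    "reversedns": {
      "192.0.2.10": ["www.example.com"]
    }
  }
}
```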

Transformations are always performed asynchronously and the results cached
internally for future requests. So the first time you request a transformation
like "reversedns", you may see no values transformed at all. As you retrieve
the value again, the system will have completed the reverse-DNS lookup for
addresses in the data and they will be included in the returned value.

Appendix C: HTTP Signature Authentication

In addition to HTTP Basic Authentication, CloudAPI supports a mechanism for
authenticating HTTP requests by signing them with your SSH private key.
Specific examples of using this mechanism with Triton are given here; refer
to the HTTP Signature Authentication specification by Joyent, Inc. for
complete details.

A node.js library for HTTP Signature is available with:

npm install http-signature@0.9.11

CloudAPI Specific Parameters

The Signature authentication scheme is based on the model that the client must
authenticate itself with a digital signature produced by the private key
associated with an SSH key under your account (see /my/keys above). Currently
only RSA signatures are supported. You generate a signature by signing the
value of the HTTP Date header.

As an example, assuming that you have associated an RSA SSH key with your
account, called 'rsa-1', the following request is what you would send for a
ListMachines request:
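
An illustrative signed request follows; the Host header value and the demo
account login are placeholders:

```
GET /my/machines HTTP/1.1
Host: api.example.com
Date: Sat, 11 Jun 2011 23:56:29 GMT
Authorization: Signature keyId="/demo/keys/rsa-1",algorithm="rsa-sha256" <Base64-encoded signature>
```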

Where the signature is the output of
Base64(rsa(sha256(Sat, 11 Jun 2011 23:56:29 GMT))). Note that the
keyId parameter cannot use the my shortcut, as in the HTTP resource
paths. This is because CloudAPI must look up your account to resolve the
key, as with Basic authentication. In short, you MUST use the login name
associated with your account to specify the keyId.