Joyent CloudAPI

CloudAPI is one of the public APIs for a Triton cloud: it allows end users of
the cloud to manage their accounts, instances, networks, images, and to
inquire about other relevant details. CloudAPI provides a single view of
docker containers, infrastructure containers and hardware virtual machines
owned by the user.

This is the reference documentation for the CloudAPI that is part of Joyent's
Triton stack. This guide provides descriptions of the APIs available, as well
as supporting information -- such as how to use the software developer kits
(SDK), command line interface (CLI), and where to find more information.

Triton also provides a Docker API, which Docker clients can use, but which this
documentation does not cover. For more information about Triton, visit
Joyent Triton.

Conventions

Any content formatted as follows is a command-line example that you can run from
a shell:

sdc-listmachines

All other examples and information are formatted like so:

GET /my/machines HTTP/1.1

Introduction to CloudAPI

What is CloudAPI?

CloudAPI is one of the two public APIs you can use to interact with Triton.
Using CloudAPI, you can:

Create and manage containers and hardware virtual machines (collectively known as instances)

Manage your account credentials

Create custom analytics for monitoring your infrastructure

Create and modify virtual private networks for your instances

Manage snapshots of instances

Manage sub-users and their permissions using RBAC

And more! Oh yes!

While CloudAPI provides visibility into Docker containers, the regular
Docker CLI should be used
for provisioning and managing Docker containers; Triton provides an endpoint
that represents the entire datacenter as a single DOCKER_HOST, which Docker
clients can communicate with. Refer to Joyent's
Docker documentation for more information.

How do I access CloudAPI?

If you don't want to write any code, use one of the two CLIs. The CLIs let you
use command-line tools to perform every action available in the SDK and REST
API.

There are two CLIs available for calling CloudAPI: node-triton and node-smartdc.
node-triton is newer and easier to use, while node-smartdc is more stable and
complete, but both CLIs are supported. These docs will provide examples for
both, although node-triton will be omitted where it does not yet support that
functionality.

Getting Started

If you choose to use node-triton or node-smartdc, be aware that they both
require Node.js.

You can get Node.js from nodejs.org as source code, and as
precompiled packages for Windows, Macintosh, Linux and Illumos distributions.
Alternatively, on a *nix system, you can usually install Node.js using a
package manager (e.g. pkgsrc, brew, apt-get, yum). The version of
Node.js should be at least v0.10; npm (Node.js's package manager) comes bundled
with it.

Once you've installed Node.js, to install node-triton invoke:

npm install -g triton

or, to install node-smartdc:

npm install -g smartdc

You will probably want to install json as
well. It is a tool that makes it easier to work with JSON-formatted output. You
can install it like this:

npm install -g json

In all cases above, the -g switch installs the tools globally, usually in
/usr/local/bin, so that you can use them easily from the command line. Omit
this switch if you'd rather the tools be installed in your home hierarchy, but
you'll need to set your PATH appropriately.

Generate an SSH key

Both CLIs require an SSH key to communicate with CloudAPI; an SSH key is also
used to log in to many instances.

If you haven't already generated an SSH key (required to use both SSH and HTTP
Signing), run the following command:

ssh-keygen -b 2048 -t rsa

This will prompt you for a location to save the key. You should probably just
accept the defaults, as many programs (SSH and the CloudAPI CLIs) will first look
for a file called ~/.ssh/id_rsa. Before running the above command, ensure that
~/.ssh/id_rsa does not already exist; overwriting an existing key may have
unintended consequences.

Set Up your CLI

You need to set the following environment variables in order to interact with
CloudAPI using either node-triton or node-smartdc: SDC_URL, SDC_ACCOUNT and
SDC_KEY_ID.

An example for SDC_URL is https://us-west-1.api.joyentcloud.com. Each
datacenter in a cloud has its own CloudAPI endpoint; a different cloud that uses
Triton would have a different URL.

In this document, we'll use api.example.com as the SDC_URL endpoint; please
replace it with the URL of your datacenter(s). Note that CloudAPI always uses
SSL/TLS, which means that the endpoint URL must begin with https.

You can quickly get your key fingerprint for SDC_KEY_ID by running:

ssh-keygen -l -f ~/.ssh/id_rsa.pub | awk '{print $2}' | tr -d '\n'

where you replace ~/.ssh/id_rsa.pub with the path to the public key you want
to use for signing requests. Note that newer versions of OpenSSH print SHA256
fingerprints by default; if your tooling expects the older MD5 colon-separated
format, add -E md5 to the ssh-keygen command.

You can set environment variables for the corresponding CLI flags so that you
don't have to type them for each request (e.g. in your .bash_profile). All the
examples in this document assume that these variables have been set.
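For example, your .bash_profile might contain entries like these (all values
are placeholders; substitute your own endpoint, login, and the key fingerprint
produced by the ssh-keygen command above):

```shell
export SDC_URL=https://api.example.com          # your datacenter's CloudAPI endpoint
export SDC_ACCOUNT=demo                         # your Triton login
export SDC_KEY_ID=REPLACE_WITH_KEY_FINGERPRINT  # output of the ssh-keygen command
```

node-smartdc reads these variables directly; node-triton can use them as well,
though it also supports per-datacenter profiles via triton profile.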

Provision a new instance

To provision a new instance, you first need to get the ids for the image and
package you want to use as the base for your instance.

An image is a snapshot of a filesystem and its software (for some types of
container), or a disk image (for hardware virtual machines). You can get the
list of available images using the triton image list or sdc-listimages
commands; see the ListImages section below for a detailed
explanation of these commands.

A package is a set of dimensions for the new instance, such as RAM and disk
size. You can get the list of available packages using the
triton package list or sdc-listpackages commands; see the
ListPackages section below for a detailed explanation of these
commands.

You can use the --name flag to name your instance; if you do not specify a
name, Triton will generate one for you. --image is the id of the image
you'd like to use as the new instance's base. --package is the id of the
package to use to set instance dimensions. For the triton command, you can
also pass the name of the image or the package instead of their id.
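Putting that together, a provision looks like the following (the instance,
image and package identifiers below are placeholders; substitute values from
the list commands above):

```shell
# node-triton accepts image/package names or ids:
triton instance create --name=my-test-instance <image> <package>

# node-smartdc, using ids:
sdc-createmachine --name=my-test-instance --image=<image-uuid> --package=<package-uuid>
```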

When you provision a new instance, the instance will take time to be initialized
and booted; the state attribute will reflect this. Once the state attribute
is "running", you can log in to your new instance (assuming it's a Unix-based
instance) with the following:

ssh-add ~/.ssh/<key file>
ssh -A root@<new instance IP address>

These two commands set up your SSH agent (which has some magical properties,
so you need to handle your SSH keys less often) and log you in as the admin
user on an instance. Note that the admin user has password-less sudo
capabilities, so you may want to set up some less privileged users. The SSH
keys on your account will allow you to login as root or admin on your new
instance.

An alternative to using SSH directly is:

triton ssh <name of instance>

Now that we've done some basics with an instance, let's introduce a few
concepts:

Images

By default, SmartOS images should be available for your use. Your Triton
cloud may have other images available as well, such as Linux or Windows images.
The list of available images can be obtained with the triton image list or
sdc-listimages commands.

Packages

Packages are the Triton name for the dimensions of an instance (how much CPU will
be available, how much RAM, disk and swap, and so forth). Packages are provided
so that you do not need to select individual settings, such as RAM or disk size.

Managing SSH keys

For instances which don't have a brand of kvm (see
triton instance list -o id,brand or sdc-listmachines), you can manage the
SSH keys that allow logging into the instance via CloudAPI. For example, to
rotate keys:

triton key add --name=my-other-rsa-key ~/.ssh/my_other_rsa_key.pub

or

sdc-createkey --name=my-other-rsa-key ~/.ssh/my_other_rsa_key.pub

The --name option sets the name of the key. If you don't provide one,
CloudAPI sets it to the name of the file; in this case my_other_rsa_key.pub.

To use the new key, you will need to update the SDC_KEY_ID environment variable
to the fingerprint of the new key.

At this point you could delete your other key from the system; see
Cleaning Up for a quick example.

You cannot manage the SSH keys of instances with a brand of kvm. Hardware
virtual machines are static, and whatever keys were in your account at instance
creation time are used, provided the OS inside KVM is a *nix.

Creating Analytics

Now that you have a container up and running, and you logged in and did
whatever it is you thought was awesome, let's create an instrumentation to
monitor performance. Analytics are one of the more powerful features of
Triton, so for more information, be sure to read
Appendix B: Cloud Analytics.

When retrieving an instrumentation, pass the id you got back from
sdc-createinstrumentation. You should be able to run this a few times and see
the changes. This is just a starting point; for a full discussion of analytics,
be sure to read Appendix B: Cloud Analytics.

Cleaning up

After going through this Getting Started section, you should now have at least
one SSH key, one instance and one instrumentation. The rest of the commands
assume you have json installed.

Deleting Instrumentations

Before cleaning up your instances, let's get rid of the instrumentation we
created:

sdc-deleteinstrumentation 1

Deleting keys

Finally, you probably have one or two SSH keys uploaded to Triton after going
through the guide, so to delete the one we set up:

triton key delete id_rsa

or

sdc-deletekey id_rsa

RBAC: Users, Roles & Policies

Starting at version 7.2.0, CloudAPI supports Role Based Access Control (RBAC),
which means that accounts can have multiple users and roles
associated with them.

While the behaviour of the main account remains the same,
including the SSH keys associated with it, it's now possible to have
multiple Users subordinate to the main account. Each of these
users has its own set of SSH keys. Both the users and their
associated SSH keys have the same format as the main account object (and the
keys associated with it).

It's worth mentioning that the login for an account's users need only be unique
among the users of that account, not globally. We could have an account
with login "mark", another account "exampleOne" with a user with login "mark",
another account "exampleTwo" with another user with login "mark", and so
forth.

The rules in policies are used for the access control of an account's users.
These rules use Aperture as the
policy language, and are described in detail in the next section.

Our recommendation is to limit each policy's set of rules to a tightly scoped
collection, and then add one or more of these policies to each role. This makes
it easy to reuse existing policies across roles, allowing fine-grained
definition of each role's abilities.

Rules definition for access control

You should refer to the
Aperture documentation for the
complete details about the different possibilities when defining new rules.
This section will only cover a limited set strictly related to CloudAPI's usage.

In the case of CloudAPI, <principal> will be always the user performing the
HTTP request. Likewise, <resource> will always be the URL of such request,
for example /:account/machines/:instance_id.

We add one or more roles to a resource to explicitly define the active roles a
user trying to access a given resource must have. Therefore, we don't need to
specify <principal> in our rules, since it'll always be defined by the
role-tags of the resource the user is trying to get access to. For the same
reason, we don't need to specify <resource> in our rules.

Therefore, CloudAPI's Aperture rules have the format:

CAN <actions> WHEN <conditions>

By default, the access policy will DENY any attempt made by any account user
to access a given resource, unless:

that resource is tagged with a role

that role is active

that role has a policy

that policy contains a rule which explicitly GRANTS access to that resource

For example, a user with an active role read, which includes a policy rule
like CAN listmachines and getmachine, will not get access to resources like
/:account/machines or /:account/machines/:instance_id unless these resources
are role-tagged with the role read too.

Additionally, given that the <actions> included in the policy rule are just
listmachines and getmachine, the user will be able to retrieve an instance's
details provided by the GetMachine action, but will not be able
to perform any other instance actions (like StopMachine).
However, if the role has a rule including that <action> (like StopMachine), or
the user has an additional role which includes that rule, then the user can
invoke that action too.

As an aside, the active roles of a user are set by the default_members
attribute in a role. If three different roles contain the "john" user (amongst
others) in their default-members list, then the "john" user will have those
three roles as active roles by default. This can be overridden by passing in
?as-role=<comma-separated list of role names> as part of the URL, or adding a
--role flag when using a node-smartdc command; provided that each role contains
that user in their members list, then those roles are set as the
currently-active roles for a request instead.

For more details on how Access Control works for both CloudAPI and Manta,
please refer to Role Based Access Control
documentation.

An important note about RBAC and certain reads after writes

CloudAPI uses replication and caching behind the scenes for user, role and
policy data. This implies that API reads after a write on these particular
objects can be up to several seconds out of date.

For example, when a user is created, CloudAPI returns both a user object
(which is up to date), and a location header indicating where that new user
object actually lives. Following that location header may result in a 404 for
a short period.

As another example, if a policy is updated, the API call will return a policy
object (which is up to date), but GETing that URL again may temporarily return
an outdated object with the old details.

For the time being, please keep in mind that user, role and policy
creation/updates/deletion may potentially take several seconds to settle. They
have eventual consistency, not read-after-write.

API Introduction

CloudAPI exposes a REST API over HTTPS. You can work with the REST API by
either calling it directly via tooling you already know about (such as curl, et
al), or by using the CloudAPI CLIs and SDKs from Joyent. The node-triton
CloudAPI SDK & CLI is available as an npm module, which you can install with:

npm install triton

Alternatively, there is the more stable and feature-complete node-smartdc:

npm install smartdc

Although node-triton has fewer features -- for now -- it will continue to
receive the most development effort and future support. node-smartdc is in
maintenance.

The rest of this document will show all APIs in terms of the raw HTTP
specification, the CLI commands, and sometimes the node-smartdc SDK.

Issuing Requests

All HTTP calls to CloudAPI must be made over TLS, and requests must carry at
least two headers (in addition to standard HTTP headers): Authorization and
Api-Version. The details are explained below. In addition to these headers,
any requests requiring content must be sent in an acceptable scheme to
CloudAPI. Details are also below.

Content-Type

For requests requiring content, you can send parameters encoded with
application/json, application/x-www-form-urlencoded or
multipart/form-data. Joyent recommends application/json. The value of the
Accept header determines the encoding of content returned in responses.
CloudAPI supports application/json response encodings only.

For HTTP Signature Authentication, only RSA signing mechanisms
are supported, and your keyId must be equal to the path returned from a
ListKeys API call. For example, if your Triton login is demo,
and you've uploaded an RSA SSH key with the name foo, an Authorization
header would look like:

Authorization: Signature keyId="/demo/keys/foo",algorithm="rsa-sha256" <base64-encoded signature>

The default value to sign for CloudAPI requests is simply the value of the HTTP
Date header. For more information on the Date header value, see
RFC 2616. All requests to
CloudAPI using the Signature authentication scheme must send a Date header.
Note that clock skew will be enforced to within 300 seconds (positive or
negative) from the value sent.
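That signing step can be demonstrated locally. The sketch below uses a
throwaway RSA key so nothing in ~/.ssh is touched (the -m PEM flag makes the
private key readable by openssl):

```shell
# Generate a scratch RSA key, then sign the current Date header value with it.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N '' -m PEM -f "$keydir/demo_key"
now=$(date -u "+%a, %d %h %Y %H:%M:%S GMT")
# Sign the Date value and base64-encode the result, as HTTP Signature requires.
signature=$(printf '%s' "$now" | openssl dgst -sha256 -sign "$keydir/demo_key" | openssl enc -base64 -A)
echo "Date: $now"
echo "signature: $signature"
```

In real requests, you would sign with the private half of a key uploaded to
your account, and send the result in the Authorization header.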

Full support for the HTTP Signature Authentication scheme is provided in both
CloudAPI SDKs; an additional reference implementation for Node.js is available
in the npm http-signature module, which you can install with:

npm install http-signature

Using cURL with CloudAPI

Since cURL is commonly used to script requests to web
services, here's a simple Bash function you can use to wrap cURL when
communicating with CloudAPI:
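A minimal version of such a wrapper, reconstructed as a sketch: it assumes
SDC_URL and SDC_ACCOUNT are set as described earlier, that your signing key
lives at ~/.ssh/id_rsa, and that the key was uploaded under the name id_rsa.

```shell
# Sign the Date header with the account key, then issue the request with curl.
cloudapi() {
    local now signature
    now=$(date -u "+%a, %d %h %Y %H:%M:%S GMT")
    signature=$(printf '%s' "$now" \
        | openssl dgst -sha256 -sign "$HOME/.ssh/id_rsa" \
        | openssl enc -base64 -A)
    curl -sS -i \
        -H "Accept: application/json" \
        -H "Accept-Version: ~8" \
        -H "Date: $now" \
        -H "Authorization: Signature keyId=\"/$SDC_ACCOUNT/keys/id_rsa\",algorithm=\"rsa-sha256\" $signature" \
        "$SDC_URL$1"
}

# Usage: cloudapi /my/machines
```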

For new applications using CloudAPI SDKs, it is recommended that one explicitly
accept a particular major version, e.g. Accept-Version: ~8, so that
future CloudAPI backward incompatible changes (always done with a major
version bump) don't break your application.

The triton tool uses Accept-Version: ~8||~7 by default. Users can restrict
the API version via the triton --accept-version=RANGE ... option. The older
sdc-* tools from node-smartdc similarly use ~8||~7 by default, and users
can restrict the API version via the SDC_API_VERSION=RANGE environment
variable or the --api-version=RANGE option to each command.

The rest of this section describes API changes in each version.

8.0.0

Instance/machine objects (from GetMachine, ListMachines) now has a brand
attribute, which is more granular than the existing type (now deprecated).
Also a docker boolean attribute, which indicates whether the instance
is a Docker container.

[Backward incompatible] This version also makes a breaking change to the
attribute type on images. In API versions 7 and earlier, <image>.type
was either "virtualmachine" (for zvol images) or "smartmachine" for other
image types. In version 8, <image>.type is the untranslated type value
from the image in the IMGAPI.

[Backward incompatible] ListDatasets and GetDataset have been removed.
Use ListImages and GetImage, respectively.

[Backward incompatible] The long deprecated support for API version 6.5
has been dropped. The default attribute on package objects is deprecated,
since it only had meaning in 6.5.

7.3.0

7.2.0

RBAC v1 has been made available on the CloudAPI interface. Accounts can create
users, rules can be created and combined to make policies, policies and users
can be associated together using roles, and role tags can be applied to
CloudAPI resources.

Firewall rules include information regarding rules being global or not
(global attribute), and will optionally include a human-readable
description attribute for the rules (which can be modified except for global
rules).

7.0.0

HTTP signature auth.

Account

You can obtain your account details and update them through CloudAPI, although
login cannot be changed, and password cannot be retrieved.

Keys

This part of the API is the means by which you operate on your SSH/signing keys.
These keys are needed in order to log in to instances over SSH, as well as to
sign requests to this API (see the HTTP Signature Authentication Scheme outlined
in Appendix C for more details).

Currently CloudAPI supports uploads of public keys in the OpenSSH format.

Note that while it's possible to provide a name attribute for an SSH key in
order to use it as a human-friendly alias, this attribute is optional. When
it's not provided, the SSH key fingerprint will be used as the name instead.

For the following routes, the parameter placeholder :key can be replaced with
either the key's name or its fingerprint. It's strongly recommended to use the
fingerprint when possible, since the name attribute does not have any
uniqueness constraints.

UpdateUser(POST/:account/users/:id)

Update a user's modifiable properties.

Note: Password changes are not allowed using this endpoint; there is an
additional endpoint (ChangeUserPassword) for password
changes so it can be selectively allowed/disallowed for users using policies.

Role Tags

SetRoleTags(PUT/:resource_path)

Sets the given role tags to the provided resource path. resource_path
can be the path to any of the CloudAPI resources described in this document:
account, keys, users, roles, policies, user's ssh keys, datacenters, images,
packages, instances, analytics, instrumentations, firewall rules and networks.

For each of these you can set role tags either for an individual resource or
for the whole group; i.e., you can set role tags for all the instances using:

Each image object has the following attributes:

requirements (Object): Contains a grouping of various minimum requirements for provisioning an instance with this image. For example, 'password' indicates that a password must be provided

homepage (String): The URL for a web page with more detailed information for this image

files (Array): An array of image files that make up each image. Currently only a single file per image is supported

files[0].compression (String): The type of file compression used for the image file. One of 'bzip2', 'gzip', 'none'

files[0].sha1 (String): SHA-1 hex digest of the file content. Used for corruption checking

files[0].size (Number): File size in bytes

published_at (ISO8601 date): The time this image was made publicly available

owner (String): The UUID of the user who owns this image

public (Boolean): Indicates if this image is publicly available

state (String): The current state of the image. One of 'active', 'unactivated', 'disabled', 'creating', 'failed'

tags (Object): An object of key/value pairs that allows clients to categorize images by any given criteria

eula (String): URL of the End User License Agreement (EULA) for the image

acl (Array): Access Control List. An array of account UUIDs given access to a private image. The field is only relevant to private images

error (Object): If state is "failed", resulting from a CreateImageFromMachine failure, then there may be an error object of the form {"code": "<string error code>", "message": "<string desc>"}

error.code (String): A CamelCase string code for this error, e.g. "PrepareImageDidNotRun". See the GetImage docs for a table of error.code values

error.message (String): A short description of the image creation failure

Possible error.code values:

PrepareImageDidNotRun: This typically means that the target hardware virtual machine (e.g. Linux) has old guest tools that pre-date the image creation feature. Guest tools can be upgraded with installers at https://download.joyent.com/pub/guest-tools/. Other possibilities are: a boot time greater than the five-minute timeout, or a bug or crash in the image-preparation script

VmHasNoOrigin: Origin image data could not be found for the instance. Typically this is for an instance migrated before image creation support was added

NotSupported: Indicates an error due to functionality that isn't currently supported. One example is that custom image creation of an instance based on a custom image isn't currently supported

ExportImage(POST/:login/images/:id?action=export)

Exports an image to the specified Manta path. Caller must be the owner of the
image, and the correspondent Manta path prefix, in order to export it. Both the
image manifest and the image file will be exported, and their filenames will
default to the following format when the specified manta path is a directory:

<manta_path>/NAME-VER.imgmanifest
<manta_path>/NAME-VER.zfs.FILE-EXT

Where NAME is the image name and VER is the image version. FILE-EXT is the file
extension of the image file. As an example, exporting a foo-1.0.0 image to
/user/stor/cloudapi would result in the following files being exported:

/user/stor/cloudapi/foo-1.0.0.imgmanifest
/user/stor/cloudapi/foo-1.0.0.zfs.gz

By contrast, if the basename of the given prefix is not a directory, then
"MANTA_PATH.imgmanifest" and "MANTA_PATH.zfs[.EXT]" are created. As an example,
the following shows how to export foo-1.0.0 with a custom name:

Packages

Packages are named collections of resources that are
used to describe the dimensions of either a container or a hardware virtual
machine. These resources include (but are not limited to) RAM size, CPUs, CPU
caps, lightweight threads, disk space, swap size, and logical networks.

ListPackages(GET/:login/packages)

Provides a list of packages available in this datacenter.

Inputs

The following are all optional inputs:

name (String): The "friendly" name for this package

memory (Number): How much memory will be available (in MiB)

disk (Number): How much disk space will be available (in MiB)

swap (Number): How much swap space will be available (in MiB)

lwps (Number): Maximum number of light-weight processes (threads) allowed

vcpus (Number): Number of vCPUs for this package

version (String): The version of this package

group (String): The group this package belongs to

When any values are provided for one or more of the aforementioned inputs, the
retrieved packages will match all of them.

Infrastructure and Docker containers are lightweight, offering the most
performance, observability and operational flexibility. Hardware-virtualized
machines are useful for non-SmartOS or non-Linux stacks.

ListMachines(GET/:login/machines)

Lists all instances we have on record for your account. If you have a large
number of instances, you can filter using the input parameters listed below.
Note that deleted instances are returned only if the instance history has not
been purged from Triton.

You can paginate this API by passing in offset and limit. HTTP responses
will contain the additional headers x-resource-count and x-query-limit. If
x-resource-count is less than x-query-limit, you're done, otherwise call the
API again with offset set to offset + limit to fetch additional instances.

Note that there is a HEAD /:login/machines form of this API, so you can
retrieve the number of instances without retrieving a JSON describing the
instances themselves.

Inputs

type (String): (deprecated) The type of instance (virtualmachine or smartmachine)

brand (String): (v8.0+) The type of instance (e.g. lx)

name (String): Machine name to find (will make your list size 1, or 0 if nothing found)

image (String): Image id; returns instances provisioned with that image

state (String): The current state of the instance (e.g. running)

memory (Number): The current size of the RAM deployed for the instance (in MiB)

tombstone (Boolean): Include destroyed and failed instances available in instance history

limit (Number): Return a max of N instances; default is 1000 (which is also the maximum allowable result set size)

offset (Number): Get a limit number of instances starting at this offset

tag.$name (String): An arbitrary set of tags can be used for querying, assuming they are prefixed with "tag."

docker (Boolean): Whether to only list Docker instances, or only non-Docker instances, if present. Defaults to showing all instances

credentials (Boolean): Whether to include the generated credentials for instances, if present. Defaults to false

Note that if the special input tags=* is provided, any other input will be
completely ignored and the response will return all instances with any tag.

CreateMachine(POST/:login/machines)

Allows you to provision an instance.

If you do not specify a name, CloudAPI will generate a random one for you. If
you have enabled Triton CNS on your account, this name will also be used in
DNS to refer to the new instance (and must therefore consist of DNS-safe
characters only).

Your instance will initially be not available for login (Triton must provision
and boot it); you can poll GetMachine for its status. When the
state field is equal to running, you can log in. If the instance is a
brand other than kvm, you can usually use any of the SSH keys managed
under the keys section of CloudAPI to login as any POSIX user on the
OS. You can add/remove keys over time, and the instance will automatically work
with that set.

If the instance has a brand of kvm and runs a UNIX-derived OS (e.g. Linux),
you must have keys uploaded before provisioning; that entire set of keys will
be written out to /root/.ssh/authorized_keys in the new instance, and you can
SSH in using one of those keys. Changing the keys over time under your account
will not affect a running hardware virtual machine in any way; those keys are
statically written at provisioning-time only, and you will need to manually
manage them on the instance itself.

If the image you create an instance from is set to generate passwords for you,
the username/password pairs will be returned in the metadata response as a
nested object, like so:

More generally, the metadata keys can be set either at the time of instance
creation, or after the fact. You must either pass in plain-string values, or a
JSON-encoded string. On metadata retrieval, you will get a JSON object back.

Networks can be specified using the networks attribute. If it is absent from
the input, the instance will default to attaching to one externally-accessible
network (it will have one public IP), and one internally-accessible network from
the datacenter network pools. It is possible to have an instance attached to
only an internal network, or both public and internal, or just external.

Be aware that CreateMachine does not return IP addresses. To obtain the IP
address of a newly-provisioned instance, poll GetMachine until
the instance state is running.

Typically, Triton will allocate the new instance somewhere reasonable within the
cloud. You may want this instance to be placed on the same server as another
instance you have, or have it placed on an entirely different server from your
existing instances so that you can spread them out. In either case, you can
provide locality hints (aka 'affinity' criteria) to CloudAPI.

UUIDs provided should be the ids of instances belonging to you. If there is only
a single UUID entry in an array, you can omit the array and provide the UUID
string directly as the value to a near/far key.

strict defaults to false, meaning that Triton will attempt to meet all the
near and/or far criteria but will still provision the instance when no
server fits all the requirements. If strict is set to true, the creation of
the new instance will fail if the affinity criteria cannot be met.
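As a sketch, a CreateMachine request body carrying such hints might look like
this (the uuids and the name are placeholders):

```json
{
  "name": "web-2",
  "image": "<image-uuid>",
  "package": "<package-uuid>",
  "locality": {
    "strict": false,
    "near": ["<uuid-of-instance-to-be-near>"],
    "far": ["<uuid-of-instance-to-avoid>"]
  }
}
```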

When Triton CNS is enabled, the DNS search domain of the new VM will be
automatically set to the suffix of the "instance" record that is created for
that VM. For example, if the full CNS name of the new VM would be
"foo.inst.35ad1ec4-2eab-11e6-ac02-8f56c66976a1.us-west-1.triton.zone", its
automatic DNS search path would include
"inst.35ad1ec4-2eab-11e6-ac02-8f56c66976a1.us-west-1.triton.zone". This can
be changed later within the instance, if desired.

Inputs

name (String): Friendly name for this instance; default is the first 8 characters of the machine id

User-script

The special value metadata.user-script can be specified to provide a custom
script which will be executed by the instance right after creation, and on every
instance reboot. This script can be specified using the command-line option
--script, which should be an absolute path to the file you want to upload to
the instance.

StopMachine(POST/:login/machines/:id?action=stop)

Allows you to shut down an instance. POST to the instance name with an action
of stop.

CreateMachineSnapshot(POST/:login/machines/:id/snapshots)

Allows you to take a snapshot of an instance. Once you have one or more
snapshots, you can boot the instance from a previous snapshot.

Snapshots are not usable with other instances; they are a point-in-time snapshot
of the current instance. Snapshots can also only be taken of instances that are
not of brand 'kvm'.

Since instances use a copy-on-write filesystem, snapshots take up
increasing amounts of space as the filesystem changes over time. There is a
limit to how much space snapshots are allowed to take. Plan your snapshots
accordingly.

AddMachineTags (POST /:login/machines/:id/tags)

Set tags on the given instance. A pre-existing tag with the same name as one
given will be overwritten.

Note: This action is asynchronous. You can poll on ListMachineTags to wait for
the update to be complete (the triton instance tag set -w,--wait option does
this).

Inputs

Tag name/value pairs. Input data is typically sent as an application/json POST
body; query parameters or an application/x-www-form-urlencoded body also
work. Tag values may be strings, numbers or booleans.
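For example, a request body setting a few tags of each value type (the tag names and values here are arbitrary):

```json
{
  "role": "web",
  "tier": 2,
  "internal": true
}
```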

Analytics

It is strongly recommended that before you read the API documentation for
Analytics, you first read through
Appendix B: Cloud Analytics. Most supporting
documentation and explanation of types and interactions are described there.

DescribeAnalytics (GET /:login/analytics)

Supports retrieving the "schema" for instrumentations which can be created using
the analytics endpoint.

Inputs

None

Returns

A large object that reflects the analytics available to you.

Each of the items listed below is an object; the keys in each are what can be
used. For example, in 'modules', you'll get something like:
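The real response is large; a trimmed, illustrative fragment of the 'modules' object might look like this (the module names and labels shown are examples only):

```json
{
  "cpu": { "label": "CPU" },
  "memory": { "label": "Memory" },
  "nic": { "label": "Network interface" }
}
```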

Each field has a type, which determines how to label its values, as well as
whether the field is numeric or discrete.

Fields are either numeric or discrete based on the "arity" of their type.

Numeric fields

In predicates, values of numeric fields can be compared using numeric equality
and inequality operators (=, <, >, etc).

In decompositions, a numeric field yields a numeric decomposition (see
"Numeric decompositions" above).

Discrete fields

In predicates, values of discrete fields can only be compared using string
equality.

In decompositions, a discrete field yields a discrete decomposition (see
"Discrete decompositions" above).

Note that some fields look like numbers but are used by software as identifiers,
and so are actually discrete fields. Process identifiers are one example: they
are numbers, but it doesn't generally make sense to compare them using
inequalities or to decompose them into a numeric distribution.

Types

Types are used with both metrics and fields for two purposes: to hint to clients
at how to best label values, and to distinguish between numeric and discrete
quantities.

Retrieves metadata and a base64-encoded PNG image of a particular
instrumentation's heatmap.

Inputs

| Field | Type | Description |
| ----- | ---- | ----------- |
| height | Number | Height of the image in pixels |
| width | Number | Width of the image in pixels |
| ymin | Number | Y-axis value for the bottom of the image (default: 0) |
| ymax | Number | Y-axis value for the top of the image (default: auto) |
| nbuckets | Number | Number of buckets in the vertical dimension |
| selected | Array | Array of field values to highlight, isolate or exclude |
| isolate | Boolean | If true, only draw selected values |
| exclude | Boolean | If true, don't draw selected values at all |
| hues | Array | Array of colors for highlighting selected field values |
| decompose_all | Boolean | Highlight all field values (possibly reusing hues) |

Returns

| Field | Type | Description |
| ----- | ---- | ----------- |
| bucket_time | Number | Time corresponding to the bucket (Unix seconds) |
| bucket_ymin | Number | Minimum y-axis value for the bucket |
| bucket_ymax | Number | Maximum y-axis value for the bucket |
| present | Object | If the instrumentation defines a discrete decomposition, this property's value is an object whose keys are values of that field and whose values are the number of data points in that bucket for that key |

Fabrics

CloudAPI provides a way to create and manipulate a fabric. On the fabric you can
create VLANs, and then under that create layer three networks.

A fabric is the basis for building your own private networks that cannot be
accessed by any other user. It represents the physical infrastructure
that makes up a network; however, you don't have to cable or program it. Every
account has its own unique fabric in every datacenter.

On a fabric, you can create your own VLANs and layer-three IPv4 networks. You
can create any VLAN from 0-4095, and you can create any number of IPv4 networks
on top of the VLANs, with all of the traditional IPv4 private address spaces
-- 10.0.0.0/8, 192.168.0.0/16, and 172.16.0.0/12 -- available for use.

You can create networks on your fabrics to create most network topologies. For
example, you could create a single isolated private network that nothing else
could reach, or you could create a traditional configuration where you have a
database network, a web network, and a load balancer network, each on their own
VLAN.

ListFabricVLANs (GET /:login/fabrics/default/vlans)

Inputs

None

Returns

An array of VLAN objects that exist on the fabric. Each VLAN object has the
following properties:

It also returns the Location in the headers where the new NIC lives in the HTTP
API. If a NIC already exists for that network, a 302 redirect will be returned
instead of the object.

NICs do not appear on an instance immediately, so the state of the new NIC can
be checked by polling that location. While the NIC is provisioning, it will have
a state of 'provisioning'. Once it's 'running', the NIC is active on the
instance. If the provision fails, the NIC will be removed and the location will
start returning 404.
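The polling loop described above can be sketched as follows. This is not part of the API itself: `fetch` stands in for whatever HTTP client you use to GET the NIC's Location URL, and the function name is illustrative.

```python
import time

def wait_for_nic(fetch, location, timeout=300, interval=5):
    """Poll a NIC's Location URL until it is running, removed, or we time out.

    `fetch` is any callable taking the location and returning a tuple of
    (http_status, nic_object); it is injected so this sketch stays
    independent of the HTTP client in use.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status, nic = fetch(location)
        if status == 404:
            # The provision failed and the NIC was removed.
            raise RuntimeError("NIC provisioning failed; resource removed")
        if nic.get("state") == "running":
            return nic            # NIC is now active on the instance
        time.sleep(interval)      # still 'provisioning'; try again
    raise TimeoutError("NIC did not reach 'running' state in time")
```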

Polling instance audit

There are some cases where polling for instance state change will not work
because there won't be a state change for the requested action (e.g. "rename"),
or because the state change is short-lived thus making the transition easy to
miss (e.g. "reboot").

In such cases, consider polling the historical list of actions available
through an instance's Machine Audit: wait for the desired action to appear
in that list, and check there whether it succeeded. Taking our example from
the previous section, this is how we could check for a reboot:
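A sketch of that audit-polling check follows. `fetch_audit` stands in for whatever client call retrieves the Machine Audit list, and the "yes"/"no" success convention is an assumption about the audit entry format rather than something guaranteed here:

```python
def reboot_succeeded(fetch_audit, attempts=10):
    """Poll an instance's audit log until a 'reboot' action shows up.

    `fetch_audit` is any callable returning the instance's Machine Audit
    list (most recent action first); it is injected so this sketch stays
    independent of the HTTP client in use.
    """
    for _ in range(attempts):
        for entry in fetch_audit():
            if entry.get("action") == "reboot":
                # Assumes audit entries report success as "yes" or "no".
                return entry.get("success") == "yes"
    raise TimeoutError("no reboot action found in the audit log")
```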

Appendix B: Cloud Analytics

Cloud Analytics (CA) provides deep observability for systems and applications in
a Triton cloud. The CA service enables you to dynamically instrument
systems in the cloud to collect performance data that can be visualized in
real-time (through the portal), or collected using the API and analyzed later by
custom tools. This data can be collected and saved indefinitely for capacity
planning and other historical analysis.

Building blocks: metrics, instrumentations, and fields

A metric is any quantity that can be instrumented using CA. For example:

Disk I/O operations

Kernel thread executions

TCP connections established

MySQL queries

HTTP server operations

System load average

Each metric also defines which fields are available when data is collected.
These fields can be used to filter or decompose data. For example, the Disk I/O
operations metric provides the fields "hostname" (for the current server's
hostname) and "disk" (for the name of the disk actually performing an
operation), which allows users to filter out data from a physical server or
break out the number of operations by disk.

When we create an instrumentation, the system dynamically instruments the
relevant software and starts gathering data. The data is made available
immediately in real-time. To get the data for a particular point in time, you
retrieve the value of the instrumentation for that time:
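In API terms, that retrieval is a GET of the instrumentation's raw value resource, along the lines of:

```
GET /my/analytics/instrumentations/:id/value/raw HTTP/1.1
```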

To summarize: metrics define what data the system is capable of reporting.
Fields enhance the raw numbers with additional metadata about each event that
can be used for filtering and decomposition. Instrumentations specify which
metrics to actually collect, what additional information to collect from each
metric, and how to store that data. When you want to retrieve that data, you
query the service for the value of the instrumentation.

Values and visualizations

We showed above how fields can be used to decompose results. Let's look at that
in more detail, continuing with the "FS Operations" metric and its
"optype" field.

Scalar values

Suppose we create an instrumentation with no filter and no decomposition. Then
the value of the instrumentation for a particular time interval might look
something like this:

{
start_time: 1308789361,
duration: 1,
value: 573
...
}

In this case, start_time denotes the start of the time interval in Unix time,
duration denotes the length of the interval in seconds, and value denotes
the actual value. This means that 573 FS operations completed on all
systems for a user in the cloud between times 1308789361 and 1308789362.

Discrete decompositions

Now suppose we create a new instrumentation with a decomposition by execname.
Then the raw value might look something like this:
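A reconstructed illustration of such a value, in the same shape as the scalar example above (the program names and counts here are invented):

```
{
  start_time: 1308789361,
  duration: 1,
  value: {
    ls: 5,
    cat: 57,
    mysqld: 511
  }
}
```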

We call the decomposition by execname a discrete decomposition because the
possible values of execname ("ls", "cat", ...) are not numbers.

Numeric decompositions

It's useful to decompose some metrics by numeric fields. For example, you might
want to view FS operations decomposed by latency (how long the operation
took). The result is a statistical distribution, which groups nearby
latencies into buckets and shows the number of disk I/O operations that fell
into each bucket. The result looks like this:
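A reconstructed sketch of such a distribution value: each entry maps a bucket (a numeric range, here latency in nanoseconds) to the number of operations that fell into it. The specific ranges and counts are invented:

```
{
  start_time: 1308863061,
  duration: 1,
  value: [
    [[10000, 19999], 12],
    [[20000, 29999], 34],
    [[30000, 39999], 8]
  ]
}
```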

As we will see, this data allows clients to visualize the distribution of I/O
latency, and then highlight individual programs in the distribution (or whatever
field you broke it down along).

Value-related properties

We can now explain several of the instrumentation properties shown previously:

value-dimension: the number of dimensions in returned values, which is
the number of decompositions specified in the instrumentation, plus 1.
Instrumentations with no decompositions have dimension 1 (scalar values).
Instrumentations with a single discrete or numeric decomposition have
dimension 2 (vector values). Instrumentations with both a discrete and a
numeric decomposition have dimension 3 (vector of vectors).

value-arity: describes the format of individual values

scalar: the value is a scalar value (a number)

discrete-decomposition: the value is an object mapping discrete keys to
scalars

numeric-decomposition: the value is either an object (really an array of
arrays) mapping buckets (numeric ranges) to scalars, or an object mapping
discrete keys to such an object. That is, a numeric decomposition is one
which contains at the leaf a distribution of numbers.

The arity serves as a hint to visualization clients: scalars are typically
rendered as line or bar graphs, discrete decompositions are rendered as stacked
or separate line or bar graphs, and numeric decompositions are rendered as
heatmaps.

Predicate Syntax

Predicates allow you to filter out data points based on the fields of a
metric. For example, instead of looking at FS operations for your whole
cloud, you may only care about operations with latency over 100ms, or on a
particular instance.

Predicates are represented as JSON objects using a LISP-like syntax. The
primary goal of the predicate syntax is to be easy to construct and parse
automatically, making it easier for people to build tools that work with
predicates.
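As a sketch, a predicate matching MySQL file operations slower than 100ms on three specific instances might look like the following. The operator and field names follow Cloud Analytics conventions, and latency is measured in nanoseconds, so 100ms is written as 100000000:

```json
{
  "and": [
    { "eq": ["execname", "mysqld"] },
    { "gt": ["latency", 100000000] },
    { "or": [
      { "eq": ["hostname", "host1"] },
      { "eq": ["hostname", "host2"] },
      { "eq": ["hostname", "host3"] }
    ]}
  ]
}
```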

This predicate could be used with the "logical filesystem operations" metric to
identify file operations performed by MySQL on instances "host1", "host2", or
"host3" that took longer than 100ms.

Heatmaps

Up to this point we have been showing raw values, which are JSON
representations of the data exactly as gathered by Cloud Analytics. However, the
service may provide other representations of the same data. For numeric
decompositions, the service provides several heatmap resources that generate
heatmaps, like this one:

Like raw values, heatmap values are returned using JSON, but instead of
specifying a value property, they specify an image property whose contents
are a base64-encoded PNG image. For details, see the API reference. Using the
API, it's possible to specify the size of the image, the colors used, which
values of the discrete decomposition to select, and many other properties
controlling the final result.

Heatmaps also provide a resource for getting the details of a particular heatmap
bucket, which looks like this:
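A reconstructed example of such a bucket-details value, consistent with the description that follows (the y-axis values are latency in nanoseconds, so 10000-20000 corresponds to 10-20 microseconds):

```json
{
  "bucket_time": 1308865185,
  "bucket_ymin": 10000,
  "bucket_ymax": 20000,
  "present": {
    "ls": 5,
    "cat": 57
  }
}
```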

This example indicates the following about the particular heatmap bucket we
clicked on:

the time represented by the bucket is 1308865185

the bucket covers a latency range between 10 and 20 microseconds

at that time and latency range, program ls completed 5 operations and
program cat completed 57 operations.

This level of detail is critical for understanding hot spots or other patterns
in the heatmap.

Data granularity and data retention

By default, CA collects and saves data each second for ten minutes. So if you
create an instrumentation for FS operations, the service will save the
per-second number of FS operations going back for the last ten minutes. These
parameters are configurable using the following instrumentation properties:

granularity: how frequently to aggregate data, in seconds. The default is
one second. For example, a value of 300 means to aggregate every five
minutes' worth of data into a single data point. The smaller this value, the
more space the raw data takes up. granularity cannot be changed after an
instrumentation is created.

retention-time: how long, in seconds, to keep each data point. The default
is 600 seconds (ten minutes). The higher this value, the more space the raw
data takes up. retention-time can be changed after an instrumentation is
created.

These values affect the space used by the instrumentation's data. For example,
all things being equal, the following all store the same amount of data:

10 minutes' worth of per-second data (600 data points)

50 minutes' worth of per-5-second data

25 days' worth of per-hour data

600 days' worth of per-day data
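The equivalence above is just arithmetic: the number of stored data points is retention-time divided by granularity. A quick check of the four configurations:

```python
def data_points(retention_time_s, granularity_s):
    """Number of data points an instrumentation stores."""
    return retention_time_s // granularity_s

# All four configurations above store 600 data points:
assert data_points(10 * 60, 1) == 600                   # 10 min of per-second data
assert data_points(50 * 60, 5) == 600                   # 50 min of per-5-second data
assert data_points(25 * 24 * 3600, 3600) == 600         # 25 days of per-hour data
assert data_points(600 * 24 * 3600, 24 * 3600) == 600   # 600 days of per-day data
```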

The system imposes limits on these properties so that each instrumentation's
data cannot consume too much space. The limits are expressed internally as a
number of data points, so you can adjust granularity and retention-time to match
your needs. Typically, you'll be interested in either per-second data for live
performance analysis, or an array of different granularities and retention-times
for historical usage patterns.

Data persistence

By default, data collected by the CA service is only cached in memory, not
persisted to disk. As a result, transient failures of the underlying CA service
instances can result in loss of the collected data. For live performance
analysis, this is likely not an issue, since the likelihood of a crash is low
and the data can probably be collected again. For historical data being kept
for days, weeks, or even months, it's necessary to persist data to disk. This
can be specified by setting the persist-data instrumentation property to
"true". In that case, CA will ensure that data is persisted at approximately
the granularity interval of the instrumentation, but no more frequently than
every few minutes. (For that reason, there's little value in persisting an
instrumentation whose retention time is only a few minutes.)

Transformations

Transformations are post-processing functions that can be applied to data when
it's retrieved. You do not need to specify transformations when you create an
instrumentation; you need only specify them when you retrieve the value.
Transformations map values of a discrete decomposition to something else. For
example, a metric that reports HTTP operations decomposed by IP address supports
a transformation that performs a reverse-DNS lookup on each IP address so that
you can view the results by hostname instead. Another transformation maps IP
addresses to geolocation data for displaying incoming requests on a world map.

Each supported transformation has a name, like "reversedns". When a
transformation is requested for a value, the returned value includes a
transformations object with keys corresponding to each transformation (e.g.,
"reversedns"). Each of these is an object mapping keys of the discrete
decomposition to transformed values. For example:
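A sketch of a returned value carrying a reversedns transformation (the address, count, and hostname here are invented):

```json
{
  "value": {
    "8.12.47.107": 57
  },
  "transformations": {
    "reversedns": {
      "8.12.47.107": ["joyent.com"]
    }
  }
}
```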

Transformations are always performed asynchronously and the results cached
internally for future requests. So the first time you request a transformation
like "reversedns", you may see no values transformed at all. As you retrieve
the value again, the system will have completed the reverse-DNS lookup for
addresses in the data and they will be included in the returned value.

Appendix C: HTTP Signature Authentication

In addition to HTTP Basic Authentication, CloudAPI supports a mechanism for
authenticating HTTP requests by signing them with your SSH private key.
Specific examples of using this mechanism with Triton are given here. Reference
the HTTP Signature Authentication specification by Joyent, Inc. for complete
details.

A node.js library for HTTP Signature is available with:

npm install http-signature@0.9.11

CloudAPI Specific Parameters

The Signature authentication scheme is based on the model that the client must
authenticate itself with a digital signature produced by the private key
associated with an SSH key under your account (see /my/keys above). Currently
only RSA signatures are supported. You generate a signature by signing the
value of the HTTP Date header.

As an example, assuming that you have associated an RSA SSH key with your
account, called 'rsa-1', the following request is what you would send for a
ListMachines request:
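Assuming the account login is 'demo' (a placeholder), the request would look roughly like this:

```
GET /demo/machines HTTP/1.1
Host: api.example.com
Date: Sat, 11 Jun 2011 23:56:29 GMT
Authorization: Signature keyId="/demo/keys/rsa-1",algorithm="rsa-sha256" <Base64 signature>
```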

Where the signature is the output of
Base64(rsa(sha256("Sat, 11 Jun 2011 23:56:29 GMT"))). Note that the
keyId parameter cannot use the my shortcut used in HTTP resource
paths; CloudAPI must look up your account to resolve the key, as with Basic
authentication. In short, you MUST use the login name associated with your
account to specify the keyId.