The Rackspace Autoscale API enables developers to interact with the
Rackspace Autoscale service through a simple Representational State
Transfer (REST) web service interface.

You use the Autoscale service to automatically scale
resources in response to an increase or decrease in overall workload
based on user-defined policies. You can set up a schedule for triggering
scaling events, or define an event that is triggered by Cloud Monitoring. You
can also specify a minimum and maximum number of cloud servers, the
amount of resources that you want to increase or decrease, and the
thresholds in Cloud Monitoring that trigger the scaling activities.

To use Autoscale through the API, you submit API requests to define a
scaling group consisting of cloud servers and cloud load balancers or
RackConnect v3. Then you define policies, either schedule-based or monitoring-based. For
monitoring-based policies, you define cloud monitoring alerts to watch
the group's activity, and you define scaling rules to change the scaling
group's configuration in response to alerts. For schedule-based
policies, you simply set a schedule. Because you can change a scaling
group's configuration in response to changing workloads, you can begin
with a minimal cloud configuration and grow only when the cost of that
growth is justified.

Log in to the Rackspace Cloud Control panel to get your Rackspace Cloud account username,
API key, and account number. You'll need this information to communicate with Rackspace Cloud
services by using the REST API.

Note

In the API service documentation, the account number is referred to as your tenant ID
or tenant name.

After you log in, click your username on the upper-right side of the top navigation pane.
Then, select Account Settings to open the page.

On the Account Settings page, scroll down to the Account Details section.

Copy and save the account number.

Important

Protect your API key. Do not expose the value in code samples, screen captures, or
insecure client-server communications. Also, make sure that the value is not
included in source code that is stored in public repositories.

For API development, testing, and workflow management in a graphical environment, try
interacting with the API by using an application such as
Postman or RESTClient for Firefox.

cURL is a command-line tool that you can use to interact with REST interfaces. cURL lets
you transmit and receive HTTP requests and responses from the command line or a shell
script, which enables you to work with the API directly. cURL is available for Linux
distributions, Mac OS® X, and Microsoft Windows®. For information about cURL, see
http://curl.haxx.se/.

To run the cURL request examples shown in this guide, copy each example from the HTML version
of this guide directly to the command line or a script.

The following example shows a cURL command for sending an authentication request to
the Rackspace Cloud Identity service.
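The example below is a sketch of that request. It uses a placeholder username, and the RAX-KSKEY:apiKeyCredentials body is the API-key credential format for the Rackspace Cloud Identity service.

```shell
# $apiKey is an environment variable that stores your API key value.
# Replace yourUserName with your Rackspace Cloud account username.
curl -s https://identity.api.rackspacecloud.com/v2.0/tokens \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"auth":{"RAX-KSKEY:apiKeyCredentials":{"username":"yourUserName","apiKey":"'"$apiKey"'"}}}' \
  | python -m json.tool
```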

In this example, $apiKey is an environment variable that stores your API key value.
Environment variables make it easier to reference account information in API requests,
to reuse the same cURL commands with different credentials, and to keep sensitive
information such as your API key from being exposed when you send requests to Rackspace
Cloud API services. For details about creating environment variables, see Configure
environment variables.

Note

The line breaks in the cURL request examples are escaped with a backslash (\),
which allows the command to continue across
multiple lines.

The cURL examples in this guide use the following command-line options.

Option

Description

-d

Sends the specified data in a POST request to the HTTP server.
Use this option to send a JSON request body to the server.

-H

Specifies an extra HTTP header in the request. You can specify any
number of extra headers. Precede each header with the -H option.

Common headers in Rackspace API requests are as follows:

Content-Type: Required for operations with a request body.

Specifies the format of the request body. Following is the syntax
for the header where format is json:

Content-Type:application/json

X-Auth-Token: Required.

Specifies the authentication token.

X-Auth-Project-Id: Optional.

Specifies the project ID, which can be your account number or
another value.

Accept: Optional.

Specifies the format of the response body. Following is the syntax
for the header where the format is json, which is the
default:

Accept:application/json

-i

Includes the HTTP header in the output.

-s

Specifies silent or quiet mode, which makes cURL mute. No progress or
error messages are shown.

-T

Transfers the specified local file to the remote URL.

-X

Specifies the request method to use when communicating with the HTTP
server. The specified request is used instead of the default method,
which is GET.

For commands that return a response, use json.tool to pretty-print the output by
appending the following command to the cURL call:
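The appended command pipes the response through Python's json.tool module. The following sketch demonstrates the pipe with inline JSON standing in for a live API response:

```shell
# Append "| python -m json.tool" to a cURL call to pretty-print its
# JSON output. Demonstrated here with an inline JSON document:
echo '{"access": {"token": {"id": "xxxx"}}}' | python -m json.tool
```

On systems where the python command is not available, use python3 -m json.tool instead.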

Whether you use cURL, a REST client, or a command-line interface (CLI) to send requests
to the Autoscale API, you need an authentication token to include in the X-Auth-Token
header of each API request.

With a valid token, you can send API requests to any of the API service endpoints that you
are authorized to use. The authentication response includes a token expiration date. When a token
expires, you can send another authentication request to get a new one.

Note

For more information about authentication tokens, see the following topics in the
Rackspace Cloud Identity developer documentation.

The examples in the Getting Started Guide show how to authenticate by using username and API key credentials,
which is a more secure method than using password credentials. The authentication
token operations reference describes other types of credentials that you can use for
authentication.

If your credentials are valid, the Identity service returns an authentication response
that includes the following information:

An authentication token

A service catalog with information about the services that you can access

User information and role assignments

In the following example, the ellipsis (...) represents other service endpoints, which
are not shown. The values shown in this and other examples vary because the information
returned is specific to your account.

{"access":{"token":{"id":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","expires":"2014-11-24T22:05:39.115Z","tenant":{"id":"110011","name":"110011"},"RAX-AUTH:authenticatedBy":["APIKEY"]},"serviceCatalog":[{"name":"cloudDatabases","endpoints":[{"publicURL":"https://syd.databases.api.rackspacecloud.com/v1.0/110011","region":"SYD","tenantId":"110011"},{"publicURL":"https://dfw.databases.api.rackspacecloud.com/v1.0/110011","region":"DFW","tenantId":"110011"},{"publicURL":"https://ord.databases.api.rackspacecloud.com/v1.0/110011","region":"ORD","tenantId":"110011"},{"publicURL":"https://iad.databases.api.rackspacecloud.com/v1.0/110011","region":"IAD","tenantId":"110011"},{"publicURL":"https://hkg.databases.api.rackspacecloud.com/v1.0/110011","region":"HKG","tenantId":"110011"}],"type":"rax:database"},...{"name":"cloudDNS","endpoints":[{"publicURL":"https://dns.api.rackspacecloud.com/v1.0/110011","tenantId":"110011"}],"type":"rax:dns"},{"name":"rackCDN","endpoints":[{"internalURL":"https://global.cdn.api.rackspacecloud.com/v1.0/110011","publicURL":"https://global.cdn.api.rackspacecloud.com/v1.0/110011","tenantId":"110011"}],"type":"rax:cdn"}],"user":{"id":"123456","roles":[{"description":"A Role that allows a user access to keystone Service methods","id":"6","name":"compute:default","tenantId":"110011"},{"description":"User Admin Role.","id":"3","name":"identity:user-admin"}],"name":"jsmith","RAX-AUTH:defaultRegion":"ORD"}}}

If the request was successful, you can find the authentication token and other information in the
authentication response. You'll need these values to submit requests to the API. See
Configure environment variables.

If the request failed, review the response message and
the following error message descriptions to determine next steps.

The authentication response returns the following values that you
need to include when you make service requests to the Autoscale API.

token ID

The token ID value is required to confirm your identity each time you access the service.
Include it in the X-Auth-Token header for each API request.

The expires attribute indicates the date and time that the token will expire,
unless it is revoked prior to the
expiration. To get a new token, submit another authentication request. For more
information, see
Manage tokens and token expiration.

tenant ID

The tenant ID provides your account number. For most Rackspace Cloud service APIs, the
tenant ID is appended to the API endpoint in the service catalog automatically. For
Rackspace Cloud Services, the tenant ID has the same value as the tenant name.

endpoint

The API endpoint provides the URL that you use to access the API service. For guidance
on choosing an endpoint, see Service access.

To make it easier to include the values in API requests, use the export command to create
environment variables that can be substituted for the actual values. For example, you can
create an API_ENDPOINT variable to store the URL for accessing an API service.
To reference the value in an API request, prefix the variable name with a $, for example
$API_ENDPOINT.
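For example, the following commands create the variables that later examples in this guide reference. The values shown are placeholders; substitute the token, tenant ID, and endpoint from your own authentication response.

```shell
# Store values from the authentication response (placeholders shown).
export AUTH_TOKEN="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export TENANT_ID="110011"
export API_ENDPOINT="https://dfw.autoscale.api.rackspacecloud.com/v1.0/110011"

# Reference a variable by prefixing its name with $.
echo "$API_ENDPOINT"
```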

Note

The environment variables created with the export command are
valid only for the current terminal session. If you start a new session, run the
export commands again.

To reuse the variables across sessions, update the configuration file for your shell
environment to include the export statements. For details
about using and managing environment variables on different systems, see the
Environment variables wiki.

Create environment variables

In the token section of the authentication response, copy the token ID and
tenant ID values from the token object.

Replace publicURL with the publicURL value listed in the service catalog.

Note

Rackspace Cloud Identity returns an endpoint URL with your tenant ID (account ID).
With Rackspace Auto Scale, you have two options for including the tenant ID in API requests.

Include it in the URL.

https://dfw.autoscale.api.rackspacecloud.com/v1.0/123456

Submit API requests to the base endpoint
https://dfw.autoscale.api.rackspacecloud.com/v1.0, and specify the tenant ID
value in the X-Project-ID header of each request.

You can use the examples in the following sections to create schedule-based
scaling groups by using the Rackspace Auto Scale API.
Before running the examples, review the Rackspace Auto Scale concepts to understand the API workflow, scaling group configurations,
and use cases.

Note

These examples use the $API_ENDPOINT, $AUTH_TOKEN, and $TENANT_ID environment
variables to specify the API endpoint, authentication token, and project ID values
for accessing the service. Make sure you
configure these variables before running the
code samples.

Before you create a server through the Cloud Servers API, you need to obtain
a list of available images so that you can choose one for your new server.

After you choose an image, copy its image ID. You use this image ID
when you create the server.

Use the Cloud Servers API to issue a List Images request
to retrieve a list of options available for configuring your server.
The following example shows how to request a list of cloud server images.

Requesting a list of cloud server images

curl -X GET \
-H "Content-Type: application/json" \
-H "X-Auth-Token: {auth-token}" \
"https://ord.servers.api.rackspacecloud.com/v2/{tenant-id}/images?type=SNAPSHOT" | python -m json.tool

1. After creating your server, customize it so that it can process
your requests. For example, if you are building a webhead
scaling group, configure Apache to start on launch and serve
the files that you need.
2. After you have created and customized your server, save its image
and record the imageID value that is returned in the response body.

After you have obtained the imageID for the server image that you want
to use, you need to create your cloud server. Auto Scale uses
the configuration information in this server image as a blueprint
for creating new servers.

Now you are ready to create your first scaling group. For this
exercise, you will create a schedule-based scaling group that will
trigger a scaling event at 11 P.M. daily. The following example shows
how to create a schedule-based scaling group by submitting a
POST request using cURL.
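The following request body is a sketch of such a group. The names and sizes are illustrative placeholders; the flavorRef and imageRef values reuse the examples shown elsewhere in this guide, and the cron expression 0 23 * * * fires at 11 P.M. daily. You would POST this body to $API_ENDPOINT/groups with the X-Auth-Token and Content-Type: application/json headers.

```json
{
  "groupConfiguration": {
    "name": "scheduled-webhead-group",
    "cooldown": 0,
    "minEntities": 2,
    "maxEntities": 10
  },
  "launchConfiguration": {
    "type": "launch_server",
    "args": {
      "server": {
        "name": "webhead",
        "flavorRef": "performance1-4",
        "imageRef": "0d589460-f177-4b0f-81c1-8ab8903ac7d8"
      }
    }
  },
  "scalingPolicies": [
    {
      "name": "scale up at 11 PM daily",
      "change": 2,
      "cooldown": 0,
      "type": "schedule",
      "args": {"cron": "0 23 * * *"}
    }
  ]
}
```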

If the DELETE request is successful, a 204 response code with no response
body is returned. If the request fails, a 403 response code is
returned with a message stating that your group still has active
entities.

The Auto Scale API provides an option for users to force delete a
scaling group that has active servers. The FORCE DELETE option
removes all servers in the configuration from the load balancer(s)
and then deletes the servers.

Warning

Using FORCE DELETE removes all servers that are associated with the
scaling group. We discourage using the FORCE DELETE
option; delete servers manually instead.

To use the FORCE DELETE option, submit a DELETE request with the
tenantId and groupId parameters specified in the request URL,
and set the force parameter to true.

DELETE /{tenantId}/groups/{groupId}?force=true

Upon successful submission of this request, the minEntities and maxEntities
parameters are automatically set to 0 and deletion of the
group begins. If the DELETE request is successful, a 204
response code with an empty response body is returned.
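Using the environment variables configured earlier, a force-delete call might look like the following sketch, where {groupId} is a placeholder for your scaling group's ID:

```shell
# Force delete a scaling group that still has active servers.
curl -s -i -X DELETE \
  -H "X-Auth-Token: $AUTH_TOKEN" \
  "$API_ENDPOINT/groups/{groupId}?force=true"
```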

This document is intended for software developers who are interested in
developing applications by using the Rackspace Auto Scale API. To use the
information provided here, you should have a general understanding of the
Rackspace Auto Scale service and have a Rackspace Cloud account that has access to the service.
You should also be familiar with the following technologies:

Rackspace Autoscale is an API-based tool that automatically scales
resources in response to an increase or decrease in overall workload
based on user-defined thresholds.

Autoscale calls the Rackspace Cloud Servers,
Rackspace Cloud Load Balancers, and Rackspace RackConnect v3 APIs. All
Rackspace Cloud Server create server configuration parameters can be
used with Autoscale. For more information, see the following documentation:

You can set up a schedule for triggering scaling events or define an event
that triggers a webhook. You can also specify a minimum and maximum
number of cloud servers for your scaling group, the amount of resources
that you want to increase or decrease, and policies based on percentages or
absolute numbers.

Note

Autoscale does not configure any information within a server. You must configure your services
to function when each server is started. We recommend automating your servers' startup processes
with Chef or a similar tool.

Autoscale can use all Rackspace Cloud Server create server API
parameters. For more details, see the Create servers documentation.

The scaling group is at the heart of an Autoscale deployment. The
scaling group specifies the basic elements of the Autoscale
configuration. It manages how many servers can participate in the
scaling group. It also specifies information related to load balancers
if your configuration uses a load balancer.

When you create a scaling group, you specify the details for group
configurations and launch configurations.

Configuration

Description

Group Configuration

Outlines the basic elements of the Autoscale configuration. The group configuration manages how many servers can participate in the scaling group. It sets a minimum and maximum limit for the number of entities that can be used in the scaling process. It also specifies information related to load balancers.

Launch Configuration

Creates a blueprint for how new servers will be created. The launch configuration specifies what type of server image will be started on launch, what flavor the new server is, and which cloud load balancer or RackConnect v3 load balancer pool the new server connects to.
Note: The launchConfiguration uses the admin user to scale up, usually the first admin user found on the tenant. Only that particular admin user's SSH key pair names can be used in the launchConfiguration.
Note: The launchConfiguration update operation overwrites all launchConfiguration settings.

The launch configuration specifies the launch type along with the server and load balancer configuration for the components to start. Most launch configurations have both a server and a load balancer (which can be RackConnect v3) configured, as shown in the Launch configuration examples.

type

Set the type parameter to this value: launch_server.

args

Specifies the configuration for server and load balancers. Most launch
configurations have both a server and a
load balancer (can be RackConnect v3) configured. The following items can be configured:

server

Specifies configuration information for the Cloud server
image that will be created during the scaling process. If you are using Boot From
Volume, the server args are where you specify your create server
template. See Server parameters.

loadbalancers

Specifies the configuration information for the load balancer(s) used in
the cloud server deployment, including a RackConnect v3 load balancer
pool. For background information and an example configuration, see Cloud Bursting with RackConnect
v3.

Note

You must include the ServiceNet network in your configuration
if you use a load balancer so the load balancer can retrieve the IP address of new
servers. See Load balancer parameters.

draining_timeout

Specifies the number of seconds that Autoscale keeps the cloud load balancer node
in DRAINING mode before deleting the node and, eventually, the server. Used only
when scaling down, and ignored when there is no loadbalancers configuration.
This feature works only with a cloud load balancer.

Each scaling group has an associated status that represents the health of the
group. When the group can successfully launch servers and, optionally, add
them to load balancers, the status is ACTIVE. If the scaling group cannot
launch servers because of an error that requires user attention,
the status changes to ERROR. In this case, the group state
contains a list of human-readable messages that explain the conditions that caused the error.
After you fix the errors, you can restore the group to the ACTIVE state by submitting a
converge or execute policy
API request.

Specifies configuration information for the Cloud server image that will
be created during the scaling process. If you are using Boot From
Volume, the server args are where you specify your create server template.

The server group parameter specifies details about the server as
described in the following table. Note that the server arguments are
passed directly to nova when a server is created.

Parameter name and description

name

Specifies a prefix to the name for created servers. The name of new
servers will be automatically generated using the following formula:
[serverName]-AS[uniqueHash], and will look similar to the following:
[serverName]-AS12fabe. The name of new servers may be truncated to fit
within the limit of 255 characters.

flavorRef

Specifies the flavor ID for the server, for example performance1-4.
A flavor is a resource configuration for a server. For details,
see Server flavors.

imageRef

Specifies the ID of the Cloud Server image to start,
0d589460-f177-4b0f-81c1-8ab8903ac7d8 for example.

OS-DCF:diskConfig

Specifies how the disk on new servers is partitioned. Valid values are
AUTO or MANUAL. For non-Rackspace server images, this value
must always be MANUAL. A non-Rackspace server image would be one
that you imported using a non-Rackspace server. For more information,
see the Disk Configuration documentation for
Rackspace Cloud Servers.

Do not use this parameter to configure Autoscale with RackConnect
v3; use the loadBalancers parameter instead.

networks

Specifies the networks to which you want to attach the server. This
attribute enables you to attach to an isolated network for your tenant
ID, the public Internet, and the private ServiceNet. If you do not
specify any networks, your server is attached to the public Internet and
private ServiceNet. The UUID for the private ServiceNet is
11111111-1111-1111-1111-111111111111. The UUID for the public Internet
is 00000000-0000-0000-0000-000000000000.

personality

Specifies the file path or the content to inject into a
server image. See the Server Personality documentation for Rackspace Cloud Servers.

user_data

Specifies the base64 encoded create server template that you use to Boot
from Volume. For details, see the Config-Drive Extension
section of the Next Generation Cloud Servers Developer Guide. For more
information on Boot from Volume, see the developer blog
Using Cloud Init with Rackspace Cloud.
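Combining several of the parameters above, a server entry in the launch configuration args might look like the following sketch. The values reuse the examples given in the table; the networks list attaches both the public Internet and the private ServiceNet.

```json
"server": {
  "name": "webhead",
  "flavorRef": "performance1-4",
  "imageRef": "0d589460-f177-4b0f-81c1-8ab8903ac7d8",
  "OS-DCF:diskConfig": "AUTO",
  "networks": [
    {"uuid": "00000000-0000-0000-0000-000000000000"},
    {"uuid": "11111111-1111-1111-1111-111111111111"}
  ]
}
```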

Load balancer parameters specify the configuration information for the load balancer(s) used in
the cloud server deployment, including a RackConnect v3 load balancer
pool. For background information and an example configuration, see Cloud Bursting with RackConnect
v3. You must
include the ServiceNet network in your configuration if you use a
load balancer so that the load balancer can retrieve the IP address of new
servers.

Parameter name and description

loadBalancerId

Specifies the ID of the load balancer that is automatically generated
when the load balancer is created.

port

Specifies the server port for receiving traffic from the load balancer, commonly port 80.
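Together, these parameters form a loadbalancers entry in the launch configuration args, as in the following sketch (the ID value is a placeholder):

```json
"loadbalancers": [
  {
    "loadBalancerId": 9099,
    "port": 80
  }
]
```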

Autoscale uses webhooks to initiate scaling events. A webhook is an
industry-standard protocol for sending events between systems; Autoscale
uses them to execute policies. A webhook consists of an HTTP
callback that is triggered by some user-defined event, such as an alarm
that is set through Cloud Monitoring or another monitoring service. When
that event occurs, the source site makes an HTTP request to the URI
configured for the webhook.

A webhook contains a POST call to a defined URL, potentially with a
payload in the POST body. You can send webhooks with a simple call in
the library that you are using. You can also send them manually via
cURL:

Example: POST request to execute a webhook

curl -v https://example.com/webhook -X POST -d "payload=payload"

Autoscale supports only anonymous webhooks. In regular webhooks, the
{webhook_version}/{webhook_hash} is specified by the URL. In anonymous
webhooks, the URL is replaced with a hash that is known only to the
issuer. Because no authentication is needed, the webhook is considered
"anonymous."

Autoscale uses capability URLs in conjunction with webhooks. Capability
URLs are URLs that grant authorization for a certain action or event. If
you know the URL, you have access to it, and you can use the URL to
trigger a specific event. Capability URLs are usually long and random
and cannot be guessed by a user.

When a webhook is created, Autoscale creates values for the
capabilityVersion and capabilityHash parameters. These values
are created per webhook, not per policy. When you create a webhook, you
associate it with a policy. The response to the webhook creation request
includes a single capability URL that is also, by inheritance,
associated with the policy.

The Autoscale webhook architecture allows Autoscale to be integrated
with other systems, for example, monitoring systems. Because a capability
URL executes a specific policy, you can trigger that URL based on
events that happen outside of Autoscale.

To execute a capability URL, locate the URL in your webhook, and then
submit a POST request against it, as shown in the following example:
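The following sketch shows such a request. The URL is a placeholder; use the capability URL returned when you created the webhook. No X-Auth-Token header is needed.

```shell
# Execute the policy associated with a capability URL.
curl -s -i -X POST \
  "https://dfw.autoscale.api.rackspacecloud.com/v1.0/execute/{capability_version}/{capability_hash}/"
```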

Executing a capability URL or an anonymous webhook always returns a
202 Accepted response code, even if the request fails because of
an invalid configuration. This behavior prevents information leakage.

Note

To execute anonymous webhooks and capability URLs, no authentication is
needed. You can use a capability URL to trigger multiple webhooks.

Autoscale uses policies to define the scaling activity that will take
place, as well as when and how that scaling activity will take place.
Scaling policies specify how to modify the scaling group and its
behavior. You can specify multiple policies to manage a scaling group.

You can define a scaling policy that is invoked by a webhook when a
predefined event occurs.

Note

The change, changePercent, and desiredCapacity parameters
are mutually exclusive. You can only set one of them per policy.

To configure a webhook-based policy, you set the type parameter to
webhook and then specify the parameter values.

Webhook-triggered policy parameter descriptions

change

Specifies the number of entities to add or remove; for example, 1
means that one server is added. Use this parameter to change the number of
servers by a fixed amount. If a positive number is used, servers are
added; if a negative number is used, servers are removed.

changePercent

Specifies the change value as a percentage. Use this parameter to change
the number of servers relative to the current number of servers. If a positive number
is used, servers are added; if a negative number is used, servers are
removed. The absolute change in the number of servers is always rounded
up. For example, if -X% of the current number of servers translates to
-0.5, -0.25, or -0.75 servers, the actual number of servers that
is shut down is 1.

desiredCapacity

Specifies the final capacity that is desired by the scale-up event. Note
that this value is always rounded up. Use this parameter to specify the number of servers
for the policy to implement, by either adding or removing servers as
needed.

The webhook object takes no args parameter.
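For example, a webhook-based policy that adds one server, with a 30-minute policy cooldown, might look like the following sketch:

```json
[
  {
    "name": "scale up by one server",
    "change": 1,
    "cooldown": 1800,
    "type": "webhook"
  }
]
```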

Note

The change, changePercent, and desiredCapacity parameters
are mutually exclusive. You can only set one of them per policy.

To configure a schedule-based policy, set the type parameter to
"schedule" and then specify the parameter values.

Schedule-based policy parameter descriptions

change

Specifies the number of entities to add or remove; for example, 1
means that one server is added. Use this parameter to change the number of
servers by a fixed amount. If a positive number is used, servers are
added; if a negative number is used, servers are removed.

changePercent

Specifies the change value as a percentage. Use this parameter to
change the number of servers relative to the current number of
servers. If a positive number is used, servers are added; if a negative
number is used, servers are removed. The absolute change in the number
of servers is always rounded up. For example, if -X% of the current
number of servers translates to -0.5, -0.25, or -0.75 servers, the
actual number of servers that is shut down is 1.

desiredCapacity

Specifies the final capacity that is desired by the scale-up event. Use this
parameter to specify the number of servers for the policy to implement, by either adding
or removing servers as needed.

args

Provides information related to the time when the policy is
invoked.

For example, to use cron, a time-based job scheduler, specify the
time to invoke the policy in cron format, as shown in the
following example, which configures the policy to be invoked at 6 AM
every day:
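In cron format, the five fields are minute, hour, day of month, month, and day of week, so 0 6 * * * means 6 AM every day:

```json
"args": {
  "cron": "0 6 * * *"
}
```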

You can define a policy that scales your server resources up or down by
a predefined percentage. For example, you can define a policy to
increase your resources by 20% when a certain predefined event occurs.

When setting up your scaling groups, you configure the minimum and
maximum number of resources that are allowed. These values are specified
in the minEntities and maxEntities parameters in the group
configuration and take effect whenever you update your group
configuration.

Important

If the number of resources that is specified in a policy differs from
the limits that are specified in the group configuration, the
group configuration values take precedence.

You can set a policy to specify when to delete resources,
and how many resources to delete.

When deleting servers, Autoscale follows these rules:

If no new servers are in the process of being built, the oldest
servers are chosen to be deleted first.

If new servers are in the process of being built and in a "pending"
state, these servers are chosen to be deleted first.

After selecting servers for deletion, the Autoscale process deletes each server
immediately, unless the server has an associated load balancer that has been
configured with a draining timeout period. In these cases, Autoscale puts the
load balancer node in DRAINING mode and waits for the draining_timeout period
to end before deleting the server from the scaling group.

Autoscale supports a cooldown feature. A cooldown is a configured
period of time that must pass between actions. Cooldowns only apply to
webhook-based configurations. By configuring group cooldowns, you
control how often a group can have a policy applied, which gives
servers that are scaling up time to complete the scale-up before another policy is
executed. By configuring policy cooldowns, you control how often a
policy can be executed, which can help provide quick scale-ups and
gradual scale-downs.

Cooldowns work the following way:

Group cooldowns control how often a group can be modified by denying
all policy executions until the cooldown expires, even if conditions
exist that would trigger one.

Policy cooldowns control how often a single, specific policy can be
executed. For example, a policy cooldown can require at least six
hours until any successive scale down policies are reactivated.

Note

Cooldown configuration is irrelevant for schedule-based configurations,
and both the group cooldown and the policy cooldown can be set to 0.

You can configure Autoscale to be triggered based on a user-defined
schedule that is specified in one or more policies.

This configuration option is helpful if you know that your Cloud Servers
deployment will need additional resources during certain peak times. For
example, if you need additional server resources during the weekend, you
can define a policy that adds 50 servers on Friday evening and then
removes these servers again on Sunday evening to return to a regular
operational state.

You can configure Autoscale to be triggered through a webhook, based on
a predefined alarm or threshold that has been previously set up in a
monitoring service. Event-based configuration works the following way:

In your monitoring service, you configure alarms that are triggered
when a high utilization of resources occurs.

In Autoscale, you configure a scaling group, scaling policies, and a
webhook to be triggered when your monitoring service sets off an
alarm for high utilization of resources.

The webhook invokes the Autoscale service, which looks up the
policy that is associated with the webhook. This
policy determines the number of cloud servers that need to be added
or removed.

Note

Servers added through a webhook triggered by an external monitoring
service will not be automatically monitored by the external monitoring
service.

You can use Autoscale with a hybrid, dedicated and cloud, solution to
"burst" into the cloud when extra servers are temporarily needed. To do
this, you use RackConnect v3, a Rackspace solution that works with
Rackspace cloud servers and creates a secure bridge between the
Rackspace cloud and your dedicated hardware.

To get started with RackConnect v3 cloud bursting:

Contact your Rackspace Support team and tell them what you want to
do. They will configure a load balancer pool for you and give you the
UUID.

Configure the loadBalancers attribute of your launchConfiguration with
the load balancer pool UUID that you were given as the
loadBalancerId, and use RackConnectV3 as the type. Do
not set a value for port.
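The resulting loadBalancers entry would look like the following sketch; the UUID is a placeholder for the value that Support gives you:

```json
{
  "loadBalancers": [
    {
      "loadBalancerId": "d95ae0c4-6ab8-4873-b82f-f8433840cff2",
      "type": "RackConnectV3"
    }
  ]
}
```

Note that for RackConnectV3 the loadBalancerId is a UUID string, and no port is specified.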

The convergence feature provides higher reliability for scaling by
retrying Cloud Servers API calls until they succeed.
Convergence ensures that the current server and load balancer configuration
for a scaling group always matches the specification in the launch configuration
of the group. It does this by continuously converging to the desired state of the
scaling group, instead of manipulating servers only once.

Convergence also provides a self-healing capability by tracking all the servers
in an autoscaling group continuously and automatically replacing any servers that
have been deleted out-of-band or transitioned to an ERROR state.

Autoscale uses convergence internally to launch and delete servers.
You can trigger convergence explicitly by submitting a converge
request for a specified group. This operation is useful for fixing a scaling
group that is in an ERROR state. Typically, the ERROR state is caused
by an invalid launch configuration, for example, a configuration that
references a deleted server image. After correcting the launch
configuration, you can submit a converge request
to restore the group to the desired state.
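A converge request is an empty POST to the group's converge resource. The following sketch builds (but does not send) the request with the standard library; the endpoint path follows the usual /v1.0/{tenantId}/groups/{groupId}/converge pattern, and the endpoint, IDs, and token shown are placeholders:

```python
import urllib.request

def converge_request(endpoint, tenant_id, group_id, token):
    """Build an (unsent) POST request that triggers convergence on a group."""
    url = f"{endpoint}/v1.0/{tenant_id}/groups/{group_id}/converge"
    req = urllib.request.Request(url, data=b"", method="POST")
    req.add_header("X-Auth-Token", token)
    return req

req = converge_request("https://dfw.autoscale.api.rackspacecloud.com",
                       "851153", "f3af279b-10d7-4a26-aead-98c00bff260f",
                       "<token>")
# urllib.request.urlopen(req) would submit the request
```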

Ideas explained here are relevant to all operations of the API. See the
API Reference for details about specific operations.

The Autoscale API is implemented using a RESTful web service interface.
Like other products in the Rackspace Cloud suite, Autoscale shares a
common token-based authentication system that allows seamless access
between products and services.

Each REST request against the Auto Scale service requires the inclusion of a specific
authorization token, supplied in the X-Auth-Token HTTP header of each API request.
You get a token by submitting an authentication request with valid account credentials to
the following Rackspace Cloud Identity API service endpoint:
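An authentication request supplies your username and API key as apiKeyCredentials. A sketch of the JSON request body, typically sent as a POST to https://identity.api.rackspacecloud.com/v2.0/tokens (the credential values are placeholders):

```json
{
  "auth": {
    "RAX-KSKEY:apiKeyCredentials": {
      "username": "yourUserName",
      "apiKey": "yourApiKey"
    }
  }
}
```

The response includes a token ID that you then pass in the X-Auth-Token header of Auto Scale requests.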

The Auto Scale version defines the contract and build information for
the API.

The contract version denotes the data model and behavior that the API
supports. The requested contract version is included in all request
URLs. Different contract versions of the API might be available at any
given time and are not guaranteed to be compatible with one another.

The Auto Scale API supports pagination of items that are returned in API
call responses. Pagination enables users to view all responses even if
the number of items returned in the response body is larger than what
fits on one page.

The pagination limit for the Auto Scale API is 100. This means you can
view 100 items at a time.

For example, if you want to get a list of all the scaling groups, and
there are more than 100 groups, you see the first 100 groups on one page
and then a link at the bottom of the page that takes you to the next
page, which contains the next 100 items.

A pagination limit set higher than 100 defaults to 100, and a limit set
lower than 1 defaults to 1.

The Auto Scale API paginates the following items:

scaling groups

scaling policies

webhooks

Use the limit and marker parameters to navigate the collection of items that are returned in the request.

Limit is the maximum number of items that can be returned on one page. If the
client submits a request with a limit beyond the 100 items supported by Auto Scale, the response returns
the 413 overLimit error code.

Marker is the ID of the last item in the previous list. Items are sorted by create time in descending
order. When a create time is not available, items are sorted by ID. If the request includes an invalid
ID, the response returns the 400 badRequest error code.
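The limit clamping and marker behavior described above can be sketched as a small helper that builds the query string for a page request (the helper name is illustrative):

```python
from urllib.parse import urlencode

def page_query(limit=100, marker=None):
    """Build pagination query parameters, clamping limit to the 1-100 range."""
    limit = max(1, min(100, limit))
    params = {"limit": limit}
    if marker is not None:
        params["marker"] = marker
    return urlencode(params)

page_query(2)    # 'limit=2'
page_query(500)  # clamped to 'limit=100'
page_query(2, "2d321cd2-b873-4865-9941-5ea6783fd58c")
```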

When you submit a request for a group manifest, you receive a list of
all the available scaling groups and associated policies. However, the
response body lists only 100 items, or whatever number of responses per
page you configure by using the limit parameter. If more results are
available than you specified in the limit parameter, a link with
"rel":"next" is provided in the response body. The following example
sets the limit parameter value to 2 to list two responses per page.

For Auto Scale, the limit value range is 1-100 inclusive.
If you set the value to a number greater than 100, it defaults to 100.
Set it to less than 1, and it defaults to 1.

If you provide an invalid query argument for limit, the response returns a
400 error. The marker parameter specifies the last seen
group ID. When you click the link that is returned, all the groups
displayed have group IDs that are greater than
f82bb000-f451-40c8-9dc3-6919097d2f7e.

When you submit a request to obtain all the policies associated with a
scaling group, a list of policies is returned. However, the response
body lists only 100 items, or whatever number of responses per page you
configure by using the limit parameter. If more results are
available than you specified in the limit parameter, a link with
"rel":"next" is provided in the response body. This is shown in
the following example, which sets the limit parameter value to 2 to list
two responses per page.

{"policies":[{"change":10,"cooldown":5,"id":"25adccf9-0077-4510-b37d-90a48c9dc08f","links":[{"href":"https://dfw.autoscale.api.rackspacecloud.com/v1.0/851153/groups/f3af279b-10d7-4a26-aead-98c00bff260f/policies/25adccf9-0077-4510-b37d-90a48c9dc08f/","rel":"self"}],"name":"scale up by 10","type":"webhook"},{"args":{"cron":"0 */2 * * *"},"change":10,"cooldown":3,"id":"2d321cd2-b873-4865-9941-5ea6783fd58c","links":[{"href":"https://dfw.autoscale.api.rackspacecloud.com/v1.0/851153/groups/f3af279b-10d7-4a26-aead-98c00bff260f/policies/2d321cd2-b873-4865-9941-5ea6783fd58c/","rel":"self"}],"name":"Schedule policy to run repeately","type":"schedule"}],"policies_links":[{"href":"https://dfw.autoscale.api.rackspacecloud.com/v1.0/851153/groups/f3af279b-10d7-4a26-aead-98c00bff260f/policies/?limit=2&marker=2d321cd2-b873-4865-9941-5ea6783fd58c","rel":"next"}]}

The marker parameter points to the last seen policy ID. When you
click the link that is returned, all the policies displayed have
policy IDs that are greater than 2d321cd2-b873-4865-9941-5ea6783fd58c.

When you submit a request to obtain all the webhooks associated with a
policy, a list of webhooks is returned. However, the response body
lists only 100 items, or whatever number of responses per page you
configure by using the limit parameter. If more results are available
than you specified in the limit parameter, a link with "rel":"next"
is provided in the response body. This is shown in
the following example, which sets the limit parameter value to 2 to list
two responses per page.

The marker parameter points to the last seen webhook ID. When you
click the link that is provided in the response body, all the
webhooks displayed have webhook IDs that are greater than
f82bb000-f451-40c8-9dc3-6919097d2f7e.

All accounts, by default, have a preconfigured set of thresholds (or
limits) to manage capacity and prevent abuse of the system. The system
recognizes rate limits and absolute limits. Rate limits are
thresholds that are reset after a certain amount of time passes.
Absolute limits are fixed.

For any user, all Auto Scale operations are limited to 1,000 calls per
minute.

In addition, the following table specifies the default rate limits for
specific Auto Scale API operations:

Table: Default Rate Limits

Method                  URI               RegEx                 Default
GET, PUT, POST, DELETE  /v1.0/execute/*   /v1\.0/execute/(.*)   10 per second
GET, PUT, POST, DELETE  /v1.0/tenantId/*  /v1\.0/([0-9]+)/.+    1000 per minute

Rate limits are applied in order relative to the verb, going from least
to most specific. For example, although the general threshold for
operations to /v1.0/* is 1,000 per minute, you cannot POST to
/v1.0/execute/* more than 10 times per second, which is 600 times per
minute.

If you exceed the thresholds established for your account, a 413
RateControl HTTP response is returned with a Retry-After header that
notifies the client when it can try again.
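Clients should honor the Retry-After header before retrying. A minimal sketch, assuming the header carries a delay in seconds (Retry-After can also carry an HTTP date, which this helper does not handle):

```python
import time

def retry_after_seconds(headers, default=1.0):
    """Parse a Retry-After header value expressed as seconds."""
    value = headers.get("Retry-After")
    try:
        return max(0.0, float(value))
    except (TypeError, ValueError):
        return default

delay = retry_after_seconds({"Retry-After": "5"})
# a real client would call time.sleep(delay) before retrying the request
```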

If any Rackspace Auto Scale request results in an error, the service
returns an appropriate 4xx or 5xx HTTP status code and the following
information in the body:

Title

Exception type

HTTP status code

Message

For Auto Scale users, common faults are caused by invalid
configurations. For example, trying to boot a server from an image that
does not exist causes a fault, as does trying to attach a load balancer
to a scaling group that does not exist.
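The error body is JSON containing those fields. A hypothetical example for an invalid image reference follows; the field names and message text are illustrative of the format, not normative:

```json
{
  "error": {
    "code": 400,
    "type": "InvalidLaunchConfiguration",
    "message": "Invalid imageRef: the image does not exist or is not active"
  }
}
```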

When configuring a Cloud Server image through the Control Panel or the
API, you choose a specific server flavor. A flavor is an available
hardware configuration for a server. Each flavor has a unique
combination of disk space and memory capacity. The server flavor that is
specified in the Control Panel maps to a specific flavor ID in the
Rackspace API. The following table outlines the mapping between flavor
ID and the available flavors in the Control Panel.

For more information and details on flavors, see the Server
Flavors
section in the Rackspace Cloud Servers Developer's Guide.

Note

The Standard Instance flavors are being phased out. Do not use them when adding servers.

The Server flavor names have been recently modified:

Performance 1 flavors are now General Purpose v1 flavors

Performance 2 flavors are now I/O v1 flavors

Use the List Flavors
operation, described in the Rackspace Cloud Servers Developer's Guide,
to get a list of all current flavors.

Table: Supported Flavors for Next Generation Cloud Servers

ID                Flavor name               Memory (MB)  Disk space  Ephemeral  VCPUs  RXTX factor
2                 512 MB Standard Instance  512          20          0          1      80.0
3                 1 GB Standard Instance    1024         40          0          1      120.0
4                 2 GB Standard Instance    2048         80          0          2      240.0
5                 4 GB Standard Instance    4096         160         0          2      400.0
6                 8 GB Standard Instance    8192         320         0          4      600.0
7                 15 GB Standard Instance   15360        620         0          6      800.0
8                 30 GB Standard Instance   30720        1200        0          8      1200.0
general1-1        1 GB General Purpose v1   1024         20          0          1      200.0
general1-2        2 GB General Purpose v1   2048         40          0          2      400.0
general1-4        4 GB General Purpose v1   4096         80          0          4      800.0
general1-8        8 GB General Purpose v1   8192         160         0          8      1600.0
compute1-4        3.75 GB Compute v1        3840         0           0          2      625.0
compute1-8        7.5 GB Compute v1         7680         0           0          4      1250.0
compute1-15       15 GB Compute v1          15360        0           0          8      2500.0
compute1-30       30 GB Compute v1          30720        0           0          16     5000.0
compute1-60       60 GB Compute v1          61440        0           0          32     10000.0
io1-15            15 GB I/O v1              15360        40          150        4      1250.0
io1-30            30 GB I/O v1              30720        40          300        8      2500.0
io1-60            60 GB I/O v1              61440        40          600        16     5000.0
io1-90            90 GB I/O v1              92160        40          900        24     7500.0
io1-120           120 GB I/O v1             122880       40          1200       32     10000.0
memory1-15        15 GB Memory v1           15360        0           0          2      625.0
memory1-30        30 GB Memory v1           30720        0           0          4      1250.0
memory1-60        60 GB Memory v1           61440        0           0          8      2500.0
memory1-120       120 GB Memory v1          122880       0           0          16     5000.0
memory1-240       240 GB Memory v1          245760       0           0          32     10000.0
onmetal-compute1  OnMetal Compute v1        32768        32          0          20     10000.0
onmetal-io1       OnMetal I/O v1            131072       32          3200       40     10000.0
onmetal-memory1   OnMetal Memory v1         524288       32          0          24     10000.0

Note

Auto Scale only supports Next Generation servers. No First Generation
servers are supported.

The account owner (identity:user-admin) can create account users on
the account and then assign roles to those users. The roles grant the
account users specific permissions for accessing the capabilities of the
Auto Scale service. Each account has only one account owner, and that
role is assigned by default to any Rackspace Cloud account when the
account is created.

See the Cloud Identity Client Developer Guide for information about
how to perform these tasks.

Two roles (observer and admin) can be used to access the Auto Scale API
specifically. The following table describes these roles and their
permissions.

Table: Auto Scale Product Roles and Capabilities

Role Name           Role Permissions
autoscale:admin     This role provides Create, Read, Update, and Delete permissions in Auto Scale, where access is granted.
autoscale:observer  This role provides Read permission in Auto Scale, where access is granted.

Additionally, two multiproduct roles apply to all products. Users with
multiproduct roles inherit access to future products when those products
become RBAC-enabled. The following table describes these roles and their
permissions.

Table: Multiproduct (Global) Roles and Capabilities

Role Name  Role Permissions
admin      Create, Read, Update, and Delete permissions across multiple products, where access is granted.

The account owner can set roles for both multiproduct and Auto Scale
scope, and it is important to understand how any potential conflicts
among these roles are resolved. When two roles appear to conflict, the
role that provides the more extensive permissions takes precedence.
Therefore, admin roles take precedence over observer roles, because
admin roles provide more permissions.

The following table shows two examples of how potential conflicts
between user roles in the Control Panel are resolved.

Table: Resolving cross-product role conflicts

Example 1
  Permission configuration: The user is assigned the multiproduct observer and Auto Scale admin roles.
  View of permission in the Control Panel: The user appears to have only the multiproduct observer role.
  Can the user perform product admin functions in the Control Panel? Yes, for Auto Scale only. The user has the observer role for other products.

Example 2
  Permission configuration: The user is assigned the multiproduct admin and Auto Scale observer roles.
  View of permission in the Control Panel: The user appears to have only the multiproduct admin role.
  Can the user perform product admin functions in the Control Panel? Yes, for all products. The multiproduct admin role takes precedence over the Auto Scale observer role.

API operations for Auto Scale might not be available to all roles.
To see which roles are permitted to invoke which operations, review the
Permissions Matrix for Auto Scale article in the Rackspace Knowledge Center.

This operation lists the scaling groups that are available for a specified tenant.

This table shows the possible response codes for this operation:

200 OK: The request succeeded, and the response contains the list of scaling groups.
400 InvalidQueryArgument: The "limit" query argument is not a valid integer.
401 InvalidCredentials: The X-Auth-Token that the user supplied is invalid.
403 Forbidden: The user does not have permission to access the resource; for example, the user has only an observer role and attempted an operation available only to users with an admin role. Some API nodes also use this status code for other conditions.
405 InvalidMethod: The method used is unavailable for the endpoint.
413 RateLimitError: The user has surpassed their rate limit.
500 InternalError: An error internal to the application has occurred; please file a bug report.

This operation creates a scaling group, that is, a collection of servers and load balancers that are managed by scaling policies. To describe the group, specify the scaling group configuration, launch configuration, and optional scaling policies in the request body in JSON format.

If the request succeeds, the response body describes the created group in JSON format. The response includes an ID and links for the group.

You can specify custom metadata for your group configuration using the optional metadata parameter.

Note

Group metadata is stored within the Auto Scale API and can be queried. You can use the metadata parameter for
custom automation, but it does not change any functionality in Autoscale.
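Putting the pieces together, a minimal create-group request body might look like the following sketch; the names, IDs, and values are placeholders, and the parameters are described in detail below:

```json
{
  "groupConfiguration": {
    "name": "workers",
    "cooldown": 60,
    "minEntities": 1,
    "maxEntities": 10,
    "metadata": {"owner": "ops"}
  },
  "launchConfiguration": {
    "type": "launch_server",
    "args": {
      "server": {
        "name": "worker",
        "flavorRef": "general1-1",
        "imageRef": "<image-uuid>"
      },
      "loadBalancers": [
        {"loadBalancerId": 9099, "port": 80}
      ]
    }
  },
  "scalingPolicies": [
    {"name": "scale up by one", "change": 1, "cooldown": 300, "type": "webhook"}
  ]
}
```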

This table shows the possible response codes for this operation:

201 Created: The scaling group has been created.
400 InvalidJsonError: The request is refused because the body was invalid JSON.
400 InvalidLaunchConfiguration: The "flavorRef" value is invalid.
400 InvalidLaunchConfiguration: The "imageRef" value is invalid or not active.
400 InvalidLaunchConfiguration: The base64 encoding for the "path" argument in the "personality" parameter is invalid.
400 InvalidLaunchConfiguration: The content of the files in the "personality" parameter exceeds the maximum size limit allowed.
400 InvalidLaunchConfiguration: The load balancer ID provided is invalid.
400 InvalidLaunchConfiguration: The number of files in the "personality" parameter exceeds the maximum limit.
400 InvalidMinEntities: The "minEntities" value is greater than the "maxEntities" value.
400 ValidationError: The request body had valid JSON but with unexpected properties or values in it. Note that many combinations can cause this error.
401 InvalidCredentials: The X-Auth-Token that the user supplied is invalid.
403 Forbidden: The user does not have permission to access the resource; for example, the user has only an observer role and attempted an operation available only to users with an admin role. Some API nodes also use this status code for other conditions.
405 InvalidMethod: The method used is unavailable for the endpoint.
413 RateLimitError: The user has surpassed their rate limit.
415 UnsupportedMediaType: The request is refused because the content type of the request is not "application/json".
422 ScalingGroupOverLimitsError: The user has reached their quota for scaling groups, currently 100.
500 InternalError: An error internal to the application has occurred; please file a bug report.

A launch configuration defines what to do when a
new server is created, including information about
the server image, the flavor of the server image,
and the cloud load balancer or RackConnectV3 load
balancer pool to which to connect. You must set
the type parameter to launch_server.

launchConfiguration.args

Object
(Required)

The configuration used to create new servers in
the scaling group. You must specify the server
attribute, and can also specify the optional
loadBalancers attribute. Most launch
configurations have both a server and a cloud load
balancer or RackConnectV3 load balancer pool
configured.

launchConfiguration.args.loadBalancers

Array
(Optional)

One or more cloud load balancers or RackConnectV3
load balancer pools to which to add new servers.
For background information and an example
configuration, see Cloud Bursting with
RackConnect v3. All servers are
added to these load balancers with the IP
addresses of their ServiceNet network. All servers
are enabled and equally weighted. Any new servers
that are not connected to the ServiceNet network
are not added to any load balancers.

launchConfiguration.args.loadBalancers.[*].port

Integer
(Required)

The port number of the service (on the new
servers) to use for this particular cloud load
balancer. In most cases, this port number is 80.
Note: This parameter is not required if you are
using RackConnectV3 and should be left empty.

launchConfiguration.args.loadBalancers.[*].loadBalancerId

String
(Required)

The ID of the cloud load balancer or
RackConnectV3 load balancer pool to which new
servers are added. For cloud load balancers, set
the ID as an integer; for RackConnectV3, set the
UUID as a string. Note that when you use
RackConnectV3, this value is supplied to you by
Rackspace Support after they configure your load
balancer pool.

launchConfiguration.args.server

Object
(Required)

The attributes that Auto Scale uses to create a
new server. The attributes that you specify for
the server entity apply to all new servers in the
scaling group, including the server name. Note that
the server arguments are passed directly to nova
when the server is created. For more information,
see Create Your Server with the nova Client.

launchConfiguration.args.server.flavorRef

String
(Required)

The flavor of the server image. Specifies the
flavor ID for the server. A flavor is a resource
configuration for a server. For more information,
see Server flavors.

launchConfiguration.args.server.imageRef

String
(Required)

The ID of the cloud server image from which new
servers are created.

launchConfiguration.args.server.diskConfig

String
(Required)

How the disk on new servers is partitioned. Valid
values are AUTO or MANUAL. For non-
Rackspace server images, this value must always be
MANUAL. A non-Rackspace server image is
one that you imported by using a non-Rackspace
server. For more information, see the Disk
Configuration Extension
documentation for Rackspace Cloud Servers.

launchConfiguration.args.server.personality

Array
(Required)

The file path and/or the content that you want to
inject into a server image. For more information,
see the Server personality documentation for Rackspace Cloud
Servers.

launchConfiguration.args.server.personality.[*].path

String
(Required)

The path to the file that contains data that is
injected into the file system of the new cloud
server image.

launchConfiguration.args.server.personality.[*].contents

String
(Required)

The content that is injected into the file
system of the new cloud server image.

launchConfiguration.args.server.user_data

String
(Optional)

The base64-encoded string of your create server
template.

launchConfiguration.type

String
(Required)

The type of the launch configuration. For this
release, this parameter must be set to
launch_server.

groupConfiguration

Object
(Required)

The configuration options for the scaling group.
The scaling group configuration specifies the
basic elements of the Auto Scale configuration. It
manages how many servers can participate in the
scaling group. It specifies information related to
load balancers.

groupConfiguration.maxEntities

Integer
(Optional)

The maximum number of entities that are allowed in
the scaling group. If unconfigured, defaults to
1000. If this value is provided it must be set to
an integer between 0 and 1000.

groupConfiguration.name

String
(Required)

The name of the scaling group. This name does not
need to be unique.

groupConfiguration.cooldown

Integer
(Required)

The cool-down period before more entities are
added, in seconds. This number must be an integer
between 0 and 86400 (24 hrs).

groupConfiguration.minEntities

Integer
(Required)

The minimum number of entities in the scaling
group. This number must be an integer between 0
and 1000.

groupConfiguration.metadata

Object
(Optional)

Custom metadata for your group
configuration. You can use the metadata parameter
for custom automation, but it does not change
any functionality in Auto Scale. There is
currently no limitation on depth.

scalingPolicies

Array
(Required)

This parameter group specifies configuration
information for your scaling policies. Scaling
policies specify how to modify the scaling group
and its behavior. You can specify multiple
policies to manage a scaling group.

scalingPolicies.[*]

Array
(Required)

An array of scaling policies.

scalingPolicies.[*].name

String
(Required)

A name for the scaling policy. This name must be
unique for each scaling policy.

scalingPolicies.[*].args

Object
(Optional)

Additional configuration information for policies
of type "schedule." This parameter is not required
for policies of type "webhook." The args object
must include either an at or a cron property;
the two are mutually exclusive.

scalingPolicies.[*].args.at

String
(Optional)

The time when this policy runs. This property is
mutually exclusive with the cron property.
You must specify seconds when using at, for
example, at:"2013-12-05T03:12:00Z". If seconds are not specified, a
400 error is returned. Note that the policy is
triggered within a 10-second range of the time
specified.
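For example, a one-shot scheduled policy using at (the name and values are illustrative; note the required seconds in the timestamp):

```json
{
  "name": "scale down overnight",
  "change": -2,
  "cooldown": 0,
  "type": "schedule",
  "args": {"at": "2013-12-05T03:12:00Z"}
}
```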

scalingPolicies.[*].changePercent

Number
(Optional)

The percent change to make in the number of
servers in the scaling group. If this number is
positive, the number of servers increases by the
given percentage. If this parameter is set to a
negative number, the number of servers decreases
by the given percentage. The absolute change in
the number of servers is rounded away from zero to
the next integer. This means that if -X% of the
current number of servers translates to -0.25, -0.5,
or -0.75 servers, the actual number of servers that
are shut down is 1. If X% of the current number of
servers translates to 1.2, 1.5, or 1.7 servers, the
actual number of servers that are launched is 2.
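Per the worked examples above, the rounding takes the ceiling of the magnitude and keeps the sign. A sketch (the function name is illustrative):

```python
import math

def percent_change(current_servers, change_percent):
    """Compute the whole-server change implied by changePercent.
    Fractional results are rounded away from zero, per the examples above."""
    raw = current_servers * change_percent / 100.0
    if raw == 0:
        return 0
    return int(math.copysign(math.ceil(abs(raw)), raw))

percent_change(10, 15)   # 1.5 servers -> 2 launched
percent_change(10, -5)   # -0.5 servers -> 1 shut down (-1)
```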

scalingPolicies.[*].cooldown

Number
(Required)

The cool-down period, in seconds, before this
particular scaling policy can run again. The cool-
down period does not affect the global scaling
group cool-down. The minimum value for this
parameter is 0 seconds, the maximum value is 86400
seconds (24 hrs).

scalingPolicies.[*].type

Enum
(Required)

The type of policy that runs. For the current
release, this value can be either webhook for
webhook-based policies or schedule for
schedule-based policies.

scalingPolicies.[*].change

Integer
(Optional)

The change to make in the number of servers in the
scaling group. This parameter must be an integer.
If the value is a positive integer, the number of
servers increases. If the value is a negative
integer, the number of servers decreases.

scalingPolicies.[*].desiredCapacity

Integer
(Optional)

The desired server capacity of the scaling
group; that is, how many servers should be in the
scaling group. This value must be an absolute
number, greater than or equal to zero. For
example, if this parameter is set to ten,
executing the policy brings the number of servers
to ten. The minimum allowed value is zero. Note
that the configured group maxEntities and
minEntities values take precedence over this setting.

This operation shows the configuration for a specified scaling group, including group settings, launch configuration settings, and policy settings. The configuration is returned in the response body in JSON format.

This table shows the possible response codes for this operation:

200 OK: The request succeeded, and the response contains configuration details for the specified scaling group.
401 InvalidCredentials: The X-Auth-Token that the user supplied is invalid.
403 Forbidden: The user does not have permission to access the resource; for example, the user has only an observer role and attempted an operation available only to users with an admin role. Some API nodes also use this status code for other conditions.
404 NoSuchScalingGroupError: The specified scaling group was not found.
405 InvalidMethod: The method used is unavailable for the endpoint.
413 RateLimitError: The user has surpassed their rate limit.
500 InternalError: An error internal to the application has occurred; please file a bug report.

This operation updates the configuration of an existing scaling group. To change the configuration, specify the new configuration in the request body in JSON format. Configuration elements include the minimum number of entities, the maximum number of entities, the global cooldown time, and other metadata. If the update is successful, no response body is returned.

This table shows the possible response codes for this operation:

204 Success But No Content: The update scaling group configuration request succeeded.
400 ValidationError: The request body had valid JSON but with unexpected properties or values in it. Note that many combinations can cause this error.
401 InvalidCredentials: The X-Auth-Token that the user supplied is invalid.
403 Forbidden: The user does not have permission to access the resource; for example, the user has only an observer role and attempted an operation available only to users with an admin role. Some API nodes also use this status code for other conditions.
404 NoSuchScalingGroupError: The specified scaling group was not found.
405 InvalidMethod: The method used is unavailable for the endpoint.
413 RateLimitError: The user has surpassed their rate limit.
415 UnsupportedMediaType: The request is refused because the content type of the request is not "application/json".
500 InternalError: An error internal to the application has occurred; please file a bug report.

maxEntities

Integer (Required)

The maximum number of
entities that are
allowed in the scaling
group. If left
unconfigured, defaults
to 1000. If this value
is provided, it must be
set to an integer
between 0 and 1000.

cooldown

Integer (Required)

The cooldown period, in
seconds, before any
additional changes can
happen. This number must
be an integer between 0
and 86400 (24 hrs).

name

String (Required)

The name of the scaling
group. This name does
not have to be unique.

minEntities

Integer (Required)

The minimum number of
entities in the scaling
group. This number must
be an integer between 0
and 1000.

metadata

Object (Required)

Specifies custom metadata
for your group
configuration. You can
use this object to enable
custom automation. The
specification does not
affect Auto Scale
functionality. There is
no limitation on depth.

Example Update scaling group configuration: JSON request

{"name":"workers","cooldown":60,"minEntities":5,"maxEntities":100,"metadata":{"firstkey":"this is a string","secondkey":"1"}}

This operation retrieves configuration details for a specified scaling group.

Details include the launch configuration and the scaling policies for the specified scaling group configuration.

The details appear in the response body in JSON format.

This table shows the possible response codes for this operation:

200 OK: The request succeeded, and the response contains details about the specified scaling group.
400 InvalidQueryArgument: The "limit" query argument value is not a valid integer.
401 InvalidCredentials: The X-Auth-Token that the user supplied is invalid.
403 Forbidden: The user does not have permission to access the resource; for example, the user has only an observer role and attempted an operation available only to users with an admin role. Some API nodes also use this status code for other conditions.
404 NoSuchScalingGroupError: The specified scaling group was not found.
405 InvalidMethod: The method used is unavailable for the endpoint.
413 RateLimitError: The user has surpassed their rate limit.
500 InternalError: An error internal to the application has occurred; please file a bug report.

This operation retrieves configuration details for a specified scaling group and its associated webhooks.

Details include the launch configuration, the scaling policies, and the policies' webhooks for the specified scaling group configuration.

The details appear in the response body in JSON format.

Note

The ?webhooks=true parameter is required for this method.

This table shows the possible response codes for this operation:

200 OK: The request succeeded, and the response contains details about the specified scaling group, including associated webhooks.
400 InvalidQueryArgument: The "limit" query argument value is not a valid integer.
401 InvalidCredentials: The X-Auth-Token that the user supplied is invalid.
403 Forbidden: The user does not have permission to access the resource; for example, the user has only an observer role and attempted an operation available only to users with an admin role. Some API nodes also use this status code for other conditions.
404 NoSuchScalingGroupError: The specified scaling group was not found.
405 InvalidMethod: The method used is unavailable for the endpoint.
413 RateLimitError: The user has surpassed their rate limit.
500 InternalError: An error internal to the application has occurred; please file a bug report.

The scaling group must be empty before it can be deleted. An empty group contains no entities. If deletion is successful, no response body is returned. If the group contains pending or active entities, deletion fails and a 409 error message is returned. If there are pending or active servers in the scaling group, pass force=true to force delete the group. Passing force=true immediately deletes all active servers in the group. Pending servers are deleted when they build and become active.
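A force delete is a DELETE on the group resource with force=true in the query string. The following sketch builds (but does not send) the request with the standard library; the endpoint, IDs, and token shown are placeholders:

```python
import urllib.request

def delete_group_request(endpoint, tenant_id, group_id, token, force=False):
    """Build an (unsent) DELETE request for a scaling group."""
    url = f"{endpoint}/v1.0/{tenant_id}/groups/{group_id}"
    if force:
        url += "?force=true"
    req = urllib.request.Request(url, method="DELETE")
    req.add_header("X-Auth-Token", token)
    return req

req = delete_group_request("https://dfw.autoscale.api.rackspacecloud.com",
                           "851153", "f3af279b-10d7-4a26-aead-98c00bff260f",
                           "<token>", force=True)
# urllib.request.urlopen(req) would submit the request
```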

This table shows the possible response codes for this operation:

Response Code | Name | Description
204 | Success But No Content | The delete scaling group request succeeded.
400 | InvalidQueryArgument | The "force" query argument value is invalid. It must be "true"; any other value is invalid. If there are servers in the group, only "true" succeeds. If there are no servers in the group, both "true" and no value succeed.
401 | InvalidCredentials | The supplied X-Auth-Token is invalid.
403 | Forbidden | The user does not have permission to perform the operation; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API nodes also use this status code for other conditions.
403 | GroupNotEmptyError | The scaling group cannot be deleted because it has servers in it. Use the "force=true" query argument to force delete the group.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
500 | InternalError | An internal application error occurred; please file a bug report.

This operation deletes and replaces a specified server in a scaling group.
If the group launch configuration specifies a draining_timeout value,
then the load balancer node associated with this server is put in DRAINING mode
for the specified number of seconds before the server is deleted.

You can delete and replace a server in a scaling group with a new server in that scaling group. By default, the specified server is deleted and replaced. The replacement server has the current launch configuration settings and a different IP address.

Note

The replace and purge parameters are optional for this method.

The replace parameter determines whether the server is replaced while it is being deleted.
If the parameter is not specified, the value defaults to replace=true.
Specify replace=false if you do not want the deleted server to be replaced.

The purge parameter determines whether the server is removed from the account.
If the parameter is not specified, the value defaults to purge=true.
Specify purge=false to leave the server on the account.
This setting is useful if you want to investigate the server image after deleting it.

Note

Deleting and replacing servers in a scaling group takes some time. The time required depends on
server type, size, and the complexity of the launch configuration settings for the replacement server.
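Because both query arguments default to true on the server side, a client only needs to emit them for the non-default behavior. A minimal sketch (the base endpoint is a placeholder):

```python
from urllib.parse import urlencode

BASE = "https://dfw.autoscale.api.rackspacecloud.com/v1.0"  # placeholder region

def delete_server_url(tenant_id, group_id, server_id,
                      replace=True, purge=True):
    """Build the DELETE URL for a server in a scaling group.

    replace and purge both default to true server-side, so the query
    arguments are emitted only when set to False.
    """
    params = {}
    if not replace:
        params["replace"] = "false"  # delete without launching a replacement
    if not purge:
        params["purge"] = "false"    # keep the deleted server on the account
    url = "{0}/{1}/groups/{2}/servers/{3}".format(
        BASE, tenant_id, group_id, server_id)
    if params:
        url += "?" + urlencode(params)
    return url
```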

This table shows the possible response codes for this operation:

Response Code | Name | Description
202 | Accepted | The request succeeded. No response body is returned.
401 | InvalidCredentials | The supplied X-Auth-Token is invalid.
403 | CannotDeleteServerBelowMinError | The server cannot be deleted without replacement because doing so would violate the configured "minEntities". Note that this error can occur only if the "replace=false" argument is used.
403 | Forbidden | The user does not have permission to perform the operation; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API nodes also use this status code for other conditions.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
404 | ServerNotFoundError | The specified server was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
500 | InternalError | An internal application error occurred; please file a bug report.

paused. Boolean. If paused=true, the group does not scale up or down. All
scheduled or API-generated policy operations are suspended, and convergence
is not triggered. When the group is paused, any POST requests to
converge or execute policy
operations return a 403 GroupPausedError response.
If paused=false, all group scaling and convergence operations resume, and
scheduled or API-generated policy executions are allowed.

pendingCapacity. Integer. Specifies the number of servers that are in a "building" state.

name. Specifies the name of the group.

active. Specifies an array of active servers in the group. This array includes the server ID, as well as other data.

activeCapacity. Integer. Specifies the number of active servers in the group.

desiredCapacity. Integer. Specifies the number of servers that are desired in the scaling group.

status. String. Indicates the scaling group status. If status=ACTIVE,
the scaling group is healthy and actively scaling up and down on request.
If status=ERROR, the scaling group cannot complete scaling operation
requests successfully, typically due to an unrecoverable error that requires
user attention.

errors. List of objects. If status=ERROR, this field contains
a list of JSON objects, each of which contains a message property
that describes the error in human-readable format.
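The fields above can be pulled out of a group-state response body with a short helper. Note that the top-level "group" wrapper is an assumption here, so the helper falls back to the bare object:

```python
import json

def summarize_group_state(body):
    """Summarize the capacity and status fields described above.

    `body` is the raw JSON response text. A top-level "group" wrapper
    is assumed, with a fallback to the bare object if absent.
    """
    doc = json.loads(body)
    state = doc.get("group", doc)
    return {
        "name": state.get("name"),
        "paused": state.get("paused"),
        "activeCapacity": state.get("activeCapacity"),
        "pendingCapacity": state.get("pendingCapacity"),
        "desiredCapacity": state.get("desiredCapacity"),
        "errored": state.get("status") == "ERROR",
    }
```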

This operation retrieves the current state of the specified scaling group. It describes the state of the group in terms of its current set of active entities, the number of pending entities, and the desired number of entities. The description is returned in the response body in JSON format.

This table shows the possible response codes for this operation:

Response Code | Name | Description
200 | OK | The request succeeded and the response describes the state of the specified scaling group.
401 | InvalidCredentials | The supplied X-Auth-Token is invalid.
403 | Forbidden | The user does not have permission to perform the operation; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API nodes also use this status code for other conditions.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
500 | InternalError | An internal application error occurred; please file a bug report.

This operation pauses the specified scaling group. When a group is paused,
no policy or convergence operations are allowed. Any convergence operations
in progress are stopped. Group configuration updates, such as min/max/cooldown, and
launch configuration updates, such as imageRef, can still run while a group is paused.
You can resume a paused group by submitting a resume request.

This operation does not take any data and does not return any data. If it
succeeds, a 204 response code is returned.

This table shows the possible response codes for this operation:

Response Code | Name | Description
204 | Success | The group was successfully paused.
401 | InvalidCredentials | The supplied X-Auth-Token is invalid.
403 | Forbidden | The user does not have permission to perform the operation; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API nodes also use this status code for other conditions.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
500 | InternalError | An internal application error occurred; please file a bug report.

This operation resumes the specified scaling group. When a group is resumed,
policy executions and convergence operations are allowed. The group state
contains "paused":false. You can pause a group by submitting a
pause request.

This operation does not take any data and does not return any data. If it
succeeds, a 204 response code is returned.

This table shows the possible response codes for this operation:

Response Code | Name | Description
204 | Success | The group was successfully resumed.
401 | InvalidCredentials | The supplied X-Auth-Token is invalid.
403 | Forbidden | The user does not have permission to perform the operation; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API nodes also use this status code for other conditions.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
500 | InternalError | An internal application error occurred; please file a bug report.

This operation triggers convergence for a specific scaling group. Convergence implies that Autoscale attempts to continuously converge to the desired state of the scaling group, instead of manipulating servers only once.
When the convergence process starts, it continues until the desired number of servers is in the ACTIVE state.

This operation does not take any data and does not return any data. If it succeeds, a 204 response code is returned.
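A sketch of the request this operation takes, built but not sent; the trailing /converge path segment and the base endpoint are assumptions here, not values confirmed by this document:

```python
import urllib.request

def converge_request(base_url, tenant_id, group_id, token):
    """Build (but do not send) the POST that triggers convergence.

    The call carries no body and returns 204 on success. The
    /converge path segment is an assumed endpoint layout.
    """
    url = "{0}/{1}/groups/{2}/converge".format(base_url, tenant_id, group_id)
    req = urllib.request.Request(url, data=b"", method="POST")
    req.add_header("X-Auth-Token", token)
    return req
```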

This table shows the possible response codes for this operation:

Response Code | Name | Description
204 | Success | Convergence has been successfully triggered.
401 | InvalidCredentials | The supplied X-Auth-Token is invalid.
403 | GroupPausedError | Convergence was not triggered because the group is paused.
403 | Forbidden | The user does not have permission to perform the operation; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API nodes also use this status code for other conditions.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
500 | InternalError | An internal application error occurred; please file a bug report.

This operation retrieves launch configuration details for a specified scaling group.

The details include from which image to create a server, which cloud load balancers to join the server to, which networks to add the server to, and other metadata.

The details appear in the response body in JSON format.

This table shows the possible response codes for this operation:

Response Code | Name | Description
200 | OK | The request succeeded and the response contains launch configuration details for the specified scaling group.
401 | InvalidCredentials | The supplied X-Auth-Token is invalid.
403 | Forbidden | The user does not have permission to perform the operation; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API nodes also use this status code for other conditions.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
500 | InternalError | An internal application error occurred; please file a bug report.

This operation updates an existing launch configuration for the specified scaling group.

To change the launch configuration, specify the new configuration in the request body in JSON format. Configuration elements include from which image to create a server, which load balancers to join the server to, which networks to add the server to, and other metadata. If the update is successful, no response body is returned.

The base64 encoding for
the "path" argument in
the "personality"
parameter is invalid.

400 | InvalidLaunchConfiguration | The content of the files in the "personality" parameter exceeds the maximum size limit allowed.
400 | InvalidLaunchConfiguration | The load balancer ID provided is invalid.
400 | InvalidLaunchConfiguration | The number of files in the "personality" parameter exceeds the maximum limit.
400 | ValidationError | The request body had valid JSON but with unexpected properties or values in it. Please note that many combinations of properties can cause this error.
401 | InvalidCredentials | The supplied X-Auth-Token is invalid.
403 | Forbidden | The user does not have permission to perform the operation; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API nodes also use this status code for other conditions.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
415 | UnsupportedMediaType | The request is refused because the content type of the request is not "application/json".
500 | InternalError | An internal application error occurred; please file a bug report.

The configuration used to create new servers in
the scaling group. You must specify the server
attribute, and you can also specify the optional
loadBalancers attribute. Most launch
configurations have both a server and a cloud load
balancer or RackConnectV3 load balancer pool
configured.

args.loadBalancers

Array
(Optional)

One or more load balancers to which to add
servers. All servers are added to these load
balancers with the IP addresses of their
ServiceNet network. All servers are enabled and
equally weighted. Any new servers that are not
connected to the ServiceNet network are not added
to any load balancers.

args.loadBalancers.[*].port

Integer
(Required)

The port number of the service (on the new
servers) to use for this particular load balancer.
In most cases, this port number is 80. Note that
when you use a RackConnectV3 load balancer pool
instead of a cloud load balancer, leave this
parameter empty.

args.loadBalancers.[*].loadBalancerId

String
(Required)

The ID of the cloud load balancer, or
RackConnectV3 load balancer pool, to which new
servers are added. For cloud load balancers set
the ID as an integer, for RackConnectV3 set the
UUID as a string. Note that when using
RackConnectV3, this value is supplied to you by
Rackspace Support after they configure your load
balancer pool.

args.draining_timeout

Integer
(Optional)

Specifies the number of seconds that the cloud
load balancer node associated with a server being
deleted is put in DRAINING mode before the node,
and then the server, is deleted. The value must be
between 30 and 3600, inclusive.

args.server

Object
(Required)

The attributes that Auto Scale uses to create a
new server. For more information, see Create
Servers
<http://docs.rackspace.com/servers/api/v2/cs-
devguide/content/CreateServers.html>. The
attributes that are specified for the server
entity will apply to all new servers in the
scaling group, including the server name.

args.server.flavorRef

String
(Required)

Specifies the flavor ID for the server. A flavor
is a resource configuration for a server. For more
information about available flavors, see the
Server flavors
<http://docs.rackspace.com/cas/api/v1.0/autoscale-
devguide/content/server-flavors.html> section.

args.server.imageRef

String
(Required)

The ID of the cloud server image from which new
server images will be created.

args.server.personality.[*].path

String
(Required)

The path to the file that contains data that is
injected into the file system of the new cloud
server image.

args.server.personality.[*].contents

String
(Required)

The content items that will be injected into the
file system of the new cloud server image.
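Personality file contents are transmitted base64-encoded, following the common server-personality convention; an invalid encoding is rejected with a 400 error. A small illustrative helper for building one entry:

```python
import base64

def personality_entry(path, text):
    """Build one personality file entry for a launch configuration.

    File contents are sent base64-encoded, per the server personality
    convention assumed here; the server decodes them on injection.
    """
    encoded = base64.b64encode(text.encode("utf-8")).decode("ascii")
    return {"path": path, "contents": encoded}
```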

This operation lists scaling policies that are available to a specified scaling group.

Each policy is described in terms of an ID, name, type, adjustment, cooldown time, and links. These descriptions are returned in the response body in JSON format.

This table shows the possible response codes for this operation:

Response Code | Name | Description
200 | OK | The request succeeded and the response contains a list of scaling policies for the specified scaling group.
400 | InvalidQueryArgument | The "limit" query argument value is not a valid integer.
401 | InvalidCredentials | The supplied X-Auth-Token is invalid.
403 | Forbidden | The user does not have permission to perform the operation; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API nodes also use this status code for other conditions.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
500 | InternalError | An internal application error occurred; please file a bug report.

{"policies":[{"args":{"cron":"23 * * * *"},"changePercent":-5.5,"cooldown":1800,"id":"5f26e16c-5fa7-4d4f-8e78-257ea711389f","links":[{"href":"https://dfw.autoscale.api.rackspacecloud.com/v1.0/676873/groups/605e13f6-1452-4588-b5da-ac6bb468c5bf/policies/5f26e16c-5fa7-4d4f-8e78-257ea711389f/","rel":"self"}],"name":"scale down by 5.5 percent at 11pm","type":"schedule"},{"args":{"at":"2013-12-05T03:12:00Z"},"changePercent":-5.5,"cooldown":1800,"id":"9f7c5801-6b25-4f5a-af07-4bb752e23d53","links":[{"href":"https://dfw.autoscale.api.rackspacecloud.com/v1.0/676873/groups/605e13f6-1452-4588-b5da-ac6bb468c5bf/policies/9f7c5801-6b25-4f5a-af07-4bb752e23d53/","rel":"self"}],"name":"scale down by 5.5 percent on the 5th","type":"schedule"},{"changePercent":-5.5,"cooldown":1800,"id":"eb0fe1bf-3428-4f34-afd9-a5ac36f60511","links":[{"href":"https://dfw.autoscale.api.rackspacecloud.com/v1.0/676873/groups/605e13f6-1452-4588-b5da-ac6bb468c5bf/policies/eb0fe1bf-3428-4f34-afd9-a5ac36f60511/","rel":"self"}],"name":"scale down by 5.5 percent","type":"webhook"},{"cooldown":1800,"desiredCapacity":5,"id":"2f45092a-fde7-4461-a67a-3519e0366cd6","links":[{"href":"https://dfw.autoscale.api.rackspacecloud.com/v1.0/676873/groups/605e13f6-1452-4588-b5da-ac6bb468c5bf/policies/2f45092a-fde7-4461-a67a-3519e0366cd6/","rel":"self"}],"name":"set group to 5 servers","type":"webhook"},{"change":1,"cooldown":1800,"id":"e36e6a43-2a7a-433c-918c-39fa45b75d12","links":[{"href":"https://dfw.autoscale.api.rackspacecloud.com/v1.0/676873/groups/605e13f6-1452-4588-b5da-ac6bb468c5bf/policies/e36e6a43-2a7a-433c-918c-39fa45b75d12/","rel":"self"}],"name":"scale up by one server","type":"webhook"}],"policies_links":[]}

This operation creates one or more scaling policies for a specified scaling group.

To create a policy, specify it in the request body in JSON format. Each description must include a name, type, adjustment, and cooldown time.

Use the JSON response to obtain information about the newly created policy or policies:

The response header points to the List Policies endpoint.

The response body provides an array of scaling policies.

The examples that are provided below show several methods for creating a scaling policy:

A request to create a policy based on desired capacity.

A request to create a policy based on incremental change.

A request to create a policy based on change percentage.

A request to create a policy based on change percentage scheduled daily, at a specific time of day.

A request to create a policy based on change percentage scheduled once, for a specific date and time.

A request to create multiple policies, followed by the matching response.

This table shows the possible response codes for this operation:

Response Code | Name | Description
201 | Created | The scaling policy has been created.
400 | InvalidJsonError | The request is refused because the body was invalid JSON.
400 | ValidationError | Both "at" and "cron" values for the "args" parameter are supplied. Only one such value is allowed.
400 | ValidationError | More than one of the "change", "changePercent", or "desiredCapacity" values are supplied. Only one such value is allowed.
400 | ValidationError | Neither "at" nor "cron" values for the "args" parameter are supplied, and this is a "schedule" type policy.
400 | ValidationError | None of the "change", "changePercent", or "desiredCapacity" values are supplied.
400 | ValidationError | The "args" parameter is not supplied, and this is a "schedule" type policy.
400 | ValidationError | The "at" value does not correspond to the "YYYY-MM-DDTHH:MM:SS.SSSS" format.
400 | ValidationError | The "cron" value is invalid. It either contains a seconds component or is an invalid cron expression.
400 | ValidationError | The request body had valid JSON but with unexpected properties or values in it. Please note that many combinations of properties can cause this error.
401 | InvalidCredentials | The supplied X-Auth-Token is invalid.
403 | Forbidden | The user does not have permission to perform the operation; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API nodes also use this status code for other conditions.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
415 | UnsupportedMediaType | The request is refused because the content type of the request is not "application/json".
422 | PoliciesOverLimitError | The user has reached the quota for scaling policies, currently 100.
500 | InternalError | An internal application error occurred; please file a bug report.

Additional configuration information for
policies of type schedule. This
parameter is not required for policies of
type webhook. This parameter must be
set to either at or cron, which
are mutually exclusive.

[*].args.cron

String
(Optional)

The time when the policy runs, as a cron entry.
For example, if this parameter is set to
1 0 * * *, the policy runs at one minute past
midnight (00:01) every day of the month,
and every day of the week. For more
information about cron, see
http://en.wikipedia.org/wiki/Cron

[*].args.at

String
(Optional)

The time when this policy runs. The time must be
formatted according to this service's date and
time format, including seconds; otherwise, a 400
error may be returned. The policy is triggered
within a 10-second range of the specified time,
so if you set the at time to
2013-05-19T08:07:08Z, it can be triggered anytime
between 08:07:08 and 08:07:18. This property is
mutually exclusive with the cron parameter.

[*].changePercent

Number
(Optional)

The percent change to make in the number
of servers in the scaling group. If this
number is positive, the number of servers
will increase by the given percentage. If
this parameter is set to a negative
number, the number of servers decreases by
the given percentage. The absolute change
in the number of servers will be rounded
to the nearest integer. This means that if
-X% of the current number of servers
translates to -0.5 or -0.25 or -0.75
servers, the actual number of servers that
will be shut down is 1. If X% of the
current number of servers translates to
1.2 or 1.5 or 1.7 servers, the actual
number of servers that will be launched is
2.

[*].cooldown

Number
(Required)

The cooldown period, in seconds, before
this particular scaling policy can be
executed again. The policy cooldown period
does not affect the global scaling group
cooldown. The minimum value for this
parameter is 0 seconds, the maximum value
is 86400 seconds (24 hrs).

[*].type

Enum
(Required)

The type of policy that will be executed. For
the current release, this value can be
either webhook or schedule.

[*].change

Integer
(Optional)

The change to make in the number of
servers in the scaling group. This
parameter must be an integer. If the value
is a positive integer, the number of
servers increases. If the value is a
negative integer, the number of servers
decreases.

[*].desiredCapacity

Integer
(Optional)

The desired server capacity of the scaling
group; that is, how many servers
should be in the scaling group. This value
should be in the scaling group. This value
must be an absolute number, greater than
or equal to zero. For example, if this
parameter is set to ten, executing the
policy brings the number of servers to
ten. The minimum allowed value is zero.
Note that maxEntities and minEntities for
the configured group take precedence over
this setting.

Example Create policy: JSON request

The following example shows how to create a webhook-based policy that sets the desired capacity to five servers, with a cooldown period of 1800 seconds:

[{"name":"set group to 5 servers","desiredCapacity":5,"cooldown":1800,"type":"webhook"}]

The following example shows a policy based on incremental change, scaling up by one server:

[{"name":"scale up by one server","change":1,"cooldown":1800,"type":"webhook"}]

The following example shows a policy based on change percentage, scaling down by 5.5 percent:

[{"name":"scale down by 5.5 percent","changePercent":-5.5,"cooldown":1800,"type":"webhook"}]

The following example shows a schedule-based policy that scales down by 5.5 percent once, at a specific date and time:

[{"name":"scale down by 5.5 percent on the 5th","changePercent":-5.5,"cooldown":1800,"type":"schedule","args":{"at":"2013-12-05T03:12:00Z"}}]

The following example shows a request that creates multiple policies at once, including a cron-scheduled policy:

[{"change":1,"cooldown":1800,"name":"scale up by one server","type":"webhook"},{"changePercent":-5.5,"cooldown":1800,"name":"scale down by 5.5 percent","type":"webhook"},{"cooldown":1800,"desiredCapacity":5,"name":"set group to 5 servers","type":"webhook"},{"args":{"cron":"23 * * * *"},"changePercent":-5.5,"cooldown":1800,"name":"scale down by 5.5 percent at 11pm","type":"schedule"},{"args":{"at":"2013-12-05T03:12:00Z"},"changePercent":-5.5,"cooldown":1800,"name":"scale down by 5.5 percent on the 5th","type":"schedule"}]

{"policies":[{"args":{"at":"2013-12-05T03:12:00Z"},"changePercent":-5.5,"cooldown":1800,"id":"9f7c5801-6b25-4f5a-af07-4bb752e23d53","links":[{"href":"https://dfw.autoscale.api.rackspacecloud.com/v1.0/676873/groups/605e13f6-1452-4588-b5da-ac6bb468c5bf/policies/9f7c5801-6b25-4f5a-af07-4bb752e23d53/","rel":"self"}],"name":"scale down by 5.5 percent on the 5th","type":"schedule"},{"cooldown":1800,"desiredCapacity":5,"id":"b0555a35-b2cb-4f0e-8743-d59e1621b980","links":[{"href":"https://dfw.autoscale.api.rackspacecloud.com/v1.0/676873/groups/605e13f6-1452-4588-b5da-ac6bb468c5bf/policies/b0555a35-b2cb-4f0e-8743-d59e1621b980/","rel":"self"}],"name":"set group to 5 servers","type":"webhook"},{"args":{"cron":"23 * * * *"},"changePercent":-5.5,"cooldown":1800,"id":"30707675-8e7c-4ea5-9358-c21648afcf29","links":[{"href":"https://dfw.autoscale.api.rackspacecloud.com/v1.0/676873/groups/605e13f6-1452-4588-b5da-ac6bb468c5bf/policies/30707675-8e7c-4ea5-9358-c21648afcf29/","rel":"self"}],"name":"scale down by 5.5 percent at 11pm","type":"schedule"},{"change":1,"cooldown":1800,"id":"1f3bdd08-7aae-4009-a3b7-49aa47fc0876","links":[{"href":"https://dfw.autoscale.api.rackspacecloud.com/v1.0/676873/groups/605e13f6-1452-4588-b5da-ac6bb468c5bf/policies/1f3bdd08-7aae-4009-a3b7-49aa47fc0876/","rel":"self"}],"name":"scale up by one server","type":"webhook"},{"changePercent":-5.5,"cooldown":1800,"id":"5afac18c-41e5-49d6-aba8-dec17c0d8ed7","links":[{"href":"https://dfw.autoscale.api.rackspacecloud.com/v1.0/676873/groups/605e13f6-1452-4588-b5da-ac6bb468c5bf/policies/5afac18c-41e5-49d6-aba8-dec17c0d8ed7/","rel":"self"}],"name":"scale down by 5.5 percent","type":"webhook"}]}
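The mutual-exclusion rules that the API enforces with 400 ValidationError responses can be checked client-side before sending the request. This is an illustrative helper, not part of the API:

```python
import json

# The three mutually exclusive adjustment properties.
ADJUSTMENTS = {"change", "changePercent", "desiredCapacity"}

def validate_policies(policies):
    """Check the exclusivity rules described above: exactly one
    adjustment per policy, and exactly one of args.at or args.cron
    for schedule-type policies."""
    for policy in policies:
        given = ADJUSTMENTS & set(policy)
        if len(given) != 1:
            raise ValueError("need exactly one of change/changePercent/"
                             "desiredCapacity, got: %s" % sorted(given))
        if policy.get("type") == "schedule":
            args = policy.get("args") or {}
            if ("at" in args) == ("cron" in args):
                raise ValueError("schedule policies need exactly one of "
                                 "args.at or args.cron")

def policies_body(policies):
    """Validate, then serialize the create-policies request body."""
    validate_policies(policies)
    return json.dumps(policies)
```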

This operation retrieves details for a specified scaling policy. Details include an ID, name, type, adjustment, cooldown time, and links.

The details appear in the response body in JSON format.

This table shows the possible response codes for this operation:

Response Code | Name | Description
200 | OK | The request succeeded and the response contains details about the specified scaling policy.
401 | InvalidCredentials | The supplied X-Auth-Token is invalid.
403 | Forbidden | The user does not have permission to perform the operation; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API nodes also use this status code for other conditions.
404 | NoSuchPolicyError | The requested scaling policy was not found in the specified scaling group.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
500 | InternalError | An internal application error occurred; please file a bug report.

Both "at" and "cron"
values for the "args"
parameter are supplied.
Only one such value is
allowed.

400 | ValidationError | More than one of the "change", "changePercent", or "desiredCapacity" values are supplied. Only one such value is allowed.
400 | ValidationError | Neither "at" nor "cron" values for the "args" parameter are supplied, and this is a "schedule" type policy.
400 | ValidationError | None of the "change", "changePercent", or "desiredCapacity" values are supplied.
400 | ValidationError | The "args" parameter is not supplied, and this is a "schedule" type policy.
400 | ValidationError | The "at" value does not correspond to the "YYYY-MM-DDTHH:MM:SS.SSSS" format.
400 | ValidationError | The "cron" value is invalid. It either contains a seconds component or is an invalid cron expression.
400 | ValidationError | The request body had valid JSON but with unexpected properties or values in it. Please note that many combinations of properties can cause this error.
401 | InvalidCredentials | The supplied X-Auth-Token is invalid.
403 | Forbidden | The user does not have permission to perform the operation; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API nodes also use this status code for other conditions.
404 | NoSuchPolicyError | The requested scaling policy was not found in the specified scaling group.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
415 | UnsupportedMediaType | The request is refused because the content type of the request is not "application/json".
500 | InternalError | An internal application error occurred; please file a bug report.

A name for the scaling policy. This name
must be unique for each scaling policy.

scalingPolicies.[*].args

Object
(Optional)

Additional configuration information for
policies of type "schedule." This
parameter is not required for policies of
type webhook. This parameter must be
set to either at or cron, which
are mutually exclusive.

scalingPolicies.[*].args.cron

String
(Optional)

The time when the policy runs, as a cron
entry. For example, if you set this
parameter to 1 0 * * *, the policy
runs at one minute past midnight (00:01)
every day of the month, and every day of
the week. For more information about cron,
see http://en.wikipedia.org/wiki/Cron.

scalingPolicies.[*].args.at

String
(Optional)

The time when this policy runs. The time
must be formatted according to this
service's date and time format, including
seconds; otherwise, a 400 error may be
returned. The policy is triggered within a
10-second range of the specified time, so
if you set the at time to
2013-05-19T08:07:08Z, it can be triggered
anytime between 08:07:08 and 08:07:18.
This property is mutually exclusive with
the cron parameter.

scalingPolicies.[*].changePercent

Number
(Optional)

The percent change to make in the number
of servers in the scaling group. If this
number is positive, the number of servers
increases by the given percentage. If it
is negative, the number of servers
decreases by the given percentage. The
absolute change in the number of servers
is always rounded up. For example, if
-X% of the current number of servers
translates to -0.5, -0.25, or -0.75
servers, one server is shut down. If X%
of the current number of servers
translates to 1.2, 1.5, or 1.7 servers,
two servers are launched.
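The rounding rule described above can be sketched in Python; servers_delta is a hypothetical helper for illustration, not part of the service:

```python
import math

def servers_delta(current: int, change_percent: float) -> int:
    """Absolute change implied by changePercent, rounded away from zero."""
    raw = current * change_percent / 100.0
    magnitude = math.ceil(abs(raw))      # 0.5 -> 1, 1.2 -> 2
    return magnitude if raw >= 0 else -magnitude

# 10 servers at -5% is -0.5 servers raw, so one server is shut down.
print(servers_delta(10, -5))    # -1
# 10 servers at +12% is 1.2 servers raw, so two servers are launched.
print(servers_delta(10, 12))    # 2
```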

scalingPolicies.[*].cooldown

Number
(Required)

The cooldown period, in seconds, before
this particular scaling policy can run
again. The policy cooldown period does not
affect the global scaling group cooldown.
The minimum value for this parameter is 0
seconds. The maximum value is 86400
seconds (24 hrs).

scalingPolicies.[*].type

Enum
(Required)

The type of policy that runs. Currently,
this value can be either webhook or
schedule.

scalingPolicies.[*].change

Integer
(Optional)

The change to make in the number of
servers in the scaling group. This
parameter must be an integer. If the value
is a positive integer, the number of
servers increases. If the value is a
negative integer, the number of servers
decreases.

scalingPolicies.[*].desiredCapacity

Integer
(Optional)

The desired server capacity of the
scaling group; that is, how many servers
should be in the scaling group. This
value must be an absolute number, greater
than or equal to zero. For example, if
this parameter is set to 10, executing
the policy brings the number of servers
to 10. Note that the group's configured
maxEntities and minEntities values take
precedence over this setting.
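The precedence of minEntities and maxEntities over desiredCapacity amounts to a clamp, sketched below (effective_capacity is an illustrative name, not an API call):

```python
def effective_capacity(desired: int, min_entities: int, max_entities: int) -> int:
    """minEntities and maxEntities take precedence over desiredCapacity."""
    return max(min_entities, min(desired, max_entities))

# A policy asking for 10 servers in a group capped at 6 yields 6.
print(effective_capacity(10, min_entities=2, max_entities=6))   # 6
# A policy asking for 0 servers in a group with a floor of 2 yields 2.
print(effective_capacity(0, min_entities=2, max_entities=6))    # 2
```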

Example Update policy: JSON request

{
    "change": 1,
    "cooldown": 1800,
    "name": "scale up by one server",
    "type": "webhook"
}

This operation deletes a specified scaling policy from the specified tenant.

If deletion is successful, no response body is returned.

This table shows the possible response codes for this operation:

Response Code | Name | Description

204 | Success But No Content | The delete scaling policy request succeeded.
401 | InvalidCredentials | The X-Auth-Token that the user supplied is invalid.
403 | Forbidden | The user does not have permission to perform the requested action; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API endpoints also use this status code for other purposes.
404 | NoSuchPolicyError | The requested scaling policy was not found in the specified scaling group.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
500 | InternalError | An error internal to the application has occurred; please file a bug report.

This table shows the possible response codes for the execute policy operation:

Response Code | Name | Description

202 | Accepted | The execute policy request was accepted. The actual execution may be delayed, but it will be attempted if no errors are returned. Use the "GET scaling group state" method to see whether the policy was executed.
400 | InvalidJsonError | The request is refused because the body was not valid JSON.
400 | ValidationError | The request body was valid JSON but contained unexpected properties or values. Many combinations of input can cause this error.
401 | InvalidCredentials | The X-Auth-Token that the user supplied is invalid.
403 | CannotExecutePolicyError | The policy did not run because a scaling policy or scaling group cooldown was still in effect.
403 | CannotExecutePolicyError | The policy did not run because applying the changes would not result in the addition or deletion of any servers.
403 | GroupPausedError | The policy did not run because the group is paused. You can resolve this error by resuming the group.
403 | Forbidden | The user does not have permission to perform the requested action; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API endpoints also use this status code for other purposes.
404 | NoSuchPolicyError | The requested scaling policy was not found in the specified scaling group.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
415 | UnsupportedMediaType | The request is refused because the content type of the request is not application/json.
500 | InternalError | An error internal to the application has occurred; please file a bug report.

This operation lists webhooks and their IDs for a specified scaling policy.

This data is returned in the response body in JSON format.

This table shows the possible response codes for this operation:

Response Code | Name | Description

200 | OK | The request succeeded, and the response contains a list of webhooks for the specified scaling policy.
400 | InvalidQueryArgument | Only pagination query arguments are valid in this request.
401 | InvalidCredentials | The X-Auth-Token that the user supplied is invalid.
403 | Forbidden | The user does not have permission to perform the requested action; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API endpoints also use this status code for other purposes.
404 | NoSuchPolicyError | The requested scaling policy was not found in the specified scaling group.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
500 | InternalError | An error internal to the application has occurred; please file a bug report.

{
    "webhooks": [
        {
            "id": "152054a3-e0ab-445b-941d-9f8e360c9eed",
            "links": [
                {
                    "href": "https://dfw.autoscale.api.rackspacecloud.com/v1.0/676873/groups/605e13f6-1452-4588-b5da-ac6bb468c5bf/policies/eb0fe1bf-3428-4f34-afd9-a5ac36f60511/webhooks/152054a3-e0ab-445b-941d-9f8e360c9eed/",
                    "rel": "self"
                },
                {
                    "href": "https://dfw.autoscale.api.rackspacecloud.com/v1.0/execute/1/0077882e9626d83ef30e1ca379c8654d86cd34df3cd49ac8da72174668315fe8/",
                    "rel": "capability"
                }
            ],
            "metadata": {
                "notes": "PagerDuty will fire this webhook"
            },
            "name": "PagerDuty"
        },
        {
            "id": "23037efb-53a9-4ae5-bc33-e89a56b501b6",
            "links": [
                {
                    "href": "https://dfw.autoscale.api.rackspacecloud.com/v1.0/676873/groups/605e13f6-1452-4588-b5da-ac6bb468c5bf/policies/eb0fe1bf-3428-4f34-afd9-a5ac36f60511/webhooks/23037efb-53a9-4ae5-bc33-e89a56b501b6/",
                    "rel": "self"
                },
                {
                    "href": "https://dfw.autoscale.api.rackspacecloud.com/v1.0/execute/1/4f767340574433927a26dc747253dad643d5d13ec7b66b764dcbf719b32302b9/",
                    "rel": "capability"
                }
            ],
            "metadata": {},
            "name": "Nagios"
        }
    ],
    "webhooks_links": []
}
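Each webhook in the response above carries a self link and an anonymous capability link (rel "capability"). A minimal Python sketch for pulling out the capability URLs; the abbreviated response_body stand-in and the capability_urls name are illustrations, not part of the API:

```python
import json

# Abbreviated stand-in for the list-webhooks response shown above.
response_body = '''{"webhooks": [
  {"id": "w1",
   "links": [{"href": "https://example/self/", "rel": "self"},
             {"href": "https://example/execute/", "rel": "capability"}],
   "name": "PagerDuty"}]}'''

data = json.loads(response_body)
# Map each webhook name to its capability URL, the anonymous endpoint
# that can be POSTed to in order to execute the policy.
capability_urls = {
    hook["name"]: next(link["href"] for link in hook["links"]
                       if link["rel"] == "capability")
    for hook in data["webhooks"]
}
print(capability_urls)   # {'PagerDuty': 'https://example/execute/'}
```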

This operation creates one or more webhooks for the specified scaling policy.

Each webhook must have a name. The webhook definitions are provided in the request body in JSON format. If the operation succeeds, the response body contains the IDs of, and links to, the newly created webhooks.

This table shows the possible response codes for this operation:

Response Code | Name | Description

201 | Created | The webhook has been created.
400 | InvalidJsonError | The request is refused because the body was not valid JSON.
400 | ValidationError | The request body was valid JSON but contained unexpected properties or values. Many combinations of input can cause this error.
401 | InvalidCredentials | The X-Auth-Token that the user supplied is invalid.
403 | Forbidden | The user does not have permission to perform the requested action; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API endpoints also use this status code for other purposes.
404 | NoSuchPolicyError | The requested scaling policy was not found in the specified scaling group.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
415 | UnsupportedMediaType | The request is refused because the content type of the request is not application/json.
422 | WebhookOverLimitsError | The user has reached their quota for webhooks, currently 25.
500 | InternalError | An error internal to the application has occurred; please file a bug report.

{
    "webhooks": [
        {
            "id": "152054a3-e0ab-445b-941d-9f8e360c9eed",
            "links": [
                {
                    "href": "https://dfw.autoscale.api.rackspacecloud.com/v1.0/676873/groups/605e13f6-1452-4588-b5da-ac6bb468c5bf/policies/eb0fe1bf-3428-4f34-afd9-a5ac36f60511/webhooks/152054a3-e0ab-445b-941d-9f8e360c9eed/",
                    "rel": "self"
                },
                {
                    "href": "https://dfw.autoscale.api.rackspacecloud.com/v1.0/execute/1/0077882e9626d83ef30e1ca379c8654d86cd34df3cd49ac8da72174668315fe8/",
                    "rel": "capability"
                }
            ],
            "metadata": {
                "notes": "PagerDuty will fire this webhook"
            },
            "name": "PagerDuty"
        },
        {
            "id": "23037efb-53a9-4ae5-bc33-e89a56b501b6",
            "links": [
                {
                    "href": "https://dfw.autoscale.api.rackspacecloud.com/v1.0/676873/groups/605e13f6-1452-4588-b5da-ac6bb468c5bf/policies/eb0fe1bf-3428-4f34-afd9-a5ac36f60511/webhooks/23037efb-53a9-4ae5-bc33-e89a56b501b6/",
                    "rel": "self"
                },
                {
                    "href": "https://dfw.autoscale.api.rackspacecloud.com/v1.0/execute/1/4f767340574433927a26dc747253dad643d5d13ec7b66b764dcbf719b32302b9/",
                    "rel": "capability"
                }
            ],
            "metadata": {},
            "name": "Nagios"
        }
    ]
}

This operation retrieves webhook details for a specified scaling policy.

The details appear in the response body in JSON format.

This table shows the possible response codes for this operation:

Response Code | Name | Description

200 | OK | The request succeeded, and the response contains details about the specified webhook.
401 | InvalidCredentials | The X-Auth-Token that the user supplied is invalid.
403 | Forbidden | The user does not have permission to perform the requested action; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API endpoints also use this status code for other purposes.
404 | NoSuchPolicyError | The requested scaling policy was not found in the specified scaling group.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
404 | NoSuchWebhookError | The specified webhook was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
500 | InternalError | An error internal to the application has occurred; please file a bug report.

This operation updates a webhook for a specified tenant and scaling policy.

If the specified webhook is not recognized, the change is ignored. If you submit a URL, the URL is ignored, but that does not invalidate the request. If the change is successful, no response body is returned.

Note

All Rackspace Auto Scale update (PUT) operations completely replace the configuration being updated. Empty values (for example, {}) in the update are accepted and overwrite previously specified parameters. New parameters can be specified. All create (POST) webhook parameters, even optional ones, are required for the update webhook operation, including the metadata parameter.
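Because PUT replaces the entire configuration, an update body must carry every field, not just the changed one. A minimal Python sketch of the safe pattern (the field values are hypothetical):

```python
import json

# Current webhook configuration, as previously retrieved with GET.
current = {"name": "PagerDuty", "metadata": {"notes": "fires on alert"}}

# Unsafe: sending only the changed field would silently erase metadata,
# because the PUT body replaces the whole configuration.
partial_update = {"name": "PagerDuty-primary"}

# Safe: copy the full configuration and override only what changes.
full_update = dict(current, name="PagerDuty-primary")

body = json.dumps(full_update)   # includes both name and metadata
```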

This table shows the possible response codes for this operation:

Response Code | Name | Description

204 | Success But No Content | The update webhook request succeeded.
400 | InvalidJsonError | The request is refused because the body was not valid JSON.
400 | ValidationError | The request body was valid JSON but contained unexpected properties or values. Many combinations of input can cause this error.
401 | InvalidCredentials | The X-Auth-Token that the user supplied is invalid.
403 | Forbidden | The user does not have permission to perform the requested action; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API endpoints also use this status code for other purposes.
404 | NoSuchPolicyError | The requested scaling policy was not found in the specified scaling group.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
404 | NoSuchWebhookError | The specified webhook was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
415 | UnsupportedMediaType | The request is refused because the content type of the request is not application/json.
422 | WebhookOverLimitsError | The user has reached their quota for webhooks, currently 25.
500 | InternalError | An error internal to the application has occurred; please file a bug report.

403 | Forbidden | The user does not have permission to perform the requested action; for example, a user with only the observer role attempted an action that requires the admin role. Note that some API endpoints also use this status code for other purposes.
404 | NoSuchPolicyError | The requested scaling policy was not found in the specified scaling group.
404 | NoSuchScalingGroupError | The specified scaling group was not found.
404 | NoSuchWebhookError | The specified webhook was not found.
405 | InvalidMethod | The method used is unavailable for the endpoint.
413 | RateLimitError | The user has surpassed their rate limit.
500 | InternalError | An error internal to the application has occurred; please file a bug report.

If an autoscaled server is removed from the load balancer manually, and that
server is supposed to be included based on the scaling group configuration,
Autoscale reverts the change and adds the server back to the configured cloud
load balancer. Note that Autoscale does not track whether the server is added
to any other cloud load balancer; it only ensures that the server is always
present in the configured one.

When Autoscale attempts to add a server to a cloud load balancer that is
missing or deleted, the scaling group status changes to ERROR.
In previous Autoscale releases, the server that couldn't be added was deleted.
In the current release, the server remains in the scaling group instead of
being deleted.

THE INFORMATION CONTAINED IN THE RACKSPACE DEVELOPER DOCUMENTATION IS INTENDED FOR
SOFTWARE DEVELOPERS INTERESTED IN DEVELOPING SERVICE MANAGEMENT APPLICATIONS USING
THE RACKSPACE APPLICATION PROGRAMMING INTERFACE (API). THE DOCUMENT IS FOR
INFORMATIONAL PURPOSES ONLY AND IS PROVIDED “AS IS.”

Except as set forth in Rackspace general terms and conditions, cloud terms of service
and/or other agreement you sign with Rackspace, Rackspace assumes no liability whatsoever,
and disclaims any express or implied warranty, relating to its services including, but
not limited to, the implied warranty of merchantability, fitness for a particular purpose,
and noninfringement.

Although part of the document explains how Rackspace services may work with third party
products, the information contained in the document is not designed to work with all
scenarios. Any use or changes to third party products and/or configurations should be
made at the discretion of your administrators and subject to the applicable terms and
conditions of such third party. Rackspace does not provide technical support for third
party products, other than specified in your hosting services or other agreement you
have with Rackspace and Rackspace accepts no responsibility for third-party products.

Rackspace cannot guarantee the accuracy of any information presented after the date of
publication.