
Using Placement Groups from the API

Placement Groups Overview

In this documentation, you will learn how to manage placement groups for Compute Instances by querying the Scaleway API.

Placement groups allow you to define how your instances are placed relative to each other on the underlying hardware.

Placement groups have two policy types. The first one, max_availability, ensures that compute instances belonging to the same group will not run on the same underlying hardware. The second one, low_latency, does the exact opposite: it brings compute instances closer together to achieve higher network throughput.

If you prefer managing your placement groups in a visual environment, discover our documentation about managing Placement Groups from the Scaleway console.

Placement Groups Principles

Placement groups work for all instance ranges in the same availability zone, without any architecture or type distinction. This means that GP1, DEV1, ARM64, or any future virtualized range can be part of the same placement group. However, placement groups do not work with Bare Metal servers.

A placement group is composed of three mandatory fields:

a name

a policy type

a policy mode

The name is a free text field; let us explain the other two in more detail.
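Concretely, a placement group can be represented as a JSON object with these three fields. The values below are illustrative placeholders, not output from a real API call:

```shell
# Sketch of a placement group description (illustrative values only)
cat <<'EOF'
{
  "name": "my-placement-group",
  "policy_type": "max_availability",
  "policy_mode": "enforced"
}
EOF
```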

Policy types

The policy type defines the placement behaviour of the underlying instances. It can be set to either low_latency or max_availability.

The low latency policy groups your instances on hardware as close together as possible. It limits network latency and allows for the highest network throughput between servers. At best, instances are placed on the same hypervisor.

The maximum availability policy spreads the instances across hypervisors as far apart as possible, limiting the impact of a hardware failure. With this policy, instances may be placed anywhere within the same availability zone.

Policy Modes

The policy mode selects the instance’s allocation behaviour if the placement constraint cannot be respected. Policy mode can be set to either optional or enforced.

When the policy mode is set to optional, the server is still allocated even if the placement policy cannot be respected. When the policy mode is set to enforced, failing to respect the placement policy results in the server not being allocated.

Checking a Group Status

When several instances are part of the same placement group, you can query the full group status and check the policy_respected field. This field indicates whether the selected policy is respected: it returns true if the policy is respected, false otherwise.
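Assuming the current Instance API layout (the endpoint path, zone, and response wrapper below are assumptions; replace the IDs and token with your own), the group status can be queried like this:

```shell
# Fetch a placement group and extract policy_respected.
# Endpoint layout and "fr-par-1" zone are assumptions; $SCW_SECRET_KEY
# and $GROUP_ID are placeholders you must set yourself.
curl -s -H "X-Auth-Token: $SCW_SECRET_KEY" \
  "https://api.scaleway.com/instance/v1/zones/fr-par-1/placement_groups/$GROUP_ID" \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["placement_group"]["policy_respected"])'
```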

To get the placement information for a single server, query its server object and check the placement_group field. Its policy_respected sub-field will likewise be true if the placement is respected, or false if it is not.
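Under the same assumptions about the API layout (placeholder zone, IDs, and token), a single server can be inspected as follows:

```shell
# Fetch a single server and inspect its placement group status.
# Endpoint layout and zone are assumptions; $SERVER_ID is a placeholder.
curl -s -H "X-Auth-Token: $SCW_SECRET_KEY" \
  "https://api.scaleway.com/instance/v1/zones/fr-par-1/servers/$SERVER_ID" \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["server"]["placement_group"]["policy_respected"])'
```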

Placement Groups in Practice

As a practical application, we will see how to set up two instances that must never run on the same hardware. To do so, we will create a max_availability type placement group with the enforced policy.

Creating a Placement Group

First, let us create the placement group with the appropriate policy_type and policy_mode fields:
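A minimal sketch of this call, assuming the current Instance API layout (the endpoint path and zone are assumptions; depending on your account, a project or organization identifier may also be required in the body):

```shell
# Create a placement group with max_availability / enforced.
# Endpoint layout and "fr-par-1" zone are assumptions;
# $SCW_SECRET_KEY is a placeholder for your API token.
curl -s -X POST \
  -H "X-Auth-Token: $SCW_SECRET_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "never-same-hardware",
    "policy_type": "max_availability",
    "policy_mode": "enforced"
  }' \
  "https://api.scaleway.com/instance/v1/zones/fr-par-1/placement_groups"
```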