Ganeti is virtualization cluster management software. You are expected
to be a system administrator familiar with your Linux distribution and
the Xen or KVM virtualization environments before using it.

The various components of Ganeti all have man pages and interactive
help. This manual, though, will help you get familiar with the system
by explaining the most common operations, grouped by related use.

After a terminology glossary and a section on the prerequisites needed
to use this manual, the rest of this document is divided into sections
for the different targets that a command affects: instance, nodes, etc.

A physical machine which is a member of a cluster. Nodes are the basic
cluster infrastructure, and they don’t need to be fault tolerant in
order to achieve high availability for instances.

Nodes can be added and removed (if they host no instances) at will from
the cluster. In an HA cluster and only with HA instances, the loss of any
single node will not cause disk data loss for any instance; of course,
a node crash will cause the crash of its primary instances.

A node belonging to a cluster can be in one of the following roles at a
given time:

master node, which is the node from which the cluster is controlled

master candidate node, only nodes in this role have the full cluster
configuration and knowledge, and only master candidates can become the
master node

regular node, which is the state in which most nodes will be on
bigger clusters (>20 nodes)

drained node, nodes in this state are functioning normally but they
cannot receive new instances; the intention is that nodes in this role
have some issue and they are being evacuated for hardware repairs

offline node, in which there is a record in the cluster
configuration about the node, but the daemons on the master node will
not talk to this node; any instances declared as having an offline
node as either primary or secondary will be flagged as an error in the
cluster verify operation

Depending on the role, each node will run a set of daemons:

the ganeti-noded daemon, which controls the manipulation of
this node’s hardware resources; it runs on all nodes which are in a
cluster

the ganeti-confd daemon (Ganeti 2.1+) which runs on all
nodes, but is only functional on master candidate nodes; this daemon
can be disabled at configuration time if you don’t need its
functionality

the ganeti-rapi daemon which runs on the master node and
offers an HTTP-based API for the cluster

the ganeti-masterd daemon which runs on the master node and
allows control of the cluster

Besides the node role, there are other node flags that influence its
behaviour:

the master_capable flag denotes whether the node can ever become a
master candidate; setting this to ‘no’ means that auto-promotion will
never make this node a master candidate; this flag can be useful for a
remote node that only runs local instances, and having it become a
master is impractical due to networking or other constraints

the vm_capable flag denotes whether the node can host instances or
not; for example, one might use a non-vm_capable node just as a master
candidate, for configuration backups; setting this flag to no
disallows placement of instances on this node, deactivates hypervisor
and related checks on it (e.g. bridge checks, LVM check, etc.), and
removes it from cluster capacity computations

A virtual machine which runs on a cluster. It can be a fault tolerant,
highly available entity.

An instance has various parameters, which are classified in three
categories: hypervisor-related parameters (called hvparams), general
parameters (called beparams) and per network-card parameters (called
nicparams). All these parameters can be modified either at instance
level or via defaults at cluster level.
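For example, a backend parameter can be changed for a single instance, or
its cluster-wide default adjusted (the values below are purely
illustrative):

$ gnt-instance modify -B maxmem=2G INSTANCE_NAME
$ gnt-cluster modify -B maxmem=2G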

There are multiple options for the storage provided to an instance; while
the instance sees the same virtual drive in all cases, the node-level
configuration varies between them.

There are several disk templates you can choose from:

diskless

The instance has no disks. Only used for special purpose operating
systems or for testing.

file*

The instance will use plain files as backend for its disks. No
redundancy is provided, and this is somewhat more difficult to
configure for high performance.

sharedfile*

The instance will use plain files as backend, but Ganeti assumes that
those files will be available and in sync automatically on all nodes.
This allows live migration and failover of instances using this
method.

plain

The instance will use LVM devices as backend for its disks. No
redundancy is provided.

drbd

Note

This is only valid for multi-node clusters using DRBD 8.0+

A mirror is set between the local node and a remote one, which must be
specified with the second value of the --node option. Use this option
to obtain a highly available instance that can be failed over to a
remote node should the primary one fail.

Note

Ganeti does not support DRBD stacked devices:
DRBD stacked setup is not fully symmetric and as such it is
not working with live migration.

rbd

The instance will use Volumes inside a RADOS cluster as backend for its
disks. It will access them using the RADOS block device (RBD).

gluster*

The instance will use a Gluster volume for instance storage. Disk
images will be stored in the top-level ganeti/ directory of the
volume. This directory will be created automatically for you.

Disk templates marked with an asterisk require Ganeti to access the
file system. Ganeti will refuse to do so unless you whitelist the
relevant paths in the file storage paths configuration which, with the
default configure-time paths, is located in
/etc/ganeti/file-storage-paths.

The default paths used by Ganeti are:

Disk template   Default path
file            /srv/ganeti/file-storage
sharedfile      /srv/ganeti/shared-file-storage
gluster         /var/run/ganeti/gluster

Those paths can be changed at gnt-cluster init time. See
gnt-cluster(8) for details.
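For illustration, the whitelist file simply contains one allowed directory
per line, so with the default paths it might look like this (assuming both
file and shared file storage are in use):

/srv/ganeti/file-storage
/srv/ganeti/shared-file-storage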

A framework for using external (user-provided) scripts to compute the
placement of instances on the cluster nodes. This eliminates the need to
manually specify nodes in instance add, instance moves, node evacuate,
etc.

In order for Ganeti to be able to use these scripts, they must be placed
in the iallocator directory (usually lib/ganeti/iallocators under
the installation prefix, e.g. /usr/local).

Tags are short strings that can be attached either to the cluster itself,
or to nodes or instances. They are useful as a very simplistic
information store for helping with cluster administration, for example
by attaching owner information to each instance after it’s created:
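$ gnt-instance add-tags INSTANCE_NAME owner:user1

(the owner:user1 tag is only an illustrative value; any string within the
allowed tag character set works)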

While not directly visible to an end-user, it’s useful to know that a
basic cluster operation (e.g. starting an instance) is represented
internally by Ganeti as an OpCode (an abbreviation of operation
code). These OpCodes are executed as part of a Job. The OpCodes in a
single Job are processed serially by Ganeti, but different Jobs will be
processed (depending on resource availability) in parallel. They will
not be executed in the submission order, but depending on resource
availability, locks and (starting with Ganeti 2.3) priority. An earlier
job may have to wait for a lock while a newer job doesn’t need any locks
and can be executed right away. Operations requiring a certain order
need to be submitted as a single job, or the client must submit one job
at a time and wait for it to finish before continuing.

For example, shutting down the entire cluster can be done by running the
command gnt-instance shutdown --all, which will submit for each
instance a separate job containing the “shutdown instance” OpCode.

The add operation might seem complex due to the many parameters it
accepts, but once you have understood the (few) required parameters and
the customisation capabilities you will see it is an easy operation.

The add operation requires at minimum five parameters:

the OS for the instance

the disk template

the disk count and size

the node specification or alternatively the iallocator to use

and finally the instance name

The OS for the instance must be visible in the output of the command
gnt-os list and specifies which guest OS to install on the instance.

The disk template specifies what kind of storage to use as backend for
the (virtual) disks presented to the instance; note that for instances
with multiple virtual disks, they all must be of the same type.

The node(s) on which the instance will run can be given either manually,
via the -n option, or computed automatically by Ganeti, if you have
installed any iallocator script.
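A minimal example invocation, with illustrative node, OS and instance
names, could therefore look like this:

$ gnt-instance add -t drbd --disk 0:size=10G -o debootstrap \
  -n node1.example.com:node2.example.com instance1.example.com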

Instances are automatically started at instance creation time. To
manually start one which is currently stopped you can run:

$ gnt-instance startup INSTANCE_NAME

Ganeti will start an instance with up to its maximum instance memory. If
not enough memory is available Ganeti will use all the available memory
down to the instance minimum memory. If not even that amount of memory
is free Ganeti will refuse to start the instance.

Note that this will not work when an instance is in the permanently
stopped offline state. In this case, you will first have to
put it back to online mode by running:

$ gnt-instance modify --online INSTANCE_NAME

The command to stop the running instance is:

$ gnt-instance shutdown INSTANCE_NAME

If you want to shut the instance down more permanently, so that it
does not require dynamically allocated resources (memory and vcpus),
after shutting down an instance, execute the following:

$ gnt-instance modify --offline INSTANCE_NAME

Warning

Do not use the Xen or KVM commands directly to stop
instances. If you run for example xm shutdown or xm destroy
on an instance, Ganeti will automatically restart it (via
the ganeti-watcher(8) command which is launched via cron).

Instances can also be shut down by the user from within the instance, in
which case they will be marked accordingly and the
ganeti-watcher(8) will not restart them. See
gnt-cluster(8) for details.

There are two ways to get information about instances: listing
instances, which does a tabular output containing a given set of fields
about each instance, and querying detailed information about a set of
instances.

The command to see all the instances configured and their status is:

$ gnt-instance list

The command can return a custom set of information when using the -o
option (as always, check the manpage for a detailed specification). Each
instance will be represented on a line, thus making it easy to parse
this output via the usual shell utilities (grep, sed, etc.).

To get more detailed information about an instance, you can run:

$ gnt-instance info INSTANCE

which will give a multi-line block of information about the instance,
its hardware resources (especially its disks and their redundancy
status), etc. This is harder to parse and is more expensive than the
list operation, but returns much more detailed information.

Ganeti will always make sure an instance has a value between its maximum
and its minimum memory available as runtime memory. As of version 2.6
Ganeti will only choose a size different than the maximum size when
starting up, failing over, or migrating an instance on a node with less
than the maximum memory available. It won’t resize other instances in
order to free up space for an instance.

If you find that you need more free memory on a node, any instance can be
manually resized without downtime, with the command:

$ gnt-instance modify -m SIZE INSTANCE_NAME

The same command can also be used to increase the memory available on an
instance, provided that enough free memory is available on its node, and
the specified size is not larger than the maximum memory size the
instance had when it was first booted (an instance will be unable to see
new memory above the maximum that was specified to the hypervisor at its
boot time; if it needs to grow further, a reboot becomes necessary).
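For example, to change the memory limits themselves (rather than the
runtime memory), with illustrative values:

$ gnt-instance modify -B minmem=512M,maxmem=2G INSTANCE_NAME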

You can create a snapshot of an instance disk and its Ganeti
configuration, which you can then back up, or import into another
cluster. The way to export an instance is:

$ gnt-backup export -n TARGET_NODE INSTANCE_NAME

The target node can be any node in the cluster with enough space under
/srv/ganeti to hold the instance image. Use the --noshutdown
option to snapshot an instance without rebooting it. Note that Ganeti
only keeps one snapshot for an instance - any previous snapshot of the
same instance existing cluster-wide under /srv/ganeti will be
removed by this operation: if you want to keep them, you need to move
them out of the Ganeti exports directory.

Importing an instance is similar to creating a new one, but additionally
one must specify the location of the snapshot. The command is:
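$ gnt-backup import -n TARGET_NODE --src-node=NODE --src-dir=DIR INSTANCE_NAME

(a sketch; check gnt-backup(8) for the exact options supported by your
version)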

By default, parameters will be read from the export information, but you
can of course pass them in via the command line - most of the options
available for the command gnt-instance add are supported here
too.

There is a possibility to import a foreign instance whose disk data is
already stored as LVM volumes without copying it: the disk
adoption mode.

For this, ensure that the original, non-managed instance is stopped,
then create a Ganeti instance in the usual way, except that instead of
passing the disk information you specify the current volumes:
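A sketch of such an invocation (all names are placeholders):

$ gnt-instance add -t plain -n HOME_NODE -o OS_NAME \
  --disk 0:adopt=LV_NAME INSTANCE_NAME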

This will take over the given logical volumes, rename them to the Ganeti
standard (UUID-based), and start the instance directly without installing
an OS on them. If you configure the hypervisor similarly to the
non-managed configuration that the instance had, the transition should
be seamless for the instance. For more than one disk, just pass another
disk parameter (e.g. --disk 1:adopt=...).

First, you can use a kernel from the node, by setting the hypervisor
parameters as such:

kernel_path to a valid file on the node (and appropriately
initrd_path)

kernel_args optionally set to a valid Linux setting (e.g. ro)

root_path to a valid setting (e.g. /dev/xvda1)

bootloader_path and bootloader_args to empty

Alternatively, you can delegate the kernel management to instances, and
use either pvgrub or the deprecated pygrub. For this, you must
install the kernels and initrds in the instance and create a valid GRUB
v1 configuration file.

For pvgrub (new in version 2.4.2), you need to set:

kernel_path to point to the pvgrub loader present on the node
(e.g. /usr/lib/xen/boot/pv-grub-x86_32.gz)

kernel_args to the path to the GRUB config file, relative to the
instance (e.g. (hd0,0)/grub/menu.lst)

root_path must be empty

bootloader_path and bootloader_args to empty
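Assuming illustrative paths, these hypervisor parameters could be set
along these lines:

$ gnt-instance modify -H kernel_path=/usr/lib/xen/boot/pv-grub-x86_32.gz INSTANCE_NAME
$ gnt-instance modify -H kernel_args="(hd0,0)/grub/menu.lst" INSTANCE_NAME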

While pygrub is deprecated, here is how you can configure it:

bootloader_path to the pygrub binary (e.g. /usr/bin/pygrub)

the other settings are not important

More information can be found in the Xen wiki pages for pvgrub and pygrub.

There are three ways to exchange an instance’s primary and secondary
nodes; the right one to choose depends on how the instance has been
created and the status of its current primary node. See
Restoring redundancy for DRBD-based instances for information on changing the secondary
node. Note that it’s only possible to change the primary node to the
secondary and vice-versa; a direct change of the primary node with a
third node, while keeping the current secondary is not possible in a
single step, only via multiple operations as detailed in
Instance relocation.

If an instance is built in highly available mode you can at any time
fail it over to its secondary node, even if the primary has somehow
failed and it’s not up anymore. Doing it is really easy, on the master
node you can just run:

$ gnt-instance failover INSTANCE_NAME

That’s it. After the command completes the secondary node is now the
primary, and vice-versa.

The instance will be started with an amount of memory between its
maxmem and its minmem value, depending on the free memory on its
target node, or the operation will fail if that’s not possible. See
Startup/shutdown for details.

If the instance’s disk template is of type rbd, then you can specify
the target node (which can be any node) explicitly, or specify an
iallocator plugin. If you omit both, the default iallocator will be
used to determine the target node:
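$ gnt-instance failover -n TARGET_NODE INSTANCE_NAME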

If an instance is built in highly available mode, is currently running,
and both its nodes are running fine, you can migrate it over to its
secondary node, without downtime. On the master node you need to run:

$ gnt-instance migrate INSTANCE_NAME

The current load on the instance and its memory size will influence how
long the migration will take. In any case, for both KVM and Xen
hypervisors, the migration will be transparent to the instance.

If the destination node has less memory than the instance’s current
runtime memory, but at least the instance’s minimum memory available,
Ganeti will automatically reduce the instance runtime memory before
migrating it, unless the --no-runtime-changes option is passed, in
which case the target node should have at least the instance’s current
runtime memory free.

If the instance’s disk template is of type rbd, then you can specify
the target node (which can be any node) explicitly, or specify an
iallocator plugin. If you omit both, the default iallocator will be
used to determine the target node:
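$ gnt-instance migrate -n TARGET_NODE INSTANCE_NAME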

If an instance has not been created as mirrored, then the only way to
change its primary node is to execute the move command:

$ gnt-instance move -n NEW_NODE INSTANCE

This has a few prerequisites:

the instance must be stopped

its current primary node must be on-line and healthy

the disks of the instance must not have any errors

Since this operation actually copies the data from the old node to the
new node, expect it to take time proportional to the size of the
instance’s disks and to the speed of both nodes’ I/O systems and networking.

Disk failures are a common cause of errors in any server
deployment. Ganeti offers protection from single-node failure if your
instances were created in HA mode, and it also offers ways to restore
redundancy after a failure.

It is important to note that for Ganeti to be able to do any disk
operation, the Linux machines on top of which Ganeti runs must be
consistent; for LVM, this means that the LVM commands must not return
failures; it is common that after a complete disk failure, any LVM
command aborts with an error similar to:
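$ vgs
  /dev/sdb1: read failed after 0 of 4096 at 0: Input/output error
  Couldn't find device with uuid '...'.
  Couldn't find all physical volumes for volume group xenvg.

(an illustrative failure; the device, uuid and volume group name will of
course differ on your cluster)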

Before restoring an instance’s disks to healthy status, you need to
fix the volume group used by Ganeti so that we can actually create and
manage the logical volumes. This is usually done in a multi-step
process:

first, if the disk is completely gone and LVM commands exit with
“Couldn’t find device with uuid…” then you need to run the command:

$ vgreduce --removemissing VOLUME_GROUP

after the above command, the LVM commands should execute
normally (warnings are normal, but the commands will not fail
completely).

if the failed disk is still visible in the output of the pvs
command, you need to deactivate it from allocations by running:

$ pvchange -x n /dev/DISK

At this point, the volume group should be consistent and any bad
physical volumes should no longer be available for allocation.

Since the process involves copying all data from the working node to the
target node, it will take a while, depending on the instance’s disk
size, node I/O system and network speed. But it is (barring any network
interruption) completely transparent for the instance.

For non-redundant instances, there isn’t a copy (except backups) from
which to re-create the disks. But it’s possible to at least re-create
empty disks, after which a reinstall can be run, via the recreate-disks
command:

$ gnt-instance recreate-disks INSTANCE

Note that this will fail if the disks already exist. The instance can
be assigned to new nodes automatically by specifying an iallocator
through the --iallocator option.

The conversion must be done while the instance is stopped, and
converting from plain to drbd template presents a small risk, especially
if the instance has multiple disks and/or if one node fails during the
conversion procedure. As such, it’s recommended (as always) to make
sure that downtime for manual recovery is acceptable and that the
instance has up-to-date backups.
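The conversion itself is done with the modify command; a sketch, assuming
a conversion from plain to drbd with a new secondary node:

$ gnt-instance modify -t drbd -n NEW_SECONDARY_NODE INSTANCE_NAME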

From an instance’s primary node you can have access to its disks. Never
ever mount the underlying logical volume manually on a fault tolerant
instance, or you will break replication and your data will be
inconsistent. The correct way to access an instance’s disks is to run
(on the master node, as usual) the command:

$ gnt-instance activate-disks INSTANCE

And then, on the primary node of the instance, access the device that
gets created. For example, you could mount the given disks, then edit
files on the filesystem, etc.

Note that with partitioned disks (as opposed to whole-disk filesystems),
you will need to use a tool like kpartx(8):
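A sketch of the workflow on the primary node (device names are
placeholders, and the exact /dev/mapper names depend on your volume group
and disk names):

$ kpartx -a /dev/DISK
$ mount /dev/mapper/DISKp1 /mnt
# ... work on the filesystem ...
$ umount /mnt
$ kpartx -d /dev/DISK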

After you’ve finished you can deactivate them with the deactivate-disks
command, which works in the same way:

$ gnt-instance deactivate-disks INSTANCE

Note that if any process started by you is still using the disks, the
above command will error out, and you must clean up and ensure that
the above command runs successfully before you start the instance,
otherwise the instance will suffer corruption.

By default, this does the equivalent of shutting down and then starting
the instance, but it accepts parameters to perform a soft-reboot (via
the hypervisor), a hard reboot (hypervisor shutdown and then startup) or
a full one (the default, which also de-configures and then configures
again the disks of the instance).

While it is not possible to move an instance from nodes (A,B) to
nodes (C,D) in a single move, it is possible to do so in a few
steps:

# instance is located on A, B
$ gnt-instance replace-disks -n nodeC instance1
# instance has moved from (A, B) to (A, C)
# we now flip the primary/secondary nodes
$ gnt-instance migrate instance1
# instance lives on (C, A)
# we can then change A to D via:
$ gnt-instance replace-disks -n nodeD instance1

Which brings it into the final configuration of (C,D). Note that we
needed to do two replace-disks operations (two copies of the instance
disks), because we needed to get rid of both the original nodes (A and
B).

All the aforementioned steps ensure NIC configuration from the Ganeti
perspective. Of course this has nothing to do with how the instance will
eventually obtain the desired connectivity (IPv4, IPv6, default routes,
DNS info, etc.) or where its IP will resolve. That functionality is
managed by external components.

Let’s assume that the VM will need to obtain a dynamic IP via DHCP, get a SLAAC
address, and use DHCPv6 for other configuration information (in case RFC-6106
is not supported by the client, e.g. Windows). This means that the following
external services are needed:

A DHCP server

An IPv6 router sending Router Advertisements

A DHCPv6 server exporting DNS info

A dynamic DNS server

These components must be configured dynamically and on a per NIC basis.
The way to do this is by using custom kvm-ifup scripts and hooks.

The snf-network package [1,3] includes custom scripts that will provide the
aforementioned functionality. kvm-vif-bridge and vif-custom are
alternatives to kvm-ifup and vif-ganeti that take into account all the
network info being exported. Their actions depend on network tags. Specifically:

dns: will update an external DDNS server (nsupdate on a bind server)

ip-less-routed: will set up routes, rules and proxy ARP
This setup assumes a pre-existing routing table along with some local
configuration and provides connectivity to instances via an external
gateway/router without requiring nodes to have an IP inside this network.

private-filtered: will set up ebtables rules to ensure L2 isolation on a
common bridge. Only packets with the same MAC prefix will be forwarded to the
corresponding virtual interface.

snf-network works with nfdhcpd [2,3]: a custom user-space DHCP
server based on NFQUEUE. Currently, nfdhcpd replies to BOOTP/DHCP requests
originating from a tap or a bridge. Additionally, in case of a routed
setup, it provides a ra-stateless configuration by responding to router
and neighbour solicitations along with DHCPv6 requests for DNS options.
Its database is dynamically updated via inotify from text files inside a
local directory (snf-network just adds a per-NIC binding file with all
relevant info if the corresponding network tag is found). Still, we need
to mangle all these packets and send them to the corresponding NFQUEUE.
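For example, IPv4 DHCP requests could be redirected to nfdhcpd’s queue
with a mangle rule along these lines (the queue number is illustrative and
must match nfdhcpd’s configuration):

$ iptables -t mangle -A PREROUTING -p udp --dport 67 -j NFQUEUE --queue-num 42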

Note that the cluster requires that at any point in time, a certain
number of nodes are master candidates, so changing from master candidate
to other roles might fail. It is recommended to either force the
operation (via the --force option) or first change the number of
master candidates in the cluster - see Standard operations.
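For example, one could first enlarge the candidate pool and then demote a
node (the pool size is illustrative):

$ gnt-cluster modify --candidate-pool-size=10
$ gnt-node modify -C no NODE_NAME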

For this step, you can use either individual instance move
commands (as seen in Changing the primary node) or the bulk
per-node versions; these are:

$ gnt-node migrate NODE
$ gnt-node evacuate -s NODE

Note that the instance “move” command doesn’t currently have a node
equivalent.

Both these commands, or the equivalent per-instance command, will make
this node the secondary node for the respective instances, whereas their
current secondary node will become primary. Note that it is not possible
to change in one step the primary node to another node as primary, while
keeping the same secondary node.

When using LVM (either standalone or with DRBD), it can become tedious
to debug and fix it in case of errors. Furthermore, even file-based
storage can become complicated to handle manually on many hosts. Ganeti
provides a couple of commands to help with automation.

Beside the cluster initialisation command (which is detailed in the
Ganeti installation tutorial document) and the master failover command which is
explained under node handling, there are a couple of other cluster
operations available.

There are three commands that relate to global cluster checks. The first
one is verify which gives an overview on the cluster state,
highlighting any issues. In normal operation, this command should return
no ERROR messages:
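$ gnt-cluster verify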

If the verify command complains about file mismatches between the master
and other nodes, due to some node problems or if you manually modified
configuration files, you can force a push of the master configuration
to all other nodes via the redist-conf command:

$ gnt-cluster redist-conf

This command will be silent unless there are problems sending updates to
the other nodes.

It is possible to rename a cluster, or to change its IP address, via the
rename command. If only the IP has changed, you need to pass the
current name and Ganeti will realise its IP has changed:

$ gnt-cluster rename cluster.example.com
This will rename the cluster to 'cluster.example.com'. If
you are connected over the network to the cluster name, the operation
is very dangerous as the IP address will be removed from the node and
the change may not go through. Continue?
y/[n]/?: y
Failure: prerequisites not met for this operation:
Neither the name nor the IP address of the cluster has changed

In the above output, neither value has changed since the cluster
initialisation so the operation is not completed.

The ganeti-watcher(8) is a program, usually scheduled via
cron, that takes care of cluster maintenance operations (restarting
downed instances, activating down DRBD disks, etc.). However, during
maintenance and troubleshooting, this can get in your way; disabling it
by commenting out the cron job is not ideal, as it can be forgotten. Thus
there are some commands for automated control of the
watcher: pause, info and continue:
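$ gnt-cluster watcher pause 1h
$ gnt-cluster watcher info
$ gnt-cluster watcher continue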

The usual method to clean up a cluster is to run gnt-cluster destroy;
however, if the Ganeti installation is broken in any way, this will
not run.

It is possible in such a case to cleanup manually most if not all traces
of a cluster installation by following these steps on all of the nodes:

Shut down all instances. This depends on the virtualisation method
used (Xen, KVM, etc.):

Xen: run xm list and xm destroy on all the non-Domain-0
instances

KVM: kill all the KVM processes

chroot: kill all processes under the chroot mountpoints

If using DRBD, shut down all DRBD minors (which should at this point
no longer be in use by instances); on each node, run drbdsetup
/dev/drbdN down for each active DRBD minor.

If using LVM, clean up the Ganeti volume group; if only Ganeti created
logical volumes (and you are not sharing the volume group with the
OS, for example), then simply running lvremove -f xenvg (replace
‘xenvg’ with your volume group name) should do the required cleanup.

If using file-based storage, remove recursively all files and
directories under your file-storage directory: rm -rf
/srv/ganeti/file-storage/*, replacing the path with the correct path
for your cluster.

Stop the ganeti daemons (/etc/init.d/ganeti stop) and kill any
that remain alive (pgrep ganeti and pkill ganeti).

Remove the ganeti state directory (rm -rf /var/lib/ganeti/*),
replacing the path with the correct path for your installation.

If using RBD, run rbd unmap /dev/rbdN to unmap the RBD disks.
Then remove the RBD disk images used by Ganeti, identified by their
UUIDs (rbd rm uuid.rbd.diskN).

On the master node, remove the cluster from the master-netdev (usually
xen-br0 for bridged mode, otherwise eth0 or similar), by running
ip a del $clusterip/32 dev xen-br0 (use the correct cluster IP and
network device name).

At this point, the machines are ready for a cluster creation; in case
you want to remove Ganeti completely, you need to also undo some of the
SSH changes and log directories:

rm -rf /var/log/ganeti /srv/ganeti (replace with the correct
paths)

remove from /root/.ssh the keys that Ganeti added (check the
authorized_keys and id_dsa files)

regenerate the host’s SSH keys (check the OpenSSH startup scripts)

uninstall Ganeti

Otherwise, if you plan to re-create the cluster, you can just go ahead
and rerun gnt-cluster init.

Note that the set of characters present in a tag and the maximum tag
length are restricted. Currently the maximum length is 128 characters,
there can be at most 4096 tags per object, and the allowed character set
consists of alphanumeric characters plus .+*/:@-_.
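For example (the tag values are illustrative):

$ gnt-instance add-tags INSTANCE_NAME owner:user1
$ gnt-node add-tags NODE_NAME rack:4
$ gnt-cluster add-tags environment:production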

The above commands add three tags to an instance, to a node and to the
cluster. Note that the cluster command only takes tags as arguments,
whereas the node and instance commands first require the node or
instance name.

Tags can also be added from a file, via the --from=FILENAME
argument. The file is expected to contain one tag per line.

It is also possible to execute a global search on all the tags defined
in the cluster configuration, via a cluster command:

$ gnt-cluster search-tags REGEXP

The parameter expected is a regular expression (see
regex(7)). This will return all tags that match the search,
together with the object they are defined in (the names being shown in a
hierarchical kind of way):
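$ gnt-cluster search-tags owner
/cluster owner:admin
/instances/instance1.example.com owner:user1

(illustrative output; the exact tags and objects depend on your cluster)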

The tool harep can be used to automatically fix some problems that are
present in the cluster.

It is mainly meant to be regularly and automatically executed
as a cron job. This is evident from its design: when executed, it does
not immediately fix all the issues of the cluster’s instances, but
cycles each instance through a series of states, one at every harep
execution. Every state performs a step towards the resolution of the problem.
This process goes on until the instance is brought back to the healthy state,
or the tool realizes that it is not able to fix the instance, and
therefore marks it as in failure state.

By default, harep checks the status of the cluster but it is not allowed to
perform any modification. Modification must be explicitly allowed by an
appropriate use of tags. Tagging can be applied at various levels, and can
enable different kinds of autorepair, as hereafter described.

All the tags that authorize harep to perform modifications follow this
syntax:

ganeti:watcher:autorepair:<type>

where <type> indicates the kind of intervention that can be performed. Every
possible value of <type> includes at least all the authorization of the
previous one, plus its own. The possible values, in increasing order of
severity, are:

fix-storage allows a disk replacement or another operation that
fixes the instance backend storage without affecting the instance
itself. This can for example recover from a broken drbd secondary, but
risks data loss if something is wrong on the primary but the secondary
was somehow recoverable.

migrate allows an instance migration. This can recover from a
drained primary, but can cause an instance crash in some cases (bugs).

failover allows instance reboot on the secondary. This can recover
from an offline primary, but the instance will lose its running state.

reinstall allows disks to be recreated and an instance to be
reinstalled. This can recover from both the primary and the secondary
being offline, or from an offline primary in the case of non-redundant
instances. It causes data loss.

These autorepair tags can be applied to a cluster, a nodegroup or an instance,
and will act where they are applied and on everything in that entity’s
sub-tree (e.g. a tag applied to a nodegroup will apply to all the instances
contained in that nodegroup, but not to the rest of the cluster).
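For example, to allow migrations cluster-wide:

$ gnt-cluster add-tags ganeti:watcher:autorepair:migrate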

If there are multiple ganeti:watcher:autorepair:<type> tags in an
object (cluster, node group or instance), the least destructive tag
takes precedence. When multiplicity happens across objects, the nearest
tag wins. For example, if in a cluster with two instances, I1 and
I2, I1 has failover, and the cluster itself has both
fix-storage and reinstall, I1 will end up with failover
and I2 with fix-storage.

Sometimes it is useful to stop harep from performing its task temporarily,
without disrupting its configuration, that is, without removing the
authorization tags. For this purpose, suspend tags are provided.

Suspend tags can be added to the cluster, a nodegroup or instances, and
act on the entire entity’s sub-tree. No operation will be performed by
harep on the instances protected by a suspend tag. Their syntax is as follows:

ganeti:watcher:autorepair:suspend[:<timestamp>]

If there are multiple suspend tags in an object, the form without timestamp
takes precedence (permanent suspension); or, if all object tags have a
timestamp, the one with the highest timestamp.

Tags with a timestamp will be automatically removed when the time indicated by
the timestamp is passed. Indefinite suspension tags have to be removed manually.
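The outcome of each repair attempt is recorded in a result tag, which has
the following form (field names per the autorepair tag scheme described
above):

ganeti:watcher:autorepair:result:<type>:<id>:<timestamp>:<result>:<jobs>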

If this tag is present, a repair of type <type> has been performed on
the instance and has been completed by <timestamp>. The result is
either success, failure or enoperm, and <jobs> is a
+-separated list of jobs that were executed for this repair.

An enoperm result is an error state due to permission problems. It
is returned when the repair cannot proceed because it would require
performing an operation not allowed by the
ganeti:watcher:autorepair:<type> tag that defines the instance’s
autorepair permissions.

NB: if an instance repair ends up in a failure state, it will not be touched
again by harep until it has been manually fixed by the system administrator
and the ganeti:watcher:autorepair:result:failure:* tag has been manually
removed.

This is useful if you need to follow a job’s progress from multiple
terminals.

A job that has not yet started to run can be canceled:

$ gnt-job cancel 17810

But not one that has already started execution:

$ gnt-job cancel 17805
Job 17805 is no longer waiting in the queue

There are two queues for jobs: the current and the archive
queue. Jobs are initially submitted to the current queue, and they stay
in that queue until they have finished execution (either successfully or
not). At that point, they can be moved into the archive queue using e.g.
gnt-job autoarchive all. The ganeti-watcher script will do this
automatically 6 hours after a job is finished. The ganeti-cleaner
script will then remove the archived jobs from the archive directory
after three weeks.

Note that gnt-job list only shows jobs in the current queue.
Archived jobs can be viewed using gnt-job info <id>.

It is sometimes useful to be able to use a Ganeti instance as a Ganeti
node (part of another cluster, usually). One example scenario is two
small clusters, where we want to have an additional master candidate
that holds the cluster configuration and can be used for helping with
the master voting process.

However, these Ganeti instances should not host instances themselves, and
should not be considered in the normal capacity planning, evacuation
strategies, etc. In order to accomplish this, mark these nodes as
non-vm_capable:
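$ gnt-node modify --vm-capable=no NODE_NAME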

When this flag is set, the cluster will not do any operations that
relate to instances on such nodes, e.g. hypervisor operations,
disk-related operations, etc. Basically they will just keep the ssconf
files and, if they are master candidates, the full configuration.

If Ganeti is deployed in multi-site model, with each site being a node
group (so that instances are not relocated across the WAN by mistake),
it is conceivable that either the WAN latency is high or that some sites
have a lower reliability than others. In this case, it doesn’t make
sense to replicate the job information across all sites (or even outside
of a “central” node group), so it should be possible to restrict which
nodes can become master candidates via the auto-promotion algorithm.

Ganeti 2.4 introduces for this purpose a new master_capable flag,
which (when unset) prevents nodes from being marked as master
candidates, either manually or automatically.
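For example:

$ gnt-node modify --master-capable=no NODE_NAME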

Note that marking a node both not vm_capable and not
master_capable makes the node practically unusable from Ganeti’s
point of view. Hence these two flags are best used in complementary
roles: some nodes will only be master candidates (master_capable but
not vm_capable), and other nodes will only hold instances (vm_capable
but not master_capable).

Besides the usual gnt- and ganeti- commands which are provided
and installed in $prefix/sbin at install time, there are a couple of
other tools installed which are seldom used but can be helpful in some
cases.

This tool is used to exercise either the hardware of machines or
alternatively the Ganeti software. It is safe to run on an existing
cluster as long as you don’t pass it existing instance names.

The command will, by default, execute a comprehensive set of operations
against a list of instances, these being:

creation

disk replacement (for redundant instances)

failover and migration (for redundant instances)

move (for non-redundant instances)

disk growth

add disks, remove disk

add NICs, remove NICs

export and then import

rename

reboot

shutdown/startup

and finally removal of the test instances

Executing all these operations will test that the hardware performs
well: the creation, disk replace, disk add and disk growth will exercise
the storage and network; the migrate command will test the memory of the
systems. Depending on the passed options, it can also test that the
instance OS definitions properly execute the rename, import and
export operations.
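A typical invocation of the burnin tool, assuming the standard tools
directory and an illustrative OS name, might look like:

$ /usr/lib/ganeti/tools/burnin -o debootstrap -p instance{1..5}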

This tool takes the Ganeti configuration and outputs a “sanitized”
version, by randomizing or clearing:

DRBD secrets and cluster public key (always)

host names (optional)

IPs (optional)

OS names (optional)

LV names (optional, only useful for very old clusters which still have
instances whose LVs are based on the instance name)

By default, all optional items are activated except the LV name
randomization. When passing --no-randomization, which disables the
optional items (i.e. just the DRBD secrets and cluster public keys are
randomized), the resulting file can be used as a safety copy of the
cluster config - while not trivial, the layout of the cluster can be
recreated from it and if the instance disks have not been lost it
permits recovery from the loss of all master candidates.

Ganeti can either be run entirely as root, or with every daemon running as
its own specific user (if the parameters --with-user-prefix and/or
--with-group-prefix have been specified at ./configure-time).

In case split users are activated, they are required to exist on the system,
and they need to belong to the proper groups in order for the access
permissions to files and programs to be correct.

The users-setup tool, when run, takes care of setting up the proper
users and groups.

When invoked without parameters, the tool runs in interactive mode, showing the
list of actions it will perform and asking for confirmation before proceeding.

Providing the --yes-do-it parameter to the tool prevents the confirmation
from being asked, and the users and groups will be created immediately.
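Assuming the standard tools directory, a non-interactive run would then
look like:

$ /usr/lib/ganeti/tools/users-setup --yes-do-it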

Below is a list (which might not be up-to-date) of additional projects
that can be useful in a Ganeti deployment. They can be downloaded from
the project site (http://code.google.com/p/ganeti/) and the repositories
are also on the project git site (http://git.ganeti.org).

The ganeti-nbma software is designed to allow instances to live on a
separate, virtual network from the nodes, and in an environment where
nodes are not guaranteed to be able to reach each other via multicasting
or broadcasting. For more information see the README in the source
archive.

Before Ganeti version 2.5, this was a standalone project; since that
version it is integrated into the Ganeti codebase (see
Ganeti quick installation guide for instructions on how to enable it). If you run
an older Ganeti version, you will have to download and build it
separately.

For more information and installation instructions, see the README file
in the source archive.