VHACS (Virtualization, High Availability and Cluster Storage, pronounced vee-hacks) is a highly available cloud storage implementation running on Linux v2.6. VHACS combines at least 8 long-term OSS/Linux-based projects with a CLI management interface for controlling VHACS nodes, clouds, and vservers within the VHACS cluster.

The easiest way to try out VHACS and to get an idea of how the admin-level interface works is to use one of the available VHACS-VM Alpha images. Initially, using two VM images is the easiest way to test out the VHACS cloud.

Client

Fabric support

VHACS uses iSCSI on the server side of the cloud, so any client with an iSCSI initiator can take advantage of the VHACS server-side cloud. As work continues on LinuxIO, other fabrics and/or storage devices, such as FCoE and Fibre Channel, will become available for the VHACS cloud as well.
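
For example, a Linux client running the Open-iSCSI initiator could discover and log into a VHACS-exported target roughly as follows (the client hostname and the portal address 10.0.0.1 are placeholders for illustration, not values from an actual VHACS deployment):

client:~# iscsiadm -m discovery -t sendtargets -p 10.0.0.1
client:~# iscsiadm -m node --login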

Test and validation

The current test bed is a 2-node cluster configuration running on multi-socket, single-core x86_64 hardware, with 32 active VHACS clouds (both client and server) of 1 GB and 100 MB sizes. The latter size is used for multi-cloud ops, e.g.: vhacs storage -S yourVHACScloud01-4 would put those 4 clouds into STANDBY.
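
As an illustration (cloud names as in the example above; output omitted), a standby/reactivate cycle across the same four clouds would look like:

halfdome:~# vhacs storage -S yourVHACScloud01-4
halfdome:~# vhacs storage -l
halfdome:~# vhacs storage -A yourVHACScloud01-4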

Limitations

In order to scale to the number of cluster RAs required to monitor 32 cloud clusters, we decided to convert VHACS v0.6.0 from Heartbeat to OpenAIS. As of June 26, 2008, almost all major functionality is up and running with OpenAIS+Pacemaker.

We are also exporting DRBD's struct block_device directly via Target/IBLOCK, which means that each DRBD device is mapped 1:1 to an iSCSI TargetName+TargetPortalGroupTag tuple. Creating volumes on top of the DRBD block device and then exporting those from Target/IBLOCK is another option for increasing cloud density and reducing the total number of required kernel threads.
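
Conceptually, the 1:1 mapping looks like the following sketch (the DRBD minor numbers and IQNs are illustrative placeholders, not output from a real VHACS node):

/dev/drbd0 -> iqn.2003-01.org.linux-iscsi.halfdome:cloud01, TPGT 1
/dev/drbd1 -> iqn.2003-01.org.linux-iscsi.halfdome:cloud02, TPGT 1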

There are ~256 kernel threads for a 32-cloud cluster on a fully loaded node running both roles (see below), i.e., roughly 8 kernel threads per cloud. There are also 128 cluster RAs (4 per cloud) for this same multi-role, fully loaded VHACS cluster node.

For VHACS v1.0, an additional IFNAME, REPLICATION_IFNAME, will be defined for DRBD replication traffic between VHACS nodes.

Using different NICs for STORAGE_IFNAME and HEARTBEAT_IFNAME is supported in the current version of VHACS. In a basic example, this consists of having 2 NICs on each node in the VHACS cluster, each on a different local subnet or network range. Also, in the current release the STORAGE_IFNAME and HEARTBEAT_IFNAME values must be the same on both machines, e.g., using eth0 for STORAGE_IFNAME and eth1 for HEARTBEAT_IFNAME on both machines.
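
A minimal sketch of the relevant settings, assuming a shell-style VHACS configuration file (the exact file syntax and comments are assumptions for illustration):

# Current release: values must be identical on both machines.
STORAGE_IFNAME=eth0     # iSCSI/storage traffic
HEARTBEAT_IFNAME=eth1   # cluster heartbeat traffic, on a separate subnet
# Planned for VHACS v1.0 (see above):
# REPLICATION_IFNAME=eth2   # DRBD replication traffic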

vhacs cluster:

halfdome:~# vhacs cluster
usage:
For full description, try:
vhacs cluster -h|--help
With all options, use -V LEVEL to increase verbosity
vhacs cluster -c|--check
vhacs cluster -I|--init [NODES]
vhacs cluster -m|--monitor
vhacs cluster -M|--monitor1
vhacs cluster [NODES] -E|--exec COMMAND|-
vhacs cluster [NODES] -P|--exec COMMAND|-
syntax for NODES argument:
foobar just the node named foobar
foobar1,foobar2,foobar3
run the subcommand recursively for all listed nodes
foobar1-3 equivalent to foobar1,foobar2,foobar3
foobar1-3,foobar5
equivalent to foobar1,foobar2,foobar3,foobar5
ALL special node name that converts to the list of
all nodes in the heartbeat cluster the local node is
in, if any
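
For instance, to run a command across a range of nodes with -E|--exec (node names follow the foobar convention above; output omitted):

halfdome:~# vhacs cluster foobar1-3 -E uptime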

vhacs node:

halfdome:~# vhacs node
usage:
For full description, try:
vhacs node -h|--help
vhacs node -s|--setrole ROLES NODES
vhacs node -d|--delrole ROLES NODES
vhacs node -l|--list
vhacs node -i|--info NODES
vhacs node -S|--standby NODES
vhacs node -A|--active NODES
syntax for NODES argument:
foobar just the node named foobar
foobar1,foobar2,foobar3
run the subcommand recursively for all listed nodes
foobar1-3 equivalent to foobar1,foobar2,foobar3
foobar1-3,foobar5
equivalent to foobar1,foobar2,foobar3,foobar5
ALL special name that converts to the list of all nodes
syntax for ROLES argument:
vhost the node can mount remote storage and will run resources
like virtual machines off it
storage the node can host physical disk partitions that are
part of user-created storage
vhost,storage
the node can do both
ALL equivalent to vhost,storage
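
For example, to give two nodes both roles and then inspect them (node names follow the foobar convention above; output omitted):

halfdome:~# vhacs node -s vhost,storage foobar1-2
halfdome:~# vhacs node -i foobar1-2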

vhacs storage:

halfdome:~# vhacs storage
usage:
For full description, try:
vhacs storage -h|--help
With all options, use -V LEVEL to increase verbosity
vhacs storage -c|--create STORAGES -s|--size SIZE [-n|--nodes DRBD_NODES]
vhacs storage -D|--destroy STORAGES
vhacs storage -u|--unfail STORAGES
vhacs storage -r|--restart STORAGES
vhacs storage -l|--list
vhacs storage -L|--listbig
vhacs storage -m|--monitor
vhacs storage -M|--monitor1
vhacs storage -i|--info STORAGES
vhacs storage -S|--standby STORAGES
vhacs storage -A|--active STORAGES
vhacs storage -p|--prefers NODES STORAGES
syntax for STORAGES argument:
foobar just the storage named foobar
foobar1,foobar2,foobar3
run the subcommand recursively for all listed storages
foobar1-3 equivalent to foobar1,foobar2,foobar3
foobar1-3,foobar5
equivalent to foobar1,foobar2,foobar3,foobar5
ALL special name that converts to the list of all storages
syntax for NODES argument:
nodefoobar migrate the storage mount to node nodefoobar if possible and
assign scores so that this is the preferred node in the
future for mounting storages.
node1-2,node4 try to migrate storage to node1, then node2, then node4
and assign scores so that the nodes will be preferred
in that order for future migrations
syntax for DRBD_NODES argument:
Same as above, but this is used when creating a storage to set your
preferred nodes to be used for hosting the disk backend.
The ALL keyword here has no special meaning.
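
Putting the STORAGES and DRBD_NODES syntax together, a create/list/destroy cycle might look like the following sketch (the storage name, the SIZE format, and the node names are illustrative assumptions, not verified VHACS input formats):

halfdome:~# vhacs storage -c cloud01 -s 1G -n foobar1-2
halfdome:~# vhacs storage -l
halfdome:~# vhacs storage -D cloud01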