This article describes how to configure a system managed with Bright Cluster Manager for bursting to EC2 using so-called "third-party" connectivity methods (e.g. hardware VPN, Amazon Direct Connect). This is an alternative way of performing cloudbursting to VPCs.

The default way to do cloudbursting -- described in great detail in the Administrator Manual -- is to use OpenVPN over the internet. Using OpenVPN means that no hardware VPN or Amazon Direct Connect is required.

Cloudbursting in Bright Cluster Manager defaults to establishing an OpenVPN connection between the headnode and the cloud director. This connection is used as a secure communication channel from the headnode to EC2. In addition, OpenVPN connections are established between the cloud compute nodes and the cloud director node. Those connections are used for managing the cloud compute nodes (but not for data transfer between jobs).

Making use of the EC2-VPC platform gives users additional ways of establishing a connection with their resources in EC2. Besides the existing over-the-Internet TCP/IP connection, it is possible to establish a hardware VPN connection with an Amazon VPC gateway, or to have a dedicated communication channel leading to the VPC (Amazon Direct Connect).

When third-party connectivity methods are used, there is typically no need to run an OpenVPN connection on top of them between the headnode and the cloud director. Likewise, with VPCs there is typically no need for the OpenVPN communication between cloud compute nodes and the cloud director, as the VPC subnet traffic is, unlike traffic inside EC2-Classic platform, isolated from other users.

What follows are instructions for setting up cloudbursting to EC2 with no OpenVPN set up, i.e. with no tunnel interfaces and no netmap network.

Prerequisites

Bright Cluster Manager 6.1, and CMDaemon binary version 17553, or higher. You can check whether you have these by running:

[headnode ~]# rpm -q cmdaemon
cmdaemon-6.1-17553_cm6.1.x86_64
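The build number can also be extracted and compared in a script. The following is an illustrative sketch only: the `cmdaemon_build` helper and the assumed version-string format `cmdaemon-6.1-<build>_cm6.1.<arch>` are conventions made up for this article, not part of Bright Cluster Manager.

```shell
#!/bin/bash
# Hypothetical helper (not part of Bright Cluster Manager): extract the
# CMDaemon build number, assuming the rpm version string has the form
# "cmdaemon-6.1-<build>_cm6.1.<arch>".
cmdaemon_build() {
    echo "$1" | sed -n 's/^cmdaemon-[0-9.]*-\([0-9]*\)_.*/\1/p'
}

# On a real system the argument would be "$(rpm -q cmdaemon)"; the sample
# string from the text is used here so the sketch is self-contained.
build=$(cmdaemon_build "cmdaemon-6.1-17553_cm6.1.x86_64")
if [ "$build" -ge 17553 ]; then
    echo "CMDaemon build $build meets the minimum requirement"
fi
```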

You start with a cluster managed by Bright Cluster Manager, with no cloudbursting facilities configured. That is, there is no cloud provider account defined, there are no cloud nodes configured, and there are no tunnel networks and no netmap network defined.

The instructions assume some pre-existing and pre-configured communication channel between the headnode and instances started inside the subnets of the private cloud (e.g. via "Direct Connect", or an IPsec-based hardware VPN). I.e., the following must be true:

An AWS EC2 account exists for the cloud,

a Virtual Private Cloud (VPC) has been configured inside the EC2-VPC platform,

at least one subnet is defined in the VPC,

VPC routing tables and gateways are configured properly and their setup allows for communication between the cloud instances and the local cluster,

at least one VPC security group is configured,

the existing security groups and network ACL configuration should not restrict any traffic coming from the cluster.

Manually Adding The Cloud Provider Account

Do not use the cmgui wizard, and do not use the cloud-setup script. These would create the netmap network and tunnel interfaces on the existing nodes, i.e. an OpenVPN-based setup. Instead, the cloud provider account has to be created from scratch; watch out for leftover settings from previous configurations, which can interfere in odd ways.

The gateway IP address should be that of the customer VPN gateway. If none is present, then the headnode-facing IP address of a router standing on the route to EC2 can be used.

The 'baseaddress' is the base address of the VPC subnet. The arbitrary example used in this article assumes a VPC with a CIDR address of 10.220.0.0/16, and a subnet of 10.220.1.0/24 inside it.
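As a sanity check before filling in these fields, it can be verified that the subnet's base address actually falls inside the VPC's CIDR block. The following bash sketch is purely illustrative (the helper names are made up for this article); it uses the example VPC and subnet addresses from the text.

```shell
#!/bin/bash
# Illustrative helpers (not part of Bright Cluster Manager): check that a
# subnet base address lies inside a VPC CIDR block.
ip_to_int() {
    # Convert a dotted-quad IPv4 address to a 32-bit integer.
    local a b c d
    IFS=. read -r a b c d <<< "$1"
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

in_cidr() {
    # $1: address to test, $2: network base address, $3: prefix length
    local mask=$(( 0xffffffff << (32 - $3) & 0xffffffff ))
    [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

# The example VPC (10.220.0.0/16) and subnet (10.220.1.0/24) from the text:
in_cidr 10.220.1.0 10.220.0.0 16 && echo "subnet lies inside the VPC CIDR"
```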

The 'ec2subnetid' field is the subnet identifier assigned to the subnet by Amazon, and it should have a value given to it by the administrator. If the field is empty, then CMDaemon will attempt to create a new subnet inside your VPC, and it will likely fail because its range will conflict with an existing subnet.

Network objects are only required for those existing subnets in which you want to be able to start instances using Bright Cluster Manager. However, it is recommended to define network objects for all subnets that actually exist in the private cloud.

If you have multiple subnets in which you want to start your instances, you can clone the first network. In the clone, set the 'ec2subnetid', and optionally the 'baseaddress', the broadcast address, and the number of netmask bits.

The 'vpcid' is the AWS-assigned ID of the existing VPC which is to be managed via Bright Cluster Manager.

The 'baseaddress' should be the base address of the entire VPC (i.e. the network IP part of its CIDR).

The 'secgroupd' and 'secgroupn' properties are the security group IDs of existing security groups which are to be used for the newly created cloud director and cloud compute node instances, respectively. In principle those two can be the exact same security group, but both fields must be filled in.

Some special options must be configured for the VPC. It is important that these options are set *before* the private cloud object is committed for the first time.

skipVpcRouteTableSetup -- ensures that CMDaemon does not attempt to alter the existing routing tables

dontUpdateSecurityGroups -- ensures that CMDaemon does not attempt to alter the existing security groups

dontCreateSubnetWhenIDIsSet -- new subnets will not be created if an already attached subnet has 'ec2subnetid' set

dontRemoveEC2Entities -- when removing the subnets or a VPC from the Bright Cluster Manager configuration, CMDaemon will not attempt to remove the actual VPC, and it will not remove the subnets inside EC2

The 'region' should be the actual EC2 region inside which the existing VPC is located.

[head->device*[vpc-director*]->roles*]% ..
[head->device*[vpc-director*]]% set category director
[head->device*[vpc-director*]]% commit
[head->device*[vpc-director]->roles]% use clouddirector
[head->device*[vpc-director]->roles[clouddirector]]% show

...
Dependents vpc-director-dependents
...

The special dependents group visible in the output of the show command must be assigned as the group for the provisioning node.

Note that the network you assign to the eth0 interface determines inside which subnet the cloud node will be created.

Once the cloud director has booted and is in the UP state, the cloud node can be powered on.

Finalizing setup

Preparing /etc/hosts

It might be necessary to alter /etc/hosts on the provisioned node's local disk, changing the IP address of the headnode from its IP on the local network to its IP on the external network. This step is necessary if the cloud nodes get provisioned, but then "fail when switching to local root".
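For illustration only, such an altered hosts file might look like the fragment below. All names and addresses here are hypothetical (192.0.2.10 is a documentation address; 10.141.255.254 is used as an example local-network headnode address):

```
# /etc/hosts prepared for cloud nodes (hypothetical addresses)
127.0.0.1      localhost
# headnode reachable via its external address instead of 10.141.255.254:
192.0.2.10     head.cm.cluster head
```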

This can be done by storing a specially prepared copy of /etc/hosts inside the cloud node's software image, and then copying it to the proper location of /etc/hosts from the node's finalize script, e.g. via this command:
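The command itself is not reproduced above. As a minimal sketch, a finalize-script fragment along the following lines could perform the copy. Both the path /etc/hosts.cloud for the prepared copy inside the software image and the assumption that the node's root filesystem is reachable under /localdisk while the finalize script runs are illustrative conventions for this example, not guaranteed by Bright Cluster Manager:

```shell
#!/bin/bash
# Hypothetical finalize-script fragment. Assumptions: the prepared hosts
# file was shipped inside the software image as /etc/hosts.cloud, and the
# node's root filesystem is mounted under /localdisk at finalize time.
copy_cloud_hosts() {
    local root="$1"   # root of the provisioned filesystem
    if [ -f "$root/etc/hosts.cloud" ]; then
        cp "$root/etc/hosts.cloud" "$root/etc/hosts"
    fi
}

# In the real finalize script this would be invoked as:
#   copy_cloud_hosts /localdisk
```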