Be sure all the hotfixes provided by the hypervisor vendor are
applied. Track the release of hypervisor patches through your
hypervisor vendor’s support channel, and apply patches as soon as
possible after they are released. CloudStack will not track or notify
you of required hypervisor patches. It is essential that your hosts
are completely up to date with the provided hypervisor patches. The
hypervisor vendor is likely to refuse to support any system that is
not up to date with patches.

Warning

Apply All Necessary Hotfixes. The lack of up-to-date hotfixes can lead to
data corruption and lost VMs.

All hosts must be 64-bit and must support HVM (Intel-VT or AMD-V
enabled). All hosts within a cluster must be homogeneous. That means
the CPUs must be of the same type, count, and feature flags.

The CloudStack management network must not be configured as a
separate virtual network. The CloudStack management network is the
same as the vCenter management network, and will inherit its
configuration. See Configure vCenter Management Network.

CloudStack requires ESXi and vCenter. ESX is not supported.

Ideally, all resources used for CloudStack should be used for CloudStack only.
CloudStack should not share an instance of ESXi or storage with other
management consoles. Do not share the same storage volumes that will
be used by CloudStack with a different set of ESXi servers that are
not managed by CloudStack.

Put all target ESXi hypervisors in dedicated clusters in a separate Datacenter
in vCenter.

Ideally, clusters that will be managed by CloudStack should not contain
any other VMs. Do not run the management server or vCenter on
the cluster that is designated for CloudStack use. Create a separate
cluster for CloudStack use and make sure that there are no VMs in
this cluster.

All of the required VLANs must be trunked into all network switches that
are connected to the ESXi hypervisor hosts. These would include the
VLANs for Management, Storage, vMotion, and guest VLANs. The guest
VLAN (used in Advanced Networking; see Network Setup) is a contiguous
range of VLANs that will be managed by CloudStack.

You should have a plan for cabling the vSphere hosts. Proper network
configuration is required before adding a vSphere host to CloudStack. To
configure an ESXi host, you can use vClient to add it as a standalone
host to vCenter first. Once you see the host appear in the vCenter
inventory tree, click the host node in the inventory tree, and navigate
to the Configuration tab.

In the host configuration tab, click the “Hardware/Networking” link to
bring up the networking configuration page.

During the initial installation of an ESXi host, a default virtual switch,
vSwitch0, is created. You may need to create additional vSwitches depending
on your required architecture. CloudStack requires all ESXi hosts in the cloud
to use consistently named virtual switches. If
you change the default virtual switch name, you will need to configure
one or more CloudStack configuration variables as well.

CloudStack allows you to configure three separate networks per ESXi host.
CloudStack identifies these networks by the name of the vSwitch
they are connected to. The networks for configuration are public (for
traffic to/from the public internet), guest (for guest-guest traffic),
and private (for management and usually storage traffic). You can use
the default virtual switch for all three, or create one or two other
vSwitches for those traffic types.

If you want to separate traffic in this way you should first create and
configure vSwitches in vCenter according to the vCenter instructions.
Take note of the vSwitch names you have used for each traffic type. You
will configure CloudStack to use these vSwitches.

By default, a virtual switch on ESXi hosts is created with 56 ports. We
recommend setting it to 4088, the maximum number of ports allowed. To do
that, click the “Properties…” link for the virtual switch (note this is
not the Properties link for Networking).

In the vSwitch properties dialog, select the vSwitch and click Edit.

In this dialog, you can change the number of switch ports. After you’ve
done that, the ESXi host must be rebooted for the setting to take
effect.

In the vSwitch properties dialog box, you may see a vCenter management
network. This same network will also be used as the CloudStack
management network. CloudStack requires the vCenter management network
to be configured properly. Select the management network item in the
dialog, then click Edit.

Make sure the following values are set:

VLAN ID set to the desired ID

vMotion enabled.

Management traffic enabled.

If the ESXi hosts have multiple VMKernel ports, and ESXi is not using
the default value “Management Network” as the management network name,
you must follow these guidelines to configure the management network
port group so that CloudStack can find it:

Use one label for the management network port across all ESXi hosts.

In the CloudStack UI, go to Configuration - Global Settings and set
vmware.management.portgroup to the management network label from the
ESXi hosts.
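The same setting can also be applied through the CloudStack API; the
following is a sketch using the CloudMonkey CLI, where "Mgmt Network" is a
placeholder for your actual port group label:

```
# Point CloudStack at the management network port group label
# configured on the ESXi hosts ("Mgmt Network" is a placeholder).
cloudmonkey update configuration name=vmware.management.portgroup value="Mgmt Network"
```

Depending on the parameter, a Management Server restart may be required
for the new value to take effect.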

You need to extend the range of firewall ports that the console proxy
works with on the hosts. This is to enable the console proxy to work
with VMware-based VMs. The default additional port range is 59000-60000.
To extend the port range, log in to the VMware ESX service console on
each host and run the following commands:
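The commands themselves are not reproduced above; on classic ESX the
service-console firewall was typically adjusted with esxcfg-firewall,
roughly as in the following sketch (verify the syntax against your ESX
version; the vncextras service label is an assumption):

```
# Open the additional console-proxy port range (default 59000-60000)
# for inbound and outbound TCP traffic.
esxcfg-firewall -o 59000-60000,tcp,in,vncextras
esxcfg-firewall -o 59000-60000,tcp,out,vncextras
```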

The Cisco Nexus 1000V virtual switch is a software-based virtual machine
access switch for VMware vSphere environments. It can span multiple
hosts running VMware ESXi 4.0 and later. A Nexus virtual switch consists
of two components: the Virtual Supervisor Module (VSM) and the Virtual
Ethernet Module (VEM). The VSM is a virtual appliance that acts as the
switch’s supervisor. It controls multiple VEMs as a single network
device. The VSM is installed independent of the VEM and is deployed in
redundancy mode as pairs or as a standalone appliance. The VEM is
installed on each VMware ESXi server to provide packet-forwarding
capability. It provides each virtual machine with dedicated switch
ports. This VSM-VEM architecture is analogous to a physical Cisco
switch’s supervisor (standalone or configured in high-availability mode)
and multiple linecards architecture.

The Nexus 1000v switch uses vEthernet port profiles to simplify network
provisioning for virtual machines. There are two types of port profiles:
Ethernet port profiles and vEthernet port profiles. An Ethernet port
profile is applied to the physical uplink ports, that is, the NIC ports
of the physical NIC adapter on an ESXi server. A vEthernet port profile
is associated with the virtual NIC (vNIC) that is plumbed on a guest VM
on the ESXi server. The port profiles help network administrators define
network policies which can be reused for new virtual machines.
The Ethernet port profiles are created on the VSM and are represented as
port groups on the vCenter server.

This section discusses prerequisites and guidelines for using Nexus
virtual switch in CloudStack. Before configuring Nexus virtual switch,
ensure that your system meets the following requirements:

A cluster of servers (ESXi 4.1 or later) is configured in the
vCenter.

Each cluster managed by CloudStack is the only cluster in its vCenter
datacenter.

A Cisco Nexus 1000v virtual switch is installed to serve the
datacenter that contains the vCenter cluster. This ensures that
CloudStack doesn’t have to deal with dynamic migration of virtual
adapters or networks across other existing virtual switches. See
Cisco Nexus 1000V Installation and Upgrade Guide
for guidelines on how to install the Nexus 1000v VSM and VEM modules.

The Nexus 1000v VSM is not deployed on a vSphere host that is managed
by CloudStack.

When the maximum number of VEM modules per VSM instance is reached,
an additional VSM instance is created before introducing any more
ESXi hosts. The limit is 64 VEM modules for each VSM instance.

CloudStack expects that the Management Network of the ESXi host is
configured on the standard vSwitch and searches for it in the
standard vSwitch. Therefore, ensure that you do not migrate the
management network to Nexus 1000v virtual switch during
configuration.

The following information specified in the Nexus Configure Networking
screen is displayed in the Details tab of the Nexus dvSwitch in the
CloudStack UI:

Control Port Group VLAN ID
The VLAN ID of the Control Port Group. The control VLAN is used for
communication between the VSM and the VEMs.

Management Port Group VLAN ID
The VLAN ID of the Management Port Group. The management VLAN corresponds to
the mgmt0 interface that is used to establish and maintain the connection
between the VSM and VMware vCenter Server.

Packet Port Group VLAN ID
The VLAN ID of the Packet Port Group. The packet VLAN forwards relevant data
packets from the VEMs to the VSM.

Note

The VLANs used for control, packet, and management port groups can be the
same.

Admin Name and Password
The admin name and password to connect to the VSM appliance. You must specify
these credentials while configuring Nexus virtual switch.

Management IP Address
This is the IP address of the VSM appliance. This is the IP address you
specify in the virtual switch IP Address field while configuring Nexus virtual
switch.

SSL
Should be set to Enable. Always enable SSL. SSH is usually enabled by
default during the VSM installation. However, check whether the SSH
connection to the VSM is working; without it, CloudStack fails to
connect to the VSM.
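The SSH reachability check can be done from the Management Server host; a
quick sketch (the VSM IP address below is a placeholder):

```
# Confirm that the VSM answers on SSH before configuring the
# Nexus virtual switch in CloudStack (192.0.2.10 is a placeholder).
ssh admin@192.0.2.10 "show version"
```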

Whether you create a Basic or Advanced zone configuration, ensure
that you always create an Ethernet port profile on the VSM after you
install it and before you create the zone.

The Ethernet port profile created to represent the physical
network or networks used by an Advanced zone configuration must trunk
all the VLANs, including guest VLANs, the VLANs that serve the
native VLAN, and the packet/control/data/management VLANs of the
VSM.

The Ethernet port profile created for a Basic zone configuration
does not trunk the guest VLANs because the guest VMs do not get
their own VLANs provisioned on their network interfaces in a Basic
zone.

An Ethernet port profile configured on the Nexus 1000v virtual switch
should not include, in its set of system VLANs, any of the VLANs
configured or intended to be configured for use by VMs or VM
resources in the CloudStack environment.

You do not have to create any vEthernet port profiles – CloudStack
does that during VM deployment.

Ensure that you create required port profiles to be used by
CloudStack for different traffic types of CloudStack, such as
Management traffic, Guest traffic, Storage traffic, and Public
traffic. The physical networks configured during zone creation should
have a one-to-one relation with the Ethernet port profiles.

Assign ESXi host’s physical NIC adapters, which correspond to each
physical network, to the port profiles. In each ESXi host that is part
of the vCenter cluster, observe the physical networks assigned to each
port profile and note down the names of the port profile for future use.
This mapping information helps you when configuring physical networks
during the zone configuration on CloudStack. These Ethernet port profile
names are later specified as VMware Traffic Labels for different traffic
types when configuring physical networks during the zone configuration.
For more information on configuring physical networks, see
“Configuring a vSphere Cluster with Nexus 1000v Virtual Switch”.

Determine the public VLAN, System VLAN, and Guest VLANs to be used by
CloudStack. Ensure that you add them to the port profile database.
Corresponding to each physical network, add the VLAN range to port
profiles. In the VSM command prompt, run the switchport trunk allowed
vlan <range> command to add the VLAN ranges to the port profile.

For example:

switchport trunk allowed vlan 1,140-147,196-203

In this example, the allowed VLANs added are 1, 140-147, and 196-203.
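In context, the trunk statement is applied inside the Ethernet port
profile on the VSM; the following is a sketch, where the profile name
epp-cloud-uplink is a made-up example:

```
n1000v# configure terminal
n1000v(config)# port-profile type ethernet epp-cloud-uplink
n1000v(config-port-prof)# switchport mode trunk
n1000v(config-port-prof)# switchport trunk allowed vlan 1,140-147,196-203
```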

You must also add all the public and private VLANs or VLAN ranges to the
switch. This range is the VLAN range you specify in your zone.

Note

Before you run the vlan command, ensure that the configuration mode is
enabled in Nexus 1000v virtual switch.

For example:

If you want the VLAN 200 to be used on the switch, run the following
command:

vlan 200

If you want the VLAN range 1350-1750 to be used on the switch, run the
following command:

vlan 1350-1750
To make a CloudStack deployment Nexus enabled, you must set the
vmware.use.nexus.vswitch parameter to true by using the Global Settings
page in the CloudStack UI. Unless this parameter is set to “true” and
the Management Server is restarted, you cannot see any UI options
specific to the Nexus virtual switch, and CloudStack ignores the Nexus
virtual switch specific parameters specified in the AddTrafficTypeCmd,
UpdateTrafficTypeCmd, and AddClusterCmd API calls.

Unless the CloudStack global parameter “vmware.use.nexus.vswitch” is set
to “true”, CloudStack by default uses VMware standard vSwitch for
virtual network infrastructure. In this release, CloudStack doesn’t
support configuring virtual networks in a deployment with a mix of
standard vSwitch and Nexus 1000v virtual switch. The deployment can have
either standard vSwitch or Nexus 1000v virtual switch.

You can configure Nexus dvSwitch by adding the necessary resources while
the zone is being created.

After the zone is created, if you want to create an additional cluster
along with Nexus 1000v virtual switch in the existing zone, use the Add
Cluster option. For information on creating a cluster, see
“Add Cluster: vSphere”.

In both these cases, you must specify the following parameters to
configure Nexus virtual switch:

Parameters

Description

Cluster Name

Enter the name of the cluster you created in vCenter. For example, “cloud.cluster”.

vCenter Host

Enter the host name or the IP address of the vCenter host where you have deployed the Nexus virtual switch.

vCenter User name

Enter the username that CloudStack should use to connect to vCenter. This user must have all administrative privileges.

vCenter Password

Enter the password for the user named above.

vCenter Datacenter

Enter the vCenter datacenter that the cluster is in. For example, “cloud.dc.VM”.

Nexus dvSwitch IP Address

The IP address of the VSM component of the Nexus 1000v virtual switch.

CloudStack supports VMware vNetwork Distributed Switch (VDS) for virtual
network configuration in a VMware vSphere environment. This section
helps you configure VMware VDS in a CloudStack deployment. Each vCenter
server instance can support up to 128 VDS instances and each VDS
instance can manage up to 500 VMware hosts.

VMware VDS is an aggregation of host-level virtual switches on a VMware
vCenter server. VDS abstracts the configuration of individual virtual
switches that span across a large number of hosts, and enables
centralized provisioning, administration, and monitoring for your entire
datacenter from a centralized interface. In effect, a VDS acts as a
single virtual switch at the datacenter level and manages networking for
a number of hosts in a datacenter from a centralized VMware vCenter
server. Each VDS maintains network runtime state for VMs as they move
across multiple hosts, enabling inline monitoring and centralized
firewall services. A VDS can be deployed with or without a Virtual
Standard Switch and a Nexus 1000V virtual switch.

Currently, this field is used only for public traffic. For guest
traffic the field is ignored and can be left empty. By default an empty
string is assumed, which translates to an untagged VLAN for that
specific traffic type.

Type of virtual switch. Specified as string.

Possible valid values are vmwaredvs, vmwaresvs, nexusdvs.

vmwaresvs: Represents VMware vNetwork Standard virtual switch

vmwaredvs: Represents VMware vNetwork distributed virtual switch

nexusdvs: Represents Cisco Nexus 1000v distributed virtual switch.

If nothing is specified (the field is left empty), the zone-level default
virtual switch is used, based on the value of the corresponding global
parameter.
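In practice these fields are combined into a single comma-separated VMware
traffic label of the form <vSwitch name>,<VLAN ID>,<vSwitch type>. As a
sketch (the label value below is a made-up example, not a required value),
the fields can be split like this:

```shell
# Split a VMware traffic label of the form
# "<vSwitch name>,<VLAN ID>,<vSwitch type>" into its three fields.
label="dvSwitch0,200,vmwaredvs"
old_ifs=$IFS
IFS=','
set -- $label          # word-split the label on the commas
IFS=$old_ifs
vswitch_name=$1 vlan_id=$2 vswitch_type=$3
echo "name=$vswitch_name vlan=$vlan_id type=$vswitch_type"
```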

Following are the global configuration parameters:

vmware.use.dvswitch: Set to true to enable either kind of distributed
virtual switch (VMware DVS or Cisco Nexus 1000v) in a CloudStack
deployment. If set to false, the only virtual switch that can be used in
that CloudStack deployment is the Standard virtual switch.

vmware.use.nexus.vswitch: This parameter is ignored if
vmware.use.dvswitch is set to false. Set to true to enable Cisco Nexus
1000v distributed virtual switch in a CloudStack deployment.

To make a CloudStack deployment VDS enabled, set the vmware.use.dvswitch
parameter to true by using the Global Settings page in the CloudStack UI
and restart the Management Server. Unless you enable the
vmware.use.dvswitch parameter, you cannot see any UI options specific to
VDS, and CloudStack ignores the VDS-specific parameters that you
specify. Additionally, CloudStack uses VDS for virtual network
infrastructure if the value of the vmware.use.dvswitch parameter is true
and the value of the vmware.use.nexus.vswitch parameter is false. Another
global parameter that defines VDS configuration is
vmware.ports.per.dvportgroup. This is the default number of ports per
VMware dvPortGroup in a VMware environment. The default value is 256.
This number is directly associated with the number of guest networks you
can create.
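Both parameters can also be set from the command line; the following is a
sketch using the CloudMonkey CLI (parameter names as given above; restart
the Management Server afterwards for the changes to take effect):

```
# Enable distributed virtual switch support and set the default
# number of ports per dvPortGroup.
cloudmonkey update configuration name=vmware.use.dvswitch value=true
cloudmonkey update configuration name=vmware.ports.per.dvportgroup value=256
```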

You can configure VDS by adding the necessary resources while a zone is
created.

Alternatively, at the cluster level, you can create an additional
cluster with VDS enabled in the existing zone using the Add Cluster
option. For information on creating a cluster, see “Add Cluster: vSphere”.

In both these cases, you must specify the following parameters to
configure VDS:

Parameters

Description

Cluster Name

Enter the name of the cluster you created in vCenter. For example, “cloudcluster”.

vCenter Host

Enter the name or the IP address of the vCenter host where you have deployed the VMware VDS.

vCenter User name

Enter the username that CloudStack should use to connect to vCenter. This user must have all administrative privileges.

vCenter Password

Enter the password for the user named above.

vCenter Datacenter

Enter the vCenter datacenter that the cluster is in. For example, “clouddcVM”.

Override Public Traffic

Enable this option to override the zone-wide public traffic for the cluster you are creating.

Public Traffic vSwitch Type

This option is displayed only if you enable the Override Public Traffic option. Select VMware vNetwork Distributed Virtual Switch. If the vmware.use.dvswitch global parameter is true, the default option will be VMware vNetwork Distributed Virtual Switch.

Public Traffic vSwitch Name

Name of virtual switch to be used for the public traffic.

Override Guest Traffic

Enable the option to override the zone-wide guest traffic for the cluster you are creating.

Guest Traffic vSwitch Type

This option is displayed only if you enable the Override Guest Traffic option. Select VMware vNetwork Distributed Virtual Switch. If the vmware.use.dvswitch global parameter is true, the default option will be VMware vNetwork Distributed Virtual Switch.