ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)

·IT groups are inundated with time-consuming data migrations to manage growth and change.

In order to solve these issues and increase efficiency, IT departments are moving to converged infrastructure solutions. These solutions offer many benefits, including integration testing completed in advance and thoroughly documented deployment procedures. They also offer increased feature sets and premium support with a single point of contact. Cisco and IBM have teamed up to bring the best networking, compute, and storage together in a single solution named VersaStack. VersaStack offers customers versatility and simplicity along with great performance and reliability. In this document we show how to install an All Flash VersaStack setup for a VMware infrastructure that is designed to increase IOPS and provide the best performance for I/O-intensive applications. A brief list of the VersaStack benefits that solve the challenges previously noted includes:

The current data center trend, driven by the need to better utilize available resources, is toward virtualization on shared infrastructure. Higher levels of efficiency can be realized on integrated platforms because compute, network, and storage resources are pooled and brought together by a pre-validated process. Validation eliminates compatibility issues and presents a platform with reliable features that can be deployed in an agile manner. This industry trend and the validation approach used to cater to it have resulted in enterprise customers moving away from siloed architectures. VersaStack serves as the foundation for a variety of workloads, enabling efficient architectural designs that can be deployed quickly and with confidence.

This document describes the architecture and deployment procedures of an infrastructure composed of Cisco®, IBM®, and VMware® virtualization that uses the IBM FlashSystem V9000 with block protocols. The intended audience for this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineering, and customers who want to deploy the core VersaStack architecture with IBM FlashSystem V9000.

The VersaStack architecture is highly modular, or "Pod"-like. There is sufficient architectural flexibility, with enough design options to scale as required with investment protection. The platform can be scaled up (adding resources to existing VersaStack units) and/or out (adding more VersaStack units).

Specifically, VersaStack is a defined set of hardware and software that serves as an integrated foundation for both virtualized and non-virtualized solutions. VMware vSphere® built on VersaStack includes IBM FlashSystem V9000, Cisco networking, the Cisco Unified Computing System™ (Cisco UCS®), Cisco MDS Fibre Channel switches, and VMware vSphere software in a single package. The design is flexible enough that the networking, computing, and storage can fit in one data center rack or be deployed according to a customer's data center design. Port density enables the networking components to accommodate multiple configurations.

One benefit of the VersaStack architecture is the ability to meet any customer's capacity or performance needs in a cost-effective manner. A converged infrastructure system capable of serving multiple protocols across a single interface allows for customer choice and investment protection because it is a wire-once architecture.

This architecture references relevant criteria pertaining to resiliency, cost benefit, and ease of deployment of all components including IBM FlashSystem V9000 storage.

The architecture for this solution shown below uses two sets of hardware resources:

The common infrastructure services include Active Directory, DNS, DHCP, vCenter, the Cisco Nexus 1000v virtual supervisor module (VSM), Cisco UCS Performance Manager, and any other shared services. These components are considered core infrastructure because they provide the necessary data-center-wide services for the environment where the VersaStack Pod resides. Because these services are integral to the deployment and operation of the platform, best practices should be followed in their design and implementation, including high availability, an appropriate RAID setup, and performance and scalability considerations, given that such services may need to be extended to multiple Pods. At an existing customer site, this core infrastructure may already be in place, in which case there is no need to build it.

Figure 1 illustrates the VMware vSphere built on VersaStack components and the network connections for a configuration with IBM FlashSystem V9000 Storage. This design uses the Cisco Nexus® 9372, and Cisco UCS B-Series with the Cisco UCS virtual interface card (VIC) and the IBM FlashSystem V9000 storage controllers connected in a highly available design using Cisco Virtual Port Channels (vPCs). This infrastructure is deployed to provide FC-booted hosts with block-level access to shared storage datastores.

·Support for up to 160 Cisco UCS C-Series and B-Series servers by way of additional fabric extenders and blade server chassis

·Two IBM FlashSystem V9000 control enclosures and one V9000 storage enclosure, with support for up to 12 flash modules of the same capacity within a storage enclosure.

For server virtualization, the deployment includes VMware vSphere. Although this is the base design, each of the components can be scaled easily to support specific business requirements. For example, more (or different) servers or even blade chassis can be deployed to increase compute capacity, additional V9000 storage enclosures can be deployed to increase storage capacity, pairs of V9000 control enclosures can be deployed to improve I/O capability and throughput, and special hardware or software features can be added to introduce new capabilities.

This document guides you through the low-level steps for deploying the base architecture. These procedures cover everything from physical cabling to network, compute and storage device configurations.

For information regarding the design of VersaStack, please reference the Design Guide at:

The table below details the software revisions used for validating the various components of the Cisco Nexus 9000 based VersaStack architecture. To check your enic driver version, run ethtool -i vmnic0 from the command line of the ESXi host. For more information regarding supported configurations, please reference the following interoperability links:
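As a reference, the command and a skeleton of its output are shown below; the bracketed values are placeholders to compare against the validated versions in the table, not output captured from this solution.

ethtool -i vmnic0
driver: enic
version: <enic driver version>
firmware-version: <adapter firmware version>
bus-info: <PCI bus address>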

This document provides details on configuring a fully redundant, highly available VersaStack unit with IBM FlashSystem V9000 storage. Therefore, reference is made at each step to the component being configured as either A or B. For example, Controller-A and Controller-B are used to identify the IBM storage controllers that are provisioned within this document, and Cisco Nexus A and Cisco Nexus B identify the pair of Cisco Nexus switches that are configured. The Cisco UCS fabric Interconnects are similarly configured. Additionally, this document details the steps for provisioning multiple Cisco UCS hosts, and these are identified sequentially: VM-Host-Infra-01, VM-Host-Infra-02, and so on. Finally, to indicate that you should include information pertinent to your environment in a given step, <text> appears as part of the command structure. See the following example for the network port vlan create command:

Usage:
network port vlan create ?
[-node] <nodename> Node
{ [-vlan-name] {<netport>|<ifgrp>} VLAN Name
| -port {<netport>|<ifgrp>} Associated Network Port
[-vlan-id] <integer> } Network Switch VLAN Identifier

Example:
network port vlan create -node <node01> -vlan-name i0a-<vlan id>

This document is intended to enable you to fully configure the VersaStack Pod in the environment. Various steps require you to insert customer-specific naming conventions, IP addresses, VSAN and VLAN schemes, as well as to record appropriate MAC addresses.

Table 2 and Table 3 describe the VLANs and example IP ranges necessary for deployment as outlined in this guide, as well as the virtual machines (VMs) necessary for deployment. Networking architectures can be unique to each environment. Because the design of this deployment is a Pod, the architecture in this document leverages private networks, and only the in-band management VLAN traffic routes out through the Cisco 9k switches. Other management traffic is routed through a separate out-of-band management switch. Your architecture could vary based on the deployment objectives. An NFS VLAN is included in this document to allow connectivity to any existing NFS datastores for migration of virtual machines if required; however, NFS is not validated in this solution and is not supported on the IBM FlashSystem V9000.

The SAN infrastructure allows storage enclosures or additional building blocks to be added non-disruptively. A pair of MDS switches provides redundant Fibre Channel connectivity, and separate fabrics have been created by utilizing VSANs on the MDS switches, which provide dedicated host or server-side storage area networks (SANs) and a private fabric to support the cluster interconnects.

The logical fabric isolation provides:

·Prevents any host or server from accidentally accessing the storage enclosure.

·Prevents congestion on the host or server-side SAN from causing performance implications for either the host or server-side SAN or the FlashSystem V9000.

Table 4 describes the VSANs necessary for deployment as outlined in this guide.

The following variables for the Fibre Channel environment are to be collected during the installation phase for subsequent use in this document.

For the V9000, the storage controllers are also referred to as AC2 and the storage enclosure as AE2; this document refers to the controllers as Controller A (ContA) and Controller B (ContB), and to the storage enclosure as (SE).

The information in this section is provided as a reference for cabling the equipment in a VersaStack environment. To simplify cabling requirements, the tables include both local and remote device and port locations.

The tables in this section contain details for the prescribed and supported configuration of the IBM FlashSystem V9000 running 7.4.1.2.

This document assumes that out-of-band management ports are plugged into an existing management infrastructure at the deployment site. These interfaces will be used in various configuration steps.

Be sure to follow the cabling directions in this section. Failure to do so will result in changes to the deployment procedures that follow because specific port locations are mentioned.

It is possible to order IBM FlashSystem V9000 systems in a different configuration from what is presented in the tables in this section. Before starting, be sure that the configuration matches the descriptions in the tables and diagrams in this section.

Figure 4 illustrates the cabling diagrams for VersaStack configurations using the Cisco Nexus 9000 and IBM FlashSystem V9000. For more information about FlashSystem V9000 enclosure cabling, reference the following URL:

Figure 5 shows the management cabling. The V9000s have redundant management connections. One path is through the dedicated out-of-band management switch, and the secondary path is through the in-band management path going up through the 9k to the production network.

The IBM FlashSystem V9000 controllers and MDS 9148S switches are 16 Gbps capable; 16 Gbps ports have been used for back-end cluster connectivity between the controllers and the storage enclosure, and 8 Gbps ports have been utilized for host connectivity from the V9000 controllers to the Cisco UCS Fabric Interconnects.

This setup utility will guide you through the basic configuration of the system. Setup configures only enough connectivity for management of the system.

Please register Cisco Nexus9000 Family devices promptly with your supplier. Failure to register may affect response times for initial service calls. Nexus9000 devices must be registered to receive entitled support services.

Press Enter at anytime to skip a dialog. Use ctrl-c at anytime to skip the remaining dialogs.

---- Basic System Configuration Dialog VDC: 1 ----

This setup utility will guide you through the basic configuration of the system. Setup configures only enough connectivity for management of the system.

Please register Cisco Nexus9000 Family devices promptly with your supplier. Failure to register may affect response times for initial service calls. Nexus9000 devices must be registered to receive entitled support services.

Press Enter at anytime to skip a dialog. Use ctrl-c at anytime to skip the remaining dialogs.

There are multiple ways to configure the switch to uplink to your separate management switch; two examples are shown below. These examples illustrate methods by which your configuration could be set up; however, since networking configurations can vary, we recommend you consult your local network personnel for the optimal configuration. In the first example, a single top-of-rack switch is used, and the Cisco Nexus 9000 series switches are both connected to it through their ports 48. The Cisco 9k switches use a 1 Gbps SFP to convert the connection to Cat-5 copper connecting to the top-of-rack switch; however, connection types can vary. The 9ks are configured with the interface-vlan option, and each 9k switch has a unique IP address for its VLAN. The traffic we wish to route from the 9k is the in-band management traffic, so we use VLAN 11 and set the port to access mode. The top-of-rack switch also has its ports set to access mode. The second example shows how to leverage a port channel, which maximizes upstream connectivity; in that example, the top-of-rack switch would have a port channel configured as well. A minimal sketch of both approaches follows.
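The following sketch shows both examples on one Cisco Nexus 9000 switch; VLAN 11 is the in-band management VLAN, while the SVI address, port numbers, and port channel number are placeholders to adapt to your environment.

feature interface-vlan
feature lacp
! Example 1: single access-port uplink with a VLAN interface (SVI)
vlan 11
interface Vlan11
  ip address <vlan 11 ip address>/<prefix>
  no shutdown
interface Ethernet1/48
  switchport mode access
  switchport access vlan 11
  no shutdown
! Example 2: port-channel uplink (the top-of-rack switch needs a matching port channel)
interface port-channel 48
  switchport mode access
  switchport access vlan 11
interface Ethernet1/47-48
  switchport mode access
  switchport access vlan 11
  channel-group 48 mode active
  no shutdown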

These steps provide details for the initial Cisco MDS Fibre Channel switch setup. We will zone the storage prior to creating the FlashSystem V9000 cluster so that all the nodes can communicate with each other.

This setup utility will guide you through the basic configuration of the system. Setup configures only enough connectivity for management of the system.

Please register Cisco MDS 9000 Family devices promptly with your supplier. Failure to register may affect response times for initial service calls. MDS devices must be registered to receive entitled support services.

Press Enter at anytime to skip a dialog. Use ctrl-c at anytime to skip the remaining dialogs.

Enter milliseconds in multiples of 10 for congestion-drop for port mode F in range (<100-500>/default), where default is 500. [d]: Congestion-drop for port mode E must be greater than or equal to Congestion-drop for port mode F. Hence, Congestion drop for port mode E will be set as default.

This setup utility will guide you through the basic configuration of the system. Setup configures only enough connectivity for management of the system.

Please register Cisco MDS 9000 Family devices promptly with your supplier. Failure to register may affect response times for initial service calls. MDS devices must be registered to receive entitled support services.

Press Enter at anytime to skip a dialog. Use ctrl-c at anytime to skip the remaining dialogs.

9.Create the zone for the FlashSystem V9000 cluster. If adding more FlashSystem V9000 control or storage nodes, add their WWPNs to the cluster communication zone named versastack below. Host zones and the cluster zone belong to separate VSAN fabrics.

zone name versastack vsan <<var_vsan_a_clus_id>>

member device-alias VersaStack-ContA-BE1

member device-alias VersaStack-ContA-BE2

member device-alias VersaStack-ContB-BE1

member device-alias VersaStack-ContB-BE2

member device-alias VersaStack-SE-BE1

member device-alias VersaStack-SE-BE2

member device-alias VersaStack-SE-BE3

member device-alias VersaStack-SE-BE4

exit

10.Create the zoneset for the VersaStack configuration and add the zone.

zoneset name versastackzoneset vsan <<var_vsan_a_clus_id>>

member versastack

zoneset activate name versastackzoneset vsan <<var_vsan_a_clus_id>>

sh zoneset active

copy run start

Cisco MDS B

1.Create the port channel that will be uplinked to the fabric interconnect.

interface port-channel 2

2.Create a VSAN for host connectivity and assign interfaces to it. Ports assigned to the port channel will also be in this VSAN, as sketched below.
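A sketch of the VSAN creation and interface assignment follows; the VSAN name and the fc interface range are assumptions to be matched to your cabling.

vsan database
  vsan <<var_vsan_b_id>> name VSAN_B
  vsan <<var_vsan_b_id>> interface fc1/29-32
  vsan <<var_vsan_b_id>> interface port-channel 2
exit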

9.Create the zone for the FlashSystem V9000 cluster. If adding more FlashSystem V9000 control or storage nodes, add their WWPNs to the cluster communication zone named versastack below. Host zones and the cluster zone belong to separate VSAN fabrics.

zone name versastack vsan <<var_vsan_b_clus_id>>

member device-alias VersaStack-ContB-BE3

member device-alias VersaStack-ContB-BE4

member device-alias VersaStack-ContA-BE3

member device-alias VersaStack-ContA-BE4

member device-alias VersaStack-SE-BE5

member device-alias VersaStack-SE-BE6

member device-alias VersaStack-SE-BE7

member device-alias VersaStack-SE-BE8

exit

10.Create the zoneset for the VersaStack configuration and add the zone.
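The commands mirror those entered on Cisco MDS A, substituting the fabric B VSAN:

zoneset name versastackzoneset vsan <<var_vsan_b_clus_id>>
member versastack
zoneset activate name versastackzoneset vsan <<var_vsan_b_clus_id>>
sh zoneset active
copy run start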

In this section we will configure the storage. Encryption will be used to increase security and to save costs when disposing of drives. We will also leverage IBM Real-time Compression to reduce OPEX by reducing the storage footprint, and mirroring for fault tolerance. Proper planning can optimize performance and help reduce operational costs for your VersaStack.

Browser access to all system and service IPs is automatically configured to connect securely using HTTPS and SSL. Attempts to connect through HTTP will get redirected to HTTPS.

The system generates its own self-signed SSL certificate. Upon first connection to the system, your browser may present a security exception because it does not trust the signer; you should allow the connection to proceed.

Since we will implement encryption during setup, we will need the licenses for this feature for both storage controllers; there is no trial license. We will also need three USB keys for our two control enclosures, installed across both control enclosures, to allow us to complete the encryption setup. The USB keys can be removed from the system after setup and kept in a secure location. Whenever an encryption-enabled V9000 system is powered on, it requires a USB key containing the correct encryption key to be plugged into a control enclosure. As such, it is recommended that one USB key remain installed in each system if you plan to allow automatic rebooting of the system should it be shut down for any reason. Alternatively, you would need to re-insert one USB key at reboot.

If you are connecting multiple control enclosures for scale, the additional nodes will communicate through the Fibre Channel connections for initial discovery. To set up on node A only, complete the following steps:

1.Configure an Ethernet port of a PC/laptop to allow DHCP to configure its IP address and DNS.

2.Connect an Ethernet cable from the PC/laptop Ethernet port to the Ethernet port labeled "T" on the rear of either node canister in the V9000 control enclosure.

3.A few moments after the connection is made, the node will use DHCP to configure the IP address and DNS settings of the laptop/PC.

This will likely disconnect you from any other network connections you have on the laptop/PC. If you do not have DHCP on your PC/laptop, you can manually configure it with the following network settings: IPv4 address 192.168.0.2, mask 255.255.255.0, gateway 192.168.0.1, and DNS 192.168.0.1.

4.Open a browser and go to the address https://install, which will direct you to the initialization wizard.

5.When asked how the node will be used, select "As the first node in a new system" and click Next.

6.Follow the instructions that are presented by the initialization tool to configure the system with a management IP address <<var_cluster_mgmt_ip>>, <<var_cluster_mgmt_mask>> and <<var_cluster_mgmt_gateway>>, then click Next.

7.Click Close when the task is completed.

8.Click Next.

After you complete the initialization process, disconnect the cable between the PC/laptop and the technician port as directed, and reconnect to your network with your previous settings. Your browser will be redirected to the management GUI at the IP address you configured.

You may have to wait up to five minutes for the management GUI to start up and become accessible.

10.Insert the contact details <<var_contact_name>>, <<var_email_contact>>, <<var_admin_phone>>, <<var_city>>, then click Apply and Next, and click Close.

11.Input the email server IP address <<var_mailhost_ip>> and change the port if necessary, then click Apply and Next, then Close.

12.Enter the email addresses for all administrators that should be notified when issues occur, as well as any other parties that need info or inventory <<var_email_contact>>. Click Apply and Next, then Close.

13.Review the Summary screen and click Finish, then click Close after the tasks have completed.


14.Click Enable Encryption in the popup dialog.

15.The dialog states that you will need 3 USB flash drives. This is per storage controller, so if you are setting up 2 control enclosures, make sure you have a total of 6 installed, 3 per enclosure. Click Next.

16.Wait for the Encryption Key updates to complete then click Next.

17.Click Commit to enable Encryption then click Close.

18.Click Cancel in the add hosts popup, as hosts will be added later in this document.

19.Using the lower-left Settings navigation, select Network, then highlight the Service IP Addresses section and click interface 1. Change the IP address if necessary and click OK.

20.Select the Node Name drop-down and select Left.

21. Click node2. Change the IP address if necessary and click OK.

22.Open another browser session and enter the IP address of the cluster followed by /service ("<<var_cluster_mgmt_ip>>/service"). Enter the superuser password (set in step 4), and click Log in.

23.Click the radio button of Panel 01-1, then from the left navigation, click Change Service IP.

27.Close this browser session and return to the main V9000 GUI browser session.

28.Click the Access (lock) icon in the left pane and select Users to access the Users screen.

29.Select Create User.

30.Enter a new name for an alternative admin account. Leave the Security Admin default, input the new password, then click Create.

31.Log out of the superuser account and log back in as the new account you created.

32.Click Cancel if you are prompted to add hosts or volumes, then select the Pools icon on the left of the screen and select Volumes by Pool.

33.Click the Create Volumes selection.

34.Select a preset that you want for the ESXi boot volume and select the Pool.

35.Input quantity 4, capacity 40 GB, and name VM-Host-Infra-0, and change the starting ID to 1. Click Create, then click Close.

For FlashSystem V9000 software version 7.5 and above, click Advanced and deselect "Format volume"; unless clearing residual data is required, this will save several hours of volume format time. Click OK, then click Create, then Close.

36.Click Create Volumes again and select the disk preset and the Pool. Enter quantity 2, capacity 2 TB, and name infra_datastore. Enter 1 for the starting ID, then click Create, then click Close.

37.Click Create Volumes again and select the disk preset and the Pool. Enter quantity 1, capacity 500 GB, and name infra_swap. Click Create, then click Close.

This section provides detailed procedures for configuring the Cisco Unified Computing System (Cisco UCS) for use in a VersaStack environment. The steps are necessary to provision the Cisco UCS C-Series and B-Series servers and should be followed precisely to avoid improper configuration.

Cisco UCS 6248 A

To configure the Cisco UCS for use in a VersaStack environment, complete the following steps:

1.Connect to the console port on the first Cisco UCS 6248 fabric interconnect.

4.When prompted, enter admin as the user name and enter the administrative password <<var_password>>.

5.Click Login to log in to Cisco UCS Manager.

6.Enter the information for the Anonymous Reporting if desired and click OK.

Upgrade Cisco UCS Manager Software to Version 2.2(5a)

This document assumes the use of Cisco UCS Manager Software version 2.2(5a). To upgrade the Cisco UCS Manager software and the UCS 6248 Fabric Interconnect software to version 2.2(5a), refer to Cisco UCS Manager Install and Upgrade Guides.

The Cisco UCS Manager and Cisco UCS Fabric Interconnect software version used is 2.2(5a); the blade and rack server software bundle version used is 2.2(3g). For more information regarding supported configurations, please reference the IBM and Cisco interoperability matrix links.

Add Block of IP Addresses for KVM Access

To create a block of IP addresses for server Keyboard, Video, Mouse (KVM) access in the Cisco UCS environment, complete the following steps:

This block of IP addresses should be in the same subnet as the management IP addresses for the Cisco UCS Manager.

1.Log into Cisco UCS Manager, click the LAN tab in the navigation pane.

2.Select Pools > root > IP Pools > IP Pool ext-mgmt.

3.In the Actions pane, select Create Block of IP Addresses.

4.Enter the starting IP address of the block, the number of IP addresses required, and the subnet and gateway information <<var_In-band_mgmtblock_net>>.

5.Click OK to create the IP block.

6.Click OK in the confirmation message.

Synchronize Cisco UCS to NTP

To synchronize the Cisco UCS environment to the NTP server, complete the following steps:

1.In Cisco UCS Manager, click the Admin tab in the navigation pane.

2.Select All > Timezone Management.

3.In the Properties pane, select the appropriate time zone in the Timezone menu.

4.Click Save Changes, and then click OK.

5.Click Add NTP Server.

6.Enter <<var_global_ntp_server_ip>> and click OK.

7.Click OK.

Edit Chassis Discovery Policy

Setting the discovery policy simplifies the addition of Cisco UCS B-Series Cisco UCS chassis and of additional fabric extenders for further Cisco UCS C-Series connectivity. To modify the chassis discovery policy, complete the following steps:

1.In Cisco UCS Manager, click the Equipment tab in the navigation pane and select Equipment in the list on the left.

2.In the right pane, click the Policies tab.

3.Under Global Policies, set the Chassis/FEX Discovery Policy to match the number of uplink ports that are cabled between the chassis or fabric extenders (FEXes) and the fabric interconnects.

10.On the SAN tab, expand SAN > SAN Cloud > Fabric-B.

11.Right-click VSANs and choose Create VSAN.

12.Enter VSAN_B as the name of the VSAN for fabric B.

13.Keep the Disabled option selected for FC Zoning.

14.Click the Fabric B radio button.

15.Enter <<var_vsan_b_id>> as the VSAN ID for fabric B. Enter <<var_fabric_b_fcoe_vlan_id>> as the FCoE VLAN ID for fabric B, then click OK and OK.

Create Port Channels for the Fibre Channel Interfaces

To configure the necessary port channels for the Cisco UCS environment, complete the following steps:

Fabric-A

1.In the navigation pane, under SAN > SAN Cloud, expand the Fabric A tree.

2.Right-click FC Port Channels.

3.Choose Create Port Channel.

4.Enter 1 for the port channel ID and Po1 for the port channel name.

5.Click Next, then choose ports 29 and 32 and click >> to add the ports to the port channel. Click Finish.

6.Check the check box for Show Navigator for FC Port-Channel 1 (Fabric A) and click OK.

7.Under the VSAN drop-down, select VSAN 101.

8.Click Save Changes and then click OK.

9.Click OK to close the navigator.

Fabric-B

1.Click the SAN tab. In the navigation pane, under SAN > SAN Cloud, expand the Fabric B tree.

2.Right-click FC Port Channels.

3.Choose Create Port Channel.

4.Enter 2 for the port channel ID and Po2 for the port channel name.

5.Click Next.

6.Choose ports 29-32 and click >> to add the ports to the port channel.

7.Click Finish.

8.Check the check box for Show Navigator for FC Port-Channel 2 (Fabric B).

9.Under the VSAN drop-down, select VSAN 102, click Apply, click OK.

10.To initialize a quick sync of the connections to the MDS switch, right-click the port channel created and select Disable Port Channel, then re-enable the port channel. Repeat this step for the port channel created for Fabric-A.

Acknowledge Cisco UCS Chassis and Cisco UCS C-Series

To acknowledge all Cisco UCS chassis and C-Series Servers, complete the following steps:

Create MAC Address Pools

To configure the necessary MAC address pools for the Cisco UCS environment, complete the following steps:

1.In Cisco UCS Manager, click the LAN tab in the navigation pane.

2.Select Pools > root.

In this procedure, two MAC address pools are created, one for each switching fabric.

3.Right-click MAC Pools under the root organization.

4.Select Create MAC Pool to create the MAC address pool.

5.Enter MAC_Pool_A as the name of the MAC pool.

6.Optional: Enter a description for the MAC pool.

7.Click Next.

8.Click Add.

9.Specify a starting MAC address.

For the VersaStack solution, the recommendation is to place 0A in the next-to-last octet of the starting MAC address to identify all of the MAC addresses as fabric A addresses.
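For example, assuming the commonly used Cisco UCS prefix 00:25:B5, a hypothetical starting address such as 00:25:B5:<pod ID>:0A:00 identifies the pool's Pod and fabric at a glance; the pod-identifier octet is a naming convention, not a requirement.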

10.Specify a size for the MAC address pool that is sufficient to support the available blade or server resources.

11.Click OK.

12.Click Finish.

13.In the confirmation message, click OK.

14.Right-click MAC Pools under the root organization.

15.Select Create MAC Pool to create the MAC address pool.

16.Enter MAC_Pool_B as the name of the MAC pool.

17.Optional: Enter a description for the MAC pool.

18.Click Next.

19.Click Add.

20.Specify a starting MAC address.

For the VersaStack solution, the recommendation is to place 0B in the next to last octet of the starting MAC address to identify all the MAC addresses in this pool as fabric B addresses.

21.Specify a size for the MAC address pool that is sufficient to support the available blade or server resources.

22.Click OK.

23.Click Finish.

24.In the confirmation message, click OK.

Create UUID Suffix Pool

To configure the necessary universally unique identifier (UUID) suffix pool for the Cisco UCS environment, complete the following steps:

1.In Cisco UCS Manager, click the Servers tab in the navigation pane.

2.Select Pools > root.

3.Right-click UUID Suffix Pools.

4.Select Create UUID Suffix Pool

5.Enter UUID_Pool as the name of the UUID suffix pool.

6.Optional: Enter a description for the UUID suffix pool.

7.Keep the prefix at the derived option.

8.Click Next.

9.Click Add to add a block of UUIDs.

10.Keep the From field at the default setting.

11.Specify a size for the UUID block that is sufficient to support the available blade or server resources.

12.Click OK.

13.Click Finish.

14.Click OK.

Create Server Pool

To configure the necessary server pool for the Cisco UCS environment, complete the following steps:

Consider creating unique server pools to achieve the granularity that is required in your environment.

1.In Cisco UCS Manager, click the Servers tab in the navigation pane.

2.Select Pools > root.

3.Right-click Server Pools.

4.Select Create Server Pool.

5.Enter Infra_Pool as the name of the server pool.

6.Optional: Enter a description for the server pool.

7.Click Next.

8.Select two (or more) servers to be used for the VMware management cluster and click >> to add them to the Infra_Pool server pool.

9.Click Finish.

10.Click OK.

Create VLANs

To configure the necessary virtual local area networks (VLANs) for the Cisco UCS environment, complete the following steps:

1.In Cisco UCS Manager, click the LAN tab in the navigation pane.

In this procedure, five VLANs are created.

2.Select LAN > LAN Cloud.

3.Right-click VLANs.

4.Select Create VLANs

5.Enter IB-MGMT-VLAN as the name of the VLAN to be used for management traffic.

6.Keep the Common/Global option selected for the scope of the VLAN.

7.Enter <<var_ib-mgmt_vlan_id>> as the ID of the management VLAN.

8.Keep the Sharing Type as None.

9.Click OK and then click OK again.

10.Right-click VLANs.

11.Select Create VLANs.

12.Enter NFS-VLAN as the name of the VLAN to be used for NFS.

13.Keep the Common/Global option selected for the scope of the VLAN.

14.Enter the <<var_nfs_vlan_id>> for the NFS VLAN.

15.Keep the Sharing Type as None.

16.Click OK, and then click OK again.

17.Right-click VLANs.

18.Select Create VLANs.

19.Enter vMotion-VLAN as the name of the VLAN to be used for vMotion.

20.Keep the Common/Global option selected for the scope of the VLAN.

21.Enter the <<var_vmotion_vlan_id>> as the ID of the vMotion VLAN.

22.Keep the Sharing Type as None.

23.Click OK, and then click OK again.

24.Right-click VLANs.

25.Select Create VLANs.

26.Enter VM-Traffic-VLAN as the name of the VLAN to be used for the VM traffic.

27.Keep the Common/Global option selected for the scope of the VLAN.

28.Enter the <<var_vm-traffic_vlan_id>> for the VM Traffic VLAN.

29.Keep the Sharing Type as None.

30.Click OK, and then click OK again.

31.Right-click VLANs.

32.Select Create VLANs.

33.Enter Native-VLAN as the name of the VLAN to be used as the native VLAN.

34.Keep the Common/Global option selected for the scope of the VLAN.

35.Enter the <<var_native_vlan_id>> as the ID of the native VLAN.

36.Keep the Sharing Type as None.

37.Click OK and then click OK again.

38.Expand the list of VLANs in the navigation pane, right-click the newly created Native-VLAN and select Set as Native VLAN.

39.Click Yes, and then click OK.

Create Host Firmware Package

Firmware management policies allow the administrator to select the corresponding packages for a given server configuration. These policies often include packages for adapter, BIOS, board controller, FC adapters, host bus adapter (HBA) option ROM, and storage controller properties. To create a firmware management policy for a given server configuration in the Cisco UCS environment, complete the following steps:

1.In Cisco UCS Manager, click the Servers tab in the navigation pane.

2.Select Policies > root.

3.Right-click Host Firmware Packages.

4.Select Create Host Firmware Package.

5.Enter VM-Host-Infra as the name of the host firmware package.

6.Leave Simple selected.

7.Select the version 2.2(3g) for both the Blade and Rack Packages.

8.Click OK to create the host firmware package.

9.Click OK.

Set Jumbo Frames in Cisco UCS Fabric

To configure jumbo frames and enable quality of service in the Cisco UCS fabric, complete the following steps:

1.In Cisco UCS Manager, click the LAN tab in the navigation pane.

2.Select LAN > LAN Cloud > QoS System Class

3.In the right pane, click the General tab.

4.On the Best Effort row, enter 9216 in the box under the MTU column.

5.Click Save Changes in the bottom of the window.

6.Click OK.

Create Local Disk Configuration Policy (Optional)

A local disk configuration for the Cisco UCS environment is necessary if the servers in the environment do not have a local disk.

This policy should not be used on servers that contain local disks.

To create a local disk configuration policy, complete the following steps:

1.In Cisco UCS Manager, click the Servers tab in the navigation pane.

2.Select Policies > root.

3.Right-click Local Disk Config Policies.

4.Select Create Local Disk Configuration Policy.

5.Enter SAN-Boot as the local disk configuration policy name.

6.Change the mode to No Local Storage.

7.Click OK to create the local disk configuration policy.

8.Click OK.

Create Network Control Policy for Cisco Discovery Protocol

To create a network control policy that enables Cisco Discovery Protocol (CDP) on virtual network ports, complete the following steps:

1.In Cisco UCS Manager, click the LAN tab in the navigation pane.

2.Select Policies > root.

3.Right-click Network Control Policies.

4.Select Create Network Control Policy

5.Enter Enable_CDP as the policy name.

6.For CDP, select the Enabled option.

7.Click OK to create the network control policy.

8.Click OK.

Create Power Control Policy

To create a power control policy for the Cisco UCS environment, complete the following steps:

1.In Cisco UCS Manager, click the Servers tab in the navigation pane.

2.Select Policies > root.

3.Right-click Power Control Policies.

4.Select Create Power Control Policy

5.Enter No-Power-Cap as the power control policy name.

6.Change the power capping setting to No Cap.

7.Click OK to create the power control policy.

8.Click OK.

Create Server Pool Qualification Policy (Optional)

To create an optional server pool qualification policy for the Cisco UCS environment, complete the following steps:

This example creates a policy for a Cisco UCS B200-M4 Server.

1.In Cisco UCS Manager, click the Servers tab in the navigation pane.

2.Select Policies > root.

3.Right-click Server Pool Policy Qualifications.

4.Select Create Server Pool Policy Qualification

5.Enter UCSB-B200-M4 as the name for the policy.

6.Select Create Server PID Qualifications.

7.Enter UCSB-B200-M4 as the PID.

8.Click OK to create the server pool qualification policy.

9.Click OK, and then click OK again.

Create Server BIOS Policy

To create a server BIOS policy for the Cisco UCS environment, complete the following steps:

Create vNIC Templates

The "Enable Failover" option is used for the vNICs in these steps as default, however, if deploying the optional N1kV virtual switch, the “Enable Failover“ options for the vNICs should remain unchecked.“

Create Boot Policies

This procedure applies to a Cisco UCS environment in which two FC interfaces are used on the IBM V9000 cluster Controller 1 and two FC interfaces are used on Controller 2.

Two boot policies need to be created: the first boots from Fabric A and the second boots from Fabric B. Though it is not absolutely necessary to have two boot policies, having two options helps spread the load and helps ensure that a disaster that removes an entire fabric does not cause a total failure.

For this example, the following WWPN values are used for the V9000. Your ports may vary depending on the configuration of your V9000.

To create boot policies for the Cisco UCS environment, complete the following steps:

Use the WWPN variables that were recorded in the storage section of the WWPN table.

1.In Cisco UCS Manager, click the Servers tab in the navigation pane.

2.Choose Policies > root.

3.Right-click Boot Policies.

4.Choose Create Boot Policy.

5.Enter Boot-Fabric-A as the name of the boot policy.

6.(Optional) Enter a description for the boot policy.

7.Keep the Reboot on Boot Order Change check box unchecked.

8.Expand the Local Devices drop-down menu and Choose Add CD/DVD (you should see local and remote greyed out).

9.Expand the vHBAs drop-down menu and Choose Add SAN Boot.

10.In the Add SAN Boot dialog box, enter Fabric-A in the vHBA field.

11.Make sure that the Primary radio button is selected as the SAN boot type.

12.Click OK to add the SAN boot initiator.

13.From the vHBA drop-down menu, choose Add SAN Boot Target.

14.Keep 0 as the value for Boot Target LUN.

15.Enter the WWPN for Controller A going to switch A <<var_wwpn_FC_ContA-FE1-fabricA>>.

16.Keep the Primary radio button selected as the SAN boot target type.

17.Click OK to add the SAN boot target.

18.From the vHBA drop-down menu, choose Add SAN Boot Target.

19.Keep 0 as the value for Boot Target LUN.

20.Enter the WWPN for Controller A going to switch A <<var_wwpn_FC_ContA-FE3-fabricA>>.

10.Right-click VM-Host-Infra-Fabric-B and choose Create Service Profiles from Template.

11.Enter VM-Host-Infra-0 as the service profile prefix.

12.Enter 3 as the Name Suffix Starting Number.

13.Enter 2 as the Number of Instances.

14.Click OK to create the service profiles.

15.Click OK in the confirmation message.

16.Verify that the service profiles VM-Host-Infra-01, VM-Host-Infra-02, VM-Host-Infra-03 and VM-Host-Infra-04 have been created. The service profiles are automatically associated with the servers in their assigned server pools.

17.(Optional) Choose each newly created service profile and enter the server host name or the FQDN in the User Label field in the General tab. Click Save Changes to map the server host name to the service profile name.

Backup the Cisco UCS Manager Configuration

It is recommended that you back up your Cisco UCS configuration. Please refer to the link below for additional information.

Adding Servers

Additional server pools, service profile templates, and service profiles can be created in the respective organizations to add more servers to the Pod unit. All other pools and policies are at the root level and can be shared among the organizations.

Gather Necessary WWPN Information

After the Cisco UCS service profiles have been created, each infrastructure blade in the environment will have a unique configuration. To proceed with the SAN-BOOT deployment, specific information must be gathered from each Cisco UCS blade and from the IBM controllers. Complete the following steps:

1.To gather the vHBA WWPN information, launch the Cisco UCS Manager GUI. In the navigation pane, click the Servers tab. Expand Servers > Service Profiles > root. Click each service profile and expand to see vHBAs.

2.Click vHBA Fabric-A; in the General tab, right-click the WWPN and click Copy.

3.Record the WWPN information that is displayed for both the Fabric A vHBA and the Fabric B vHBA for each service profile into the WWPN variable in Table 22.

These steps configure zoning for the WWPNs from the servers and the FlashSystem V9000, using the WWPN information collected in the previous steps for both the storage setup and the server profile creation. There are 4 zones created for servers in VSAN 101 on switch A and 4 zones created in VSAN 102 on switch B. Host zones and the cluster zone belong to separate VSAN fabrics.

Validate that all the HBAs are logged into the MDS switch. The V9000 and the Cisco servers should be powered on. To start the Cisco servers from Cisco UCS Manager, select the Servers tab, then click Servers > Service Profiles > root, right-click VM-Host-Infra-01, and select Boot Server.

5.Validate that the HBAs of all powered-on systems are logged into the switch through the show zoneset active command.
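Logged-in members are flagged with an asterisk in the show zoneset active output. As an additional check, the fabric login table can be listed directly:

show zoneset active
show flogi database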

zoneset name versastackzoneset vsan 102

member VM-Host-Infra-01-B

member VM-Host-Infra-02-B

member VM-Host-Infra-03-B

member VM-Host-Infra-04-B

4.Activate the zoneset.

zoneset activate name versastackzoneset vsan 102
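As on fabric A, verify the active zoneset and save the switch configuration:

sh zoneset active
copy run start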

Validate that all the HBAs are logged into the MDS switch. The V9000 and the Cisco servers should be powered on. To start the Cisco servers from Cisco UCS Manager, select the Servers tab, then click Servers > Service Profiles > root, right-click VM-Host-Infra-01, and select Boot Server.

5.Validate that the HBAs of all powered-on systems are logged into the switch.

In this section we will add the host mappings for the host profiles created through Cisco UCS Manager to the V9000 storage, connect to the boot LUNs, and perform the initial ESXi install. The WWPNs for the hosts will be required to complete this section.

This section provides detailed instructions for installing VMware ESXi 5.5 Update 2 in a VersaStack environment. After the procedures are completed, two SAN-booted ESXi hosts will be provisioned. These deployment procedures are customized to include the environment variables.

Several methods exist for installing ESXi in a VMware environment. These procedures focus on how to use the built-in Keyboard, Video, Mouse (KVM) console and virtual media features in Cisco UCS Manager to map remote installation media to individual servers and connect to their boot logical unit numbers (LUNs). In this method, we use the Cisco custom ESXi 5.5 U2 GA ISO file, which is downloaded from the URL below. It is required for this procedure because it contains custom Cisco drivers, thereby reducing the installation steps.

In this section we will set up the vSphere environment using Windows 2008 and SQL Server. For any greenfield deployments, the virtual machines used in this procedure will be installed on a local datastore on VersaStack; however, they could be installed on a different clustered ESX system or on physical hardware if desired. This procedure will use the volumes previously created for VMFS datastores.

ESXi Host VM-Host-Infra-01

To set up the VMkernel ports and the virtual switches on the VM-Host-Infra-01 ESXi host, complete the following steps:

1.From each vSphere Client, select the host in the inventory.

2.Click the Configuration tab.

3.Click Networking in the Hardware pane.

4.Click Properties on the right side of vSwitch0.

5.Select the vSwitch configuration and click Edit.

6.From the General tab, change the MTU to 9000.

7.Click OK to close the properties for vSwitch0.

8.Select the Management Network configuration and click Edit.

9.Change the network label to VMkernel-MGMT and select the Management Traffic checkbox.

10.Click OK to finalize the edits for Management Network.

11.Select the VM Network configuration and click Edit.

12.Change the network label to IB-MGMT Network and enter <<var_ib-mgmt_vlan_id>> in the VLAN ID (Optional) field.

13.Click OK to finalize the edits for VM Network.

14.Click Add to add a network element.

15.Select VMkernel and click Next.

16.Change the network label to VMkernel-NFS and enter <<var_nfs_vlan_id>> in the VLAN ID (Optional) field.

17.Click Next to continue with the NFS VMkernel creation.

18.Enter the IP address <<var_nfs_vlan_id_ip_host-01>> and the subnet mask <<var_nfs_vlan_id_mask_host01>> for the NFS VLAN interface for VM-Host-Infra-01.

19.Click Next to continue with the NFS VMkernel creation.

20.Click Finish to finalize the creation of the NFS VMkernel interface.

21.Select the VMkernel-NFS configuration and click Edit.

22.Change the MTU to 9000.

23.Click OK to finalize the edits for the VMkernel-NFS network.

24.Click Add to add a network element.

25.Select VMkernel and click Next.

26.Change the network label to VMkernel-vMotion and enter <<var_vmotion_vlan_id>> in the VLAN ID (Optional) field.

27.Select the Use This Port Group for vMotion checkbox.

28.Click Next to continue with the vMotion VMkernel creation.

29.Enter the IP address <<var_vmotion_vlan_id_ip_host-01>> and the subnet mask <<var_vmotion_vlan_id_mask_host-01>> for the vMotion VLAN interface for VM-Host-Infra-01.

30.Click Next to continue with the vMotion VMkernel creation.

31.Click Finish to finalize the creation of the vMotion VMkernel interface.

32.Select the VMkernel-vMotion configuration and click Edit.

33.Change the MTU to 9000.

34.Click OK to finalize the edits for the VMkernel-vMotion network.

35.Click Add and select Virtual Machine Network, then click Next.

36.Change the network label to VM-Traffic and enter <<var_vmtraffic_vlan_id>> in the VLAN ID (Optional) field.

37.Click Next, then click Finish to complete the creation of the VM-Traffic network.

38.Close the dialog box to finalize the ESXi host networking setup.

This procedure uses one physical adapter (vmnic0) assigned to the vSphere Standard Switch (vSwitch0). If you plan to implement the 1000V distributed switch later in this document, this is sufficient. If your environment will be using the vSphere Standard Switch, you must assign another physical adapter to the switch: click the properties of vSwitch0 on the Configuration > Networking tab, click the Network Adapters tab, click Add, select vmnic1, click Next, click Next, click Finish, and click Close.
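As an optional spot check (not part of the validated procedure), the resulting vSwitch settings, including the 9000 MTU, and the VMkernel interfaces can be listed from the ESXi shell:

esxcli network vswitch standard list
esxcli network ip interface list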

VersaStack VMware vCenter 5.5 Update 2

The procedures in the following subsections provide detailed instructions to install VMware vCenter 5.5 Update 2 in a VersaStack environment. After the procedures are completed, a VMware vCenter Server will be configured along with a Microsoft SQL Server database to provide database support to vCenter. These deployment procedures are customized to include the environment variables.

This procedure focuses on the installation and configuration of an external Microsoft SQL Server 2012 R2 database, but other types of external databases are also supported by vCenter. To use an alternative database, refer to the VMware vSphere 5.5 documentation.

To install VMware vCenter 5.5 Update 2, an accessible Windows Active Directory® (AD) Domain is necessary. If an existing AD Domain is not available, an AD virtual machine, or AD pair, can be set up in this VersaStack environment. Refer to the Appendix.

27.Click in the BIOS Setup Utility window and use the right arrow key to navigate to the Boot menu. Use the down arrow key to select CD-ROM Drive. Press the plus (+) key twice to move CD-ROM Drive to the top of the list. Press F10 and Enter to save the selection and exit the BIOS Setup Utility.

9.On the Select Features screen, check the .NET Framework 3.5 Features box and click Next.

10.On the Confirm installation selections screen, a warning is displayed asking "Do you need to specify an alternate source path?". If the target computer does not have access to Windows Update, click the Specify an alternate source path link to specify the path to the \sources\sxs folder on the installation media, and then click OK. After you have specified the alternate source, or if the target has access to Windows Update, click the X next to the warning, and then click Install.

7.On the Select Features screen, check the .NET Framework 3.5 Features box and click Next.

8.On the Confirm installation selections screen, a warning is displayed asking "Do you need to specify an alternate source path?". If the target computer does not have access to Windows Update, click the Specify an alternate source path link to specify the path to the \sources\sxs folder on the installation media, and then click OK. After you have specified the alternate source, or if the target has access to Windows Update, click the X next to the warning, and then click Install.

6.Right-click the volume infra_swap, leave the All I/O Groups default, and select Map to Host.

7.Choose host VM-Host-Infra-02 and select Map Volumes.

8.Click Map All volumes on the warning popup, then click Close.

9.In vSphere, in the left pane, right-click the cluster VersaStack_Management and click Rescan for Datastores.

At this point of the install, there is a warning for no network management redundancy. The optional Cisco 1000v virtual switch shown later in this document will remedy that issue. If you are not installing the 1000v, you should add the second Cisco network adapter to the VMware standard switch on each ESX host: click the Configuration tab, and in the Hardware pane, click Networking, then click the properties of vSwitch0. From the Network Adapters tab, click Add, select the unclaimed adapter vmnic1, click Next, then click Next again, and then click Finish.

The Cisco Nexus 1000V is a distributed virtual switch solution that is fully integrated within the VMware virtual infrastructure, including VMware vCenter, for the virtualization administrator. This solution offloads the configuration of the virtual switch and port groups to the network administrator to enforce a consistent data center network policy. The Cisco Nexus 1000V is compatible with any upstream physical access layer switch that is compliant with the Ethernet standard, including Cisco Nexus switches and switches from other network vendors. It is also compatible with any server hardware that is listed in the VMware Hardware Compatibility List (HCL).

The Cisco Nexus 1000V has the following components:

·Virtual Supervisor Module (VSM)—The control plane of the switch and a VM that runs Cisco NX-OS.

·Virtual Ethernet Module (VEM)—A virtual line card that is embedded in each VMware vSphere (ESXi) host. The VEM is partly inside the kernel of the hypervisor and partly in a user-world process, called the VEM Agent.

Cisco Nexus 1000V Architecture

Layer 3 control mode is the preferred method of communication between the VSM and the VEMs. In Layer 3 control mode, the VEMs can be in a different subnet than the VSM and from each other. Active and standby VSM control ports should be Layer 2 adjacent. These ports are used to communicate the HA protocol between the active and standby VSMs. Each VEM needs a designated VMkernel NIC interface that is attached to the VEM that communicates with the VSM. This interface, which is called the Layer 3 Control vmknic, must have a system port profile applied to it (see System Port Profiles and System VLANs), so the VEM can enable it before contacting the VSM.
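A minimal sketch of such a system port profile for the Layer 3 Control vmknic is shown below; the profile name is hypothetical, and the in-band management VLAN is assumed as the control VLAN.

port-profile type vethernet n1kv-l3
  capability l3control
  vmware port-group
  switchport mode access
  switchport access vlan <<var_ib-mgmt_vlan_id>>
  system vlan <<var_ib-mgmt_vlan_id>>
  no shutdown
  state enabled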

12.Enter the vCenter IP and login information. For domain accounts, use the administrator@vsphere.local login format; do not use the domainname\user account format.

13. Accept default ports, and click Next.

14.Review the summary screen, click Power on after deployment and click Finish.

15.After the VM boots, the plug-in is registered within a few minutes. Validate the plug-in in the vSphere Client by clicking Plug-ins, then Manage Plug-ins, in the top menu bar, and look under Available Plug-ins.

Install the VSM through the Cisco Virtual Switch Update Manager

The VSUM deploys the primary and secondary VSMs to the ESXi hosts through the GUI install. You will have a primary VSM running on one ESXi host and a secondary running on the other ESXi host; both are installed at the same time through the host selection. Complete the following steps to deploy the VSM:

On the machine where you will run the browser for the VMware vSphere Web Client, you should have installed Adobe Flash as well as the Client Integration plug-in for the Web Client. The plug-in can be downloaded from the lower-left corner of the Web Client login page.

5.Keep the defaults to deploy a new VSM and a High Availability Pair. Select IB-Mgmt for the Control and Management VLAN.

6.For the Host Selection, click the Suggest button and choose the datastores.

7.Enter a domain ID for the switch configuration section.

8.Enter the following information for the VSM configuration: <<var_vsm_hostname>>, <<var_vsm_mgmt_ip>>, <<var_vsm_mgmt_mask>>, <<var_vsm_mgmt_gateway>>, <<var_password>>, then click Finish. You can launch a second vSphere Client to monitor the progress; click Tasks in the left pane. It will take a few minutes to complete.

Perform Base Configuration of the Primary VSM

To perform the base configuration of the primary VSM, complete the following steps:

1.Using an SSH client, log in to the primary Cisco Nexus 1000V VSM as admin.
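For orientation, a minimal sketch of the system-uplink Ethernet port profile (referenced later when the physical NICs are added to the distributed switch) is shown below; the VLAN lists are assumptions to be matched to your environment.

port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk native vlan <<var_native_vlan_id>>
  switchport trunk allowed vlan <<var_ib-mgmt_vlan_id>>,<<var_nfs_vlan_id>>,<<var_vmotion_vlan_id>>,<<var_vm-traffic_vlan_id>>
  channel-group auto mode on mac-pinning
  system vlan <<var_ib-mgmt_vlan_id>>,<<var_nfs_vlan_id>>
  no shutdown
  state enabled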

In this section, the unused standard switch components will be removed and the second VIC will be assigned. To remove the unused switch and assign the second VIC, complete the following steps:

ESXi Host VM-Host-Infra-01

Repeat the steps in this section for all ESXi hosts.

1.From each vSphere Client, select the host in the inventory.

2.Click the Configuration tab.

3.Click Networking.

4.Select the vSphere standard switch, then click Properties.

5.Click the temporary network VMkernel-MGMT-2 created for the migration and click Remove.

6.Click Yes, then click Yes again.

7.Click Close.

8.Validate that you are still focused on the vSphere standard switch, and click Remove to remove this switch.

9.Click Yes to the warning popup.

10.After vSwitch0 has disappeared from the screen, click vSphere Distributed Switch at the top next to View.

11.Click Manage Physical Adapters.

12.Scroll down to the system-uplink box and click Add NIC.

13.Choose vmnic0 and click OK, then click OK again.

14.Validate that there are no warnings for the ESX nodes. From each vSphere Client, select Hosts and Clusters in the inventory section and click the Summary tab.

15.If there are warnings, right-click each node and click Reconfigure for vSphere HA.

Remove the Redundancy for the NIC in Cisco UCS Manager

While creating the ESXi vNIC template settings, the default is to enable hardware failover on the vNIC. When you have deployed the N1kV, that setting is no longer required and should be disabled. To remove the redundancy, complete the following steps:

1.Launch UCS Manager and click the LAN tab.

2.Click Policies > root > vNIC Templates.

3.Click vNIC_Template_A, and on the General tab uncheck Enable Failover.

This section describes how to use the Cisco UCS Performance Manager Setup Wizard to provide your license key, define users and passwords, and set up Cisco UCS domains. To set up Cisco UCS Performance Manager, complete the following steps:

1.In a web browser, navigate to the login page of the Cisco UCS Performance Manager interface. Cisco UCS Performance Manager redirects the first login attempt to the Setup page, which includes the End User License Agreement (EULA) dialog.

2.Read through the agreement.

3.At the bottom of the EULA dialog, check the check box on the left side, and then click the Accept License button on the right side.

4.On the Cisco UCS Performance Manager Setup page, click Get Started.

5.On the Add Licenses page, click Add License File.

6.Proceed to the next task or repeat the preceding step.

7.Verify that the product name and number of servers in the Current Status field match the product you purchased, then click Next.

8.On the Setup Users page, enter a password for the admin user, and create an account for one additional user.

9.In the Set admin password area, enter and confirm a password for the admin user account.

IBM is well known for its management software. Added value for this solution can be obtained by installing IBM's Storage Management Console for VMware vCenter. Please visit the IBM website at http://www.ibm.com/us/en/ to obtain the latest version.

This Bill of Materials uses the Cisco 1300 series VIC for blade servers. The Cisco 1200 series VIC can be substituted for the 1300 series VIC. Please consult the IBM and Cisco compatibility guides for the latest supported hardware.

27.Click in the BIOS Setup Utility window and use the right arrow key to navigate to the Boot menu. Use the down arrow key to select CD-ROM Drive. Press the plus (+) key twice to move CD-ROM Drive to the top of the list. Press F10 and Enter to save the selection and exit the BIOS Setup Utility.

57.Type the FQDN of the Windows domain for this VersaStack and click Next.

58.Select the appropriate forest functional level and click Next.

59.Keep DNS server selected and click Next.

60.If one or more DNS servers exist that this domain can resolve from, select Yes to create a DNS delegation. If this AD server is being created on an isolated network, select No, to not create a DNS delegation. The remaining steps in this procedure assume a DNS delegation is not created. Click Next.

61.Click Next to accept the default locations for database and log files.

69.Expand the Server and Forward Lookup Zones. Select the zone for the domain. Right-click and select New Host (A or AAAA). Populate the DNS Server with Host Records for all components in the VersaStack.

70.Optional: Build a second AD server VM. Add this server to the newly created Windows Domain and activate Windows. Install Active Directory Domain Services on this machine. Launch dcpromo.exe at the end of this installation. Choose to add a domain controller to a domain in an existing forest. Add this domain controller to the domain created earlier. Complete the installation of this second domain controller. After vCenter Server is installed, affinity rules can be created to keep the two AD servers running on different hosts.

Sreeni has over 17 years of experience in Information Systems, with expertise across the Cisco Data Center technology portfolio, including DC architecture design, virtualization, compute, network, storage, and cloud computing.

Dave Gimpl, Senior Technical Staff Member, IBM Systems

Dave has over 25 years of engineering experience in IBM's Systems group, is the Chief Architect of the FlashSystem V9000, and has been involved in the development of the FlashSystem product range from its inception.