About the Authors

Lindsey Street is a systems architect in the NetApp Infrastructure and Cloud Engineering team. She focuses on the architecture, implementation, compatibility, and security of innovative vendor technologies to develop competitive and high-performance end-to-end cloud solutions for customers. Lindsey started her career in 2006 at Nortel as an interoperability test engineer, testing customer equipment interoperability for certification. Lindsey holds a Bachelor of Science degree in Computer Networking and a Master of Science degree in Information Security from East Carolina University.

John George is a Reference Architect in the NetApp Infrastructure and Cloud Engineering team and is focused on developing, validating, and supporting cloud infrastructure solutions that include NetApp products. Before his current role, he supported and administered Nortel's worldwide training network and VPN infrastructure. John holds a Master's degree in computer engineering from Clemson University.

Chris O'Brien is currently focused on developing infrastructure best practices and solutions that are designed, tested, and documented to facilitate and improve customer deployments. Previously, O'Brien was an application developer and has worked in the IT industry for more than 15 years.

Chris Reno is a reference architect in the NetApp Infrastructure and Cloud Enablement group and is focused on creating, validating, supporting, and evangelizing solutions based on NetApp products. Before being employed in his current role, he worked with NetApp product engineers designing and developing innovative ways to perform quality assurance for NetApp products, including enablement of a large grid infrastructure using physical and virtualized compute resources. In these roles, Chris gained expertise in stateless computing, netboot architectures, and virtualization.

About Cisco Validated Design (CVD) Program

The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information, visit:

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

Cisco and the Cisco Logo are trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S. and other countries. A listing of Cisco's trademarks can be found at http://www.cisco.com/go/trademarks. Third party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1005R)

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.

Overview

Industry trends indicate a vast data center transformation toward shared infrastructures. By using virtualization, enterprise customers have embarked on the journey to the cloud by moving away from application silos and toward shared infrastructure, thereby increasing agility and reducing costs. Cisco and NetApp have partnered to deliver FlexPod, which serves as the foundation for a variety of workloads and enables efficient architectural designs that are based on customer requirements.

Audience

This document describes the architecture and deployment procedures of an infrastructure composed of Cisco®, NetApp®, and VMware® virtualization that uses FCoE-based storage serving NAS and SAN protocols. The intended audience for this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineering, and customers who want to deploy the core FlexPod architecture with NetApp Data ONTAP® operating in 7-mode.

Architecture

The FlexPod architecture is highly modular or "podlike." Although each customer's FlexPod unit varies in its exact configuration, after a FlexPod unit is built, it can easily be scaled as requirements and demand change. The unit can be scaled both up (adding resources to a FlexPod unit) and out (adding more FlexPod units).

Specifically, FlexPod is a defined set of hardware and software that serves as an integrated foundation for both virtualized and nonvirtualized solutions. VMware vSphere® built on FlexPod includes NetApp storage, NetApp Data ONTAP, Cisco networking, the Cisco Unified Computing System™ (Cisco UCS®), and VMware vSphere software in a single package. The design is flexible enough that the networking, computing, and storage can fit in one data center rack or be deployed according to a customer's data center design. Port density enables the networking components to accommodate multiple configurations of this kind.

One benefit of the FlexPod architecture is the ability to customize or "flex" the environment to suit a customer's requirements. For this reason, the reference architecture detailed in this document highlights the resiliency, cost benefit, and ease of deployment of an FCoE-based storage solution. A storage system capable of serving multiple protocols across a single interface allows for customer choice and investment protection because it truly is a wire-once architecture.

Figure 1 shows the VMware vSphere built on FlexPod components and the network connections for a configuration with FCoE-based storage. This design uses the Cisco Nexus® 5548UP, Cisco Nexus 2232PP FEX, and Cisco UCS C-Series and B-Series with the Cisco UCS virtual interface card (VIC) and the NetApp FAS family of storage controllers connected in a highly available design using Cisco Virtual PortChannels (vPCs). This infrastructure is deployed to provide FCoE-booted hosts with file- and block-level access to shared storage datastores. The reference architecture reinforces the "wire-once" strategy, because as additional storage is added to the architecture, be it FC, FCoE, or 10 Gigabit Ethernet, no recabling is required from the hosts to the Cisco UCS fabric interconnect. The base design includes the following:

•Support for hundreds of Cisco UCS C-Series and B-Series servers by way of additional fabric extenders and blade server chassis

•One NetApp FAS3250-A (HA pair) operating in 7-mode

Storage is provided by a NetApp FAS3250-AE (HA configuration in two chassis) operating in 7-Mode. All system and network links feature redundancy, providing end-to-end high availability (HA). For server virtualization, the deployment includes VMware vSphere. Although this is the base design, each of the components can be scaled flexibly to support specific business requirements. For example, more (or different) servers or even blade chassis can be deployed to increase compute capacity, additional disk shelves can be deployed to improve I/O capacity and throughput, and special hardware or software features can be added to introduce new capabilities.

This document guides you through the low-level steps for deploying the base architecture, as shown in Figure 1. These procedures cover everything from physical cabling to compute and storage configuration to configuring virtualization with VMware vSphere.

Software Revisions

It is important to note the software versions used in this validated configuration. Table 1 details the software revisions used throughout this document.

Configuration Guidelines

This document provides details for configuring a fully redundant, highly available configuration for a FlexPod unit with FCoE-based storage. Therefore, reference is made to which component is being configured with each step, either A or B. For example, controller A and controller B are used to identify the two NetApp storage controllers that are provisioned with this document, and Nexus A and Nexus B identify the pair of Cisco Nexus switches that are configured. The Cisco UCS fabric interconnects are similarly configured. Additionally, this document details steps for provisioning multiple Cisco UCS hosts, and these are identified sequentially: VM-Host-Infra-01, VM-Host-Infra-02, and so on. Finally, to indicate that you should include information pertinent to your environment in a given step, <text> appears as part of the command structure. See the following example for the vlan create command:

controller A> vlan create

Usage:

vlan create [-g {on|off}] <ifname> <vlanid_list>

vlan add <ifname> <vlanid_list>

vlan delete -q <ifname> [<vlanid_list>]

vlan modify -g {on|off} <ifname>

vlan stat <ifname> [<vlanid_list>]

Example:

controller A> vlan create vif0 <management VLAN ID>

This document is intended to enable you to fully configure the customer environment. In this process, various steps require you to insert customer-specific naming conventions, IP addresses, and VLAN schemes, as well as to record appropriate MAC addresses. Table 2 describes the VLANs necessary for deployment as outlined in this guide. The VM-Mgmt VLAN is used for management interfaces of the VMware vSphere hosts. Table 3 lists the VSANs necessary for deployment as outlined in this guide. Table 4 lists the VMware virtual machines created as part of this deployment. Table 5 lists the configuration variables that are used throughout this document. Table 5 can be completed based on the specific site variables and used in implementing the document configuration procedures.

If you use separate in-band and out-of-band management VLANs, you must create a Layer 3 route between these VLANs. For this validation, a common management VLAN was used.

Table 2 Necessary VLANs

VLAN Name           VLAN Purpose                                                                  ID Used in Validating This Document
Mgmt in band        VLAN for in-band management interfaces                                        3175
Mgmt out of band    VLAN for out-of-band management interfaces                                    3171
Native              VLAN to which untagged frames are assigned                                    2
NFS                 VLAN for NFS traffic                                                          3170
FCoE - A            VLAN for FCoE traffic for fabric A                                            101
FCoE - B            VLAN for FCoE traffic for fabric B                                            102
vMotion             VLAN designated for the movement of VMs from one physical host to another     3173
VM Traffic          VLAN for VM application traffic                                               3174
Packet Control      VLAN for Packet Control traffic                                               3176

Table 3 Necessary VSANs

VSAN Name    VSAN Purpose                                         ID Used in Validating This Document
VSAN A       VSAN for fabric A traffic. ID matches FCoE-A VLAN    101
VSAN B       VSAN for fabric B traffic. ID matches FCoE-B VLAN    102

Table 4 Created VMware Virtual Machine

Virtual Machine Description                                           Host Name
vCenter SQL Server database
vCenter Server
NetApp Virtual Storage Console (VSC) and NetApp OnCommand® core
NetApp vSphere Storage APIs for Storage Awareness (VASA) Provider

Table 5 Configuration Variables

Variable                              Description                                                                  Customer Implementation Value
<<var_controller1>>                   Storage Controller 1 Host Name
<<var_controller1_e0m_ip>>            Out-of-band management IP for Storage Controller 1
<<var_controller1_mask>>              Out-of-band management network netmask
<<var_controller1_mgmt_gateway>>      Out-of-band management network default gateway
<<var_adminhost_ip>>                  Administration Host Server IP
<<var_timezone>>                      FlexPod time zone (for example, America/New_York)
<<var_location>>                      Node location string
<<var_dns_domain_name>>               DNS domain name
<<var_nameserver_ip>>                 DNS server IP(s)
<<var_sp_ip>>                         Out-of-band service processor management IP for each storage controller

System Configuration Guides

System configuration guides provide supported hardware and software components for the specific Data ONTAP version. These online guides provide configuration information for all NetApp storage appliances currently supported by the Data ONTAP software. They also provide a table of component compatibilities.

1. Make sure that the hardware and software components are supported with the version of Data ONTAP that you plan to install by checking the System Configuration Guides at:

2. Click the appropriate NetApp storage appliance and then click the component you want to view. Alternatively, to compare components by storage appliance, click a component and then click the NetApp storage appliance you want to view.

Controllers

Follow the physical installation procedures for the controllers in the FAS32xx documentation on the NetApp Support site at:

•SAS disk drives use software-based disk ownership. Ownership of a disk drive is assigned to a specific storage system by writing software ownership information on the disk drive rather than by using the topography of the storage system's physical connections. An illustrative ownership check follows this list.

•Unique disk shelf IDs must be set per storage system (a number from 0 through 98).

•Disk shelf power must be turned on to change the digital display shelf ID. The digital display is on the front of the disk shelf.

•Disk shelves must be power-cycled after the shelf ID is changed for the change to take effect.

•Changing the shelf ID on a disk shelf that is part of an existing storage system running Data ONTAP requires that you wait at least 30 seconds before turning the power back on so that Data ONTAP can properly delete the old disk shelf address and update the copy of the new disk shelf address.

•Changing the shelf ID on a disk shelf that is part of a new storage system installation (the disk shelf is not yet running Data ONTAP) requires no wait; you can immediately power-cycle the disk shelf.
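The following commands illustrate software-based disk ownership from the controller console. They are shown for reference only; the disk name in the assignment example is hypothetical, and the disk assignment used in this deployment is covered later in this guide.

disk show -v

disk assign 0a.00.0 -o <<var_controller1>>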

Data ONTAP 8.1.2

Complete the Configuration Worksheet

Before running the setup script, complete the configuration worksheet from the product manual.

Note To access the Configuration Worksheet, you need access to the NetApp Support site: http://now.netapp.com/

Assign Controller Disk Ownership and Initialize Storage

This section provides details for assigning disk ownership and for initializing and verifying the disks.

Typical best practices should be followed when determining the number of disks to assign to each controller head. You may choose to assign a disproportionate number of disks to a given storage controller in an HA pair, depending on the intended workload.

In this reference architecture, half the total number of disks in the environment is assigned to one controller and the remainder to its partner.

Table 19 Controller Details

Detail                                   Detail Value
Controller 1 MGMT IP                     <<var_controller1_e0m_ip>>
Controller 1 netmask                     <<var_controller1_mask>>
Controller 1 gateway                     <<var_controller1_mgmt_gateway>>
URL of the Data ONTAP boot software      <<var_url_boot_software>>
Controller 2 MGMT IP                     <<var_controller2_e0m_ip>>
Controller 2 netmask                     <<var_controller2_mask>>
Controller 2 gateway                     <<var_controller2_mgmt_gateway>>

Controller1

1. Connect to the storage system console port. You should see a Loader-A prompt. However, if the storage system is in a reboot loop, press Ctrl-C to exit the Autoboot loop when you see this message:

Starting AUTOBOOT press Ctrl-C to abort...

2. If the system is at the LOADER prompt, enter the following command to boot Data ONTAP:

autoboot

3. During system boot, press Ctrl-C when prompted for the Boot Menu:

Press Ctrl-C for Boot Menu...

Note If 8.1.2 is not the version of software being booted, follow the steps to install new software. If 8.1.2 is the version being booted, then proceed with step 14, maintenance mode boot.

4. To install new software, first select option 7.

7

5. Answer yes to perform a nondisruptive upgrade.

y

6. Select e0M for the network port you want to use for the download.

e0M

7. Select yes to reboot now.

y

8. Enter the IP address, netmask, and default gateway for e0M in their respective places.

<<var_controller1_e0m_ip>>

<<var_controller1_mask>>

<<var_controller1_mgmt_gateway>>

9. Enter the URL where the software can be found.

Note This Web server must be pingable.

<<var_url_boot_software>>

10. Press Enter for the username, indicating no user name.

Enter

11. Enter yes to set the newly installed software as the default to be used for subsequent reboots.

y

12. Enter yes to reboot the node.

y

13. When you see "Press Ctrl-C for Boot Menu":

Ctrl-C

14. To enter Maintenance mode boot, select option 5.

5

15. When you see the question "Continue to Boot?" type yes.

y

16. To verify the HA status of your environment, enter:

ha-config show

Note If either component is not in HA mode, use the ha-config modify command to put the components in HA mode.

17. To see how many disks are unowned, enter:

disk show -a

Note No disks should be owned in this list.

18. Assign disks.

disk assign -n <<var_#_of_disks>>

Note This reference architecture allocates half the disks to each controller. However, workload design could dictate different percentages.

Note The initialization and creation of the root volume can take 75 minutes or more to complete, depending on the number of disks attached. When initialization is complete, the storage system reboots. You can continue to controller 2 configuration while the disks for controller 1 are zeroing.

Controller 2

1. Connect to the storage system console port. You should see a Loader-A prompt. However, if the storage system is in a reboot loop, press Ctrl-C to exit the Autoboot loop when you see this message:

Starting AUTOBOOT press Ctrl-C to abort...

2. If the system is at the LOADER prompt, enter the following command to boot Data ONTAP:

autoboot

3. During system boot, press Ctrl-C when prompted for the Boot Menu:

Press Ctrl-C for Boot Menu...

Note If 8.1.2 is not the version of software being booted, follow the steps to install new software. If 8.1.2 is the version being booted, then proceed with step 14, maintenance mode boot.

4. To install new software, first select option 7.

7

5. Enter yes to perform a nondisruptive upgrade.

y

6. Select e0M for the network port you want to use for the download.

e0M

7. Enter yes to reboot now.

y

8. Enter the IP address, netmask, and default gateway for e0M in their respective places.

<<var_controller2_e0m_ip>>

<<var_controller2_mask>>

<<var_controller2_mgmt_gateway>>

9. Enter the URL where the software can be found.

Note This Web server must be pingable.

<<var_url_boot_software>>

10. Press Enter for the username, indicating no user name.

Enter

11. Enter yes to set the newly installed software as the default to be used for subsequent reboots.

y

12. Enter yes to reboot the node.

y

13. When you see "Press Ctrl-C for Boot Menu":

Ctrl-C

14. To enter Maintenance mode boot, select option 5:

5

15. If you see the question "Continue to Boot?" type yes.

y

16. To verify the HA status of your environment, enter:

ha-config show

Note If either component is not in HA mode, use the ha-config modify command to put the components in HA mode.

17. To see how many disks are unowned, enter:

disk show -a

Note The remaining disks should be shown.

18. Assign disks by entering:

disk assign -n <<var_#_of_disks>>

Note This reference architecture allocates half the disks to each controller. However, workload design could dictate different percentages.

19. Reboot the controller.

halt

20. At the LOADER prompt, enter:

autoboot

21. Press Ctrl-C for Boot Menu when prompted.

Ctrl-C

22. Select option 4 for a Clean configuration and initialize all disks.

2. Navigate to the Service Processor image for installation from the Data ONTAP prompt page for your storage platform.

3. Proceed to the Download page for the latest release of the SP Firmware for your storage platform.

4. Using the instructions on this page, update the SPs on both controllers. You will need to download the .zip file to a web server that is reachable from the management interfaces of the controllers.

64-Bit Aggregates in Data ONTAP 7-Mode

A 64-bit aggregate containing the root volume is created during the Data ONTAP setup process. To create additional 64-bit aggregates, determine the aggregate name, the node on which to create it, and how many disks it will contain. Calculate the RAID group size to allow for roughly balanced (same size) RAID groups of between 12 and 20 disks (for SAS disks) within the aggregate. For example, if 52 disks were being assigned to the aggregate, select a RAID group size of 18. A RAID group size of 18 would yield two 18-disk RAID groups and one 16-disk RAID group. Keep in mind that the default RAID group size is 16 disks, and that the larger the RAID group size, the longer the disk rebuild time in case of a failure.
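As an illustration only, using the example values above (52 disks and a RAID group size of 18), the aggregate creation command would take the following form; the procedures below use site-specific variables instead:

aggr create aggr1 -B 64 -r 18 52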

Controller 1

Execute the following command to create a new aggregate:

aggr create aggr1 -B 64 -r <<var_raidsize>> <<var_#_of_disks>>

Note Leave at least one disk (select the largest disk) in the configuration as a spare. A best practice is to have at least one spare for each disk type and size.
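As an optional verification (not part of the original procedure), the spare disk list can be reviewed to confirm that at least one spare of each disk type and size remains:

aggr status -s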

Controller 2

Execute the following command to create a new aggregate:

aggr create aggr1 -B 64 -r <<var_raidsize>> <<var_#_of_disks>>

Note Leave at least one disk (select the largest disk) in the configuration as a spare. A best practice is to have at least one spare for each disk type and size.

Flash Cache

Controller 1 and Controller 2

Execute the following commands to enable Flash Cache:

options flexscale.enable on

options flexscale.lopri_blocks off

options flexscale.normal_data_blocks on

Note For directions on how to configure Flash Cache in metadata mode or low-priority data caching mode, see TR-3832: Flash Cache and PAM Best Practices Guide at: http://media.netapp.com/documents/tr-3832.pdf. Before customizing the settings, determine whether the custom settings are required or whether the default settings are sufficient.
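As an optional check, the resulting Flash Cache settings can be listed (illustrative):

options flexscale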

IFGRP LACP

Since this type of interface group requires two or more Ethernet interfaces and a switch that supports LACP, make sure that the switch is configured properly.

Controller 1 and Controller 2

Run the following command on the command line and also add it to the /etc/rc file, so it is activated upon boot:

ifgrp create lacp ifgrp0 -b ip e1a e1b

wrfile -a /etc/rc "ifgrp create lacp ifgrp0 -b ip e1a e1b"

Note All interfaces must be in down status before being added to an interface group.
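As an illustration of the preceding note, the interfaces can be brought down before they are added to the interface group (assuming the interface names used above):

ifconfig e1a down

ifconfig e1b down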

FCP

Controller 1 and Controller 2

1. License FCP.

license add <<var_fc_license>>

2. Start the FCP service.

fcp start

3. Record the WWPN or FC port name for later use.

fcp show adapters
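As an optional verification (not part of the original steps), confirm that the FCP service is running:

fcp status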

NTP

The following commands configure and enable time synchronization on the storage controller. You must have either a publicly available NTP server or your company's standard NTP server name or IP address.

Controller 1 and Controller 2

1. Set the current date, using the format [[[[CC]yy]mm]dd]hhmm[.ss]:

date <<var_date>>

For example, date 201208311436 sets the date to August 31, 2012, at 14:36.

2. Run the following commands to configure and enable the NTP server:

options timed.servers <<var_global_ntp_server_ip>>

options timed.enable on

Data ONTAP SecureAdmin

Secure API access to the storage controller must be configured.

Controller 1

1. Issue the following as a one-time command to generate the certificates used by the Web services for the API.

secureadmin setup ssl

SSL Setup has already been done before. Do you want to proceed? [no] y
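If HTTPS access to the controller is not already enabled, it can typically be turned on with the following option; this is shown as an illustrative follow-on step and should be verified against your security requirements:

options httpd.admin.ssl.enable on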

Server Configuration

FlexPod Cisco UCS Base

This section provides detailed procedures for configuring the Cisco Unified Computing System (Cisco UCS) for use in a FlexPod environment. The following steps are necessary to provision the Cisco UCS C-Series and B-Series servers and should be followed precisely to avoid improper configuration.

Cisco UCS 6248UP Fabric Interconnect A

To configure the Cisco UCS for use in a FlexPod environment, follow these steps:

1. Connect to the console port on the first Cisco UCS 6248 fabric interconnect.

Add Block of IP Addresses for KVM Access

To create a block of IP addresses for server Keyboard, Video, Mouse (KVM) access in the Cisco UCS environment, follow these steps:

Note This block of IP addresses should be in the same subnet as the management IP addresses for the Cisco UCS Manager.

1. In Cisco UCS Manager, click the LAN tab in the navigation pane.

2. Choose Pools > root > IP Pools > IP Pool ext-mgmt.

3. In the Actions pane, choose Create Block of IP Addresses.

4. Enter the starting IP address of the block, the number of IP addresses required, and the subnet and gateway information.

5. Click OK to create the IP block.

6. Click OK in the confirmation message window.

Synchronize Cisco UCS to NTP

To synchronize the Cisco UCS environment to the NTP server, follow these steps:

1. In Cisco UCS Manager, click the Admin tab in the navigation pane.

2. Choose All > Timezone Management.

3. In the Properties pane, choose the appropriate time zone in the Timezone menu.

4. Click Save Changes, and then click OK.

5. Click Add NTP Server.

6. Enter <<var_global_ntp_server_ip>> and click OK.

7. Click OK.

Edit Chassis Discovery Policy

Setting the discovery policy simplifies the addition of B-Series Cisco UCS chassis and of additional fabric extenders for further C-Series connectivity.

To modify the chassis discovery policy, follow these steps:

1. In Cisco UCS Manager, click the Equipment tab in the navigation pane and choose Equipment in the list on the left.

2. In the right pane, click the Policies tab.

3. Under Global Policies, set the Chassis/FEX Discovery Policy to 2-link or set it to match the number of uplink ports that are cabled between the chassis or fabric extenders (FEXes) and the fabric interconnects.

4. Set the Link Grouping Preference to Port Channel.

5. Click Save Changes.

6. Click OK.

Enable Server and Uplink Ports

To enable server and uplink ports, follow these steps:

1. In Cisco UCS Manager, click the Equipment tab in the navigation pane.

9. Right-click VM-Host-Infra-Fabric-B and choose Create Service Profiles from Template.

10. Enter VM-Host-Infra-0 as the service profile prefix.

11. Enter 1 as the number of service profiles to create.

12. Click OK to create the service profile.

Figure 57 Creating Service Profile from a Service Profile Template

13. Click OK in the confirmation message.

Verify that the service profiles VM-Host-Infra-01 and VM-Host-Infra-02 have been created. The service profiles are automatically associated with the servers in their assigned server pools.

14. (Optional) Choose each newly created service profile and enter the server host name or the FQDN in the User Label field in the General tab. Click Save Changes to map the server host name to the service profile name.

Add More Servers to FlexPod Unit

Additional server pools, service profile templates, and service profiles can be created in the respective organizations to add more servers to the FlexPod unit. All other pools and policies are at the root level and can be shared among the organizations.

Gather Necessary Information

After the Cisco UCS service profiles have been created, each infrastructure blade in the environment will have a unique configuration. To proceed with the FlexPod deployment, specific information must be gathered from each Cisco UCS blade and from the NetApp controllers. Insert the required information into Table 20 and Table 21.

Table 20 FC Port Names for Storage Controllers 1 and 2

Storage Controller    FCoE Port    FC Port Name
1                     1a
1                     1b
2                     1a
2                     1b

Note To gather the FC port name information, run the fcp show adapters command on the storage controller.

Table 21 vHBA WWPNs for Fabric A and Fabric B

Cisco UCS Service Profile Name    Fabric A vHBA WWPN    Fabric B vHBA WWPN
VM-Host-Infra-01
VM-Host-Infra-02

Note To gather the vHBA WWPN information, launch the Cisco UCS Manager GUI. In the navigation pane, click the Servers tab. Expand Servers > Service Profiles > root. Click each service profile and then click the Storage tab in the right pane. In Table 21, record the WWPN information that is displayed in the right pane for both the Fabric A vHBA and the Fabric B vHBA for each service profile.

Storage Networking

FlexPod Cisco Nexus Base

Table 22 FlexPod Cisco Nexus Base Prerequisite

Description
The Cisco Nexus switch must be running Cisco Nexus NX-OS 5.2(1)N1(3) or later.

The following procedures describe how to configure the Cisco Nexus switches for use in a base FlexPod environment. Follow these steps precisely; failure to do so might result in an improper configuration.

Set Up Initial Configuration

Cisco Nexus A

To set up the initial configuration for the Cisco Nexus A switch on <<var_nexus_A_hostname>>, follow these steps:

Configure the switch.

Note On initial boot and connection to the serial or console port of the switch, the NX-OS setup should automatically start and attempt to enter Power on Auto Provisioning.

Abort Power on Auto Provisioning and continue with normal setup? (yes/no) [no]:
yes

Uplink into Existing Network Infrastructure

Depending on the available network infrastructure, several methods and features can be used to uplink the FlexPod environment. If an existing Cisco Nexus environment is present, NetApp recommends using vPCs to uplink the Cisco Nexus 5548 switches included in the FlexPod environment into the infrastructure. The previously described procedures can be used to create an uplink vPC to the existing environment. Make sure to run copy run start to save the configuration on each switch after configuration is completed.

Create VSANs, Assign and Enable Virtual Fibre Channel Ports

This procedure sets up Fibre Channel over Ethernet (FCoE) connections between the Cisco Nexus 5548 switches, the Cisco UCS Fabric Interconnects, and the NetApp storage systems.

Create Zones

Cisco Nexus 5548 A

To create zones for the service profiles on switch A, follow these steps:

1. Create a zone for each service profile.

zone name VM-Host-Infra-01_A vsan <<var_vsan_a_id>>

member device-alias VM-Host-Infra-01_A

member device-alias <<var_controller1>>_1a

member device-alias <<var_controller2>>_1a

exit

zone name VM-Host-Infra-02_A vsan <<var_vsan_a_id>>

member device-alias VM-Host-Infra-02_A

member device-alias <<var_controller1>>_1a

member device-alias <<var_controller2>>_1a

exit

2. After the zones for the Cisco UCS service profiles have been created, create the zone set and add the necessary members.

zoneset name FlexPod vsan <<var_vsan_a_id>>

member VM-Host-Infra-01_A

member VM-Host-Infra-02_A

exit

3. Activate the zone set.

zoneset activate name FlexPod vsan <<var_vsan_a_id>>

exit

copy run start
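To confirm that the zone set is active and contains the expected members, the active zone set can be displayed (an illustrative verification step; repeat on switch B with the fabric B VSAN):

show zoneset active vsan <<var_vsan_a_id>>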

Cisco Nexus 5548 B

To create zones for the service profiles on switch B, follow these steps:

1. Create a zone for each service profile.

zone name VM-Host-Infra-01_B vsan <<var_vsan_b_id>>

member device-alias VM-Host-Infra-01_B

member device-alias <<var_controller1>>_1b

member device-alias <<var_controller2>>_1b

exit

zone name VM-Host-Infra-02_B vsan <<var_vsan_b_id>>

member device-alias VM-Host-Infra-02_B

member device-alias <<var_controller1>>_1b

member device-alias <<var_controller2>>_1b

exit

2. After all of the zones for the Cisco UCS service profiles have been created, create the zone set and add the necessary members.

zoneset name FlexPod vsan <<var_vsan_b_id>>

member VM-Host-Infra-01_B

member VM-Host-Infra-02_B

exit

3. Activate the zone set.

zoneset activate name FlexPod vsan <<var_vsan_b_id>>

exit

copy run start

Storage Part 2

Data ONTAP 7-Mode SAN Boot Storage Setup

The following subsections create initiator groups (igroups) on storage controller 1 and map the SAN boot LUNs to these igroups so that VMware ESXi can be installed on the boot LUNs for the two management hosts.

Map Boot LUNs to Igroups

Controller 1 Command Line Interface

lun map /vol/esxi_boot/VM-Host-Infra-01 VM-Host-Infra-01 0

lun map /vol/esxi_boot/VM-Host-Infra-02 VM-Host-Infra-02 0
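These lun map commands assume that igroups named VM-Host-Infra-01 and VM-Host-Infra-02 were created with the vHBA WWPNs recorded in Table 21. An illustrative igroup creation command (the WWPN placeholders are hypothetical) follows:

igroup create -f -t vmware VM-Host-Infra-01 <vm-host-infra-01-wwpna> <vm-host-infra-01-wwpnb>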

VMware vSphere 5.1 Setup

FlexPod VMware ESXi 5.1 FCoE 7-Mode

This section provides detailed instructions for installing VMware ESXi 5.1 in a FlexPod environment. After the procedures are completed, two FCP-booted ESXi hosts will be provisioned. These deployment procedures are customized to include the environment variables.

Note Several methods exist for installing ESXi in a VMware environment. These procedures focus on how to use the built-in Keyboard, Video, Mouse (KVM) console and virtual media features in Cisco UCS Manager to map remote installation media to individual servers and connect to their Fibre Channel Protocol (FCP) boot Logical Unit Numbers (LUNs).

Log in to Cisco UCS 6200 Fabric Interconnect

Cisco UCS Manager

The IP KVM enables the administrator to begin the installation of the operating system (OS) through remote media. It is necessary to log in to the UCS environment to run the IP KVM.

To log in to the Cisco UCS environment, follow these steps:

1. Open a Web browser and enter the Cisco UCS cluster IP address. This step launches the Cisco UCS Manager application.

2. Log in to Cisco UCS Manager by using the admin user name and password.

4. Choose the NetApp LUN that was previously set up as the installation disk for ESXi and press Enter to continue with the installation.

5. Choose the appropriate keyboard layout and press Enter.

6. Enter and confirm the root password and press Enter.

7. The installer issues a warning that existing partitions will be removed from the volume. Press F11 to continue with the installation.

8. After the installation is complete, uncheck the Mapped check box (located in the Virtual Media tab of the KVM console) to unmap the ESXi installation image.

Note The ESXi installation image must be unmapped to make sure that the server reboots into ESXi and not into the installer.

9. The Virtual Media window might issue a warning stating that it is preferable to eject the media from the guest. Because the media cannot be ejected and it is read-only, simply click Yes to unmap the image.

10. From the KVM tab, press Enter to reboot the server.

Set Up Management Networking for ESXi Hosts

Adding a management network for each VMware host is necessary for managing the host. To add a management network for the VMware hosts, follow these steps on each ESXi host:

ESXi Host VM-Host-Infra-01

To configure the VM-Host-Infra-01 ESXi host with access to the management network, follow these steps:

1. After the server has finished rebooting, press F2 to customize the system.

Log in and choose the driver ISO for version 2.1(1a). Download the ISO file. After the ISO file is downloaded, either burn the ISO to a CD or map the ISO to a drive letter. Extract the following files from the VMware directory for ESXi 5.1:

–Network - net-enic-2.1.2.38-1OEM.500.0.0.472560.x86_64.zip

–Storage - scsi-fnic-1.5.0.20-1OEM.500.0.0.472560.x86_64.zip

2. Document the saved location.

Load Updated Cisco VIC enic and fnic Drivers

ESXi Hosts VM-Host-Infra-01 and VM-Host-Infra-02

To load the updated versions of the enic and fnic drivers for the Cisco VIC, follow these steps for the hosts on each vSphere Client:

b. Verify that the clock is now set to approximately the correct time.

Note The NTP server time may vary slightly from the host time.

Move VM Swap File Location

ESXi Hosts VM-Host-Infra-01 and VM-Host-Infra-02

To move the VM swap file location, follow these steps on each ESXi host:

1. From each vSphere Client, choose the host in the inventory.

2. Click the Configuration tab to enable configurations.

3. Click Virtual Machine Swapfile Location in the Software pane.

4. Click Edit at the upper right side of the window.

5. Choose Store the swapfile in a swapfile datastore selected below.

6. Select infra_swap as the datastore in which to house the swap files.

7. Click OK to finalize moving the swap file location.

FlexPod VMware vCenter 5.1

The procedures in the following subsections provide detailed instructions for installing VMware vCenter 5.1 in a FlexPod environment. After the procedures are completed, a VMware vCenter Server will be configured along with a Microsoft SQL Server database to provide database support to vCenter. These deployment procedures are customized to include the environment variables.

Note This procedure focuses on the installation and configuration of an external Microsoft SQL Server 2008 R2 database, but other types of external databases are also supported by vCenter. For information about how to configure the database and integrate it into vCenter, see the VMware vSphere 5.1 documentation at: http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html

To install VMware vCenter 5.1, an accessible Windows Active Directory® (AD) Domain is necessary. If an existing AD Domain is not available, an AD virtual machine, or AD pair, can be set up in this FlexPod environment. See the "Appendix" section for this setup.

Build Microsoft SQL Server VM

ESXi Host VM-Host-Infra-01

To build a SQL Server virtual machine (VM) for the VM-Host-Infra-01 ESXi host, follow these steps:

1. Log in to the host by using the VMware vSphere Client.

2. In the vSphere Client, choose the host in the inventory pane.

3. Right-click the host and choose New Virtual Machine.

4. Click Custom and then click Next.

5. Enter a name for the VM. Click Next.

6. Choose infra_datastore_1. Click Next.

7. Choose Virtual Machine Version: 8. Click Next.

8. Verify that the Windows option and the Microsoft Windows Server 2008 R2 (64-bit) version are selected. Click Next.

9. Choose two virtual sockets and one core per virtual socket. Click Next.

27. In the BIOS Setup Utility window, use the right arrow key to navigate to the Boot menu. Use the down arrow key to select CD-ROM Drive. Press the plus (+) key twice to move CD-ROM Drive to the top of the list. Press F10 and Enter to save the selection and exit the BIOS Setup Utility.

32. Choose Custom (Advanced). Make sure that Disk 0 Unallocated Space is selected. Click Next to allow the Windows installation to complete.

33. After the Windows installation is complete and the VM has rebooted, click OK to set the Administrator password.

34. Enter and confirm the Administrator password and choose the blue arrow to log in. Click OK to confirm the password change.

35. After logging in to the VM desktop, from the VM console window, choose the VM menu. Under Guest, choose Install/Upgrade VMware Tools. Click OK.

36. If prompted to eject the Windows installation media before running the setup for the VMware tools, click OK, then click OK.

37. In the dialog box, choose Run setup64.exe.

38. In the VMware Tools installer window, click Next.

39. Make sure that Typical is selected and click Next.

40. Click Install.

41. Click Finish.

42. Click Yes to restart the VM.

43. After the reboot is complete, choose the VM menu. Under Guest, choose Send Ctrl+Alt+Del and then enter the password to log in to the VM.

44. Set the time zone for the VM, IP address, gateway, and host name. Add the VM to the Windows AD domain.

Note A reboot is required.

45. If necessary, activate Windows.

46. Log back in to the VM and download and install all required Windows updates.

Note This process requires several reboots.

Install Microsoft SQL Server 2008 R2

vCenter SQL Server VM

To install SQL Server on the vCenter SQL Server VM, follow these steps:

1. Connect to an AD Domain Controller in the FlexPod Windows Domain and add an admin user for the FlexPod using the Active Directory Users and Computers tool. This user should be a member of the Domain Administrators security group.

2. Log in to the vCenter SQL Server VM as the FlexPod admin user and open Server Manager.

Build and Set Up VMware vCenter VM

Build VMware vCenter VM

To build the VMware vCenter VM, follow these steps:

1. Using the instructions for building a SQL Server VM provided in the section "Build Microsoft SQL Server VM," build a VMware vCenter VM with the following configuration in the <<var_ib-mgmt_vlan_id>> VLAN:

–4GB RAM

–Two CPUs

–One virtual network interface

2. Start the VM, install VMware Tools, and assign an IP address and host name to it in the Active Directory domain.

Set Up VMware vCenter VM

To set up the newly built VMware vCenter VM, follow these steps:

1. Log in to the vCenter VM as the FlexPod admin user and open Server Manager.

15. In the Host field, enter either the IP address or the host name of the VM-Host-Infra-01 host. Enter root as the user name and the root password for this host. Click Next.

16. Click Yes.

17. Click Next.

18. Choose Assign a New License Key to the Host. Click Enter Key and enter a vSphere license key. Click OK, and then click Next.

19. Click Next.

20. Click Next.

21. Click Finish. VM-Host-Infra-01 is added to the cluster.

22. Repeat this procedure to add VM-Host-Infra-02 to the cluster.

FlexPod Cisco Nexus 1110-X and 1000V vSphere

The following sections provide detailed procedures for installing a pair of high-availability (HA) Cisco Nexus 1110-X Virtual Services Appliances (VSAs) in a FlexPod configuration. Primary and standby Cisco Nexus 1000V Virtual Supervisor Modules (VSMs) are installed on the 1110-Xs. By the end of this section, a Cisco Nexus 1000V distributed virtual switch (DVS) will be provisioned. This procedure assumes that the Cisco Nexus 1000V software version 4.2(1)SV2(1.1a) has been downloaded from www.cisco.com and expanded. This procedure also assumes that VMware vSphere 5.1 Enterprise Plus licensing is installed.

Set Up the Primary Cisco Nexus 1000V VSM

Cisco Nexus 1110-X A

To set up the primary Cisco Nexus 1000V VSM on the Cisco Nexus 1110-X A, follow these steps:

1. Continue periodically running the following command until module 2 (Cisco Nexus 1110-X B) has a status of ha-standby.

show module

2. Enter the global configuration mode and create a virtual service blade.

config t

virtual-service-blade VSM-1

dir /repository

3. If the desired Cisco Nexus 1000V ISO file (nexus-1000v.4.2.1.SV2.1.1a.iso) is not present on the Cisco Nexus 1110-X, run the copy command to copy it to the Cisco Nexus 1110-X disk. You must place the file either on an FTP server or on a UNIX® or Linux® machine (using scp) that is accessible from the Cisco Nexus 1110-X management interface. An example copy command from an FTP server is copy ftp://<<var_ftp_server>>/nexus-1000v.4.2.1.SV2.1.1a.iso /repository/.

virtual-service-blade-type new nexus-1000v.4.2.1.SV2.1.1a.iso

interface control vlan <<var_pkt-ctrl_vlan_id>>

interface packet vlan <<var_pkt-ctrl_vlan_id>>

enable primary

Enter vsb image:[nexus-1000v.4.2.1.SV2.1.1a.iso] Enter

Enter domain id[1-4095]: <<var_vsm_domain_id>>

Note This domain ID should be different than the VSA domain ID.

Enter SVS Control mode (L2 / L3): [L3] Enter

Management IP version [V4/V6]: [V4] Enter

Enter Management IP address: <<var_vsm_mgmt_ip>>

Enter Management subnet mask: <<var_vsm_mgmt_mask>>

IPv4 address of the default gateway: <<var_vsm_mgmt_gateway>>

Enter HostName: <<var_vsm_hostname>>

Enter the password for 'admin': <<var_password>>

copy run start

4. Run show virtual-service-blade summary. Continue periodically entering this command until the primary VSM-1 has a state of VSB POWERED ON.
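Once VSM-1 reports VSB POWERED ON, management connectivity to the VSM can be confirmed from a management workstation (an illustrative check):

ping <<var_vsm_mgmt_ip>>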

Set Up the Secondary Cisco Nexus 1000V VSM

To set up the secondary Cisco Nexus 1000V VSM on Cisco Nexus 1110-X B, follow these steps in two subsections:

Cisco Nexus 1110-X A

Run system switchover to activate Cisco Nexus 1110-X B.

Cisco Nexus 1110-X B

1. Log in to Cisco Nexus 1110-X B as the admin user.

config t

virtual-service-blade VSM-1

dir /repository

2. If the desired Cisco Nexus 1000V ISO file (nexus-1000v.4.2.1.SV2.1.1a.iso) is not present on the Cisco Nexus 1110-X, run the copy command to copy it to the Cisco Nexus 1110-X disk. You must place the file either on an FTP server or on a UNIX or Linux machine (using the scp command) that is accessible from the Cisco Nexus 1110-X management interface. An example copy command from an FTP server is copy ftp://<<var_ftp_server>>/nexus-1000v.4.2.1.SV2.1.1a.iso /repository/.

enable secondary

Enter vsb image: [nexus-1000v.4.2.1.SV2.1.1a.iso] Enter

Enter domain id[1-4095]: <<var_vsm_domain_id>>

Enter SVS Control mode (L2 / L3): [L3] Enter

Management IP version [V4/V6]: [V4] Enter

Enter Management IP address: <<var_vsm_mgmt_ip>>

Enter Management subnet mask: <<var_vsm_mgmt_mask>>

IPv4 address of the default gateway: <<var_vsm_mgmt_gateway>>

Enter HostName: <<var_vsm_hostname>>

3. Enter the admin password <<var_password>>.

4. Type show virtual-service-blade summary. Continue periodically entering this command until both the primary and secondary VSM-1s have a state of VSB POWERED ON.

9. Choose the first ESXi host and click the Configuration tab. In the Hardware box, choose Networking.

10. Make sure that vSphere Standard Switch is selected at the top next to View. vSwitch0 should not have any active VMkernel or VM Network ports on it. On the upper right of vSwitch0, click Remove.

11. Click Yes.

12. After vSwitch0 has disappeared from the screen, click vSphere Distributed Switch at the top next to View.

13. Click Manage Physical Adapters.

14. Scroll down to the system-uplink box and click Add NIC.

15. Choose vmnic0 and click OK.

16. Click OK to close the Manage Physical Adapters window. Two system uplinks should now be present.

17. Choose the second ESXi host and click the Configuration tab. In the Hardware box, choose Networking.

18. Make sure vSphere Standard Switch is selected at the top next to View. vSwitch0 should have no active VMkernel or VM Network ports on it. On the upper right of vSwitch0, click Remove.

19. Click Yes.

20. After vSwitch0 has disappeared from the screen, click vSphere Distributed Switch at the top next to View.

21. Click Manage Physical Adapters.

22. Scroll down to the system-uplink box and click Add NIC.

23. Choose vmnic0 and click OK.

24. Click OK to close the Manage Physical Adapters window. Two system-uplinks should now be present.

25. From the SSH client that is connected to the Cisco Nexus 1000V, run show interface status to verify that all interfaces and port channels have been correctly configured.

Figure 80 Verifying Interfaces and Port Channels

26. Run show module and verify that the two ESXi hosts are present as modules.

Figure 81 Verifying the ESXi Hosts are Shown as Modules

27. Run copy run start.

28. Type exit two times to log out of the Cisco Nexus 1000V.

FlexPod Management Tool Setup

NetApp Virtual Storage Console (VSC) 4.1 Deployment Procedure

VSC 4.1 Preinstallation Considerations

The following licenses are required for VSC on storage systems running Data ONTAP 8.1.2 7-mode:

•Protocol licenses (NFS and FCP)

•FlexClone (for provisioning and cloning only)

•SnapRestore (for backup and recovery)

•SnapManager suite

Install VSC 4.1

To install the VSC 4.1 software, follow these steps:

1. Using the instructions in section "Build Microsoft SQL Server VM," build a VSC and an OnCommand virtual machine with 4GB RAM, two CPUs, and one virtual network interface in the <<var_ib-mgmt_vlan_id>> VLAN. The virtual network interface should be a VMXNET 3 adapter. Bring up the VM, install VMware Tools, assign IP addresses, and join the machine to the Active Directory domain. Install the current version of Adobe Flash Player on the VM. Install all Windows updates on the VM.

5. In the navigation pane, choose Monitoring and Host Configuration if it is not selected by default.

6. In the list of storage controllers, right-click the first controller listed and choose Modify Credentials.

7. Enter the storage controller management IP address in the Management IP address field. Enter admin for the User name, and the admin password for the Password. Make sure that Use SSL is selected. Click OK.

8. Click OK to accept the controller privileges.

Figure 88 vSphere Client Showing Storage Controllers

Optimal Storage Settings for ESXi Hosts

VSC allows for the automated configuration of storage-related settings for all ESXi hosts that are connected to NetApp storage controllers. To use these settings, follow these steps:

1. Choose individual or multiple ESXi hosts.

2. Right-click and choose Set Recommended Values for these hosts.

Figure 89 Setting Recommended Values for the Hosts

3. Check the settings to apply to selected vSphere hosts. Click OK to apply the settings.

Note Depending on what changes have been made, the servers might require a restart for network-related parameter changes to take effect. If no reboot is required, the Status value is set to Normal. If a reboot is required, the Status value is set to Pending Reboot. If a reboot is required, the ESX or ESXi servers should be placed into Maintenance Mode, evacuated (if necessary), and restarted before proceeding.

VSC 4.1 Provisioning and Cloning Setup

Provisioning and cloning in VSC 4.1 helps administrators to provision both VMFS and NFS datastores at the data center, datastore cluster, or host level in VMware environments.

1. In a vSphere Client connected to vCenter, choose Home > Solutions and Applications > NetApp and click the Provisioning and Cloning tab on the left. Choose Storage controllers.

2. In the main part of the window, right-click <<var_controller1>> and choose Resources.

3. In the <<var_controller1>> resources window, use the arrows to move ifgrp0-<<var_nfs_vlan_id>>, esxi_boot, and aggr1 to the right. Also choose the Prevent further changes check box as shown in Figure 91.

5. In the main part of the window, right-click <<var_controller2>> and choose Resources.

6. In the <<var_controller2>> resources window, use the arrows to move ifgrp0-<<var_nfs_vlan_id>>, infra_datastore_1, and aggr1 to the right. Choose the Prevent further changes check box as shown in Figure 92.

VSC 4.1 Backup and Recovery

Adding Storage Systems to the Backup and Recovery Capability

Before you begin using the Backup and Recovery capability to schedule backups and restore your datastores, virtual machines, or virtual disk files, you must add the storage systems that contain the datastores and virtual machines for which you are creating backups.

Note The Backup and Recovery capability does not use the user credentials from the Monitoring and Host Configuration capability.

Follow these steps to add the storage systems to the Backup and Recovery capability:

Figure 93 Adding Storage System to Backup and Recovery Capability

1. Click Backup and Recovery and then click Setup.

2. Click Add. The Add Storage System dialog box appears.

3. Type the DNS name or IP address and the user credentials of the storage cluster.

4. Click Add to add the storage cluster.

Backup and Recovery Configuration

To configure a backup job for a datastore, follow these steps:

1. Click Backup and Recovery, then choose Backup.

2. Click Add. The Backup wizard appears.

Figure 94 Configuring Backup

3. Type a backup job name and description.

4. If you want to create a VMware snapshot for each backup, choose Perform VMware consistency snapshot in the options pane.

5. Click Next.

6. Choose infra_datastore_1 and then click to move it to the selected entities. Click Next.

Figure 95 Selecting Entities to Backup

7. Choose one or more backup scripts, if available, and click Next.

8. Choose the hourly, daily, weekly, or monthly schedule that you want for this backup job and click Next.

Figure 96 Setting Schedule for Backup

9. Use the default vCenter credentials or type the user name and password for the vCenter Server and click Next.

11. Review the summary page and click Finish. If you want to run the job immediately, choose the Run Job Now option and then click Finish.

Figure 98 Summary of Backup Settings

12. On the management interface of storage controller 2, automatic Snapshot copies of the infrastructure datastore volume can be disabled by typing the command:

snap sched infra_datastore_1 0 0 0

13. Also, to delete any existing automatic Snapshot copies that have been created on the volume, type the following commands:

snap list infra_datastore_1

snap delete infra_datastore_1 <snapshot name>
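To confirm that the automatic Snapshot schedule has been disabled, the current schedule can be displayed (an illustrative check):

snap sched infra_datastore_1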

OnCommand Unified Manager 5.1

Create Raw Device Mapping (RDM) Datastore

From the VMware vCenter Client, follow these steps:

1. In the VMware vCenter Client, from Home > Inventory > Hosts and Clusters, right-click the FlexPod_Management cluster.

2. Choose NetApp > Provisioning and Cloning > Provision Datastore.

3. Make sure that Infra_Vserver is selected in the Vserver drop-down menu and click Next.

4. Choose VMFS as the Datastore type and click Next.

5. Choose FCP as the Protocol type, set the Size to 100GB, enter RDM_Map as the datastore name, check the check box to create a new volume container, choose aggr02 as the Aggregate, check the Thin Provision check box, and click Next.

6. Verify settings and click Apply.

Install .NET Framework 3.5.1 Feature

From the Virtual Storage Console (VSC) and OnCommand VM:

1. Log in to the VSC and OnCommand VM as the FlexPod admin and open Server Manager.

Install SnapDrive 6.4.2

2. Browse to the location of the SnapDrive installation package and double-click the executable file. This launches the SnapDrive installation wizard and opens the Welcome page.

3. Click Next in the Welcome page of the SnapDrive installation wizard.

4. If this is a new SnapDrive installation, read and accept the license agreement. Click Next.

5. If this is a SnapDrive upgrade, choose Modify/Upgrade in the Program Maintenance page. Click Next.

6. choose "Per Storage System" as the license type. Click Next.

Note the following:

•In the case of upgrading SnapDrive, the license information will already be populated.

•In the case of selecting storage system licensing, SnapDrive can be installed without entering a license key. SnapDrive operations can be executed only on storage systems that have a SnapDrive or SnapManager license installed.

•In the case of clustered Data ONTAP 8.1-based systems, the storage system licensing for SnapDrive is bundled with the other SnapManager product licenses. They are now a single license called the SnapManager_suite license.

7. In the Customer Information page, type the user name and organization name. Click Next.

8. The Destination Folder page prompts for a directory in which to install SnapDrive on the host. For new installations, by default this directory is C:\Program Files\NetApp\SnapDrive\. To accept the default, click Next.

Note the following:

•For a 7-Mode environment, either the Express edition or the Standard edition of the software is available.

•If the infrastructure has both 7-Mode and clustered Data ONTAP systems, two OnCommand instances are needed to manage the respective 7-Mode or clustered Data ONTAP systems.

9. Choose Standard edition and click Next.

10. Enter the 14-character license key when prompted and click Next.

11. Choose the installation location, if different from the default.

Note Do not change the default location of the local Temp Folder directory, or the installation will fail. The installer automatically extracts the installation files to the %TEMP% location.

12. Follow the remaining setup prompts to complete the installation.

From a command prompt run as an administrator, follow these steps:

13. In preparation for the database movement to the previously created LUN from local storage, stop all OnCommand Unified Manager services and verify that the services have stopped.

dfm service stop

dfm service list

14. Move the data to the previously created LUN.

Note The dfm datastore setup help command provides switch options available with the command.

dfm datastore setup O:\

15. Start OnCommand Unified Manager and then verify that all services have started.

dfm service start

dfm service list

16. Generate an SSL key.

dfm ssl server setup

Key Size (minimum = 512..1024..2048..) [default=512]: 1024

Certificate Duration (days) [default=365]: Enter

Country Name (e.g., 2 letter code): <<var_country_code>>

State or Province Name (full name): <<var_state>>

Locality Name (city): <<var_city>>

Organization Name (e.g., company): <<var_org>>

Organizational Unit Name (e.g., section): <<var_unit>>

Common Name (fully-qualified hostname): <<var_oncommand_server_fqdn>>

Email Address: <<var_admin_email>>

Note The SSL key command fails if certain command-line inputs do not follow the specified character lengths (for example, the country code must be exactly two letters), and any multiword entry must be enclosed in double quotation marks, for example, "North Carolina."

17. Turn off automatic discovery.

dfm option set discoverEnabled=no

18. Set the protocol security options for communication with various devices.

dfm service stop http

dfm option set httpsEnabled=yes

dfm option set httpEnabled=no

dfm option set httpsPort=8443

dfm option set hostLoginProtocol=ssh

dfm option set hostAdminTransport=https

Note The HTTPS and SSH protocols must be enabled on the storage controllers that are monitored by OnCommand Unified Manager.

19. Restart the DataFabric Manager HTTP services to make sure that the security options take effect.

dfm service start http

20. Configure OnCommand Unified Manager to use SNMPv3 to poll configuration information from the storage devices. Use the user name and password generated for SNMPv3.

21. Set up OnCommand Unified Manager to send AutoSupport through HTTPS to NetApp.

dfm option set SMTPServerName=<<var_mailhost>>

dfm option set autosupportAdminContact=<<var_storage_admin_email>>

dfm option set autosupportContent=complete

dfm option set autosupportProtocol=https

22. Manually add the storage cluster to the OnCommand server.

dfm host add <<var_cluster1>>

dfm host add <<var_cluster2>>

23. Set the array login and password credentials in OnCommand Unified Manager. This is the root or administrator account.

dfm host set <<var_cluster1>> hostlogin=root

dfm host set <<var_cluster1>> hostPassword=<<var_password>>

dfm host set <<var_cluster2>> hostlogin=root

dfm host set <<var_cluster2>> hostPassword=<<var_password>>

24. List the storage systems discovered by OnCommand Unified Manager and their properties.

dfm host list

dfm host get <<var_cluster1>>

dfm host get <<var_cluster2>>

25. Test the network configuration and connectivity between the OnCommand server and the named host. This test helps identify misconfigurations that prevent the OnCommand server from monitoring or managing a particular appliance. Run this test first if a problem with the OnCommand server occurs on only some of the appliances.

dfm host diag <<var_cluster1>>

dfm host diag <<var_cluster2>>

26. (Optional) Configure an SNMP trap host.

dfm alarm create -T <<var_oncommand_server_fqdn>>

27. Configure OnCommand Unified Manager to generate and send e-mails for every event whose importance ranks as critical or higher.

dfm alarm create -E <<var_admin_email>> -v Critical

28. Create a manual backup.

dfm backup create -t snapshot

29. Schedule backups to a virtual backup directory on the 100GB FC LUN.

Install NetApp NFS Plug-in for VMware VAAI

2. Scroll down to locate the NetApp NFS Plug-in for VMware VAAI, choose the ESXi platform, and click Go.

3. Download the .vib file of the most recent plug-in version.

4. Verify that the file name of the .vib file matches the predefined name that VSC 4.1 for VMware vSphere uses: NetAppNasPlugin.vib.

Note If the .vib file name does not match the predefined name, rename the .vib file. Neither the VSC client nor the NetApp vSphere Plug-in Framework (NVPF) service needs to be restarted after the .vib file is renamed.

Note The default directory path is C:\Program Files\NetApp\Virtual Storage Console\. However, VSC 4.1 for VMware vSphere lets you change this directory. For example, if you are using the default installation directory, the path to the NetAppNasPlugin.vib file is the following: C:\Program Files\NetApp\Virtual Storage Console\etc\vsc\web\NetAppNasPlugin.vib.

6. In the VMware vSphere Client connected to the vCenter Server, choose Home > Solutions and Applications > NetApp.

Note The Monitoring and Host Configuration capability automatically installs the plug-in on the hosts selected.

Figure 110 Selecting All the ESXi Hosts for Installing the NFS Plug-in
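If the plug-in must be installed on a host manually instead of being pushed by VSC, the standard esxcli VIB installation command can be run from the ESXi shell. This is only a sketch; the datastore path below is a placeholder for wherever the .vib file has been copied:

esxcli software vib install -v /vmfs/volumes/infra_datastore_1/NetAppNasPlugin.vib

The host must then be rebooted for the plug-in to take effect, which matches the maintenance-mode and reboot steps that follow.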

10. Choose Home > Inventory > Host and Clusters.

11. For each host (one at a time), right-click the host and choose Enter Maintenance Mode.

Figure 111 Entering Maintenance Mode in vSphere Client

12. Click Yes, click Yes again, and then click OK.

Note It might be necessary to migrate all VMs away from the host.

13. After the host is in maintenance mode, right-click the host and choose Reboot.

14. Enter a reason for the reboot and click OK.

15. After the host reconnects to the vCenter Server, right-click the host and choose Exit Maintenance Mode.

16. Make sure that all ESXi hosts are rebooted.
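The maintenance-mode and reboot cycle in the preceding steps can also be scripted with VMware PowerCLI. The following is only a minimal sketch; the vCenter and host names (vcenter.example.local and esxi-01) are placeholders, and the host is assumed to have no running VMs:

Connect-VIServer vcenter.example.local

Get-VMHost esxi-01 | Set-VMHost -State Maintenance

Get-VMHost esxi-01 | Restart-VMHost -Confirm:$false

Get-VMHost esxi-01 | Set-VMHost -State Connected

Wait for the host to reconnect to the vCenter Server before setting its state back to Connected, and repeat for each ESXi host.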

NetApp VASA Provider

Install NetApp VASA Provider

To install NetApp VASA Provider, follow these steps:

1. Using the previous instructions for virtual machine creation, build a VASA Provider virtual machine with 2GB RAM, two CPUs, and one virtual network interface in the <<var_ib-mgmt_vlan_id>> VLAN. The virtual network interface should be a VMXNET 3 adapter. Bring up the VM, install VMware Tools, assign IP addresses, and join the machine to the Active Directory domain.
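If you prefer to script the creation of the VM shell rather than use the vSphere Client, a minimal PowerCLI sketch is shown below. The VM name, host name, and port group name are placeholders, and the network adapter type is changed to VMXNET 3 after creation; installing the guest OS, VMware Tools, and joining the domain still follow the previous virtual machine instructions:

New-VM -Name NetAppVP -VMHost esxi-01 -NumCpu 2 -MemoryMB 2048 -NetworkName "IB-MGMT"

Get-VM NetAppVP | Get-NetworkAdapter | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false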

4. Run the executable file netappvp-1-0-winx64.exe to start the installation.

Figure 112 Preparing to Install NetApp VASA provider

5. On the Welcome page of the installation wizard, click Next.

6. Choose the installation location and click Next.

Figure 113 NetApp VASA Provider Installation Location

7. On the Ready to Install page, click Install.

Figure 114 Ready to Install NetApp VASA Provider

8. Click Finish to complete the installation.

Figure 115 Installation Completed

Configure NetApp VASA Provider

After NetApp VASA Provider is installed, it must be configured to communicate with the vCenter Server and retrieve storage system data. During configuration, specify a user name and password to register NetApp VASA Provider with the vCenter Server, and then add the storage systems before completing the process.

Add Storage Systems

The NetApp VASA Provider dialog box can be used to add the storage systems from which NetApp VASA Provider collects storage information. Storage systems can be added at any time.

To add a storage system, follow these steps:

1. Double-click the VASA Configuration icon on your Windows desktop or right-click the icon and choose Open to open the NetApp FAS/V-Series VASA Provider dialog box.

2. Click Add to open the Add Storage System dialog box.

Figure 116 Adding Storage Systems

3. Enter the host name or IP address, port number, and user name and password for the storage system.

Figure 117 Entering Storage System Login Credentials

4. Click OK to add the storage system.

5. Add both storage systems to the VASA Provider.

Register NetApp VASA Provider with vCenter Server

To establish a connection between the vCenter Server and NetApp VASA Provider, NetApp VASA Provider must be registered with the vCenter Server. The vCenter Server communicates with NetApp VASA Provider to obtain the information that NetApp VASA Provider collects from registered storage systems.

To register NetApp VASA Provider with the vCenter Server, follow these steps:

1. Under Alarm Thresholds, accept or change the default threshold values for volume and aggregate. These values specify the percentages at which a volume or aggregate is full or nearly full.

The default threshold values are the following:

–85% for a nearly full volume

–90% for a full volume

–90% for a nearly full aggregate

–95% for a full aggregate

Note After you finish registering NetApp VASA Provider with the vCenter Server, any changes made to the default threshold values are saved only when you click OK.

2. Under VMware vCenter, enter the host name or IP address of the vCenter Server machine and the user name and password for the vCenter Server.

3. Specify the port number to use, or accept the default port number for the vCenter Server.

4. Click Register Provider.

5. Click OK to commit all the details and register NetApp VASA Provider with the vCenter Server.

Note To use the vSphere Client to register NetApp VASA Provider with the vCenter Server, copy the URL from the VASA URL field and paste it into the vCenter Server.

Figure 118 Registering NetApp VASA Provider with VMware vCenter

6. Click OK to close the VASA Configuration.

Verify VASA Provider in vCenter

1. Log in to vCenter using vSphere Client.

2. Click the Home tab at the upper-left portion of the window.

3. In the Administration section, click Storage Providers.

4. Click Refresh All. The NetApp VASA Provider (NVP) should now appear as a vendor provider.

Figure 119 NetApp VASA Provider is Listed as Vendor Provider

5. Click the Home tab in the upper-left portion of the window.

6. In the Inventory section, click Datastores and Datastore Clusters.

7. Expand the vCenter and the data center. Choose a datastore.

8. Click the Summary tab. Verify that a System Storage Capability appears under Storage Capabilities.

27. In the BIOS Setup Utility window, use the right arrow key to navigate to the Boot menu. Use the down arrow key to choose CD-ROM Drive. Press the plus (+) key twice to move CD-ROM Drive to the top of the list. Press F10 and Enter to save the selection and exit the BIOS Setup Utility.

59. Type the FQDN of the Windows domain for this FlexPod and click Next.

Figure 125 Naming the Forest Root Domain

60. Choose the appropriate forest functional level and click Next.

61. Keep DNS server selected and click Next.

Figure 126 Selecting Additional Options for the Domain Controller

62. If one or more DNS servers exist from which this domain can be resolved, click Yes to create a DNS delegation. If this AD server is being created on an isolated network, click No to skip creating a DNS delegation. The remaining steps in this procedure assume that a DNS delegation is not created. Click Next.

63. Click Next to accept the default locations for database and log files.

71. Expand the server and Forward Lookup Zones. Choose the zone for the domain. Right-click and choose New Host (A or AAAA). Populate the DNS server with host records for all components in the FlexPod.
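The host (A) records can also be added from an elevated command prompt on the domain controller with the built-in dnscmd utility instead of the DNS Manager GUI. The zone name, host names, and IP addresses below are placeholders for illustration only:

dnscmd /recordadd flexpod.local ucsm A 192.168.175.8

dnscmd /recordadd flexpod.local vcenter A 192.168.175.9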

72. (Optional) Build a second AD server VM. Add this server to the newly created Windows Domain and activate Windows. Install Active Directory Domain Services on this machine. Launch dcpromo.exe at the end of this installation. Choose to add a domain controller to a domain in an existing forest. Add this domain controller to the domain created earlier. Complete the installation of this second domain controller. After vCenter Server is installed, affinity rules can be created to keep the two AD servers running on different hosts.
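After vCenter Server is installed, the rule that keeps the two AD servers on different hosts can be created either in the cluster DRS settings of the vSphere Client or with PowerCLI. The following is only a sketch; the cluster name FlexPod_Management and the VM names AD1 and AD2 are placeholders:

New-DrsRule -Cluster (Get-Cluster FlexPod_Management) -Name "Separate-AD-Servers" -KeepTogether $false -VM (Get-VM AD1, AD2)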

Configuring Cisco VM-FEX with the UCS Manager

Background

FlexPod for VMware utilizes distributed virtual switching to manage the virtual access layer from a central point. While previous versions of FlexPod described only the use of the Cisco Nexus 1000V, the built-in, hardware-based virtual switching functionality of Cisco UCS, known as VM-FEX, is another option. Using VM-FEX has several advantages:

•There is no need for extra hardware such as the Cisco Nexus 1110-X.

•Cisco UCS provides a central configuration environment with which the administrator is already familiar.

•Compared to running the Cisco Nexus 1000V as virtual appliances within vCenter itself, this setup avoids a single point of failure and the restart issues that can occur when the distributed switches are required for the network functionality of the very ESX servers on which they run, a common problem that must be addressed in the solution design.

In other words, VM-FEX dramatically simplifies hardware setup and operation by making optimal use of the new hardware features.

Process Overview

This section provides a detailed overview of VM-FEX setup, configuration, and operation using Cisco UCS Manager.

8. The remaining sections of the Create BIOS Policy wizard (RAS Memory, Serial Port, USB, PCI Configuration, Boot Options, and Server Management) can retain the Platform Default option. Click Next on each of these windows and then click Finish to complete the wizard.

Create a VM-FEX Enabled Service Profile Template

To create a Cisco UCS service profile using VM-FEX, clone a previously defined Cisco UCS service profile and apply the dynamic vNIC and BIOS policies by following these steps in the Cisco UCS Manager:

1. Click the Servers tab in the left navigation pane and expand the Service Profile Templates.

2. Right-click VM-Host-Infra-Fabric-A and choose Create a Clone.

3. Type a clone name and choose an organizational owner for the new service profile template.

Figure 131 Cloning Service Profile Template

4. Click OK when notified that the service profile clone was successfully created. The Service Template navigation window appears.

5. Click the Network tab and choose Change Dynamic vNIC Connection Policy under the Actions section of the working pane. The Change Dynamic vNIC Connection Policy form appears.

6. Choose Use a Dynamic vNIC Connection Policy from the drop-down menu, and then choose the previously created dynamic vNIC policy. Click OK.

Figure 132 Changing the Dynamic vNIC Connection Policy

7. Click OK when notified that the vNIC connection policy was successfully modified.

8. From the Service Template properties window, click the Policies tab.

9. Expand the BIOS Policies in the Policies section of the working pane.

10. Choose the previously defined FEX BIOS policy and click OK.

Figure 133 Choosing a BIOS Policy

Create VM-FEX Service Profile

To create service profiles from the service profile template, follow these steps:

1. In Cisco UCS Manager, click the Servers tab in the navigation pane.

Standard Operations

The VM-FEX environment supports the addition of port profiles to the distributed switch. The following section describes how to add these distributed port groups.

Add Distributed Port Group to the VDS (vSphere Distributed Switch)

Port Profiles

Port profiles contain the properties and settings that you can use to configure virtual interfaces in Cisco UCS for VM-FEX. The port profiles are created and administered in Cisco UCS Manager. After a port profile is created, assigned to, and actively used by one or more distributed virtual switches (DVSs), any changes made to the networking properties of the port profile in Cisco UCS Manager are immediately applied to those DVSs.

In VMware vCenter, a port profile is represented as a port group. Cisco UCS Manager pushes the port profile names to VMware vCenter, which displays the names as port groups. None of the specific networking properties or settings in the port profile is visible in VMware vCenter. You must configure at least one port profile client for a port profile if you want Cisco UCS Manager to push the port profile to VMware vCenter.

Port Profile Client

The port profile client determines the DVSs to which a port profile is applied. By default, the port profile client specifies that the associated port profile applies to all DVSs in VMware vCenter. However, you can configure the client to apply the port profile to all DVSs in a specific data center or data center folder or to only one DVS.

Create a VM-FEX Port Profile

Follow these steps to create VM-FEX port profiles for use on the Cisco UCS distributed virtual switch.

1. Log in to Cisco UCS Manager.

2. Click the VM tab.

3. Right-click Port Profile > Create Port Profile.

4. Enter the name of the Port Profile.

5. (Optional) Enter a description.

6. (Optional) Choose a QoS policy.

7. (Optional) Choose a network control policy.

8. Enter the maximum number of ports that can be associated with this port profile. The default is 64 ports.

Note The maximum number of ports that can be associated with a single DVS is 4096. If the DVS has only one associated port profile, that port profile can be configured with up to 4096 ports. However, if the DVS has more than one associated port profile, the total number of ports associated with all of those port profiles combined cannot exceed 4096.
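For example, a DVS that has three associated port profiles configured for 2048, 1024, and 1024 ports already accounts for all 4096 ports, so no additional ports can be associated with that DVS.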

9. (Optional) Choose High Performance.

Note Select None to have traffic to and from a virtual machine pass through the DVS.

Select High Performance to have traffic to and from a virtual machine bypass the DVS and the hypervisor and travel directly between the virtual machines and a virtual interface card (VIC) adapter.

10. Choose the VLAN.

11. Choose Native-VLAN.

12. Click OK.

Figure 142 Creating Port Profile

Or

Figure 143 Creating Port Profile with High Performance

The port profile created will appear in the working pane.

Create the Port Profile Client

To create the client profile for use in the Cisco UCS virtual distributed switch, follow these steps:

1. In the navigation pane under the VM tab, expand All > Port Profiles. Right-click the Port Profile and click Create Profile Client.

2. Choose the data center created in your vCenter Server, folder, and distributed virtual switch created in section "Integrate Cisco UCS with vCenter."

3. Click OK.

Figure 144 Creating Profile Client

Or

Figure 145 Creating Profile Client for DVS-FEX

After the profile client is created, the port profile appears as a port group in the distributed virtual switch DVS-FEX in vCenter.

Repeat these steps as necessary for the workloads in the environment.

Migrate Networking Components for ESXi Hosts to Cisco DVS-FEX

vCenter Server VM

To migrate the networking components for the ESXi hosts to the Cisco FEX-DVS, follow these steps:

1. In the VMware vSphere client connected to vCenter, choose Home > Networking.

9. Choose the first ESXi host and click the Configuration tab. In the Hardware field, choose Networking.

10. Make sure that vSphere Standard Switch is selected at the top next to View. vSwitch0 should not have any active VMkernel or VM Network ports on it. On the upper right of vSwitch0, click Remove.

11. Click Yes.

12. After vSwitch0 has disappeared from the screen, click vSphere Distributed Switch at the top next to View.

13. Click Manage Physical Adapters.

14. In the uplink-pg-DVS-FEX field click Add NIC.

15. Choose vmnic0 and click OK.

16. Click OK to close the Manage Physical Adapters window. Two uplinks should now be present.

17. Choose the second ESXi host and click the Configuration tab. In the Hardware field, choose Networking.

18. Make sure vSphere Standard Switch is selected at the top next to View. vSwitch0 should have no active VMkernel or VM Network ports on it. On the upper right of vSwitch0, click Remove.

19. Click Yes.

20. After vSwitch0 has disappeared from the screen, click vSphere Distributed Switch.

21. Click Manage Physical Adapters.

22. In the uplink-pg-DVS-FEX field click Add NIC.

23. Choose vmnic0 and click OK.

24. Click OK to close the Manage Physical Adapters window. Two uplinks should now be present.

VM-FEX Virtual Interfaces

In a blade server environment, the number of vNICs and vHBAs configurable for a service profile is determined by adapter capability and the amount of virtual interface (VIF) namespace available in the adapter. In Cisco UCS, portions of VIF namespace are allotted in chunks called VIFs. Depending on your hardware, the maximum number of VIFs is allocated on a predefined, per-port basis.

The maximum number of VIFs varies based on hardware capability and port connectivity. For each configured vNIC or vHBA, one or two VIFs are allocated. Standalone vNICs and vHBAs use one VIF, and failover vNICs and vHBAs use two.
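For example, under this allocation a service profile with four standalone vNICs, two failover vNICs, and two failover vHBAs consumes (4 × 1) + (2 × 2) + (2 × 2) = 12 VIFs.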

The following variables affect the number of VIFs available to a blade server, and therefore, the number of vNICs and vHBAs you can configure for a service profile.

•The maximum number of VIFs supported on your fabric interconnect

•How the fabric interconnects are cabled

•Whether the fabric interconnect and IOM are configured in fabric port channel mode

For more information about the maximum number of VIFs supported by your hardware configuration, refer to the Cisco UCS 6100 and 6200 Series Configuration Limits for Cisco UCS Manager for your software release. Table 23 and Table 24 reference these limits.