The method described in that article is still valid for ESXi 5.0, since the old vicfg and esxcfg commands are still available; however, with version 5.0 you can achieve the same result using the new esxcli namespaces. Here is how to do it.

The first task is to get a list of the iSCSI HBAs in order to know the name of the software iSCSI initiator.
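
With the 5.0 esxcli namespaces this is a one-liner; a minimal sketch follows. On most hosts the software initiator shows up as something like vmhba33 with the iscsi_vmk driver, but the exact name will vary on your system.

esxcli iscsi adapter list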

The following post will discuss iSCSI initiator configuration in Red Hat Enterprise Linux 5; this method is also applicable to all RHEL5 derivatives. The iSCSI LUNs will be provided by an HP P4000 array.

First of all we need to get and install the iscsi-initiator-utils RPM package. You can use yum to get and install the package from any supported repository for CentOS or RHEL. You can also download the package from Red Hat Network if you have a valid RHN account and your system doesn't have an internet connection.
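
With a configured repository the installation is a single command; a minimal sketch, assuming yum can reach a CentOS or RHEL repository:

yum install -y iscsi-initiator-utils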

Please take into account that iSCSI is supported with Virtual Connect since version 3.10 of Virtual Connect Manager, and only with the Flex-10 and FlexFabric modules. However, I'm going to leave iSCSI configuration for a future post, since I didn't have many opportunities to try it with VC, and write only about Fibre Channel.

Before we start with the wizard and all the setup tasks, it is important to explain the Virtual Connect storage fundamentals.

The first concept to understand is the set of key Fibre Channel port types. There are three basic FC ports:

N_Port (Node Port) – An N_Port is a port within a node that provides Fibre Channel attachment like an HBA port. VC-FC module uplink ports are N_ports.

F_Port (Fabric Port) – This is a port on an FC switch connected to an N_Port and addressable by it. These are commonly found in edge or core switches. The VC-FC module's downlink ports are F_Ports in order to allow the HBAs to log into them.

E_Port (Expansion Port) – These are switch ports used for switch-to-switch connections known as Inter Switch Link or ISL.

Additionally there are two other port types; however, these are not typically seen in Virtual Connect environments.

The next key concept to understand is N_Port ID Virtualization, or NPIV. It's a T11 FC standard that can be defined as a Fibre Channel facility allowing multiple N_Port_IDs to be assigned to a single N_Port, that is, a physical N_Port having multiple port WWNs. Of course the VC-FC module must be connected to a Fibre Channel switch that supports NPIV.

And how does Virtual Connect manage all this port stuff? I believe an image is worth a thousand words, so the diagrams below illustrate how FC ports and the SAN are managed with and without Virtual Connect.

As can be seen, the SAN switches that can be used in any blade enclosure, including the HP ones, like the Cisco MDS 9124e, are part of the SAN Fabric; that means the enclosure itself is part of the Fabric. These switches are connected to the SAN core via E_Ports, or ISLs.

In this configuration the SAN boundary has been moved out of the enclosure. The VC-FC module includes an HBA Aggregator, which is an NPIV device. It transparently passes the signals from multiple HBAs to a single switch port.

Here is how the whole process would go:

The VC-FC module uplink port issues a Login Request, a FLOGI, to the SAN and advertises itself as an NPIV-capable port.

Upon receiving an ACCept from the Fabric it would begin to process server requests.

The server HBAs would begin the normal Fabric login process with their WWNs.

The VC-FC module would translate the FLOGI requests into FDISC requests, since a single N_Port can only receive one FLOGI request.

The SAN switch would reply with an ACCept and provide the HBAs with Fabric addresses.

The ACCept frames would reach the HBAs uninterrupted.

From then on all the traffic will be carried over the same link for all HBA connections.

Now that the basic concepts are explained and, hopefully, clear, it's time to configure the storage.

We are going to use the Fibre Channel Setup Wizard to:

Identify the World Wide Names (WWNs) to be used by the servers.

Define the available SAN fabrics.

You can launch the wizard either from the Tools menu in the Virtual Connect page or right after finishing the Network Setup Wizard. From the welcome screen click Next and move into the World Wide Name (WWN) Settings page.

On this first page you can specify whether you want to use the WWN settings that come with the Fibre Channel HBA card or the HP Virtual Connect-supplied WWN settings.

Virtual Connect will assign both a port WWN and a node WWN to a Fibre Channel port; the node WWN will always be the same as the port WWN incremented by one.
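
As a purely illustrative example (these values are made up, not taken from an actual VC range), a port WWN of 50:06:0B:00:00:C2:62:00 would be paired with the node WWN 50:06:0B:00:00:C2:62:01.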

There is a key advantage to having Virtual Connect assign the WWNs: since it maintains a consistent storage identity, blades can be replaced in case of failure without affecting the external SAN.

In the wizard select Virtual Connect assigned WWNs and click Next to move into the Assigned WWNs screen.

This screen is very similar to the MAC address range selection screen we saw in the previous post. Here you have to choose between a user-defined WWN range and an HP-defined one. You must ensure that the selected range is unique within the environment.

Next we are going to define the Fabric. First you'll be presented with a screen asking if you want to define the fabric.

After that we have to enter the Fabric name, assign the uplink ports and configure the speed.

After applying the configuration the wizard will move to the next screen, where it will ask if you want to create more Fabrics; for the purposes of this example I decided to create another one named fabric_prod2.

When you are done with the second fabric finish the wizard and the storage setup will be done. You can review and modify the configuration from the Virtual Connect main interface.

The next post will be the last of the series and I will discuss Virtual Connect Server Profiles. As always any feedback would be welcome :-)

During a previous project I had the opportunity to work very closely with the EMC people and Symmetrix arrays; in fact I made a couple of very good friends on that project. At the time I created a bunch of text files for my own reference about EMC SRDF and Timefinder technologies.

Today I decided to review those files, give them some order, well, sort of, and put them here as a survival guide/quick reference in the hope that they will be of help to any of you. The first of these guides will be about EMC Symmetrix Timefinder.

I don't have sample output for every command since it's been more than a year since I last worked with Timefinder, so to complement my own samples I took several outputs from the Timefinder manuals.

This is not a complete Timefinder usage guide, just my personal notes taken from my direct experience with the product.

Timefinder Basics

EMC Timefinder is a replication solution that creates full volume copies. For the full-HP guys out there this is very similar to the XP or EVA Business Copy product.
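
To give a feel for the basic TimeFinder/Mirror workflow with SYMCLI, here is a minimal sketch; the device group name prod_dg is hypothetical and it assumes BCV devices are already associated with the group:

symmir -g prod_dg establish -full
symmir -g prod_dg query
symmir -g prod_dg split

The establish synchronizes the BCVs with the standard devices, query shows the copy progress and split makes the BCV copies usable on their own.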

And we are done. As I said this is not a full guide, so if there is anything that you don't get please leave a comment and I will try to clarify. Also if any of you have additional tips or "recipes" for Timefinder please comment :-)

Juanma.


This post will outline the necessary steps to create a standard (non-multisite) HP P4000 cluster with two nodes. Creating a two-node cluster is a very similar process to the one-node cluster described in my first post about P4000 systems.

The cluster is composed of:

2 HP P4000 Virtual Storage Appliances

1 HP P4000 Failover Manager

The Failover Manager, or FOM, is a specialized version of the SAN/iQ software. It runs as a virtual appliance in VMware; although the most common setup is to run it on an ESX/ESXi server, running it under VMware Player or VMware Server is also supported.

The FOM integrates into a management group as a real manager and is intended only to provide quorum to the cluster; one of its main purposes is to provide quorum in multi-site clusters. I decided to use it in this post to provide an example as real as possible.

To setup this cluster I used virtual machines inside VMware Workstation, but the same design can also be created with physical servers and P4000 storage systems.

From the Getting started screen launch the clusters wizard.

Select the two P4000 storage systems and enter the name of the Management Group.

During the group creation the wizard will ask to create a cluster; choose the two nodes as members of the cluster (we will add the FOM later) and assign a name to the cluster.

Next assign a virtual IP address to the cluster.

Enter the administrative level credentials for the cluster.

Finally the wizard will ask if you want to create volumes in the cluster; I didn't take that option and finished the cluster creation process. You can also add the volumes later, as I described in one of my previous posts.

Now that the cluster is formed we are going to add the Failover Manager.

It is important to note that the FOM requires the same configuration as any VSA, as I depicted in my first post about the P4000 storage systems.

In the Central Management Console right-click on the FOM and select Add to existing management group.

Select the management group and click Add.

With this operation the cluster configuration is done. If everything went well in the end you should have something like this.

Even if you have access to enterprise-class storage appliances, like the HP P4000 VSA or the EMC Celerra VSA, an Openfiler storage appliance can be a great asset to your homelab. Especially if you, like myself, run an "all virtual" homelab within VMware Workstation, since Openfiler is far less resource hungry than its enterprise counterparts.

Simon Seagrave (@Kiwi_Si) from TechHead.co.uk wrote an excellent article explaining how to add iSCSI LUNs from an Openfiler instance to your ESX/ESXi servers; if iSCSI is your "thing" you should check it out.

In this article I’ll explain how-to configure a NFS share in Openfiler and then add it as a datastore to your vSphere servers. I’ll take for granted that you already have an Openfiler server up and running.

1 – Enable NFS service

As always point your browser to https://<openfiler_address>:446, log in and from the main screen go to the Services tab and enable the NFSv3 service as shown below.

2 – Setup network access

From the System tab add the network of the ESX servers as authorized. I added the whole network segment but you can also create network access rules per host in order to setup a more secure and granular access policy.

3 – Create the volumes

The next step is to create the volumes we are going to use as the base for the NFS shares. If, like me, you're a Unix/Linux geek, you surely understand the PV -> VG -> LV concepts perfectly; if not, I strongly recommend you check the TechHead article mentioned above, where Simon explains it very well, or, if you want to go a little deeper with volumes in Unix/Linux, my article about volume and filesystem basics in Linux and HP-UX.
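
For reference only (Openfiler does all of this through its web GUI), the equivalent plain-Linux LVM steps would look roughly like this; the device name /dev/sdb1 and the size and names are hypothetical:

pvcreate /dev/sdb1
vgcreate vg_nfs /dev/sdb1
lvcreate -L 20G -n lv_share vg_nfs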

First we need to create the physical volumes; go to the Volumes tab, enter the Block Devices section and edit the disk to be used for the volumes.

Create a partition and set the type to Physical Volume.

Once the Physical Volume is created go to the Volume Groups section and create a new VG and use for it the new PV.

Finally click on Add Volume. In this section you will have to choose the new VG that will contain the new volume, the size, the name, the description and, more importantly, the Filesystem/Volume type. There are three types:

iSCSI

XFS

Ext3

The first is obviously intended for iSCSI volumes and the other two for NFS; the criterion to follow here is scalability, since ext3 supports up to 8TB and XFS up to 10TB.

Click Create and the new volume will be created.

4 – Create the NFS share

Go to the Shares tab, there you will find the new volume as an available share.

Just to clarify concepts, this volume IS NOT the real NFS share. We are going to create a folder into the volume and share that folder through NFS to our ESX/ESXi servers.

Click into the volume name and in the pop-up enter the name of the folder and click Create folder.

Select the folder and in the pop-up click the Make Share button.

Finally we are going to configure the newly created share; select the share to enter its configuration area.

Edit the share data to suit your needs and select the Access Control Mode. Two modes are available:

Public guest access – There is no user based authentication.

Controlled access – The authentication is defined in the Accounts section.

Since this is only for my homelab I chose Public guest access.

Next select the share type; for our purposes I obviously chose NFS and set the permissions to Read-Write.

You can also edit the NFS options and configure them to suit your personal preferences and/or specifications.

Just a final tip for the non-Unix people: if you want to check the NFS share, open an SSH session with the Openfiler server and as root issue the command showmount -e. The output should look like this.
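
Something along these lines; the hostname, export path and network below are illustrative, yours will differ:

showmount -e
Export list for openfiler:
/mnt/vg_nfs/lv_share/nfs_share 192.168.1.0/24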

The Openfiler configuration is done, now we are going to create a new datastore in our ESX servers.

5 – Add the datastore to the ESX servers

Now that the share is created and configured it is time to add it to our ESX servers.

As usual, from the vSphere Client go to Configuration -> Storage -> Add Storage.

In the pop-up window choose Network File System.

Fill in the Server, Folder and Datastore Name fields.

Finally check the data and click Finish. If everything goes well, after a few seconds the new datastore should appear.
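
If you prefer the command line, the same datastore can be mounted with esxcfg-nas from the ESX console; a minimal sketch, where the server name, export path and datastore label are just example values:

esxcfg-nas -a -o openfiler.homelab.local -s /mnt/vg_nfs/lv_share/nfs_share openfiler_nfs
esxcfg-nas -l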

And with this we are finished. If you see any mistake or have anything to add please comment :-)


As I explained in my first post about the SAN/iQ command line, to remotely manage a P4000 storage array instead of providing the username/password credentials in every command you can specify an encrypted file which contains the user/password information.

To create this file, known as the key file, just use the createKey command and provide the username, password, array IP address or DNS name and the name of the file.
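
A minimal sketch of how the command might look; the parameter names are written from memory and the credentials and IP address are made up, so double-check the exact syntax against the CLIQ documentation:

cliq createKey userName=admin passWord=secret login=192.168.1.20 keyName=admin.key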

By default the key file is created in the user’s home directory, c:\Documents and Settings\<username> in Windows XP/2003 and C:\Users\<username> in Windows Vista/2008/7.

The file can also be stored in a secure location on the local network, in that case the full path to the key file must be provided.

Of course the main reason to create a key file, apart from easing the daily management, is to provide a valid authentication mechanism for any automation script that you can create using the cliq.
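
For instance, a script could then run CLIQ commands without embedding credentials; again a hypothetical sketch, where the keyfile parameter name and the getClusterInfo command should be verified against the CLIQ documentation:

cliq getClusterInfo login=192.168.1.20 keyfile=admin.key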

Juanma.


You are in front of a Linux box, a VM really, with a bunch of new disks that must be configured, and suddenly you remember that there is no ioscan in Linux. You will ask yourself 'who is so stupid to create an operating system without ioscan?'; at least I did x-)

Yes, it is true, there is no ioscan in Linux, and that means that every time you add a new disk to one of your virtual machines you have to reboot it; at least technically that is the truth. But don't worry, there is a quick and dirty way to circumvent that.
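
The trick is to force a rescan of the SCSI bus through sysfs; a minimal sketch, run as root, where host0 is only an example (check /sys/class/scsi_host/ to see which SCSI hosts your VM actually has):

echo "- - -" > /sys/class/scsi_host/host0/scan
fdisk -l

The three dashes are wildcards for channel, target and LUN; after the rescan the new disk should show up in fdisk -l without a reboot.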


The reason for this post is trying to be a single point of reference for HP related VMware resources.

I created the list for my personal use a while ago, but in the hope that it can be useful for someone else I decided to review and share it. I will try to keep the list up to date and also add it as a permanent page in the menu above.

General resources

HP virtualization with VMware – This is the main page about VMware in the HP site. It has dozens of links to White Papers, webinars, podcasts and other HP sites about VMware.

VMware on ProLiant

ProLiant server VMware support matrix – This page is the Rosetta Stone for every VMware installation on HP hardware. It has every HP ProLiant blade/server cross-referenced in a table with every ESX/ESXi version from 2.1 to 4.1. The vSphere tab also has a column about VMware FT support.