June 2009 – Virtualization Pro, a SearchVMware.com blog, by Eric Siebert

At some point, you may need to know how to kill a stuck or frozen virtual machine on a VMware vSphere 4.0 ESXi host when the traditional power controls do not work. As with VMware ESX, there are several methods, which I covered in a previous post on killing a virtual machine (VM) on a VMware ESX host in vSphere.

The methods for ESXi are very similar to that of ESX, but the execution is different as ESXi doesn’t have a service console like ESX’s. The methods below are listed in order of usage preference, beginning with using normal VM commands and ending with a brute force method.

Method 1: Use the vmware-cmd command in the vSphere command-line interface (CLI)

Note: The vSphere CLI was formerly known as the Remote CLI and is not to be confused with the vSphere PowerCLI. The vSphere CLI is the CLI equivalent of using the vSphere Client. Because ESXi does not have a service console like ESX’s, you need to use the remote vSphere CLI to run the vmware-cmd command. The vSphere CLI can be downloaded and installed on any Linux or Windows system, can be used to run specific commands remotely on any ESX/ESXi host, and consists of a collection of Perl scripts, one for each ESX/ESXi command. To use this method, follow the steps below.

Run the vSphere CLI on the system that you installed it on. You’ll need to switch to the \bin subdirectory where the Perl scripts are located to run the commands.

The vmware-cmd command uses the configuration file name (.vmx) of the VM to specify the VM on which it’s going to perform an operation. You can type vmware-cmd.pl -H <ESXi host name> -l to get a list of all VMs on the host along with the path and name of their configuration files. The path uses the Universally Unique Identifier (UUID) or long name of the data store; alternatively, you can use the friendly name instead. You’ll be prompted to log in to the ESXi host before the command will execute. You also have the option of specifying a vCenter Server with -H, in which case you use -T to specify the ESXi host that the vCenter Server manages. Note: You can avoid entering log-in information every time you run a command by using a configuration file or Windows authentication passthrough using Security Support Provider Interface (SSPI). See the vSphere Command-Line Interface Installation and Reference Guide documentation for more info.

You can optionally check the power state of the VM by typing vmware-cmd.pl -H <ESXi host name> <VM config file path & name> getstate.
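The power-off step itself is done with the stop operation. Here is a minimal sketch, assuming a hypothetical host named esxi01.example.com and a placeholder datastore path (substitute your own values):

```shell
# List VMs and their .vmx configuration file paths (you will be prompted to log in):
vmware-cmd.pl -H esxi01.example.com -l

# Check the current power state:
vmware-cmd.pl -H esxi01.example.com "/vmfs/volumes/datastore1/myvm/myvm.vmx" getstate

# Attempt a graceful stop first; fall back to a hard power-off for a truly stuck VM:
vmware-cmd.pl -H esxi01.example.com "/vmfs/volumes/datastore1/myvm/myvm.vmx" stop trysoft
vmware-cmd.pl -H esxi01.example.com "/vmfs/volumes/datastore1/myvm/myvm.vmx" stop hard
```

The trysoft mode attempts a guest-initiated shutdown before resorting to force; hard powers the VM off immediately.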

You can check the state again to see if it worked; if it did the state should now be off.

Method 2: Use the vm-support command to shut down the VM

When you use the vm-support command to shut down a VM, you must first find the virtual machine ID (VMID) and then use the vm-support command to forcibly terminate it. This method does more than shut down the VM — it also produces debug information that you can use to troubleshoot an unresponsive VM. On ESXi hosts, the vm-support command can be run from the special tech support mode, which provides access to the host’s BusyBox-based, POSIX-style management console.

On the ESXi console, press Alt-F1.

Type the word unsupported (text will not be displayed while typing) and press Enter. A password prompt will appear; enter the root password for the ESXi host and you will be at a # prompt in the root partition.

The vm-support command is a multi-purpose command that is mainly used to troubleshoot host and VM problems. You can use the -X parameter to forcibly shut down a VM and also produce a file with debug information. As with ESX hosts, running this command will create a .tgz file, but it will not be located in the directory that you run the command in. Instead, it will be created in the /var/tmp directory, which resides on the 4 GB Virtual File Allocation Table (VFAT) system swap partition. You can also set a Virtual Machine File System (VMFS) volume as your working directory for the .tgz file. First, type vm-support -x to get a list of the VMIDs of your running VMs.

To forcibly shut down the VM and generate core dumps and log files, type vm-support -X <VMID>. If you wish to specify an alternate directory for the .tgz file that is created, also add the -w <vmfs volume path> parameter. You will receive prompts asking if you want to take a screenshot of the VM. This can be useful if you want to see if there are any error messages. You will also be prompted about whether you wish to send a non-maskable interrupt (NMI) and an ABORT to the VM, which can further aid in debugging. You must say yes to the ABORT prompt for the VM to be forcibly stopped. Once the process completes, which can take 10-15 minutes, a .tgz file will be created in the /var/tmp directory that you can use for troubleshooting purposes.
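As a sketch, with a hypothetical VMID of 1234 and a placeholder VMFS path, the sequence might look like this:

```shell
# List the VMIDs of running VMs:
vm-support -x

# Forcibly stop VMID 1234, directing the debug .tgz to a VMFS volume
# instead of the small /var/tmp VFAT partition:
vm-support -X 1234 -w /vmfs/volumes/datastore1
```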

You can check the state of the VM again by typing vm-support -x. You should not see the VM listed at this point. Be sure to delete the .tgz file that is created when you are done to avoid filling up your host disk.

You can leave tech support mode by typing ‘exit’ and pressing Alt-F2 to return to the normal console mode.

Method 3: Find the VM’s process identifier and forcibly terminate it

This method also relies on using the tech support mode console that is used in method 2 to run the commands.

On the ESXi console, press Alt-F1.

Type the word unsupported (text will not be displayed while typing) and press Enter. A password prompt will appear. Enter the root password for the ESXi host and you will be at a # prompt in the root partition.

The process status (ps) command shows the currently running processes on a server, and the grep command finds the specified text in the output of the ps command. Type ps -g | grep <virtualmachinename>, which will return the WID (first column), CID (second column) and process group ID (PGID) (fourth column) of the VM’s running processes. Several entries will be returned; the number in the fourth column of each entry is the PGID of the VM.

The kill command sends a signal to terminate a process using its ID number. The -9 parameter forces the process to quit immediately and cannot be ignored the way the more graceful -15 parameter sometimes can be. Type kill -9 <PGID>, which will forcibly terminate the process group for the specified VM.
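Putting the two steps together, here is a minimal sketch. The ps output below is simulated sample data (real column values will differ); it is only there to show how the PGID is pulled from the fourth column:

```shell
# Simulated `ps -g | grep myvm` output -- illustrative only.
# Columns: WID, CID, world name, PGID, VM name.
sample="1234 1001 vmm0:myvm 5678 myvm
1235 1001 vmm1:myvm 5678 myvm"

# The fourth column of each entry is the VM's process group ID (PGID):
pgid=$(echo "$sample" | awk '{print $4}' | sort -u)
echo "PGID: $pgid"

# On a live host you would then run:
# kill -9 "$pgid"
```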

You can check the state of the VM again by typing vm-support -x; you should no longer see the VM listed.

You can leave tech support mode by typing ‘exit’ and pressing Alt-F2 to return to the normal console mode.

All three of these methods work identically on ESXi hosts in both VMware Infrastructure 3 and vSphere.


This week’s VMTN Community Roundtable podcast was about Fault Tolerance (FT). Henry Robinson and Karen Ritter of VMware joined to provide information about the development and future of FT.

Here’s a summary of some interesting details from the podcast, but if you haven’t listened to it yet, I recommend that you check out the recording as it provides a lot of valuable technical information.

VMware spent a lot of time working with Intel/AMD to refine their physical processors so VMware could implement its vLockstep technology, which replicates non-deterministic transactions between the processors by reproducing the CPU instructions on the other processor. All data is synchronized so there is no loss of data or transactions between the two systems. In the event of a hardware failure you may have an IP packet retransmitted, but there is no interruption in service or data loss.

Think of the primary and secondary as two same-size gears with a chain between them so they always rotate at the same speed. If the secondary gear slows down due to a resource issue on its host, the primary gear will also slow down and vice versa. If the secondary virtual machine (VM) slows down to the point that it is severely impacting the performance of the primary VM, then FT between the two will cease and a new secondary will be found on another host.

Virtual symmetric multiprocessing (vSMP) support will come in a future release. Trying to keep a single CPU in lockstep between hosts is challenging enough and more development is needed to try and keep multiple CPUs in lockstep between hosts.

FT does not use a specific CPU feature but requires specific CPU families to function. vLockstep is more of a software solution that relies on some of the underlying functionality of the processors. The software records the CPU instructions at the VM level and relies on the processor for this; it has to be very accurate in terms of timing, and VMware needed the processors to be modified by Intel and AMD to ensure complete accuracy. The SiteSurvey utility simply looks for certain CPU models and families, but not specific CPU features, to determine if a CPU is compatible with FT. In the future, VMware may update its CPU ID utility to also report whether a CPU is FT capable.

Currently there is a restriction that hosts must be running the same build of ESX/ESXi; this is a hard restriction and cannot be avoided. You can use FT between ESX and ESXi as long as they are the same build. Future releases may allow for hosts to have different builds.

VMotion is supported on FT-enabled VMs, but you cannot VMotion both VMs at the same time. Storage VMotion is not supported on FT-enabled VMs. FT is compatible with Distributed Resource Scheduler (DRS) but will not automatically move the FT-enabled VMs between hosts to ensure reliability. This may change in a future release of FT.

You can use FT on a vCenter Server running as a VM as long as it is running with a single vCPU.

There is no limit to the number of FT-enabled hosts in a cluster, but you cannot have FT-enabled VMs span clusters. A future release may support FT-enabled VMs spanning clusters.

There is an API for FT that provides the ability to script certain actions like disabling/enabling FT using PowerShell.

The requirement for dedicated gigabit network interface cards (NICs) for FT Logging is not a hard requirement but is recommended. You could use a shared NIC for FT Logging for small or dev/test environments. The four FT-enabled VM limit is per host, not per cluster, and is not a hard limit, but is recommended for optimal performance.

The current version of FT is designed to be used between hosts in the same data center, and is not designed to work over wide area network (WAN) links between data centers due to latency issues and failover complications between sites. Future versions may be engineered to allow for FT usage between external data centers.

VMware’s FT is first generation technology and will get better as it matures over time. Future releases of FT may include enhancements such as relaxing the build level requirements, support for vSMP VMs, support for backing up an FT-enabled VM with VMware Consolidated Backup and also support for movement of FT-enabled VMs via DRS.


Occasionally virtual machines (VMs) get stuck in a zombie state and will not respond to a power-off command using the traditional vSphere client power controls. Rebooting a host will fix this condition — but rebooting is usually not an option. Fortunately, there are a few methods for forcing the VM to shut down without rebooting the host.

I previously documented these methods with VMware Infrastructure 3 (VI3) and wanted to make sure they all worked with vSphere. The methods below are listed in order of usage preference starting with using normal VM commands and ending with a brute force method.

Method 1 – Using the vmware-cmd service console command (the command-line interface equivalent of using the vSphere Client)

Log in to the ESX service console.

The vmware-cmd command uses the configuration file name (.vmx) of the VM to specify the VM to perform an operation on. You can type vmware-cmd -l to get a list of all VMs on the host and the path and name of their configuration file. The path uses the Universally Unique Identifier (UUID) or long name of the data store; alternatively, you can use the friendly name instead. If you do not want to type the path when using the vmware-cmd command you can change to the VM’s directory and run the command without the path.

You can optionally check the power state of the VM by typing vmware-cmd <VM config file path & name> getstate.
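The actual power-off is done with the stop operation. A minimal sketch, using a placeholder datastore path (substitute your own):

```shell
# List VMs and their .vmx configuration file paths:
vmware-cmd -l

# Check the state, then try a graceful stop before forcing a hard power-off:
vmware-cmd "/vmfs/volumes/datastore1/myvm/myvm.vmx" getstate
vmware-cmd "/vmfs/volumes/datastore1/myvm/myvm.vmx" stop trysoft
vmware-cmd "/vmfs/volumes/datastore1/myvm/myvm.vmx" stop hard
```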

You can check the state again to see if it worked; if it did the state should now be off.

Method 2 – Using the vm-support command to shut down the VM

This method works by first finding the virtual machine ID (VMID) and then using the vm-support command to forcibly terminate it. It does a lot more than shutting down the VM, as it also produces debugging information that you can use to troubleshoot an unresponsive VM.

Log in to the ESX Service Console.

The vm-support command is a multi-purpose command that is mainly used to troubleshoot host and VM problems. You can use the -X parameter to forcibly shut down a VM and also produce a file with debugging information. This command will create a .tgz file in the directory that you run it in and cannot be run from a VMFS volume directory (running it in the /tmp directory is recommended). First, type vm-support -x to get a list of the virtual machine IDs (VMIDs) of your running VMs.

To forcibly shut down the VM and generate core dumps and log files, type vm-support -X <VMID>. You will receive prompts asking if you want to take a screenshot of the VM. A screenshot can be useful to see if there are any error messages. You will also be prompted to see if you wish to send an NMI and an ABORT to the VM, which can aid in debugging. You must say yes to the ABORT prompt for the VM to be forcibly stopped. Once the process completes, which can take 10-15 minutes, a .tgz file will be created in the directory in which you ran the command that you can also use for troubleshooting purposes. To avoid filling up your file system when the file is created, switch to the /tmp directory when you run the command.
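As a short sketch, with a hypothetical VMID of 1234:

```shell
# Run from /tmp so the debug bundle does not land on a VMFS volume:
cd /tmp
vm-support -x         # list the VMIDs of running VMs
vm-support -X 1234    # forcibly stop VMID 1234 and collect debug data
```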

You can check the state of the VM again either by using the vmware-cmd command or by typing vm-support -x; you should not see the VMID for that VM listed anymore. Be sure to delete the .tgz file that is created when you are done to avoid filling up your host disk.

Method 3 – Using the kill command by first finding the process identifier (PID) of the VM

Log in to the ESX service console.

The process status (ps) command in Linux shows the currently running processes on a server, and the grep command finds the specified text in the output of the ps command. Type ps auxfww | grep <virtualmachinename> to get the process ID (PID) of the VM. Two entries will be returned; one of them is from running the command itself. The longer entry, which ends in the configuration file name of the VM, is the running VM process and the one you want to use; the number in the second column of that entry is the PID of the VM.

The kill command in Linux sends a signal to terminate a process using its ID number. The -9 parameter forces the process to quit immediately and cannot be ignored the way the more graceful -15 parameter sometimes can be. Type kill -9 <PID>, which will forcibly terminate the process for the specified VM.
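As a minimal sketch, the sample line below is simulated ps output (a real entry will differ but ends with the path to the VM’s .vmx file); it shows how the PID is taken from the second column:

```shell
# Simulated entry from `ps auxfww | grep myvm` -- illustrative only.
sample="root 4321 0.5 2.1 vmware-vmx ... /vmfs/volumes/datastore1/myvm/myvm.vmx"

# The second column of the longer entry is the VM's PID:
pid=$(echo "$sample" | awk '{print $2}')
echo "PID: $pid"

# On a live host you would then run:
# kill -9 "$pid"
```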

You can check the state using the vmware-cmd command to see if it worked; if it did, the state should now be off.

All three of these methods work identically on ESX hosts in both VI3 and vSphere. These methods also work for ESXi, but their execution is a bit different. In a future blog post we will cover how to use these methods with ESXi.


Although there are several tech notes that suggest otherwise, most Lotus Domino workloads can be successfully virtualized. I thought I’d review some of the tech notes, as they seem to be discouraging people from virtualizing Domino servers. Both of the notes I’ll review blame the virtualization layer as the cause of poor performance, but as Domino servers are resource intensive, the issue could very well have been improper architecture or the versions of Domino and vSphere in use. If you do virtualize Domino, make sure that the underlying architecture is sufficient and that you have the latest versions of Domino and vSphere, as the latest versions provide I/O and performance enhancements.

The first tech note describes the problem: a Lotus Domino server uses considerably more CPU under VMware ESX 2.5 and 3.0 than when running directly on the same hardware. With a simple mail workload sending mail to just a few users, CPU spikes to almost 100% for the duration of the test under both Windows and Linux guest virtual machines (VMs).

Cause:

This issue has been investigated and was determined to be independent of Domino.

Solution:

Domino depends on the OS and hardware to perform disk I/O operations. If these operations consume excessive CPU and leave virtually no CPU cycles for Domino to perform any additional computation, the performance of the whole server will suffer and response time for users can increase to unacceptable levels.

In the same way that we see an important decrease in CPU usage when performing a file copy directly on the hardware versus under VMware ESX, we also verified that the same mail workload performed for Domino showed lower CPU usage and better overall performance and response time when Domino was running directly on the hardware.

As this issue occurs independently of Domino, it is suggested to evaluate the expected load for the server before migrating it under VMware.

The second tech note states that while VMware ESX is a supported hypervisor for Domino and Domino-based applications, several customers have reported poor performance of Lotus Domino HTTP on Windows 2003 on VMware ESX. The symptoms are flat CPU utilization (regardless of the number of vCPUs assigned to the VM; 1, 2 or 4 vCPUs present identical performance), low memory utilization and very high response times to Web users’ requests under moderate load (more than a few users hitting the server).

Cause:

The cause of this issue is currently unknown. IBM and VMware are working to identify the source of this problem. Customers that open a service request with VMware can reference VMware problem report 318726.

IBM Lotus Support has confirmed the problem and verified it is not introduced by a Domino server misconfiguration. In fact, the same settings allow the Domino server to serve HTTP requests in a timely fashion and support heavy load when running on a physical server, while response time increases considerably when the same Domino server runs in a Virtual Machine on VMware ESX 3.0 and 3.5.

Solution:

For customers running Domino HTTP, it is recommended to either keep the Domino server on a physical server or, if load is known to be low, verify the performance of the application in a VM prior to moving into production.

As mentioned earlier, both tech notes seem to discourage customers from virtualizing Domino servers in certain situations and blame the virtualization layer as the cause of poor performance problems. Domino servers are oftentimes very resource-intensive consumers, so when virtualizing them it is critical to properly architect the virtual hosts. Another tech note, entitled How to size Domino and Sametime systems for full production loads on VMware ESX Server, sums this up nicely:

Virtualization is not merely saving hardware by placing multiple services on a single physical machine. Virtualization requires a carefully designed environment, a clear understanding of how system resources are utilized, sufficient bandwidth and a contingency plan such that future upgrades and tuning are made possible for future changes. These considerations are particularly important for mission critical and resource intensive applications like Lotus Domino-based servers.

In general you should make sure that your VMware ESX Server, SAN, HBA, Fiber Switch and network cards can support the combined load of all virtual machines, when external factors cause a burst of user load onto the system.

Virtualized servers require the same or a higher level of performance than that normally available in a physical server environment. VMware ESX does not reinvent the laws of physics: CPUs and GHz, available RAM, network speed, disk subsystem speed and latency do not disappear. VMware ESX server will not provide an adequate environment for running Lotus products in a production-grade workload if sufficient resources are not allocated to each of the virtual machines.

In addition to having the proper architecture, I recommend that you use the latest versions of both Domino and vSphere as they both provide performance enhancements and I/O optimizations that are beneficial to Domino hosts. Additionally, using newer server hardware that has virtualization-specific CPU technology like Intel’s Nehalem and AMD’s Shanghai processors will help reduce the CPU overhead of the virtualization layer which will result in applications that run faster. By following these recommendations you should be able to successfully virtualize almost any Domino workload.


VMware publishes a guide called the Hardware Compatibility List (HCL). The HCL lists all of the hardware components that are supported by each version of ESX and ESXi. This very important guide is divided into different sub-guides, which include systems (server makes/models), storage devices (SAN/iSCSI/NFS) and I/O devices (NICs/storage controllers), and is updated frequently with new hardware added and older hardware removed. I was curious about the inner workings of how this guide is maintained, so I contacted VMware for some answers.

First, you might wonder why this guide is important. There are two reasons. The first is that ESX/ESXi has a limited set of hardware device drivers that are installed and loaded into the VMkernel, and while it is possible to install additional unsupported device drivers, it is not recommended. Consequently, if you use a network or storage adapter that is not on the HCL, there is a good chance that it will not work because the driver for it is not included.

The second reason is that VMware only provides support for server hardware that is listed on the HCL. Just because server hardware is not listed on the HCL doesn’t mean it will not work with ESX/ESXi, however. There is a lot of older hardware and other hardware brands/models that are not listed on the HCL that work just fine but are not supported by VMware. So if you are using hardware that is not listed on the HCL and call VMware’s Global Support Services for assistance with an issue, you might wonder if they will help you at all. What VMware will do is assist customers in problem analysis to determine whether or not the issue is related to the unsupported hardware. If the issue is suspected to be hardware-related, VMware reserves the right to request that the unsupported hardware be removed from the server. If VMware determines that the problem is related to the unsupported hardware, they will request that you open a support request with the hardware vendor instead.

So you might be wondering how the hardware on the guide is selected. Hardware vendors have to test and certify that their hardware works properly with the latest versions of ESX and ESXi. Once this has been completed, VMware will add them to the guide. VMware works with hardware vendors as part of their Technology Alliance Partner (TAP) program, and any vendor can apply to have its hardware added to the HCL. Once an application is received, the vendor is responsible for completing the certification criteria and submitting its results to VMware for review and approval. The first step of this process requires vendors to submit a VMware Compatibility Analysis for the hardware that they intend to certify. After VMware reviews and approves the analysis, the next step is for the vendor to engage with a third-party testing lab (currently VMware works with AppLabs or Cognizant for this) to certify that its hardware works properly with ESX and ESXi. VMware will not disclose the specific testing criteria that is used for certifying hardware for the HCL, but it does use the same certification criteria for all vendors that apply.

You’ll probably notice that the HCL contains mostly newer hardware and that older hardware is periodically removed from it. VMware does not enforce an expiration period for hardware added to the HCL, but it is up to each vendor to certify its hardware for the most current VMware product releases. Vendors are free to initiate the certification process at any time they need to have new hardware added to the HCL. Additionally, each vendor can choose to remove older hardware from the HCL as it releases newer hardware versions.

So while using hardware not listed on the HCL may be OK for labs, it is highly recommended that you only use hardware on the HCL for production use. Be sure to check the HCL periodically, especially if you plan to upgrade to a newer ESX/ESXi version, as you will want to make sure your hardware is listed before upgrading. You should check this guide before you purchase any server hardware to use with ESX/ESXi. Also, be sure all your server components are listed in the guide, including NICs and storage adapters. Often it may take a short period of time before newer hardware is added to the HCL. If you have newer hardware and it is not yet listed on the HCL, try contacting the vendor to see where it stands with getting its hardware certified by VMware.

The guide format was recently changed and is now searchable via an online form. Additionally, you can download the full guides in PDF format for each hardware component. Make sure you bookmark the guide and periodically check it — you don’t want to find out when you call VMware support that you’re using unsupported hardware.

Special thanks to John Troyer, who got me in touch with the right person at VMware to talk to, and to Nick Fuentes, who found the answers to my questions.

By Bridget Botelho

VMware architect and virtualization expert Gabe van Zanten wrote an interesting post on his blog, “Gabe’s Virtual World,” pointing out that VMware appears to be following in Microsoft’s footsteps by bullying partners and customers.

“Several stories have emerged that made it look like VMware has learned from Microsoft and is now practicing the same strategy. Maybe Paul Maritz’s tricks for Microsoft are now reused against VMware competitors,” van Zanten wrote. “First, there was a change in their VMworld policy reported by Brian Madden which according to VMware in an official response was just to prevent competitors from trashing VMware like Microsoft did at VMworld 2008. Although that is a viable explanation, the text now is in the legal documents and can be used as VMware pleases.”

These tactics raise the question of whether VMware plans to block more ESXi tools, forcing people to upgrade to a paid version of their software. As van Zanten wrote in his blog, “It is obvious they want ESXi Free to be “unmanageable”, since it is difficult to manage an ESXi free host with a read only remote administration kit. But why? Does VMware think that a small company will now switch to VMware’s vSphere Essentials edition just to be able to manage ESXi? Is VMware afraid that customers start building large clusters of ESXi Free hosts and use third party products to manage them?”

These concerns and others have yet to be answered. In the meantime, I’ll leave you with a couple of textbook characteristics of bullies:

Those who bully have personalities that are authoritarian, combined with a strong need to control or dominate… If aggressive behavior is not challenged in childhood, there is a danger that it may become habitual.

And the result of “habitual” bullying behavior, VMware, is alienation. In this case, of your customers.


I wanted to know when current VMware users are planning to upgrade their existing VMware production environments to vSphere, so I ran a poll on my website. I then ran additional polls to find out the primary reasons that users are holding off on upgrading to vSphere, and to see if they are planning on upgrading their Enterprise licenses to the new Enterprise Plus licenses.

What I discovered is that while people are eager to upgrade to vSphere, there are reasons why others are waiting; and while some will upgrade from Enterprise to Enterprise Plus licensing before the end of the year, a number of respondents indicated that they have no plans to upgrade at all.

When will you upgrade?

The first poll clearly indicated that many existing customers will be quick to upgrade to vSphere to benefit from the new technologies. Out of 140 responses, 32% are planning to upgrade within 3-6 months and another 25% of users are planning on upgrading within 0-3 months.

Only 12% had already upgraded to vSphere, so my second poll was to find out the primary reason that users were holding off on upgrading to vSphere.

Why wait?

Out of 101 responses, 31% of users are waiting until they get more knowledge and experience with vSphere. 29% of users indicated that they are waiting for the first maintenance update of vSphere.

This would seem to indicate that many are leery of this release as it is new and has only been tested by VMware and its beta testers, suggesting that memories of the infamous time bomb bug still linger.

Enterprise or Enterprise Plus?

Finally, I wanted to see if customers were planning on upgrading to the new Enterprise Plus edition and taking advantage of the discounted $250/processor special upgrade offer that is valid until the end of the year. In a third poll of 65 responses, only 43% of current Enterprise customers are planning on upgrading to Enterprise Plus this year; 37% are not planning on upgrading at all. My guess is that they either cannot afford to upgrade their licenses or are not interested in the extra features included in Enterprise Plus (Host Profiles and Distributed vSwitches).

To summarize these results, many customers are eager to upgrade to vSphere for its new features and benefits, but not too eager: many are holding off until they learn more about it and until the first update is released. Additionally, many customers are not willing to spend the extra money on the new Enterprise Plus license tier and will stick with their existing Enterprise licenses.

Posted by Texiwill on June 8, 2009

Since I am an independent consultant and VMware Communities Guru, I have recently been asked many questions about whether or not to upgrade to VMware vSphere 4. My answer depends on the following items:

The hardware involved. VMware vSphere has certain hardware requirements; if your target hosts do not meet these minimum requirements, they are not good candidates for running VMware vSphere. The basic requirements are:

– Intel VT or AMD-V support. This pretty much goes without saying; it is impossible to use VMware vSphere if these features are not enabled within the BIOS.

– No eXecute (NX) or eXecute Disable (XD) support within the BIOS. In some cases you are required to enable this bit to allow VMware vSphere to run.
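You cannot run this kind of check locally on ESXi (there is no service console), but before repurposing Linux-based lab hardware you can sanity-check the CPU flags yourself. A minimal sketch of my own (nothing VMware-specific) that scans a `/proc/cpuinfo` dump:

```python
def check_cpu_flags(cpuinfo_text):
    """Scan a Linux /proc/cpuinfo dump for hardware-virtualization
    (Intel VT / AMD-V) and NX/XD support.

    A missing flag can also mean the feature exists but is disabled in
    the BIOS, so treat a negative result as "go check the BIOS".
    """
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {
        "hw_virt": bool(flags & {"vmx", "svm"}),  # vmx = Intel VT, svm = AMD-V
        "nx": "nx" in flags,                      # No eXecute / eXecute Disable
    }

# Example against a synthetic /proc/cpuinfo excerpt:
sample = "flags\t\t: fpu vme de vmx nx sse2"
print(check_cpu_flags(sample))  # {'hw_virt': True, 'nx': True}
```

On a real Linux box you would feed it the contents of `/proc/cpuinfo`; again, a flag that is absent may simply be switched off in the BIOS rather than missing from the silicon.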

Whether or not the hardware is fully supported by VMware for VMware vSphere 4. This means that the hardware and I/O devices you plan to use are listed within the VMware Hardware Compatibility Guides. If they are not, there is a chance that VMware Support will deny you support when you call. It does not happen often, but it is possible, so be aware of it. If you are not using VMware vSphere 4 in production, this may not be a huge issue, as many a whitebox will work; just be sure your I/O devices are listed within the VMware Hardware Compatibility Guides.

Whether or not current management agents exist for VMware vSphere 4. This means that your current crop of management agents, such as the HP Insight Management Agents, are available for VMware vSphere 4. Monitoring your physical hardware and alerting on issues is too important not to have available if you use VMware vSphere 4.

Have you tested vSphere 4 in your environment? Wanting to upgrade implies that you have tested vSphere 4 within your environment and that you are comfortable with the changes in licensing and operation of this .0 release. It is unwise to place VMware vSphere 4 into production without first running some tests. How much of a test plan you use depends on your existing testing processes, but some testing is required. If you are upgrading, at minimum you should test to see which path is the smoother transition for you: upgrading in place or reinstalling.

Have you considered licensing level changes? There are many licensing changes within VMware vSphere 4 with respect to what is available at each license level. If you upgrade, will you also need to upgrade your licenses to maintain the appropriate level of functionality? DRS is a case in point. It is important to know exactly what your licenses entitle you to when you upgrade. For new installations of VMware ESX, it is also important to understand your license levels.

Do you need to upgrade your hardware to use all vSphere 4.0 functions? In some cases, before you can use all features of VMware vSphere 4, such as Fault Tolerance, you will need to upgrade your processors as well. Not every processor supports VMware vLockstep. If you do require VMware Fault Tolerance, for example, it is important to know its limitations and the required CPUs.

These are not all of the issues involved with upgrading to VMware vSphere 4, but they are helpful considerations for deciding whether you should upgrade (or even can), as well as for starting new installations of VMware vSphere 4.

As with any virtualization endeavor, it is extremely important to architect, design, and plan your vSphere 4 installations or upgrades. It is very easy to install VMware vSphere 4 without doing any planning, but if you do so, expect frustration, delays, and long days and nights. Ask the tough questions during your planning stages and do not rush to implement vSphere 4 unless there is a major need to do so.

Posted by Eric Siebert on June 4, 2009

VMware vSphere is a free upgrade for licensed customers with active Support and Subscription (SnS), but many users have reported problems obtaining their new vSphere licenses from VMware. Because VMware decided to go with a simpler license key instead of a more complex license file, new license keys are needed to use vSphere. When vSphere went GA on May 21st, VMware did license upgrades for its customers, creating vSphere license keys for them. Presumably this process looked at customers’ existing licenses, checked if their maintenance was current to see if they were entitled to the upgrade and, if so, generated keys for all the hosts and vCenter Servers that qualified.

My own experience with obtaining vSphere licenses was frustrating as well. On May 21st, when vSphere was first posted to VMware’s website, I logged into the website, went to download the vSphere bits and found I was unable to — the licensing portal claimed I had no active contracts and therefore was not entitled to the upgrade. There was also a big note at the top that stated that the license upgrades were in progress and could take some time to complete. OK, I figured, I’ll give them some time and try again later in the day. I was anxious to get the new GA code, however, and found I could download it right away by choosing to evaluate vSphere instead. The download process was fairly smooth, downloads were quick despite the large file sizes and they had a download manager utility that allowed multiple file downloads at once.

Anxious to get my licenses, I tried again several times that day at regular intervals, and each time, the license portal stated that I had no active contracts. I gave up for the day and decided to try again the next day. The next day I tried again and received the same message about no active contracts, so I did some more digging through the licensing portal and found that all our SnS contracts were indeed expired, despite the fact that we’d renewed the SnS each year. I could see the original purchases from several years ago but no renewals since then. Now, we had bought our licenses through Hewlett-Packard (HP) as part of our hardware purchases and had paid the SnS contract renewals each year through HP as well. I suspected that this might have something to do with why our contracts were showing expired, so I contacted VMware’s licensing support.

The first person I talked to said he saw my original contracts, and because we bought through a reseller (HP), something would have to be updated manually on our contracts before we could access our vSphere licenses; I was told to try again in 24 hours. Since it was Friday, I waited until the following week and tried again: same thing, no active contracts. I called back and got a different story, this time that we hadn't been paying our renewals and weren't entitled to vSphere. Now I was starting to get a bit angry and frustrated. The person I was dealing with said he would run a domain report to look for all of our contracts and renewals. That turned up the same result: all our contracts were expired. I was told they would investigate further, presumably with HP, and get back to me.

Sensing I was in for a long battle, I contacted someone I knew at VMware, who put me in touch with someone in the renewals department who could make things happen. He told me that HP was responsible for keeping VMware up to date on contract renewals; clearly this wasn't happening in my case. He said he would contact HP, verify our renewals and get the system updated right away. A few days later, VMware had the information from HP on our renewals and had its systems updated. I was then able to log into the licensing portal, where our contracts showed as active and our vSphere licenses were available.

While I think VMware did a great job on most of the GA launch, it definitely could have done a better job with the licensing upgrades. As I mentioned before, I heard from many other users who also experienced problems getting their vSphere licenses. This experience has made me rethink buying VMware licenses through a third-party vendor like HP, which dropped the ball on our renewals. I found out from VMware that we have the option to renew our contracts directly through VMware instead, so that is what we will do when they come up for renewal next month. By dealing directly with VMware for our SnS, hopefully we will not experience this type of issue again when the successor to vSphere is released.

Posted by Eric Siebert on June 1, 2009

VMware publishes a great document called Configuration Maximums that details all the configuration maximums for the various components of virtual machines, hosts and vCenter servers. With the vSphere release, VMware has created a new document that has both new and modified configuration maximums specifically for vSphere. I went through and compared the latest VI3 document with the vSphere document and I noted some differences between the two, which are listed below.

Virtual machine                              VI 3.5    vSphere 4
----------------------------------------------------------------
Number of virtual CPUs per virtual machine        4            8
RAM per virtual machine                       64 GB       255 GB
NICs per VM                                       4           10
Concurrent remote console sessions               10           40

ESX host                                     VI 3.5    vSphere 4
----------------------------------------------------------------
Hosts per storage volume                         32           64
Fibre Channel paths to LUN                       32           16
NFS datastores                                   32           64
Hardware iSCSI initiators per host                2            4
Virtual CPUs per host                           192          512
Virtual machines per host                       170          320
Logical processors per host                      32           64
RAM per host                                 256 GB         1 TB
Standard vSwitches per host                     127          248
Virtual NICs per standard vSwitch             1,016        4,088
Resource pools per host                         512        4,096
Children per resource pool                      256        1,024
Resource pools per cluster                      128          512
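If you script capacity checks, the comparison above translates naturally into a lookup table. A minimal Python sketch of my own, using only a handful of the values from this post (not an exhaustive copy of VMware's guide, and the key names are mine):

```python
# Selected configuration maximums (VI 3.5 vs. vSphere 4) from the
# comparison above; consult VMware's Configuration Maximums documents
# for the authoritative, footnoted values.
MAXIMUMS = {
    "vcpus_per_vm":      {"vi35": 4,   "vsphere4": 8},
    "ram_per_vm_gb":     {"vi35": 64,  "vsphere4": 255},
    "nics_per_vm":       {"vi35": 4,   "vsphere4": 10},
    "hosts_per_volume":  {"vi35": 32,  "vsphere4": 64},
    "fc_paths_to_lun":   {"vi35": 32,  "vsphere4": 16},
    "vms_per_host":      {"vi35": 170, "vsphere4": 320},
    "ram_per_host_gb":   {"vi35": 256, "vsphere4": 1024},
}

def changed_limits(maximums):
    """Return {name: (old, new)} for every limit that moved between releases."""
    return {name: (v["vi35"], v["vsphere4"])
            for name, v in maximums.items() if v["vi35"] != v["vsphere4"]}

def decreased_limits(maximums):
    """Limits that went *down* in vSphere 4 -- worth flagging in upgrade plans."""
    return [name for name, v in maximums.items() if v["vsphere4"] < v["vi35"]]
```

Running `decreased_limits(MAXIMUMS)` flags the one maximum in this subset that shrank, the Fibre Channel paths to a LUN (32 down to 16), which is exactly the kind of change an upgrade plan should call out.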

You should pay special attention to the many footnotes in the document that detail special circumstances for some of the maximums. One important footnote, first noted by Duncan Epping, is that the maximum number of virtual machines per host in a high-availability cluster is 100, but if there are more than eight hosts in a single cluster, the maximum is only 40 virtual machines per host. This footnote definitely limits the number of larger hosts that you can have in a cluster and will influence how you design your clusters.
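The effect of that footnote on cluster sizing is easy to see with a little arithmetic; a quick sketch of the rule as described above (the function is my own, not a VMware tool):

```python
def ha_cluster_vm_capacity(num_hosts):
    """Rough HA-cluster VM ceiling under the vSphere 4 footnote:
    100 VMs per host for clusters of up to 8 hosts, 40 per host beyond."""
    per_host = 100 if num_hosts <= 8 else 40
    return num_hosts * per_host

# The counterintuitive consequence: adding a ninth host shrinks the ceiling.
print(ha_cluster_vm_capacity(8))  # 800
print(ha_cluster_vm_capacity(9))  # 360
```

An eight-host cluster can support up to 800 VMs, while a nine-host cluster drops to 360, which is why this footnote matters so much when deciding between a few large clusters and several small ones.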

The Configuration Maximum documents for VI3 and vSphere are both available on VMware’s website. Be sure to periodically check the documentation links for new releases, as these documents are sometimes updated when any changes occur from new versions of ESX and vCenter Server being released.