“Together VMware and Apple could make a run at redefining the desktop (it’s clear that VMware/Microsoft synergy isn’t going to happen). VMware delivers the virtual infrastructure and Apple delivers the OS. Combined, we could have a new generation of desktop delivery that might eventually supplant Microsoft.”

In this post, Wolf — a Burton Group senior analyst and virtualization expert — urges VMware and Apple to get it together and end Microsoft’s desktop dominance. Wolf cogently explains why the operating system-based desktop will fade away as desktop virtualization and new personal desktop models emerge.

What’s your take on where the desktop is going, and which model will dominate in businesses and with consumers? Sound off here, or write to me at jstafford@techtarget.com.

Desktop virtualization packages rely on snapshots and virtual drive functionality. The de facto functionality standard here is found in VMware Workstation and VMware Server, but the tools in Sun’s VirtualBox may be setting a new standard. Let’s take a quick look at how snapshots and virtual drives work within Sun xVM VirtualBox.

VirtualBox snapshot technology provides the same basic functionality as the VMware products: snapshots can be taken while the virtual machine (VM) is running or while it is powered off. Where you take the snapshot depends on the state of the VM. For a running VM, the snapshot is taken from the running console, as shown in the figure below.

When a VM is powered off, snapshots are taken in the VM's properties instead. The split is a slight inconvenience, but the learning curve is easy to overcome. Reverting a VM to a saved snapshot is done from this same location. VirtualBox also lets you build on existing snapshots, so a single VM can have multiple point-in-time restore points. By default, snapshots are kept in the .VirtualBox\Machines\VMName\Snapshots folder as a collection of .VDI and .SAV files. The figure below shows three point-in-time restores for a single VM:

As with all snapshot restores, be sure you really want to revert, because the process is authoritative: the VM returns to the saved state and loses any changes made since. Reverting to a snapshot taken while the system was running restores the VM to precisely that point, still running, rather than to a powered-off state. Overall, VirtualBox's snapshot functionality works as advertised and is another point in favor of this exciting virtualization platform.
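The same snapshot operations can also be driven from the command line with VBoxManage. A minimal sketch follows; the VM name "WinXP" and snapshot name are placeholders, and exact option spelling can vary between VirtualBox releases:

```shell
# Take a snapshot of the named VM (works whether it is running or powered off)
VBoxManage snapshot "WinXP" take "clean-install"

# Revert the VM to the current snapshot, discarding the current state
VBoxManage snapshot "WinXP" discardcurrent -state
```

Scripting snapshots this way is handy for taking a known-good restore point before every round of test changes.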

More information on the VirtualBox 1.6.x product can be found in the online user guide.

As server virtualization technology makes its way from test environments into production, IT organizations are struggling to keep up with the inherent management challenges involved in dealing with virtual environments.

The ease with which VMs are created makes it that much easier for them to be launched and moved willy-nilly, regardless of security and software licensing costs, to name just two common problems. Vendors, of course, have been hip to these challenges. This month, Embotics Corp. released version 2.0 of its V-Commander management software, designed to automatically nip virtual sprawl in the bud. One way the software does this is by automatically enforcing policies that dictate such things as VM expiration dates, and through role-based security access that defines just who can do what in terms of VM creation and migration.

Also this month, Netuitive Inc. revamped its Service Analyzer business service management (BSM) software to include virtualization management capabilities. Nick Sanna, Netuitive’s president and CEO, said the company’s self-learning correlation software can monitor the status of applications across the environment, whether they are physical or virtual. “The idea [behind Service Analyzer],” said Sanna, “is to eliminate IT management silos by automating performance management and providing end-to-end visibility into business services.”

Well, maybe not. However, VMware Inc. reported today that 900 universities, including top-tier schools such as Harvard and Yale, are saving big bucks using VMware virtualization.

Renowned universities that have deployed VMware to reduce capital and operating costs, increase application and system uptime, decrease power consumption and improve disaster preparedness include Cambridge, Princeton, Stanford, Purdue, the University of Maryland, the University of Auckland, and the University of California campuses at Berkeley, Los Angeles and San Diego.

These schools and hundreds more around the world are running their mission-critical enterprise applications, database systems, and education-specific applications such as CollegeNET and the Blackboard Academic Suite in VMware virtualized environments, the company reported.

Others are using VMware for disaster recovery (DR).

Bowdoin College in Maine partnered with Los Angeles-based Loyola Marymount University to build a co-located datacenter for cross-country DR. By partnering and using VMware to create backup systems, the schools have achieved higher availability and better load balancing, with more than 70% of their environment virtualized and more than 100 virtual machines (VMs). They are saving $15,000 in annual server maintenance and have avoided $500,000 in hardware costs, according to VMware.

Ohio State University has been a VMware virtualization customer since 2003, when the College of Humanities needed to upgrade its IT infrastructure and found there was not enough room to expand. After deploying VMware virtualization, the college was able to meet its upgrade needs with 54 VMs running on three physical host servers. The college avoided $160,000 in hardware costs and cut server provisioning time from three weeks to five minutes, and the IT staff can now manage all of its VMware VMs from a single console.

Clearly, the education sector is a strong market for VMware, with 900 universities and colleges now using the virtualization platform. Because of this, VMware created a free online resource called the VMware Academic Program, staffed with IT professionals from higher-education institutions who answer questions about overall IT best practices. In addition to these experts, the site includes case studies that show how others have implemented VMware.

In last week’s blog, I wrote about my first experiences with Sun’s xVM VirtualBox 1.6.2. I like the interface and the features available to this free desktop virtualization product. Among these great options is one that lets users configure the VirtualBox server to view virtual machines remotely with VRDP, or VirtualBox Remote Desktop Protocol.

VRDP is a compatible implementation of Microsoft’s Remote Desktop Protocol (RDP) that provides easy console access to the guest platform from remote systems. Unlike the Web-based interfaces the competition offers, it is configurable per virtual machine. Let’s look at how to configure VRDP for a virtual machine in the steps below.

The first step is to enable VRDP, or remote console, as it is called within the interface. By default, VRDP is disabled for all virtual machines; enabling it requires choosing a security method. The security methods are referred to as null, guest and external. The null method is a no-security model in which any VRDP connection is accepted; Sun documents this configuration as intended for testing and private networks only. To enable VRDP on a virtual machine, click the Settings tab while the virtual machine is powered off and configure the Remote Display option:

Once VRDP is configured, the virtual machine will accept connections the next time it starts. The tricky part is the port and IP address configuration. By default, port 3389 is used for the VRDP session on the host. If your host is a Windows system that is itself running Remote Desktop, specify another port. The virtual machine can also be started remotely with the VBoxHeadless command. Once the virtual machine is running, you connect to the host system running VirtualBox on the specified port (if not 3389). The connection provides the redirected console within a standard rdesktop or mstsc session, works in all VM states, and does not require the guest to have a working network interface. In this configuration you can install an operating system, access the virtual BIOS and perform other tasks below the operating system.
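The same setup can be scripted from the command line. The sketch below assumes a VM named "WinXP" on a host reachable as virtualbox-host.example.com (both placeholders), and uses port 3390 to avoid a clash with the host's own Remote Desktop; flag spellings can differ between VirtualBox releases:

```shell
# Enable VRDP on the powered-off VM, move it off the default port 3389,
# and use the null (no-security, test-network-only) authentication method
VBoxManage modifyvm "WinXP" -vrdp on -vrdpport 3390 -vrdpauthtype null

# Start the VM on the host without a local console window
VBoxHeadless -startvm "WinXP" &

# From a remote machine, attach to the redirected console with any RDP client
rdesktop virtualbox-host.example.com:3390
```

On Windows clients, mstsc pointed at the same host and port does the same job as rdesktop.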

More information on the VRDP implementation can be found in the VirtualBox online user manual from the VirtualBox community website in section 7.4.

Today at Red Hat Summit in Boston, two of Red Hat’s emerging-technology engineers, Dan Barrange and Richard Jones, presented the new tool sets their team has developed for working with Xen virtual machines (VMs), including command-line utilities that will become part of the oVirt tool set.

According to Barrange, “You won’t have to lock into any particular technology underneath,” because these new utilities don’t require installation on a guest or require administrators to log in. Like the forthcoming Red Hat Enterprise Linux (RHEL) KVM-based hypervisor, these tools can also be launched from disk. “That’s the competitive advantage to using our tools,” says Barrange.

Red Hat engineer Richard Jones says that these new command line monitoring tools allow for a wider range of kernels and filesystems to be used and will offer better Windows support. Some of the utilities featured today include the following:

Red Hat has developed these tool sets for oVirt, its next-generation virtualization management console. Unlike the current Virtual Machine Manager for RHEL, oVirt creates a small “stateless” image of the host virtualization layer with no local disks or installation necessary.

Should you assign a virtual machine (VM) more than one virtual processor? It’s common for admins to configure virtual symmetric multiprocessing (VSMP), or VMs with multiple CPUs, whether it is needed or not. The decision to use more than one virtual processor in a VM should be based on an actual requirement of the applications installed on the VM, not simply on the notion that two processors are better than one. Many physical servers have multiple CPUs regardless of whether their applications require them. While wasteful of server resources, this does not hurt a physical server; most VMs, however, run better with one virtual processor and can actually run slower when more than one is assigned.

The reason is that the hypervisor’s CPU scheduler must find simultaneously available cores equal to the number assigned to the VM. A four-VCPU VM needs four free cores on the host for every CPU request it makes. If four cores are not available because other VMs are using them, the VM must wait until they free up. Single-VCPU VMs have a much easier time because the scheduler needs only a single free core to process their CPU requests.

Here are some tips on assigning VCPUs to VMs:

Limit the number of VSMP VMs on your hosts. The fewer you have, the better your VMs will perform.

Assign a VM multiple VCPUs only if you are running an application that requires it and will make use of them.

Don’t assign a VM as many VCPUs as your host system has total cores.

If you are going to use VSMP, have at least twice (preferably three or four times) as many cores on your host system as your largest VM has VCPUs. So if you have a four-VCPU VM, have at least eight cores on your host server, and preferably 16.

If you are converting a multi-CPU physical Windows server to a single VCPU VM, make sure you change the HAL from multiprocessor to uniprocessor.

Don’t use CPU affinity as it restricts the scheduler and makes it harder to process CPU requests. The scheduler is very good at what it does, so let it do its job.
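One way to see whether co-scheduling waits are actually hurting a VSMP VM is to watch CPU ready time with esxtop on the ESX host. A quick sketch; exact column names and useful thresholds vary by ESX version, and the rough 10%-per-VCPU figure is a common rule of thumb rather than a hard limit:

```shell
# Run esxtop interactively on the ESX host, press 'c' for the CPU view,
# then watch the %RDY column for your VSMP VMs; sustained values above
# roughly 10% per VCPU suggest the VM is waiting on the scheduler
esxtop

# Or capture one batch-mode sample to a CSV file for offline analysis
esxtop -b -n 1 > /tmp/esxtop-sample.csv
```

If a multi-VCPU VM shows persistently high ready time while single-VCPU VMs on the same host do not, that is a strong hint to drop it back to one VCPU.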

The virtualization world is still waiting for the official release of the Open Virtual Machine File Format, or OVF, once the Distributed Management Task Force (DMTF) puts the finishing touches on what will be an industry-standard virtual machine (VM) format. According to DMTF’s Christy Leung, the organization plans to announce the release of OVF in the next couple of months.

OVF frees users from platform dependence in virtual environments, enabling them to mix and match platforms without incurring interoperability problems. Despite the clear benefits of a common format in a multiplatform virtualization landscape, a universal format has encountered some roadblocks.

DMTF has worked on OVF since late 2007, when Dell, HP, IBM, Microsoft, VMware and XenSource submitted a proposal for a standardized VM format. At the Burton Group Catalyst conference later this month, DMTF member organizations — including VMware, Citrix, and Novell — will demonstrate OVF interoperability publicly for the first time. According to Burton Group analyst Chris Wolf, “some vendors moved OVF support higher up on their development roadmap in order to have it ready in time to demonstrate at the Catalyst conference.”

Wolf says that OVF is worth the wait — and the investment in the long term. “OVF has a nice long-term goal of standardizing the way hypervisors mount and run VMs,” says Wolf, “but its immediate use is primarily in importing VMs and standardizing how VM metadata is managed.”

Wolf goes on to say that while OVF VMs will soon be able to load onto any hypervisor, a virtual hard disk conversion may be required as part of the import process because two primary virtual hard disk formats remain in play: VMware’s Virtual Machine Disk Format and the Virtual Hard Disk format used by Microsoft and Xen. “OVF would have even more value if all vendors could agree to use a single standardized virtual hard disk format,” according to Wolf. “Thus far, the reasons for not having a single virtual hard disk format are more political than technological.”

When DMTF finishes its work, OVF will greatly improve the functionality of virtual machines. “OVF metadata is extensible, so any software vendor could use OVF to embed their management metadata inside VMs, regardless of hypervisor,” says Wolf.

“That is a big deal, as vendors could have a consistent management methodology regardless of hypervisor.”

You may hear the term SCSI reservations frequently when dealing with VMware servers that utilize shared storage. SCSI reservations are used to ensure exclusive access to disk-based resources when multiple hosts are accessing the same shared storage resources. In addition to being used by VMware hosts, SCSI reservations are also used by Microsoft Cluster Server.

SCSI reservations are used only for specific operations that change metadata; they prevent multiple hosts from writing to the metadata concurrently and corrupting it. Once the operation completes, the reservation is released and other operations can continue. Because of this exclusive lock, it is important to minimize the number of concurrent reservations. When too many reservations are made at once, you may see I/O failures because a host cannot obtain a reservation to complete an operation while another host has locked the logical unit number (LUN). A host that cannot obtain a reservation because of a conflict retries at random intervals until it succeeds; if too many attempts fail, the operation fails.

Some examples of operations that require metadata updates include:

Creating or deleting a VMFS datastore

Expanding a VMFS datastore onto additional extents

Powering on or off a VM

Acquiring or releasing a lock on a file

Creating or deleting a file

Creating a template

Deploying a VM from a template

Creating a new VM

Migrating a VM with VMotion

Growing a file (e.g., a snapshot file or a thin-provisioned virtual disk)

A minimal number of reservation conflicts is generally unavoidable and will not have a big impact on your hosts and VMs. To avoid too many conflicts, limit the number of operations that can cause reservations and stagger them so that too many do not happen simultaneously. All reservation errors are logged to the /var/log/vmkernel log file on each ESX host. To reduce the number of conflicts:

Limit the number of snapshots you have running; snapshots grow in 16MB increments, and every time they grow they cause a SCSI reservation.

Only vMotion a single VM per LUN at any one time.

Only cold migrate a single VM per LUN at any one time.

Do not power on/off too many VMs simultaneously.

Limit VM/template creations and deployments to a single VM per LUN at any one time.

Consider using smaller LUN sizes (<600GB) and do not use extents to extend a VMFS volume.
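A quick grep of the vmkernel log mentioned above shows whether conflicts are piling up. The sketch below writes a small hypothetical sample log so the commands are runnable as-is; on a real host, point grep at /var/log/vmkernel instead (the exact wording of the log messages varies by ESX version):

```shell
# Hypothetical sample of vmkernel log lines (illustration only)
cat > /tmp/vmkernel.sample <<'EOF'
vmkernel: 17:12:01 SCSI: vm 1041: RESERVATION CONFLICT on vmhba1:0:3
vmkernel: 17:12:04 SCSI: queue depth adjusted
vmkernel: 17:13:22 SCSI: vm 1038: RESERVATION CONFLICT on vmhba1:0:3
EOF

# Count reservation conflicts; a steadily climbing count on one LUN is a
# sign that too many metadata operations are hitting it at once
grep -ci "reservation conflict" /tmp/vmkernel.sample
```

Running the count periodically (or per LUN) makes it easy to spot which datastore is the hot spot before I/O failures start.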

This blog post was written by Megan Santosus, features writer for SearchServerVirtualization.com.

By now, server virtualization has pretty much proved its mettle as a way to consolidate data centers and reduce costs. As virtualization has gone mainstream, some of the management challenges have become top of mind. Consider the situation for a senior IT manager at a financial services company, who spoke on the condition of anonymity. “Virtualization is great stuff,” he said. “But it does change the way you manage things.”

Two years ago, the financial services company began implementing virtualization — specifically VMware and ESX Server, although the company has since deployed virtualization with Sun Solaris clusters. At that time, the company realized that it had a gap in virtual server management capabilities. “We are making a large push with ESX servers, and we want to manage them holistically with some of the other servers in our environment,” the IT manager said.

To that end, four months ago the company began beta testing CA Advanced Systems Management r11.2; the company already uses the previous version of the software, and one of the enhancements with 11.2 is integration with VMware VirtualCenter. By installing an agent on VirtualCenter and another on the CA management server, the company now collects and aggregates the performance data for virtual machines into a centralized Web-based system. “We take the performance data on the physical ESX server and provide that to our capacity team so they can plan and manage our virtual environment,” the IT manager said.

For the capacity team, virtualization means being able to figure things out in advance, such as how many virtual machines can run on an ESX server, what an application’s footprint is, and whether it’s best to put components on the same physical box or spread them out. “We can now give the capacity team performance data they need to make the decisions about moving things around,” the IT manager said. Rather than planning, the IT manager likens the process now to capacity modeling. “If we want to move virtual servers running Oracle, Apache and Weblogix, we look at the performance data to make our decisions.”