This forum is for discussing virtualization comparisons. You can request a virtualization comparison, post your opinions and comments about comparisons published on our site, and share and update information about virtualization products.

Recommended Configuration for Guest OS
Hyper-V will support a broad array of devices, both 32-bit and 64-bit multi-processor guests, and a variety of storage solutions, including iSCSI and Fibre Channel SAN. Virtual machines will be able to utilize large memory allocations (up to 64 gigabytes per virtual machine) and integrated virtual switch support, enabling virtualization of most workloads.

For the guest operating system, install one of the following:

A 32-bit or 64-bit version of Windows Server 2008, Windows Server 2003, or SUSE Linux Enterprise Server 10 with Service Pack 1. Other operating systems may work but are not recommended at this point.

A maximum of four virtual processors may be allocated to a virtual machine running Windows Server 2008. For the beta release, it is recommended that virtual machines running other guest operating systems be allocated a maximum of one virtual processor.
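The per-guest limits above amount to a simple policy; as a rough illustration (not any real Hyper-V API), they could be captured in a small validation helper. The guest names and limits come from the text above; everything else is hypothetical:

```python
# Hypothetical sketch of the Hyper-V beta vCPU recommendations described
# above -- not a Microsoft tool, just the documented policy as code.

MAX_VCPUS = {
    "Windows Server 2008": 4,  # up to four virtual processors
}
DEFAULT_MAX_VCPUS = 1  # beta recommendation for all other guest operating systems

def vcpus_allowed(guest_os: str, requested: int) -> bool:
    """Return True if `requested` vCPUs is within the beta recommendation."""
    return 1 <= requested <= MAX_VCPUS.get(guest_os, DEFAULT_MAX_VCPUS)
```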

Managing Hyper-V via MMC
Once installed, Windows Server 2008 Hyper-V can be managed via MMC like other roles in Windows Server 2008. Select "Hyper-V Manager" from the Administrative Tools folder on the Start menu to start the virtualization management MMC console. With this console, you can manage the local system or connect to and manage other servers.

Prerequisites
The beta of Windows Server 2008 Hyper-V is available only on the x64 edition of Windows Server 2008 Enterprise Release Candidate 1 (RC1); Hyper-V does not run on the x86 architecture. You will need a clean install of that edition on your host system. Hyper-V cannot be enabled on systems running inside virtual machines.
The full set of prerequisites for installing Hyper-V on Windows Server 2008 will be published separately closer to RTM.
In addition to the system requirements for Windows Server 2008, Hyper-V requires an x64-based processor, hardware-assisted virtualization, and hardware data execution protection. For the beta release, a maximum of sixteen logical processors has been tested.

Sorry for my bad English. The website with prices has a lot of information about Windows 2008 and Hyper-V.

Thanks for posting again. As mentioned earlier, we have updated our comparison at http://itcomparison.com/Virtualization/ ... e35esx.htm, and it now addresses many of the points raised in your earlier reply. We got our hands on four boxes that support Hyper-V and set up two of them with Hyper-V and the other two with VMware VI3 3.5. We are still carrying out further testing, but we have updated the comparison to reflect our findings, and it will keep being updated as new findings appear. Some of the points in your last post will be very hard to address in the comparison, but I will address here what I can.

• Reliability
- MS Hyper-V is still in beta and depends heavily on Windows 2008 services, especially for clustering and NLB. With Windows 2008 not yet fully released and Hyper-V still in beta, it is very hard to tell how reliable it will turn out to be. On the other hand, we all know how reliable VMware is. I believe it would be unfair to compare reliability at the moment, as we would be comparing a beta to a production-ready product.

• TCO
TCO is not easy to calculate as a clear cut between the two products, as each firm has its own needs and requirements. For example, if your firm is mainly a Linux shop, then the Hyper-V beta is a complete waste. If you plan to use only Windows 2003 and 2008 and do not need all the advanced features of VMware or the highest reliability, then Hyper-V might be your choice. TCO will become clearer when Hyper-V reaches its final release, and it will always be easier to predict TCO for a specific scenario than for a generalized one. VMware has always been the most expensive virtualization product on the market, but it has also always delivered a better TCO. On the other hand, that is not guaranteed to stay the same, with new virtualization products being crafted every day.

• Controllability
Virtual Center 2.5 is way ahead of Hyper-V Manager and Microsoft Virtual Machine Manager in both features and ease of use at the moment, but we still can't predict how they will compare at final release.

• Scalability
VMware seems to scale a lot better at the moment. It is easier to set up HA in VMware at large scale without the headache of host clustering. In addition, DRS makes it easy to ensure your resources are used in a balanced manner without you having to watch over them closely.

• Performance
In our lab, at a glance, the performance of our Windows 2003 and SUSE Linux virtual machines was about 30% better on VMware than on MS Hyper-V. We are planning to run a load test on the two products in the near future and will report back the results. Anyway, MS Hyper-V does not seem to be performing that badly for a beta. We will update after the load stress test.

• Redundancy
- As mentioned on the comparison page, VMotion requires zero downtime, while Quick Migration requires downtime. So planned maintenance requires downtime on Hyper-V but not on VMware.

- HA seemed easier to set up than host clustering and offered faster failover, but host clustering still did a good job in our setup.

• Connectivity
- Both seem to support a very wide range of iSCSI and SAN arrays.

I hope the posted comparison and my feedback on your points are helpful to you.

I will gather more information about this subject; it is an interesting one. We shouldn't underestimate Hyper-V; MS is good at getting what they want. I am at a company evaluating these products. All the information I gather will be posted here. I hope we can make the comparison chart better in the future. I will try to post every day!

Thanks for posting back; we can definitely make the comparison chart better. I will be following your posts closely and hope more people will join in and post more information. In the meantime, our team is trying to collect as much information as possible and to test the two products in the lab. On the other hand, we are not underestimating Hyper-V, but as mentioned in the comparison chart and in my replies to you, Hyper-V is still in beta. We are not sure what surprises M$ will come up with when the product is ready for market. In two weeks MS will be holding their Windows 2008 Launch conference and our team is attending. I hope we can collect more information there as well.

I came across a post by deandownsouth saying the following about the comparison:

"Excellent comparison (and not just because I agree with it ), I'm still having some OS driver problems with my test blades that have slowed my evaluation and I got called into a meeting this afternoon (got to love project managers) so my time today was cut short in the lab.

Just wanted to add to the comments regarding NLB vs. DRS. The comments are correct, but I don't think it conveys enough that with DRS, it MOVES the entire VM live, OS and all to another host in the farm based on settings. That's all together different from just managing network traffic across NICs. Also, I may have missed it, but with SVMotion or Storage VMotion, entire disk files can be moved live from one array to another."

I believe it is a great contribution; although it was not posted on our forum, we should address it.

Thank you for posting it in the forum as I requested by e-mail. In addition, thanks to deandownsouth for his great comment. As promised, we have already updated the comparison to take his points into account. You can see the updates if you visit the link again:
MS HyperV vs VMware ESX

VMware Infrastructure Requirements
VirtualCenter manages ESX Server hosts using a server and three types of remote
management clients.
VirtualCenter Server Requirements
The VirtualCenter Server is a physical machine or virtual machine configured with
access to a supported database.
Hardware Requirements
The VirtualCenter Server hardware must meet the following requirements:
Processor – 2.0GHz or higher Intel or AMD x86 processor. Processor requirements
can be larger if your database is run on the same hardware.
Memory – 2GB RAM minimum. RAM requirements can be larger if your database
is run on the same hardware.

Disk storage – 560MB minimum, 2GB recommended. You must have 245MB free
on the destination drive for installation of the program, and you must have 315MB
free on the drive containing your %temp% directory.

NOTE Storage requirements can be larger if your database runs on the same
hardware as the VirtualCenter Server machine. The size of the database varies with
the number of hosts and virtual machines you manage. Using default settings for
a year with 25 hosts and 8 to 16 virtual machines each, the total database size can
consume up to 2.2GB (SQL) or 1.0GB (Oracle).

ESX Server 3 Minimum Host Hardware Requirements
1GB RAM minimum.
One or more Ethernet controllers. Supported controllers include:
Broadcom NetXtreme 570x gigabit controllers
Intel PRO/100 adapters
For best performance and security, use separate Ethernet controllers for the service
console and the virtual machines.
A SCSI adapter, Fibre Channel adapter, or internal RAID controller:
Basic SCSI controllers are Adaptec Ultra‐160 and Ultra‐320, LSI Logic
Fusion‐MPT, and most NCR/Symbios SCSI controllers.
Fibre Channel. See the Storage / SAN Compatibility Guide.
RAID adapters supported are HP Smart Array, Dell PercRAID (Adaptec
RAID and LSI MegaRAID), and IBM (Adaptec) ServeRAID controllers.
A SCSI disk, Fibre Channel LUN, or RAID LUN with unpartitioned space. In a
minimum configuration, this disk or RAID is shared between the service console
and the virtual machines.
For hardware iSCSI, a disk attached to an iSCSI controller, such as the QLogic
qla405x.
For SATA, a disk connected through supported dual SAS‐SATA controllers that are
using SAS drivers.
ESX Server 3 supports installing and booting from the following storage systems:
ATA disk drives – Installing ESX Server 3 on an ATA drive or ATA RAID is
supported. However, ensure that your specific drive controller is included in the
supported hardware.
Storage of virtual machines is currently not supported on ATA drives or RAIDs.
Virtual machines must be stored on VMFS volumes configured on a SCSI or SATA
drive, a SCSI RAID, or a SAN.
NOTE The 3Com 3c990 driver does not support all revisions of the 3c990.
For example, 3CR990B is incompatible.

Enhanced Performance Recommendations
The lists in previous sections suggest a basic ESX Server 3 configuration. In practice,
you can use multiple physical disks, which include SCSI disks, Fibre Channel LUNs,
RAID LUNs, and so on.
Here are some recommendations for enhanced performance:
RAM – Having sufficient RAM for all your virtual machines is important to
achieving good performance. ESX Server 3 hosts require more RAM than typical
servers. An ESX Server 3 host must be equipped with sufficient RAM to run
concurrent virtual machines, plus run the service console.
For example, operating four virtual machines with Red Hat Enterprise Linux or
Windows XP requires your ESX Server 3 host be equipped with over a gigabyte of
RAM for baseline performance:
1024MB for the virtual machines (256MB minimum per operating system as
recommended by vendors × 4)
272MB for the ESX Server 3 service console
Running these example virtual machines with a more reasonable 512MB RAM
requires the ESX Server 3 host to be equipped with at least 2.2GB RAM.
2048MB for the virtual machines (512MB × 4)
272MB for the ESX Server 3 service console
These calculations do not take into account variable overhead memory for each
virtual machine. See the Resource Management Guide.
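The sizing arithmetic above can be sketched as a small calculation. The 272MB service-console figure and the per-VM amounts come from the text; the helper function itself is just an illustrative sketch and, per the note above, ignores per-VM overhead memory:

```python
# Sketch of the ESX Server 3 host RAM sizing described above (ignores
# variable per-VM overhead memory; see the Resource Management Guide).

SERVICE_CONSOLE_MB = 272  # ESX Server 3 service console

def host_ram_mb(vm_count: int, ram_per_vm_mb: int) -> int:
    """Minimum host RAM in MB for `vm_count` concurrent virtual machines."""
    return vm_count * ram_per_vm_mb + SERVICE_CONSOLE_MB

print(host_ram_mb(4, 256))  # baseline example: 4 VMs at the 256MB vendor minimum
print(host_ram_mb(4, 512))  # "more reasonable" example: 4 VMs at 512MB each
```

The second call yields 2320MB, which is where the "at least 2.2GB" figure in the text comes from.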
Dedicated fast Ethernet adapters for virtual machines – Dedicated gigabit
Ethernet cards for virtual machines, such as Intel PRO/1000 adapters, improve
throughput to virtual machines with high network traffic.
Disk location – For best performance, all data used by your virtual machines
should be on physical disks allocated to virtual machines. These physical disks
should be large enough to hold disk images to be used by all the virtual machines.
VMFS3 partitioning – For best performance, use VI Client or VI Web Access to set
up your VMFS3 partitions rather than the ESX Server 3 installer. Using VI Client or
VI Web Access ensures that the starting sectors of partitions are 64K‐aligned,
which improves storage performance.
NOTE The ESX Server 3 host might require more RAM for the service console if
you are running third‐party management applications or backup agents.

Processors – Faster processors improve ESX Server 3 performance. For certain
workloads, larger caches improve ESX Server 3 performance.
Hardware compatibility – To ensure the best I/O performance and workload
management, VMware ESX Server 3 provides its own drivers for supported
devices. Be sure that the devices you use in your server are supported. For
additional details on I/O device compatibility, download the ESX Server I/O
Compatibility Guide from www.vmware.com/support/pubs/vi_pubs.html.
Hardware and Software Compatibility
For more information on supported hardware and software, download the ESX Server
Compatibility Guides from www.vmware.com/support/pubs/vi_pubs.html.
Systems compatibility – Lists the standard operating systems and server
platforms against which VMware tests.
I/O compatibility – Lists devices that are accessed directly through device drivers
in the ESX Server host.
Storage compatibility – Lists the combinations of HBAs and storage devices
currently tested by VMware and its storage partners.
Backup software compatibility – Describes the backup packages tested by
VMware.
Supported Guest Operating Systems
The VMware Guest Operating System Installation Guide includes information on
supported guest operating systems. You can download this document at:
http://www.vmware.com/support/pubs/vi_pubs.html
ESX Server offers support for a number of 64‐bit guest operating systems. See the Guest
Operating System Installation Guide for a complete list.
There are specific hardware requirements for 64‐bit guest operating system support.
For AMD Opteron‐based systems, the processors must be Opteron Rev E and later. For
Intel Xeon‐based systems, the processors must include support for Intel Virtualization
Technology (VT). Many servers that include CPUs with VT support might ship with VT
disabled by default, and VT must be enabled manually. If your CPUs support VT but
you do not see this option in the BIOS, contact your vendor to request a BIOS version
that lets you enable VT support.
To determine whether your server has the necessary support, you can use a CPU
Compatibility Tool at http://www.vmware.com/download/vi/drivers_tools.html.
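Short of running VMware's CPU Compatibility Tool, a Linux host can give a first hint by exposing the vmx (Intel VT) or svm (AMD-V) flags in /proc/cpuinfo. Note the caveat above: VT can be present but disabled in the BIOS, which a flag check cannot detect. The helper below is an illustrative sketch, not a VMware tool:

```python
# Hedged sketch: look for hardware-virtualization CPU flags in Linux
# /proc/cpuinfo text. A present flag does NOT guarantee VT is enabled
# in the BIOS -- see the note above about vendors shipping VT disabled.

def virt_flags(cpuinfo_text: str) -> set:
    """Return the subset of {'vmx', 'svm'} found on cpuinfo 'flags' lines."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            found |= {"vmx", "svm"} & set(line.split())
    return found

sample = "processor\t: 0\nflags\t\t: fpu vme pse vmx sse2\n"
print(virt_flags(sample))  # {'vmx'}
```

On a real system you would pass in the contents of /proc/cpuinfo, e.g. `virt_flags(open("/proc/cpuinfo").read())`.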

Virtual Machine Requirements
Each virtual machine created on an ESX Server host has the following requirements.
Virtual processor
Intel Pentium II or later (dependent on system processor)
One, two, or four processors per virtual machine
Virtual chip set — Intel 440BX‐based motherboard with NS338 SIO chip
Virtual BIOS — PhoenixBIOS 4.0 Release 6
NOTE If you create a two‐processor virtual machine, your ESX Server machine
must have at least two physical processors. For a four‐processor virtual machine,
your ESX Server machine must have at least four physical processors.
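The note above amounts to a simple constraint: a VM's virtual processor count must be one of the supported values and must not exceed the host's physical processor count. A throwaway sketch of that rule (not any VMware API):

```python
# Sketch of the vSMP constraint in the note above: a virtual machine with
# N virtual processors requires a host with at least N physical processors.

ALLOWED_VCPU_COUNTS = (1, 2, 4)  # per the virtual processor list above

def vm_fits_host(vm_vcpus: int, host_pcpus: int) -> bool:
    """True if the vCPU count is supported and the host has enough CPUs."""
    return vm_vcpus in ALLOWED_VCPU_COUNTS and host_pcpus >= vm_vcpus
```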