Hyper-V scalability in Windows Server 2012 and Windows Server 2012 R2

In this article

Hyper-V in Windows Server® 2012 and Windows Server® 2012 R2 supports significantly larger configurations of virtual and physical components than previous releases of Hyper-V. This increased capacity enables you to run Hyper-V on large physical computers and to virtualize high-performance, scale-up workloads. This topic lists the supported maximum configurations for the various components. As you plan your deployment of Hyper-V, consider the maximums that apply to each virtual machine as well as those that apply to the physical computer that runs the Hyper-V role.

Note

For information about System Center Virtual Machine Manager (VMM), see Virtual Machine Manager. VMM is a separately sold Microsoft product for managing a virtualized data center.

Virtual machines

The following table lists the maximums that apply to each virtual machine.

| Component | Maximum | Notes |
| --- | --- | --- |
| Virtual processors | 64 | The number of virtual processors supported by a guest operating system might be lower. For more information, see the Hyper-V overview. |
| Memory | 1 TB | Review the requirements for the specific operating system to determine the minimum and recommended amounts. |
| Virtual hard disk capacity | 64 TB for the .vhdx format; 2,040 GB for the .vhd format | Each virtual hard disk is stored on physical media as either a .vhdx or a .vhd file, depending on the format used by the virtual hard disk. |
| Virtual IDE disks | 4 | The startup disk (sometimes referred to as the boot disk) must be attached to one of the IDE devices. The startup disk can be either a virtual hard disk or a physical disk attached directly to a virtual machine. |
| Virtual SCSI controllers | 4 | Use of virtual SCSI devices requires integration services to be installed in the guest operating system. For a list of the guest operating systems for which integration services are available, see the Hyper-V overview. |
| Virtual SCSI disks | 256 | Each SCSI controller supports up to 64 disks, so each virtual machine can be configured with as many as 256 virtual SCSI disks (4 controllers × 64 disks per controller). |
| Virtual Fibre Channel adapters | 4 | As a best practice, we recommend that you connect each virtual Fibre Channel adapter to a different virtual SAN. |
| Size of physical disks attached directly to a virtual machine | Varies | Maximum size is determined by the guest operating system. |
| Snapshots | 50 | The actual number may be lower, depending on the available storage. Each snapshot is stored as an .avhd file that consumes physical storage. |
| Virtual network adapters | 12 | 8 can be the "network adapter" type, which provides better performance and requires a virtual machine driver that is included in the integration services packages. 4 can be the "legacy network adapter" type, which emulates a specific physical network adapter and supports the Pre-Boot Execution Environment (PXE) for network-based installation of an operating system. |
| Virtual floppy devices | 1 virtual floppy drive | None. |
| Serial (COM) ports | 2 | None. |
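When scripting deployment checks, the per-virtual-machine maximums above can be encoded as a simple validation step. The following is a minimal Python sketch; the function and field names are illustrative, not part of any Hyper-V API:

```python
# Per-VM maximums for Hyper-V in Windows Server 2012 and 2012 R2 (from the table above).
VM_MAXIMUMS = {
    "virtual_processors": 64,
    "memory_gb": 1024,           # 1 TB
    "ide_disks": 4,
    "scsi_controllers": 4,
    "scsi_disks": 256,           # 4 controllers x 64 disks per controller
    "fibre_channel_adapters": 4,
    "snapshots": 50,
    "network_adapters": 12,      # at most 8 "network adapter" + 4 "legacy" type
}

def check_vm_config(config):
    """Return (component, requested, maximum) for each component over its limit."""
    violations = []
    for component, maximum in VM_MAXIMUMS.items():
        if config.get(component, 0) > maximum:
            violations.append((component, config[component], maximum))
    return violations

# Example: a planned scale-up VM that asks for too many virtual processors.
planned = {"virtual_processors": 96, "memory_gb": 512, "scsi_disks": 128}
for component, requested, maximum in check_vm_config(planned):
    print(f"{component}: requested {requested}, maximum is {maximum}")
# -> virtual_processors: requested 96, maximum is 64
```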

Server running Hyper-V

The following table lists the maximums that apply to the server running Hyper-V.

| Component | Maximum | Notes |
| --- | --- | --- |
| Logical processors | 320 | None. |
| Memory | 4 TB | None. |
| Virtual processors per logical processor | No ratio imposed by Hyper-V | None. |
| Running virtual machines per server | 1,024 | None. |
| Virtual processors per server | 2,048 | None. |

Failover Clusters and Hyper-V

The following table lists the maximums that apply to highly available servers running Hyper-V. It is important to do capacity planning to ensure that there will be enough hardware resources to run all the virtual machines in a clustered environment. For more information about requirements for failover clusters, see Failover Clustering Hardware Requirements and Storage Options.

| Component | Maximum | Notes |
| --- | --- | --- |
| Nodes per cluster | 64 | Consider the number of nodes you want to reserve for failover, as well as for maintenance tasks such as applying updates. We recommend that you plan for enough resources to allow one node to be reserved for failover, which means it remains idle until another node fails over to it. (This is sometimes referred to as a passive node.) You can increase this number if you want to reserve additional nodes. There is no recommended ratio of reserved nodes to active nodes; the only specific requirement is that the total number of nodes in a cluster cannot exceed the maximum of 64. |
| Running virtual machines per cluster and per node | 8,000 per cluster; 1,024 per node | Several factors can affect the real number of virtual machines that can run at the same time on one node, such as: the amount of physical memory being used by each virtual machine; networking and storage bandwidth; and the number of disk spindles, which affects disk I/O performance. |
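The reserved-node guidance and the cluster-wide maximum combine into a simple upper bound for capacity planning. A minimal Python sketch of that arithmetic, using the 8,000-per-cluster maximum together with the documented per-node maximum of 1,024 running virtual machines for these releases:

```python
MAX_VMS_PER_CLUSTER = 8000   # cluster-wide maximum for running virtual machines
MAX_VMS_PER_NODE = 1024      # per-node maximum in Windows Server 2012 / 2012 R2

def cluster_vm_capacity(total_nodes, reserved_nodes=1):
    """Upper bound on running VMs, keeping reserved (passive) nodes idle for failover.

    Only active nodes host virtual machines; each active node is bounded by
    the per-node maximum, and the cluster-wide maximum applies overall.
    """
    if not 1 <= total_nodes <= 64:
        raise ValueError("a failover cluster supports 1 to 64 nodes")
    active_nodes = total_nodes - reserved_nodes
    if active_nodes < 1:
        raise ValueError("at least one active node is required")
    return min(active_nodes * MAX_VMS_PER_NODE, MAX_VMS_PER_CLUSTER)

# A 10-node cluster with 1 passive node: 9 x 1,024 = 9,216, capped at 8,000.
print(cluster_vm_capacity(10))   # -> 8000
# A 4-node cluster with 1 passive node: 3 x 1,024 = 3,072.
print(cluster_vm_capacity(4))    # -> 3072
```

This is an upper bound only; as the table notes, memory, bandwidth, and disk I/O typically limit the real number well before these maximums are reached.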