Customers have attempted to virtualize Microsoft Exchange Server since the earliest hypervisors appeared. At first, Microsoft resisted these attempts and would not provide support if problems appeared. The attitude was that any problem had to be replicated on a "real" physical server before support was possible.

The situation changed with Exchange 2007. Customer demand, the growing maturity of virtualization technology, and the appearance of Microsoft's own hypervisor (Hyper-V) created a new imperative for Exchange to support virtualization. Since then, Microsoft has steadily improved the ability of Exchange to use different virtualization technologies, and Exchange has become an application that is commonly run on Microsoft Hyper-V, VMware vSphere, and the other hypervisors validated by Microsoft through the Server Virtualization Validation Program (SVVP).

Virtualization creates its own particular technical demands that system administrators have to take into account as they plan its use with applications. Some applications, like Exchange, have relatively strict guidelines about the virtualization technologies that can be used and those that cannot. Sometimes this is because a technology is unproven with Exchange; sometimes it is because the way that the technology operates conflicts with the way that Exchange behaves. This document lays out the most important issues that system administrators should know about as they approach the deployment of Exchange 2013 on a virtualized platform.

Given the rapid cadence of updates in Microsoft's releases for Exchange 2013, the ongoing development of hypervisors, new capabilities in Windows, and other improvements in hardware and software, the advice outlined here is prone to revision over time. It is correct as of April 2014 and covers Exchange 2013 SP1, Windows Server 2012 and Windows Server 2012 R2, and the virtualization technology available at this time.

The case for virtualizing Exchange

Advocates of virtualization usually advance their case on the basis that virtualization allows greater utilization of available hardware. The idea is that one or more large virtual hosts are capable of providing the necessary resources to support the required number of Exchange servers. Tuning the virtual host allows precisely the right level of resources to be dedicated to each Exchange server, whether it is a dedicated mailbox server, a multi-role server, or a Client Access Server (CAS) or transport server. The advantages claimed include:

Virtual servers make more efficient use of available hardware and therefore reduce the overall cost of the solution. A well-configured and well-managed virtual host can support a number of virtual Exchange servers, so the overall solution makes better use of the total hardware resources than if Exchange is deployed on a set of physical computers. This is particularly true in deployments where Exchange serves only a couple of hundred mailboxes and the load takes only a portion of the total resources available in a physical server. Another example of virtualization adding value is the deployment of several virtual Exchange servers (ideally spread across multiple host machines) instead of one large physical server to support thousands of mailboxes. In this case, the virtual Exchange servers can be arranged in a Database Availability Group (DAG) to take advantage of the application's native high availability features, whereas the single large physical server is not protected by a DAG and therefore represents a single point of failure.

Efficient use of resources must be examined in the context of designs created for specific circumstances. It is possible to dedicate far too many resources to handle a particular workload and consequently the virtual servers will not be particularly efficient; likewise, it is possible to dedicate too few resources in an attempt to maximize utilization.

Virtual servers are more flexible and easier to deploy. Well-managed virtual environments are configured to allow new Exchange servers to be spun up very quickly, far faster than it takes to procure new physical hardware and then install Windows, Exchange, and whatever other software is required for the production environment. This capability allows a solution to be more flexible than a physical equivalent, a factor that might be important when migrating from one Exchange version to another or if extra capacity is required in situations like corporate mergers.

Virtual servers are easier to restore. If a virtual server fails, it can be easier to create a new virtual server and then restore it using the Setup /Mode:RecoverServer option than it would be to fix the myriad hardware problems that can afflict a physical server.
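Assuming the failed server's configuration is still present in Active Directory, the recovery of a replacement server (built with the same name and operating system version) might look like the following sketch; the path is illustrative:

```powershell
# Run from the Exchange 2013 installation media on the replacement server,
# after the failed server's computer account has been reset in Active Directory.
# Setup reads the old server's configuration from Active Directory and
# reinstalls Exchange accordingly. The /IAcceptExchangeServerLicenseTerms
# switch is required for unattended setup in Exchange 2013.
.\Setup.exe /Mode:RecoverServer /IAcceptExchangeServerLicenseTerms
```

Databases and custom local settings (certificates, local configuration files) are not recovered by this step and must be restored separately.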

Virtual Exchange servers allow small companies to deploy technologies such as DAGs without having to invest in multiple physical servers. This is correct, as it is possible to run several virtual Exchange 2013 servers arranged in a DAG on a single physical server. However, as explained above, the principle of not putting all of one's eggs in a single basket holds true here too, as a failure of the single physical server necessarily renders the complete DAG inoperative.
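A two-member DAG built from virtual mailbox servers can be created with a few Exchange Management Shell commands. The server, DAG, and database names below are hypothetical, and the witness server must be a machine outside the DAG:

```powershell
# Create the DAG, using a file server as the witness (names are illustrative).
New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS1 -WitnessDirectory C:\DAG1

# Add the two virtual Exchange 2013 mailbox servers to the DAG.
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EXVM1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EXVM2

# Create a passive copy of a database on the second VM so that each
# database has a replica that can be activated if its server fails.
Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer EXVM2
```

If both VMs run on the same physical host, the DAG protects against software and guest-level failures but not against the loss of the host itself, which is the single-basket risk described above.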

None of these advantages can be gained without careful planning and preparation of the virtual environment that will support Exchange. A badly configured and managed virtual environment will be even more fraught with problems than its physical counterpart. It is therefore critical to emphasize that it requires substantial effort to support virtualized Exchange. In the IT world, nothing good comes free of charge.

About the author

Tony Redmond is the owner of Tony Redmond & Associates, an Irish consulting company focused on Microsoft technologies. With experience at Vice-President level at HP and Compaq, plus recognition as a Microsoft MVP, Tony is widely regarded as an expert in Microsoft collaboration technology. He has authored 13 books and filed a patent. He is a senior contributing editor at WindowsITPro.com, where he writes the "Exchange Unwashed" blog.