I want to discuss multi-NIC vMotion in vSphere 5. I feel it is one of the best features in this release, and it certainly deserves a close look in any environment that makes use of vMotion, including (of course) any DRS-enabled cluster. Before I start, I want to reference two excellent blog posts on this subject by Frank Denneman and Duncan Epping. I will essentially be summarizing their posts here, as well as adding my own data from several host evacuation tests I have performed. Frank's article is here. Duncan's article is here.

What is it?

Multi-NIC vMotion is exactly what it sounds like: it provides the ability to use multiple network interfaces during any vMotion operation, even when moving only a single VM. The benefits are pretty obvious:

Benefits:

Manual vMotion: The increased bandwidth available to the vMotion process decreases the time it takes to migrate any given VM. This becomes even more apparent on VMs with high memory utilization.

DRS: DRS will make calculations based on the average vMotion migration...
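The configuration that makes this work is two (or more) vMotion-enabled VMkernel ports, each pinned to a different active uplink. As a rough sketch only, here is what that might look like from the ESXi shell on a standard vSwitch; the port group names, vmk numbers, IP addresses, and vmnic assignments are all assumptions for illustration, not a prescription:

```shell
# Create two port groups for vMotion on the existing vSwitch (names are hypothetical)
esxcli network vswitch standard portgroup add --portgroup-name vMotion-01 --vswitch-name vSwitch0
esxcli network vswitch standard portgroup add --portgroup-name vMotion-02 --vswitch-name vSwitch0

# Pin each port group to a different active uplink, with the other as standby
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name vMotion-01 --active-uplinks vmnic2 --standby-uplinks vmnic3
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name vMotion-02 --active-uplinks vmnic3 --standby-uplinks vmnic2

# Create a VMkernel interface on each port group and give each an address
esxcli network ip interface add --interface-name vmk1 --portgroup-name vMotion-01
esxcli network ip interface add --interface-name vmk2 --portgroup-name vMotion-02
esxcli network ip interface ipv4 set --interface-name vmk1 \
    --ipv4 192.168.50.11 --netmask 255.255.255.0 --type static
esxcli network ip interface ipv4 set --interface-name vmk2 \
    --ipv4 192.168.50.12 --netmask 255.255.255.0 --type static

# Tag both vmknics for vMotion (vSphere 5.0-era syntax)
vim-cmd hostsvc/vmotion/vnic_set vmk1
vim-cmd hostsvc/vmotion/vnic_set vmk2
```

With both vmknics tagged for vMotion, the host can use both links for a migration, which is what delivers the bandwidth benefit described above.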

Just wanted to write a quick post about the merits of EVC and one of its little-known advantages. Well, maybe calling it "little known" is an overstatement, but at the very least I would say it is widely overlooked. EVC, as you probably know, stands for "Enhanced vMotion Compatibility." When you enable EVC on a cluster, you are essentially setting a baseline CPU compatibility level for all hosts in that cluster. This is beneficial later, when you may add a new host with a newer processor generation. Note that it will not allow you to cross the Intel/AMD gap, but for processors within the same family you will be able to mix certain generations. In short, it makes newer processors behave like the older processors already in your cluster. Most people overlook EVC until it is too late: a year or two down the road, all of a sudden they need to add more hosts with newer processors to increase capacity. No problem, right? Just...

I have recently been mulling over the potential benefit of LACP in some of our environments. I want to discuss how LACP is implemented in vSphere, its limitations, and the potential benefits that I see in its use. I will also go over the process for enabling LACP from the vSphere side of things. Beginning with vSphere 5.1, VMware supports Link Aggregation Control Protocol (LACP) on distributed switches (vDS). LACP, as I am sure you are already aware, allows multiple physical links to be bundled together into a single logical channel. The purpose here is to provide more efficient network redundancy and failover (as well as increased available bandwidth, which I will get to in a moment). LACP works by sending LACPDU frames down each interface that has been enabled for LACP. If the device on the other end of the connection is also configured for LACP, it will send LACPDUs along those same links, thereby enabling both systems to detect the multiple connections between themselves and combine...
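Because the negotiation only succeeds when both ends participate, the physical switch ports uplinked to the vDS must also be bundled into an LACP channel. As a hedged sketch of the switch side on a Cisco IOS device (the interface names and channel-group number are assumptions for illustration):

```
! Bundle the two ports facing the ESXi host into an LACP-negotiated channel
interface range GigabitEthernet1/0/1 - 2
 channel-group 10 mode active
!
! vSphere 5.1 LACP requires IP hash teaming on the vDS side;
! use a matching source/destination IP hash on the switch
port-channel load-balance src-dst-ip
```

With "mode active" on the switch and LACP enabled on the vDS uplink port group, both ends exchange LACPDUs and bring up the bundle; "mode on" would instead force a static channel with no LACP negotiation.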

I am now in the prep stage for the VCAP5-DCD (VMware Certified Advanced Professional: Datacenter Design) exam. I plan to sit for the exam sometime in January. The exam has 100 questions, and the total allotted time is 225 minutes. There is a mixture of multiple-choice questions along with several "design" questions, where you have to use a Visio-like tool to build an appropriate conceptual/logical/physical design based on information provided in a case study. If you happen to be taking the exam in a country where English is not the primary language, you are afforded an extra 30 minutes. From what I have read, and from what I am hearing from others who have taken the exam, the time limit can easily become an issue. This is a long exam, and it is a challenging one. In the current version of the exam, you are not allowed to go back and revisit questions once they have been answered. Also, you are not allowed to "mark for review." I believe this is due...