vCPU to pCPU Ratios – Are they still relevant?

One question I’m commonly asked (weekly, if not daily) is: what is the perfect pCPU-to-vCPU ratio that I should plan for, and operate to, for maximum performance? I wanted to document my perspective for easy future reference.

The answer?

There is no common ratio, and in fact this line of thinking will cause you operational pain. Let me tell you why.

In the past we’ve used rules of thumb like 4 vCPUs to 1 pCPU (4:1), or even as high as 10 vCPUs to 1 pCPU (10:1), but these were based on an often unspoken assumption: those workloads were basically idle. Many organizations started their virtualization journeys by consolidating low-hanging fruit, so it was easy, and not uncommon, to see very high vCPU-to-pCPU consolidation ratios.

Thus, consolidation ratios were born and became a foundational capacity planning construct for virtual environments. Wars were waged over who could get a better consolidation ratio. Technologies like Intel’s Hyper-Threading were introduced to provide better consolidation value. Large Excel spreadsheets became the new operational dashboards to manage capacity.

In reality, though, this was a very simplistic view of capacity planning, operations and reporting, and many were lucky to leverage it for as long as they did. The churn rate of customer environments has continued to increase, as have the size of virtual machines and their consumption of resources. Lastly, due to virtual-first policies, many customers no longer have the opportunity to profile an application stack on a physical environment before virtualizing it.

So if one cannot predict what will be virtualized, what its requirements are, or how long its lifecycle will be, we cannot create a static ratio for the commitment of any resource dimension: compute, memory, network or storage. (Incidentally, we should also strive to ensure no one else attempts to create, or enforce, a model like this either.)

Instead, we need to “Drive by Contention”

By this, I mean we need to invest in pools of resources for application owners. Our new model becomes closely monitoring those pools for contention (which indicates a pool cannot support any more applications) and then growing them as required. This presents a new set of challenges that teams must overcome and master.

vSphere and its management tools have been designed for this purpose, and not just for compute. At the platform layer, vSphere supports large clusters of resources that are dynamically balanced by services like DRS and Storage DRS to mitigate the effects of contention over an application’s lifecycle. The vRealize Operations suite monitors applications and pools of resources, letting you know when there is a performance issue or you need to manage capacity. Technologies like memory Transparent Page Sharing, Storage IO Control and Network IO Control ensure that, under times of contention, remaining resources are shared based on your business priorities until new capacity can be leveraged. This type of model allows effective consumption of resources, getting you the best consolidation ratio while ensuring application KPIs are always met, something that cannot be done with a static ratio.
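To make "driving by contention" concrete for the compute dimension, here is a minimal sketch in Python. vSphere reports CPU ready as a summation (milliseconds a vCPU spent waiting for a pCPU during a sampling interval); dividing by the interval gives a percentage. The sample data, VM names, and the 5% per-vCPU alert threshold are illustrative assumptions, not values from this article; your own tooling and thresholds will differ.

```python
# Sketch: flag VMs showing CPU contention from ready-time samples.
# Assumes vCenter real-time stats, which use 20-second sampling intervals.

SAMPLE_INTERVAL_MS = 20_000  # one real-time sample = 20 seconds

def ready_percent(ready_summation_ms: float, num_vcpus: int) -> float:
    """Average CPU ready % per vCPU for one sampling interval."""
    return (ready_summation_ms / num_vcpus) / SAMPLE_INTERVAL_MS * 100

def flag_contended(vms: dict, threshold_pct: float = 5.0) -> list:
    """Return names of VMs whose per-vCPU ready % exceeds the threshold."""
    return [name for name, (ready_ms, vcpus) in vms.items()
            if ready_percent(ready_ms, vcpus) > threshold_pct]

# Hypothetical samples: name -> (ready summation in ms, vCPU count)
pool = {
    "web01": (400, 2),    # 1.0% per vCPU - healthy
    "db01":  (4400, 4),   # 5.5% per vCPU - contended
}
print(flag_contended(pool))  # ['db01']
```

In practice these samples would come from vRealize Operations or the vSphere performance APIs rather than a hand-built dictionary, but the decision logic is the same: watch the pool, and grow it when contention appears.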

So in order to move away from static ratios, provide value by ensuring efficient consumption of hardware investments, and support the ever-increasing dynamic nature of the business, the operations model and toolsets need to be upgraded. They also need to take into account advanced concepts like a logical CPU not necessarily being equal to a pCPU (e.g., Hyper-Threading). The speed at which you can respond to a performance or capacity issue becomes a key mechanism to reduce risk.

If speed of response is not on your side, a conservative starting point would be a 1 vCPU to 1 pCPU ratio, not taking into consideration logical CPUs created via Hyper-Threading. As an organization matures and invests in new tools and processes, this ratio will increase as a side effect. Its final value will be determined by your mix of applications, choice of technologies and maturity of operations, which is different for every organization.
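The conservative starting point above can be sketched as a simple calculation: count only physical cores, not the logical CPUs that Hyper-Threading exposes, when computing your ratio. The host and VM figures below are made up for illustration.

```python
# Sketch: compute a vCPU:pCPU ratio against physical cores only,
# ignoring Hyper-Threading logical CPUs, per the conservative 1:1
# starting point described above. All figures are hypothetical.

def vcpu_pcpu_ratio(physical_cores: int, vcpus_provisioned: int) -> float:
    """Ratio of provisioned vCPUs to physical cores (not logical threads)."""
    return vcpus_provisioned / physical_cores

# Hypothetical host: 2 sockets x 16 cores = 32 pCPUs.
# With Hyper-Threading the OS sees 64 logical CPUs, but we plan on 32.
cores = 2 * 16
for vcpus in (32, 48):
    r = vcpu_pcpu_ratio(cores, vcpus)
    status = "at or below 1:1" if r <= 1.0 else "overcommitted"
    print(f"{vcpus} vCPUs on {cores} cores -> {r:.2f}:1 ({status})")
```

As the article notes, the ratio will drift upward as operational maturity grows; the point of the calculation is to know what your ratio actually is, measured against real cores.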

In summary, one-to-one gives you maximum performance at the highest cost. But in a world of doing more with less, learning how to drive by contention will ensure you maximize both performance and investment. Let’s make that the new challenge.

About the Author

Mark Achtemichuk currently works as a Staff Engineer within VMware’s R&D Operations and Central Services Performance team, focusing on education, benchmarking, collaterals and performance architectures. He has also held various performance focused field, specialist and technical marketing positions within VMware over the last 7 years. Mark is recognized as an industry expert and holds a VMware Certified Design Expert (VCDX#50) certification, one of less than 250 worldwide. He has worked on engagements with Fortune 50 companies, served as technical editor for many books and publications and is a sought after speaker at numerous industry events. Mark is a blogger and has been recognized as a VMware vExpert from 2013 to 2016. He is active on Twitter at @vmMarkA where he shares his knowledge of performance with the virtualization community. His experience and expertise from infrastructure to application helps customers ensure that performance is no longer a barrier, perceived or real, to virtualizing and operating an organization’s software defined assets.

Comments

Amen. It’s great to have experts like you validating the advice I give my customers. They have moved away from static Excel spreadsheets to actual, live data in vRealize Operations. They are, like you said, “driving by contention”.