Disclaimer: These thoughts are my personal opinions. Since I work on vFabric these days and not vSphere I’m not very familiar with the specific reasons behind any choices made with respect to the new licensing scheme.

I missed the big launch yesterday because I'm hanging out in India with the GemFire and SQLFire teams, but I got up this morning and saw that the discussion is completely dominated by talk of the licensing changes. That's too bad, because vSphere 5 has some pretty cool new features that aren't getting the attention they deserve, for instance Storage DRS and SRM failback.

The new scheme is quite different from anything we've seen, and any change is naturally met with skepticism. I think people who are worried about the new scheme should consider what would have happened if VMware had maintained the status quo. vSphere 4 was not licensed per host; rather it was licensed "per CPU socket up to X cores," where X depends on the license level: 6 for Enterprise and 12 for Enterprise Plus, if I recall correctly.

The big push in hardware is to increase core counts as fast as possible. Predictions are that core counts will follow a Moore's-law-like trajectory, doubling about every 18 months. I know there are 10-core CPUs on the market today, and much denser CPUs are in the pipeline. So consider that in a few years you're going to have computers with 32 or 48 or maybe even 64 cores per CPU. If you've got a computer with 4 CPU sockets, each with 64 cores, you'd need 44 licenses of Enterprise or 24 licenses of Enterprise Plus to have the host in compliance. (The formula is ceiling(# cores / max cores at license level) * number of sockets. Get that??)
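To make that formula concrete, here is a minimal sketch in Python (a hypothetical helper for illustration, not anything VMware publishes) that reproduces the numbers above:

```python
import math

def vsphere4_licenses(sockets, cores_per_socket, max_cores_per_license):
    """Per-socket licensing with a core cap: each socket needs
    ceiling(cores / cap) licenses to be in compliance."""
    return math.ceil(cores_per_socket / max_cores_per_license) * sockets

# A 4-socket host with 64 cores per socket:
print(vsphere4_licenses(4, 64, 6))   # Enterprise (6-core cap): 44
print(vsphere4_licenses(4, 64, 12))  # Enterprise Plus (12-core cap): 24
```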

Now consider that each host in your inventory is going to have different hardware profiles and require the same kinds of calculations. What a mess!

The fundamental thing is that to deal with this mess VMware had to change to some sort of pooling mechanism. Other vendors are going to have to do similar things. These sorts of host-by-host calculations on discrete boundaries are just unsustainable as IT environments get more and more complex. Pooling is the answer. The only other tenable option is flat per-host licensing, but you won't see that from anyone because it can't monetize big hosts and small hosts differently.

VMware could have chosen core-based pooling or memory-based pooling. As it happens they chose memory-based; I don't know the specifics of why, since both hardware assets seem to be on the same growth trajectory. But my understanding is that the vRAM amounts they chose (24/32/48GB depending on license level) were picked so that most common current hardware configurations would not be affected.

So if you’re worried about vSphere’s new licensing scheme, consider the mess you would be in without this change, and also bear in mind that you were going to need to buy a lot more vSphere 4 licenses to enable the next wave of hardware anyway, because of exploding core densities. With pooling the operational aspects get a whole lot easier.

Update: In case you missed it, the vRAM licensing scheme was changed with higher ratios and some other tweaks. Read about it on the official blog.

I had not considered the limits they had placed on CPU cores as a future limiting factor, and I think you are spot on that a decision had to be made between CPU and memory.

I think they chose memory primarily so cloud providers (public and private) could easily quantify a vSphere license cost per VM and pass those costs on to their customers. Right now that process is very dynamic and convoluted. Since they chose memory, there was no longer any reason to limit the available cores, a restriction they wisely did away with.

There are a few scenarios where it will hurt some users. Imagine users who want to use the large-scale virtualization abilities in vSphere 5 (8 vCPU, 128GB RAM VMs) to run big databases. I just sat in an all-day meeting with such a customer today (and yes, they really do need such large DB machines).

Say they used vSphere 4.1 licensing and filled a 2-socket, 16-core box with 256GB of memory. With ESX 4.1 and Ent+ licensing, they need to buy only 2 licenses (1 per socket). Done. With vSphere 5, the 96GB of vRAM those two licenses provide doesn't cover it. They need to buy 6 licenses for that very same host.
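As a rough sketch of the arithmetic in both models (assuming, per the scenario above, that the vRAM pool has to cover all 256GB of physical RAM, and that vSphere 5 still requires at least one license per socket; both helpers are hypothetical, for illustration only):

```python
import math

def v4_licenses(sockets):
    # vSphere 4 Ent+: one license per CPU socket
    return sockets

def v5_licenses(vram_needed_gb, sockets, vram_per_license_gb=48):
    # vSphere 5 Ent+: one license per socket minimum, plus enough
    # licenses for the pooled vRAM entitlement to cover demand
    return max(sockets, math.ceil(vram_needed_gb / vram_per_license_gb))

print(v4_licenses(2))        # 2 licenses for the 2-socket host
print(v5_licenses(256, 2))   # 6 licenses if all 256GB must be covered
print(v5_licenses(128, 2))   # 3 licenses if only 128GB of vRAM is allocated
```

Note the last line: if the pool only needs to cover the vRAM actually allocated to VMs rather than all physical RAM, the count drops, which is the point a later commenter makes.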

It almost seems to punish users who want to take advantage of the new awesome features. Thoughts?

I think you nailed it when you stated that “These sorts of host-by-host calculations on discrete boundaries are just unsustainable as IT environments get more and more complex”.

I can see where VMware is going with this licensing model, and I think it will give IT more control. As with any change, this is unfortunately perceived as a negative impact by some, but it will soon be overshadowed by the great new products and features.

Your comment hits pretty close to home because I'm dealing with SQLFire and GemFire, which are actually in-memory databases. So what am *I* going to tell customers who want to run SQLFire or GemFire on vSphere 5 and are worried they won't be able to because of prohibitive cost?

These days everybody’s looking for a faster database, and one simple answer is just to give your database a ton of memory. It’s a lot easier than solving the fundamental problems in database design and works pretty well for read-intensive databases.

However, there are limits to how far you can really go with it. Right now your database's bottleneck is most likely storage; disk seek times in particular are real performance killers. Caching more in memory cuts down on a lot of disk seeks, and reads get a lot faster. So now you don't have that disk bottleneck. But where did your bottleneck go? In a lot of cases you will now have a new bottleneck in the CPU, something you never would have seen before because your data throughput never got high enough to come close.

So there's a limit to how far you can just throw RAM at the problem before you find you also need to start throwing CPU at the problem. In your case you've got 1 vCPU per 16GB of database RAM. That will be OK for some workloads, batch-oriented jobs or databases that mostly serve key lookups, for example. But I don't think this ratio will be good enough for analytic use cases or databases that need to perform more sophisticated queries.

The thing to be sure of is whether the user will be satisfied with the performance they'll get from a 1:16 ratio, or whether that ratio needs to be more like 1:8 or 1:4. When you first move from disk-only to caching lots of data in memory, the performance difference is night and day and it seems great, but that sort of satisfaction doesn't tend to last long; people are getting a lot more demanding about low latency and real-time needs.

@Josh: For cloud providers, there is VSPP. The EULA for regular VMware products doesn't permit their use in a shared hosting environment.

@Matt: Yep, I see that scenario as well. I have customers that need over 100 vCPU on their VMs, and massive amounts of memory. It will be fun to ask them to buy licenses for massive amounts of memory in order to use vSphere 5… I really look forward to it

Time to start relying on workarounds other than memory for databases. E.g., http://flashssoft.com and others like them, including SAN-level caches. Speed up the disk I/O so the DB engine doesn’t need as much RAM for caching. Not for purely in-RAM DBs, of course.

I could get behind the new licensing model more easily if VMware would actually sell vRAM licenses. The fact that the amount of vRAM you get is tied to the number of CPUs you license is what I find to be a mess. I understand that they needed to provide an "evolution" path for existing customers. That's fine. Give me the one-time conversion of my vSphere 4.1 licenses to vSphere 5's vRAM licensing, and then let me purchase additional GBs of vRAM going forward. Let me buy 5GB of Enterprise Plus vRAM if I find after the conversion that I am short 5GB. Don't make me buy an entire additional CPU license of Enterprise Plus to get 48GB of additional vRAM when I only need 5. Attaching vRAM to CPU licenses is more confusing than necessary, and stinks of forcing customers to buy more than they need. Isn't one of the primary driving factors behind server virtualization the desire to get the most utilization out of physical hardware? We consolidate our hardware to ensure we're not spending too much on wasted cycles, yet are forced by a silly licensing policy to buy more licensing than we need.

Additionally, I think there is a bit of bait and switch going on here. Since the beginning of production-ready VMware we've been able to allocate as much vRAM as we wanted (within reason) to VMs, knowing that VMware will only use what is actually needed. There has been incentive for admins to overprovision, both to absorb unforeseen memory utilization increases and to help sell doubting DBAs and the like on VMware's ability to virtualize their workloads without performance impact. The result is that a lot of us have piles upon piles of VMs with more vRAM assigned to them than is really required. Now all of a sudden we have a reason to pay attention to this, and we need to go back and start trying to convince VM owners to give up vRAM. That's going to be a nightmare. You try telling an Oracle DBA that you're removing 2 gigs of RAM from their server. If you manage to convince them, you can bet your behind that any real or perceived performance degradation on that server will be blamed on the RAM being removed.

Additionally, removing vRAM will require VM reboots. So now we get to force downtime on people who ultimately allowed us to virtualize their environments on the promise of drastically improved uptime. Yay!

@Andreas is correct. "Retail" vSphere licenses do not allow you to provide hosting services to other organizations. You must be signed up for their VSPP program.

As a VSPP partner, this change is not too unexpected and already feels familiar. Our rental licensing model has already been based on $x per allocated vRAM per month. That is the only basis on which we charge customers for their hosted consumption, and it has been like that for a while.

This will drive some major realignments when customers hit the ROI worksheets, as direct lines can now be drawn between what running their IT would cost in CAPEX vs. OPEX, since the pricing models for both are now one step closer.

Carter, your reasoning is sound and I can agree with your argument.
The only problem I have is that the vRAM amounts ("…so that most common current hardware configurations would not be affected") seem to have been calculated quite some time ago.
Our boxes that use an Ent+ license tend to have substantially more than 48GB.

Matt,

I'm not sure I understand your math here. To license a vSphere 5 pool to use 128GB of vRAM under Ent+, wouldn't you need to purchase 3 licenses, not 6? That is, unless you plan on using all 256GB of pRAM as vRAM, which would leave you with no headroom in the pool.

The new licensing seems to be someone at VMware trying to make more money without thinking through the consequences. Your assumption about CPU cores growing is just another of those not-so-well-thought-through items. If in a few years we grow to CPUs with 64 cores, how much memory do you think will be in a server per such CPU? Today I would say 128GB per 10-core CPU is a fair amount of memory. Let's not just multiply up CPU cores, as you do in your misguided blog post. If I grow from 10 cores to 64 cores per CPU, wouldn't it be a fair assumption that the memory in the server grows similarly? So 512GB x 6.4 = 3.2TB.
A 4-CPU server in your 64-core future will thus possibly have 13TB RAM.
How many vSphere 5 licenses will that require?
I'll tell you: 13000/48 = 271 vSphere licenses… just because of RAM.
If you had been looking at cores like in vSphere 4, the same server would have required only 24 licenses.
That's an 11x cost difference.

Food for thought, eh?

A more realistic calculation for the not-so-distant future uses a very nice virtualization server: an HP DL580 G7 with 4x E7-4870 CPUs and 512GB RAM.
vSphere 4 licensing would mean 4 Enterprise Plus licenses.
vSphere 5 licensing means 11 Enterprise Plus licenses.
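Checking that comparison with a quick sketch (hypothetical assumptions: a 48GB vRAM entitlement per Ent+ license and a one-license-per-socket minimum):

```python
import math

# HP DL580 G7: 4 sockets (10-core E7-4870 each), 512GB RAM
sockets, ram_gb, vram_per_license = 4, 512, 48

v4 = sockets                                             # per-socket licensing
v5 = max(sockets, math.ceil(ram_gb / vram_per_license))  # vRAM pooling

print(v4, v5)  # 4 licenses under vSphere 4, 11 under vSphere 5
```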

Oops! I should have used 128GB in that calculation, so the correct numbers would be that the "future" server would have 3.2TB RAM and would therefore require roughly 68 Enterprise Plus licenses, which is still very much more than the 24 required by the core count.

Too bad someone didn't do their job properly and get the ratios correct from the beginning, then? Those numbers should already have been doubled, as they were only valid 3-4 years ago, when we had slow 4-core CPUs with no hyperthreading.
It shouldn't take a very high IQ to understand that looking at old environments with old hardware isn't representative of what customers are buying this year. Somebody at VMware needs their head examined, or should find something to do that they are better at.
