This issue is the same old cloud computing security red herring. People will share network bandwidth and storage space just fine, but there’s just something sinister about sharing physical compute power. What you end up with is a lot of vague “well, it might happen!” with no specifics and no examples of it actually happening, despite massive use of public cloud computing services for some time now.

So what’s the fix for this? If someone orders a single virtual machine, dedicate an entire physical server to them so they can be all warm and fuzzy that nobody is close enough to do something malicious to them through those unspecified hypervisor security holes.

Does anyone else see anything wrong with this idea? Sure, you can probably make a cloud automatically carve out a physical chunk of itself for one customer. Unfortunately, when you do that, it’s not “cloud computing” anymore. At the very least, you lose most of the characteristics that make it “cloud” and turn it into a standalone virtualization server or, heaven forbid, effectively an old-fashioned standalone dedicated server. It simply isn’t cloud anymore.

Do this and you lose the utilization and energy efficiency that make cloud computing a much more cost-effective proposition. Is the customer with one virtual machine going to be willing to pay the cost of a dedicated server for the privilege of segregating a physical cloud host for their private use, or are they going to expect cloud pricing because you’re still calling it “cloud”? If they’re paying the dedicated-server price for a single virtual machine on their own private physical server, why not just use a separate dedicated server?

The one advantage this idea has is automation. It would certainly be easier to dedicate and undedicate a physical server in a cloud environment. On the other hand, it would likely be an operations nightmare, trying to keep enough excess cloud servers available to accommodate losing an entire physical server every time you provisioned a single VM. Of course, keeping that much excess capacity running also kills the heck out of your efficiency and, by extension, your costs.

8 responses to “Cloud computing: another ridiculous proposal.”

Vern, you’re spot on. The piece you refer to is just the result of more FUD from incumbent suppliers. The whole concept of a private cloud is flawed, in the same way a private nuclear power station is flawed: you just cannot get the scale economies.

OTOH, I don’t think it’s worth the blood pressure to worry about it. Later adopters will either die or move to public cloud as and when their CFOs start to challenge the capex spend, or unusually high opex, and then much of the role of the CIO goes the way of the Chief Electricity Officer.

I’ll note that my post says “for a typical customer who has a reasonable number of VMs (most non-startups have dozens, usually hundreds, of VMs), the wasted capacity is minimal”. This is not for the one-VM customer (i.e., not the legacy VPS hosting customer).

Say you have a customer with 100 VMs, which is pretty typical for an entry-level cloud customer today (mainstream business, not a start-up). If you’re at 10 VMs per physical server, at worst you waste 10% of the capacity used by that customer by doing provisioning in this way. Charge a modest price uplift to cover it, and you’re fine. It might even be more profitable: charge a 20% uplift to do ‘pseudo-private’ like this.
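The worst-case waste claim above is just ceiling arithmetic: round the customer’s VM count up to a whole number of dedicated hosts and see what sits idle. A minimal sketch, with all numbers illustrative rather than taken from any real provider:

```python
# Back-of-envelope sketch of the wasted-capacity argument.
# All figures are illustrative assumptions, not measured data.

def wasted_fraction(customer_vms: int, vms_per_host: int) -> float:
    """Worst-case fraction of the customer's dedicated footprint left
    idle when their VMs are packed onto whole physical hosts."""
    hosts_needed = -(-customer_vms // vms_per_host)  # ceiling division
    capacity = hosts_needed * vms_per_host
    return (capacity - customer_vms) / capacity

# At 10 VMs per host, a ~100-VM customer wastes at most just under
# one host's worth of capacity (e.g. 91 VMs -> 10 hosts, 9% idle):
print(wasted_fraction(91, 10))   # 0.09
print(wasted_fraction(100, 10))  # 0.0  (exact multiple, no waste)
print(wasted_fraction(1, 10))    # 0.9  (the one-VM customer's problem)
```

The last line is the earlier point in miniature: for the one-VM customer the same scheme strands 90% of a host, which is why the uplift only pencils out at scale.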

Ahhhh, but you can’t make an assumption like that. Operating a provision-on-demand cloud means that VM workloads could stay steady or vary radically; that’s a big part of the attraction of a cloud. Now, instead of smoothing demand changes out over a bunch of tenants, you create exaggerated, radical changes by carving out a single tenant.

Consider this. I run a load balancer on my cloud that balances VM workloads across all the physical hosts in the cloud. Every time this one special customer went over 10 VMs, the system would not only have to move however many other tenants to clear another physical server, it would also have to rebalance the entire cloud (a sort of ripple effect). Then this special customer drops that 11th VM, and the entire cloud has to rebalance radically again. Now picture a dozen special customers all doing this; it would be a total nightmare. You’re not just impacting one customer, you’re impacting everything all the way across the public cloud.

Sorry, but I’ll stick to my position on this. If the customer needs a private cloud, build, properly engineer, and charge for a private cloud. Trying to dynamically carve chunks out of a busy public cloud to create a “pseudo private” service is a bad idea.

Actually, one can make an assumption like that. Presumably one wishes to operate this in a commercially reasonable way, and the service provider would of course restrict the availability of a pseudo-private service to customers who have enough scale that one physical server’s worth of wasted capacity makes no real difference. The 100 VM mark is pretty reasonable, and pretty trivial to achieve when you’re not talking about small businesses.

In a cloud at scale, a single physical server just isn’t that big of a deal. Fundamentally, provisioning one small VM on a physical host and then leaving the rest of that host empty doesn’t create any more capacity management issues than provisioning one large VM that consumes the entire resources of a physical host. (Amazon, for instance, does a perfectly decent job of capacity management with a whole variety of instance sizes, all of which take up different percentages of different classes of physical hosts, including instance sizes that eat full physical hosts.)

Lydia
Your point is the underpinning issue for why public cloud must be cheaper than private, since the profit margin is dependent on the utilisation of all of the assets, which is a function of scale/liquidity of supply.

As software designs start to catch up with the benefits of pay-by-the-hour VMs, it becomes much less economic to dedicate silicon to specific customers. This change will be faster than normal technology adoptions, as newer SDLC approaches like Continuous Delivery enable nearly friction-free migration between underlying hardware and software platforms.

The main proponents of ‘private cloud’ are existing vendors whose business model is based on large capital spend by customers. The product vendors need to concentrate on their new market (the public cloud providers, who want cheap commodities). The service vendors need to refactor their value chains.

I do think that the ease with which Amazon managed to slough off the DDoS attacks by WikiLeaks supporters shows just how reliable/secure the public cloud is in practice. Maybe this event will start to clear some of the FUD spread around private cloud.

Lydia:
Sure, Amazon does a decent job of capacity management, but they’re doing it with a shared pool, not carving out reserved physical servers for individual tenants. A pool of 100 shared tenants does not equal 100 individual tenants (if it did, the insurance companies would be out of business).

I’ll also reiterate here that this has more impact on the cloud than just the raw physical server capacity used. The potential 8–9 or more VM migrations that could be triggered by spinning up 1 VM in this scenario impact the capacity of 3 servers for every migration (originating, destination, pool master), plus the shared storage and the network capacity, all of it effectively wasted. Substantial quantities of frivolous VM migrations in a cloud are a bad thing.
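The overhead being described is easy to tally: each migration touches three servers per the comment above, and the effect multiplies across customers. A rough sketch, where the per-migration counts and customer numbers are illustrative assumptions, not measurements:

```python
# Rough tally of the migration-ripple overhead described above.
# Counts are illustrative; a real cloud's costs would also include
# shared storage and network bandwidth per migration.

HOSTS_PER_MIGRATION = 3  # originating, destination, pool master

def migration_footprint(migrations: int) -> dict:
    """Hypothetical tally of host involvements for a burst of
    live migrations."""
    return {
        "migrations": migrations,
        "host_touches": migrations * HOSTS_PER_MIGRATION,
    }

# Clearing one host of 9 tenant VMs to seat one new "pseudo-private"
# VM touches 27 host-instances of work:
print(migration_footprint(9))

# A dozen such customers each triggering one carve-out:
print(migration_footprint(12 * 9))
```

Even this crude count shows why the objection is about churn, not just stranded capacity: the work scales with migrations, and migrations scale with every threshold crossing by every reserved tenant.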

I suppose you could mandate that the tenant maintain at least 100 VMs to qualify for pseudo private cloud, but that kind of wrecks the dynamic flexibility that is part and parcel of cloud computing.

In the end, I prefer to use the right tool for the job. Attempting to mix private and public cloud services onto the same facilities is an invitation to sub-optimal results.