Posted
by
Soulskill
on Friday December 13, 2013 @01:09PM
from the buzzwords-as-a-service dept.

itwbennett writes "Two reports out this week, one a new 'codex' released by 451 Research and the other an updated survey into cloud IaaS pricing from Redmonk, show just how insane cloud pricing has become. If your job requires you to read these reports, good luck. For the rest of us, Redmonk's Stephen O'Grady distilled the pricing trends down to this: 'HP offers the best compute value and instance sizes for the dollar. Google offers the best value for memory, but to get there it appears to have sacrificed compute. AWS is king in value for disk and it appears no one else is even trying to come close. Microsoft is taking the 'middle of the road,' never offering the best or worst pricing.'"

That is possibly more moronic than simply mistaking a verb for a noun.

No, it's not moronic. Slang is never moronic, it just is. That is how language evolves, especially within subgroups... even I at the periphery of the cloud world (as I'm primarily a developer and not a sysadmin) understood what "compute" meant and didn't even think twice reading it.

The fact that you had trouble with it merely means you exist more outside that world than the people using it, not that there is anything wrong with the word.

This is not slang. This is an attempt at forcing slang and that is what makes it moronic. I am fully aware of the evolution of language and this is not it.

No, it does not say the same thing. It makes it look like the sentence was cut off because there was a trailing verb. There are perfectly good nouns that would fit that spot and actually be descriptive. Alternatively, they could have actually written what they meant instead of trying to "spice" it up and make it look hip.

Oh, slang is often moronic. Slang mostly exists as deliberate language misuse, an in-group identifier. And groups of morons have slang too.

But this is technical jargon (a specific kind of slang), and technical jargon is definitely stupid when simpler common English works in its place, since technical jargon isn't only deliberate language misuse as an in-group identifier; it's needed to communicate with little ambiguity. "Utilize" is stupid jargon because "use" works fine.

That... doesn't help. What the hell is compute capacity?? How much compute the cloud instance can do?

Ok, dropping the pretense of being dense, I can see what the intended meaning probably is. But how much harder is it to say computing capacity, or computational capacity, or any other way of saying it that doesn't make the speaker sound like a douche.

Seems like you can pick which vendor gives you the best value based on the use case of your application. Doesn't seem that absurd to me at all.

Infrastructure is sort of like being a car manufacturer - a lot of investment in hardware, facilities and people; meaning the barriers to entry are quite high. Sure, I could piece together my own infrastructure in my basement, but to offer the bandwidth and uptime that the big boys offer? NFW. The power (as in alternating current from my utility) alone is an issue, and there's a bunch of things that add together to make a 99% uptime system that isn't exactly off-the-shelf knowledge or technology.

Some features are included in one, but not the other. Some things are add-ons. Some things aren't even available.

Trying to get a "compare like to like" is damned near impossible, because they've carefully set them up so it's impossible to do that.

Which means if you're trying to evaluate several of these services to figure out which is the best value for your needs, you need to do extensive fiddling to get them described in the same terms and actually be able to understand what you're seeing.

The point of the article seemed to be less about who's best at what, and more about how difficult it is to actually determine it. And he's right, in my opinion. The way cloud services are usually priced can make it really difficult to know what your actual cost will be.

I guess you would enjoy car shopping where one dealer offers you furlongs to the hogshead and another millidrops/cm. The passenger capacity is measured in baby chimps and the A/C capacity in Yankee Stadium beers. Meanwhile, instead of a sticker price, you must multiply out the pennies/unit for each metric displayed and total them up. Each car has different prices for each metric, some metrics have limited granularity, and not all vehicles present equivalents to each metric.
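The analogy maps straight onto cloud pricing: comparison is only possible once every vendor's metrics are converted to a common basis. A minimal sketch in Python — the vendor names, prices, and units below are entirely invented for illustration; the point is that per-hour and per-month quotes can't be compared until they share one:

```python
# Hypothetical normalizer for cloud price quotes. All vendors and prices
# here are made up; only the normalization idea is the point.

HOURS_PER_MONTH = 730  # a common billing convention

vendors = {
    "VendorA": {"vcpu_per_hour": 0.05, "gb_ram_per_hour": 0.01},
    "VendorB": {"vcpu_per_month": 30.0, "gb_ram_per_month": 9.0},
}

def monthly_rates(offer):
    """Convert whatever unit a vendor quotes into a per-month figure."""
    rates = {}
    for key, price in offer.items():
        resource, _, unit = key.rpartition("_per_")
        rates[resource] = price * HOURS_PER_MONTH if unit == "hour" else price
    return rates

for name, offer in vendors.items():
    rates = monthly_rates(offer)
    print(name, {k: round(v, 2) for k, v in sorted(rates.items())})
```

On these invented numbers, VendorA's $0.05/vCPU-hour works out to $36.50 per vCPU-month — more than VendorB's flat $30, even though the hourly quote looks tiny at a glance.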

Every time I read these types of articles, I feel like implementation cost is always ignored. Sure, maybe I get some extra compute for my dollar here, or some extra memory there, but how long did it take to integrate this solution using a given vendor's APIs and services? How easily can I script scale-up and scale-down policies? How effective are those scaling policies at actually saving me resources and money? I think this is kind of an old-fashioned way of calculating infrastructure pricing - it's more complex than just pricing out servers that happen to be somewhere else.
Major caveat, however - it's awfully tough to calculate some of those intangibles accurately enough to put in a whitepaper...

The quality of journalism here? Don't you mean everywhere? Nearly everything you read today is pandering and propaganda. I blame the audience for a good portion of that. It's amazing how many people who think they are intellectuals refuse to even consider that their favorite theory is not "fact". Thirty years ago there were pig-headed people too, but it was nowhere near as bad as today.

There is simply no free lunch. The guy you are outsourcing to is in it for the money. He will make sure he makes his money off of you. They're not going to put up with crap that your employees normally would. They certainly won't do it for free.

Recurring costs are everywhere in IT. Power, AC, floor space, people to guard your servers, replacing broken/obsolete hardware. This is nothing new. It's not like you just buy a big ass server and watch it run forever with no recurring support costs.

I think a lot of people here are massively underestimating the total cost of a unit of computing resources when they run it in their own machine rooms. It's not like your machine room is any more efficient to operate than Amazon's. In fact, it's probably massively less efficient unless you're a pretty big operation. The only cost they have that you don't have is "profit for Amazon."

The flip side is that at a small scale, you get a certain amount 'for free.' If you need to have some infrastructure locally, then you already have some sort of a room with space to put a new server in, and you already have sufficient electricity. You already have a guy to replace a blown hard drive. The extra time he spends replacing it is technically nonzero, but it's a fairly rare event, so a single extra server tends to be "in the noise." The big cost comes as soon as you exhaust your existing capacity.

Well, not intentionally perhaps, but it likely manages to do so anyway. At the extreme end you have applications like R&D where the demand for computational simulation and analysis resources may fluctuate wildly - an appropriate cloud service will let them pay for only what they need, rather than needing to maintain their own peak-capable infrastructure at all times.

More commonly, it trades periodic large capital outlays for hardware for ongoing rent.

Oh, sure, someone like Amazon can probably get a better price on the hardware than you or I. But they still need to buy it, power it and arrange bandwidth, same as anyone else.

Where they come into their own is in a few very particular (and for that matter very common) use cases:

- Where you don't need the power of a whole server and can get by just fine on a tenth that amount.
- Where your requirements may spike occasionally - but the keyword is "occasionally". They don't spike all the time.

That's a minuscule part of it. My company's base infrastructure is n servers. During heavy load, we routinely need to scale up to n*20, maybe n*50 capacity. We pay out the ass for a few hours, then drop back down to the cheap n size. Because we share a cloud provider with many thousands of other companies, we can do that scaling for a tiny fraction of what it would cost us to support our maximum capacity on our own. When our needs are peaking, our neighboring companies are scaling down and going dark.
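The arithmetic behind that claim is easy to sketch. A back-of-the-envelope comparison in Python, with every number invented (n = 10 servers, a 20x burst for a few hours a month, made-up unit costs — not real vendor prices):

```python
# Back-of-the-envelope: owning peak capacity vs. bursting into a cloud.
# All figures below are illustrative inventions.

base_servers = 10            # "n" in the comment above
peak_multiplier = 20         # scale to n*20 during heavy load
peak_hours_per_month = 8     # "a few hours" of peak, per month
own_cost_per_server_month = 100.0  # amortized hardware + power + space
cloud_cost_per_server_hour = 0.50  # on-demand hourly rate

# Owning: you pay for peak capacity around the clock, used or not.
own_peak = base_servers * peak_multiplier * own_cost_per_server_month

# Bursting: pay for the base 24/7, plus extra servers only while they run.
burst = (base_servers * own_cost_per_server_month
         + base_servers * (peak_multiplier - 1)
           * cloud_cost_per_server_hour * peak_hours_per_month)

print(f"own peak capacity: ${own_peak:,.0f}/mo")
print(f"burst into cloud:  ${burst:,.0f}/mo")
```

On these made-up figures, owning the peak costs $20,000/mo while bursting costs $1,760/mo. The exact ratio is fiction, but the shape of the gap is the commenter's point.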

Insanely hard to calculate what your costs will be at any given provider. So, insanely bad for the bottom line.

It seems pretty easy to calculate what your costs will be at any given provider - just add up your infrastructure needs and use the published pricing to calculate how much you'll pay. When we migrated to AWS, we estimated our monthly bill to within 10% of our actual monthly bill. Of course, if you don't know what your needs are, then you're shooting in the dark, but the same is true if you're buying your own equipment and hosting it at a colo.
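That "just add it up" estimate is literally a sum of quantities times published unit prices. A trivial Python sketch — the resource needs and unit prices are invented placeholders, not real AWS rates:

```python
# Sketch of a "sum of needs times unit prices" bill estimate.
# Quantities and prices are illustrative inventions.

needs = {
    "instance_hours":   5 * 730,  # five servers, one month
    "storage_gb_months": 500,
    "egress_gb":         200,
}
unit_price = {
    "instance_hours":    0.10,
    "storage_gb_months": 0.05,
    "egress_gb":         0.09,
}

estimate = sum(qty * unit_price[item] for item, qty in needs.items())
print(f"estimated monthly bill: ${estimate:.2f}")
```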

What's hard is comparing prices across all the providers, since you have to translate each one's metrics into common terms first.

Connecting that blade server to other Internet services and to customers and protecting your service from hardware or software failure can become a challenge. "The cloud" (someone else's computer) provides Internet connectivity, failover to a fresh instance, and managed backup.

Sure, then cost out the electrical and HVAC infrastructure to make sure that the wondrous blade server always has power and cooling. And no, a simple UPS in the rack is not going to suffice for that AD/E-mail/File server infrastructure that supports 200 lawyers in three different buildings across six blocks in downtown Madison, Wisconsin.

Not a fan of "Cloud Computing", but it is not as simple as buying a blade server and plugging in an internet connection.

1) 200 lawyers can afford some electrical and HVAC costs, not to mention a well-paid IT staff.

2) It pretty much is that simple for those of us who do it. Supporting infrastructure for a few hundred people is child's play nowadays. And hell, if you're setting up AD and Exchange on cloud servers, you can do it on your own hardware.

No argument but it is not as cheap as the GP was making it out to be. Setting up a robust environment is not cheap nor easy. If you are setting up a new environment from scratch there are a lot of costs unrelated to computing hardware that individuals often fail to account for. A co-location or cloud environment may, and I emphasize may, be a way to go. In my experience the way the cloud vendors wind up nickel and diming you makes it not such a no-brainer.

Every time I compare SSD to HD, I don't see the power savings GB-for-GB unless you are talking about trivial numbers of GB.

For example, the Intel P3700 series SSD [hothardware.com] (2 TB max size) has a power consumption of 25 watts writing and 10 watts idle. Look at the colossal heat sink on that thing.

A Seagate ES.3 [seagate.com] 7200 rpm 2 TB SAS enterprise HD has a power consumption of 10 watts random read and 6 watts idle. Considering that the 4 TB model doesn't take much more power than that, the hard drive comes out well ahead on watts per GB.
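Dividing the wattages cited above by capacity makes the comparison explicit. A quick sketch using only the active-state figures from the comment:

```python
# Watts per terabyte, using the active-state wattages quoted above.
ssd_watts, ssd_tb = 25, 2   # Intel P3700, 2 TB, while writing
hdd_watts, hdd_tb = 10, 2   # Seagate ES.3, 2 TB, random read

print(f"SSD: {ssd_watts / ssd_tb} W/TB")
print(f"HDD: {hdd_watts / hdd_tb} W/TB")
```

That's 12.5 W/TB versus 5 W/TB — and if the 4 TB hard drive really draws about the same 10 watts, its figure halves again to 2.5 W/TB.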

I'm not "hating", I'm just pointing out that in the huge list of gigantic data breaches there certainly seem to be a lot of non-cloud instances. I don't think rolling your own makes you safer unless you are exceptional in that regard.

Seriously, a 128 core blade server with tons of TB in DDR3 and a couple of SSD boxes are pretty darned cheap.

And then your data doesn't get "stolen" or "lost".

Of course, you need 2 of them for redundancy. And a router. And a firewall. And a load balancer - all duplicated for redundancy. And multiple internet connections from different vendors (you don't trust your coloc for internet connectivity, right? That's like using a cloud provider).

And then you need to duplicate the whole thing in another datacenter for geographical redundancy.

And hire people to manage it all.

Suddenly it's not so cheap when all you really need is a half dozen 2-core servers and a few warm spares in the remote datacenter.

And then you need to duplicate the whole thing in another datacenter for geographical redundancy.

Useful for some workloads, sure. But if it is an internal service, rather than something like a website (gasp, not all servers are public facing websites) then if my office gets taken out by a meteorite, none of the corpses in the building actually care about whether or not some instance of the service exists in some other safer geographic region.

Of course, you need 2 of them for redundancy. And a router. And a firewall. And a load balancer - all duplicated for redundancy. And multiple internet connections from different vendors (you don't trust your coloc for internet connectivity, right?

Do you even know what a blade server is? Redundant blades, redundant power supplies... redundant bloody everything.

I do, and I even know the difference between a blade chassis and a blade server. And I've seen what happens when a voltage regulator failure on a blade takes out the entire 12V rail on the blade chassis (as well as taking out the blade next to it). No one that cares about reliability is going to run a single chassis.

Firewalling, routing, and load balancing can be handled by VMs running on said ridiculously redundant blade server.

Don't you still need people to set up all of those services?

Does anyone really run their border firewall on the same blade chassis that runs their servers? I won't even plug non-firewalled internet traffic into the same core switches that carry the rest of our traffic.

Most businesses don't need geographical redundancy because they don't need 100% uptime. Very few do. I'd say the vast majority of the businesses (small to medium) out there can get by without their servers for a day. They might not like it, but they won't die.

That's what businesses say when they haven't had a week-long outage because a transformer blew a hole in the side of their colocation center. Business continuity can make or break a business after a disaster - and it comes very cheap with most cloud computing solutions. Shipping hourly data snapshots to a remote colo is cheap, and a business can be up and running at the remote site with no more than an hour of lost data.

I used to work for a company that sold cloud services. It can be good for some use cases, but not so often as people seem to think.

Sure, cloud computing is not for everyone, but a single blade chassis is not a replacement for cloud computing.

Let's start by using "codex" correctly. (Or, in this case, not using it at all...) It's not a secret decoder ring. It's a bound set of pages. Or a "book", but not necessarily with a cover. A codex may be a guide to decoding or translating something, but that would be completely incidental, as the word carries no such meaning.

One very important aspect to pay attention to is the advertised performance you will get from the service. CPU cycles, memory size, storage volume, and networking bandwidth are all sure to be price points and advertising points. I would encourage everyone to pay attention to any fine print about:

*Dedicated vs. shared CPU. The biggest problem with CPU sharing is that CPU cycles are scheduled to be shared on oversubscribed "cloud" providers, which helps lower cost. Oversubscribed CPU cycles cause CPU wait time, which means that your "cloud" CPU may need to wait X amount of time to be scheduled onto the N CPU cores you are paying for. Say you have 8 CPUs: you may need to wait for 8 CPUs to be unused on the physical host you are on before you get to do any work at all. If you have 1 or 2 CPUs then this is far less of an issue. The greater the core count, the bigger the issue.
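The gang-scheduling effect described above can be illustrated with a toy model: assume each physical core is free at a given instant with independent probability p (both the independence assumption and p = 0.7 are simplifying inventions). A VM needing n vCPUs can only be dispatched when all n cores are free at once, so its runnable fraction falls off exponentially with n:

```python
# Toy model of gang-scheduling wait on an oversubscribed host.
# p and the independence assumption are illustrative simplifications.

p_core_free = 0.7  # invented chance that any one physical core is free

for n_vcpus in (1, 2, 4, 8):
    runnable = p_core_free ** n_vcpus  # all n cores free simultaneously
    print(f"{n_vcpus} vCPUs: runnable {runnable:.1%} of the time")
```

With these invented odds, a 1-vCPU guest is runnable 70% of the time, but an 8-vCPU guest only about 5.8% — exactly the "greater the core count, the bigger the issue" effect.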

*Memory ballooning. Memory is one of the most easily oversubscribed resources in "clouds". To cut costs, memory is allocated to you at, let's say, 12GB, but you only use 6GB; on the back end you are really only given 6GB. Going further, let's say that you have 12GB, use only 6GB, but have only 4GB actively in use by your application. There are memory schemes out there that will write the 2GB you do not use very often to disk (think of it as intelligent swapping).

*Disk IO speeds. Storage can be really cheap or really expensive depending on how it is architected. Pay attention to any fine print describing what the storage consists of and whether you have any kind of dedicated disk IO. The cheapest "cloud storage" provider may be offering a product that works great for highly cached, low-transaction websites, but that same provider may give poor performance for a server with a high rate of disk transaction logging, or a highly transactional application.

*Bandwidth limitations. Pay attention to quality-of-service limits. Pay attention to bandwidth sharing: do you get the full advertised bandwidth to the internet, or do you get "up to" bandwidth limits? Network connections to other co-hosted servers could be as fast as 40+ Gb/s. If it matters to your application, ask whether there are higher-bandwidth connections between co-hosted servers.

*Backups, service uptime, service failure compensation, and riders on the contract that allow temporarily lower performance in the event of a hardware failure. Also: options for expansion of resources (hot or cold).

That, and Amazon and others all oversubscribe the hardware. We have VMware servers that use twice the physical memory that's in there. 2GB allocated to an instance doesn't mean it's always using 2GB, so you can add more instances.

Say you have 8 CPUs: you may need to wait for 8 CPUs to be unused on the physical host you are on before you get to do any work at all. If you have 1 or 2 CPUs then this is far less of an issue. The greater the core count, the bigger the issue.

You seem to be describing a "feature" in versions of VMware that are very old these days.

Cloud computation sites like CEX.io and Cloudhashing.com are for those who don't want to house their computation hardware at home. The cloud costs at least triple compared to similar (performance-wise) hardware, but you don't have to deal with electricity and stuff, plus you can sell back your hashing power to the exchange.

Most IT services and applications have gone to extremely complicated price models now. The purpose is to confuse upper level management so that they just decide to buy the highest level of service because they can't figure out what any of the levels mean.

Try reading the MS SQL Server license guides. It's more complicated than the software itself and even has quick reference guides and instructions on how to read the guides. Most managers just say to buy the most expensive so they know they're covered.

The most expensive licensing for a product does not always get you all the functionality you want or need to use the product. Many companies offer "plugins" or add-on services to make their base or even advanced product better. These products often do not have an all-inclusive option. Ultimately, any marketer will try to get as much out of their products as they think they can get away with. If the people making decisions cannot, by themselves, understand exactly what they are buying, they ought to include others who can.

Cloud pricing is insane (and insanely complex) because otherwise the vendor wouldn't make any real money off of it.

Take AWS for instance. Sure, the spot pricing is cheap as hell. Well, it would be, if they didn't charge you $0.11/GB-month for storage, a penny-fraction for every 10,000 GET requests you receive (and a similar price for every 1,000 PUT/form requests), and a zillion other nickel-and-dime charges that turn a forecasted $300/mo. estimate into a $3200/mo. OpEx (for five moderately busy servers with a small DB... basically a smallish commercial website).
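The shape of that surprise is easy to model. A Python sketch in which every price and traffic figure is invented (deliberately not real AWS rates), showing how per-request and per-GB line items pile on top of a naive instance-only estimate:

```python
# How nickel-and-dime line items swamp an instance-only estimate.
# All prices and traffic volumes below are illustrative inventions.

instance_hours = 5 * 730  # five servers, one month

charges = {
    "instances":     instance_hours * 0.08,
    "storage":       500 * 0.11,                     # GB-months
    "get_requests":  (50_000_000 / 10_000) * 0.004,  # billed per 10k GETs
    "put_requests":  (2_000_000 / 1_000) * 0.005,    # billed per 1k PUTs
    "data_transfer": 900 * 0.09,                     # GB out
}

naive = charges["instances"]
total = sum(charges.values())
print(f"instance-only estimate: ${naive:,.2f}")
print(f"with the line items:    ${total:,.2f}")
```

The made-up numbers only show the direction of the surprise; in the commenter's real case the multiplier was closer to 10x.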

I know this because I just inherited one of these. My predecessor promised cheap, I'm stuck with managing expensive (and am moving the #$@! thing back into our existing colo space as soon as I can practically do so...)

My employer had an interesting result when looking at these factors, which is: AWS is the same cost as our own datacenter for heavily utilized systems. Where savings can be realized is in hosting burst or temporary capacity. Or, I suppose, if you don't have your own DC. It makes sense: AWS pricing would ultimately have to be about the same as anyone else's datacenter, with maybe a little economy of scale thrown in. But any well-run DC should price out in the same neighborhood.

It does come down to that, but only almost. When you consider what it takes to dink around with VPC, along with other infrastructure integration hassles (not to mention the sysadmin's time in ramping up and dealing with them)? It can get pricey in a hurry. It gets even worse when you have a *nix-heavy environment and discover that, unless you want to jump through a ton of hoops, you can only migrate 'doze Server 2003/8 VMs to it.

Now, as a cold-start remote DR site that you build up when needed? That might be a different story.

I know this because I just inherited one of these. My predecessor promised cheap, I'm stuck with managing expensive (and am moving the #$@! thing back into our existing colo space as soon as I can practically do so...)

Sounds like your predecessor fell for a scam that's existed since time immemorial. Outsourcing isn't always cheaper. How can it be when the company you're outsourcing to faces the exact same costs as you do but needs to make a profit on top?

Oh, sure, it is under some specific circumstances. But the idea that it always is is downright lazy management.

My guess is that you've never managed a data center, or specced a large enterprise application to serve high numbers of simultaneous users, if you think that cloud offers a "false economy" to users.

Certainly, you can waste money on cloud by pushing "everything" into the cloud. But you can save shitloads of money by adopting a cloud model as well. If you simply need to expand into a cloud provider occasionally to accommodate seasonal peaks, then you can save yourself massive amounts of infrastructure cost.