The head of our IT department has decided it's a great idea to buy a large number of used Dell 2950 servers for running VMs. The stated reasons are standardization and cost.

I feel like this is a bad idea. We are already more limited by space and heat than anything else; paying a little extra for newer, more efficient servers would have been a lot cheaper than, for example, spending tens of thousands on construction of a second server room. We literally have six racks full of servers, mostly 2950s, and no more room for expansion in our original server room. I am under the impression that if we consolidated our servers onto something faster and more efficient, we could save a lot of space and produce less heat. Also, a few of the 2950s have already failed, and since they are old, with no or limited warranty, that has just meant extra expense for us.

I'd like to be able to make a strong case to stop this insanity and switch to buying new, or at least newer, hardware. Just based on CPU progress over the last few years, I imagine a brand new server should have about 4-8x the processing power of a 2950, which would let us consolidate the VMs hosted on multiple 2950s onto a single machine. The barrier is the ease with which they find used 2950s; for example, there are several on eBay right now for $399 Buy It Now. Even if I explain that a brand new server could host 5x as many VMs, they will point out that a new server costs more than 5x as much as a 2950.

Is my thinking flawed, or should I push for a change here? Does anyone have a good suggestion for a more modern server to standardize on for hosting hundreds of VMs?

The 2950 shipped with a variety of CPUs, meaning vMotion may not work between two dissimilar models. I believe there were three generations, typically called the 2950, 2950 II, and 2950 III.

The 2950 uses fully buffered DDR2, which is insanely expensive compared to DDR3. Since VMs require lots of RAM and memory bandwidth, you will need a lot of it. The limit for some of these machines was only 32GB, and you could likely buy 64 to 128GB of DDR3 for the same price.

All of what you say is true; they don't seem to care. They spend thousands upgrading the RAM to 32GB in each server. Maybe I just need to spec out a superior, newer machine, detail the cost advantages, and send it to someone higher up in the company.

The biggest thing I would comment on is power. One Dell R720 with 128GB of RAM will use about 2/3 the power of one 2950 (I think; it has been a while, and the 2950 is getting on to 5-7 years old now), with correspondingly lower cooling needs (all that power has to go somewhere). Four 2950s to get to 128GB of RAM would have (in the ESXi world, at least) four times the host licensing costs, and the R720 would use only about 16% of the power and cooling (four 2950s need roughly 600% of the power of one R720). Add in that you can no longer get support warranties for them, and you may see long downtime trying to source the right PSU, drive, etc. off eBay.
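The arithmetic behind those percentages can be sketched quickly. The 2950 wattage here is an illustrative assumption, not a measurement; real draw depends on load and configuration:

```python
# Rough power comparison implied above: four 2950s vs. one R720,
# with the R720 drawing about 2/3 the power of a single 2950.
# W_2950 is an assumed placeholder value.

W_2950 = 300.0            # assumed average draw of one 2950, in watts
W_R720 = W_2950 * 2 / 3   # R720 at roughly 2/3 of a 2950

four_2950s = 4 * W_2950
ratio = W_R720 / four_2950s

print(f"Four 2950s: {four_2950s:.0f} W, one R720: {W_R720:.0f} W")
print(f"The R720 uses {ratio:.0%} of the power")  # about 17%
```

Whatever wattage you plug in, the ratio works out to (2/3) / 4, about one sixth, which is where the ~16% figure comes from.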

Add in that some of those came with CPUs that lacked VM extensions, or in some cases were 32-bit only, and as such won't even run Hyper-V or ESXi 4.0+.

Is the person pushing this someone with minimal to no business sense who only sees the price in front of them? TCO is more important, but it can be hard for some people to grasp.

What imagoon said, mostly. Utility bill is going to be king if it's a big datacenter. If you've got a small outfit with a couple racks and an IT guy who is willing to shoulder the reliability penalties and unexpected cost penalties of half-assing, well, it's ultimately not your problem, and every time some old SCSI drive fails, you get to politely I-Told-You-So the guy.

If you're running a completely virtualized workload and are just looking to get cheap compute nodes in the racks, you really should be moving to blades. If you're renting rack space, you save thousands of dollars a month. If you're not, you can start quietly renting rack space to local startups and actually generate revenue for the company.

Calculating TCO is kind of the first rule of IT management. He should probably be sent at the company's expense to a week long workshop in Aspen so he can proudly explain his cost-cutting moves to an incredulous crowd of CIOs and IT Trolls who will correct his behavior for you.

If you've got a small outfit with a couple racks and an IT guy who is willing to shoulder the reliability penalties and unexpected cost penalties of half-assing, well, it's ultimately not your problem, and every time some old SCSI drive fails, you get to politely I-Told-You-So the guy.

That is pretty much it. All told, we have probably six full racks in our original server room, and a newly built server room with two racks partially filled.

Part of the problem is that this is a recent (last 3-4 years) changeover to virtualization; previously we had a huge mishmash of different servers with no virtualization. So even if we are doing this wrong on multiple counts, it still looks good to the bosses in charge, since it's a net positive compared to how things were run in the past.

Putting my thoughts into this thread and hearing the responses has helped me figure out what to do. I am not going to do anything drastic like leapfrog my boss and claim that he is doing everything wrong, but I am going to start getting more involved in server maintenance and pay attention to details such as the true cost of servers including replaced drives, memory, etc. The next time we need to add more servers I'll have some ammunition to make a strong case for something more modern.

I find asking accounting for a copy of the electric bill and doing a "cost analysis" useful for this. Even if you never use it, it's a good thing to do for personal experience. You use something like this to compute the loads of the servers, then add in cooling, UPS losses, etc. to give yourself a decently accurate ballpark of power cost. From there you can calculate the cost to run each server per year.

Then you add in licensing (virtualization isn't free... at least it shouldn't be if you're running more than, say, two servers).

Your personal salary times 1.25 to 1.5 can also factor in, etc. Obviously for a team you would need to guesstimate.

To give you an idea:

100 watts at the PSU normally ends up somewhere around 250 to 300 watts drawn at the mains once you account for the overheads below.

Every 100 watts at the PSU typically takes about 150 watts to cool, and a whole-room UPS can lose about 15% of the power going through it (this varies a lot; look up your gear). That UPS loss also needs to be ejected from the room, so add in cooling for it as well.
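Those rules of thumb roll up into a quick calculator. This is only a sketch: the 300 W draw and $0.12/kWh rate are assumptions to replace with your own numbers:

```python
# Ballpark annual electricity cost for one server, using the rules of
# thumb above: ~150 W of cooling per 100 W at the PSU, and ~15% loss
# in a whole-room UPS (whose waste heat also needs cooling).

def annual_power_cost(psu_watts, kwh_rate=0.12,
                      cooling_per_watt=1.5, ups_loss=0.15):
    """Estimate the yearly cost to power and cool one server."""
    ups_overhead = psu_watts * ups_loss        # watts lost in the UPS
    heat_load = psu_watts + ups_overhead       # all of it becomes heat
    cooling = heat_load * cooling_per_watt     # cooling draw to eject it
    total_watts = psu_watts + ups_overhead + cooling
    kwh_per_year = total_watts * 24 * 365 / 1000
    return kwh_per_year * kwh_rate

# A 300 W server lands around 2.9x its PSU draw at the mains,
# consistent with the 2.5-3x range above.
print(f"${annual_power_cost(300):,.0f} per year")
```

Multiply by the number of hosts and the deltas between a rack of 2950s and a couple of consolidated boxes show up fast.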

It sounds like the change they need is going to be too much to swallow, so you're probably going to be trudging up an icy hill alone, and I wouldn't advise it. I'm shaking my head reading this, though; it's crazy logic to think that grabbing $400 servers off eBay is the best investment for an (apparently) growing IT infrastructure.

You don't mention who you use for virtualization, but the cost of going the VMware route with a bunch of cheap servers is ridiculous compared to modern dual-socket (or better) powerhouses loaded to the hilt with RAM. Consolidate those old servers at a high ratio and it works out so much better long term. It's kind of sad, because those tens of thousands of dollars could have gone toward doing it right the first time.

This is all just speculation, though, as it's really a lot of work to determine what's needed from organization to organization (but something that should absolutely be done before ever making a significant purchase). Getting a daily ballpark average of your total IOPS would be very useful. I mean, if what you're running currently is handled by a bunch of old servers, it could be an eye-opening experience to realize that less than a dozen modern servers and some good SANs could run your business with power to spare.

Your company wouldn't happen to be located in the Bay Area of CA would it?

My employer buys PCs/laptops and depreciates them to $0 in three years. Then they refuse to allow employees to install a newer OS, replace a disk drive, etc. They reason, incorrectly, that the IT support costs would be too high; the flaw is that the disruption to my customer-facing work is far, far more costly. It's not too hard to get them to buy a new PC, but I don't have the time to suffer a migration. And if I try to charge the long, complex move to a new PC (I have a lot of development tools) to overhead (non-billable), I get chewed out for billing cost to overhead.

The IT managers get graded on their LAN security and IT costs. To heck with the impact on us revenue-producing, customer-facing employees (we're 95% of the workforce).

You don't mention who you use for virtualization, but the cost of going the VMware route with a bunch of cheap servers is ridiculous compared to modern dual-socket (or better) powerhouses loaded to the hilt with RAM. Consolidate those old servers at a high ratio and it works out so much better long term. It's kind of sad, because those tens of thousands of dollars could have gone toward doing it right the first time.

We do use VMware. I don't buy the software, so I'm not certain on the pricing; isn't it a per-core pricing model? I suppose a new server probably has double (or more) the performance per core anyway, even if that is the case.

We do use VMware. I don't buy the software, so I'm not certain on the pricing; isn't it a per-core pricing model? I suppose a new server probably has double (or more) the performance per core anyway, even if that is the case.

Currently it is more or less per socket. Going back to my R720 vs. 2950 comparison: the R720 would cost one or two socket licenses, the 2950s four to eight.

So that R720 can take up to 8 cores per socket, landing you with 16 cores / 32 threads and (possibly) a maximum of 768GB of RAM, for the same licensing price as a 2950 with dual quad-cores and a 32GB max, as far as VMware or Hyper-V licensing is concerned. Some of the VMware tiers are a couple thousand dollars a socket, by the way. Using a $2,000 price point:

2950s: four servers at 32GB each to reach 128GB means 4 to 8 socket licenses, or $8,000 to $16,000; the R720 needs $2,000 to $4,000, for a $6,000 to $12,000 delta. And that is just the fixed portion; there are also ongoing support contracts, etc.

That $12k delta right there pays for that one R720 (and then some I am pretty sure.)
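The per-socket math above, spelled out (the $2,000-per-socket figure is the illustrative price point from the post, not a quoted VMware price):

```python
# Socket-license comparison: four dual-socket 2950s (8 sockets)
# vs. one dual-socket R720 (2 sockets) to reach 128 GB of RAM.

PRICE_PER_SOCKET = 2000   # illustrative license price from above

sockets_2950 = 4 * 2      # four servers, two sockets each
sockets_r720 = 1 * 2      # one server, two sockets

cost_2950 = sockets_2950 * PRICE_PER_SOCKET
cost_r720 = sockets_r720 * PRICE_PER_SOCKET
delta = cost_2950 - cost_r720

print(f"2950s: ${cost_2950:,}, R720: ${cost_r720:,}, delta: ${delta:,}")
# 2950s: $16,000, R720: $4,000, delta: $12,000
```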

I don't know the details of the company you work for, but there's a good chance there's much more going on behind the scenes that factors into this decision, especially "hidden" costs (hidden to you, at least). It involves much more than processing efficiency, storage capability, or power consumption. Standardization in itself can save the company TONS of money, and continuous upgrading can make keeping that standardization in place more difficult (more changes = more money). In some circumstances, it really is cheaper to maintain standardized, possibly outdated hardware over longer periods of time.

Of course, the higher ups in your company might just be dumb and making the wrong calls. Or, if under a parent company, they might just be following forced policies, standards, and procedures. Like I said, I have no idea what your company is like.

I would suggest talking to your boss. Tell him/her you might have a potential cost-savings idea for the company, but don't immediately go into details. Make sure you have done plenty of research and planning beforehand, but you really want to have a conversation with them first. Get their side... then introduce yours (unless it's shown to be unviable at that point).

Unless I'm missing something, I am starting to understand why they went with 2950s. My understanding of vSphere 5 licensing is that it is per CPU, with an unlimited number of cores allowed per CPU, but vRAM limited to 32GB per license. Since a 2950 supports a maximum of 64GB of RAM and uses two processors, two licenses are sufficient on both counts.

With a newer, more powerful server, we might get a lot more CPU power out of each socket, but the vSphere licenses would still limit us to 32GB of vRAM each, so we wouldn't be able to take full advantage of it. This assumes we actually use the full 64GB of RAM in our existing servers and performance isn't being bottlenecked by lack of CPU power.
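That reading can be checked with a little arithmetic. This sketch assumes a strict per-host version of the 32GB-per-license vRAM entitlement described above (in practice, vSphere 5 pooled entitlements across hosts), with one license required per socket; the newer-server specs are hypothetical:

```python
import math

# Licenses needed under a vSphere 5-style scheme: one per socket,
# each entitling 32 GB of vRAM, with extra licenses bought when the
# host's RAM exceeds the combined entitlement.

def licenses_needed(sockets, ram_gb, vram_per_license=32):
    return max(sockets, math.ceil(ram_gb / vram_per_license))

print(licenses_needed(2, 64))    # maxed-out 2950: 2 licenses, nothing wasted
print(licenses_needed(2, 256))   # hypothetical new dual-socket, 256 GB: 8
```

So under that scheme, the 2950 sat exactly at the sweet spot, while a RAM-heavy modern host would pay for licenses well beyond its socket count.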

Of course, this whole thing is blown out of the water when you consider how much it costs to upgrade a 2950 to 64GB of DDR2: at Crucial it's $3,287.99 per server. A brand new Dell server, with much cheaper DDR3 available, costs less.

I'd still like to go with a more modern server for any new purchases, but do have *some* understanding as to why they went with what they did so far.

No. That's not a good reason. As you point out, a newer two-socket server with 64GB of DDR3 costs less than 64GB of DDR2. And performance/power use would still be better. (And it would have a warranty!)

Also, what the hell version of vSphere are you using? Current licensing doesn't have a per-socket memory or core limitation.

Thanks, you cleared up the part I was confused about. It looks like vSphere 5 had a 32GB vRAM limit per license, but in 5.1 that limitation was removed. Suddenly modern hardware looks a lot better.

Thanks, you cleared up the part I was confused about. It looks like vSphere 5 had a 32GB vRAM limit per license, but in 5.1 that limitation was removed. Suddenly modern hardware looks a lot better.

Yeah, because the world rose up and was about to kick VMware to the curb over the idiotic vRAM licensing they tried to push on us. I believe with a Standard license it was 32GB, and Enterprise was something like 64GB per license.
