Just curious if there is a "rule of thumb" on when (if ever) to go to a blade server setup? We rent rack space in a data center and have plenty of space for physical servers and storage. We are (like most) moving most of our servers to VMs and are looking at purchasing new hardware to support as many VMs as possible: multiple multi-core CPUs and 100+ GB of memory...


Here is how I see it. You are renting rack space and you have plenty of room for servers, whether blades, 1U boxes, etc., right? Are you paying for power usage? If so, Dell and others make very power-efficient servers designed for cloud computing, and VM software can power hosts down and up on an as-needed basis. Since blades are very dense, you need to know how good the cooling is where you rent space: do they do in-rack cooling, do they have temperature sensors that can change airflow as needed, hot/cold aisles, etc.? If memory is an issue, then most 1U servers will give you greater options there. Blades are expensive relative to 1U servers but make sense in certain places, mainly where space is constrained. I have a friend who runs a data center for a major US corporation; they built a smaller addition to house blade servers, and it is less than a quarter full because of virtualization. As a result they are going back to 1U and 2U servers in future purchases.

If you play the game and contact, say, HP and Dell directly, you might get them to "loan" you a blade chassis and one server to test. Oh, one last thing: pounds per square foot. :-)
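To put rough numbers on the power, cooling, and floor-loading concerns above, here's a back-of-the-envelope sketch in Python. All server counts, wattages, and weights are hypothetical placeholders to swap for your vendor's spec sheets; the only fixed constant is the standard conversion of 1 W ≈ 3.412 BTU/hr.

```python
# Rack density math: a rack of 1U "pizza boxes" vs. blade chassis.
# All hardware figures are hypothetical placeholders -- substitute the
# numbers from your own vendor quotes. The only real constant here is
# the watts-to-heat conversion (1 W ~= 3.412 BTU/hr).

WATTS_TO_BTU_HR = 3.412

def rack_profile(name, units, watts_per_unit, lbs_per_unit):
    """Print total power draw, heat load, and floor weight for one rack layout."""
    watts = units * watts_per_unit
    btu_hr = watts * WATTS_TO_BTU_HR
    lbs = units * lbs_per_unit
    print(f"{name:>12}: {watts:6.0f} W  {btu_hr:8.0f} BTU/hr  {lbs:5.0f} lbs")

# Hypothetical: forty 1U servers at ~350 W and ~35 lbs each.
rack_profile("40x 1U", units=40, watts_per_unit=350, lbs_per_unit=35)

# Hypothetical: four fully loaded blade chassis at ~4500 W and ~400 lbs each --
# far more heat and weight concentrated in the same rack, which is why both
# the cooling question and the pounds-per-square-foot question matter.
rack_profile("4x chassis", units=4, watts_per_unit=4500, lbs_per_unit=400)
```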

A lot of the new blades have dual backplanes, multiple NICs, RAIDed shared storage, multiple power supplies, and N+1 in all categories with no single point of failure, but yes, it's something to consider. If you use VMs, you can load balance across multiple blades and all but eliminate downtime. Again, get the direct reps in there to talk to you, or have your reseller bring them in. Find what fits and go for it.

In my opinion, blades were the hot topic before virtualization really took off; they were a way to keep your data center from growing out of control. Now that VMs are the better way to go, you can have a small number of physical servers running hundreds of virtual servers.

Unless you just have no rack space, I don't see the big benefits of blade centers anymore.

This is what I was thinking, and I just wanted to make sure that I wasn't missing something. When you start pricing out the blade systems with all of the add-on pieces necessary to get them up and running (and fully redundant), they rapidly exceed the cost of 1U hardware. Then there is the issue of heat: although our data center has hot and cold aisles, lots of heat in concentrated areas is never good. It just takes one fan going out to push it over the edge.

I think that we will concentrate on beefy regular servers. Thanks for the input!

You have to look at your environment and your workload needs. Will your workload start off small and then grow rapidly over a short period of time? Is your company planning an acquisition soon that will require a scalable server infrastructure? Are you running VDI? Blade systems also save some space, so if you're moving into a new building and want a smaller, more efficient data room, then maybe blades will work for you.

In short, if your environment has to scale up in server workloads over a short period of time, then blades are good. As mentioned, virtualization can handle most workload scalability, but with extremely rapid growth in workloads, blades might be a good fit.

However, for most shops, the ROI isn't there and normal rack mount servers with virtualization will do the trick. If you bought a blade system a couple of years ago and the enclosure is only half filled, you probably should have bought regular servers and virtualized them.

I have a blade chassis and have previously used single servers (pizza boxes). The blade chassis has a lot of environmental and power management on board but costs a whack load more. Blades are, however, a very dense computing cluster, so if you are paying by rack space, they will probably be a win for you. If space isn't an issue and there is good environmental management for your servers, then I would definitely go for single servers.

+1 for texkonc on the single-point-of-failure worry. Although most blades have redundant backplanes, etc., those aren't hot-swappable as far as I know, so you are still exposed. I have never heard of one failing so far, but what about outages for firmware updates?


If space is an issue, a blade system would be a good option; it saves you money if you rent by the rack unit. Plus, a BladeCenter can also come with a built-in SAN, so you have storage and computing power in one enclosure.

I've never been a fan of blade systems; the cost and complexity just don't make sense. At my last place we had the IBM blades, and they were a constant problem (not so much the servers as the management piece). And once those blades are full, you're looking at massive costs to expand. "Pizza boxes" just are. Sure, they take up space (although 1U ain't much), but there's no particular limit to how many you can have.

With virtualization, even fast company growth isn't a good argument. If you have storage in place, a 1U box can easily hold 15-20 VMs (depending on config, of course), and you can deploy a virtual machine in less than an hour, so rapid growth is EASY.
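To illustrate how far that stretches, here's a minimal sizing sketch using the 15-20 VMs-per-host figure above. The workload sizes are hypothetical, and the spare host is the usual N+1 allowance:

```python
import math

def hosts_needed(total_vms, vms_per_host, spares=1):
    """1U hosts required for total_vms, plus N+1 spare capacity."""
    return math.ceil(total_vms / vms_per_host) + spares

# Hypothetical workloads: even at the conservative end of the 15-20
# VMs-per-host range, a hundred VMs fits in well under a rack of pizza boxes.
for vms in (50, 100, 200):
    print(f"{vms:3d} VMs -> {hosts_needed(vms, vms_per_host=15)} hosts (incl. 1 spare)")
```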

If you're that limited in space, then MAYBE you can make the costs of blades work, but at that point I wonder whether building out a small server room of your own might be workable.

Just saw that HP has a blade that supports 1 TB of RAM and 4 x 8-core procs.

The only "I must buy a blade" reason is if you NEED HP RGS. If you do, you know who you are, and until RemoteFX catches up you're stuck with crazy expensive workstation blades.

Pros:

Blades let you avoid managing a lot of cables and save something like 10% on your power bill. They provide unified management tools and consoles, so you don't have to log in to or poll multiple places to know the health of your entire environment (storage, networking, switching, etc.). They allow for crazy density, so if you're in some crazy place like Hudson St, they save a lot of MRC on hosting. Ideally, with the right modules, you shouldn't need anything but a firewall and a SAN.
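On the density/MRC point specifically, the space math looks something like the sketch below. The 16-blades-in-10U figure is an assumption modeled on common half-height blade enclosures, and the per-rack-unit hosting rate is a made-up placeholder:

```python
# Space cost: one blade enclosure vs. the same server count in 1U boxes.
# Assumption: a 10U enclosure holds 16 half-height blades (modeled on
# common chassis); the $/U/month colo rate is purely hypothetical.

SERVERS = 16
ENCLOSURE_U = 10
DOLLARS_PER_U_MONTH = 50   # hypothetical hosting rate

blade_u = ENCLOSURE_U      # one enclosure carries all 16 servers
pizza_u = SERVERS * 1      # sixteen separate 1U boxes

print(f"blades: {blade_u}U -> ${blade_u * DOLLARS_PER_U_MONTH}/month")
print(f"1U:     {pizza_u}U -> ${pizza_u * DOLLARS_PER_U_MONTH}/month")
```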

Cons:

They lock you into a vendor/platform. Most data centers are not designed to cool something with the BTU rating of a Weber gas grill (try stacking 4 of these in a rack). Some vendors require crazy special racks or else you lose support, they're epically painful to install, and weird BIOS/firmware issues affect ALL devices since they share the same hardware (being too homogeneous can cause problems).

They are cool but...

I would buy blades only AFTER I had bought the following things first (a sketch of the N+1 math follows the list):

At least 8 sockets across the VM hosts, with N+1 redundancy for HA or 2N+1 for FT

At least 512 GB of RAM across the VM hosts

A dual-controller SAN from a vendor that on paper guarantees 99.999% uptime.
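As a minimal sketch of what that N+1 check means in practice: after losing a host, the surviving hosts still have to carry every VM's reserved memory. The cluster sizes and memory figures below are hypothetical:

```python
def survives_host_failure(hosts, ram_per_host_gb, reserved_ram_gb, failures=1):
    """N+1 check: does reserved VM memory still fit after `failures` hosts fail?"""
    surviving_capacity = (hosts - failures) * ram_per_host_gb
    return surviving_capacity >= reserved_ram_gb

# Hypothetical cluster: four hosts x 128 GB = 512 GB total (matching the
# 512 GB-across-hosts figure above), with VM memory reservations to carry.
print(survives_host_failure(hosts=4, ram_per_host_gb=128, reserved_ram_gb=300))  # True
print(survives_host_failure(hosts=4, ram_per_host_gb=128, reserved_ram_gb=400))  # False -- add a host
```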

The other justification is that the person with the pocketbook has been drinking the "EMC + VMware + Cisco" Kool-Aid a little too much and is willing to approve their crazy expensive Vblock setup, since it creates the mythical environment where firmware-revision incompatibility hell never exists between your SAN, your switching, and your HBAs.

Don't know how I missed this thread. I've been saddled with blade systems in the past and they've been nothing but nightmares: a fortune to purchase, massive performance problems, a management nightmare, and none of those operational issues even begins to compare to the cost of phasing them out.

When depends on your current inventory and needs: if you only need 2-7 servers, go U-based; if you need 10-50, then blades. They are modular, designed for VMs, and have great built-in tools for daily monitoring. I love the Intel Modular Server: a great, reliable product at lower cost than the "Big Iron" vendors. You can deck one out for around $30K and be able to run 50+ VMs with no worries. I always keep a spare blade and CPU on the shelf; if one starts to flake out, it will notify you via email, and you walk over, tell it to move its workload, and swap it out. It's a great system.


I've run the numbers for shops with hundreds of physical servers, and blades still weren't cost-effective. I think you only see the payoff when you are running thousands of identical boxes, and even then we opted against it.
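For anyone who wants to run those numbers themselves, the calculation has a simple shape: blades carry the fixed cost of each enclosure (plus interconnect modules) on top of the per-blade price, so they only win once enough servers amortize that overhead. Every price below is a made-up placeholder:

```python
import math

def blade_total(servers, chassis_cost, blades_per_chassis, blade_cost):
    """Total hardware cost for a blade deployment: enclosures plus blades."""
    chassis = math.ceil(servers / blades_per_chassis)
    return chassis * chassis_cost + servers * blade_cost

def rack_total(servers, server_cost):
    """Total hardware cost for plain 1U/2U rack servers."""
    return servers * server_cost

# Hypothetical prices: $6k per 1U server, $5k per blade, $20k per 16-slot
# enclosure. With numbers like these the enclosure overhead never pays off --
# which is this thread's conclusion in miniature.
for n in (4, 8, 16, 32):
    print(f"{n:2d} servers: blades ${blade_total(n, 20_000, 16, 5_000):,} "
          f"vs rack ${rack_total(n, 6_000):,}")
```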

