I find myself needing more virtualization capability than I'm getting out of my HP N40L MicroServers. I've been getting by with VirtualBox on my workstation, but that brings its own set of problems.

I've had a small stack of PowerEdge servers sitting in a pile for a while. Pretty good spec: dual quad-core CPUs, 32 GB of memory, all the bells and whistles of the day. I figure one of them will cover my needs for a while with room to spare. Of course, anyone who is familiar with these knows they're crazy loud; so loud you can hear one anywhere in the house, regardless of where you stash it.

So I've looked at some alternatives.

There are some nice Intel and Supermicro motherboards on eBay for not-crazy money. Being able to reuse the memory and CPUs is a pretty nice thing; however, I'd need to shell out for heatsinks, fans, case, power supply, etc. Given the specific requirements of a dual socket 771 motherboard, these add up pretty quickly to a sizable pile of cash (window-shopping estimates say no less than $750). My favorite board so far is probably the Intel S5400SF, with 16 DIMM slots for a maximum of 128 GB of RAM; what a beast that would be for virtual machines! http://ark.intel.com/products/29871/Int ... rd-S5400SF

There's also the possibility of building a beefy desktop and using the common desktop silencing components, but again, starting from scratch would be at least that same $750 outlay.

After much deliberation, I decided that I'll take the less costly, more difficult direction and modify the equipment I already have.

Here is what I know, starting with the bad:
- The 2950 is a 2U server with six 60mm fans, 2 of which are in the power supplies.
- The case fans baseline around 6500 RPM and go up from there based on heat. These buggers are rated at over 1.6 A each!
- The BMC (Baseboard Management Controller) has a minimum RPM threshold; below it, an audible alarm goes off and all fans ramp up to 100% speed.
- Intel Xeon CPUs and FB-DIMMs are generally power hungry.

On the positive side:
- I don't care about modifying the case to reach my goals, and I'm handy with a Dremel.
- There has been some work on modifying the BMC firmware to change the minimum RPM settings.
- My budget for the project is "as long as it costs less than the other options".
- I won't be in the same room as the Dell when it's done; it just needs to be quiet from the other side of a wall.

So I've got a basic plan to start with. I'd like to replace the 60mm fans with slower, quieter models, modify the BMC firmware to be quieter, add a few 120 or 140mm fans to blow in from the top, and use sound-deadening material to reduce/absorb vibration through the metal case. I haven't decided on the hard drives; however, they're all on a SAS controller, which can run SATA drives as well, at least in theory.

If it has to be quiet on the other side of the wall, put it in a closed but vented cabinet. There are few 60mm fans that are quiet, and fewer (if any) that will keep a 2U machine cool enough to run well, especially with a pair of quad-core CPUs; Socket 771 was a bugger for heat.

If the motherboard is standard EATX or even ATX, you would see better results scrapping the 2U case and getting a full tower, which would allow you to use 120mm fans. Modifying the BMC will let you escape the RPM requirements, but you'll need to keep an eye on temps so you know you're still running as cool as you should be. Good tower coolers with good fans make that a simple solution.

I would also scrap the server PSU unless keeping it is a dealbreaker and/or it's proprietary. Some Dell PSUs are, some are not. If you can move to a standard ATX PSU in a full tower, you could replace the PSU, CPU coolers and all the fans for much less than buying a whole new machine, and have a result that may even be quiet in the same room.

Well, I have a few environmental advantages that should help me, such as not being in the middle of a stack of other hot-running servers, not to mention a workload that shouldn't stress it out too much. That should be worth a few points.

The ATX/EATX thing is a no-go; this is all proprietary.

Overall view:

PSU connector:

PSU (2 of these, hot swappable!):

I've seen a few 60mm fans that run at much more reasonable RPM, but of course the reality is that these factory fans are very high CFM, and I expect I will need to supplement airflow elsewhere. The 60mm fans should be enough to keep the air mostly flowing from back to front, but I'll for sure be keeping a close eye on temps.

Parts that need cooling:

Stock fans: the spec sheet for this fan says flat out it will run 12,000 RPM and move almost 68 CFM each, producing 61.5 dBA... each... and that's not counting the noise of the air actually blowing on things.
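For scale, identical incoherent noise sources combine as 10·log10(N) on top of a single source's level. A minimal sketch, using the 61.5 dBA spec-sheet figure for all six fans (free-field, same distance; real placement will differ):

```python
import math

def combined_spl(level_dba: float, count: int) -> float:
    """Combined level of N equal incoherent sources: L + 10*log10(N)."""
    return level_dba + 10 * math.log10(count)

# Six fans at 61.5 dBA each, per the spec sheet quoted above:
total = combined_spl(61.5, 6)
print(f"{total:.1f} dBA")  # ~69.3 dBA - roughly vacuum-cleaner territory
```

Which is why "same room" quiet is such an unrealistic starting point: you'd need to shave something like 30 dBA off.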

I would say that "same room" quiet is a pretty unrealistic goal, but I am optimistic I can achieve "next room" quiet.

So I've got a basic plan to start with. I'd like to replace the 60mm fans with slower, quieter models, modify the BMC firmware to be quieter, add a few 120 or 140mm fans to blow in from the top, and use sound-deadening material to reduce/absorb vibration through the metal case. I haven't decided on the hard drives; however, they're all on a SAS controller, which can run SATA drives as well, at least in theory.

I would strongly advise against changing those tiny fans, mainly because if you can't hear those 60mm PSU fans, they won't be giving enough cooling. This server case (and similar ones) is designed to be cooled by air-conditioned air pushed through the whole chassis at a lot of pressure. If your room temperature isn't at most 28 °C throughout the whole year, you're already running this server outside of Dell's specifications. And if you then go and reduce the air pressure available for cooling via slower-spinning fans...

For the other case fans inside this Dell server, you'll probably have a hard time finding replacements, because server fans tend to have non-standard cabling and dimensions.

It seems to me like it would be easier (and possibly more cost effective) to just build a high end consumer desktop system from the ground up instead of retrofitting this one for acceptable noise levels. You could probably even sell some of the old parts in online classifieds to help with the initial outlay on the new system.

As someone who used to work closely with these beasts let me echo the recommendation to forget it. They really do need those screaming fans to stay cool.

Unless you have a really heavy multi-threaded load or tons of VMs a modern desktop quad core is going to soundly trump the 2950 in performance. You said your workload wouldn't stress the 2950 much, so it sounds unlikely that you need expensive high end equipment to run your VMs effectively. Also, have you considered power consumption (and expense) in your costs? Depending on your power rates a new system might pay for itself sooner than you think.
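The power-cost point is worth a back-of-envelope check. Everything numeric below is an illustrative assumption (the wall draws and electricity rate are guesses, not measurements); the $750 figure is the build estimate from earlier in the thread:

```python
# Back-of-envelope payback estimate for replacing the 2950 with a desktop.
# All figures below are assumptions for illustration, not measured values.
OLD_WATTS = 300       # assumed 24/7 wall draw of the 2950 under light load
NEW_WATTS = 80        # assumed wall draw of a modern desktop build
RATE_PER_KWH = 0.12   # assumed electricity rate, $/kWh
NEW_BUILD_COST = 750  # ballpark new-build cost from earlier in the thread

saved_kwh_per_year = (OLD_WATTS - NEW_WATTS) * 24 * 365 / 1000
saved_per_year = saved_kwh_per_year * RATE_PER_KWH
print(f"${saved_per_year:.0f}/year saved; "
      f"payback in {NEW_BUILD_COST / saved_per_year:.1f} years")
```

Under those assumptions the new build pays for itself in a few years on electricity alone, before counting the noise benefit.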

Don't forget that the newer Intel chips have a host of instruction sets for dealing with VMs to aid performance. Let me add my voice to the chorus and suggest an i5-3550 with a ridiculously big tower heatsink. The reality is, you're going to spend more than that to get this thing to where you want, even if you don't think so now. Especially if you're ok with "next room quiet" it can be a generic case and PSU from Newegg, basic motherboard, then just SSD, CPU, and RAM. In fact, that can probably be less than $750.

I'm not new to servers, and I'm not coming at this from an uneducated position. The 2950 series supports (and I have) Clovertown Xeons, which include VT-x virtualization. The enhancements since then are nice, but Clovertown gets the important stuff.

As far as workload goes, I need something that can outperform an N40L MicroServer, which uses an AMD Neo CPU. It's pretty slow. While none of my VMs should be a particularly CPU-hungry instance, I have quite a few of them split among 2 MicroServers and my workstation (the high-end i7 with 32 GB of RAM).

Dell specifies an operating range of 10 to 35 °C, not a limit of 28 °C, but we keep the house at 22 °C so that isn't an issue.

Really, let's remove the whole "PowerEdge" part. I've got a system with two 80 W TDP processors, another ~70 W from the memory, and we'll just guesstimate another 50 W from the rest of the chipset. That's just under 300 W to cool in the main compartment. Right now it's being cooled with 60mm fans that have a combined maximum airflow of 408 CFM (actually, I haven't looked at the spec on the PSU fans, but I believe they're properly temperature controlled and may not even be an issue) and a nominal flow of roughly half that (7,250 RPM "idle" speed vs 12,000 RPM full speed). Without trying very hard at all, I can find 2,000 RPM 120mm fans that push 100 CFM each and will be next-room silent while delivering tons of fresh air to the hot parts. Don't let the shiny metal exterior fool you; it's still just a regular source of heat needing to be dissipated.
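That ~300 W figure can be sanity-checked against required airflow with the standard electronics-cooling rule of thumb, derived from Q = m·cp·ΔT for sea-level air. A minimal sketch (the 10 degC allowed rise is a chosen assumption, and real cases need big headroom for bypass air and hotspots):

```python
# Rough airflow needed to carry away a heat load at a given air-temperature
# rise. The 1.76 constant comes from Q = m_dot * cp * dT with sea-level air
# density; it's a rule of thumb, not a substitute for watching temps.

def cfm_required(watts: float, delta_t_c: float) -> float:
    """Approximate CFM to remove `watts` with a `delta_t_c` degC air rise."""
    return 1.76 * watts / delta_t_c

# ~300 W in the main compartment, allowing a 10 degC rise through the case:
print(f"{cfm_required(300, 10):.0f} CFM")  # ~53 CFM
```

So even with a generous safety margin for the tight heatsink fins, a couple of 100 CFM 120mm fans are nominally far more air than the load strictly requires; static pressure through the fin stacks is the real question.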

I've given myself a budget of $200 US, not counting hard drive replacements if I end up needing to do that (I'll test the 10k SAS and see how they sound before making that decision).

I don't see how the system could be quieted if the PSU is proprietary and therefore cannot be replaced with a traditional quiet ATX unit. Regardless of which route you choose to take the 60mm fans have to go. A new case is probably needed as well, possibly a custom enclosure. Beyond that I don't have much to contribute - retrofitting the server to be quiet seems impossible to me and I don't have much experience with servers anyway.

If I look at the photo you titled "Overall view", I can see 4 fans blowing from left to right. Cold air only comes from the front (which is not particularly open judging from the photo above) and is then guided towards the lower part (the black plastic thingy with the Dell logo with the DIMMs below?) where it is finally exhausted towards the back.

Furthermore, the fins of the CPU coolers appear to be rather tight, i.e. they *need* a high pressure fan unless I am mistaken.

If you have some cheap 120mm fans or similar lying around, I would try the following (you don't want silent fans, but fairly high-pressure fans which are quiet enough not to be audible in the next room):
- Put fans directly in front of the 60mm fans, pushing cold air into the case from above.
- Put fans directly behind the CPU fans and above the DIMMs, pulling the hot air out the top.
- If necessary, add a fan blowing cold air in from the back of the case.
- Possibly add some air guidance on top of the case to avoid sucking in the exhaust air again.

Side view of my idea: (gray is air guidance, red area CPUs)

Attachment:

server.gif

This should make sure the 60mm fans can turn as slowly as possible. Ideally you should also be able to replace the 60mm fans with slower but quieter models, but I couldn't find any great candidates during a short search.

If you don't mind adding holes to the case, this would be something you could try without paying a lot. The fans you need should be in the $5 range and if you get 5 (two in top, two out top, one in back) it would cost you only $25 + your time to try it.

In any case, keep us posted! This is an interesting project.


For the replacements for the 60mm fans, I'm not targeting high CFM. In fact, if I'm able to set the alarm threshold to 0 on the BMC, I might skip them entirely. The way to go, in my mind, is absolutely to cut the case to allow mounting of 120mm (or 140mm) fans. I'm convinced that the airflow needed to adequately cool the CPUs and memory will bleed off enough to cool the rest of the system.

The spacing between fins on the CPU heatsinks is not so terrible, I don't expect that to cause much trouble.

You are correct in your assumption that air currently flows from front to back; in a datacenter, racks are generally arranged into sets of "hot aisles" and "cold aisles", with servers facing each other eating the air conditioning and back to back exhausting the warm air.

I have a backup plan too (tower coolers sticking out the top of the case), failure is not an option!


Start with this: those heatsinks are small, with tight, small-area fins. There's no way you can go fanless with them; they are designed to work with high pressure and high airflow, and you won't get that from a big, slower fan. See if you can find tower heatsinks for the socket. If not, modify the mounting hardware of a good cheap retail tower (like the Cooler Master Hyper 212).

The PSUs you can probably cool by opening up the top side for 120mm fans blowing down.

Well guys, I hate to say it, but I think I've had a change of heart. Not because I think the project is too difficult, but because I think the end result ends up not being worth the effort. I used the Phoronix Test Suite to benchmark this guy, and for grins I ran the same benchmark on my desktop. I was a little surprised to find that across the board, my (lightly overclocked) Sandy Bridge 2600K with an SSD outperformed 8 cores of Harpertown Xeon with a solid RAID5 and fast disks by a pretty huge margin, in most cases double the performance. I expected the desktop to be faster, but I expected it to be a lot closer than this... at least on the more purely multi-threaded tests.

You know the saying: you can't un-ring the bell. I can't ignore the performance difference contrasted with the general efficiency. As much as I like the idea of the remote access card, redundant storage, etc., the reality is that this system isn't in a remote location, and those features aren't valuable enough on their own to justify it.

So I actually signed up for this board just to thank you for this post. I've been internally debating building another i5-3330 system vs. buying a PowerEdge 2950 locally, and which would give me the most bang for my buck. Cost for the tower without drives is $360, vs. $350 for the PowerEdge, so obviously very similar. I'm looking to cluster with an existing i5 "server" running Server 2012.

I was trying to figure out which one was going to give the most performance, but I was really apprehensive based on the power draw likely to come from the 2950 (raising my electric bill... wife will kill me, literally, she's shown me the knife) and based on the general age of the system.

Looks like I'll get a lot more flexibility AND performance out of just building another desktop.

I know this is a very late reply to the OP, but I'm posting in hopes of solving the issue easily for newcomers.

If you are good with a soldering iron and know a thing or two about resistors, this is the solution for you. It worked perfectly for me and only took about 30 minutes.

Briefly:

1. Remove the hot-swap fans.
2. Cut the red wire of each fan.
3. Extend the wire so you can add a resistor (or 2 resistors in parallel, like I did, to deal with the resistor heat issue).
4. Solder in your resistors.
5. Heat shrink.
6. Done. Put the fans back in and power on. My results: 4350-4950 RPM.

It is so much quieter now. If you want your fans at a different RPM, you can calculate the resistance you need.
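For anyone sizing their own resistors, the math is just Ohm's law across the dropped voltage, plus a power check. A minimal sketch; the target voltage and fan current below are illustrative assumptions, since fan current drops at reduced speed and you should measure your own fans before picking resistor wattage:

```python
# Series-resistor sizing for slowing a 12 V fan. Assumes the fan speed
# scales roughly with voltage. Both figures below are assumptions - measure
# your actual fan current at the reduced speed before buying parts.

V_SUPPLY = 12.0      # rail voltage on the fan's red wire
V_FAN_TARGET = 7.0   # assumed fan voltage for a mid-4000s RPM range
I_FAN = 0.6          # assumed fan current (A) at the reduced speed

r = (V_SUPPLY - V_FAN_TARGET) / I_FAN   # Ohm's law for the voltage drop
p = (V_SUPPLY - V_FAN_TARGET) * I_FAN   # heat the resistor must dissipate

print(f"R = {r:.1f} ohm, dissipating {p:.1f} W")
# Two resistors of value 2R in parallel give the same total R while
# splitting the heat between two bodies - the trick used in step 3 above.
```

The power figure is the reason for the parallel-resistor arrangement mentioned in the steps: a few watts is a lot for one small resistor to shed.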

If you want a more detailed build log, please visit my personal blog: http://wayne-grady.com. I will write up the build log today for your reference.

My server now runs quietly in my laundry with my other gear:
- Dell PowerEdge 2950
- 32 GB FB PC2-5300
- 2x dual-port NIC network cards
- 6x 15,000 RPM SAS HDDs (RAID5)
- SSD in a neat hot-swap PCIe bay at the back in one of the expansion slots (awesome, everyone should get one)
