What are the benefits of using Server Hardware versus just placing server software on top of Desktop Hardware?

I've been running a small-time web server for a couple of websites, a blog, and a multi-user dungeon for years, and am considering upgrading. I've always simply used whatever old desktop machines I had lying around for the server, and since my pages don't see a lot of traffic that was fine. Now, however, I'm learning more about running servers in true development environments, and I have been exposed to a few of the differences between dedicated server hardware and desktop hardware. I am debating whether it's worth it for me to go the extra mile, spend the extra money, and learn about true server hardware in order to assemble a server, or whether I should just stick with what I know and build myself a low-power, multi-core desktop to use as a server, same as I always have.

So the question is, what are the differences between server and desktop hardware and what do you gain by using server hardware for servers?

And for my specific case, are those gains worth the time and effort if it's a small time server for small time projects?

Edit: Yikes that was the longest run-on sentence I've ever written. I must be more tired than I'd thought.


13 Answers

So the question is, what are the differences between server and desktop hardware and what do you gain by using server hardware for servers?

Others have mentioned management features, which are a huge plus, and some have mentioned better-supported products, so I'll stay away from those solid points. All in all, with server hardware (in general) you typically gain three things (IMO):

Durability - Granted, not everyone has an identical experience, but server-grade equipment seems to last a bit longer than its desktop counterparts. Even under prolonged (and stressful) usage, server hardware tends to hold up to the demands within the specifications provided by the manufacturer.

Stability/Reliability/Longer support - Usually better driver support for the appropriate OS. Desktop equipment may or may not have solid drivers, but server hardware wouldn't sell without some serious attention to detail. Driver support is critical, as is serious testing of the equipment. I find that once I pay for server hardware, I worry less about this issue. If an issue does arise, manufacturers usually update the drivers/firmware/software/etc.

Scalability - Most core server hardware (motherboards, CPUs, RAM) anticipates upgrades to one degree or another, while most desktops do not anticipate a larger quantity of resources. This is usually a function of the motherboard chipset more than anything, but server chipsets are significantly different from desktop ones.

And for my specific case, are those gains worth the time and effort if it's a small time server for small time projects?

It sounds like a 'small-time' server is all you need, but you may want to invest in lower-end server hardware to gain more flexibility and try newer technologies such as virtualization, clustering, etc. Granted, the costs are higher for server hardware, but just as in life, you get what you pay for.

I doubt the time, effort, and cash are worth it in your case. If you think you might, before too long (1-2 years), decide to put more effort or energy into more services, then some server hardware might do you some good. Otherwise, save your cash for now until you know you're really going to use that server hardware to a greater extent.

+1! And if someone does go with a lower-end server, a pox on their household if they go with an integrated RAID controller! =) Seriously, spend a few hundred more and get at least a low end RAID card.
– Wesley, Aug 25 '09 at 18:31

Agreed on the RAID controller. On-board is no match for hardware RAID.
– osij2is, Aug 25 '09 at 19:12

In my experience, it's the little details that are put into server hardware that makes all the difference. Using desktop hardware or uber-cheap server hardware will be 95% fine (of course, 76% of all statistics are made up on the spot...) but it's that last 5% that will nickel and dime you, possibly to death.

For example, on desktop machines network cards will probably not have SNMP, WoL or other mass management capabilities. Javier's experience with NICs is an excellent example of desktop hardware simply not being robust enough.
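As a rough illustration of that management gap: on Linux you can ask a NIC whether it supports Wake-on-LAN with `ethtool`. The sketch below parses a captured sample of `ethtool`'s output (the interface name and sample values are illustrative, not from any particular box), so it runs without real hardware:

```shell
# What `ethtool eth0` reports on a WoL-capable NIC (sample output inlined
# so the parsing below is reproducible without real hardware):
sample='Supports Wake-on: pumbg
	Wake-on: g'
# Pull out the currently enabled mode ("g" = wake on magic packet):
printf '%s\n' "$sample" | awk -F': ' '/Wake-on/ && !/Supports/ {print $2}'
```

On a live system you would replace the `sample` variable with the real `ethtool eth0` output; many desktop NICs simply report `Supports Wake-on: d` (disabled/none).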

Consumer/desktop hard drives are not manufactured for long run times, vibrational resistance, quite the level of error correction or quite the level of error reporting. You could, of course, buy more robust hard drives. But will the disk controller be able to detect those errors and report them? Which brings me to my next point...

Disk controllers are most certainly not going to be as fast, nor, most importantly IMO, will they have the same degree of error detection and reporting. ::casts contemptuous glance at an HP ML115 with nVidia onboard controller:: That ML115 is a true-blue server, but a lowest-end model, and even that has given me fits. I regret not being able to get a good disk controller. Don't skimp on disk controllers!
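One concrete way to see the error-reporting difference is SMART. The sketch below pulls two attributes worth watching out of a captured sample of `smartctl -A /dev/sda` output (the device name and values are illustrative; a pending-sector count above zero is a drive telling you it's struggling):

```shell
# Two SMART attributes as `smartctl -A` prints them (sample inlined so the
# parsing runs anywhere; column 2 is the attribute name, the last column
# is its raw value):
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       2'
printf '%s\n' "$sample" | awk '{print $2 "=" $NF}'
```

Whether you can even get this data through a cheap onboard controller is exactly the gamble the answer above describes.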

In general, desktop resources simply won't be up to the load, management capabilities, or reliability standards you would expect from something that's going to be running 24/7 for multiple users.

Also, server hardware manufacturers offer many software management tools for free, like HP's ProLiant Support Pack, that can help manage the plethora of drivers and other system information points that lurk here and there.

For me, someone who uses small time servers and works on small time projects, I shudder to think about the 5 minutes here, 10 minutes there and even a whole weekend once in a while that have been taken up stepping on little bugs here and there because I didn't have something quite as robust as I could have had. Life's too short. Spend some extra cash and get some nicer kit!

Or not. It's up to you. There is certainly a lot to learn when you're running around spending so much time trying to patch up flaky systems. Just don't expect one of those things to be the name of your wife's new boyfriend. =P

For me, it's mostly about physical management: it's a lot neater if all your boxes are the same width and installed inside a rack or cabinet, instead of an assorted mix of desktop faceplates with different heights.

It's easier to order the appropriate configuration: maybe big disks, or many smaller ones, or lots of RAM, always with skimpy onboard VGA. With desktop boards you get a lot of what you don't need and too little of what's important. For example, it used to be very hard to get a good board with support for more than 4GB of RAM.

Some years ago, certain 'desktop-grade' (3Com) network boards had very limited drivers, and if you saturated the bandwidth for a few hours, they started to drop a lot of packets, seriously degrading performance. The 'server-grade' board with the exact same specifications behaved as expected, for roughly twice the price.

Server hardware tends to be optimised for performance and reliability whereas desktop hardware tends to be built to a budget.

When you start looking at the Tier-1 server hardware vendors (IBM/HP/Dell, etc) you start seeing that a huge amount of work is done by these vendors to optimise the hardware by ensuring that they have reliable, standardised components. You will also find that added functionality such as hardware-level remote control and administration (using DRAC/iLO boards) and the vendors also usually offer massive software stacks that simplify OS installation and deployment. Server hardware also tends to have longer warranty periods.

Something which nobody else seems to have mentioned is that a lot of "real servers" (although not all, usually only the more expensive models) have out-of-band remote access cards, like Sun RSC/ILOM/ALOM cards and Dell DRAC cards. These have their own network connections, and let you do nice things like remotely power on/power off the server or get a console, eliminating the need for an IP-KVM in some cases.

Most allow you to access the system via a web interface, some allow telnet and SSH access. These are most useful when you have a separate out-of-band connection to the system, but even without that it can be a lifesaver to access a local tty without actually being there (ever accidentally break networking or SSH, or set your firewall to block all traffic?).
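As a sketch of what scripting against one of these cards looks like: `ipmitool` can query a BMC over the network. The address, username, and password below are placeholders, and the live call is commented out since it needs a reachable BMC; the snippet just shows how a watchdog script might act on the one-line result:

```shell
# Out-of-band power query against a BMC (address/credentials are
# placeholders; uncomment against a real card):
#   ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret chassis power status
# Typical one-line output, which a monitoring script can branch on:
status='Chassis Power is on'
case "$status" in
  *'is on')  echo 'host is up' ;;
  *'is off') echo 'host is down' ;;
esac
```

The same tool can force a power cycle (`chassis power cycle`) or attach a serial-over-LAN console (`sol activate`), which is exactly the "lifesaver when you've broken networking" scenario.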

While this may not be useful to your specific use case (if the machine is sitting under your desk anyway, none of this will really be useful), it's a lifesaver when you're working with machines that are in a data centre/far away.

It's about economies of scale for me (in addition to the other responses, which I won't rehash). When you've got to manage more than 3 servers, it becomes incredibly important that you aren't solving the same problems on different hardware over and over and over again. When I buy a particular make/model, the hardware and drivers are a known quantity. If one of them requires a BIOS update, I'll apply that to its model-sisters as well. Getting into the case, replacing various parts - you're cutting down your learning curve on the hardware.

My company pays a premium for that, but in return they have more of my work-week available for other projects providing (presumably) more value than troubleshooting another driver issue or incompatibility.

Incidentally, I wouldn't worry about not being able to virtualize on self-assembled hardware. Just check the CPU model and that's it (other than having enough resources that your VMs aren't starved). We've got some demo laptops running Hyper-V quite happily.
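For what it's worth, checking for hardware virtualization support on Linux is just a matter of looking for the `vmx` (Intel VT-x) or `svm` (AMD-V) flag in `/proc/cpuinfo`. A sample flags line is inlined below so the check is reproducible; note that the BIOS can still have the feature disabled even when the CPU advertises it:

```shell
# Sample "flags" content from /proc/cpuinfo on an Intel CPU (on a real box:
# grep -m1 '^flags' /proc/cpuinfo):
flags='fpu vme de pse msr pae vmx ept sse2'
case " $flags " in
  *' vmx '*) echo 'Intel VT-x' ;;
  *' svm '*) echo 'AMD-V' ;;
  *)         echo 'no hardware virtualisation flag' ;;
esac
```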

I'd also check a few questions like this if you want any commentary on self-built vs. pre-built.

We used to run desktop/workstation class hardware for our servers. These are the things that we've noticed the most:

Redundancy. I've yet to see desktop hardware with redundant power supplies.

Hot-swap. I've yet to see desktop hardware that has hot-swap hard drives. Or RAM. Or power supplies.

Reliability. Most desktop motherboards don't even support ECC RAM. And then there are the numerous hardware issues we've had with desktop crap. Once upon a time, we had one server that required a monitor to actually be plugged in before it would reboot. All server hardware is designed to run headless, especially the rack-mounted stuff.
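A quick way to tell whether a board actually supports ECC on Linux is `dmidecode`. The sample line below is what an ECC-capable board typically reports under `sudo dmidecode -t memory` (inlined so the check runs anywhere; non-ECC boards report `None`):

```shell
# Sample line from `sudo dmidecode -t memory` on an ECC-capable board:
line='	Error Correction Type: Single-bit ECC'
case "$line" in
  *'ECC'*) echo 'ECC memory' ;;
  *)       echo 'no ECC' ;;
esac
```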

Speed. SCSI drives just run faster, especially when the disks are 10 or 15K RPM, and that extra-wide bus doesn't hurt either. Xeon processors are faster, too.

Honestly, if you're even asking "why can't I use desktop hardware?", the answer should be "shop on eBay for servers" instead of "desktop hardware". The great thing about computer hardware is that used gear has a much lower failure rate than new (all the faulty hardware from the factory has already died), and it drops in price by half for every year it's been around. If you want spares, buy N+1; they're cheap.

Now, for your particular application, you could run the whole thing on what you've got. It's not that important, and it's not that big of a deal if you need to reboot it every other week. But if people were paying you for hosting, get servers. Preferably a cluster of them.

However, be careful: being labeled a "server" does not automatically make hardware better. Sometimes very expensive "server" hardware will fail as much as or even more than a good desktop. (In that case, contact your vendor to replace the faulty hardware.)

If you are working on really small, non-critical projects, then just use good-quality desktop hardware. It will work, and sometimes it will work non-stop for years.

If you have the budget for server-class hardware, then choose it wisely. Sometimes less expensive server hardware is much better than a very expensive model.

Whatever road you choose, remember to add redundancy (multiple hard disks in RAID) and a good UPS.
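On the RAID point, even software RAID beats none. A minimal sketch with Linux `mdadm` follows; the create command is commented out because it is destructive, and the device names are examples. Afterwards, array health is visible in `/proc/mdstat` (sample contents inlined so the health check is reproducible):

```shell
# Build a two-disk software mirror (destructive! device names are examples):
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# A healthy mirror shows [UU] in /proc/mdstat; a degraded one shows [U_]:
mdstat='md0 : active raid1 sdb1[1] sda1[0]
      976630464 blocks super 1.2 [2/2] [UU]'
printf '%s\n' "$mdstat" | grep -q '\[UU\]' && echo 'mirror healthy' || echo 'degraded'
```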

As mentioned, server hardware is really designed for reliability. You get things like ECC RAM, dual or quad CPU sockets, redundant power supplies and fans, RAID for your hard drives among other things. You can also generally build "bigger" boxes than you can with "standard" hardware which is beneficial for maintenance (fewer boxes) and solutions like Virtualization (pretends to be more boxes).

Google doesn't do that sort of thing and takes the approach I think you'd be comfortable with: use cheaper hardware and replace it when it fails. Google's redundancy and scaling are horizontal (more boxes). The advantage is that it's cheaper to buy, at the cost of a more complex overall architecture. Applications need to be designed differently, and how the overall system functions can get more complicated. You need to add load balancers and such, which may or may not be possible.

Something else I haven't seen mentioned is support.
All the big companies have different support lines for their server/business line of products. So when you call and say "my Dell PowerEdge isn't booting, I have a big yellow light, and it's beeping at me", the support seems better.

Yeah, I know we've all had bad support too, I'm sure, but I found I had less bad support with server/business lines than with the home lines or with home-built stuff at other companies.

I've had to call in on Dell hardware issues three times for their servers, and I've always had good support and no arguments over replacement parts; they know that server is critical to my business, and my business needs to be running for me to buy my next server from them. Remember to keep it in warranty, though.

If your application is I/O bound, you can get server hardware with multiple PCI busses, so that one saturated controller doesn't prevent other controllers from pushing data. You never find this on consumer-class gear.
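A rough way to spot a saturated controller or disk from the OS side is per-device utilisation in `iostat -x`. The snippet below parses a simplified sample of its Device/%util columns (values are illustrative; on a real box you'd pipe `iostat -x` itself):

```shell
# Simplified Device/%util pairs as iostat -x would report them; a device
# pinned near 100% while others idle points at a saturated path:
sample='sda 99.8
sdb 12.4'
printf '%s\n' "$sample" | awk '$2 > 90 {print $1, "saturated"}'
```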

Just a note: it seems that a lot of newer desktop hardware has Intel VT or AMD-V support. The real trick is whether the OEM BIOS supports it. Dell didn't support Intel VT for several BIOS revisions, at least for my T7700 in a Dell XPS 1530. I think OEMs are getting better at supporting things like this for home-use hardware, especially with things like Virtual XP in Windows 7. And yes, I saw that you used the words "might not" =) It is a good point.
– Wesley, Aug 25 '09 at 18:29

It's a good point, but I just built a Hyper-V server and I had to upgrade my AMD chip because it didn't have AMD-V yet :D
– barfoon, Aug 25 '09 at 19:02