I know that noise level is a bit of a joke for a "real" server, but I've had customers ask me about installing servers in environments that are less than ideal, where the noise level would be a factor in their decision (such as the corner of an office that is occupied).

I would be interested in seeing noise levels from these units, and possibly a future article focused on a couple of units that don't sound like a 747 on takeoff.

I've experienced this as well, primarily with ones and twos of units as file and workgroup servers where towers would be ideal. Unfortunately, there are not many options when it comes to storage: those FC and SAS 16- and 24-bay RAIDs are virtually all designed for rack mounting and put out their share of noise.

While it doesn't occur often, some companies are requesting racks operate in the same room as workers; sometimes this has to do with large scientific equipment in the room, or other various lab requirements.

What I would also like to see, in addition to the noise levels (dB at various distances) that Ben requested, is the thermal load (BTU output) of each unit. Some manufacturers publish it, but it is often an estimate, so a real-world assessment would be nice. I understand this would be difficult if you did not receive a dozen or more eval units; perhaps you have a trick up your sleeve.

A 95W chip consuming its full 95W, 24 hours a day, 365 days a year, consumes 832.2 kWh a year. At the GE (http://www.csgnetwork.com/elecenergycalcs.html) average of $0.10 per kWh, 832.2 kWh costs only $83.22/year. Going from a 95W chip to a 35W chip therefore saves only $52.56/year. So going with the 4170HE instead of the 4122 costs you $74 and saves you about $26 a year; you'd need to keep the chip about 3 years for it to pay for itself. This only considers the chip's consumption, which seems like a safe assumption in the spirit of this article, where chips are swapped within the same server.
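The arithmetic above can be sketched as a few lines of Python. The rate, TDPs, and chip premium are the figures quoted in this post, used purely for illustration:

```python
# Rough annual electricity cost of running a CPU at its full TDP 24/7,
# using the ~$0.10/kWh average rate cited above. TDPs and the $74
# premium are the numbers quoted in the post, not authoritative specs.
HOURS_PER_YEAR = 24 * 365   # 8760
RATE = 0.10                 # $/kWh

def annual_cost(watts):
    """Dollars per year to run a constant load of `watts`."""
    return watts * HOURS_PER_YEAR / 1000 * RATE

cost_95w = annual_cost(95)                    # ~$83.22/year
savings = cost_95w - annual_cost(35)          # ~$52.56/year for a 60 W drop

def payback_years(premium, delta_w):
    """Years for a `premium`-dollar chip that draws `delta_w` fewer watts
    to pay for itself through electricity savings alone."""
    return premium / annual_cost(delta_w)

print(round(payback_years(74, 30), 1))        # 4170HE vs 4122: ~2.8 years
```

Note that this counts only the electricity fed to the chip itself, matching the post's assumption; heat removal is handled in the follow-up comments.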

I did not use the GE calculator, but it gives the same numbers. TDP is the thermal output, but I think it's safe to assume that number is a close representation of the power use of the CPU: power in = power out. The only power outputs I can think of are the thermal power and the "data" power, and I find it difficult to imagine the "data" power being significant; besides, the data would be the same no matter which chip you use.

Exactly. This point is missed in almost all articles related to power consumption. "But it's only 5 cents per week more...blah....blah....blah".

Yes but the problem is the cost associated with the REMOVAL of that heat that is often much more expensive and troublesome than the actual moderate increase in power consumption. This is compounded in a server environment where even a slight increase can start to cause issues with air handlers, total power draw for the room, and that extra 5 cents can become a significant increase in operating costs.

So please, before someone else writes a post in this or an upcoming article, think the whole situation through. It's not just the additional power the server uses, but what you do with that extra power (in the form of heat) that really matters.

PSU: I understand that when custom-building a server you get to match your exact load to your PSU. How about when you're buying a premade, though? I don't buy servers, but I bet Dell charges a fine premium when you upgrade to the higher-dollar CPU. A quick trip to Dell's page says the 4170HE is a $102 premium over the 4122; that price makes it look like they keep the same PSU installed. Going with a 4162EE costs $252, which still looks like the same PSU. The 35W chip saves you $52 a year: a 5-year payback, until you consider heat removal.

Heat: CPUs make heat through electrical resistance, which is actually a pretty inefficient way to make heat. Your AC system is better at moving heat than the CPU is at making it, but we'll assume removing a watt costs a watt. You can then take all of my original payback numbers and cut them in half, which means almost all of these chips take 1.5 to 2.5 years to pay for themselves.
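The "cut the payback in half" adjustment above amounts to charging each CPU watt twice. A minimal sketch, assuming (as the post does) one watt of AC power per watt of CPU heat removed, and reusing the $74 premium and 30 W delta for the 4170HE vs 4122:

```python
# Payback including heat removal, under the post's assumption that the
# AC spends ~1 W of electricity to remove each 1 W of CPU heat.
HOURS_PER_YEAR = 8760
RATE = 0.10  # $/kWh

def payback_years(premium, delta_w, cooling_factor=1.0):
    # cooling_factor = extra AC watts per watt of CPU heat removed
    effective_watts = delta_w * (1 + cooling_factor)
    annual_savings = effective_watts * HOURS_PER_YEAR / 1000 * RATE
    return premium / annual_savings

# 4170HE vs 4122 ($74 premium, 30 W lower TDP):
print(round(payback_years(74, 30, cooling_factor=0.0), 1))  # power only: ~2.8
print(round(payback_years(74, 30, cooling_factor=1.0), 1))  # with cooling: ~1.4
```

A real AC unit with a coefficient of performance above 1 would remove heat for less than a watt per watt, so the true payback sits somewhere between the two numbers.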

How long do you keep a server? I'm sure I have no idea, but most companies don't implement a cost saving unless it pays back in under two years.

You're right, there are extra costs, some of the time. But what about the quarter of the year when it's cold outside? How about when you already have the system and are considering upgrading just for the lower operating costs? Too many times I've seen forum posts about people "saving" all this money by reducing their computer's power draw by a few watts.

Quite a few people have already given you a good answer. Check out the "Power cost" post by ERJ as well. Most people colocating will pay a fixed cost, and will pay a lot more if they create "bursty" power draw (i.e. demand more power than was agreed), so requiring half an amp more or less than your limit can make a big difference.

Why do you think the lowest power consumption is important for cloud providers? A cloud provider wants to keep all servers running at peak utilization, all day, every day of the week. The platform is at 80% of its peak power consumption when only 10% loaded, so there's little extra cost going from 10% to 100% load. A cloud provider wants the highest throughput per watt at 100% load.
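The efficiency gap implied above can be made concrete with a toy linear power model. The 300 W peak figure is made up for illustration; only the "80% of peak at near-idle" shape comes from the comment:

```python
# Toy model of the point above: a platform that idles at 80% of peak
# power delivers far less work per watt at low load, so what matters
# to a cloud provider is throughput/watt at full load.
PEAK_POWER = 300.0  # watts at 100% load (hypothetical value)

def power_at(load):
    """Linear model: 80% of peak as load -> 0, 100% at full load."""
    return PEAK_POWER * (0.8 + 0.2 * load)

def work_per_watt(load):
    return load / power_at(load)  # arbitrary work units per watt

ratio = work_per_watt(1.0) / work_per_watt(0.1)
print(round(ratio, 1))  # ~8.2x better efficiency at 100% load than at 10%
```

Under this model a server at 10% load burns 82% of peak power while doing a tenth of the work, which is exactly why providers consolidate workloads rather than chase the lowest idle draw.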

People like you should be banned from the internet; your personal feelings towards a certain vendor clearly affect your way of working with IT.

I pity those who would do any business with someone with such a narrow mind. Because of folks like you we would still be living in the Netburst era, wasting hours waiting for jobs to finish.

Just FYI, in the 2P server comparison here there is not a single RAS feature Intel has that AMD doesn't. And using the platform you are obviously referring to (EX) in a power consumption comparison would be a really big joke, just like the "more secure" Intel CPU thanks to McAfee: they only just bought the company, never mind integrating anything before 2013...

Bugs? Yeah right, you mean bugs from the one behind the keyboard. We use thousands of Intel and AMD servers in hospital environments, and both do their jobs more than well.

Intel is not a standard; x86 is, and by the way the 64-bit extension of it you are probably running these days was not created by Intel. :D

Thanks for the information sharing, but I have a feeling there will be many other people with posts of more added value... Your smooth, lightning-fast Atom system seems to be handling the cut/paste typing really well...

Home electricity pricing is very different from rack pricing. Consider that for a good datacenter you need a UPS and generators capable of matching every watt in use, you need PDUs, and you have extra heat generation, so you need additional cooling.

For our colo space we pay somewhere in the range of $500 a month for a 30 amp, 120 V circuit. Getting the best performance per watt is definitely part of our criteria.
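That circuit price can be turned into an effective per-kWh rate for comparison with the ~$0.10/kWh home figure discussed earlier. A back-of-the-envelope sketch using the numbers from this post:

```python
# Effective $/kWh of a colo circuit if run at full capacity.
# $500/month and 30 A @ 120 V are the figures quoted in the post.
MONTHLY_COST = 500.0            # dollars per month
CIRCUIT_KW = 30 * 120 / 1000    # 30 A * 120 V = 3.6 kW
HOURS_PER_MONTH = 730           # average hours in a month

effective_rate = MONTHLY_COST / (CIRCUIT_KW * HOURS_PER_MONTH)
print(round(effective_rate, 3))  # ~$0.19/kWh at full utilization
```

In practice circuits are usually derated (e.g. to 80% of breaker capacity) and rarely run flat out, so the real effective rate is higher still, which strengthens the performance-per-watt argument.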

I gotta thank you for the laugh. Never have I thought the word "craps" would make me laugh so much. I think it may be that you seem to use that word in every single post combined with one of the most narrow minded points of views I have seen on this site.

I've long since filed sans2212 in the same category as SiliconDoc, under "has nothing to bring to the discussion whatsoever aside from (an initial period of) light entertainment for all readers (which rapidly becomes tedious)".

"You can get a slightly faster 1.8GHz version, the 4164 EE, but that chip costs more than twice as much ($698). As we are searching for low power and inexpensive CPUs, it didn't make the cut. The only disadvantage other than the lower clock speed is the lower clocked HT3 link at 2GT/s instead of 6.4GT/s."

It's a bit of a weird paragraph, since I first thought you were suggesting the 4164 EE does 6.4GT/s, but (for your sake) I can also interpret "instead of" in the final sentence as referring to the other AMD CPUs you're testing here.

But it's still a deficient paragraph, considering that "the lower clocked HT3" in reality resolves to HT1 (for the 4162 EE).

It's like a car test mentioning a "less powerful V8 engine" when they're referring to a V6.

This is a bit OT, but did I miss the full article about AnandTech's server upgrade, or has the story not been posted yet? I remember we got a couple of preview articles, and then nothing for several months. I was really interested in seeing the full story of the upgrades.

It's too bad the benchmarks didn't include comparisons to a mainstream processor like the E5620. That way we could get a sense of scale among all the low-power processors' performance and power usage levels.

In other words, if the E5620 is only slightly worse than the low power processors, it makes the scale smaller so the differences between the low power processors are more pronounced, similar to the charts in the article.

However, if the E5620 is much worse than the low power procs, it makes the chart scale much higher and suddenly the relative difference between the low power procs seems almost insignificant.

I understand the concept of max density and therefore max performance/watt for datacenters, but there are plenty of small businesses with 1-4 racks in a corporate-site computer closet running back-office systems who are also interested in balancing TCO on a smaller scale, and including a mainstream proc in your charts would help them (me). :)