1. Do you still need lots of I/O slots now that we can consolidate a lot of gigabit Ethernet links into two 10GbE ports?

2. Management: OK, a typical blade server can offer a bit more, but the remote management solutions that Supermicro now offers are not bad at all. We have been using them for several years now.

Can you elaborate on what you expect from a management solution that you wouldn't expect to see in a dense server?

re: network consolidation

Network consolidation comes at a cost premium. You can still argue that QDR InfiniBand will give you better performance/bandwidth, but a switch is $6k, and for systems that don't have IB QDR built in, it's about $1k per NIC. Cables are at least $100 apiece.
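To put rough numbers on it, here's a back-of-the-envelope sketch in Python using the prices above; the 16-node cluster size and the GbE figures are my own illustrative assumptions:

    # Rough interconnect cost comparison, using the prices quoted above.
    # Node count and GbE prices are illustrative assumptions.
    nodes = 16
    ib_switch = 6000                # QDR IB switch
    ib_nic, ib_cable = 1000, 100    # per node
    gbe_switch, gbe_cable = 300, 5  # assumed; onboard GbE NICs are effectively free

    ib_total = ib_switch + nodes * (ib_nic + ib_cable)
    gbe_total = gbe_switch + nodes * gbe_cable
    print(ib_total, gbe_total)      # 23600 vs. 380 -- roughly a 60x difference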

If you can use it and justify the cost, sure. But GbE is cheap. REALLY REALLY cheap now that it's been in the consumer space for quite some time.

And there aren't too many cases where you might actually saturate GbE (even the Ansys guys suggest investing in better hardware rather than expensive interconnects). And that says a LOT.

re: management

I've never tried Supermicro's IPMI, but it looks to be pretty decent. Even if that doesn't work, you can also use a third-party tool like LogMeIn, and that works quite well too! (It isn't available for Linux, but there are Linux/UNIX options out there as well.)

Supermicro also has an even higher-density version of this server (4x half-width 1U DP blade nodes).

I have tried Supermicro IPMI; it works nicely. I can power the machine on/off and let it boot from a .iso image I have on my laptop. This means that if I have to boot from a rescue CD, I do not even have to plug a CD drive into the machine. Everything can be done from my laptop, even when I am not in the office, or even in the country.

Using either the web interface on the IPMI chip itself or the IPMIView software from SuperMicro, you get full keyboard, mouse, and console redirection. Meaning, you can view the POST, BIOS, pre-boot, boot, and console of the system.

You can also configure the system to use a serial console, configure the installed OS to use a serial console, and then connect to the serial console remotely using the ipmitool program.
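For example, a minimal sketch of driving that from Python; the host and credentials are placeholders, and it assumes ipmitool is installed and serial-over-LAN is enabled on the BMC:

    # Minimal wrapper around ipmitool for remote power control and
    # serial-over-LAN (SOL). Host and credentials are placeholders.
    import subprocess

    BMC_HOST, BMC_USER, BMC_PASS = "10.0.0.42", "ADMIN", "ADMIN"

    def ipmi(*args):
        """Run an ipmitool command against the BMC over the lanplus interface."""
        subprocess.run(["ipmitool", "-I", "lanplus",
                        "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS,
                        *args], check=True)

    ipmi("chassis", "power", "status")  # query power state
    ipmi("sol", "activate")             # attach to the remote serial console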

The IPMI implementation in SuperMicro motherboards (at least the H8DG6/H8DGi series, which we use) is very nice. And stable. And useful. :)

It starts to matter more when you're pouring on the VMs. With two sockets there, you're talking 16 cores, or 32 threads. That's the kind of machine that can handle a rather large number of VMs, and with only 128GB of RAM, memory would be the limit on how many VMs you could stick on there. For example, if you wanted to have a dedicated thread per VM, you're down to only 4GB per VM, which is kind of low for a server.
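The arithmetic is simple enough to sketch (assuming one VM pinned per hardware thread):

    # RAM per VM if you dedicate one VM to each hardware thread.
    sockets, cores_per_socket, threads_per_core = 2, 8, 2
    ram_gb = 128
    vms = sockets * cores_per_socket * threads_per_core  # 32 threads
    print(ram_gb / vms)  # 4.0 GB per VM -- thin for a server workload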

"Most 2U servers are limited to 24 memory slots and as a result 384GB of RAM. With two nodes in a 2U server and 16 slots per node, you get cram up to 512GB of RDIMMs in one server. "

It's not one server. It's actually 2 servers. Just because they're in a 2U x half-width form factor doesn't mean they're one system. There are 2 systems there. Sure, you can pack 512GB into 2U with 2 servers, but there are better ways (quick density math after the list):

1. Dell makes the PowerEdge R620, where you can pack 384GB into 1U; two of those give you the same number of systems in the same space, with 50% more memory.

2. Dell also has its new R720, which holds 768GB in a 2U form factor. Again, 50% more memory capacity in the same 2U. However, that's two processor sockets short.

3. Now, there's the new R820: 4 sockets, 1.5TB of memory, 7 slots, in 2U of space. It's a beast. I have one of these on the way from Dell for my test lab.
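Normalizing those options to GB per rack unit, using the capacities quoted above:

    # Memory density (GB per rack unit) for the options discussed above.
    options = {
        "SuperMicro 2U Twin (2 nodes)": (512, 2),
        "Dell R620": (384, 1),
        "Dell R720": (768, 2),
        "Dell R820": (1536, 2),
    }
    for name, (gb, ru) in options.items():
        print(f"{name}: {gb / ru:.0f} GB/U")
    # Twin: 256, R620: 384, R720: 384, R820: 768 GB/U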

Working as an admin in a test lab, dealing with all brands of servers, my experience with various brands gives me a rather unique insight. I have had very few problems with Dell servers, despite Dell making up nearly 30% of our lab. We've had 7 drives die (all Toshiba) and one faceplate LCD go out. Our HP boxes, at less than 10% of our lab, have had more failures. The IBMs, while also less than 10%, have had absolutely no hardware failures. Our Supermicros comprise about 25% of the lab, yet contribute >80% of the hardware problems, from motherboards that just quit recognizing memory to backplanes that quit recognizing drives. I'm not too happy with them.

Sure, you can load each of those Rxxx Dell servers with boatloads of memory, but you fail to mention that it comes with a significant performance penalty. The moment you put a third DIMM on a memory channel, your memory speed drops from 1600 (IF you started with 1600 memory to begin with) to 1066 or, worse, 800. On a virtualization host, that makes a big difference.
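As an illustration (speeds in MT/s, using the figures above; the exact step-down depends on the platform, DIMM rank, and voltage):

    # Illustrative DIMMs-per-channel (DPC) speed step-down, per the figures
    # above; actual values depend on platform, DIMM rank, and voltage.
    speed_by_dpc = {1: 1600, 2: 1600, 3: 1066}  # worst case ~800 MT/s at 3 DPC

    # Filling all 24 slots of a 2-socket board with 4 channels per socket:
    dpc = 24 // (2 * 4)
    print(speed_by_dpc[dpc])  # 1066 -- the price of maxing out capacity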

A few corrections - the 192GB for HCDIMMs is incorrect - it should also be 384GB.

There is no data available that confirms a 20% higher power consumption for HCDIMMs over LRDIMMs. There is a suspicious lack of benchmarks available for LRDIMMs. It is possible that figure arises from a comparison of 1.5V HCDIMMs vs. 1.35V LRDIMMs (as were available at IBM/HP).

It is incorrect that LRDIMMs are somehow standard and HCDIMMs are non-standard.

In fact HCDIMMs are 100% compatible with DDR3 RDIMM JEDEC standard.

It is the LRDIMMs which are a new standard and are NOT compatible with DDR3 RDIMMs - you cannot use them together.

The 1600MHz HCDIMM mention is interesting - would be good to hear more on that.

I have whittled down the use cases for HCDIMMs/LRDIMMs and RDIMMs as follows (a rough decision-function sketch follows the list):

The HCDIMM use case is at:
- 16GB at 3 DPC use
- 32GB (outperforms both RDIMMs and LRDIMMs)

LRDIMMs are not viable at:
- 16GB (RDIMMs are better)
- 32GB (HCDIMMs are better)

RDIMMs are not viable at:
- 32GB (because they are 4-rank, trumped by LRDIMMs/HCDIMMs)
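A toy restatement of that breakdown in Python (module sizes in GB; this just encodes the table above, not a vendor recommendation):

    # Toy decision function restating the use-case breakdown above.
    def best_dimm(size_gb, dpc):
        if size_gb == 32:
            return "HCDIMM"  # outperforms both RDIMMs and LRDIMMs
        if size_gb == 16:
            return "HCDIMM" if dpc == 3 else "RDIMM"  # 3 DPC is the HCDIMM niche
        return "RDIMM"       # smaller modules: plain RDIMMs are fine

    print(best_dimm(16, 2))  # RDIMM
    print(best_dimm(16, 3))  # HCDIMM
    print(best_dimm(32, 2))  # HCDIMM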

There is a reason the Netlist HCDIMMs were only released on the virtualization servers from IBM/HP: at the 16GB level, the only niche available for LRDIMM/HCDIMM vs. RDIMM is the 3 DPC space. This will expand considerably to mainstream levels at 32GB as soon as 32GB HCDIMMs are released (they are currently in qualification with IBM/HP and have not been announced yet, though they may be expected shortly).

I had created an infographic covering the memory choices - search the net for the article entitled:

Infographic - memory buying guide for Romley 2-socket servers

HCDIMMs are not available from SuperMicro (as they are from IBM/HP), so I was surprised you even covered HCDIMMs (since the article is, after all, about the SuperMicro line of servers).

BTW, Johan, I work for HP and asked some of the guys in ISS Technical Marketing why we don't send you our servers for eval like you get from SuperMicro and sometimes Dell.

They felt that you guys didn't do a lot of server reviews, and that your readership wasn't generally the kind of folks that buy HP servers.

So I am curious if you could spin up a poll or something in the future to prove them wrong. If there is enough support, I'm sure we can get you some gear to play with.

I sometimes giggle when I see the stuff people on here get excited about in these reviews, though. "Can you see the BIOS through IPMI?" That's the kind of thing Compaq offered back with the RILOE II, and it has been integrated into the motherboard since iLO 1, which is at least 4 or 5 years old.

iLO 4 on the Gen8 line takes that a step further: we now hook the display system BEFORE POST starts, so instead of an invalid memory config getting you a series of beeps, you get a full-blown screen, either on local VGA or on the Remote Console, that straight up tells you you have a memory mismatch and why. I have seen this demo'd with NO DIMMs even installed in the server, and you still get video and obvious status messages.