We got a sneak peek at Supermicro's brand-new Twin 2U server: the SYS-6027TR-D71FRF. The 2U chassis houses two dual Xeon E5 based servers, fed by two fully redundant 1280W PSUs (at 180-230V; 1000W at 100-140V).

The two servers are held in place using screwless clips.

You get the density of a 1U server without needing four PSUs for redundancy and without the very power hungry 40 mm fans. Indeed, using only two PSUs and 80 mm fans should save quite a bit of power compared to two 1U servers. Last time we measured, the Twin servers consumed 6% less power than the best 1U servers on the market.

At the same time, the expansion capabilities are better: you get two full-height and one half-height PCIe 3.0 (!) x16 (x8 electrical) slots. The only disadvantage is that you get only 4 DIMM slots per CPU, which generally limits each server to about 128 GB of RAM (8 x 16 GB) unless you go with expensive 32 GB LR-DIMMs for a total of 256 GB. Therefore this server is probably better for HPC workloads than for memory-intensive virtualization and database applications.

This new Twin server also features FDR InfiniBand interconnect technology, good for 56Gb/s (!) low latency network connections over a 4x cable. This should work especially well in tandem with Intel Data Direct I/O technology, where packets are transferred directly into the Last Level Cache (LLC) instead of being DMAed to main memory. This is something we'll be investigating in a later article.

I wonder if this Data Direct I/O Technology has any relevance to audio engineering? I know that latency is a big deal for those guys. In the past I have read some discussion of latency at gearslutz, but the exact science is beyond me.

Perhaps future versions of Pro Tools and other professional DAWs will make use of Data Direct I/O Technology.

You said for the first one: "the Xeon E5-2660 offers 20% better performance, the 2690 is 31% faster. It is interesting to note that LS-DYNA does not scale well with clockspeed: the 32% higher clockspeed of the Xeon E5-2690 results in only a 14% speed increase."

I think that I might have an answer for you as to why it might not scale well with clock speed.

When you start a multiprocessor LS-DYNA run, it goes through a stage where it decomposes the problem, using a process called recursive coordinate bisection (RCB).

This decomposition phase is done every time you start the run, and it runs on only a single processor/core. So suppose you have a dual-socket server whose processors hit, say, 4 GHz. That can potentially be faster than a four-socket server whose processors run at only 2.4 GHz.

In the first case, you have a small number of really fast cores (so it will decompose the domain very quickly); in the latter, you have a large number of much slower cores, so the decomposition will happen slowly, but the solve itself MIGHT be slightly faster (making up the difference) just because you're throwing more hardware at it.
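To make the serial-decomposition point concrete, here is a toy sketch of recursive coordinate bisection: split the point set at the median along its widest axis, then recurse on each half. (This is purely illustrative -- the function and structure are my own simplification, not LS-DYNA's actual decomposer.)

```python
def rcb(points, n_parts):
    """Partition a list of (x, y, z) tuples into n_parts spatial blocks
    by recursive coordinate bisection."""
    if n_parts == 1:
        return [points]
    # Pick the axis with the largest spatial extent.
    axis = max(range(3), key=lambda a: max(p[a] for p in points)
                                       - min(p[a] for p in points))
    # Split at the median along that axis.
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    half = n_parts // 2
    # Note the top-level split is inherently serial -- this is why the
    # decomposition stage rewards high single-core clock speed.
    return rcb(pts[:mid], half) + rcb(pts[mid:], n_parts - half)

# 16 points along the x axis, split into 4 equal spatial blocks:
parts = rcb([(float(i), 0.0, 0.0) for i in range(16)], 4)
print([len(p) for p in parts])  # [4, 4, 4, 4]
```

The recursion fans out, but the first bisection of the whole mesh cannot be parallelized, which matches the observation that this stage favors a few fast cores over many slow ones.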

Here's where you can do a little more experimenting if you like.

Using the pfile (command line option/flag 'p=file'), you can not only control the decomposition method but also tell it to write the decomposition out to a file.
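For reference, a pfile that does this might look roughly like the fragment below. This is a sketch from memory -- the exact keywords should be checked against the LS-DYNA MPP documentation, and the file names are placeholders:

```
decomposition { method rcb file my_decomp }
```

You would then point the MPP solver at it with the p= flag on the command line (binary and input names here are placeholders too), e.g. `mppdyna i=model.k p=my.pfile`, so subsequent runs can reuse the saved decomposition instead of recomputing it.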

So if you had more time, what I would probably do is write out the decompositions for all of the various permutations you're going to be running (n cores, m number of files).

When you start the run, instead of having it decompose the problem over and over again each time it starts, you just reuse the decomposition that it has already done (once). That way, you would be testing PURELY the solving part of the run, rather than everything from beginning to end. (That isn't to say the results you've got are bad - it's good data.) But it should help take more variables out of the equation when it comes to why it doesn't scale well with clock speed. (It should.)