> http://www.theinquirer.net/inquirer/news/2261273/hp-talks-up-amds-kyoto-despite-seeing-intel-as-a-compute-workhorse?source=email_rt_mc_body (they're not talking about the signalling fabric yet)
first, I have to agree that posting whole articles is not optimal;
excerpts for commentary are fine (i.e., legal under fair use).
yes, the backplane seems to be the most interesting part of this,
since putting a bunch of servers in a box is pretty boring.
from a sysadmin/architect standpoint, the density is less interesting
than the infrastructure. does each node do IPMI? they mention
PXE, but that's wholly executed by the target node - not really
infrastructure at all. if it's not IPMI, I certainly hope their
management interface:
- offers an open API (not some crap proprietary tool)
- reports power consumption and temperature
- provides warm resets and preferably also boot settings
- captures console (serial port) logs
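for what it's worth, standard IPMI already covers that whole list; a
sketch with the stock ipmitool CLI (the BMC address and credentials are
placeholders, and "dcmi power reading" needs a BMC with DCMI support):

```shell
# power and temperature sensors from the node's BMC
ipmitool -I lanplus -H 10.0.0.42 -U admin -P secret sdr type Temperature
ipmitool -I lanplus -H 10.0.0.42 -U admin -P secret dcmi power reading
# warm reset and a one-shot PXE boot setting
ipmitool -I lanplus -H 10.0.0.42 -U admin -P secret chassis power reset
ipmitool -I lanplus -H 10.0.0.42 -U admin -P secret chassis bootdev pxe
# serial console capture via serial-over-LAN
ipmitool -I lanplus -H 10.0.0.42 -U admin -P secret sol activate
```

anything less than that from a vendor tool is a step backwards.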
consider this product if your premise is that you want to put a bunch
of isolated servers into 4U. a completely conventional approach would
be 4x 1U 4-socket systems or 8x 0.5U 2-socket systems - either one gets
you 16 conventional sockets. that's somewhere between 128 and 512 full-on
cores (at 8-32 cores per socket), 64 DDR3/1600 channels, etc.
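the arithmetic above, spelled out (8-32 cores per socket and 4 memory
channels per socket are my assumptions, typical for server parts of
that era):

```python
# conventional-density alternative to the 45-node 4U box
sockets = 4 * 4            # 4x 1U quad-socket; 8x 0.5U dual-socket is the same total
cores_min = sockets * 8    # low end: 8 cores per socket
cores_max = sockets * 32   # high end: 32 cores per socket
channels = sockets * 4     # 4 DDR3/1600 channels per socket
print(sockets, cores_min, cores_max, channels)  # 16 128 512 64
```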
putting 45 separate wimpy servers in a box doesn't seem like all that much of
an accomplishment, unless you have some sort of isolation requirement that
precludes VMs, etc.
it's interesting that the backplane provides 45x dual GbE and 6x 10GbE external;
that's not horribly out of balance (90 Gb in vs 60 Gb out).
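a quick check of that balance, using the article's port counts:

```python
# backplane oversubscription: 45 nodes x 2x 1GbE in, 6x 10GbE uplinks out
internal_gb = 45 * 2 * 1   # aggregate node-facing bandwidth, Gb/s
external_gb = 6 * 10       # aggregate uplink bandwidth, Gb/s
ratio = internal_gb / external_gb
print(internal_gb, external_gb, ratio)  # 90 60 1.5
```

1.5:1 oversubscription is mild by datacenter standards.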
putting a disk on each module does imply they're going for big-data.
VDI and video transcoding seem like a very strange way to sell APUs,
but what AMD is saying about HSA makes on-chip GPUs look potentially
quite interesting for HPC. without 2.5D integration, memory bandwidth
is a real issue, but having CPU+GPU coherently access memory is
attractive and a real difference between AMD's HSA and CUDA or Phi.
regards, mark hahn.