Network utilization maxed out the single 10 Gbps NIC used on both the compute and storage nodes. This suggests that the array could deliver more IOPS if more network bandwidth were available. The next test will use two teamed 10 Gbps NICs on the compute node, and three storage nodes each with teamed 10 Gbps NICs.
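
As a rough sanity check, here is the back-of-the-envelope math behind that bandwidth ceiling. The 8 KiB IO size and 90% protocol efficiency are assumptions for illustration only, not figures from the test:

```python
# Rough IOPS ceiling for a single 10 Gbps NIC.
# IO size and protocol efficiency below are assumed, not measured.

NIC_GBPS = 10               # line rate of one NIC, in gigabits per second
PROTOCOL_EFFICIENCY = 0.90  # assumed usable fraction after protocol overhead
IO_SIZE_BYTES = 8 * 1024    # assumed IO size (8 KiB)

usable_bytes_per_sec = NIC_GBPS * 1e9 / 8 * PROTOCOL_EFFICIENCY
max_iops = usable_bytes_per_sec / IO_SIZE_BYTES

print(f"Usable bandwidth: {usable_bytes_per_sec / 1e6:.0f} MB/s")
print(f"Approx. IOPS ceiling at 8 KiB IOs: {max_iops:,.0f}")
```

With those assumptions the single NIC tops out well before the disks or flash would, which is why teaming more NICs is the next step.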

CPU is maxed out on the storage nodes during the test. The storage nodes have 4 cores each. This suggests that CPU may be a bottleneck on the storage nodes. It also leads me to believe that a) more processing power is needed on the storage nodes, and b) RDMA NICs are likely to enhance performance greatly, since RDMA offloads data movement from the CPU. The Mellanox ConnectX-3 VPI dual-port PCIe x8 card may be just what the doctor ordered. In a perfect environment, I would have that coupled with the Mellanox InfiniBand MSX6036F-1BRR 56 Gbps switch.
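
To confirm the nodes are genuinely CPU-bound (all cores pegged) rather than just one hot core, a small per-core sampler can be run alongside the benchmark. This is a minimal sketch using the third-party psutil package, which is not part of the test setup described here; the interval, duration, and 90% threshold are arbitrary:

```python
# Minimal per-core CPU sampler to run alongside a benchmark run.
# Requires the third-party psutil package; thresholds are arbitrary.
import time
import psutil

DURATION_SEC = 60   # how long to sample
INTERVAL_SEC = 5    # seconds between samples

end = time.time() + DURATION_SEC
while time.time() < end:
    per_core = psutil.cpu_percent(interval=INTERVAL_SEC, percpu=True)
    flag = "SATURATED" if min(per_core) > 90 else ""
    print(time.strftime("%H:%M:%S"), per_core, flag)
```

If every core sits above the threshold for the duration of the test, adding cores or offloading the network path (RDMA) is the logical next move.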

Disk IO performance on the storage nodes during the test showed about 240 MB/s of data transfer per node, or about 60 MB/s for each of the disks in the node. This corresponds to the native IO performance of the SAS disks, and suggests a minimal/negligible boost from the 550 GB PCIe flash card in the storage node.
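
The per-disk arithmetic is simple enough to restate; note that the disk count below is inferred from the 240 MB/s and 60 MB/s figures above, not stated directly:

```python
# Per-disk throughput check. The disk count is inferred from the
# observed aggregate and per-disk figures, not measured directly.

node_throughput_mb_s = 240   # observed aggregate transfer per storage node
per_disk_mb_s = 60           # observed per-disk transfer

inferred_disks = node_throughput_mb_s / per_disk_mb_s
print(f"Inferred data disks per node: {inferred_disks:.0f}")

# If the per-disk figure matches what the SAS drives deliver natively, the
# aggregate is simply disks x native throughput, leaving no extra headroom
# for the PCIe flash card to explain; its contribution here looks negligible.
```

In other words, the aggregate number is fully accounted for by the spindles themselves, which is what points to the flash card adding little in this particular test.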