USNA Parallel Computing Cluster

The ARCS Group maintains the primary USNA distributed/parallel computing cluster for faculty and student research and advanced computing. The cluster is housed in a state-of-the-art computer room in Ward Hall. It comprises two (2) control computers, referred to as Head Nodes; forty-seven (47) parallel processing computers, referred to as Compute Nodes; and two (2) storage management computers, referred to as Storage Nodes.

All nodes are connected to three private networks: a 1 GigE network for system operations, a 40 Gb/s non-blocking InfiniBand network for compute communications, and a 1 GigE network for KVM/IPMI node management and control. The cluster is accessible on the USNA intranet only through its two Head Nodes, both of which are in the hpc.usna.edu domain.
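Because the cluster is reachable only through the Head Nodes, a session typically begins with an SSH login from a machine on the USNA intranet. A minimal sketch follows; the username is a placeholder, and the addresses are the Head Node IPs given below (hostnames within hpc.usna.edu would also work if DNS resolves them):

```shell
# Log in to the ClusterSF Head Node (general faculty/student use);
# replace "username" with your cluster account name.
ssh username@10.1.71.11

# Or log in to the ClusterSL Head Node (physics research sub-cluster).
ssh username@10.1.71.12
```

Compute Nodes are not directly reachable from the intranet; all work is staged and submitted from a Head Node.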

ClusterSF manages most of the Compute Nodes for general faculty and student use, while ClusterSL manages a subset of the Compute Nodes dedicated to physics research. Twenty-eight (28) Compute Nodes are active under the ClusterSF Head Node and nineteen (19) under the ClusterSL Head Node. ClusterSF (10.1.71.11) runs Red Hat Enterprise Linux (RHEL) 6.7 and ClusterSL (10.1.71.12) runs RHEL 5.6, each alongside the Scyld cluster management software. Both Head Nodes use Moab/TORQUE for job scheduling and support several versions of MPI.
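As a sketch of how work might be submitted under Moab/TORQUE, a minimal batch script could look like the following. The job name, resource counts, executable name, and mpirun flags are illustrative placeholders, not values confirmed by this document:

```shell
#!/bin/bash
#PBS -N mpi_example        # job name (placeholder)
#PBS -l nodes=2:ppn=8      # request 2 nodes, 8 processors per node (illustrative)
#PBS -l walltime=00:10:00  # 10-minute wall-clock limit
#PBS -j oe                 # merge stdout and stderr into one output file

# TORQUE sets PBS_O_WORKDIR to the directory the job was submitted from.
cd "$PBS_O_WORKDIR"

# Launch an MPI program across the allocated processors; the binary
# name is a placeholder for a user-compiled MPI executable.
mpirun -np 16 ./mpi_example
```

The script would be submitted from a Head Node with `qsub job.pbs`, and its status checked with `qstat`.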

Stateful and Stateless Support

ClusterSF, together with one of the two Storage Nodes, controls twenty-eight (28) of the Compute Nodes as a stateful sub-cluster, while ClusterSL, together with the other Storage Node, controls the remaining nineteen (19) Compute Nodes as a stateless sub-cluster. The stateless side of the cluster is managed with Scyld ClusterWare rather than the Moab Cluster Suite.