Oakley

Oakley is an HP-built, Intel® Xeon® processor-based supercomputer, featuring more cores (8,328) on half as many nodes (694) as the center’s former flagship system, the IBM Opteron 1350 Glenn Cluster. The Oakley Cluster can achieve 88 teraflops, tech-speak for performing 88 trillion floating point operations per second, or, with acceleration from 128 NVIDIA® Tesla graphics processing units (GPUs), a total peak performance of just over 154 teraflops.

Hardware

Detailed system specifications:

8,328 total cores (12 cores and 48 gigabytes of memory per node)

Intel Xeon x5650 CPUs in HP SL390 G7 nodes

128 NVIDIA Tesla M2070 GPUs

873 GB of local disk space in '/tmp'

QDR IB interconnect (low latency, high throughput, high quality of service)

Theoretical system peak performance: 88.6 teraflops

GPU acceleration: an additional 65.5 teraflops

Total peak performance: 154.1 teraflops

Memory Increase: raises memory from 2.5 gigabytes per core to 4.0 gigabytes per core.

Storage Expansion: adds 600 terabytes of DataDirect Networks Lustre storage, for a total of nearly two petabytes of available disk storage.

System Efficiency: delivers 1.5x the performance of the former system at just 60 percent of that system's power consumption.

Batch Specifics

Compute nodes on Oakley have 12 cores each (processors per node, or ppn). Parallel jobs must use ppn=12.
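As a minimal sketch, a parallel job across two full nodes might be submitted with a batch script like the one below; the job name, walltime, executable name, and the mpiexec launcher are illustrative placeholders rather than required values.

#!/bin/bash
#PBS -N parallel_example
#PBS -l nodes=2:ppn=12
#PBS -l walltime=1:00:00
# Run from the directory the job was submitted from, then launch the
# (hypothetical) MPI executable on all 24 requested cores.
cd $PBS_O_WORKDIR
mpiexec ./my_mpi_program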

If you need more than 48 GB of RAM per node, you may run on one of the 8 large memory (192 GB) nodes on Oakley ("bigmem"). You can request a large memory node on Oakley by adding the following directive to your batch script: #PBS -l mem=192GB
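As a sketch, that directive can be combined with an ordinary full-node request in a batch script such as the following; the job name, walltime, and program name are placeholders.

#!/bin/bash
#PBS -N bigmem_example
#PBS -l nodes=1:ppn=12
#PBS -l mem=192GB
#PBS -l walltime=2:00:00
# Run the (hypothetical) memory-intensive program from the submission directory.
cd $PBS_O_WORKDIR
./my_large_memory_program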

We have a single huge memory node ("hugemem"), with 1 TB of RAM and 32 cores. You can schedule this node by adding the following directive to your batch script: #PBS -l nodes=1:ppn=32. This node is only for serial jobs, and can only have one job running on it at a time, so you must request the entire node to be scheduled on it. In addition, there is a walltime limit of 48 hours for jobs on this node.

Requesting fewer than 32 cores, even with a memory requirement greater than 192 GB, will not place your job on the 1 TB node. Instead, request nodes=1:ppn=32 with a walltime of 48 hours or less, and the scheduler will put you on the 1 TB node.
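For example, a serial huge-memory job could be requested with a minimal script like this sketch; the job name and program name are placeholders, and the walltime must not exceed the 48-hour limit.

#!/bin/bash
#PBS -N hugemem_example
#PBS -l nodes=1:ppn=32
#PBS -l walltime=48:00:00
# Serial job: run the (hypothetical) program from the submission directory; no MPI launcher is used.
cd $PBS_O_WORKDIR
./my_huge_memory_program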