Glenn

The Ohio Supercomputer Center's IBM Cluster 1350, named "Glenn", features AMD Opteron multi-core technologies. The system offers a peak performance of more than 54 trillion floating point operations per second and a variety of memory and processor configurations. The current Glenn Phase II components were installed and deployed in 2009, while the earlier phase of Glenn – now decommissioned – had been installed and deployed in 2007.

Batch Limit Rules

Memory Limit:

We strongly suggest that users consider their memory usage relative to the available per-core memory when requesting OSC resources for their jobs. On Glenn, this equates to 3GB per core and 24GB per node.

If your job requests less than a full node ( ppn<8 ), it may be scheduled on a node alongside other running jobs. In this case, your job is entitled to a memory allocation proportional to the number of cores requested (3GB/core). For example, without any memory request ( mem=XX ):

- A job that requests nodes=1:ppn=1 will be assigned one core and should use no more than 3GB of RAM.
- A job that requests nodes=1:ppn=3 will be assigned 3 cores and should use no more than 9GB of RAM.
- A job that requests nodes=1:ppn=8 will be assigned the whole node (8 cores) with 24GB of RAM.

It is important to keep in mind that the memory limit ( mem=XX ) you set in PBS does not work the way one might expect it to on Glenn: it neither causes your job to be allocated the requested amount of memory nor limits your job's memory usage. For example, a job that requests nodes=1:ppn=1,mem=9GB will be assigned one core (meaning it should use no more than 3GB of RAM, not the requested 9GB), and you are only charged for one core's worth of Resource Units (RU).
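As a sketch of the rule above, a serial job that needs roughly 9GB of memory should request three cores rather than rely on a mem= directive (the job name and executable below are hypothetical):

```shell
#!/bin/bash
# Sketch of a Glenn PBS job script (job name and program are hypothetical).
# Requesting 3 cores entitles the job to 3 x 3GB = 9GB of RAM; on Glenn,
# a mem= directive would neither reserve nor enforce memory, so the core
# count is what determines your memory entitlement.
#PBS -N example_serial_job
#PBS -l nodes=1:ppn=3
#PBS -l walltime=1:00:00

cd $PBS_O_WORKDIR
./my_program    # hypothetical executable; keep its memory use under ~9GB
```

The walltime shown is arbitrary; set it to fit your workload within the queue limits described below.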

A multi-node job ( nodes>1 ) will be assigned whole nodes, each with 24GB of memory. Jobs requesting more than 24GB per node should be submitted to other clusters (Oakley or Ruby).
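A multi-node request might be sketched as follows (the job name and MPI executable are hypothetical); each of the four nodes provides all 8 cores and 24GB:

```shell
#!/bin/bash
# Sketch of a multi-node Glenn job (names are hypothetical).
# With nodes>1, whole nodes are always allocated: here 4 nodes x 8 cores,
# with 24GB of RAM available on each node (96GB total across the job).
#PBS -N example_parallel_job
#PBS -l nodes=4:ppn=8
#PBS -l walltime=24:00:00

cd $PBS_O_WORKDIR
mpiexec ./my_mpi_program    # hypothetical MPI executable
```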

Walltime Limit

Here are the queues available on Glenn:

Name         Max walltime   Notes
Serial       168 hours
Longserial   336 hours      Restricted access
Parallel     96 hours

Job Limit

An individual user can have up to 128 concurrently running jobs and/or up to 2048 processors/cores in use. All the users in a particular group/project can among them have up to 192 concurrently running jobs and/or up to 2048 processors/cores in use. Jobs submitted in excess of these limits are queued but blocked by the scheduler until other jobs exit and free up resources.

A user may have no more than 1000 jobs submitted to each of the parallel and serial queues, counted separately. Jobs submitted in excess of this limit will be rejected.
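To stay under these limits, you can count your own queued and running jobs with standard PBS tools (a sketch; the exact qstat output format may vary across PBS/Torque versions):

```shell
# Count this user's jobs known to the scheduler (running + queued).
# qstat -u prints one line per job after a header block; job lines
# begin with a numeric job ID, which grep -c counts here.
qstat -u $USER | grep -c "^[0-9]"
```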

Glenn Retirement

Glenn Cluster Retired: March 24, 2016

The Glenn Cluster supercomputer has been retired from service to make way for a much more powerful new system. As an OSC client, you may be affected: removing the cluster shifts its work onto the already heavy workload handled by Oakley and Ruby.

Why is Glenn being retired?

Glenn is our oldest and least efficient cluster. In order to make room for our new cluster, to be installed later in 2016, we must remove Glenn to prepare the space and do some facilities work.

How will this impact me?

Demand for computing services is already high, and removing approximately 20 percent of our total FLOP capability will likely result in more time spent waiting in the queue. Please be patient with the time it takes for your jobs to run. We will be monitoring the queues, and may make scheduler adjustments to achieve better efficiency.

How long will it take before the new cluster is online?

We expect the new cluster to be partially deployed by Aug. 1 and completely deployed by Oct. 1. Even in the partially deployed state, it will provide more nodes, 1.5x more cores, and roughly 3x more FLOPs than are available today.

Is there more information on the new cluster?

Need help, or have further questions?

We will help. Please let us know if you have any paper deadlines or similar requirements. We may be able to make some adjustments to make your work more efficient, or to help it get through the queue more quickly. To see what things you should consider if you have work to migrate off of Glenn, please visit https://www.osc.edu/glennretirement.

Queues and Reservations

Here are the queues available on Glenn. Please note that you will be routed to the appropriate queue based on your walltime and job size request.

Name         Nodes available                Max walltime   Max job size   Notes
Serial       Available minus reservations   168 hours      1 node
Longserial   Available minus reservations   336 hours      1 node         Restricted access
Parallel     Available minus reservations   96 hours       256 nodes
Dedicated    Entire cluster                 48 hours       436 nodes      Restricted access

"Available minus reservations" means all nodes in the cluster currently operational (this will fluctuate slightly), less the reservations listed below. To access one of the restricted queues, please contact OSC Help. Generally, access will only be granted to these queues if performance of the job cannot be improved, and job size cannot be reduced by splitting or checkpointing the job.

In addition, there are a few standing reservations.

Name    Times              Nodes available   Max walltime   Max job size   Notes
Debug   8AM-6PM weekdays   16                1 hour         16 nodes       For small interactive and test jobs.
GPU     All                32                336 hours      32 nodes       Small jobs not requiring GPUs from the serial and parallel queues will backfill on this reservation.

Occasionally, reservations will be created for specific projects that will not be reflected in these tables.