Pacman User Guide (Penguin Computing Cluster)

Introduction

Pacman is a resource dedicated to University of Alaska affiliated academic users performing non-commercial, scientific research of Arctic interest.

"Pacman" Hardware Specifications

The ARSC Penguin Computing cluster consists of the following hardware:

12 Login Nodes
  - 2 six-core 2.2 GHz AMD Opteron processors
  - 64 GB of memory per node (approximately 5.3 GB per core)
  - 1 Mellanox DDR InfiniBand network card

1 Large Memory Login Node
  - 4 eight-core 2.3 GHz AMD Opteron processors
  - 256 GB of memory per node (8 GB per core)
  - QLogic QDR InfiniBand network card
  - 800 GB local disk
  - 140 GB solid state drive

2 Login Nodes with GPUs
  - 2 NVIDIA Tesla M2050 GPUs per node
  - 3 GB GDDR5 memory per GPU
  - 2.4 GHz Intel Xeon E5620 CPUs

256 Four-Core Compute Nodes
  - 2 dual-core 2.6 GHz AMD Opteron processors
  - 16 GB of memory per node (4 GB per core)
  - Voltaire DDR InfiniBand network card

88 Sixteen-Core Compute Nodes
  - 2 eight-core 2.3 GHz AMD Opteron processors
  - 64 GB of memory per node (4 GB per core)
  - QLogic QDR InfiniBand network card
  - 250 GB local disk

20 Twelve-Core Compute Nodes
  - 2 six-core 2.2 GHz AMD Opteron processors
  - 32 GB of memory per node (approximately 2.7 GB per core)
  - Mellanox DDR InfiniBand network card

3 Large Memory Nodes
  - 4 eight-core 2.3 GHz AMD Opteron processors
  - 256 GB of memory per node (8 GB per core)
  - QLogic QDR InfiniBand network card
  - 800 GB local disk
  - 140 GB solid state drive

QLogic QDR and Mellanox DDR InfiniBand interconnect

275 TB Lustre file system (available center-wide)

Operating System / Shells

The operating system on pacman is Red Hat Enterprise Linux version 6.4.

The following shells are available on pacman:

sh (Bourne Shell)

ksh (Korn Shell)

bash (Bourne-Again Shell) - the default

csh (C Shell)

tcsh (Tenex C Shell)

If you would like your default login shell changed, please contact User Support.

System News, Status, and RSS Feeds

System news is available via the news command when logged on to pacman. For example, the command "news queues" gives news about the current queue configuration. System status and public news items are available on the web.
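Assuming the standard Unix news utility, typical usage looks like:

% news           # list any unread news items
% news queues    # display the item describing the current queue configuration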

Connecting to Pacman

Connections to pacman should be made using an SSH compliant client. Linux and Mac OS X systems normally include a command line "ssh" program. Persons using Windows systems to connect to pacman will need to install an ssh client (e.g. PuTTY). For additional details see the Connecting to ARSC Academic Systems page.

Here is an example connection command for Mac OS X and Linux command line clients:

% ssh -XY arscusername@pacman1.arsc.edu

File transfers to and from pacman should also be made over the SSH protocol, using the "scp" or "sftp" programs. Persons using Windows systems will need to install an sftp- or scp-compatible Windows client (e.g. FileZilla, WinSCP).
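For example, from a Linux or Mac OS X command line (the file and directory names here are placeholders):

% scp results.tar.gz arscusername@pacman1.arsc.edu:
% scp -r arscusername@pacman1.arsc.edu:project/data ./data
% sftp arscusername@pacman1.arsc.edu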

Pacman has a number of login nodes available. The nodes "pacman1.arsc.edu" and "pacman2.arsc.edu" are primarily intended for file editing and job submission. Activities requiring significant CPU time or memory should occur on the interactive nodes listed in the table below ("pacman3.arsc.edu" through "pacman9.arsc.edu", and "pacman13.arsc.edu").

Login Node                                     Intended Purpose
pacman1.arsc.edu                               Compiling and Batch Job Submission
pacman2.arsc.edu                               Compiling and Batch Job Submission
pacman3.arsc.edu through pacman9.arsc.edu      Compute Intensive Interactive Work
pacman10.arsc.edu through pacman12.arsc.edu    Batch Data Transfer Work
pacman13.arsc.edu                              Compute Intensive Interactive Work (256 GB memory, 32 cores)

Sample Code Repository ($SAMPLES_HOME)

The $SAMPLES_HOME directory on pacman contains a number of examples including, but not limited to:

Torque/Moab scripts for MPI, OpenMP, and hybrid applications (a minimal MPI example is sketched after this list)

Examples for Abaqus, OpenFOAM, Gaussian, NWChem, and other installed applications
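As an illustration, a minimal Torque/Moab batch script for an MPI job might look like the following sketch. The queue name, resource requests, and program name are placeholders; consult the scripts in $SAMPLES_HOME for configurations known to work on pacman.

#!/bin/bash
#PBS -q standard            # queue name (placeholder)
#PBS -l nodes=2:ppn=16      # request 2 sixteen-core nodes
#PBS -l walltime=1:00:00    # one hour wall clock limit
#PBS -j oe                  # merge stdout and stderr into one file

cd $PBS_O_WORKDIR           # run from the directory the job was submitted from
mpirun -np 32 ./my_mpi_program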

Available Software

Open source and commercial applications have been installed on the system in /usr/local/pkg. In most cases, the most recent versions of these packages are easily accessible via modules. Additional packages may be installed upon request.
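For example, to see what is installed and load a package (the "abaqus" module name is illustrative; run "module avail" to see the names actually in use):

% module avail           # list all available modules
% module load abaqus     # load the default version of a package
% module list            # show currently loaded modules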

Parallel Programming Models

Programming Environments

Pacman provides multiple compiler environments for different programming languages and compiler brands. The modules package is installed, which allows you to quickly switch between these environments.
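For example, switching between two compiler environments might look like this (the module names are placeholders; run "module avail" to see the compiler modules actually installed):

% module swap pgi gnu    # unload one compiler environment and load another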

Pre-Processing and Post-Processing on Login Nodes

Several pacman login nodes have been configured with higher CPU and memory limits to allow for pre-processing and post-processing as well as code development, testing, and debugging activities.

The login nodes pacman3.arsc.edu through pacman9.arsc.edu allow for greater memory and CPU time use. For codes requiring significant memory, please verify there is light system load on the login node being used prior to running applications. The "top" command will display current memory use.
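For example, before starting a memory-intensive task on a login node, you can check the current load and free memory:

% uptime    # load averages for the node
% free -g   # memory use in gigabytes
% top       # interactive view of per-process CPU and memory use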

Job Submission and Resource Accounting
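Jobs on pacman are submitted to the Torque/Moab batch system. A minimal sketch, assuming the MPI batch script above is saved as mpi_job.pbs (the job id shown is a placeholder):

% qsub mpi_job.pbs        # submit the job; prints the job id
% qstat -u arscusername   # list your queued and running jobs
% checkjob 12345          # show Moab's detailed status for a job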
