Parallel Programming Models

The Pacman and Fish systems support multiple parallel programming models.

Each model below is listed with the hardware level at which it applies.

Auto (shared-memory node)

Automatic shared-memory parallel executables can be produced by compiling and linking with the -Mconcur option to pgf90, pgcc, or pgCC. Since only a subset of loops can generally be parallelized this way, OpenMP directives can further improve performance.
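As an illustration, here is a minimal sketch (file and variable names are hypothetical) of the kind of loop with independent iterations that the compiler can typically parallelize automatically:

    /* saxpy.c: independent loop iterations, a typical candidate
       for automatic parallelization */
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static float x[N], y[N];   /* static arrays are zero-initialized */
        int i;

        for (i = 0; i < N; i++) {  /* no cross-iteration dependences */
            y[i] = 2.0f * x[i] + y[i];
        }
        printf("y[0] = %f\n", y[0]);
        return 0;
    }

A build line such as pgcc -Mconcur -o saxpy saxpy.c asks the compiler to parallelize loops like this one on its own.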

OpenMP (shared-memory node)

This is a form of explicit parallel programming in which the programmer inserts directives into the program to spawn multiple shared-memory threads, typically at the loop level. It is common, portable, and relatively easy. On the downside, it requires shared memory, which limits scaling to the number of processors on a node. To activate OpenMP directives in your code, use the -mp option to pgf90, pgcc, or pgCC. OpenMP can be used in conjunction with autoparallelization.
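As a minimal sketch (file name hypothetical), a single directive is enough to spawn a team of threads:

    /* omp_hello.c: each thread in the team prints its id */
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        #pragma omp parallel   /* spawn a team of shared-memory threads */
        {
            printf("hello from thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }

Built with, e.g., pgcc -mp omp_hello.c; the OMP_NUM_THREADS environment variable controls the size of the thread team.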

pthreads (shared-memory node)

The system supports POSIX threads.
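For comparison with the directive-based models, a minimal sketch of explicit threading through the POSIX API (file and function names hypothetical):

    /* pthread_hello.c: create and join four threads */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4

    static void *work(void *arg) {
        printf("hello from thread %ld\n", (long)arg);
        return NULL;
    }

    int main(void) {
        pthread_t threads[NTHREADS];
        long i;
        for (i = 0; i < NTHREADS; i++)
            pthread_create(&threads[i], NULL, work, (void *)i);
        for (i = 0; i < NTHREADS; i++)
            pthread_join(threads[i], NULL);   /* wait for each thread */
        return 0;
    }

Link against the pthreads library when building (e.g., pgcc pthread_hello.c -lpthread).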

MPI (distributed-memory system)

This is the most common and portable method for parallelizing codes for scalable distributed-memory systems. MPI is a library of subroutines for message passing, collective operations, and other forms of inter-processor communication. The programmer is responsible for implementing data distribution, synchronization, and reassembly of results using explicit MPI calls.

Using MPI, the programmer can largely ignore the physical organization of processors into nodes and simply treat the system as a collection of independent processors.
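As a minimal sketch (file name hypothetical), each rank in an MPI job identifies itself:

    /* mpi_hello.c: every rank prints its rank and the job size */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */
        printf("hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

Such a program is typically built with an MPI compiler wrapper (e.g., mpicc) and started through the system's parallel job launcher.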

The Fish system also supports the GPU and PGAS programming models.

GPU (node level)

Fish supports several programming models for interacting with GPU devices. The PGI and Cray compilers support OpenACC directives, the PGI compiler supports CUDA Fortran and CUDA C, and the NVIDIA nvcc compiler is also available.
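As a minimal OpenACC sketch (file name hypothetical), one directive offloads a loop to the GPU and describes the required data movement:

    /* acc_saxpy.c: offload a saxpy loop to the accelerator */
    #include <stdio.h>

    #define N (1 << 20)

    int main(void) {
        static float x[N], y[N];
        int i;

        for (i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* run on the accelerator: copy x in, copy y in and back out */
        #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
        for (i = 0; i < N; i++)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);  /* expect 4.000000 */
        return 0;
    }

With the PGI compiler, a build line such as pgcc -acc acc_saxpy.c would enable the directives.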

