System overview

Rockhopper (rockhopper.uits.iu.edu) is Penguin
Computing's Penguin-On-Demand (POD) supercomputing cloud appliance
hosted by Indiana University. The Rockhopper POD is a collaborative
effort between Penguin Computing, IU, the University of Virginia, the
University of California Berkeley, and the University of Michigan to
provide supercomputing cloud services in a secure US
facility. Researchers at US institutions of higher education and
Federally Funded Research and Development Centers (FFRDCs) can
purchase computing time from Penguin Computing, and receive access via
high-speed national research networks operated by IU.

Rockhopper consists of 11 Penguin Computing Altus 1804 servers,
each containing four AMD Opteron 6172 12-core processors and 128 GB of
RAM. The total RAM in the system is 1.5 TB. Each server chassis has a
QDR (40 Gbps) InfiniBand interconnect to the cluster's switch fabric,
which is then connected via four trunked 10 Gbps Ethernet links to
IU's network infrastructure. For hardware configuration details, see
this document's System information section.

The Rockhopper nodes run CentOS 5. Job management and scheduling
are provided by the Sun Grid Engine (SGE) resource manager. The
Modules system is used to simplify application and environment
configuration. Users may log into the cluster via SSH, using their
Penguin POD user IDs.

Rockhopper does not include a separate
scratch file system. Disk usage is a billable item.

Data Capacitor II

Note: Shared
scratch space on Data Capacitor II is available only to IU
students, faculty, and staff.

Shared scratch space is hosted on the Data Capacitor II (DC2) file
system. The DC2 scratch directory is a temporary workspace. Scratch
space is not allocated, and its total capacity fluctuates based on
project space requirements. The DC2 file system is mounted on IU
research systems as /N/dc2/scratch and behaves like any
other disk device. If you have an account on an IU research system,
you can access /N/dc2/scratch/username (replace
username with your IU Network ID username). Access to
/N/dc2/projects requires an allocation. For details, see
The Data Capacitor II and DC-WAN2 high-speed file systems at Indiana University. Files in shared scratch space may be purged if
they have not been accessed for more than 60 days.
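For example, from a login session on an IU research system, you could stage data into your DC2 scratch directory like this (the file name is illustrative; replace username with your IU Network ID username):

```shell
# Move into your personal DC2 scratch directory
cd /N/dc2/scratch/username

# Stage an input file from your home directory (illustrative file name)
cp ~/input.dat .
```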

Scholarly Data Archive (SDA)

Note:
Archival storage space on IU's HPSS-based Scholarly Data Archive
(SDA) is available only to IU students, faculty, and staff.

System access

Requesting an account

Researchers at US institutions of higher education (with
.edu domain names) or Federally Funded Research and
Development Centers (FFRDCs) can
purchase computing time from Penguin Computing, and
then receive access to the on-demand HPC cloud service hosted on
Rockhopper at IU. Prospective users can request accounts by filling
out and submitting Penguin Computing's
account request form. To pay for your account, you need to enter
your credit card information when completing the account request
form. To request an alternate financial arrangement, email Penguin
Computing directly.

Methods of access

Users may log into the Rockhopper POD via SSH, using
their Penguin POD user IDs. Users create SSH keys and obtain
instructions on how to use them as part of the account creation
process.

SSH2 clients may be used to connect to
rockhopper.uits.iu.edu, which resolves to one of
Rockhopper's two login nodes.
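For example, from a workstation with an OpenSSH client (replace username with your Penguin POD user ID):

```shell
# Connect to Rockhopper; DNS resolves the hostname to one of the
# two login nodes.
ssh username@rockhopper.uits.iu.edu
```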

Available software

Software packages installed on Rockhopper are made available to
users via Modules, an environment management system that lets you
easily and dynamically add software packages to your user
environment. For a list of software modules available on Rockhopper,
see Rockhopper
Applications.
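For example, the standard Modules commands let you inspect and adjust your environment (the module name shown is illustrative):

```shell
module avail            # list software modules available on Rockhopper
module load openmpi     # add a package (illustrative name) to your environment
module list             # show the modules currently loaded
module unload openmpi   # remove the package again
```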

SFTP provides file access, transfer, and management, and offers
client functionality similar to FTP. For example, from a
computer with a command line SFTP client (e.g., a Linux or
Mac OS X workstation), you could transfer files as
follows:
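A minimal session might look like the following (the file names are illustrative; replace username with your Penguin POD user ID):

```shell
# Open an SFTP session to Rockhopper
sftp username@rockhopper.uits.iu.edu

# At the sftp> prompt:
#   put localfile.dat       upload a file to your remote home directory
#   get results.tar.gz      download a file to your workstation
#   ls, cd, mkdir           browse and manage remote directories
#   exit                    close the session
```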

Application development

Programming models

Rockhopper is designed to support codes that have reasonably large
shared memory and/or distributed memory parallelism.

Compiling

The GNU Compiler Collection (GCC), Intel Compiler Suite, and
Portland Group (PGI) compilers for C, C++, and Fortran codes are
installed on the Rockhopper POD. Open MPI wrappers for these compilers
are available for MPI programs. Use the -O3 switch for
optimization of serial and parallel codes. The Intel Math Kernel
Library (MKL)
and AMD Core Math Library (ACML)
are available. For debugging, the GNU Project Debugger (GDB) and Intel Debugger
(IDB)
are available.
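As a sketch, a serial C code and an MPI code might be compiled as follows (source and output file names are illustrative; the Open MPI wrapper invokes whichever underlying compiler its module was built against):

```shell
# Serial C code with the GNU compiler, optimized
gcc -O3 -o hello hello.c

# MPI code via the Open MPI compiler wrapper, optimized
mpicc -O3 -o mpi_hello mpi_hello.c
```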

Running your applications

Rockhopper has one general-purpose queue (all.q) with "First In
First Out" (FIFO) scheduling and no maximum walltime limit.

Rockhopper uses the Sun Grid Engine (SGE) for job management and
scheduling. If you're familiar with TORQUE (or OpenPBS),
you'll most likely be comfortable working with SGE. Many job
submission (qsub) parameters are identical in TORQUE and
SGE; one notable difference is that SGE job scripts use the
#$ directive prefix in place of #PBS. For
complete user information, see the Grid Engine manual pages.
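For instance, a minimal SGE job script might look like the following (the job name, parallel environment name, slot count, and program are all illustrative):

```shell
#!/bin/bash
#$ -N myjob             # job name (illustrative)
#$ -cwd                 # run the job from the current working directory
#$ -l time=10:00:00     # request 10 hours of walltime
#$ -pe openmpi 48       # parallel environment and slot count (illustrative)

mpirun ./mpi_hello      # launch the MPI program
```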

Penguin Computing also provides the PODTools
package for submitting and managing jobs from your personal
workstation without connecting to the Rockhopper login node. The
package includes POD
Shell for remote job submission, data staging, and file
management, and POD
Report for querying your core-hour and storage usage.

Submitting jobs

To submit a job to run on the Rockhopper POD, use the
qsub command. If the command exits successfully, it will
return a job ID, for example:
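For example, submitting a script named job.script (the job ID shown is illustrative):

```shell
$ qsub job.script
Your job 12345 ("job.script") has been submitted
```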

If you need attribute values different from the defaults, but less
than the maximum allowed, specify these either in the job script using
SGE directives, or on the command line with the -l
switch. For example, to submit a job requiring 10 hours of walltime,
use:

qsub -l time=10:00:00 job.script

Note: Command-line arguments override directives
in the job script, and you may specify many attributes on the command
line, either as comma-separated options following the -l
switch, or each with its own -l switch. The following
commands are equivalent:
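For example, requesting both walltime and virtual memory (the vf memory attribute is shown for illustration only) can be written either way:

```shell
# Comma-separated attributes after a single -l switch
qsub -l time=10:00:00,vf=2G job.script

# Each attribute with its own -l switch
qsub -l time=10:00:00 -l vf=2G job.script
```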

Reference

Policies

Accounts

Account eligibility and payment are described under Requesting an
account in the System access section of this document.

Computational resources (queues)

Rockhopper has only one queue, and all jobs submitted will execute
in the default queue. The only restriction is that individual jobs
are limited to 128 cores.

Mail usage

Rockhopper does not provide a production mail service; however, SGE
communicates via email. Mail forwarding is not configured during
account creation; consider setting up forwarding with a
.forward file. See ARCHIVED: How do I forward my mail from a Unix account?

Scheduled downtime

Rockhopper does not have a regularly scheduled maintenance window.
Information about pending outages is sent via email to account
holders.

Support

This document was developed with support from National Science Foundation (NSF) grants 1053575 and 1548562. Any opinions, findings, conclusions, or recommendations expressed in this
material are those of the author(s) and do not necessarily reflect the
views of the NSF.

This is document bbjb in the Knowledge Base. Last modified on 2015-06-30.