
Running Jobs

General Information

When you log into Darter, you will be directed to one of the login nodes. The login nodes should only be used for basic tasks such as file editing, code compilation, data backup, and job submission.

The login nodes should not be used to run production jobs. Production work should be performed on the system's compute resources. Serial jobs (pre- and post-processing, etc.) may be run on the compute nodes as long as they are statically linked. For running one or more single-processor jobs, please refer to the Job Execution section. Access to compute resources is managed by the Portable Batch System (PBS). Job scheduling is handled by Moab, which interacts with PBS and the XC30 system software.

This page provides information for getting started with the batch facilities of PBS with Moab, as well as basic job execution. If you want to chain your submissions so that a full simulation completes without manual resubmission, please read the job-chaining documentation carefully.

Batch Scripts

Batch scripts can be used to run a set of commands on a system's compute partition. They allow users to run non-interactive batch jobs, which are useful for submitting a group of commands, letting them run through the queue, and then viewing the results. However, it is sometimes useful to run a job interactively (primarily for debugging purposes). Please refer to the Interactive Batch Jobs section for more information on how to run batch jobs interactively.

All non-interactive jobs must be submitted on Darter using job scripts via the qsub command. The batch script is a shell script containing PBS flags and commands to be interpreted by a shell. The batch script is submitted to the batch manager, PBS, where it is parsed. Based on the parsed data, PBS places the script in the queue as a job. Once the job makes its way through the queue, the script will be executed on the head node of the allocated resources.

All job scripts start with an Interpreter line, followed by a series of #PBS declarations that describe requirements of the job to the scheduler. The rest is a shell script, which sets up and runs the executable.

Batch scripts are divided into the following three sections:

Shell interpreter (one line)

The first line of a script can be used to specify the script's interpreter. This line is optional; if it is omitted, the submitter's default shell will be used. The line uses the syntax #!/path/to/shell, where the path to the shell may be one of:

/usr/bin/csh
/usr/bin/ksh
/bin/bash
/bin/sh

PBS submission options

The PBS submission options are preceded by #PBS, making them appear as comments to a shell.

PBS will look for #PBS options in a batch script from the script's first line through the first non-comment line. A comment line begins with #.

#PBS options entered after the first non-comment line will not be read by PBS.

Shell commands

The shell commands follow the last #PBS option and represent the executable content of the batch job.

If any #PBS lines follow executable statements, they will be treated as comments only. The exception to this rule is the shell specification on the first line of the script.

The execution section of a script will be interpreted by a shell and can contain multiple lines of executables, shell commands, and comments.

During normal execution, the batch script will end and exit the queue after the last line of the script.

The following example shows a typical job script that includes the minimal requirements to submit a parallel job that executes ./a.out on 96 cores, charged to the fictitious account UT-NTNL0121 with a wall clock limit of one hour and 35 minutes:
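A minimal sketch of such a script (the executable name a.out is illustrative; the #PBS values follow the description above):

```shell
#!/bin/bash
#PBS -A UT-NTNL0121        # charge to this (fictitious) project account
#PBS -l size=96            # 96 physical cores = 6 nodes x 16 cores/node
#PBS -l walltime=1:35:00   # wall-clock limit: 1 hour 35 minutes

cd $PBS_O_WORKDIR          # start in the directory the job was submitted from
aprun -n 96 ./a.out        # launch a.out on all 96 cores
```

This is a batch-system fragment: qsub, aprun, and the #PBS directives only have meaning on the cluster, so it is shown for illustration rather than standalone execution.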

Jobs should be submitted from within a directory in the Lustre file system. It is best to always execute cd $PBS_O_WORKDIR as the first command. Please refer to the PBS Environment Variables section for further details.

On Darter the size request must be a multiple of 16, since there are 16 physical cores per node and it is not possible to allocate fewer than 16 cores, even if you plan to use fewer. If you want to run on only 8 cores (-n 8), for example, you still need to request 16 cores (#PBS -l size=16). Otherwise you will receive the following error:

Notice: Your job was NOT submitted
Core requests on Darter must be a multiple of 16. You have requested
an invalid number of cores ( 8 ). Please resubmit the
job requesting an appropriate number of cores.

Online documentation describes the PBS options that can be used for more complex job scripts.

Unless otherwise specified, your default shell will be used to interpret the shell commands in job scripts. If the job script should use a different interpreter, specify the correct interpreter using:

#PBS -S /bin/XXXX

The following example shows a typical job script that saves a file to HPSS. Note that you must log in using your OTP token before submitting an HPSS job. The job is charged to the fictitious account UT-NTNL0121 with a wall clock limit of 5 hours and 20 minutes. You should not specify -l size in HPSS jobs.
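A sketch of such a script, assuming the standard hsi utility for HPSS transfers (the file name is illustrative):

```shell
#!/bin/bash
#PBS -A UT-NTNL0121        # charge to this (fictitious) project account
#PBS -q hpss               # HPSS transfers run in the hpss queue
#PBS -l walltime=5:20:00   # wall-clock limit: 5 hours 20 minutes
# Note: no -l size request for HPSS jobs

cd $PBS_O_WORKDIR          # directory containing the file to archive
hsi put my_data.tar        # store my_data.tar in HPSS via the hsi interface
```

As with other job scripts, this fragment only runs under PBS on the cluster and is shown for illustration.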

Interactive Batch Jobs

Interactive batch jobs give users interactive access to compute resources. A common use for interactive batch jobs is debugging. This section demonstrates how to run interactive jobs through the batch system and provides common usage tips.

Users are not allowed to run interactive jobs on compute resources from the login nodes. A batch-interactive PBS job is run by using the -I option with qsub. After the interactive job starts, the user should run computationally intensive applications from the Lustre scratch space, placing the executable after the aprun command. The aprun command sends the application to the compute nodes to run.

Interactive Batch Example

For interactive batch jobs, PBS options are passed through qsub on the command line. Refer to the following example:

> qsub -I -X -A UT-NTNL0121 -l size=16,walltime=1:00:00

-I

Requests an interactive batch job.

-X

Enables X11 forwarding, which is necessary for interactive GUIs. Note that you must have X11 forwarding enabled when you log in to Darter.

-l size=16,walltime=1:00:00

Requests 16 physical compute cores for one hour.

After running this command, you will have to wait until enough compute nodes are available, just as for any other batch job. Once the job starts, the standard input and standard output of your terminal will be linked directly to the head node of your allocated resource. The executable should be placed on the same line after the aprun command, just as in a batch script.

> cd /lustre/medusa/$USER
> aprun -n 16 ./a.out

From here, commands may be executed directly instead of through a batch script. Issuing the exit command will end the interactive job.

Using Interactive Batch Jobs to Debug

A common use of interactive batch jobs is debugging (see the Debugging page). The tips below may be useful while interactively debugging code through PBS. To help a job start quickly rather than sit in the queue, it is important to choose the job size appropriately. You can use the showbf command (for "show backfill") to see immediately available resources that would allow your job to be backfilled (and thus started) by the scheduler. For example, if showbf reports 9 nodes available, a job requesting 2 compute nodes would run immediately.

Common PBS Options

This section gives a quick overview of common PBS options.

Necessary PBS options

-A    #PBS -A <account>
      Causes the job time to be charged to <account> (for example, the fictitious UT-NTNL0121). The showusage utility can be used to list your valid assigned project ID(s). This is the only option required by all jobs.

-l    #PBS -l size=<cores>
      Number of physical cores. Must request entire nodes (multiples of 16).

      #PBS -l walltime=<time>
      Maximum wall-clock time. <time> is in the format HH:MM:SS. Default is 1 hour.

Other PBS Options

-o    #PBS -o <name>
      Writes standard output to <name> instead of <job script>.o$PBS_JOBID. $PBS_JOBID is an environment variable created by PBS that contains the PBS job identifier.

-e    #PBS -e <name>
      Writes standard error to <name> instead of <job script>.e$PBS_JOBID.

-j    #PBS -j {oe,eo}
      Combines standard output and standard error into the standard error file (eo) or the standard output file (oe).

-m    #PBS -m a
      Sends email to the submitter when the job aborts.

      #PBS -m b
      Sends email to the submitter when the job begins.

      #PBS -m e
      Sends email to the submitter when the job ends.

-M    #PBS -M <address>
      Specifies the email address to use for -m options.

-N    #PBS -N <name>
      Sets the job name to <name> instead of the name of the job script.

-S    #PBS -S <shell>
      Sets the shell that interprets the job script.

-q    #PBS -q <queue>
      Directs the job to the specified queue. This option is not required to run in the general production queue.

Note: Please do not use the PBS -V option. It can propagate large numbers of environment variable settings from the submitting shell into the job, which may cause problems for the batch environment. Instead of -V, pass only the necessary environment variables using -v <comma_separated_list_of_needed_envars>. You can also include module load statements in the job script.

Example:

#PBS -v PATH,LD_LIBRARY_PATH,PV_NCPUS,PV_LOGIN,PV_LOGIN_PORT

Further details and other PBS options may be found using the man qsub command.

PBS Environment Variables

PBS_O_WORKDIR

PBS sets the environment variable PBS_O_WORKDIR to the directory from which the batch job was submitted.

By default, a job starts in your home directory. Often you will want to run cd $PBS_O_WORKDIR to move back to the directory from which you submitted. The current working directory when you start aprun should be in Lustre space.

Include the following command in your script if you want it to start in the submission directory:

cd $PBS_O_WORKDIR

PBS_JOBID

PBS sets the environment variable PBS_JOBID to the job's ID.

A common use for PBS_JOBID is to append the job's ID to the standard output and error file(s).

Include the following command in your script to append the job's ID to the standard output and error file(s):

#PBS -o scriptname.o$PBS_JOBID

PBS_NNODES

PBS sets the environment variable PBS_NNODES to the number of logical cores requested (not nodes). Given that Darter has 16 physical cores per node, the number of nodes would be given by $PBS_NNODES/16.

For example, a standard MPI program is generally started with aprun -n $PBS_NNODES ./a.out. See the Job Execution section for more details.
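A small shell sketch of the node arithmetic (the value assigned here simulates what PBS would set for -l size=96):

```shell
PBS_NNODES=96              # simulated: PBS sets this to the size request
nodes=$((PBS_NNODES / 16)) # Darter has 16 physical cores per node
echo "$nodes"              # prints 6
```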

Users can alter certain attributes of queued jobs until they start running. The order in which jobs are run depends on the following factors:

number of cores requested: jobs that request more cores get a higher priority.

queue wait time: a job's priority increases with the time it has waited to run.

account balance: jobs charged to an account with a negative balance have significantly lowered priority.

number of jobs: at most five jobs per user will be eligible to run at a time; the rest will be blocked.

In certain special cases, the priority of a job may be manually increased upon request. To request a priority change, contact NICS User Support with the job ID and the reason for the request.

Queues

Queues are used by the batch scheduler to aid in the organization of jobs. This section lists the available queues on Darter. An individual user may have up to 5 jobs eligible to start at any one time (regardless of how many jobs may already be running), while a project may have a total of 10 jobs eligible to run across all the users charging against that project. Jobs in excess of these limits will not be considered for execution. Additionally, users are limited to 25 simultaneous running jobs and projects are limited to 40 simultaneous running jobs.

For example, if you submit 12 jobs, 5 would be eligible, and 7 would be blocked (with an "Idle" state). If three of the jobs run, some blocked jobs will be released so that there are still 5 eligible jobs, and 4 blocked jobs. This continues until all jobs are run. This is done to make it easier to schedule the jobs (there are fewer jobs to consider), and to prevent a single user from dominating the system with many small jobs.

Job priority on Darter is based on the number of cores and wall clock time requested. Jobs with large core counts intentionally get the highest priority. Jobs with smaller core counts do run effectively on Darter as backfill. While the scheduler is collecting nodes for larger jobs, those with short wall clock limits and small core counts may use those nodes temporarily without delaying the start time of the larger job.

Capability jobs on Darter

Users are encouraged to submit capability jobs on Darter at any time. However, capability jobs are only executed at specific times at the discretion of NICS, generally after preventive maintenance periods or on demand if preventive maintenance is not performed. Users who plan on running capability jobs, or who have questions about them, are encouraged to contact help@xsede.org or their NICS point of contact.

Darter Queues

Jobs on Darter are sorted into queues based on size and walltime.

Darter Queue    Min Size    Max Size    Max Wall Clock Limit
hpss            n/a         n/a         24:00:00
batch           16          11,968      24:00:00
capability      6,016       11,584      24:00:00

* Requests for jobs on Darter must be multiples of 16. For example, the smallest job on Darter would request 16 (physical) cores.

Job Execution

Once the access to compute resources has been allocated through the batch system, users have the ability to execute jobs on the allocated resources. This section gives examples of job execution and provides common tips.

The PBS script is executed on the aprun node (or the login node for interactive jobs). Calls made directly to programs (e.g., ./a.out) will be executed on the service node. This may be useful for record keeping, staging data, etc. Any memory- or computationally-intensive programs should be run using aprun; otherwise they bog down the service node and may cause system problems. You may also run non-MPI programs on a compute node using aprun; see the Single-Processor (Serial) Jobs and Running Multiple Single-Processor Programs sections below.

To launch parallel jobs on one or more compute nodes, use the aprun command. Keep Darter's system specifications in mind when running a job with aprun. A Darter XC30 node consists of two sockets, each with 8 physical cores, so there are 16 physical cores per node (32 logical cores if using Hyper-Threading). The PBS size option requests physical compute cores, not logical cores. This is not necessarily the number of cores that will be used, but rather the number of physical cores made available to your job (idle cores are still inaccessible to other users). The easiest way to determine this number is to calculate the number of nodes that will be occupied (with or without Hyper-Threading) and multiply by 16 physical cores per node.

The following options are commonly used with aprun:

Commonly used options for aprun

-n    Total number of MPI processes (default: 1)

-N    Number of MPI processes per node (1-16)

-S    Number of MPI processes per socket (1-8)

-d    Number of cores per MPI process (for use with OpenMP, 1-16)

-j 2  Turns on Hyper-Threading (off by default) and allocates two processes per physical core

The best way to understand the effects of these options is to try them yourself. Please see our tutorial on the subject.

MPI examples

aprun -n $PBS_NNODES ./a.out

This uses all physical cores, with one MPI process on each core. The environment variable PBS_NNODES holds the number of cores requested at the top of the PBS script. In most cases, nothing beyond this is necessary.

aprun -n 25 ./a.out

If for some reason you want to use a number of cores that is not a multiple of 16, that is valid. Round up to the next multiple of 16 for the resource request; the extra cores will remain idle. This example would require #PBS -l size=32.
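As a sketch, the required size for an arbitrary process count can be computed by rounding up to the next multiple of 16 with integer arithmetic:

```shell
n=25                            # MPI processes wanted (aprun -n 25)
size=$(( (n + 15) / 16 * 16 ))  # round up to the next multiple of 16
echo "$size"                    # prints 32, the value for #PBS -l size
```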

aprun -n 12 -N 6 ./a.out

This makes the XC30 emulate the 6-cores-per-socket layout of the XT5: there will be six MPI processes per node, all on one socket. This example would require you to request 32 physical cores on the XC30.

aprun -n 8 -S 2 ./a.out

On the XC30, this is similar to the previous example, running 4 MPI processes per node, but now with two on each socket. This ensures that both sockets are used and that memory is evenly distributed between them, giving an even distribution of L3 cache and memory (a process can access memory on the other socket, but not as quickly as its own).

aprun -n 32 -j 2 ./a.out

On the XC30, this runs 32 MPI processes on a single node, one rank per logical core, by exploiting Hyper-Threading.

MPI/OpenMP

Darter supports threaded programming within a node. The aprun -d flag specifies the number of cores per MPI process, so with OpenMP, aprun -d $OMP_NUM_THREADS gives each thread its own core. When using every core, this requires at least n*d cores to be requested. The following examples assume that three nodes have been requested (#PBS -l size=48).

export OMP_NUM_THREADS=2
aprun -n24 -N8 -S4 -d2 ./a.out

Here, each MPI process has two OpenMP threads, filling three whole nodes. For some codes, two OpenMP threads per MPI process may be optimal. If the reason for using OpenMP is instead to increase the available memory per process, you may want to use 8 or even 16 threads per MPI process, though there is some performance penalty for running OpenMP across sockets in the XC30's current configuration (threads must communicate over the QuickPath Interconnect, QPI).

export OMP_NUM_THREADS=7
aprun -n6 -N2 -S1 -d7 ./a.out

The -d flag specifies the depth, i.e., the number of cores assigned to each MPI process (when the MPI process spawns an OpenMP thread, there is a dedicated core to place it on). The -S 1 option places the second process on each node on the second socket, rather than filling the first socket first.

Single-Processor (Serial) Jobs

Serial programs that are memory- or computationally-intensive should never be run on the service nodes (i.e., anywhere outside of aprun). Service nodes have limited resources shared by all users, and exhausting them may cause system problems. To run serial programs on the compute nodes, the program must be compiled with the compiler wrappers (cc, CC, or ftn). You would then request one node (16 cores) with PBS (#PBS -l size=16). Use the following line to run a serial executable on a compute node:

aprun -n 1 ./a.out

Running Multiple Single-Processor Programs

If you need to run many instances of a serial code (as in a typical parameter
sweep study, for instance), we highly recommend using Eden, a simple
script-based master-worker framework for running multiple serial jobs within
a single PBS job. Detailed instructions are available in the Eden documentation.

Job Accounting

Projects are charged based on usage of compute resources. This section gives details on how each job's usage is calculated. PBS allocates cores to batch jobs in units of whole nodes. A node cannot be shared between jobs, so a job is charged for the entire node whether or not it uses all of its cores. The PBS -l size option specifies the number of cores to allocate to a job; on Darter this must be a multiple of 16.

Getting Accounting Information

This section illustrates the usage of two commonly used utilities for obtaining accounting information.

showusage

The showusage utility can be used to view your project allocation and overall usage through the last job accounting posting (usually the previous night).

glsjob

More detailed accounting information can be obtained using the glsjob command:

glsjob -u <username>

Prints current accounting information for a particular user.

glsjob -J <jobid>.xt5

Can be used to find information for a particular job.

glsjob -p <project>

Prints current accounting information for all jobs charged to a particular project account.

glsjob --man

Displays documentation for glsjob

Note: You can filter the output with grep to extract the particular information you need.

On Darter the service unit charge for each job is:

32 x walltime x number of nodes

where walltime is the number of wall-clock hours used by the job and number of nodes is the PBS size request divided by 16.
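As a worked sketch of the formula (the job size and runtime here are illustrative):

```shell
size=96                               # PBS size request (physical cores)
walltime_hours=2                      # wall-clock hours used by the job
nodes=$((size / 16))                  # 96 / 16 = 6 nodes
sus=$((32 * walltime_hours * nodes))  # 32 x 2 x 6
echo "$sus"                           # prints 384 service units
```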

Job Refund Policy

NICS will provide refunds for user jobs that are adversely impacted by system issues beyond the control of the user. Refund requests must be made within two calendar weeks of a job's completion date by submitting a ticket to help@xsede.org. Please provide your username, the machine name, the job ID, and the reason for the refund request.

Examples of refund requests that will not be approved include: jobs run on projects that have a negative balance, jobs that started and completed after the project’s end date, and jobs that failed because they reached the user-specified wallclock limit.

NICS strongly encourages the use of application checkpoint/restart files. Users should only request refunds for the time since the last successful checkpoint. The refund limit for eligible jobs is six hours. Exceptions to the maximum refund will only be considered for cases where appropriate checkpointing cannot effectively mitigate the loss due to the nature of the underlying machine problem.