Running jobs

General Information

When you log in, you will be directed to one of the login nodes. The login nodes should only be used for basic tasks such as file editing, code compilation, and job submission.

The login nodes should not be used to run production jobs. Production work should be performed on the system's compute resources. Serial jobs (pre- and post-processing, etc.) may be run on the compute nodes. Access to compute resources is managed by Torque (a PBS-like system). Job scheduling is handled by Moab, which interacts with Torque and system software.

This page provides information on getting started with the batch facilities of Torque and Moab, as well as basic job execution. If you want to chain your submissions so that a full simulation completes without manual resubmission, see the documentation on job chaining.

Batch Scripts

Batch scripts can be used to run a set of commands on a system's compute partition. Batch scripts allow users to run non-interactive batch jobs: submit a group of commands, let them run through the queue, and then view the results. However, it is sometimes useful to run a job interactively (primarily for debugging purposes). Please refer to the Interactive Batch Jobs section for more information on how to run batch jobs interactively.

All non-interactive jobs must be submitted on Beacon using job scripts via the qsub command. The batch script is a shell script containing PBS flags and commands to be interpreted by a shell. The batch script is submitted to the resource manager, Torque, where it is parsed. Based on the parsed data, Torque places the script in the queue as a job. Once the job makes its way through the queue, the script will be executed on the head node of the allocated resources.

Jobs are submitted to the batch scheduler in units of nodes via the -l nodes=# option. By default, mpirun fills all cores on all allocated nodes before placing any additional MPI ranks; this placement can be overridden by adding the '-ppn # -f $PBS_NODEFILE' options to the mpirun command. Nodes can be oversubscribed (i.e., more MPI ranks than the nodes have cores). In that case, once every core has a rank, placement starts over: ranks are again added to each node, up to the number of cores per node, and the process repeats until all ranks have been placed. For example, a job that requests 3 nodes with 16 cores each (-l nodes=3) and runs an MPI job with 144 ranks (mpirun -n 144) will first place 16 ranks on each of the 3 nodes (48 ranks over 3 nodes), then a second set of 48 ranks in the same way (16 per node), and finally the remaining 48 ranks, again 16 per node.

If not all MPI ranks have been allocated after a pass, placement begins again on the first node and continues until every rank has been assigned. In cases where the number of MPI ranks per node is less than the available cores per node, the ranks are spread evenly across the processors. For example, if 8 MPI ranks are placed on a 16-core node (two processors of 8 cores each), four ranks will land on the first processor and the other four on the second.
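The fill behavior described above can be sketched with a little shell arithmetic; the node, core, and rank counts below are simply the values from the 3-node, 144-rank example:

```shell
# Illustration of the default fill behavior for the example above:
# 3 nodes x 16 cores each, oversubscribed with 144 MPI ranks.
NODES=3
CORES_PER_NODE=16
RANKS=144

# Each "pass" places one rank per core on every node.
RANKS_PER_PASS=$(( NODES * CORES_PER_NODE ))   # 48 ranks placed per pass
PASSES=$(( RANKS / RANKS_PER_PASS ))           # 3 complete passes
RANKS_PER_NODE=$(( RANKS / NODES ))            # 48 ranks end up on each node

echo "$RANKS_PER_PASS ranks per pass, $PASSES passes, $RANKS_PER_NODE ranks per node"
```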

All job scripts start with an interpreter line, followed by a series of #PBS declarations that describe requirements of the job to the scheduler. The rest is a shell script, which sets up and runs the executable.

Batch scripts are divided into the following three sections:

Shell interpreter (one line)

The first line of a script can be used to specify the script's interpreter.

This line is optional.

If not used, the submitter's default shell will be used.

The line uses the syntax #!/path/to/shell, where the path to the shell may be

/usr/bin/csh

/usr/bin/ksh

/bin/bash

/bin/sh

PBS submission options

The PBS submission options are preceded by #PBS, making them appear as comments to a shell.

PBS will look for #PBS options in a batch script from the script's first line through the first non-comment line. A comment line begins with #.

#PBS options entered after the first non-comment line will not be read by PBS.
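As a minimal sketch of this parsing rule (the job name and walltime below are placeholders), note which #PBS lines PBS will and will not read:

```shell
#!/bin/bash
#PBS -N demo_job            # read by PBS: appears before the first non-comment line
echo "job body begins"      # first non-comment line; PBS stops scanning here
#PBS -l walltime=1:00:00    # NOT read by PBS: it follows a non-comment line
```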

Shell commands

The shell commands follow the last #PBS option and represent the executable content of the batch job.

If any #PBS lines follow executable statements, they will be treated as comments only. The exception to this rule is the shell specification on the first line of the script.

The execution section of a script will be interpreted by a shell and can contain multiple lines of executables, shell commands, and comments.

During normal execution, the batch script will end and exit the queue after the last line of the script.

The following examples show a typical job script header with various mpirun commands to submit a parallel job that executes ./a.out on 3 nodes with a wall-clock limit of two hours:

Jobs should be submitted from within a directory in the Lustre file system. It is best to always execute cd $PBS_O_WORKDIR as the first command. Please refer to the PBS Environment Variables section for further details.
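Putting these pieces together, a representative script might look like the following sketch; the account name, job name, and executable are placeholders, and the mpirun invocation assumes 16-core nodes as described above:

```shell
#!/bin/bash
#PBS -A UT-NTNL0121          # project to charge (placeholder)
#PBS -l nodes=3              # request 3 nodes
#PBS -l walltime=2:00:00     # two-hour wall-clock limit
#PBS -N example_job          # job name (placeholder)

cd $PBS_O_WORKDIR            # start in the submission directory (on Lustre)
mpirun -n 48 ./a.out         # 3 nodes x 16 cores = 48 MPI ranks
```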

For more complex job scripts, consult the PBS documentation, which describes additional options.

Unless otherwise specified, your default shell interpreter will be used to execute shell commands in job scripts. If the job script should use a different interpreter, then specify the correct interpreter using:

#PBS -S /bin/XXXX

Altering Batch Jobs

This section shows how to remove or alter batch jobs.

Remove Batch Job from the Queue

Jobs in the queue in any state can be stopped and removed from the queue using the command qdel.

For example, to remove a job with a PBS ID of 1234, use the following command:

> qdel 1234

More details on the qdel utility can be found on the qdel man page.

Hold Queued Job

Jobs in the queue in a non-running state may be placed on hold using the qhold command. Jobs placed on hold will not be removed from the queue, but they will not be eligible for execution.

For example, to move a currently queued job with a PBS ID of 1234 to a hold state, use the following command:

> qhold 1234

More details on the qhold utility can be found on the qhold man page.

Release Held Job

Once on hold, a job will not be eligible to run until it is released and returns to a queued state. The qrls command can be used to remove a job from the held state.

For example, to release job 1234 from a held state, use the following command:

> qrls 1234

More details on the qrls utility can be found on the qrls man page.

Modify Job Details

Jobs that are not running (including held jobs) can be modified with the qalter PBS command. For example, this command can be used to:

Modify the job's name:

$ qalter -N <newname> <jobid>

Modify the number of requested nodes:

$ qalter -l nodes=<NumNodes> <jobid>

Modify the job's wall time:

$ qalter -l walltime=<hh:mm:ss> <jobid>

Set the job's dependencies:

$ qalter -W depend=type:argument <jobid>

Remove a job's dependency (omit :argument):

$ qalter -W depend=type <jobid>

Notes:

Use qstat -f <jobid> to gather all the information about a job, including job dependencies.

Use qstat -a <jobid> to verify the changes afterward.

Users cannot specify a new walltime that exceeds the maximum walltime of the queue in which the job resides.

If you need to modify a running job, please contact us. Certain alterations can only be performed by administrators.

Interactive Batch Jobs

Interactive batch jobs give users interactive access to compute resources. A common use for interactive batch jobs is debugging. This section demonstrates how to run interactive jobs through the batch system and provides common usage tips.

Interactive work should not be run on the login nodes. To run a batch-interactive PBS job, use the -I option with qsub. After the interactive job starts, run computationally intensive applications from the Lustre scratch space, and place the executable after the mpirun command, just as in a batch script.

Interactive Batch Example

For interactive batch jobs, PBS options are passed through qsub on the command line. Refer to the following example:

qsub -I -A UT-NTNL0121 -l nodes=1,walltime=1:00:00

Option  Description
-I      Start an interactive session
-A      Charge to the “UT-NTNL0121” project
-l      Request 1 physical compute node (16 cores) for one hour

After running this command, you will have to wait until enough compute nodes are available, just as in any other batch job. However, once the job starts, the standard input and standard output of this terminal will be linked directly to the head node of your allocated resource. The executable should be placed on the same line after the mpirun command, just as in a batch script.

> cd /lustre/medusa/$USER
> mpirun -n 16 ./a.out

Issuing the exit command will end the interactive job.

Common PBS Options

This section gives a quick overview of common PBS options.

Necessary PBS options

#PBS -A <account>
    Causes the job time to be charged to <account>. The account string is typically composed of three letters followed by three digits, optionally followed by a subproject identifier. The showusage utility lists your valid assigned project ID(s). This is the only option required by all jobs.

#PBS -l nodes=<nodes>
    Number of requested nodes.

#PBS -l walltime=<time>
    Maximum wall-clock time. <time> is in the format HH:MM:SS. Default is 1 hour.

Other PBS Options

#PBS -o <name>
    Writes standard output to <name> instead of <job script>.o$PBS_JOBID. $PBS_JOBID is an environment variable created by PBS that contains the PBS job identifier.

#PBS -e <name>
    Writes standard error to <name> instead of <job script>.e$PBS_JOBID.

#PBS -j {oe,eo}
    Combines standard output and standard error into the standard error file (eo) or the standard output file (oe).

#PBS -m a
    Sends email to the submitter when the job aborts.

#PBS -m b
    Sends email to the submitter when the job begins.

#PBS -m e
    Sends email to the submitter when the job ends.

#PBS -M <address>
    Specifies the email address to use for -m options.

#PBS -N <name>
    Sets the job name to <name> instead of the name of the job script.

#PBS -S <shell>
    Sets the shell that will interpret the job script.

#PBS -q <queue>
    Directs the job to run under the specified QoS. This option is not required to run in the default QoS.

Note: Please do not use the PBS -V option. It can propagate large numbers of environment variable settings from the submitting shell into the job, which may cause problems for the batch environment. Instead of -V, pass only the necessary environment variables using -v <comma_separated_list_of_needed_envars>. You can also include module load statements in the job script.

Example:

#PBS -v PATH,LD_LIBRARY_PATH,PV_NCPUS,PV_LOGIN,PV_LOGIN_PORT

Further details and other PBS options may be found using the man qsub command.

PBS Environment Variables

PBS sets the environment variable PBS_O_WORKDIR to the directory from which the batch job was submitted.

By default, a job starts in your home directory. Often you will want to run cd $PBS_O_WORKDIR to return to the directory from which you submitted the job. The current working directory when you start mpirun should be on Lustre space.

Include the following command in your script if you want it to start in the submission directory:

cd $PBS_O_WORKDIR

PBS_JOBID

PBS sets the environment variable PBS_JOBID to the job's ID.

A common use for PBS_JOBID is to append the job's ID to the standard output and error file(s).

Include the following command in your script to append the job's ID to the standard output and error file(s)

#PBS -o scriptname.o$PBS_JOBID

PBS_NNODES

PBS sets the environment variable PBS_NNODES to the number of logical cores requested (not nodes). Given that Beacon has 16 physical cores per node, the number of nodes would be given by $PBS_NNODES/16.

For example, a standard MPI program is generally started with mpirun -n $PBS_NNODES ./a.out. See the Job Execution section for more details.
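As a sketch, with 16-core nodes the node count can be recovered from $PBS_NNODES like this (the value below stands in for what PBS would set for a 3-node request):

```shell
# PBS_NNODES holds the number of requested *cores*; here we mimic a 3-node job.
PBS_NNODES=48                   # would be set by PBS, not by the user
CORES_PER_NODE=16
NODES=$(( PBS_NNODES / CORES_PER_NODE ))
echo "$NODES nodes"             # 3 nodes
# In a job script you would then launch: mpirun -n $PBS_NNODES ./a.out
```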

Monitoring Job Status

This section lists some ways to monitor jobs in the batch queue. Torque and Moab provide multiple tools to view queue, system, and job status. Below are the most common and useful of these tools.

Users can alter certain attributes of queued jobs until they start running. The order in which jobs are run depends on the following factors:

number of cores requested - jobs that request more cores get a higher priority.

queue wait time - a job's priority increases the longer it waits in the queue.

account balance - jobs that use an account with a negative balance will have significantly lowered priority.

number of jobs - a maximum of five jobs per user will be eligible to run at any one time; the rest will be blocked.

In certain special cases, the priority of a job may be manually increased upon request. To request priority change you may contact NICS User Support. NICS will need the job ID and reason to submit the request.

Queues

Queues are used by the batch scheduler to aid in the organization of jobs. There is currently only one queue, 'batch'. Jobs are instead categorized by the Quality of Service (QoS) attribute.

Job priority is based on several factors, including the QoS and the number of nodes and wall-clock time requested. Jobs with larger node counts receive higher priority, while jobs with smaller node counts effectively run as backfill: while the scheduler is collecting nodes for a larger job, jobs with short wall-clock limits and small node counts may use those nodes without delaying the larger job's start time.

Jobs on Beacon are given a specific QoS based on investment type.

QoS        Priority   Jobs Queued/Running   Min Size   Max Size   Max Wall Clock Limit
priority   High       64/16                 1 node     None       6 days
campus     Medium     32/8                  1 node     64 nodes   3 days
backfill   Backfill   100/None              1 node     16 nodes   24:00:00
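For example, a short test job that fits within the backfill limits above might request that QoS in its script header; the node count and walltime below are illustrative:

```shell
#PBS -q backfill             # run under the backfill QoS
#PBS -l nodes=2              # within the 16-node backfill maximum
#PBS -l walltime=4:00:00     # within the 24:00:00 backfill limit
```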

Job Accounting

Projects are charged based on usage of compute resources. This section gives details on how each job’s usage is calculated. PBS allocates cores to batch jobs in units of the number of cores available per node. A node cannot be allocated to multiple jobs, so a job is charged for the entire node whether or not it uses all its cores.
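For example, assuming 16 cores per node as on Beacon, the charge for a job can be sketched as nodes multiplied by cores per node multiplied by wall-clock hours used, since the whole node is charged regardless of how many cores the job uses:

```shell
# Hypothetical charge for a 3-node job that runs for 2 hours on 16-core nodes.
NODES=3
CORES_PER_NODE=16
HOURS=2
CHARGE=$(( NODES * CORES_PER_NODE * HOURS ))
echo "$CHARGE core-hours"    # 96 core-hours
```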