This section provides a brief summary of what can be done once jobs
are submitted. The basic mechanisms for monitoring a job are
introduced, but the commands are discussed only briefly.
You are encouraged to
look at the man pages of the commands referred to (located in
Chapter 9) for more information.

2.6.1 Checking on the progress of jobs

When jobs are submitted, Condor will attempt to find resources
to run them.
A list of all users with submitted jobs
may be obtained through condor_status
with the
-submitters option.
An example of this would yield output similar to:
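(The listing below is illustrative; the submitter names and job
counts are hypothetical.)

    %  condor_status -submitters

    Name                 Machine      RunningJobs  IdleJobs  HeldJobs

    ballard@cs.wisc.edu  bluebird.c             0        11         0
    wright@raven.cs.wisc raven.cs.w             1         1         0
    jbasney@perdita.cs.w perdita.cs             0         0         5

The progress of queued jobs may then be checked with the condor_q
command, which displays the status of all jobs in the queue, with
output similar to:

    % condor_q

    -- Submitter: froth.cs.wisc.edu : <128.105.73.44:33847> : froth.cs.wisc.edu
     ID      OWNER            SUBMITTED    CPU_USAGE ST PRI SIZE CMD
     125.0   jbasney         4/10 15:35   0+00:00:00 I  -10  1.2  hello.remote
     127.0   raman           4/11 15:35   0+00:00:00 R  0    1.4  hello
     128.0   raman           4/11 15:35   0+00:02:33 I  0    1.4  hello

    3 jobs; 2 idle, 1 running, 0 held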

This output contains many columns of information about the
queued jobs.
The ST column (for status) shows the status of
current jobs in the queue. An R in the status column
means that the job is currently running.
An I stands for idle.
The job is not running right
now, because it is waiting for a machine to become available.
The status
H is the hold state. In the hold state,
the job will not be scheduled to
run until it is released (see the condor_hold and
condor_release reference pages).
Older versions of Condor used a
U in the status column to stand for unexpanded.
In this state,
a job has never
produced a checkpoint,
and when the job starts running, it will start from the
beginning.
Newer versions of Condor do not use the U state.

The CPU_USAGE time reported for a job is the time that has been
committed to the job. It is not updated for a job until
the job checkpoints. At that time, the job has made guaranteed forward
progress. Depending upon how the site administrator configured the pool,
several hours may pass between checkpoints, so do not worry if you do
not observe the CPU_USAGE entry changing by the hour.
Also note that this is actual CPU
time as reported by the operating system; it is not time as
measured by a wall clock.

Another useful method of tracking the progress of jobs is through the
user log. If you have specified a log command in
your submit file, the progress of the job may be followed by viewing the
log file. Various events, such as execution commencement, checkpoint,
eviction, and termination, are logged in the file, along with the time
at which each event occurred.
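For example, a line such as the following in the submit description
file (the file name here is illustrative) directs events for the job
to hello.log:

    log = hello.log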

When your job begins to run, Condor starts up a condor_shadow process
on the submit machine. The shadow process is the mechanism by which the
remotely executing job can access the environment from which it was
submitted, such as input and output files.

It is normal for a machine which has submitted hundreds of jobs to have
hundreds of shadows running on it. Since the text segments of
all these processes are the same, the load on the submit machine is usually
not significant. If, however, you notice degraded performance, you can limit
the number of jobs that can run simultaneously through the
MAX_JOBS_RUNNING configuration parameter. Please talk to your
system administrator about making the necessary configuration change.
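A sketch of the relevant entry in the Condor configuration file (the
value shown is only an example; the parameter is set by the
administrator):

    MAX_JOBS_RUNNING = 200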

You can also find all the machines that are running your job through the
condor_status command.
For example, to find all the machines that are
running jobs submitted by ``breach@cs.wisc.edu,'' type:
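    % condor_status -constraint 'RemoteUser == "breach@cs.wisc.edu"'

The resulting listing shows one line (machine name, architecture,
state, and so on) for each machine whose RemoteUser attribute matches
the given constraint; the machines that appear depend on the state of
the pool at the time.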

2.6.2 Removing a job from the queue

A job can be removed from the queue at any time by using the condor_rm
command. If the job that is being removed is currently running, the job
is killed without a checkpoint, and its queue entry is removed.
The following example shows the queue of jobs before and after
a job is removed.
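(An illustrative session; the job IDs, owners, and times are
hypothetical.)

    % condor_q

    -- Submitter: froth.cs.wisc.edu : <128.105.73.44:33847> : froth.cs.wisc.edu
     ID      OWNER            SUBMITTED    CPU_USAGE ST PRI SIZE CMD
     125.0   jbasney         4/10 15:35   0+00:00:00 I  -10  1.2  hello.remote
     132.0   raman           4/11 16:57   0+00:00:00 R  0    1.4  hello

    2 jobs; 1 idle, 1 running, 0 held

    % condor_rm 132.0

    % condor_q

    -- Submitter: froth.cs.wisc.edu : <128.105.73.44:33847> : froth.cs.wisc.edu
     ID      OWNER            SUBMITTED    CPU_USAGE ST PRI SIZE CMD
     125.0   jbasney         4/10 15:35   0+00:00:00 I  -10  1.2  hello.remote

    1 jobs; 1 idle, 0 running, 0 held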

2.6.3 Placing a job on hold

A job in the queue may be placed on hold by running the command
condor_hold.
A job in the hold state remains in the hold state until later released
for execution by the command condor_release.
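For example, to hold and later release the job with the (hypothetical)
ID 432.1:

    % condor_hold 432.1
    % condor_release 432.1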

Use of the condor_hold command causes a hard kill signal to be sent
to a currently running job (one in the running state).
For a standard universe job, this means that no checkpoint is
generated before the job stops running and enters the hold state.
When released, this standard universe job continues its execution
using the most recent checkpoint available.

Jobs in universes other than the standard universe that are running
when placed on hold will start over from the beginning when
released.

The manual pages for condor_hold and condor_release
contain usage details.

2.6.4 Changing the priority of jobs

In addition to the priorities assigned to each user, Condor also provides
each user with the capability of assigning priorities to each submitted job.
These job priorities are local to each queue and can be any integer value, with
higher values meaning better priority.

The default priority of a job is 0, but it can be changed using the
condor_prio command.
For example, to change the priority of a job to -15:
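(An illustrative session; the job ID and owner are hypothetical. Note
the change in the PRI column.)

    % condor_q raman

    -- Submitter: froth.cs.wisc.edu : <128.105.73.44:33847> : froth.cs.wisc.edu
     ID      OWNER            SUBMITTED    CPU_USAGE ST PRI SIZE CMD
     126.0   raman           4/11 15:06   0+00:00:00 I  0    0.3  hello

    1 jobs; 1 idle, 0 running, 0 held

    % condor_prio -p -15 126.0

    % condor_q raman

    -- Submitter: froth.cs.wisc.edu : <128.105.73.44:33847> : froth.cs.wisc.edu
     ID      OWNER            SUBMITTED    CPU_USAGE ST PRI SIZE CMD
     126.0   raman           4/11 15:06   0+00:00:00 I  -15  0.3  hello

    1 jobs; 1 idle, 0 running, 0 held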

It is important to note that these job priorities are completely
different from the user priorities assigned by Condor. Job priorities
do not impact user priorities. They are only a mechanism for the user to
identify the relative importance of jobs among all the jobs submitted by the
user to that specific queue.

2.6.5 Why does the job not run?

Users sometimes find that their jobs do not run. There are several reasons why
a specific job does not run. These reasons include failed job or machine
constraints, bias due to preferences, insufficient priority, and the preemption
throttle that is implemented by the condor_negotiator to prevent
thrashing. Many of these reasons can be diagnosed by using the -analyze
option of condor_q.
For example, a job (assigned the cluster.process value of
331228.2359) submitted to the local pool at UW-Madison
is not running.
Running condor_q's analyzer provided the following information:
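(Illustrative output; the machine counts below are hypothetical.)

    % condor_q -analyze 331228.2359

    ---
    331228.2359:  Run analysis summary.  Of 1243 machines,
       1066 are rejected by your job's requirements
        131 reject your job because of their own requirements
         25 match but are serving users with a better priority in the pool
         21 match but reject the job for unknown reasons
          0 match but will not currently preempt their existing job
          0 are available to run your job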

While the analyzer can diagnose most common problems, there are some situations
that it cannot reliably detect due to the instantaneous and local nature of the
information it uses to detect the problem. Thus, it may be that the analyzer
reports that resources are available to service the request, but the job still
does not run. In most of these situations, the delay is transient, and the
job will run during the next negotiation cycle.

If the problem persists and the analyzer is unable to detect the situation, it
may be that the job begins to run but immediately terminates due to some
problem. Viewing the job's error and log files
(specified in the submit command file) and Condor's SHADOW_LOG file
may assist in tracking down the problem. If the cause is still unclear, please
contact your system administrator.

2.6.6 In the log file

A job's log file contains a listing, in chronological order, of the
events that occurred during the life of the job.
The formatting of the events is always the same,
so that they may be machine readable.
Four fields are always present,
and they will most often be followed by other fields that give further
information specific to the type of event.

The first field in an event is the numeric value assigned as the
event type, in a 3-digit format.
The second field identifies the job that generated the event:
within parentheses, separated by periods, are the values of the
ClusterId and ProcId job ClassAd attributes,
together with the MPI-specific rank for MPI universe jobs
(a set of zeros for jobs run under universes other than MPI).
The third field is the date and time of the event logging.
The fourth field is a string that briefly describes the event.
Fields that follow the fourth field give further information for the specific
event type.
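For example, an execute event (with hypothetical job ID, timestamp,
and host) appears in the log as:

    001 (125.000.000) 04/10 15:37:23 Job executing on host: <128.105.146.14:1026>

Here 001 is the event type, (125.000.000) gives the ClusterId, ProcId,
and rank, 04/10 15:37:23 is the date and time, and the remainder of
the line describes the event.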

These are all of the events that can show up in a job log file:

Event Number: 000
Event Name: Job submitted
Event Description: This event occurs when a user submits a job.
It is the first event you will see for a job, and it should only occur
once.

Event Number: 001
Event Name: Job executing
Event Description: This event occurs when a job begins running.
It might occur more than once, since a job that is evicted and later
restarted generates a new execute event.

Event Number: 002
Event Name: Error in executable
Event Description: The job couldn't be run because the
executable was bad.

Event Number: 003
Event Name: Job was checkpointed
Event Description: The job's complete state was written to a checkpoint
file.
This might happen without the job being removed from a machine,
because checkpointing can happen periodically.

Event Number: 004
Event Name: Job evicted from machine
Event Description: A job was removed from a machine before it finished,
usually for a policy reason: perhaps an interactive user has claimed
the computer, or perhaps another job has a higher priority.

Event Number: 006
Event Name: Image size of job updated
Event Description: This event is informational.
It refers to the memory that the job is using while running. It
does not reflect the state of the job.

Event Number: 007
Event Name: Shadow exception
Event Description:
The condor_shadow, a program on the submit computer that watches
over the job and performs some services for the job, failed for some
catastrophic reason. The job will leave the machine and go back into
the queue.

Event Number: 010
Event Name: Job was suspended
Event Description: The job is still on the computer, but it is no longer
executing.
This is usually for a policy reason, like an interactive user using
the computer.

Event Number: 012
Event Name: Job was held
Event Description: The user has paused the job, perhaps with
the condor_hold command.
The job was stopped, and it will remain in the queue until it is
aborted or released.

Event Number: 013
Event Name: Job was released
Event Description: The user is requesting that a job on hold be re-run.

Event Number: 016
Event Name: POST script terminated
Event Description: A node in a DAGMan workflow has a script
that should be run after a job.
The script is run on the submit host.
This event signals that the post script has completed.

Event Number: 025
Event Name: Grid Resource Back Up
Event Description: A grid resource that was previously
unavailable is now available.

Event Number: 026
Event Name: Detected Down Grid Resource
Event Description: The grid resource that a job is to
run on is unavailable.

Event Number: 027
Event Name: Job submitted to grid resource
Event Description: A job has been submitted,
and is under the auspices of the grid resource.

2.6.7 Job Completion

When your Condor job completes (either through normal means or abnormal
termination by signal), Condor will remove it from the job queue (i.e.,
it will no longer appear in the output of condor_q) and insert it into
the job history file. You can examine the job history file with the
condor_history command. If you specified a log file in your submit
description file, then the job exit status will be recorded there as well.
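For example, to examine the history of jobs submitted by a
(hypothetical) user raman:

    % condor_history raman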

By default, Condor will send you an email message
when your job completes. You can modify this behavior with the
condor_submit ``notification'' command.
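For instance, to suppress the email entirely, the submit description
file could include:

    notification = Never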
The message will include the exit status of your job (i.e., the
argument your job passed to the exit system call when it completed) or
notification that your job was killed by a signal. It will also
include the following statistics (as appropriate) about your job:

Submitted at: when the job was submitted with condor_submit

Completed at: when the job completed

Real Time: elapsed time between when the job was submitted and
when it completed (days hours:minutes:seconds)

Run Time: total time the job was running (i.e., real time minus
queuing time)

Committed Time: total run time that contributed to job
completion (i.e., run time minus the run time that was lost because
the job was evicted without performing a checkpoint)

Remote User Time: total amount of committed time the job spent
executing in user mode

Remote System Time: total amount of committed time the job spent
executing in system mode

Total Remote Time: total committed CPU time for the job

Local User Time: total amount of time this job's
condor_shadow (remote system call server) spent executing in user
mode

Local System Time: total amount of time this job's
condor_shadow spent executing in system mode

Total Local Time: total CPU usage for this job's condor_shadow

Leveraging Factor: the ratio of total remote time to total
local time (a factor below 1.0 indicates that the job ran
inefficiently, spending more CPU time performing remote system calls
than actually executing on the remote machine)

Virtual Image Size: memory size of the job, computed when the
job checkpoints

Checkpoints written: number of successful checkpoints performed
by the job

Checkpoint restarts: number of times the job successfully
restarted from a checkpoint

Network: total network usage by the job for checkpointing and
remote system calls

Buffer Configuration: configuration of remote system call I/O
buffers

Total I/O: total file I/O detected by the remote system call
library

I/O by File: I/O statistics per file, produced by the remote
system call library

Remote System Calls: listing of all remote system calls
performed (both Condor-specific and Unix system calls) with a count of
the number of times each was performed