DESCRIPTION

OPTIONS

A Chirp server acting as the Confuga head node uses the normal
chirp_server(1) options. To run the Chirp server as the Confuga head
node, pass the Confuga URI to the --root switch. You must also enable
job execution with the --jobs switch.

The format for the Confuga URI is:
confuga:///path/to/workspace?option1=value&option2=value. The workspace
path is the location where Confuga maintains metadata and databases for the
head node. Confuga-specific options are also passed through the URI; they are
documented below. Examples demonstrating how to start Confuga and a small
cluster are at the end of this manual.
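As a brief illustration, a head node might be started like this (a sketch; the
workspace path and option values here are hypothetical):

```shell
# Start a Chirp server as the Confuga head node, with job execution
# enabled. The workspace path and URI options are hypothetical; quote
# the URI so the shell does not interpret the '&'.
chirp_server --jobs \
    --root='confuga:///var/confuga?concurrency=10&replication=push-async-2'
```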

auth <method>

Enable this method for Head Node to Storage Node authentication. The default is to enable all available authentication mechanisms.

concurrency <limit>

Limits the number of concurrent jobs executed by the cluster. The default is 0, meaning no limit.

pull-threshold <bytes>

Sets the threshold for pull transfers. The default is 128MB.

replication <type>

Sets the replication mode for satisfying job dependencies. type may be push-sync or push-async-N. The default is push-async-1.

scheduler <type>

Sets the scheduler used to assign jobs to storage nodes. The default is fifo-0.

tickets <tickets>

Sets the tickets used when authenticating with storage nodes. Paths must be absolute.

STORAGE NODES

Confuga uses regular Chirp servers as storage nodes. Each storage node is
added to the cluster using the confuga_adm(1) command. All storage
node Chirp servers must be run with job execution enabled (the --jobs
switch).
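For example, a storage node might be started as follows (a sketch; the root
path is hypothetical):

```shell
# Start a regular Chirp server as a Confuga storage node. Job execution
# must be enabled; the root path is hypothetical.
chirp_server --jobs --root=/var/chirp
```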

You must also ensure that the storage nodes and the Confuga head node are using
the same catalog_server(1). By default, this should be the case. The
EXAMPLES section below includes an example cluster using a manually
hosted catalog server.

ADDING STORAGE NODES

To add storage nodes to the Confuga cluster, use the confuga_adm(1)
administrative tool.
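As a rough sketch, adding a storage node by its address might look like the
following (the sn-add subcommand and its arguments here are assumptions;
consult confuga_adm(1) for the authoritative usage):

```shell
# Add a storage node to the cluster by its Chirp address (a sketch; the
# subcommand syntax is an assumption -- see confuga_adm(1)).
confuga_adm confuga:///var/confuga sn-add address chirp.example.com:9094
```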

EXECUTING WORKFLOWS

The easiest way to execute workflows on Confuga is through makeflow(1).
Only two Makeflow options are required: --batch-type and
--working-dir. Confuga uses the Chirp job protocol, so the batch type is
chirp. The --working-dir option defines the executing server, the Confuga
head node, and the namespace the workflow executes in.
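For example, a Makeflow invocation against Confuga might look like this (a
sketch; the hostname and workflow path are hypothetical):

```shell
# Run a workflow on Confuga via the Chirp job protocol. The hostname and
# namespace path are hypothetical.
makeflow --batch-type=chirp \
         --working-dir=chirp://confuga.example.com/path/to/workflow
```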

The workflow namespace is logically prepended to all file paths defined in the
Makeflow specification. So for example, if you have this Makeflow file:

a: exe
./exe > a

Confuga will execute /path/to/workflow/exe and produce the output file /path/to/workflow/a.

Unlike other batch systems used with Makeflow, like Condor or Work Queue,
all files used by a workflow must be in the Confuga file system. Condor
and Work Queue both stage workflow files from the submission site to the
execution sites. In Confuga, the entire workflow dataset, including
executables, is already resident. So when executing a new workflow, you need
to upload the workflow dataset to Confuga. The easiest way to do this is using
the chirp(1) command line tool:

chirp confuga.example.com put workflow/ /path/to/

Finally, Confuga does not save the stdout or stderr of jobs.
If you want these files for debugging purposes, you must explicitly save them.
To streamline the process, you may use Makeflow's --wrapper options to
save stdout and stderr.
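One way to do this is to wrap each rule so its stdout and stderr are
redirected to per-rule files and registered as outputs (a sketch; the wrapper
template and the %% output patterns here are assumptions, and the hostname
and path are hypothetical):

```shell
# Redirect each rule's stdout/stderr into per-rule files and declare them
# as wrapper outputs so Confuga saves them (template is an assumption).
makeflow --batch-type=chirp \
         --working-dir=chirp://confuga.example.com/path/to/workflow \
         --wrapper=$'{\n{}\n} > stdout.%% 2> stderr.%%' \
         --wrapper-output='stdout.%%' \
         --wrapper-output='stderr.%%'
```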

COPYRIGHT

The Cooperative Computing Tools are Copyright (C) 2003-2004 Douglas Thain and Copyright (C) 2005-2015 The University of Notre Dame. This software is distributed under the GNU General Public License. See the file COPYING for details.