If you use environment variables in your config file, they can be expanded at one of four points, each requiring a different level of backslash escaping:

at launch time, on the jobsub_submit command line -- $VARIABLE

in the job when fife_wrap is invoked -- \$VARIABLE

within the script or dataset declaration -- \\\$VARIABLE

in a file copy-out etc. -- \\\\\\\$VARIABLE (yes that is 7 backslashes)

You need this last form if you want to use, for example, an environment variable that comes from a setup action, prescript, or postscript.
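The rule behind these counts is that each interpretation pass consumes one layer of escaping (\\ becomes \ and \$ becomes $), so a value that must survive n passes needs 2^n - 1 backslashes. A minimal sketch in plain Python (not part of fife_wrap) showing the seven-backslash copy-out form passing intact through the three earlier layers:

```python
def strip_one_layer(s):
    """Simulate one interpretation pass: a backslash
    escapes whatever single character follows it."""
    out = []
    i = 0
    while i < len(s):
        if s[i] == "\\" and i + 1 < len(s):
            out.append(s[i + 1])  # keep the escaped character, drop the backslash
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

value = "\\" * 7 + "${fname}"   # the copy-out form: 7 backslashes
for layer in ("launch time", "fife_wrap invocation", "script"):
    value = strip_one_layer(value)
print(value)  # -> ${fname}, finally ready to expand during the copy-out
```

The same arithmetic explains the command-line override case: one more pass doubles the requirement from 7 to 15 backslashes.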

Note that if you are specifying this in an override on the command line, this adds another layer of escaping, and to reach the innermost layer you need (shudder) 15 backslashes; i.e.

-Oexecutable.arg_6=output_of_\\\\\\\\\\\\\\\${fname}
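For comparison, setting the same argument directly in the config file (a hypothetical arg_6 under [executable], matching the override above) needs only the seven-backslash form:

```ini
[executable]
arg_6 = output_of_\\\\\\\${fname}
```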

Note that (as of v3_3 and later) the environment variables that fife_wrap sets in the multifile loop -- fname for the file name, furi for the file URI, and nthfile for the number of input files -- have %(_fname)s, %(_furi)s, and %(_nthfile)s shorthands in the config, which expand to \\\\\\\${fname} etc.
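With the shorthand, the same hypothetical argument avoids backslash counting entirely, since %(_fname)s is substituted when the config is read:

```ini
[executable]
arg_6 = output_of_%(_fname)s
```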

[You can actually put the -e arguments in the submit section, but this is more readable.] There are too many potentially useful environment variables to list them all here, particularly when debugging new jobs, but we recommend:

add_to_dataset name -- add a metadata dataset.tag value to files, and declare them as a dataset. A special shorthand "_poms_task" can be given for the name; it will be replaced with poms_depends_${POMS_TASK_ID}_1, the input dataset for a dependent stage.

dataset_exclude = glob -- exclude files matching glob from the dataset

add_location = True -- add a location for each file after copying it to dest
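Putting those recommendations together, here is a sketch of a config fragment -- assuming, as in typical fife_launch configs, that these options live in a [job_output] section alongside a dest option; the destination path and glob are made up:

```ini
[job_output]
dest = /pnfs/mine/scratch/out   ; hypothetical copy-out destination
add_to_dataset = _poms_task     ; declared as poms_depends_${POMS_TASK_ID}_1
dataset_exclude = *.log         ; keep log files out of the dataset
add_location = True             ; register the new location after the copy
```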