If proxy-prefix is enabled and you’re running more than one Galaxy
instance behind one hostname, you will want to set this to the
same path as the prefix in the filter above. This value becomes
the “path” attribute set in the cookie so the cookies from each
instance will not clobber each other.

By default, Galaxy uses a SQLite database at
‘database/universe.sqlite’. You may use a SQLAlchemy connection
string to specify an external database instead. This string takes
many options which are explained in detail in the config file
documentation.
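As a sketch (the option name follows this config file; host, user, password, and database name are placeholders):

```ini
# Default SQLite database:
#database_connection = sqlite:///./database/universe.sqlite?isolation_level=IMMEDIATE
# External PostgreSQL database via a SQLAlchemy connection string:
database_connection = postgresql://galaxyuser:secret@localhost:5432/galaxy
```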

If large database query results are causing memory or response
time issues in the Galaxy process, you can leave results on the
database server instead (server-side cursors). This option is only
available for PostgreSQL and is highly recommended.

If Galaxy auto-creates a PostgreSQL database on startup, it can be
based on an existing template database; this option sets that
template. This is probably only useful for testing, but it is
documented here for completeness.

Slow query logging. Queries slower than the threshold indicated
below will be logged at the debug level. A value of '0' disables
slow query logging. For example, set this to 0.005 to log all
queries taking longer than 5 milliseconds.
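For example, to log any query taking longer than 5 ms (value illustrative):

```ini
slow_query_log_threshold = 0.005
```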

By default, Galaxy will use the same database to track user data
and tool shed install data. There are many situations in which it
is valuable to separate these - for instance bootstrapping fresh
Galaxy instances with pretested installs. The following option
can be used to separate the tool shed install database (all other
options listed above but prefixed with install_ are also
available).
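A sketch of splitting out the install database (connection strings are placeholders; the install_ prefix follows the convention described above):

```ini
database_connection = postgresql://galaxyuser:secret@localhost:5432/galaxy
install_database_connection = postgresql://galaxyuser:secret@localhost:5432/galaxy_install
```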

Tool config files define which tools are available in Galaxy.
Tools can be locally developed or installed from Galaxy tool
sheds. (config/tool_conf.xml.sample will be used if this is left
unset and config/tool_conf.xml does not exist.)

Enable / disable checking if any tools defined in the above non-
shed tool_config_files (i.e., tool_conf.xml) have been migrated
from the Galaxy code distribution to the Tool Shed. This setting
should generally be set to False only for development Galaxy
environments that are often rebuilt from scratch where migrated
tools do not need to be available in the Galaxy tool panel. If
the following setting remains commented, the default setting will
be True.

Tool config maintained by tool migration scripts. If you use the
migration scripts to install tools that have been migrated to the
tool shed upon a new release, they will be added to this tool
config file.

File that contains the XML section and tool tags from all tool
panel config files integrated into a single file that defines the
tool panel layout. This file can be changed by the Galaxy
administrator to alter the layout of the tool panel. If not
present, Galaxy will create it.

Path to the directory in which tool dependencies are placed. This
is used by the Tool Shed to install dependencies and can also be
used by administrators to manually install or link to
dependencies. For details, see:
https://galaxyproject.org/admin/config/tool-dependencies
Set this to None to explicitly disable tool dependency handling.
If this option is set to None or an invalid path, installing tools
with dependencies from the Tool Shed will fail.

The dependency resolvers config file specifies an ordering and
options for how Galaxy resolves tool dependencies (requirement
tags in Tool XML). The default ordering is to the use the Tool
Shed for tools installed that way, use local Galaxy packages, and
then use Conda if available. See https://github.com/galaxyproject/
galaxy/blob/dev/doc/source/admin/dependency_resolvers.rst for more
information on these options.

conda_prefix is the location on the filesystem where Conda
packages and environments are installed. IMPORTANT: due to a
current limitation in Conda, the total length of the conda_prefix
and the job_working_directory paths should be less than 50
characters!
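For example, a short prefix helps stay within the combined 50-character limit (path is a placeholder):

```ini
# Keep this path short: len(conda_prefix) + len(job_working_directory) < 50
conda_prefix = /gx/_conda
```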

You must set this to True if conda_prefix and
job_working_directory are not on the same volume, or some Conda
dependencies will fail to execute at job runtime. Conda will copy
package contents instead of creating hardlinks or symlinks. This
will prevent problems with some specific packages (perl, R), at
the cost of extra disk space usage and extra time spent copying
packages.

Certain dependency resolvers (namely Conda) take a considerable
amount of time to build an isolated job environment in the
job_working_directory if the job working directory is on a network
share. Set the following option to True to cache the dependencies
in a folder. This option is beta and should only be used if you
experience long waiting times before a job is actually submitted
to your cluster.

By default, when using a cached dependency manager, the
dependencies are cached when installing new tools and when using
tools for the first time. Set this to False if you prefer
dependencies to be cached only when installing new tools.

Set to True to enable monitoring of tools and tool directories
listed in any tool config file specified in the tool_config_file
option. If changes are found, tools are automatically reloaded.
Watchdog ( https://pypi.python.org/pypi/watchdog ) must be
installed and available to Galaxy to use this option. Other
options include 'auto', which will attempt to watch tools if the
watchdog library is available but won't fail to load Galaxy if it
is not, and 'polling', which uses a less efficient monitoring
scheme that may work in a wider range of scenarios than the
watchdog default.
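For example (the accepted values are described above):

```ini
# One of: True, False, auto, polling
watch_tools = auto
```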

Enable Galaxy to fetch Docker containers registered with quay.io,
generated from tool requirements resolved through Conda. These
containers (when available) have been generated using mulled -
https://github.com/mulled. These containers are experimental and
availability will vary by tool. This option will additionally only
be used for job destinations with Docker enabled.

Container resolvers configuration (beta). Set up a file describing
the container resolvers to use when discovering containers for
Galaxy. If this is set to None, the default container resolvers
loaded are determined by enable_beta_mulled_containers.

involucro is a tool used to build Docker containers for tools from
Conda dependencies referenced in tools as `requirement`s. The
following path is the location of involucro on the Galaxy host.
This is ignored if the relevant container resolver isn't enabled;
involucro will be installed on demand unless involucro_auto_init
is set to False.

Enable automatic polling of relevant tool sheds to see if any
updates are available for installed repositories. Ideally only
one Galaxy server process should be able to check for repository
updates. The setting for hours_between_check should be an integer
between 1 and 24.

Enable use of an in-memory registry with bi-directional
relationships between repositories (i.e., in addition to lists of
dependencies for a repository, keep an in-memory registry of
dependent items for each repository).

XML config file that contains additional data table entries for
the ToolDataTableManager. This file is automatically generated
based on the current installed tool shed repositories that contain
valid tool_data_table_conf.xml.sample files. At the time of
installation, these entries are automatically added to the
following file, which is parsed and applied to the
ToolDataTableManager at server start up.

Set to True to enable monitoring of the tool_data and
shed_tool_data_path directories. If changes in tool data table
files are found, the tool data tables for that data manager are
automatically reloaded. Watchdog (
https://pypi.python.org/pypi/watchdog ) must be installed and
available to Galaxy to use this option. Other options include
'auto', which will attempt to use the watchdog library if it is
available but won't fail to load Galaxy if it is not, and
'polling', which uses a less efficient monitoring scheme that may
work in a wider range of scenarios than the watchdog default.

Datatypes config file(s) define what data (file) types are
available in Galaxy (the .sample file is used if the default does
not exist). If a datatype appears in multiple files, the last
definition is used (though the first sniffer definition is used,
so limit sniffer definitions to one file).

Visualizations config directory: where to look for individual
visualization plugins. The path is relative to the Galaxy root
dir. To use an absolute path begin the path with ‘/’. This is a
comma separated list. Defaults to “config/plugins/visualizations”.

Interactive environment plugins root directory: where to look for
interactive environment plugins. By default none will be loaded.
Set to config/plugins/interactive_environments to load Galaxy’s
stock plugins. These will require Docker to be configured and have
security considerations, so proceed with caution. The path is
relative to the Galaxy root dir. To use an absolute path begin
the path with ‘/’. This is a comma separated list.

To run interactive environment containers in Docker Swarm mode (on
an existing swarm), set this option to True and set
docker_connect_port in the IE plugin config (ini) file(s) of any
IE plugins you have enabled and ensure that you are not using any
docker run-specific options in your plugins’ command_inject
options (swarm mode services run using docker service create,
which has a different and more limited set of options). This
option can be overridden on a per-plugin basis by using the
swarm_mode option in the plugin’s ini config file.

Interactive tour directory: where to store interactive tour
definition files. Galaxy ships with several basic interface tours
enabled, though a different directory with custom tours can be
specified here. The path is relative to the Galaxy root dir. To
use an absolute path begin the path with ‘/’. This is a comma
separated list.

Webhooks directory: where to store webhooks - plugins to extend
the Galaxy UI. By default none will be loaded. Set to
config/plugins/webhooks/demo to load Galaxy’s demo webhooks. To
use an absolute path begin the path with ‘/’. This is a comma
separated list. Add test/functional/webhooks to this list to
include the demo webhooks used to test the webhook framework.

Set the default shell used by non-containerized jobs Galaxy-wide.
This defaults to bash for all jobs and can be overridden at the
destination level for heterogeneous clusters. Conda job resolution
requires bash or zsh, so if this is switched to /bin/sh, for
instance, Conda resolution should be disabled. Containerized jobs
always use /bin/sh, so for maximum portability tool authors
should assume generated commands run in sh.
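For example (path illustrative; override per-destination for heterogeneous clusters):

```ini
default_job_shell = /bin/bash
```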

Citation related caching. Tool citation information may be
fetched from external sources such as http://dx.doi.org/ by Galaxy
- the following parameters can be used to control the caching used
to store this information.

Tools with a number of outputs not known until runtime can write
these outputs to a directory for collection by Galaxy when the job
is done. Previously, this directory was new_file_path, but using
one global directory can cause performance problems, so using
job_working_directory (‘.’ or cwd when a job is run) is
encouraged. By default, both are checked to avoid breaking
existing tools.

Galaxy sends mail for various things: subscribing users to the
mailing list if they request it, password resets, reporting
dataset errors, and sending activation emails. To do this, it
needs to send mail through an SMTP server, which you may define
here (host:port). Galaxy will automatically try STARTTLS but will
continue upon failure.
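A sketch (host, port, and credentials are placeholders; smtp_username and smtp_password are assumed to be the usual companion options for authenticated SMTP):

```ini
smtp_server = smtp.example.org:587
smtp_username = galaxy-mailer
smtp_password = secret
```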

On the user registration form, users may choose to join a mailing
list. This is the address used to subscribe to the list. Uncomment
and leave empty if you want to remove this option from the user
registration form.

Datasets in an error state include a link to report the error.
Those reports will be sent to this address. Error reports are
disabled if no address is set. This address is also shown to
users as a contact in case of Galaxy misconfiguration and other
problems users may encounter.

Email address to use in the 'From' field when sending emails for
account activations, workflow step notifications, and password
resets. We recommend using a string in the following format:
Galaxy Project <galaxy-no-reply@example.com>. If not configured,
'<galaxy-no-reply@HOSTNAME>' will be used.
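For example:

```ini
email_from = Galaxy Project <galaxy-no-reply@example.com>
```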

The e-mail domain blacklist is used to filter out users
registering with disposable email addresses. If the address domain
matches any domain on the blacklist, registration is refused.

User account activation feature global flag. If set to "False",
the rest of the account activation configuration is ignored and
user activation is disabled (i.e., accounts are active upon
registration). Activation also does not work if no SMTP server is
defined.

Activation grace period (in hours). Activation is not forced
(login is not disabled) until the grace period has passed. Users
within the grace period can't run jobs. Enter 0 to disable the
grace period. Users with OpenID logins have an indefinite grace
period.

Password expiration period (in days). Users are required to change
their password every x days. Users will be redirected to the
change password screen when they log in after their password
expires. Enter 0 to disable password expiration.

Galaxy can display data in various external browsers. These
options specify which browsers should be available. URLs and
builds available at these browsers are defined in the specified
files. If use_remote_user = True, display application servers
will be denied access to Galaxy, so displaying datasets at these
sites will fail. display_servers contains a list of hostnames
which should be allowed to bypass security to display datasets.
Please be aware that there are security implications if this is
allowed. More details (including required changes to the proxy
server config) are available in the Apache proxy documentation on
the Galaxy Community Hub. The list of servers in this sample
config is for the UCSC Main, Test and Archaea browsers, but the
default if left commented is to not allow any display sites to
bypass security (you must uncomment the line below to allow them).

To disable the old-style display applications that are hardcoded
into datatype classes, set enable_old_display_applications =
False. This may be desirable due to using the new-style, XML-
defined, display applications that have been defined for many of
the datatypes that have the old-style. There is also a potential
security concern with the old-style applications, where a
malicious party could provide a link that appears to reference the
Galaxy server, but contains a redirect to a third-party server,
tricking a Galaxy user to access said site.

URL (with scheme http/https) of the Galaxy instance as accessible
within your local network - if specified, this is used as a
default by Pulsar file staging and the Jupyter Docker container
for communicating back with Galaxy via the API. If you are
attempting to set up GIEs on Mac OS X with Docker for Mac, this
should likely be the IP address of your machine on the virtualbox
network (vboxnet0) set up for the Docker host VM. This can be
found by running ifconfig and using the IP address of the vboxnet0
network.

If the above URL cannot be determined ahead of time in dynamic
environments but the port which should be used to access Galaxy
can be, this should be set to prevent Galaxy from having to guess.
For example, if Galaxy is sitting behind a proxy with REMOTE_USER
enabled, infrastructure shouldn't talk to Python processes
directly and this should be set to 80 or 443, etc. If unset, this
file will be read for a server block defining a port corresponding
to the webapp.
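A sketch of both options (URL and port are placeholders; galaxy_infrastructure_web_port is assumed to be the option name for the port-only variant):

```ini
galaxy_infrastructure_url = http://localhost:8080
# Or, in dynamic environments where only the port is known ahead of time:
#galaxy_infrastructure_web_port = 8080
```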

Serve static content, which must be enabled if you're not serving
it via a proxy server. These options should be self explanatory
and so are not documented individually. You can use these paths
(or ones in the proxy server) to point to your own styles.

For help on configuring the advanced proxy features, see:
http://usegalaxy.org/production
Apache can handle file downloads (Galaxy-to-user) via
mod_xsendfile. Set this to True to inform Galaxy that
mod_xsendfile is enabled upstream.

The same download handling can be done by nginx using X-Accel-
Redirect. This should be set to the path defined in the nginx
config as an internal redirect with access to Galaxy’s data files
(see documentation linked above).

The following default adds a header to web request responses that
will cause modern web browsers to not allow Galaxy to be embedded
in the frames of web applications hosted at other hosts - this can
help prevent a class of attack called clickjacking
(https://www.owasp.org/index.php/Clickjacking). If you configure
a proxy in front of Galaxy - please ensure this header remains
intact to protect your users. Uncomment and leave empty to not
set the X-Frame-Options header.
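For example (SAMEORIGIN is a common choice that still allows Galaxy to frame itself; value illustrative):

```ini
x_frame_options = SAMEORIGIN
```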

nginx can also handle file uploads (user-to-Galaxy) via
nginx_upload_module. Configuration for this is complex and
explained in detail in the documentation linked above. The upload
store is a temporary directory in which files uploaded by the
upload module will be placed.

Galaxy can also use nginx_upload_module to receive files staged
out upon job completion by remote job runners (i.e. Pulsar) that
initiate staging operations on the remote end. See the Galaxy
nginx documentation for the corresponding nginx configuration.

Have Galaxy manage dynamic proxy component for routing requests to
other services based on Galaxy’s session cookie. It will attempt
to do this by default though you do need to install node+npm and
do an npm install from lib/galaxy/web/proxy/js. It is generally
more robust to configure this externally, managing it however
Galaxy is managed. If True, Galaxy will only launch the proxy if
it is actually going to be used (e.g. for Jupyter).

Additionally, when the dynamic proxy is proxied by an upstream
server, you'll want to specify a prefixed URL so both Galaxy and
the proxy reside under the same path that your cookies are under.
This will result in a URL like
https://FQDN/galaxy-prefix/gie_proxy for proxying.

The golang proxy uses a RESTful HTTP API for communication with
Galaxy instead of a JSON or SQLite file for IPC. If you do not
specify this, it will be set randomly for you. You should set this
if you are managing the proxy manually.

Turn on logging of user actions to the database. Actions
currently logged are grid views, tool searches, and use of the
"recently used tools" menu. The log_events and log_actions
functionality will eventually be merged.

Sanitize all HTML tool output. By default, all tool output served
as ‘text/html’ will be sanitized thoroughly. This can be disabled
if you have special tools that require unaltered output. WARNING:
disabling this does make the Galaxy instance susceptible to XSS
attacks initiated by your users.

Whitelist sanitization file. Datasets created by tools listed in
this file are trusted and will not have their HTML sanitized on
display. This can be manually edited or manipulated through the
Admin control panel – see “Manage Display Whitelist”

By default, Galaxy will serve non-HTML tool output that may
potentially contain browser-executable JavaScript content as plain
text. This will, for instance, cause SVG datasets to not render
properly, so it may be disabled by setting the following option to
True.

Return an Access-Control-Allow-Origin response header that matches
the Origin header of the request if that Origin hostname matches
one of the strings or regular expressions listed here. This is a
comma-separated list of hostname strings or regular expressions
beginning and ending with /. E.g.
mysite.com,google.com,usegalaxy.org,/^[\w.]*example\.com/
See:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS
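For example, allowing one literal hostname and one regular expression (values illustrative):

```ini
allowed_origin_hostnames = mysite.com,/^[\w.]*example\.com/
```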

Set the following to True to use Jupyter nbconvert to build HTML
from Jupyter notebooks in Galaxy histories. This process may
allow users to execute arbitrary code or serve arbitrary HTML. If
enabled, Jupyter must be available on Galaxy's PATH; to set this
up, run `pip install jinja2 pygments jupyter` in Galaxy's
virtualenv.

Debug enables access to various config options useful for
development and debugging: use_lint, use_profile, use_printdebug
and use_interactive. It also causes the files used by PBS/SGE
(submission script, output, and error) to remain on disk after the
job is complete.

Control the period (in seconds) between dumps. Use -1 to disable.
Regardless of this setting, if use_heartbeat is enabled, you can
send a Galaxy process (unless running with uWSGI) SIGUSR1 (kill
-USR1) to force a dump.

Log to Sentry. Sentry is an open source logging and error
aggregation platform. Setting sentry_dsn will enable the Sentry
middleware, and errors will be sent to the indicated Sentry
instance. This connection string is available in your Sentry
instance under <project_name> -> Settings -> API Keys.

Log to statsd. Statsd is an external statistics aggregator
(https://github.com/etsy/statsd). Enabling the following options
will cause Galaxy to log request timing and other statistics to
the configured statsd instance. The statsd_prefix is useful if
you are running multiple Galaxy instances and want to segment
statistics between them within the same aggregator.

Log to graphite. Graphite is an external statistics aggregator
(https://github.com/graphite-project/carbon). Enabling the
following options will cause Galaxy to log request timing and
other statistics to the configured graphite instance. The
graphite_prefix is useful if you are running multiple Galaxy
instances and want to segment statistics between them within the
same aggregator.

Add an option to the library upload form which allows authorized
non-administrators to upload a directory of files. The configured
directory must contain sub-directories named the same as the non-
admin user’s Galaxy login ( email ). The non-admin user is
restricted to uploading files or sub-directories of files
contained in their directory.

For security reasons, users may not import any files that actually
lie outside of their user_library_import_dir (e.g. using
symbolic links). A list of directories can be allowed by setting
the following option (the list is comma-separated). Be aware that
any user with library import permissions can import from
anywhere in these directories (assuming they are able to create
symlinks to them).

In conjunction or alternatively, Galaxy can restrict user library
imports to those files that the user can read (by checking basic
unix permissions). For this to work, the username has to match the
username on the filesystem.

Allow admins to paste filesystem paths during upload. For
libraries this adds an option to the admin library upload tool
allowing admins to paste filesystem paths to files and directories
in a box, and these paths will be added to a library. For history
uploads, this allows pasting in paths as URIs (i.e. prefixed with
file://). Set to True to enable. Please note the security
implication that this will give Galaxy admins access to anything
your Galaxy user has access to.

Users may choose to download multiple files from a library in an
archive. By default, Galaxy allows users to select from a few
different archive formats if testing shows that Galaxy is able to
create files using these formats. Specific formats can be disabled
with this option, separate more than one format with commas.
Available formats are currently ‘zip’, ‘gz’, and ‘bz2’.

Some sequencer integration features in beta allow you to
automatically transfer datasets. This is done using a lightweight
transfer manager which runs outside of Galaxy (but is spawned by
it automatically). Galaxy will communicate with this manager over
the port specified here.

Boosts are used to customize this instance's toolbox search. The
higher the boost, the more importance the scoring algorithm gives
to the given field. Section refers to the tool group in the tool
panel; the rest of the fields are tool attributes.

Galaxy encodes various internal values when these values will be
output in some format (for example, in a URL or cookie). You
should set a key to be used by the algorithm that encodes and
decodes these values. It can be any string up to 448 bits long.
One simple way to generate a value for this is with the shell
command:
python -c 'import time; print time.time()' | md5sum | cut -f 1 -d ' '
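On a modern system, a sketch using Python's standard secrets module produces a stronger value than the time-based command above (32 random bytes = 256 bits, within the 448-bit limit):

```python
import secrets

# 32 random bytes rendered as 64 hex characters; paste the output as id_secret.
print(secrets.token_hex(32))
```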

User authentication can be delegated to an upstream proxy server
(usually Apache). The upstream proxy should set a REMOTE_USER
header in the request. Enabling remote user disables regular
logins. For more information, see:
https://galaxyproject.org/admin/config/apache-proxy

If use_remote_user is enabled and your external authentication
method just returns bare usernames, set a default mail domain to
be appended to usernames, to become your Galaxy usernames (email
addresses).

If use_remote_user is enabled, the header that the upstream proxy
provides the remote username in defaults to HTTP_REMOTE_USER (the
‘HTTP_’ is prepended by WSGI). This option allows you to change
the header. Note, you still need to prepend ‘HTTP_’ to the header
in this option, but your proxy server should not include ‘HTTP_’
at the beginning of the header name.

If use_remote_user is enabled, anyone who can log in to the Galaxy
host may impersonate any other user by simply sending the
appropriate header. Thus a secret shared between the upstream
proxy server and Galaxy is required. If anyone other than the
Galaxy user is using the server, then apache/nginx should pass a
value in the header 'GX_SECRET' that is identical to the one
below.

If an e-mail address is specified here, it will hijack remote user
mechanics (use_remote_user) and have the webapp inject a
single fixed user. This has the effect of turning Galaxy into a
single user application with no login or external proxy required.
Such applications should not be exposed to the world.

Administrative users - set this to a comma-separated list of valid
Galaxy users (email addresses). These users will have access to
the Admin section of the server, and will have access to create
users, groups, roles, libraries, and more. For more information,
see: https://galaxyproject.org/admin/

By default, users’ data will be public, but setting this to True
will cause it to be private. Does not affect existing users and
data, only ones created after this option is set. Users may still
change their default back to public.

Expose user list. Setting this to True will expose the user list
to authenticated users. This makes sharing datasets in smaller
Galaxy instances much easier, as they can type a name/email and
have the correct user show up. This makes less sense on large
public Galaxy instances where that data shouldn’t be exposed. For
semi-public Galaxies, it may make sense to expose just the
username and not email, or vice versa.
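For instance, a semi-public instance might expose usernames but not email addresses (option names as in the sample configuration):

```ini
expose_user_name = True
expose_user_email = False
```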

Whitelist for local network addresses for “Upload from URL”
dialog. By default, Galaxy will deny access to the local network
address space, to prevent users making requests to services which
the administrator did not intend to expose. Previously, you could
request any network service that Galaxy might have had access to,
even if the user could not normally access it. It should be a
comma-separated list of IP addresses or IP address/mask, e.g.
10.10.10.10,10.0.1.0/24,fd00::/8
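The matching logic for such a list can be sketched with Python’s ipaddress module (an illustration of address/mask matching only, not Galaxy’s actual implementation):

```python
import ipaddress

def in_whitelist(addr, whitelist):
    """Return True if addr matches any entry in the comma-separated whitelist."""
    ip = ipaddress.ip_address(addr)
    for entry in whitelist.split(","):
        # A bare address like 10.10.10.10 becomes a /32 (or /128) network;
        # mixed IPv4/IPv6 comparisons simply yield False.
        if ip in ipaddress.ip_network(entry.strip(), strict=False):
            return True
    return False

print(in_whitelist("10.0.1.42", "10.10.10.10,10.0.1.0/24,fd00::/8"))  # True
print(in_whitelist("8.8.8.8", "10.10.10.10,10.0.1.0/24,fd00::/8"))    # False
```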

Set the following to a number of threads greater than 1 to spawn a
Python task queue for dealing with large tool submissions (either
through the tool form or as part of an individual workflow step
across a large collection). This affects workflow scheduling and web
processes, not job handlers. This is a beta option and should not
be used in production.

Following options only apply to workflows scheduled using the
legacy workflow run API (running workflows via a POST to
/api/workflows). Force usage of Galaxy’s beta workflow scheduler
under certain circumstances - this workflow scheduling forces
Galaxy to schedule workflows in the background so initial
submission of the workflows is significantly sped up. This does
however force the user to refresh their history manually to see
newly scheduled steps (for “normal” workflows - steps are still
scheduled far in advance of them being queued and scheduling here
doesn’t refer to actual cluster job scheduling). Workflows
containing more than the specified number of steps will always use
Galaxy’s beta workflow scheduling.

This option likewise applies only to workflows scheduled using the
legacy workflow run API (running workflows via a POST to
/api/workflows): it switches to Galaxy’s beta workflow scheduling
for all workflows involving collections.

If multiple job handlers are enabled, allow Galaxy to schedule
workflow invocations in multiple handlers simultaneously. This is
discouraged because it results in a less predictable order of
workflow datasets within histories.

This is the maximum amount of time a workflow invocation may stay
in an active scheduling state in seconds. Set to -1 to disable
this maximum and allow any workflow invocation to schedule
indefinitely. The default corresponds to 1 month.
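Assuming maximum_workflow_invocation_duration is the option name, the one-month default works out to 31 * 24 * 60 * 60 seconds:

```ini
# 31 days in seconds; set to -1 to let invocations schedule indefinitely.
maximum_workflow_invocation_duration = 2678400
```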

Specify a maximum number of jobs that any given workflow
scheduling iteration can create. Set this to a positive integer to
prevent large collection jobs in a workflow from preventing other
jobs from executing. This may also mitigate memory issues
associated with scheduling workflows at the expense of increased
total DB traffic because model objects are expunged from the
SQLAlchemy session between workflow invocation scheduling iterations.
Set to -1 to disable any such maximum (the default).
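A sketch of capping scheduling iterations (option name per the sample configuration):

```ini
# At most 1000 jobs per scheduling iteration; -1 (the default) means no cap.
maximum_workflow_jobs_per_scheduling_iteration = 1000
```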

Master key that allows many API admin actions to be used without
actually having a defined admin user in the database/config. Only
set this if you need to bootstrap Galaxy; you probably do not want
to set this on public servers.
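For example, during bootstrapping only (and removed afterwards):

```ini
# Grants admin API access to anyone who knows it; never leave this set
# on a production or public server.
master_api_key = some-long-random-string
```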

Enable a feature when running workflows. When enabled, default
datasets are selected for “Set at Runtime” inputs from the history
such that the same input will not be selected twice, unless there
are more inputs than compatible datasets in the history. When
False, the most recently added compatible item in the history will
be used for each “Set at Runtime” input, independent of the others
in the workflow.

Enable Galaxy’s “Upload via FTP” interface. You’ll need to
install and configure an FTP server (we’ve used ProFTPd since it
can use Galaxy’s database for authentication) and set the
following two options. This should point to a directory containing
subdirectories matching users’ identifier (defaults to e-mail),
where Galaxy will look for files.

User attribute to use as subdirectory in calculating default
ftp_upload_dir pattern. By default this will be email so a user’s
FTP upload directory will be ${ftp_upload_dir}/${user.email}. It
can also be set to other attributes, such as id or username.
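Putting the two FTP options together, a setup keyed on username rather than email might look like this (the path is illustrative):

```ini
ftp_upload_dir = /srv/galaxy/ftp
# Files are expected in ${ftp_upload_dir}/${user.username}.
ftp_upload_dir_identifier = username
```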

This option allows users to see the full path of datasets via the
“View Details” option in the history. This option also exposes the
command line to non-administrative users. Administrators can
always see dataset paths.

To increase performance of job execution and the web interface,
you can separate Galaxy into multiple processes. There is more
than one way to do this, and the approaches are explained in detail
in the documentation:
https://galaxyproject.org/admin/config/performance/scaling By
default, Galaxy manages and executes jobs from within a single
process and notifies itself of new jobs via in-memory queues.
Jobs are run locally on the system on which Galaxy is started.
Advanced job running capabilities can be configured through the
job configuration file.

When jobs fail due to job runner problems, Galaxy can be
configured to retry these or reroute the jobs to new destinations.
Very fine control of this is available with resubmit declarations
in job_conf.xml. For simple deployments of Galaxy though, the
following attribute can define resubmission conditions for all job
destinations. If any job destination defines even one resubmission
condition explicitly in job_conf.xml - the condition described by
this option will not apply to that destination. For instance, the
condition: ‘attempt < 3 and unknown_error and (time_running < 300
or time_since_queued < 300)’ would retry up to two times jobs that
didn’t fail due to detected memory or walltime limits but did fail
quickly (either while queueing or running). The commented-out
default below results in no default job resubmission condition;
failing jobs are simply failed outright.
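The example condition from above would be set like this (assuming default_job_resubmission_condition is the option name; it is commented out by default):

```ini
#default_job_resubmission_condition = attempt < 3 and unknown_error and (time_running < 300 or time_since_queued < 300)
```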

Enable job recovery (if Galaxy is restarted while cluster jobs are
running, it can “recover” them when it starts). This is not safe
to use if you are running more than one Galaxy server using the
same database.

Although it is fairly reliable, setting metadata can occasionally
fail. In these instances, you can choose to retry setting it
internally or leave it in a failed state (since retrying
internally may cause the Galaxy process to be unresponsive). If
this option is set to False, the user will be given the option to
retry externally, or set metadata manually (when possible).

Very large metadata values can cause Galaxy crashes. This will
allow limiting the maximum metadata key size (in bytes used in
memory, not the end-result database value size) that Galaxy will
attempt to save with a dataset. Use 0 to disable this feature.
The default is 5MB, but as low as 1MB seems to be a reasonable
size.
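The 5MB default expressed in bytes (option name assumed to be max_metadata_value_size):

```ini
# 5 * 1024 * 1024 bytes; use 0 to disable the limit.
max_metadata_value_size = 5242880
```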

This option will override tool output paths to write outputs to
the job working directory (instead of to the file_path) and the
job manager will move the outputs to their proper place in the
dataset directory on the Galaxy server after the job completes.
This is necessary (for example) if jobs run on a cluster and
datasets cannot be created by the user running the jobs (e.g. if
the filesystem is mounted read-only or the jobs are run by a
different user than the galaxy user).

If your network filesystem’s caching prevents the Galaxy server
from seeing the job’s stdout and stderr files when it completes,
you can retry reading these files. The job runner will retry the
number of times specified below, waiting 1 second between tries.
For NFS, you may want to try the -noac mount option (Linux) or
-actimeo=0 (Solaris).
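For example, to retry five times at one-second intervals (option name per the sample configuration):

```ini
retry_job_output_collection = 5
```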

In the past, Galaxy would preserve its Python environment when
running jobs (and still does for internal tools packaged with
Galaxy). This behavior exposes Galaxy internals to tools and could
result in problems when activating Python environments for tools
(such as with Conda packaging). The default legacy_only will
restrict this behavior to tools identified by the Galaxy team as
requiring this environment. Set this to “always” to restore the
previous behavior (and potentially break Conda dependency
resolution for many tools). Set this to legacy_and_local to
preserve the environment for legacy tools and locally managed
tools (this might be useful for instance if you are installing
software into Galaxy’s virtualenv for tool development).

Clean up various bits of jobs left on the filesystem after
completion. These bits include the job working directory,
external metadata temporary files, and DRM stdout and stderr files
(if using a DRM). Possible values are: always, onsuccess, never

When running DRMAA jobs as the Galaxy user
(https://docs.galaxyproject.org/en/latest/admin/cluster.html
#submitting-jobs-as-the-real-user) Galaxy can extract the user
name from the email address (actually the local part before the @)
or from the username, both of which are stored in the Galaxy
database. The latter option is particularly useful for
installations that get their authentication from LDAP. Galaxy can
also accept the name of a common system user (e.g. galaxy_worker)
who can run every job being submitted. This user should not be the
same user running the Galaxy system. Possible values are
user_email (default), username, or <common_system_user>.
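For example, to derive the system user from the Galaxy username (useful with LDAP-backed accounts):

```ini
# Or user_email (the default), or a fixed account such as galaxy_worker.
real_system_username = username
```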

File to source to set up the environment when running jobs. By
default, the environment in which the Galaxy server starts is used
when running jobs locally, and the environment set up per the
DRM’s submission method and policy is used when running jobs on a
cluster (try testing with qsub on the command line).
environment_setup_file can be set to the path of a file on the
cluster that should be sourced by the user to set up the
environment prior to running tools. This can be especially useful
for running jobs as the actual user, to remove the need to
configure each user’s environment individually.

Optional file containing job resource data entry field
definitions. These fields will be presented to users in the tool
forms and allow them to overwrite default job resources such as
number of processors, memory and walltime.

If using job concurrency limits (configured in job_config_file),
several extra database queries must be performed to determine the
number of jobs a user has dispatched to a given destination. By
default, these queries will happen for every job that is waiting
to run, but if cache_user_job_count is set to True, it will only
happen once per iteration of the handler queue. Although better
for performance due to reduced queries, the trade-off is a greater
possibility that jobs will be dispatched past the configured
limits if running many handlers.

Galaxy uses AMQP internally for communicating between processes.
For example, when reloading the toolbox or locking job execution,
the process that handled that particular request will tell all
others to also reload, lock jobs, etc. For connection examples,
see
http://docs.celeryproject.org/projects/kombu/en/latest/userguide/connections.html
Without specifying anything here, Galaxy will
first attempt to use your specified database_connection above. If
that’s not specified either, Galaxy will automatically create and
use a separate SQLite database located in your <galaxy>/database
folder (indicated in the commented out line below).
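For example, pointing at a local RabbitMQ broker (the URL is illustrative; see the Kombu documentation linked above for the full connection-string syntax):

```ini
amqp_internal_connection = amqp://guest:guest@localhost:5672//
```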