The Galaxy Main instance is available as a free public service at UseGalaxy.org. This is the Galaxy Project's production Galaxy instance, where Galaxy's data and tools are fully integrated and ready to use. Main is useful for sharing and publishing data and methods with colleagues for routine analysis, or with the larger scientific community for publications and supplemental material. Test is also free and public, but is considered a beta site.

Anyone can use the public servers, with or without an account, but Galaxy user accounts are simple to create (email, password, user name and go!). With an account, data quotas are increased and full functionality across sessions opens up, such as naming, saving, sharing, and publishing Galaxy objects (Histories, Workflows, Datasets, Pages). Remember, Galaxy's Terms and Conditions specifically declare a "one-account per user" requirement.

Job resubmission to Stampede

Certain tools will be automatically "resubmitted" to Stampede (see Job execution on Stampede for more about Stampede) if they initially run on Galaxy's local cluster but exceed the walltime (run-time limit). The walltime differs per tool and is calculated from previous average runtimes of that tool:

Tools

  Tool                           Walltime
  BWA                            3 hours, 41 minutes
  BWA-MEM                        4 hours, 55 minutes
  Bowtie                         2 hours, 35 minutes
  Tophat                         6 hours, 11 minutes
  Cufflinks                      4 hours, 5 minutes
  Cuffdiff                       8 hours, 11 minutes
  Cuffmerge                      1 hour, 6 minutes

Legacy Tools

  Tool                           Walltime
  Map with BWA for Illumina      4 hours, 54 minutes
  Map with Bowtie for Illumina   2 hours, 18 minutes
  Tophat (version 1)             6 hours, 26 minutes
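The resubmission policy can be pictured as a simple lookup against the table above. The following is a hypothetical sketch, not Galaxy's actual job-handler code; the function name `destination_after` is illustrative, and only the tool names and walltimes come from the table.

```python
from datetime import timedelta

# Per-tool walltimes on the local Galaxy cluster (from the table above).
WALLTIMES = {
    "BWA": timedelta(hours=3, minutes=41),
    "BWA-MEM": timedelta(hours=4, minutes=55),
    "Bowtie": timedelta(hours=2, minutes=35),
    "Tophat": timedelta(hours=6, minutes=11),
    "Cufflinks": timedelta(hours=4, minutes=5),
    "Cuffdiff": timedelta(hours=8, minutes=11),
    "Cuffmerge": timedelta(hours=1, minutes=6),
}

def destination_after(tool: str, elapsed: timedelta) -> str:
    """Return where a job should be running after `elapsed` time.

    Jobs start on the local cluster; if a tool in the table exceeds
    its walltime, the job is resubmitted (re-queued) on Stampede.
    """
    limit = WALLTIMES.get(tool)
    if limit is not None and elapsed > limit:
        return "stampede"  # resubmitted: state returns to queued (gray)
    return "local"

print(destination_after("Bowtie", timedelta(hours=3)))  # exceeds 2h 35m
```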

When a job is resubmitted, you will see its state turn from running (yellow) back to gray (queued), and a blue message box will appear when the dataset is expanded explaining that the job has been resubmitted.

Our goal with the Stampede resubmission system is to strike a balance for Galaxy users: to allow those with relatively small jobs to run them quickly without a wait, while still supporting larger-scale analyses with a reasonable wait but higher job concurrency limits. See the User data and job quotas section below for more on concurrency limits.

If you know (due to previous runs of the tool using similar inputs and parameters) that your job will reach the walltime on the local cluster, you should directly submit it to Stampede to avoid the time wasted running to walltime on the Galaxy cluster.

Direct job execution on Stampede

Tools in the previous section can also be manually submitted directly to Stampede. This is a good idea if you know (or strongly suspect) that a tool will exceed the walltime on the local cluster. On the form for these tools, a Job Resource Parameters parameter is available that, if selected, will display a Compute Resources selection parameter, where Stampede can be chosen as the execution destination.

Some tools or job destinations have stricter job concurrency limits than the overall limits above. These tools include all of the tools that can be run on Stampede (listed above), and some additional tools. These limits are:

Per-resource job concurrency quotas

  Resource                            Registered   Unregistered
  Increased memory tools              1            1
  Galaxy cluster                      2            not allowed
  TACC Stampede                       4            not allowed
  Galaxy cluster test/development     1            not allowed
  TACC Stampede test/development      1            not allowed

"Increased memory tools" refers to a set of tools that are granted additional memory over the 8 GB default.
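The per-resource quotas above amount to a simple admission check before a job may start. This is a hypothetical sketch under the limits in the table; the names `CONCURRENCY_LIMITS` and `may_start` are illustrative, not Galaxy's internal scheduler API.

```python
# resource: (registered limit, unregistered limit; None = not allowed)
CONCURRENCY_LIMITS = {
    "increased_memory": (1, 1),
    "galaxy_cluster": (2, None),
    "stampede": (4, None),
    "galaxy_cluster_dev": (1, None),
    "stampede_dev": (1, None),
}

def may_start(resource: str, registered: bool, running: int) -> bool:
    """True if one more job on `resource` stays within the quota."""
    reg_limit, unreg_limit = CONCURRENCY_LIMITS[resource]
    limit = reg_limit if registered else unreg_limit
    return limit is not None and running < limit

print(may_start("stampede", True, 3))   # a 4th Stampede job is allowed
print(may_start("stampede", False, 0))  # unregistered: not allowed
```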

If your job failed for any reason, particularly a resource-related one (the job exceeded its memory or run-time quota), see this wiki and related sections for help: Support#Error_from_tools

More about job execution

Your actual number of concurrent jobs may be lower at any particular time, and certain job types may run more quickly than others, because the different job queues are shared among all users, some job types run on busier queues, and resources are distributed evenly. Unsure about job status? Read more here...

Terms and Conditions: Attempts to subvert these limits by creating multiple accounts or through any other method may result in termination of all associated accounts.

Monitoring data use

Exceeding quotas will prevent new jobs from running, but Galaxy users can monitor and manage datasets in several ways:

The percent of the quota limit used by a user account is shown in the top right corner of the Galaxy interface as a usage bar.

The size of an individual dataset can be found within the dataset's expanded box, written directly under the dataset's name, and by viewing the dataset's Details (click on the View Details icon).
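The usage bar reflects a straightforward calculation: total dataset size as a fraction of the account's quota. The sketch below is illustrative only; the function name and the sizes are made up, and the real quota values depend on the account type.

```python
def quota_percent(dataset_sizes_bytes, quota_bytes):
    """Percent of the account quota consumed, capped at 100."""
    used = sum(dataset_sizes_bytes)
    return min(100.0, 100.0 * used / quota_bytes)

# e.g. three datasets (40 GB, 12 GB, 3 GB) against a hypothetical 250 GB quota
sizes = [40 * 2**30, 12 * 2**30, 3 * 2**30]
print(round(quota_percent(sizes, 250 * 2**30), 1))  # 22.0
```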

User Account Quotas

How will I know if my quota has been exceeded?

Data

A red message indicating that the user data quota has been exceeded will be displayed at the top of the left history pane. Any new jobs queued will remain in the "paused" state (colored light blue) until the data size is back within the quota limit.

Jobs

Any jobs queued after the limit of 8 has been reached will remain in the "paused" state (colored light blue) until running jobs complete and the count falls back under the quota.

When can I run jobs on the Main instance again?

Data

Reduce the amount of data in your account. Start by removing any Histories that are no longer needed, using the Options → Saved Histories form with the option Delete Permanently. More information about how to manage data is covered in this wiki, Managing Datasets, and in this video, Managing Histories.

Jobs

To gain access to the server again, no user action is needed. When enough of your existing jobs complete that fewer than 6 remain, new jobs will be added to the queue to execute (maximum of 6 concurrent).
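The automatic resume behavior can be sketched as a simple concurrency gate: paused jobs are promoted into the running set whenever a slot opens. This is an illustrative model, not Galaxy's scheduler; the function name `schedule` and the job labels are hypothetical.

```python
from collections import deque

def schedule(paused, running, limit):
    """Promote paused jobs into the running set while under `limit`.

    No user action is needed: as running jobs complete, the scheduler
    moves paused jobs into execution automatically.
    """
    paused = deque(paused)
    running = list(running)
    while paused and len(running) < limit:
        running.append(paused.popleft())
    return running, list(paused)

# Five jobs running, two paused, limit of 6: one paused job resumes.
running, still_paused = schedule(["j7", "j8"], ["j1", "j2", "j3", "j4", "j5"], 6)
print(running)       # ['j1', 'j2', 'j3', 'j4', 'j5', 'j7']
print(still_paused)  # ['j8']
```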

My job ended with a failure due to memory or execution time

Description and Solutions

Please see the Support wiki for help in determining if this is the case and possible solutions.

Developers and Administrators

New Admin features have been added and more are planned for the near term. Details are explained in: Disk Quotas. Feedback about the implementation of quota management is welcome on the mailing list galaxy-dev@bx.psu.edu.