Please note: access to the petra4 partition implies access to the xfel-guest partition. See Maxwell for XFEL for details!

Interactive Login Nodes

There are no login nodes associated with the PETRA4 resources in Maxwell. Having access to the petra4 partition does NOT allow you to use max-display; you need one of the regular Maxwell resources for that.

ssh max-display.desy.de: will connect you to one of the display nodes, but only if you have a regular Maxwell resource. FastX might be your better choice. Please have a look at the Remote Login and the FastX documentation.

ssh max-wgs: will connect you to the generic login node.

Please note: max-display.desy.de is directly accessible from outside. All other login nodes can only be reached by first connecting to bastion.desy.de.
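
If you need one of the internal login nodes from outside, OpenSSH's ProxyJump option saves a step. A minimal sketch (replace <user> with your DESY account; assumes OpenSSH 7.3 or newer):

ssh -J <user>@bastion.desy.de <user>@max-wgs.desy.de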

Login nodes are always shared resources, sometimes used by a large number of concurrent users. Don't run compute- or memory-intensive jobs on the login nodes; use a batch job instead!

The PETRA4 Batch resource in Maxwell

As a first step, log in to one of the login nodes and check which Maxwell resources are available for your account using the my-partitions command:


[@max-wgs ~]$ my-partitions
Partition     Access   Allowed groups
----------------------------------------------------------------------------------------------------------------
all           yes      all                           <------- will be available if any of the resources below is "yes"!
cfel          no       cfel-wgs-users
cms           no       max-cms-uhh-users,max-cms-desy-users
cms-desy      no       max-cms-desy-users
cms-uhh       no       max-cms-uhh-users
cssb          no       max-cssb-users
epyc-eval     no       all
exfel         no       exfel-wgs-users
exfel-spb     no       exfel-theory-users,school-users
exfel-th      no       exfel-theory-users
exfel-theory  no       exfel-theory-users
exfel-wp72    no       exfel-theory-users
fspetra       no       max-fspetra-users
grid          no       max-grid-users
jhub          no       all
maxwell       yes      maxwell-users,school-users    <------- might be granted if you have suitable applications
p06           no       max-p06-users
petra4        yes      p4_sim                        <------- look for this one as a PETRA4 member
ps            no       max-ps2-users
psx           no       max-psx2-users
uke           no       max-uke-users
upex          no       upex-users,school-users
xfel-guest    yes      max-xfel-guest-users,p4_sim   <------- an additional partition; preemption rules apply!
xfel-op       no       max-xfel-op-users
xfel-sim      no       max-xfel-sim-users

If it says "yes" for the partition "petra4", you are ready to go; in that case you will also see a "yes" at least for the partition "all". If not: get in touch with the Maxwell admins! Let's assume that you've got the petra4 resource.

If you have an application that is started by a script called my-application and doesn't require a GUI, you can simply submit the script as a batch job:
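
For instance, a minimal sketch (the time limit is illustrative and the job ID in the reply will differ):

[@max-wgs ~]$ sbatch --partition=petra4 --time=12:00:00 my-application
Submitted batch job 1234567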

This works for any application smart enough not to strictly require an X environment; MATLAB, COMSOL, ANSYS, Mathematica, IDL and many others can be executed as batch jobs. To make it more convenient, you can add the SLURM directives directly into the script:


[@max-wgs ~]$ cat my-application
#!/bin/bash
#SBATCH --partition=petra4
#SBATCH --time=1-12:00:00 # request 1 day and 12 hours
#SBATCH --mail-type=END,FAIL # send mail when the job has finished or failed
#SBATCH --nodes=1 # number of nodes
#SBATCH --output=%x-%N-%j.out # by default slurm writes output to slurm-<jobid>.out; %x-%N-%j gives job name, node and job id
[...] # the actual script.
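
With the directives embedded in the script, no extra flags are needed on submission. Checking the queue afterwards might look like this (job ID is illustrative):

[@max-wgs ~]$ sbatch my-application
Submitted batch job 1234567
[@max-wgs ~]$ squeue -u $USER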

The email notification will be sent to <user-id>@mail.desy.de. That should always work, so you don't actually need to specify an email address. If you do (with --mail-user), please make sure it's a valid address. For further examples and instructions please read Running Jobs on Maxwell.

If you think that it's much too complicated to write job scripts, or if you can't afford to invest the time to look into it: we are happy to assist. Please drop a message to maxwell.service@desy.de, we'll try our best.

Running interactive batch jobs

If you absolutely need an interactive environment or X-windows features like a GUI, there are options to do that in the batch environment. For example:
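
A minimal sketch using salloc (partition, time limit and the node name are illustrative; the usual pattern is to ssh to the allocated node once the allocation is granted):

[@max-wgs ~]$ salloc --partition=petra4 --time=08:00:00 --nodes=1
salloc: Granted job allocation 1234567
[@max-wgs ~]$ ssh max-p4a001        # hypothetical node name taken from the allocation
[@max-p4a001 ~]$ ./my-application   # run interactively
[@max-p4a001 ~]$ exit
[@max-wgs ~]$ exit                  # releases the allocation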

Interactive jobs with salloc easily get forgotten, leaving precious resources idle. We do accounting and monitoring!

Keep the time short: there is hardly a good reason to run an interactive job longer than working hours. Use a batch job instead.

Terminate allocations as soon as the job is done!
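
To release an allocation, exit the salloc shell, or cancel the job explicitly (job ID is illustrative):

[@max-wgs ~]$ scancel 1234567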

Other Maxwell Resources

Being a member of PETRA4, and maybe having access to the petra4 partition, doesn't need to be the end of the story. If you have parallelized applications suitable for the Maxwell cluster, you can apply for the Maxwell resource like everyone else on campus: please send a message to maxwell.service@desy.de briefly explaining your use case. You might also have access to the xfel-guest partition. You can easily distribute your jobs over the partitions:


[@max-wgs ~]$ cat my-application
#!/bin/bash
#SBATCH --partition=petra4,maxwell,xfel-guest,all
#SBATCH --time=1-12:00:00 # request 1 day and 12 hours
#SBATCH --mail-type=END,FAIL # send mail when the job has finished or failed
#SBATCH --nodes=1 # number of nodes
#SBATCH --output=%x-%N-%j.out # by default slurm writes output to slurm-<jobid>.out; %x-%N-%j gives job name, node and job id
[...] # the actual script.

The partition will be selected from petra4 OR maxwell OR xfel-guest OR all, starting with the highest-priority partition. So your job will run on the petra4 partition if nodes are available there, otherwise on the maxwell partition, then on xfel-guest, and finally on the all partition if none of the other specified partitions has free nodes. Keep in mind, however, that you should select the partitions according to the type of work you are doing. A job can never combine nodes from different partitions, so check the limits applying to each partition.
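
To compare node counts and time limits before choosing partitions, a quick sinfo query might help (the format string selects partition name, time limit and node count):

[@max-wgs ~]$ sinfo -p petra4,maxwell,xfel-guest,all -o "%P %l %D"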