Getting started on the Maxwell Cluster

General resources

The Maxwell cluster is composed of a Maxwell partition and resources which are contributed by various groups on the DESY campus.

The Maxwell partition is available for DESY members as well as external users of the Photon Science Facilities - under one condition: your application must be suitable for high-performance computing. That could include

multi-core MPI applications which can efficiently use most of the cores on a single node or across multiple nodes

Embarrassingly parallel applications like Monte Carlo simulations usually make poor use of these resources; we would advise using the BIRD cluster instead.

If you think that the Maxwell cluster might help with your computational problems and your application is suited: just drop a message to maxwell.service@desy.de asking for the resource, but please explain very briefly what kind of applications you intend to run.

Your group is not among the ones listed? You can still contribute your resources to the Maxwell cluster for the benefit of everyone on campus. Check the "Bringing resources to Maxwell" pages for options.

Very first step

You first want to verify which resources you can already use, or whom to contact in case you're missing a resource. There are various ways to do that, depending on the account type and the available software (e.g. FastX).

What always works: open a terminal (e.g. putty), ssh to bastion.desy.de (desy-ps-ext.desy.de for external photon science users) and run a small scriptlet called my-resources:
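For example (substitute your own account name; the output of my-resources depends on your account and is not shown here):

```shell
# Log in to the bastion host; external photon science users should use
# desy-ps-ext.desy.de instead of bastion.desy.de:
ssh your-username@bastion.desy.de

# Once logged in, list the Maxwell resources available to your account
# and the contacts for resources you are missing:
my-resources
```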

Home Directories

The HOME directory on Maxwell is /home/$USER. /home is mounted on a cluster file system (GPFS). More importantly:

/home has a single daily snapshot. It is not backed up and will not be archived. The snapshots are located in /home/.snapshot!

/home has a hard quota of 20GB!

Make sure to transfer important data to suitable resources (e.g. group-specific storage).

Don't use it for any data crucial to your group! Once your account expires the data will be removed and will not be recoverable!
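A minimal sketch of working with these limits (the snapshot name and the group-storage path below are placeholders, not real paths; check what actually exists on your node):

```shell
# List the daily snapshot(s) of /home:
ls /home/.snapshot/

# Restore a file from a snapshot (snapshot name is a placeholder):
cp /home/.snapshot/<snapshot-name>/$USER/analysis.log /home/$USER/

# Check how close you are to the 20GB hard quota:
du -sh /home/$USER

# Move important results off /home to group-specific storage
# (destination path is a placeholder for your group's storage):
rsync -av /home/$USER/results/ /path/to/group/storage/results/
```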

Storage and Scratch

Everyone with access to the Maxwell cluster also has access to BeeGFS storage space. To create your BeeGFS directory under /beegfs/desy/user/$USER just invoke the command mk-beegfs on one of the Maxwell nodes. For more information on BeeGFS and the other storage elements available, please have a look at Storage on Maxwell.
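In practice this is a one-off step (run on a Maxwell node, not on the bastion host):

```shell
# Create your personal BeeGFS directory once:
mk-beegfs

# Afterwards it should be available under:
ls -ld /beegfs/desy/user/$USER
```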

Kerberos & AFS

Slurm jobs will NOT support AFS tokens or Kerberos tickets! This means jobs will not suffer from expiring tokens or tickets. It also means that you can't rely on their existence. If your jobs need access to AFS directories it might be favorable to set ACLs enabling token-free access - if possible.
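If you administer the AFS directory in question, token-free read access can be granted with standard AFS ACLs, for example (the directory path is a placeholder; note that system:anyuser makes the directory world-readable, so use it only for non-sensitive data):

```shell
# Grant read + lookup rights to unauthenticated users, so Slurm jobs
# without an AFS token can still read from the directory:
fs setacl /afs/desy.de/group/yourgroup/data system:anyuser rl

# Verify the resulting access control list:
fs listacl /afs/desy.de/group/yourgroup/data
```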

Stay informed

Announcements about updates, maintenance and so on will be communicated via the maxwell-user@desy.de mailing list. Maxwell users are automatically subscribed to the mailing list.

We strongly recommend self-subscribing to be informed about changes and downtimes even if you are using only group-specific resources. Self-subscriptions are moderated and might take a moment.

Acknowledging the Maxwell Cluster

If the Maxwell cluster was an important asset in your work resulting in a publication, we'd greatly appreciate your acknowledgment. There is currently no publication to refer to, so feel free to formulate an acknowledgment in your favorite terms. An example could look like this: This research was supported in part through the European XFEL and DESY funded Maxwell computational resources operated at Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany. We would definitely like to hear about the publication! Please send references to publications to us, either to Frank Schluenzen or simply to maxwell.service@desy.de. Have a look at the list of contributed publications.