GB Unit big-mem Interactive Servers

These are high-memory machines bought by the Unit, where you can log in to perform interactive computing. They are especially well suited when your resource needs change as you work, or when you need a lot of RAM; the second situation is less of a problem now that the IT-managed LSF cluster has 8 similar machines (bigmem queue). A good example is when you sporadically need e.g. 20 cores and hundreds of GB of RAM to quickly perform matrix operations in R. In such a situation, you would have to book the maximum resources you might need during your work session to be able to work on the cluster, resulting in massive waste of resources.

Using the interactive servers allows you to run jobs without specifying the memory usage. However, VERY IMPORTANTLY, please check the capacity left on the machine before running your jobs, and always use a fair share of the resources. Long-lasting jobs with predictable resource usage should always go to the LSF cluster.
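Checking the remaining capacity can be done with standard Linux tools; a minimal sketch (exact output formats vary by distribution):

```shell
# Quick capacity check before launching anything heavy on a shared machine.
free -g                    # memory in GB; look at the free/available columns
nproc                      # number of CPU cores on this machine
uptime                     # load average: values close to the core count mean the machine is busy
top -b -n 1 | head -n 15   # snapshot of the heaviest processes right now
```

As a rule of thumb, compare the load average from uptime to the core count from nproc, and leave headroom for other users.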

EMBL LSF Cluster

EMBL Cluster Nodes

60 nodes comprising more than 700 CPU cores, including 8 nodes with 1 TB of RAM and 40 cores, and 52 nodes with 16 GB of RAM and 8 cores

Runs LSF 7.0.6 and CentOS 6.2

The default memory limit is 2 GB, but you can request more memory and CPUs when submitting jobs
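A submission requesting more than the defaults might look like the sketch below; the job script name and the memory values are illustrative assumptions (LSF's -M and rusage[mem=] values are commonly in MB, but the unit is site-configurable, so check locally):

```shell
# Hypothetical request: 20 cores and ~200 GB of RAM on the bigmem queue.
#   -n          number of CPU cores
#   -M          per-job memory limit (often in MB, site-dependent)
#   -R rusage   memory to reserve on the execution host
SUBMIT='bsub -q bigmem -n 20 -M 204800 -R "rusage[mem=204800]" ./my_analysis.sh'
if command -v bsub >/dev/null 2>&1; then
    eval "$SUBMIT"                 # on a submission host, actually submit
else
    echo "would run: $SUBMIT"      # elsewhere, just show the command
fi
```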

When a job has problems, you can use "bjobs -l" to check detailed information about it. For example, it shows on which computer (node) the job is executed. You can then ssh to that node (such as compute036) through the submaster machine (not from schroedinger/spinoza) to check the job's running status in detail.
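The inspection steps above can be sketched as follows; the job ID and node name are placeholders:

```shell
JOBID=12345                        # hypothetical job ID from bsub/bjobs
if command -v bjobs >/dev/null 2>&1; then
    bjobs -l "$JOBID"              # full details, including the execution host
    # then, from the submaster:  ssh compute036   and e.g.  top -u "$USER"
else
    echo "bjobs not found: run this from the cluster submaster"
fi
```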