Yes, you need to load the module first before you can use R.
List all available modules:

module avail

Then load the R module version you want to use:

module load R/3.1.1

How to set your own R library location
Create a writable R directory in your /exports directory.
Set the environment variable R_LIBS=/to/your/created/R/path
Now you can use this path for your own libraries.
To permanently set this variable, edit your ~/.bashrc file and add:
export R_LIBS=/to/your/created/R/path
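
Put together, the steps above look like this (the path is an example; substitute your own /exports location):

```shell
# Create a personal R library directory (example path; use your own /exports share)
mkdir -p /exports/example-user/R/library

# Point R at it for the current session
export R_LIBS=/exports/example-user/R/library

# Make the setting permanent by appending it to ~/.bashrc
echo 'export R_LIBS=/exports/example-user/R/library' >> ~/.bashrc
```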

Can I restrict Java to a fixed number of CPUs?

Yes, you can limit the number of threads the parallel garbage collector uses: java -XX:ParallelGCThreads=1 (set 1 to the number of CPUs you want).

To restrict the memory Java uses: java -Xmx2048m -Xms2048m
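
Both flags can be combined in one invocation. A sketch of a job script (MyTool.jar, the thread count, and the resource values are placeholders):

```shell
#!/bin/bash
# Hypothetical SGE job script: cap the parallel GC at 2 threads
# and give the JVM a fixed 2 GiB heap. MyTool.jar is a placeholder.
#$ -cwd
#$ -l h_vmem=3G
java -XX:ParallelGCThreads=2 -Xmx2048m -Xms2048m -jar MyTool.jar
```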

When stuff doesn't work.

Why won't my job start immediately?

The cluster uses a ticketing system for the prioritisation of jobs. Tickets are
distributed over the departments in proportion to their investment. Any overcapacity is
available for incidental processing or for people who have used up their
tickets. If you find that your jobs are not scheduled in a timely manner,
your department has probably used its share of tickets. If this happens on a
structural basis, further investment from your department is the solution.

Whenever this happens, it is most likely that there are no slots available in your queue(s). Run

qstat -u "*"

or

unique-user-slots.sh

to view the usage of the Shark cluster. You can still submit your job with qsub; the fair-share policy will ensure that your jobs get scheduled according to your rights.

Why is my job killed when my script uses more than 4G of memory?

Shark has a default h_vmem=4G value set. If your job exceeds the 4G memory threshold, Open Grid Scheduler will automatically kill that job. If your job needs more than 4G of memory, specify a higher h_vmem limit like this:

qsub -l h_vmem=xxG

The G means Gigabyte.
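
For example, to request 8 gigabytes (the value and script name are illustrative):

```shell
# On the command line (8G is an example value; myjob.sh is a placeholder):
qsub -l h_vmem=8G myjob.sh

# Or as a directive inside the job script itself:
#$ -l h_vmem=8G
```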

What does this exit code mean?

Read the manual page for accounting and go to the exit_status section:

man accounting

exit_status
       Exit status of the job script (or xxQS_NAMExx specific status in case of certain error conditions). The exit status is determined by following the normal shell conventions. If the command terminates normally, the value of the command is its exit status. However, in the case that the command exits abnormally, a value of 0200 (octal), 128 (decimal), is added to the value of the command to make up the exit status. For example: if a job dies through signal 9 (SIGKILL), then the exit status becomes 128 + 9 = 137.
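
You can reproduce the 128 + signal convention in a plain shell, independent of the scheduler:

```shell
# Start a shell that immediately kills itself with SIGKILL (signal 9)
bash -c 'kill -9 $$'

# The parent shell reports 128 + 9 = 137
echo $?
```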

Why does my script not run, or return the error /bin/bash^M: bad interpreter: No such file or directory?

Your script has been edited with an editor under Windows; Windows line breaks end in ^M characters.
You can view your script file and check whether it contains these characters with the command:

cat -v <script_name>

If you submit a script containing ^M characters with qsub, your job will end up in the error state Eqw.
If this happens, run qstat -j <job_ID> and look at the line "error reason":

execvp(/var/spool/sge/silvertipshark/job_scripts/<job_id>, "/var/spool/sge/silvertipshark/job_scripts/<job_id>") failed: No such file or directory
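
A quick way to remove the ^M characters is sed (the filename is an example; dos2unix does the same if it is installed):

```shell
# Make the carriage returns visible (they show up as ^M)
cat -v myscript.sh

# Remove the trailing carriage return from every line, in place
sed -i 's/\r$//' myscript.sh

# Check again: the ^M characters should be gone
cat -v myscript.sh
```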

Is there a backup for the /exports directory?

No. The Isilon storage has no backup, so there is no way to retrieve deleted files. It is your own responsibility to back up your files on Shark.

I can not log into Shark with my RESEARCHLUMC\username

The email sent to you shows your username in the form RESEARCHLUMC\username.
This form only works when you log in to a Windows workstation attached to the RESEARCHLUMC domain,
or when you change your password on the change password site https://passwd.researchlumc.nl/
For all Shark-related usernames DO NOT use the RESEARCHLUMC\username form; leave the RESEARCHLUMC\ out.
Hence your username for Shark and the SSH jumpserver is: username

Can I run on specific nodes?

Yes, you can give a list of hosts you would like to run on with:

-l hostname="host1|host2|host3|host4"