Does anyone know of easy-to-run code that would swallow the CPU/memory
on the chassis but also a Tesla card? A lot of the tools I've used in
the past that have been ported to GPUs don't seem to use much of the
memory, or don't keep the whole GPU busy constantly. I'm running NAMD
at the moment, which does seem to make pretty good use of the GPU
processor, but it doesn't seem to use much, if any, of the memory.
CUDA Linpack coughs up an error at runtime; hopefully I'll get that
going, but I'm curious whether there's anything else I don't know
about.
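
For the CPU/memory side, even something dumb like a spin-plus-ballast script does the job; here's a minimal sketch of what I mean (the worker count and the 256 MiB ballast per worker are just placeholders to tune for your box):

```python
import multiprocessing as mp
import os
import time

def burn(seconds, mib):
    # hold roughly `mib` MiB of memory, then peg one core until the deadline
    ballast = bytearray(mib * 1024 * 1024)
    deadline = time.time() + seconds
    x = 0
    while time.time() < deadline:
        x += 1  # busy loop keeps the core at 100%
    return len(ballast)

if __name__ == "__main__":
    # one worker per logical core, each holding 256 MiB for 60 seconds
    n = os.cpu_count()
    with mp.Pool(n) as pool:
        pool.starmap(burn, [(60, 256)] * n)
```

It's the GPU equivalent — something that pins the Tesla's compute *and* fills its memory — that I'm missing.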