6. GPU selection

To use a GPU, submit your job to the GPU queue and request GPU shares. Each node equipped with GPUs provides as many GPU shares as it has CPU cores, independently of how many GPUs are built in. So, on the new nodes, which have 24 cores, the following gives you exclusive access to the node's GPUs:

#BSUB -R "rusage[ngpus_shared=24]"

Note that you do not necessarily have to request 24 cores with -n 24 as well, since jobs from the MPI queue may use the CPU cores you leave free. The new nodes have two GPUs each, and you should use both if possible.
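
A complete job script for exclusive GPU access could look like the following sketch. The queue name gpu and the program call are assumptions; replace them with the actual GPU queue and your own binary:

#!/bin/sh
#BSUB -q gpu
#BSUB -R "rusage[ngpus_shared=24]"

./my_gpu_program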

If you request fewer shares than the node has cores, other jobs may also use the GPUs. However, we currently have no mechanism to assign a specific GPU to a job; this has to be handled in the application or in your job script.

A good way to use the new nodes with jobs that work on a single GPU is to combine two of them in one job script and preselect a GPU for each, as sketched below.
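
One way to preselect a GPU per run is the CUDA_VISIBLE_DEVICES environment variable. The following sketch uses placeholder program names and starts one run on each of the node's two GPUs, waiting for both to finish:

#BSUB -q gpu
#BSUB -R "rusage[ngpus_shared=24]"
#BSUB -R "ngpus=2"

CUDA_VISIBLE_DEVICES=0 ./my_gpu_program input1 &
CUDA_VISIBLE_DEVICES=1 ./my_gpu_program input2 &
wait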

Currently we have two generations of NVidia GPUs in the cluster, selectable in the same way as CPU generations:

nvgen=1 : Kepler
nvgen=2 : Maxwell
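
Assuming the nvgen resource can be requested in the same way as the ngpus resource shown below, a Maxwell node would be selected with:

#BSUB -R "nvgen=2"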

Most GPUs are commodity graphics cards and only provide good performance for single-precision calculations. If you need double-precision performance or error-correcting memory (ECC RAM), you can select the Tesla GPUs with

#BSUB -R tesla

Our Tesla K40 cards are of the Kepler generation (nvgen=1).

If you want to make sure your job runs on a node equipped with two GPUs, use:

#BSUB -R "ngpus=2"

7. New frontend

The new frontend gwdu103 has two 12-core Intel Broadwell CPUs and 64 GB of memory. If you compile a program on gwdu103, it will often be automatically optimized for Broadwell CPUs (intel=4). In that case it will probably also run on intel=3 / Haswell nodes, but not on older generations or on AMD nodes.
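
If your binary relies on Broadwell features, you may want to restrict the job to matching nodes. Assuming the intel node class can be requested like the GPU resources above, this would look like:

#BSUB -R "intel=4"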

The same policies apply as for all other frontends: you may use it to compile software, to copy data to and from the new scratch filesystem, and for short tests of your compiled binaries. You must not use it for long-running tasks, especially CPU- or memory-intensive ones.