High Performance Computing
To use our compute cluster, you need a full GWDG account, which most employees of the University of Göttingen and the Max Planck Institutes already have. By default, this account is not activated for the use of the compute resources. Please refer to Account Activation to learn how to get your account activated for the compute cluster.
Latest News
- [hpc-announce] Scratch File System Issues (2021/03/04 16:21)
- [hpc-announce] Scratch File System Issues (2021/03/04 14:38)
- [hpc-announce] Spack on SCC (2021/02/22 16:14)
- [hpc-announce] Problems with NFS filesystems since yesterday evening (2021/02/17 10:40)
- [hpc-announce] Downtime NFS mounted homes uni0[1-4] and mpg0[1-3] (2021/01/25 09:28)
An archive of all news items can be found at the HPC-announce mailing list.
Access
Once you gain access, you can log in to the frontend nodes gwdu101.gwdg.de, gwdu102.gwdg.de, and gwdu103.gwdg.de. These nodes are accessible via ssh from the GÖNET. If you come from the internet, first log in to login.gwdg.de; from there you can reach the frontends.
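For example, assuming an OpenSSH client and using a placeholder user name:

```bash
# from within the GÖNET: connect directly to a frontend
ssh gwdguser@gwdu101.gwdg.de

# from the internet: hop over login.gwdg.de in one step (OpenSSH ProxyJump)
ssh -J gwdguser@login.gwdg.de gwdguser@gwdu101.gwdg.de
```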
The frontends are meant for editing, compiling, and interacting with the batch system. Please do not use them for testing for more than a few minutes: all users share the resources on the frontends, and their daily work is impaired if the frontends are overused. gwdu101 and gwdu102 are Intel Cascade Lake-based systems (2x 48 cores, 192 GB RAM), while gwdu103 is Intel Sandy Bridge-based (2x 12 cores, 64 GB RAM). If your software takes advantage of special CPU-dependent features, it is recommended to compile on the same CPU architecture your jobs will run on.
The frontends and transfer nodes also have descriptive names of the form $func-$site.hpc.gwdg.de based on their primary function and site, where $func is either login or transfer, while $site is either mdc (mobile data center, access to scratch) or fas (GWDG at Faßberg, access to scratch2). For example, to reach any login node at the MDC site, you would connect to login-mdc.hpc.gwdg.de.
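Following the same scheme, a transfer node at the MDC site is reached as transfer-mdc.hpc.gwdg.de, for example to copy data to the cluster (user name and file name are placeholders):

```bash
# copy a local archive to your home directory through a transfer node at the MDC site
scp results.tar.gz gwdguser@transfer-mdc.hpc.gwdg.de:~/
```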
Hardware Overview
The following documentation is valid for this list of hardware:
Nodes | # | CPU | GPU | Cores | Frequency | Memory | IB | Partition | Launched |
---|---|---|---|---|---|---|---|---|---|
gwdd[169-176] | 8 | Ivy-Bridge Intel E5-2670 v2 | none | 2✕10 | 2.5 GHz | 64 GB | none | medium | 2013-11 |
gwde001 | 1 | Haswell Intel E7-4809 v3 | none | 4✕8 | 2.0 GHz | 2 TB | none | fat+ | 2016-01 |
sa[001-032]* | 32 | Haswell Intel E5-2680 v3 | none | 2✕12 | 2.5 GHz | 256 GB | QDR | sa | 2015-03 |
em[001-032]* hh[001-040]* | 72 | Haswell Intel E5-2640 v3 | none | 2✕8 | 2.6 GHz | 128 GB | QDR | em/hh | 2015-03 |
dfa[001-015] | 15 | Broadwell Intel E5-2650 v4 | none | 2✕12 | 2.2 GHz | 512 GB | FDR | fat/fat+ | 2016-08 |
dmp[011-076] | 76 | Broadwell Intel E5-2650 v4 | none | 2✕12 | 2.2 GHz | 128 GB | FDR | medium | 2016-08 |
dsu[001-005] | 5 | Haswell Intel E5-4620 v3 | none | 4✕10 | 2.0 GHz | 1.5 TB | FDR | fat+ | 2016-08 |
gwdo[161-180]* | 20 | Ivy-Bridge Intel E3-1270 v2 | NVidia GTX 770 | 1✕4 | 3.5 GHz | 16 GB | none | gpu-hub | 2014-01 |
dge[001-007] | 7 | Broadwell Intel E5-2650 v4 | NVidia GTX 1080 | 2✕12 | 2.2 GHz | 128 GB | FDR | gpu | 2016-08 |
dge[008-015] | 8 | Broadwell Intel E5-2650 v4 | NVidia GTX 980 | 2✕12 | 2.2 GHz | 128 GB | FDR | gpu | 2016-08 |
dge[016-045]* | 30 | Broadwell Intel E5-2630 v4 | NVidia GTX 1070 | 2✕10 | 2.2 GHz | 64 GB | none | gpu-hub | 2017-06 |
dte[001-010] | 10 | Broadwell Intel E5-2650 v4 | NVidia K40 | 2✕12 | 2.2 GHz | 128 GB | FDR | gpu | 2016-08 |
amp[001-092] | 92 | Cascade Lake Intel Platinum 9242 | none | 2✕48 | 2.3 GHz | 384 GB | OPA | medium | 2020-11 |
agq[001-012] | 12 | Cascade Lake Intel Gold 6242 | NVidia Quadro RTX5000 | 2✕16 | 2.8 GHz | 192 GB | OPA | gpu | 2020-11 |
agt[001-002] | 2 | Cascade Lake Intel Gold 6252 | NVidia Tesla V100 / 32G | 2✕24 | 2.1 GHz | 384 GB | OPA | gpu | 2020-11 |
Explanation: Systems marked with an asterisk (*) are only available for research groups participating in the corresponding hosting agreement. GB = Gigabyte, TB = Terabyte, Gb/s = Gigabit per second, GHz = Gigahertz, GT/s = Giga transfer per second, IB = Infiniband, QDR = Quad Data Rate, FDR = Fourteen Data Rate, OPA = Omni-Path Architecture.
For a complete overview of the hardware located in Göttingen, see https://www.gwdg.de/web/guest/hpc-on-campus/scc
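The partitions in the table above are how these node classes are addressed through the batch system. As a minimal sketch, assuming the batch system is Slurm (not stated above) and using placeholder resource values, a job for the medium partition could look like this:

```bash
#!/bin/bash
#SBATCH --partition=medium     # partition name taken from the table above
#SBATCH --nodes=1              # placeholder resource request
#SBATCH --ntasks=24            # placeholder resource request
#SBATCH --time=01:00:00        # placeholder walltime

# replace with your actual program
srun ./my_program
```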
Preparing Binaries
Most of the third-party software installed on the cluster is not located in the default path. To use it, the corresponding “module” must be loaded. Furthermore, through the module system you can set up the environment for your compiler to use special libraries. The big advantage of this system is the (relative) simplicity with which one can coordinate environment settings, such as PATH, MANPATH, LD_LIBRARY_PATH, and other relevant variables, depending on the requirements of the use case. You can find a list of installed modules, sorted by categories, by entering module avail on one of the frontends. The command module list gives you a list of currently loaded modules.
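For example, on one of the frontends you would run:

```bash
module avail   # list all installed modules, sorted by category
module list    # list the modules currently loaded in this shell
```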
To use a module, you can explicitly load the version you want with module load software/version. If you leave out the version, the default version will be used. Logging off and back in unloads all modules, as does module purge. You can unload single modules by entering module unload software.
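Putting this together, a typical load/unload cycle might look like the sketch below; the version string is a placeholder, the real names come from module avail:

```bash
module load intel/compiler        # load the default version of a module
module load intel/mkl/2020.1      # or request an explicit version (placeholder version string)
module unload intel/mkl/2020.1    # unload a single module again
module purge                      # unload everything at once
```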
The recommended compiler module for C, C++, and Fortran code is the default Intel compiler, intel/compiler. We also provide GNU and Open64 compilers; the PGI compiler suite will follow. Open64 is often recommended for AMD CPUs, but we do not have experience with it. For math (BLAS and fftw3), the Intel MKL (intel/mkl) is a good default choice, with ACML being an alternative for AMD processors. Usually it is not necessary to use the fftw3 modules alongside the MKL, as the latter provides fftw support as well. Please note that the module python/scipy/mkl/0.12.0 provides Python's numpy and scipy libraries compiled with Intel MKL math integration, thus offering good math function performance in a scripting language.
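As a hedged illustration of combining these modules, the sketch below compiles a C program with the Intel compiler and links it against MKL; the source file name is a placeholder, and -mkl is the Intel compiler's convenience flag for linking MKL:

```bash
module load intel/compiler intel/mkl

# icc is the Intel C compiler; -mkl links the BLAS/LAPACK/FFTW interfaces of MKL
# (solver.c is a placeholder for your own source file)
icc -O2 -mkl solver.c -o solver
```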
intel/mpi and the various OpenMPI flavors are recommended for MPI, mostly because the mvapich and mvapich2 libraries lack testing.
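A minimal sketch for building and running an MPI program with intel/mpi; the compiler wrapper mpiicc, the source file, and the rank count are illustrative and may differ on the cluster:

```bash
module load intel/compiler intel/mpi

# mpiicc is Intel MPI's wrapper around the Intel C compiler (hello_mpi.c is a placeholder)
mpiicc -O2 hello_mpi.c -o hello_mpi

# a short functional test with 4 ranks; real runs belong in the batch system, not on the frontends
mpirun -np 4 ./hello_mpi
```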