====== High Performance Computing ======
  
To use our compute cluster you need a full GWDG account, which most employees of the University of Göttingen and the Max Planck Institutes already have. This account is, by default, not activated for the use of the compute resources. To get it activated, or if you are unsure whether you have a full GWDG account, please send an informal email to <hpc@gwdg.de>.
  
===== Access =====
The following documentation is valid for this list of hardware:
  
^ Nodes                        ^ #    ^ CPU                             ^ GPU              ^ Cores  ^ Frequency  ^ Memory  ^ IB    ^ Partition  ^ Launched  ^
| gwdd[001-168]                | 168  | Ivy-Bridge \\ Intel E5-2670 v2  | none             | 2✕10   | 2.5 GHz    | 64 GB   | QDR   | medium     | 2013-11   |
| gwda[023-048]                | 25   | Abu-Dhabi \\ AMD Opteron 6378   | none             | 4✕16   | 2.4 GHz    | 256 GB  | QDR   | fat        | 2013-04   |
| sa[001-032]*                 | 32   | Haswell \\ Intel E5-2680 v3     | none             | 2✕12   | 2.5 GHz    | 256 GB  | QDR   | sa         | 2015-03   |
| em[001-032]*\\ hh[001-040]*  | 72   | Haswell \\ Intel E5-2640 v3     | none             | 2✕8    | 2.6 GHz    | 128 GB  | QDR   | em\\ hh    | 2015-03   |
| gwde001                      | 1    | Haswell \\ Intel E7-4809 v3     | none             | 4✕8    | 2.0 GHz    | 2 TB    | QDR   | fat+       | 2016-01   |
| dfa[001-015]                 | 15   | Broadwell \\ Intel E5-2650 v4   | none             | 2✕12   | 2.2 GHz    | 512 GB  | FDR   | fat/fat+   | 2016-08   |
| dmp[011-076]                 | 76   | Broadwell \\ Intel E5-2650 v4   | none             | 2✕12   | 2.2 GHz    | 128 GB  | FDR   | medium     | 2016-08   |
| dsu[001-005]                 | 5    | Haswell \\ Intel E5-4620 v3     | none             | 4✕10   | 2.0 GHz    | 1.5 TB  | FDR   | fat+       | 2016-08   |
| gwdo[161-180]*               | 20   | Ivy-Bridge \\ Intel E3-1270 v2  | NVidia GTX 770   | 1✕4    | 3.5 GHz    | 16 GB   | none  | gpu-hub    | 2014-01   |
| dge[001-007]                 | 7    | Broadwell \\ Intel E5-2650 v4   | NVidia GTX 1080  | 2✕12   | 2.2 GHz    | 128 GB  | FDR   | gpu        | 2016-08   |
| dge[008-015]                 | 8    | Broadwell \\ Intel E5-2650 v4   | NVidia GTX 980   | 2✕12   | 2.2 GHz    | 128 GB  | FDR   | gpu        | 2016-08   |
| dge[016-045]*                | 30   | Broadwell \\ Intel E5-2630 v4   | NVidia GTX 1070  | 2✕10   | 2.2 GHz    | 64 GB   | none  | gpu-hub    | 2017-06   |
| dte[001-010]                 | 10   | Broadwell \\ Intel E5-2650 v4   | NVidia K40       | 2✕12   | 2.2 GHz    | 128 GB  | FDR   | gpu        | 2016-08   |
  
//Explanation://
Systems marked with an asterisk (*) are only available for research groups participating in the corresponding hosting agreement.
**GB** = Gigabyte,
**TB** = Terabyte,
=====  Preparing Binaries  =====
  
Most of the third-party software installed on the cluster is not located in the default path. To use it, the corresponding "module" must be loaded. Furthermore, through the module system you can set up environment settings for your compiler to use special libraries. The big advantage of this system is the (relative) simplicity with which one can coordinate environment settings, such as ''PATH'', ''MANPATH'', ''LD_LIBRARY_PATH'' and other relevant variables, depending on the requirements of the use case. You can find a list of installed modules, sorted by categories, by entering ''module avail'' on one of the frontends. The command ''module list'' gives you a list of currently loaded modules.
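For example, the following commands, run on one of the frontends, are a quick way to explore the module system; ''intel'' is only a placeholder module name here, pick a real one from the output of ''module avail'':

<code bash>
module avail        # list all installed modules, sorted by categories
module list         # show the modules currently loaded in this shell
module show intel   # display what loading this module would change (PATH, LD_LIBRARY_PATH, ...)
</code>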
  
To use a module, you can explicitly load the version you want with ''module load software/version''. If you leave out the version, the default version will be used. Logging off and back in will unload all modules, as will ''module purge''. You can unload single modules by entering ''module unload software''.
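A typical session could then look like the following sketch; ''gcc/9.2.0'' is only an assumed example, the names and versions actually installed are listed by ''module avail'':

<code bash>
module load gcc/9.2.0   # load an explicit version (placeholder name/version)
module load gcc         # or load the default version instead
module unload gcc       # unload a single module again
module purge            # unload everything and start from a clean environment
</code>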
  * [[Running Jobs Slurm]]
  
  
===== Latest nodes =====
Now part of [[Running Jobs Slurm]].
  
=====  Applications  =====
  * [[Gaussian09]]
  * [[IPython Parallel]]
  * [[Molpro]]
  * [[Orca]]
  * [[Turbomole]]
  * [[Singularity]]
  * [[Spark]]
  
=====  User provided application documentation  =====
=====  Downloads  =====
  
[[https://info.gwdg.de/docs/lib/exe/fetch.php?media=en:services:application_services:high_performance_computing:hpc-course-2019-10.pdf|Using the GWDG Scientific Compute Cluster - An Introduction]]
  
{{:en:services:scientific_compute_cluster:script.sh.gz|}}