en:services:application_services:high_performance_computing:start [2019/04/08 18:19] tehlers [Running Jobs]
en:services:application_services:high_performance_computing:start [2019/10/09 16:36] vend [High Performance Computing] amend information on full GWDG account
====== High Performance Computing ======
  
To use our compute cluster you need a full GWDG account, which most employees of the University of Göttingen and the Max Planck Institutes already have. This account is, by default, not activated for the compute resources. To get it activated, or if you are unsure whether you have a full GWDG account, please send an informal email to <hpc@gwdg.de>.
  
===== Access =====
=====  Preparing Binaries  =====
  
Most of the third-party software installed on the cluster is not located in the default path. To use it, the corresponding "module" must be loaded. Furthermore, through the module system you can set up environment settings for your compiler to use special libraries. The big advantage of this system is the (relative) simplicity with which one can coordinate environment settings, such as ''PATH'', ''MANPATH'', ''LD_LIBRARY_PATH'', and other relevant variables, depending on the requirements of the use case. You can find a list of installed modules, sorted by category, by entering ''module avail'' on one of the frontends. The command ''module list'' gives you a list of currently loaded modules.
  
To use a module, you can explicitly load the version you want with ''module load software/version''. If you leave out the version, the default version will be used. Logging off and back in will unload all modules, as will ''module purge''. You can unload single modules by entering ''module unload software''.
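As an illustration, a typical session on one of the frontends might look like the following. The module name ''gcc'' is only an example; run ''module avail'' to see what is actually installed.

```shell
# Illustrative module workflow for the cluster frontends.
# Guard: do nothing on machines without the Environment Modules system.
if command -v module >/dev/null 2>&1; then
    module avail        # list installed modules, sorted by category
    module load gcc     # load the default version of a module (example name)
    module list         # show currently loaded modules
    module unload gcc   # unload a single module
    module purge        # unload all loaded modules
fi
```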
  * [[Running Jobs Slurm]]
  
  
===== Latest nodes =====
Information about the latest nodes is part of [[Running Jobs Slurm]].
  
=====  Applications  =====
  * [[Gaussian09]]
  * [[IPython Parallel]]
  * [[Molpro]]
  * [[Orca]]
  * [[Turbomole]]
  * [[Singularity]]
  * [[Spark]]
  
=====  User provided application documentation  =====
=====  Downloads  =====
  
[[https://info.gwdg.de/docs/lib/exe/fetch.php?media=en:services:application_services:high_performance_computing:hpc-course-2019-10.pdf|Using the GWDG Scientific Compute Cluster - An Introduction]]
  
{{:en:services:scientific_compute_cluster:script.sh.gz|}}