=====  Preparing Binaries  =====
  
Most of the third-party software installed on the cluster is not located in the default path. To use it, the corresponding "module" must be loaded. Furthermore, the module system lets you set up the environment your compiler needs in order to use special libraries. The big advantage of this system is the (relative) simplicity with which you can coordinate environment settings such as ''PATH'', ''MANPATH'', ''LD_LIBRARY_PATH'' and other relevant variables, depending on the requirements of the use case. You can find a list of installed modules, sorted by category, by entering ''module avail'' on one of the frontends. The command ''module list'' shows the currently loaded modules.
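
For example, a first look at the module system on one of the frontends uses exactly the two commands mentioned above (the output depends on what is currently installed):

<code bash>
# list all installed modules, sorted by category
module avail

# list the modules currently loaded in this shell
module list
</code>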
  
To use a module, explicitly load the version you want with ''module load software/version''. If you leave out the version, the default version will be used. Logging off and back in will unload all modules, as will ''module purge''. You can unload single modules by entering ''module unload software''.
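
A minimal sketch of this workflow, using the ''software/version'' placeholder from above (replace it with a real name shown by ''module avail''):

<code bash>
# load the default version of a module
module load software

# or pin an explicit version ("software/version" is a placeholder)
module load software/version

# remove a single module from the environment ...
module unload software

# ... or remove all loaded modules at once (same effect as logging off and on)
module purge
</code>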
''intel/mpi'' and the various OpenMPI flavors are recommended for MPI, mostly because the mvapich and mvapich2 libraries lack testing.
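
As a rough sketch, preparing an MPI binary with one of the recommended modules could look like the following. The module name ''intel/mpi'' comes from the recommendation above; the source file name and the ''mpicc'' wrapper are illustrative assumptions, since the wrappers provided depend on the loaded MPI module. Starting the resulting binary on the compute nodes is covered under Running Jobs below.

<code bash>
# make the MPI compiler wrappers and libraries available
module load intel/mpi

# compile an MPI program with the wrapper provided by the module
# (hello_mpi.c is a placeholder for your own source file)
mpicc -o hello_mpi hello_mpi.c
</code>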
=====  Running Jobs  =====

  * [[Running Jobs Slurm]]

Old:

  * [[Running Jobs]]
  
===== Latest nodes =====

Old (now part of [[Running Jobs Slurm]]):

You can find all important information about the newest nodes [[en:services:application_services:high_performance_computing:new_nodes|here]].
  
  * [[Turbomole]]
  * [[Singularity]]
  * [[Spark]]
  
=====  User provided application documentation  =====
=====  Downloads  =====
  
[[https://info.gwdg.de/docs/lib/exe/fetch.php?media=en:services:application_services:high_performance_computing:parallelkurs.pdf|Using the GWDG Scientific Compute Cluster - An Introduction]]
  
{{:en:services:scientific_compute_cluster:script.sh.gz|}}