====== Singularity on the SCC ======
  
[[https://sylabs.io/singularity/|Singularity]] is a containerization system focused on scientific needs and designed for running on HPC resources. At GWDG, Singularity can be used by simply loading the corresponding module:
  
<code>
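# the exact module name is assumed here; check it with "module avail"
module load singularity
</code>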
  
For building you can use Docker images or Singularity bootstrap (definition) files. You can find the documentation for the build process at
https://sylabs.io/docs/.
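As a minimal sketch, both variants look like the following (the image and file names are placeholders; building images usually requires root privileges, so it is typically done on a machine you control):
<code>
# build from a Docker Hub image
sudo singularity build mycontainer.sif docker://ubuntu:18.04
# or build from a Singularity definition (bootstrap) file
sudo singularity build mycontainer.sif mycontainer.def
</code>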
====== Examples ======
Several examples of Singularity use cases are shown below.
  
<code>
singularity pull --name sjupyter.sif shub://A33a/sjupyter
</code>
  
Now the sjupyter.sif image is ready to be used as a container. To submit the corresponding job, run the command:
  
<code>
srun --pty -p int singularity shell sjupyter.sif
</code>
Here we are requesting a shell inside the container in the interactive partition.
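The same container can also be used non-interactively in a batch job. A minimal sketch, assuming a hypothetical script ''myscript.py''; the partition name and time limit are only examples:
<code>
#!/bin/bash
#SBATCH -p medium
#SBATCH -t 01:00:00

module load singularity
# run the script inside the container instead of opening a shell
singularity exec sjupyter.sif python myscript.py
</code>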
  
===== GPU access within the container =====
GPU devices are visible within the container by default. Only the driver and the necessary libraries need to be installed in, or bound to, the container.
You can install the Nvidia driver yourself or bind it to the container. To bind it automatically, run the container with the ''%%--%%nv'' flag, for instance ''singularity shell %%--%%nv sjupyter.sif''. If you want to use a specific driver version, you can install it within the container or link an existing driver version provided by the cluster into the container. For the driver to be visible inside the container, you have to add its location to the environment variable ''LD_LIBRARY_PATH''. Here is an example of linking Nvidia driver version 384.111:
<code>
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/cm/local/apps/cuda-driver/libs/384.111/lib64
</code>
When running a container, the corresponding path should be bound to it with the ''-B'' option:
<code>
singularity shell -B /cm/local/apps jupyterCuda.sif
</code>
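Putting this together, an interactive GPU session could be requested as in the following sketch (the partition name and GPU request are examples and depend on the cluster configuration):
<code>
# interactive shell in the container on a GPU node, with the driver path bound
srun --pty -p gpu --gres=gpu:1 singularity shell --nv -B /cm/local/apps jupyterCuda.sif
</code>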
  
You can shell into the container with:
<code>
singularity shell -B /cm/local/apps CONTAINERNAME.sif
</code>
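A single command can also be executed in the container without an interactive shell; for example, to check that the GPUs are visible (assuming the driver is bound as described above):
<code>
# run one command inside the container instead of an interactive shell
singularity exec --nv -B /cm/local/apps CONTAINERNAME.sif nvidia-smi
</code>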

===== Distributed PyTorch on GPU =====
If you are using PyTorch for machine learning, you may want to run it in a container on our GPU nodes using its distributed package. The complete documentation can be found at [[https://info.gwdg.de/wiki/doku.php?id=wiki:hpc:pytorch_on_the_hpc_clusters|PyTorch on the HPC]].
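As a rough sketch only (the linked page has the authoritative recipe; the container name, script, and resource requests below are hypothetical), a single-node multi-GPU launch might look like:
<code>
# start train.py on 4 GPUs of one node inside a hypothetical pytorch.sif container
srun -p gpu --gres=gpu:4 singularity exec --nv pytorch.sif \
    python -m torch.distributed.launch --nproc_per_node=4 train.py
</code>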