===== Singularity on the SCC =====
  
To pull the Jupyter container image from Singularity Hub, run:

<code>
singularity pull --name sjupyter.sif shub://A33a/sjupyter
</code>
  
Now the sjupyter.sif image is ready to be used. To submit a corresponding interactive job, run:
  
<code>
srun --pty -p int singularity shell sjupyter.sif
</code>
Here we are requesting a shell inside the container on the interactive partition.
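
The same image can also be used in a regular, non-interactive batch job. Below is a minimal sketch; the partition name, time limit and the command executed inside the container are placeholders and not part of the original instructions:

<code>
#!/bin/bash
#SBATCH -p medium
#SBATCH -t 00:10:00

# Run a single command inside the container (here: just print the Python version)
singularity exec sjupyter.sif python --version
</code>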
===== GPU access within the container =====
GPU devices are visible within the container by default. Only the driver and the necessary libraries need to be installed in, or bound to, the container.
You can install the Nvidia drivers yourself or bind them to the container. To bind them automatically, run the container with the ''%%--%%nv'' flag, for instance ''singularity shell %%--%%nv sjupyter.sif''. If you want to use a specific driver version, you can install it within the container or link an existing version provided by the cluster into the container. For the drivers to be visible inside the container, you have to add their location to the environment variable ''LD_LIBRARY_PATH''. Here is an example of linking Nvidia driver version 384.111:
<code>
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/cm/local/apps/cuda-driver/libs/384.111/lib64
</code>
When running a container, the corresponding path should be bound into it with the ''-B'' option:
<code>
singularity shell -B /cm/local/apps jupyterCuda.sif
</code>
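
To verify that the GPU and driver are actually visible inside the container, you can run ''nvidia-smi'' in it. This is only a sketch; the GPU partition name is an assumption and the image name is taken from the example above, so both may differ in your setup:

<code>
# Make the cluster-provided driver libraries visible inside the container
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/cm/local/apps/cuda-driver/libs/384.111/lib64

# Request a GPU node (partition name is a placeholder) and run nvidia-smi in the container
srun -p gpu singularity exec -B /cm/local/apps jupyterCuda.sif nvidia-smi
</code>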
  
You can shell into the container with:
<code>
singularity shell -B /cm/local/apps CONTAINERNAME.sif
</code>
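
If you only need to run a single command in the container rather than an interactive shell, ''singularity exec'' can be used in the same way. A minimal sketch; the script name is a placeholder:

<code>
# Execute one command inside the container instead of opening a shell
singularity exec -B /cm/local/apps CONTAINERNAME.sif python myscript.py
</code>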
===== Distributed PyTorch on GPU =====
If you are using PyTorch for machine learning, you may want to run it in a container on our GPU nodes using its distributed package. The complete documentation can be found at [[https://info.gwdg.de/wiki/doku.php?id=wiki:hpc:pytorch_on_the_hpc_clusters|PyTorch on the HPC]].
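
As a rough illustration only (not the setup documented on the linked page), a single-node multi-GPU run with PyTorch's distributed launcher inside a container could look like the sketch below; the partition, GPU request, image name ''pytorch.sif'' and script ''train.py'' are all placeholders:

<code>
#!/bin/bash
#SBATCH -p gpu
#SBATCH --gres=gpu:2
#SBATCH -t 01:00:00

# Launch two DDP worker processes, one per GPU, inside the container;
# --nv binds the host Nvidia driver into the container automatically.
singularity exec --nv pytorch.sif \
    python -m torch.distributed.run --nproc_per_node=2 train.py
</code>

For multi-node runs and the cluster-specific settings, please follow the linked documentation.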