  
<code>
srun --pty -p int singularity shell -B /opt sjupyter.sif
</code>
  
To be able to run Slurm commands within the container, additional libraries and directories have to be bound into it:
  
<code>
singularity -B /var/run/munge,/run/munge,/usr/lib64/libmunge.so.2,/usr/lib64/libmunge.so.2.0.0,/etc/profile.d/slurm.sh ...
</code>
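
Combining this with the interactive command from above, a full invocation could look like the following (only a sketch; the exact bind paths and image name depend on your setup):

<code>
srun --pty -p int singularity shell \
    -B /opt,/var/run/munge,/run/munge,/usr/lib64/libmunge.so.2,/usr/lib64/libmunge.so.2.0.0,/etc/profile.d/slurm.sh \
    sjupyter.sif
</code>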

You also need to add the //slurmadmin// user to the container when building the image, with the following commands:
<code>
echo "slurmadmin:x:300:300::/opt/slurm/slurm:/bin/false" >> /etc/passwd
echo "slurmadmin:x:300:" >> /etc/group
</code>
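
If you build the image from a Singularity definition file, these lines can be placed in its ''%%%post%%'' section, for example (only a sketch; the bootstrap source and the rest of the recipe are placeholders):

<code>
Bootstrap: docker
From: ubuntu:18.04

%post
    # ... install Jupyter, ipyparallel and the other required software here ...

    # create the slurmadmin user expected by the host's Slurm installation
    echo "slurmadmin:x:300:300::/opt/slurm/slurm:/bin/false" >> /etc/passwd
    echo "slurmadmin:x:300:" >> /etc/group
</code>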
  
  
To set up IPython Parallel, first create a new IPython profile for it:

<code>
ipython profile create --parallel --profile=myslurm
</code>
  
This will create the profile at ''%%$HOME/.ipython/profile_myslurm%%''. Now you need to configure it for Slurm.
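
The profile directory should then contain, among others, the two configuration files edited below (shown only as an illustration; the exact contents depend on your IPython version):

<code>
ls $HOME/.ipython/profile_myslurm/
# ipcluster_config.py  ipcontroller_config.py  ipengine_config.py  ...
</code>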
  
Add the following config lines to the file ''%%$HOME/.ipython/profile_myslurm/ipcluster_config.py%%'':
  
<code>
c.IPClusterEngines.engine_launcher_class = 'SlurmEngineSetLauncher'
c.IPClusterStart.controller_launcher_class = 'SlurmControllerLauncher'
c.SlurmControllerLauncher.batch_template_file = 'slurm.controller.template'
c.SlurmEngineSetLauncher.batch_template_file = 'slurm.engine.template'
</code>
and comment out the following parameters:
<code>
#c.SlurmControllerLauncher.batch_template = "..."
#c.SlurmEngineSetLauncher.batch_template = "..."
</code>
  
Add the following line to ''%%$HOME/.ipython/profile_myslurm/ipcontroller_config.py%%'':
  
<code>
</code>
  
IPython Parallel is almost ready to use. To submit the Slurm jobs to a specific partition and with additional parameters, create templates for the batch jobs in the directory from which you want to start the container, using the names specified in the configuration file, i.e. ''%%slurm.controller.template%%'' and ''%%slurm.engine.template%%''.
  
slurm.controller.template:
  
<code>
#!/bin/bash

#SBATCH -p medium
#SBATCH -J ipcontroller
#SBATCH -o jupyterhub-gwdg/current.ipcontroller.log
#SBATCH -n 1
#SBATCH -t 1:00:00

# make the singularity binary available inside the batch job
export PATH=$PATH:/usr/bin:/usr/local/bin
export PATH=$PATH:/cm/shared/apps/singularity/3.2.0/bin/

# start the IPython controller inside the container, reachable under the node's hostname
singularity exec sjupyter.sif ipcontroller --profile-dir={profile_dir} --location=$HOSTNAME
</code>
  
slurm.engine.template:
  
<code>
#!/bin/bash

#SBATCH -p medium
#SBATCH -J ipengine
#SBATCH -n {n}
#SBATCH -o jupyterhub-gwdg/current.ipengine.log
#SBATCH -t 1:00:00

# make the singularity binary available inside the batch job
export PATH=$PATH:/usr/bin:/usr/local/bin
export PATH=$PATH:/cm/shared/apps/singularity/3.2.0/bin/

# start one ipengine per allocated task via srun
srun singularity exec sjupyter.sif ipengine --profile-dir={profile_dir}
</code>
  
For ''%%<host>%%'', insert the node where the container is running. Open the link from the Jupyter output in your browser.
  
To start the cluster, open the IPython Clusters tab in the Jupyter interface, select the myslurm profile and the number of engines, and click **start**. You will be able to see the engines running with the ''%%squeue -u $USER%%'' command.
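
Alternatively, the cluster can be started from a shell inside the container (a sketch; the number of engines is just an example value):

<code>
ipcluster start --profile=myslurm -n 4
</code>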
  
To test if it is working, simply run the following script in a Jupyter notebook:
<code>
import ipyparallel as ipp
c = ipp.Client(profile="myslurm")
c.ids
c[:].apply_sync(lambda: "Hello, World")
</code>
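
If the engines respond, a simple parallel map can serve as a further check (a sketch reusing the client from above):

<code>
view = c[:]                             # direct view on all engines
view.map_sync(lambda x: x ** 2, range(16))
</code>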