
Parallel execution of Jupyter notebooks on the cluster

If the resources of the interactive queue are sufficient for the Jupyter notebook itself, the easiest way to use IPython Parallel with Jupyter on the SCC is to run it via JupyterHub on the SCC, where you can spawn a parallel notebook alongside a normal one.

This documentation is based on the IPython example used in our Singularity on the SCC documentation page.

When submitting that image, add the -B option of singularity to mount the /opt folder:

bsub -ISs -q int singularity shell -B /opt sjupyter.simg

To be able to run LSF commands within the container, the LSF variables need to be initialized:

. /opt/lsf/conf/profile.lsf

To use IPython Parallel with the cluster, it needs to be configured first. These steps are required only once; everything is kept in your $HOME/.ipython directory, even if you destroy the container.

To create a new profile and configure it for the compute cluster, run the following command:

ipython profile create --parallel --profile=lsf

This will create the profile at $HOME/.ipython/profile_lsf. Now you need to configure it for LSF.

Add the following config lines to the file $HOME/.ipython/profile_lsf/

c.IPClusterEngines.engine_launcher_class = 'LSFEngineSetLauncher'
c.IPClusterStart.controller_launcher_class = 'LSFControllerLauncher'
c.LSFControllerLauncher.batch_template_file = 'lsf.controller.template'
c.LSFEngineSetLauncher.batch_template_file = 'lsf.engine.template'

Add the following line to $HOME/.ipython/profile_lsf/

c.HubFactory.ip = '*'

IPython Parallel is almost ready to use. To submit LSF jobs to a specific queue and with additional parameters, create templates for the batch jobs in the directory from which you want to start the container, using the names specified in the configuration file, i.e. lsf.controller.template and lsf.engine.template.

lsf.controller.template:

#BSUB -J ipcontroller
#BSUB -o ipcontroller.%J
#BSUB -n 1
#BSUB -W 5:00 
#BSUB -q int

export PATH=$PATH:/usr/bin
export PATH=$PATH:/cm/shared/apps/singularity/bin/

singularity exec sjupyter.simg ipcontroller --profile-dir={profile_dir} --location=$HOSTNAME

lsf.engine.template:

#BSUB -n {n}
#BSUB -o ipengine.%J
#BSUB -W 5:00
#BSUB -q mpi
#BSUB -R span[ptile='!']
#BSUB -R same[model]
#BSUB -a intelmpi

export PATH=$PATH:/usr/bin
export PATH=$PATH:/cm/shared/apps/singularity/bin/
export I_MPI_ROOT=/cm/shared/apps/intel/compilers_and_libraries/2017.2.174/mpi
export PATH=$PATH:$I_MPI_ROOT/intel64/bin
export LD_LIBRARY_PATH=$I_MPI_ROOT/intel64/lib

mpirun.lsf singularity exec sjupyter.simg ipengine --profile-dir={profile_dir}
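The {n} and {profile_dir} placeholders in the templates above are filled in by ipcluster when the cluster is started; to a first approximation it uses ordinary Python string formatting. A minimal sketch (the template excerpt and the values passed to format are illustrative; the real launcher derives them from your profile and start request):

```python
# Sketch of how ipcluster fills the batch-template placeholders.
# The concrete values below are illustrative, not taken from a real run.
engine_template = (
    "#BSUB -n {n}\n"
    "#BSUB -q mpi\n"
    "mpirun.lsf singularity exec sjupyter.simg ipengine --profile-dir={profile_dir}\n"
)

# ipcluster substitutes the engine count and the profile directory:
script = engine_template.format(n=4, profile_dir="/usr/users/jdoe/.ipython/profile_lsf")
print(script)
```

This is why the engine template can be reused unchanged for any engine count: only {n} and {profile_dir} vary between submissions.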

Now you can launch a Jupyter instance:

jupyter notebook --port <port> --ip 0.0.0.0 --no-browser

For <port> choose a random unrestricted port number, for example 8769. Tunnel the port from the node to your local PC:

ssh -L <port>:<host>:<port> gwdu101 -N

For <host> insert the node where the container is running. Open the link from the Jupyter output in your browser.

To start the cluster, use the IPython Clusters tab in the Jupyter interface, select the lsf profile and the number of engines, and click start. You can check that the engines are running with the bjobs command.
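Engines submitted through LSF can sit in the queue for a while before they register with the controller. A small helper like the following can be run in the notebook to wait until the requested engines are up; it is a hypothetical convenience function, not part of ipyparallel, and relies only on the client's ids attribute:

```python
import time

def wait_for_engines(client, n, timeout=300, poll=5):
    """Block until at least n engines are registered, or raise TimeoutError.

    client is an ipyparallel Client; client.ids lists registered engine ids.
    """
    deadline = time.time() + timeout
    while len(client.ids) < n:
        if time.time() > deadline:
            raise TimeoutError(
                "only %d of %d engines registered" % (len(client.ids), n)
            )
        time.sleep(poll)
    return list(client.ids)

# Usage in the notebook, once the cluster has been started from the
# IPython Clusters tab:
#   import ipyparallel as ipp
#   rc = ipp.Client(profile="lsf")
#   wait_for_engines(rc, 4)
```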

To test if it is working, simply run the following script in the Jupyter notebook:

import ipyparallel as ipp
c = ipp.Client(profile="lsf")
c[:].apply_sync(lambda : "Hello, World")
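Once the hello-world test works, any function can be shipped to the engines the same way. As a slightly larger, hypothetical example (the function below is illustrative, not from this page), a Monte Carlo estimate of pi is embarrassingly parallel: each engine runs independent samples and the results are averaged.

```python
import random

def estimate_pi(n_samples):
    """Monte Carlo estimate of pi from n_samples random points in the unit square."""
    inside = sum(
        1 for _ in range(n_samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

# With the cluster running, distribute runs over all engines and average.
# Note that the engines must import random themselves before the call:
#   import ipyparallel as ipp
#   rc = ipp.Client(profile="lsf")
#   rc[:].execute("import random")
#   estimates = rc[:].apply_sync(estimate_pi, 1_000_000)
#   pi_estimate = sum(estimates) / len(estimates)
```

Averaging over the engines multiplies the effective sample count by the number of engines, which is exactly the kind of workload this setup is meant for.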