====== Jupyter-Hub on the SCC ======
GWDG also offers [[https://jupyter-hpc.gwdg.de|Jupyter-Hub on HPC]] as a [[en:services:application_services:jupyter:beta|beta service]] for users of Python.

**Note** that to use Jupyter-Hub on the Scientific Compute Cluster (SCC) you need a GWDG account which is activated for the use of the compute cluster. Information on how to activate your account can be found [[en:services:application_services:high_performance_computing:account_activation|here]].

===== How to use Jupyter-Hub on the SCC =====
Jupyter-Hub on the SCC can be used in the same way as [[en:services:application_services:jupyter:start|Jupyter / Jupyter-Hub]] and currently supports ''Python'' and ''R'' kernels. After successful authentication, there are three options for spawning Jupyter notebooks:
  - GWDG HPC
  - GWDG HPC with IPython Parallel
  - GWDG HPC with Own Container

If you just need a Jupyter notebook, select the first option. If you want to use [[https://ipython.org/ipython-doc/3/parallel/|IPython Parallel]], select the second option. If you have your own Singularity container and want to use the notebook from that container, select the third option.

IPython Parallel makes it possible to increase computational resources by spawning compute workers on any nodes of the HPC cluster (not in the interactive queue).

A Jupyter notebook and its IPython Parallel workers run as normal jobs in the cluster.

There are several options you can set on the spawning page to adjust the resources, such as the number of cores and the amount of memory.

===== Options =====
**Job profile**: this option sets which notebook you want to use: the normal one, the one with IPython Parallel, or your own.

**Singularity container location**: if you selected your own container, you have to provide the full path (optionally using the ''$HOME'' variable) to the container you want to spawn. More on that further down in this documentation.

**Duration**: the duration of the job. Note that after this time the job will be killed and you have to spawn the notebook again.

**Number of cores**: this option sets the number of cores accessible to the notebook. Note that the cores are not for exclusive use and might be shared if more notebooks are spawned than there are available resources.

**Amount of memory**: this option sets the amount of memory accessible to the notebook. As with the number of cores, memory might be shared if there are more notebooks than resources.

**Notebook's Home directory**: you can provide a custom location for the default home directory of Jupyter. This path will be opened by default when the notebook is spawned.

===== Resources =====
Jupyter notebooks in the [[https://jupyter-hpc.gwdg.de|Jupyter-Hub on HPC]] service are launched in the [[en:services:application_services:high_performance_computing:interactive_queue|interactive queue]] of the [[en:services:application_services:high_performance_computing:|High Performance Computing Cluster]].

This means that ~24 CPUs and ~128 GB of memory per node in the interactive queue are available to Jupyter notebooks and are shared between all users who simultaneously use the same node. Currently there are 4 nodes in the interactive queue; more nodes will be added in case of high demand.

You can also use the /scratch (/scratch2) shared storage, which allows you to store large files (terabytes), as well as your HOME directory.

<WRAP center round important 90%>
By default, Jupyter notebooks do not start in the root of your HOME directory but in the folder ''~/jupyterhub-gwdg''. Place your notebooks and files in that folder, or set the **Notebook's Home directory** option described above.
</WRAP>

Currently one session of a Jupyter notebook can run for a maximum of **8 hours**; after that it will be killed, but your files will stay intact.

===== Using IPython Parallel =====
In order to make use of IPython Parallel, Jupyter should be started with the ''GWDG HPC with IPython Parallel'' spawner.

After the Jupyter notebook is launched, you can start engines in the "IPython Clusters" tab of the web interface. There, in the **slurm** profile, select the number of engines to run and click the start button.

**Note** that the workers start as normal jobs in the ''medium'' partition, which might take some time. The GUI does not provide any functionality to check the state of the workers, so please wait until the engines are spawned. You can always check the current state of your jobs with the ''squeue -u $USER'' command, run in a terminal, as shown below.
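
For illustration, such a check might look as follows in a terminal; the job ID, name, and node in the sample output are made up, while the columns follow the standard Slurm ''squeue'' layout:
<code bash>
# List all of your currently queued and running jobs
squeue -u $USER

# Illustrative output (job ID, name, and node are hypothetical):
#  JOBID PARTITION     NAME   USER ST  TIME NODES NODELIST(REASON)
# 123456    medium ipengine uni123  R  5:12     1 node042
</code>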

After the engines are up, the spawned cluster of workers can be checked with the following script:
<code python>
import ipyparallel as ipp

# Connect to the engines started from the "slurm" profile
c = ipp.Client(profile="slurm")

# List the IDs of the available engines
c.ids

# Run a simple function on all engines to verify that they respond
c[:].apply_sync(lambda: "Hello, World")
</code>

Workers are currently configured to run for a maximum of **1 hour**. If you want to change that, you can edit the submission settings of the workers in ''~/.ipython/profile_slurm/ipcluster_config.py''.

==== Installing additional Python modules ====
Additional Python modules can be installed via the terminal and the Python package manager "pip". To do this, open a terminal via the menu "New" -> "Terminal".

By default the Internet is not accessible from within the notebook. In order to install or download anything from the Internet, you need to use the proxy by exporting the following environment variables:
<code bash>export https_proxy="https://www-cache.gwdg.de:3128/"
export http_proxy="http://www-cache.gwdg.de:3128/"</code>
Afterwards
<code bash>python3 -m pip install --user <module></code>
installs a new module in the home directory.
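
Putting both steps together, installing an example module (here ''requests'', chosen arbitrarily) from a notebook terminal would look like this:
<code bash>
# Route pip's traffic through the GWDG web cache
export https_proxy="https://www-cache.gwdg.de:3128/"
export http_proxy="http://www-cache.gwdg.de:3128/"

# Install the module into your home directory (~/.local)
python3 -m pip install --user requests
</code>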

The installation of large Python modules like "tensorflow" may fail with the message "No space left on device". This is caused by the temporary space under "/tmp" being too small for pip to process the downloaded packages. The following steps use a temporary directory in the much larger user home directory for this one installation:

<code bash>
mkdir -v ~/.user-temp
TMPDIR=~/.user-temp python3 -m pip install --user <module>
</code>

You can also use self-defined kernels and install conda environments in the non-parallel notebook. Please refer to [[en:services:application_services:jupyter:start#installation_of_additional_packages_and_environments_via_conda|Installing additional environments via conda]].
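
As a rough sketch, assuming ''conda'' is available in your terminal session, a new environment could be registered as an additional kernel like this (the name ''myenv'' is only an example; the linked guide is authoritative):
<code bash>
# Create a new conda environment that includes the IPython kernel package
conda create -y -n myenv python ipykernel

# Register the environment as a Jupyter kernel for your user
conda run -n myenv python -m ipykernel install --user --name myenv --display-name "Python (myenv)"
</code>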

===== Running your own Container =====
You can build your own Apptainer container with a Jupyter notebook and run it. This not only makes you independent of our Jupyter notebooks, but also lets you easily spawn the same notebook in your local environment for development or tests.

Here is an example of an Apptainer definition file you might use for your own Jupyter notebook:

<code bash>
Bootstrap: docker
From: condaforge/miniforge3

%post
        # Create the slurmadmin user and group needed for Slurm commands
        # to work inside the container on the SCC (see below)
        addgroup --system --gid 300 slurmadmin
        adduser --system --uid 300 --gid 300 --home /opt/slurm/slurm \
                --shell /bin/false slurmadmin

        # Install Jupyter; jupyterhub 2.3.1 is pinned for compatibility
        # with the JupyterHub on the SCC
        conda install --quiet --yes \
                'notebook>=7.0.3' \
                'jupyterhub==2.3.1' \
                'jupyterlab>=4.0.5' \
                'ipyparallel>=8.6.1'
</code>
More example recipes can be found in the [[https://gitlab-ce.gwdg.de/gwdg/hpc-usage-examples/-/tree/main/jupyter-hpc|HPC Usage Examples]] repository.

You can extend this container as much as you want. The important thing is that the ''jupyterhub-singleuser'' binary, which comes with ''jupyterhub'', is available in the $PATH; this binary is called when the notebook starts. It is also important to use version 2.3.1 of ''jupyterhub'' for compatibility with the JupyterHub on the SCC. The Hub Control Panel is available under "File > Hub Control Panel", and in newer versions of JupyterLab the ipyparallel profiles can be found under the "IP" tab in the sidebar.
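
To check that a freshly built image fulfills this requirement, you could, for example, look for the binary inside the container (the image name ''my-jupyter.sif'' is just a placeholder):
<code bash>
# Check that jupyterhub-singleuser is on the PATH inside the container
apptainer exec my-jupyter.sif which jupyterhub-singleuser

# Optionally confirm that the installed jupyterhub version is 2.3.1
apptainer exec my-jupyter.sif jupyterhub-singleuser --version
</code>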

The commands involving ''slurmadmin'' are required for Slurm commands to work inside the container when it is running on the SCC (in particular for the slurm_new ipyparallel profile to work). The container can still be used normally on your local machine without Slurm.

Previously, you had to install [[https://sylabs.io/singularity|Singularity]] on your local machine, build the image, and then [[en:services:application_services:high_performance_computing:transfer_data|transfer]] the image to the SCC.

Starting with the rev/23.12 software release, you can also use the ''apptainer'' module to build your containers directly on the compute nodes of the SCC.
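
A minimal build session on the cluster might look like this (''my-jupyter.def'' and ''my-jupyter.sif'' are placeholder names for the definition file shown above and the resulting image):
<code bash>
# Load the Apptainer module from the rev/23.12 software release
module load apptainer

# Build the image from the definition file
apptainer build my-jupyter.sif my-jupyter.def
</code>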