In order to make use of IPython Parallel, Jupyter should be started with the ''GWDG HPC with IPython Parallel'' spawner.
  
After the Jupyter notebook is launched, you can start engines on the "IPython Clusters" tab of the web interface. There, under the **slurm** profile, select the number of engines to run and click the start button.
  
**Note** that the workers start as normal jobs in the ''medium'' partition, so it might take some time before they are available. The GUI does not provide any way to check the state of the workers, so please wait until the engines have spawned. You can always check the current state of the jobs by running the ''squeue -u $USER'' command in a terminal.
  
After the engines are up, the spawned cluster of workers can be checked with the following script:
<code python>
import ipyparallel as ipp
c = ipp.Client(profile="slurm")
c.ids
c[:].apply_sync(lambda: "Hello, World")
</code>
  
Workers are currently configured to run for a maximum of **1 hour**. If you want to change this, you can edit the submission template for the workers in ''~/.ipython/profile_slurm/ipcluster_config.py''.
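As an illustration, the walltime is typically set in the Slurm batch template inside that configuration file. The snippet below is only a sketch: the launcher class name (''SlurmEngineSetLauncher'') comes from ipyparallel, but the concrete template contents, partition, and ''--time'' value are assumptions, not the actual GWDG defaults — compare against the template already present in your ''ipcluster_config.py'' before changing anything.
<code python>
# ~/.ipython/profile_slurm/ipcluster_config.py  (sketch; actual contents may differ)
c = get_config()  # provided by IPython when loading the config file

# Hypothetical batch template for the engines. Raising --time (HH:MM:SS)
# extends the 1 hour limit; partition and resource lines are assumptions.
c.SlurmEngineSetLauncher.batch_template = """#!/bin/bash
#SBATCH --partition=medium
#SBATCH --time=02:00:00
#SBATCH --ntasks={n}
srun ipengine --profile-dir={profile_dir} --cluster-id={cluster_id}
"""
</code>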
  
==== Installing additional Python modules ====