en:services:application_services:high_performance_computing:running_jobs_slurm [2019/05/03 08:09] mboden
en:services:application_services:high_performance_computing:running_jobs_slurm [2019/07/09 17:48] (current) mboden
**MPI job**\\
A job with distributed memory parallelization, realized with MPI. It can use several job slots on several nodes and needs to be started with ''mpirun'' or the Slurm substitute ''srun''.
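As an illustrative sketch (the partition name ''medium'' is taken from the examples below; the program name is hypothetical), such a job could be submitted with:

<code>
sbatch -p medium -N 2 --ntasks-per-node=4 --wrap="mpirun ./mpi_program"
</code>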
**Partition**\\
**<nowiki>-n <tasks></nowiki>**\\
The number of tasks for this job. The default is one task per node.

**<nowiki>-c <cpus per task></nowiki>**\\
The number of CPUs per task. The default is one CPU per task.
  
**<nowiki>-N <minNodes[,maxNodes]></nowiki>**\\
Number of tasks per node. If both -n and <nowiki>--ntasks-per-node</nowiki> are specified, this option specifies the maximum number of tasks per node.
\\
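As an illustrative combination of these options (program name hypothetical):

<code>
# 2 nodes, 4 tasks per node, 2 CPUs per task = 16 CPUs in total
sbatch -N 2 --ntasks-per-node=4 -c 2 --wrap="srun ./my_program"
</code>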
=== Memory Selection ===
  
====  Using Job Scripts  ====
  
A job script is a shell script with a special comment section: in each line beginning with ''#SBATCH'', the following text is interpreted as an ''sbatch'' option. These options have to be at the top of the script, before any other commands are executed. Here is an example (a minimal sketch; the program name and resource values are illustrative):

<code>
#!/bin/bash
#SBATCH -p medium
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -t 01:00:00

./myexecutable
</code>
An OpenMP/MPI hybrid job can, for example, be submitted with:

<code>
export OMP_NUM_THREADS=4
sbatch --exclusive -p medium -N 2 --ntasks-per-node=4 --wrap="mpirun ./hybrid_job"
</code>
\\
(each MPI process creates 4 OpenMP threads in this case).
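The same hybrid job can also be sketched as a job script (values as in the command line above):

<code>
#!/bin/bash
#SBATCH --exclusive
#SBATCH -p medium
#SBATCH -N 2
#SBATCH --ntasks-per-node=4

export OMP_NUM_THREADS=4
mpirun ./hybrid_job
</code>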
As stated before, ''sbatch'' is used to submit jobs to the cluster, but there is also the ''srun'' command, which can be used to execute a task directly on the allocated nodes. That command is helpful for starting an interactive session on a node. You can use an interactive session to avoid running large tests on the frontend (a good idea!). You can get an interactive session (with the bash shell) on one of the ''medium'' nodes with
  
<code>srun --pty -p medium -N 1 -c 16 /bin/bash</code>
\\
''<nowiki>--pty</nowiki>'' requests support for an interactive shell, and ''-p medium'' selects the corresponding partition. ''-c 16'' ensures that you get 16 CPUs on the node. You will get a shell prompt as soon as a suitable node becomes available. Single-threaded, non-interactive jobs can be run with
<code>srun -p medium ./myexecutable</code>
  
  *  If you have a lot of failed jobs, send at least two outputs. You can also list the job IDs of all failed jobs to help us understand your problem.
  *  If you don’t mind us looking at your files, please state this in your request. You can limit your permission to specific directories or files.
  
[[Kategorie: Scientific Computing]]