  
**fat+**\\
This partition is meant for very memory-intensive jobs, i.e. jobs that require more than 512 GB of RAM on a single node. Nodes of the fat+ partition have 1.5 or 2 TB of RAM. You are required to specify your memory needs on job submission in order to use these nodes (see [[en:services:application_services:high_performance_computing:running_jobs_slurm#resource_selection|resource selection]]).\\
As general advice: try your jobs on the smaller nodes of the fat partition first, work your way up, and don't be afraid to ask for help here.
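A minimal sketch of a job script for the fat+ partition could look like the following. The memory value, core count, runtime and program name are only placeholders; adjust them to your actual needs.

<code bash>
#!/bin/bash
#SBATCH -p fat+               # partition for jobs needing more than 512 GB of RAM
#SBATCH -N 1                  # a single node
#SBATCH -c 16                 # CPU cores on that node
#SBATCH --mem=1000G           # memory per node; required when using fat+
#SBATCH -t 12:00:00           # wall-clock time limit

./my_memory_hungry_program    # placeholder for your actual program
</code>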
  
**gpu** - A partition for nodes containing GPUs. Please refer to [[en:services:application_services:high_performance_computing:running_jobs_slurm#gpu_selection]].
**<nowiki>-c <cpus per task></nowiki>**\\
The number of CPUs per task. The default is one CPU per task.

**<nowiki>-c vs -n</nowiki>**\\
As a rule of thumb: if you run your code on a single node, use -c. For multi-node MPI jobs, use -n.
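As an illustration, the two cases could look like the following sketches (program names and core counts are placeholders):

<code bash>
#!/bin/bash
# Single-node, multi-threaded job (e.g. OpenMP): one task with several cores
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./my_threaded_program        # placeholder for your actual program
</code>

<code bash>
#!/bin/bash
# Multi-node MPI job: many single-core tasks distributed by Slurm
#SBATCH -n 40

srun ./my_mpi_program        # placeholder for your actual MPI program
</code>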
  
**<nowiki>-N <minNodes[,maxNodes]></nowiki>**\\
**-C scratch[2]**\\
The node must have access to shared ''/scratch'' or ''/scratch2''.
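For example, to make sure a job only runs on nodes that can reach the shared ''/scratch2'' filesystem (''myjob.sh'' is a placeholder for your job script):

<code bash>
sbatch -C scratch2 myjob.sh
</code>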

**-C fmz / -C fas**\\
The node has to be located at the respective site. This is very similar to -C scratch / -C scratch2, since the nodes at the FMZ have access to ''/scratch'' and those at the Fassberg location have access to ''/scratch2''. These options mainly exist for easy compatibility with our old partition naming scheme.

**-C [architecture]**\\
Request a specific CPU architecture. Available options are: abu-dhabi, ivy-bridge, haswell, broadwell. See [[en:services:application_services:high_performance_computing:start#hardware_overview|this table]] for the corresponding nodes.
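For example, to restrict a job to Broadwell nodes (''myjob.sh'' is again a placeholder for your job script):

<code bash>
sbatch -C broadwell myjob.sh
</code>

Slurm also allows combining constraints with ''&'', for example ''-C "scratch2&broadwell"''.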