<code>
#!/bin/bash
#SBATCH -p medium
#SBATCH -n 1
#SBATCH -t 1-00:00:00

INPUTFILE=test.inp

$ORCA_PATH/orca ${INPUTFILE}</code>
\\
This tells the batch system to submit the job to the medium partition and request one processor for 24 hours.
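The file test.inp is an ordinary ORCA input file. If you do not have one yet, a minimal serial single-point calculation could look like the following sketch (method, basis set and geometry are purely illustrative):

<code>
! HF def2-SVP

* xyz 0 1
O    0.000000    0.000000    0.000000
H    0.000000    0.000000    0.970000
H    0.940000    0.000000   -0.240000
*
</code>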
  
For parallel jobs, this needs a little trick, since ORCA can't run on shared filesystems like NFS, CVFS or FhGFS. We need to use /local as a local filesystem for the run:

<code>
#!/bin/bash
#SBATCH -p medium
#SBATCH -J ORCA
#SBATCH -n 20
#SBATCH -N 1
#SBATCH -t 1-00:00:00
#SBATCH --ntasks-per-node=20
#SBATCH --signal=B:12@600

INPUTFILE=test.inp

work=$PWD

# Signal 12 is sent to the batch shell 600 s before the time limit (see --signal above);
# the trap copies the results back from local scratch before the job is killed.
trap 'srun -n ${SLURM_JOB_NUM_NODES} --ntasks-per-node=1 cp -af ${TMP_LOCAL}/* ${work}/; exit 12' 12

# Stage the input file and any existing restart files (.gbw, .pot) to local scratch.
cp -af ${INPUTFILE} ${work}/*.gbw ${work}/*.pot ${TMP_LOCAL}/

cd $TMP_LOCAL

# Run ORCA in the background so the shell can react to signal 12 while ORCA is running.
$ORCA_PATH/orca ${INPUTFILE} &
wait

# Copy the results back to the submit directory.
srun -n ${SLURM_JOB_NUM_NODES} --ntasks-per-node=1 cp -af ${TMP_LOCAL}/* ${work}/ >/dev/null 2>&1
</code>
\\
This tells the batch system to submit the job to the medium partition and request 20 processors on one node for 24 hours. **Please make sure that your input file in this case (-n 20) contains the line '%pal nprocs 20 end' (without quotes)!** The number given in '%pal nprocs 20 end' must equal the number of processes you reserve with the '-n' option.
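For illustration, the header of a matching input file could look like the sketch below; B3LYP, def2-SVP and the coordinate file molecule.xyz are only placeholders, the essential part is that the %pal line matches the -n value:

<code>
! B3LYP def2-SVP Opt
%pal nprocs 20 end

* xyzfile 0 1 molecule.xyz
</code>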
  
Save the script as myjob.job, for example, and submit it with
  
<code>
sbatch myjob.job</code>
\\
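After submission, the job can be monitored with the usual Slurm commands, for example:

<code>
squeue -u $USER              # list your pending and running jobs
scontrol show job <jobid>    # detailed information about one job
</code>
\\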
[[Kategorie: Scientific Computing]]