

To run ORCA, you should register with the ORCA developers. For now we are allowed to have ORCA installed centrally in the SCC. This may change in the future!

Using ORCA

Log in via ssh to gwdu101 or gwdu102 and load the following modules:

module load openmpi/gcc/64/1.6.4
module load orca

For a serial job, create a jobscript like:

#!/bin/sh
#BSUB -q mpi
#BSUB -n 1
#BSUB -W 24:00

# run ORCA on your input file (myinput.inp is a placeholder name)
orca myinput.inp > myinput.out

This tells the batch system to submit the job to queue mpi and require 1 processor for 24 hours.

For parallel jobs a little trick is needed, since ORCA cannot run on shared filesystems such as NFS, CVFS, or FhGFS. We therefore use /local as a node-local filesystem for the run:

#!/bin/sh
#BSUB -q mpi
#BSUB -n 16
#BSUB -W 24:00
#BSUB -R span[ptile='!']
#BSUB -R same[model]
#BSUB -a intelmpi

# example definitions -- adjust the names and paths to your setup
INPUTFILE=myinput.inp              # your ORCA input file
work=$(pwd)                        # submission directory containing the input
TEMP=/local/${USER}/${LSB_JOBID}   # node-local scratch directory

# create the scratch directory on every node of the job
blaunch "mkdir -p ${TEMP} >/dev/null 2>&1"

# on termination (signal 12), save the results before exiting
trap 'blaunch "/bin/cp -af ${TEMP}/* ${work}/ >/dev/null 2>&1"; exit 12' 12

# copy the input file and any restart files (.gbw, .pot) to the scratch directory
/bin/cp -af ${INPUTFILE} ${work}/*.gbw ${work}/*.pot ${TEMP}/

cd ${TEMP}

# run ORCA; for parallel runs it must be invoked with its full path
$(which orca) ${INPUTFILE} > ${work}/${INPUTFILE}.out

# copy the results back and clean up the scratch directory
blaunch "/bin/cp -af ${TEMP}/* ${work}/ >/dev/null 2>&1"
cd ${work}

if [ "${TEMP#/local}" != "${TEMP}" ]; then
    blaunch "rm -rf ${TEMP}"
fi
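The final `if` is a safety guard before the `rm -rf`: `"${TEMP#/local}"` strips a leading `/local` from the value, so the result differs from `${TEMP}` only when the path really lies under /local. A small standalone sketch (`check_local` is a throwaway helper for illustration, not part of the jobscript):

```shell
# "${1#/local}" removes a leading "/local" prefix from $1; if the result
# differs from $1, the path is under /local and is safe to delete.
check_local() {
    if [ "${1#/local}" != "$1" ]; then
        echo "$1 is under /local"
    else
        echo "$1 is NOT under /local -- refusing to delete"
    fi
}

check_local /local/jdoe/12345    # prints: /local/jdoe/12345 is under /local
check_local /home/jdoe/results   # prints: ... is NOT under /local -- refusing to delete
```

This keeps a mis-set `${TEMP}` (for example, one pointing into your home directory) from being deleted across all nodes.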

This tells the batch system to submit the job to queue mpi and to reserve 16 processors for 24 hours. Please make sure that your input file in this case (-n 16) contains the line '%pal nprocs 16 end' (without quotes)! The number given after nprocs must equal the number of processes you reserve with the '-n' option.
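For illustration, the snippet below writes a minimal ORCA input with a %pal line matching the 16 cores requested via -n, then extracts the nprocs value so it can be checked against the -n setting. The method and molecule are placeholders, not a recommendation:

```shell
# write a minimal ORCA input; the %pal line must match "#BSUB -n 16"
cat > myinput.inp <<'EOF'
! BP86 def2-SVP
%pal nprocs 16 end
* xyz 0 1
H 0.0 0.0 0.0
H 0.0 0.0 0.74
*
EOF

# extract the nprocs value to double-check it against -n
grep -o 'nprocs [0-9]*' myinput.inp   # prints: nprocs 16
```

If the two numbers disagree, ORCA will try to start more (or fewer) processes than the batch system reserved.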

Save the script as myjob.job, for example, and submit with

bsub < myjob.job

