Orca

To run ORCA, you need to register at https://orcaforum.cec.mpg.de/. At present we are permitted to provide a central ORCA installation in the SCC; this may change in the future!

Using ORCA

Log in via ssh to gwdu101 or gwdu102 and load the following modules:

module load openmpi/gcc/64/1.6.4
module load orca
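
As a quick sanity check before submitting (this is only a sketch, assuming the orca module sets the ORCA_PATH variable that is used in the job scripts below), you can confirm the modules are loaded and the binary is found:

module list
echo $ORCA_PATH
ls $ORCA_PATH/orca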


For a serial job, create a jobscript like:

#!/bin/bash
#SBATCH -p medium
#SBATCH -n 1
#SBATCH -t 1-00:00:00

INPUTFILE=test.inp

$ORCA_PATH/orca ${INPUTFILE}


This tells the batch system to submit the job to the partition medium and to request one core for 24 hours.
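
The job script assumes an ORCA input file named test.inp in the submission directory. As a minimal sketch (method, basis set, and geometry are placeholders only, not a recommendation), such a file could look like:

! HF def2-SVP
* xyz 0 1
O    0.000000    0.000000    0.000000
H    0.000000    0.000000    0.970000
H    0.939000    0.000000   -0.243000
*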

For parallel jobs a little trick is needed, since ORCA cannot run on shared file systems such as NFS, CVFS, or FhGFS. We therefore use /local as a node-local file system for the run:

#!/bin/bash
#SBATCH -p medium
#SBATCH -J ORCA
#SBATCH -n 20
#SBATCH -N 1
#SBATCH -t 1-00:00:00
#SBATCH --ntasks-per-node=20
#SBATCH --signal=B:12@600

INPUTFILE=test.inp

work=$PWD

# Signal 12 is sent to the job script 600 s before the time limit
# (see --signal above). On receiving it, copy the results back from
# the node-local directory and exit.
trap 'srun -n ${SLURM_JOB_NUM_NODES} --ntasks-per-node=1 cp -af ${TMP_LOCAL}/* ${work}/; exit 12' 12

# Copy the input file and any existing restart files (*.gbw, *.pot)
# to the node-local directory and run ORCA there.
cp -af ${INPUTFILE} ${work}/*.gbw ${work}/*.pot ${TMP_LOCAL}/

cd $TMP_LOCAL

# Run ORCA in the background and wait for it, so that the trap above
# can fire while ORCA is still running.
$ORCA_PATH/orca ${INPUTFILE} &
wait

# Copy the results back to the submission directory.
srun -n ${SLURM_JOB_NUM_NODES} --ntasks-per-node=1 cp -af ${TMP_LOCAL}/* ${work}/ >/dev/null 2>&1


This tells the batch system to submit the job to the partition medium and to request 20 cores on one node for 24 hours. Please make sure that in this case (-n 20) your input file contains the line '%pal nprocs 20 end' (without quotes). The number given after nprocs must equal the number of tasks you request with the '-n' option.
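
For illustration, the first lines of a matching parallel input file could look like this (the keyword line is a placeholder; only the %pal line has to match -n 20, and the geometry block follows as in the serial example above):

! HF def2-SVP
%pal nprocs 20 end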

Save the script as myjob.job, for example, and submit with

sbatch myjob.job
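
After submission you can follow the job with the standard Slurm commands, for example (the job ID is the one printed by sbatch):

squeue -u $USER      # list your pending and running jobs
scancel <jobid>      # cancel a job if necessary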

