Examples of SLURM scripts
Here are examples of simple SLURM scripts for running serial, OpenMP and MPI jobs.
Note that a very nice introduction to running SLURM scripts can be found at https://hpc.ku.dk/documentation/slurm.html.
Simple Batch Script For Running Serial Programs
Submission scripts are really just shell scripts (here we use bash syntax) with a few additional option specifications at the beginning. These are prefixed with "#SBATCH" and otherwise use the same keywords and syntax as the command-line options described for the sbatch command. The following script is a minimal example:
#!/bin/bash
#SBATCH --job-name=isothermal    # shows up in the output of 'squeue'
#SBATCH --partition=astro_devel  # specify the partition to run on

srun /bin/echo "Hello World!"
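Once the script is saved to a file, it can be submitted with sbatch and monitored with squeue; a minimal workflow might look as follows (the file name serial.sh is just a placeholder):

sbatch serial.sh      # submit the script; prints the assigned job ID
squeue -u $USER       # list your queued and running jobs
scancel <jobid>       # cancel a job, if needed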
Specifying Mail Notifications And Managing Output And Error Files
You can enable e-mail notification on various events; this is specified via the --mail-type option, which takes the following values: BEGIN, END, FAIL, REQUEUE and ALL.
#SBATCH --mail-type=FAIL
#SBATCH --output=/astro/username/que/job.%J.out
#SBATCH --error=/astro/username/que/job.%J.err
The standard output and error streams can be written to specific files with the --output and --error options. By default, both streams are directed to a file named slurm-%j.out, where %j is replaced with the job number.
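Note that --mail-type only selects which events trigger a message; the destination address is given with the sbatch option --mail-user. A minimal sketch (the address below is a placeholder, not a site default):

#SBATCH --mail-type=END,FAIL               # notify when the job finishes or fails
#SBATCH --mail-user=your.name@example.com  # placeholder address; use your own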
Simple Batch Script For Running MPI-Parallel Jobs
Note: The following example script assumes that you submit the script from the directory where the code will be executed (the path to that directory is stored in the $SLURM_SUBMIT_DIR environment variable, and is where SLURM will execute the script from).
#!/bin/bash
#
# SLURM resource specifications
# (use an extra '#' in front of SBATCH to comment-out any unused options)
#
#SBATCH --job-name=isothermal   # shows up in the output of 'squeue'
#SBATCH --time=4-23:59:59       # specify the requested wall-time
#SBATCH --partition=astro_long  # specify the partition to run on
#SBATCH --nodes=4               # number of nodes allocated for this job
#SBATCH --ntasks-per-node=20    # number of MPI ranks per node
#SBATCH --cpus-per-task=1       # number of OpenMP threads per MPI rank
##SBATCH --exclude=<node list>  # avoid nodes (e.g. --exclude=node786)

# Load default settings for environment variables
module load astro

# If required, replace specific modules
# module unload intelmpi
# module load mvapich2

# When compiling remember to use the same environment and modules

# Execute the code
srun <executable> [args...]
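This request gives 4 nodes with 20 MPI ranks each, i.e. 80 ranks in total. As a quick sanity check of the allocation (a sketch, not part of the original example), the executable can temporarily be replaced by hostname to see how the ranks are spread over the nodes:

srun hostname | sort | uniq -c   # prints each allocated node with the number of ranks placed on it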
Simple Batch Script For Jobs Using OpenMP Only
Note: The following example script assumes that you submit the script from the directory where the code will be executed (the path to that directory is stored in the $SLURM_SUBMIT_DIR environment variable).
Note: By choosing --cpus-per-task=40 along with --threads-per-core=2, you have assumed that your program will take advantage of hyper-threading. If this is not the case, or you are uncertain, use --cpus-per-task=20 along with --threads-per-core=1, and your program will be executed with 20 threads.
#!/bin/bash
#
# Define SLURM resource specifications
# (use an extra '#' in front of SBATCH to comment-out any unused options)
#
#SBATCH --job-name=isothermal   # shows up in the output of 'squeue'
#SBATCH --time=4-23:59:59       # specify the requested wall-time
#SBATCH --partition=astro_long  # specify the partition to run on
#SBATCH --cpus-per-task=40      # total number of threads
#SBATCH --threads-per-core=2    # threads per core (hyper-threading)
##SBATCH --exclude=<node list>  # avoid nodes (e.g. --exclude=node786)

# Load default settings for environment variables
module load astro

# OpenMP affinity
# no hyperthreading
# export KMP_AFFINITY="granularity=core,scatter,1,0"
# hyperthreading
export KMP_AFFINITY="granularity=thread,scatter,1,0"

# When compiling remember to use the same environment

# Execute the code
srun --cpu_bind=threads <executable> [args...]
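The script above relies on the OpenMP runtime and the thread binding to pick up the thread count. If your program instead reads OMP_NUM_THREADS, it can be derived from the allocation; a minimal sketch (assuming --cpus-per-task is set as above, so that SLURM_CPUS_PER_TASK is defined), placed before the srun line:

# Derive the OpenMP thread count from the SLURM allocation
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-20}   # fall back to 20 threads if the variable is unset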
Hybrid MPI + OpenMP Batch Script
Note: The following example script assumes that you submit the script from the directory where the code will be executed (the path to that directory is stored in the $SLURM_SUBMIT_DIR environment variable, and is where SLURM will execute the script from).
#!/bin/bash
#
# Define SLURM resource specifications
# (use an extra '#' in front of SBATCH to comment-out any unused options)
#
#SBATCH --job-name=isothermal   # shows up in the output of 'squeue'
#SBATCH --time=4-23:59:59       # specify the requested wall-time
#SBATCH --partition=astro_long  # specify the partition to run on
#SBATCH --nodes=32              # number of nodes allocated for this job
#SBATCH --ntasks-per-node=8     # lower than the usual 20 for MPI only
#SBATCH --cpus-per-task=5       # number of CPUs per MPI rank
#SBATCH --threads-per-core=2    # threads per core (hyper-threading)
##SBATCH --exclude=<node list>  # avoid nodes (e.g. --exclude=node786)

# Load default settings for environment variables
module load astro

# OpenMP affinity
export KMP_AFFINITY="granularity=thread,scatter,1,0"

# If required, replace specific modules
# module unload intelmpi
# module load mvapich2

# When compiling remember to use the same environment and modules

# Execute the code
cd $SLURM_SUBMIT_DIR
srun --cpu_bind=threads <executable> [args...]
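For reference, the per-node arithmetic is: 8 MPI ranks x 5 CPUs per rank = 40 hardware threads, i.e. the 20 physical cores with 2 hyper-threads each. An optional sketch (assuming the script above is used unchanged) that sets the per-rank thread count and records the job geometry in the output file, placed just before the srun line:

# Each MPI rank runs SLURM_CPUS_PER_TASK OpenMP threads (5 in this example)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Record the job geometry for later reference
echo "nodes=$SLURM_JOB_NUM_NODES ranks/node=$SLURM_NTASKS_PER_NODE threads/rank=$OMP_NUM_THREADS"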