<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.nbi.ku.dk/w/tycho/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Philip</id>
	<title>Tycho - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.nbi.ku.dk/w/tycho/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Philip"/>
	<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/tycho/Special:Contributions/Philip"/>
	<updated>2026-04-03T18:23:37Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.7</generator>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Examples_of_SLURM_scripts&amp;diff=189</id>
		<title>Examples of SLURM scripts</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Examples_of_SLURM_scripts&amp;diff=189"/>
		<updated>2023-11-15T14:47:32Z</updated>

		<summary type="html">&lt;p&gt;Philip: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are examples of simple SLURM scripts for running serial, OpenMP and MPI jobs.&lt;br /&gt;
&lt;br /&gt;
Note that a very nice introduction to running SLURM scripts exists at [https://hpc.ku.dk/documentation/slurm.html SCIENCE HPC SLURM].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt; &lt;br /&gt;
Simple Batch Script For Running Serial Programs &lt;br /&gt;
&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Submission scripts are really just shell scripts (here we use bash syntax) with a few additional resource specifications at the beginning. These are prefixed with &amp;quot;#SBATCH&amp;quot; and otherwise use the same keywords and syntax as the command line options described for the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command. The following script presents a minimal example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=isothermal     # shows up in the output of &#039;squeue&#039;&lt;br /&gt;
#SBATCH --partition=astro_devel   # specify the partition to run on&lt;br /&gt;
srun  /bin/echo &amp;quot;Hello World!&amp;quot; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Specifying Mail Notifications And Managing Output And Error Files:&lt;br /&gt;
You can enable e-mail notification for various events via the &amp;lt;code&amp;gt;--mail-type&amp;lt;/code&amp;gt;&lt;br /&gt;
option, which takes the following values: &amp;lt;code&amp;gt;BEGIN&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;END&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;FAIL&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;REQUEUE&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;ALL&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --mail-type=FAIL&lt;br /&gt;
#SBATCH --output=/astro/username/que/job.%J.out&lt;br /&gt;
#SBATCH --error=/astro/username/que/job.%J.err&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The standard output and error streams can be written to specific files with the &amp;lt;code&amp;gt;--output&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--error&amp;lt;/code&amp;gt; options. By default, both streams are directed to a single file named &amp;lt;code&amp;gt;slurm-%j.out&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;%j&amp;lt;/code&amp;gt; is replaced with the job number.&lt;br /&gt;
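As a sketch of the default naming rule described above, the %j substitution can be mimicked in plain bash; the job id 123456 below is a made-up example value, not anything SLURM produced.

```shell
# Mimic SLURM's default output-file naming: %j is replaced by the job id.
# The job id used here is an illustrative, made-up value.
pattern='slurm-%j.out'
jobid=123456
outfile=$(printf '%s\n' "$pattern" | sed "s/%j/$jobid/")
echo "$outfile"   # slurm-123456.out
```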
&lt;br /&gt;
&amp;lt;h2&amp;gt;&lt;br /&gt;
Simple Batch Script For Running MPI-Parallel Jobs&lt;br /&gt;
&amp;lt;/h2&amp;gt;&lt;br /&gt;
Note: The following example script assumes that you submit the script from the directory where the code will be executed (the path to that directory is stored in the&lt;br /&gt;
&amp;lt;code&amp;gt;$SLURM_SUBMIT_DIR&amp;lt;/code&amp;gt; environment variable, and is where SLURM will execute the script from).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#&lt;br /&gt;
# SLURM resource specifications&lt;br /&gt;
# (use an extra &#039;#&#039; in front of SBATCH to comment-out any unused options)&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --job-name=isothermal   # shows up in the output of &#039;squeue&#039;&lt;br /&gt;
#SBATCH --time=4-23:59:59       # specify the requested wall-time&lt;br /&gt;
#SBATCH --partition=astro_long  # specify the partition to run on&lt;br /&gt;
#SBATCH --nodes=4               # number of nodes allocated for this job&lt;br /&gt;
#SBATCH --ntasks-per-node=20    # number of MPI ranks per node&lt;br /&gt;
#SBATCH --cpus-per-task=1       # number of OpenMP threads per MPI rank&lt;br /&gt;
##SBATCH --exclude=&amp;lt;node list&amp;gt;  # avoid nodes (e.g. --exclude=node786)&lt;br /&gt;
&lt;br /&gt;
# Load default settings for environment variables&lt;br /&gt;
module load astro&lt;br /&gt;
&lt;br /&gt;
# If required, replace specific modules&lt;br /&gt;
# module unload intelmpi&lt;br /&gt;
# module load mvapich2&lt;br /&gt;
&lt;br /&gt;
# When compiling remember to use the same environment and modules&lt;br /&gt;
&lt;br /&gt;
# Execute the code&lt;br /&gt;
srun &amp;lt;executable&amp;gt; [args...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
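With the settings in the script above, SLURM launches nodes times ntasks-per-node MPI ranks in total, i.e. 4 x 20 = 80. A quick sanity check of the requested geometry in plain bash:

```shell
# Sanity check: total MPI ranks = nodes * tasks per node.
# The values mirror the #SBATCH lines in the example script above.
nodes=4
ntasks_per_node=20
total_ranks=$((nodes * ntasks_per_node))
echo "$total_ranks"   # 80
```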
&lt;br /&gt;
&amp;lt;h2&amp;gt;&lt;br /&gt;
Simple Batch Script For Jobs Using OpenMP Only&lt;br /&gt;
&amp;lt;/h2&amp;gt;&lt;br /&gt;
Note: The following example script assumes that you submit the script from the directory where the code will be executed (the path to that directory is stored in the&lt;br /&gt;
&amp;lt;code&amp;gt;$SLURM_SUBMIT_DIR&amp;lt;/code&amp;gt; environment variable).&lt;br /&gt;
&lt;br /&gt;
Note: Choosing &amp;lt;code&amp;gt;--cpus-per-task=40&amp;lt;/code&amp;gt; together with &amp;lt;code&amp;gt;--threads-per-core=2&amp;lt;/code&amp;gt; assumes that your program can take advantage of [https://en.wikipedia.org/wiki/Hyper-threading hyper-threading]. If this is not the case, or you are uncertain, use &amp;lt;code&amp;gt;--cpus-per-task=20&amp;lt;/code&amp;gt; together with &amp;lt;code&amp;gt;--threads-per-core=1&amp;lt;/code&amp;gt;, and your program will be executed with 20 threads.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#&lt;br /&gt;
# Define SLURM resource specifications&lt;br /&gt;
# (use an extra &#039;#&#039; in front of SBATCH to comment-out any unused options)&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --job-name=isothermal   # shows up in the output of &#039;squeue&#039;&lt;br /&gt;
#SBATCH --time=4-23:59:59       # specify the requested wall-time&lt;br /&gt;
#SBATCH --partition=astro_long  # specify the partition to run on&lt;br /&gt;
#SBATCH --cpus-per-task=40      # total number of threads&lt;br /&gt;
#SBATCH --threads-per-core=2     # threads per core (hyper-threading)&lt;br /&gt;
##SBATCH --exclude=&amp;lt;node list&amp;gt;  # avoid nodes (e.g. --exclude=node786)&lt;br /&gt;
&lt;br /&gt;
# Load default settings for environment variables&lt;br /&gt;
module load astro&lt;br /&gt;
&lt;br /&gt;
# OpenMP affinity&lt;br /&gt;
# no hyperthreading&lt;br /&gt;
# export KMP_AFFINITY=&amp;quot;granularity=core,scatter,1,0&amp;quot;&lt;br /&gt;
# hyperthreading&lt;br /&gt;
export KMP_AFFINITY=&amp;quot;granularity=thread,scatter,1,0&amp;quot; &lt;br /&gt;
&lt;br /&gt;
# When compiling remember to use the same environment&lt;br /&gt;
&lt;br /&gt;
# Execute the code&lt;br /&gt;
srun --cpu_bind=threads &amp;lt;executable&amp;gt; [args...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
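The KMP_AFFINITY variable in the script above is specific to Intel's OpenMP runtime. As a sketch of a portable addition (not part of the original recipe, and site-dependent), the OpenMP thread count can be derived from SLURM's own SLURM_CPUS_PER_TASK variable, which is only set inside a job:

```shell
# Optional, site-dependent sketch: hand the per-task CPU count on to the
# OpenMP runtime. SLURM_CPUS_PER_TASK is only defined inside a SLURM job,
# so fall back to 1 when running outside the queueing system.
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
echo "$OMP_NUM_THREADS"
```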
&lt;br /&gt;
&amp;lt;h2&amp;gt;&lt;br /&gt;
Hybrid MPI + OpenMP Batch Script&lt;br /&gt;
&amp;lt;/h2&amp;gt;&lt;br /&gt;
Note: The following example script assumes that you submit the script from the directory where the code will be executed (the path to that directory is stored in the&lt;br /&gt;
&amp;lt;code&amp;gt;$SLURM_SUBMIT_DIR&amp;lt;/code&amp;gt; environment variable, and is where SLURM will execute the script from).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#&lt;br /&gt;
# Define SLURM resource specifications&lt;br /&gt;
# (use an extra &#039;#&#039; in front of SBATCH to comment-out any unused options)&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --job-name=isothermal   # shows up in the output of &#039;squeue&#039;&lt;br /&gt;
#SBATCH --time=4-23:59:59       # specify the requested wall-time&lt;br /&gt;
#SBATCH --partition=astro_long  # specify the partition to run on&lt;br /&gt;
#SBATCH --nodes=32              # number of nodes allocated for this job&lt;br /&gt;
#SBATCH --ntasks-per-node=8     # lower than the usual 20 for MPI only &lt;br /&gt;
#SBATCH --cpus-per-task=5       # number of CPUs per MPI rank&lt;br /&gt;
#SBATCH --threads-per-core=2     # threads per core (hyper-threading)&lt;br /&gt;
##SBATCH --exclude=&amp;lt;node list&amp;gt;  # avoid nodes (e.g. --exclude=node786)&lt;br /&gt;
&lt;br /&gt;
# Load default settings for environment variables&lt;br /&gt;
module load astro&lt;br /&gt;
&lt;br /&gt;
# OpenMP affinity&lt;br /&gt;
export KMP_AFFINITY=&amp;quot;granularity=thread,scatter,1,0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# If required, replace specific modules&lt;br /&gt;
# module unload intelmpi&lt;br /&gt;
# module load mvapich2&lt;br /&gt;
&lt;br /&gt;
# When compiling remember to use the same environment and modules&lt;br /&gt;
&lt;br /&gt;
# Execute the code&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
srun --cpu_bind=threads &amp;lt;executable&amp;gt; [args...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Philip</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Running_batch_jobs&amp;diff=186</id>
		<title>Running batch jobs</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Running_batch_jobs&amp;diff=186"/>
		<updated>2023-11-15T14:45:33Z</updated>

		<summary type="html">&lt;p&gt;Philip: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h1&amp;gt;&lt;br /&gt;
Submit Jobs Via The SLURM Queueing System&lt;br /&gt;
&amp;lt;/h1&amp;gt;&lt;br /&gt;
This is the preferred method of submitting batch jobs to the cluster queueing system and of running jobs interactively.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;&lt;br /&gt;
Most Important Queue Commands:&lt;br /&gt;
&amp;lt;/h2&amp;gt;&lt;br /&gt;
Here we list the most commonly used queueing commands. If you are migrating from a different scheduling system, this [https://slurm.schedmd.com/rosetta.pdf cheat sheet] may be useful. There is also a compact [https://slurm.schedmd.com/pdfs/summary.pdf two-page overview] of the most important commands.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
Use The &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; Command To Display Information About Available Resources:&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
If you use the command without any options, it will display all available partitions. Use the &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; switch to select a specific partition, for instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
astro06:&amp;gt; sinfo -p astro2&lt;br /&gt;
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST&lt;br /&gt;
astro2       up 10-00:00:0      1  down* node458&lt;br /&gt;
astro2       up 10-00:00:0     13  alloc node[454-457,459-462,480-481]&lt;br /&gt;
astro2       up 10-00:00:0     18   idle node[463-479,482]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The command displays how many nodes in the partition are offline (&amp;lt;code&amp;gt;down&amp;lt;/code&amp;gt;), busy (&amp;lt;code&amp;gt;alloc&amp;lt;/code&amp;gt;) and still available (&amp;lt;code&amp;gt;idle&amp;lt;/code&amp;gt;). For each sub-category, a &amp;lt;code&amp;gt;NODELIST&amp;lt;/code&amp;gt; is displayed. The &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column shows the maximum job duration allowed for the partition in &amp;lt;code&amp;gt;days-hours:minutes:seconds&amp;lt;/code&amp;gt; format. You can find more information about how to use the [https://slurm.schedmd.com/sinfo.html sinfo] command on the official [https://slurm.schedmd.com/man_index.html SLURM man pages].&lt;br /&gt;
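The sinfo output is also easy to post-process. As an illustrative sketch (working on the sample output shown above rather than a live cluster), the idle-node count can be summed with awk:

```shell
# Sum the NODES column for lines whose STATE is 'idle'. The sample text
# mirrors the sinfo output shown above; in practice you would pipe the
# output of 'sinfo -p astro2' into awk instead.
sample='PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
astro2       up 10-00:00:0      1  down* node458
astro2       up 10-00:00:0     13  alloc node[454-457,459-462,480-481]
astro2       up 10-00:00:0     18   idle node[463-479,482]'
idle_nodes=$(printf '%s\n' "$sample" | awk '$5 == "idle" {sum += $4} END {print sum+0}')
echo "$idle_nodes"   # 18
```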
&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
Use The &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; Command To Display Information About Scheduled Jobs:&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
astro06:&amp;gt; squeue -p astro_long&lt;br /&gt;
 JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
 566136    astro_long jobname1 username  R      47:22      1 node485&lt;br /&gt;
 566135    astro_long jobname2 username  R      54:22      1 node481&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The command displays a table with useful information. Use the &amp;lt;code&amp;gt;JOBID&amp;lt;/code&amp;gt; of your job to modify or cancel a scheduled or already running job (see below). The status column (&amp;lt;code&amp;gt;ST&amp;lt;/code&amp;gt;) shows the state of the queued job, the letters stand for: &amp;lt;code&amp;gt;PD&amp;lt;/code&amp;gt; (pending), &amp;lt;code&amp;gt;R&amp;lt;/code&amp;gt; (running), &amp;lt;code&amp;gt;CA&amp;lt;/code&amp;gt; (cancelled), &amp;lt;code&amp;gt;CG&amp;lt;/code&amp;gt; (completing), &amp;lt;code&amp;gt;CD&amp;lt;/code&amp;gt; (completed), &amp;lt;code&amp;gt;F&amp;lt;/code&amp;gt; (failed), &amp;lt;code&amp;gt;TO&amp;lt;/code&amp;gt; (timeout), and &amp;lt;code&amp;gt;NF&amp;lt;/code&amp;gt; (node failure). &lt;br /&gt;
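The state codes listed above can be turned into a small helper for your own monitoring scripts; this is an illustrative sketch, not a SLURM feature:

```shell
# Map squeue's two-letter state codes to readable names.
state_name() {
  case "$1" in
    PD) echo "pending" ;;
    R)  echo "running" ;;
    CA) echo "cancelled" ;;
    CG) echo "completing" ;;
    CD) echo "completed" ;;
    F)  echo "failed" ;;
    TO) echo "timeout" ;;
    NF) echo "node failure" ;;
    *)  echo "unknown" ;;
  esac
}
state_name R   # prints: running
```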
&lt;br /&gt;
Useful command line switches for &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; include &amp;lt;code&amp;gt;-u&amp;lt;/code&amp;gt; (or &amp;lt;code&amp;gt;--user&amp;lt;/code&amp;gt;) to list only jobs that belong to a specific user. You can find more information about how to use the [https://slurm.schedmd.com/squeue.html squeue] command on the official [https://slurm.schedmd.com/man_index.html SLURM man pages].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
Use The &amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; Command To Cancel A Scheduled Or Running Job:&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
astro06:&amp;gt; scancel 566136&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can find more information about how to use the [https://slurm.schedmd.com/scancel.html scancel] command on the official [https://slurm.schedmd.com/man_index.html SLURM man pages].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
Use The &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; Command To Run Jobs Interactively:&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
You can run serial, OpenMP- or MPI-parallel code interactively using the &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; command. Always make sure to specify the partition to run on via the &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; command line switch. When running an MPI job, you can use the &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; switch to specify the number of MPI tasks that you require. Command line arguments for your program can be passed at the end.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
astro06:&amp;gt; srun -p astro_devel -n 20 &amp;lt;executable&amp;gt; [args...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can find more information about how to use the [https://slurm.schedmd.com/srun.html srun] command on the official [https://slurm.schedmd.com/man_index.html SLURM man pages].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
Use The &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; Command To Queue A Job Via A Submission Script:&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
astro06:&amp;gt; sbatch [additional options] job-submission-script.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can find more information about how to use the [https://slurm.schedmd.com/sbatch.html sbatch] command on the official [https://slurm.schedmd.com/man_index.html SLURM man pages].&lt;/div&gt;</summary>
		<author><name>Philip</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Running_batch_jobs&amp;diff=181</id>
		<title>Running batch jobs</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Running_batch_jobs&amp;diff=181"/>
		<updated>2023-11-15T14:39:08Z</updated>

		<summary type="html">&lt;p&gt;Philip: Created page with &amp;quot;&amp;lt;h1&amp;gt; Submit Jobs Via The SLURM Queueing System &amp;lt;/h1&amp;gt; This is the preferred method of submitting batch jobs to the cluster queueing system and to run jobs interactively.  &amp;lt;h2&amp;gt; Most Important Queue Commands: &amp;lt;/h2&amp;gt; Here we list the most commonly used queueing commands. If you are migrating from a different scheduling system, this cheat sheet may be useful for you. There also exists a compact two-page overview of the most important commands.  &amp;lt;h3&amp;gt; Use The &amp;lt;code&amp;gt;&amp;#039;Sinfo&amp;#039;&amp;lt;/code...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;h1&amp;gt;&lt;br /&gt;
Submit Jobs Via The SLURM Queueing System&lt;br /&gt;
&amp;lt;/h1&amp;gt;&lt;br /&gt;
This is the preferred method of submitting batch jobs to the cluster queueing system and to run jobs interactively.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;&lt;br /&gt;
Most Important Queue Commands:&lt;br /&gt;
&amp;lt;/h2&amp;gt;&lt;br /&gt;
Here we list the most commonly used queueing commands. If you are migrating from a different scheduling system, this cheat sheet may be useful for you. There also exists a compact two-page overview of the most important commands.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
Use The &amp;lt;code&amp;gt;&#039;Sinfo&#039;&amp;lt;/code&amp;gt; Command To Display Information About Available Resources:&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
If you use the command without any options, it will display all available partitions. Use the &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; switch to select a specific partition, for instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
astro06:&amp;gt; sinfo -p astro2&lt;br /&gt;
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST&lt;br /&gt;
astro2       up 10-00:00:0      1  down* node458&lt;br /&gt;
astro2       up 10-00:00:0     13  alloc node[454-457,459-462,480-481]&lt;br /&gt;
astro2       up 10-00:00:0     18   idle node[463-479,482]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The command displays how many nodes in the partition are offline (&amp;lt;code&amp;gt;down&amp;lt;/code&amp;gt;), busy (&amp;lt;code&amp;gt;alloc&amp;lt;/code&amp;gt;) and still available (&amp;lt;code&amp;gt;idle&amp;lt;/code&amp;gt;). For each sub-category, a &amp;lt;code&amp;gt;NODELIST&amp;lt;/code&amp;gt; is displayed. The &amp;lt;code&amp;gt;TIMELIMIT&amp;lt;/code&amp;gt; column shows the maximum job duration allowed for the partition in &amp;lt;code&amp;gt;days-hours:minutes:seconds&amp;lt;/code&amp;gt; format. You can find more information about how to use the &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt; command on the official SLURM man pages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
Use The &#039;Squeue&#039; Command To Display Information About Scheduled Jobs:&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
astro06:&amp;gt; squeue -p astro_long&lt;br /&gt;
 JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
 566136    astro_long jobname1 username  R      47:22      1 node485&lt;br /&gt;
 566135    astro_long jobname2 username  R      54:22      1 node481&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The command displays a table with useful information. Use the &amp;lt;code&amp;gt;JOBID&amp;lt;/code&amp;gt; of your job to modify or cancel a scheduled or already running job (see below). The status column (&amp;lt;code&amp;gt;ST&amp;lt;/code&amp;gt;) shows the state of the queued job, the letters stand for: &amp;lt;code&amp;gt;PD&amp;lt;/code&amp;gt; (pending), &amp;lt;code&amp;gt;R&amp;lt;/code&amp;gt; (running), &amp;lt;code&amp;gt;CA&amp;lt;/code&amp;gt; (cancelled), &amp;lt;code&amp;gt;CG&amp;lt;/code&amp;gt; (completing), &amp;lt;code&amp;gt;CD&amp;lt;/code&amp;gt; (completed), &amp;lt;code&amp;gt;F&amp;lt;/code&amp;gt; (failed), &amp;lt;code&amp;gt;TO&amp;lt;/code&amp;gt; (timeout), and &amp;lt;code&amp;gt;NF&amp;lt;/code&amp;gt; (node failure). &lt;br /&gt;
&lt;br /&gt;
Useful command line switches for &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; include &amp;lt;code&amp;gt;-u&amp;lt;/code&amp;gt; (or &amp;lt;code&amp;gt;--users&amp;lt;/code&amp;gt;) for only listing jobs that belong to a specific user. You can find more information about how to use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command on the official SLURM man pages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
Use The &#039;Scancel&#039; Command To Cancel A Scheduled Or Running Job:&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
astro06:&amp;gt; scancel 566136&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can find more information about how to use the scancel command on the official SLURM man pages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
Use The &#039;Srun&#039; Command To Run Jobs Interactively:&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
You can run serial, OpenMP- or MPI-parallel code interactively using the &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; command. Always make sure to specify the partition to run on via the &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt; command line switch. When running an MPI job, you can use the &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; switch to specify the number of MPI tasks that you require. Command line arguments for your program can be passed at the end.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
astro06:&amp;gt; srun -p astro_devel -n 20 &amp;lt;executable&amp;gt; [args...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can find more information about how to use the srun command on the official SLURM man pages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
Use The &#039;Sbatch&#039; Command To Queue A Job Via A Submission Script:&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
astro06:&amp;gt; sbatch [additional options] job-submission-script.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can find more information about how to use the sbatch command on the official SLURM man pages.&lt;/div&gt;</summary>
		<author><name>Philip</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=173</id>
		<title>Tycho Technical Documentation</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=173"/>
		<updated>2023-11-15T14:25:03Z</updated>

		<summary type="html">&lt;p&gt;Philip: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the technical documentation for the Tycho high performance computing resources. See [https://nbi.ku.dk/english/research_infrastructure/tycho-supercomputer/ the Infrastructure page] at NBI for a non-technical overview of the cluster.&lt;br /&gt;
&lt;br /&gt;
Tycho contains in total more than 10,000 CPU cores and 13 data center class GPUs. The cluster is complemented by a 1,300 TB data storage archive and a number of powerful analysis machines used as frontends for the cluster and for pre- and post-processing. Current peak performance is 443 TFlops from the CPUs and 93 TFlops from the GPUs.&lt;br /&gt;
&lt;br /&gt;
Tycho is hosted at the [http://www.dcsc.ku.dk/ High Performance Computing Center] at the Faculty of SCIENCE, University of Copenhagen.&lt;br /&gt;
&lt;br /&gt;
===First steps===&lt;br /&gt;
Please visit the [[first steps]] page to get started.&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
* [[Acknowledging the use of Tycho in articles and presentations]] &lt;br /&gt;
* [[Getting Help]]&lt;br /&gt;
* [[Mattermost discussion forum]]&lt;br /&gt;
* [[Being a good HPC user]]&lt;br /&gt;
* [[Accessing Tycho]]&lt;br /&gt;
* [[Using Jupyter notebooks on the frontends]]&lt;br /&gt;
* [[Visual Studio Remote Development]]&lt;br /&gt;
* [[Virtual Desktop]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Erda]]&lt;br /&gt;
* [[Using GPUs]]&lt;br /&gt;
* [[Compilers]]&lt;br /&gt;
* [[Debugging and Profiling]]&lt;br /&gt;
* [[MPI Libraries]]&lt;br /&gt;
* [[Running batch jobs]]&lt;br /&gt;
* [[Examples of SLURM scripts]]&lt;br /&gt;
* [[Codes]]&lt;br /&gt;
* [[Adding a second IP Address]]&lt;br /&gt;
* [[Setting up One-Time-Password Access before travelling]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Scientific Software===&lt;br /&gt;
&lt;br /&gt;
* [[Module system]]&lt;br /&gt;
* [[Running Mathematica on compute nodes]]&lt;br /&gt;
* [[GRChombo]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*************************&lt;br /&gt;
&lt;br /&gt;
Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User&#039;s Guide] for information on using the wiki software.&lt;br /&gt;
&lt;br /&gt;
== Getting started ==&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]&lt;br /&gt;
* [https://lists.wikimedia.org/postorius/lists/mediawiki-announce.lists.wikimedia.org/ MediaWiki release mailing list]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]&lt;/div&gt;</summary>
		<author><name>Philip</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=172</id>
		<title>Tycho Technical Documentation</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=172"/>
		<updated>2023-11-15T14:24:41Z</updated>

		<summary type="html">&lt;p&gt;Philip: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the technical documentation for the Tycho high performance computing resources. See [https://nbi.ku.dk/english/research_infrastructure/tycho-supercomputer/ the Infrastructure page] at NBI for a non-technical overview of the cluster.&lt;br /&gt;
&lt;br /&gt;
Tycho contains in total more than 10,000 CPU cores and 13 data center class GPUs. The cluster is complemented by a 1,300 TB data storage archive and a number of powerful analysis machines used as frontends for the cluster and for pre- and post-processing. Current peak performance is 443 TFlops from the CPUs and 93 TFlops from the GPUs.&lt;br /&gt;
&lt;br /&gt;
Tycho is hosted at the [http://www.dcsc.ku.dk/ High Performance Computing Center] at the Faculty of SCIENCE, University of Copenhagen.&lt;br /&gt;
&lt;br /&gt;
===First steps===&lt;br /&gt;
Please visit the [[first steps]] page to get started&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
* [[Acknowledging the use of Tycho in articles and presentations]] &lt;br /&gt;
* [[Getting Help]]&lt;br /&gt;
* [[Mattermost discussion forum]]&lt;br /&gt;
* [[Being a good HPC user]]&lt;br /&gt;
* [[Accessing Tycho]]&lt;br /&gt;
* [[Using Jupyter notebooks on the frontends]]&lt;br /&gt;
* [[Visual Studio Remote Development]]&lt;br /&gt;
* [[Virtual Desktop]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Erda]]&lt;br /&gt;
* [[Using GPUs]]&lt;br /&gt;
* [[Compilers]]&lt;br /&gt;
* [[Debugging and Profiling]]&lt;br /&gt;
* [[MPI Libraries]]&lt;br /&gt;
* [[Running batch jobs]]&lt;br /&gt;
* [[Examples of SLURM scripts]]&lt;br /&gt;
* [[Codes]]&lt;br /&gt;
* [[Adding a second IP Address]]&lt;br /&gt;
* [[Setting up One-Time-Password Access before travelling]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Scientific Software===&lt;br /&gt;
&lt;br /&gt;
* [[Module system]]&lt;br /&gt;
* [[Running Mathematica on compute nodes]]&lt;br /&gt;
* [[GRChombo]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*************************&lt;br /&gt;
&lt;br /&gt;
Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User&#039;s Guide] for information on using the wiki software.&lt;br /&gt;
&lt;br /&gt;
== Getting started ==&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]&lt;br /&gt;
* [https://lists.wikimedia.org/postorius/lists/mediawiki-announce.lists.wikimedia.org/ MediaWiki release mailing list]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]&lt;/div&gt;</summary>
		<author><name>Philip</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Examples_of_SLURM_scripts&amp;diff=167</id>
		<title>Examples of SLURM scripts</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Examples_of_SLURM_scripts&amp;diff=167"/>
		<updated>2023-11-15T14:18:56Z</updated>

		<summary type="html">&lt;p&gt;Philip: Created page with &amp;quot;Here is examples of simple SLURM scripts for running serial, OpenMP and MPI jobs.  Note that a very nice introduction to running SLURM scripts exist at https://hpc.ku.dk/documentation/slurm.html.  &amp;lt;h2&amp;gt;  Simple Batch Script For Running Serial Programs  &amp;lt;/h2&amp;gt;  Submission scripts are really just shell scripts (here we use bash syntax) with a few additional variable specifications at the beginning. These are prefixed with &amp;quot;#SBATCH&amp;quot; and otherwise use the same keywords and syn...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are examples of simple SLURM scripts for running serial, OpenMP and MPI jobs.&lt;br /&gt;
&lt;br /&gt;
Note that a very nice introduction to running SLURM scripts exists at https://hpc.ku.dk/documentation/slurm.html.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt; &lt;br /&gt;
Simple Batch Script For Running Serial Programs &lt;br /&gt;
&amp;lt;/h2&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Submission scripts are really just shell scripts (here we use bash syntax) with a few additional resource specifications at the beginning. These are prefixed with &amp;quot;#SBATCH&amp;quot; and otherwise use the same keywords and syntax as the command line options described for the &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command. The following script presents a minimal example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=isothermal     # shows up in the output of &#039;squeue&#039;&lt;br /&gt;
#SBATCH --partition=astro_devel   # specify the partition to run on&lt;br /&gt;
srun  /bin/echo &amp;quot;Hello World!&amp;quot; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Specifying Mail Notifications And Managing Output And Error Files:&lt;br /&gt;
You can enable e-mail notification for various events via the &amp;lt;code&amp;gt;--mail-type&amp;lt;/code&amp;gt;&lt;br /&gt;
option, which takes the following values: &amp;lt;code&amp;gt;BEGIN&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;END&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;FAIL&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;REQUEUE&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;ALL&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#SBATCH --mail-type=FAIL&lt;br /&gt;
#SBATCH --output=/astro/username/que/job.%J.out&lt;br /&gt;
#SBATCH --error=/astro/username/que/job.%J.err&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The standard output and error streams can be written to specific files with the &amp;lt;code&amp;gt;--output&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--error&amp;lt;/code&amp;gt; options. By default, both streams are directed to a single file named &amp;lt;code&amp;gt;slurm-%j.out&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;%j&amp;lt;/code&amp;gt; is replaced with the job number.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h2&amp;gt;&lt;br /&gt;
Simple Batch Script For Running MPI-Parallel Jobs&lt;br /&gt;
&amp;lt;/h2&amp;gt;&lt;br /&gt;
Note: The following example script assumes that you submit it from the directory where the code will be executed (the path to that directory is stored in the&lt;br /&gt;
&amp;lt;code&amp;gt;$SLURM_SUBMIT_DIR&amp;lt;/code&amp;gt; environment variable, which is also where SLURM executes the script).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#&lt;br /&gt;
# SLURM resource specifications&lt;br /&gt;
# (use an extra &#039;#&#039; in front of SBATCH to comment-out any unused options)&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --job-name=isothermal   # shows up in the output of &#039;squeue&#039;&lt;br /&gt;
#SBATCH --time=4-23:59:59       # specify the requested wall-time&lt;br /&gt;
#SBATCH --partition=astro_long  # specify the partition to run on&lt;br /&gt;
#SBATCH --nodes=4               # number of nodes allocated for this job&lt;br /&gt;
#SBATCH --ntasks-per-node=20    # number of MPI ranks per node&lt;br /&gt;
#SBATCH --cpus-per-task=1       # number of OpenMP threads per MPI rank&lt;br /&gt;
##SBATCH --exclude=&amp;lt;node list&amp;gt;  # avoid nodes (e.g. --exclude=node786)&lt;br /&gt;
&lt;br /&gt;
# Load default settings for environment variables&lt;br /&gt;
module load astro&lt;br /&gt;
&lt;br /&gt;
# If required, replace specific modules&lt;br /&gt;
# module unload intelmpi&lt;br /&gt;
# module load mvapich2&lt;br /&gt;
&lt;br /&gt;
# When compiling remember to use the same environment and modules&lt;br /&gt;
&lt;br /&gt;
# Execute the code&lt;br /&gt;
srun &amp;lt;executable&amp;gt; [args...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
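&lt;br /&gt;
With the settings above, SLURM starts 4 nodes × 20 tasks per node = 80 MPI ranks in total. Inside the job script this can be checked from SLURM&#039;s environment variables, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
echo &amp;quot;Running $SLURM_NTASKS ranks on $SLURM_JOB_NUM_NODES nodes&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;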
&lt;br /&gt;
&amp;lt;h2&amp;gt;&lt;br /&gt;
Simple Batch Script For Jobs Using OpenMP Only&lt;br /&gt;
&amp;lt;/h2&amp;gt;&lt;br /&gt;
Note: The following example script assumes that you submit it from the directory where the code will be executed (the path to that directory is stored in the&lt;br /&gt;
&amp;lt;code&amp;gt;$SLURM_SUBMIT_DIR&amp;lt;/code&amp;gt; environment variable).&lt;br /&gt;
&lt;br /&gt;
Note: By choosing &amp;lt;code&amp;gt;--cpus-per-task=40&amp;lt;/code&amp;gt; along with &amp;lt;code&amp;gt;--threads-per-core=2&amp;lt;/code&amp;gt;, you have assumed that your program will take advantage of hyper-threading. If this is not the case, or you are uncertain, use &amp;lt;code&amp;gt;--cpus-per-task=20&amp;lt;/code&amp;gt; along with &amp;lt;code&amp;gt;--threads-per-core=1&amp;lt;/code&amp;gt;, and your program will be executed with 20 threads.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#&lt;br /&gt;
# Define SLURM resource specifications&lt;br /&gt;
# (use an extra &#039;#&#039; in front of SBATCH to comment-out any unused options)&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --job-name=isothermal   # shows up in the output of &#039;squeue&#039;&lt;br /&gt;
#SBATCH --time=4-23:59:59       # specify the requested wall-time&lt;br /&gt;
#SBATCH --partition=astro_long  # specify the partition to run on&lt;br /&gt;
#SBATCH --cpus-per-task=40      # total number of threads&lt;br /&gt;
#SBATCH --threads-per-core=2    # threads per core (hyper-threading)&lt;br /&gt;
##SBATCH --exclude=&amp;lt;node list&amp;gt;  # avoid nodes (e.g. --exclude=node786)&lt;br /&gt;
&lt;br /&gt;
# Load default settings for environment variables&lt;br /&gt;
module load astro&lt;br /&gt;
&lt;br /&gt;
# OpenMP affinity&lt;br /&gt;
# no hyperthreading&lt;br /&gt;
# export KMP_AFFINITY=&amp;quot;granularity=core,scatter,1,0&amp;quot;&lt;br /&gt;
# hyperthreading&lt;br /&gt;
export KMP_AFFINITY=&amp;quot;granularity=thread,scatter,1,0&amp;quot; &lt;br /&gt;
&lt;br /&gt;
# When compiling remember to use the same environment&lt;br /&gt;
&lt;br /&gt;
# Execute the code&lt;br /&gt;
srun --cpu_bind=threads &amp;lt;executable&amp;gt; [args...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
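&lt;br /&gt;
The OpenMP runtime does not read SLURM&#039;s allocation by itself; a common pattern is to pass the requested CPU count on explicitly via &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; before the &amp;lt;code&amp;gt;srun&amp;lt;/code&amp;gt; line, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# use the value requested with --cpus-per-task&lt;br /&gt;
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;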
&lt;br /&gt;
&amp;lt;h2&amp;gt;&lt;br /&gt;
Hybrid MPI + OpenMP Batch Script&lt;br /&gt;
&amp;lt;/h2&amp;gt;&lt;br /&gt;
Note: The following example script assumes that you submit it from the directory where the code will be executed (the path to that directory is stored in the&lt;br /&gt;
&amp;lt;code&amp;gt;$SLURM_SUBMIT_DIR&amp;lt;/code&amp;gt; environment variable, which is also where SLURM executes the script).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#&lt;br /&gt;
# Define SLURM resource specifications&lt;br /&gt;
# (use an extra &#039;#&#039; in front of SBATCH to comment-out any unused options)&lt;br /&gt;
#&lt;br /&gt;
#SBATCH --job-name=isothermal   # shows up in the output of &#039;squeue&#039;&lt;br /&gt;
#SBATCH --time=4-23:59:59       # specify the requested wall-time&lt;br /&gt;
#SBATCH --partition=astro_long  # specify the partition to run on&lt;br /&gt;
#SBATCH --nodes=32              # number of nodes allocated for this job&lt;br /&gt;
#SBATCH --ntasks-per-node=8     # lower than the usual 20 for MPI only &lt;br /&gt;
#SBATCH --cpus-per-task=5       # number of CPUs per MPI rank&lt;br /&gt;
#SBATCH --threads-per-core=2    # threads per core (hyper-threading)&lt;br /&gt;
##SBATCH --exclude=&amp;lt;node list&amp;gt;  # avoid nodes (e.g. --exclude=node786)&lt;br /&gt;
&lt;br /&gt;
# Load default settings for environment variables&lt;br /&gt;
module load astro&lt;br /&gt;
&lt;br /&gt;
# OpenMP affinity&lt;br /&gt;
export KMP_AFFINITY=&amp;quot;granularity=thread,scatter,1,0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# If required, replace specific modules&lt;br /&gt;
# module unload intelmpi&lt;br /&gt;
# module load mvapich2&lt;br /&gt;
&lt;br /&gt;
# When compiling remember to use the same environment and modules&lt;br /&gt;
&lt;br /&gt;
# Execute the code&lt;br /&gt;
cd $SLURM_SUBMIT_DIR&lt;br /&gt;
srun --cpu_bind=threads &amp;lt;executable&amp;gt; [args...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Philip</name></author>
	</entry>
</feed>