<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.nbi.ku.dk/w/tycho/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Haugboel</id>
	<title>Tycho - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.nbi.ku.dk/w/tycho/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Haugboel"/>
	<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/tycho/Special:Contributions/Haugboel"/>
	<updated>2026-04-06T08:51:52Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.8</generator>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=238</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=238"/>
		<updated>2025-06-26T07:20:09Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: /* CPUs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tycho contains both frontends (&amp;quot;Analysis Hardware&amp;quot;), which are accessible from the outside and can be used for interactive work such as development and analysis, and compute nodes (&amp;quot;Cluster Hardware&amp;quot;), which are only accessible through the SLURM queue system.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Analysis Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Name !! CPUs !! Memory !! Memory Bandwidth !! GPUs !! Scratch !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro01.hpc.ku.dk || 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz || 1 TB DDR4-3200 MHz - 21 GB / core || 410 GB/s, 8.5 GB/s/core || 4x A100 || 11 TB || L1: 64 KB / core, L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro02.hpc.ku.dk || 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 7.2 GB/s/core || 3x A30 || 28 TB || L1: 64 KB / core, L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro03.hpc.ku.dk || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 1.5 TB DDR5-4800 MHz - 12 GB / core || 922 GB/s, 7.2 GB/s/core || None || None || L1: 64 KB / core, L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro04.hpc.ku.dk || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 9.6 GB/s/core || 4x RTX A6000 || 42 TB || L1: 64 KB / core, L2: 1 MB / core, L3: 256 MB, AVX-512, 2 x NDR 200 Gbit/s to storage&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Cluster Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Queue Name !! #Nodes !! CPUs !! Memory !! Memory Bandwidth !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro_XX || 16 || 2 x 10 cores Xeon E5-2680v2 @ 2.8GHz || 64 GB DDR3-1866 MHz - 3.2 GB / core || 120 GB/s, 6 GB/s/core || L1: 64 KB / core, L2: 256 KB / core, L3: 25 MB / socket, AVX2, FDR 56 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_XX || 70 || 2 x 24 cores Xeon 6248R @ 3.0GHz || 192 GB DDR4-2933 MHz - 4 GB / core || 282 GB/s, 5.9 GB/s/core || L1: 64 KB / core, L2: 1 MB / core, L3: 35.75 MB / socket, AVX-512, EDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro3_XX || 50 || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 768 GB DDR5-4800 MHz -  6 GB / core || 922 GB/s, 7.2 GB/s/core || L1: 64 KB / core, L2: 1 MB / core, L3: 256 MB / socket, AVX-512, 2 x NDR 200 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro_gpu || 2 || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 16 GB / core || 461 GB/s, 9.6 GB/s/core || 2x H100 GPUs, L1: 64 KB / core, L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_gpu || 1 || 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz || 1 TB DDR4-3200 MHz - 32 GB / core || 410 GB/s, 12.8 GB/s/core || 4x A100 GPUs, L1: 64 KB / core, L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Servers===&lt;br /&gt;
The astro_XX nodes are based on a Dell C6220 II shoe-box design with dual 10-core Ivy Bridge CPUs. The astro2_XX nodes are Huawei FusionServer Pro X6000 with dual 24-core Cascade Lake CPUs. The astro3_XX nodes are XFusion model 1258H V7 servers with dual 64-core Genoa CPUs.&lt;br /&gt;
&lt;br /&gt;
===CPUs===&lt;br /&gt;
Our cluster nodes use Intel Cascade Lake, AMD Genoa, and AMD Turin for the pure CPU nodes, and AMD Rome and AMD Genoa for the GPU nodes. These provide varying real performance per clock, with the newest Genoa and Turin nodes having the best performance. For a deep dive into the CPU architectures, see:&lt;br /&gt;
* Intel Cascade-lake and AMD Rome: https://www.nas.nasa.gov/assets/nas/pdf/papers/NAS_Technical_Report_NAS-2022_01.pdf&lt;br /&gt;
* AMD Genoa: https://www.chpc.utah.edu/documentation/white_papers/cpus_may2023_v3.pdf&lt;br /&gt;
* AMD Turin:&lt;br /&gt;
&lt;br /&gt;
===GPUs===&lt;br /&gt;
GPUs are accessible interactively on the astro01, astro02, and astro04 frontend machines and through SLURM in the astro_gpu and astro2_gpu queues. GPUs are few in number on Tycho but provide potentially enormous computational value. _Please_ test whether your code can efficiently use e.g. a full GPU, or more than one GPU, before running long production jobs on them. In particular, machine-learning jobs or codes that off-load calculations from high-level languages such as Python and Julia may effectively block a full GPU, and sometimes speculatively allocate all GPU memory, without actually making good use of the resources. Therefore, test by profiling your code, using timers, or simply running on the different machines (astro01, astro04, and the different-sized virtual GPUs on astro02) to determine how well the workload scales; see the sketch after the list below.&lt;br /&gt;
&lt;br /&gt;
* 8 A100 GPUs are available on the astro01 frontend machine and in the astro2_gpu queue. They are equipped with 40 GB of memory each. They have full FP64 performance and are well suited for large-scale computations as well as machine-learning workloads, but they are not the best ML GPUs available at Tycho. The connection between the CPU and the GPU is a PCIe Gen 4 x16 link with 64 GB/s. You can read more about their specs at https://www.nvidia.com/en-us/data-center/a100 .&lt;br /&gt;
&lt;br /&gt;
* 3 A30 GPUs available on astro02 have 24 GB of memory per GPU. They have been split up so that the first GPU is fully available, the second GPU is split into 2 virtual GPUs (each with 28 SMs), and the third GPU is split into 4 virtual GPUs (each with 14 SMs). The GPUs on astro02 have full FP64 capability; they are smaller than those on the other machines and very useful for longer-running jobs that only require a modest amount of GPU compute. The connection between the CPU and the GPU is a PCIe Gen 4 x16 link with 64 GB/s. You can read more about their specs at https://www.nvidia.com/en-us/data-center/products/a30-gpu .&lt;br /&gt;
&lt;br /&gt;
* 4 RTX A6000 GPUs available on astro04 have 48 GB of RAM per GPU and very low FP64 performance. They are therefore not well suited for scientific calculations, but provide hardware-accelerated remote visualization, and have machine-learning performance similar to the A100 GPUs. The connection between the CPU and the GPU is a PCIe Gen 4 x16 link with 64 GB/s. You can read more about their specs at https://www.nvidia.com/en-us/design-visualization/rtx-a6000 .&lt;br /&gt;
&lt;br /&gt;
* 4 H100 GPUs available through the astro_gpu queue have 94 GB of memory each and are our newest GPUs. They have full FP64 performance and very high machine-learning performance, and are well suited for both scientific and machine-learning jobs. The connection between the CPU and the GPU is a PCIe Gen 5 x16 link with 128 GB/s. You can read more about their specs at https://www.nvidia.com/en-us/data-center/h100/ .&lt;br /&gt;
&lt;br /&gt;
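As a starting point for such a test, here is a minimal sketch in Python that selects a GPU and times a matrix multiplication in FP32 and FP64. It assumes PyTorch is installed in your environment; the device index and the MIG UUID mentioned in the comment are placeholders (list the actual devices with nvidia-smi -L).&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
import os&lt;br /&gt;
# Select a device before CUDA is initialized. On astro02, MIG instances&lt;br /&gt;
# are addressed by UUID, e.g. 'MIG-&lt;uuid&gt;' as reported by nvidia-smi -L.&lt;br /&gt;
os.environ['CUDA_VISIBLE_DEVICES'] = '0'&lt;br /&gt;
&lt;br /&gt;
import time&lt;br /&gt;
import torch&lt;br /&gt;
&lt;br /&gt;
def time_matmul(n=8192, dtype=torch.float32, reps=10):&lt;br /&gt;
    # Time n x n matrix multiplications and return sustained TFlops.&lt;br /&gt;
    a = torch.randn(n, n, dtype=dtype, device='cuda')&lt;br /&gt;
    b = torch.randn(n, n, dtype=dtype, device='cuda')&lt;br /&gt;
    a @ b                              # warm-up&lt;br /&gt;
    torch.cuda.synchronize()&lt;br /&gt;
    t0 = time.perf_counter()&lt;br /&gt;
    for _ in range(reps):&lt;br /&gt;
        a @ b&lt;br /&gt;
    torch.cuda.synchronize()           # wait for the GPU to finish&lt;br /&gt;
    dt = (time.perf_counter() - t0) / reps&lt;br /&gt;
    return 2 * n**3 / dt / 1e12        # 2*n^3 flops per matmul&lt;br /&gt;
&lt;br /&gt;
print('FP32:', time_matmul(dtype=torch.float32), 'TFlops')&lt;br /&gt;
print('FP64:', time_matmul(dtype=torch.float64), 'TFlops')&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;
Comparing the FP32 and FP64 numbers across astro01, astro02, and astro04 quickly shows how precision-sensitive a workload is and which machine fits it best.&lt;br /&gt;
&lt;br /&gt;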
===Global Storage===&lt;br /&gt;
* The home directory (/groups/astro) is a fully backed-up Lustre filesystem. We have a shared 6 TB quota and individual quotas of 50 GB per user (see the quota sketch below).&lt;br /&gt;
* The scratch directory (/lustre/astro) is a ZFS-based, high-performance Lustre filesystem with dedicated hardware for our group. The total space (disregarding the transparent compression) is 1300 TB. The default quota on scratch is 5 TB; if you need more, please contact Troels Haugbølle with your supervisor / mentor / sponsor in CC and explain why and how much.&lt;br /&gt;
* The archive consists of two ZFS filesystems exported as NFS volumes from a storage server connected to the clusters with a 10 Gbit/s network connection. The archive systems can be found under /groups/astro/archive0 and /groups/astro/archive1. These filesystems are old, and new users will not get directories on them. They will soon be decommissioned.&lt;br /&gt;
&lt;br /&gt;
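To check your current usage against these quotas, here is a minimal sketch using the standard Lustre lfs client (assuming lfs is on your PATH; on older clients, drop the -h flag):&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
import getpass&lt;br /&gt;
import subprocess&lt;br /&gt;
&lt;br /&gt;
# Report the Lustre quota for the current user on home and scratch.&lt;br /&gt;
user = getpass.getuser()&lt;br /&gt;
for mount in ('/groups/astro', '/lustre/astro'):&lt;br /&gt;
    subprocess.run(['lfs', 'quota', '-h', '-u', user, mount], check=False)&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;
&lt;br /&gt;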
===Scratch storage===&lt;br /&gt;
The scratch disks on astro01, astro02, and astro04 are RAID0 volumes consisting of a number of locally mounted NVMe disks. They have slightly higher bandwidth than the global filesystem, but can only be accessed from the specific machine. The scratch disks have several orders of magnitude higher IOPS than the global filesystem, so random-access I/O, or operations that require opening and closing a lot of files, will perform much faster on the scratch disks (see the timing sketch below). _Space is limited_. Please clean up after use, and remember there are no backups or redundancy on the scratch disks.&lt;br /&gt;
&lt;br /&gt;
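To see the difference for yourself, here is a minimal sketch that times many small-file create/delete operations in a given directory; pass e.g. a directory on /lustre/astro and one on a local scratch disk and compare the numbers:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
# Usage: python iobench.py /lustre/astro/&lt;you&gt;/tmp /path/on/local/scratch&lt;br /&gt;
import pathlib&lt;br /&gt;
import sys&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
def small_file_ops(directory, n=1000):&lt;br /&gt;
    # Create, write, and delete n small files; return operations per second.&lt;br /&gt;
    d = pathlib.Path(directory)&lt;br /&gt;
    d.mkdir(parents=True, exist_ok=True)&lt;br /&gt;
    t0 = time.perf_counter()&lt;br /&gt;
    for i in range(n):&lt;br /&gt;
        p = d / f'bench_{i}.tmp'&lt;br /&gt;
        p.write_bytes(b'x' * 100)      # open + write + close&lt;br /&gt;
        p.unlink()                     # delete&lt;br /&gt;
    return 2 * n / (time.perf_counter() - t0)&lt;br /&gt;
&lt;br /&gt;
for path in sys.argv[1:]:&lt;br /&gt;
    print(path, round(small_file_ops(path)), 'ops/s')&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;
&lt;br /&gt;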
===Networks===&lt;br /&gt;
* External connection: The local HPC center is a Tier-1 CERN node and has a direct dual 400 Gbit/s connection to the Danish entry point of the European GEANT network in Lyngby. In practice we easily reach 100 MB/s when transferring larger files, with higher speeds possible through parallel transfers.&lt;br /&gt;
* The backend storage servers for /groups/astro and /lustre/astro are all interconnected through a 100 Gbit/s HDR InfiniBand switch, which has uplinks to the different cluster networks.&lt;br /&gt;
* All frontend machines have Ethernet or InfiniBand adapters to provide optimal I/O bandwidth to the storage.&lt;br /&gt;
* Astro_XX nodes have FDR (56 Gbit/s) InfiniBand connected to a single switch.&lt;br /&gt;
* Astro2_XX nodes have EDR (100 Gbit/s) with a 2:1 blocking factor and 24 nodes per switch (3 uplink switches, 1 core switch).&lt;br /&gt;
* Astro3_XX nodes have two NDR-200 (200 Gbit/s) adapters with one adapter per CPU socket. They are connected directly to a single 128-port NDR-200 switch.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=232</id>
		<title>Tycho Technical Documentation</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=232"/>
		<updated>2025-03-14T07:00:59Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the technical documentation for the Tycho high performance computing resources. See [https://nbi.ku.dk/english/research_infrastructure/tycho-supercomputer/ the Infrastructure page] at NBI for a non-technical overview of the cluster.&lt;br /&gt;
&lt;br /&gt;
Tycho contains in total more than 10,000 CPU cores and 21 data-center-class GPUs. The cluster is complemented by a 1,300 TB data storage archive and a number of powerful analysis machines used as frontends for the cluster and for pre- and post-processing. The current FP64 peak performance is 443 TFlops from the CPUs and 276 TFlops from the GPUs. The GPUs are even more powerful at lower precision, with 700 TFlops of FP32 performance and more than 18 PFlops of FP16 Tensor-core performance for machine-learning workloads.&lt;br /&gt;
&lt;br /&gt;
Tycho is hosted at the [http://www.dcsc.ku.dk/ High Performance Computing center] at the Faculty of SCIENCE, University of Copenhagen.&lt;br /&gt;
&lt;br /&gt;
===First steps===&lt;br /&gt;
Please visit the [[first steps]] page to get started.&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
* [[Acknowledging the use of Tycho in articles and presentations]] &lt;br /&gt;
* [[Getting Help]]&lt;br /&gt;
* [[Mattermost discussion forum]]&lt;br /&gt;
* [[Being a good HPC user]]&lt;br /&gt;
* [[Accessing Tycho]]&lt;br /&gt;
* [[Using Jupyter notebooks on the frontends]]&lt;br /&gt;
* [[Visual Studio Remote Development]]&lt;br /&gt;
* [[Virtual Desktop]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Erda]]&lt;br /&gt;
* [[Using GPUs]]&lt;br /&gt;
* [[Compilers]]&lt;br /&gt;
* [[Debugging and Profiling]]&lt;br /&gt;
* [[MPI Libraries]]&lt;br /&gt;
* [[Running batch jobs]]&lt;br /&gt;
* [[Examples of SLURM scripts]]&lt;br /&gt;
* [[Codes]]&lt;br /&gt;
* [[Adding a second IP Address]]&lt;br /&gt;
* [[Setting up One-Time-Password Access before travelling]]&lt;br /&gt;
* [[FAQs]]&lt;br /&gt;
&lt;br /&gt;
===Scientific Software===&lt;br /&gt;
&lt;br /&gt;
* [[Module system]]&lt;br /&gt;
* [[Running Mathematica on compute nodes]]&lt;br /&gt;
* [[GRChombo]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=231</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=231"/>
		<updated>2025-03-13T10:51:13Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tycho contains both frontends (&amp;quot;Analysis Hardware&amp;quot;) that are accessible from the outside and which can be used for interactive work, such as development and analysis, and compute nodes (&amp;quot;Cluster Hardware&amp;quot;) that are only accessible through the SLURM queue systems.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Analysis Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Name !! CPUs !! Memory !! Memory Bandwidth !! GPUs !! Scratch !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro01.hpc.ku.dk || 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz || 1 TB DDR4-3200 MHz - 21 GB / core || 410 GB/s, 8.5 GB/s/core || 4x A100 || 11 TB || L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro02.hpc.ku.dk || 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 7.2 GB/s/core || 3x A30 || 28 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro03.hpc.ku.dk || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 1.5 TB DDR5-4800 MHz - 12 GB / core || 922 GB/s, 7.2 GB/s/core || None || None || L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro04.hpc.ku.dk || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 9.6 GB/s/core || 4x RTX A6000 || 42 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, 2 x NDR 200 Gbit/s to storage&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Cluster Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Queue Name !! #Nodes !! CPUs !! Memory !! Memory Bandwidth !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro_XX || 16 || 2 x 10 cores Xeon E5-2680v2 @ 2.8GHz || 64 GB DDR3-1866 MHz - 3.2 GB / core || 120 GB/s, 6 GB/s/core || L2: 256 KB / core, L3: 25 MB / socket, AVX2, FDR 56 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_XX || 70 || 2 x 24 cores Xeon 6248R @ 3.0GHz || 192 GB DDR4-2933 MHz - 4 GB / core || 282 GB/s, 5.9 GB/s/core || L2: 1 MB / core, L3: 35.75 MB / socket, AVX-512, EDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro3_XX || 50 || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 768 GB DDR5-4800 MHz -  6 GB / core || 922 GB/s, 7.2 GB/s/core || L2: 1 MB / core, L3: 256 MB / socket, AVX-512, 2 x NDR 200 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro_gpu || 3 || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 16 GB / core || 461 GB/s, 9.6 GB/s/core || 2x H100 GPUs, L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_gpu || 1 || 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz || 1 TB DDR4-3200 MHz - 32 GB / core || 410 GB/s, 12.8 GB/s/core || 4x A100 GPUs, L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Servers===&lt;br /&gt;
The astro_XX nodes are based on a Dell C6220II shoe-box design with dual 10-core Ivy Bridge CPUs. The astro2_XX nodes are Huawei FusionServer Pro X6000 machines with dual 24-core Cascade Lake CPUs. The astro3_XX nodes are XFusion 1258H V7 servers with dual 64-core Genoa CPUs.&lt;br /&gt;
&lt;br /&gt;
===GPUs===&lt;br /&gt;
GPUs are accessible interactively on the astro01, astro02, and astro04 frontend machines and through SLURM in the astro_gpu and astro2_gpu queues. GPUs are few in number on Tycho but provide potentially enormous computational value. _Please_ test whether your code can efficiently use e.g. a full GPU, or more than one GPU, before running long production jobs on them. In particular, machine-learning jobs, or codes that off-load calculations from high-level languages such as Python and Julia, may effectively block a full GPU, and sometimes speculatively allocate all GPU memory, without actually making good use of the resources. Therefore, test by profiling your code, using timers, or simply running on the different machines (astro01, astro04, and the different-sized virtual GPUs on astro02) to determine how well the workload scales; a quick utilization check is sketched below.&lt;br /&gt;
&lt;br /&gt;
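A simple way to check whether a job actually keeps its GPU busy is to watch utilization while it runs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# sample GPU utilization and memory use every 5 seconds&lt;br /&gt;
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 5&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;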
* The A100 GPUs available on the astro01 frontend machine and in the astro2_gpu queue are equipped with 40 GB of memory. They have full FP64 performance and are well suited for large-scale computations as well as machine learning workloads, but they are not the best ML GPUs available at Tycho. You can read more about their specs here https://www.nvidia.com/en-us/data-center/a100 .&lt;br /&gt;
&lt;br /&gt;
* The three A30 GPUs available on astro02 have 24 GB of memory per GPU. They have been split up so that the first GPU is fully available, the second GPU is split into 2 virtual GPUs (each with 28 SMs), and the third GPU is split into 4 virtual GPUs (each with 14 SMs). The GPUs on astro02 have full FP64 capabilities; while smaller than the GPUs on the other machines, they are very useful for longer-running jobs that only require a modest amount of GPU computing. You can read more about their specs here https://www.nvidia.com/en-us/data-center/products/a30-gpu .&lt;br /&gt;
&lt;br /&gt;
* The four RTX A6000 GPUs available on astro04 have 48 GB of RAM per GPU but very low FP64 performance. They are therefore not well suited for scientific calculations, but they provide hardware-accelerated remote visualization and have machine-learning performance similar to the A100 GPUs. You can read more about their specs here https://www.nvidia.com/en-us/design-visualization/rtx-a6000 .&lt;br /&gt;
&lt;br /&gt;
* The six H100 GPUs available through the astro_gpu queue have 94 GB of memory each and are our newest GPUs. They have full FP64 capability and very high machine-learning performance, and are well suited for both scientific and machine-learning jobs. You can read more about their specs here https://www.nvidia.com/en-us/data-center/h100 .&lt;br /&gt;
&lt;br /&gt;
===Global Storage===&lt;br /&gt;
* The home directory (/groups/astro) is a fully backed-up Lustre filesystem. We have a shared 6 TB quota and individual quotas of 50 GB per user.&lt;br /&gt;
* The scratch directory (/lustre/astro) is a ZFS-based high-performance Lustre filesystem with dedicated hardware for our group. The total space (disregarding the transparent compression) is 1,300 TB. The default quota on scratch is 5 TB; if you need more, please contact Troels Haugbølle with your supervisor / mentor / sponsor in CC and explain why and how much you need.&lt;br /&gt;
* The archive consists of two ZFS filesystems exported as NFS volumes from a storage server connected to the clusters with a 10 Gbit/s network connection. The two archive systems can be found under /groups/astro/archive0 and /groups/astro/archive1. These filesystems are old, new users will not get directories on them, and they will soon be decommissioned.&lt;br /&gt;
&lt;br /&gt;
===Scratch storage===&lt;br /&gt;
The scratch disks on astro01, astro02, and astro04 are RAID0 volumes consisting of a number of locally mounted NVMe disks. They have slightly higher bandwidth than the global filesystem, but can only be accessed from the specific machine. The scratch disks have several orders of magnitude higher IOPS than the global filesystem, so random-access I/O, or operations that require opening and closing a lot of files, will perform much faster on the scratch disks. _Space is limited_. Please clean up after use, and remember there are no backups or redundancy on the scratch disks.&lt;br /&gt;
&lt;br /&gt;
===Networks===&lt;br /&gt;
* External connection: the local HPC center is a Tier-1 CERN node and has a direct dual 400 Gbit/s connection to the Danish entry point of the European GEANT network in Lyngby. In practice we easily reach 100 MB/s when transferring larger files, and higher speeds are possible with parallel transfers.&lt;br /&gt;
* The backend storage servers for /groups/astro and /lustre/astro are all interconnected with 100 Gbit/s HDR InfiniBand on a common switch, which has uplinks to the different cluster networks.&lt;br /&gt;
* All frontend machines have Ethernet or InfiniBand adapters to provide optimal bandwidth for I/O.&lt;br /&gt;
* Astro_XX nodes have FDR (56 Gbit/s) InfiniBand connected to a single switch.&lt;br /&gt;
* Astro2_XX nodes have EDR (100 Gbit/s) with a 2:1 blocking factor and 24 nodes per switch (3 uplink switches, 1 core switch).&lt;br /&gt;
* Astro3_XX nodes have two NDR-200 (200 Gbit/s) adapters with one adapter per CPU socket. They are connected directly to a single 128-port NDR-200 switch.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=FAQs&amp;diff=230</id>
		<title>FAQs</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=FAQs&amp;diff=230"/>
		<updated>2025-03-06T09:59:05Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;How do I access the HPC Cluster?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can remotely access the frontend machines astro01-09 via SSH to submit jobs or to analyze data (see [[Hardware]] for an up-to-date list of available frontends). For example, you can log in to the astro01 machine by entering the following on the command line&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh username@astro01.hpc.ku.dk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then be prompted to enter your password (note: if the connection times out, it is likely that your IP address is not whitelisted; see the question on whitelisting below). After entering the correct password, you will arrive at your home directory /groups/astro/username. It may be a good idea to check the load after logging in, using &amp;quot;top&amp;quot;, and to choose a different frontend if the CPU or memory use is already high (use &amp;quot;&amp;lt;&amp;quot; or &amp;quot;&amp;gt;&amp;quot; in top to temporarily change the sorting from CPU to memory / virtual memory).&lt;br /&gt;
&lt;br /&gt;
You can cut this down to&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh astro01&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
by adding these lines to the file ~/.ssh/config:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host astro01        &lt;br /&gt;
    User username        &lt;br /&gt;
    HostName astro01.hpc.ku.dk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Do not use the ssh option -Y, which &amp;quot;enables trusted X11 forwarding&amp;quot;.  This means, basically, that you turn off some essential X security features and say &amp;quot;I trust the remote host completely&amp;quot;. Forwarding of X should work without any extra options.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How can I set up a passwordless ssh?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can avoid typing your password each time an SSH session is authenticated by using an SSH key in combination with ssh-add and ssh-agent. Using [http://en.wikipedia.org/wiki/Ssh-keygen#readme ssh-keygen] on your local laptop/computer, you generate a pair consisting of a private key and a public key. A widely recommended and secure choice is the ED25519 algorithm&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh-keygen -t ed25519&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the option -t you specify the key type. Here we use ED25519; many others exist, but this is what is recommended across large organizations to keep your key, login, and data secure. You will be asked for a filename; you can just press enter to use the default. Then you will be asked for a passphrase to protect your private key.&lt;br /&gt;
You should NOT under any circumstances use an empty passphrase; it is not necessary for convenience (see below), and it could endanger your access to remote supercomputers. Use instead a proper passphrase, similar to a password.&lt;br /&gt;
Two keys (id_ed25519 and id_ed25519.pub) will be stored in the hidden folder ~/.ssh/ on your client (normally your laptop, but it could also be a server you want to use to log in to Tycho). To use the key pair for logging in, you have to copy your public key from your client to Tycho and append it to the file ~/.ssh/authorized_keys. From your client (e.g. the laptop) do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat ~/.ssh/id_ed25519.pub | ssh username@astro01.hpc.ku.dk &#039;cat &amp;gt;&amp;gt; .ssh/authorized_keys&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
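Alternatively, most OpenSSH installations ship the ssh-copy-id helper, which does the same in one step:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh-copy-id -i ~/.ssh/id_ed25519.pub username@astro01.hpc.ku.dk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;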
The next time you log in to Tycho with ssh, instead of asking for the password, the server will request your passphrase from the client (the passphrase you supplied when running the ssh-keygen command). However, rather than typing it every time at the ssh prompt, you can use &amp;quot;ssh-add&amp;quot; to give the passphrase once and keep the decrypted private key in memory. On Mac and Linux this works transparently; on Windows you need to make sure that another program, ssh-agent, is running on the system and can store the decrypted key. You can read some suggestions about how to do that on Windows here: https://learn.microsoft.com/en-us/windows-server/administration/openssh/openssh_keymanagement&lt;br /&gt;
&lt;br /&gt;
If your ssh command is issued in a window inside a VNC session, or in a window on your local laptop, chances are you can just type &amp;quot;ssh-add&amp;quot; to store the password in a process called ssh-agent, which typically is already running (most X-sessions -- such as the ones you start under VNC -- are automatically started as child processes of an ssh-agent).&lt;br /&gt;
&lt;br /&gt;
If an ssh-agent process is already running (and your ssh command is a descendant of it), it stores the credentials created by your once-per-reboot ssh-add command and automatically answers requests from hosts you try to connect to. If for some reason no ssh-agent is running on an intermediate host, you can instead forward your ssh credentials from your laptop by using &amp;quot;ssh -A&amp;quot; to log in to astro0X and continuing with ssh from there (having stored the public key from your laptop on the remote supercomputer).&lt;br /&gt;
&lt;br /&gt;
Finally, if for some reason you start out from a place (such as a plain command window) where no ssh-agent is running, you can just start one by doing&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh-agent bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This starts an ssh-agent, which starts bash as a descendant. Then execute &amp;quot;ssh-add&amp;quot;, and you&#039;re free from typing the pass phrase, for as long as you keep the ssh-agent running.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How do I get my IP white-listed?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On any of the astro0X hosts, use the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
hpc-setup-firewall.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You have three personal slots, but other people&#039;s slots also work for you. The first time you login from home, or from a new place, you may need to login via muon.nbi.dk, or some other NBI host from which astro0X is already open. To find your IP number, either use one of the web services (but beware of spam-ware), or just type &amp;quot;finger -m $user&amp;quot; after logging in to an NBI host.&lt;br /&gt;
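&lt;br /&gt;
From your own laptop, a simple alternative is to query a plain-text IP echo service from the command line (ifconfig.me is one such service):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
curl ifconfig.me&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;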
&#039;&#039;&#039;SSHFS&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can conveniently access remote data on the frontends from your local machine via the SSH file system ([http://en.wikipedia.org/wiki/SSHFS#readme SSHFS]). To use SSHFS you need to install [http://en.wikipedia.org/wiki/Filesystem_in_Userspace#readme FUSE]. For Linux you can install [http://fuse.sourceforge.net/#readme fuse] and for Mac there is [https://osxfuse.github.io/#readme osxfuse]. You need to create an empty directory as mount point on your local laptop/computer, e.g.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir ~/nbi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then you mount the remote directory by specifying the host and the mount point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sshfs username@astro01.hpc.ku.dk:/astro/username/ ~/nbi/ -oauto_cache,reconnect,volname=nbi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can unmount the filesystem with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fusermount -u ~/nbi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SSH tunneling: to create a tunnel for, for example, display :11 on astro01 (display :11 corresponds to TCP port 5911, i.e. 5900 + the display number), do&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh astro01 -L 5911:localhost:5911&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can do the same without starting a remote shell by doing&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh astro01 -L 5911:localhost:5911 -fN&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you succeed in always using display :11 you can add the tunnel configuration as part of the ~/.ssh/config file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host tunnel        &lt;br /&gt;
    User username        &lt;br /&gt;
    HostName astro01.hpc.ku.dk &lt;br /&gt;
    LocalForward 5911 localhost:5911&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and then start the tunnel with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh tunnel -fN&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
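With the tunnel up, a VNC client on your laptop can connect through it to the remote display. A sketch, assuming a TigerVNC-style client (the client name varies by platform):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vncviewer localhost:5911&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;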
===GENERAL &amp;amp; SUPPORT===&lt;br /&gt;
&#039;&#039;&#039;Whom should I ask which questions?&#039;&#039;&#039;&lt;br /&gt;
# login nodes are unreachable: mailto:support@hpc.ku.dk&lt;br /&gt;
# login nodes are alive, but I don&#039;t have a homedir: mailto:support@hpc.ku.dk&lt;br /&gt;
# I don&#039;t have access to the software I need: mailto:support@hpc.ku.dk&lt;br /&gt;
# forgot my password: mailto:support@hpc.ku.dk&lt;br /&gt;
* Is there a sharing policy?&lt;br /&gt;
# cluster queues&lt;br /&gt;
# disk space&lt;br /&gt;
# analysis servers&lt;br /&gt;
* Can I run codes interactively, e.g. on a frontend or on an analysis server?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===ENVIRONMENTS===&lt;br /&gt;
&lt;br /&gt;
* What are &#039;modules&#039; and how do I use them?&lt;br /&gt;
* What is my default environment setup?&lt;br /&gt;
- MPI &lt;br /&gt;
- libraries&lt;br /&gt;
- compilers&lt;br /&gt;
-...&lt;br /&gt;
&lt;br /&gt;
* Is there a standard module to load which lays out everything - no sweat?&lt;br /&gt;
Yes, access to the astro software is provided by the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load astro&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which makes all astro-specific modules available. You can see what is available with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module avail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and list what you have loaded with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module list&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To undo the default and start from scratch, use&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module purge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
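A typical session might then look like this (the gcc module is only an illustration; use module avail to see real names):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module load astro   # make the astro-specific module tree available&lt;br /&gt;
module avail        # list everything you can load&lt;br /&gt;
module load gcc     # load one of the listed modules&lt;br /&gt;
module list         # show what is currently loaded&lt;br /&gt;
module purge        # start over from a clean slate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;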
&lt;br /&gt;
===SOFTWARE===&lt;br /&gt;
&lt;br /&gt;
* What software do I have available?&lt;br /&gt;
* Do I have access &lt;br /&gt;
* What is SLURM?&lt;br /&gt;
* Can I submit SLURM jobs from anywhere?&lt;br /&gt;
* How do I find out which queues I can submit to?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===STORAGE===&lt;br /&gt;
&lt;br /&gt;
* Where should I put data from large simulations?&lt;br /&gt;
* How much disk space can I claim?&lt;br /&gt;
* How long can I have stuff on disk?&lt;br /&gt;
* Are my data backed up, and where?&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=FAQs&amp;diff=229</id>
		<title>FAQs</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=FAQs&amp;diff=229"/>
		<updated>2025-03-06T09:35:07Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;How do I access the HPC Cluster?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can remotely access the frontend machines astro01-09 via SSH to submit jobs or to analyze data (see [[Hardware]] for an up-to-date list of available frontends). For example, you can log in to the astro01 machine by entering the following on the command line&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh username@astro01.hpc.ku.dk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then be prompted to enter your password (note: if the connection times out, it is likely that your IP address is not whitelisted; see the question on whitelisting below). After entering the correct password, you will arrive at your home directory /groups/astro/username. It may be a good idea to check the load after logging in, using &amp;quot;top&amp;quot;, and to choose a different frontend if the CPU or memory use is already high (use &amp;quot;&amp;lt;&amp;quot; or &amp;quot;&amp;gt;&amp;quot; in top to temporarily change the sorting from CPU to memory / virtual memory).&lt;br /&gt;
&lt;br /&gt;
You can cut this down to&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh astro01&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
by adding these lines to the file ~/.ssh/config:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host astro01        &lt;br /&gt;
    User username        &lt;br /&gt;
    HostName astro01.hpc.ku.dk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Do not use the ssh option -Y, which &amp;quot;enables trusted X11 forwarding&amp;quot;.  This means, basically, that you turn off some essential X security features and say &amp;quot;I trust the remote host completely&amp;quot;. Forwarding of X should work without any extra options.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How can I set up a passwordless ssh?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can avoid typing your password each time an SSH session is authenticated by using an SSH key in combination with ssh-add and ssh-agent. Using [http://en.wikipedia.org/wiki/Ssh-keygen#readme ssh-keygen] on your local laptop/computer, you generate a pair consisting of a private key and a public key. A widely recommended and secure choice is the ED25519 algorithm&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh-keygen -t ed25519&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the option -t you specify the key type. Here we use ED25519; many others exist, but this is what is recommended across large organizations to keep your key, login, and data secure. You will be asked for a filename; you can just press enter to use the default. Then you will be asked for a passphrase to protect your private key.&lt;br /&gt;
You should NOT under any circumstances use an empty passphrase; it is not necessary for convenience (see below), and it could endanger your access to remote supercomputers. Use instead a proper passphrase, similar to a password.&lt;br /&gt;
Two keys (id_ed25519 and id_ed25519.pub) will be stored in the hidden folder ~/.ssh/ on your client (normally your laptop, but it could also be a server you want to use to log in to Tycho). To use the key pair for logging in, you have to copy your public key from your client to Tycho and append it to the file ~/.ssh/authorized_keys. From your client (e.g. the laptop) do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat ~/.ssh/id_ed25519.pub | ssh username@astro01.hpc.ku.dk &#039;cat &amp;gt;&amp;gt; .ssh/authorized_keys&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The next time you log in to Tycho with ssh, instead of asking for the password, the server will request your passphrase from the client (the one where you issued the ssh-keygen command). However, rather than waiting for the ssh prompt, you should use &amp;quot;ssh-add&amp;quot; to give the passphrase once and for all (e.g. just after rebooting your client).&lt;br /&gt;
If your ssh command is issued in a window inside a VNC session, or in a window on your local laptop, chances are you can just type &amp;quot;ssh-add&amp;quot; to store the password in a process called ssh-agent, which typically is already running (most X-sessions -- such as the ones you start under VNC -- are automatically started as child processes of an ssh-agent).&lt;br /&gt;
&lt;br /&gt;
If an ssh-agent process is already running (and your ssh command is a descendant of it), it stores the credentials created by your once-per-reboot ssh-add command and automatically answers requests from hosts you try to connect to. If for some reason no ssh-agent is running on an intermediate host, you can instead forward your ssh credentials from your laptop by using &amp;quot;ssh -A&amp;quot; to log in to astro0X and continuing with ssh from there (having stored the public key from your laptop on the remote supercomputer).&lt;br /&gt;
Finally, if for some reason you start out from a place (such as a plain command window) where no ssh-agent is running, you can just start one by doing&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh-agent bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This starts an ssh-agent, which starts bash (or tcsh) as a descendant.   Then execute &amp;quot;ssh-add&amp;quot;, and you&#039;re free from typing the pass phrase, for as long as you keep the ssh-agent running.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How do I get my IP white-listed?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On any of the astro0X hosts, use the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
hpc-setup-firewall.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You have three personal slots, but other people&#039;s slots also work for you. The first time you login from home, or from a new place, you may need to login via muon.nbi.dk, or some other NBI host from which astro0X is already open. To find your IP number, either use one of the web services (but beware of spam-ware), or just type &amp;quot;finger -m $user&amp;quot; after logging in to an NBI host.&lt;br /&gt;
&#039;&#039;&#039;SSHFS&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can conveniently access remote data on the frontends from your local machine via the SSH file system ([http://en.wikipedia.org/wiki/SSHFS#readme SSHFS]). To use SSHFS you need to install [http://en.wikipedia.org/wiki/Filesystem_in_Userspace#readme FUSE]. For Linux you can install [http://fuse.sourceforge.net/#readme fuse] and for Mac there is [https://osxfuse.github.io/#readme osxfuse]. You need to create an empty directory as mount point on your local laptop/computer, e.g.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir ~/nbi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then you mount the remote directory by specifying the host and the mount point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sshfs username@astro06.hpc.ku.dk:/astro/username/ ~/nbi/ -oauto_cache,reconnect,volname=nbi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can unmount the filesystem with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fusermount -u ~/nbi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SSH tunneling: to create a tunnel for, for example, display :11 on astro06 (display :11 corresponds to TCP port 5911, i.e. 5900 + the display number), do&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh astro06 -L 5911:localhost:5911&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can do the same without starting a remote shell by doing&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh astro06 -L 5911:localhost:5911 -fN&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you succeed in always using display :11 you can add the tunnel configuration as part of the ~/.ssh/config file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host tunnel        &lt;br /&gt;
    User username        &lt;br /&gt;
    HostName astro06.hpc.ku.dk &lt;br /&gt;
    LocalForward 5911 localhost:5911&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and then start the tunnel with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh tunnel -fN&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===GENERAL &amp;amp; SUPPORT===&lt;br /&gt;
&#039;&#039;&#039;Whom should I ask which questions?&#039;&#039;&#039;&lt;br /&gt;
# login nodes are unreachable: mailto:support@hpc.ku.dk&lt;br /&gt;
# login nodes are alive, but I don&#039;t have a homedir: mailto:support@hpc.ku.dk&lt;br /&gt;
# I don&#039;t have access to the software I need: mailto:support@hpc.ku.dk&lt;br /&gt;
# forgot my password: mailto:support@hpc.ku.dk&lt;br /&gt;
* Is there a sharing policy?&lt;br /&gt;
# cluster queues&lt;br /&gt;
# disk space&lt;br /&gt;
# analysis servers&lt;br /&gt;
* Can I run codes interactively, e.g. on a frontend or on an analysis server?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===ENVIRONMENTS===&lt;br /&gt;
&lt;br /&gt;
* What are &#039;modules&#039; and how do I use them?&lt;br /&gt;
* What is my default environment setup?&lt;br /&gt;
- MPI &lt;br /&gt;
- libraries&lt;br /&gt;
- compilers&lt;br /&gt;
-...&lt;br /&gt;
&lt;br /&gt;
* Is there a standard module to load which lays out everything - no sweat?&lt;br /&gt;
Yes, several standard modules are loaded by default via the same&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
source&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
command that makes the module command available:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
source /software/astro/startup.{sh,csh}  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use the .sh for bash and .csh for tcsh.&lt;br /&gt;
To undo the default and start from scratch, use&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module purge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===SOFTWARE===&lt;br /&gt;
&lt;br /&gt;
* What software do I have available?&lt;br /&gt;
* Do I have access &lt;br /&gt;
* What is SLURM?&lt;br /&gt;
* Can I submit SLURM jobs from anywhere?&lt;br /&gt;
* How do I find out which queues I can submit to?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===STORAGE===&lt;br /&gt;
&lt;br /&gt;
* Where should I put data from large simulations?&lt;br /&gt;
* How much disk space can I claim?&lt;br /&gt;
* How long can I have stuff on disk?&lt;br /&gt;
* Are my data backed up, and where?&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=FAQs&amp;diff=228</id>
		<title>FAQs</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=FAQs&amp;diff=228"/>
		<updated>2025-03-06T09:08:07Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;How do I access the HPC Cluster?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can remotely access the frontend machines astro01-09 via SSH to submit jobs or to analyze data (see [[Hardware]] for an up-to-date list of available frontends). For example, you can log in to the astro06 machine by entering the following on the command line&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh username@astro06.hpc.ku.dk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then be prompted to enter your password (note: if the connection times out, it is likely that your IP address is not whitelisted; see the question on whitelisting below). After entering the correct password, you will arrive at your home directory /groups/astro/username. It may be a good idea to check the load after logging in, using &amp;quot;top&amp;quot;, and to choose a different frontend if the CPU or memory use is already high (use &amp;quot;&amp;lt;&amp;quot; or &amp;quot;&amp;gt;&amp;quot; in top to temporarily change the sorting from CPU to memory / virtual memory).&lt;br /&gt;
&lt;br /&gt;
You can cut this down to&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh astro06&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
by adding these lines to the file ~/.ssh/config:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host astro06        &lt;br /&gt;
    User username        &lt;br /&gt;
    HostName astro06.hpc.ku.dk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Do not use the ssh option -Y, which &amp;quot;enables trusted X11 forwarding&amp;quot;.  This means, basically, that you turn off some essential X security features and say &amp;quot;I trust the remote host completely&amp;quot;.   Forwarding of X should work without any extra options.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How can I set up a passwordless ssh?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can avoid typing your password each time an SSH session is authenticated by using an SSH key with [http://en.wikipedia.org/wiki/RSA_(cryptosystem)#readme RSA] or [http://en.wikipedia.org/wiki/Digital_Signature_Algorithm#readme DSA] encryption, in combination with ssh-add and ssh-agent. Using [http://en.wikipedia.org/wiki/Ssh-keygen#readme ssh-keygen] on your local laptop/computer, you generate a pair consisting of a private key (id_rsa) and a public key (id_rsa.pub).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh-keygen -t rsa&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the option -t you specify the key type. Here we use RSA, but you can also use DSA. You will be asked for a filename; you can just press enter to use the default. Then you will be asked for a passphrase to protect your private key.&lt;br /&gt;
You should NOT under any circumstances use an empty passphrase; it is not necessary for convenience (see below), and it could endanger your access to remote supercomputers. Use instead a proper passphrase, similar to a password.&lt;br /&gt;
Two keys (id_rsa and id_rsa.pub) will be stored in the hidden folder ~/.ssh/ on your client (your laptop, or a remote host you want to use as a client of another host). You need to copy your public key to the accepted key list on the host machine (~/.ssh/authorized_keys).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat ~/.ssh/id_rsa.pub | ssh username@astro06.hpc.ku.dk &#039;cat &amp;gt;&amp;gt; .ssh/authorized_keys&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you connect now, the server will request your passphrase from the client (the one where you issued the ssh-keygen command). However, rather than waiting for the ssh prompt, you should use &amp;quot;ssh-add&amp;quot; to give the passphrase once and for all (e.g. just after rebooting your client).&lt;br /&gt;
If your ssh command is issued in a window inside a VNC session, or in a window on your local laptop, chances are you can just type &amp;quot;ssh-add&amp;quot; to store the password in a process called ssh-agent, which typically is already running (most X-sessions -- such as the ones you start under VNC -- are automatically started as child processes of an ssh-agent).&lt;br /&gt;
&lt;br /&gt;
If an ssh-agent process is already running (and your ssh command is a descendant of it), it stores the credentials created by your once-per-reboot ssh-add command and automatically answers requests from hosts you try to connect to.  If for some reason no ssh-agent is running on an intermediate host, then as an alternative you can forward your ssh-credentials from your laptop, by using &amp;quot;ssh -A&amp;quot; to login to astro0X, and continuing an ssh from there (having stored the id_rsa.pub from your laptop on the remote supercomputer).&lt;br /&gt;
Finally, if for some reason you start out from a place (such as a plain command window) where no ssh-agent is running, you can just start one by doing&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh-agent bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This starts an ssh-agent, which starts bash (or tcsh) as a descendant.   Then execute &amp;quot;ssh-add&amp;quot;, and you&#039;re free from typing the pass phrase, for as long as you keep the ssh-agent running.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How do I get my IP white-listed?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On any of the astro0X hosts, use the command&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
hpc-setup-firewall.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You have three personal slots, but other people&#039;s slots also work for you. The first time you login from home, or from a new place, you may need to login via muon.nbi.dk, or some other NBI host from which astro0X is already open. To find your IP number, either use one of the web services (but beware of spam-ware), or just type &amp;quot;finger -m $user&amp;quot; after logging in to an NBI host.&lt;br /&gt;
&#039;&#039;&#039;SSHFS&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You can conveniently access remote data on the frontends from your local machine via the SSH file system ([http://en.wikipedia.org/wiki/SSHFS#readme SSHFS]). To use SSHFS you need to install [http://en.wikipedia.org/wiki/Filesystem_in_Userspace#readme FUSE]. For Linux you can install [http://fuse.sourceforge.net/#readme fuse] and for Mac there is [https://osxfuse.github.io/#readme osxfuse]. You need to create an empty directory as mount point on your local laptop/computer, e.g.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir ~/nbi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then you mount the remote directory by specifying the host and the mount point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sshfs username@astro06.hpc.ku.dk:/astro/username/ ~/nbi/ -oauto_cache,reconnect,volname=nbi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can unmount the filesystem with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fusermount -u ~/nbi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SSH tunneling: to create a tunnel for, for example, display :11 on astro06 (display :11 corresponds to TCP port 5911, i.e. 5900 + the display number), do&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh astro06 -L 5911:localhost:5911&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can do the same without starting a remote shell by doing&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh astro06 -L 5911:localhost:5911 -fN&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you succeed in always using display :11 you can add the tunnel configuration as part of the ~/.ssh/config file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host tunnel        &lt;br /&gt;
    User username        &lt;br /&gt;
    HostName astro06.hpc.ku.dk &lt;br /&gt;
    LocalForward 5911 localhost:5911&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and then start the tunnel with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh tunnel -fN&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===GENERAL &amp;amp; SUPPORT===&lt;br /&gt;
&#039;&#039;&#039;Whom should I ask which questions?&#039;&#039;&#039;&lt;br /&gt;
# login nodes are unreachable: mailto:support@hpc.ku.dk&lt;br /&gt;
# login nodes are alive, but I don&#039;t have a homedir: mailto:support@hpc.ku.dk&lt;br /&gt;
# I don&#039;t have access to the software I need: mailto:support@hpc.ku.dk&lt;br /&gt;
# forgot my password: mailto:support@hpc.ku.dk&lt;br /&gt;
* Is there a sharing policy?&lt;br /&gt;
# cluster queues&lt;br /&gt;
# disk space&lt;br /&gt;
# analysis servers&lt;br /&gt;
* Can I run codes interactively, e.g. on a frontend or on an analysis server?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===ENVIRONMENTS===&lt;br /&gt;
&lt;br /&gt;
* What are &#039;modules&#039; and how do I use them?&lt;br /&gt;
* What is my default environment setup?&lt;br /&gt;
- MPI &lt;br /&gt;
- libraries&lt;br /&gt;
- compilers&lt;br /&gt;
-...&lt;br /&gt;
&lt;br /&gt;
* Is there a standard module to load which lays out everything - no sweat?&lt;br /&gt;
Yes, several standard modules are loaded by default via the same&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
source&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
command that makes the module command available:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
source /software/astro/startup.{sh,csh}  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use the .sh for bash and .csh for tcsh.&lt;br /&gt;
To undo the default and start from scratch, use&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
module purge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===SOFTWARE===&lt;br /&gt;
&lt;br /&gt;
* What software do I have available?&lt;br /&gt;
* Do I have access &lt;br /&gt;
* What is SLURM?&lt;br /&gt;
* Can I submit SLURM jobs from anywhere?&lt;br /&gt;
* How do I find out which queues I can submit to?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===STORAGE===&lt;br /&gt;
&lt;br /&gt;
* Where should I put data from large simulations?&lt;br /&gt;
* How much disk space can I claim?&lt;br /&gt;
* How long can I have stuff on disk?&lt;br /&gt;
* Are my data backed up, and where?&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=227</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=227"/>
		<updated>2025-03-06T08:44:32Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tycho contains both frontends (&amp;quot;Analysis Hardware&amp;quot;) that are accessible from the outside and which can be used for interactive work, such as development and analysis, and compute nodes (&amp;quot;Cluster Hardware&amp;quot;) that are only accessible through the SLURM queue systems.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Analysis Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Name !! CPUs !! Memory !! Memory Bandwidth !! GPUs !! Scratch !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro01.hpc.ku.dk || 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz || 1 TB DDR4-3200 MHz - 21 GB / core || 410 GB/s, 8.5 GB/s/core || 4x A100 || 11 TB || L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro02.hpc.ku.dk || 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 7.2 GB/s/core || 3x A30 || 28 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro03.hpc.ku.dk || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 1.5 TB DDR5-4800 MHz - 12 GB / core || 922 GB/s, 7.2 GB/s/core || None || None || L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro04.hpc.ku.dk || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 9.6 GB/s/core || 4x RTX A6000 || 42 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, 2 x NDR 200 Gbit/s to storage&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Cluster Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Queue Name !! #Nodes !! CPUs !! Memory !! Memory Bandwidth !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro_XX || 16 || 2 x 10 cores Xeon E5-2680v2 @ 2.8GHz || 64 GB DDR3-1866 MHz - 3.2 GB / core || 120 GB/s, 6 GB/s/core || L2: 256 KB / core, L3: 25 MB / socket, AVX2, FDR 56 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_XX || 70 || 2 x 24 cores Xeon 6248R @ 3.0GHz || 192 GB DDR4-2933 MHz - 4 GB / core || 282 GB/s, 5.9 GB/s/core || L2: 1 MB / core, L3: 35.75 MB / socket, AVX-512, EDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro3_XX || 50 || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 768 GB DDR5-4800 MHz -  6 GB / core || 922 GB/s, 7.2 GB/s/core || L2: 1 MB / core, L3: 256 MB / socket, AVX-512, 2 x NDR 200 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro_gpu || 3 || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 16 GB / core || 461 GB/s, 9.6 GB/s/core || 2x H100 GPUs, L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_gpu || 1 || 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz || 1 TB DDR4-3200 MHz - 32 GB / core || 410 GB/s, 12.8 GB/s/core || 4x A100 GPUs, L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Servers===&lt;br /&gt;
The astro_XX nodes are based on a Dell C6220II shoe-box design with dual 10-core Ivy Bridge CPUs. The astro2_XX nodes are Huawei FusionServer Pro X6000 machines with dual 24-core Cascade Lake CPUs. The astro3_XX nodes are XFusion 1258H V7 servers with dual 64-core Genoa CPUs.&lt;br /&gt;
&lt;br /&gt;
===GPUs===&lt;br /&gt;
GPUs are accessible interactively on the astro01, astro02, and astro04 frontend machines and through SLURM in the astro_gpu and astro2_gpu queues. GPUs are few in number on Tycho but provide potentially enormous computational value. _Please_ test whether your code can efficiently use e.g. a full GPU, or more than one GPU, before running long production jobs on them. In particular, machine-learning jobs, or codes that off-load calculations from high-level languages such as Python and Julia, may effectively block a full GPU, and sometimes speculatively allocate all GPU memory, without actually making good use of the resources. Therefore, test by profiling your code, using timers, or simply running on the different machines (astro01, astro04, and the different-sized virtual GPUs on astro02) to determine how well the workload scales.&lt;br /&gt;
&lt;br /&gt;
* The A100 GPUs available on the astro01 frontend machine and in the astro2_gpu queue are equipped with 40 GB of memory. They have full FP64 performance and are well suited for large-scale computations as well as machine learning workloads, but they are not the best ML GPUs available at Tycho. You can read more about their specs here https://www.nvidia.com/en-us/data-center/a100 .&lt;br /&gt;
&lt;br /&gt;
* The three A30 GPUs available on astro02 have 24 GB of memory per GPU. They have been split up so that the first GPU is fully available, the second GPU is split into 2 virtual GPUs (each with 28 SMs), and the third GPU is split into 4 virtual GPUs (each with 14 SMs). The GPUs on astro02 have full FP64 capabilities; while smaller than the GPUs on the other machines, they are very useful for longer-running jobs that only require a modest amount of GPU computing. You can read more about their specs here https://www.nvidia.com/en-us/data-center/products/a30-gpu .&lt;br /&gt;
&lt;br /&gt;
* The four RTX A6000 GPUs available on astro04 have 48 GB of RAM per GPU but very low FP64 performance. They are therefore not well suited for scientific calculations, but they provide hardware-accelerated remote visualization and have machine-learning performance similar to the A100 GPUs. You can read more about their specs here https://www.nvidia.com/en-us/design-visualization/rtx-a6000 .&lt;br /&gt;
&lt;br /&gt;
* The six H100 GPUs available through the astro_gpu queue have 94 GB of memory each and are our newest GPUs. They have full FP64 capability and very high machine-learning performance, and are well suited for both scientific and machine-learning jobs. You can read more about their specs here https://www.nvidia.com/en-us/data-center/h100 .&lt;br /&gt;
&lt;br /&gt;
===Global Storage===&lt;br /&gt;
* The home directory (/groups/astro) is a fully backed-up Lustre filesystem. We have a shared 6 TB quota and individual quotas of 50 GB per user.&lt;br /&gt;
* The scratch directory (/lustre/astro) is a ZFS-based high-performance Lustre filesystem with dedicated hardware for our group. The total space (disregarding the transparent compression) is 1,300 TB. The default quota on scratch is 5 TB; if you need more, please contact Troels Haugbølle with your supervisor / mentor / sponsor in CC and explain why and how much you need.&lt;br /&gt;
* The archive consists of two ZFS filesystems exported as NFS volumes from a storage server connected to the clusters with a 10 Gbit/s network connection. The two archive systems can be found under /groups/astro/archive0 and /groups/astro/archive1. These filesystems are old, new users will not get directories on them, and they will soon be decommissioned.&lt;br /&gt;
&lt;br /&gt;
===Scratch storage===&lt;br /&gt;
The scratch disks on astro01, astro02, and astro04 are RAID0 volumes consisting of a number of locally mounted NVMe disks. They have slightly higher bandwidth than the global filesystem, but can only be accessed from the specific machine. The scratch disks have several orders of magnitude higher IOPS than the global filesystem, so random-access I/O, or operations that require opening and closing a lot of files, will perform much faster on the scratch disks. _Space is limited_. Please clean up after use, and remember there are no backups or redundancy on the scratch disks.&lt;br /&gt;
&lt;br /&gt;
===Networks===&lt;br /&gt;
* External connection: the local HPC center is a Tier-1 CERN node and has a direct dual 400 Gbit/s connection to the Danish entry point of the European GEANT network in Lyngby. In practice we easily reach 100 MB/s when transferring larger files, and higher speeds are possible with parallel transfers.&lt;br /&gt;
* The backend storage servers for /groups/astro and /lustre/astro are all interconnected with 100 Gbit/s HDR InfiniBand on a common switch, which has uplinks to the different cluster networks.&lt;br /&gt;
* All frontend machines have Ethernet or InfiniBand adapters to provide optimal bandwidth for I/O.&lt;br /&gt;
* Astro_XX nodes have FDR (56 Gbit/s) InfiniBand connected to a single switch.&lt;br /&gt;
* Astro2_XX nodes have EDR (100 Gbit/s) with a 2:1 blocking factor and 24 nodes per switch (3 uplink switches, 1 core switch).&lt;br /&gt;
* Astro3_XX nodes have two NDR-200 (200 Gbit/s) adapters with one adapter per CPU socket. They are connected directly to a single 128-port NDR-200 switch.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=226</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=226"/>
		<updated>2025-03-06T08:43:19Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tycho contains both frontends (&amp;quot;Analysis Hardware&amp;quot;) that are accessible from the outside and which can be used for interactive work, such as development and analysis, and compute nodes (&amp;quot;Cluster Hardware&amp;quot;) that are only accessible through the SLURM queue systems.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Analysis Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Name !! CPUs !! Memory !! Memory Bandwidth !! GPUs !! Scratch !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro01.hpc.ku.dk || 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz || 1 TB DDR4-3200 MHz - 21 GB / core || 410 GB/s, 8.5 GB/s/core || 4x A100 || 11 TB || L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro02.hpc.ku.dk || 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 7.2 GB/s/core || 3x A30 || 28 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro03.hpc.ku.dk || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 1.5 TB DDR5-4800 MHz - 12 GB / core || 922 GB/s, 7.2 GB/s/core || None || None || L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro04.hpc.ku.dk || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 9.6 GB/s/core || 4x RTX A6000 || 42 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, 2 x NDR 200 Gbit/s to storage&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Cluster Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Queue Name !! #Nodes !! CPUs !! Memory !! Memory Bandwidth !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro_XX || 16 || 2 x 10 cores Xeon E5-2680v2 @ 2.8GHz || 64 GB DDR3-1866 MHz - 3.2 GB / core || 120 GB/s, 6 GB/s/core || L2: 256 KB / core, L3: 25 MB / socket, AVX2, FDR 56 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_XX || 70 || 2 x 24 cores Xeon 6248R @ 3.0GHz || 192 GB DDR4-2933 MHz - 4 GB / core || 282 GB/s, 5.9 GB/s/core || L2: 1 MB / core, L3: 35.75 MB / socket, AVX-512, EDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro3_XX || 50 || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 768 GB DDR5-4800 MHz -  6 GB / core || 922 GB/s, 7.2 GB/s/core || L2: 1 MB / core, L3: 256 MB / socket, AVX-512, 2 x NDR 200 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro_gpu || 3 || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 16 GB / core || 461 GB/s, 9.6 GB/s/core || 2x H100 GPUs, L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_gpu || 1 || 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz || 1 TB DDR4-3200 MHz - 32 GB / core || 410 GB/s, 12.8 GB/s/core || 4x A100 GPUs, L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Servers===&lt;br /&gt;
The astro_XX nodes are based on a Dell C6220II shoe-box design with dual 10-core Ivy Bridge CPUs. The astro2_XX nodes are Huawei FusionServer Pro X6000 with dual 24-core Cascade Lake CPUs. The astro3_XX nodes are XFusion 1258H V7 servers with dual 64-core Genoa CPUs.&lt;br /&gt;
&lt;br /&gt;
===GPUs===&lt;br /&gt;
GPUs are accessible interactively on the astro01, astro02, and astro04 frontend machines and through SLURM in the astro_gpu and astro2_gpu queues. GPUs are few in number on Tycho but provide potentially enormous computational value. _Please_ test whether your code can efficiently use e.g. a full GPU, or more than one GPU, before running long production jobs on them. In particular, machine-learning jobs or codes that off-load calculations from high-level languages such as Python and Julia may effectively block a full GPU, and sometimes speculatively allocate all GPU memory, without actually making good use of the resources. Therefore, test by profiling your code, using timers, or simply running on the different machines (astro01, astro02, and astro04) to determine how well the workload scales; a minimal timing sketch is shown below.&lt;br /&gt;
&lt;br /&gt;
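A minimal timing sketch (an illustration only, assuming PyTorch is installed; the matrix size and iteration count are arbitrary placeholder values) that measures sustained FP32 matmul throughput on one GPU and can be compared across astro01, astro02, and astro04:&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
import time&lt;br /&gt;
import torch&lt;br /&gt;
&lt;br /&gt;
assert torch.cuda.is_available()  # fail early if no GPU is visible&lt;br /&gt;
dev = torch.device(&amp;quot;cuda:0&amp;quot;)&lt;br /&gt;
n = 8192                          # placeholder size; adjust to the GPU memory&lt;br /&gt;
a = torch.randn(n, n, device=dev)&lt;br /&gt;
b = torch.randn(n, n, device=dev)&lt;br /&gt;
torch.cuda.synchronize()          # finish setup before timing&lt;br /&gt;
iters = 20&lt;br /&gt;
t0 = time.perf_counter()&lt;br /&gt;
for _ in range(iters):&lt;br /&gt;
    c = a @ b&lt;br /&gt;
torch.cuda.synchronize()          # wait for the GPU to drain its queue&lt;br /&gt;
dt = (time.perf_counter() - t0) / iters&lt;br /&gt;
print(round(2 * n**3 / dt / 1e12, 1))  # sustained FP32 TFlops per matmul&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;
&lt;br /&gt;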
* The A100 GPUs available on the astro01 frontend machine and in the astro2_gpu queue are equipped with 40 GB of memory each. They have full FP64 performance and are well suited for large-scale computations as well as machine-learning workloads, but they are not the best ML GPUs available at Tycho. You can read more about their specs here https://www.nvidia.com/en-us/data-center/a100 .&lt;br /&gt;
&lt;br /&gt;
* The three A30 GPUs available on astro02 have 24 GB of memory per GPU. They have been split up so that the first GPU is fully available, the second GPU is split into 2 virtual GPUs (each with 28 SMs), and the third GPU is split into 4 virtual GPUs (each with 14 SMs). The GPUs on astro02 have full FP64 capabilities; they are smaller than those on the other machines and very useful for longer-running jobs that only require a smaller amount of GPU computing. You can read more about their specs here https://www.nvidia.com/en-us/data-center/products/a30-gpu .&lt;br /&gt;
&lt;br /&gt;
* The four RTX A6000 GPUs available on astro04 have 48 GB of RAM per GPU and very low FP64 performance. They are therefore not well suited for scientific calculations, but they provide hardware-accelerated remote visualization and have machine-learning performance similar to the A100 GPUs. You can read more about their specs here https://www.nvidia.com/en-us/design-visualization/rtx-a6000 .&lt;br /&gt;
&lt;br /&gt;
* The six H100 GPUs available through the astro_gpu queue have 94 GB of memory each and are our newest GPUs. They have full FP64 performance and very high machine-learning performance, and are well suited for both scientific and machine-learning jobs. You can read more about their specs here https://www.nvidia.com/en-us/design-visualization/h100 .&lt;br /&gt;
&lt;br /&gt;
===Global Storage===&lt;br /&gt;
* The home directory (/groups/astro) is a fully backed-up Lustre filesystem. We have a shared 6 TB quota and individual quotas of 50 GB per user.&lt;br /&gt;
* The scratch directory (/lustre/astro) is a ZFS-based, high-performance Lustre filesystem with dedicated hardware for our group. The total space (disregarding the transparent compression) is 1300 TB. The default quota on scratch is 5 TB; if you need more, please contact Troels Haugbølle with your supervisor / mentor / sponsor in CC and explain why and how much you need.&lt;br /&gt;
* The archive consists of two ZFS filesystems exported as NFS volumes from a storage server connected to the clusters with a 10 Gbit/s network connection. The archive systems can be found under /groups/astro/archive0 and /groups/astro/archive1. These filesystems are old and new users will not get directories on them; they will soon be decommissioned.&lt;br /&gt;
&lt;br /&gt;
===Scratch storage===&lt;br /&gt;
The scratch disks on astro01, astro02, and astro04 are RAID0 volumes consisting of a number of locally mounted NVMe disks. They have slightly higher bandwidth than the global filesystem, but can only be accessed from the specific machine. The scratch disks have several orders of magnitude higher IOPS than the global filesystem, so random-access I/O and operations that open and close many files will perform much faster on the scratch disks. _Space is limited_. Please clean up after use, and remember there are no backups or redundancy on the scratch disks. A quick way to compare the filesystems is sketched below.&lt;br /&gt;
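&lt;br /&gt;
As an (unscientific) illustration of the IOPS gap, the sketch below times the creation of many small files in a given directory; the two paths at the bottom are placeholders that you should point at your own directories on the global filesystem and on a local scratch disk:&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
import os&lt;br /&gt;
import shutil&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
def files_per_second(base, count=1000):&lt;br /&gt;
    # create, write, and close many tiny files, then clean up again&lt;br /&gt;
    os.makedirs(base, exist_ok=True)&lt;br /&gt;
    t0 = time.perf_counter()&lt;br /&gt;
    for i in range(count):&lt;br /&gt;
        with open(os.path.join(base, str(i)), &amp;quot;w&amp;quot;) as f:&lt;br /&gt;
            f.write(&amp;quot;x&amp;quot;)&lt;br /&gt;
    dt = time.perf_counter() - t0&lt;br /&gt;
    shutil.rmtree(base)&lt;br /&gt;
    return count / dt&lt;br /&gt;
&lt;br /&gt;
# placeholder paths - replace with your own test directories&lt;br /&gt;
print(files_per_second(&amp;quot;/lustre/astro/YOUR_USER/iops_test&amp;quot;))&lt;br /&gt;
print(files_per_second(&amp;quot;/scratch/YOUR_USER/iops_test&amp;quot;))&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;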
&lt;br /&gt;
===Networks===&lt;br /&gt;
* External connection: The local HPC center is a Tier-1 CERN node and has a direct dual 400 Gbit/s connection to the Danish entry point of the European GEANT network in Lyngby. In practice we easily reach 100 MB/s when transferring larger files, and higher speeds are possible with parallel transfers (see the sketch after this list).&lt;br /&gt;
* The backend storage servers for /groups/astro and /lustre/astro are all interconnected with 100 Gbit/s HDR Infiniband. This switch has uplinks to the different cluster networks.&lt;br /&gt;
* All frontend machines have Ethernet or Infiniband adapters to provide optimal I/O bandwidth.&lt;br /&gt;
* Astro_XX nodes have FDR (56 Gbit/s) Infiniband connected to a single switch.&lt;br /&gt;
* Astro2_XX nodes have EDR (100 Gbit/s) with a 2:1 blocking factor and 24 nodes per switch (3 uplink switches, 1 core switch).&lt;br /&gt;
* Astro3_XX nodes have two NDR-200 (200 Gbit/s) adapters with one adapter per CPU socket. They are connected directly to a single 128-port NDR-200 switch.&lt;br /&gt;
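&lt;br /&gt;
For the parallel transfers mentioned above, one simple pattern is to run several rsync streams over disjoint subdirectories at once; this is only a sketch, and the directory names and destination host are placeholders:&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
import subprocess&lt;br /&gt;
from concurrent.futures import ThreadPoolExecutor&lt;br /&gt;
&lt;br /&gt;
# placeholder sources and destination - adapt to your own transfer&lt;br /&gt;
dirs = [&amp;quot;run_a&amp;quot;, &amp;quot;run_b&amp;quot;, &amp;quot;run_c&amp;quot;, &amp;quot;run_d&amp;quot;]&lt;br /&gt;
dest = &amp;quot;user@remote.example.org:/data/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def copy(d):&lt;br /&gt;
    # one rsync per subdirectory; -a preserves attributes, -z compresses&lt;br /&gt;
    return subprocess.run([&amp;quot;rsync&amp;quot;, &amp;quot;-az&amp;quot;, d, dest]).returncode&lt;br /&gt;
&lt;br /&gt;
# a few concurrent streams are often enough to go well beyond one stream&lt;br /&gt;
with ThreadPoolExecutor(max_workers=4) as pool:&lt;br /&gt;
    print(list(pool.map(copy, dirs)))&lt;br /&gt;
&lt;/pre&gt;&lt;/div&gt;</summary>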
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=225</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=225"/>
		<updated>2025-03-06T08:42:46Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tycho contains both frontends (&amp;quot;Analysis Hardware&amp;quot;) that are accessible from the outside and which can be used for interactive work, such as development and analysis, and compute nodes (&amp;quot;Cluster Hardware&amp;quot;) that are only accessible through the SLURM queue systems.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Analysis Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Name !! CPUs !! Memory !! Memory Bandwidth !! GPUs !! Scratch !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro01.hpc.ku.dk || 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz || 1 TB DDR4-3200 MHz - 21 GB / core || 410 GB/s, 8.5 GB/s/core || 4x A100 || 11 TB || L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro02.hpc.ku.dk || 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 7.2 GB/s/core || 3x A30 || 28 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro03.hpc.ku.dk || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 1,5 TB DDR5-4800 MHz - 12 GB / core || 922 GB/s, 7.2 GB/s/core || None || None || L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro04.hpc.ku.dk || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 9.6 GB/s/core || 4x RTX A6000 || 42 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, 2 x NDR 200 Gbit/s to storage&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Cluster Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Queue Name !! #Nodes !! CPUs !! Memory !! Memory Bandwidth !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro_XX || 16 || 2 x 10 cores Xeon E5-2680v2 @ 2.8GHz || 64 GB DDR3-1866 MHz - 3.2 GB / core || 120 GB/s, 6 GB/s/core || L2: 256 KB / core, L3: 25 MB / socket, AVX2, FDR 56 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_XX || 70 || 2 x 24 cores Xeon 6248R @ 3.0GHz || 192 GB DDR4-2933 MHz - 4 GB / core || 282 GB/s, 5.9 GB/s/core || L2: 1 MB / core, L3: 35.75 MB / socket, AVX-512, EDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro3_XX || 50 || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 768 GB DDR5-4800 MHz -  6 GB / core || 922 GB/s, 7.2 GB/s/core || L2: 1 MB / core, L3: 256 MB / socket, AVX-512, 2 x NDR 200 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro_gpu || 3 || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 16 GB / core || 461 GB/s, 9.6 GB/s/core || 2x H100 GPUs, L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_gpu || 1 || 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz || 1 TB DDR4-3200 MHz - 32 GB / core || 410 GB/s, 12.8 GB/s/core || 4x A100 GPUs, L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Servers===&lt;br /&gt;
The astro_XX nodes are based on a Dell C6220II shoe-box design with dual 10-core Ivy Bridge CPUs. The astro2_XX nodes are Huawei FusionServer Pro X6000 with dual 24-core Cascade Lake CPUs. The astro3_XX nodes are XFusion 1258H V7 servers with dual 64-core Genoa CPUs.&lt;br /&gt;
&lt;br /&gt;
===GPUs===&lt;br /&gt;
GPUs are accessible interactively on the astro01, astro02, and astro04 frontend machines and through SLURM in the astro_gpu and astro2_gpu queues. GPUs are few in number on Tycho but provide potentially enormous computational value. _Please_ test whether your code can efficiently use e.g. a full GPU, or more than one GPU, before running long production jobs on them. In particular, machine-learning jobs or codes that off-load calculations from high-level languages such as Python and Julia may effectively block a full GPU, and sometimes speculatively allocate all GPU memory, without actually making good use of the resources. Therefore, test by profiling your code, using timers, or simply running on the different machines (astro01, astro02, and astro04) to determine how well the workload scales.&lt;br /&gt;
&lt;br /&gt;
* The A100 GPUs available on the astro01 frontend machine and in the astro2_gpu queue are equipped with 40 GB of memory each. They have full FP64 performance and are well suited for large-scale computations as well as machine-learning workloads, but they are not the best ML GPUs available at Tycho. You can read more about their specs here [https://www.nvidia.com/en-us/data-center/a100].&lt;br /&gt;
&lt;br /&gt;
* The three A30 GPUs available on astro02 have 24 GB of memory per GPU. They have been split up so that the first GPU is fully available, the second GPU is split into 2 virtual GPUs (each with 28 SMs), and the third GPU is split into 4 virtual GPUs (each with 14 SMs). The GPUs on astro02 have full FP64 capabilities; they are smaller than those on the other machines and very useful for longer-running jobs that only require a smaller amount of GPU computing. You can read more about their specs here https://www.nvidia.com/en-us/data-center/products/a30-gpu.&lt;br /&gt;
&lt;br /&gt;
* The four RTX A6000 GPUs available on astro04 have 48 GB of RAM per GPU and very low FP64 performance. They are therefore not well suited for scientific calculations, but they provide hardware-accelerated remote visualization and have machine-learning performance similar to the A100 GPUs. You can read more about their specs here [https://www.nvidia.com/en-us/design-visualization/rtx-a6000].&lt;br /&gt;
&lt;br /&gt;
* The six H100 GPUs available through the astro_gpu queue have 94 GB of memory each and are our newest GPUs. They have full FP64 performance and very high machine-learning performance, and are well suited for both scientific and machine-learning jobs. You can read more about their specs here [https://www.nvidia.com/en-us/design-visualization/h100].&lt;br /&gt;
&lt;br /&gt;
===Global Storage===&lt;br /&gt;
* The home directory (/groups/astro) is a fully backed-up Lustre filesystem. We have a shared 6 TB quota and individual quotas of 50 GB per user.&lt;br /&gt;
* The scratch directory (/lustre/astro) is a ZFS-based, high-performance Lustre filesystem with dedicated hardware for our group. The total space (disregarding the transparent compression) is 1300 TB. The default quota on scratch is 5 TB; if you need more, please contact Troels Haugbølle with your supervisor / mentor / sponsor in CC and explain why and how much you need.&lt;br /&gt;
* The archive consists of two ZFS filesystems exported as NFS volumes from a storage server connected to the clusters with a 10 Gbit/s network connection. The archive systems can be found under /groups/astro/archive0 and /groups/astro/archive1. These filesystems are old and new users will not get directories on them; they will soon be decommissioned.&lt;br /&gt;
&lt;br /&gt;
===Scratch storage===&lt;br /&gt;
The scratch disks on astro01, astro02, and astro04 are RAID0 volumes consisting of a number of locally mounted NVMe disks. They have slightly higher bandwidth than the global filesystem, but can only be accessed from the specific machine. The scratch disks have several orders of magnitude higher IOPS than the global filesystem, so random-access I/O and operations that open and close many files will perform much faster on the scratch disks. _Space is limited_. Please clean up after use, and remember there are no backups or redundancy on the scratch disks.&lt;br /&gt;
&lt;br /&gt;
===Networks===&lt;br /&gt;
* External connection: The local HPC center is a Tier-1 CERN node and has a direct dual 400 Gbit/s connection to the Danish entry point of the European GEANT network in Lyngby. In practice we easily reach 100 MB/s when transferring larger files, and higher speeds are possible with parallel transfers.&lt;br /&gt;
* The backend storage servers for /groups/astro and /lustre/astro are all interconnected with 100 Gbit/s HDR Infiniband. This switch has uplinks to the different cluster networks.&lt;br /&gt;
* All frontend machines have Ethernet or Infiniband adapters to provide optimal I/O bandwidth.&lt;br /&gt;
* Astro_XX nodes have FDR (56 Gbit/s) Infiniband connected to a single switch.&lt;br /&gt;
* Astro2_XX nodes have EDR (100 Gbit/s) with a 2:1 blocking factor and 24 nodes per switch (3 uplink switches, 1 core switch).&lt;br /&gt;
* Astro3_XX nodes have two NDR-200 (200 Gbit/s) adapters with one adapter per CPU socket. They are connected directly to a single 128-port NDR-200 switch.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=224</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=224"/>
		<updated>2025-03-06T08:42:19Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tycho contains both frontends (&amp;quot;Analysis Hardware&amp;quot;) that are accessible from the outside and which can be used for interactive work, such as development and analysis, and compute nodes (&amp;quot;Cluster Hardware&amp;quot;) that are only accessible through the SLURM queue systems.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Analysis Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Name !! CPUs !! Memory !! Memory Bandwidth !! GPUs !! Scratch !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro01.hpc.ku.dk || 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz || 1 TB DDR4-3200 MHz - 21 GB / core || 410 GB/s, 8.5 GB/s/core || 4x A100 || 11 TB || L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro02.hpc.ku.dk || 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 7.2 GB/s/core || 3x A30 || 28 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro03.hpc.ku.dk || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 1,5 TB DDR5-4800 MHz - 12 GB / core || 922 GB/s, 7.2 GB/s/core || None || None || L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro04.hpc.ku.dk || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 9.6 GB/s/core || 4x RTX A6000 || 42 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, 2 x NDR 200 Gbit/s to storage&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Cluster Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Queue Name !! #Nodes !! CPUs !! Memory !! Memory Bandwidth !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro_XX || 16 || 2 x 10 cores Xeon E5-2680v2 @ 2.8GHz || 64 GB DDR3-1866 MHz - 3.2 GB / core || 120 GB/s, 6 GB/s/core || L2: 256 KB / core, L3: 25 MB / socket, AVX2, FDR 56 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_XX || 70 || 2 x 24 cores Xeon 6248R @ 3.0GHz || 192 GB DDR4-2933 MHz - 4 GB / core || 282 GB/s, 5.9 GB/s/core || L2: 1 MB / core, L3: 35.75 MB / socket, AVX-512, EDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro3_XX || 50 || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 768 GB DDR5-4800 MHz -  6 GB / core || 922 GB/s, 7.2 GB/s/core || L2: 1 MB / core, L3: 256 MB / socket, AVX-512, 2 x NDR 200 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro_gpu || 3 || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 16 GB / core || 461 GB/s, 9.6 GB/s/core || 2x H100 GPUs, L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_gpu || 1 || 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz || 1 TB DDR4-3200 MHz - 32 GB / core || 410 GB/s, 12.8 GB/s/core || 4x A100 GPUs, L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Servers===&lt;br /&gt;
The astro_XX nodes are based on a Dell C6220II shoe-box design with dual 10-core Ivy Bridge CPUs. The astro2_XX nodes are Huawei FusionServer Pro X6000 with dual 24-core Cascade Lake CPUs. The astro3_XX nodes are XFusion 1258H V7 servers with dual 64-core Genoa CPUs.&lt;br /&gt;
&lt;br /&gt;
===GPUs===&lt;br /&gt;
GPUs are accessible interactively on the astro01, astro02, and astro04 frontend machines and through SLURM in the astro_gpu and astro2_gpu queues. GPUs are few in number on Tycho but provide potentially enormous computational value. _Please_ test whether your code can efficiently use e.g. a full GPU, or more than one GPU, before running long production jobs on them. In particular, machine-learning jobs or codes that off-load calculations from high-level languages such as Python and Julia may effectively block a full GPU, and sometimes speculatively allocate all GPU memory, without actually making good use of the resources. Therefore, test by profiling your code, using timers, or simply running on the different machines (astro01, astro02, and astro04) to determine how well the workload scales.&lt;br /&gt;
&lt;br /&gt;
* The A100 GPUs available on the astro01 frontend machine and in the astro2_gpu queue are equipped with 40 GB of memory each. They have full FP64 performance and are well suited for large-scale computations as well as machine-learning workloads, but they are not the best ML GPUs available at Tycho. You can read more about their specs here [[https://www.nvidia.com/en-us/data-center/a100]].&lt;br /&gt;
&lt;br /&gt;
* The three A30 GPUs available on astro02 have 24 GB of memory per GPU. They have been split up so that the first GPU is fully available, the second GPU is split into 2 virtual GPUs (each with 28 SMs), and the third GPU is split into 4 virtual GPUs (each with 14 SMs). The GPUs on astro02 have full FP64 capabilities; they are smaller than those on the other machines and very useful for longer-running jobs that only require a smaller amount of GPU computing. You can read more about their specs here [[https://www.nvidia.com/en-us/data-center/products/a30-gpu]].&lt;br /&gt;
&lt;br /&gt;
* The four RTX A6000 GPUs available on astro04 have 48 GB of RAM per GPU and very low FP64 performance. They are therefore not well suited for scientific calculations, but they provide hardware-accelerated remote visualization and have machine-learning performance similar to the A100 GPUs. You can read more about their specs here [[https://www.nvidia.com/en-us/design-visualization/rtx-a6000]].&lt;br /&gt;
&lt;br /&gt;
* The six H100 GPUs available through the astro_gpu queue have 94 GB of memory each and are our newest GPUs. They have full FP64 performance and very high machine-learning performance, and are well suited for both scientific and machine-learning jobs. You can read more about their specs here [[https://www.nvidia.com/en-us/design-visualization/h100]].&lt;br /&gt;
&lt;br /&gt;
===Global Storage===&lt;br /&gt;
* The home directory (/groups/astro) is a fully backed-up Lustre filesystem. We have a shared 6 TB quota and individual quotas of 50 GB per user.&lt;br /&gt;
* The scratch directory (/lustre/astro) is a ZFS-based, high-performance Lustre filesystem with dedicated hardware for our group. The total space (disregarding the transparent compression) is 1300 TB. The default quota on scratch is 5 TB; if you need more, please contact Troels Haugbølle with your supervisor / mentor / sponsor in CC and explain why and how much you need.&lt;br /&gt;
* The archive consists of two ZFS filesystems exported as NFS volumes from a storage server connected to the clusters with a 10 Gbit/s network connection. The archive systems can be found under /groups/astro/archive0 and /groups/astro/archive1. These filesystems are old and new users will not get directories on them; they will soon be decommissioned.&lt;br /&gt;
&lt;br /&gt;
===Scratch storage===&lt;br /&gt;
The scratch disks on astro01, astro02, and astro04 are RAID0 volumes consisting of a number of locally mounted NVMe disks. They have slightly higher bandwidth than the global filesystem, but can only be accessed from the specific machine. The scratch disks have several orders of magnitude higher IOPS than the global filesystem, so random-access I/O and operations that open and close many files will perform much faster on the scratch disks. _Space is limited_. Please clean up after use, and remember there are no backups or redundancy on the scratch disks.&lt;br /&gt;
&lt;br /&gt;
===Networks===&lt;br /&gt;
* External connection: The local HPC center is a Tier-1 CERN node and has a direct dual 400 Gbit/s connection to the Danish entry point of the European GEANT network in Lyngby. In practice we easily reach 100 MB/s when transferring larger files, and higher speeds are possible with parallel transfers.&lt;br /&gt;
* The backend storage servers for /groups/astro and /lustre/astro are all interconnected with 100 Gbit/s HDR Infiniband. This switch has uplinks to the different cluster networks.&lt;br /&gt;
* All frontend machines have Ethernet or Infiniband adapters to provide optimal I/O bandwidth.&lt;br /&gt;
* Astro_XX nodes have FDR (56 Gbit/s) Infiniband connected to a single switch.&lt;br /&gt;
* Astro2_XX nodes have EDR (100 Gbit/s) with a 2:1 blocking factor and 24 nodes per switch (3 uplink switches, 1 core switch).&lt;br /&gt;
* Astro3_XX nodes have two NDR-200 (200 Gbit/s) adapters with one adapter per CPU socket. They are connected directly to a single 128-port NDR-200 switch.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=223</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=223"/>
		<updated>2025-03-06T08:41:15Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tycho contains both frontends (&amp;quot;Analysis Hardware&amp;quot;) that are accessible from the outside and which can be used for interactive work, such as development and analysis, and compute nodes (&amp;quot;Cluster Hardware&amp;quot;) that are only accessible through the SLURM queue systems.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Analysis Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Name !! CPUs !! Memory !! Memory Bandwidth !! GPUs !! Scratch !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro01.hpc.ku.dk || 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz || 1 TB DDR4-3200 MHz - 21 GB / core || 410 GB/s, 8.5 GB/s/core || 4x A100 || 11 TB || L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro02.hpc.ku.dk || 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 7.2 GB/s/core || 3x A30 || 28 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro03.hpc.ku.dk || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 1,5 TB DDR5-4800 MHz - 12 GB / core || 922 GB/s, 7.2 GB/s/core || None || None || L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro04.hpc.ku.dk || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 9.6 GB/s/core || 4x RTX A6000 || 42 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, 2 x NDR 200 Gbit/s to storage&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Cluster Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Queue Name !! #Nodes !! CPUs !! Memory !! Memory Bandwidth !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro_XX || 16 || 2 x 10 cores Xeon E5-2680v2 @ 2.8GHz || 64 GB DDR3-1866 MHz - 3.2 GB / core || 120 GB/s, 6 GB/s/core || L2: 256 KB / core, L3: 25 MB / socket, AVX2, FDR 56 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_XX || 70 || 2 x 24 cores Xeon 6248R @ 3.0GHz || 192 GB DDR4-2933 MHz - 4 GB / core || 282 GB/s, 5.9 GB/s/core || L2: 1 MB / core, L3: 35.75 MB / socket, AVX-512, EDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro3_XX || 50 || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 768 GB DDR5-4800 MHz -  6 GB / core || 922 GB/s, 7.2 GB/s/core || L2: 1 MB / core, L3: 256 MB / socket, AVX-512, 2 x NDR 200 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro_gpu || 3 || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 16 GB / core || 461 GB/s, 9.6 GB/s/core || 2x H100 GPUs, L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_gpu || 1 || 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz || 1 TB DDR4-3200 MHz - 32 GB / core || 410 GB/s, 12.8 GB/s/core || 4x A100 GPUs, L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Servers===&lt;br /&gt;
The astro_XX nodes are based on a Dell C6220II shoe-box design with dual 10-core Ivy Bridge CPUs. The astro2_XX nodes are Huawei FusionServer Pro X6000 with dual 24-core Cascade Lake CPUs. The astro3_XX nodes are XFusion 1258H V7 servers with dual 64-core Genoa CPUs.&lt;br /&gt;
&lt;br /&gt;
===GPUs===&lt;br /&gt;
GPUs are accessible interactively on the astro01, astro02, and astro04 frontend machines and through SLURM in the astro_gpu and astro2_gpu queues. GPUs are few in number on Tycho but provide potentially enormous computational value. _Please_ test whether your code can efficiently use e.g. a full GPU, or more than one GPU, before running long production jobs on them. In particular, machine-learning jobs or codes that off-load calculations from high-level languages such as Python and Julia may effectively block a full GPU, and sometimes speculatively allocate all GPU memory, without actually making good use of the resources. Therefore, test by profiling your code, using timers, or simply running on the different machines (astro01, astro02, and astro04) to determine how well the workload scales.&lt;br /&gt;
&lt;br /&gt;
* The A100 GPUs available on the astro01 frontend machine and in the astro2_gpu queue are equipped with 40 GB of memory each. They have full FP64 performance and are well suited for large-scale computations as well as machine-learning workloads, but they are not the best ML GPUs available at Tycho. You can read more about their specs here [[https://www.nvidia.com/en-us/data-center/a100]].&lt;br /&gt;
&lt;br /&gt;
* The three A30 GPUs available on astro02 have 24 GB of memory per GPU. They have been split up so that the first GPU is fully available, the second GPU is split into 2 virtual GPUs (each with 28 SMs), and the third GPU is split into 4 virtual GPUs (each with 14 SMs). The GPUs on astro02 have full FP64 capabilities; they are smaller than those on the other machines and very useful for longer-running jobs that only require a smaller amount of GPU computing. You can read more about their specs here [[https://www.nvidia.com/en-us/data-center/products/a30-gpu]].&lt;br /&gt;
&lt;br /&gt;
* The four RTX A6000 GPUs available on astro04 have 48 GB of RAM per GPU and very low FP64 performance. They are therefore not well suited for scientific calculations, but they provide hardware-accelerated remote visualization and have machine-learning performance similar to the A100 GPUs. You can read more about their specs here [[https://www.nvidia.com/en-us/design-visualization/rtx-a6000]].&lt;br /&gt;
&lt;br /&gt;
* The six H100 GPUs available through the astro_gpu queue have 94 GB of memory each and are our newest GPUs. They have full FP64 performance and very high machine-learning performance, and are well suited for both scientific and machine-learning jobs. You can read more about their specs here [[https://www.nvidia.com/en-us/design-visualization/h100]].&lt;br /&gt;
&lt;br /&gt;
===Global Storage===&lt;br /&gt;
* The home directory (/groups/astro) is a fully backed-up Lustre filesystem. We have a shared 6 TB quota and individual quotas of 50 GB per user.&lt;br /&gt;
* The scratch directory (/lustre/astro) is a ZFS-based, high-performance Lustre filesystem with dedicated hardware for our group. The total space (disregarding the transparent compression) is 1300 TB. The default quota on scratch is 5 TB; if you need more, please contact Troels Haugbølle with your supervisor / mentor / sponsor in CC and explain why and how much you need.&lt;br /&gt;
* The archive consists of two ZFS filesystems exported as NFS volumes from a storage server connected to the clusters with a 10 Gbit/s network connection. The archive systems can be found under /groups/astro/archive0 and /groups/astro/archive1. These filesystems are old and new users will not get directories on them; they will soon be decommissioned.&lt;br /&gt;
&lt;br /&gt;
===Scratch storage===&lt;br /&gt;
The scratch disks on astro01, astro02, and astro04 are RAID0 volumes consisting of a number of locally mounted NVMe disks. They have slightly higher bandwidth than the global filesystem, but can only be accessed from the specific machine. The scratch disks have several orders of magnitude higher IOPS than the global filesystem, so random-access I/O and operations that open and close many files will perform much faster on the scratch disks. _Space is limited_. Please clean up after use, and remember there are no backups or redundancy on the scratch disks.&lt;br /&gt;
&lt;br /&gt;
===Networks===&lt;br /&gt;
* External connection: The local HPC center is a Tier-1 CERN node and has a direct dual 400 Gbit/s connection to the Danish entry point of the European GEANT network in Lyngby. In practice we easily reach 100 MB/s when transferring larger files, and higher speeds are possible with parallel transfers.&lt;br /&gt;
* The backend storage servers for /groups/astro and /lustre/astro are all interconnected with 100 Gbit/s HDR Infiniband. This switch has uplinks to the different cluster networks.&lt;br /&gt;
* All frontend machines have Ethernet or Infiniband adapters to provide optimal I/O bandwidth.&lt;br /&gt;
* Astro_XX nodes have FDR (56 Gbit/s) Infiniband connected to a single switch.&lt;br /&gt;
* Astro2_XX nodes have EDR (100 Gbit/s) with a 2:1 blocking factor and 24 nodes per switch (3 uplink switches, 1 core switch).&lt;br /&gt;
* Astro3_XX nodes have two NDR-200 (200 Gbit/s) adapters with one adapter per CPU socket. They are connected directly to a single 128-port NDR-200 switch.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=222</id>
		<title>Tycho Technical Documentation</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=222"/>
		<updated>2025-03-06T08:20:39Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the technical documentation for the Tycho high performance computing resources. See [https://nbi.ku.dk/english/research_infrastructure/tycho-supercomputer/ the Infrastructure page] at NBI for a non-technical overview of the cluster.&lt;br /&gt;
&lt;br /&gt;
Tycho contains in total more than 10,000 CPU cores and 13 data-center-class GPUs. The cluster is complemented by a 1,300 TB data storage archive and a number of powerful analysis machines used as frontends for the cluster and for pre- and post-processing. Current FP64 peak performance is 443 TFlops from the CPUs and 276 TFlops from the GPUs. The GPUs are even more powerful at lower precision, having 700 TFlops of FP32 performance and more than 18 PFlops of FP16 Tensor-core performance for machine-learning workloads.&lt;br /&gt;
&lt;br /&gt;
Tycho is hosted at the [http://www.dcsc.ku.dk/ High Performance Computing center] at the Faculty of SCIENCE, University of Copenhagen.&lt;br /&gt;
&lt;br /&gt;
===First steps===&lt;br /&gt;
Please visit the [[first steps]] page to get started.&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
* [[Acknowledging the use of Tycho in articles and presentations]] &lt;br /&gt;
* [[Getting Help]]&lt;br /&gt;
* [[Mattermost discussion forum]]&lt;br /&gt;
* [[Being a good HPC user]]&lt;br /&gt;
* [[Accessing Tycho]]&lt;br /&gt;
* [[Using Jupyter notebooks on the frontends]]&lt;br /&gt;
* [[Visual Studio Remote Development]]&lt;br /&gt;
* [[Virtual Desktop]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Erda]]&lt;br /&gt;
* [[Using GPUs]]&lt;br /&gt;
* [[Compilers]]&lt;br /&gt;
* [[Debugging and Profiling]]&lt;br /&gt;
* [[MPI Libraries]]&lt;br /&gt;
* [[Running batch jobs]]&lt;br /&gt;
* [[Examples of SLURM scripts]]&lt;br /&gt;
* [[Codes]]&lt;br /&gt;
* [[Adding a second IP Address]]&lt;br /&gt;
* [[Setting up One-Time-Password Access before travelling]]&lt;br /&gt;
* [[FAQs]]&lt;br /&gt;
&lt;br /&gt;
===Scientific Software===&lt;br /&gt;
&lt;br /&gt;
* [[Module system]]&lt;br /&gt;
* [[Running Mathematica on compute nodes]]&lt;br /&gt;
* [[GRChombo]]&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=221</id>
		<title>Tycho Technical Documentation</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=221"/>
		<updated>2025-03-06T08:10:47Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the technical documentation for the Tycho high performance computing resources. See [https://nbi.ku.dk/english/research_infrastructure/tycho-supercomputer/ the Infrastructure page] at NBI for a non-technical overview of the cluster.&lt;br /&gt;
&lt;br /&gt;
Tycho contains in total more than 10,000 CPU cores and 13 data-center-class GPUs. The cluster is complemented by a 1,300 TB data storage archive and a number of powerful analysis machines used as frontends for the cluster and for pre- and post-processing. Current peak performance is 443 TFlops from the CPUs and 93 TFlops from the GPUs.&lt;br /&gt;
&lt;br /&gt;
Tycho is hosted at the [http://www.dcsc.ku.dk/ High Performance Computing center] at the Faculty of SCIENCE, University of Copenhagen.&lt;br /&gt;
&lt;br /&gt;
===First steps===&lt;br /&gt;
Please visit the [[first steps]] page to get started.&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
* [[Acknowledging the use of Tycho in articles and presentations]] &lt;br /&gt;
* [[Getting Help]]&lt;br /&gt;
* [[Mattermost discussion forum]]&lt;br /&gt;
* [[Being a good HPC user]]&lt;br /&gt;
* [[Accessing Tycho]]&lt;br /&gt;
* [[Using Jupyter notebooks on the frontends]]&lt;br /&gt;
* [[Visual Studio Remote Development]]&lt;br /&gt;
* [[Virtual Desktop]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Erda]]&lt;br /&gt;
* [[Using GPUs]]&lt;br /&gt;
* [[Compilers]]&lt;br /&gt;
* [[Debugging and Profiling]]&lt;br /&gt;
* [[MPI Libraries]]&lt;br /&gt;
* [[Running batch jobs]]&lt;br /&gt;
* [[Examples of SLURM scripts]]&lt;br /&gt;
* [[Codes]]&lt;br /&gt;
* [[Adding a second IP Address]]&lt;br /&gt;
* [[Setting up One-Time-Password Access before travelling]]&lt;br /&gt;
* [[FAQs]]&lt;br /&gt;
&lt;br /&gt;
===Scientific Software===&lt;br /&gt;
&lt;br /&gt;
* [[Module system]]&lt;br /&gt;
* [[Running Mathematica on compute nodes]]&lt;br /&gt;
* [[GRChombo]]&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=220</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=220"/>
		<updated>2025-03-06T08:10:00Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tycho contains both frontends (&amp;quot;Analysis Hardware&amp;quot;) that are accessible from the outside and which can be used for interactive work, such as development and analysis, and compute nodes (&amp;quot;Cluster Hardware&amp;quot;) that are only accessible through the SLURM queue systems.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Analysis Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Name !! CPUs !! Memory !! Memory Bandwidth !! GPUs !! Scratch !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro01.hpc.ku.dk || 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz || 1 TB DDR4-3200 MHz - 21 GB / core || 410 GB/s, 8.5 GB/s/core || 4x A100 || 11 TB || L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro02.hpc.ku.dk || 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 7.2 GB/s/core || 3x A30 || 28 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro03.hpc.ku.dk || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 1,5 TB DDR5-4800 MHz - 12 GB / core || 922 GB/s, 7.2 GB/s/core || None || None || L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro04.hpc.ku.dk || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 9.6 GB/s/core || 4x RTX A6000 || 42 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, 2 x NDR 200 Gbit/s to storage&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Cluster Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Queue Name !! #Nodes !! CPUs !! Memory !! Memory Bandwidth !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro_XX || 16 || 2 x 10 cores Xeon E5-2680v2 @ 2.8GHz || 64 GB DDR3-1866 MHz - 3.2 GB / core || 120 GB/s, 6 GB/s/core || L2: 256 KB / core, L3: 25 MB / socket, AVX2, FDR 56 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_XX || 70 || 2 x 24 cores Xeon 6248R @ 3.0GHz || 192 GB DDR4-2933 MHz - 4 GB / core || 282 GB/s, 5.9 GB/s/core || L2: 1 MB / core, L3: 35.75 MB / socket, AVX-512, EDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro3_XX || 50 || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 768 GB DDR5-4800 MHz -  6 GB / core || 922 GB/s, 7.2 GB/s/core || L2: 1 MB / core, L3: 256 MB / socket, AVX-512, 2 x NDR 200 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro_gpu || 3 || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 16 GB / core || 461 GB/s, 9.6 GB/s/core || 2x H100 GPUs, L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_gpu || 1 || 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz || 1 TB DDR4-3200 MHz - 32 GB / core || 410 GB/s, 12.8 GB/s/core || 4x A100 GPUs, L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Servers===&lt;br /&gt;
The astro_XX nodes are based on a Dell C6220II shoe-box design with dual 10-core Ivy Bridge CPUs. The astro2_XX nodes are Huawei FusionServer Pro X6000 with dual 24-core Cascade Lake CPUs. The astro3_XX nodes are XFusion 1258H V7 servers with dual 64-core Genoa CPUs.&lt;br /&gt;
&lt;br /&gt;
===Global Storage===&lt;br /&gt;
* The home directory (/groups/astro) is a fully backed-up Lustre filesystem. We have a shared 6 TB quota and individual quotas of 50 GB per user.&lt;br /&gt;
* The scratch directory (/lustre/astro) is a ZFS-based, high-performance Lustre filesystem with dedicated hardware for our group. The total space (disregarding the transparent compression) is 1300 TB. The default quota on scratch is 5 TB; if you need more, please contact Troels Haugbølle with your supervisor / mentor / sponsor in CC and explain why and how much you need.&lt;br /&gt;
* The archive consists of two ZFS filesystems exported as NFS volumes from a storage server connected to the clusters with a 10 Gbit/s network connection. The archive systems can be found under /groups/astro/archive0 and /groups/astro/archive1. These filesystems are old and new users will not get directories on them; they will soon be decommissioned.&lt;br /&gt;
&lt;br /&gt;
===Scratch storage===&lt;br /&gt;
The scratch disks on astro01, astro02, and astro04 are RAID0 volumes consisting of a number of locally mounted NVMe disks. They have slightly higher bandwidth than the global filesystem, but can only be accessed from the specific machine. The scratch disks have several orders of magnitude higher IOPS than the global filesystem, so random-access I/O and operations that open and close many files will perform much faster on the scratch disks. _Space is limited_. Please clean up after use, and remember there are no backups or redundancy on the scratch disks.&lt;br /&gt;
&lt;br /&gt;
===Networks===&lt;br /&gt;
* External connection: The local HPC center is a Tier-1 CERN node and has a direct dual 400 Gbit/s connection to the Danish entry point of the European GEANT network in Lyngby. In practice we easily reach 100 MB/s when transferring larger files, and higher speeds are possible with parallel transfers.&lt;br /&gt;
* The backend storage servers for /groups/astro and /lustre/astro are all interconnected with 100 Gbit/s HDR Infiniband. This switch has uplinks to the different cluster networks.&lt;br /&gt;
* All frontend machines have Ethernet or Infiniband adapters to provide optimal I/O bandwidth.&lt;br /&gt;
* Astro_XX nodes have FDR (56 Gbit/s) Infiniband connected to a single switch.&lt;br /&gt;
* Astro2_XX nodes have EDR (100 Gbit/s) with a 2:1 blocking factor and 24 nodes per switch (3 uplink switches, 1 core switch).&lt;br /&gt;
* Astro3_XX nodes have two NDR-200 (200 Gbit/s) adapters with one adapter per CPU socket. They are connected directly to a single 128-port NDR-200 switch.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=219</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=219"/>
		<updated>2025-03-06T08:06:37Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tycho contains both frontends (&amp;quot;Analysis Hardware&amp;quot;) that are accessible from the outside and which can be used for interactive work, such as development and analysis, and compute nodes (&amp;quot;Cluster Hardware&amp;quot;) that are only accessible through the SLURM queue systems.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Analysis Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Name !! CPUs !! Memory !! Memory Bandwidth !! GPUs !! Scratch !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro01.hpc.ku.dk || 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz || 1 TB DDR4-3200 MHz - 21 GB / core || 410 GB/s, 8.5 GB/s/core || 4x A100 || 11 TB || L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro02.hpc.ku.dk || 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 7.2 GB/s/core || 3x A30 || 28 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro03.hpc.ku.dk || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 1,5 TB DDR5-4800 MHz - 12 GB / core || 922 GB/s, 7.2 GB/s/core || None || None || L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro04.hpc.ku.dk || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 9.6 GB/s/core || 4x RTX A6000 || 42 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, 2 x NDR 200 Gbit/s to storage&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Cluster Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Queue Name !! #Nodes !! CPUs !! Memory !! Memory Bandwidth !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro_XX || 16 || 2 x 10 cores Xeon E5-2680v2 @ 2.8GHz || 64 GB DDR3-1866 MHz - 3.2 GB / core || 120 GB/s, 6 GB/s/core || L2: 256 KB / core, L3: 25 MB / socket, AVX2, FDR 56 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_XX || 70 || 2 x 24 cores Xeon 6248R @ 3.0GHz || 192 GB DDR4-2933 MHz - 4 GB / core || 282 GB/s, 5.9 GB/s/core || L2: 1 MB / core, L3: 35.75 MB / socket, AVX-512, EDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro3_XX || 50 || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 768 GB DDR5-4800 MHz -  6 GB / core || 922 GB/s, 7.2 GB/s/core || L2: 1 MB / core, L3: 256 MB / socket, AVX-512, 2 x NDR 200 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro_gpu || 3 || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 16 GB / core || 461 GB/s, 9.6 GB/s/core || 2x H100 GPUs, L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_gpu || 1 || 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz || 1 TB DDR4-3200 MHz - 32 GB / core || 410 GB/s, 12.8 GB/s/core || 4x A100 GPUs, L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Servers===&lt;br /&gt;
The astro_XX nodes are based on a Dell C6220II shoe-box design with dual 10-core Ivy Bridge CPUs. The astro2_XX nodes are Huawei FusionServer Pro X6000 with dual 24-core Cascade Lake CPUs. The astro3_XX nodes are XFusion 1258H V7 servers with dual 64-core Genoa CPUs.&lt;br /&gt;
&lt;br /&gt;
===Global Storage===&lt;br /&gt;
* The home directory (/groups/astro) is a fully backed-up Lustre filesystem. We have a shared 6 TB quota and individual quotas of 50 GB per user.&lt;br /&gt;
* The scratch directory (/lustre/astro) is a ZFS-based, high-performance Lustre filesystem with dedicated hardware for our group. The total space (disregarding the transparent compression) is 1300 TB. The default quota on scratch is 5 TB; if you need more, please contact Troels Haugbølle with your supervisor / mentor / sponsor in CC and explain why and how much you need.&lt;br /&gt;
* The archive consists of two ZFS filesystems exported as NFS volumes from a storage server connected to the clusters with a 10 Gbit/s network connection. The archive systems can be found under /groups/astro/archive0 and /groups/astro/archive1. These filesystems are old and new users will not get directories on them; they will soon be decommissioned.&lt;br /&gt;
&lt;br /&gt;
===Scratch storage===&lt;br /&gt;
The scratch disks on astro01 and astro02 are RAID0 volumes consisting of a number of locally mounted NVMe disks. They have slightly higher bandwidth than the global filesystem, but can only be accessed from the specific machine. The scratch disks have several orders of magnitude higher IOPS than the global filesystem, so random-access I/O and operations that open and close many files will perform much faster on the scratch disks. _Space is limited_. Please clean up after use, and remember there are no backups or redundancy on the scratch disks.&lt;br /&gt;
&lt;br /&gt;
===Networks===&lt;br /&gt;
* External connection: The local HPC center is a Tier-1 CERN node and has a direct dual 400 Gbit/s connection to the Danish entry point of the European GEANT network in Lyngby. In practice we easily reach 100 MB/s when transferring larger files, and higher speeds are possible with parallel transfers.&lt;br /&gt;
* The backend storage servers for /groups/astro and /lustre/astro are all interconnected with 100 Gbit/s HDR Infiniband. This switch has uplinks to the different cluster networks.&lt;br /&gt;
* All frontend machines have Ethernet or Infiniband adapters to provide optimal I/O bandwidth.&lt;br /&gt;
* Astro_XX nodes have FDR (56 Gbit/s) Infiniband connected to a single switch.&lt;br /&gt;
* Astro2_XX nodes have EDR (100 Gbit/s) with a 2:1 blocking factor and 24 nodes per switch (3 uplink switches, 1 core switch).&lt;br /&gt;
* Astro3_XX nodes have two NDR-200 (200 Gbit/s) adapters with one adapter per CPU socket. They are connected directly to a single 128-port NDR-200 switch.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=218</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=218"/>
		<updated>2025-02-21T12:07:06Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tycho contains both frontends (&amp;quot;Analysis Hardware&amp;quot;) that are accessible from the outside and which can be used for interactive work, such as development and analysis, and compute nodes (&amp;quot;Cluster Hardware&amp;quot;) that are only accessible through the SLURM queue systems.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Analysis Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Name !! CPUs !! Memory !! Memory Bandwidth !! GPUs !! Scratch !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro01.hpc.ku.dk || 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz || 1 TB DDR4-3200 MHz - 21 GB / core || 410 GB/s, 8.5 GB/s/core || 4x A100 || 11 TB || L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro02.hpc.ku.dk || 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 7.2 GB/s/core || 3x A30 || 28 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro03.hpc.ku.dk || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 1,5 TB DDR5-4800 MHz - 12 GB / core || 922 GB/s, 7.2 GB/s/core || None || None || L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro04.hpc.ku.dk || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 9.6 GB/s/core || 4x RTX A6000 || 42 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, 2 x NDR 200 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro06.hpc.ku.dk || 2 x 14 cores Broadwell E5-2680 v4 @ 2.40GHz || 512 GB DDR4-2400 MHz - 20 GB / core || || || || L2: 512 KB / core, L3: 35 MB / socket, AVX2, QDR 40 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro07.hpc.ku.dk || 2 x 14 cores Broadwell E5-2680 v4 @ 2.40GHz || 512 GB DDR4-2400 MHz - 20 GB / core || || || || L2: 512 KB / core, L3: 35 MB / socket, AVX2, QDR 40 Gbit/s to storage&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Cluster Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Queue Name !! #Nodes !! CPUs !! Memory !! Memory Bandwidth !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro_XX || 16 || 2 x 10 cores Xeon E5-2680v2 @ 2.8GHz || 64 GB DDR3-1866 MHz - 3.2 GB / core || 120 GB/s, 6 GB/s/core || L2: 256 KB / core, L3: 25 MB / socket, AVX2, FDR 56 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_XX || 70 || 2 x 24 cores Xeon 6248R @ 3.0GHz || 192 GB DDR4-2933 MHz - 4 GB / core || 282 GB/s, 5.9 GB/s/core || L2: 1 MB / core, L3: 35.75 MB / socket, AVX-512, EDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro3_XX || 50 || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 768 GB DDR5-4800 MHz -  6 GB / core || 922 GB/s, 7.2 GB/s/core || L2: 1 MB / core, L3: 256 MB / socket, AVX-512, 2 x NDR 200 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro_gpu || 2 || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 16 GB / core || 461 GB/s, 9.6 GB/s/core || 2x H100 GPUs, L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_gpu || 1 || 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz || 1 TB DDR4-3200 MHz - 32 GB / core || 410 GB/s, 12.8 GB/s/core || 4x A100 GPUs, L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Servers===&lt;br /&gt;
The astro_XX nodes are based on a Dell C6220II shoe-box design with dual 10-core Ivy Bridge CPUs. The astro2_XX nodes are Huawei FusionServer Pro X6000 with dual 24-core Cascade Lake CPUs. The astro3_XX nodes are XFusion 1258H V7 servers with dual 64-core Genoa CPUs.&lt;br /&gt;
&lt;br /&gt;
===Global Storage===&lt;br /&gt;
* The home directory (/groups/astro) is a fully backed up Lustre filesystem. We have a shared 6TB quota and individual quotas of 50 GB per user.&lt;br /&gt;
* The scratch directory (/lustre/astro) is a ZFS based high performance Lustre filesystem with dedicated hardware for our group. The total space (disregarding the transparent compression) is 1300 TB. The default quota on scratch is 5 TB, but if you need more please contact Troels Haugbølle with your supervisor / mentor / sponsor in CC and explain why and how much you need. You can check your usage with the sketch after this list.&lt;br /&gt;
* The archive consists of two ZFS filesystems exported as NFS volumes from a storage server connected to the clusters with a 10 Gbit/s network connection. The two archives are mounted under /groups/astro/archive0 and /groups/astro/archive1. These filesystems are old, and new users will not get directories on them. They will soon be decommissioned.&lt;br /&gt;
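&lt;br /&gt;
A minimal sketch for checking the quotas above with the standard Lustre tools (the group name in the last command is an assumption):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# your usage and quota on the home filesystem&lt;br /&gt;
lfs quota -h -u $USER /groups/astro&lt;br /&gt;
# your usage and quota on scratch&lt;br /&gt;
lfs quota -h -u $USER /lustre/astro&lt;br /&gt;
# the shared group quota (group name assumed)&lt;br /&gt;
lfs quota -h -g astro /groups/astro&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;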
&lt;br /&gt;
===Scratch storage===&lt;br /&gt;
The scratch disks on astro01 and astro02 are RAID0 volumes consisting of a number of locally mounted NVMe disks. They have slightly higher bandwidth than the global filesystem, but can only be accessed from the specific machine. The scratch disks have several orders of magnitude higher IOPS than the global filesystem, so random-access I/O, or operations that open and close a lot of files, will perform much faster there. &#039;&#039;Space is limited&#039;&#039;. Please clean up after use, and remember there are no backups or redundancy on the scratch disks.&lt;br /&gt;
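&lt;br /&gt;
A sketch of the intended workflow (the /scratch mount point and file names are assumptions; check the actual path on the machine):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# stage a many-small-files dataset onto the local NVMe scratch (path assumed)&lt;br /&gt;
mkdir -p /scratch/$USER&lt;br /&gt;
cp -r /lustre/astro/$USER/mydataset /scratch/$USER/&lt;br /&gt;
# ... run the IOPS-heavy analysis against /scratch/$USER/mydataset ...&lt;br /&gt;
# copy results back and clean up; scratch has no backups or redundancy&lt;br /&gt;
cp -r /scratch/$USER/results /lustre/astro/$USER/&lt;br /&gt;
rm -rf /scratch/$USER/mydataset /scratch/$USER/results&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;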
&lt;br /&gt;
===Networks===&lt;br /&gt;
* External connection: The local HPC center is a Tier-1 CERN node and has a direct dual 400 Gbit/s connection to the Danish entrance point of the European GEANT network in Lyngby. In practice we easily reach 100 MB/s when transferring larger files, and higher speeds are possible with parallel transfers.&lt;br /&gt;
* The backend storage servers for /groups/astro and /lustre/astro are all interconnected with 100 Gbit/s HDR Infiniband. The storage switch has uplinks to the different cluster networks.&lt;br /&gt;
* All frontend machines have Ethernet or Infiniband adapters to provide optimal bandwidth to the storage systems.&lt;br /&gt;
* Astro_XX nodes have FDR (56 Gbit/s) Infiniband connected to a single switch.&lt;br /&gt;
* Astro2_XX nodes have EDR (100 Gbit/s) with a 2:1 blocking factor and 24 nodes per switch (3 uplink switches, 1 core switch).&lt;br /&gt;
* Astro3_XX nodes have two NDR-200 (200 Gbit/s) adapters with one adapter per CPU socket. They are connected directly to a single 128-port NDR-200 switch.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=217</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=217"/>
		<updated>2025-02-21T12:06:30Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tycho contains both frontends (&amp;quot;Analysis Hardware&amp;quot;) that are accessible from the outside and which can be used for interactive work, such as development and analysis, and compute nodes (&amp;quot;Cluster Hardware&amp;quot;) that are only accessible through the SLURM queue systems.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Analysis Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Name !! CPUs !! Memory !! Memory Bandwidth !! GPUs !! Scratch !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro01.hpc.ku.dk || 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz || 1 TB DDR4-3200 MHz - 21 GB / core || 410 GB/s, 8.5 GB/s/core || 4x A100 || 11 TB || L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro02.hpc.ku.dk || 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 7.2 GB/s/core || 3x A30 || 28 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro03.hpc.ku.dk || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 1,5 TB DDR5-4800 MHz - 12 GB / core || 922 GB/s, 7.2 GB/s/core || None || None || L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro04.hpc.ku.dk || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 9.6 GB/s/core || 4x RTX A6000 || 42 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, 2 x NDR 200 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro06.hpc.ku.dk || 2 x 14 cores Broadwell E5-2680 v4 @ 2.40GHz || 512 GB DDR4-2400 MHz - 20 GB / core || || || || L2: 512 KB / core, L3: 35 MB / socket, AVX2, QDR 40 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro07.hpc.ku.dk || 2 x 14 cores Broadwell E5-2680 v4 @ 2.40GHz || 512 GB DDR4-2400 MHz - 20 GB / core || || || || L2: 512 KB / core, L3: 35 MB / socket, AVX2, QDR 40 Gbit/s to storage&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Cluster Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Queue Name !! #Nodes !! CPUs !! Memory !! Memory Bandwidth !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro_XX || 16 || 2 x 10 cores Xeon E5-2680v2 @ 2.8GHz || 64 GB DDR3-1866 MHz - 3.2 GB / core || 120 GB/s, 6 GB/s/core || L2: 256 KB / core, L3: 25 MB / socket, AVX2, FDR 56 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_XX || 70 || 2 x 24 cores Xeon 6248R @ 3.0GHz || 192 GB DDR4-2933 MHz - 4 GB / core || 282 GB/s, 5.9 GB/s/core || L2: 1 MB / core, L3: 35.75 MB / socket, AVX-512, EDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro3_XX || 50 || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 768 GB DDR5-4800 MHz -  6 GB / core || 922 GB/s, 7.2 GB/s/core || L2: 1 MB / core, L3: 256 MB / socket, AVX-512, 2 x NDR 200 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro_gpu || 2 || 1 x 48 cores Epyc Genoa 9454P @ 2.75 GHz || 768 GB DDR5-4800 MHz - 16 GB / core || 461 GB/s, 9.6 GB/s/core || 2x H100 GPUs, L2: 1 MB / core, L3: 256 MB, AVX-512, HDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_gpu || 1 || 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz || 1 TB DDR4-3200 MHz - 32 GB / core || 410 GB/s, 12.8 GB/s/core || 4x A100 GPUs, L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Servers===&lt;br /&gt;
The astro_XX nodes are based on a Dell C6220II shoe-box design with dual 10-core Ivy Bridge CPUs. The astro2_XX nodes are Huawei FusionServer Pro X6000 servers with dual 24-core Cascade Lake CPUs. The astro3_XX nodes are XFusion servers model 1258H V7 with dual 64-core Genoa CPUs.&lt;br /&gt;
&lt;br /&gt;
===Global Storage===&lt;br /&gt;
* The home directory (/groups/astro) is a fully backed up Lustre filesystem. We have a shared 6TB quota and individual quotas of 50 GB per user.&lt;br /&gt;
* The scratch directory (/lustre/astro) is a ZFS based high performance Lustre filesystem with dedicated hardware for our group. The total space (disregarding the transparent compression) is 1300 TB. The default quota on scratch is 5 TB, but if you need more please contact Troels Haugbølle with your supervisor / mentor / sponsor in CC and explain why and how much you need.&lt;br /&gt;
* The archive consists of two ZFS filesystems exported as NFS volumes from a storage server connected to the clusters with a 10 Gbit/s network connection. The two archives are mounted under /groups/astro/archive0 and /groups/astro/archive1. These filesystems are old, and new users will not get directories on them. They will soon be decommissioned.&lt;br /&gt;
&lt;br /&gt;
===Scratch storage===&lt;br /&gt;
The scratch disks on astro01 and astro02 are RAID0 volumes consisting of a number of locally mounted NVMe disks. They have slightly higher bandwidth than the global filesystem, but can only be accessed from the specific machine. The scratch disks have several orders of magnitude higher IOPS than the global filesystem, so random-access I/O, or operations that open and close a lot of files, will perform much faster there. &#039;&#039;Space is limited&#039;&#039;. Please clean up after use, and remember there are no backups or redundancy on the scratch disks.&lt;br /&gt;
&lt;br /&gt;
===Networks===&lt;br /&gt;
* External connection: The local HPC center is a Tier-1 CERN node and has a direct dual 400 Gbit/s connection to the Danish entrance point of the European GEANT network in Lyngby. In practice we easily reach 100 MB/s when transferring larger files, and higher speeds are possible with parallel transfers.&lt;br /&gt;
* The backend storage servers for /groups/astro and /lustre/astro are all interconnected with 100 Gbit/s HDR Infiniband. The storage switch has uplinks to the different cluster networks.&lt;br /&gt;
* All frontend machines have Ethernet or Infiniband adapters to provide optimal bandwidth to the storage systems.&lt;br /&gt;
* Astro_XX nodes have FDR (56 Gbit/s) Infiniband connected to a single switch.&lt;br /&gt;
* Astro2_XX nodes have EDR (100 Gbit/s) with a 2:1 blocking factor and 24 nodes per switch (3 uplink switches, 1 core switch).&lt;br /&gt;
* Astro3_XX nodes have two NDR-200 (200 Gbit/s) adapters with one adapter per CPU socket. They are connected directly to a single 128-port NDR-200 switch.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=216</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=216"/>
		<updated>2024-03-01T07:51:42Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tycho contains both frontends (&amp;quot;Analysis Hardware&amp;quot;) that are accessible from the outside and which can be used for interactive work, such as development and analysis, and compute nodes (&amp;quot;Cluster Hardware&amp;quot;) that are only accessible through the SLURM queue systems.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Analysis Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Name !! CPUs !! Memory !! Memory Bandwidth !! GPUs !! Scratch !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro01.hpc.ku.dk || 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz || 1 TB DDR4-3200 MHz - 21 GB / core || 410 GB/s, 8.5 GB/s/core || 4x A100 || 11 TB || L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro02.hpc.ku.dk || 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 7.2 GB/s/core || 3x A30 || 28 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro06.hpc.ku.dk || 2 x 14 cores Broadwell E5-2680 v4 @ 2.40GHz || 512 GB DDR4-2400 MHz - 20 GB / core || || || || L2: 512 KB / core, L3: 35 MB / socket, AVX2, QDR 40 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro07.hpc.ku.dk || 2 x 14 cores Broadwell E5-2680 v4 @ 2.40GHz || 512 GB DDR4-2400 MHz - 20 GB / core || || || || L2: 512 KB / core, L3: 35 MB / socket, AVX2, QDR 40 Gbit/s to storage&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Cluster Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Queue Name !! #Nodes !! CPUs !! Memory !! Memory Bandwidth !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro_XX || 16 || 2 x 10 cores Xeon E5-2680v2 @ 2.8GHz || 64 GB DDR3-1866 MHz - 3.2 GB / core || 120 GB/s, 6 GB/s/core || L2: 256 KB / core, L3: 25 MB / socket, AVX2, FDR 56 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_XX || 70 || 2 x 24 cores Xeon 6248R @ 3.0GHz || 192 GB DDR4-2933 MHz - 4 GB / core || 282 GB/s, 5.9 GB/s/core || L2: 1 MB / core, L3: 35.75 MB / socket, AVX-512, EDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro3_XX || 50 || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 768 GB DDR5-4800 MHz -  6 GB / core || 922 GB/s, 7.2 GB/s/core || L2: 1 MB / core, L3: 256 MB / socket, AVX-512, 2 x NDR 200 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_gpu || 1 || 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz || 1 TB DDR4-3200 MHz - 32 GB / core || 410 GB/s, 12.8 GB/s/core || 4x A100 GPUs, L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Servers===&lt;br /&gt;
The astro_XX nodes are based on a Dell C6220II shoe-box design with dual 10-core Ivy Bridge CPUs. The astro2_XX nodes are Huawei FusionServer Pro X6000 servers with dual 24-core Cascade Lake CPUs. The astro3_XX nodes are XFusion servers model 1258H V7 with dual 64-core Genoa CPUs.&lt;br /&gt;
&lt;br /&gt;
===Global Storage===&lt;br /&gt;
* The home directory (/groups/astro) is a fully backed up Lustre filesystem. We have a shared 6TB quota and individual quotas of 50 GB per user.&lt;br /&gt;
* The scratch directory (/lustre/astro) is a ZFS based high performance Lustre filesystem with dedicated hardware for our group. The total space (disregarding the transparent compression) is 1300 TB. The default quota on scratch is 5 TB, but if you need more please contact Troels Haugbølle with your supervisor / mentor / sponsor in CC and explain why and how much you need.&lt;br /&gt;
* The archive consists of two ZFS filesystems exported as NFS volumes from a storage server connected to the clusters with a 10 Gbit/s network connection. The two archives are mounted under /groups/astro/archive0 and /groups/astro/archive1. These filesystems are old, and new users will not get directories on them. They will soon be decommissioned.&lt;br /&gt;
&lt;br /&gt;
===Scratch storage===&lt;br /&gt;
The scratch disks on astro01 and astro02 are RAID0 volumes consisting of a number of locally mounted NVMe disks. They have slightly higher bandwidth than the global filesystem, but can only be accessed from the specific machine. The scratch disks have several orders of magnitude higher IOPS than the global filesystem, so random-access I/O, or operations that open and close a lot of files, will perform much faster there. &#039;&#039;Space is limited&#039;&#039;. Please clean up after use, and remember there are no backups or redundancy on the scratch disks.&lt;br /&gt;
&lt;br /&gt;
===Networks===&lt;br /&gt;
* External connection: The local HPC center is a Tier-1 CERN node and has a direct dual 400 Gbit/s connection to the Danish entrance point of the European GEANT network in Lyngby. In practice we easily reach 100 MB/s when transferring larger files, and higher speeds are possible with parallel transfers.&lt;br /&gt;
* The backend storage servers for /groups/astro and /lustre/astro are all interconnected with 100 Gbit/s HDR Infiniband. The storage switch has uplinks to the different cluster networks.&lt;br /&gt;
* All frontend machines have Infiniband adapters to provide optimal bandwidth to the storage systems.&lt;br /&gt;
* Astro_XX nodes have FDR (56 Gbit/s) Infiniband connected to a single switch.&lt;br /&gt;
* Astro2_XX nodes have EDR (100 Gbit/s) with a 2:1 blocking factor and 24 nodes per switch (3 uplink switches, 1 core switch).&lt;br /&gt;
* Astro3_XX nodes have two NDR-200 (200 Gbit/s) adapters with one adapter per CPU socket. They are connected directly to a single 128-port NDR-200 switch.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=215</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=215"/>
		<updated>2024-03-01T07:49:07Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tycho contains both frontends (&amp;quot;Analysis Hardware&amp;quot;) that are accessible from the outside and which can be used for interactive work, such as development and analysis, and compute nodes (&amp;quot;Cluster Hardware&amp;quot;) that are only accessible through the SLURM queue systems.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Analysis Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Name !! CPUs !! Memory !! Memory Bandwidth !! GPUs !! Scratch !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro01.hpc.ku.dk || 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz || 1 TB DDR4-3200 MHz - 21 GB / core || 410 GB/s, 8.5 GB/s/core || 4x A100 || 11 TB || L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro02.hpc.ku.dk || 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 7.2 GB/s/core || 3x A30 || 28 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro06.hpc.ku.dk || 2 x 14 cores Broadwell E5-2680 v4 @ 2.40GHz || 512 GB DDR4-2400 MHz - 20 GB / core || || || || L2: 512 KB / core, L3: 35 MB / socket, AVX2, QDR 40 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro07.hpc.ku.dk || 2 x 14 cores Broadwell E5-2680 v4 @ 2.40GHz || 512 GB DDR4-2400 MHz - 20 GB / core || || || || L2: 512 KB / core, L3: 35 MB / socket, AVX2, QDR 40 Gbit/s to storage&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Cluster Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Queue Name !! #Nodes !! CPUs !! Memory !! Memory Bandwidth !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro_XX || 16 || 2 x 10 cores Xeon E5-2680v2 @ 2.8GHz || 64 GB DDR3-1866 MHz - 3.2 GB / core || 120 GB/s, 6 GB/s/core || L2: 256 KB / core, L3: 25 MB / socket, AVX2, FDR 56 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_XX || 70 || 2 x 24 cores Xeon 6248R @ 3.0GHz || 192 GB DDR4-2933 MHz - 4 GB / core || 282 GB/s, 5.9 GB/s/core || L2: 1 MB / core, L3: 35.75 MB / socket, AVX-512, EDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro3_XX || 50 || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 768 GB DDR5-4800 MHz -  6 GB / core || 922 GB/s, 7.2 GB/s/core || L2: 1 MB / core, L3: 256 MB / socket, AVX-512, 2 x NDR 200 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_gpu || 1 || 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz || 1 TB DDR4-3200 MHz - 32 GB / core || 410 GB/s, 12.8 GB/s/core || 4x A100 GPUs, L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Servers===&lt;br /&gt;
The astro_XX nodes are based on a Dell C6220II shoe-box design with dual 10-core Ivy Bridge CPUs. The astro2_XX nodes are Huawei FusionServer Pro X6000 servers with dual 24-core Cascade Lake CPUs. The astro3_XX nodes are XFusion servers model 1258H V7 with dual 64-core Genoa CPUs.&lt;br /&gt;
&lt;br /&gt;
===Global Storage===&lt;br /&gt;
* The home directory (/groups/astro) is a fully backed up Lustre filesystem. We have a shared 6TB quota and individual quotas of 50 GB per user.&lt;br /&gt;
* The scratch directory (/lustre/astro) is a ZFS based high performance Lustre filesystem with dedicated hardware for our group. The total space (disregarding the transparent compression) is 1300 TB. The default quota on scratch is 5 TB, but if you need more please contact Troels Haugbølle with your supervisor / mentor / sponsor in CC and explain why and how much you need.&lt;br /&gt;
* The archive consists of two ZFS filesystems exported as NFS volumes from a storage server connected to the clusters with a 10 Gbit/s network connection. The two archives are mounted under /groups/astro/archive0 and /groups/astro/archive1. These filesystems are old, and new users will not get directories on them. They will soon be decommissioned.&lt;br /&gt;
&lt;br /&gt;
===Scratch storage===&lt;br /&gt;
The scratch disks on astro01 and astro02 are RAID0 volumes consisting of a number of locally mounted NVMe disks. They have slightly higher bandwidth than the global filesystem, but can only be accessed from the specific machine. The scratch disks have several orders of magnitude higher IOPS than the global filesystem, so random-access I/O, or operations that open and close a lot of files, will perform much faster there. &#039;&#039;Space is limited&#039;&#039;. Please clean up after use, and remember there are no backups or redundancy on the scratch disks.&lt;br /&gt;
&lt;br /&gt;
===Networks===&lt;br /&gt;
* External connection: The local HPC center is a Tier-1 CERN node and has a direct dual 400 Gbit/s connection to the Danish entrance point of the European GEANT network in Lyngby. In practice we easily reach 100 MB/s when transferring larger files, and higher speeds are possible with parallel transfers.&lt;br /&gt;
* The storage servers are all interconnected with 100 Gbit/s HDR Infiniband. The storage switch has uplinks to the different cluster networks.&lt;br /&gt;
* All frontend machines have Infiniband adapters to provide optimal bandwidth to the storage systems.&lt;br /&gt;
* Astro_XX nodes have FDR (56 Gbit/s) Infiniband connected to a single switch.&lt;br /&gt;
* Astro2_XX nodes have EDR (100 Gbit/s) with a 2:1 blocking factor and 24 nodes per switch (3 uplink switches, 1 core switch).&lt;br /&gt;
* Astro3_XX nodes have two NDR-200 (200 Gbit/s) adapters with one adapter per CPU socket. They are connected directly to a single 128-port NDR-200 switch.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=214</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=214"/>
		<updated>2024-03-01T07:43:51Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tycho contains both frontends (&amp;quot;Analysis Hardware&amp;quot;) that are accessible from the outside and which can be used for interactive work, such as development and analysis, and compute nodes (&amp;quot;Cluster Hardware&amp;quot;) that are only accessible through the SLURM queue systems.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Analysis Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Name !! CPUs !! Memory !! Memory Bandwidth !! GPUs !! Scratch !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro01.hpc.ku.dk || 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz || 1 TB DDR4-3200 MHz - 21 GB / core || 410 GB/s, 8.5 GB/s/core || 4x A100 || 11 TB || L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro02.hpc.ku.dk || 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 7.2 GB/s/core || 3x A30 || 28 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro06.hpc.ku.dk || 2 x 14 cores Broadwell E5-2680 v4 @ 2.40GHz || 512 GB DDR4-2400 MHz - 20 GB / core || || || || L2: 512 KB / core, L3: 35 MB / socket, AVX2, QDR 40 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro07.hpc.ku.dk || 2 x 14 cores Broadwell E5-2680 v4 @ 2.40GHz || 512 GB DDR4-2400 MHz - 20 GB / core || || || || L2: 512 KB / core, L3: 35 MB / socket, AVX2, QDR 40 Gbit/s to storage&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Cluster Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Queue Name !! #Nodes !! CPUs !! Memory !! Memory Bandwidth !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro_XX || 16 || 2 x 10 cores Xeon E5-2680v2 @ 2.8GHz || 64 GB DDR3-1866 MHz - 3.2 GB / core || 120 GB/s, 6 GB/s/core || L2: 256 KB / core, L3: 25 MB / socket, AVX2, FDR 56 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_XX || 70 || 2 x 24 cores Xeon 6248R @ 3.0GHz || 192 GB DDR4-2933 MHz - 4 GB / core || 282 GB/s, 5.9 GB/s/core || L2: 1 MB / core, L3: 35.75 MB / socket, AVX-512, EDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro3_XX || 50 || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 768 GB DDR5-4800 MHz -  6 GB / core || 922 GB/s, 7.2 GB/s/core || L2: 1 MB / core, L3: 256 MB / socket, AVX-512, 2 x NDR 200 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_gpu || 1 || 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz || 1 TB DDR4-3200 MHz - 32 GB / core || 410 GB/s, 12.8 GB/s/core || 4x A100 GPUs, L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Servers===&lt;br /&gt;
The astro_XX nodes are based on a Dell C6220II shoe-box design with dual 10-core Ivy Bridge CPUs. The astro2_XX nodes are Huawei FusionServer Pro X6000 servers with dual 24-core Cascade Lake CPUs. The astro3_XX nodes are XFusion servers model 1258H V7 with dual 64-core Genoa CPUs.&lt;br /&gt;
&lt;br /&gt;
===Global Storage===&lt;br /&gt;
* The home directory (/groups/astro) is a fully backed up Lustre filesystem. We have a shared 6TB quota and individual quotas of 50 GB per user.&lt;br /&gt;
* The scratch directory (/lustre/astro) is a ZFS based high performance Lustre filesystem with dedicated hardware for our group. No quotas are enforced and the total space (disregarding the transparent compression) is 1300 TB.&lt;br /&gt;
* The archive consists of two ZFS filesystems exported as NFS volumes from a storage server connected to the clusters with a 10 Gbit/s network connection. The two archives are mounted under /groups/astro/archive0 and /groups/astro/archive1. These filesystems are old, and new users will not get directories on them. They will soon be decommissioned.&lt;br /&gt;
&lt;br /&gt;
===Scratch storage===&lt;br /&gt;
The scratch disks on astro01 and astro02 are RAID0 volumes consisting of a number of locally mounted NVMe disks. They have slightly higher bandwidth than the global filesystem, but can only be accessed from the specific machine. The scratch disks have several orders of magnitude higher IOPS than the global filesystem, so random-access I/O, or operations that open and close a lot of files, will perform much faster there. &#039;&#039;Space is limited&#039;&#039;. Please clean up after use, and remember there are no backups or redundancy on the scratch disks.&lt;br /&gt;
&lt;br /&gt;
===Networks===&lt;br /&gt;
* External connection: The local HPC center is a Tier-1 CERN node and has a direct dual 400 Gbit/s connection to the Danish entrance point of the European GEANT network in Lyngby. In practice we easily reach 100 MB/s when transferring larger files, and higher speeds are possible with parallel transfers.&lt;br /&gt;
* The storage servers are all interconnected with 100 Gbit/s HDR Infiniband. The storage switch has uplinks to the different cluster networks.&lt;br /&gt;
* All frontend machines have Infiniband adapters to provide optimal bandwidth to the storage systems.&lt;br /&gt;
* Astro_XX nodes have FDR (56 Gbit/s) Infiniband connected to a single switch.&lt;br /&gt;
* Astro2_XX nodes have EDR (100 Gbit/s) with a 2:1 blocking factor and 24 nodes per switch (3 uplink switches, 1 core switch).&lt;br /&gt;
* Astro3_XX nodes have two NDR-200 (200 Gbit/s) adapters with one adapter per CPU socket. They are connected directly to a single 128-port NDR-200 switch.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Mattermost_discussion_forum&amp;diff=213</id>
		<title>Mattermost discussion forum</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Mattermost_discussion_forum&amp;diff=213"/>
		<updated>2024-01-30T14:29:26Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We have a dedicated user forum for the cluster where you can contact other users and get more immediate community feedback. This is also a good place to discuss usage or installation of scientific software.&lt;br /&gt;
&lt;br /&gt;
To get an account, please use the sign-up link from the welcome e-mail you received for the HPC system. Sign-up is restricted based on the e-mail address you submit, so please use your work e-mail. If you are an external user of Tycho (i.e. not from NBI or Globe), please contact Troels Haugbølle (haugboel@nbi.ku.dk).&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=212</id>
		<title>First steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=212"/>
		<updated>2024-01-15T08:14:49Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To start using Tycho, please follow these steps&lt;br /&gt;
&lt;br /&gt;
* Get a user account by contacting a sponsor and signing up at &lt;br /&gt;
&lt;br /&gt;
https://hpc.ku.dk/account.html&lt;br /&gt;
&lt;br /&gt;
You have to select &amp;quot;Astro&amp;quot; as your group. &#039;&#039;&#039;Remember to sign the rules-of-conduct-form&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* If you have received the old welcome e-mail without a link to this wiki, the information in it is not correct. You should never log in to fend0X.hpc.ku.dk, but instead use the Tycho-specific frontends called astro0X.hpc.ku.dk (see [[Hardware]] for an up-to-date list of available frontends / Analysis hardware).&lt;br /&gt;
&lt;br /&gt;
* Set up an SSH key pair for secure passwordless login (see [[Accessing Tycho]])&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;code&amp;gt;module load astro&amp;lt;/code&amp;gt; to your &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; file to have access to all the custom installed software (see the sketch at the end of this list). You can read more about the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
* Consider changing &amp;lt;code&amp;gt;umask 077&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;umask 027&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; to allow collaborators and/or your supervisor read access to your files when logged in to the cluster.&lt;br /&gt;
&lt;br /&gt;
* If you are going to travel, but want to continue working on Tycho while you are travelling, you need to set up a Dynamic Firewall &#039;&#039;&#039;before&#039;&#039;&#039; travelling. You can do so by following the detailed instructions [https://hpc.ku.dk/documentation/otp.html here]&lt;br /&gt;
&lt;br /&gt;
* You can set up e.g. Visual Studio Code for remote development for transparent editing of files on the cluster. See [[Visual Studio Remote Development]]&lt;br /&gt;
&lt;br /&gt;
* Every user has a 50 GB quota for the home folder (&amp;lt;code&amp;gt;/groups/astro/yourusername&amp;lt;/code&amp;gt;). Whenever you are going to be working with large amounts of data, consider using the &amp;lt;code&amp;gt;/lustre/astro/yourusername&amp;lt;/code&amp;gt; directory. This scratch folder resides on a ZFS based high performance Lustre filesystem with dedicated hardware for our group. No quotas are enforced and the total space (disregarding the transparent compression) is 1300 TB.&lt;br /&gt;
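&lt;br /&gt;
For reference, a minimal sketch of the &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; additions suggested above:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# give access to the custom installed astro software stack&lt;br /&gt;
module load astro&lt;br /&gt;
# let collaborators read your files (instead of the default umask 077)&lt;br /&gt;
umask 027&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>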
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=180</id>
		<title>First steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=180"/>
		<updated>2023-11-15T14:37:15Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To start using Tycho, please follow these steps&lt;br /&gt;
&lt;br /&gt;
* Get a user account by contacting a sponsor and signing up at &lt;br /&gt;
&lt;br /&gt;
https://hpc.ku.dk/account.html&lt;br /&gt;
&lt;br /&gt;
You have to select &amp;quot;Astro&amp;quot; as your group. &#039;&#039;&#039;Remember to sign the rules-of-conduct-form&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;BE AWARE THAT THE WELCOME E-MAIL IS NOT CORRECT&#039;&#039;&#039;. You should never log in to fend0X.hpc.ku.dk, but instead use the Tycho-specific frontends called astro0X.hpc.ku.dk (see [[Hardware]] for an up-to-date list of available frontends / Analysis hardware).&lt;br /&gt;
&lt;br /&gt;
* Set up an SSH key pair for secure passwordless login (see [[Accessing Tycho]])&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;code&amp;gt;module load astro&amp;lt;/code&amp;gt; to your &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; file to have access to all the custom installed software. You can read more about the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
* Consider changing &amp;lt;code&amp;gt;umask 077&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;umask 027&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; to allow collaborators and/or your supervisor read access to your files when logged in to the cluster.&lt;br /&gt;
&lt;br /&gt;
* If you are going to travel, but want to continue working on Tycho while you are travelling, you need to set up a Dynamic Firewall &#039;&#039;&#039;before&#039;&#039;&#039; travelling. You can do so by following the detailed instructions [https://hpc.ku.dk/documentation/otp.html here]&lt;br /&gt;
&lt;br /&gt;
* You can set up e.g. Visual Studio Code for remote development for transparent editing of files on the cluster. See [[Visual Studio Remote Development]]&lt;br /&gt;
&lt;br /&gt;
* Every user has a 50 GB quota for the home folder (&amp;lt;code&amp;gt;/groups/astro/yourusername&amp;lt;/code&amp;gt;). Whenever you are going to be working with large amounts of data, consider using the &amp;lt;code&amp;gt;/lustre/astro/yourusername&amp;lt;/code&amp;gt; directory. This scratch folder resides on a ZFS based high performance Lustre filesystem with dedicated hardware for our group. No quotas are enforced and the total space (disregarding the transparent compression) is 1300 TB.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=FAQs&amp;diff=177</id>
		<title>FAQs</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=FAQs&amp;diff=177"/>
		<updated>2023-11-15T14:28:34Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* How do I access the HPC Cluster?&lt;br /&gt;
Enter the following in the command terminal&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh username@astroXX.hpc.ku.dk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can remotely access the frontend machines astro06-09 via SSH to submit jobs or to analyze data. It may be a good idea to check the load after logging in, using &amp;quot;top&amp;quot;, and to choose a different frontend if the CPU or memory use is already high (use &amp;quot;&amp;lt;&amp;quot; or &amp;quot;&amp;gt;&amp;quot; in top to temporarily change the sorting from CPU to memory / virtual memory).&lt;br /&gt;
For example, you can log in to the astro06 machine with &amp;lt;code&amp;gt;ssh username@astro06.hpc.ku.dk&amp;lt;/code&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Compilers&amp;diff=174</id>
		<title>Compilers</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Compilers&amp;diff=174"/>
		<updated>2023-11-15T14:26:03Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We recommend using the Intel ifort, icc, and icpc (Fortran, C, C++) compilers for production, because of their superior speed optimization. Compiling for MPI is done with mpiifort, mpiicc, and mpiicpc (note the double ii), which are wrapper scripts around the compilers that link in the correct MPI libraries.&lt;br /&gt;
&lt;br /&gt;
However, using gfortran for development and testing can be a very good idea, especially when problems arise that are difficult to locate. Compiling the same code with different compilers gives a better chance of discovering problems, or of excluding a compiler bug as the cause of a problem under investigation. Gfortran&#039;s MPI wrapper is called mpif90 or mpifort (note the single i!).&lt;br /&gt;
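&lt;br /&gt;
As a quick reference, a sketch of the corresponding invocations (file names and flags are placeholders, not recommendations):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Intel compilers for production builds&lt;br /&gt;
ifort -O2 -o sim sim.f90&lt;br /&gt;
mpiifort -O2 -o sim_mpi sim_mpi.f90   # note the double ii&lt;br /&gt;
# GNU compiler for development and testing, with runtime checks enabled&lt;br /&gt;
gfortran -g -fcheck=all -o sim_dbg sim.f90&lt;br /&gt;
mpif90 -g -fcheck=all -o sim_mpi_dbg sim_mpi.f90   # note the single i&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>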
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=170</id>
		<title>First steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=170"/>
		<updated>2023-11-15T14:24:09Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To start using Tycho, please follow these steps&lt;br /&gt;
&lt;br /&gt;
* Get a user account by contacting a sponsor and signing up at &lt;br /&gt;
&lt;br /&gt;
https://hpc.ku.dk/account.html&lt;br /&gt;
&lt;br /&gt;
You have to select &amp;quot;Astro&amp;quot; as your group. &#039;&#039;&#039;Remember to sign the rules-of-conduct-form&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;BE AWARE THAT THE WELCOME E-MAIL IS NOT CORRECT&#039;&#039;&#039;. You should never log in to fend0X.hpc.ku.dk, but instead use the Tycho-specific frontends called astro0X.hpc.ku.dk (see [[Hardware]] for an up-to-date list of available frontends / Analysis hardware).&lt;br /&gt;
&lt;br /&gt;
* Set up an SSH key pair for secure passwordless login (see [[Accessing Tycho]])&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;code&amp;gt;module load astro&amp;lt;/code&amp;gt; to your &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; file to have access to all the custom installed software. You can read more about the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
* Consider changing &amp;lt;code&amp;gt;umask 077&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;umask 027&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; to allow collaborators and/or your supervisor read access to your files when logged in to the cluster.&lt;br /&gt;
&lt;br /&gt;
* If you are going to travel, but want to continue working on Tycho while you are travelling, you need to set up a Dynamic Firewall &#039;&#039;&#039;before&#039;&#039;&#039; travelling. You can do so by following the detailed instructions [https://hpc.ku.dk/documentation/otp.html here]&lt;br /&gt;
&lt;br /&gt;
* Consider setting up Visual Studio Code for remote development for transparent editing of files on the cluster. See [[Visual Studio Remote Development]]&lt;br /&gt;
&lt;br /&gt;
* Whenever you are going to be working with large amounts of data, consider using the &amp;lt;code&amp;gt;/lustre/astro/yourusername&amp;lt;/code&amp;gt; directory. The scratch directory is a ZFS based high performance Lustre filesystem with dedicated hardware for our group. No quotas are enforced and the total space (disregarding the transparent compression) is 1300 TB.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Acknowledging_the_use_of_Tycho_in_articles_and_presentations&amp;diff=163</id>
		<title>Acknowledging the use of Tycho in articles and presentations</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Acknowledging_the_use_of_Tycho_in_articles_and_presentations&amp;diff=163"/>
		<updated>2023-11-15T14:14:36Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: Created page with &amp;quot;Tycho is a group effort and we rely entirely on external funding to maintain the hardware. Users are strongly encouraged to acknowledge Tycho in all publications that describe results obtained using Tycho resources.  Users shall use the following wording in such acknowledgement in all papers and other publications. Having a standard formulation helps when assessing the impact of Tycho in e.g. future grant applications, and benefit all of us.  ====Acknowledging Tycho in a...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tycho is a group effort and we rely entirely on external funding to maintain the hardware. Users are strongly encouraged to acknowledge Tycho in all publications that describe results obtained using Tycho resources.&lt;br /&gt;
&lt;br /&gt;
Users shall use the following wording for such acknowledgements in all papers and other publications. Having a standard formulation helps when assessing the impact of Tycho in e.g. future grant applications, and benefits all of us.&lt;br /&gt;
&lt;br /&gt;
====Acknowledging Tycho in a publication====&lt;br /&gt;
The Tycho supercomputer hosted at the SCIENCE HPC center at the University of Copenhagen was used for supporting this work.&lt;br /&gt;
&lt;br /&gt;
====Referencing Tycho in a presentation====&lt;br /&gt;
You may reference Tycho in your presentations. You may use the Tycho logo and/or select images from the [[media gallery]].&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=139</id>
		<title>Tycho Technical Documentation</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=139"/>
		<updated>2023-11-15T14:00:45Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the technical documentation for the Tycho high performance computing resources. See [https://nbi.ku.dk/english/research_infrastructure/tycho-supercomputer/ the Infrastructure page] at NBI for a non-technical overview of the cluster.&lt;br /&gt;
&lt;br /&gt;
Tycho contains in total more than 10,000 CPU cores and 13 data center class GPUs. The cluster is complemented by a 1,300 TB data storage archive and a number of powerful analysis machines used as frontends for the cluster and for pre- and post-processing. Current peak performance is 443 TFlops from the CPUs and 93 TFlops from the GPUs.&lt;br /&gt;
&lt;br /&gt;
Tycho is hosted at the [http://www.dcsc.ku.dk/ High Performance Computing center] at the Faculty of SCIENCE, University of Copenhagen.&lt;br /&gt;
&lt;br /&gt;
===First steps===&lt;br /&gt;
Please visit the [[first steps]] page to get started&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
* [[Acknowledging the use of Tycho in articles and presentations]] &lt;br /&gt;
* [[Getting Help]]&lt;br /&gt;
* [[Mattermost discussion forum]]&lt;br /&gt;
* [[Being a good HPC user]]&lt;br /&gt;
* [[Accessing Tycho]]&lt;br /&gt;
* [[Using Jupyter notebooks on the frontends]]&lt;br /&gt;
* [[Visual Studio Remote Development]]&lt;br /&gt;
* [[Virtual Desktop]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Erda]]&lt;br /&gt;
* [[Using GPUs]]&lt;br /&gt;
* [[Compilers]]&lt;br /&gt;
* [[Debugging and Profiling]]&lt;br /&gt;
* [[MPI Libraries]]&lt;br /&gt;
* [[Examples of SLURM scripts]]&lt;br /&gt;
* [[Codes]]&lt;br /&gt;
* [[Adding a second IP Address]]&lt;br /&gt;
* [[Setting up One-Time-Password Access before travelling]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Scientific Software===&lt;br /&gt;
&lt;br /&gt;
* [[Running Mathematica on compute nodes]]&lt;br /&gt;
* [[GRChombo]]&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=135</id>
		<title>First steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=135"/>
		<updated>2023-11-15T13:58:14Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To start using Tycho, please follow these steps&lt;br /&gt;
&lt;br /&gt;
* Get a user account by contacting a sponsor and signing up at &lt;br /&gt;
&lt;br /&gt;
https://hpc.ku.dk/account.html&lt;br /&gt;
&lt;br /&gt;
You have to select &amp;quot;Astro&amp;quot; as your group. &#039;&#039;&#039;Remember to sign the rules-of-conduct-form&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;BE AWARE THAT THE WELCOME E-MAIL IS NOT CORRECT&#039;&#039;&#039;. You should never log in to fend0X.hpc.ku.dk, but instead use the Tycho-specific frontends called astro0X.hpc.ku.dk (see [[Hardware]] for an up-to-date list of available frontends / Analysis hardware).&lt;br /&gt;
&lt;br /&gt;
* Set up an SSH key pair for secure passwordless login (see [[Accessing Tycho]])&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;code&amp;gt;module load astro&amp;lt;/code&amp;gt; to your &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; file to have access to all the custom installed software.&lt;br /&gt;
* Consider changing &amp;lt;code&amp;gt;umask 077&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;umask 027&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; to allow collaborators and/or your supervisor read access to your files when logged in to the cluster.&lt;br /&gt;
&lt;br /&gt;
* If you are going to travel, but want to continue working on Tycho while you are travelling, you need to set up a Dynamic Firewall &#039;&#039;&#039;before&#039;&#039;&#039; travelling. You can do so by following the detailed instructions [https://hpc.ku.dk/documentation/otp.html here]&lt;br /&gt;
&lt;br /&gt;
* Consider setting up Visual Studio Code for remote development for transparent editing of files on the cluster. See [[Visual Studio Remote Development]]&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=128</id>
		<title>First steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=128"/>
		<updated>2023-11-15T13:54:07Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To start using Tycho, please follow these steps&lt;br /&gt;
&lt;br /&gt;
* Get a user account by contacting a sponsor and signing up at &lt;br /&gt;
&lt;br /&gt;
https://hpc.ku.dk/account.html&lt;br /&gt;
&lt;br /&gt;
You have to select &amp;quot;Astro&amp;quot; as your group. &#039;&#039;&#039;Remember to sign the rules-of-conduct-form&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;BE AWARE THAT THE WELCOME E-MAIL IS NOT CORRECT&#039;&#039;&#039;. You should never log in to fend0X.hpc.ku.dk, but instead use the Tycho-specific frontends called astro0X.hpc.ku.dk (see [[Hardware]] for an up-to-date list of available frontends / Analysis hardware).&lt;br /&gt;
&lt;br /&gt;
* Set up an SSH key pair for secure passwordless login (see [[Accessing Tycho]])&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;code&amp;gt;module load astro&amp;lt;/code&amp;gt; to your &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; file to have access to all the custom installed software.&lt;br /&gt;
* Consider changing &amp;lt;code&amp;gt;umask 077&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;umask 027&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; to allow collaborators and/or your supervisor read access to your files when logged in to the cluster.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=122</id>
		<title>Tycho Technical Documentation</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=122"/>
		<updated>2023-11-15T13:51:09Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the technical documentation for the Tycho high performance computing resources. See [https://nbi.ku.dk/english/research_infrastructure/tycho-supercomputer/ the Infrastructure page] at NBI for a non-technical overview of the cluster.&lt;br /&gt;
&lt;br /&gt;
Tycho contains in total more than 10,000 CPU cores and 13 data center class GPUs. The cluster is complemented by a 1,300 TB data storage archive and a number of powerful analysis machines used as frontends for the cluster and for pre- and post-processing. Current peak performance is 443 TFlops from the CPUs and 93 TFlops from the GPUs.&lt;br /&gt;
&lt;br /&gt;
Tycho is hosted at the [http://www.dcsc.ku.dk/ High Performance Computing center] at the Faculty of SCIENCE, University of Copenhagen.&lt;br /&gt;
&lt;br /&gt;
===First steps===&lt;br /&gt;
Please visit the [[first steps]] page to get started&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
* [[Getting Help]]&lt;br /&gt;
* [[Mattermost discussion forum]]&lt;br /&gt;
* [[Being a good HPC user]]&lt;br /&gt;
* [[Accessing Tycho]]&lt;br /&gt;
* [[Using Jupyter notebooks on the frontends]]&lt;br /&gt;
* [[Visual Studio Remote Development]]&lt;br /&gt;
* [[Virtual Desktop]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Erda]]&lt;br /&gt;
* [[Using GPUs]]&lt;br /&gt;
* [[Compilers]]&lt;br /&gt;
* [[Debugging and Profiling]]&lt;br /&gt;
* [[MPI Libraries]]&lt;br /&gt;
* [[Examples of SLURM scripts]]&lt;br /&gt;
* [[Codes]]&lt;br /&gt;
* [[Adding a second IP Address]]&lt;br /&gt;
* [[Setting up One-Time-Password Access before travelling]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Scientific Software===&lt;br /&gt;
&lt;br /&gt;
* [[Running Mathematica on compute nodes]]&lt;br /&gt;
* [[GRChombo]]&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=119</id>
		<title>First steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=119"/>
		<updated>2023-11-15T13:49:58Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To start using Tycho, please follow these steps&lt;br /&gt;
&lt;br /&gt;
* Get a user account by contacting a sponsor and signing up at &lt;br /&gt;
&lt;br /&gt;
https://hpc.ku.dk/account.html&lt;br /&gt;
&lt;br /&gt;
You have to select &amp;quot;Astro&amp;quot; as your group&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;BE AWARE THAT THE WELCOME E-MAIL IS NOT CORRECT&#039;&#039;&#039;. You should never log in to fend0X.hpc.ku.dk, but instead use the Tycho-specific frontends called astro0X.hpc.ku.dk (see [[Hardware]] for an up-to-date list of available frontends / Analysis hardware).&lt;br /&gt;
&lt;br /&gt;
* Set up an SSH key pair for secure passwordless login (see [[Accessing Tycho]])&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;code&amp;gt;module load astro&amp;lt;/code&amp;gt; to your &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; file to have access to all the custom installed software.&lt;br /&gt;
* Consider changing &amp;lt;code&amp;gt;umask 077&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;umask 027&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; to allow collaborators and/or your supervisor read access to your files when logged in to the cluster.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=116</id>
		<title>First steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=116"/>
		<updated>2023-11-15T13:49:30Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To start using Tycho, please follow these steps&lt;br /&gt;
&lt;br /&gt;
* Get a user account by contacting a sponsor and signing up at &lt;br /&gt;
&lt;br /&gt;
https://hpc.ku.dk/account.html&lt;br /&gt;
&lt;br /&gt;
You have to select &amp;quot;Astro&amp;quot; as your group&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;BE AWARE THAT THE WELCOME E-MAIL IS NOT CORRECT&#039;&#039;&#039;. You should never log in to fend0X.hpc.ku.dk, but instead use the Tycho-specific frontends called astro0X.hpc.ku.dk (see [[Hardware]] for a list of available frontends).&lt;br /&gt;
&lt;br /&gt;
* Set up an SSH key pair for secure passwordless login (see [[Accessing Tycho]])&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;code&amp;gt;module load astro&amp;lt;/code&amp;gt; to your &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; file to have access to all the custom installed software.&lt;br /&gt;
* Consider changing &amp;lt;code&amp;gt;umask 077&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;umask 027&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; to allow collaborators and/or your supervisor read access to your files when logged in to the cluster.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=115</id>
		<title>Hardware</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Hardware&amp;diff=115"/>
		<updated>2023-11-15T13:48:24Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tycho contains both frontends (&amp;quot;Analysis Hardware&amp;quot;) that are accessible from the outside and which can be used for interactive work, such as development and analysis, and compute nodes (&amp;quot;Cluster Hardware&amp;quot;) that are only accessible through the SLURM queue systems.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Analysis Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Name !! CPUs !! Memory !! Memory Bandwidth !! GPUs !! Scratch !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro01.hpc.ku.dk || 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz || 1 TB DDR4-3200 MHz - 21 GB / core || 410 GB/s, 8.5 GB/s/core || 4x A100 || 11 TB || L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro02.hpc.ku.dk || 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 461 GB/s, 7.2 GB/s/core || 3x A30 || 28 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro06.hpc.ku.dk || 2 x 14 cores Broadwell E5-2680 v4 @ 2.40GHz || 512 GB DDR4-2400 MHz - 20 GB / core || || || || L2: 512 KB / core, L3: 35 MB / socket, AVX2, QDR 40 Gbit/s to storage&lt;br /&gt;
|-&lt;br /&gt;
| astro07.hpc.ku.dk || 2 x 14 cores Broadwell E5-2680 v4 @ 2.40GHz || 512 GB DDR4-2400 MHz - 20 GB / core || || || || L2: 512 KB / core, L3: 35 MB / socket, AVX2, QDR 40 Gbit/s to storage&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ Cluster Hardware&lt;br /&gt;
|-&lt;br /&gt;
! Queue Name !! #Nodes !! CPUs !! Memory !! Memory Bandwidth !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| astro_XX || 16 || 2 x 10 cores Xeon E5-2680v2 @ 2.8GHz || 64 GB DDR3-1866 MHz - 3.2 GB / core || 120 GB/s, 6 GB/s/core || L2: 256 KB / core, L3: 25 MB / socket, AVX2, FDR 56 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_XX || 70 || 2 x 24 cores Xeon 6248R @ 3.0GHz || 192 GB DDR4-2933 MHz - 4 GB / core || 282 GB/s, 5.9 GB/s/core || L2: 1 MB / core, L3: 35.75 MB / socket, AVX-512, EDR 100 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro3_XX || 50 || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 768 GB DDR5-4800 MHz -  6 GB / core || 922 GB/s, 7.2 GB/s/core || L2: 1 MB / core, L3: 256 MB / socket, AVX-512, 2 x NDR 200 Gbit/s&lt;br /&gt;
|-&lt;br /&gt;
| astro2_gpu || 1 || 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz || 1 TB DDR4-3200 MHz - 32 GB / core || 410 GB/s, 12.8 GB/s/core || 4x A100 GPUs, L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Servers===&lt;br /&gt;
The astro_XX nodes are based on a Dell C6220II shoe-box design with dual 10-core Ivy Bridge CPUs. The astro2_XX nodes are Huawei FusionServer Pro X6000 servers with dual 24-core Cascade Lake CPUs. The astro3_XX nodes are XFusion servers model 1258H V7 with dual 64-core Genoa CPUs.&lt;br /&gt;
&lt;br /&gt;
===Global Storage===&lt;br /&gt;
* The home directory (/groups/astro) is a fully backed-up Lustre filesystem. We have a shared 6 TB quota and individual quotas of 50 GB per user.&lt;br /&gt;
* The scratch directory (/lustre/astro) is a ZFS-based, high-performance Lustre filesystem with dedicated hardware for our group. No quotas are enforced, and the total space (disregarding the transparent compression) is 1,300 TB.&lt;br /&gt;
* The archive consists of two ZFS filesystems exported as NFS volumes from a storage server connected to the clusters with a 10 Gbit/s network connection. The archive systems are found under /groups/astro/archive0 and /groups/astro/archive1. These filesystems are old, new users will not get directories on them, and they will soon be decommissioned.&lt;br /&gt;
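&lt;br /&gt;
Quota usage can be checked with the standard Lustre client tools; a minimal sketch (the group name astro below is an assumption, the mount point is the one listed above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Show your personal usage and quota on the home filesystem&lt;br /&gt;
lfs quota -h -u $USER /groups/astro&lt;br /&gt;
# Show the shared group quota (group name assumed to be astro)&lt;br /&gt;
lfs quota -h -g astro /groups/astro&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;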
&lt;br /&gt;
===Scratch storage===&lt;br /&gt;
The scratch disks on astro01 and astro02 are RAID0 volumes consisting of a number of locally mounted NVMe disks. They have slightly higher bandwidth than the global filesystem, but can only be accessed from the specific machine. The scratch disks have several orders of magnitude higher IOPS than the global filesystem, so random-access I/O, or operations that open and close many files, will run much faster on them. &#039;&#039;Space is limited&#039;&#039;. Please clean up after use, and remember there are no backups or redundancy on the scratch disks.&lt;br /&gt;
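&lt;br /&gt;
A typical pattern is to stage IOPS-heavy work on the local scratch disk and clean up afterwards; a minimal sketch (the scratch mount point below is a placeholder, check the actual path on each machine):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Hypothetical scratch location - verify the real mount point on astro01/astro02&lt;br /&gt;
SCRATCH=/scratch/$USER/myrun&lt;br /&gt;
mkdir -p $SCRATCH&lt;br /&gt;
# Stage a many-small-files dataset from the global Lustre filesystem&lt;br /&gt;
cp -r /lustre/astro/$USER/dataset $SCRATCH/&lt;br /&gt;
# ... run the random-access analysis against $SCRATCH ...&lt;br /&gt;
# Space is limited and there is no redundancy: clean up when done&lt;br /&gt;
rm -rf $SCRATCH&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;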
&lt;br /&gt;
===Networks===&lt;br /&gt;
* External connection: The local HPC center is a Tier-1 CERN node and has a direct dual 400 Gbit/s connection to the Danish entry point of the European GEANT network in Lyngby. In practice we easily reach 100 MB/s when transferring larger files, and higher speeds are possible with parallel transfers (see the sketch after this list).&lt;br /&gt;
* The storage servers are all interconnected with 100 Gbit/s HDR InfiniBand. This switch has uplinks to the different cluster networks.&lt;br /&gt;
* All frontend machines have InfiniBand adapters to provide optimal I/O bandwidth to the storage.&lt;br /&gt;
* Astro_XX nodes have FDR (56 Gbit/s) InfiniBand connected to a single switch.&lt;br /&gt;
* Astro2_XX nodes have EDR (100 Gbit/s) with a 2:1 blocking factor and 24 nodes per switch (3 uplink switches, 1 core switch).&lt;br /&gt;
* Astro3_XX nodes have two NDR-200 (200 Gbit/s) adapters, one per CPU socket, connected directly to a single 128-port NDR-200 switch.&lt;br /&gt;
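&lt;br /&gt;
As an illustration of parallel transfers, several rsync streams can run side by side; a minimal sketch (the host and directory names are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# One rsync stream typically reaches about 100 MB/s; run four concurrently with GNU xargs&lt;br /&gt;
printf &#039;%s\n&#039; run1 run2 run3 run4 | xargs -P 4 -I{} rsync -a user@remote.host.dk:/data/{}/ /lustre/astro/$USER/{}/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>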
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=112</id>
		<title>First steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=112"/>
		<updated>2023-11-15T13:47:28Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To start using Tycho, please follow these steps:&lt;br /&gt;
&lt;br /&gt;
* Get a user account by contacting a sponsor and signing up at&lt;br /&gt;
&lt;br /&gt;
[https://hpc.ku.dk/account.html https://hpc.ku.dk/account.html]&lt;br /&gt;
&lt;br /&gt;
You have to select &amp;quot;Astro&amp;quot; as your group.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;BE AWARE THAT THE WELCOME E-MAIL IS NOT CORRECT&#039;&#039;&#039;. Most importantly, you should never log in to fend0X.hpc.ku.dk; instead use the Tycho-specific frontends called astro0X.hpc.ku.dk (see [[Hardware]] for a list of available frontends).&lt;br /&gt;
&lt;br /&gt;
* Set up an SSH key pair for secure passwordless login (see [[Accessing Tycho]])&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;code&amp;gt;module load astro&amp;lt;/code&amp;gt; to your &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; file to get access to all the custom-installed software.&lt;br /&gt;
* Consider changing &amp;lt;code&amp;gt;umask 077&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;umask 027&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; to allow collaborators and/or your supervisor read access to your files on the cluster; a minimal sketch of the relevant &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt; lines follows.&lt;br /&gt;
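&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch of $HOME/.bashrc additions - merge with your existing file&lt;br /&gt;
# Load the group software stack on login&lt;br /&gt;
module load astro&lt;br /&gt;
# New files: full access for you, read access for your group, none for others&lt;br /&gt;
umask 027&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>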
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=107</id>
		<title>First steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=107"/>
		<updated>2023-11-15T13:44:45Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To start using Tycho, please follow these steps:&lt;br /&gt;
&lt;br /&gt;
* Get a user account by contacting a sponsor and signing up at&lt;br /&gt;
&lt;br /&gt;
[https://hpc.ku.dk/account.html https://hpc.ku.dk/account.html]&lt;br /&gt;
&lt;br /&gt;
You have to select &amp;quot;Astro&amp;quot; as your group.&lt;br /&gt;
&lt;br /&gt;
* Set up an SSH key pair for secure passwordless login (see [[Accessing Tycho]])&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;code&amp;gt;module load astro&amp;lt;/code&amp;gt; to your &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; file to get access to all the custom-installed software.&lt;br /&gt;
* Consider changing &amp;lt;code&amp;gt;umask 077&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;umask 027&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; to allow collaborators and/or your supervisor read access to your files when logged in to the cluster.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=101</id>
		<title>First steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=101"/>
		<updated>2023-11-15T13:41:41Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To start using Tycho, please follow these steps:&lt;br /&gt;
&lt;br /&gt;
* Get a user account by contacting a sponsor and signing up at&lt;br /&gt;
&lt;br /&gt;
[https://hpc.ku.dk/account.html https://hpc.ku.dk/account.html]&lt;br /&gt;
&lt;br /&gt;
You have to select &amp;quot;Astro&amp;quot; as your group.&lt;br /&gt;
&lt;br /&gt;
* Set up an SSH key pair for secure passwordless login (see [[Accessing Tycho]])&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;code&amp;gt;module load astro&amp;lt;/code&amp;gt; to your &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; file to get access to all the custom-installed software.&lt;br /&gt;
* Consider changing &amp;lt;code&amp;gt;umask 077&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;umask 027&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;$HOME/.bashrc&amp;lt;/code&amp;gt; to allow collaborators and/or your supervisor read access to your files when logged in to the cluster.&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=90</id>
		<title>Tycho Technical Documentation</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=90"/>
		<updated>2023-11-15T13:36:37Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the technical documentation for the Tycho high performance computing resources. See [https://nbi.ku.dk/english/research_infrastructure/tycho-supercomputer/ the Infrastructure page] at NBI for a non-technical overview of the cluster.&lt;br /&gt;
&lt;br /&gt;
Tycho contains in total more than 10,000 CPU cores and 13 data-center-class GPUs. The cluster is complemented by a 1,300 TB data storage archive and a number of powerful analysis machines used as frontends for the cluster and for pre- and post-processing. Current peak performance is 443 TFlops from the CPUs and 93 TFlops from the GPUs.&lt;br /&gt;
&lt;br /&gt;
Tycho is hosted at the [http://www.dcsc.ku.dk/ High Performance Computing center] at the Faculty of SCIENCE, University of Copenhagen.&lt;br /&gt;
&lt;br /&gt;
Please visit the [[first steps]] page to get started.&lt;br /&gt;
&lt;br /&gt;
Overview:&lt;br /&gt;
* [[Getting Help]]&lt;br /&gt;
* [[Mattermost discussion forum]]&lt;br /&gt;
* [[Being a good HPC user]]&lt;br /&gt;
* [[Accessing Tycho]]&lt;br /&gt;
* [[Using Jupyter notebooks on the frontends]]&lt;br /&gt;
* [[Visual Studio Remote Development]]&lt;br /&gt;
* [[Virtual Desktop]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Using GPUs]]&lt;br /&gt;
* [[Compilers]]&lt;br /&gt;
* [[Debugging and Profiling]]&lt;br /&gt;
* [[MPI Libraries]]&lt;br /&gt;
* [[Examples of SLURM scripts]]&lt;br /&gt;
* [[Codes]]&lt;br /&gt;
* Software:&lt;br /&gt;
** [[Running Mathematica on compute nodes]]&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=89</id>
		<title>Tycho Technical Documentation</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=89"/>
		<updated>2023-11-15T13:36:06Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the technical documentation for the Tycho high performance computing resources. See [https://nbi.ku.dk/english/research_infrastructure/tycho-supercomputer/ the Infrastructure page] at NBI for a non-technical overview of the cluster.&lt;br /&gt;
&lt;br /&gt;
Tycho contains in total more than 10,000 CPU cores and 13 data-center-class GPUs. The cluster is complemented by a 1,300 TB data storage archive and a number of powerful analysis machines used as frontends for the cluster and for pre- and post-processing. Current peak performance is 443 TFlops from the CPUs and 93 TFlops from the GPUs.&lt;br /&gt;
&lt;br /&gt;
Tycho is hosted at the [http://www.dcsc.ku.dk/ High Performance Computing center] at the Faculty of SCIENCE, University of Copenhagen.&lt;br /&gt;
&lt;br /&gt;
Please visit the [[first steps]] page to get started.&lt;br /&gt;
&lt;br /&gt;
Overview:&lt;br /&gt;
* [[Getting Help]]&lt;br /&gt;
* [[Mattermost discussion forum]]&lt;br /&gt;
* [[Being a good HPC user]]&lt;br /&gt;
* [[Accessing Tycho]]&lt;br /&gt;
* [[Using Jupyter notebooks on the frontends]]&lt;br /&gt;
* [[Visual Studio Remote Development]]&lt;br /&gt;
* [[Virtual Desktop]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Using GPUs]]&lt;br /&gt;
* [[Compilers]]&lt;br /&gt;
* Software:&lt;br /&gt;
** [[Running Mathematica on compute nodes]]&lt;br /&gt;
* [[Debugging and Profiling]]&lt;br /&gt;
* [[MPI Libraries]]&lt;br /&gt;
* [[Examples of SLURM scripts]]&lt;br /&gt;
* [[Codes]]&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=87</id>
		<title>Tycho Technical Documentation</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Tycho_Technical_Documentation&amp;diff=87"/>
		<updated>2023-11-15T13:35:44Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the technical documentation for the Tycho high performance computing resources. See [https://nbi.ku.dk/english/research_infrastructure/tycho-supercomputer/ the Infrastructure page] at NBI for a non-technical overview of the cluster.&lt;br /&gt;
&lt;br /&gt;
Tycho contains in total more than 10,000 CPU cores and 13 data-center-class GPUs. The cluster is complemented by a 1,300 TB data storage archive and a number of powerful analysis machines used as frontends for the cluster and for pre- and post-processing. Current peak performance is 443 TFlops from the CPUs and 93 TFlops from the GPUs.&lt;br /&gt;
&lt;br /&gt;
Tycho is hosted at the [http://www.dcsc.ku.dk/ High Performance Computing center] at the Faculty of SCIENCE, University of Copenhagen.&lt;br /&gt;
&lt;br /&gt;
Please visit the [[first steps]] page to get started.&lt;br /&gt;
&lt;br /&gt;
Overview:&lt;br /&gt;
* [[Getting Help]]&lt;br /&gt;
* [[Mattermost discussion forum]]&lt;br /&gt;
* [[Being a good HPC user]]&lt;br /&gt;
* [[Accessing Tycho]]&lt;br /&gt;
* [[Using Jupyter notebooks on the frontends]]&lt;br /&gt;
* [[Visual Studio Remote Development]]&lt;br /&gt;
* [[Virtual Desktop]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Using GPUs]]&lt;br /&gt;
* [[Compilers]]&lt;br /&gt;
* Software:&lt;br /&gt;
** [[Running Mathematica on compute nodes]]&lt;br /&gt;
* [[Debugging and Profiling]]&lt;br /&gt;
* [[MPI Libraries]]&lt;br /&gt;
* [[Examples of SLURM scripts]]&lt;br /&gt;
* [[Codes]]&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=79</id>
		<title>First steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=79"/>
		<updated>2023-11-15T13:20:57Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To start using Tycho, please follow these steps:&lt;br /&gt;
&lt;br /&gt;
* Get a user account by contacting a sponsor and signing up at&lt;br /&gt;
&lt;br /&gt;
[https://hpc.ku.dk/account.html https://hpc.ku.dk/account.html]&lt;br /&gt;
&lt;br /&gt;
You have to select &amp;quot;Astro&amp;quot; as your group.&lt;br /&gt;
&lt;br /&gt;
* Set up an SSH key pair for secure passwordless login (see [[Accessing Tycho]])&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;code&amp;gt;module load astro&amp;lt;/code&amp;gt; to your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=76</id>
		<title>First steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=76"/>
		<updated>2023-11-15T13:18:37Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To start using Tycho, please follow these steps:&lt;br /&gt;
&lt;br /&gt;
* Get a user account by contacting a sponsor and signing up at https://hpc.ku.dk/account.html. You have to select &amp;quot;Astro&amp;quot; as your group.&lt;br /&gt;
&lt;br /&gt;
* Set up an SSH key pair for secure passwordless login (see [[Accessing Tycho]])&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;code&amp;gt;module load astro&amp;lt;/code&amp;gt; to your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=71</id>
		<title>First steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=71"/>
		<updated>2023-11-15T13:07:35Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To start using Tycho, please follow these steps:&lt;br /&gt;
* Get a user account by contacting a sponsor and signing up at https://hpc.ku.dk/account.html. You have to select &amp;quot;Astro&amp;quot; as your group.&lt;br /&gt;
&lt;br /&gt;
* Set up an SSH key pair for secure passwordless login (see [[Accessing Tycho]])&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;code&amp;gt;module load astro&amp;lt;/code&amp;gt; to your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=69</id>
		<title>First steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=69"/>
		<updated>2023-11-15T13:06:58Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To start using Tycho, please follow these steps:&lt;br /&gt;
# Get a user account by contacting a sponsor and signing up at https://hpc.ku.dk/account.html. You have to select &amp;quot;Astro&amp;quot; as your group.&lt;br /&gt;
&lt;br /&gt;
# Set up an SSH key pair for secure passwordless login (see [[Accessing Tycho]])&lt;br /&gt;
&lt;br /&gt;
# Add &amp;lt;code&amp;gt;module load astro&amp;lt;/code&amp;gt; to your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=68</id>
		<title>First steps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=First_steps&amp;diff=68"/>
		<updated>2023-11-15T13:06:17Z</updated>

		<summary type="html">&lt;p&gt;Haugboel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To start using Tycho, please follow these steps:&lt;br /&gt;
* Get a user account by contacting a sponsor and signing up at https://hpc.ku.dk/account.html. You have to select &amp;quot;Astro&amp;quot; as your group.&lt;br /&gt;
&lt;br /&gt;
* Set up an SSH key pair for secure passwordless login (see [[Accessing Tycho]])&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;code&amp;gt;module load astro&amp;lt;/code&amp;gt; to your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Haugboel</name></author>
	</entry>
</feed>