Hardware

Tycho comprises frontends ("Analysis Hardware"), which are accessible from the outside and can be used for interactive work such as development and analysis, and compute nodes ("Cluster Hardware"), which are only reachable through the SLURM queue system.
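
A typical session therefore starts by logging in to one of the frontends listed below. A minimal sketch; <code>&lt;username&gt;</code> is a placeholder for your own account name:

<syntaxhighlight lang="bash">
# Log in to a frontend for interactive work (development, analysis)
ssh <username>@astro01.hpc.ku.dk
</syntaxhighlight>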

{| class="wikitable" style="margin:auto"
|+ Analysis Hardware
|-
! Name !! CPUs !! RAM !! GPUs !! Scratch !! Notes
|-
| astro01.hpc.ku.dk || 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz || 1 TB DDR4-3200 MHz - 21 GB / core || 4x A100 || 11 TB || L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage
|-
| astro02.hpc.ku.dk || 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz || 768 GB DDR5-4800 MHz - 12 GB / core || 3x A30 || 28 TB || L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage
|-
| astro06.hpc.ku.dk || 2 x 14 cores Broadwell E5-2680 v4 @ 2.40 GHz || 512 GB DDR4-2400 MHz - 20 GB / core || || || L2: 512 KB / core, L3: 35 MB, AVX2, QDR 40 Gbit/s to storage
|}
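
Once logged in, the specifications above can be checked with standard Linux and NVIDIA tools; a minimal sketch, assuming a typical HPC node setup:

<syntaxhighlight lang="bash">
# Inspect CPU model, core counts, cache sizes, and instruction-set flags (e.g. avx512f)
lscpu

# Show total and available RAM
free -h

# List the GPUs and their current utilization (frontends with A100/A30 cards)
nvidia-smi
</syntaxhighlight>
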
{| class="wikitable" style="margin:auto"
|+ Cluster Hardware
|-
! Queue Name !! #Nodes !! CPUs !! RAM !! GPUs !! Notes
|-
| astro_XX || 16 || 2 x 10 cores Xeon E5-2680v2 @ 2.8 GHz || 64 GB DDR3-1866 MHz - 3.2 GB / core || || L2: 256 KB / core, L3: 25 MB / socket, AVX, FDR 56 Gbit/s
|-
| astro2_XX || 70 || 2 x 24 cores Xeon 6248R @ 3.0 GHz || 192 GB DDR4-2933 MHz - 4 GB / core || || L2: 1 MB / core, L3: 35.75 MB, AVX-512, EDR 100 Gbit/s to storage
|-
| astro3_XX || 50 || 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz || 768 GB DDR5-4800 MHz - 6 GB / core || || L2: 1 MB / core, L3: 256 MB, AVX-512, 2 x NDR 200 Gbit/s
|-
| astro2_gpu || 1 || 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz || 1 TB DDR4-3200 MHz - 32 GB / core || 4x A100 || L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s
|}
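
Compute nodes are reached by submitting a batch script to one of the queues above. The sketch below uses SLURM's standard <code>sbatch</code> directives; the partition name is illustrative, with XX standing in for the actual queue number as in the table:

<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --partition=astro2_XX   # substitute the actual queue number for XX
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48    # astro2 nodes have 2 x 24 cores
#SBATCH --time=01:00:00
#SBATCH --job-name=example

# my_program is a placeholder for your own executable
srun ./my_program
</syntaxhighlight>

Submit with <code>sbatch job.sh</code>, check the queue with <code>squeue -u $USER</code>, and use <code>sinfo</code> to list the available partitions and their node states.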