Hardware
Tycho consists of frontends ("Analysis Hardware"), which are accessible from the outside and can be used for interactive work such as development and analysis, and compute nodes ("Cluster Hardware"), which are only reachable through the SLURM queue system.
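For example, interactive access to a frontend is a plain SSH login (a sketch with a placeholder username; the hostnames are listed in the table below):

```bash
ssh abc123@astro01.hpc.ku.dk   # placeholder username; any frontend hostname works
```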
Analysis Hardware

Name | CPUs | Memory | Memory Bandwidth | GPUs | Scratch | Notes |
---|---|---|---|---|---|---|
astro01.hpc.ku.dk | 2 x 24 cores Epyc Rome 7F72 @ 3.2 GHz | 1 TB DDR4-3200 MHz - 21 GB / core | 410 GB/s, 8.5 GB/s/core | 4x A100 | 11 TB | L2: 512 KB / core, L3: 192 MB / socket, AVX2, EDR 100 Gbit/s to storage |
astro02.hpc.ku.dk | 1 x 64 cores Epyc Genoa 9554P @ 3.1 GHz | 768 GB DDR5-4800 MHz - 12 GB / core | 461 GB/s, 7.2 GB/s/core | 3x A30 | 28 TB | L2: 1 MB / core, L3: 256 MB, AVX-512, EDR 100 Gbit/s to storage |
astro06.hpc.ku.dk | 2 x 14 cores Broadwell E5-2680 v4 @ 2.40 GHz | 512 GB DDR4-2400 MHz - 18 GB / core | - | - | - | L2: 512 KB / core, L3: 35 MB / socket, AVX2, QDR 40 Gbit/s to storage |
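Once logged in, the advertised hardware can be checked directly; a minimal sketch (the scratch mount point is an assumption, verify with `df`):

```bash
lscpu | grep -E 'Model name|Socket|Core'   # confirm CPU model and core layout
nvidia-smi                                 # list the GPUs (e.g. the A100s on astro01)
df -h /scratch                             # assumed mount point of the local NVMe scratch
```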
Cluster Hardware

Queue Name | #Nodes | CPUs | Memory | Memory Bandwidth | Notes |
---|---|---|---|---|---|
astro_XX | 16 | 2 x 10 cores Xeon E5-2680 v2 @ 2.8 GHz | 64 GB DDR3-1866 MHz - 3.2 GB / core | 120 GB/s, 6 GB/s/core | L2: 256 KB / core, L3: 25 MB / socket, AVX, FDR 56 Gbit/s |
astro2_XX | 70 | 2 x 24 cores Xeon 6248R @ 3.0 GHz | 192 GB DDR4-2933 MHz - 4 GB / core | 282 GB/s, 5.9 GB/s/core | L2: 1 MB / core, L3: 35.75 MB / socket, AVX-512, EDR 100 Gbit/s |
astro3_XX | 50 | 2 x 64 cores Epyc Genoa 9554 @ 3.1 GHz | 768 GB DDR5-4800 MHz - 6 GB / core | 922 GB/s, 7.2 GB/s/core | L2: 1 MB / core, L3: 256 MB / socket, AVX-512, 2 x NDR 200 Gbit/s |
astro2_gpu | 1 | 2 x 16 cores Epyc Rome 7302 @ 3.0 GHz | 1 TB DDR4-3200 MHz - 32 GB / core | 410 GB/s, 12.8 GB/s/core | 4x A100 GPUs, L2: 512 KB / core, L3: 128 MB / socket, AVX2, EDR 100 Gbit/s |
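As a minimal submission sketch, assuming a queue from the table above (the queue suffix, time limit, and program name are placeholders; `sinfo` lists the actual queue names):

```bash
#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --partition=astro2_XX    # placeholder: substitute an actual queue name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48     # astro2 nodes have 2 x 24 cores
#SBATCH --time=01:00:00
# On astro2_gpu, a GPU is requested with a line like `#SBATCH --gres=gpu:1`
# (the exact gres name is site-specific).

srun ./my_program                # placeholder executable
```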
Notes
The scratch disks on astro01 and astro02 are RAID0 volumes consisting of a number of locally mounted NVMe disks. They have slightly higher bandwidth than the global filesystem, but can only be accessed from the specific machine. They also have several orders of magnitude higher IOPS than the global filesystem, so random-access I/O and operations that open and close many files will perform much faster on scratch. _Space is limited_: please clean up after use, and remember that there are no backups or redundancy on the scratch disks.
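A typical staging pattern might look like the sketch below; `/scratch` and all file names are assumptions, so substitute the actual mount point and your own paths:

```bash
# Stage the working set onto node-local scratch, where small-file I/O is fast
mkdir -p /scratch/$USER
cp -r $HOME/my_dataset /scratch/$USER/
cd /scratch/$USER/my_dataset

./run_analysis                   # placeholder for a random-access or many-files workload

# Copy results back to the global filesystem, then clean up:
# there are no backups or redundancy on scratch
cp -r results $HOME/
rm -rf /scratch/$USER/my_dataset
```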