First steps
To start using Tycho, please follow these steps:
* Get a user account by contacting a sponsor and signing up at https://hpc.ku.dk/account.html. You have to select "Astro" as your group. '''Remember to sign the rules-of-conduct form.'''
* If you have received the old welcome e-mail without a link to this wiki, the info in it is not correct. You should never log in to fend0X.hpc.ku.dk; instead use the Tycho-specific frontends called astro0X.hpc.ku.dk (see [[Hardware]] for an up-to-date list of available frontends / analysis hardware).
* Set up an SSH key pair for secure passwordless login (see [[Accessing Tycho]]; a generic sketch is shown below this list).
* Add <code>module load astro</code> to your <code>$HOME/.bashrc</code> file to have access to all the custom-installed software. You can read more about the <code>module</code> command. A minimal <code>.bashrc</code> example is sketched below this list.
* Consider changing <code>umask 077</code> to <code>umask 027</code> in <code>$HOME/.bashrc</code> to allow collaborators and/or your supervisor read access to your files when logged in to the cluster (also included in the <code>.bashrc</code> sketch below).
* If you are going to travel but want to continue working on Tycho while travelling, you need to set up a Dynamic Firewall '''before''' travelling. You can do so by following the detailed instructions [https://hpc.ku.dk/documentation/otp.html here].
* You can set up e.g. Visual Studio Code for remote development, which gives you transparent editing of files on the cluster. See [[Visual Studio Remote Development]].
* Every user has a 50 GB quota for the home folder (<code>/groups/astro/yourusername</code>). Whenever you are going to work with large amounts of data, consider using the <code>/lustre/astro/yourusername</code> directory instead (see the sketch below this list). This scratch folder resides on a ZFS-based, high-performance Lustre filesystem with dedicated hardware for our group. No quotas are enforced, and the total space (disregarding the transparent compression) is 1300 TB.
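The SSH key setup can look roughly as follows on a Linux or macOS machine. This is a generic OpenSSH sketch, not the site-specific procedure: <code>astro0X.hpc.ku.dk</code> and <code>your-ku-username</code> are placeholders, and depending on how the HPC centre manages public keys, <code>ssh-copy-id</code> may not be the right way to install them, so follow [[Accessing Tycho]] if the two disagree.

<pre>
# Generate a key pair on your own machine (accept the default path, pick a passphrase)
ssh-keygen -t ed25519 -C "your-ku-username"

# Install the public key on a Tycho frontend (replace X with an actual frontend number)
ssh-copy-id your-ku-username@astro0X.hpc.ku.dk

# Optional: add a host alias so that "ssh tycho" just works
cat >> ~/.ssh/config <<'EOF'
Host tycho
    HostName astro0X.hpc.ku.dk
    User your-ku-username
    IdentityFile ~/.ssh/id_ed25519
EOF
</pre>

The <code>Host</code> entry in <code>~/.ssh/config</code> is also what the Visual Studio Code Remote-SSH extension picks up, so setting it up once helps with the remote-development step as well.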
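Putting the two <code>.bashrc</code> items above together, a minimal excerpt could look like this (the rest of your <code>.bashrc</code> stays as it is; only the two lines shown matter here):

<pre>
# ~/.bashrc (excerpt)

# Load the group's custom software stack on every login
module load astro

# Give group members read access to your files (instead of the default umask 077)
umask 027
</pre>

After logging in again or running <code>source ~/.bashrc</code>, <code>module list</code> shows what is currently loaded and <code>module avail</code> lists everything that can be loaded.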
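To get a feel for the home quota and the scratch area, the commands below are a sketch using standard Linux tools; they assume your scratch directory follows the <code>/lustre/astro/yourusername</code> naming and that <code>$USER</code> matches your username on the cluster. The HPC centre may also provide a dedicated quota-reporting command; ask the admins if <code>du</code> on your home folder is too slow.

<pre>
# How much of the 50 GB home quota is in use?
du -sh /groups/astro/$USER

# Create your personal scratch directory on the Lustre filesystem (if it does not exist yet)
mkdir -p /lustre/astro/$USER

# Work with large data sets from scratch rather than from the home folder
cd /lustre/astro/$USER

# Overall free space on the scratch filesystem
df -h /lustre/astro
</pre>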