First steps
Revision as of 14:24, 15 November 2023
To start using Tycho, please follow these steps:
- Get a user account by contacting a sponsor and signing up at https://hpc.ku.dk/account.html. You have to select "Astro" as your group. Remember to sign the rules-of-conduct form.
- BE AWARE THAT THE WELCOME E-MAIL IS NOT CORRECT. You should never log in to fend0X.hpc.ku.dk; instead use the Tycho-specific frontends called astro0X.hpc.ku.dk (see Hardware for an up-to-date list of available frontends / Analysis hardware).
- Set up an SSH key pair for secure passwordless login (see Accessing Tycho)
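A minimal sketch of the key-pair generation, run on your local machine (the scratch key path, the comment string, and the `astro01.hpc.ku.dk` frontend name below are illustrative; see the Hardware page for the real frontend list):

```shell
# Generate an ed25519 key pair in a scratch directory (illustrative path;
# in practice you would use ~/.ssh/id_ed25519 and choose a passphrase
# instead of the empty -N '').
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -C 'tycho-login' -f "$keydir/id_ed25519"

# The private key stays on your machine; only the .pub half is copied
# to the cluster, e.g. with:
#   ssh-copy-id -i "$keydir/id_ed25519.pub" yourusername@astro01.hpc.ku.dk
ls "$keydir"   # lists id_ed25519 and id_ed25519.pub
```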
- Add module load astro to your $HOME/.bashrc file to have access to all the custom-installed software. You can read more about the module command.
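Appending the line can be done idempotently, so re-running a setup script never duplicates the entry. A sketch (the rc variable stands in for $HOME/.bashrc so the snippet is safe to try anywhere):

```shell
# rc stands in for $HOME/.bashrc; on Tycho, point it at the real file.
rc=$(mktemp)

# Only append if the exact line is not already present (-x: whole line,
# -F: fixed string). Running the same command twice adds the line once.
grep -qxF 'module load astro' "$rc" || echo 'module load astro' >> "$rc"
grep -qxF 'module load astro' "$rc" || echo 'module load astro' >> "$rc"

grep -c 'module load astro' "$rc"   # prints 1
```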
- Consider changing umask 077 to umask 027 in $HOME/.bashrc to allow collaborators and/or your supervisor read access to your files when logged in to the cluster.
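The effect on newly created files can be checked directly. A sketch (stat -c is the GNU coreutils form used on Linux):

```shell
# umask 027 clears the group-write bit and all "other" bits, so new
# files come out owner-rw, group-read, others-none (mode 640).
umask 027
demo=$(mktemp -d)/demo.txt
touch "$demo"
stat -c '%a' "$demo"   # prints 640
```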
- If you are going to travel but want to continue working on Tycho while away, you need to set up a Dynamic Firewall before you leave. You can do so by following the detailed instructions here
- Consider setting up Visual Studio Code for remote development, which gives transparent editing of files on the cluster. See Visual Studio Remote Development
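VS Code's Remote-SSH extension connects through your ~/.ssh/config. A hedged example entry (the astro01.hpc.ku.dk hostname, user name, and key path are illustrative, and the cfg variable stands in for the real config file so this is safe to try):

```shell
# cfg stands in for ~/.ssh/config; on your machine, edit the real file.
cfg=$(mktemp)
cat >> "$cfg" <<'EOF'
Host tycho
    HostName astro01.hpc.ku.dk
    User yourusername
    IdentityFile ~/.ssh/id_ed25519
EOF
# With this in ~/.ssh/config, Remote-SSH offers "tycho" as a host.
```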
- Whenever you are going to be working with large amounts of data, consider using the /lustre/astro/yourusername directory. This scratch directory is a ZFS-based, high-performance Lustre filesystem with dedicated hardware for our group. No quotas are enforced, and the total space (disregarding the transparent compression) is 1300 TB.