<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.nbi.ku.dk/w/tycho/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ricolo</id>
	<title>Tycho - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.nbi.ku.dk/w/tycho/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ricolo"/>
	<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/tycho/Special:Contributions/Ricolo"/>
	<updated>2026-04-23T06:25:18Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.8</generator>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=165</id>
		<title>Visual Studio Remote Development</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=165"/>
		<updated>2023-11-15T14:17:05Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://github.com/Microsoft/vscode-remote-release#readme Remote development] is a set of extensions for Visual Studio Code that lets you work with a remote server in VSCode as if it were running on your local computer. For more details, refer to Microsoft&#039;s documentation [https://code.visualstudio.com/docs/remote/remote-overview here].&lt;br /&gt;
&lt;br /&gt;
== Enabling and configuring Remote - SSH ==&lt;br /&gt;
=== Installing Remote - SSH ===&lt;br /&gt;
First you will need to install an SSH client compatible with &amp;lt;code&amp;gt;OpenSSH&amp;lt;/code&amp;gt;. Refer to [https://code.visualstudio.com/docs/remote/troubleshooting#_installing-a-supported-ssh-client this page] for instructions (&#039;&#039;&#039;note:&#039;&#039;&#039; macOS already ships with a compatible client, so you can skip this step).&lt;br /&gt;
&lt;br /&gt;
Follow the instructions [https://code.visualstudio.com/docs/editor/extension-marketplace here] for searching for and installing extensions in VSCode. Search for and install the extension &amp;quot;Remote - SSH&amp;quot; (or, if you are feeling fancy, install &amp;quot;Remote Development&amp;quot;, which includes &amp;quot;Remote - SSH&amp;quot;). Alternatively, just click [https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh here].&lt;br /&gt;
&lt;br /&gt;
=== Adding a remote host/server ===&lt;br /&gt;
After reloading/reopening VSCode, a new item, &amp;quot;Remote Explorer&amp;quot;, should appear on the activity bar (on the left). Click on it and a separate tab should pop up. Hover over &amp;quot;SSH&amp;quot; and a &amp;quot;+&amp;quot; symbol should show up. Click the &amp;quot;+&amp;quot; symbol and a prompt will ask you to enter an SSH connection command, just as you would in a terminal app, such as&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh your_username@astro02.hpc.ku.dk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Refer to [[Accessing Tycho]] for more explanations.&lt;br /&gt;
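&lt;br /&gt;
If you connect to the cluster often, you can optionally store the host in your SSH configuration file so that it shows up under &amp;quot;SSH&amp;quot; in &amp;quot;Remote Explorer&amp;quot; automatically. A minimal sketch (the alias &amp;lt;code&amp;gt;tycho&amp;lt;/code&amp;gt; is just an illustrative name):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# ~/.ssh/config&lt;br /&gt;
Host tycho&lt;br /&gt;
    HostName astro02.hpc.ku.dk&lt;br /&gt;
    User your_username&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
With this in place, &amp;lt;code&amp;gt;ssh tycho&amp;lt;/code&amp;gt; works both in a terminal and in the VSCode connection prompt.&lt;br /&gt;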
&lt;br /&gt;
You should now be able to browse and edit files on the cluster as if it were your local machine, by clicking &amp;quot;Open Folder&amp;quot; or using &amp;quot;Explorer&amp;quot; on the activity bar.&lt;br /&gt;
&lt;br /&gt;
You can also create a new terminal session by opening the &amp;quot;Terminal&amp;quot; menu at the top and selecting &amp;quot;New Terminal&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Installing more extensions on a remote machine for extra features ==&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=161</id>
		<title>Visual Studio Remote Development</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=161"/>
		<updated>2023-11-15T14:12:37Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: /* Enabling and configuring Remote - SSH */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://github.com/Microsoft/vscode-remote-release#readme Remote development] is a set of extensions for Visual Studio Code that lets you work with a remote server in VSCode as if it were running on your local computer. For more details, refer to Microsoft&#039;s documentation [https://code.visualstudio.com/docs/remote/remote-overview here].&lt;br /&gt;
&lt;br /&gt;
== Enabling and configuring Remote - SSH ==&lt;br /&gt;
=== Installing Remote - SSH ===&lt;br /&gt;
First you will need to install an SSH client compatible with &amp;lt;code&amp;gt;OpenSSH&amp;lt;/code&amp;gt;. Refer to [https://code.visualstudio.com/docs/remote/troubleshooting#_installing-a-supported-ssh-client this page] for instructions (&#039;&#039;&#039;note:&#039;&#039;&#039; macOS already ships with a compatible client, so you can skip this step).&lt;br /&gt;
&lt;br /&gt;
Follow the instructions [https://code.visualstudio.com/docs/editor/extension-marketplace here] for searching for and installing extensions in VSCode. Search for and install the extension &amp;quot;Remote - SSH&amp;quot; (or, if you are feeling fancy, install &amp;quot;Remote Development&amp;quot;, which includes &amp;quot;Remote - SSH&amp;quot;). Alternatively, just click [https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh here].&lt;br /&gt;
&lt;br /&gt;
=== Adding a remote host/server ===&lt;br /&gt;
After reloading/reopening VSCode, a new item, &amp;quot;Remote Explorer&amp;quot;, should appear on the activity bar (on the left). Click on it and a separate tab should pop up. Hover over &amp;quot;SSH&amp;quot; and a &amp;quot;+&amp;quot; symbol should show up. Click the &amp;quot;+&amp;quot; symbol and a prompt will ask you to enter an SSH connection command, just as you would in a terminal app, such as&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh your_username@astro02.hpc.ku.dk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Refer to [[Accessing Tycho]] for more explanations.&lt;br /&gt;
&lt;br /&gt;
You should now be able to browse and edit files on the cluster as if it were your local machine, by clicking &amp;quot;Open Folder&amp;quot; or using &amp;quot;Explorer&amp;quot; on the activity bar.&lt;br /&gt;
&lt;br /&gt;
You can also create a new terminal session by opening the &amp;quot;Terminal&amp;quot; menu at the top and selecting &amp;quot;New Terminal&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Installing more extensions on a remote machine for extra features ==&lt;br /&gt;
=== for python ===&lt;br /&gt;
=== for julia ===&lt;br /&gt;
=== for jupyter notebook ===&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=157</id>
		<title>Visual Studio Remote Development</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=157"/>
		<updated>2023-11-15T14:10:37Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: /* Enabling and configuring Remote - SSH */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://github.com/Microsoft/vscode-remote-release#readme Remote development] is a set of extensions for Visual Studio Code that lets you work with a remote server in VSCode as if it were running on your local computer. For more details, refer to Microsoft&#039;s documentation [https://code.visualstudio.com/docs/remote/remote-overview here].&lt;br /&gt;
&lt;br /&gt;
== Enabling and configuring Remote - SSH ==&lt;br /&gt;
=== Installing Remote - SSH ===&lt;br /&gt;
First you will need to install an SSH client compatible with &amp;lt;code&amp;gt;OpenSSH&amp;lt;/code&amp;gt;. Refer to [https://code.visualstudio.com/docs/remote/troubleshooting#_installing-a-supported-ssh-client this page] for instructions (&#039;&#039;&#039;note:&#039;&#039;&#039; macOS already ships with a compatible client, so you can skip this step).&lt;br /&gt;
&lt;br /&gt;
Follow the instructions [https://code.visualstudio.com/docs/editor/extension-marketplace here] for searching for and installing extensions in VSCode. Search for and install the extension &amp;quot;Remote - SSH&amp;quot; (or, if you are feeling fancy, install &amp;quot;Remote Development&amp;quot;, which includes &amp;quot;Remote - SSH&amp;quot;). Alternatively, just click [https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh here].&lt;br /&gt;
&lt;br /&gt;
=== Adding a remote host/server ===&lt;br /&gt;
After reloading/reopening VSCode, a new item, &amp;quot;Remote Explorer&amp;quot;, should appear on the activity bar (on the left). Click on it and a separate tab should pop up. Hover over &amp;quot;SSH&amp;quot; and a &amp;quot;+&amp;quot; symbol should show up. Click the &amp;quot;+&amp;quot; symbol and a prompt will ask you to enter an SSH connection command, just as you would in a terminal app, such as&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh your_username@astro02.hpc.ku.dk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Refer to [[Accessing Tycho]] for more explanations.&lt;br /&gt;
&lt;br /&gt;
You should now be able to browse and edit files on the cluster as if it were your local machine.&lt;br /&gt;
&lt;br /&gt;
== Installing more extensions on a remote machine for extra features ==&lt;br /&gt;
=== for python ===&lt;br /&gt;
=== for julia ===&lt;br /&gt;
=== for jupyter notebook ===&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=154</id>
		<title>Visual Studio Remote Development</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=154"/>
		<updated>2023-11-15T14:08:32Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://github.com/Microsoft/vscode-remote-release#readme Remote development] is a set of extensions for Visual Studio Code that lets you work with a remote server in VSCode as if it were running on your local computer. For more details, refer to Microsoft&#039;s documentation [https://code.visualstudio.com/docs/remote/remote-overview here].&lt;br /&gt;
&lt;br /&gt;
== Enabling and configuring Remote - SSH ==&lt;br /&gt;
=== Installing Remote - SSH ===&lt;br /&gt;
First you will need to install an SSH client compatible with &amp;lt;code&amp;gt;OpenSSH&amp;lt;/code&amp;gt;. Refer to [https://code.visualstudio.com/docs/remote/troubleshooting#_installing-a-supported-ssh-client this page] for instructions (&#039;&#039;&#039;note:&#039;&#039;&#039; macOS already ships with a compatible client, so you can skip this step).&lt;br /&gt;
&lt;br /&gt;
Follow the instructions [https://code.visualstudio.com/docs/editor/extension-marketplace here] for searching for and installing extensions in VSCode. Search for and install the extension &amp;quot;Remote - SSH&amp;quot; (or, if you are feeling fancy, install &amp;quot;Remote Development&amp;quot;, which includes &amp;quot;Remote - SSH&amp;quot;). Alternatively, just click [https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh here].&lt;br /&gt;
&lt;br /&gt;
=== Adding a remote host/server ===&lt;br /&gt;
After reloading/reopening VSCode, a new item, &amp;quot;Remote Explorer&amp;quot;, should appear on the activity bar (on the left). Click on it and a separate tab should pop up. Hover over &amp;quot;SSH&amp;quot; and a &amp;quot;+&amp;quot; symbol should show up. Click the &amp;quot;+&amp;quot; symbol and a prompt will ask you to enter an SSH connection command, just as you would in a terminal app, such as&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh your_username@astro02.hpc.ku.dk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Refer to [[Accessing Tycho]] for more explanations.&lt;br /&gt;
&lt;br /&gt;
== Installing more extensions on a remote machine for extra features ==&lt;br /&gt;
=== for python ===&lt;br /&gt;
=== for julia ===&lt;br /&gt;
=== for jupyter notebook ===&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=125</id>
		<title>Visual Studio Remote Development</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=125"/>
		<updated>2023-11-15T13:52:43Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://github.com/Microsoft/vscode-remote-release#readme Remote development] is a set of extensions for Visual Studio Code that lets you work with a remote server in VSCode as if it were running on your local computer. For more details, refer to Microsoft&#039;s documentation [https://code.visualstudio.com/docs/remote/remote-overview here].&lt;br /&gt;
&lt;br /&gt;
== Enabling and configuring Remote - SSH ==&lt;br /&gt;
=== Installing Remote - SSH ===&lt;br /&gt;
Follow the instructions [https://code.visualstudio.com/docs/editor/extension-marketplace here] for searching for and installing extensions in VSCode. Search for and install the extension &amp;quot;Remote - SSH&amp;quot; (or, if you are feeling fancy, install &amp;quot;Remote Development&amp;quot;, which includes &amp;quot;Remote - SSH&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
=== Adding a remote host/server ===&lt;br /&gt;
After reloading/reopening VSCode, a new item, &amp;quot;Remote Explorer&amp;quot;, should appear on the activity bar (on the left). Click on it and a separate tab should pop up. Hover over &amp;quot;SSH&amp;quot; and a &amp;quot;+&amp;quot; symbol should show up. Click the &amp;quot;+&amp;quot; symbol and a prompt will ask you to enter an SSH connection command, just as you would in a terminal app, such as&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh your_username@astro02.hpc.ku.dk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Refer to [[Accessing Tycho]] for more explanations.&lt;br /&gt;
&lt;br /&gt;
== Installing more extensions on a remote machine for extra features ==&lt;br /&gt;
=== for python ===&lt;br /&gt;
=== for julia ===&lt;br /&gt;
=== for jupyter notebook ===&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=111</id>
		<title>Visual Studio Remote Development</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=111"/>
		<updated>2023-11-15T13:47:22Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://github.com/Microsoft/vscode-remote-release#readme Remote development] is a set of extensions for Visual Studio Code that lets you work with a remote server in VSCode as if it were running on your local computer. For more details, refer to Microsoft&#039;s documentation [https://code.visualstudio.com/docs/remote/remote-overview here].&lt;br /&gt;
&lt;br /&gt;
== Enabling and configuring Remote Development ==&lt;br /&gt;
=== Installing Remote Development ===&lt;br /&gt;
Follow the instructions [https://code.visualstudio.com/docs/editor/extension-marketplace here] for searching and installing extensions on VSCode. Search for and install the extension &amp;quot;Remote Development&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Adding a remote host/server ===&lt;br /&gt;
After reloading/reopening VSCode, a new item, &amp;quot;Remote Explorer&amp;quot;, should appear on the activity bar (on the left). Click on it and a separate tab should pop up. Hover over &amp;quot;SSH&amp;quot; and a &amp;quot;+&amp;quot; symbol should show up. Click the &amp;quot;+&amp;quot; symbol and a prompt will ask you to enter an SSH connection command, just as you would in a terminal app, such as&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh your_username@astro02.hpc.ku.dk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Refer to [[Accessing Tycho]] for more explanations.&lt;br /&gt;
&lt;br /&gt;
== Installing more extensions on a remote machine for extra features ==&lt;br /&gt;
=== for python ===&lt;br /&gt;
=== for julia ===&lt;br /&gt;
=== for jupyter notebook ===&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=95</id>
		<title>Visual Studio Remote Development</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=95"/>
		<updated>2023-11-15T13:39:05Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: /* Enabling and configuring Remote Development */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://github.com/Microsoft/vscode-remote-release#readme Remote development] is a set of extensions for Visual Studio Code that lets you work with a remote server in VSCode as if it were running on your local computer. For more details, refer to Microsoft&#039;s documentation [https://code.visualstudio.com/docs/remote/remote-overview here].&lt;br /&gt;
&lt;br /&gt;
== Enabling and configuring Remote Development ==&lt;br /&gt;
Follow the instructions [https://code.visualstudio.com/docs/editor/extension-marketplace here] for searching and installing extensions on VSCode. Search for and install the extension &amp;quot;Remote Development&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After reloading/reopening VSCode, a new item, &amp;quot;Remote Explorer&amp;quot;, should appear on the activity bar (on the left). Click on it and a separate tab should pop up. Hover over &amp;quot;SSH&amp;quot; and a &amp;quot;+&amp;quot; symbol should show up. Click the &amp;quot;+&amp;quot; symbol and a prompt will ask you to enter an SSH connection command, just as you would in a terminal app, such as&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh your_username@astro02.hpc.ku.dk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Refer to [[Accessing Tycho]] for more explanations.&lt;br /&gt;
&lt;br /&gt;
== Installing even more extensions for extra features ==&lt;br /&gt;
=== for python ===&lt;br /&gt;
=== for julia ===&lt;br /&gt;
=== for jupyter notebook ===&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=92</id>
		<title>Visual Studio Remote Development</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=92"/>
		<updated>2023-11-15T13:37:11Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: /* Enabling and configuring Remote Development */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://github.com/Microsoft/vscode-remote-release#readme Remote development] is a set of extensions for Visual Studio Code that lets you work with a remote server in VSCode as if it were running on your local computer. For more details, refer to Microsoft&#039;s documentation [https://code.visualstudio.com/docs/remote/remote-overview here].&lt;br /&gt;
&lt;br /&gt;
== Enabling and configuring Remote Development ==&lt;br /&gt;
Follow the instructions [https://code.visualstudio.com/docs/editor/extension-marketplace here] for searching and installing extensions on VSCode. Search for and install the extension &amp;quot;Remote Development&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After reloading/reopening VSCode, a new item, &amp;quot;Remote Explorer&amp;quot;, should appear on the activity bar (on the left). Hover over &amp;quot;SSH&amp;quot; and a &amp;quot;+&amp;quot; symbol should show up. Click the &amp;quot;+&amp;quot; symbol and a prompt will ask you to enter an SSH connection command, just as you would in a terminal app, such as&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh your_username@astro02.hpc.ku.dk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Installing even more extensions for extra features ==&lt;br /&gt;
=== for python ===&lt;br /&gt;
=== for julia ===&lt;br /&gt;
=== for jupyter notebook ===&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=88</id>
		<title>Visual Studio Remote Development</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=88"/>
		<updated>2023-11-15T13:35:50Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: /* Enabling and configuring Remote Development */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://github.com/Microsoft/vscode-remote-release#readme Remote development] is a set of extensions for Visual Studio Code that lets you work with a remote server in VSCode as if it were running on your local computer. For more details, refer to Microsoft&#039;s documentation [https://code.visualstudio.com/docs/remote/remote-overview here].&lt;br /&gt;
&lt;br /&gt;
== Enabling and configuring Remote Development ==&lt;br /&gt;
Follow the instructions [https://code.visualstudio.com/docs/editor/extension-marketplace here] for searching and installing extensions on VSCode. Search for and install the extension &amp;quot;Remote Development&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After reloading/reopening VSCode, a new item, &amp;quot;Remote Explorer&amp;quot;, should appear on the activity bar (on the left). Hover over &amp;quot;SSH&amp;quot; and a &amp;quot;+&amp;quot; symbol should show up. Click the &amp;quot;+&amp;quot; symbol and a prompt will ask you to enter an SSH connection command, just as you would in a terminal app, such as&lt;br /&gt;
&amp;lt;code&amp;gt;ssh username@astro02.hpc.ku.dk&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Installing even more extensions for extra features ==&lt;br /&gt;
=== for python ===&lt;br /&gt;
=== for julia ===&lt;br /&gt;
=== for jupyter notebook ===&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=85</id>
		<title>Visual Studio Remote Development</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=85"/>
		<updated>2023-11-15T13:28:27Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://github.com/Microsoft/vscode-remote-release#readme Remote development] is a set of extensions for Visual Studio Code that lets you work with a remote server in VSCode as if it were running on your local computer. For more details, refer to Microsoft&#039;s documentation [https://code.visualstudio.com/docs/remote/remote-overview here].&lt;br /&gt;
&lt;br /&gt;
== Enabling and configuring Remote Development ==&lt;br /&gt;
Follow the instructions [https://code.visualstudio.com/docs/editor/extension-marketplace here] for searching and installing extensions on VSCode. Search for and install &amp;quot;Remote Development&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Installing even more extensions for extra features ==&lt;br /&gt;
=== for python ===&lt;br /&gt;
=== for julia ===&lt;br /&gt;
=== for jupyter notebook ===&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=82</id>
		<title>Visual Studio Remote Development</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=82"/>
		<updated>2023-11-15T13:27:41Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://github.com/Microsoft/vscode-remote-release#readme Remote development] is a set of extensions for Visual Studio Code that lets you work with a remote server in VSCode as if it were running on your local computer. For more details, refer to Microsoft&#039;s documentation [https://code.visualstudio.com/docs/remote/remote-overview here].&lt;br /&gt;
&lt;br /&gt;
== Enabling Remote Development ==&lt;br /&gt;
Follow the instructions [https://code.visualstudio.com/docs/editor/extension-marketplace here] for searching and installing extensions on VSCode. Search for and install &amp;quot;Remote Development&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Configuring Visual Studio Code == &lt;br /&gt;
&lt;br /&gt;
== Installing even more extensions for extra features ==&lt;br /&gt;
=== for python ===&lt;br /&gt;
=== for julia ===&lt;br /&gt;
=== for jupyter notebook ===&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=77</id>
		<title>Visual Studio Remote Development</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Visual_Studio_Remote_Development&amp;diff=77"/>
		<updated>2023-11-15T13:18:51Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: Created page with &amp;quot;[https://github.com/Microsoft/vscode-remote-release#readme Remote development ] is a set of extensions for Visual Studio Code that allows you to interact with a remote server on VSCode as if it is running on your local computer.  == Installing the Remote Development extension pack ==  == Configuring Visual Studio Code ==   == Installing even more extensions for extra features == === for python === === for julia === === for jupyter notebook ===&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://github.com/Microsoft/vscode-remote-release#readme Remote development] is a set of extensions for Visual Studio Code that lets you work with a remote server in VSCode as if it were running on your local computer.&lt;br /&gt;
&lt;br /&gt;
== Installing the Remote Development extension pack ==&lt;br /&gt;
&lt;br /&gt;
== Configuring Visual Studio Code == &lt;br /&gt;
&lt;br /&gt;
== Installing even more extensions for extra features ==&lt;br /&gt;
=== for python ===&lt;br /&gt;
=== for julia ===&lt;br /&gt;
=== for jupyter notebook ===&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Using_GPUs&amp;diff=47</id>
		<title>Using GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Using_GPUs&amp;diff=47"/>
		<updated>2023-11-15T12:53:10Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: /* Installing CUDA */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here is a detailed guide on how to leverage the GPUs on the NBI cluster. &lt;br /&gt;
&lt;br /&gt;
== Preparation work: making sure that your software is GPU-aware ==&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is recommended that you have a local installation of Anaconda/Miniconda with a recent Python version (preferably 3.10+).&lt;br /&gt;
&lt;br /&gt;
=== Installing CUDA ===&lt;br /&gt;
&lt;br /&gt;
Normally, you would/should use the system-wide CUDA installation to make sure that it is compatible with the GPUs. In fact, there are environment modules for CUDA (e.g. &amp;lt;code&amp;gt;cuda/11.2&amp;lt;/code&amp;gt;; note: you will need to first load the &amp;lt;code&amp;gt;astro&amp;lt;/code&amp;gt; module) pre-installed on the system.&lt;br /&gt;
&lt;br /&gt;
Here we take a different route -- we install our own (and a newer) CUDA for greater control. Usually you would install the latest CUDA that your GPUs support, but as of the time of writing, &amp;lt;code&amp;gt;torch&amp;lt;/code&amp;gt; lacks support for the latest CUDA release (12.x), so we opt for an earlier one (11.8).&lt;br /&gt;
&lt;br /&gt;
To install CUDA via &amp;lt;code&amp;gt;conda&amp;lt;/code&amp;gt;, do&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda install cuda -c nvidia/label/cuda-11.8.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should check that your installation works by running&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nvcc --version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The reported version should match the one you just installed.&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;NOTE:&#039;&#039;&#039; This number can differ from the one reported by &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;, which shows the latest CUDA version supported by the installed driver. In other words, make sure that the CUDA you install is at most that version.)&lt;br /&gt;
&lt;br /&gt;
=== Installing torch ===&lt;br /&gt;
&lt;br /&gt;
Once you have CUDA properly installed, everything else should be a breeze. To install &amp;lt;code&amp;gt;torch&amp;lt;/code&amp;gt; with CUDA awareness, simply do&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Test your installation with the following simple code snippet&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import torch&lt;br /&gt;
&lt;br /&gt;
torch.zeros(100).cuda()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If no error is raised, you have successfully installed torch with CUDA support.&lt;br /&gt;
&lt;br /&gt;
=== Installing cupy ===&lt;br /&gt;
&lt;br /&gt;
Again, if you have CUDA installed, the installation of &amp;lt;code&amp;gt;cupy&amp;lt;/code&amp;gt; is very straightforward. Simply run&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda install -c conda-forge cupy cudnn cutensor nccl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;code&amp;gt;conda&amp;lt;/code&amp;gt; should automatically detect the proper versions to install for your current CUDA installation.&lt;br /&gt;
&lt;br /&gt;
Test your installation with the following simple code snippet&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import cupy&lt;br /&gt;
&lt;br /&gt;
cupy.random.rand(100).device&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It should say something like &amp;lt;code&amp;gt;&amp;lt;CUDA Device 0&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Installing jax ===&lt;br /&gt;
&lt;br /&gt;
Installation of &amp;lt;code&amp;gt;jax&amp;lt;/code&amp;gt; with CUDA is also simple. Run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip install --upgrade &amp;quot;jax[cuda11_pip]&amp;quot; -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Test your installation with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import jax&lt;br /&gt;
&lt;br /&gt;
jax.devices()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It should say something like &amp;lt;code&amp;gt;[cuda(id=0)]&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Running a job directly on a GPU-equipped headnode ==&lt;br /&gt;
&lt;br /&gt;
The GPU-equipped headnode/frontend is &amp;lt;code&amp;gt;astro02&amp;lt;/code&amp;gt; (accessible at &amp;lt;code&amp;gt;astro02.hpc.ku.dk&amp;lt;/code&amp;gt;). It physically has 3 Nvidia A30 GPUs. As the listing below shows, one of them is virtually split into 4 smaller, independent virtual GPUs (in Nvidia&#039;s terms, MIGs or Multi-Instance GPUs), one is split into 2 MIGs, and one remains unsplit.&lt;br /&gt;
&lt;br /&gt;
To specify which GPU to use, set the environment variable &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt;. To see the list of &#039;compute instances&#039; available, run&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nvidia-smi -L&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On &amp;lt;code&amp;gt;astro02&amp;lt;/code&amp;gt;, you should see something like&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
GPU 0: NVIDIA A30 (UUID: GPU-654aa619-952d-3f17-01ec-0c050ac8df88)&lt;br /&gt;
  MIG 1g.6gb      Device  0: (UUID: MIG-3868837f-57d0-5089-9887-19240a8809b4)&lt;br /&gt;
  MIG 1g.6gb      Device  1: (UUID: MIG-d28bcf9f-db13-5ad0-9be2-62d0e25c92a9)&lt;br /&gt;
  MIG 1g.6gb      Device  2: (UUID: MIG-e175ec33-0f38-5952-98d5-1c118bd9d398)&lt;br /&gt;
  MIG 1g.6gb      Device  3: (UUID: MIG-53cc4525-2ae7-5c11-9680-302d1d4177ba)&lt;br /&gt;
GPU 1: NVIDIA A30 (UUID: GPU-cb8c2438-a361-3e30-4ff5-4481d43c9e83)&lt;br /&gt;
  MIG 2g.12gb     Device  0: (UUID: MIG-0a768004-2ded-55f6-ac2b-4dd3f696a222)&lt;br /&gt;
  MIG 2g.12gb     Device  1: (UUID: MIG-0296d938-ea26-5174-a884-cd3c686bf660)&lt;br /&gt;
GPU 2: NVIDIA A30 (UUID: GPU-9bcd54bd-5a72-2e7b-90c8-3e3719d09e5c)&lt;br /&gt;
  MIG 4g.24gb     Device  0: (UUID: MIG-a8cb1bd5-6f68-54a1-8e88-ca2fa4ef80c0)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if we want to use the third &amp;lt;code&amp;gt;MIG 1g.6gb&amp;lt;/code&amp;gt; instance with the UUID &amp;lt;code&amp;gt;MIG-e175ec33-0f38-5952-98d5-1c118bd9d398&amp;lt;/code&amp;gt;, set the environment variable&lt;br /&gt;
&amp;lt;code&amp;gt;export CUDA_VISIBLE_DEVICES=MIG-e175ec33-0f38-5952-98d5-1c118bd9d398&amp;lt;/code&amp;gt;&lt;br /&gt;
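The same selection can also be made per-process from Python, as long as the variable is set before any CUDA library initialises. A minimal sketch (the UUID is the example one from the listing above; substitute one from your own &amp;lt;code&amp;gt;nvidia-smi -L&amp;lt;/code&amp;gt; output):

```python
import os

def select_gpu(uuid: str) -> None:
    """Restrict CUDA libraries to a single (MIG) device.

    Must be called before torch/cupy/jax initialise CUDA,
    i.e. before their first import in most cases.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = uuid

# Example UUID from the `nvidia-smi -L` listing above.
select_gpu("MIG-e175ec33-0f38-5952-98d5-1c118bd9d398")
```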
&lt;br /&gt;
Then running the same test code for &amp;lt;code&amp;gt;torch&amp;lt;/code&amp;gt; and checking with &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;, we see that&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
+---------------------------------------------------------------------------------------+&lt;br /&gt;
| MIG devices:                                                                          |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
| GPU  GI  CI  MIG |                   Memory-Usage |        Vol|      Shared           |&lt;br /&gt;
|      ID  ID  Dev |                     BAR1-Usage | SM     Unc| CE ENC DEC OFA JPG    |&lt;br /&gt;
|                  |                                |        ECC|                       |&lt;br /&gt;
|==================+================================+===========+=======================|&lt;br /&gt;
|  0    3   0   0  |              12MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               0MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  0    4   0   1  |              12MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               0MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  0    5   0   2  |             107MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               2MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  0    6   0   3  |              12MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               0MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  1    1   0   0  |              25MiB / 11968MiB  | 28      0 |  2   0    2    0    0 |&lt;br /&gt;
|                  |               0MiB / 16383MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  1    2   0   1  |              25MiB / 11968MiB  | 28      0 |  2   0    2    0    0 |&lt;br /&gt;
|                  |               0MiB / 16383MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  2    0   0   0  |               1MiB / 24062MiB  | 56      0 |  4   0    4    1    1 |&lt;br /&gt;
|                  |               1MiB / 32768MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
&lt;br /&gt;
+---------------------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                                            |&lt;br /&gt;
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |&lt;br /&gt;
|        ID   ID                                                             Usage      |&lt;br /&gt;
|=======================================================================================|&lt;br /&gt;
|    0    5    0     530009      C   ...nda3/envs/igwn-py310/bin/python3.10       88MiB |&lt;br /&gt;
+---------------------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Indeed we are using the desired MIG.&lt;br /&gt;
&lt;br /&gt;
== Submitting a job to the GPU partition with slurm ==&lt;br /&gt;
&lt;br /&gt;
Simply specify the GPU partition, &amp;lt;code&amp;gt;astro2_gpu&amp;lt;/code&amp;gt;, and the number of &#039;generic resources&#039; (GRES; in this case, GPUs) that you want to use when submitting a job with &amp;lt;code&amp;gt;slurm&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example command is&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun -p astro2_gpu --gres=gpu:1 nvidia-smi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This should show the GPU (not a virtual MIG instance) that has been assigned to you.&lt;br /&gt;
&lt;br /&gt;
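For batch jobs, the same options go into an &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; script. A minimal sketch (the job name, time limit, and script name are placeholders; adjust to your needs):

```shell
#!/bin/bash
#SBATCH --partition=astro2_gpu
#SBATCH --gres=gpu:1
#SBATCH --job-name=gpu-test        # placeholder name
#SBATCH --time=01:00:00            # assumed time limit; adjust

# Show which GPU was assigned, then run your own workload.
nvidia-smi
python my_gpu_script.py            # hypothetical script
```

Submit it with &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; and monitor it with &amp;lt;code&amp;gt;squeue -u $USER&amp;lt;/code&amp;gt;.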
As far as I know, there are 11 Nvidia A100 GPUs in this partition.&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Using_GPUs&amp;diff=44</id>
		<title>Using GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Using_GPUs&amp;diff=44"/>
		<updated>2023-11-15T12:52:39Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: /* Installing jax with CUDA awareness */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here is a detailed guide on how to leverage the GPUs on the NBI cluster. &lt;br /&gt;
&lt;br /&gt;
== Preparation work: making sure that your software is GPU-aware ==&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is recommended to have a local installation of (ana-/mini-)conda with a recent Python version (preferably 3.10+).&lt;br /&gt;
&lt;br /&gt;
=== Installing CUDA ===&lt;br /&gt;
&lt;br /&gt;
Normally, you would/should use the system-wide CUDA installation to make sure that it is compatible with the GPUs. In fact, there are environment modules for CUDA (e.g. &amp;lt;code&amp;gt;cuda/11.2&amp;lt;/code&amp;gt;; note: you will need to first load the &amp;lt;code&amp;gt;astro&amp;lt;/code&amp;gt; module) pre-installed on the system.&lt;br /&gt;
&lt;br /&gt;
Here we take a different route -- we install our own (and a newer version of) CUDA for greater control. Usually you would want to install the latest CUDA that your GPUs support, but as of the time of this writing, &amp;lt;code&amp;gt;torch&amp;lt;/code&amp;gt; lacks support for the latest CUDA version 12.x, so we opt for an earlier release (11.8).&lt;br /&gt;
&lt;br /&gt;
To install CUDA via &amp;lt;code&amp;gt;conda&amp;lt;/code&amp;gt;, do&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda install cuda -c nvidia/label/cuda-11.8.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should check that your installation works by running&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nvcc --version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This should match the version that you just installed.&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Note:&#039;&#039;&#039; this number can differ from the one reported by &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;, which shows the latest CUDA version supported by the installed driver. In other words, make sure that the CUDA you install is at most that version.)&lt;br /&gt;
&lt;br /&gt;
=== Installing torch ===&lt;br /&gt;
&lt;br /&gt;
Once you have CUDA properly installed, everything else should be a breeze. To install &amp;lt;code&amp;gt;torch&amp;lt;/code&amp;gt; with CUDA awareness, simply do&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Test your installation with the following simple code snippet&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import torch&lt;br /&gt;
&lt;br /&gt;
torch.zeros(100).cuda()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If no error is raised, you have successfully installed torch with CUDA support.&lt;br /&gt;
&lt;br /&gt;
=== Installing cupy ===&lt;br /&gt;
&lt;br /&gt;
Again, if you have CUDA installed, the installation of &amp;lt;code&amp;gt;cupy&amp;lt;/code&amp;gt; is very straightforward. Simply run&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda install -c conda-forge cupy cudnn cutensor nccl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;code&amp;gt;conda&amp;lt;/code&amp;gt; should (hopefully) detect the proper version to install based on your current installation of CUDA.&lt;br /&gt;
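If conda picks an unexpected variant, you can pin the CUDA version explicitly. A sketch, assuming the conda-forge &amp;lt;code&amp;gt;cuda-version&amp;lt;/code&amp;gt; metapackage is available for your release:

```shell
# Pin the cupy build to CUDA 11.8, matching the CUDA installed earlier.
conda install -c conda-forge cupy cudnn cutensor nccl cuda-version=11.8
```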
&lt;br /&gt;
Test your installation with the following simple code snippet&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import cupy&lt;br /&gt;
&lt;br /&gt;
cupy.random.rand(100).device&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It should say something like &amp;lt;code&amp;gt;&amp;lt;CUDA Device 0&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Installing jax ===&lt;br /&gt;
&lt;br /&gt;
Installation of &amp;lt;code&amp;gt;jax&amp;lt;/code&amp;gt; with CUDA is also simple. Run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip install --upgrade &amp;quot;jax[cuda11_pip]&amp;quot; -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Test your installation with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import jax&lt;br /&gt;
&lt;br /&gt;
jax.devices()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It should say something like &amp;lt;code&amp;gt;[cuda(id=0)]&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Running a job directly on a GPU-equipped headnode ==&lt;br /&gt;
&lt;br /&gt;
The GPU-equipped headnode/frontend is &amp;lt;code&amp;gt;astro02&amp;lt;/code&amp;gt; (accessible at &amp;lt;code&amp;gt;astro02.hpc.ku.dk&amp;lt;/code&amp;gt;). It has 3 physical Nvidia A30 GPUs: one is virtually split into 4 smaller, independent virtual GPUs (in Nvidia&#039;s terms, MIG or Multi-Instance GPU), one is split into 2 smaller MIGs, and one remains unsplit.&lt;br /&gt;
&lt;br /&gt;
To specify which GPU to use, set the environment variable &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt;. To see the list of &#039;compute instances&#039; available, run&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nvidia-smi -L&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On &amp;lt;code&amp;gt;astro02&amp;lt;/code&amp;gt;, you should see something like&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
GPU 0: NVIDIA A30 (UUID: GPU-654aa619-952d-3f17-01ec-0c050ac8df88)&lt;br /&gt;
  MIG 1g.6gb      Device  0: (UUID: MIG-3868837f-57d0-5089-9887-19240a8809b4)&lt;br /&gt;
  MIG 1g.6gb      Device  1: (UUID: MIG-d28bcf9f-db13-5ad0-9be2-62d0e25c92a9)&lt;br /&gt;
  MIG 1g.6gb      Device  2: (UUID: MIG-e175ec33-0f38-5952-98d5-1c118bd9d398)&lt;br /&gt;
  MIG 1g.6gb      Device  3: (UUID: MIG-53cc4525-2ae7-5c11-9680-302d1d4177ba)&lt;br /&gt;
GPU 1: NVIDIA A30 (UUID: GPU-cb8c2438-a361-3e30-4ff5-4481d43c9e83)&lt;br /&gt;
  MIG 2g.12gb     Device  0: (UUID: MIG-0a768004-2ded-55f6-ac2b-4dd3f696a222)&lt;br /&gt;
  MIG 2g.12gb     Device  1: (UUID: MIG-0296d938-ea26-5174-a884-cd3c686bf660)&lt;br /&gt;
GPU 2: NVIDIA A30 (UUID: GPU-9bcd54bd-5a72-2e7b-90c8-3e3719d09e5c)&lt;br /&gt;
  MIG 4g.24gb     Device  0: (UUID: MIG-a8cb1bd5-6f68-54a1-8e88-ca2fa4ef80c0)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if we want to use the third &amp;lt;code&amp;gt;MIG 1g.6gb&amp;lt;/code&amp;gt; instance with the UUID &amp;lt;code&amp;gt;MIG-e175ec33-0f38-5952-98d5-1c118bd9d398&amp;lt;/code&amp;gt;, set the environment variable&lt;br /&gt;
&amp;lt;code&amp;gt;export CUDA_VISIBLE_DEVICES=MIG-e175ec33-0f38-5952-98d5-1c118bd9d398&amp;lt;/code&amp;gt;&lt;br /&gt;
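The same selection can also be made per-process from Python, as long as the variable is set before any CUDA library initialises. A minimal sketch (the UUID is the example one from the listing above; substitute one from your own &amp;lt;code&amp;gt;nvidia-smi -L&amp;lt;/code&amp;gt; output):

```python
import os

def select_gpu(uuid: str) -> None:
    """Restrict CUDA libraries to a single (MIG) device.

    Must be called before torch/cupy/jax initialise CUDA,
    i.e. before their first import in most cases.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = uuid

# Example UUID from the `nvidia-smi -L` listing above.
select_gpu("MIG-e175ec33-0f38-5952-98d5-1c118bd9d398")
```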
&lt;br /&gt;
Then running the same test code for &amp;lt;code&amp;gt;torch&amp;lt;/code&amp;gt; and checking with &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;, we see that&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
+---------------------------------------------------------------------------------------+&lt;br /&gt;
| MIG devices:                                                                          |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
| GPU  GI  CI  MIG |                   Memory-Usage |        Vol|      Shared           |&lt;br /&gt;
|      ID  ID  Dev |                     BAR1-Usage | SM     Unc| CE ENC DEC OFA JPG    |&lt;br /&gt;
|                  |                                |        ECC|                       |&lt;br /&gt;
|==================+================================+===========+=======================|&lt;br /&gt;
|  0    3   0   0  |              12MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               0MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  0    4   0   1  |              12MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               0MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  0    5   0   2  |             107MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               2MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  0    6   0   3  |              12MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               0MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  1    1   0   0  |              25MiB / 11968MiB  | 28      0 |  2   0    2    0    0 |&lt;br /&gt;
|                  |               0MiB / 16383MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  1    2   0   1  |              25MiB / 11968MiB  | 28      0 |  2   0    2    0    0 |&lt;br /&gt;
|                  |               0MiB / 16383MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  2    0   0   0  |               1MiB / 24062MiB  | 56      0 |  4   0    4    1    1 |&lt;br /&gt;
|                  |               1MiB / 32768MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
&lt;br /&gt;
+---------------------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                                            |&lt;br /&gt;
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |&lt;br /&gt;
|        ID   ID                                                             Usage      |&lt;br /&gt;
|=======================================================================================|&lt;br /&gt;
|    0    5    0     530009      C   ...nda3/envs/igwn-py310/bin/python3.10       88MiB |&lt;br /&gt;
+---------------------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Indeed we are using the desired MIG.&lt;br /&gt;
&lt;br /&gt;
== Submitting a job to the GPU partition with slurm ==&lt;br /&gt;
&lt;br /&gt;
Simply specify the GPU partition, &amp;lt;code&amp;gt;astro2_gpu&amp;lt;/code&amp;gt;, and the number of &#039;generic resources&#039; (GRES; in this case, GPUs) that you want to use when submitting a job with &amp;lt;code&amp;gt;slurm&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example command is&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun -p astro2_gpu --gres=gpu:1 nvidia-smi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This should show the GPU (not a virtual MIG instance) that has been assigned to you.&lt;br /&gt;
&lt;br /&gt;
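For batch jobs, the same options go into an &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; script. A minimal sketch (the job name, time limit, and script name are placeholders; adjust to your needs):

```shell
#!/bin/bash
#SBATCH --partition=astro2_gpu
#SBATCH --gres=gpu:1
#SBATCH --job-name=gpu-test        # placeholder name
#SBATCH --time=01:00:00            # assumed time limit; adjust

# Show which GPU was assigned, then run your own workload.
nvidia-smi
python my_gpu_script.py            # hypothetical script
```

Submit it with &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; and monitor it with &amp;lt;code&amp;gt;squeue -u $USER&amp;lt;/code&amp;gt;.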
As far as I know, there are 11 Nvidia A100 GPUs in this partition.&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Using_GPUs&amp;diff=42</id>
		<title>Using GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Using_GPUs&amp;diff=42"/>
		<updated>2023-11-15T12:51:18Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: /* Preparation work: making sure that your software is GPU-aware */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here is a detailed guide on how to leverage the GPUs on the NBI cluster. &lt;br /&gt;
&lt;br /&gt;
== Preparation work: making sure that your software is GPU-aware ==&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is recommended to have a local installation of (ana-/mini-)conda with a recent Python version (preferably 3.10+).&lt;br /&gt;
&lt;br /&gt;
=== Installing CUDA ===&lt;br /&gt;
&lt;br /&gt;
Normally, you would/should use the system-wide CUDA installation to make sure that it is compatible with the GPUs. In fact, there are environment modules for CUDA (e.g. &amp;lt;code&amp;gt;cuda/11.2&amp;lt;/code&amp;gt;; note: you will need to first load the &amp;lt;code&amp;gt;astro&amp;lt;/code&amp;gt; module) pre-installed on the system.&lt;br /&gt;
&lt;br /&gt;
Here we take a different route -- we install our own (and a newer version of) CUDA for greater control. Usually you would want to install the latest CUDA that your GPUs support, but as of the time of this writing, &amp;lt;code&amp;gt;torch&amp;lt;/code&amp;gt; lacks support for the latest CUDA version 12.x, so we opt for an earlier release (11.8).&lt;br /&gt;
&lt;br /&gt;
To install CUDA via &amp;lt;code&amp;gt;conda&amp;lt;/code&amp;gt;, do&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda install cuda -c nvidia/label/cuda-11.8.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should check that your installation works by running&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nvcc --version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This should match the version that you just installed.&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Note:&#039;&#039;&#039; this number can differ from the one reported by &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;, which shows the latest CUDA version supported by the installed driver. In other words, make sure that the CUDA you install is at most that version.)&lt;br /&gt;
&lt;br /&gt;
=== Installing torch ===&lt;br /&gt;
&lt;br /&gt;
Once you have CUDA properly installed, everything else should be a breeze. To install &amp;lt;code&amp;gt;torch&amp;lt;/code&amp;gt; with CUDA awareness, simply do&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Test your installation with the following simple code snippet&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import torch&lt;br /&gt;
&lt;br /&gt;
torch.zeros(100).cuda()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If no error is raised, you have successfully installed torch with CUDA support.&lt;br /&gt;
&lt;br /&gt;
=== Installing cupy ===&lt;br /&gt;
&lt;br /&gt;
Again, if you have CUDA installed, the installation of &amp;lt;code&amp;gt;cupy&amp;lt;/code&amp;gt; is very straightforward. Simply run&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda install -c conda-forge cupy cudnn cutensor nccl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;code&amp;gt;conda&amp;lt;/code&amp;gt; should (hopefully) detect the proper version to install based on your current installation of CUDA.&lt;br /&gt;
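If conda picks an unexpected variant, you can pin the CUDA version explicitly. A sketch, assuming the conda-forge &amp;lt;code&amp;gt;cuda-version&amp;lt;/code&amp;gt; metapackage is available for your release:

```shell
# Pin the cupy build to CUDA 11.8, matching the CUDA installed earlier.
conda install -c conda-forge cupy cudnn cutensor nccl cuda-version=11.8
```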
&lt;br /&gt;
Test your installation with the following simple code snippet&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import cupy&lt;br /&gt;
&lt;br /&gt;
cupy.random.rand(100).device&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It should say something like &amp;lt;code&amp;gt;&amp;lt;CUDA Device 0&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Installing jax with CUDA awareness ===&lt;br /&gt;
&lt;br /&gt;
Installation of &amp;lt;code&amp;gt;jax&amp;lt;/code&amp;gt; with CUDA is also simple. Run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip install --upgrade &amp;quot;jax[cuda11_pip]&amp;quot; -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Test your installation with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import jax&lt;br /&gt;
&lt;br /&gt;
jax.devices()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It should say something like &amp;lt;code&amp;gt;[cuda(id=0)]&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Running a job directly on a GPU-equipped headnode ==&lt;br /&gt;
&lt;br /&gt;
The GPU-equipped headnode/frontend is &amp;lt;code&amp;gt;astro02&amp;lt;/code&amp;gt; (accessible at &amp;lt;code&amp;gt;astro02.hpc.ku.dk&amp;lt;/code&amp;gt;). It has 3 physical Nvidia A30 GPUs: one is virtually split into 4 smaller, independent virtual GPUs (in Nvidia&#039;s terms, MIG or Multi-Instance GPU), one is split into 2 smaller MIGs, and one remains unsplit.&lt;br /&gt;
&lt;br /&gt;
To specify which GPU to use, set the environment variable &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt;. To see the list of &#039;compute instances&#039; available, run&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nvidia-smi -L&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On &amp;lt;code&amp;gt;astro02&amp;lt;/code&amp;gt;, you should see something like&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
GPU 0: NVIDIA A30 (UUID: GPU-654aa619-952d-3f17-01ec-0c050ac8df88)&lt;br /&gt;
  MIG 1g.6gb      Device  0: (UUID: MIG-3868837f-57d0-5089-9887-19240a8809b4)&lt;br /&gt;
  MIG 1g.6gb      Device  1: (UUID: MIG-d28bcf9f-db13-5ad0-9be2-62d0e25c92a9)&lt;br /&gt;
  MIG 1g.6gb      Device  2: (UUID: MIG-e175ec33-0f38-5952-98d5-1c118bd9d398)&lt;br /&gt;
  MIG 1g.6gb      Device  3: (UUID: MIG-53cc4525-2ae7-5c11-9680-302d1d4177ba)&lt;br /&gt;
GPU 1: NVIDIA A30 (UUID: GPU-cb8c2438-a361-3e30-4ff5-4481d43c9e83)&lt;br /&gt;
  MIG 2g.12gb     Device  0: (UUID: MIG-0a768004-2ded-55f6-ac2b-4dd3f696a222)&lt;br /&gt;
  MIG 2g.12gb     Device  1: (UUID: MIG-0296d938-ea26-5174-a884-cd3c686bf660)&lt;br /&gt;
GPU 2: NVIDIA A30 (UUID: GPU-9bcd54bd-5a72-2e7b-90c8-3e3719d09e5c)&lt;br /&gt;
  MIG 4g.24gb     Device  0: (UUID: MIG-a8cb1bd5-6f68-54a1-8e88-ca2fa4ef80c0)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if we want to use the third &amp;lt;code&amp;gt;MIG 1g.6gb&amp;lt;/code&amp;gt; instance with the UUID &amp;lt;code&amp;gt;MIG-e175ec33-0f38-5952-98d5-1c118bd9d398&amp;lt;/code&amp;gt;, set the environment variable&lt;br /&gt;
&amp;lt;code&amp;gt;export CUDA_VISIBLE_DEVICES=MIG-e175ec33-0f38-5952-98d5-1c118bd9d398&amp;lt;/code&amp;gt;&lt;br /&gt;
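The same selection can also be made per-process from Python, as long as the variable is set before any CUDA library initialises. A minimal sketch (the UUID is the example one from the listing above; substitute one from your own &amp;lt;code&amp;gt;nvidia-smi -L&amp;lt;/code&amp;gt; output):

```python
import os

def select_gpu(uuid: str) -> None:
    """Restrict CUDA libraries to a single (MIG) device.

    Must be called before torch/cupy/jax initialise CUDA,
    i.e. before their first import in most cases.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = uuid

# Example UUID from the `nvidia-smi -L` listing above.
select_gpu("MIG-e175ec33-0f38-5952-98d5-1c118bd9d398")
```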
&lt;br /&gt;
Then running the same test code for &amp;lt;code&amp;gt;torch&amp;lt;/code&amp;gt; and checking with &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;, we see that&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
+---------------------------------------------------------------------------------------+&lt;br /&gt;
| MIG devices:                                                                          |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
| GPU  GI  CI  MIG |                   Memory-Usage |        Vol|      Shared           |&lt;br /&gt;
|      ID  ID  Dev |                     BAR1-Usage | SM     Unc| CE ENC DEC OFA JPG    |&lt;br /&gt;
|                  |                                |        ECC|                       |&lt;br /&gt;
|==================+================================+===========+=======================|&lt;br /&gt;
|  0    3   0   0  |              12MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               0MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  0    4   0   1  |              12MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               0MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  0    5   0   2  |             107MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               2MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  0    6   0   3  |              12MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               0MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  1    1   0   0  |              25MiB / 11968MiB  | 28      0 |  2   0    2    0    0 |&lt;br /&gt;
|                  |               0MiB / 16383MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  1    2   0   1  |              25MiB / 11968MiB  | 28      0 |  2   0    2    0    0 |&lt;br /&gt;
|                  |               0MiB / 16383MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  2    0   0   0  |               1MiB / 24062MiB  | 56      0 |  4   0    4    1    1 |&lt;br /&gt;
|                  |               1MiB / 32768MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
&lt;br /&gt;
+---------------------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                                            |&lt;br /&gt;
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |&lt;br /&gt;
|        ID   ID                                                             Usage      |&lt;br /&gt;
|=======================================================================================|&lt;br /&gt;
|    0    5    0     530009      C   ...nda3/envs/igwn-py310/bin/python3.10       88MiB |&lt;br /&gt;
+---------------------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Indeed we are using the desired MIG.&lt;br /&gt;
&lt;br /&gt;
== Submitting a job to the GPU partition with slurm ==&lt;br /&gt;
&lt;br /&gt;
Simply specify the GPU partition, &amp;lt;code&amp;gt;astro2_gpu&amp;lt;/code&amp;gt;, and the number of &#039;generic resources&#039; (GRES; in this case, GPUs) that you want to use when submitting a job with &amp;lt;code&amp;gt;slurm&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example command is&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun -p astro2_gpu --gres=gpu:1 nvidia-smi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This should show the GPU (not a virtual MIG instance) that has been assigned to you.&lt;br /&gt;
&lt;br /&gt;
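For batch jobs, the same options go into an &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; script. A minimal sketch (the job name, time limit, and script name are placeholders; adjust to your needs):

```shell
#!/bin/bash
#SBATCH --partition=astro2_gpu
#SBATCH --gres=gpu:1
#SBATCH --job-name=gpu-test        # placeholder name
#SBATCH --time=01:00:00            # assumed time limit; adjust

# Show which GPU was assigned, then run your own workload.
nvidia-smi
python my_gpu_script.py            # hypothetical script
```

Submit it with &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; and monitor it with &amp;lt;code&amp;gt;squeue -u $USER&amp;lt;/code&amp;gt;.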
As far as I know, there are 11 Nvidia A100 GPUs in this partition.&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Using_GPUs&amp;diff=41</id>
		<title>Using GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Using_GPUs&amp;diff=41"/>
		<updated>2023-11-15T12:50:13Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: /* Installing torch */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here is a detailed guide on how to leverage the GPUs on the NBI cluster. &lt;br /&gt;
&lt;br /&gt;
== Preparation work: making sure that your software is GPU-aware ==&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is recommended to have a local installation of (ana-/mini-)conda with a recent Python version (preferably 3.10+).&lt;br /&gt;
&lt;br /&gt;
=== Installing CUDA ===&lt;br /&gt;
&lt;br /&gt;
Normally, you would/should use the system-wide CUDA installation to make sure that it is compatible with the GPUs. In fact, there are environment modules for CUDA (e.g. &amp;lt;code&amp;gt;cuda/11.2&amp;lt;/code&amp;gt;; note: you will need to first load the &amp;lt;code&amp;gt;astro&amp;lt;/code&amp;gt; module) pre-installed on the system.&lt;br /&gt;
&lt;br /&gt;
Here we take a different route -- we install our own (and a newer version of) CUDA for greater control. Usually you would want to install the latest CUDA that your GPUs support, but in my case, &amp;lt;code&amp;gt;torch&amp;lt;/code&amp;gt; lacks support for the latest CUDA version 12.x, so I opt for an earlier release (11.8).&lt;br /&gt;
&lt;br /&gt;
To install CUDA via &amp;lt;code&amp;gt;conda&amp;lt;/code&amp;gt;, do&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda install cuda -c nvidia/label/cuda-11.8.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should check that your installation works by running&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nvcc --version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This should match the version that you just installed.&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Note:&#039;&#039;&#039; this number can differ from the one reported by &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;, which shows the latest CUDA version supported by the installed driver. In other words, make sure that the CUDA you install is at most that version.)&lt;br /&gt;
&lt;br /&gt;
=== Installing torch ===&lt;br /&gt;
&lt;br /&gt;
Once you have CUDA properly installed, everything else should be a breeze. To install &amp;lt;code&amp;gt;torch&amp;lt;/code&amp;gt; with CUDA awareness, simply do&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Test your installation with the following simple code snippet&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import torch&lt;br /&gt;
&lt;br /&gt;
torch.zeros(100).cuda()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If no error message appears, you have successfully installed torch with CUDA support.&lt;br /&gt;
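&lt;br /&gt;
You can also query the device explicitly to double-check (a small sketch using standard &amp;lt;code&amp;gt;torch.cuda&amp;lt;/code&amp;gt; calls):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import torch&lt;br /&gt;
&lt;br /&gt;
# True if torch can see a CUDA device&lt;br /&gt;
print(torch.cuda.is_available())&lt;br /&gt;
# Name of the first visible device&lt;br /&gt;
print(torch.cuda.get_device_name(0))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;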
&lt;br /&gt;
=== Installing cupy ===&lt;br /&gt;
&lt;br /&gt;
Again, if you have CUDA installed, the installation of &amp;lt;code&amp;gt;cupy&amp;lt;/code&amp;gt; is very straightforward. Simply run&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda install -c conda-forge cupy cudnn cutensor nccl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;code&amp;gt;conda&amp;lt;/code&amp;gt; should (hopefully) detect the proper versions to install to match your current CUDA installation.&lt;br /&gt;
&lt;br /&gt;
Test your installation with the following simple code snippet&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import cupy&lt;br /&gt;
&lt;br /&gt;
cupy.random.rand(100).device&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It should say something like &amp;lt;code&amp;gt;&amp;lt;CUDA Device 0&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Installing jax with CUDA awareness ===&lt;br /&gt;
&lt;br /&gt;
Installation of &amp;lt;code&amp;gt;jax&amp;lt;/code&amp;gt; with CUDA is also simple. Run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip install --upgrade &amp;quot;jax[cuda11_pip]&amp;quot; -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Test your installation with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import jax&lt;br /&gt;
&lt;br /&gt;
jax.devices()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It should say something like &amp;lt;code&amp;gt;[cuda(id=0)]&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Running a job directly on a GPU-equipped headnode ==&lt;br /&gt;
&lt;br /&gt;
The GPU-equipped headnode/frontend is &amp;lt;code&amp;gt;astro02&amp;lt;/code&amp;gt; (accessible at &amp;lt;code&amp;gt;astro02.hpc.ku.dk&amp;lt;/code&amp;gt;). It has three physical NVIDIA A30 GPUs. One of them is virtually split into four smaller, independent virtual GPUs (in NVIDIA&#039;s terms, MIGs or Multi-Instance GPUs), one is split into two smaller MIGs, and one remains &#039;unsplit&#039;.&lt;br /&gt;
&lt;br /&gt;
To specify which GPU to use, set the environment variable &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt;. To see the list of &#039;compute instances&#039; available, run&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nvidia-smi -L&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On &amp;lt;code&amp;gt;astro02&amp;lt;/code&amp;gt;, you should see something like&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
GPU 0: NVIDIA A30 (UUID: GPU-654aa619-952d-3f17-01ec-0c050ac8df88)&lt;br /&gt;
  MIG 1g.6gb      Device  0: (UUID: MIG-3868837f-57d0-5089-9887-19240a8809b4)&lt;br /&gt;
  MIG 1g.6gb      Device  1: (UUID: MIG-d28bcf9f-db13-5ad0-9be2-62d0e25c92a9)&lt;br /&gt;
  MIG 1g.6gb      Device  2: (UUID: MIG-e175ec33-0f38-5952-98d5-1c118bd9d398)&lt;br /&gt;
  MIG 1g.6gb      Device  3: (UUID: MIG-53cc4525-2ae7-5c11-9680-302d1d4177ba)&lt;br /&gt;
GPU 1: NVIDIA A30 (UUID: GPU-cb8c2438-a361-3e30-4ff5-4481d43c9e83)&lt;br /&gt;
  MIG 2g.12gb     Device  0: (UUID: MIG-0a768004-2ded-55f6-ac2b-4dd3f696a222)&lt;br /&gt;
  MIG 2g.12gb     Device  1: (UUID: MIG-0296d938-ea26-5174-a884-cd3c686bf660)&lt;br /&gt;
GPU 2: NVIDIA A30 (UUID: GPU-9bcd54bd-5a72-2e7b-90c8-3e3719d09e5c)&lt;br /&gt;
  MIG 4g.24gb     Device  0: (UUID: MIG-a8cb1bd5-6f68-54a1-8e88-ca2fa4ef80c0)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if we want to use the third &amp;lt;code&amp;gt;MIG 1g.6gb&amp;lt;/code&amp;gt; instance with the UUID &amp;lt;code&amp;gt;MIG-e175ec33-0f38-5952-98d5-1c118bd9d398&amp;lt;/code&amp;gt;, set the environment variable&lt;br /&gt;
&amp;lt;code&amp;gt;export CUDA_VISIBLE_DEVICES=MIG-e175ec33-0f38-5952-98d5-1c118bd9d398&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, running the same test code for &amp;lt;code&amp;gt;torch&amp;lt;/code&amp;gt; and checking with &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;, we see the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
+---------------------------------------------------------------------------------------+&lt;br /&gt;
| MIG devices:                                                                          |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
| GPU  GI  CI  MIG |                   Memory-Usage |        Vol|      Shared           |&lt;br /&gt;
|      ID  ID  Dev |                     BAR1-Usage | SM     Unc| CE ENC DEC OFA JPG    |&lt;br /&gt;
|                  |                                |        ECC|                       |&lt;br /&gt;
|==================+================================+===========+=======================|&lt;br /&gt;
|  0    3   0   0  |              12MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               0MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  0    4   0   1  |              12MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               0MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  0    5   0   2  |             107MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               2MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  0    6   0   3  |              12MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               0MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  1    1   0   0  |              25MiB / 11968MiB  | 28      0 |  2   0    2    0    0 |&lt;br /&gt;
|                  |               0MiB / 16383MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  1    2   0   1  |              25MiB / 11968MiB  | 28      0 |  2   0    2    0    0 |&lt;br /&gt;
|                  |               0MiB / 16383MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  2    0   0   0  |               1MiB / 24062MiB  | 56      0 |  4   0    4    1    1 |&lt;br /&gt;
|                  |               1MiB / 32768MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
&lt;br /&gt;
+---------------------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                                            |&lt;br /&gt;
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |&lt;br /&gt;
|        ID   ID                                                             Usage      |&lt;br /&gt;
|=======================================================================================|&lt;br /&gt;
|    0    5    0     530009      C   ...nda3/envs/igwn-py310/bin/python3.10       88MiB |&lt;br /&gt;
+---------------------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Indeed we are using the desired MIG.&lt;br /&gt;
&lt;br /&gt;
== Submitting a job to the GPU partition with slurm ==&lt;br /&gt;
&lt;br /&gt;
Simply specify the GPU partition, &amp;lt;code&amp;gt;astro2_gpu&amp;lt;/code&amp;gt;, and how many &#039;generic resources&#039; (GRES; in this case, GPUs) you want to use when submitting a job with &amp;lt;code&amp;gt;slurm&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example command is&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun -p astro2_gpu --gres=gpu:1 nvidia-smi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This should show the GPU (not the virtual one/MIG) that is being assigned to you.&lt;br /&gt;
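&lt;br /&gt;
For non-interactive jobs, the same resources can be requested in a batch script (a minimal sketch; the job name, time limit, and &amp;lt;code&amp;gt;my_gpu_script.py&amp;lt;/code&amp;gt; are placeholders to adapt):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=gpu-test&lt;br /&gt;
#SBATCH -p astro2_gpu&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
&lt;br /&gt;
# Run your GPU code (placeholder script name)&lt;br /&gt;
python my_gpu_script.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Submit it with &amp;lt;code&amp;gt;sbatch job.sh&amp;lt;/code&amp;gt; (or whatever you name the file).&lt;br /&gt;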
&lt;br /&gt;
As far as I know, there are 11 Nvidia A100 GPUs in this partition.&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
	<entry>
		<id>https://wiki.nbi.ku.dk/w/tycho/index.php?title=Using_GPUs&amp;diff=39</id>
		<title>Using GPUs</title>
		<link rel="alternate" type="text/html" href="https://wiki.nbi.ku.dk/w/tycho/index.php?title=Using_GPUs&amp;diff=39"/>
		<updated>2023-11-15T12:49:12Z</updated>

		<summary type="html">&lt;p&gt;Ricolo: Created page with &amp;quot;Here is a detailed guide on how to leverage the GPUs on the NBI cluster.   == Preparation work: making sure that your software is GPU-aware ==  Before proceeding, it is recommended to have your local installation of (ana-/mini-)conda with newer python version (preferably 3.10+).  === Installing CUDA ===  Normally, you would/should use the system-wide CUDA installation to make sure that it is compatible with the GPUs. In fact, there are environment modules for CUDA (e.g....&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here is a detailed guide on how to leverage the GPUs on the NBI cluster. &lt;br /&gt;
&lt;br /&gt;
== Preparation work: making sure that your software is GPU-aware ==&lt;br /&gt;
&lt;br /&gt;
Before proceeding, it is recommended to have your own local installation of (ana-/mini-)conda with a recent Python version (preferably 3.10+).&lt;br /&gt;
&lt;br /&gt;
=== Installing CUDA ===&lt;br /&gt;
&lt;br /&gt;
Normally, you should use the system-wide CUDA installation to ensure that it is compatible with the GPUs. In fact, there are environment modules for CUDA (e.g. &amp;lt;code&amp;gt;cuda/11.2&amp;lt;/code&amp;gt;; note: you will need to load the &amp;lt;code&amp;gt;astro&amp;lt;/code&amp;gt; module first) pre-installed on the system.&lt;br /&gt;
&lt;br /&gt;
Here we take a different route -- we install our own (and newer) CUDA for greater control. Usually you would want to install the latest CUDA release that your GPUs support, but in my case &amp;lt;code&amp;gt;torch&amp;lt;/code&amp;gt; lacks support for the latest CUDA version 12.x, so I opt for an earlier release (11.8).&lt;br /&gt;
&lt;br /&gt;
To install CUDA via &amp;lt;code&amp;gt;conda&amp;lt;/code&amp;gt;, do&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda install cuda -c nvidia/label/cuda-11.8.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should check that your installation works by running&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nvcc --version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The reported version should match the one you just installed.&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Note:&#039;&#039;&#039; this number can differ from the one reported by &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;, which shows the latest CUDA version supported by the installed driver. In other words, make sure that the CUDA version you install is &#039;&#039;at most&#039;&#039; that version.)&lt;br /&gt;
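&lt;br /&gt;
To see the driver-supported CUDA version explicitly (the number shown in the banner of &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;), you can for example run&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nvidia-smi | grep &amp;quot;CUDA Version&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The version reported by &amp;lt;code&amp;gt;nvcc&amp;lt;/code&amp;gt; should be less than or equal to this number.&lt;br /&gt;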
&lt;br /&gt;
=== Installing torch ===&lt;br /&gt;
&lt;br /&gt;
Once you have CUDA properly installed, everything else should be a breeze. To install &amp;lt;code&amp;gt;torch&amp;lt;/code&amp;gt; with CUDA awareness, simply do&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Test your installation with the following simple code snippet&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import torch&lt;br /&gt;
&lt;br /&gt;
torch.zeros(100).cuda()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If no error message appears, you have successfully installed torch with CUDA support.&lt;br /&gt;
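&lt;br /&gt;
You can also query the device explicitly to double-check (a small sketch using standard &amp;lt;code&amp;gt;torch.cuda&amp;lt;/code&amp;gt; calls):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import torch&lt;br /&gt;
&lt;br /&gt;
# True if torch can see a CUDA device&lt;br /&gt;
print(torch.cuda.is_available())&lt;br /&gt;
# Name of the first visible device&lt;br /&gt;
print(torch.cuda.get_device_name(0))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;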
&lt;br /&gt;
=== Installing cupy ===&lt;br /&gt;
&lt;br /&gt;
Again, if you have CUDA installed, the installation of &amp;lt;code&amp;gt;cupy&amp;lt;/code&amp;gt; is very straightforward. Simply run&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
conda install -c conda-forge cupy cudnn cutensor nccl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;code&amp;gt;conda&amp;lt;/code&amp;gt; should (hopefully) detect the proper versions to install to match your current CUDA installation.&lt;br /&gt;
&lt;br /&gt;
Test your installation with the following simple code snippet&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import cupy&lt;br /&gt;
&lt;br /&gt;
cupy.random.rand(100).device&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It should say something like &amp;lt;code&amp;gt;&amp;lt;CUDA Device 0&amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Installing jax with CUDA awareness ===&lt;br /&gt;
&lt;br /&gt;
Installation of &amp;lt;code&amp;gt;jax&amp;lt;/code&amp;gt; with CUDA is also simple. Run&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pip install --upgrade &amp;quot;jax[cuda11_pip]&amp;quot; -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Test your installation with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import jax&lt;br /&gt;
&lt;br /&gt;
jax.devices()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It should say something like &amp;lt;code&amp;gt;[cuda(id=0)]&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Running a job directly on a GPU-equipped headnode ==&lt;br /&gt;
&lt;br /&gt;
The GPU-equipped headnode/frontend is &amp;lt;code&amp;gt;astro02&amp;lt;/code&amp;gt; (accessible at &amp;lt;code&amp;gt;astro02.hpc.ku.dk&amp;lt;/code&amp;gt;). It has three physical NVIDIA A30 GPUs. One of them is virtually split into four smaller, independent virtual GPUs (in NVIDIA&#039;s terms, MIGs or Multi-Instance GPUs), one is split into two smaller MIGs, and one remains &#039;unsplit&#039;.&lt;br /&gt;
&lt;br /&gt;
To specify which GPU to use, set the environment variable &amp;lt;code&amp;gt;CUDA_VISIBLE_DEVICES&amp;lt;/code&amp;gt;. To see the list of &#039;compute instances&#039; available, run&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nvidia-smi -L&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On &amp;lt;code&amp;gt;astro02&amp;lt;/code&amp;gt;, you should see something like&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
GPU 0: NVIDIA A30 (UUID: GPU-654aa619-952d-3f17-01ec-0c050ac8df88)&lt;br /&gt;
  MIG 1g.6gb      Device  0: (UUID: MIG-3868837f-57d0-5089-9887-19240a8809b4)&lt;br /&gt;
  MIG 1g.6gb      Device  1: (UUID: MIG-d28bcf9f-db13-5ad0-9be2-62d0e25c92a9)&lt;br /&gt;
  MIG 1g.6gb      Device  2: (UUID: MIG-e175ec33-0f38-5952-98d5-1c118bd9d398)&lt;br /&gt;
  MIG 1g.6gb      Device  3: (UUID: MIG-53cc4525-2ae7-5c11-9680-302d1d4177ba)&lt;br /&gt;
GPU 1: NVIDIA A30 (UUID: GPU-cb8c2438-a361-3e30-4ff5-4481d43c9e83)&lt;br /&gt;
  MIG 2g.12gb     Device  0: (UUID: MIG-0a768004-2ded-55f6-ac2b-4dd3f696a222)&lt;br /&gt;
  MIG 2g.12gb     Device  1: (UUID: MIG-0296d938-ea26-5174-a884-cd3c686bf660)&lt;br /&gt;
GPU 2: NVIDIA A30 (UUID: GPU-9bcd54bd-5a72-2e7b-90c8-3e3719d09e5c)&lt;br /&gt;
  MIG 4g.24gb     Device  0: (UUID: MIG-a8cb1bd5-6f68-54a1-8e88-ca2fa4ef80c0)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, if we want to use the third &amp;lt;code&amp;gt;MIG 1g.6gb&amp;lt;/code&amp;gt; instance with the UUID &amp;lt;code&amp;gt;MIG-e175ec33-0f38-5952-98d5-1c118bd9d398&amp;lt;/code&amp;gt;, set the environment variable&lt;br /&gt;
&amp;lt;code&amp;gt;export CUDA_VISIBLE_DEVICES=MIG-e175ec33-0f38-5952-98d5-1c118bd9d398&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, running the same test code for &amp;lt;code&amp;gt;torch&amp;lt;/code&amp;gt; and checking with &amp;lt;code&amp;gt;nvidia-smi&amp;lt;/code&amp;gt;, we see the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
+---------------------------------------------------------------------------------------+&lt;br /&gt;
| MIG devices:                                                                          |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
| GPU  GI  CI  MIG |                   Memory-Usage |        Vol|      Shared           |&lt;br /&gt;
|      ID  ID  Dev |                     BAR1-Usage | SM     Unc| CE ENC DEC OFA JPG    |&lt;br /&gt;
|                  |                                |        ECC|                       |&lt;br /&gt;
|==================+================================+===========+=======================|&lt;br /&gt;
|  0    3   0   0  |              12MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               0MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  0    4   0   1  |              12MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               0MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  0    5   0   2  |             107MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               2MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  0    6   0   3  |              12MiB /  5952MiB  | 14      0 |  1   0    1    0    0 |&lt;br /&gt;
|                  |               0MiB /  8191MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  1    1   0   0  |              25MiB / 11968MiB  | 28      0 |  2   0    2    0    0 |&lt;br /&gt;
|                  |               0MiB / 16383MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  1    2   0   1  |              25MiB / 11968MiB  | 28      0 |  2   0    2    0    0 |&lt;br /&gt;
|                  |               0MiB / 16383MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
|  2    0   0   0  |               1MiB / 24062MiB  | 56      0 |  4   0    4    1    1 |&lt;br /&gt;
|                  |               1MiB / 32768MiB  |           |                       |&lt;br /&gt;
+------------------+--------------------------------+-----------+-----------------------+&lt;br /&gt;
&lt;br /&gt;
+---------------------------------------------------------------------------------------+&lt;br /&gt;
| Processes:                                                                            |&lt;br /&gt;
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |&lt;br /&gt;
|        ID   ID                                                             Usage      |&lt;br /&gt;
|=======================================================================================|&lt;br /&gt;
|    0    5    0     530009      C   ...nda3/envs/igwn-py310/bin/python3.10       88MiB |&lt;br /&gt;
+---------------------------------------------------------------------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Indeed we are using the desired MIG.&lt;br /&gt;
&lt;br /&gt;
== Submitting a job to the GPU partition with slurm ==&lt;br /&gt;
&lt;br /&gt;
Simply specify the GPU partition, &amp;lt;code&amp;gt;astro2_gpu&amp;lt;/code&amp;gt;, and how many &#039;generic resources&#039; (GRES; in this case, GPUs) you want to use when submitting a job with &amp;lt;code&amp;gt;slurm&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example command is&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
srun -p astro2_gpu --gres=gpu:1 nvidia-smi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This should show the GPU (not the virtual one/MIG) that is being assigned to you.&lt;br /&gt;
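&lt;br /&gt;
For non-interactive jobs, the same resources can be requested in a batch script (a minimal sketch; the job name, time limit, and &amp;lt;code&amp;gt;my_gpu_script.py&amp;lt;/code&amp;gt; are placeholders to adapt):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=gpu-test&lt;br /&gt;
#SBATCH -p astro2_gpu&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
&lt;br /&gt;
# Run your GPU code (placeholder script name)&lt;br /&gt;
python my_gpu_script.py&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Submit it with &amp;lt;code&amp;gt;sbatch job.sh&amp;lt;/code&amp;gt; (or whatever you name the file).&lt;br /&gt;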
&lt;br /&gt;
As far as I know, there are 11 Nvidia A100 GPUs in this partition.&lt;/div&gt;</summary>
		<author><name>Ricolo</name></author>
	</entry>
</feed>