Additionally, because Singularity does not emulate a full hardware-level virtualization paradigm, there is no need to separate out sandboxed networks or file systems, because there is no concept of user escalation within a container. Users can run Singularity containers just as they run any other program on the HPC resource.
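
For example, an unprivileged user can launch a containerized program directly from the shell, exactly like any other executable. The snippet below is an illustrative sketch only (the image path and command are placeholders, not part of this guide):

```bash
$ # The process inside the container runs as the calling user, with no privilege escalation
$ singularity exec /path/to/container.img python my_analysis.py
```
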
## Workflows
We are in the process of developing Singularity Hub, which will allow for the generation of workflows using Singularity containers in an online interface and for easy deployment on research clusters running standard schedulers (e.g., SLURM, SGE). Currently, the Singularity core software is installed on the following research clusters, meaning you can run Singularity containers as part of your jobs:

- The <a href="http://sherlock.stanford.edu" target="_blank" class="no-after">Sherlock cluster</a> at <a href="https://srcc.stanford.edu/" class="no-after" target="_blank">Stanford University</a>
- <a href="https://www.xsede.org/news/-/news/item/7624" target="_blank" class="no-after">SDSC Comet and Gordon</a> (XSEDE)
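
Running a container "as part of your jobs" usually just means calling Singularity from an ordinary batch script. Below is a minimal SLURM sketch, assuming a hypothetical image path and program (the resource requests are placeholders as well):

```bash
#!/bin/bash
#SBATCH --job-name=container-job
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

# The containerized program runs as a normal, unprivileged job step under the scheduler
singularity exec /path/to/container.img python my_analysis.py
```

Submitted with `sbatch`, this behaves like any other batch job on the cluster.
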
### Integration with MPI
Another result of the Singularity architecture is the ability to properly integrate with the Message Passing Interface (MPI). Work has already been done for out-of-the-box compatibility with Open MPI (both within Open MPI v2.x and within Singularity itself). The Open MPI/Singularity workflow works as follows:

1. mpirun is called by the resource manager or the user directly from a shell
2. Open MPI then calls the process management daemon (ORTED)
3. The ORTED process launches the Singularity container requested by the mpirun command
4. Singularity builds the container and namespace environment
5. Singularity then launches the MPI application within the container
6. The MPI application launches and loads the Open MPI libraries
7. The Open MPI libraries connect back to the ORTED process via the Process Management Interface (PMI)
8. At this point the processes within the container run just as they would run directly on the host, at full bandwidth! This entire process happens behind the scenes; from the user's perspective, running via MPI is as simple as calling mpirun on the host as they normally would.
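
To make step 8 concrete, here is a sketch of what the user-facing invocation can look like (the rank count, image path, and program name are placeholders):

```bash
$ # mpirun runs on the host; each rank is launched inside the container, and the
$ # containerized MPI program connects back to the host's Open MPI runtime via PMI
$ mpirun -np 20 singularity exec /path/to/container.img /usr/bin/mpi_ring
```
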
Below are example snippets of building and installing Open MPI into a container and then running an example MPI program through Singularity.

#### MPI Development Example
**What are the supported Open MPI version(s)?**

To achieve proper containerized Open MPI support, you must use Open MPI version 2.1. Open MPI version 2.1.0 includes a bug in its configure script that affects some interfaces (at least Mellanox cards operating in RoCE mode using libmxm). For this reason, we show this example first.
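
If you are unsure which Open MPI release a container already provides, you can check it before building anything. A quick sketch, assuming the container has Open MPI installed (the image path is a placeholder):

```bash
$ # Print the Open MPI runtime version visible inside the container
$ singularity exec /path/to/container.img mpirun --version
```
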
```bash
$ # Include the appropriate development tools into the container (notice we are calling
$ # singularity as root and the container is writeable)
```

The following example (which builds from their master branch) should work fine on most hardware; if you hit the issue described above with the released version, try this example below:

```bash
$ # Include the appropriate development tools into the container (notice we are calling
$ # singularity as root and the container is writeable)
Process 17 exiting
Process 18 exiting
Process 19 exiting
```