Commit 405a753

Merge branch 'l1ll1-mpi_update' into docs/2.3
2 parents: e5e6d3c + 19a0751

File tree

1 file changed: +55 −19 lines


pages/docs/admin-docs/docs-hpc.md

@@ -10,28 +10,67 @@ One of the architecturally defined features in Singularity is that it can execut

Additionally, because Singularity is not emulating a full hardware-level virtualization paradigm, there is no need to separate out any sandboxed networks or file systems, since there is no concept of user escalation within a container. Users can run Singularity containers just as they run any other program on the HPC resource.
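
For example, on a cluster where Singularity is available, a container is invoked like any other command. The image path below is simply the one used in the build examples further down; treat it as a placeholder for whatever image you actually have:

```bash
$ # Run a single command inside an existing container image; no special privileges are needed
$ singularity exec /tmp/Centos-7.img cat /etc/redhat-release
```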

## Workflows

We are in the process of developing Singularity Hub, which will allow workflows using Singularity containers to be generated in an online interface and easily deployed on standard research clusters (e.g., those running SLURM or SGE). Currently, the Singularity core software is installed on the following research clusters, meaning you can run Singularity containers as part of your jobs:

- The <a href="http://sherlock.stanford.edu" target="_blank" class="no-after">Sherlock cluster</a> at <a href="https://srcc.stanford.edu/" class="no-after" target="_blank">Stanford University</a>
- <a href="https://www.xsede.org/news/-/news/item/7624" target="_blank" class="no-after">SDSC Comet and Gordon</a> (XSEDE)

### Integration with MPI

Another result of the Singularity architecture is the ability to properly integrate with the Message Passing Interface (MPI). Work has already been done for out-of-the-box compatibility with Open MPI (both in Open MPI v2.x and in Singularity itself). The Open MPI/Singularity workflow works as follows:

1. mpirun is called by the resource manager or by the user directly from a shell
2. Open MPI then calls the process management daemon (ORTED)
3. The ORTED process launches the Singularity container requested by the mpirun command
4. Singularity builds the container and namespace environment
5. Singularity then launches the MPI application within the container
6. The MPI application launches and loads the Open MPI libraries
7. The Open MPI libraries connect back to the ORTED process via the Process Management Interface (PMI)
8. At this point the processes within the container run as they normally would directly on the host, at full bandwidth! This entire process happens behind the scenes; from the user's perspective, running via MPI is as simple as calling mpirun on the host as usual (see the batch-script sketch below).
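
To make step 1 concrete, here is a minimal sketch of how this can look from a SLURM batch script. The job name, task count, and module name are illustrative assumptions rather than part of the documented example; the final mpirun line is the same one used in the build examples below.

```bash
#!/bin/bash
# Hypothetical SLURM batch script; job name, task count, and module name are
# assumptions for illustration only.
#SBATCH --job-name=mpi_ring
#SBATCH --ntasks=20

# Make a host-side Open MPI available (the module name is site specific)
module load openmpi

# mpirun runs on the host; Singularity launches the containerized MPI program
# (/usr/bin/ring) for each MPI process.
mpirun -np 20 singularity exec /tmp/Centos-7.img /usr/bin/ring
```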

Below are example snippets of building and installing Open MPI into a container and then running an example MPI program through Singularity.

#### MPI Development Example

**What are the supported Open MPI version(s)?**

To achieve properly containerized Open MPI support, you must use Open MPI version 2.1. Open MPI version 2.1.0 includes a bug in its configure script affecting some interfaces (at least Mellanox cards operating in RoCE mode using libmxm), which the example below works around by regenerating the configure script with autogen.pl. For this reason, we show this example first.

```bash
$ # Include the appropriate development tools into the container (notice we are calling
$ # singularity as root and the container is writeable)
$ sudo singularity exec -w /tmp/Centos-7.img yum groupinstall "Development Tools"
$
$ # Obtain Open MPI 2.1.0
$ wget https://www.open-mpi.org/software/ompi/v2.1/downloads/openmpi-2.1.0.tar.bz2
$ tar jxf openmpi-2.1.0.tar.bz2
$ cd openmpi-2.1.0
$
$ # Build Open MPI in the working directory, using the toolchain within the container.
$ # This step is unusual for a stable release, but there is a bug in the configure script
$ # affecting some interfaces
$ singularity exec /tmp/Centos-7.img ./autogen.pl
$ singularity exec /tmp/Centos-7.img ./configure --prefix=/usr/local
$ singularity exec /tmp/Centos-7.img make
$
$ # Install Open MPI into the container (notice now running as root and container is writeable)
$ sudo singularity exec -w -B /home /tmp/Centos-7.img make install
$
$ # Build the Open MPI ring example and place the binary in this directory
$ singularity exec /tmp/Centos-7.img mpicc examples/ring_c.c -o ring
$
$ # Install the MPI binary into the container at /usr/bin/ring
$ sudo singularity copy /tmp/Centos-7.img ./ring /usr/bin/
$
$ # Run the MPI program within the container by calling mpirun on the host
$ mpirun -np 20 singularity exec /tmp/Centos-7.img /usr/bin/ring
```
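
Because the host's mpirun and the Open MPI libraries inside the container communicate over PMI (step 7 above), the host and container Open MPI installations need to be compatible with one another, ideally the same release. A quick sanity check, assuming the /usr/local install prefix used above:

```bash
$ # Host-side Open MPI version
$ mpirun --version
$
$ # Open MPI version inside the container (installed under /usr/local above);
$ # the two should report the same release
$ singularity exec /tmp/Centos-7.img /usr/local/bin/mpirun --version
```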

#### Code Example

The following example builds Open MPI from their current master development branch and should work fine on most hardware; if you have an issue with the 2.1.0 example above, try running this one instead:

```bash
$ # Include the appropriate development tools into the container (notice we are calling
$ # singularity as root and the container is writeable)
@@ -92,6 +131,3 @@ Process 17 exiting
Process 18 exiting
Process 19 exiting
```

0 commit comments