Many Fortran codes use MPI and we should teach fpm about it and support it natively.
There are two main approaches:

1. Use the MPI compiler wrappers such as `mpif90`, which are provided by MPI implementations such as `mpich` or `openmpi`. Those wrappers call the compiler with the correct flags to link all MPI libraries.
2. Call the compiler wrapper (e.g. `mpif90`) to figure out what flags it would use, then supply those flags manually when invoking the compiler directly.
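As a rough illustration of the second approach, here is a small Python sketch of how a build tool might recover compiler and linker flags from what a wrapper reports. The sample string below is only illustrative of MPICH-style `mpif90 -show` output (Open MPI uses `--showme` instead); the parsing heuristic is an assumption, not how fpm actually does or would do it:

```python
def parse_wrapper_show(show_output):
    """Split the command line echoed by an MPI compiler wrapper into the
    underlying compiler, include flags, and link flags.

    This is a rough heuristic sketch; a real implementation would need to
    handle quoting, flags with separate arguments, and per-vendor quirks.
    """
    tokens = show_output.split()
    compiler = tokens[0]
    include_flags = [t for t in tokens[1:] if t.startswith("-I")]
    link_flags = [t for t in tokens[1:] if t.startswith(("-L", "-l", "-Wl,"))]
    return compiler, include_flags, link_flags


# Illustrative MPICH-style output; real output varies by MPI implementation.
sample = "gfortran -I/usr/include/mpich -L/usr/lib/mpich -lmpifort -lmpi"
compiler, inc, link = parse_wrapper_show(sample)
print(compiler)  # gfortran
print(inc)       # ['-I/usr/include/mpich']
print(link)      # ['-L/usr/lib/mpich', '-lmpifort', '-lmpi']
```

With flags recovered like this, fpm could invoke the user's chosen compiler directly and treat MPI like any other linked library.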
CMake supports both options and I have used both in my projects. There are pros and cons to each approach, and I have discussed this with quite a lot of people already. I personally lean towards the second approach, which treats MPI as a third-party library that you depend on; ultimately, that is what it is. However, with fpm the user does not actually see how the compiler is being invoked anyway, so the first approach might also work. So I would choose whatever approach is easier to implement and maintain in fpm, and I think we can even switch later (internally) if needed.
Let's discuss the user-facing design. It seems it might be as simple as adding `mpi = true` into `fpm.toml` for each executable that should be compiled with MPI. fpm would then transitively enable MPI for each module and dependency that the executable needs. In terms of choosing which MPI implementation should be used, we can start by using whatever `mpif90` is available in `PATH`.
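A minimal sketch of what that manifest could look like. Note the `mpi` key is hypothetical, it does not exist in fpm today; the rest follows the existing `fpm.toml` executable-section layout:

```toml
name = "my_solver"

[[executable]]
name = "solver"
main = "main.f90"
mpi = true   # hypothetical key: fpm would transitively enable MPI
             # for every module and dependency this executable needs
```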