10 changes: 10 additions & 0 deletions ompi/mca/pml/ob1/pml_ob1_comm.h
@@ -82,6 +82,16 @@ static inline mca_pml_ob1_comm_proc_t *mca_pml_ob1_peer_lookup (struct ompi_comm
{
    mca_pml_ob1_comm_t *pml_comm = (mca_pml_ob1_comm_t *)comm->c_pml_comm;

    /**
     * We have very few ways to validate the correct, and collective, creation of
     * the communicator, and ensure all processes have the same cid. The least we
     * can do is to check that we are not using a rank that is outside the scope
     * of the communicator.
     */
    if( OPAL_UNLIKELY(rank >= (int)pml_comm->num_procs) ) {
        ompi_rte_abort(-1, "PML OB1 received a message from a rank outside the"
                       " valid range of the communicator. Please submit a bug request!");

Member:
Shouldn't this invoke the error handler on the communicator -- not just always abort?

Member Author:
As the comment states, if we ever hit this case the communicator creation did something really bad and the communicator is broken globally, so there is only one realistic path forward: abort the job.

Member:
This is a funny statement coming from the ULFM guy. 😉

Shouldn't we let the application try to save its own state (e.g., if it selected ERRORS_RETURN)? I.e., yes, the state of OMPI is borked -- but it still may be desirable to do something outside the scope of MPI.
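
For readers following along, the pattern being referred to looks roughly like the sketch below on the application side. It is purely illustrative and not part of this PR; save_application_state() is a hypothetical user routine.

#include <mpi.h>

extern void save_application_state(void);   /* hypothetical user routine */

/* Sketch: opt out of the default MPI_ERRORS_ARE_FATAL handler on a
 * communicator so that MPI calls return error codes, giving the
 * application a chance to save its own state before exiting. */
static void recv_or_save(MPI_Comm comm, int *buf, int count, int src)
{
    MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);

    int rc = MPI_Recv(buf, count, MPI_INT, src, 0, comm, MPI_STATUS_IGNORE);
    if (MPI_SUCCESS != rc) {
        /* MPI may be unusable at this point, but work outside the scope of
         * MPI (e.g., writing a checkpoint) is still possible. */
        save_application_state();
        MPI_Abort(MPI_COMM_WORLD, rc);
    }
}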

Member Author:
My position remains coherent: ULFM is about controlled behaviors in well-understood scenarios (mainly process faults), while this particular PR is about coping with a broken state of the MPI implementation itself.

It makes no sense to allow the error handler on the communicator in question to be called, simply because we know (due to the triggered condition) that at least this communicator (and potentially all communicators created after it) is completely broken (mismatched cid on the participants). If we escalate the issue to MPI_COMM_WORLD and trigger the error handler there, it might come at an unexpected time for the application, leading to even more badness.

To summarize my position: when we reach this condition there is a possibility that messages have already been mismatched on this broken communicator, which would leave the state of the MPI application inconsistent and thus unsafe to save.

Member:
Fair enough points. Should we have an error code for that? E.g., MPI_ERR_INTERNAL_STATE_IS_BORKED? That would let the application decide whether it wants to save its state or just abort.
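
What that suggestion would enable on the application side, roughly (a sketch only: the error class below is an invented placeholder, not a real MPI or Open MPI constant, and save_application_state() is again hypothetical):

#include <mpi.h>

/* Invented placeholder value for illustration only; not a real MPI constant. */
#define MPI_ERR_INTERNAL_STATE_IS_BORKED 1234

extern void save_application_state(void);   /* hypothetical user routine */

/* Sketch: with a dedicated error class, the application could decide
 * between checkpointing and aborting immediately. */
static void handle_mpi_error(MPI_Comm comm, int rc)
{
    int eclass;

    if (MPI_SUCCESS == rc) {
        return;
    }
    MPI_Error_class(rc, &eclass);
    if (MPI_ERR_INTERNAL_STATE_IS_BORKED == eclass) {   /* invented class */
        save_application_state();   /* state is suspect, but try to save it */
    }
    MPI_Abort(comm, rc);
}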

Member Author:
It would clearly be a better approach. However, we currently call ompi_rte_abort profusely in all the critical places where we do not want to clean up the return path. Fixing this is a desirable long-term improvement; until then, this patch provides a band-aid (that hopefully will never be triggered).

Member:
Ok, fair enough. Merge away. 😄

    }
    if (OPAL_UNLIKELY(NULL == pml_comm->procs[rank])) {
        OPAL_THREAD_LOCK(&pml_comm->proc_lock);
        if (NULL == pml_comm->procs[rank]) {