
Conversation

@bosilca (Member) commented on Mar 28, 2019:

This is not fixing any issue, it is simply preventing a segfault if the
communicator creation has not happened as expected. Thus, this code path
should never really be hit in a correct MPI application with valid
communicator creation support.

This PR provides a better conclusion to #6522.

Signed-off-by: George Bosilca [email protected]

The inline review below was made on this code from the PR:

    if( OPAL_UNLIKELY(rank >= (int)pml_comm->num_procs) ) {
        ompi_rte_abort(-1, "PML OB1 received a message from a rank outside the"
                           " valid range of the communicator. Please submit a bug request!");
    }
Member:

Shouldn't this invoke the error handler on the communicator -- not just always abort?
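
For reference, the alternative being suggested might look roughly like the sketch below. It assumes an ompi_communicator_t *comm is in scope at that point in the PML and that OMPI_ERRHANDLER_INVOKE can be used safely from the receive path; both are assumptions for illustration, not something the PR does.

```c
/* Sketch only: report the error through the communicator's error handler
 * instead of aborting the whole job.  `comm` is assumed to be a valid
 * ompi_communicator_t pointer available at this point. */
if( OPAL_UNLIKELY(rank >= (int)pml_comm->num_procs) ) {
    OMPI_ERRHANDLER_INVOKE(comm, MPI_ERR_INTERN,
                           "PML OB1 received a message from a rank outside the"
                           " valid range of the communicator");
    return;
}
```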

Member Author (@bosilca):

As the comment states, if we ever hit this case, communicator creation went badly wrong and the communicator is globally broken, so there is only one realistic path forward: abort the job.

Member:

This is a funny statement coming from the ULFM guy. 😉

Shouldn't we let the application try to save its own state (e.g., if it selected ERRORS_RETURN)? I.e., yes, the state of OMPI is borked -- but it may still be desirable to do something outside the scope of MPI.
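
To make that application-side flow concrete, a minimal sketch (save_application_state() is a hypothetical hook standing in for whatever the application does outside MPI):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Ask MPI to return error codes instead of aborting immediately,
     * so the application gets a chance to react. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int rc = MPI_Barrier(MPI_COMM_WORLD);   /* any MPI call */
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len = 0;
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "MPI error: %s; saving state before shutting down\n", msg);
        /* save_application_state();  hypothetical: checkpoint outside MPI */
        MPI_Abort(MPI_COMM_WORLD, rc);
    }

    MPI_Finalize();
    return 0;
}
```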

Member Author (@bosilca):

My position remains consistent: ULFM is about controlled behavior in well-understood scenarios (mainly process faults), while this particular PR is about coping with a broken state of the MPI implementation itself.

It makes no sense to let the error handler on the communicator in question be called, simply because we know (from the triggered condition) that at least this communicator, and potentially all communicators created after it, are completely broken (mismatched CIDs across the participants). If we escalate the issue to MPI_COMM_WORLD and trigger the error handler there, it might fire at an unexpected time for the application, leading to even more badness.

To summarize my position: when we reach this condition, there is a possibility that messages have already been mis-matched in this broken communicator, which would leave the state of the MPI application inconsistent and thus unsafe to save.

Member:

Fair enough points. Should we have an error code for that? E.g., MPI_ERR_INTERNAL_STATE_IS_BORKED? That would let the application decide whether it wants to save its state or just abort.

Member Author (@bosilca):

It would clearly be a better approach. However, we currently call ompi_rte_abort profusely in all the critical places where we do not want to clean up the return path. Fixing that is a desirable long-term improvement; until then, this patch provides a band-aid (one that hopefully will never be triggered).

Member:

Ok, fair enough. Merge away. 😄

@bosilca merged commit 8cf7a7e into open-mpi:master on Apr 9, 2019.
@bosilca deleted the topic/issue6522 branch on February 29, 2020.