Keep exclusive/auto-delete queues with Khepri + network partition #14573
Conversation
Force-pushed from 2b31b23 to 7cc220b
There are two test flakes in the new tests that I’m looking at. Otherwise, the patch looks ready for further testing.
Force-pushed from b4d18c4 to 69cf89c
Force-pushed from 5f40cb5 to b850e37
Force-pushed from 175133b to ad3fdfa
[Why]

So far, when there was a network partition with Mnesia, the most popular partition handling strategies restarted RabbitMQ nodes. Therefore, `rabbit` would execute the boot steps and one of them would notify other members of the cluster that "this RabbitMQ node is live".

With Khepri, nodes are not restarted anymore, and thus boot steps are not executed at the end of a network partition. As a consequence, other members are not notified that a member is back online.

[How]

When the node monitor receives the `nodeup` message (managed by Erlang, meaning that "a remote Erlang node just connected to this node through Erlang distribution"), a `node_up` message is sent to all cluster members (meaning "RabbitMQ is now running on the originating node"). Yeah, very poor naming... This lets the RabbitMQ node monitor know when other nodes running RabbitMQ are back online and react accordingly.

If a node is restarted, another node could receive the `node_up` message twice. The actions behind it must therefore be idempotent.
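For illustration, a minimal sketch of that re-announcement idea, assuming a `gen_server`-based monitor; the module name, message handling, and state are made up and much simpler than the real `rabbit_node_monitor`:

```erlang
%% Hypothetical sketch, not the actual rabbit_node_monitor code.
-module(node_up_sketch).
-behaviour(gen_server).

-export([start_link/0]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

init([]) ->
    %% Subscribe to Erlang distribution events (`nodeup`/`nodedown`).
    ok = net_kernel:monitor_nodes(true),
    {ok, #{running => sets:new()}}.

handle_info({nodeup, _Node}, State) ->
    %% A remote Erlang node just connected. With Khepri, no node restart
    %% (and thus no boot step) re-announces us after a partition heals,
    %% so tell every connected member "RabbitMQ is running here".
    [gen_server:cast({?MODULE, N}, {node_up, node()}) || N <- nodes()],
    {noreply, State};
handle_info(_Msg, State) ->
    {noreply, State}.

handle_cast({node_up, Node}, #{running := R} = State) ->
    %% May arrive twice for the same node (e.g. after a restart);
    %% sets:add_element/2 keeps the handling idempotent.
    {noreply, State#{running := sets:add_element(Node, R)}};
handle_cast(_Msg, State) ->
    {noreply, State}.

handle_call(_Req, _From, State) ->
    {reply, ok, State}.
```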
Force-pushed from f95aa4b to 47d59eb
```erlang
end.

infinite_internal_delete(Q, ActingUser, Reason) ->
    case delete_queue_record(Q, ActingUser, Reason) of
```
If the node is partitioned with a Khepri leader on it, this code could grow the Khepri log infinitely.
I see what you mean. Then I need to explore @lhoguin’s idea of waiting for the `node_up` message.
In fact, using a Khepri fence after the first delete attempt should be enough: the call waits for all updates to be applied locally. I just pushed that change.
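For illustration, a rough sketch of that fence-based approach, assuming Khepri's `khepri:fence/1`; the function name, store id, and path below are made up:

```erlang
%% Hypothetical sketch: delete once, then fence, instead of retrying in
%% a loop (each retry would append another command to the Raft log).
delete_queue_record_fenced(StoreId, QName) ->
    Path = [rabbitmq, queues, QName],          %% illustrative path only
    case khepri:delete(StoreId, Path) of
        ok ->
            %% Blocks until every update committed so far, including the
            %% delete above, has been applied on the local member.
            khepri:fence(StoreId);
        {error, _} = Err ->
            Err
    end.
```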
[Why]

With Mnesia, when the network partition strategy is set to `pause_minority`, nodes on the "minority side" are stopped. Thus, the exclusive queues that were hosted by nodes on that minority side are lost:

* Consumers connected on these nodes are disconnected because the nodes are stopped.
* Queue records on the majority side are deleted from the metadata store.

This was ok with Mnesia and how this network partition handling strategy is implemented. However, it does not work with Khepri because the nodes on the "minority side" continue to run and serve clients. Therefore the cluster ends up in a weird situation:

1. The "majority side" deleted the queue records.
2. When the network partition is solved, the "minority side" gets the record deletion, but the queue processes continue to run.

The situation was similar for auto-delete queues.

[How]

With Khepri, we stop deleting transient queue records in general just because a node is going down. Thanks to this, an exclusive or auto-delete queue and its consumer(s) are not affected by a network partition: they continue to work.

However, if a node is really lost, we need to clean up dead queue records. This was already done for durable queues with both Mnesia and Khepri. But with Khepri, transient queue records persist in the store like durable queue records (unlike with Mnesia). That's why this commit changes the clean-up function `rabbit_amqqueue:forget_all_durable/1` into `rabbit_amqqueue:forget_all/1`, which deletes all queue records of queues that were hosted on the given node, regardless of whether they are transient or durable.

In addition to this, the queue process will spawn a temporary process which will try to delete the underlying record indefinitely if no other processes are waiting for a reply from the queue process. That's the case for queues that are deleted because of an internal event (like the exclusive/auto-delete conditions). The queue process will exit, which will notify connections that the queue is gone. Thanks to this, the temporary process will do its best to delete the record in case of a network partition, whether the consumers go away during or after that partition (see the sketch below). That said, the node monitor drives some failsafe code that cleans up records if the queue process was killed before it could delete its own record.

Fixes #12949, #12597, #14527.
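A minimal sketch of that temporary "best effort" deleter, reusing the `delete_queue_record/3` name from the diff excerpt above; the retry interval and the error shape are assumptions, not the actual implementation:

```erlang
%% Hypothetical sketch. The queue process calls this when nobody waits
%% for a reply, then exits; the spawned process keeps retrying until the
%% metadata store accepts the deletion (e.g. once a partition heals).
internal_delete_async(Q, ActingUser, Reason) ->
    _Pid = spawn(fun() -> retry_delete(Q, ActingUser, Reason) end),
    ok.

retry_delete(Q, ActingUser, Reason) ->
    case delete_queue_record(Q, ActingUser, Reason) of
        ok ->
            ok;
        {error, timeout} ->
            %% Store unreachable (e.g. minority side of a partition):
            %% wait a bit, then try again, indefinitely.
            timer:sleep(1000),
            retry_delete(Q, ActingUser, Reason)
    end.
```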
Force-pushed from 47d59eb to 3c4d073
Everything seems to work as expected. Thanks!
Keep exclusive/auto-delete queues with Khepri + network partition (backport #14573)