Initiate gossip sync only after receiving GossipTimestampFilter. #1452
Changes from all commits

```diff
@@ -49,6 +49,9 @@ use core::{cmp, mem};
 use bitcoin::bech32::u5;
 use chain::keysinterface::{InMemorySigner, Recipient, KeyMaterial};
 
+#[cfg(feature = "std")]
+use std::time::{SystemTime, UNIX_EPOCH};
+
 pub struct TestVecWriter(pub Vec<u8>);
 impl Writer for TestVecWriter {
     fn write_all(&mut self, buf: &[u8]) -> Result<(), io::Error> {
@@ -341,6 +344,7 @@ fn get_dummy_channel_update(short_chan_id: u64) -> msgs::ChannelUpdate {
 pub struct TestRoutingMessageHandler {
     pub chan_upds_recvd: AtomicUsize,
     pub chan_anns_recvd: AtomicUsize,
+    pub pending_events: Mutex<Vec<events::MessageSendEvent>>,
     pub request_full_sync: AtomicBool,
 }
 
@@ -349,6 +353,7 @@ impl TestRoutingMessageHandler {
         TestRoutingMessageHandler {
             chan_upds_recvd: AtomicUsize::new(0),
             chan_anns_recvd: AtomicUsize::new(0),
+            pending_events: Mutex::new(vec![]),
             request_full_sync: AtomicBool::new(false),
         }
     }
@@ -384,7 +389,35 @@ impl msgs::RoutingMessageHandler for TestRoutingMessageHandler {
         Vec::new()
     }
 
-    fn peer_connected(&self, _their_node_id: &PublicKey, _init_msg: &msgs::Init) {}
+    fn peer_connected(&self, their_node_id: &PublicKey, init_msg: &msgs::Init) {
+        if !init_msg.features.supports_gossip_queries() {
+            return ();
+        }
+
+        let should_request_full_sync = self.request_full_sync.load(Ordering::Acquire);
+
+        #[allow(unused_mut, unused_assignments)]
+        let mut gossip_start_time = 0;
+        #[cfg(feature = "std")]
+        {
+            gossip_start_time = SystemTime::now().duration_since(UNIX_EPOCH).expect("Time must be > 1970").as_secs();
+            if should_request_full_sync {
```

Contributor: would it be more legible to write ?

Collaborator: I believe it was copied from the non-test code, so presumably fine enough.
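
As a rough illustration of what a spelled-out version could look like (the constants and function below are hypothetical, not part of the patch or of the reviewer's elided suggestion):

```rust
// Purely illustrative (not part of the patch): the same offsets written with
// named constants instead of inline arithmetic.
const SECS_PER_HOUR: u64 = 60 * 60;
const SECS_PER_WEEK: u64 = SECS_PER_HOUR * 24 * 7;

fn gossip_start_offset(should_request_full_sync: bool) -> u64 {
    if should_request_full_sync {
        2 * SECS_PER_WEEK // start the filter two weeks back
    } else {
        SECS_PER_HOUR // only ask for roughly the last hour of gossip
    }
}
```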

```diff
+                gossip_start_time -= 60 * 60 * 24 * 7 * 2; // 2 weeks ago
+            } else {
+                gossip_start_time -= 60 * 60; // an hour ago
+            }
+        }
+
+        let mut pending_events = self.pending_events.lock().unwrap();
+        pending_events.push(events::MessageSendEvent::SendGossipTimestampFilter {
+            node_id: their_node_id.clone(),
+            msg: msgs::GossipTimestampFilter {
+                chain_hash: genesis_block(Network::Testnet).header.block_hash(),
+                first_timestamp: gossip_start_time as u32,
+                timestamp_range: u32::max_value(),
+            },
+        });
+    }
 
     fn handle_reply_channel_range(&self, _their_node_id: &PublicKey, _msg: msgs::ReplyChannelRange) -> Result<(), msgs::LightningError> {
         Ok(())
@@ -405,7 +438,10 @@ impl msgs::RoutingMessageHandler for TestRoutingMessageHandler {
 impl events::MessageSendEventsProvider for TestRoutingMessageHandler {
     fn get_and_clear_pending_msg_events(&self) -> Vec<events::MessageSendEvent> {
-        vec![]
+        let mut ret = Vec::new();
+        let mut pending_events = self.pending_events.lock().unwrap();
+        core::mem::swap(&mut ret, &mut pending_events);
```

Contributor: purely out of curiosity, what's the benefit of mem::swap over drain?

```diff
+        ret
     }
 }
```
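
As a general note on the `mem::swap`-versus-`drain` question above: both leave the pending list empty; `mem::swap` simply moves the existing buffer out, while `drain(..).collect()` goes through an iterator and collects into a newly allocated `Vec`, and `mem::take` is shorthand for swapping with an empty default. A standalone `std` sketch (not LDK code) of the three alternatives:

```rust
use std::mem;
use std::sync::Mutex;

// Standalone sketch (not LDK code) of three ways to empty a Mutex-guarded Vec.
fn main() {
    let pending: Mutex<Vec<u32>> = Mutex::new(vec![1, 2, 3]);

    // mem::swap: swaps the Vec's internal buffer with an empty one, no iteration.
    let mut ret = Vec::new();
    {
        let mut guard = pending.lock().unwrap();
        mem::swap(&mut ret, &mut *guard);
    }
    assert_eq!(ret, vec![1, 2, 3]);

    // drain(..).collect(): same observable result, but goes through an iterator
    // and collects into a freshly allocated Vec.
    pending.lock().unwrap().extend(vec![4, 5, 6]);
    let drained: Vec<u32> = pending.lock().unwrap().drain(..).collect();
    assert_eq!(drained, vec![4, 5, 6]);

    // mem::take: shorthand for swapping with Default::default().
    pending.lock().unwrap().extend(vec![7, 8]);
    let taken = mem::take(&mut *pending.lock().unwrap());
    assert_eq!(taken, vec![7, 8]);

    assert!(pending.lock().unwrap().is_empty());
}
```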
|
|

Seems like we should just fix this to honor `request_full_sync` and send a gossip filter on connected?

Well, the filter is currently already sent in `peer_connected()`, and since `request_full_sync` was not used at all, I figured it would make sense to simply remove it rather than add much more logic to the so-far pretty slim `TestRoutingMessageHandler`. Or are you suggesting to introduce `request_full_sync` in other places, e.g., `NetGraphMsgHandler`, so it could be used by `should_request_full_sync()`?

Wait, who is sending the filter in `peer_connected` when we're using a `TestRoutingMessageHandler`? Its `peer_connected` method is empty.

Ah, now I get what you're saying. You're right, it's probably easiest here to let the `TestRoutingMessageHandler` implementation of `peer_connected` mirror the functionality of `NetGraphMsgHandler`.

I went that route (replicating the `peer_connected` functionality in `TestRoutingMessageHandler`) and it generally seems to work nicely. However, now the second peer receives only 86/100 channel updates and 43/50 channel announcements, i.e., the test now fails at the last two `assert_eq`s. I currently struggle to find why this would be the case, especially since the logs show that the gossip is indeed just postponed until the peers have received `gossip_timestamp_filter` messages. I'll come around to continue staring at the code next week.

Yes, and before I adapt the test I would at least like to understand why the same number of channel updates is suddenly not delivered, because it's not intuitive to me from the changes I made...

I believe the channel updates will be delivered eventually, but the test currently handles a specific number of round-trips between the peers and then makes sure the messages were all delivered. We now need one more round-trip, I believe (but it's not quite in the same set of steps as the existing loop, so we can't just add one).
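
To make the round-trip notion concrete, here is a hypothetical, self-contained sketch of such a fixed-round-trip delivery loop; `FakePeer`, `drain_outbound`, and `handle` are invented for illustration and are not the actual test code:

```rust
// Hypothetical sketch, not the actual LDK test: a delivery loop that runs a
// fixed number of round-trips. An extra exchange up front (such as the
// gossip_timestamp_filter handshake) shifts which iteration delivers what,
// which is why a fixed iteration count can start failing.
#[derive(Default)]
struct FakePeer {
    outbound: Vec<String>,
    received: Vec<String>,
}

impl FakePeer {
    // Record an incoming message; a real handler might queue responses too.
    fn handle(&mut self, msg: String) {
        self.received.push(msg);
    }
    // Hand over everything queued for the other peer.
    fn drain_outbound(&mut self) -> Vec<String> {
        std::mem::take(&mut self.outbound)
    }
}

fn run_round_trips(a: &mut FakePeer, b: &mut FakePeer, round_trips: usize) {
    for _ in 0..round_trips {
        for msg in a.drain_outbound() {
            b.handle(msg);
        }
        for msg in b.drain_outbound() {
            a.handle(msg);
        }
    }
}

fn main() {
    let (mut a, mut b) = (FakePeer::default(), FakePeer::default());
    a.outbound.push("gossip_timestamp_filter".to_string());
    run_round_trips(&mut a, &mut b, 1);
    assert_eq!(b.received.len(), 1);
    assert!(a.received.is_empty());
}
```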

In 7a99de2 I now moved some things in the loop around. This makes the test succeed, hopefully without reducing the coverage too much (i.e., it now checks for B's empty `outbound_data` once at the end, not in every iteration).

Yea, I mean I think that's fine. That said, I'd kinda prefer to update the pre-loop code to add a few message delivery calls that match the new message flow, but if it's too annoying to figure that out exactly I'd be okay landing this as-is. It's kinda nice to use this test to ensure that we don't write more gossip messages to the buffer after we reach our limit, even if it makes the test kinda brittle.

Ah, you're right. The gossip filter messages basically added half a round trip. a31362a should be a cleaner fix now.