
@ViktorT-11 (Contributor)

Prerequisite for #1507, and addresses comment #1507 (comment).

Currently `decode_update_add_htlc_onion` returns the `channel_state`
lock to ensure that `internal_update_add_htlc` holds a single
`channel_state` lock for the entire function execution. This is
unnecessary, and since we are moving the channel storage to the
`per_peer_state`, this no longer achieves the goal it was intended for.

We therefore avoid returning the `channel_state` from
`decode_update_add_htlc_onion`, and instead just retake the lock in
`internal_update_add_htlc`.
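
A rough sketch of the before/after shape (stand-in types and simplified signatures, not the actual LDK definitions):

```rust
use std::sync::Mutex;

// Stand-ins for the real LDK types; only the locking pattern matters here.
struct ChannelHolder;
struct ChannelManager {
    channel_state: Mutex<ChannelHolder>,
}

impl ChannelManager {
    // Before: the helper returned the MutexGuard alongside its result, so the
    // caller kept holding the one lock it had taken, roughly:
    //   fn decode_update_add_htlc_onion(&self) -> (PendingHTLCStatus, MutexGuard<ChannelHolder>)
    // After: it returns only its result and releases the lock internally.
    fn decode_update_add_htlc_onion(&self) { /* onion decoding elided */ }

    fn internal_update_add_htlc(&self) {
        self.decode_update_add_htlc_onion();
        // Retake the lock here rather than receiving it from the helper.
        let _channel_state = self.channel_state.lock().unwrap();
        // ... handle the HTLC under the lock ...
    }
}

fn main() {
    let mgr = ChannelManager { channel_state: Mutex::new(ChannelHolder) };
    mgr.internal_update_add_htlc();
}
```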

dunxen previously approved these changes Aug 1, 2022

@dunxen (Contributor) left a comment


ACK 2292538
Looks good!

Seems the GH Actions error is just a transient crates one.

```diff
 };

-channel_state = Some(self.channel_state.lock().unwrap());
+let mut channel_state = self.channel_state.lock().unwrap();
```
Collaborator


While you're here, can you move this down inside `if let Some((err, code, chan_update)) = loop {`? That way we're not holding the lock at all if we call `return_err` later.
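
A free-standing sketch of that suggestion (simplified: `fail` stands in for the real validation, and the tuple omits `chan_update`). Because the guard is a local of the loop body, it is dropped when the loop breaks, so the failure path runs without the lock:

```rust
use std::sync::Mutex;

struct ChannelHolder;

fn handle_onion(channel_state: &Mutex<ChannelHolder>, fail: bool) {
    if let Some((err, code)) = loop {
        // Lock taken here, inside the loop body...
        let _guard = channel_state.lock().unwrap();
        // ...used to look up the forwarding channel, then dropped on break.
        if fail {
            break Some(("invalid onion payload", 0x4000u16));
        }
        break None;
    } {
        // No lock is held here, so the error path (return_err in the real
        // code) never contends with channel_state.
        eprintln!("failing HTLC: {} (code {:#x})", err, code);
    }
}

fn main() {
    let channel_state = Mutex::new(ChannelHolder);
    handle_onion(&channel_state, true);
}
```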

Contributor Author


Good point, is it OK if I place that cleanup in #1639, or would you prefer that this be based on #1639 instead?

@ViktorT-11 (Contributor Author) Aug 1, 2022


Nvm, I'll take the `channel_state` lock temporarily for the `short_to_chan_info` lookup and then drop it.
36ce228

Collaborator


Please don't lock+unlock twice immediately back-to-back? That's unnecessary.
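
In the abstract (a toy example, not the LDK code), the flagged pattern and the fix:

```rust
use std::sync::Mutex;

fn main() {
    let state = Mutex::new(1u64);

    // Flagged: the mutex is unlocked and immediately relocked back-to-back.
    let snapshot = *state.lock().unwrap(); // guard dropped at end of statement
    let mut guard = state.lock().unwrap(); // ...and retaken right away
    *guard += snapshot;
    drop(guard);

    // Fix: do both steps under a single acquisition.
    let mut guard = state.lock().unwrap();
    let snapshot = *guard;
    *guard += snapshot;
    println!("counter: {}", *guard);
}
```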

Contributor Author


Wow... Honestly disappointed in myself that I pushed that last night.
Fixed with: 4a07ac0

@ViktorT-11 force-pushed the 2022-07-update-decode-update-add-htlc-onion-return-parameters branch 2 times, most recently from 36ce228 to 4a07ac0 on August 2, 2022 at 20:28
@TheBlueMatt (Collaborator)

LGTM, feel free to squash.

@ViktorT-11 force-pushed the 2022-07-update-decode-update-add-htlc-onion-return-parameters branch from 4a07ac0 to 65e6fb7 on August 2, 2022 at 21:17
@ViktorT-11 (Contributor Author)

Squashed without changes.

Thanks for the reviews @dunxen & @TheBlueMatt!

@dunxen (Contributor) left a comment


ACK 65e6fb7

Unrelated: I now remember @TheBlueMatt mentioning that GH Actions weirdness with that specific Rust version :/

@tnull (Contributor) left a comment


LGTM

@TheBlueMatt merged commit b4521f5 into lightningdevkit:main Aug 3, 2022