Batch funding #2486
Conversation
Force-pushed cc00689 to ac5c8c5.
This wasn't as bad as I thought it would be, but it's definitely gonna require re-reading most of channel and channelmanager to convince myself it's correct :)
lightning/src/ln/channel.rs
Outdated
```rust
	WaitingForBatch = 1 << 13,
}
const BOTH_SIDES_SHUTDOWN_MASK: u32 = ChannelState::LocalShutdownSent as u32 | ChannelState::RemoteShutdownSent as u32;
const MULTI_STATE_FLAGS: u32 = BOTH_SIDES_SHUTDOWN_MASK | ChannelState::PeerDisconnected as u32 | ChannelState::MonitorUpdateInProgress as u32;
```
I think the new state should also be added here, but we'll need test coverage - can you add a test that checks for receiving our peer's channel_ready on a channel which is currently WaitingForBatch? This is basically the peer accepting the channel 0conf.
The WaitingForBatch flag is not a MULTI_STATE_FLAG I believe as it is only set on FundingSent? I did a pass through all the uses of FundingSent in channel.rs and added tests for the channel_ready behavior.
Hmm, yea, the `MULTI_STATE_FLAGS` thing is a bit of a misnomer, I think; it maybe should just be `NON_STATE_FLAGS`. The `channel_state` is really the OR of a state (the initial states and then `ChannelReady`) and some flags, and `MULTI_STATE_FLAGS` lists the flags. But, eg, is `monitor_updating_restored` correct? Below the `funding_broadcastable` setting we set it back to `None` if `channel_state & !MULTI_STATE_FLAGS >= ChannelReady` (which is true if `WaitingForBatch` is set even if we're not `ChannelReady`). Similar in `do_best_block_updated`. Still, to your point, if we do add it to `MULTI_STATE_FLAGS` I think we'll probably break `check_get_channel_ready` (we'll need to also skip the send if we are in `WaitingForBatch`).
It seems there are a couple of use cases:
- A mask of flags that can be applied to multiple states and should be inherited when transitioning between states. `MULTI_STATE_FLAGS` seems to fulfill that purpose for the `*ShutdownSent`, `PeerDisconnected`, `MonitorUpdateInProgress` flags.
- A mask of all flags that can be applied, to only look at the sequential states and compare them. I added a superset of `MULTI_STATE_FLAGS` called `STATE_FLAGS` here.
I went through all uses of Channel::channel_state, verified the usage and adjusted any < or > comparisons to occur against self.channel_state & !STATE_FLAGS.
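To illustrate why the masking matters, here is a simplified, hypothetical sketch. The real `ChannelState` bit values differ; only the state-vs-flag split is shown: low bits form an ordered sequence of states, while flag bits (including `WaitingForBatch` at `1 << 13`) must be masked out before any ordering comparison, or a set flag bit makes the state look "further along" than it actually is.

```rust
// Illustrative bit values only, not the actual ChannelState constants.
const FUNDING_SENT: u32 = 1 << 3;
const CHANNEL_READY: u32 = 1 << 5;

const PEER_DISCONNECTED: u32 = 1 << 7;
const MONITOR_UPDATE_IN_PROGRESS: u32 = 1 << 8;
const WAITING_FOR_BATCH: u32 = 1 << 13;

// Superset mask of every flag bit, per the STATE_FLAGS idea above.
const STATE_FLAGS: u32 = PEER_DISCONNECTED | MONITOR_UPDATE_IN_PROGRESS | WAITING_FOR_BATCH;

/// Has the channel reached ChannelReady, ignoring any flags that happen to be set?
fn is_at_least_channel_ready(channel_state: u32) -> bool {
    (channel_state & !STATE_FLAGS) >= CHANNEL_READY
}

fn main() {
    let state = FUNDING_SENT | WAITING_FOR_BATCH;
    // Unmasked comparison is wrong: the flag bit dominates the ordering.
    assert!(state >= CHANNEL_READY);
    // Masked comparison gives the intended answer.
    assert!(!is_at_least_channel_ready(state));
    assert!(is_at_least_channel_ready(CHANNEL_READY | PEER_DISCONNECTED));
}
```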
Force-pushed ac5c8c5 to 6ce3713.
Codecov Report

```
@@            Coverage Diff             @@
##             main    #2486      +/-   ##
==========================================
+ Coverage   88.86%   88.91%   +0.05%
==========================================
  Files         113      113
  Lines       84517    85131     +614
  Branches    84517    85131     +614
==========================================
+ Hits        75103    75696     +593
- Misses       7211     7222      +11
- Partials    2203      2213      +10
```
Force-pushed 6ce3713 to e26ba06.
ariard
left a comment
I think the current approach of dropping the whole batch is the safest one from a malleability viewpoint, yet once the transaction is counter-signed by all counterparties and broadcast, malleability is no more an issue if we can confirm the transaction.
I think there are intersections here with dual-funding/splicing and per-counterparty negotiated option_zeroconf, at least in terms of API.
Not sure what you're referring to? Are you referring to some specific above discussion or just stating that we need to drop the batch if some output doesn't get a counter-signature?
Force-pushed bceda6a to 6761bc6.
TheBlueMatt
left a comment
Okay, thanks for all the work here, I really like the overall patch, but a handful of comments on minor tweaks for readability. I'd also love to see more test coverage of some edge cases, eg what happens if a peer sends us a channel_ready before we've broadcasted while we're waiting on a batch, a test of the initial monitor update(s) completing async, and a test for shutting down with one of the two channels having had their initial monitor update persisted but not the other one.
```rust
	/// Flag which is set on `FundingSent` to indicate this channel is funded in a batch and the
	/// broadcasting of the funding transaction is being held until all channels in the batch
	/// have received funding_signed and have their monitors persisted.
	WaitingForBatch = 1 << 13,
```
I do kinda wonder if we shouldn't put this up further and renumber later flags. We'd have to juggle it a bunch in the serialization logic so maybe not worth it, but there's a few places it'd simplify the code cause we could keep >= kinda things.
Hmm, right, it would make the serialization logic more complex. I do wonder if this can be refactored at some point to model the ordered states and flags without bitwise logic.
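A hypothetical sketch of what such a refactor could look like; the names are illustrative, not the actual code. The lifecycle becomes an enum whose derived `Ord` replaces the `channel_state & !STATE_FLAGS` comparisons, and the flags become plain bools that can never pollute an ordering check.

```rust
// Ordered lifecycle: derived Ord compares by declaration order.
#[derive(PartialEq, Eq, PartialOrd, Ord, Clone, Copy)]
enum Phase {
    NegotiatingFunding,
    FundingSent,
    ChannelReady,
    ShutdownComplete,
}

// Flags live separately, so they cannot affect phase comparisons.
#[derive(Default, Clone, Copy)]
struct Flags {
    peer_disconnected: bool,
    monitor_update_in_progress: bool,
    waiting_for_batch: bool,
}

struct ChannelState {
    phase: Phase,
    flags: Flags,
}

impl ChannelState {
    fn is_at_least_channel_ready(&self) -> bool {
        self.phase >= Phase::ChannelReady // no mask needed
    }

    fn can_broadcast_funding(&self) -> bool {
        self.phase >= Phase::FundingSent
            && !self.flags.waiting_for_batch
            && !self.flags.monitor_update_in_progress
    }
}

fn main() {
    let batched = ChannelState {
        phase: Phase::FundingSent,
        flags: Flags { waiting_for_batch: true, ..Flags::default() },
    };
    assert!(!batched.is_at_least_channel_ready());
    assert!(!batched.can_broadcast_funding());

    let ready = ChannelState { phase: Phase::ChannelReady, flags: Flags::default() };
    assert!(ready.can_broadcast_funding());
}
```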
Force-pushed bd1ad7e to fc880f2.
Expanded testing coverage which covers:
Force-pushed 3baad70 to 54010cd.
Force-pushed 54010cd to 3cd0b4d.
Force-pushed 25d5504 to 6d40a8f.
Force-pushed 6d40a8f to 570a1c7.
TheBlueMatt
left a comment
Ugh, I'd like to land this Monday for 0.0.117 (any later and we probably can't make the release 😭). I'm happy with the channel.rs changes and most of channelmanager, just want to see the lockorder inverted so we can simplify the potential for races.
Force-pushed 535c19e to 7816cf6.
In the latest change, the lock order is now reversed, acquiring […]. It does require a central place to propagate closures of channels to the remaining channels in the batch. Previously, this was done in […].
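The lock-order discipline discussed here can be sketched as follows. This is a hypothetical illustration with placeholder types, not the PR's actual code: the point is that every path needing both locks acquires them in one fixed order, so two threads can never deadlock by grabbing them in opposite orders.

```rust
use std::sync::Mutex;

// Field names mirror the PR's vocabulary but the types are placeholders.
struct Manager {
    per_peer_state: Mutex<Vec<&'static str>>,
    funding_batch_states: Mutex<Vec<&'static str>>,
}

impl Manager {
    /// Any operation needing both locks takes them in the single agreed order.
    fn count_locked_entries(&self) -> usize {
        let peers = self.per_peer_state.lock().unwrap();
        // Only after the first lock is held do we take the batch lock;
        // no code path may acquire these two in the reverse order.
        let batches = self.funding_batch_states.lock().unwrap();
        peers.len() + batches.len()
    }
}

fn main() {
    let m = Manager {
        per_peer_state: Mutex::new(vec!["peer_a", "peer_b"]),
        funding_batch_states: Mutex::new(vec!["batch_1"]),
    };
    assert_eq!(m.count_locked_entries(), 3);
}
```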
TheBlueMatt
left a comment
A few minor cleanups and assertions, still digging into test coverage.
Force-pushed 9886bba to 7b013b5.
TheBlueMatt
left a comment
LGTM, please squash the fixup.
This is a step towards more unified closing of channels, and provides a place where the `per_peer_state` lock is not held.
Force-pushed 7b013b5 to 8234199.
Squashed.
wpaulino
left a comment
Glad we were able to get rid of the additional tracking map!
```diff
 }

-fn finish_force_close_channel(&self, shutdown_res: ShutdownResult) {
+fn finish_close_channel(&self, shutdown_res: ShutdownResult) {
```
Nit: also update the `Finishing force-closure of channel...` log below
Done in follow-up.
```rust
let events = nodes[0].node.get_and_clear_pending_events();
assert_eq!(events.len(), 1);
match events[0] {
	Event::ChannelClosed { channel_id, .. } => {
```
Nit: use `check_closed_event`?
Unfortunately, that function does not match exactly as there are multiple closure reasons (the root cause one and the batch closure). Tried to refactor this in the follow-up.
```rust
/// - A list of HTLCs to fail back in the form of the (source, payment hash, and this channel's
///   counterparty_node_id and channel_id).
/// - An optional transaction id identifying a corresponding batch funding transaction.
pub(crate) type ShutdownResult = (
```
Probably a good time to make this a struct now, but not a blocker.
Done in follow-up.
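The suggested tuple-to-struct conversion could look roughly like the sketch below. Field and type names here are illustrative, not the follow-up's actual definition; the benefit is that named fields make call sites self-describing where a positional tuple was opaque.

```rust
// Placeholder types standing in for the real ones.
type Txid = [u8; 32];

struct FailedHtlc {
    payment_hash: [u8; 32],
    counterparty_node_id: [u8; 33],
    channel_id: [u8; 32],
}

struct ShutdownResult {
    /// HTLCs to fail back toward their source.
    dropped_outbound_htlcs: Vec<FailedHtlc>,
    /// Funding txid of a batch whose broadcast must now be abandoned, if any.
    unbroadcasted_batch_funding_txid: Option<Txid>,
}

impl ShutdownResult {
    fn is_batch_closure(&self) -> bool {
        self.unbroadcasted_batch_funding_txid.is_some()
    }
}

fn main() {
    let res = ShutdownResult {
        dropped_outbound_htlcs: Vec::new(),
        unbroadcasted_batch_funding_txid: Some([0u8; 32]),
    };
    assert!(res.is_batch_closure());
}
```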
```rust
use crate::util::config::UserConfig;
use crate::util::string::UntrustedString;

use bitcoin::{PackedLockTime, Transaction, TxOut};
```
Nit: these are all unused now
Removed in follow-up.
```rust
		btree_map::Entry::Vacant(vacant) => Some(vacant.insert(Vec::new())),
	}
});
for (channel_idx, &(temporary_channel_id, counterparty_node_id)) in temporary_channels.iter().enumerate() {
```
Nit: channel_idx is unused now
Removed in follow-up.
TheBlueMatt
left a comment
Let's address any remaining feedback in a followup so we can land this.
```rust
/// Return values are identical to [`Self::funding_transaction_generated`], respective to
/// each individual channel and transaction output.
///
/// Do NOT broadcast the funding transaction yourself. This batch funding transcaction
```
transcaction 😂
Fixed in follow-up 😅
Following up in #2613.
Implements batch funding:
- The batch funding transaction is registered with the `ChannelManager`, together with the temporary channel IDs it is funding.
- Batch state is kept in the `ChannelManager`, tracking whether channels have received `funding_signed` and whether the monitor is persisted.
- A `WaitingForBatch` channel state flag indicates `funding_signed` has been received but broadcasting of the funding transaction is held until all channels in the batch have received `funding_signed` and have their monitors persisted. This state is cleared by the `ChannelManager` at the appropriate time.

Fixes #1510.
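The broadcast gate described above can be sketched as follows. This is a hypothetical illustration, not the actual `ChannelManager` tracking: each channel in the batch is marked as `funding_signed` arrives and as its monitor persists, and the funding transaction may be broadcast only once every channel is fully ready.

```rust
use std::collections::HashMap;

#[derive(Default)]
struct BatchChannel {
    received_funding_signed: bool,
    monitor_persisted: bool,
}

#[derive(Default)]
struct BatchState {
    // Keyed by an illustrative numeric channel id for simplicity.
    channels: HashMap<u64, BatchChannel>,
}

impl BatchState {
    fn note_funding_signed(&mut self, id: u64) {
        self.channels.entry(id).or_default().received_funding_signed = true;
    }
    fn note_monitor_persisted(&mut self, id: u64) {
        self.channels.entry(id).or_default().monitor_persisted = true;
    }
    /// The funding transaction may only go out once every channel is ready.
    fn ready_to_broadcast(&self) -> bool {
        !self.channels.is_empty()
            && self.channels.values().all(|c| c.received_funding_signed && c.monitor_persisted)
    }
}

fn main() {
    let mut batch = BatchState::default();
    batch.note_funding_signed(1);
    batch.note_monitor_persisted(1);
    batch.note_funding_signed(2);
    assert!(!batch.ready_to_broadcast()); // channel 2's monitor not yet persisted
    batch.note_monitor_persisted(2);
    assert!(batch.ready_to_broadcast());
}
```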