Set `holder_commitment_point` to `Available` on upgrade
author    Matt Corallo <git@bluematt.me>
          Mon, 14 Oct 2024 15:13:34 +0000 (15:13 +0000)
committer Matt Corallo <git@bluematt.me>
          Mon, 14 Oct 2024 15:31:58 +0000 (15:31 +0000)
When we upgrade from LDK 0.0.123 or prior, we need to initialize
`holder_commitment_point` with commitment point(s). In
1f7f3a366c9e62cff5a5025724b5b508255a89d7 we changed the point(s)
fetched on upgrade from both the current and next
per-commitment-points (setting the value to
`HolderCommitmentPoint::Available`) to only the current
per-commitment-point (setting the value to
`HolderCommitmentPoint::PendingNext`).
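
For context, a rough sketch of the two states involved, with field
names mirroring the diff below; the standalone enum and the point
type are simplifications, not the real `PublicKey`-based definition
in `channel.rs`:

```rust
// Simplified stand-in for a secp256k1 per-commitment point.
type Point = [u8; 33];

// Sketch of the two `HolderCommitmentPoint` states referenced above.
enum HolderCommitmentPoint {
	// Only the current per-commitment point is known; the next one still
	// needs to be fetched from the signer.
	PendingNext { transaction_number: u64, current: Point },
	// Both the current and the next per-commitment points are known.
	Available { transaction_number: u64, current: Point, next: Point },
}
```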

In `commitment_signed` handling, we expect the next
per-commitment-point to be available (allowing us to `advance()`
the `holder_commitment_point`), as it was included in the
`revoke_and_ack` we most recently sent to our peer and thus must
have been available at that time.
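
Roughly, that advance step only works from the `Available` state,
which is why receiving a `commitment_signed` while still in
`PendingNext` trips the assertion. A hedged sketch, reusing the
simplified types above and a hypothetical `derive_point` stand-in
for the signer call (not the real `ChannelSigner` API):

```rust
// Hypothetical stand-in for asking the signer for a per-commitment point; a
// real signer may be asynchronous and can fail to produce the point right away.
fn derive_point(_commitment_number: u64) -> Option<Point> {
	Some([0u8; 33])
}

// Sketch of advancing the holder commitment point. Commitment numbers count
// down, so the "next" point belongs to `transaction_number - 1`.
fn advance(state: HolderCommitmentPoint) -> Result<HolderCommitmentPoint, ()> {
	match state {
		HolderCommitmentPoint::Available { transaction_number, next, .. } => {
			let new_number = transaction_number - 1;
			// The old `next` becomes the new `current`; try to fetch the point after it.
			Ok(match derive_point(new_number - 1) {
				Some(new_next) => HolderCommitmentPoint::Available {
					transaction_number: new_number, current: next, next: new_next,
				},
				None => HolderCommitmentPoint::PendingNext {
					transaction_number: new_number, current: next,
				},
			})
		},
		// Without a `next` point we cannot advance; this is the case the
		// `commitment_signed` handler asserts can never happen.
		HolderCommitmentPoint::PendingNext { .. } => Err(()),
	}
}
```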

Sadly, these two changes interact badly with each other: on
upgrade, assuming the channel is in a steady state and there are no
pending updates, we'll not make the next per-commitment-point
available, but if we receive a `commitment_signed` from our peer
we'll assume it is. As a result, in debug mode we'll hit an
assertion failure, and in production mode we'll force-close the
channel.

Instead, here, we fix the upgrade logic to always upgrade directly
to `HolderCommitmentPoint::Available`, making the next
per-commitment-point available immediately.

We also attempt to resolve the next per-commitment-point in
`get_channel_reestablish`, allowing any channels which were
upgraded to LDK 0.0.124 and are in this broken state to avoid the
force-closure, as long as they don't receive a `commitment_signed`
in the interim.
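
A hedged sketch of what that resolution step might look like,
again using the simplified types and the hypothetical
`derive_point` from the sketches above (the real method lives on
`HolderCommitmentPoint` and goes through the channel signer, as in
the diff below):

```rust
// Sketch of resolving a pending next point: if we are stuck in `PendingNext`,
// try once more to fetch the missing point and promote ourselves to `Available`.
fn try_resolve_pending(state: &mut HolderCommitmentPoint) {
	if let HolderCommitmentPoint::PendingNext { transaction_number, current } = *state {
		if let Some(next) = derive_point(transaction_number - 1) {
			*state = HolderCommitmentPoint::Available { transaction_number, current, next };
		}
		// If the signer still cannot provide the point, we stay in `PendingNext`.
	}
}
```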

lightning/src/ln/channel.rs

index 641a61c31402264f8b13f9ccd8b69ed8f38f89cd..f21788e8bd2bcfd65dbdc5f496baa64ea52f9db5 100644 (file)
@@ -7153,6 +7153,12 @@ impl<SP: Deref> Channel<SP> where
        pub fn get_channel_reestablish<L: Deref>(&mut self, logger: &L) -> msgs::ChannelReestablish where L::Target: Logger {
                assert!(self.context.channel_state.is_peer_disconnected());
                assert_ne!(self.context.cur_counterparty_commitment_transaction_number, INITIAL_COMMITMENT_NUMBER);
+               // This is generally the first function which gets called on any given channel once we're
+               // up and running normally. Thus, we take this opportunity to attempt to resolve the
+               // `holder_commitment_point` to get any keys which we are currently missing.
+               self.context.holder_commitment_point.try_resolve_pending(
+                       &self.context.holder_signer, &self.context.secp_ctx, logger,
+               );
                // Prior to static_remotekey, my_current_per_commitment_point was critical to claiming
                // current to_remote balances. However, it no longer has any use, and thus is now simply
                // set to a dummy (but valid, as required by the spec) public key.
@@ -9464,8 +9470,10 @@ impl<'a, 'b, 'c, ES: Deref, SP: Deref> ReadableArgs<(&'a ES, &'b SP, u32, &'c Ch
                                // TODO(async_signing): remove this expect with the Uninitialized variant
                                let current = holder_signer.get_per_commitment_point(cur_holder_commitment_transaction_number, &secp_ctx)
                                        .expect("Must be able to derive the current commitment point upon channel restoration");
-                               HolderCommitmentPoint::PendingNext {
-                                       transaction_number: cur_holder_commitment_transaction_number, current,
+                               let next = holder_signer.get_per_commitment_point(cur_holder_commitment_transaction_number - 1, &secp_ctx)
+                                       .expect("Must be able to derive the next commitment point upon channel restoration");
+                               HolderCommitmentPoint::Available {
+                                       transaction_number: cur_holder_commitment_transaction_number, current, next,
                                }
                        },
                };