When we upgrade from LDK 0.0.123 or prior, we need to initialize
`holder_commitment_point` with commitment point(s). In
1f7f3a366c9e62cff5a5025724b5b508255a89d7 we changed the upgrade
path from fetching both the current and next per-commitment-points
(setting the value to `HolderCommitmentPoint::Available` on
upgrade) to fetching only the current per-commitment-point (setting
the value to `HolderCommitmentPoint::PendingNext` on upgrade).
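For reference, the difference between the two variants is whether
the next point has already been fetched from the signer. A rough
sketch of the relevant parts of the enum (field layout per the diff
below; this is not the full LDK definition and the comments are
ours):

    use bitcoin::secp256k1::PublicKey;

    enum HolderCommitmentPoint {
        // Only the current per-commitment-point is in hand; the next one
        // still needs to be fetched from the signer (e.g. via
        // `try_resolve_pending`).
        PendingNext { transaction_number: u64, current: PublicKey },
        // Both the current and next per-commitment-points are in hand, so
        // we can `advance()` when needed.
        Available { transaction_number: u64, current: PublicKey, next: PublicKey },
    }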
In `commitment_signed` handling, we expect the next
per-commitment-point to be available (allowing us to `advance()`
the `holder_commitment_point`), as it was included in the
`revoke_and_ack` we most recently sent to our peer, and thus must
have been available at that time.
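Building on the sketch above, advancing consumes the cached next
point, which becomes the new current point. A sketch of the state
transition only (not the actual LDK implementation, which also asks
the signer for the new next point; note that commitment transaction
numbers count down, so `transaction_number - 1` is the next
commitment):

    impl HolderCommitmentPoint {
        fn advance_sketch(&mut self) -> Result<(), ()> {
            if let HolderCommitmentPoint::Available { transaction_number, next, .. } = *self {
                // The cached next point becomes the new current point; we
                // sit in `PendingNext` until the signer provides the
                // following point.
                *self = HolderCommitmentPoint::PendingNext {
                    transaction_number: transaction_number - 1,
                    current: next,
                };
                Ok(())
            } else {
                // Without a cached next point we cannot advance.
                Err(())
            }
        }
    }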
Sadly, these two changes interact poorly: on upgrade, assuming the
channel is at a steady state and there are no pending updates, we'll
not make the next per-commitment-point available, but if we receive
a `commitment_signed` from our peer we'll assume it is. As a result,
in debug mode we'll hit an assertion failure, and in production mode
we'll force-close the channel.
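Concretely (continuing the sketches above, and again not the actual
LDK handler), the failure path looks like:

    // After upgrading at a steady state we sit in `PendingNext`, so the
    // first `commitment_signed` from our peer fails to advance.
    fn handle_commitment_signed_sketch(point: &mut HolderCommitmentPoint) {
        if point.advance_sketch().is_err() {
            // The next point went out in our last `revoke_and_ack`, so this
            // "can't happen": debug builds assert, release builds
            // force-close.
            debug_assert!(false, "Next per-commitment-point must be available");
            // ... force-close the channel ...
        }
    }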
Instead, here, we fix the upgrade logic to always upgrade directly
to `HolderCommitmentPoint::Available`, making the next
per-commitment-point available immediately.
We also attempt to resolve the next per-commitment-point in
`get_channel_reestablish`, allowing any channels which were
upgraded to LDK 0.0.124 and are in this broken state to avoid the
force-closure, as long as they don't receive a `commitment_signed`
in the interim.
pub fn get_channel_reestablish<L: Deref>(&mut self, logger: &L) -> msgs::ChannelReestablish where L::Target: Logger {
    assert!(self.context.channel_state.is_peer_disconnected());
    assert_ne!(self.context.cur_counterparty_commitment_transaction_number, INITIAL_COMMITMENT_NUMBER);
+   // This is generally the first function which gets called on any given channel once we're
+   // up and running normally. Thus, we take this opportunity to attempt to resolve the
+   // `holder_commitment_point` to get any keys which we are currently missing.
+   self.context.holder_commitment_point.try_resolve_pending(
+       &self.context.holder_signer, &self.context.secp_ctx, logger,
+   );
    // Prior to static_remotekey, my_current_per_commitment_point was critical to claiming
    // current to_remote balances. However, it no longer has any use, and thus is now simply
    // set to a dummy (but valid, as required by the spec) public key.
And in the upgrade path, where `holder_commitment_point` is
initialized on deserialization:

        // TODO(async_signing): remove this expect with the Uninitialized variant
        let current = holder_signer.get_per_commitment_point(cur_holder_commitment_transaction_number, &secp_ctx)
            .expect("Must be able to derive the current commitment point upon channel restoration");
-       HolderCommitmentPoint::PendingNext {
-           transaction_number: cur_holder_commitment_transaction_number, current,
+       let next = holder_signer.get_per_commitment_point(cur_holder_commitment_transaction_number - 1, &secp_ctx)
+           .expect("Must be able to derive the next commitment point upon channel restoration");
+       HolderCommitmentPoint::Available {
+           transaction_number: cur_holder_commitment_transaction_number, current, next,
        }
    },
};