From: Chris Waterson
Date: Sat, 7 Oct 2023 15:54:07 +0000 (-0700)
Subject: Support async get_per_commitment_point, release_commitment_secret
X-Git-Url: http://git.bitcoin.ninja/index.cgi?a=commitdiff_plain;h=6846f14f6fc430a50f454297280bfe7128c6325a;p=rust-lightning

Support async get_per_commitment_point, release_commitment_secret

Apologies in advance for the hair-ball. Mostly I just wanted to get a
50,000-foot overview to see if things were headed in the right direction. If
things look okay, I can take a step back and chop this into more
modestly-sized PRs.

In general, this PR adds state to the channel context to allow a
`ChannelSigner` implementation to respond asynchronously to the
`get_per_commitment_point`, `release_commitment_secret`, and
`sign_counterparty_commitment` methods, which are the main signer methods that
are called during channel setup and normal operation.

These changes seem to work as advertised during normal channel operation
(creation and payment exchange), and do not obviously fail during channel
re-establishment or across node restart. That said, there are a lot more test
scenarios to evaluate here. The details are below.

Adds the `RevokeAndACK` message and the RAA/commitment-update ordering to the
`SignerResumeUpdates` struct that is returned from `signer_maybe_unblocked`.

Adds `signer_maybe_unblocked` for both inbound and outbound unfunded channel
states. We need these now because `get_per_commitment_point` is asynchronous
-- and necessary for us to proceed out of the unfunded state into normal
operation. Adds appropriate `SignerResumeUpdates` structs for both inbound and
outbound unfunded states.

Maintains `cur_holder_commitment_point` and `prev_holder_commitment_secret` on
the channel context. By making these part of the context state, we can access
them at any point we need them without requiring a signer call to regenerate.
These are updated appropriately throughout the channel state machine by
calling a new context method, `update_holder_per_commitment`.

Adds several flags to indicate messages that may now be pending the remote
signer unblocking:

- `signer_pending_revoke_and_ack`, set when we're waiting to send the
  revoke-and-ack to the counterparty. This might _not_ just be us waiting on
  the signer; for example, if the commitment order requires sending the RAA
  after the commitment update, then even though we _might_ be able to generate
  the RAA (e.g., because we have the secret), we will not do so while the
  commitment update is pending (e.g., because we're missing the latest point
  or do not have a signed counterparty commitment).

- `signer_pending_channel_ready`, set when we're waiting to send the
  channel-ready to the counterparty.

- `signer_pending_commitment_point`, set when we're waiting for the signer to
  return us the commitment point for the current state.

- `signer_pending_released_secret`, set when we're waiting for the signer to
  release the commitment secret for the previous state.

This state (current commitment point, previous secret, flags) is persisted in
the channel monitor, and restored when the channel is deserialized. When a
monitor update completes, we may still be pending results from the remote
signer. If that is the case, we ensure that we correctly maintain the above
state flags before the channel state is resumed. For example, if the _monitor_
is pending a commitment signed, but we were not able to retrieve the
commitment update from the signer, then we ensure that
`signer_pending_commitment_update` is set.
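As an aside, to make the "pending" mechanics above concrete, here is a minimal
sketch (not part of this commit) of how a fallible signer call can signal "not
ready yet". The `RemotePoints` type and its field names are hypothetical; only
the shape of `get_per_commitment_point` mirrors the now-fallible
`ChannelSigner` method, where an `Err(())` causes the channel to set
`signer_pending_commitment_point` and retry on `signer_unblocked`:

    use bitcoin::secp256k1::{PublicKey, Secp256k1, Signing};
    use std::collections::HashMap;

    /// Hypothetical store of per-commitment points that a remote signer has
    /// delivered so far, keyed by commitment number.
    struct RemotePoints {
        ready: HashMap<u64, PublicKey>,
    }

    impl RemotePoints {
        /// Mirrors the fallible `get_per_commitment_point`: `Err(())` means
        /// "not ready yet"; the channel then sets
        /// `signer_pending_commitment_point` and retries when
        /// `signer_unblocked` is later invoked.
        fn get_per_commitment_point<C: Signing>(
            &self, idx: u64, _secp_ctx: &Secp256k1<C>,
        ) -> Result<PublicKey, ()> {
            self.ready.get(&idx).copied().ok_or(())
        }
    }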
When unblocking the signer, we need to ensure that we honor message ordering
constraints. For the commitment update and revoke-and-ack, we ensure that we
honor the context's current `resend_order`. For example, assume we must send
the RAA before the CU, and could potentially send a CU because we have the
commitment point and a signed counterparty commitment transaction. _But_, we
have not yet received the previous state's commitment secret. In this case, we
ensure that no messages are emitted until the commitment secret is released,
at which point the signer-unblocked call will emit both messages.

A similar situation exists with the `channel_ready` message and the
`funding_signed` / `funding_created` messages at channel startup: we make sure
that we don't emit `channel_ready` before the funding message.

There is at least one unsolved problem here: during channel re-establishment,
we need to request an _arbitrary_ commitment point from the signer in order to
verify that the counterparty is giving us a legitimate secret. For the time
being, I've simply commented out this check; however, this is not a viable
solution. There are a few options to consider here:

1. We could require that an asynchronous signer _cache_ the previous
   commitment points, so that any such request must necessarily succeed
   synchronously.

2. We could attempt to refactor this method so as to admit an asynchronous
   response that would restart the channel re-establishment once the
   commitment point has been provided by the signer and the counterparty's
   secret can be verified.

The former places some additional burden on a remote signer (albeit minimal)
and seems reasonable to me; a sketch of it follows at the end of this
description.

As for testing...

For testing asynchronous channel signing, this PR replaces the simple boolean
("everything is on, or everything is off") with flags that let us toggle
different signing methods on and off. This lets us (sort of) simulate the
signer returning responses in an arbitrary order.

Adds a fully-exploded "send a one-hop payment" test to the async test suite.
At each step, we simulate the async signer being unavailable, and then
unblocking. We vary the order to check all possible orderings of
`get_per_commitment_point`, `release_commitment_secret`, and
`sign_counterparty_commitment` being provided by the signer.

But there is a lot more to be done here. Many of the odd-ball cases in the PR
_aren't_ covered by unit tests and were instead uncovered by running the code
in situ with an LND counterparty. So there are a lot more tests to write here.
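To make option 1 concrete, here is a rough sketch of a point-caching wrapper,
under the assumption that the underlying signer exposes the fallible
`get_per_commitment_point` shape described above. `CachingSigner` and its
members are hypothetical and are not part of this commit:

    use bitcoin::secp256k1::{PublicKey, Secp256k1, Signing};
    use std::collections::HashMap;

    /// Hypothetical wrapper that remembers every per-commitment point it has
    /// ever returned, so that a later channel_reestablish verification at an
    /// arbitrary old commitment number can be answered synchronously.
    struct CachingSigner<S> {
        inner: S,
        cached_points: HashMap<u64, PublicKey>,
    }

    impl<S> CachingSigner<S> {
        /// `fetch` stands in for the inner (possibly asynchronous) signer call.
        fn get_per_commitment_point<C: Signing>(
            &mut self, idx: u64, secp_ctx: &Secp256k1<C>,
            fetch: impl Fn(&S, u64, &Secp256k1<C>) -> Result<PublicKey, ()>,
        ) -> Result<PublicKey, ()> {
            if let Some(point) = self.cached_points.get(&idx) {
                return Ok(*point); // old state: answered from the cache
            }
            let point = fetch(&self.inner, idx, secp_ctx)?;
            self.cached_points.insert(idx, point);
            Ok(point)
        }
    }

With something like this in place, the re-establish check could call the
cache-backed method with the counterparty's claimed state index and expect a
synchronous answer for any state previously reached.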
---

diff --git a/fuzz/src/chanmon_consistency.rs b/fuzz/src/chanmon_consistency.rs
index ea0b78dc8..837a73a75 100644
--- a/fuzz/src/chanmon_consistency.rs
+++ b/fuzz/src/chanmon_consistency.rs
@@ -267,7 +267,7 @@ impl SignerProvider for KeyProvider {
 			inner,
 			state,
 			disable_revocation_policy_check: false,
-			available: Arc::new(Mutex::new(true)),
+			unavailable: Arc::new(Mutex::new(0)),
 		})
 	}
diff --git a/lightning/src/chain/channelmonitor.rs b/lightning/src/chain/channelmonitor.rs
index 37c3383a3..14e788434 100644
--- a/lightning/src/chain/channelmonitor.rs
+++ b/lightning/src/chain/channelmonitor.rs
@@ -2833,9 +2833,7 @@ impl<Signer: WriteableEcdsaChannelSigner> ChannelMonitorImpl<Signer> {
 				},
 				commitment_txid: htlc.commitment_txid,
 				per_commitment_number: htlc.per_commitment_number,
-				per_commitment_point: self.onchain_tx_handler.signer.get_per_commitment_point(
-					htlc.per_commitment_number, &self.onchain_tx_handler.secp_ctx,
-				),
+				per_commitment_point: htlc.per_commitment_point,
 				feerate_per_kw: 0,
 				htlc: htlc.htlc,
 				preimage: htlc.preimage,
diff --git a/lightning/src/chain/onchaintx.rs b/lightning/src/chain/onchaintx.rs
index 5aba65938..5e892fe7b 100644
--- a/lightning/src/chain/onchaintx.rs
+++ b/lightning/src/chain/onchaintx.rs
@@ -178,6 +178,7 @@ pub(crate) struct ExternalHTLCClaim {
 	pub(crate) htlc: HTLCOutputInCommitment,
 	pub(crate) preimage: Option<PaymentPreimage>,
 	pub(crate) counterparty_sig: Signature,
+	pub(crate) per_commitment_point: bitcoin::secp256k1::PublicKey,
 }
 
 // Represents the different types of claims for which events are yielded externally to satisfy said
@@ -1165,9 +1166,16 @@ impl<ChannelSigner: WriteableEcdsaChannelSigner> OnchainTxHandler<ChannelSigner>
 			})
 			.map(|(htlc_idx, htlc)| {
 				let counterparty_htlc_sig = holder_commitment.counterparty_htlc_sigs[htlc_idx];
+
+				// TODO(waterson) fallible: move this somewhere!
+				let per_commitment_point = self.signer.get_per_commitment_point(
+					trusted_tx.commitment_number(), &self.secp_ctx,
+				).unwrap();
+
 				ExternalHTLCClaim {
 					commitment_txid: trusted_tx.txid(),
 					per_commitment_number: trusted_tx.commitment_number(),
+					per_commitment_point: per_commitment_point,
 					htlc: htlc.clone(),
 					preimage: *preimage,
 					counterparty_sig: counterparty_htlc_sig,
diff --git a/lightning/src/ln/async_signer_tests.rs b/lightning/src/ln/async_signer_tests.rs
index a82fd2e42..574d10a86 100644
--- a/lightning/src/ln/async_signer_tests.rs
+++ b/lightning/src/ln/async_signer_tests.rs
@@ -10,13 +10,18 @@
 //! Tests for asynchronous signing. These tests verify that the channel state machine behaves
 //! properly with a signer implementation that asynchronously derives signatures.
 
+use bitcoin::secp256k1::PublicKey;
 use crate::events::{Event, MessageSendEvent, MessageSendEventsProvider};
+use crate::ln::ChannelId;
 use crate::ln::functional_test_utils::*;
 use crate::ln::msgs::ChannelMessageHandler;
 use crate::ln::channelmanager::{PaymentId, RecipientOnionFields};
+use crate::util::test_channel_signer::ops;
+
+const OPS: u32 = ops::GET_PER_COMMITMENT_POINT | ops::RELEASE_COMMITMENT_SECRET | ops::SIGN_COUNTERPARTY_COMMITMENT;
 
 #[test]
-fn test_async_commitment_signature_for_funding_created() {
+fn test_funding_created() {
 	// Simulate acquiring the signature for `funding_created` asynchronously.
 	let chanmon_cfgs = create_chanmon_cfgs(2);
 	let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
@@ -37,7 +42,7 @@ fn test_async_commitment_signature_for_funding_created() {
 	// But! Let's make node[0]'s signer be unavailable: we should *not* broadcast a funding_created
 	// message...
 	let (temporary_channel_id, tx, _) = create_funding_transaction(&nodes[0], &nodes[1].node.get_our_node_id(), 100000, 42);
 
-	nodes[0].set_channel_signer_available(&nodes[1].node.get_our_node_id(), &temporary_channel_id, false);
+	nodes[0].set_channel_signer_ops_available(&nodes[1].node.get_our_node_id(), &temporary_channel_id, OPS, false);
 	nodes[0].node.funding_transaction_generated(&temporary_channel_id, &nodes[1].node.get_our_node_id(), tx.clone()).unwrap();
 	check_added_monitors(&nodes[0], 0);
 
@@ -51,7 +56,7 @@
 		channels[0].channel_id
 	};
 
-	nodes[0].set_channel_signer_available(&nodes[1].node.get_our_node_id(), &chan_id, true);
+	nodes[0].set_channel_signer_ops_available(&nodes[1].node.get_our_node_id(), &chan_id, OPS, true);
 	nodes[0].node.signer_unblocked(Some((nodes[1].node.get_our_node_id(), chan_id)));
 
 	let mut funding_created_msg = get_event_msg!(nodes[0], MessageSendEvent::SendFundingCreated, nodes[1].node.get_our_node_id());
@@ -67,7 +72,7 @@
 }
 
 #[test]
-fn test_async_commitment_signature_for_funding_signed() {
+fn test_for_funding_signed() {
 	// Simulate acquiring the signature for `funding_signed` asynchronously.
 	let chanmon_cfgs = create_chanmon_cfgs(2);
 	let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
@@ -92,7 +97,7 @@
 
 	// Now let's make node[1]'s signer be unavailable while handling the `funding_created`. It should
 	// *not* broadcast a `funding_signed`...
-	nodes[1].set_channel_signer_available(&nodes[0].node.get_our_node_id(), &temporary_channel_id, false);
+	nodes[1].set_channel_signer_ops_available(&nodes[0].node.get_our_node_id(), &temporary_channel_id, OPS, false);
 	nodes[1].node.handle_funding_created(&nodes[0].node.get_our_node_id(), &funding_created_msg);
 	check_added_monitors(&nodes[1], 1);
 
@@ -105,7 +110,7 @@
 		assert_eq!(channels.len(), 1, "expected one channel, not {}", channels.len());
 		channels[0].channel_id
 	};
-	nodes[1].set_channel_signer_available(&nodes[0].node.get_our_node_id(), &chan_id, true);
+	nodes[1].set_channel_signer_ops_available(&nodes[0].node.get_our_node_id(), &chan_id, OPS, true);
 	nodes[1].node.signer_unblocked(Some((nodes[0].node.get_our_node_id(), chan_id)));
 
 	expect_channel_pending_event(&nodes[1], &nodes[0].node.get_our_node_id());
@@ -118,7 +123,7 @@
 }
 
 #[test]
-fn test_async_commitment_signature_for_commitment_signed() {
+fn test_commitment_signed() {
 	let chanmon_cfgs = create_chanmon_cfgs(2);
 	let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
 	let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
@@ -144,29 +149,26 @@
 
 	dst.node.handle_update_add_htlc(&src.node.get_our_node_id(), &payment_event.msgs[0]);
 
-	// Mark dst's signer as unavailable and handle src's commitment_signed: while dst won't yet have a
-	// `commitment_signed` of its own to offer, it should publish a `revoke_and_ack`.
-	dst.set_channel_signer_available(&src.node.get_our_node_id(), &chan_id, false);
+	// Mark dst's signer as unavailable and handle src's commitment_signed. If dst's signer is
+	// offline, it oughtn't yet respond with any updates.
+	dst.set_channel_signer_ops_available(&src.node.get_our_node_id(), &chan_id, OPS, false);
 	dst.node.handle_commitment_signed(&src.node.get_our_node_id(), &payment_event.commitment_msg);
 	check_added_monitors(dst, 1);
 
-	get_event_msg!(dst, MessageSendEvent::SendRevokeAndACK, src.node.get_our_node_id());
+	{
+		let events = dst.node.get_and_clear_pending_msg_events();
+		assert_eq!(events.len(), 0, "expected 0 events to be generated, got {}", events.len());
+	}
 
 	// Mark dst's signer as available and retry: we now expect to see dst's `commitment_signed`.
-	dst.set_channel_signer_available(&src.node.get_our_node_id(), &chan_id, true);
+	dst.set_channel_signer_ops_available(&src.node.get_our_node_id(), &chan_id, OPS, true);
 	dst.node.signer_unblocked(Some((src.node.get_our_node_id(), chan_id)));
 
-	let events = dst.node.get_and_clear_pending_msg_events();
-	assert_eq!(events.len(), 1, "expected one message, got {}", events.len());
-	if let MessageSendEvent::UpdateHTLCs { ref node_id, .. } = events[0] {
-		assert_eq!(node_id, &src.node.get_our_node_id());
-	} else {
-		panic!("expected UpdateHTLCs message, not {:?}", events[0]);
-	};
+	get_revoke_commit_msgs(&dst, &src.node.get_our_node_id());
 }
 
 #[test]
-fn test_async_commitment_signature_for_funding_signed_0conf() {
+fn test_funding_signed_0conf() {
 	// Simulate acquiring the signature for `funding_signed` asynchronously for a zero-conf channel.
 	let mut manually_accept_config = test_default_channel_config();
 	manually_accept_config.manually_accept_inbound_channels = true;
@@ -209,7 +211,7 @@
 
 	// Now let's make node[1]'s signer be unavailable while handling the `funding_created`. It should
 	// *not* broadcast a `funding_signed`...
-	nodes[1].set_channel_signer_available(&nodes[0].node.get_our_node_id(), &temporary_channel_id, false);
+	nodes[1].set_channel_signer_ops_available(&nodes[0].node.get_our_node_id(), &temporary_channel_id, OPS, false);
 	nodes[1].node.handle_funding_created(&nodes[0].node.get_our_node_id(), &funding_created_msg);
 	check_added_monitors(&nodes[1], 1);
 
@@ -224,7 +226,7 @@
 	};
 
 	// At this point, we basically expect the channel to open like a normal zero-conf channel.
-	nodes[1].set_channel_signer_available(&nodes[0].node.get_our_node_id(), &chan_id, true);
+	nodes[1].set_channel_signer_ops_available(&nodes[0].node.get_our_node_id(), &chan_id, OPS, true);
 	nodes[1].node.signer_unblocked(Some((nodes[0].node.get_our_node_id(), chan_id)));
 
 	let (funding_signed, channel_ready_1) = {
@@ -264,8 +266,214 @@
 	assert_eq!(nodes[1].node.list_usable_channels().len(), 1);
 }
 
+/// Helper to run operations with a simulated asynchronous signer.
+///
+/// Disables the signer for the specified channel and then runs `do_fn`, then re-enables the signer
+/// and calls `signer_unblocked`.
+#[cfg(test)]
+pub fn with_async_signer<'a, DoFn>(node: &Node, peer_id: &PublicKey, channel_id: &ChannelId, masks: &Vec<u32>, do_fn: &'a DoFn) where DoFn: Fn() {
+	let mask = masks.iter().fold(0, |acc, m| (acc | m));
+	node.set_channel_signer_ops_available(peer_id, channel_id, mask, false);
+	do_fn();
+	for mask in masks {
+		node.set_channel_signer_ops_available(peer_id, channel_id, *mask, true);
+		node.node.signer_unblocked(Some((*peer_id, *channel_id)));
+	}
+}
+
+#[cfg(test)]
+fn do_test_payment(masks: &Vec<u32>) {
+	// This runs through a one-hop payment from start to finish, simulating an asynchronous signer at
+	// each step.
+	let chanmon_cfgs = create_chanmon_cfgs(2);
+	let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
+	let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
+	let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+	let (_up1, _up2, channel_id, _tx) = create_announced_chan_between_nodes(&nodes, 0, 1);
+
+	let alice = &nodes[0];
+	let bob = &nodes[1];
+
+	let (route, payment_hash, payment_preimage, payment_secret) = get_route_and_payment_hash!(alice, bob, 8_000_000);
+
+	with_async_signer(&alice, &bob.node.get_our_node_id(), &channel_id, masks, &|| {
+		alice.node.send_payment_with_route(&route, payment_hash,
+			RecipientOnionFields::secret_only(payment_secret), PaymentId(payment_hash.0)).unwrap();
+		check_added_monitors!(alice, 1);
+		let events = alice.node.get_and_clear_pending_msg_events();
+		assert_eq!(events.len(), 0, "expected 0 events, got {}", events.len());
+	});
+
+	let payment_event = {
+		let mut events = alice.node.get_and_clear_pending_msg_events();
+		assert_eq!(events.len(), 1);
+		SendEvent::from_event(events.remove(0))
+	};
+	assert_eq!(payment_event.node_id, bob.node.get_our_node_id());
+	assert_eq!(payment_event.msgs.len(), 1);
+
+	// alice --[update_add_htlc]--> bob
+	// alice --[commitment_signed]--> bob
+	with_async_signer(&bob, &alice.node.get_our_node_id(), &channel_id, masks, &|| {
+		bob.node.handle_update_add_htlc(&alice.node.get_our_node_id(), &payment_event.msgs[0]);
+		bob.node.handle_commitment_signed(&alice.node.get_our_node_id(), &payment_event.commitment_msg);
+		check_added_monitors(bob, 1);
+	});
+
+	// alice <--[revoke_and_ack]-- bob
+	// alice <--[commitment_signed]-- bob
+	{
+		let (raa, cu) = {
+			let events = bob.node.get_and_clear_pending_msg_events();
+			assert_eq!(events.len(), 2, "expected 2 messages, got {}", events.len());
+			match (&events[0], &events[1]) {
+				(MessageSendEvent::SendRevokeAndACK { msg: raa, .. }, MessageSendEvent::UpdateHTLCs { updates: cu, .. }) => {
+					assert_eq!(cu.update_add_htlcs.len(), 0, "expected 0 update_add_htlcs, got {}", cu.update_add_htlcs.len());
+					(raa.clone(), cu.clone())
+				}
+				(a, b) => panic!("expected SendRevokeAndAck and UpdateHTLCs, not {:?} and {:?}", a, b)
+			}
+		};
+
+		// TODO: run this with_async_signer once validate_counterparty_revocation supports it.
+		alice.node.handle_revoke_and_ack(&bob.node.get_our_node_id(), &raa);
+		check_added_monitors(alice, 1);
+
+		with_async_signer(&alice, &bob.node.get_our_node_id(), &channel_id, masks, &|| {
+			alice.node.handle_commitment_signed(&bob.node.get_our_node_id(), &cu.commitment_signed);
+			check_added_monitors(alice, 1);
+		});
+	}
+
+	// alice --[revoke_and_ack]--> bob
+	// TODO: run this with_async_signer once validate_counterparty_revocation supports it.
+	let raa = get_event_msg!(alice, MessageSendEvent::SendRevokeAndACK, bob.node.get_our_node_id());
+	bob.node.handle_revoke_and_ack(&alice.node.get_our_node_id(), &raa);
+	check_added_monitors(bob, 1);
+
+	expect_pending_htlcs_forwardable!(bob);
+
+	// Bob generates a PaymentClaimable to user code.
+	{
+		let events = bob.node.get_and_clear_pending_events();
+		assert_eq!(events.len(), 1, "expected 1 event, got {}", events.len());
+		match &events[0] {
+			Event::PaymentClaimable { .. } => {
+				bob.node.claim_funds(payment_preimage);
+			}
+			ev => panic!("Expected PaymentClaimable, got {:?}", ev)
+		}
+		check_added_monitors(bob, 1);
+	}
+
+	// Bob generates a PaymentClaimed event to user code.
+	{
+		let events = bob.node.get_and_clear_pending_events();
+		assert_eq!(events.len(), 1, "expected 1 event, got {}", events.len());
+		match &events[0] {
+			Event::PaymentClaimed { .. } => (),
+			ev => panic!("Expected PaymentClaimed, got {:?}", ev),
+		}
+	}
+
+	// alice <--[update_fulfill_htlcs]-- bob
+	// alice <--[commitment_signed]-- bob
+	{
+		let cu = {
+			let events = bob.node.get_and_clear_pending_msg_events();
+			assert_eq!(events.len(), 1, "expected 1 events, got {}", events.len());
+			match &events[0] {
+				MessageSendEvent::UpdateHTLCs { updates, .. } => {
+					assert_eq!(updates.update_fulfill_htlcs.len(), 1, "expected 1 update_fulfill_htlcs, got {}", updates.update_fulfill_htlcs.len());
+					updates.clone()
+				}
+				ev => panic!("Expected UpdateHTLCs, got {:?}", ev)
+			}
+		};
+
+		with_async_signer(&alice, &bob.node.get_our_node_id(), &channel_id, masks, &|| {
+			alice.node.handle_update_fulfill_htlc(&bob.node.get_our_node_id(), &cu.update_fulfill_htlcs[0]);
+			alice.node.handle_commitment_signed(&bob.node.get_our_node_id(), &cu.commitment_signed);
+			check_added_monitors(alice, 1);
+		});
+	}
+
+	// alice --[revoke_and_ack]--> bob
+	// alice --[commitment_signed]--> bob
+	{
+		let (raa, cu) = {
+			let events = alice.node.get_and_clear_pending_msg_events();
+			assert_eq!(events.len(), 2, "expected 2 messages, got {}", events.len());
+			match (&events[0], &events[1]) {
+				(MessageSendEvent::SendRevokeAndACK { msg: raa, .. }, MessageSendEvent::UpdateHTLCs { updates: cu, .. }) => {
+					assert_eq!(cu.update_fulfill_htlcs.len(), 0, "expected 0 update_fulfill_htlcs, got {}", cu.update_fulfill_htlcs.len());
+					(raa.clone(), cu.clone())
+				}
+				(a, b) => panic!("expected SendRevokeAndAck and UpdateHTLCs, not {:?} and {:?}", a, b)
+			}
+		};
+
+		// TODO: run with async once validate_counterparty_revocation supports it.
+		bob.node.handle_revoke_and_ack(&alice.node.get_our_node_id(), &raa);
+		check_added_monitors(bob, 1);
+
+		with_async_signer(&bob, &alice.node.get_our_node_id(), &channel_id, masks, &|| {
+			bob.node.handle_commitment_signed(&alice.node.get_our_node_id(), &cu.commitment_signed);
+			check_added_monitors(bob, 1);
+		});
+	}
+
+	// alice <--[revoke_and_ack]-- bob
+	// TODO: run with async once validate_counterparty_revocation supports it.
+	let raa = get_event_msg!(bob, MessageSendEvent::SendRevokeAndACK, alice.node.get_our_node_id());
+	alice.node.handle_revoke_and_ack(&bob.node.get_our_node_id(), &raa);
+	check_added_monitors(alice, 0);
+
+	// Alice generates PaymentSent and PaymentPathSuccessful events to user code.
+	{
+		let events = alice.node.get_and_clear_pending_events();
+		assert_eq!(events.len(), 2, "expected 2 event, got {}", events.len());
+		match (&events[0], &events[1]) {
+			(Event::PaymentSent { .. }, Event::PaymentPathSuccessful { .. }) => (),
+			(a, b) => panic!("Expected PaymentSent and PaymentPathSuccessful, got {:?} and {:?}", a, b)
+		}
+
+		check_added_monitors(alice, 1); // why? would have expected this after handling RAA...
+	}
+}
+
+#[test]
+fn test_payment_grs() {
+	do_test_payment(&vec![ops::GET_PER_COMMITMENT_POINT, ops::RELEASE_COMMITMENT_SECRET, ops::SIGN_COUNTERPARTY_COMMITMENT]);
+}
+
+#[test]
+fn test_payment_gsr() {
+	do_test_payment(&vec![ops::GET_PER_COMMITMENT_POINT, ops::SIGN_COUNTERPARTY_COMMITMENT, ops::RELEASE_COMMITMENT_SECRET]);
+}
+
+#[test]
+fn test_payment_rsg() {
+	do_test_payment(&vec![ops::RELEASE_COMMITMENT_SECRET, ops::SIGN_COUNTERPARTY_COMMITMENT, ops::GET_PER_COMMITMENT_POINT]);
+}
+
+#[test]
+fn test_payment_rgs() {
+	do_test_payment(&vec![ops::RELEASE_COMMITMENT_SECRET, ops::GET_PER_COMMITMENT_POINT, ops::SIGN_COUNTERPARTY_COMMITMENT]);
+}
+
+#[test]
+fn test_payment_srg() {
+	do_test_payment(&vec![ops::SIGN_COUNTERPARTY_COMMITMENT, ops::RELEASE_COMMITMENT_SECRET, ops::GET_PER_COMMITMENT_POINT]);
+}
+
 #[test]
-fn test_async_commitment_signature_for_peer_disconnect() {
+fn test_payment_sgr() {
+	do_test_payment(&vec![ops::SIGN_COUNTERPARTY_COMMITMENT, ops::GET_PER_COMMITMENT_POINT, ops::RELEASE_COMMITMENT_SECRET]);
+}
+
+#[test]
+fn test_peer_disconnect() {
 	let chanmon_cfgs = create_chanmon_cfgs(2);
 	let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
 	let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
@@ -291,33 +499,22 @@
 
 	dst.node.handle_update_add_htlc(&src.node.get_our_node_id(), &payment_event.msgs[0]);
 
-	// Mark dst's signer as unavailable and handle src's commitment_signed: while dst won't yet have a
-	// `commitment_signed` of its own to offer, it should publish a `revoke_and_ack`.
-	dst.set_channel_signer_available(&src.node.get_our_node_id(), &chan_id, false);
+	// Mark dst's signer as unavailable and handle src's commitment_signed. If dst's signer is
+	// offline, it oughtn't respond with any updates.
+	dst.set_channel_signer_ops_available(&src.node.get_our_node_id(), &chan_id, OPS, false);
 	dst.node.handle_commitment_signed(&src.node.get_our_node_id(), &payment_event.commitment_msg);
 	check_added_monitors(dst, 1);
 
-	get_event_msg!(dst, MessageSendEvent::SendRevokeAndACK, src.node.get_our_node_id());
-
 	// Now disconnect and reconnect the peers.
 	src.node.peer_disconnected(&dst.node.get_our_node_id());
 	dst.node.peer_disconnected(&src.node.get_our_node_id());
 	let mut reconnect_args = ReconnectArgs::new(&nodes[0], &nodes[1]);
 	reconnect_args.send_channel_ready = (false, false);
-	reconnect_args.pending_raa = (true, false);
+	reconnect_args.pending_raa = (false, false);
 	reconnect_nodes(reconnect_args);
 
-	// Mark dst's signer as available and retry: we now expect to see dst's `commitment_signed`.
-	dst.set_channel_signer_available(&src.node.get_our_node_id(), &chan_id, true);
+	// Mark dst's signer as available and retry: we now expect to see dst's commitment signed and RAA.
+	dst.set_channel_signer_ops_available(&src.node.get_our_node_id(), &chan_id, OPS, true);
 	dst.node.signer_unblocked(Some((src.node.get_our_node_id(), chan_id)));
-
-	{
-		let events = dst.node.get_and_clear_pending_msg_events();
-		assert_eq!(events.len(), 1, "expected one message, got {}", events.len());
-		if let MessageSendEvent::UpdateHTLCs { ref node_id, .. } = events[0] {
-			assert_eq!(node_id, &src.node.get_our_node_id());
-		} else {
-			panic!("expected UpdateHTLCs message, not {:?}", events[0]);
-		};
-	}
+	get_revoke_commit_msgs(dst, &src.node.get_our_node_id());
 }
diff --git a/lightning/src/ln/channel.rs b/lightning/src/ln/channel.rs
index 4a09ffcef..b8d8183fe 100644
--- a/lightning/src/ln/channel.rs
+++ b/lightning/src/ln/channel.rs
@@ -533,15 +533,44 @@ pub(super) struct MonitorRestoreUpdates {
 	pub announcement_sigs: Option<msgs::AnnouncementSignatures>,
 }
 
-/// The return value of `signer_maybe_unblocked`
+/// The return value of `signer_maybe_unblocked`.
+///
+/// When the signer becomes unblocked, any non-`None` event accumulated here should be sent to the
+/// peer by the caller.
 #[allow(unused)]
 pub(super) struct SignerResumeUpdates {
+	/// A `commitment_signed` message, possibly with additional HTLC-related messages (e.g.,
+	/// `update_add_htlc`) that should be placed in the commitment.
+	///
+	/// When both this and `raa` contain values, they should be sent to the peer using an ordering
+	/// consistent with `order`.
 	pub commitment_update: Option<msgs::CommitmentUpdate>,
+	/// A `revoke_and_ack` message that should be sent to the peer.
+	///
+	/// When both this and `commitment_update` contain values, they should be sent to the peer using
+	/// an ordering consistent with `order`.
+	pub raa: Option<msgs::RevokeAndACK>,
+	/// The order in which the `commitment_signed` and `revoke_and_ack` messages should be provided to
+	/// the peer. Only meaningful if both of these messages are present.
+	pub order: RAACommitmentOrder,
+	/// A `funding_signed` message that should be sent to the peer.
 	pub funding_signed: Option<msgs::FundingSigned>,
+	/// A `funding_created` message that should be sent to the peer.
 	pub funding_created: Option<msgs::FundingCreated>,
+	/// A `channel_ready` message that should be sent to the peer. If present, it should be sent last.
 	pub channel_ready: Option<msgs::ChannelReady>,
 }
 
+#[allow(unused)]
+pub(super) struct UnfundedInboundV1SignerResumeUpdates {
+	pub accept_channel: Option<msgs::AcceptChannel>,
+}
+
+#[allow(unused)]
+pub(super) struct UnfundedOutboundV1SignerResumeUpdates {
+	pub open_channel: Option<msgs::OpenChannel>,
+}
+
 /// The return value of `channel_reestablish`
 pub(super) struct ReestablishResponses {
 	pub channel_ready: Option<msgs::ChannelReady>,
@@ -732,6 +761,14 @@ pub(super) struct ChannelContext<SP: Deref> where SP::Target: SignerProvider {
 	// cost of others, but should really just be changed.
 
 	cur_holder_commitment_transaction_number: u64,
+
+	// The commitment point corresponding to `cur_holder_commitment_transaction_number`, which is the
+	// *next* state. On initial channel construction, this value may be None, in which case that means
+	// that the first commitment point wasn't ready at the time that the channel needed to be created.
+	cur_holder_commitment_point: Option<PublicKey>,
+	// The commitment secret corresponding to `cur_holder_commitment_transaction_number + 2`, which is
+	// the *previous* state.
+	prev_holder_commitment_secret: Option<[u8; 32]>,
 	cur_counterparty_commitment_transaction_number: u64,
 	value_to_self_msat: u64, // Excluding all pending_htlcs, fees, and anchor outputs
 	pending_inbound_htlcs: Vec<InboundHTLCOutput>,
@@ -766,10 +803,22 @@ pub(super) struct ChannelContext<SP: Deref> where SP::Target: SignerProvider {
 	/// This flag is set in such a case. Note that we don't need to persist this as we'll end up
 	/// setting it again as a side-effect of [`Channel::channel_reestablish`].
 	signer_pending_commitment_update: bool,
+	/// Similar to [`Self::signer_pending_commitment_update`]: indicates that we've deferred sending a
+	/// `revoke_and_ack`, and should do so once the signer has become unblocked.
+	signer_pending_revoke_and_ack: bool,
 	/// Similar to [`Self::signer_pending_commitment_update`] but we're waiting to send either a
 	/// [`msgs::FundingCreated`] or [`msgs::FundingSigned`] depending on if this channel is
 	/// outbound or inbound.
 	signer_pending_funding: bool,
+	/// Similar to [`Self::signer_pending_commitment_update`] but we're waiting to send a
+	/// [`msgs::ChannelReady`].
+	signer_pending_channel_ready: bool,
+	/// If we attempted to retrieve the per-commitment point for the next transaction but the signer
+	/// wasn't ready, then this will be set to `true`.
+	signer_pending_commitment_point: bool,
+	/// If we attempted to release the per-commitment secret for the previous transaction but the
+	/// signer wasn't ready, then this will be set to `true`.
+	signer_pending_released_secret: bool,
 
 	// pending_update_fee is filled when sending and receiving update_fee.
 	//
@@ -1233,6 +1282,46 @@ impl<SP: Deref> ChannelContext<SP> where SP::Target: SignerProvider {
 		self.channel_ready_event_emitted = true;
 	}
 
+	/// Retrieves the next commitment point and previous commitment secret from the signer.
+	pub fn update_holder_per_commitment<L: Deref>(&mut self, logger: &L) where L::Target: Logger
+	{
+		let transaction_number = self.cur_holder_commitment_transaction_number;
+		let signer = self.holder_signer.as_ref();
+
+		log_trace!(logger, "Retrieving commitment point for {} transaction number {}", self.channel_id(), transaction_number);
+		self.cur_holder_commitment_point = match signer.get_per_commitment_point(transaction_number, &self.secp_ctx) {
+			Ok(point) => {
+				log_trace!(logger, "Commitment point for {} transaction number {} retrieved", self.channel_id(), transaction_number);
+				self.signer_pending_commitment_point = false;
+				Some(point)
+			}
+
+			Err(_) => {
+				log_trace!(logger, "Commitment point for {} transaction number {} is not available", self.channel_id(), transaction_number);
+				self.signer_pending_commitment_point = true;
+				None
+			}
+		};
+
+		let releasing_transaction_number = transaction_number + 2;
+		if releasing_transaction_number <= INITIAL_COMMITMENT_NUMBER {
+			log_trace!(logger, "Retrieving commitment secret for {} transaction number {}", self.channel_id(), releasing_transaction_number);
+			self.prev_holder_commitment_secret = match signer.release_commitment_secret(releasing_transaction_number) {
+				Ok(secret) => {
+					log_trace!(logger, "Commitment secret for {} transaction number {} retrieved", self.channel_id(), releasing_transaction_number);
+					self.signer_pending_released_secret = false;
+					Some(secret)
+				}
+
+				Err(_) => {
+					log_trace!(logger, "Commitment secret for {} transaction number {} is not available", self.channel_id(), releasing_transaction_number);
+					self.signer_pending_released_secret = true;
+					None
+				}
+			}
+		};
+	}
+
 	/// Tracks the number of ticks elapsed since the previous [`ChannelConfig`] was updated. Once
 	/// [`EXPIRE_PREV_CONFIG_TICKS`] is reached, the previous config is considered expired and will
 	/// no longer be considered when forwarding HTLCs.
@@ -1525,17 +1614,21 @@ impl<SP: Deref> ChannelContext<SP> where SP::Target: SignerProvider {
 	#[inline]
 	/// Creates a set of keys for build_commitment_transaction to generate a transaction which our
-	/// counterparty will sign (ie DO NOT send signatures over a transaction created by this to
-	/// our counterparty!)
+	/// counterparty will sign (ie DO NOT send signatures over a transaction created by this to our
+	/// counterparty!) The keys are specifically generated for the _next_ state to which the channel
+	/// is about to advance.
 	/// The result is a transaction which we can revoke broadcastership of (ie a "local" transaction)
 	/// TODO Some magic rust shit to compile-time check this?
-	fn build_holder_transaction_keys(&self, commitment_number: u64) -> TxCreationKeys {
-		let per_commitment_point = self.holder_signer.as_ref().get_per_commitment_point(commitment_number, &self.secp_ctx);
+	fn build_next_holder_transaction_keys(&self) -> TxCreationKeys {
 		let delayed_payment_base = &self.get_holder_pubkeys().delayed_payment_basepoint;
 		let htlc_basepoint = &self.get_holder_pubkeys().htlc_basepoint;
 		let counterparty_pubkeys = self.get_counterparty_pubkeys();
+		let cur_holder_commitment_point = self.cur_holder_commitment_point
+			.expect("Holder per-commitment point is not ready");
 
-		TxCreationKeys::derive_new(&self.secp_ctx, &per_commitment_point, delayed_payment_base, htlc_basepoint, &counterparty_pubkeys.revocation_basepoint, &counterparty_pubkeys.htlc_basepoint)
+		TxCreationKeys::derive_new(
+			&self.secp_ctx, &cur_holder_commitment_point, delayed_payment_base, htlc_basepoint,
+			&counterparty_pubkeys.revocation_basepoint, &counterparty_pubkeys.htlc_basepoint)
 	}
 
 	#[inline]
@@ -2120,7 +2213,7 @@ impl<SP: Deref> ChannelContext<SP> where SP::Target: SignerProvider {
 			log_trace!(logger, "Counterparty commitment signature ready for funding_created message: clearing signer_pending_funding");
 			self.signer_pending_funding = false;
 		}
-		
+
 		Some(msgs::FundingCreated {
 			temporary_channel_id: self.temporary_channel_id.unwrap(),
 			funding_txid: self.channel_transaction_parameters.funding_outpoint.as_ref().unwrap().txid,
@@ -2680,7 +2773,10 @@ impl<SP: Deref> Channel<SP> where
 		log_trace!(logger, "Initial counterparty tx for channel {} is: txid {} tx {}",
 			&self.context.channel_id(), counterparty_initial_bitcoin_tx.txid, encode::serialize_hex(&counterparty_initial_bitcoin_tx.transaction));
 
-		let holder_signer = self.context.build_holder_transaction_keys(self.context.cur_holder_commitment_transaction_number);
+		// N.B. we'll have acquired the first per-commitment point from the signer during channel
+		// creation. Verify that the signature from the counterparty is correct so that we've got our
+		// signed refund transaction if we need to immediately close.
+		let holder_signer = self.context.build_next_holder_transaction_keys();
 		let initial_commitment_tx = self.context.build_commitment_transaction(self.context.cur_holder_commitment_transaction_number, &holder_signer, true, false, logger).tx;
 		{
 			let trusted_tx = initial_commitment_tx.trust();
@@ -2703,7 +2799,6 @@ impl<SP: Deref> Channel<SP> where
 		self.context.holder_signer.as_ref().validate_holder_commitment(&holder_commitment_tx, Vec::new())
 			.map_err(|_| ChannelError::Close("Failed to validate our commitment".to_owned()))?;
-
 		let funding_redeemscript = self.context.get_funding_redeemscript();
 		let funding_txo = self.context.get_funding_txo().unwrap();
 		let funding_txo_script = funding_redeemscript.to_v0_p2wsh();
@@ -2734,11 +2829,13 @@ impl<SP: Deref> Channel<SP> where
 			self.context.channel_state = ChannelState::FundingSent as u32;
 		}
 		self.context.cur_holder_commitment_transaction_number -= 1;
+		self.context.update_holder_per_commitment(logger);
 		self.context.cur_counterparty_commitment_transaction_number -= 1;
 
 		log_info!(logger, "Received funding_signed from peer for channel {}", &self.context.channel_id());
 
 		let need_channel_ready = self.check_get_channel_ready(0).is_some();
+		log_trace!(logger, "funding_signed {} channel_ready", if need_channel_ready { "needs" } else { "does not need" });
 		self.monitor_updating_paused(false, false, need_channel_ready, Vec::new(), Vec::new(), Vec::new());
 		Ok(channel_monitor)
 	}
@@ -3081,7 +3178,7 @@ impl<SP: Deref> Channel<SP> where
 
 		let funding_script = self.context.get_funding_redeemscript();
 
-		let keys = self.context.build_holder_transaction_keys(self.context.cur_holder_commitment_transaction_number);
+		let keys = self.context.build_next_holder_transaction_keys();
 		let commitment_stats = self.context.build_commitment_transaction(self.context.cur_holder_commitment_transaction_number, &keys, true, false, logger);
 		let commitment_txid = {
@@ -3245,8 +3342,11 @@
 		};
 
 		self.context.cur_holder_commitment_transaction_number -= 1;
+		self.context.update_holder_per_commitment(logger);
+
 		// Note that if we need_commitment & !AwaitingRemoteRevoke we'll call
 		// build_commitment_no_status_check() next which will reset this to RAAFirst.
+		log_debug!(logger, "setting resend_order to CommitmentFirst");
 		self.context.resend_order = RAACommitmentOrder::CommitmentFirst;
 
 		if (self.context.channel_state & ChannelState::MonitorUpdateInProgress as u32) != 0 {
@@ -3724,7 +3824,7 @@ impl<SP: Deref> Channel<SP> where
 		// Before proposing a feerate update, check that we can actually afford the new fee.
 		let inbound_stats = self.context.get_inbound_pending_htlc_stats(Some(feerate_per_kw));
 		let outbound_stats = self.context.get_outbound_pending_htlc_stats(Some(feerate_per_kw));
-		let keys = self.context.build_holder_transaction_keys(self.context.cur_holder_commitment_transaction_number);
+		let keys = self.context.build_next_holder_transaction_keys();
 		let commitment_stats = self.context.build_commitment_transaction(self.context.cur_holder_commitment_transaction_number, &keys, true, true, logger);
 		let buffer_fee_msat = commit_tx_fee_sat(feerate_per_kw, commitment_stats.num_nondust_htlcs + outbound_stats.on_holder_tx_holding_cell_htlcs_count as usize + CONCURRENT_INBOUND_HTLC_FEE_BUFFER as usize, self.context.get_channel_type()) * 1000;
 		let holder_balance_msat = commitment_stats.local_balance_msat - outbound_stats.holding_cell_msat;
@@ -3906,11 +4006,10 @@ impl<SP: Deref> Channel<SP> where
 			assert!(!self.context.is_outbound() || self.context.minimum_depth == Some(0),
 				"Funding transaction broadcast by the local client before it should have - LDK didn't do it!");
 			self.context.monitor_pending_channel_ready = false;
-			let next_per_commitment_point = self.context.holder_signer.as_ref().get_per_commitment_point(self.context.cur_holder_commitment_transaction_number, &self.context.secp_ctx);
-			Some(msgs::ChannelReady {
-				channel_id: self.context.channel_id(),
-				next_per_commitment_point,
-				short_channel_id_alias: Some(self.context.outbound_scid_alias),
+			self.get_channel_ready().or_else(|| {
+				log_trace!(logger, "Monitor was pending channel_ready with no commitment point available; setting signer_pending_channel_ready = true");
+				self.context.signer_pending_channel_ready = true;
+				None
 			})
 		} else { None };
@@ -3933,7 +4032,10 @@ impl<SP: Deref> Channel<SP> where
 		}
 
 		let raa = if self.context.monitor_pending_revoke_and_ack {
-			Some(self.get_last_revoke_and_ack())
+			self.get_last_revoke_and_ack(logger).or_else(|| {
+				self.context.signer_pending_revoke_and_ack = true;
+				None
+			})
 		} else { None };
 		let commitment_update = if self.context.monitor_pending_commitment_signed {
 			self.get_last_commitment_update_for_send(logger).ok()
@@ -3942,13 +4044,37 @@
 			self.mark_awaiting_response();
 		}
 
+		if self.context.monitor_pending_commitment_signed && commitment_update.is_none() {
+			log_debug!(logger, "Monitor was pending_commitment_signed with no commitment update available; setting signer_pending_commitment_update = true");
+			self.context.signer_pending_commitment_update = true;
+		} else {
+			// If the signer was pending a commitment update, but we happened to get one just now because
+			// the monitor retrieved it, then we can mark the signer as "not pending anymore".
+			if self.context.signer_pending_commitment_update && commitment_update.is_some() {
+				self.context.signer_pending_commitment_update = false;
+			}
+		}
+
+		if self.context.monitor_pending_revoke_and_ack && raa.is_none() {
+			log_debug!(logger, "Monitor was pending_revoke_and_ack with no RAA available; setting signer_pending_revoke_and_ack = true");
+			self.context.signer_pending_revoke_and_ack = true;
+		} else {
+			// If the signer was pending a RAA, but we happened to get one just now because the monitor
+			// retrieved it, then we can mark the signer as "not pending anymore".
+			if self.context.signer_pending_revoke_and_ack && raa.is_some() {
+				self.context.signer_pending_revoke_and_ack = false;
+			}
+		}
+
 		self.context.monitor_pending_revoke_and_ack = false;
 		self.context.monitor_pending_commitment_signed = false;
+
 		let order = self.context.resend_order.clone();
-		log_debug!(logger, "Restored monitor updating in channel {} resulting in {}{} commitment update and {} RAA, with {} first",
+		log_debug!(logger, "Restored monitor updating in channel {} resulting in {}{} commitment update and {} RAA{}",
 			&self.context.channel_id(), if funding_broadcastable.is_some() { "a funding broadcastable, " } else { "" },
 			if commitment_update.is_some() { "a" } else { "no" },
 			if raa.is_some() { "an" } else { "no" },
-			match order { RAACommitmentOrder::CommitmentFirst => "commitment", RAACommitmentOrder::RevokeAndACKFirst => "RAA"});
+			if commitment_update.is_some() && raa.is_some() {
+				match order { RAACommitmentOrder::CommitmentFirst => ", with commitment first", RAACommitmentOrder::RevokeAndACKFirst => ", with RAA first"}
+			} else { "" });
 
 		MonitorRestoreUpdates {
 			raa, commitment_update, order, accepted_htlcs, failed_htlcs, finalized_claimed_htlcs,
 			funding_broadcastable, channel_ready, announcement_sigs
 		}
@@ -3993,45 +4119,147 @@ impl<SP: Deref> Channel<SP> where
 	/// blocked.
 	#[allow(unused)]
 	pub fn signer_maybe_unblocked<L: Deref>(&mut self, logger: &L) -> SignerResumeUpdates where L::Target: Logger {
-		let commitment_update = if self.context.signer_pending_commitment_update {
-			self.get_last_commitment_update_for_send(logger).ok()
-		} else { None };
+		log_trace!(logger, "Signing unblocked in channel {} at sequence {}",
+			&self.context.channel_id(), self.context.cur_holder_commitment_transaction_number);
+
+		if self.context.signer_pending_commitment_point || self.context.signer_pending_released_secret {
+			log_trace!(logger, "Attempting to update holder per-commitment for pending commitment point and secret...");
+			self.context.update_holder_per_commitment(logger);
+		}
+
+		// Make sure that we honor any ordering requirements between the commitment update and revoke-and-ack.
+		let (commitment_update, raa) = match &self.context.resend_order {
+			RAACommitmentOrder::CommitmentFirst => {
+				let cu = if self.context.signer_pending_commitment_update {
+					log_trace!(logger, "Attempting to generate pending commitment update...");
+					self.get_last_commitment_update_for_send(logger).map(|cu| {
+						self.context.signer_pending_commitment_update = false;
+						cu
+					}).ok()
+				} else { None };
+
+				let raa = if self.context.signer_pending_revoke_and_ack && !self.context.signer_pending_commitment_update {
+					log_trace!(logger, "Attempting to generate pending RAA...");
+					self.get_last_revoke_and_ack(logger).map(|raa| {
+						self.context.signer_pending_revoke_and_ack = false;
+						raa
+					})
+				} else { None };
+
+				(cu, raa)
+			}
+
+			RAACommitmentOrder::RevokeAndACKFirst => {
+				let raa = if self.context.signer_pending_revoke_and_ack {
+					log_trace!(logger, "Attempting to generate pending RAA...");
+					self.get_last_revoke_and_ack(logger).map(|raa| {
+						self.context.signer_pending_revoke_and_ack = false;
+						raa
+					})
+				} else { None };
+
+				let cu = if self.context.signer_pending_commitment_update && !self.context.signer_pending_revoke_and_ack {
+					log_trace!(logger, "Attempting to generate pending commitment update...");
+					self.get_last_commitment_update_for_send(logger).map(|cu| {
+						self.context.signer_pending_commitment_update = false;
+						cu
+					}).ok()
+				} else { None };
+
+				(cu, raa)
+			}
+		};
+
 		let funding_signed = if self.context.signer_pending_funding && !self.context.is_outbound() {
+			log_trace!(logger, "Attempting to generate pending funding signed...");
 			self.context.get_funding_signed_msg(logger).1
 		} else { None };
-		let channel_ready = if funding_signed.is_some() {
-			self.check_get_channel_ready(0)
-		} else { None };
 		let funding_created = if self.context.signer_pending_funding && self.context.is_outbound() {
+			log_trace!(logger, "Attempting to generate pending funding created...");
 			self.context.get_funding_created_msg(logger)
 		} else { None };
-		log_trace!(logger, "Signer unblocked with {} commitment_update, {} funding_signed, {} funding_created, and {} channel_ready",
-			if commitment_update.is_some() { "a" } else { "no" },
-			if funding_signed.is_some() { "a" } else { "no" },
-			if funding_created.is_some() { "a" } else { "no" },
-			if channel_ready.is_some() { "a" } else { "no" });
-
+		// Don't yield up a `channel_ready` message if we're still pending funding.
+		let channel_ready = if self.context.signer_pending_channel_ready && !self.context.signer_pending_funding {
+			log_trace!(logger, "Attempting to generate pending channel ready...");
+			self.get_channel_ready().map(|msg| {
+				self.context.signer_pending_channel_ready = false;
+				msg
+			})
+		} else { None };
+
+		let order = self.context.resend_order.clone();
+
+		log_debug!(logger, "Signing unblocked in channel {} at sequence {} resulted in {} commitment update, {} RAA{}, {} funding signed, {} funding created, {} channel ready",
+			&self.context.channel_id(), self.context.cur_holder_commitment_transaction_number,
+			if commitment_update.is_some() { "a" } else { "no" },
+			if raa.is_some() { "an" } else { "no" },
+			if commitment_update.is_some() && raa.is_some() {
+				if order == RAACommitmentOrder::CommitmentFirst { " (commitment first)" } else { " (RAA first)" }
+			} else { "" },
+			if funding_signed.is_some() { "a" } else { "no" },
+			if funding_created.is_some() { "a" } else { "no" },
+			if channel_ready.is_some() { "a" } else { "no" });
+
 		SignerResumeUpdates {
 			commitment_update,
+			raa,
+			order,
 			funding_signed,
 			funding_created,
 			channel_ready,
 		}
 	}
 
-	fn get_last_revoke_and_ack(&self) -> msgs::RevokeAndACK {
-		let next_per_commitment_point = self.context.holder_signer.as_ref().get_per_commitment_point(self.context.cur_holder_commitment_transaction_number, &self.context.secp_ctx);
-		let per_commitment_secret = self.context.holder_signer.as_ref().release_commitment_secret(self.context.cur_holder_commitment_transaction_number + 2);
-		msgs::RevokeAndACK {
-			channel_id: self.context.channel_id,
-			per_commitment_secret,
-			next_per_commitment_point,
-			#[cfg(taproot)]
-			next_local_nonce: None,
+	fn get_last_revoke_and_ack<L: Deref>(&self, logger: &L) -> Option<msgs::RevokeAndACK> where L::Target: Logger {
+		assert!(self.context.cur_holder_commitment_transaction_number <= INITIAL_COMMITMENT_NUMBER + 2);
+		match (self.context.cur_holder_commitment_point, self.context.prev_holder_commitment_secret) {
+			(Some(next_per_commitment_point), Some(per_commitment_secret)) => {
+				log_debug!(logger, "Regenerated last revoke-and-ack in channel {} for next per-commitment point sequence number {}, releasing secret for {}",
+					&self.context.channel_id(), self.context.cur_holder_commitment_transaction_number,
+					self.context.cur_holder_commitment_transaction_number + 2);
+
+				Some(msgs::RevokeAndACK {
+					channel_id: self.context.channel_id,
+					per_commitment_secret,
+					next_per_commitment_point,
+					#[cfg(taproot)]
+					next_local_nonce: None,
+				})
+			},
+
+			(Some(_), None) => {
+				log_debug!(logger, "Last revoke-and-ack pending in channel {} for sequence {} because the secret for {} is not available",
+					&self.context.channel_id(), self.context.cur_holder_commitment_transaction_number,
+					self.context.cur_holder_commitment_transaction_number + 2);
+				None
+			},
+
+			(None, Some(_)) => {
+				log_debug!(logger, "Last revoke-and-ack pending in channel {} for sequence {} because the next per-commitment point is not available",
+					&self.context.channel_id(), self.context.cur_holder_commitment_transaction_number);
+				None
+			},
+
+			(None, None) => {
+				log_debug!(logger, "Last revoke-and-ack pending in channel {} for sequence {} because neither the next per-commitment point nor the secret for {} is available",
+					&self.context.channel_id(), self.context.cur_holder_commitment_transaction_number,
+					self.context.cur_holder_commitment_transaction_number + 2);
+				None
+			},
 		}
 	}
 
+	fn get_channel_ready(&self) -> Option<msgs::ChannelReady> {
+		self.context.cur_holder_commitment_point.map(|next_per_commitment_point| {
+			msgs::ChannelReady {
+				channel_id: self.context.channel_id(),
+				next_per_commitment_point,
+				short_channel_id_alias: Some(self.context.outbound_scid_alias),
+			}
+		})
+	}
+
 	/// Gets the last commitment update for immediate sending to our peer.
 	fn get_last_commitment_update_for_send<L: Deref>(&mut self, logger: &L) -> Result<msgs::CommitmentUpdate, ()> where L::Target: Logger {
 		let mut update_add_htlcs = Vec::new();
@@ -4089,10 +4317,6 @@
 			})
 		} else { None };
 
-		log_trace!(logger, "Regenerating latest commitment update in channel {} with{} {} update_adds, {} update_fulfills, {} update_fails, and {} update_fail_malformeds",
-			&self.context.channel_id(), if update_fee.is_some() { " update_fee," } else { "" },
-			update_add_htlcs.len(), update_fulfill_htlcs.len(), update_fail_htlcs.len(), update_fail_malformed_htlcs.len());
-
 		let commitment_signed = if let Ok(update) = self.send_commitment_no_state_update(logger).map(|(cu, _)| cu) {
 			if self.context.signer_pending_commitment_update {
 				log_trace!(logger, "Commitment update generated: clearing signer_pending_commitment_update");
@@ -4106,6 +4330,9 @@
 			}
 			return Err(());
 		};
+		log_debug!(logger, "Regenerated latest commitment update in channel {} at {} with{} {} update_adds, {} update_fulfills, {} update_fails, and {} update_fail_malformeds",
+			&self.context.channel_id(), self.context.cur_holder_commitment_transaction_number, if update_fee.is_some() { " update_fee," } else { "" },
+			update_add_htlcs.len(), update_fulfill_htlcs.len(), update_fail_htlcs.len(), update_fail_malformed_htlcs.len());
 		Ok(msgs::CommitmentUpdate {
 			update_add_htlcs, update_fulfill_htlcs, update_fail_htlcs, update_fail_malformed_htlcs, update_fee,
 			commitment_signed,
@@ -4151,11 +4378,19 @@ impl<SP: Deref> Channel<SP> where
 		}
 
 		if msg.next_remote_commitment_number > 0 {
-			let expected_point = self.context.holder_signer.as_ref().get_per_commitment_point(INITIAL_COMMITMENT_NUMBER - msg.next_remote_commitment_number + 1, &self.context.secp_ctx);
-			let given_secret = SecretKey::from_slice(&msg.your_last_per_commitment_secret)
-				.map_err(|_| ChannelError::Close("Peer sent a garbage channel_reestablish with unparseable secret key".to_owned()))?;
-			if expected_point != PublicKey::from_secret_key(&self.context.secp_ctx, &given_secret) {
-				return Err(ChannelError::Close("Peer sent a garbage channel_reestablish with secret key not matching the commitment height provided".to_owned()));
+			// TODO(waterson): figure out how to do this verification when an async signer is provided
+			// with a (more or less) arbitrary state index. Should we require that an async signer cache
+			// old points? Or should we make it so that we can restart the re-establish after the signer
+			// becomes unblocked? Or something else?
+			if false {
+				let state_index = INITIAL_COMMITMENT_NUMBER - msg.next_remote_commitment_number + 1;
+				let expected_point = self.context.holder_signer.as_ref().get_per_commitment_point(state_index, &self.context.secp_ctx)
+					.map_err(|_| ChannelError::Close(format!("Unable to retrieve per-commitment point for state {}", state_index)))?;
+				let given_secret = SecretKey::from_slice(&msg.your_last_per_commitment_secret)
+					.map_err(|_| ChannelError::Close("Peer sent a garbage channel_reestablish with unparseable secret key".to_owned()))?;
+				if expected_point != PublicKey::from_secret_key(&self.context.secp_ctx, &given_secret) {
+					return Err(ChannelError::Close("Peer sent a garbage channel_reestablish with secret key not matching the commitment height provided".to_owned()));
+				}
 			}
 		}
 		if msg.next_remote_commitment_number > INITIAL_COMMITMENT_NUMBER - self.context.cur_holder_commitment_transaction_number {
 			macro_rules! log_and_panic {
@@ -4210,13 +4445,13 @@ impl<SP: Deref> Channel<SP> where
 			}
 
 			// We have OurChannelReady set!
-			let next_per_commitment_point = self.context.holder_signer.as_ref().get_per_commitment_point(self.context.cur_holder_commitment_transaction_number, &self.context.secp_ctx);
+			let channel_ready = self.get_channel_ready();
+			if channel_ready.is_none() {
+				self.context.signer_pending_channel_ready = true;
+			}
+
 			return Ok(ReestablishResponses {
-				channel_ready: Some(msgs::ChannelReady {
-					channel_id: self.context.channel_id(),
-					next_per_commitment_point,
-					short_channel_id_alias: Some(self.context.outbound_scid_alias),
-				}),
+				channel_ready,
 				raa: None, commitment_update: None,
 				order: RAACommitmentOrder::CommitmentFirst,
 				shutdown_msg, announcement_sigs,
@@ -4232,7 +4467,13 @@ impl<SP: Deref> Channel<SP> where
 				self.context.monitor_pending_revoke_and_ack = true;
 				None
 			} else {
-				Some(self.get_last_revoke_and_ack())
+				self.get_last_revoke_and_ack(logger).map(|raa| {
+					self.context.signer_pending_revoke_and_ack = false;
+					raa
+				}).or_else(|| {
+					self.context.signer_pending_revoke_and_ack = true;
+					None
+				})
 			}
 		} else {
 			return Err(ChannelError::Close("Peer attempted to reestablish channel with a very old local commitment transaction".to_owned()));
@@ -4250,11 +4491,9 @@ impl<SP: Deref> Channel<SP> where
 
 		let channel_ready = if msg.next_local_commitment_number == 1 && INITIAL_COMMITMENT_NUMBER - self.context.cur_holder_commitment_transaction_number == 1 {
 			// We should never have to worry about MonitorUpdateInProgress resending ChannelReady
-			let next_per_commitment_point = self.context.holder_signer.as_ref().get_per_commitment_point(self.context.cur_holder_commitment_transaction_number, &self.context.secp_ctx);
-			Some(msgs::ChannelReady {
-				channel_id: self.context.channel_id(),
-				next_per_commitment_point,
-				short_channel_id_alias: Some(self.context.outbound_scid_alias),
+			self.get_channel_ready().or_else(|| {
+				self.context.signer_pending_channel_ready = true;
+				None
 			})
 		} else { None };
 
@@ -4956,13 +5195,14 @@ impl<SP: Deref> Channel<SP> where
 		if need_commitment_update {
 			if self.context.channel_state & (ChannelState::MonitorUpdateInProgress as u32) == 0 {
 				if self.context.channel_state & (ChannelState::PeerDisconnected as u32) == 0 {
-					let next_per_commitment_point =
-						self.context.holder_signer.as_ref().get_per_commitment_point(INITIAL_COMMITMENT_NUMBER - 1, &self.context.secp_ctx);
-					return Some(msgs::ChannelReady {
-						channel_id: self.context.channel_id,
-						next_per_commitment_point,
-						short_channel_id_alias: Some(self.context.outbound_scid_alias),
-					});
+					if let Ok(next_per_commitment_point) =
+						self.context.holder_signer.as_ref().get_per_commitment_point(INITIAL_COMMITMENT_NUMBER - 1, &self.context.secp_ctx) {
+						return Some(msgs::ChannelReady {
+							channel_id: self.context.channel_id,
+							next_per_commitment_point,
+							short_channel_id_alias: Some(self.context.outbound_scid_alias),
+						});
+					}
+					self.context.signer_pending_channel_ready = true;
 				}
 			} else {
 				self.context.monitor_pending_channel_ready = true;
@@ -5578,6 +5818,7 @@ impl<SP: Deref> Channel<SP> where
 				self.context.pending_update_fee = None;
 			}
 		}
+		log_debug!(logger, "setting resend_order to RevokeAndACKFirst");
 		self.context.resend_order = RAACommitmentOrder::RevokeAndACKFirst;
 
 		let (mut htlcs_ref, counterparty_commitment_tx) =
@@ -5850,6 +6091,7 @@ impl<SP: Deref> Channel<SP> where
 pub(super) struct OutboundV1Channel<SP: Deref> where SP::Target: SignerProvider {
 	pub context: ChannelContext<SP>,
 	pub unfunded_context: UnfundedChannelContext,
+	pub signer_pending_open_channel: bool,
 }
 
 impl<SP: Deref> OutboundV1Channel<SP> where SP::Target: SignerProvider {
@@ -5925,6 +6167,8 @@ impl<SP: Deref> OutboundV1Channel<SP> where SP::Target: SignerProvider {
 
 		let temporary_channel_id = ChannelId::temporary_from_entropy_source(entropy_source);
 
+		let cur_holder_commitment_point = holder_signer.get_per_commitment_point(INITIAL_COMMITMENT_NUMBER, &secp_ctx).ok();
+
 		Ok(Self {
 			context: ChannelContext {
 				user_id,
@@ -5953,6 +6197,8 @@ impl<SP: Deref> OutboundV1Channel<SP> where SP::Target: SignerProvider {
 				destination_script,
 
 				cur_holder_commitment_transaction_number: INITIAL_COMMITMENT_NUMBER,
+				cur_holder_commitment_point,
+				prev_holder_commitment_secret: None,
 				cur_counterparty_commitment_transaction_number: INITIAL_COMMITMENT_NUMBER,
 				value_to_self_msat,
 
@@ -5975,7 +6221,11 @@ impl<SP: Deref> OutboundV1Channel<SP> where SP::Target: SignerProvider {
 				monitor_pending_finalized_fulfills: Vec::new(),
 
 				signer_pending_commitment_update: false,
+				signer_pending_revoke_and_ack: false,
 				signer_pending_funding: false,
+				signer_pending_channel_ready: false,
+				signer_pending_commitment_point: false,
+				signer_pending_released_secret: false,
 
 				#[cfg(debug_assertions)]
 				holder_max_commitment_tx_output: Mutex::new((channel_value_satoshis * 1000 - push_msat, push_msat)),
@@ -6053,7 +6303,8 @@ impl<SP: Deref> OutboundV1Channel<SP> where SP::Target: SignerProvider {
 
 				blocked_monitor_updates: Vec::new(),
 			},
-			unfunded_context: UnfundedChannelContext { unfunded_channel_age_ticks: 0 }
+			unfunded_context: UnfundedChannelContext { unfunded_channel_age_ticks: 0 },
+			signer_pending_open_channel: false,
 		})
 	}
 
@@ -6140,7 +6391,7 @@ impl<SP: Deref> OutboundV1Channel<SP> where SP::Target: SignerProvider {
 	/// and see if we get a new `OpenChannel` message, otherwise the channel is failed.
pub(crate) fn maybe_handle_error_without_close( &mut self, chain_hash: ChainHash, fee_estimator: &LowerBoundedFeeEstimator - ) -> Result + ) -> Result, ()> where F::Target: FeeEstimator { @@ -6168,10 +6419,14 @@ impl OutboundV1Channel where SP::Target: SignerProvider { self.context.channel_type = ChannelTypeFeatures::only_static_remote_key(); } self.context.channel_transaction_parameters.channel_type_features = self.context.channel_type.clone(); - Ok(self.get_open_channel(chain_hash)) + let opt_msg = self.get_open_channel(chain_hash); + if opt_msg.is_none() { + self.signer_pending_open_channel = true; + } + Ok(opt_msg) } - pub fn get_open_channel(&self, chain_hash: ChainHash) -> msgs::OpenChannel { + pub fn get_open_channel(&self, chain_hash: ChainHash) -> Option { if !self.context.is_outbound() { panic!("Tried to open a channel for an inbound channel?"); } @@ -6183,34 +6438,35 @@ impl OutboundV1Channel where SP::Target: SignerProvider { panic!("Tried to send an open_channel for a channel that has already advanced"); } - let first_per_commitment_point = self.context.holder_signer.as_ref().get_per_commitment_point(self.context.cur_holder_commitment_transaction_number, &self.context.secp_ctx); let keys = self.context.get_holder_pubkeys(); - msgs::OpenChannel { - chain_hash, - temporary_channel_id: self.context.channel_id, - funding_satoshis: self.context.channel_value_satoshis, - push_msat: self.context.channel_value_satoshis * 1000 - self.context.value_to_self_msat, - dust_limit_satoshis: self.context.holder_dust_limit_satoshis, - max_htlc_value_in_flight_msat: self.context.holder_max_htlc_value_in_flight_msat, - channel_reserve_satoshis: self.context.holder_selected_channel_reserve_satoshis, - htlc_minimum_msat: self.context.holder_htlc_minimum_msat, - feerate_per_kw: self.context.feerate_per_kw as u32, - to_self_delay: self.context.get_holder_selected_contest_delay(), - max_accepted_htlcs: self.context.holder_max_accepted_htlcs, - funding_pubkey: keys.funding_pubkey, - revocation_basepoint: keys.revocation_basepoint, - payment_point: keys.payment_point, - delayed_payment_basepoint: keys.delayed_payment_basepoint, - htlc_basepoint: keys.htlc_basepoint, - first_per_commitment_point, - channel_flags: if self.context.config.announced_channel {1} else {0}, - shutdown_scriptpubkey: Some(match &self.context.shutdown_scriptpubkey { - Some(script) => script.clone().into_inner(), - None => Builder::new().into_script(), - }), - channel_type: Some(self.context.channel_type.clone()), - } + self.context.cur_holder_commitment_point.map(|first_per_commitment_point| { + msgs::OpenChannel { + chain_hash, + temporary_channel_id: self.context.channel_id, + funding_satoshis: self.context.channel_value_satoshis, + push_msat: self.context.channel_value_satoshis * 1000 - self.context.value_to_self_msat, + dust_limit_satoshis: self.context.holder_dust_limit_satoshis, + max_htlc_value_in_flight_msat: self.context.holder_max_htlc_value_in_flight_msat, + channel_reserve_satoshis: self.context.holder_selected_channel_reserve_satoshis, + htlc_minimum_msat: self.context.holder_htlc_minimum_msat, + feerate_per_kw: self.context.feerate_per_kw as u32, + to_self_delay: self.context.get_holder_selected_contest_delay(), + max_accepted_htlcs: self.context.holder_max_accepted_htlcs, + funding_pubkey: keys.funding_pubkey, + revocation_basepoint: keys.revocation_basepoint, + payment_point: keys.payment_point, + delayed_payment_basepoint: keys.delayed_payment_basepoint, + htlc_basepoint: keys.htlc_basepoint, + 
first_per_commitment_point, + channel_flags: if self.context.config.announced_channel {1} else {0}, + shutdown_scriptpubkey: Some(match &self.context.shutdown_scriptpubkey { + Some(script) => script.clone().into_inner(), + None => Builder::new().into_script(), + }), + channel_type: Some(self.context.channel_type.clone()), + } + }) } // Message handlers @@ -6343,12 +6599,31 @@ impl OutboundV1Channel where SP::Target: SignerProvider { Ok(()) } + + /// Indicates that the signer may have some signatures for us, so we should retry if we're + /// blocked. + #[allow(unused)] + pub fn signer_maybe_unblocked(&mut self, chain_hash: &ChainHash, logger: &L) -> UnfundedOutboundV1SignerResumeUpdates + where L::Target: Logger + { + let open_channel = if self.signer_pending_open_channel { + self.context.update_holder_per_commitment(logger); + self.get_open_channel(chain_hash.clone()).map(|msg| { + self.signer_pending_open_channel = false; + msg + }) + } else { None }; + UnfundedOutboundV1SignerResumeUpdates { + open_channel, + } + } } /// A not-yet-funded inbound (from counterparty) channel using V1 channel establishment. pub(super) struct InboundV1Channel where SP::Target: SignerProvider { pub context: ChannelContext, pub unfunded_context: UnfundedChannelContext, + pub signer_pending_accept_channel: bool, } impl InboundV1Channel where SP::Target: SignerProvider { @@ -6557,6 +6832,7 @@ impl InboundV1Channel where SP::Target: SignerProvider { } else { Some(cmp::max(config.channel_handshake_config.minimum_depth, 1)) }; + let cur_holder_commitment_point = holder_signer.get_per_commitment_point(INITIAL_COMMITMENT_NUMBER, &secp_ctx).ok(); let chan = Self { context: ChannelContext { @@ -6585,6 +6861,8 @@ impl InboundV1Channel where SP::Target: SignerProvider { destination_script, cur_holder_commitment_transaction_number: INITIAL_COMMITMENT_NUMBER, + cur_holder_commitment_point, + prev_holder_commitment_secret: None, cur_counterparty_commitment_transaction_number: INITIAL_COMMITMENT_NUMBER, value_to_self_msat: msg.push_msat, @@ -6607,7 +6885,11 @@ impl InboundV1Channel where SP::Target: SignerProvider { monitor_pending_finalized_fulfills: Vec::new(), signer_pending_commitment_update: false, + signer_pending_revoke_and_ack: false, signer_pending_funding: false, + signer_pending_channel_ready: false, + signer_pending_commitment_point: false, + signer_pending_released_secret: false, #[cfg(debug_assertions)] holder_max_commitment_tx_output: Mutex::new((msg.push_msat, msg.funding_satoshis * 1000 - msg.push_msat)), @@ -6689,7 +6971,8 @@ impl InboundV1Channel where SP::Target: SignerProvider { blocked_monitor_updates: Vec::new(), }, - unfunded_context: UnfundedChannelContext { unfunded_channel_age_ticks: 0 } + unfunded_context: UnfundedChannelContext { unfunded_channel_age_ticks: 0 }, + signer_pending_accept_channel: false, }; Ok(chan) @@ -6699,7 +6982,7 @@ impl InboundV1Channel where SP::Target: SignerProvider { /// should be sent back to the counterparty node. /// /// [`msgs::AcceptChannel`]: crate::ln::msgs::AcceptChannel - pub fn accept_inbound_channel(&mut self) -> msgs::AcceptChannel { + pub fn accept_inbound_channel(&mut self) -> Option { if self.context.is_outbound() { panic!("Tried to send accept_channel for an outbound channel?"); } @@ -6718,33 +7001,33 @@ impl InboundV1Channel where SP::Target: SignerProvider { /// [`InboundV1Channel::accept_inbound_channel`] instead. 
/// /// [`msgs::AcceptChannel`]: crate::ln::msgs::AcceptChannel - fn generate_accept_channel_message(&self) -> msgs::AcceptChannel { - let first_per_commitment_point = self.context.holder_signer.as_ref().get_per_commitment_point(self.context.cur_holder_commitment_transaction_number, &self.context.secp_ctx); - let keys = self.context.get_holder_pubkeys(); - - msgs::AcceptChannel { - temporary_channel_id: self.context.channel_id, - dust_limit_satoshis: self.context.holder_dust_limit_satoshis, - max_htlc_value_in_flight_msat: self.context.holder_max_htlc_value_in_flight_msat, - channel_reserve_satoshis: self.context.holder_selected_channel_reserve_satoshis, - htlc_minimum_msat: self.context.holder_htlc_minimum_msat, - minimum_depth: self.context.minimum_depth.unwrap(), - to_self_delay: self.context.get_holder_selected_contest_delay(), - max_accepted_htlcs: self.context.holder_max_accepted_htlcs, - funding_pubkey: keys.funding_pubkey, - revocation_basepoint: keys.revocation_basepoint, - payment_point: keys.payment_point, - delayed_payment_basepoint: keys.delayed_payment_basepoint, - htlc_basepoint: keys.htlc_basepoint, - first_per_commitment_point, - shutdown_scriptpubkey: Some(match &self.context.shutdown_scriptpubkey { - Some(script) => script.clone().into_inner(), - None => Builder::new().into_script(), - }), - channel_type: Some(self.context.channel_type.clone()), - #[cfg(taproot)] - next_local_nonce: None, - } + fn generate_accept_channel_message(&self) -> Option { + self.context.cur_holder_commitment_point.map(|first_per_commitment_point| { + let keys = self.context.get_holder_pubkeys(); + msgs::AcceptChannel { + temporary_channel_id: self.context.channel_id, + dust_limit_satoshis: self.context.holder_dust_limit_satoshis, + max_htlc_value_in_flight_msat: self.context.holder_max_htlc_value_in_flight_msat, + channel_reserve_satoshis: self.context.holder_selected_channel_reserve_satoshis, + htlc_minimum_msat: self.context.holder_htlc_minimum_msat, + minimum_depth: self.context.minimum_depth.unwrap(), + to_self_delay: self.context.get_holder_selected_contest_delay(), + max_accepted_htlcs: self.context.holder_max_accepted_htlcs, + funding_pubkey: keys.funding_pubkey, + revocation_basepoint: keys.revocation_basepoint, + payment_point: keys.payment_point, + delayed_payment_basepoint: keys.delayed_payment_basepoint, + htlc_basepoint: keys.htlc_basepoint, + first_per_commitment_point, + shutdown_scriptpubkey: Some(match &self.context.shutdown_scriptpubkey { + Some(script) => script.clone().into_inner(), + None => Builder::new().into_script(), + }), + channel_type: Some(self.context.channel_type.clone()), + #[cfg(taproot)] + next_local_nonce: None, + } + }) } /// Enables the possibility for tests to extract a [`msgs::AcceptChannel`] message for an @@ -6752,14 +7035,14 @@ impl InboundV1Channel where SP::Target: SignerProvider { /// /// [`msgs::AcceptChannel`]: crate::ln::msgs::AcceptChannel #[cfg(test)] - pub fn get_accept_channel_message(&self) -> msgs::AcceptChannel { + pub fn get_accept_channel_message(&self) -> Option { self.generate_accept_channel_message() } fn check_funding_created_signature(&mut self, sig: &Signature, logger: &L) -> Result where L::Target: Logger { let funding_script = self.context.get_funding_redeemscript(); - let keys = self.context.build_holder_transaction_keys(self.context.cur_holder_commitment_transaction_number); + let keys = self.context.build_next_holder_transaction_keys(); let initial_commitment_tx = 
self.context.build_commitment_transaction(self.context.cur_holder_commitment_transaction_number, &keys, true, false, logger).tx;
 		let trusted_tx = initial_commitment_tx.trust();
 		let initial_commitment_bitcoin_tx = trusted_tx.built_transaction();
@@ -6832,6 +7115,7 @@ impl<SP: Deref> InboundV1Channel<SP> where SP::Target: SignerProvider {
 		self.context.channel_id = funding_txo.to_channel_id();
 		self.context.cur_counterparty_commitment_transaction_number -= 1;
 		self.context.cur_holder_commitment_transaction_number -= 1;
+		self.context.update_holder_per_commitment(logger);
 
 		let (counterparty_initial_commitment_tx, funding_signed) = self.context.get_funding_signed_msg(logger);
@@ -6858,6 +7142,7 @@ impl<SP: Deref> InboundV1Channel<SP> where SP::Target: SignerProvider {
 		log_info!(logger, "{} funding_signed for peer for channel {}",
 			if funding_signed.is_some() { "Generated" } else { "Waiting for signature on" }, &self.context.channel_id());
+		let signer_pending_funding = self.context.signer_pending_funding;
 
 		// Promote the channel to a full-fledged one now that we have updated the state and have a
 		// `ChannelMonitor`.
@@ -6870,6 +7155,24 @@ impl<SP: Deref> InboundV1Channel<SP> where SP::Target: SignerProvider {
 		Ok((channel, funding_signed, channel_monitor))
 	}
+
+	/// Indicates that the signer may have some signatures for us, so we should retry if we're
+	/// blocked.
+	#[allow(unused)]
+	pub fn signer_maybe_unblocked<L: Deref>(&mut self, logger: &L) -> UnfundedInboundV1SignerResumeUpdates
+	where L::Target: Logger
+	{
+		let accept_channel = if self.signer_pending_accept_channel {
+			self.context.update_holder_per_commitment(logger);
+			self.generate_accept_channel_message().map(|msg| {
+				self.signer_pending_accept_channel = false;
+				msg
+			})
+		} else { None };
+		UnfundedInboundV1SignerResumeUpdates {
+			accept_channel,
+		}
+	}
 }
 
 const SERIALIZATION_VERSION: u8 = 3;
@@ -7248,6 +7551,14 @@ impl<SP: Deref> Writeable for Channel<SP> where SP::Target: SignerProvider {
 			(35, pending_outbound_skimmed_fees, optional_vec),
 			(37, holding_cell_skimmed_fees, optional_vec),
 			(38, self.context.is_batch_funding, option),
+			(39, self.context.cur_holder_commitment_point, option),
+			(41, self.context.prev_holder_commitment_secret, option),
+			(43, self.context.signer_pending_commitment_update, required),
+			(45, self.context.signer_pending_revoke_and_ack, required),
+			(47, self.context.signer_pending_funding, required),
+			(49, self.context.signer_pending_channel_ready, required),
+			(51, self.context.signer_pending_commitment_point, required),
+			(53, self.context.signer_pending_released_secret, required),
 		});
 
 		Ok(())
@@ -7530,9 +7841,18 @@ impl<'a, 'b, 'c, ES: Deref, SP: Deref> ReadableArgs<(&'a ES, &'b SP, u32, &'c Ch
 		let mut pending_outbound_skimmed_fees_opt: Option<Vec<Option<u64>>> = None;
 		let mut holding_cell_skimmed_fees_opt: Option<Vec<Option<u64>>> = None;
 
+		let mut cur_holder_commitment_point: Option<PublicKey> = None;
+		let mut prev_holder_commitment_secret: Option<[u8; 32]> = None;
 		let mut is_batch_funding: Option<()> = None;
 
+		let mut signer_pending_commitment_update: bool = false;
+		let mut signer_pending_revoke_and_ack: bool = false;
+		let mut signer_pending_funding: bool = false;
+		let mut signer_pending_channel_ready: bool = false;
+		let mut signer_pending_commitment_point: bool = false;
+		let mut signer_pending_released_secret: bool = false;
+
 		read_tlv_fields!(reader, {
 			(0, announcement_sigs, option),
 			(1, minimum_depth, option),
@@ -7559,6 +7879,14 @@ impl<'a, 'b, 'c, ES: Deref, SP: Deref> ReadableArgs<(&'a ES, &'b SP, u32, &'c Ch
 			(35, pending_outbound_skimmed_fees_opt, optional_vec),
 			(37, holding_cell_skimmed_fees_opt, optional_vec),
 			(38, is_batch_funding, option),
+			(39, cur_holder_commitment_point, option),
+			(41, prev_holder_commitment_secret, option),
+			(43, signer_pending_commitment_update, (default_value, false)),
+			(45, signer_pending_revoke_and_ack, (default_value, false)),
+			(47, signer_pending_funding, (default_value, false)),
+			(49, signer_pending_channel_ready, (default_value, false)),
+			(51, signer_pending_commitment_point, (default_value, false)),
+			(53, signer_pending_released_secret, (default_value, false)),
 		});
 
 		let (channel_keys_id, holder_signer) = if let Some(channel_keys_id) = channel_keys_id {
@@ -7610,6 +7938,25 @@ impl<'a, 'b, 'c, ES: Deref, SP: Deref> ReadableArgs<(&'a ES, &'b SP, u32, &'c Ch
 		let mut secp_ctx = Secp256k1::new();
 		secp_ctx.seeded_randomize(&entropy_source.get_secure_random_bytes());
 
+		// If we weren't able to load the cur_holder_commitment_point or prev_holder_commitment_secret,
+		// ask the signer for it now.
+		if cur_holder_commitment_point.is_none() {
+			cur_holder_commitment_point = holder_signer.get_per_commitment_point(
+				cur_holder_commitment_transaction_number, &secp_ctx
+			).ok();
+
+			signer_pending_commitment_point = cur_holder_commitment_point.is_none();
+		}
+
+		if prev_holder_commitment_secret.is_none() {
+			let release_transaction_number = cur_holder_commitment_transaction_number + 2;
+			prev_holder_commitment_secret = if release_transaction_number <= INITIAL_COMMITMENT_NUMBER {
+				let secret = holder_signer.release_commitment_secret(release_transaction_number).ok();
+				signer_pending_released_secret = secret.is_none();
+				secret
+			} else { None };
+		}
+
 		// `user_id` used to be a single u64 value. In order to remain backwards
 		// compatible with versions prior to 0.0.113, the u128 is serialized as two
 		// separate u64 values.
@@ -7662,6 +8009,8 @@ impl<'a, 'b, 'c, ES: Deref, SP: Deref> ReadableArgs<(&'a ES, &'b SP, u32, &'c Ch
 			destination_script,
 
 			cur_holder_commitment_transaction_number,
+			cur_holder_commitment_point,
+			prev_holder_commitment_secret,
 			cur_counterparty_commitment_transaction_number,
 			value_to_self_msat,
@@ -7679,8 +8028,12 @@ impl<'a, 'b, 'c, ES: Deref, SP: Deref> ReadableArgs<(&'a ES, &'b SP, u32, &'c Ch
 			monitor_pending_failures,
 			monitor_pending_finalized_fulfills: monitor_pending_finalized_fulfills.unwrap(),
 
-			signer_pending_commitment_update: false,
-			signer_pending_funding: false,
+			signer_pending_commitment_update,
+			signer_pending_revoke_and_ack,
+			signer_pending_funding,
+			signer_pending_channel_ready,
+			signer_pending_commitment_point,
+			signer_pending_released_secret,
 
 			pending_update_fee,
 			holding_cell_update_fee,
@@ -7910,7 +8263,7 @@ mod tests {
 		// Now change the fee so we can check that the fee in the open_channel message is the
 		// same as the old fee.
 		fee_est.fee_est = 500;
-		let open_channel_msg = node_a_chan.get_open_channel(ChainHash::using_genesis_block(network));
+		let open_channel_msg = node_a_chan.get_open_channel(ChainHash::using_genesis_block(network)).unwrap();
 		assert_eq!(open_channel_msg.feerate_per_kw, original_fee);
 	}
 
@@ -7936,12 +8289,12 @@ mod tests {
 		// Create Node B's channel by receiving Node A's open_channel message
 		// Make sure A's dust limit is as we expect.
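
One non-obvious detail in the reload path above: commitment numbers count down from INITIAL_COMMITMENT_NUMBER = (1 << 48) - 1, so the secret for the most recently revoked holder state lives at cur_holder_commitment_transaction_number + 2, and on a fresh channel (nothing revoked yet) that index exceeds INITIAL_COMMITMENT_NUMBER and there is nothing to release. In plain numbers:

	// Worked example of the index arithmetic above (constants as in this file).
	let initial = (1u64 << 48) - 1;      // INITIAL_COMMITMENT_NUMBER
	let cur = initial - 5;               // a channel that has advanced five states
	assert_eq!(cur + 2, initial - 3);    // the last-revoked state's number
	assert!(cur + 2 <= initial);         // so we re-request its secret on reload
	assert!(initial + 2 > initial);      // fresh channel: cur == initial, nothing to release
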
- let open_channel_msg = node_a_chan.get_open_channel(ChainHash::using_genesis_block(network)); + let open_channel_msg = node_a_chan.get_open_channel(ChainHash::using_genesis_block(network)).unwrap(); let node_b_node_id = PublicKey::from_secret_key(&secp_ctx, &SecretKey::from_slice(&[7; 32]).unwrap()); let mut node_b_chan = InboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, node_b_node_id, &channelmanager::provided_channel_type_features(&config), &channelmanager::provided_init_features(&config), &open_channel_msg, 7, &config, 0, &&logger, /*is_0conf=*/false).unwrap(); // Node B --> Node A: accept channel, explicitly setting B's dust limit. - let mut accept_channel_msg = node_b_chan.accept_inbound_channel(); + let mut accept_channel_msg = node_b_chan.accept_inbound_channel().unwrap(); accept_channel_msg.dust_limit_satoshis = 546; node_a_chan.accept_channel(&accept_channel_msg, &config.channel_handshake_limits, &channelmanager::provided_init_features(&config)).unwrap(); node_a_chan.context.holder_dust_limit_satoshis = 1560; @@ -8065,12 +8418,12 @@ mod tests { let mut node_a_chan = OutboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, node_b_node_id, &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42).unwrap(); // Create Node B's channel by receiving Node A's open_channel message - let open_channel_msg = node_a_chan.get_open_channel(chain_hash); + let open_channel_msg = node_a_chan.get_open_channel(chain_hash).unwrap(); let node_b_node_id = PublicKey::from_secret_key(&secp_ctx, &SecretKey::from_slice(&[7; 32]).unwrap()); let mut node_b_chan = InboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, node_b_node_id, &channelmanager::provided_channel_type_features(&config), &channelmanager::provided_init_features(&config), &open_channel_msg, 7, &config, 0, &&logger, /*is_0conf=*/false).unwrap(); // Node B --> Node A: accept channel - let accept_channel_msg = node_b_chan.accept_inbound_channel(); + let accept_channel_msg = node_b_chan.accept_inbound_channel().unwrap(); node_a_chan.accept_channel(&accept_channel_msg, &config.channel_handshake_limits, &channelmanager::provided_init_features(&config)).unwrap(); // Node A --> Node B: funding created @@ -8134,7 +8487,7 @@ mod tests { let chan_2_value_msat = chan_2.context.channel_value_satoshis * 1000; assert_eq!(chan_2.context.holder_max_htlc_value_in_flight_msat, (chan_2_value_msat as f64 * 0.99) as u64); - let chan_1_open_channel_msg = chan_1.get_open_channel(ChainHash::using_genesis_block(network)); + let chan_1_open_channel_msg = chan_1.get_open_channel(ChainHash::using_genesis_block(network)).unwrap(); // Test that `InboundV1Channel::new` creates a channel with the correct value for // `holder_max_htlc_value_in_flight_msat`, when configured with a valid percentage value, @@ -8215,7 +8568,7 @@ mod tests { let expected_outbound_selected_chan_reserve = cmp::max(MIN_THEIR_CHAN_RESERVE_SATOSHIS, (chan.context.channel_value_satoshis as f64 * outbound_selected_channel_reserve_perc) as u64); assert_eq!(chan.context.holder_selected_channel_reserve_satoshis, expected_outbound_selected_chan_reserve); - let chan_open_channel_msg = chan.get_open_channel(ChainHash::using_genesis_block(network)); + let chan_open_channel_msg = chan.get_open_channel(ChainHash::using_genesis_block(network)).unwrap(); let mut inbound_node_config = UserConfig::default(); 
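
These tests can simply `unwrap()` the handshake messages because `TestChannelSigner` starts with every operation available. Non-test callers have to branch on the `Option` instead; this sketch mirrors the shape used in the `channelmanager` changes later in this patch:

	// Caller-side handling of the now-fallible message constructors.
	let opt_msg = channel.get_open_channel(chain_hash);
	if opt_msg.is_none() {
		// Remember that the signer still owes us the first commitment point.
		channel.signer_pending_open_channel = true;
	}
	if let Some(msg) = opt_msg {
		pending_msg_events.push(events::MessageSendEvent::SendOpenChannel {
			node_id: their_network_key,
			msg,
		});
	}
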
inbound_node_config.channel_handshake_config.their_channel_reserve_proportional_millionths = (inbound_selected_channel_reserve_perc * 1_000_000.0) as u32; @@ -8251,12 +8604,12 @@ mod tests { // Create Node B's channel by receiving Node A's open_channel message // Make sure A's dust limit is as we expect. - let open_channel_msg = node_a_chan.get_open_channel(ChainHash::using_genesis_block(network)); + let open_channel_msg = node_a_chan.get_open_channel(ChainHash::using_genesis_block(network)).unwrap(); let node_b_node_id = PublicKey::from_secret_key(&secp_ctx, &SecretKey::from_slice(&[7; 32]).unwrap()); let mut node_b_chan = InboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, node_b_node_id, &channelmanager::provided_channel_type_features(&config), &channelmanager::provided_init_features(&config), &open_channel_msg, 7, &config, 0, &&logger, /*is_0conf=*/false).unwrap(); // Node B --> Node A: accept channel, explicitly setting B's dust limit. - let mut accept_channel_msg = node_b_chan.accept_inbound_channel(); + let mut accept_channel_msg = node_b_chan.accept_inbound_channel().unwrap(); accept_channel_msg.dust_limit_satoshis = 546; node_a_chan.accept_channel(&accept_channel_msg, &config.channel_handshake_limits, &channelmanager::provided_init_features(&config)).unwrap(); node_a_chan.context.holder_dust_limit_satoshis = 1560; @@ -8379,7 +8732,7 @@ mod tests { assert_eq!(counterparty_pubkeys.htlc_basepoint.serialize()[..], hex::decode("032c0b7cf95324a07d05398b240174dc0c2be444d96b159aa6c7f7b1e668680991").unwrap()[..]); - // We can't just use build_holder_transaction_keys here as the per_commitment_secret is not + // We can't just use build_next_holder_transaction_keys here as the per_commitment_secret is not // derived from a commitment_seed, so instead we copy it here and call // build_commitment_transaction. let delayed_payment_base = &chan.context.holder_signer.as_ref().pubkeys().delayed_payment_basepoint; @@ -9100,7 +9453,7 @@ mod tests { let mut channel_type_features = ChannelTypeFeatures::only_static_remote_key(); channel_type_features.set_zero_conf_required(); - let mut open_channel_msg = node_a_chan.get_open_channel(ChainHash::using_genesis_block(network)); + let mut open_channel_msg = node_a_chan.get_open_channel(ChainHash::using_genesis_block(network)).unwrap(); open_channel_msg.channel_type = Some(channel_type_features); let node_b_node_id = PublicKey::from_secret_key(&secp_ctx, &SecretKey::from_slice(&[7; 32]).unwrap()); let res = InboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, @@ -9143,7 +9496,7 @@ mod tests { &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42 ).unwrap(); - let open_channel_msg = channel_a.get_open_channel(ChainHash::using_genesis_block(network)); + let open_channel_msg = channel_a.get_open_channel(ChainHash::using_genesis_block(network)).unwrap(); let channel_b = InboundV1Channel::<&TestKeysInterface>::new( &fee_estimator, &&keys_provider, &&keys_provider, node_id_a, &channelmanager::provided_channel_type_features(&config), &channelmanager::provided_init_features(&config), @@ -9181,7 +9534,7 @@ mod tests { ).unwrap(); // Set `channel_type` to `None` to force the implicit feature negotiation. 
- let mut open_channel_msg = channel_a.get_open_channel(ChainHash::using_genesis_block(network)); + let mut open_channel_msg = channel_a.get_open_channel(ChainHash::using_genesis_block(network)).unwrap(); open_channel_msg.channel_type = None; // Since A supports both `static_remote_key` and `option_anchors`, but B only accepts @@ -9226,7 +9579,7 @@ mod tests { &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42 ).unwrap(); - let mut open_channel_msg = channel_a.get_open_channel(ChainHash::using_genesis_block(network)); + let mut open_channel_msg = channel_a.get_open_channel(ChainHash::using_genesis_block(network)).unwrap(); open_channel_msg.channel_type = Some(simple_anchors_channel_type.clone()); let res = InboundV1Channel::<&TestKeysInterface>::new( @@ -9245,7 +9598,7 @@ mod tests { 10000000, 100000, 42, &config, 0, 42 ).unwrap(); - let open_channel_msg = channel_a.get_open_channel(ChainHash::using_genesis_block(network)); + let open_channel_msg = channel_a.get_open_channel(ChainHash::using_genesis_block(network)).unwrap(); let channel_b = InboundV1Channel::<&TestKeysInterface>::new( &fee_estimator, &&keys_provider, &&keys_provider, node_id_a, @@ -9253,7 +9606,7 @@ mod tests { &open_channel_msg, 7, &config, 0, &&logger, /*is_0conf=*/false ).unwrap(); - let mut accept_channel_msg = channel_b.get_accept_channel_message(); + let mut accept_channel_msg = channel_b.get_accept_channel_message().unwrap(); accept_channel_msg.channel_type = Some(simple_anchors_channel_type.clone()); let res = channel_a.accept_channel( @@ -9303,7 +9656,7 @@ mod tests { node_b_node_id, &channelmanager::provided_channel_type_features(&config), &channelmanager::provided_init_features(&config), - &open_channel_msg, + &open_channel_msg.unwrap(), 7, &config, 0, @@ -9313,7 +9666,7 @@ mod tests { let accept_channel_msg = node_b_chan.accept_inbound_channel(); node_a_chan.accept_channel( - &accept_channel_msg, + &accept_channel_msg.unwrap(), &config.channel_handshake_limits, &channelmanager::provided_init_features(&config), ).unwrap(); diff --git a/lightning/src/ln/channelmanager.rs b/lightning/src/ln/channelmanager.rs index 3ca3af7c7..f3a89f824 100644 --- a/lightning/src/ln/channelmanager.rs +++ b/lightning/src/ln/channelmanager.rs @@ -2415,7 +2415,7 @@ where .ok_or_else(|| APIError::APIMisuseError{ err: format!("Not connected to node: {}", their_network_key) })?; let mut peer_state = peer_state_mutex.lock().unwrap(); - let channel = { + let mut channel = { let outbound_scid_alias = self.create_and_insert_outbound_scid_alias(); let their_features = &peer_state.latest_features; let config = if override_config.is_some() { override_config.as_ref().unwrap() } else { &self.default_configuration }; @@ -2430,7 +2430,10 @@ where }, } }; - let res = channel.get_open_channel(self.chain_hash); + let opt_msg = channel.get_open_channel(self.chain_hash); + if opt_msg.is_none() { + channel.signer_pending_open_channel = true; + } let temporary_channel_id = channel.context.channel_id(); match peer_state.channel_by_id.entry(temporary_channel_id) { @@ -2444,10 +2447,13 @@ where hash_map::Entry::Vacant(entry) => { entry.insert(ChannelPhase::UnfundedOutboundV1(channel)); } } - peer_state.pending_msg_events.push(events::MessageSendEvent::SendOpenChannel { - node_id: their_network_key, - msg: res, - }); + if let Some(msg) = opt_msg { + peer_state.pending_msg_events.push(events::MessageSendEvent::SendOpenChannel { + node_id: their_network_key, + msg, + }); + }; + Ok(temporary_channel_id) } @@ -5869,6 +5875,12 @@ 
where emit_channel_ready_event!(pending_events, channel); } + + log_debug!(self.logger, "Outgoing message queue is:"); + for msg in pending_msg_events.iter() { + log_debug!(self.logger, " {:?}", msg); + } + htlc_forwards } @@ -6019,10 +6031,14 @@ where let outbound_scid_alias = self.create_and_insert_outbound_scid_alias(); channel.context.set_outbound_scid_alias(outbound_scid_alias); - peer_state.pending_msg_events.push(events::MessageSendEvent::SendAcceptChannel { - node_id: channel.context.get_counterparty_node_id(), - msg: channel.accept_inbound_channel(), - }); + match channel.accept_inbound_channel() { + Some(msg) => + peer_state.pending_msg_events.push(events::MessageSendEvent::SendAcceptChannel { + node_id: channel.context.get_counterparty_node_id(), + msg + }), + None => channel.signer_pending_accept_channel = true, + }; peer_state.channel_by_id.insert(temporary_channel_id.clone(), ChannelPhase::UnfundedInboundV1(channel)); @@ -6174,10 +6190,15 @@ where let outbound_scid_alias = self.create_and_insert_outbound_scid_alias(); channel.context.set_outbound_scid_alias(outbound_scid_alias); - peer_state.pending_msg_events.push(events::MessageSendEvent::SendAcceptChannel { - node_id: counterparty_node_id.clone(), - msg: channel.accept_inbound_channel(), - }); + match channel.accept_inbound_channel() { + Some(msg) => + peer_state.pending_msg_events.push(events::MessageSendEvent::SendAcceptChannel { + node_id: channel.context.get_counterparty_node_id(), + msg + }), + None => channel.signer_pending_accept_channel = true, + }; + peer_state.channel_by_id.insert(channel_id, ChannelPhase::UnfundedInboundV1(channel)); Ok(()) } @@ -6237,6 +6258,12 @@ where Some(ChannelPhase::UnfundedInboundV1(inbound_chan)) => { match inbound_chan.funding_created(msg, best_block, &self.signer_provider, &self.logger) { Ok(res) => res, + Err((inbound_chan, ChannelError::Ignore(_))) => { + // If we get an `Ignore` error then something transient went wrong. Put the channel + // back into the table and bail. 
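
The `Ignore` arm here assumes `funding_created` reports a stalled signer as a transient error rather than closing the channel. A sketch of how that mapping can arise inside `InboundV1Channel::funding_created` (this body is assumed; it is not part of this diff):

	// Hypothetical sketch: a signer that cannot produce the commitment point yet
	// surfaces as ChannelError::Ignore, which the handler treats as "put the
	// channel back in the table and wait for signer_maybe_unblocked".
	let per_commitment_point = match self.context.holder_signer.as_ref()
		.get_per_commitment_point(self.context.cur_holder_commitment_transaction_number, &self.context.secp_ctx)
	{
		Ok(point) => point,
		Err(()) => return Err((self, ChannelError::Ignore("Signer is pending the commitment point".to_owned()))),
	};
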
+ peer_state.channel_by_id.insert(msg.temporary_channel_id, ChannelPhase::UnfundedInboundV1(inbound_chan)); + return Ok(()); + }, Err((mut inbound_chan, err)) => { // We've already removed this inbound channel from the map in `PeerState` // above so at this point we just need to clean up any lingering entries @@ -6357,6 +6384,7 @@ where match peer_state.channel_by_id.entry(msg.channel_id) { hash_map::Entry::Occupied(mut chan_phase_entry) => { if let ChannelPhase::Funded(chan) = chan_phase_entry.get_mut() { + log_debug!(self.logger, "<== channel_ready"); let announcement_sigs_opt = try_chan_phase_entry!(self, chan.channel_ready(&msg, &self.node_signer, self.chain_hash, &self.default_configuration, &self.best_block.read().unwrap(), &self.logger), chan_phase_entry); if let Some(announcement_sigs) = announcement_sigs_opt { @@ -6577,6 +6605,7 @@ where _ => pending_forward_info } }; + log_debug!(self.logger, "<== update_add_htlc: htlc_id={} amount_msat={}", msg.htlc_id, msg.amount_msat); try_chan_phase_entry!(self, chan.update_add_htlc(&msg, pending_forward_info, create_pending_htlc_status, &self.fee_estimator, &self.logger), chan_phase_entry); } else { return try_chan_phase_entry!(self, Err(ChannelError::Close( @@ -6602,6 +6631,7 @@ where match peer_state.channel_by_id.entry(msg.channel_id) { hash_map::Entry::Occupied(mut chan_phase_entry) => { if let ChannelPhase::Funded(chan) = chan_phase_entry.get_mut() { + log_debug!(self.logger, "<== update_fulfill_htlc: htlc_id={}", msg.htlc_id); let res = try_chan_phase_entry!(self, chan.update_fulfill_htlc(&msg), chan_phase_entry); if let HTLCSource::PreviousHopData(prev_hop) = &res.0 { log_trace!(self.logger, @@ -6698,6 +6728,7 @@ where hash_map::Entry::Occupied(mut chan_phase_entry) => { if let ChannelPhase::Funded(chan) = chan_phase_entry.get_mut() { let funding_txo = chan.context.get_funding_txo(); + log_debug!(self.logger, "<== commitment_signed: {} htlcs", msg.htlc_signatures.len()); let monitor_update_opt = try_chan_phase_entry!(self, chan.commitment_signed(&msg, &self.logger), chan_phase_entry); if let Some(monitor_update) = monitor_update_opt { handle_new_monitor_update!(self, funding_txo.unwrap(), monitor_update, peer_state_lock, @@ -6869,6 +6900,7 @@ where &peer_state.actions_blocking_raa_monitor_updates, funding_txo, *counterparty_node_id) } else { false }; + log_debug!(self.logger, "<== revoke_and_ack"); let (htlcs_to_fail, monitor_update_opt) = try_chan_phase_entry!(self, chan.revoke_and_ack(&msg, &self.fee_estimator, &self.logger, mon_update_blocked), chan_phase_entry); if let Some(monitor_update) = monitor_update_opt { @@ -7245,28 +7277,51 @@ where let unblock_chan = |phase: &mut ChannelPhase, pending_msg_events: &mut Vec| { let node_id = phase.context().get_counterparty_node_id(); - if let ChannelPhase::Funded(chan) = phase { - let msgs = chan.signer_maybe_unblocked(&self.logger); - if let Some(updates) = msgs.commitment_update { - pending_msg_events.push(events::MessageSendEvent::UpdateHTLCs { - node_id, - updates, - }); - } - if let Some(msg) = msgs.funding_signed { - pending_msg_events.push(events::MessageSendEvent::SendFundingSigned { - node_id, - msg, - }); + match phase { + ChannelPhase::Funded(chan) => { + let msgs = chan.signer_maybe_unblocked(&self.logger); + match (msgs.commitment_update, msgs.raa) { + (Some(cu), Some(raa)) if msgs.order == RAACommitmentOrder::CommitmentFirst => { + pending_msg_events.push(events::MessageSendEvent::UpdateHTLCs { node_id, updates: cu }); + 
pending_msg_events.push(events::MessageSendEvent::SendRevokeAndACK { node_id, msg: raa }); + }, + (Some(cu), Some(raa)) if msgs.order == RAACommitmentOrder::RevokeAndACKFirst => { + pending_msg_events.push(events::MessageSendEvent::SendRevokeAndACK { node_id, msg: raa }); + pending_msg_events.push(events::MessageSendEvent::UpdateHTLCs { node_id, updates: cu }); + }, + (Some(cu), _) => pending_msg_events.push(events::MessageSendEvent::UpdateHTLCs { node_id, updates: cu }), + (_, Some(raa)) => pending_msg_events.push(events::MessageSendEvent::SendRevokeAndACK { node_id, msg: raa }), + (_, _) => (), + }; + if let Some(msg) = msgs.funding_signed { + pending_msg_events.push(events::MessageSendEvent::SendFundingSigned { + node_id, + msg, + }); + } + if let Some(msg) = msgs.funding_created { + pending_msg_events.push(events::MessageSendEvent::SendFundingCreated { + node_id, + msg, + }); + } + if let Some(msg) = msgs.channel_ready { + send_channel_ready!(self, pending_msg_events, chan, msg); + } } - if let Some(msg) = msgs.funding_created { - pending_msg_events.push(events::MessageSendEvent::SendFundingCreated { - node_id, - msg, - }); + ChannelPhase::UnfundedInboundV1(chan) => { + let msgs = chan.signer_maybe_unblocked(&self.logger); + let node_id = phase.context().get_counterparty_node_id(); + if let Some(msg) = msgs.accept_channel { + pending_msg_events.push(events::MessageSendEvent::SendAcceptChannel { node_id, msg }); + } } - if let Some(msg) = msgs.channel_ready { - send_channel_ready!(self, pending_msg_events, chan, msg); + ChannelPhase::UnfundedOutboundV1(chan) => { + let msgs = chan.signer_maybe_unblocked(&self.chain_hash, &self.logger); + let node_id = phase.context().get_counterparty_node_id(); + if let Some(msg) = msgs.open_channel { + pending_msg_events.push(events::MessageSendEvent::SendOpenChannel { node_id, msg }); + } } } }; @@ -8887,11 +8942,13 @@ where let mut peer_state_lock = peer_state_mutex_opt.unwrap().lock().unwrap(); let peer_state = &mut *peer_state_lock; if let Some(ChannelPhase::UnfundedOutboundV1(chan)) = peer_state.channel_by_id.get_mut(&msg.channel_id) { - if let Ok(msg) = chan.maybe_handle_error_without_close(self.chain_hash, &self.fee_estimator) { - peer_state.pending_msg_events.push(events::MessageSendEvent::SendOpenChannel { - node_id: *counterparty_node_id, - msg, - }); + if let Ok(opt_msg) = chan.maybe_handle_error_without_close(self.chain_hash, &self.fee_estimator) { + if let Some(msg) = opt_msg { + peer_state.pending_msg_events.push(events::MessageSendEvent::SendOpenChannel { + node_id: *counterparty_node_id, + msg, + }); + } return; } } diff --git a/lightning/src/ln/functional_test_utils.rs b/lightning/src/ln/functional_test_utils.rs index 04da6a81b..993138672 100644 --- a/lightning/src/ln/functional_test_utils.rs +++ b/lightning/src/ln/functional_test_utils.rs @@ -32,6 +32,8 @@ use crate::util::config::{UserConfig, MaxDustHTLCExposure}; use crate::util::ser::{ReadableArgs, Writeable}; #[cfg(test)] use crate::util::logger::Logger; +#[cfg(test)] +use crate::util::test_channel_signer::ops; use bitcoin::blockdata::block::{Block, BlockHeader}; use bitcoin::blockdata::transaction::{Transaction, TxOut}; @@ -438,14 +440,14 @@ impl<'a, 'b, 'c> Node<'a, 'b, 'c> { pub fn get_block_header(&self, height: u32) -> BlockHeader { self.blocks.lock().unwrap()[height as usize].0.header } + /// Changes the channel signer's availability for the specified peer and channel. /// /// When `available` is set to `true`, the channel signer will behave normally. 
When set to /// `false`, the channel signer will act like an off-line remote signer and will return `Err` for - /// several of the signing methods. Currently, only `get_per_commitment_point` and - /// `release_commitment_secret` are affected by this setting. + /// several of the signing methods. #[cfg(test)] - pub fn set_channel_signer_available(&self, peer_id: &PublicKey, chan_id: &ChannelId, available: bool) { + pub fn set_channel_signer_ops_available(&self, peer_id: &PublicKey, chan_id: &ChannelId, mask: u32, available: bool) { let per_peer_state = self.node.per_peer_state.read().unwrap(); let chan_lock = per_peer_state.get(peer_id).unwrap().lock().unwrap(); let signer = (|| { @@ -454,8 +456,9 @@ impl<'a, 'b, 'c> Node<'a, 'b, 'c> { None => panic!("Couldn't find a channel with id {}", chan_id), } })(); - log_debug!(self.logger, "Setting channel signer for {} as available={}", chan_id, available); - signer.as_ecdsa().unwrap().set_available(available); + log_debug!(self.logger, "Setting channel signer for {} as {}available for {} (mask={})", + chan_id, if available { "" } else { "un" }, ops::string_from(mask), mask); + signer.as_ecdsa().unwrap().set_ops_available(mask, available); } } @@ -3228,7 +3231,7 @@ pub fn reconnect_nodes<'a, 'b, 'c, 'd>(args: ReconnectArgs<'a, 'b, 'c, 'd>) { } else { panic!("Unexpected event! {:?}", announcement_event[0]); } } } else { - assert!(chan_msgs.0.is_none()); + assert!(chan_msgs.0.is_none(), "did not expect to have a ChannelReady for node 1"); } if pending_raa.0 { assert!(chan_msgs.3 == RAACommitmentOrder::RevokeAndACKFirst); @@ -3236,7 +3239,7 @@ pub fn reconnect_nodes<'a, 'b, 'c, 'd>(args: ReconnectArgs<'a, 'b, 'c, 'd>) { assert!(node_a.node.get_and_clear_pending_msg_events().is_empty()); check_added_monitors!(node_a, 1); } else { - assert!(chan_msgs.1.is_none()); + assert!(chan_msgs.1.is_none(), "did not expect to have a RevokeAndACK for node 1"); } if pending_htlc_adds.0 != 0 || pending_htlc_claims.0 != 0 || pending_htlc_fails.0 != 0 || pending_cell_htlc_claims.0 != 0 || pending_cell_htlc_fails.0 != 0 || @@ -3269,7 +3272,7 @@ pub fn reconnect_nodes<'a, 'b, 'c, 'd>(args: ReconnectArgs<'a, 'b, 'c, 'd>) { check_added_monitors!(node_b, if pending_responding_commitment_signed_dup_monitor.0 { 0 } else { 1 }); } } else { - assert!(chan_msgs.2.is_none()); + assert!(chan_msgs.2.is_none(), "did not expect to have commitment updates for node 1"); } } @@ -3286,7 +3289,7 @@ pub fn reconnect_nodes<'a, 'b, 'c, 'd>(args: ReconnectArgs<'a, 'b, 'c, 'd>) { } } } else { - assert!(chan_msgs.0.is_none()); + assert!(chan_msgs.0.is_none(), "did not expect to have a ChannelReady for node 2"); } if pending_raa.1 { assert!(chan_msgs.3 == RAACommitmentOrder::RevokeAndACKFirst); @@ -3294,7 +3297,7 @@ pub fn reconnect_nodes<'a, 'b, 'c, 'd>(args: ReconnectArgs<'a, 'b, 'c, 'd>) { assert!(node_b.node.get_and_clear_pending_msg_events().is_empty()); check_added_monitors!(node_b, 1); } else { - assert!(chan_msgs.1.is_none()); + assert!(chan_msgs.1.is_none(), "did not expect to have a RevokeAndACK for node 2"); } if pending_htlc_adds.1 != 0 || pending_htlc_claims.1 != 0 || pending_htlc_fails.1 != 0 || pending_cell_htlc_claims.1 != 0 || pending_cell_htlc_fails.1 != 0 || @@ -3327,7 +3330,7 @@ pub fn reconnect_nodes<'a, 'b, 'c, 'd>(args: ReconnectArgs<'a, 'b, 'c, 'd>) { check_added_monitors!(node_a, if pending_responding_commitment_signed_dup_monitor.1 { 0 } else { 1 }); } } else { - assert!(chan_msgs.2.is_none()); + assert!(chan_msgs.2.is_none(), "did not expect to have commitment updates 
for node 2"); } } } diff --git a/lightning/src/ln/functional_tests.rs b/lightning/src/ln/functional_tests.rs index 9b33d8029..7bef4dfc4 100644 --- a/lightning/src/ln/functional_tests.rs +++ b/lightning/src/ln/functional_tests.rs @@ -719,7 +719,7 @@ fn test_update_fee_that_funder_cannot_afford() { let chan_signer = remote_chan.get_signer(); let pubkeys = chan_signer.as_ref().pubkeys(); (pubkeys.delayed_payment_basepoint, pubkeys.htlc_basepoint, - chan_signer.as_ref().get_per_commitment_point(INITIAL_COMMITMENT_NUMBER - 1, &secp_ctx), + chan_signer.as_ref().get_per_commitment_point(INITIAL_COMMITMENT_NUMBER - 1, &secp_ctx).unwrap(), pubkeys.funding_pubkey) }; @@ -1440,8 +1440,8 @@ fn test_fee_spike_violation_fails_htlc() { let pubkeys = chan_signer.as_ref().pubkeys(); (pubkeys.revocation_basepoint, pubkeys.htlc_basepoint, - chan_signer.as_ref().release_commitment_secret(INITIAL_COMMITMENT_NUMBER), - chan_signer.as_ref().get_per_commitment_point(INITIAL_COMMITMENT_NUMBER - 2, &secp_ctx), + chan_signer.as_ref().release_commitment_secret(INITIAL_COMMITMENT_NUMBER).unwrap(), + chan_signer.as_ref().get_per_commitment_point(INITIAL_COMMITMENT_NUMBER - 2, &secp_ctx).unwrap(), chan_signer.as_ref().pubkeys().funding_pubkey) }; let (remote_delayed_payment_basepoint, remote_htlc_basepoint, remote_point, remote_funding) = { @@ -1453,7 +1453,7 @@ fn test_fee_spike_violation_fails_htlc() { let chan_signer = remote_chan.get_signer(); let pubkeys = chan_signer.as_ref().pubkeys(); (pubkeys.delayed_payment_basepoint, pubkeys.htlc_basepoint, - chan_signer.as_ref().get_per_commitment_point(INITIAL_COMMITMENT_NUMBER - 1, &secp_ctx), + chan_signer.as_ref().get_per_commitment_point(INITIAL_COMMITMENT_NUMBER - 1, &secp_ctx).unwrap(), chan_signer.as_ref().pubkeys().funding_pubkey) }; @@ -7758,15 +7758,15 @@ fn test_counterparty_raa_skip_no_crash() { // Make signer believe we got a counterparty signature, so that it allows the revocation keys.as_ecdsa().unwrap().get_enforcement_state().last_holder_commitment -= 1; - per_commitment_secret = keys.as_ref().release_commitment_secret(INITIAL_COMMITMENT_NUMBER); + per_commitment_secret = keys.as_ref().release_commitment_secret(INITIAL_COMMITMENT_NUMBER).unwrap(); // Must revoke without gaps keys.as_ecdsa().unwrap().get_enforcement_state().last_holder_commitment -= 1; - keys.as_ref().release_commitment_secret(INITIAL_COMMITMENT_NUMBER - 1); + keys.as_ref().release_commitment_secret(INITIAL_COMMITMENT_NUMBER - 1).expect("unable to release commitment secret"); keys.as_ecdsa().unwrap().get_enforcement_state().last_holder_commitment -= 1; next_per_commitment_point = PublicKey::from_secret_key(&Secp256k1::new(), - &SecretKey::from_slice(&keys.as_ref().release_commitment_secret(INITIAL_COMMITMENT_NUMBER - 2)).unwrap()); + &SecretKey::from_slice(&keys.as_ref().release_commitment_secret(INITIAL_COMMITMENT_NUMBER - 2).unwrap()).unwrap()); } nodes[1].node.handle_revoke_and_ack(&nodes[0].node.get_our_node_id(), diff --git a/lightning/src/sign/mod.rs b/lightning/src/sign/mod.rs index 5912b7b84..66f96b927 100644 --- a/lightning/src/sign/mod.rs +++ b/lightning/src/sign/mod.rs @@ -551,7 +551,10 @@ pub trait ChannelSigner { /// Gets the per-commitment point for a specific commitment number /// /// Note that the commitment number starts at `(1 << 48) - 1` and counts backwards. 
- fn get_per_commitment_point(&self, idx: u64, secp_ctx: &Secp256k1) -> PublicKey; + /// + /// If the signer returns `Err`, then the user is responsible for either force-closing the channel + /// or retrying once the signature is ready. + fn get_per_commitment_point(&self, idx: u64, secp_ctx: &Secp256k1) -> Result; /// Gets the commitment secret for a specific commitment number as part of the revocation process /// @@ -562,7 +565,7 @@ pub trait ChannelSigner { /// /// Note that the commitment number starts at `(1 << 48) - 1` and counts backwards. // TODO: return a Result so we can signal a validation error - fn release_commitment_secret(&self, idx: u64) -> [u8; 32]; + fn release_commitment_secret(&self, idx: u64) -> Result<[u8; 32], ()>; /// Validate the counterparty's signatures on the holder commitment transaction and HTLCs. /// @@ -1198,13 +1201,13 @@ impl EntropySource for InMemorySigner { } impl ChannelSigner for InMemorySigner { - fn get_per_commitment_point(&self, idx: u64, secp_ctx: &Secp256k1) -> PublicKey { + fn get_per_commitment_point(&self, idx: u64, secp_ctx: &Secp256k1) -> Result { let commitment_secret = SecretKey::from_slice(&chan_utils::build_commitment_secret(&self.commitment_seed, idx)).unwrap(); - PublicKey::from_secret_key(secp_ctx, &commitment_secret) + Ok(PublicKey::from_secret_key(secp_ctx, &commitment_secret)) } - fn release_commitment_secret(&self, idx: u64) -> [u8; 32] { - chan_utils::build_commitment_secret(&self.commitment_seed, idx) + fn release_commitment_secret(&self, idx: u64) -> Result<[u8; 32], ()> { + Ok(chan_utils::build_commitment_secret(&self.commitment_seed, idx)) } fn validate_holder_commitment(&self, _holder_tx: &HolderCommitmentTransaction, _preimages: Vec) -> Result<(), ()> { diff --git a/lightning/src/util/test_channel_signer.rs b/lightning/src/util/test_channel_signer.rs index ab576fee9..2f1fbc748 100644 --- a/lightning/src/util/test_channel_signer.rs +++ b/lightning/src/util/test_channel_signer.rs @@ -56,9 +56,55 @@ pub struct TestChannelSigner { /// Channel state used for policy enforcement pub state: Arc>, pub disable_revocation_policy_check: bool, - /// When `true` (the default), the signer will respond immediately with signatures. When `false`, - /// the signer will return an error indicating that it is unavailable. - pub available: Arc>, + /// A flag array that indicates which signing features are currently *not* available in the + /// channel. When a method's bit is set, then the signer will act as if the signature is + /// unavailable and return an error result. + pub unavailable: Arc>, +} + +/// Channel signer operations that can be individually enabled and disabled. If a particular value +/// is set in the `TestChannelSigner::unavailable` bitmask, then that operation will return an +/// error. 
+pub mod ops {
+	pub const GET_PER_COMMITMENT_POINT: u32 = 1 << 0;
+	pub const RELEASE_COMMITMENT_SECRET: u32 = 1 << 1;
+	pub const VALIDATE_HOLDER_COMMITMENT: u32 = 1 << 2;
+	pub const SIGN_COUNTERPARTY_COMMITMENT: u32 = 1 << 3;
+	pub const VALIDATE_COUNTERPARTY_REVOCATION: u32 = 1 << 4;
+	pub const SIGN_HOLDER_COMMITMENT_AND_HTLCS: u32 = 1 << 5;
+	pub const SIGN_JUSTICE_REVOKED_OUTPUT: u32 = 1 << 6;
+	pub const SIGN_JUSTICE_REVOKED_HTLC: u32 = 1 << 7;
+	pub const SIGN_HOLDER_HTLC_TRANSACTION: u32 = 1 << 8;
+	pub const SIGN_COUNTERPARTY_HTLC_TRANSACTION: u32 = 1 << 9;
+	pub const SIGN_CLOSING_TRANSACTION: u32 = 1 << 10;
+	pub const SIGN_HOLDER_ANCHOR_INPUT: u32 = 1 << 11;
+	pub const SIGN_CHANNEL_ANNOUNCEMENT_WITH_FUNDING_KEY: u32 = 1 << 12;
+
+	#[cfg(test)]
+	pub fn string_from(mask: u32) -> String {
+		if mask == 0 {
+			return "nothing".to_owned();
+		}
+		if mask == !0u32 {
+			return "everything".to_owned();
+		}
+
+		vec![
+			if (mask & GET_PER_COMMITMENT_POINT) != 0 { Some("get_per_commitment_point") } else { None },
+			if (mask & RELEASE_COMMITMENT_SECRET) != 0 { Some("release_commitment_secret") } else { None },
+			if (mask & VALIDATE_HOLDER_COMMITMENT) != 0 { Some("validate_holder_commitment") } else { None },
+			if (mask & SIGN_COUNTERPARTY_COMMITMENT) != 0 { Some("sign_counterparty_commitment") } else { None },
+			if (mask & VALIDATE_COUNTERPARTY_REVOCATION) != 0 { Some("validate_counterparty_revocation") } else { None },
+			if (mask & SIGN_HOLDER_COMMITMENT_AND_HTLCS) != 0 { Some("sign_holder_commitment_and_htlcs") } else { None },
+			if (mask & SIGN_JUSTICE_REVOKED_OUTPUT) != 0 { Some("sign_justice_revoked_output") } else { None },
+			if (mask & SIGN_JUSTICE_REVOKED_HTLC) != 0 { Some("sign_justice_revoked_htlc") } else { None },
+			if (mask & SIGN_HOLDER_HTLC_TRANSACTION) != 0 { Some("sign_holder_htlc_transaction") } else { None },
+			if (mask & SIGN_COUNTERPARTY_HTLC_TRANSACTION) != 0 { Some("sign_counterparty_htlc_transaction") } else { None },
+			if (mask & SIGN_CLOSING_TRANSACTION) != 0 { Some("sign_closing_transaction") } else { None },
+			if (mask & SIGN_HOLDER_ANCHOR_INPUT) != 0 { Some("sign_holder_anchor_input") } else { None },
+			if (mask & SIGN_CHANNEL_ANNOUNCEMENT_WITH_FUNDING_KEY) != 0 { Some("sign_channel_announcement_with_funding_key") } else { None },
+		].iter().flatten().map(|s| s.to_string()).collect::<Vec<String>>().join(", ")
+	}
+}
 
 impl PartialEq for TestChannelSigner {
@@ -75,7 +121,7 @@ impl TestChannelSigner {
 			inner,
 			state,
 			disable_revocation_policy_check: false,
-			available: Arc::new(Mutex::new(true)),
+			unavailable: Arc::new(Mutex::new(0)),
 		}
 	}
 
@@ -89,7 +135,7 @@ impl TestChannelSigner {
 			inner,
 			state,
 			disable_revocation_policy_check,
-			available: Arc::new(Mutex::new(true)),
+			unavailable: Arc::new(Mutex::new(0)),
 		}
 	}
 
@@ -101,22 +147,28 @@ impl TestChannelSigner {
 	}
 
 	/// Marks the signer's availability.
-	///
-	/// When `true`, methods are forwarded to the underlying signer as normal. When `false`, some
-	/// methods will return `Err` indicating that the signer is unavailable. Intended to be used for
-	/// testing asynchronous signing.
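
For reference, the bits compose as ordinary masks; toggling a subset looks like this (using `set_ops_available`, defined just below):

	// Example: block both per-commitment ops, then re-enable only the point.
	let mask = ops::GET_PER_COMMITMENT_POINT | ops::RELEASE_COMMITMENT_SECRET;
	signer.set_ops_available(mask, false);                         // both now return Err(())
	signer.set_ops_available(ops::GET_PER_COMMITMENT_POINT, true); // secret release stays blocked
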
#[cfg(test)] - pub fn set_available(&self, available: bool) { - *self.available.lock().unwrap() = available; + pub fn set_ops_available(&self, mask: u32, available: bool) { + if available { + *self.unavailable.lock().unwrap() &= !mask; // clear the bits that are now available + } else { + *self.unavailable.lock().unwrap() |= mask; // set the bits that are now unavailable + } } } impl ChannelSigner for TestChannelSigner { - fn get_per_commitment_point(&self, idx: u64, secp_ctx: &Secp256k1) -> PublicKey { + fn get_per_commitment_point(&self, idx: u64, secp_ctx: &Secp256k1) -> Result { + if (*self.unavailable.lock().unwrap() & ops::GET_PER_COMMITMENT_POINT) != 0 { + return Err(()); + } self.inner.get_per_commitment_point(idx, secp_ctx) } - fn release_commitment_secret(&self, idx: u64) -> [u8; 32] { + fn release_commitment_secret(&self, idx: u64) -> Result<[u8; 32], ()> { + if (*self.unavailable.lock().unwrap() & ops::RELEASE_COMMITMENT_SECRET) != 0 { + return Err(()); + } { let mut state = self.state.lock().unwrap(); assert!(idx == state.last_holder_revoked_commitment || idx == state.last_holder_revoked_commitment - 1, "can only revoke the current or next unrevoked commitment - trying {}, last revoked {}", idx, state.last_holder_revoked_commitment); @@ -127,6 +179,9 @@ impl ChannelSigner for TestChannelSigner { } fn validate_holder_commitment(&self, holder_tx: &HolderCommitmentTransaction, _preimages: Vec) -> Result<(), ()> { + if (*self.unavailable.lock().unwrap() & ops::VALIDATE_HOLDER_COMMITMENT) != 0 { + return Err(()); + } let mut state = self.state.lock().unwrap(); let idx = holder_tx.commitment_number(); assert!(idx == state.last_holder_commitment || idx == state.last_holder_commitment - 1, "expecting to validate the current or next holder commitment - trying {}, current {}", idx, state.last_holder_commitment); @@ -148,7 +203,7 @@ impl EcdsaChannelSigner for TestChannelSigner { self.verify_counterparty_commitment_tx(commitment_tx, secp_ctx); { - if !*self.available.lock().unwrap() { + if (*self.unavailable.lock().unwrap() & ops::SIGN_COUNTERPARTY_COMMITMENT) != 0 { return Err(()); } let mut state = self.state.lock().unwrap(); @@ -167,7 +222,7 @@ impl EcdsaChannelSigner for TestChannelSigner { } fn validate_counterparty_revocation(&self, idx: u64, _secret: &SecretKey) -> Result<(), ()> { - if !*self.available.lock().unwrap() { + if (*self.unavailable.lock().unwrap() & ops::VALIDATE_COUNTERPARTY_REVOCATION) != 0 { return Err(()); } let mut state = self.state.lock().unwrap(); @@ -177,13 +232,13 @@ impl EcdsaChannelSigner for TestChannelSigner { } fn sign_holder_commitment(&self, commitment_tx: &HolderCommitmentTransaction, secp_ctx: &Secp256k1) -> Result { - if !*self.available.lock().unwrap() { + if (*self.unavailable.lock().unwrap() & ops::SIGN_HOLDER_COMMITMENT_AND_HTLCS) != 0 { return Err(()); } let trusted_tx = self.verify_holder_commitment_tx(commitment_tx, secp_ctx); let state = self.state.lock().unwrap(); let commitment_number = trusted_tx.commitment_number(); - if state.last_holder_revoked_commitment - 1 != commitment_number && state.last_holder_revoked_commitment - 2 != commitment_number { + if state.last_holder_revoked_commitment != commitment_number && state.last_holder_revoked_commitment - 1 != commitment_number { if !self.disable_revocation_policy_check { panic!("can only sign the next two unrevoked commitment numbers, revoked={} vs requested={} for {}", state.last_holder_revoked_commitment, commitment_number, self.inner.commitment_seed[0])
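
Finally, a note for implementors: with `get_per_commitment_point` and `release_commitment_secret` now returning `Result`, a remote-signer wrapper can answer from a cache and report "pending" otherwise. A minimal sketch of that shape, with all names hypothetical (a real implementation must still provide the full `ChannelSigner` trait):

	use std::collections::HashMap;
	use std::sync::Mutex;
	use bitcoin::secp256k1::{PublicKey, Secp256k1, SignOnly};

	// Hypothetical caching front-end for an asynchronous remote signer. A
	// background task fills the maps as responses arrive, after which the node
	// retries whatever was pending via the signer-unblocked path.
	struct AsyncSignerCache {
		points: Mutex<HashMap<u64, PublicKey>>,
		secrets: Mutex<HashMap<u64, [u8; 32]>>,
	}

	impl AsyncSignerCache {
		fn get_per_commitment_point(&self, idx: u64, _secp_ctx: &Secp256k1<SignOnly>) -> Result<PublicKey, ()> {
			// Err(()) means "pending": the channel sets its signer_pending_* flag
			// and retries from signer_maybe_unblocked.
			self.points.lock().unwrap().get(&idx).copied().ok_or(())
		}

		fn release_commitment_secret(&self, idx: u64) -> Result<[u8; 32], ()> {
			self.secrets.lock().unwrap().get(&idx).copied().ok_or(())
		}
	}
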