+# 0.0.117 - Oct 3, 2023 - "Everything but the Kitchen Sink"
+
+## API Updates
+ * `ProbabilisticScorer`'s internal models have been substantially improved,
+ including better decaying (#1789), a more granular historical channel
+ liquidity tracker (#2176) and a now-default option to make our estimate for a
+ channel's current liquidity nonlinear in the channel's capacity (#2547). In
+ total, these changes should result in improved payment success rates at the
+ cost of slightly worse routefinding performance.
+ * Support for custom TLVs for recipients of HTLCs has been added (#2308).
+ * Support for generating transactions for third-party watchtowers has been
+ added to `ChannelMonitor/Update`s (#2337).
+ * `KVStorePersister` has been replaced with a more generic and featureful
+ `KVStore` interface (#2472).
+ * A new `MonitorUpdatingPersister` is provided which wraps a `KVStore` and
+ implements `Persist` by writing differential updates rather than full
+ `ChannelMonitor`s (#2359).
+ * Batch funding of outbound channels is now supported using the new
+ `ChannelManager::batch_funding_transaction_generated` method (#2486).
+ * `ChannelManager::send_preflight_probes` has been added to probe a payment's
+ potential paths while a user is providing approval for a payment (#2534).
+ * Fully asynchronous `ChannelMonitor` updating is available as an alpha
+ preview. There remain a few known but incredibly rare race conditions which
+ may lead to loss of funds (#2112, #2169, #2562).
+ * `ChannelMonitorUpdateStatus::PermanentFailure` has been removed in favor of a
+ new `ChannelMonitorUpdateStatus::UnrecoverableError`. The new variant panics
+ on use, rather than force-closing a channel in an unsafe manner, which the
+ previous variant did (#2562). Rather than panicking with the new variant,
+   users may wish to use the new asynchronous `ChannelMonitor` updating via
+ `ChannelMonitorUpdateStatus::InProgress`.
+ * `RouteParameters::max_total_routing_fee_msat` was added to limit the fees
+ paid when routing, defaulting to 1% + 50sats when using the new
+ `from_payment_params_and_value` constructor (#2417, #2603, #2604).
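The default cap above is simple arithmetic; the helper below is an illustrative stand-in for it, not the actual constructor logic (amounts in msat, with 50 sats being 50,000 msat):

```rust
/// Illustrative stand-in for the documented default fee limit: 1% of
/// the payment amount plus 50 satoshis (50_000 msat). Not the real LDK
/// implementation, just the arithmetic the changelog entry describes.
fn default_max_total_routing_fee_msat(amount_msat: u64) -> u64 {
    amount_msat / 100 + 50_000
}

fn main() {
    // Paying 1_000 sats (1_000_000 msat): cap is 10_000 + 50_000 msat.
    assert_eq!(default_max_total_routing_fee_msat(1_000_000), 60_000);
    // Paying 200_000 sats: cap is 2_000_000 + 50_000 msat.
    assert_eq!(default_max_total_routing_fee_msat(200_000_000), 2_050_000);
}
```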
+ * Implementations of `UtxoSource` are now provided in `lightning-block-sync`.
+ Those running with a full node should use this to validate gossip (#2248).
+ * `LockableScore` now supports read locking for parallel routefinding (#2197).
+ * `ChannelMonitor::get_spendable_outputs` was added to allow for re-generation
+ of `SpendableOutputDescriptor`s for a channel after they were provided via
+ `Event::SpendableOutputs` (#2609, #2624).
+ * `[u8; 32]` has been replaced with a `ChannelId` newtype for chan ids (#2485).
+ * `NetAddress` was renamed `SocketAddress` (#2549) and `FromStr` impl'd (#2134)
+ * For `no-std` users, `parse_onion_address` was added which creates a
+   `SocketAddress` from a "...onion" string and port (#2134, #2633).
+ * HTLC information is now provided in `Event::PaymentClaimed::htlcs` (#2478).
+ * The success probability used in historical penalties when scoring is now
+ available via `historical_estimated_payment_success_probability` (#2466).
+ * `RecentPaymentDetails::*::payment_id` has been added (#2567).
+ * `Route` now contains a `RouteParameters` rather than a `PaymentParameters`,
+ tracking the original arguments passed to routefinding (#2555).
+ * `Balance::*::claimable_amount_satoshis` was renamed `amount_satoshis` (#2460)
+ * `*Features::set_*_feature_bit` have been added for non-custom flags (#2522).
+ * `channel_id` was added to `SpendableOutputs` events (#2511).
+ * `counterparty_node_id` and `channel_capacity_sats` were added to
+ `ChannelClosed` events (#2387).
+ * `ChannelMonitor` now implements `Clone` for `Clone`able signers (#2448).
+ * `create_onion_message` was added to build an onion message (#2583, #2595).
+ * `HTLCDescriptor` now implements `Writeable`/`Readable` (#2571).
+ * `SpendableOutputDescriptor` now implements `Hash` (#2602).
+ * `MonitorUpdateId` now implements `Debug` (#2594).
+ * `Payment{Hash,Id,Preimage}` now implement `Display` (#2492).
+ * `NodeSigner::sign_bolt12_invoice{,request}` were added for future use (#2432)
+
+## Backwards Compatibility
+ * Users migrating to the new `KVStore` can use a concatenation of
+ `[{primary_namespace}/[{secondary_namespace}/]]{key}` to build a key
+ compatible with the previous `KVStorePersister` interface (#2472).
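The concatenation above can be sketched as follows; the function name and the sample keys are purely illustrative, not part of the LDK API:

```rust
/// Builds a legacy `KVStorePersister`-style key from the new `KVStore`
/// namespace triple, following the optional-segment pattern
/// `[{primary_namespace}/[{secondary_namespace}/]]{key}` described above.
fn legacy_key(primary_namespace: &str, secondary_namespace: &str, key: &str) -> String {
    let mut out = String::new();
    if !primary_namespace.is_empty() {
        out.push_str(primary_namespace);
        out.push('/');
        // A secondary namespace only appears nested inside a primary one.
        if !secondary_namespace.is_empty() {
            out.push_str(secondary_namespace);
            out.push('/');
        }
    }
    out.push_str(key);
    out
}

fn main() {
    // Hypothetical keys, just to show the three shapes the pattern allows.
    assert_eq!(legacy_key("monitors", "", "deadbeef_1"), "monitors/deadbeef_1");
    assert_eq!(legacy_key("a", "b", "k"), "a/b/k");
    assert_eq!(legacy_key("", "", "manager"), "manager");
}
```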
+ * Downgrading after receipt of a payment with custom HTLC TLVs may result in
+ unintentionally accepting payments with TLVs you do not understand (#2308).
+ * `Route` objects (including pending payments) written by LDK versions prior
+ to 0.0.117 won't be retryable after being deserialized by LDK 0.0.117 or
+ above (#2555).
+ * Users of the `MonitorUpdatingPersister` can upgrade seamlessly from the
+ default `KVStore` `Persist` implementation, however the stored
+ `ChannelMonitor`s are deliberately unreadable by the default `Persist`. This
+ ensures the correct downgrade procedure is followed, which is: (#2359)
+ * First, make a backup copy of all channel state,
+ * then ensure all `ChannelMonitorUpdate`s stored are fully applied to the
+ relevant `ChannelMonitor`,
+ * finally, write each full `ChannelMonitor` using your new `Persist` impl.
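The steps above can be modeled with toy types (nothing here is LDK API; `ToyMonitor` and `downgrade` are purely illustrative, and the backup from the first step is assumed to happen out of band):

```rust
use std::collections::HashMap;

// Toy model of the downgrade procedure: replay every stored differential
// update into its monitor, then write the resulting full monitor out the
// way an old-style `Persist` implementation would.
#[derive(Clone, Debug, PartialEq)]
struct ToyMonitor {
    latest_update_id: u64,
}

fn downgrade(
    // Per-channel: the last fully-written monitor plus its pending update ids.
    monitors: HashMap<String, (ToyMonitor, Vec<u64>)>,
) -> HashMap<String, ToyMonitor> {
    let mut full_store = HashMap::new();
    for (chan_id, (mut monitor, pending_updates)) in monitors {
        // Step 2: ensure all stored ChannelMonitorUpdates are fully applied.
        for update_id in pending_updates {
            assert!(update_id > monitor.latest_update_id, "updates must be in order");
            monitor.latest_update_id = update_id;
        }
        // Step 3: persist each full, up-to-date monitor.
        full_store.insert(chan_id, monitor);
    }
    full_store
}

fn main() {
    let mut monitors = HashMap::new();
    monitors.insert("chan_a".to_string(), (ToyMonitor { latest_update_id: 5 }, vec![6, 7]));
    let full = downgrade(monitors);
    assert_eq!(full["chan_a"].latest_update_id, 7);
}
```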
+
+## Bug Fixes
+ * Anchor channels which were closed by a counterparty broadcasting its
+ commitment transaction (i.e. force-closing) would previously not generate a
+ `SpendableOutputs` event for our `to_remote` (i.e. non-HTLC-encumbered)
+ balance. Those with such balances available should fetch the missing
+ `SpendableOutputDescriptor`s using the new
+ `ChannelMonitor::get_spendable_outputs` method (#2605).
+ * Anchor channels could previously result in spurious or missing `Balance`
+   entries for HTLC balances; this has been fixed (#2610).
+ * `ChannelManager::send_spontaneous_payment_with_retry` spuriously did not
+ provide the recipient with enough information to claim the payment, leading
+ to all spontaneous payments failing (#2475).
+ `send_spontaneous_payment_with_route` was unaffected.
+ * The `keysend` feature on node announcements was spuriously un-set in 0.0.112
+ and has been re-enabled (#2465).
+ * Fixed several races which could lead to deadlock when force-closing a channel
+ (#2597). These races have not been seen in production.
+ * The `ChannelManager` is now persisted substantially less often when it has
+   not changed, markedly reducing its I/O traffic (#2521, #2617).
+ * Passing new block data to `ChainMonitor` no longer results in all other
+ monitor operations being blocked until it completes (#2528).
+ * When retrying payments, any excess amount sent to the recipient in order to
+   meet an `htlc_minimum` constraint on the path is no longer included in the
+   amount we send in the retry (#2575).
+ * Several edge cases in route-finding around HTLC minimums were fixed which
+ could have caused invalid routes or panics when built with debug assertions
+ (#2570, #2575).
+ * Several edge cases in route-finding around HTLC minimums and route hints
+ were fixed which would spuriously result in no route found (#2575, #2604).
+ * The `user_channel_id` passed to `SignerProvider::generate_channel_keys_id`
+ for inbound channels is now correctly using the one passed to
+ `ChannelManager::accept_inbound_channel` rather than a default value (#2428).
+ * Users of `impl_writeable_tlv_based!` no longer need specific `use` imports
+   in scope for the macro to compile (#2506).
+ * No longer force-close channels when counterparties send a `channel_update`
+ with a bogus `htlc_minimum_msat`, which LND users can manually build (#2611).
+
+## Node Compatibility
+ * LDK now ignores `error` messages generated by LND in response to a
+ `shutdown` message, avoiding force-closes due to LND bug 6039. This may
+ lead to non-trivial bandwidth usage with LND peers exhibiting this bug
+ during the cooperative shutdown process (#2507).
+
+## Security
+0.0.117 fixes several loss-of-funds vulnerabilities: in anchor output channels
+(support for which was added in 0.0.116), in reorg handling, and when accepting
+channel(s) from counterparties which are miners.
+ * When a counterparty broadcasts their latest commitment transaction for a
+ channel with anchor outputs, we'd previously fail to build claiming
+ transactions against any HTLC outputs in that transaction. This could lead
+ to loss of funds if the counterparty is able to eventually claim the HTLC
+ after a timeout (#2606).
+ * On-chain HTLC claims in anchor channels previously spent the entire value of
+   any HTLCs as fee, which has now been fixed (#2587).
+ * If a channel is closed via an on-chain commitment transaction confirmation
+ with a pending outbound HTLC in the commitment transaction, followed by a
+ reorg which replaces the confirmed commitment transaction with a different
+ (but non-revoked) commitment transaction, all before we learn the payment
+ preimage for this HTLC, we may previously have not generated a proper
+ claiming transaction for the HTLC's value (#2623).
+ * 0.0.117 now correctly handles channels for which our counterparty funded the
+ channel with a coinbase transaction. As such transactions are not spendable
+ until they've reached 100 confirmations, this could have resulted in
+   accepting HTLC(s) which are not enforceable on-chain (#1924).
+
+In total, this release features 121 files changed, 20477 insertions, 8184
+deletions in 381 commits from 27 authors, in alphabetical order:
+ * Alec Chen
+ * Allan Douglas R. de Oliveira
+ * Antonio Yang
+ * Arik Sosman
+ * Chris Waterson
+ * David Caseria
+ * DhananjayPurohit
+ * Dom Zippilli
+ * Duncan Dean
+ * Elias Rohrer
+ * Erik De Smedt
+ * Evan Feenstra
+ * Gabor Szabo
+ * Gursharan Singh
+ * Jeffrey Czyz
+ * Joseph Goulden
+ * Lalitmohansharma1
+ * Matt Corallo
+ * Rachel Malonson
+ * Sergi Delgado Segura
+ * Valentine Wallace
+ * Vladimir Fomene
+ * Willem Van Lint
+ * Wilmer Paulino
+ * benthecarman
+ * jbesraa
+ * optout
+
+
# 0.0.116 - Jul 21, 2023 - "Anchoring the Roadmap"
## API Updates
};
let deserialized_monitor = <(BlockHash, channelmonitor::ChannelMonitor<TestChannelSigner>)>::
read(&mut Cursor::new(&map_entry.get().1), (&*self.keys, &*self.keys)).unwrap().1;
- deserialized_monitor.update_monitor(update, &&TestBroadcaster{}, &FuzzEstimator { ret_val: atomic::AtomicU32::new(253) }, &self.logger).unwrap();
+ deserialized_monitor.update_monitor(update, &&TestBroadcaster{}, &&FuzzEstimator { ret_val: atomic::AtomicU32::new(253) }, &self.logger).unwrap();
let mut ser = VecWriter(Vec::new());
deserialized_monitor.write(&mut ser).unwrap();
map_entry.insert((update.update_id, ser.0));
force_close_spend_delay: None,
is_outbound: true, is_channel_ready: true,
is_usable: true, is_public: true,
+ balance_msat: 0,
outbound_capacity_msat: capacity.saturating_mul(1000),
next_outbound_htlc_limit_msat: capacity.saturating_mul(1000),
next_outbound_htlc_minimum_msat: 0,
[package]
name = "lightning-background-processor"
-version = "0.0.117-alpha2"
+version = "0.0.117"
authors = ["Valentine Wallace <vwallace@protonmail.com>"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning"
[dependencies]
bitcoin = { version = "0.29.0", default-features = false }
-lightning = { version = "0.0.117-alpha2", path = "../lightning", default-features = false }
-lightning-rapid-gossip-sync = { version = "0.0.117-alpha2", path = "../lightning-rapid-gossip-sync", default-features = false }
+lightning = { version = "0.0.117", path = "../lightning", default-features = false }
+lightning-rapid-gossip-sync = { version = "0.0.117", path = "../lightning-rapid-gossip-sync", default-features = false }
[dev-dependencies]
tokio = { version = "1.14", features = [ "macros", "rt", "rt-multi-thread", "sync", "time" ] }
-lightning = { version = "0.0.117-alpha2", path = "../lightning", features = ["_test_utils"] }
-lightning-invoice = { version = "0.25.0-alpha2", path = "../lightning-invoice" }
-lightning-persister = { version = "0.0.117-alpha2", path = "../lightning-persister" }
+lightning = { version = "0.0.117", path = "../lightning", features = ["_test_utils"] }
+lightning-invoice = { version = "0.25.0", path = "../lightning-invoice" }
+lightning-persister = { version = "0.0.117", path = "../lightning-persister" }
/// could set up `process_events_async` like this:
/// ```
/// # use lightning::io;
-/// # use std::sync::{Arc, Mutex};
+/// # use std::sync::{Arc, RwLock};
/// # use std::sync::atomic::{AtomicBool, Ordering};
/// # use lightning_background_processor::{process_events_async, GossipSync};
/// # struct MyStore {}
/// # type MyFilter = dyn lightning::chain::Filter + Send + Sync;
/// # type MyLogger = dyn lightning::util::logger::Logger + Send + Sync;
/// # type MyChainMonitor = lightning::chain::chainmonitor::ChainMonitor<lightning::sign::InMemorySigner, Arc<MyFilter>, Arc<MyBroadcaster>, Arc<MyFeeEstimator>, Arc<MyLogger>, Arc<MyStore>>;
-/// # type MyPeerManager = lightning::ln::peer_handler::SimpleArcPeerManager<MySocketDescriptor, MyChainMonitor, MyBroadcaster, MyFeeEstimator, MyUtxoLookup, MyLogger>;
+/// # type MyPeerManager = lightning::ln::peer_handler::SimpleArcPeerManager<MySocketDescriptor, MyChainMonitor, MyBroadcaster, MyFeeEstimator, Arc<MyUtxoLookup>, MyLogger>;
/// # type MyNetworkGraph = lightning::routing::gossip::NetworkGraph<Arc<MyLogger>>;
/// # type MyGossipSync = lightning::routing::gossip::P2PGossipSync<Arc<MyNetworkGraph>, Arc<MyUtxoLookup>, Arc<MyLogger>>;
/// # type MyChannelManager = lightning::ln::channelmanager::SimpleArcChannelManager<MyChainMonitor, MyBroadcaster, MyFeeEstimator, MyLogger>;
-/// # type MyScorer = Mutex<lightning::routing::scoring::ProbabilisticScorer<Arc<MyNetworkGraph>, Arc<MyLogger>>>;
+/// # type MyScorer = RwLock<lightning::routing::scoring::ProbabilisticScorer<Arc<MyNetworkGraph>, Arc<MyLogger>>>;
///
/// # async fn setup_background_processing(my_persister: Arc<MyStore>, my_event_handler: Arc<MyEventHandler>, my_chain_monitor: Arc<MyChainMonitor>, my_channel_manager: Arc<MyChannelManager>, my_gossip_sync: Arc<MyGossipSync>, my_logger: Arc<MyLogger>, my_scorer: Arc<MyScorer>, my_peer_manager: Arc<MyPeerManager>) {
/// let background_persister = Arc::clone(&my_persister);
let network_graph = Arc::new(NetworkGraph::new(network, logger.clone()));
let scorer = Arc::new(Mutex::new(TestScorer::new()));
let seed = [i as u8; 32];
- let router = Arc::new(DefaultRouter::new(network_graph.clone(), logger.clone(), seed, scorer.clone(), ()));
+ let router = Arc::new(DefaultRouter::new(network_graph.clone(), logger.clone(), seed, scorer.clone(), Default::default()));
let chain_source = Arc::new(test_utils::TestChainSource::new(Network::Bitcoin));
let kv_store = Arc::new(FilesystemStore::new(format!("{}_persister_{}", &persist_dir, i).into()));
let now = Duration::from_secs(genesis_block.header.time as u64);
[package]
name = "lightning-block-sync"
-version = "0.0.117-alpha2"
+version = "0.0.117"
authors = ["Jeffrey Czyz", "Matt Corallo"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning"
[dependencies]
bitcoin = "0.29.0"
-lightning = { version = "0.0.117-alpha2", path = "../lightning" }
+lightning = { version = "0.0.117", path = "../lightning" }
tokio = { version = "1.0", features = [ "io-util", "net", "time" ], optional = true }
serde_json = { version = "1.0", optional = true }
chunked_transfer = { version = "1.4", optional = true }
[dev-dependencies]
-lightning = { version = "0.0.117-alpha2", path = "../lightning", features = ["_test_utils"] }
+lightning = { version = "0.0.117", path = "../lightning", features = ["_test_utils"] }
tokio = { version = "1.14", features = [ "macros", "rt" ] }
#[test]
fn convert_to_socket_addrs() {
- let endpoint = HttpEndpoint::for_host("foo.com".into());
+ let endpoint = HttpEndpoint::for_host("localhost".into());
let host = endpoint.host();
let port = endpoint.port();
use std::net::ToSocketAddrs;
match (&endpoint).to_socket_addrs() {
Err(e) => panic!("Unexpected error: {:?}", e),
- Ok(mut socket_addrs) => {
- match socket_addrs.next() {
- None => panic!("Expected socket address"),
- Some(addr) => {
- assert_eq!(addr, (host, port).to_socket_addrs().unwrap().next().unwrap());
- assert!(socket_addrs.next().is_none());
- }
+ Ok(socket_addrs) => {
+ let mut std_addrs = (host, port).to_socket_addrs().unwrap();
+ for addr in socket_addrs {
+ assert_eq!(addr, std_addrs.next().unwrap());
}
+ assert!(std_addrs.next().is_none());
}
}
}
[package]
name = "lightning-custom-message"
-version = "0.0.117-alpha2"
+version = "0.0.117"
authors = ["Jeffrey Czyz"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning"
[dependencies]
bitcoin = "0.29.0"
-lightning = { version = "0.0.117-alpha2", path = "../lightning" }
+lightning = { version = "0.0.117", path = "../lightning" }
[package]
name = "lightning-invoice"
description = "Data structures to parse and serialize BOLT11 lightning invoices"
-version = "0.25.0-alpha2"
+version = "0.25.0"
authors = ["Sebastian Geisler <sgeisler@wh2.tu-dresden.de>"]
documentation = "https://docs.rs/lightning-invoice/"
license = "MIT OR Apache-2.0"
[dependencies]
bech32 = { version = "0.9.0", default-features = false }
-lightning = { version = "0.0.117-alpha2", path = "../lightning", default-features = false }
+lightning = { version = "0.0.117", path = "../lightning", default-features = false }
secp256k1 = { version = "0.24.0", default-features = false, features = ["recovery", "alloc"] }
num-traits = { version = "0.2.8", default-features = false }
bitcoin_hashes = { version = "0.11", default-features = false }
bitcoin = { version = "0.29.0", default-features = false }
[dev-dependencies]
-lightning = { version = "0.0.117-alpha2", path = "../lightning", default-features = false, features = ["_test_utils"] }
+lightning = { version = "0.0.117", path = "../lightning", default-features = false, features = ["_test_utils"] }
hex = "0.4"
serde_json = { version = "1"}
impl PartialOrd for Bolt11InvoiceSignature {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
- self.0.serialize_compact().1.partial_cmp(&other.0.serialize_compact().1)
+ Some(self.cmp(other))
}
}
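The hunk above switches `partial_cmp` to delegate to the total ordering, the standard pattern whenever a type also implements `Ord`, since it guarantees the two orderings can never disagree. A minimal sketch with a stand-in type (not the LDK one):

```rust
use std::cmp::Ordering;

// Stand-in type demonstrating the delegation pattern: once `Ord` exists,
// `partial_cmp` should simply be `Some(self.cmp(other))` rather than
// re-deriving the comparison from the type's internals.
#[derive(PartialEq, Eq)]
struct Sig([u8; 4]);

impl Ord for Sig {
    fn cmp(&self, other: &Self) -> Ordering {
        self.0.cmp(&other.0)
    }
}

impl PartialOrd for Sig {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    assert!(Sig([1, 2, 3, 4]) < Sig([1, 2, 3, 5]));
    assert_eq!(Sig([9; 4]).partial_cmp(&Sig([9; 4])), Some(Ordering::Equal));
}
```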
//! Convenient utilities for paying Lightning invoices.
-use crate::{Bolt11Invoice, Vec};
+use crate::Bolt11Invoice;
+use crate::prelude::*;
use bitcoin_hashes::Hash;
/// with the same [`PaymentHash`] is never sent.
///
/// If you wish to use a different payment idempotency token, see [`pay_invoice_with_id`].
-pub fn pay_invoice<C: AChannelManager>(
- invoice: &Bolt11Invoice, retry_strategy: Retry, channelmanager: &C
+pub fn pay_invoice<C: Deref>(
+ invoice: &Bolt11Invoice, retry_strategy: Retry, channelmanager: C
) -> Result<PaymentId, PaymentError>
+where C::Target: AChannelManager,
{
let payment_id = PaymentId(invoice.payment_hash().into_inner());
pay_invoice_with_id(invoice, payment_id, retry_strategy, channelmanager.get_cm())
/// [`PaymentHash`] has never been paid before.
///
/// See [`pay_invoice`] for a variant which uses the [`PaymentHash`] for the idempotency token.
-pub fn pay_invoice_with_id<C: AChannelManager>(
- invoice: &Bolt11Invoice, payment_id: PaymentId, retry_strategy: Retry, channelmanager: &C
+pub fn pay_invoice_with_id<C: Deref>(
+ invoice: &Bolt11Invoice, payment_id: PaymentId, retry_strategy: Retry, channelmanager: C
) -> Result<(), PaymentError>
+where C::Target: AChannelManager,
{
let amt_msat = invoice.amount_milli_satoshis().ok_or(PaymentError::Invoice("amount missing"))?;
pay_invoice_using_amount(invoice, amt_msat, payment_id, retry_strategy, channelmanager.get_cm())
///
/// If you wish to use a different payment idempotency token, see
/// [`pay_zero_value_invoice_with_id`].
-pub fn pay_zero_value_invoice<C: AChannelManager>(
- invoice: &Bolt11Invoice, amount_msats: u64, retry_strategy: Retry, channelmanager: &C
+pub fn pay_zero_value_invoice<C: Deref>(
+ invoice: &Bolt11Invoice, amount_msats: u64, retry_strategy: Retry, channelmanager: C
) -> Result<PaymentId, PaymentError>
+where C::Target: AChannelManager,
{
let payment_id = PaymentId(invoice.payment_hash().into_inner());
pay_zero_value_invoice_with_id(invoice, amount_msats, payment_id, retry_strategy,
///
/// See [`pay_zero_value_invoice`] for a variant which uses the [`PaymentHash`] for the
/// idempotency token.
-pub fn pay_zero_value_invoice_with_id<C: AChannelManager>(
+pub fn pay_zero_value_invoice_with_id<C: Deref>(
invoice: &Bolt11Invoice, amount_msats: u64, payment_id: PaymentId, retry_strategy: Retry,
- channelmanager: &C
+ channelmanager: C
) -> Result<(), PaymentError>
+where C::Target: AChannelManager,
{
if invoice.amount_milli_satoshis().is_some() {
Err(PaymentError::Invoice("amount unexpected"))
/// Sends payment probes over all paths of a route that would be used to pay the given invoice.
///
/// See [`ChannelManager::send_preflight_probes`] for more information.
-pub fn preflight_probe_invoice<C: AChannelManager>(
- invoice: &Bolt11Invoice, channelmanager: &C, liquidity_limit_multiplier: Option<u64>,
+pub fn preflight_probe_invoice<C: Deref>(
+ invoice: &Bolt11Invoice, channelmanager: C, liquidity_limit_multiplier: Option<u64>,
) -> Result<Vec<(PaymentHash, PaymentId)>, ProbingError>
+where C::Target: AChannelManager,
{
let amount_msat = if let Some(invoice_amount_msat) = invoice.amount_milli_satoshis() {
invoice_amount_msat
/// invoice using the given amount.
///
/// See [`ChannelManager::send_preflight_probes`] for more information.
-pub fn preflight_probe_zero_value_invoice<C: AChannelManager>(
- invoice: &Bolt11Invoice, amount_msat: u64, channelmanager: &C,
+pub fn preflight_probe_zero_value_invoice<C: Deref>(
+ invoice: &Bolt11Invoice, amount_msat: u64, channelmanager: C,
liquidity_limit_multiplier: Option<u64>,
) -> Result<Vec<(PaymentHash, PaymentId)>, ProbingError>
+where C::Target: AChannelManager,
{
if invoice.amount_milli_satoshis().is_some() {
return Err(ProbingError::Invoice("amount unexpected"));
[package]
name = "lightning-net-tokio"
-version = "0.0.117-alpha2"
+version = "0.0.117"
authors = ["Matt Corallo"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning/"
[dependencies]
bitcoin = "0.29.0"
-lightning = { version = "0.0.117-alpha2", path = "../lightning" }
+lightning = { version = "0.0.117", path = "../lightning" }
tokio = { version = "1.0", features = [ "rt", "sync", "net", "time" ] }
[dev-dependencies]
tokio = { version = "1.14", features = [ "macros", "rt", "rt-multi-thread", "sync", "net", "time" ] }
-lightning = { version = "0.0.117-alpha2", path = "../lightning", features = ["_test_utils"] }
+lightning = { version = "0.0.117", path = "../lightning", features = ["_test_utils"] }
[package]
name = "lightning-persister"
-version = "0.0.117-alpha2"
+version = "0.0.117"
authors = ["Valentine Wallace", "Matt Corallo"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning"
[dependencies]
bitcoin = "0.29.0"
-lightning = { version = "0.0.117-alpha2", path = "../lightning" }
+lightning = { version = "0.0.117", path = "../lightning" }
[target.'cfg(windows)'.dependencies]
windows-sys = { version = "0.48.0", default-features = false, features = ["Win32_Storage_FileSystem", "Win32_Foundation"] }
criterion = { version = "0.4", optional = true, default-features = false }
[dev-dependencies]
-lightning = { version = "0.0.117-alpha2", path = "../lightning", features = ["_test_utils"] }
+lightning = { version = "0.0.117", path = "../lightning", features = ["_test_utils"] }
bitcoin = { version = "0.29.0", default-features = false }
}
// Test that if the store's path to channel data is read-only, writing a
- // monitor to it results in the store returning an InProgress.
+ // monitor to it results in the store returning an UnrecoverableError.
// Windows ignores the read-only flag for folders, so this test is Unix-only.
#[cfg(not(target_os = "windows"))]
#[test]
let update_id = update_map.get(&added_monitors[0].0.to_channel_id()).unwrap();
// Set the store's directory to read-only, which should result in
- // returning a permanent failure when we then attempt to persist a
+ // returning an unrecoverable failure when we then attempt to persist a
// channel update.
let path = &store.get_data_dir();
let mut perms = fs::metadata(path).unwrap().permissions();
[package]
name = "lightning-rapid-gossip-sync"
-version = "0.0.117-alpha2"
+version = "0.0.117"
authors = ["Arik Sosman <git@arik.io>"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning"
std = ["lightning/std"]
[dependencies]
-lightning = { version = "0.0.117-alpha2", path = "../lightning", default-features = false }
+lightning = { version = "0.0.117", path = "../lightning", default-features = false }
bitcoin = { version = "0.29.0", default-features = false }
[target.'cfg(ldk_bench)'.dependencies]
criterion = { version = "0.4", optional = true, default-features = false }
[dev-dependencies]
-lightning = { version = "0.0.117-alpha2", path = "../lightning", features = ["_test_utils"] }
+lightning = { version = "0.0.117", path = "../lightning", features = ["_test_utils"] }
[package]
name = "lightning-transaction-sync"
-version = "0.0.117-alpha2"
+version = "0.0.117"
authors = ["Elias Rohrer"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning"
async-interface = []
[dependencies]
-lightning = { version = "0.0.117-alpha2", path = "../lightning", default-features = false }
+lightning = { version = "0.0.117", path = "../lightning", default-features = false }
bitcoin = { version = "0.29.0", default-features = false }
bdk-macros = "0.6"
futures = { version = "0.3", optional = true }
reqwest = { version = "0.11", optional = true, default-features = false, features = ["json"] }
[dev-dependencies]
-lightning = { version = "0.0.117-alpha2", path = "../lightning", features = ["std"] }
+lightning = { version = "0.0.117", path = "../lightning", features = ["std"] }
electrsd = { version = "0.22.0", features = ["legacy", "esplora_a33e97e1", "bitcoind_23_0"] }
electrum-client = "0.12.0"
tokio = { version = "1.14.0", features = ["full"] }
[package]
name = "lightning"
-version = "0.0.117-alpha2"
+version = "0.0.117"
authors = ["Matt Corallo"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning/"
use core::sync::atomic::{AtomicUsize, Ordering};
use bitcoin::secp256k1::PublicKey;
-#[derive(Debug, Clone, Copy, Hash, PartialEq, Eq)]
-/// A specific update's ID stored in a `MonitorUpdateId`, separated out to make the contents
-/// entirely opaque.
-enum UpdateOrigin {
- /// An update that was generated by the `ChannelManager` (via our `chain::Watch`
- /// implementation). This corresponds to an actual [`ChannelMonitorUpdate::update_id`] field
- /// and [`ChannelMonitor::get_latest_update_id`].
- OffChain(u64),
- /// An update that was generated during blockchain processing. The ID here is specific to the
- /// generating [`ChainMonitor`] and does *not* correspond to any on-disk IDs.
- ChainSync(u64),
+mod update_origin {
+ #[derive(Debug, Clone, Copy, Hash, PartialEq, Eq)]
+ /// A specific update's ID stored in a `MonitorUpdateId`, separated out to make the contents
+ /// entirely opaque.
+ pub(crate) enum UpdateOrigin {
+ /// An update that was generated by the `ChannelManager` (via our [`crate::chain::Watch`]
+ /// implementation). This corresponds to an actual [ChannelMonitorUpdate::update_id] field
+ /// and [ChannelMonitor::get_latest_update_id].
+ ///
+ /// [ChannelMonitor::get_latest_update_id]: crate::chain::channelmonitor::ChannelMonitor::get_latest_update_id
+ /// [ChannelMonitorUpdate::update_id]: crate::chain::channelmonitor::ChannelMonitorUpdate::update_id
+ OffChain(u64),
+ /// An update that was generated during blockchain processing. The ID here is specific to the
+ /// generating [ChannelMonitor] and does *not* correspond to any on-disk IDs.
+ ///
+ /// [ChannelMonitor]: crate::chain::channelmonitor::ChannelMonitor
+ ChainSync(u64),
+ }
}
+#[cfg(any(feature = "_test_utils", test))]
+pub(crate) use update_origin::UpdateOrigin;
+#[cfg(not(any(feature = "_test_utils", test)))]
+use update_origin::UpdateOrigin;
+
/// An opaque identifier describing a specific [`Persist`] method call.
#[derive(Debug, Clone, Copy, Hash, PartialEq, Eq)]
pub struct MonitorUpdateId {
- contents: UpdateOrigin,
+ pub(crate) contents: UpdateOrigin,
}
impl MonitorUpdateId {
/// If at some point no further progress can be made towards persisting the pending updates, the
/// node should simply shut down.
///
-/// * If the persistence has failed and cannot be retried further (e.g. because of some timeout),
+/// * If the persistence has failed and cannot be retried further (e.g. because of an outage),
/// [`ChannelMonitorUpdateStatus::UnrecoverableError`] can be used, though this will result in
/// an immediate panic and future operations in LDK generally failing.
///
/// [`ChainMonitor::channel_monitor_updated`] must be called once for *each* update which occurs.
///
/// If at some point no further progress can be made towards persisting a pending update, the node
-/// should simply shut down.
+/// should simply shut down. Until then, the background task should either loop indefinitely, or
+/// persistence should be regularly retried with [`ChainMonitor::list_pending_monitor_updates`]
+/// and [`ChainMonitor::get_monitor`] (note that if a full monitor is persisted all pending
+/// monitor updates may be marked completed).
///
/// # Using remote watchtowers
///
/// updated monitor itself to disk/backups. See the [`Persist`] trait documentation for more
/// details.
///
- /// During blockchain synchronization operations, this may be called with no
- /// [`ChannelMonitorUpdate`], in which case the full [`ChannelMonitor`] needs to be persisted.
+ /// During blockchain synchronization operations, and in some rare cases, this may be called with
+ /// no [`ChannelMonitorUpdate`], in which case the full [`ChannelMonitor`] needs to be persisted.
/// Note that after the full [`ChannelMonitor`] is persisted any previous
/// [`ChannelMonitorUpdate`]s which were persisted should be discarded - they can no longer be
/// applied to the persisted [`ChannelMonitor`] as they were already applied.
if self.update_monitor_with_chain_data(header, best_height, txdata, &process, funding_outpoint, &monitor_state).is_err() {
// Take the monitors lock for writing so that we poison it and any future
// operations going forward fail immediately.
- core::mem::drop(monitor_state);
core::mem::drop(monitor_lock);
let _poison = self.monitors.write().unwrap();
log_error!(self.logger, "{}", err_str);
/// claims which are awaiting confirmation.
///
/// Includes the balances from each [`ChannelMonitor`] *except* those included in
- /// `ignored_channels`.
+ /// `ignored_channels`, allowing you to filter out balances from channels which are still open
+ /// (and whose balance should likely be pulled from the [`ChannelDetails`]).
///
/// See [`ChannelMonitor::get_claimable_balances`] for more details on the exact criteria for
/// inclusion in the return value.
Some(monitor_state) => {
let monitor = &monitor_state.monitor;
log_trace!(self.logger, "Updating ChannelMonitor for channel {}", log_funding_info!(monitor));
- let update_res = monitor.update_monitor(update, &self.broadcaster, &*self.fee_estimator, &self.logger);
- if update_res.is_err() {
- log_error!(self.logger, "Failed to update ChannelMonitor for channel {}.", log_funding_info!(monitor));
- }
- // Even if updating the monitor returns an error, the monitor's state will
- // still be changed. So, persist the updated monitor despite the error.
+ let update_res = monitor.update_monitor(update, &self.broadcaster, &self.fee_estimator, &self.logger);
+
let update_id = MonitorUpdateId::from_monitor_update(update);
let mut pending_monitor_updates = monitor_state.pending_monitor_updates.lock().unwrap();
- let persist_res = self.persister.update_persisted_channel(funding_txo, Some(update), monitor, update_id);
+ let persist_res = if update_res.is_err() {
+ // Even if updating the monitor returns an error, the monitor's state will
+ // still be changed, so we must persist the updated monitor despite the error.
+ // We don't want to persist a `monitor_update` which would fail to apply later,
+ // when the `channel_monitor` and its updates are read back from storage.
+ // Instead, we persist the entire `channel_monitor` here.
+ log_warn!(self.logger, "Failed to update ChannelMonitor for channel {}. Going ahead and persisting the entire ChannelMonitor", log_funding_info!(monitor));
+ self.persister.update_persisted_channel(funding_txo, None, monitor, update_id)
+ } else {
+ self.persister.update_persisted_channel(funding_txo, Some(update), monitor, update_id)
+ };
match persist_res {
ChannelMonitorUpdateStatus::InProgress => {
pending_monitor_updates.push(update_id);
use bitcoin::blockdata::block::BlockHeader;
use bitcoin::blockdata::transaction::{OutPoint as BitcoinOutPoint, TxOut, Transaction};
-use bitcoin::blockdata::script::{Script, Builder};
-use bitcoin::blockdata::opcodes;
+use bitcoin::blockdata::script::Script;
use bitcoin::hashes::Hash;
use bitcoin::hashes::sha256::Hash as Sha256;
-use bitcoin::hash_types::{Txid, BlockHash, WPubkeyHash};
+use bitcoin::hash_types::{Txid, BlockHash};
use bitcoin::secp256k1::{Secp256k1, ecdsa::Signature};
use bitcoin::secp256k1::{SecretKey, PublicKey};
best_block: BestBlock, counterparty_node_id: PublicKey) -> ChannelMonitor<Signer> {
assert!(commitment_transaction_number_obscure_factor <= (1 << 48));
- let payment_key_hash = WPubkeyHash::hash(&keys.pubkeys().payment_point.serialize());
- let counterparty_payment_script = Builder::new().push_opcode(opcodes::all::OP_PUSHBYTES_0).push_slice(&payment_key_hash[..]).into_script();
+ let counterparty_payment_script = chan_utils::get_counterparty_payment_script(
+ &channel_parameters.channel_type_features, &keys.pubkeys().payment_point
+ );
let counterparty_channel_parameters = channel_parameters.counterparty_parameters.as_ref().unwrap();
let counterparty_delayed_payment_base_key = counterparty_channel_parameters.pubkeys.delayed_payment_basepoint;
&self,
updates: &ChannelMonitorUpdate,
broadcaster: &B,
- fee_estimator: F,
+ fee_estimator: &F,
logger: &L,
) -> Result<(), ()>
where
/// Returns the descriptors for relevant outputs (i.e., those that we can spend) within the
/// transaction if they exist and the transaction has at least [`ANTI_REORG_DELAY`]
+ /// confirmations. For [`SpendableOutputDescriptor::DelayedPaymentOutput`] descriptors to be
+ /// returned, the transaction must have at least `max(ANTI_REORG_DELAY, to_self_delay)`
/// confirmations.
///
/// Descriptors returned by this method are primarily exposed via [`Event::SpendableOutputs`]
/// missed/unhandled descriptors. For the purpose of gathering historical records, if the
/// channel close has fully resolved (i.e., [`ChannelMonitor::get_claimable_balances`] returns
/// an empty set), you can retrieve all spendable outputs by providing all descendant spending
- /// transactions starting from the channel's funding or closing transaction that have at least
- /// [`ANTI_REORG_DELAY`] confirmations.
+ /// transactions starting from the channel's funding transaction and going down three levels.
///
/// `tx` is a transaction we'll scan the outputs of. Any transaction can be provided. If any
/// outputs which can be spent by us are found, at least one descriptor is returned.
pub fn get_spendable_outputs(&self, tx: &Transaction, confirmation_height: u32) -> Vec<SpendableOutputDescriptor> {
let inner = self.inner.lock().unwrap();
let current_height = inner.best_block.height;
- if current_height.saturating_sub(ANTI_REORG_DELAY) + 1 >= confirmation_height {
- inner.get_spendable_outputs(tx)
- } else {
- Vec::new()
- }
+ let mut spendable_outputs = inner.get_spendable_outputs(tx);
+ spendable_outputs.retain(|descriptor| {
+ let mut conf_threshold = current_height.saturating_sub(ANTI_REORG_DELAY) + 1;
+ if let SpendableOutputDescriptor::DelayedPaymentOutput(descriptor) = descriptor {
+ conf_threshold = cmp::min(conf_threshold,
+ current_height.saturating_sub(descriptor.to_self_delay as u32) + 1);
+ }
+ conf_threshold >= confirmation_height
+ });
+ spendable_outputs
+ }
+
+ #[cfg(test)]
+ pub fn get_counterparty_payment_script(&self) -> Script {
+ self.inner.lock().unwrap().counterparty_payment_script.clone()
+ }
+
+ #[cfg(test)]
+ pub fn set_counterparty_payment_script(&self, script: Script) {
+ self.inner.lock().unwrap().counterparty_payment_script = script;
}
}
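The confirmation gating in `get_spendable_outputs` above can be sketched standalone. This is a toy mirror of the retain logic, not LDK's implementation: the `ANTI_REORG_DELAY` value and the helper names here are assumptions for illustration.

```rust
// Assumed value for illustration; LDK defines its own ANTI_REORG_DELAY constant.
const ANTI_REORG_DELAY: u32 = 6;

// Per-descriptor confirmation threshold mirroring `get_spendable_outputs`:
// every output needs ANTI_REORG_DELAY confirmations, and a delayed-payment
// output additionally needs the channel's `to_self_delay` confirmations
// (taking the min of the two height thresholds keeps the stricter one).
fn conf_threshold(current_height: u32, to_self_delay: Option<u16>) -> u32 {
    let mut threshold = current_height.saturating_sub(ANTI_REORG_DELAY) + 1;
    if let Some(delay) = to_self_delay {
        threshold = threshold.min(current_height.saturating_sub(delay as u32) + 1);
    }
    threshold
}

// An output confirmed at `confirmation_height` is returned only once the
// threshold has caught up to (or passed) that height.
fn is_spendable(current_height: u32, confirmation_height: u32, to_self_delay: Option<u16>) -> bool {
    conf_threshold(current_height, to_self_delay) >= confirmation_height
}
```

With a large `to_self_delay`, a `DelayedPaymentOutput` is withheld long after other outputs at the same confirmation height would be returned.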
},
OnchainEvent::MaturingOutput {
descriptor: SpendableOutputDescriptor::DelayedPaymentOutput(ref descriptor) }
- if descriptor.outpoint.index as u32 == htlc_commitment_tx_output_idx => {
+ if event.transaction.as_ref().map(|tx| tx.input.iter().enumerate()
+ .any(|(input_idx, inp)|
+ Some(inp.previous_output.txid) == confirmed_txid &&
+ inp.previous_output.vout == htlc_commitment_tx_output_idx &&
+ // A maturing output for an HTLC claim will always be at the same
+ // index as the HTLC input. This is true pre-anchors, as there's
+ // only 1 input and 1 output. This is also true post-anchors,
+ // because we have a SIGHASH_SINGLE|ANYONECANPAY signature from our
+ // channel counterparty.
+ descriptor.outpoint.index as usize == input_idx
+ ))
+ .unwrap_or(false)
+ => {
debug_assert!(holder_delayed_output_pending.is_none());
holder_delayed_output_pending = Some(event.confirmation_threshold());
},
/// confirmations on the claim transaction.
///
/// Note that for `ChannelMonitors` which track a channel which went on-chain with versions of
- /// LDK prior to 0.0.111, balances may not be fully captured if our counterparty broadcasted
- /// a revoked state.
+ /// LDK prior to 0.0.111, some balances may be missing or excess balances may be included.
///
/// See [`Balance`] for additional details on the types of claimable balances which
/// may be returned here and their meanings.
#[cfg(test)]
pub fn deliberately_bogus_accepted_htlc_witness_program() -> Vec<u8> {
+ use bitcoin::blockdata::opcodes;
let mut ret = [opcodes::all::OP_NOP.to_u8(); 136];
ret[131] = opcodes::all::OP_DROP.to_u8();
ret[132] = opcodes::all::OP_DROP.to_u8();
{
self.payment_preimages.insert(payment_hash.clone(), payment_preimage.clone());
+ let confirmed_spend_txid = self.funding_spend_confirmed.or_else(|| {
+ self.onchain_events_awaiting_threshold_conf.iter().find_map(|event| match event.event {
+ OnchainEvent::FundingSpendConfirmation { .. } => Some(event.txid),
+ _ => None,
+ })
+ });
+ let confirmed_spend_txid = if let Some(txid) = confirmed_spend_txid {
+ txid
+ } else {
+ return;
+ };
+
// If the channel is force closed, try to claim the output from this preimage.
// First check if a counterparty commitment transaction has been broadcasted:
macro_rules! claim_htlcs {
}
}
if let Some(txid) = self.current_counterparty_commitment_txid {
- if let Some(commitment_number) = self.counterparty_commitment_txn_on_chain.get(&txid) {
- claim_htlcs!(*commitment_number, txid);
+ if txid == confirmed_spend_txid {
+ if let Some(commitment_number) = self.counterparty_commitment_txn_on_chain.get(&txid) {
+ claim_htlcs!(*commitment_number, txid);
+ } else {
+ debug_assert!(false);
+ log_error!(logger, "Detected counterparty commitment tx on-chain without tracking commitment number");
+ }
return;
}
}
if let Some(txid) = self.prev_counterparty_commitment_txid {
- if let Some(commitment_number) = self.counterparty_commitment_txn_on_chain.get(&txid) {
- claim_htlcs!(*commitment_number, txid);
+ if txid == confirmed_spend_txid {
+ if let Some(commitment_number) = self.counterparty_commitment_txn_on_chain.get(&txid) {
+ claim_htlcs!(*commitment_number, txid);
+ } else {
+ debug_assert!(false);
+ log_error!(logger, "Detected counterparty commitment tx on-chain without tracking commitment number");
+ }
return;
}
}
// *we* sign a holder commitment transaction, not when e.g. a watchtower broadcasts one of our
// holder commitment transactions.
if self.broadcasted_holder_revokable_script.is_some() {
- // Assume that the broadcasted commitment transaction confirmed in the current best
- // block. Even if not, its a reasonable metric for the bump criteria on the HTLC
- // transactions.
- let (claim_reqs, _) = self.get_broadcasted_holder_claims(&self.current_holder_commitment_tx, self.best_block.height());
- self.onchain_tx_handler.update_claims_view_from_requests(claim_reqs, self.best_block.height(), self.best_block.height(), broadcaster, fee_estimator, logger);
- if let Some(ref tx) = self.prev_holder_signed_commitment_tx {
- let (claim_reqs, _) = self.get_broadcasted_holder_claims(&tx, self.best_block.height());
+ let holder_commitment_tx = if self.current_holder_commitment_tx.txid == confirmed_spend_txid {
+ Some(&self.current_holder_commitment_tx)
+ } else if let Some(prev_holder_commitment_tx) = &self.prev_holder_signed_commitment_tx {
+ if prev_holder_commitment_tx.txid == confirmed_spend_txid {
+ Some(prev_holder_commitment_tx)
+ } else {
+ None
+ }
+ } else {
+ None
+ };
+ if let Some(holder_commitment_tx) = holder_commitment_tx {
+ // Assume that the broadcasted commitment transaction confirmed in the current best
+ // block. Even if not, it's a reasonable metric for the bump criteria on the HTLC
+ // transactions.
+ let (claim_reqs, _) = self.get_broadcasted_holder_claims(&holder_commitment_tx, self.best_block.height());
self.onchain_tx_handler.update_claims_view_from_requests(claim_reqs, self.best_block.height(), self.best_block.height(), broadcaster, fee_estimator, logger);
}
}
self.pending_monitor_events.push(MonitorEvent::HolderForceClosed(self.funding_info.0));
}
- pub fn update_monitor<B: Deref, F: Deref, L: Deref>(&mut self, updates: &ChannelMonitorUpdate, broadcaster: &B, fee_estimator: F, logger: &L) -> Result<(), ()>
+ pub fn update_monitor<B: Deref, F: Deref, L: Deref>(&mut self, updates: &ChannelMonitorUpdate, broadcaster: &B, fee_estimator: &F, logger: &L) -> Result<(), ()>
where B::Target: BroadcasterInterface,
F::Target: FeeEstimator,
L::Target: Logger,
panic!("Attempted to apply ChannelMonitorUpdates out of order, check the update_id before passing an update to update_monitor!");
}
let mut ret = Ok(());
- let bounded_fee_estimator = LowerBoundedFeeEstimator::new(&*fee_estimator);
+ let bounded_fee_estimator = LowerBoundedFeeEstimator::new(&**fee_estimator);
for update in updates.updates.iter() {
match update {
ChannelMonitorUpdateStep::LatestHolderCommitmentTXInfo { commitment_tx, htlc_outputs, claimed_htlcs, nondust_htlc_sources } => {
output: outp.clone(),
channel_keys_id: self.channel_keys_id,
channel_value_satoshis: self.channel_value_satoshis,
+ channel_transaction_parameters: Some(self.onchain_tx_handler.channel_transaction_parameters.clone()),
}));
}
if self.shutdown_script.as_ref() == Some(&outp.script_pubkey) {
1 => { None },
_ => return Err(DecodeError::InvalidValue),
};
- let counterparty_payment_script = Readable::read(reader)?;
+ let mut counterparty_payment_script: Script = Readable::read(reader)?;
let shutdown_script = {
let script = <Script as Readable>::read(reader)?;
if script.is_empty() { None } else { Some(script) }
(17, initial_counterparty_commitment_info, option),
});
+ // Monitors for anchor outputs channels opened in v0.0.116 suffered from a bug in which the
+ // wrong `counterparty_payment_script` was being tracked. Fix it now on deserialization to
+ // give them a chance to recognize the spendable output.
+ if onchain_tx_handler.channel_type_features().supports_anchors_zero_fee_htlc_tx() &&
+ counterparty_payment_script.is_v0_p2wpkh()
+ {
+ let payment_point = onchain_tx_handler.channel_transaction_parameters.holder_pubkeys.payment_point;
+ counterparty_payment_script =
+ chan_utils::get_to_countersignatory_with_anchors_redeemscript(&payment_point).to_v0_p2wsh();
+ }
+
Ok((best_block.block_hash(), ChannelMonitor::from_impl(ChannelMonitorImpl {
latest_update_id,
commitment_transaction_number_obscure_factor,
let broadcaster = TestBroadcaster::with_blocks(Arc::clone(&nodes[1].blocks));
assert!(
- pre_update_monitor.update_monitor(&replay_update, &&broadcaster, &chanmon_cfgs[1].fee_estimator, &nodes[1].logger)
+ pre_update_monitor.update_monitor(&replay_update, &&broadcaster, &&chanmon_cfgs[1].fee_estimator, &nodes[1].logger)
.is_err());
// Even though we error'd on the first update, we should still have generated an HTLC claim
// transaction
}
}
+impl core::fmt::Display for OutPoint {
+ fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
+ write!(f, "{}:{}", self.txid, self.index)
+ }
+}
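The new `Display` impl renders a funding outpoint as `txid:index`. A minimal standalone equivalent, using a toy `OutPoint` stand-in rather than LDK's `chain::transaction::OutPoint`:

```rust
use core::fmt;

// Toy stand-in for LDK's OutPoint, just to show the `txid:index` rendering.
struct OutPoint { txid: String, index: u16 }

impl fmt::Display for OutPoint {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}:{}", self.txid, self.index)
    }
}
```

Implementing `Display` also yields `to_string()` for free via the blanket `ToString` impl, which is what makes outpoints usable directly in log messages.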
+
impl_writeable!(OutPoint, { txid, index });
#[cfg(test)]
use crate::ln::features::ChannelTypeFeatures;
use crate::ln::PaymentPreimage;
use crate::prelude::*;
-use crate::sign::{EcdsaChannelSigner, SignerProvider, WriteableEcdsaChannelSigner};
+use crate::sign::{EcdsaChannelSigner, SignerProvider, WriteableEcdsaChannelSigner, P2WPKH_WITNESS_WEIGHT};
use crate::sync::Mutex;
use crate::util::logger::Logger;
}
impl Utxo {
- const P2WPKH_WITNESS_WEIGHT: u64 = 1 /* num stack items */ +
- 1 /* sig length */ +
- 73 /* sig including sighash flag */ +
- 1 /* pubkey length */ +
- 33 /* pubkey */;
-
/// Returns a `Utxo` with the `satisfaction_weight` estimate for a legacy P2PKH output.
pub fn new_p2pkh(outpoint: OutPoint, value: u64, pubkey_hash: &PubkeyHash) -> Self {
let script_sig_size = 1 /* script_sig length */ +
value,
script_pubkey: Script::new_p2sh(&Script::new_v0_p2wpkh(pubkey_hash).script_hash()),
},
- satisfaction_weight: script_sig_size * WITNESS_SCALE_FACTOR as u64 + Self::P2WPKH_WITNESS_WEIGHT,
+ satisfaction_weight: script_sig_size * WITNESS_SCALE_FACTOR as u64 + P2WPKH_WITNESS_WEIGHT,
}
}
value,
script_pubkey: Script::new_v0_p2wpkh(pubkey_hash),
},
- satisfaction_weight: EMPTY_SCRIPT_SIG_WEIGHT + Self::P2WPKH_WITNESS_WEIGHT,
+ satisfaction_weight: EMPTY_SCRIPT_SIG_WEIGHT + P2WPKH_WITNESS_WEIGHT,
}
}
}
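The `satisfaction_weight` arithmetic above can be checked with plain integers. This sketch reuses the constant names from the code (`P2WPKH_WITNESS_WEIGHT`, `WITNESS_SCALE_FACTOR`, `EMPTY_SCRIPT_SIG_WEIGHT`), but the nested-P2WPKH script_sig byte counts are this sketch's assumption, not copied from LDK:

```rust
// Witness weight of a P2WPKH spend: stack-item count (1) + sig length byte (1)
// + 73-byte signature including sighash flag + pubkey length byte (1) + 33-byte pubkey.
const P2WPKH_WITNESS_WEIGHT: u64 = 1 + 1 + 73 + 1 + 33;
const WITNESS_SCALE_FACTOR: u64 = 4;
// A native segwit input still carries a 1-byte (empty) script_sig length at 4x weight.
const EMPTY_SCRIPT_SIG_WEIGHT: u64 = 1 * WITNESS_SCALE_FACTOR;

// Native P2WPKH: only the witness plus the empty script_sig length byte.
fn p2wpkh_satisfaction_weight() -> u64 {
    EMPTY_SCRIPT_SIG_WEIGHT + P2WPKH_WITNESS_WEIGHT
}

// P2SH-wrapped P2WPKH: the 22-byte redeem script (OP_0 <20-byte hash>), its
// push opcode, and the script_sig length byte all count at 4x weight.
fn nested_p2wpkh_satisfaction_weight() -> u64 {
    let script_sig_size: u64 = 1 /* script_sig length */ + 1 /* push opcode */ + 22 /* redeem script */;
    script_sig_size * WITNESS_SCALE_FACTOR + P2WPKH_WITNESS_WEIGHT
}
```

The witness bytes count once while script_sig bytes count four times, which is why hoisting `P2WPKH_WITNESS_WEIGHT` out of `Utxo` lets both the native and nested constructors share one witness estimate.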
/// or was explicitly abandoned by [`ChannelManager::abandon_payment`].
///
/// [`ChannelManager::abandon_payment`]: crate::ln::channelmanager::ChannelManager::abandon_payment
+ #[cfg(invreqfailed)]
InvoiceRequestFailed {
/// The `payment_id` to have been associated with payment for the requested invoice.
payment_id: PaymentId,
(8, funding_txo, required),
});
},
+ #[cfg(invreqfailed)]
&Event::InvoiceRequestFailed { ref payment_id } => {
33u8.write(writer)?;
write_tlv_fields!(writer, {
};
f()
},
+ #[cfg(invreqfailed)]
33u8 => {
let f = || {
_init_and_read_len_prefixed_tlv_fields!(reader, {
use bitcoin::hashes::{Hash, HashEngine};
use bitcoin::hashes::sha256::Hash as Sha256;
use bitcoin::hashes::ripemd160::Hash as Ripemd160;
-use bitcoin::hash_types::{Txid, PubkeyHash};
+use bitcoin::hash_types::{Txid, PubkeyHash, WPubkeyHash};
use crate::chain::chaininterface::fee_for_weight;
use crate::chain::package::WEIGHT_REVOKED_OUTPUT;
});
/// One counterparty's public keys which do not change over the life of a channel.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct ChannelPublicKeys {
/// The public key which is used to sign all commitment transactions, as it appears in the
/// on-chain channel lock-in 2-of-2 multisig output.
res
}
+/// Returns the script for the counterparty's output on a holder's commitment transaction based on
+/// the channel type.
+pub fn get_counterparty_payment_script(channel_type_features: &ChannelTypeFeatures, payment_key: &PublicKey) -> Script {
+ if channel_type_features.supports_anchors_zero_fee_htlc_tx() {
+ get_to_countersignatory_with_anchors_redeemscript(payment_key).to_v0_p2wsh()
+ } else {
+ Script::new_v0_p2wpkh(&WPubkeyHash::hash(&payment_key.serialize()))
+ }
+}
+
/// Information about an HTLC as it appears in a commitment transaction
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct HTLCOutputInCommitment {
///
/// Normally, this is converted to the broadcaster/countersignatory-organized DirectedChannelTransactionParameters
/// before use, via the as_holder_broadcastable and as_counterparty_broadcastable functions.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct ChannelTransactionParameters {
/// Holder public keys
pub holder_pubkeys: ChannelPublicKeys,
}
/// Late-bound per-channel counterparty data used to build transactions.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct CounterpartyChannelTransactionParameters {
/// Counter-party public keys
pub pubkeys: ChannelPublicKeys,
}
pub struct AvailableBalances {
+ /// The amount that would go to us if we close the channel, ignoring any on-chain fees.
+ pub balance_msat: u64,
/// Total amount available for our counterparty to send to us.
pub inbound_capacity_msat: u64,
/// Total amount available for us to send to our counterparty.
let inbound_stats = context.get_inbound_pending_htlc_stats(None);
let outbound_stats = context.get_outbound_pending_htlc_stats(None);
+ let mut balance_msat = context.value_to_self_msat;
+ for ref htlc in context.pending_inbound_htlcs.iter() {
+ if let InboundHTLCState::LocalRemoved(InboundHTLCRemovalReason::Fulfill(_)) = htlc.state {
+ balance_msat += htlc.amount_msat;
+ }
+ }
+ balance_msat -= outbound_stats.pending_htlcs_value_msat;
+
let outbound_capacity_msat = context.value_to_self_msat
.saturating_sub(outbound_stats.pending_htlcs_value_msat)
.saturating_sub(
outbound_capacity_msat,
next_outbound_htlc_limit_msat: available_capacity_msat,
next_outbound_htlc_minimum_msat,
+ balance_msat,
}
}
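The new `balance_msat` above is our commitment balance, plus inbound HTLCs we've already claimed with the preimage (their value is ours even before the commitment transaction updates), minus all pending outbound HTLC value. A self-contained sketch of that arithmetic, with a toy HTLC record in place of LDK's `InboundHTLCOutput`/`InboundHTLCState`:

```rust
// Toy inbound-HTLC record; LDK tracks fulfillment via InboundHTLCState.
struct InboundHtlc { amount_msat: u64, fulfilled: bool }

// Sketch of the balance_msat computation shown above.
fn balance_msat(
    value_to_self_msat: u64,
    pending_inbound: &[InboundHtlc],
    pending_outbound_value_msat: u64,
) -> u64 {
    let mut balance = value_to_self_msat;
    // Inbound HTLCs for which we've released the preimage are counted as ours.
    for htlc in pending_inbound {
        if htlc.fulfilled {
            balance += htlc.amount_msat;
        }
    }
    // All pending outbound HTLCs are subtracted; if they fail, the value
    // returns to our balance later.
    balance - pending_outbound_value_msat
}
```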
NS::Target: NodeSigner,
L::Target: Logger
{
+ let mut msgs = (None, None);
if let Some(funding_txo) = self.context.get_funding_txo() {
for &(index_in_block, tx) in txdata.iter() {
// Check if the transaction is the expected funding transaction, and if it is,
if let Some(channel_ready) = self.check_get_channel_ready(height) {
log_info!(logger, "Sending a channel_ready to our peer for channel {}", &self.context.channel_id);
let announcement_sigs = self.get_announcement_sigs(node_signer, genesis_block_hash, user_config, height, logger);
- return Ok((Some(channel_ready), announcement_sigs));
+ msgs = (Some(channel_ready), announcement_sigs);
}
}
for inp in tx.input.iter() {
}
}
}
- Ok((None, None))
+ Ok(msgs)
}
/// When a new block is connected, we check the height of the block against outbound holding
}
}
- pub fn channel_update(&mut self, msg: &msgs::ChannelUpdate) -> Result<(), ChannelError> {
- self.context.counterparty_forwarding_info = Some(CounterpartyForwardingInfo {
+ /// Applies the `ChannelUpdate` and returns a boolean indicating whether a change actually
+ /// happened.
+ pub fn channel_update(&mut self, msg: &msgs::ChannelUpdate) -> Result<bool, ChannelError> {
+ let new_forwarding_info = Some(CounterpartyForwardingInfo {
fee_base_msat: msg.contents.fee_base_msat,
fee_proportional_millionths: msg.contents.fee_proportional_millionths,
cltv_expiry_delta: msg.contents.cltv_expiry_delta
});
+ let did_change = self.context.counterparty_forwarding_info != new_forwarding_info;
+ if did_change {
+ self.context.counterparty_forwarding_info = new_forwarding_info;
+ }
- Ok(())
+ Ok(did_change)
}
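The change makes `channel_update` report whether the counterparty's forwarding parameters actually changed, so the caller can skip persistence on a no-op update. The compare-then-assign pattern in isolation, with toy struct names standing in for LDK's types:

```rust
#[derive(Clone, PartialEq)]
struct ForwardingInfo {
    fee_base_msat: u32,
    fee_proportional_millionths: u32,
    cltv_expiry_delta: u16,
}

struct Context { counterparty_forwarding_info: Option<ForwardingInfo> }

impl Context {
    // Compare before assigning and report whether anything actually changed,
    // letting callers return SkipPersistNoEvents on redundant updates.
    fn apply_update(&mut self, new_info: ForwardingInfo) -> bool {
        let new_info = Some(new_info);
        let did_change = self.counterparty_forwarding_info != new_info;
        if did_change {
            self.counterparty_forwarding_info = new_info;
        }
        did_change
    }
}
```

The first application of a given update returns `true`; replaying the identical update returns `false`, matching the assertions added in the test hunk below.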
/// Begins the shutdown process, getting a message for the remote peer and returning all
},
signature: Signature::from(unsafe { FFISignature::new() })
};
- node_a_chan.channel_update(&update).unwrap();
+ assert!(node_a_chan.channel_update(&update).unwrap());
// The counterparty can send an update with a higher minimum HTLC, but that shouldn't
// change our official htlc_minimum_msat.
},
None => panic!("expected counterparty forwarding info to be Some")
}
+
+ assert!(!node_a_chan.channel_update(&update).unwrap());
}
#[cfg(feature = "_test_vectors")]
Arc<DefaultRouter<
Arc<NetworkGraph<Arc<L>>>,
Arc<L>,
- Arc<Mutex<ProbabilisticScorer<Arc<NetworkGraph<Arc<L>>>, Arc<L>>>>,
+ Arc<RwLock<ProbabilisticScorer<Arc<NetworkGraph<Arc<L>>>, Arc<L>>>>,
ProbabilisticScoringFeeParameters,
ProbabilisticScorer<Arc<NetworkGraph<Arc<L>>>, Arc<L>>,
>>,
&'e DefaultRouter<
&'f NetworkGraph<&'g L>,
&'g L,
- &'h Mutex<ProbabilisticScorer<&'f NetworkGraph<&'g L>, &'g L>>,
+ &'h RwLock<ProbabilisticScorer<&'f NetworkGraph<&'g L>, &'g L>>,
ProbabilisticScoringFeeParameters,
ProbabilisticScorer<&'f NetworkGraph<&'g L>, &'g L>
>,
>;
/// A trivial trait which describes any [`ChannelManager`].
+///
+/// This is not exported to bindings users as general cover traits aren't useful in other
+/// languages.
pub trait AChannelManager {
/// A type implementing [`chain::Watch`].
type Watch: chain::Watch<Self::Signer> + ?Sized;
}
/// Details of a channel, as returned by [`ChannelManager::list_channels`] and [`ChannelManager::list_usable_channels`]
-///
-/// Balances of a channel are available through [`ChainMonitor::get_claimable_balances`] and
-/// [`ChannelMonitor::get_claimable_balances`], calculated with respect to the corresponding on-chain
-/// transactions.
-///
-/// [`ChainMonitor::get_claimable_balances`]: crate::chain::chainmonitor::ChainMonitor::get_claimable_balances
#[derive(Clone, Debug, PartialEq)]
pub struct ChannelDetails {
/// The channel's ID (prior to funding transaction generation, this is a random 32 bytes,
///
/// This value will be `None` for objects serialized with LDK versions prior to 0.0.115.
pub feerate_sat_per_1000_weight: Option<u32>,
+ /// Our total balance. This is the amount we would get if we close the channel.
+ /// This value is not exact. Due to various in-flight changes and feerate changes, exactly this
+ /// amount is not likely to be recoverable on close.
+ ///
+ /// This does not include any pending HTLCs which are not yet fully resolved (and, thus, whose
+ /// balance is not available for inclusion in new outbound HTLCs). This further does not include
+ /// any pending outgoing HTLCs which are awaiting some other resolution to be sent.
+ /// This does not consider any on-chain fees.
+ ///
+ /// See also [`ChannelDetails::outbound_capacity_msat`]
+ pub balance_msat: u64,
/// The available outbound capacity for sending HTLCs to the remote peer. This does not include
/// any pending HTLCs which are not yet fully resolved (and, thus, whose balance is not
/// available for inclusion in new outbound HTLCs). This further does not include any pending
/// outgoing HTLCs which are awaiting some other resolution to be sent.
///
+ /// See also [`ChannelDetails::balance_msat`]
+ ///
/// This value is not exact. Due to various in-flight changes, feerate changes, and our
/// conflict-avoidance policy, exactly this amount is not likely to be spendable. However, we
/// should be able to spend nearly this amount.
/// the current state and per-HTLC limit(s). This is intended for use when routing, allowing us
/// to use a limit as close as possible to the HTLC limit we can currently send.
///
- /// See also [`ChannelDetails::next_outbound_htlc_minimum_msat`] and
- /// [`ChannelDetails::outbound_capacity_msat`].
+ /// See also [`ChannelDetails::next_outbound_htlc_minimum_msat`],
+ /// [`ChannelDetails::balance_msat`], and [`ChannelDetails::outbound_capacity_msat`].
pub next_outbound_htlc_limit_msat: u64,
/// The minimum value for sending a single HTLC to the remote peer. This is the equivalent of
/// [`ChannelDetails::next_outbound_htlc_limit_msat`] but represents a lower-bound, rather than
channel_value_satoshis: context.get_value_satoshis(),
feerate_sat_per_1000_weight: Some(context.get_feerate_sat_per_1000_weight()),
unspendable_punishment_reserve: to_self_reserve_satoshis,
+ balance_msat: balance.balance_msat,
inbound_capacity_msat: balance.inbound_capacity_msat,
outbound_capacity_msat: balance.outbound_capacity_msat,
next_outbound_htlc_limit_msat: balance.next_outbound_htlc_limit_msat,
// it does not exist for this peer. Either way, we can attempt to force-close it.
//
// An appropriate error will be returned for non-existence of the channel if that's the case.
+ mem::drop(peer_state_lock);
+ mem::drop(per_peer_state);
return self.force_close_channel_with_peer(&channel_id, counterparty_node_id, None, false).map(|_| ())
},
}
// payment logic has enough time to fail the HTLC backward before our onchain logic triggers a
// channel closure (see HTLC_FAIL_BACK_BUFFER rationale).
let current_height: u32 = self.best_block.read().unwrap().height();
- if (outgoing_cltv_value as u64) <= current_height as u64 + HTLC_FAIL_BACK_BUFFER as u64 + 1 {
+ if cltv_expiry <= current_height + HTLC_FAIL_BACK_BUFFER + 1 {
let mut err_data = Vec::with_capacity(12);
err_data.extend_from_slice(&amt_msat.to_be_bytes());
err_data.extend_from_slice(&current_height.to_be_bytes());
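The simplified expiry check above refuses HTLCs whose CLTV expiry is too close to the current height to safely fail backward. In isolation, with an assumed buffer value (LDK derives `HTLC_FAIL_BACK_BUFFER` from its claim-buffer and grace-period constants):

```rust
// Assumed value for illustration only.
const HTLC_FAIL_BACK_BUFFER: u32 = 36;

// Reject HTLCs expiring so soon that we couldn't fail them backward before
// on-chain logic would trigger a channel closure.
fn expiry_too_soon(cltv_expiry: u32, current_height: u32) -> bool {
    cltv_expiry <= current_height + HTLC_FAIL_BACK_BUFFER + 1
}
```

Dropping the `u64` casts works because both operands are `u32` heights; the addition only overflows at block heights within `HTLC_FAIL_BACK_BUFFER + 1` of `u32::MAX`.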
/// In general, a path may raise:
/// * [`APIError::InvalidRoute`] when an invalid route or forwarding parameter (cltv_delta, fee,
/// node public key) is specified.
- /// * [`APIError::ChannelUnavailable`] if the next-hop channel is not available for updates
- /// (including due to previous monitor update failure or new permanent monitor update
- /// failure).
+ /// * [`APIError::ChannelUnavailable`] if the next-hop channel is not available as it has been
+ /// closed, doesn't exist, or the peer is currently disconnected.
/// * [`APIError::MonitorUpdateInProgress`] if a new monitor update failure prevented sending the
/// relevant updates.
///
/// wait until you receive either a [`Event::PaymentFailed`] or [`Event::PaymentSent`] event to
/// determine the ultimate status of a payment.
///
- /// # Requested Invoices
- ///
- /// In the case of paying a [`Bolt12Invoice`], abandoning the payment prior to receiving the
- /// invoice will result in an [`Event::InvoiceRequestFailed`] and prevent any attempts at paying
- /// it once received. The other events may only be generated once the invoice has been received.
- ///
/// # Restart Behavior
///
/// If an [`Event::PaymentFailed`] is generated and we restart without first persisting the
- /// [`ChannelManager`], another [`Event::PaymentFailed`] may be generated; likewise for
- /// [`Event::InvoiceRequestFailed`].
- ///
- /// [`Bolt12Invoice`]: crate::offers::invoice::Bolt12Invoice
+ /// [`ChannelManager`], another [`Event::PaymentFailed`] may be generated.
pub fn abandon_payment(&self, payment_id: PaymentId) {
let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self);
self.pending_outbound_payments.abandon_payment(payment_id, PaymentFailureReason::UserAbandoned, &self.pending_events);
btree_map::Entry::Vacant(vacant) => Some(vacant.insert(Vec::new())),
}
});
- for (channel_idx, &(temporary_channel_id, counterparty_node_id)) in temporary_channels.iter().enumerate() {
+ for &(temporary_channel_id, counterparty_node_id) in temporary_channels.iter() {
result = result.and_then(|_| self.funding_transaction_generated_intern(
temporary_channel_id,
counterparty_node_id,
for channel_id in channel_ids {
if !peer_state.has_channel(channel_id) {
return Err(APIError::ChannelUnavailable {
- err: format!("Channel with ID {} was not found for the passed counterparty_node_id {}", channel_id, counterparty_node_id),
+ err: format!("Channel with id {} not found for the passed counterparty node_id {}", channel_id, counterparty_node_id),
});
};
}
next_hop_channel_id, next_node_id)
}),
None => return Err(APIError::ChannelUnavailable {
- err: format!("Channel with id {} not found for the passed counterparty node_id {}.",
+ err: format!("Channel with id {} not found for the passed counterparty node_id {}",
next_hop_channel_id, next_node_id)
})
}
if !chan.context.is_outbound() { return NotifyOption::SkipPersistNoEvents; }
// If the feerate has decreased by less than half, don't bother
if new_feerate <= chan.context.get_feerate_sat_per_1000_weight() && new_feerate * 2 > chan.context.get_feerate_sat_per_1000_weight() {
- log_trace!(self.logger, "Channel {} does not qualify for a feerate change from {} to {}.",
+ if new_feerate != chan.context.get_feerate_sat_per_1000_weight() {
+ log_trace!(self.logger, "Channel {} does not qualify for a feerate change from {} to {}.",
chan_id, chan.context.get_feerate_sat_per_1000_weight(), new_feerate);
+ }
return NotifyOption::SkipPersistNoEvents;
}
if !chan.context.is_live() {
return Ok(NotifyOption::SkipPersistNoEvents);
} else {
log_debug!(self.logger, "Received channel_update {:?} for channel {}.", msg, chan_id);
- try_chan_phase_entry!(self, chan.channel_update(&msg), chan_phase_entry);
+ let did_change = try_chan_phase_entry!(self, chan.channel_update(&msg), chan_phase_entry);
+ // If nothing changed after applying their update, we don't need to bother
+ // persisting.
+ if !did_change {
+ return Ok(NotifyOption::SkipPersistNoEvents);
+ }
}
} else {
return try_chan_phase_entry!(self, Err(ChannelError::Close(
fn maybe_generate_initial_closing_signed(&self) -> bool {
let mut handle_errors: Vec<(PublicKey, Result<(), _>)> = Vec::new();
let mut has_update = false;
- let mut shutdown_result = None;
- let mut unbroadcasted_batch_funding_txid = None;
+ let mut shutdown_results = Vec::new();
{
let per_peer_state = self.per_peer_state.read().unwrap();
peer_state.channel_by_id.retain(|channel_id, phase| {
match phase {
ChannelPhase::Funded(chan) => {
- unbroadcasted_batch_funding_txid = chan.context.unbroadcasted_batch_funding_txid();
+ let unbroadcasted_batch_funding_txid = chan.context.unbroadcasted_batch_funding_txid();
match chan.maybe_propose_closing_signed(&self.fee_estimator, &self.logger) {
Ok((msg_opt, tx_opt)) => {
if let Some(msg) = msg_opt {
log_info!(self.logger, "Broadcasting {}", log_tx!(tx));
self.tx_broadcaster.broadcast_transactions(&[&tx]);
update_maps_on_chan_removal!(self, &chan.context);
- shutdown_result = Some((None, Vec::new(), unbroadcasted_batch_funding_txid));
+ shutdown_results.push((None, Vec::new(), unbroadcasted_batch_funding_txid));
false
} else { true }
},
let _ = handle_error!(self, err, counterparty_node_id);
}
- if let Some(shutdown_result) = shutdown_result {
+ for shutdown_result in shutdown_results.drain(..) {
self.finish_close_channel(shutdown_result);
}
(10, self.channel_value_satoshis, required),
(12, self.unspendable_punishment_reserve, option),
(14, user_channel_id_low, required),
- (16, self.next_outbound_htlc_limit_msat, required), // Forwards compatibility for removed balance_msat field.
+ (16, self.balance_msat, required),
(18, self.outbound_capacity_msat, required),
(19, self.next_outbound_htlc_limit_msat, required),
(20, self.inbound_capacity_msat, required),
(10, channel_value_satoshis, required),
(12, unspendable_punishment_reserve, option),
(14, user_channel_id_low, required),
- (16, _balance_msat, option), // Backwards compatibility for removed balance_msat field.
+ (16, balance_msat, required),
(18, outbound_capacity_msat, required),
// Note that by the time we get past the required read above, outbound_capacity_msat will be
// filled in, so we can safely unwrap it here.
let user_channel_id = user_channel_id_low as u128 +
((user_channel_id_high_opt.unwrap_or(0 as u64) as u128) << 64);
- let _balance_msat: Option<u64> = _balance_msat;
-
Ok(Self {
inbound_scid_alias,
channel_id: channel_id.0.unwrap(),
channel_value_satoshis: channel_value_satoshis.0.unwrap(),
unspendable_punishment_reserve,
user_channel_id,
+ balance_msat: balance_msat.0.unwrap(),
outbound_capacity_msat: outbound_capacity_msat.0.unwrap(),
next_outbound_htlc_limit_msat: next_outbound_htlc_limit_msat.0.unwrap(),
next_outbound_htlc_minimum_msat: next_outbound_htlc_minimum_msat.0.unwrap(),
check_api_error_message(expected_message, res_err)
}
+ fn check_channel_unavailable_error<T>(res_err: Result<T, APIError>, expected_channel_id: ChannelId, peer_node_id: PublicKey) {
+ let expected_message = format!("Channel with id {} not found for the passed counterparty node_id {}", expected_channel_id, peer_node_id);
+ check_api_error_message(expected_message, res_err)
+ }
+
+ fn check_api_misuse_error<T>(res_err: Result<T, APIError>) {
+ let expected_message = "No such channel awaiting to be accepted.".to_string();
+ check_api_error_message(expected_message, res_err)
+ }
+
fn check_api_error_message<T>(expected_err_message: String, res_err: Result<T, APIError>) {
match res_err {
Err(APIError::APIMisuseError { err }) => {
check_unkown_peer_error(nodes[0].node.update_channel_config(&unkown_public_key, &[channel_id], &ChannelConfig::default()), unkown_public_key);
}
+ #[test]
+ fn test_api_calls_with_unavailable_channel() {
+ // Tests that our API functions that expect a `counterparty_node_id` and a `channel_id`
+ // as input behave as expected if the `counterparty_node_id` is a known peer in the
+ // `ChannelManager::per_peer_state` map, but the peer state doesn't contain a channel with
+ // the given `channel_id`.
+ let chanmon_cfg = create_chanmon_cfgs(2);
+ let node_cfg = create_node_cfgs(2, &chanmon_cfg);
+ let node_chanmgr = create_node_chanmgrs(2, &node_cfg, &[None, None]);
+ let nodes = create_network(2, &node_cfg, &node_chanmgr);
+
+ let counterparty_node_id = nodes[1].node.get_our_node_id();
+
+ // Dummy values
+ let channel_id = ChannelId::from_bytes([4; 32]);
+
+ // Test the API functions.
+ check_api_misuse_error(nodes[0].node.accept_inbound_channel(&channel_id, &counterparty_node_id, 42));
+
+ check_channel_unavailable_error(nodes[0].node.close_channel(&channel_id, &counterparty_node_id), channel_id, counterparty_node_id);
+
+ check_channel_unavailable_error(nodes[0].node.force_close_broadcasting_latest_txn(&channel_id, &counterparty_node_id), channel_id, counterparty_node_id);
+
+ check_channel_unavailable_error(nodes[0].node.force_close_without_broadcasting_txn(&channel_id, &counterparty_node_id), channel_id, counterparty_node_id);
+
+ check_channel_unavailable_error(nodes[0].node.forward_intercepted_htlc(InterceptId([0; 32]), &channel_id, counterparty_node_id, 1_000_000), channel_id, counterparty_node_id);
+
+ check_channel_unavailable_error(nodes[0].node.update_channel_config(&counterparty_node_id, &[channel_id], &ChannelConfig::default()), channel_id, counterparty_node_id);
+ }
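What the helper assertions above boil down to can be sketched with a hypothetical minimal stand-in for LDK's `APIError` (not the real type): the test only cares that the error string matches the expected message for whichever variant carried it.

```rust
// Hypothetical stand-in for LDK's APIError; only the variants exercised above.
pub enum ApiError {
    ApiMisuseError { err: String },
    ChannelUnavailable { err: String },
}

// Mirrors check_api_error_message: extract the error string regardless of
// which of the two variants the API call returned.
pub fn error_message<T>(res: Result<T, ApiError>) -> String {
    match res {
        Err(ApiError::ApiMisuseError { err }) | Err(ApiError::ChannelUnavailable { err }) => err,
        Ok(_) => panic!("expected an error"),
    }
}
```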
+
#[test]
fn test_connection_limiting() {
// Test that we limit un-channel'd peers and un-funded channels properly.
sender_intended_amt_msat - extra_fee_msat, 42, None, true, Some(extra_fee_msat)).is_ok());
}
+ #[test]
+ fn test_final_incorrect_cltv() {
+ let chanmon_cfg = create_chanmon_cfgs(1);
+ let node_cfg = create_node_cfgs(1, &chanmon_cfg);
+ let node_chanmgr = create_node_chanmgrs(1, &node_cfg, &[None]);
+ let node = create_network(1, &node_cfg, &node_chanmgr);
+
+ let result = node[0].node.construct_recv_pending_htlc_info(msgs::InboundOnionPayload::Receive {
+ amt_msat: 100,
+ outgoing_cltv_value: 22,
+ payment_metadata: None,
+ keysend_preimage: None,
+ payment_data: Some(msgs::FinalOnionHopData {
+ payment_secret: PaymentSecret([0; 32]), total_msat: 100,
+ }),
+ custom_tlvs: Vec::new(),
+ }, [0; 32], PaymentHash([0; 32]), 100, 23, None, true, None);
+
+ // Should not return an error as this condition:
+ // https://github.com/lightning/bolts/blob/4dcc377209509b13cf89a4b91fde7d478f5b46d8/04-onion-routing.md?plain=1#L334
+ // is not satisfied.
+ assert!(result.is_ok());
+ }
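The BOLT-04 condition the test above references can be sketched as a simple comparison (a simplification, not LDK's actual implementation):

```rust
// BOLT-04: if cltv_expiry < outgoing_cltv_value at the final hop, the
// recipient must fail with final_incorrect_cltv_expiry. The test above passes
// an HTLC cltv_expiry of 23 against an onion outgoing_cltv_value of 22, so
// the condition is not hit and construct_recv_pending_htlc_info returns Ok.
fn final_cltv_acceptable(htlc_cltv_expiry: u32, onion_outgoing_cltv_value: u32) -> bool {
    htlc_cltv_expiry >= onion_outgoing_cltv_value
}
```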
+
#[test]
fn test_inbound_anchors_manual_acceptance() {
// Tests that we properly limit inbound channels when we have the manual-channel-acceptance
Ok(())
}
- fn from_be_bytes(mut flags: Vec<u8>) -> Features<T> {
+ /// Create a [`Features`] given a set of flags, in big-endian. This is the byte order used by
+ /// most on-the-wire encodings.
+ ///
+ /// This is not exported to bindings users as we don't support export across multiple T
+ pub fn from_be_bytes(mut flags: Vec<u8>) -> Features<T> {
flags.reverse(); // Swap to little-endian
Self {
flags,
use crate::chain::channelmonitor::ChannelMonitor;
use crate::chain::transaction::OutPoint;
use crate::events::{ClaimedHTLC, ClosureReason, Event, HTLCDestination, MessageSendEvent, MessageSendEventsProvider, PathFailure, PaymentPurpose, PaymentFailureReason};
-use crate::events::bump_transaction::{BumpTransactionEventHandler, Wallet, WalletSource};
+use crate::events::bump_transaction::{BumpTransactionEvent, BumpTransactionEventHandler, Wallet, WalletSource};
use crate::ln::{ChannelId, PaymentPreimage, PaymentHash, PaymentSecret};
use crate::ln::channelmanager::{AChannelManager, ChainParameters, ChannelManager, ChannelManagerReadArgs, RAACommitmentOrder, PaymentSendFailure, RecipientOnionFields, PaymentId, MIN_CLTV_EXPIRY_DELTA};
use crate::routing::gossip::{P2PGossipSync, NetworkGraph, NetworkUpdate};
}
}
+pub fn handle_bump_htlc_event(node: &Node, count: usize) {
+ let events = node.chain_monitor.chain_monitor.get_and_clear_pending_events();
+ assert_eq!(events.len(), count);
+ for event in events {
+ match event {
+ Event::BumpTransaction(bump_event) => {
+ if let BumpTransactionEvent::HTLCResolution { .. } = &bump_event {}
+ else { panic!(); }
+ node.bump_tx_handler.handle_event(&bump_event);
+ },
+ _ => panic!(),
+ }
+ }
+}
+
pub fn close_channel<'a, 'b, 'c>(outbound_node: &Node<'a, 'b, 'c>, inbound_node: &Node<'a, 'b, 'c>, channel_id: &ChannelId, funding_tx: Transaction, close_inbound_first: bool) -> (msgs::ChannelUpdate, msgs::ChannelUpdate, Transaction) {
let (node_a, broadcaster_a, struct_a) = if close_inbound_first { (&inbound_node.node, &inbound_node.tx_broadcaster, inbound_node) } else { (&outbound_node.node, &outbound_node.tx_broadcaster, outbound_node) };
let (node_b, broadcaster_b, struct_b) = if close_inbound_first { (&outbound_node.node, &outbound_node.tx_broadcaster, outbound_node) } else { (&inbound_node.node, &inbound_node.tx_broadcaster, inbound_node) };
}
// Note that the following only works for CLTV values up to 128
-pub const ACCEPTED_HTLC_SCRIPT_WEIGHT: usize = 137; //Here we have a diff due to HTLC CLTV expiry being < 2^15 in test
+pub const ACCEPTED_HTLC_SCRIPT_WEIGHT: usize = 137; // Here we have a diff due to HTLC CLTV expiry being < 2^15 in test
+pub const ACCEPTED_HTLC_SCRIPT_WEIGHT_ANCHORS: usize = 140; // Here we have a diff due to HTLC CLTV expiry being < 2^15 in test
#[derive(PartialEq)]
pub enum HTLCType { NONE, TIMEOUT, SUCCESS }
// Ensure the channels don't exist anymore.
assert!(nodes[0].node.list_channels().is_empty());
}
+
+fn do_test_funding_and_commitment_tx_confirm_same_block(confirm_remote_commitment: bool) {
+ // Tests that a node will forget the channel (when it only requires 1 confirmation) if the
+ // funding and commitment transaction confirm in the same block.
+ let chanmon_cfgs = create_chanmon_cfgs(2);
+ let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
+ let mut min_depth_1_block_cfg = test_default_channel_config();
+ min_depth_1_block_cfg.channel_handshake_config.minimum_depth = 1;
+ let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[Some(min_depth_1_block_cfg), Some(min_depth_1_block_cfg)]);
+ let mut nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+
+ let funding_tx = create_chan_between_nodes_with_value_init(&nodes[0], &nodes[1], 1_000_000, 0);
+ let chan_id = chain::transaction::OutPoint { txid: funding_tx.txid(), index: 0 }.to_channel_id();
+
+ assert_eq!(nodes[0].node.list_channels().len(), 1);
+ assert_eq!(nodes[1].node.list_channels().len(), 1);
+
+ let (closing_node, other_node) = if confirm_remote_commitment {
+ (&nodes[1], &nodes[0])
+ } else {
+ (&nodes[0], &nodes[1])
+ };
+
+ closing_node.node.force_close_broadcasting_latest_txn(&chan_id, &other_node.node.get_our_node_id()).unwrap();
+ let mut msg_events = closing_node.node.get_and_clear_pending_msg_events();
+ assert_eq!(msg_events.len(), 1);
+ match msg_events.pop().unwrap() {
+ MessageSendEvent::HandleError { action: msgs::ErrorAction::SendErrorMessage { .. }, .. } => {},
+ _ => panic!("Unexpected event"),
+ }
+ check_added_monitors(closing_node, 1);
+ check_closed_event(closing_node, 1, ClosureReason::HolderForceClosed, false, &[other_node.node.get_our_node_id()], 1_000_000);
+
+ let commitment_tx = {
+ let mut txn = closing_node.tx_broadcaster.txn_broadcast();
+ assert_eq!(txn.len(), 1);
+ let commitment_tx = txn.pop().unwrap();
+ check_spends!(commitment_tx, funding_tx);
+ commitment_tx
+ };
+
+ mine_transactions(&nodes[0], &[&funding_tx, &commitment_tx]);
+ mine_transactions(&nodes[1], &[&funding_tx, &commitment_tx]);
+
+ check_closed_broadcast(other_node, 1, true);
+ check_added_monitors(other_node, 1);
+ check_closed_event(other_node, 1, ClosureReason::CommitmentTxConfirmed, false, &[closing_node.node.get_our_node_id()], 1_000_000);
+
+ assert!(nodes[0].node.list_channels().is_empty());
+ assert!(nodes[1].node.list_channels().is_empty());
+}
+
+#[test]
+fn test_funding_and_commitment_tx_confirm_same_block() {
+ do_test_funding_and_commitment_tx_confirm_same_block(false);
+ do_test_funding_and_commitment_tx_confirm_same_block(true);
+}
expect_payment_failed_with_update!(nodes[0], payment_hash, false, update_a.contents.short_channel_id, true);
}
-fn test_spendable_output<'a, 'b, 'c, 'd>(node: &'a Node<'b, 'c, 'd>, spendable_tx: &Transaction) -> Vec<SpendableOutputDescriptor> {
+fn test_spendable_output<'a, 'b, 'c, 'd>(node: &'a Node<'b, 'c, 'd>, spendable_tx: &Transaction, has_anchors_htlc_event: bool) -> Vec<SpendableOutputDescriptor> {
let mut spendable = node.chain_monitor.chain_monitor.get_and_clear_pending_events();
- assert_eq!(spendable.len(), 1);
+ assert_eq!(spendable.len(), if has_anchors_htlc_event { 2 } else { 1 });
+ if has_anchors_htlc_event {
+ if let Event::BumpTransaction(BumpTransactionEvent::HTLCResolution { .. }) = spendable.pop().unwrap() {}
+ else { panic!(); }
+ }
if let Event::SpendableOutputs { outputs, .. } = spendable.pop().unwrap() {
assert_eq!(outputs.len(), 1);
let spend_tx = node.keys_manager.backing.spend_spendable_outputs(&[&outputs[0]], Vec::new(),
expect_payment_failed!(nodes[1], payment_hash_1, false);
}
-#[test]
-fn chanmon_claim_value_coop_close() {
+fn do_chanmon_claim_value_coop_close(anchors: bool) {
// Tests `get_claimable_balances` returns the correct values across a simple cooperative claim.
// Specifically, this tests that the channel non-HTLC balances show up in
// `get_claimable_balances` until the cooperative claims have confirmed and generated a
// `SpendableOutputs` event, and no longer.
let chanmon_cfgs = create_chanmon_cfgs(2);
let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
- let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
+ let mut user_config = test_default_channel_config();
+ if anchors {
+ user_config.channel_handshake_config.negotiate_anchors_zero_fee_htlc_tx = true;
+ user_config.manually_accept_inbound_channels = true;
+ }
+ let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[Some(user_config), Some(user_config)]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
let (_, _, chan_id, funding_tx) =
let chan_feerate = get_feerate!(nodes[0], nodes[1], chan_id) as u64;
let channel_type_features = get_channel_type_features!(nodes[0], nodes[1], chan_id);
+ let commitment_tx_fee = chan_feerate * channel::commitment_tx_base_weight(&channel_type_features) / 1000;
+ let anchor_outputs_value = if anchors { channel::ANCHOR_OUTPUT_VALUE_SATOSHI * 2 } else { 0 };
assert_eq!(vec![Balance::ClaimableOnChannelClose {
- amount_satoshis: 1_000_000 - 1_000 - chan_feerate * channel::commitment_tx_base_weight(&channel_type_features) / 1000
+ amount_satoshis: 1_000_000 - 1_000 - commitment_tx_fee - anchor_outputs_value
}],
nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances());
assert_eq!(vec![Balance::ClaimableOnChannelClose { amount_satoshis: 1_000, }],
assert!(nodes[1].chain_monitor.chain_monitor.get_and_clear_pending_events().is_empty());
assert_eq!(vec![Balance::ClaimableAwaitingConfirmations {
- amount_satoshis: 1_000_000 - 1_000 - chan_feerate * channel::commitment_tx_base_weight(&channel_type_features) / 1000,
+ amount_satoshis: 1_000_000 - 1_000 - commitment_tx_fee - anchor_outputs_value,
confirmation_height: nodes[0].best_block_info().1 + ANTI_REORG_DELAY - 1,
}],
nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances());
assert_eq!(Vec::<Balance>::new(),
nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances());
- let spendable_outputs_a = test_spendable_output(&nodes[0], &shutdown_tx[0]);
+ let spendable_outputs_a = test_spendable_output(&nodes[0], &shutdown_tx[0], false);
assert_eq!(
get_monitor!(nodes[0], chan_id).get_spendable_outputs(&shutdown_tx[0], shutdown_tx_conf_height_a),
spendable_outputs_a
);
- let spendable_outputs_b = test_spendable_output(&nodes[1], &shutdown_tx[0]);
+ let spendable_outputs_b = test_spendable_output(&nodes[1], &shutdown_tx[0], false);
assert_eq!(
get_monitor!(nodes[1], chan_id).get_spendable_outputs(&shutdown_tx[0], shutdown_tx_conf_height_b),
spendable_outputs_b
check_closed_event!(nodes[1], 1, ClosureReason::CooperativeClosure, [nodes[0].node.get_our_node_id()], 1000000);
}
+#[test]
+fn chanmon_claim_value_coop_close() {
+ do_chanmon_claim_value_coop_close(false);
+ do_chanmon_claim_value_coop_close(true);
+}
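The balance arithmetic asserted in the coop-close test above can be sketched as follows. The helper is hypothetical; the 330-sat per-anchor value matches LDK's `ANCHOR_OUTPUT_VALUE_SATOSHI`, and the weight/feerate arguments stand in for the test's `commitment_tx_base_weight`/`chan_feerate`.

```rust
const ANCHOR_OUTPUT_VALUE_SATOSHI: u64 = 330;

// Funder's claimable balance on close: channel value, minus what was pushed to
// the counterparty, minus the commitment tx fee (feerate is in sats per 1000
// weight units), minus the two anchor outputs when anchors were negotiated.
fn funder_claimable_on_close(channel_value_sats: u64, pushed_sats: u64,
        feerate_per_kw: u64, commitment_tx_weight: u64, anchors: bool) -> u64 {
    let commitment_tx_fee = feerate_per_kw * commitment_tx_weight / 1000;
    let anchor_outputs_value = if anchors { 2 * ANCHOR_OUTPUT_VALUE_SATOSHI } else { 0 };
    channel_value_sats - pushed_sats - commitment_tx_fee - anchor_outputs_value
}
```

With anchors negotiated, 660 sats more are locked up in the two anchor outputs, which is exactly the `anchor_outputs_value` term the test adds to each expected `Balance`.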
+
fn sorted_vec<T: Ord>(mut v: Vec<T>) -> Vec<T> {
v.sort_unstable();
v
assert!(b_u64 >= a_u64 - 5);
}
-fn do_test_claim_value_force_close(prev_commitment_tx: bool) {
+fn do_test_claim_value_force_close(anchors: bool, prev_commitment_tx: bool) {
// Tests `get_claimable_balances` with an HTLC across a force-close.
// We build a channel with an HTLC pending, then force close the channel and check that the
// `get_claimable_balances` return value is correct as transactions confirm on-chain.
chanmon_cfgs[1].keys_manager.disable_revocation_policy_check = true;
}
let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
- let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
+ let mut user_config = test_default_channel_config();
+ if anchors {
+ user_config.channel_handshake_config.negotiate_anchors_zero_fee_htlc_tx = true;
+ user_config.manually_accept_inbound_channels = true;
+ }
+ let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[Some(user_config), Some(user_config)]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+ let coinbase_tx = Transaction {
+ version: 2,
+ lock_time: PackedLockTime::ZERO,
+ input: vec![TxIn { ..Default::default() }],
+ output: vec![
+ TxOut {
+ value: Amount::ONE_BTC.to_sat(),
+ script_pubkey: nodes[0].wallet_source.get_change_script().unwrap(),
+ },
+ TxOut {
+ value: Amount::ONE_BTC.to_sat(),
+ script_pubkey: nodes[1].wallet_source.get_change_script().unwrap(),
+ },
+ ],
+ };
+ if anchors {
+ nodes[0].wallet_source.add_utxo(bitcoin::OutPoint { txid: coinbase_tx.txid(), vout: 0 }, coinbase_tx.output[0].value);
+ nodes[1].wallet_source.add_utxo(bitcoin::OutPoint { txid: coinbase_tx.txid(), vout: 1 }, coinbase_tx.output[1].value);
+ }
+
let (_, _, chan_id, funding_tx) =
create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 1_000_000, 1_000_000);
let funding_outpoint = OutPoint { txid: funding_tx.txid(), index: 0 };
let htlc_cltv_timeout = nodes[0].best_block_info().1 + TEST_FINAL_CLTV + 1; // Note ChannelManager adds one to CLTV timeouts for safety
- let chan_feerate = get_feerate!(nodes[0], nodes[1], chan_id) as u64;
+ let chan_feerate = get_feerate!(nodes[0], nodes[1], chan_id);
let channel_type_features = get_channel_type_features!(nodes[0], nodes[1], chan_id);
let remote_txn = get_local_commitment_txn!(nodes[1], chan_id);
// Before B receives the payment preimage, it only suggests the push_msat value of 1_000 sats
// as claimable. A lists both its to-self balance and the (possibly-claimable) HTLCs.
+ let commitment_tx_fee = chan_feerate as u64 *
+ (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000;
+ let anchor_outputs_value = if anchors { 2 * channel::ANCHOR_OUTPUT_VALUE_SATOSHI } else { 0 };
assert_eq!(sorted_vec(vec![Balance::ClaimableOnChannelClose {
- amount_satoshis: 1_000_000 - 3_000 - 4_000 - 1_000 - 3 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000,
+ amount_satoshis: 1_000_000 - 3_000 - 4_000 - 1_000 - 3 - commitment_tx_fee - anchor_outputs_value,
}, sent_htlc_balance.clone(), sent_htlc_timeout_balance.clone()]),
sorted_vec(nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
assert_eq!(sorted_vec(vec![Balance::ClaimableOnChannelClose {
// Once B has received the payment preimage, it includes the value of the HTLC in its
// "claimable if you were to close the channel" balance.
+ let commitment_tx_fee = chan_feerate as u64 *
+ (channel::commitment_tx_base_weight(&channel_type_features) +
+ if prev_commitment_tx { 1 } else { 2 } * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000;
let mut a_expected_balances = vec![Balance::ClaimableOnChannelClose {
amount_satoshis: 1_000_000 - // Channel funding value in satoshis
4_000 - // The to-be-failed HTLC value in satoshis
3_000 - // The claimed HTLC value in satoshis
1_000 - // The push_msat value in satoshis
3 - // The dust HTLC value in satoshis
- // The commitment transaction fee with two HTLC outputs:
- chan_feerate * (channel::commitment_tx_base_weight(&channel_type_features) +
- if prev_commitment_tx { 1 } else { 2 } *
- channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000,
+ commitment_tx_fee - // The commitment transaction fee with one or two HTLC outputs
+ anchor_outputs_value, // The anchor outputs value in satoshis
}, sent_htlc_timeout_balance.clone()];
if !prev_commitment_tx {
a_expected_balances.push(sent_htlc_balance.clone());
mine_transaction(&nodes[0], &remote_txn[0]);
mine_transaction(&nodes[1], &remote_txn[0]);
- let b_broadcast_txn = nodes[1].tx_broadcaster.txn_broadcasted.lock().unwrap().split_off(0);
+ if anchors {
+ let mut events = nodes[1].chain_monitor.chain_monitor.get_and_clear_pending_events();
+ assert_eq!(events.len(), 1);
+ match events.pop().unwrap() {
+ Event::BumpTransaction(bump_event) => {
+ let mut first_htlc_event = bump_event.clone();
+ if let BumpTransactionEvent::HTLCResolution { ref mut htlc_descriptors, .. } = &mut first_htlc_event {
+ htlc_descriptors.remove(1);
+ } else {
+ panic!("Unexpected event");
+ }
+ let mut second_htlc_event = bump_event;
+ if let BumpTransactionEvent::HTLCResolution { ref mut htlc_descriptors, .. } = &mut second_htlc_event {
+ htlc_descriptors.remove(0);
+ } else {
+ panic!("Unexpected event");
+ }
+ nodes[1].bump_tx_handler.handle_event(&first_htlc_event);
+ nodes[1].bump_tx_handler.handle_event(&second_htlc_event);
+ },
+ _ => panic!("Unexpected event"),
+ }
+ }
+
+ let b_broadcast_txn = nodes[1].tx_broadcaster.txn_broadcast();
assert_eq!(b_broadcast_txn.len(), 2);
// b_broadcast_txn should spend the HTLCs output of the commitment tx for 3_000 and 4_000 sats
- check_spends!(b_broadcast_txn[0], remote_txn[0]);
- check_spends!(b_broadcast_txn[1], remote_txn[0]);
- assert_eq!(b_broadcast_txn[0].input.len(), 1);
- assert_eq!(b_broadcast_txn[1].input.len(), 1);
+ check_spends!(b_broadcast_txn[0], remote_txn[0], coinbase_tx);
+ check_spends!(b_broadcast_txn[1], remote_txn[0], coinbase_tx);
+ assert_eq!(b_broadcast_txn[0].input.len(), if anchors { 2 } else { 1 });
+ assert_eq!(b_broadcast_txn[1].input.len(), if anchors { 2 } else { 1 });
assert_eq!(remote_txn[0].output[b_broadcast_txn[0].input[0].previous_output.vout as usize].value, 3_000);
assert_eq!(remote_txn[0].output[b_broadcast_txn[1].input[0].previous_output.vout as usize].value, 4_000);
// other Balance variants, as close has already happened.
assert!(nodes[0].chain_monitor.chain_monitor.get_and_clear_pending_events().is_empty());
assert!(nodes[1].chain_monitor.chain_monitor.get_and_clear_pending_events().is_empty());
-
+ let commitment_tx_fee = chan_feerate as u64 *
+ (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000;
assert_eq!(sorted_vec(vec![Balance::ClaimableAwaitingConfirmations {
- amount_satoshis: 1_000_000 - 3_000 - 4_000 - 1_000 - 3 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000,
+ amount_satoshis: 1_000_000 - 3_000 - 4_000 - 1_000 - 3 - commitment_tx_fee - anchor_outputs_value,
confirmation_height: nodes[0].best_block_info().1 + ANTI_REORG_DELAY - 1,
}, sent_htlc_balance.clone(), sent_htlc_timeout_balance.clone()]),
sorted_vec(nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
}, received_htlc_claiming_balance.clone(), received_htlc_timeout_claiming_balance.clone()]),
sorted_vec(nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
- test_spendable_output(&nodes[0], &remote_txn[0]);
+ test_spendable_output(&nodes[0], &remote_txn[0], false);
assert!(nodes[1].chain_monitor.chain_monitor.get_and_clear_pending_events().is_empty());
// After broadcasting the HTLC claim transaction, node A will still consider the HTLC
nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances());
expect_payment_failed!(nodes[0], timeout_payment_hash, false);
- test_spendable_output(&nodes[0], &a_broadcast_txn[1]);
+ test_spendable_output(&nodes[0], &a_broadcast_txn[1], false);
// Node B will no longer consider the HTLC "contentious" after the HTLC claim transaction
// confirms, and consider it simply "awaiting confirmations". Note that it has to wait for the
// After reaching the commitment output CSV, we'll get a SpendableOutputs event for it and have
// only the HTLCs claimable on node B.
connect_blocks(&nodes[1], node_b_commitment_claimable - nodes[1].best_block_info().1);
- test_spendable_output(&nodes[1], &remote_txn[0]);
+ test_spendable_output(&nodes[1], &remote_txn[0], anchors);
assert_eq!(sorted_vec(vec![Balance::ClaimableAwaitingConfirmations {
amount_satoshis: 3_000,
// After reaching the claimed HTLC output CSV, we'll get a SpendableOutputs event for it and
// have only one HTLC output left spendable.
connect_blocks(&nodes[1], node_b_htlc_claimable - nodes[1].best_block_info().1);
- test_spendable_output(&nodes[1], &b_broadcast_txn[0]);
+ test_spendable_output(&nodes[1], &b_broadcast_txn[0], anchors);
assert_eq!(vec![received_htlc_timeout_claiming_balance.clone()],
nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances());
#[test]
fn test_claim_value_force_close() {
- do_test_claim_value_force_close(true);
- do_test_claim_value_force_close(false);
+ do_test_claim_value_force_close(false, true);
+ do_test_claim_value_force_close(false, false);
+ do_test_claim_value_force_close(true, true);
+ do_test_claim_value_force_close(true, false);
}
-#[test]
-fn test_balances_on_local_commitment_htlcs() {
+fn do_test_balances_on_local_commitment_htlcs(anchors: bool) {
// Previously, when handling the broadcast of a local commitment transaction (with associated
// CSV delays prior to spendability), we incorrectly handled the CSV delays on HTLC
// transactions. This caused us to miss spendable outputs for HTLCs which were awaiting a CSV
// claim by our counterparty.
let chanmon_cfgs = create_chanmon_cfgs(2);
let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
- let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
+ let mut user_config = test_default_channel_config();
+ if anchors {
+ user_config.channel_handshake_config.negotiate_anchors_zero_fee_htlc_tx = true;
+ user_config.manually_accept_inbound_channels = true;
+ }
+ let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[Some(user_config), Some(user_config)]);
let mut nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+ let coinbase_tx = Transaction {
+ version: 2,
+ lock_time: PackedLockTime::ZERO,
+ input: vec![TxIn { ..Default::default() }],
+ output: vec![
+ TxOut {
+ value: Amount::ONE_BTC.to_sat(),
+ script_pubkey: nodes[0].wallet_source.get_change_script().unwrap(),
+ },
+ TxOut {
+ value: Amount::ONE_BTC.to_sat(),
+ script_pubkey: nodes[1].wallet_source.get_change_script().unwrap(),
+ },
+ ],
+ };
+ if anchors {
+ nodes[0].wallet_source.add_utxo(bitcoin::OutPoint { txid: coinbase_tx.txid(), vout: 0 }, coinbase_tx.output[0].value);
+ nodes[1].wallet_source.add_utxo(bitcoin::OutPoint { txid: coinbase_tx.txid(), vout: 1 }, coinbase_tx.output[1].value);
+ }
+
// Create a single channel with two pending HTLCs from nodes[0] to nodes[1], one which nodes[1]
// knows the preimage for, one which it does not.
let (_, _, chan_id, funding_tx) = create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 1_000_000, 0);
let chan_feerate = get_feerate!(nodes[0], nodes[1], chan_id) as u64;
let channel_type_features = get_channel_type_features!(nodes[0], nodes[1], chan_id);
- // Get nodes[0]'s commitment transaction and HTLC-Timeout transactions
- let as_txn = get_local_commitment_txn!(nodes[0], chan_id);
- assert_eq!(as_txn.len(), 3);
- check_spends!(as_txn[1], as_txn[0]);
- check_spends!(as_txn[2], as_txn[0]);
- check_spends!(as_txn[0], funding_tx);
-
// First confirm the commitment transaction on nodes[0], which should leave us with three
// claimable balances.
let node_a_commitment_claimable = nodes[0].best_block_info().1 + BREAKDOWN_TIMEOUT as u32;
- mine_transaction(&nodes[0], &as_txn[0]);
+ nodes[0].node.force_close_broadcasting_latest_txn(&chan_id, &nodes[1].node.get_our_node_id()).unwrap();
check_added_monitors!(nodes[0], 1);
check_closed_broadcast!(nodes[0], true);
- check_closed_event!(nodes[0], 1, ClosureReason::CommitmentTxConfirmed, [nodes[1].node.get_our_node_id()], 1000000);
+ check_closed_event!(nodes[0], 1, ClosureReason::HolderForceClosed, [nodes[1].node.get_our_node_id()], 1000000);
+ let commitment_tx = {
+ let mut txn = nodes[0].tx_broadcaster.unique_txn_broadcast();
+ assert_eq!(txn.len(), 1);
+ let commitment_tx = txn.pop().unwrap();
+ check_spends!(commitment_tx, funding_tx);
+ commitment_tx
+ };
+ let commitment_tx_conf_height_a = block_from_scid(&mine_transaction(&nodes[0], &commitment_tx));
+ if anchors && nodes[0].connect_style.borrow().updates_best_block_first() {
+ let mut txn = nodes[0].tx_broadcaster.txn_broadcast();
+ assert_eq!(txn.len(), 1);
+ assert_eq!(txn[0].txid(), commitment_tx.txid());
+ }
let htlc_balance_known_preimage = Balance::MaybeTimeoutClaimableHTLC {
amount_satoshis: 10_000,
payment_hash: payment_hash_2,
};
+ let commitment_tx_fee = chan_feerate *
+ (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000;
+ let anchor_outputs_value = if anchors { 2 * channel::ANCHOR_OUTPUT_VALUE_SATOSHI } else { 0 };
assert_eq!(sorted_vec(vec![Balance::ClaimableAwaitingConfirmations {
- amount_satoshis: 1_000_000 - 10_000 - 20_000 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000,
+ amount_satoshis: 1_000_000 - 10_000 - 20_000 - commitment_tx_fee - anchor_outputs_value,
confirmation_height: node_a_commitment_claimable,
}, htlc_balance_known_preimage.clone(), htlc_balance_unknown_preimage.clone()]),
sorted_vec(nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
// Get nodes[1]'s HTLC claim tx for the second HTLC
- mine_transaction(&nodes[1], &as_txn[0]);
+ mine_transaction(&nodes[1], &commitment_tx);
check_added_monitors!(nodes[1], 1);
check_closed_broadcast!(nodes[1], true);
check_closed_event!(nodes[1], 1, ClosureReason::CommitmentTxConfirmed, [nodes[0].node.get_our_node_id()], 1000000);
let bs_htlc_claim_txn = nodes[1].tx_broadcaster.txn_broadcasted.lock().unwrap().split_off(0);
assert_eq!(bs_htlc_claim_txn.len(), 1);
- check_spends!(bs_htlc_claim_txn[0], as_txn[0]);
+ check_spends!(bs_htlc_claim_txn[0], commitment_tx);
// Connect blocks until the HTLCs expire, allowing us to (validly) broadcast the HTLC-Timeout
// transaction.
- connect_blocks(&nodes[0], TEST_FINAL_CLTV - 1);
+ connect_blocks(&nodes[0], TEST_FINAL_CLTV);
assert_eq!(sorted_vec(vec![Balance::ClaimableAwaitingConfirmations {
- amount_satoshis: 1_000_000 - 10_000 - 20_000 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000,
+ amount_satoshis: 1_000_000 - 10_000 - 20_000 - commitment_tx_fee - anchor_outputs_value,
confirmation_height: node_a_commitment_claimable,
}, htlc_balance_known_preimage.clone(), htlc_balance_unknown_preimage.clone()]),
sorted_vec(nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
- assert_eq!(as_txn[1].lock_time.0, nodes[0].best_block_info().1 + 1); // as_txn[1] can be included in the next block
+ if anchors {
+ handle_bump_htlc_event(&nodes[0], 2);
+ }
+ let timeout_htlc_txn = nodes[0].tx_broadcaster.unique_txn_broadcast();
+ assert_eq!(timeout_htlc_txn.len(), 2);
+ check_spends!(timeout_htlc_txn[0], commitment_tx, coinbase_tx);
+ check_spends!(timeout_htlc_txn[1], commitment_tx, coinbase_tx);
// Now confirm nodes[0]'s HTLC-Timeout transaction, which changes the claimable balance to an
// "awaiting confirmations" one.
let node_a_htlc_claimable = nodes[0].best_block_info().1 + BREAKDOWN_TIMEOUT as u32;
- mine_transaction(&nodes[0], &as_txn[1]);
+ mine_transaction(&nodes[0], &timeout_htlc_txn[0]);
// Note that prior to the fix in the commit which introduced this test, this (and the next
// balance) check failed. With this check removed, the code panicked in the `connect_blocks`
// call, as described, two hunks down.
assert_eq!(sorted_vec(vec![Balance::ClaimableAwaitingConfirmations {
- amount_satoshis: 1_000_000 - 10_000 - 20_000 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000,
+ amount_satoshis: 1_000_000 - 10_000 - 20_000 - commitment_tx_fee - anchor_outputs_value,
confirmation_height: node_a_commitment_claimable,
}, Balance::ClaimableAwaitingConfirmations {
amount_satoshis: 10_000,
mine_transaction(&nodes[0], &bs_htlc_claim_txn[0]);
expect_payment_sent(&nodes[0], payment_preimage_2, None, true, false);
assert_eq!(sorted_vec(vec![Balance::ClaimableAwaitingConfirmations {
- amount_satoshis: 1_000_000 - 10_000 - 20_000 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000,
+ amount_satoshis: 1_000_000 - 10_000 - 20_000 - commitment_tx_fee - anchor_outputs_value,
confirmation_height: node_a_commitment_claimable,
}, Balance::ClaimableAwaitingConfirmations {
amount_satoshis: 10_000,
expect_payment_failed!(nodes[0], payment_hash, false);
assert_eq!(sorted_vec(vec![Balance::ClaimableAwaitingConfirmations {
- amount_satoshis: 1_000_000 - 10_000 - 20_000 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000,
+ amount_satoshis: 1_000_000 - 10_000 - 20_000 - commitment_tx_fee - anchor_outputs_value,
confirmation_height: node_a_commitment_claimable,
}, Balance::ClaimableAwaitingConfirmations {
amount_satoshis: 10_000,
// Connect blocks until the commitment transaction's CSV expires, providing us the relevant
// `SpendableOutputs` event and removing the claimable balance entry.
- connect_blocks(&nodes[0], node_a_commitment_claimable - nodes[0].best_block_info().1);
+ connect_blocks(&nodes[0], node_a_commitment_claimable - nodes[0].best_block_info().1 - 1);
+ assert!(get_monitor!(nodes[0], chan_id)
+ .get_spendable_outputs(&commitment_tx, commitment_tx_conf_height_a).is_empty());
+ connect_blocks(&nodes[0], 1);
assert_eq!(vec![Balance::ClaimableAwaitingConfirmations {
amount_satoshis: 10_000,
confirmation_height: node_a_htlc_claimable,
}],
nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances());
- test_spendable_output(&nodes[0], &as_txn[0]);
+ let to_self_spendable_output = test_spendable_output(&nodes[0], &commitment_tx, false);
+ assert_eq!(
+ get_monitor!(nodes[0], chan_id).get_spendable_outputs(&commitment_tx, commitment_tx_conf_height_a),
+ to_self_spendable_output
+ );
// Connect blocks until the HTLC-Timeout's CSV expires, providing us the relevant
// `SpendableOutputs` event and removing the claimable balance entry.
connect_blocks(&nodes[0], node_a_htlc_claimable - nodes[0].best_block_info().1);
assert!(nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances().is_empty());
- test_spendable_output(&nodes[0], &as_txn[1]);
+ test_spendable_output(&nodes[0], &timeout_htlc_txn[0], false);
// Ensure that even if we connect more blocks, potentially replaying the entire chain if we're
// using `ConnectStyle::HighlyRedundantTransactionsFirstSkippingBlocks`, we don't get new
assert!(nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances().is_empty());
}
+#[test]
+fn test_balances_on_local_commitment_htlcs() {
+ do_test_balances_on_local_commitment_htlcs(false);
+ do_test_balances_on_local_commitment_htlcs(true);
+}
+
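The hunks above replace inline fee arithmetic with named `commitment_tx_fee` and `anchor_outputs_value` values. A standalone sketch of that balance arithmetic, assuming hypothetical feerate and weight inputs (only the 330-sat anchor output value is a real protocol constant, per BOLT 3; everything else here is illustrative, not LDK's actual code):

```rust
// Illustrative sketch of the balance arithmetic behind the assertions above.
// ANCHOR_OUTPUT_VALUE_SATOSHI (330 sats per anchor) comes from BOLT 3; the
// feerate and weight arguments below are made-up placeholders.
const ANCHOR_OUTPUT_VALUE_SATOSHI: u64 = 330;

fn expected_to_self_balance(
    channel_value_sat: u64, counterparty_value_sat: u64, pending_htlc_sat: u64,
    feerate_per_kw: u64, commitment_tx_weight: u64, anchors: bool,
) -> u64 {
    // Commitment fee is weight-based: feerate (per 1000 weight units) * weight / 1000.
    let commitment_tx_fee = feerate_per_kw * commitment_tx_weight / 1000;
    // Anchor channels carry two 330-sat anchor outputs, paid for by the funder.
    let anchor_outputs_value = if anchors { ANCHOR_OUTPUT_VALUE_SATOSHI * 2 } else { 0 };
    channel_value_sat - counterparty_value_sat - pending_htlc_sat
        - commitment_tx_fee - anchor_outputs_value
}

fn main() {
    // Mirrors `1_000_000 - 10_000 - 20_000 - commitment_tx_fee - anchor_outputs_value`.
    let without_anchors = expected_to_self_balance(1_000_000, 10_000, 20_000, 253, 1_000, false);
    let with_anchors = expected_to_self_balance(1_000_000, 10_000, 20_000, 253, 1_000, true);
    // Enabling anchors costs the funder exactly the two anchor outputs.
    assert_eq!(without_anchors - with_anchors, 2 * ANCHOR_OUTPUT_VALUE_SATOSHI);
}
```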
#[test]
fn test_no_preimage_inbound_htlc_balances() {
// Tests that MaybePreimageClaimableHTLC are generated for inbound HTLCs for which we do not
// For node B, we'll get the non-HTLC funds claimable after ANTI_REORG_DELAY confirmations
connect_blocks(&nodes[1], ANTI_REORG_DELAY - 1);
- test_spendable_output(&nodes[1], &as_txn[0]);
+ test_spendable_output(&nodes[1], &as_txn[0], false);
bs_pre_spend_claims.retain(|e| if let Balance::ClaimableAwaitingConfirmations { .. } = e { false } else { true });
// The next few blocks for B look the same as for A, though for the opposite HTLC
confirmation_height: core::cmp::max(as_timeout_claimable_height, htlc_cltv_timeout),
}],
nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances());
- test_spendable_output(&nodes[0], &as_txn[0]);
+ test_spendable_output(&nodes[0], &as_txn[0], false);
connect_blocks(&nodes[0], as_timeout_claimable_height - nodes[0].best_block_info().1);
assert!(nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances().is_empty());
- test_spendable_output(&nodes[0], &as_htlc_timeout_claim[0]);
+ test_spendable_output(&nodes[0], &as_htlc_timeout_claim[0], false);
// The process for B should be completely identical as well, noting that the non-HTLC-balance
// was already claimed.
assert_eq!(vec![b_received_htlc_balance.clone()],
nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances());
- test_spendable_output(&nodes[1], &bs_htlc_timeout_claim[0]);
+ test_spendable_output(&nodes[1], &bs_htlc_timeout_claim[0], false);
connect_blocks(&nodes[1], 1);
assert!(nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances().is_empty());
v
}
-fn do_test_revoked_counterparty_commitment_balances(confirm_htlc_spend_first: bool) {
+fn do_test_revoked_counterparty_commitment_balances(anchors: bool, confirm_htlc_spend_first: bool) {
// Tests `get_claimable_balances` for revoked counterparty commitment transactions.
let mut chanmon_cfgs = create_chanmon_cfgs(2);
// We broadcast a second-to-latest commitment transaction, without providing the revocation
// transaction which, from the point of view of our keys_manager, is revoked.
chanmon_cfgs[1].keys_manager.disable_revocation_policy_check = true;
let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
- let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
+ let mut user_config = test_default_channel_config();
+ if anchors {
+ user_config.channel_handshake_config.negotiate_anchors_zero_fee_htlc_tx = true;
+ user_config.manually_accept_inbound_channels = true;
+ }
+ let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[Some(user_config), Some(user_config)]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
let (_, _, chan_id, funding_tx) =
// The following constants were determined experimentally
const BS_TO_SELF_CLAIM_EXP_WEIGHT: usize = 483;
- const OUTBOUND_HTLC_CLAIM_EXP_WEIGHT: usize = 571;
- const INBOUND_HTLC_CLAIM_EXP_WEIGHT: usize = 578;
+ let outbound_htlc_claim_exp_weight: usize = if anchors { 574 } else { 571 };
+ let inbound_htlc_claim_exp_weight: usize = if anchors { 582 } else { 578 };
	// Check that the weight is close to the expected weight. Note that signature sizes vary
	// somewhat, so it may not always be exact.
- fuzzy_assert_eq(claim_txn[0].weight(), OUTBOUND_HTLC_CLAIM_EXP_WEIGHT);
- fuzzy_assert_eq(claim_txn[1].weight(), INBOUND_HTLC_CLAIM_EXP_WEIGHT);
- fuzzy_assert_eq(claim_txn[2].weight(), INBOUND_HTLC_CLAIM_EXP_WEIGHT);
+ fuzzy_assert_eq(claim_txn[0].weight(), outbound_htlc_claim_exp_weight);
+ fuzzy_assert_eq(claim_txn[1].weight(), inbound_htlc_claim_exp_weight);
+ fuzzy_assert_eq(claim_txn[2].weight(), inbound_htlc_claim_exp_weight);
fuzzy_assert_eq(claim_txn[3].weight(), BS_TO_SELF_CLAIM_EXP_WEIGHT);
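The weight checks above use a fuzzy comparison because DER-encoded ECDSA signatures vary in length by a byte or two, so exact transaction weights cannot be asserted. A minimal sketch of such a helper, with an illustrative tolerance that is not necessarily LDK's actual definition:

```rust
// Minimal sketch of a fuzzy weight comparison. Signature sizes vary slightly
// between signings, so transaction weights can differ by a few weight units.
// The +/-2 tolerance here is illustrative only.
fn fuzzy_assert_eq(actual: usize, expected: usize) {
    let tolerance = 2;
    assert!(
        actual + tolerance >= expected && actual <= expected + tolerance,
        "weight {} not within {} of expected {}", actual, tolerance, expected
    );
}

fn main() {
    fuzzy_assert_eq(571, 571); // exact match passes
    fuzzy_assert_eq(572, 571); // one byte of signature variance also passes
}
```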
+ let commitment_tx_fee = chan_feerate *
+ (channel::commitment_tx_base_weight(&channel_type_features) + 3 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000;
+ let anchor_outputs_value = if anchors { channel::ANCHOR_OUTPUT_VALUE_SATOSHI * 2 } else { 0 };
+ let inbound_htlc_claim_fee = chan_feerate * inbound_htlc_claim_exp_weight as u64 / 1000;
+ let outbound_htlc_claim_fee = chan_feerate * outbound_htlc_claim_exp_weight as u64 / 1000;
+ let to_self_claim_fee = chan_feerate * claim_txn[3].weight() as u64 / 1000;
+
// The expected balance for the next three checks, with the largest-HTLC and to_self output
// claim balances separated out.
let expected_balance = vec![Balance::ClaimableAwaitingConfirmations {
}];
let to_self_unclaimed_balance = Balance::CounterpartyRevokedOutputClaimable {
- amount_satoshis: 1_000_000 - 100_000 - 3_000 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 3 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000,
+ amount_satoshis: 1_000_000 - 100_000 - 3_000 - commitment_tx_fee - anchor_outputs_value,
};
let to_self_claimed_avail_height;
let largest_htlc_unclaimed_balance = Balance::CounterpartyRevokedOutputClaimable {
}
let largest_htlc_claimed_balance = Balance::ClaimableAwaitingConfirmations {
- amount_satoshis: 5_000 - chan_feerate * INBOUND_HTLC_CLAIM_EXP_WEIGHT as u64 / 1000,
+ amount_satoshis: 5_000 - inbound_htlc_claim_fee,
confirmation_height: largest_htlc_claimed_avail_height,
};
let to_self_claimed_balance = Balance::ClaimableAwaitingConfirmations {
- amount_satoshis: 1_000_000 - 100_000 - 3_000 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 3 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000
- - chan_feerate * claim_txn[3].weight() as u64 / 1000,
+ amount_satoshis: 1_000_000 - 100_000 - 3_000 - commitment_tx_fee - anchor_outputs_value - to_self_claim_fee,
confirmation_height: to_self_claimed_avail_height,
};
amount_satoshis: 100_000 - 5_000 - 4_000 - 3,
confirmation_height: nodes[1].best_block_info().1 + 1,
}, Balance::ClaimableAwaitingConfirmations {
- amount_satoshis: 1_000_000 - 100_000 - 3_000 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 3 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000
- - chan_feerate * claim_txn[3].weight() as u64 / 1000,
+ amount_satoshis: 1_000_000 - 100_000 - 3_000 - commitment_tx_fee - anchor_outputs_value - to_self_claim_fee,
confirmation_height: to_self_claimed_avail_height,
}, Balance::ClaimableAwaitingConfirmations {
- amount_satoshis: 3_000 - chan_feerate * OUTBOUND_HTLC_CLAIM_EXP_WEIGHT as u64 / 1000,
+ amount_satoshis: 3_000 - outbound_htlc_claim_fee,
confirmation_height: nodes[1].best_block_info().1 + 4,
}, Balance::ClaimableAwaitingConfirmations {
- amount_satoshis: 4_000 - chan_feerate * INBOUND_HTLC_CLAIM_EXP_WEIGHT as u64 / 1000,
+ amount_satoshis: 4_000 - inbound_htlc_claim_fee,
confirmation_height: nodes[1].best_block_info().1 + 5,
}, Balance::ClaimableAwaitingConfirmations {
- amount_satoshis: 5_000 - chan_feerate * INBOUND_HTLC_CLAIM_EXP_WEIGHT as u64 / 1000,
+ amount_satoshis: 5_000 - inbound_htlc_claim_fee,
confirmation_height: largest_htlc_claimed_avail_height,
}]),
sorted_vec(nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
connect_blocks(&nodes[1], 1);
- test_spendable_output(&nodes[1], &as_revoked_txn[0]);
+ test_spendable_output(&nodes[1], &as_revoked_txn[0], false);
let mut payment_failed_events = nodes[1].node.get_and_clear_pending_events();
expect_payment_failed_conditions_event(payment_failed_events[..2].to_vec(),
dust_payment_hash, false, PaymentFailedConditions::new());
connect_blocks(&nodes[1], 1);
- test_spendable_output(&nodes[1], &claim_txn[if confirm_htlc_spend_first { 2 } else { 3 }]);
+ test_spendable_output(&nodes[1], &claim_txn[if confirm_htlc_spend_first { 2 } else { 3 }], false);
connect_blocks(&nodes[1], 1);
- test_spendable_output(&nodes[1], &claim_txn[if confirm_htlc_spend_first { 3 } else { 2 }]);
+ test_spendable_output(&nodes[1], &claim_txn[if confirm_htlc_spend_first { 3 } else { 2 }], false);
expect_payment_failed!(nodes[1], live_payment_hash, false);
connect_blocks(&nodes[1], 1);
- test_spendable_output(&nodes[1], &claim_txn[0]);
+ test_spendable_output(&nodes[1], &claim_txn[0], false);
connect_blocks(&nodes[1], 1);
- test_spendable_output(&nodes[1], &claim_txn[1]);
+ test_spendable_output(&nodes[1], &claim_txn[1], false);
expect_payment_failed!(nodes[1], timeout_payment_hash, false);
assert_eq!(nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances(), Vec::new());
#[test]
fn test_revoked_counterparty_commitment_balances() {
- do_test_revoked_counterparty_commitment_balances(true);
- do_test_revoked_counterparty_commitment_balances(false);
+ do_test_revoked_counterparty_commitment_balances(false, true);
+ do_test_revoked_counterparty_commitment_balances(false, false);
+ do_test_revoked_counterparty_commitment_balances(true, true);
+ do_test_revoked_counterparty_commitment_balances(true, false);
}
-#[test]
-fn test_revoked_counterparty_htlc_tx_balances() {
+fn do_test_revoked_counterparty_htlc_tx_balances(anchors: bool) {
// Tests `get_claimable_balances` for revocation spends of HTLC transactions.
let mut chanmon_cfgs = create_chanmon_cfgs(2);
chanmon_cfgs[1].keys_manager.disable_revocation_policy_check = true;
let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
- let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
+ let mut user_config = test_default_channel_config();
+ if anchors {
+ user_config.channel_handshake_config.negotiate_anchors_zero_fee_htlc_tx = true;
+ user_config.manually_accept_inbound_channels = true;
+ }
+ let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[Some(user_config), Some(user_config)]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+ let coinbase_tx = Transaction {
+ version: 2,
+ lock_time: PackedLockTime::ZERO,
+ input: vec![TxIn { ..Default::default() }],
+ output: vec![
+ TxOut {
+ value: Amount::ONE_BTC.to_sat(),
+ script_pubkey: nodes[0].wallet_source.get_change_script().unwrap(),
+ },
+ TxOut {
+ value: Amount::ONE_BTC.to_sat(),
+ script_pubkey: nodes[1].wallet_source.get_change_script().unwrap(),
+ },
+ ],
+ };
+ if anchors {
+ nodes[0].wallet_source.add_utxo(bitcoin::OutPoint { txid: coinbase_tx.txid(), vout: 0 }, coinbase_tx.output[0].value);
+ nodes[1].wallet_source.add_utxo(bitcoin::OutPoint { txid: coinbase_tx.txid(), vout: 1 }, coinbase_tx.output[1].value);
+ }
+
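The coinbase UTXOs registered above give each node spendable wallet funds for fee-bumping: with zero-fee-HTLC anchor channels, HTLC transactions carry no fee of their own and are brought up to a target feerate by attaching wallet inputs, CPFP-style. A sketch of the package-feerate arithmetic involved (the function and numbers are illustrative, not LDK's implementation):

```rust
// Illustrative package-feerate calculation for CPFP-style fee bumping, as
// relied on when anchor channels spend wallet UTXOs to pay for otherwise
// zero-fee HTLC transactions. Miners evaluate parent + child together.
fn package_feerate_per_kw(parent_fee: u64, parent_weight: u64,
                          child_fee: u64, child_weight: u64) -> u64 {
    (parent_fee + child_fee) * 1000 / (parent_weight + child_weight)
}

fn main() {
    // A zero-fee HTLC tx (800 WU) bumped by a child paying 2_000 sats over
    // 1_200 WU yields an effective package feerate of 1_000 sat/kWU.
    assert_eq!(package_feerate_per_kw(0, 800, 2_000, 1_200), 1_000);
}
```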
// Create some initial channels
let (_, _, chan_id, funding_tx) =
create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 1_000_000, 11_000_000);
let revoked_local_txn = get_local_commitment_txn!(nodes[1], chan_id);
assert_eq!(revoked_local_txn[0].input.len(), 1);
assert_eq!(revoked_local_txn[0].input[0].previous_output.txid, funding_tx.txid());
+ if anchors {
+ assert_eq!(revoked_local_txn[0].output[4].value, 10000); // to_self output
+ } else {
+ assert_eq!(revoked_local_txn[0].output[2].value, 10000); // to_self output
+ }
- // The to-be-revoked commitment tx should have two HTLCs and an output for both sides
- assert_eq!(revoked_local_txn[0].output.len(), 4);
+ // The to-be-revoked commitment tx should have two HTLCs, an output for each side, and an
+ // anchor output for each side if enabled.
+ assert_eq!(revoked_local_txn[0].output.len(), if anchors { 6 } else { 4 });
claim_payment(&nodes[0], &[&nodes[1]], payment_preimage);
check_closed_broadcast!(nodes[1], true);
check_added_monitors!(nodes[1], 1);
check_closed_event!(nodes[1], 1, ClosureReason::CommitmentTxConfirmed, [nodes[0].node.get_our_node_id()], 1000000);
+ if anchors {
+ handle_bump_htlc_event(&nodes[1], 1);
+ }
let revoked_htlc_success = {
let mut txn = nodes[1].tx_broadcaster.txn_broadcast();
assert_eq!(txn.len(), 1);
- assert_eq!(txn[0].input.len(), 1);
- assert_eq!(txn[0].input[0].witness.last().unwrap().len(), ACCEPTED_HTLC_SCRIPT_WEIGHT);
- check_spends!(txn[0], revoked_local_txn[0]);
+ assert_eq!(txn[0].input.len(), if anchors { 2 } else { 1 });
+ assert_eq!(txn[0].input[0].previous_output.vout, if anchors { 3 } else { 1 });
+ assert_eq!(txn[0].input[0].witness.last().unwrap().len(),
+ if anchors { ACCEPTED_HTLC_SCRIPT_WEIGHT_ANCHORS } else { ACCEPTED_HTLC_SCRIPT_WEIGHT });
+ check_spends!(txn[0], revoked_local_txn[0], coinbase_tx);
txn.pop().unwrap()
};
+ let revoked_htlc_success_fee = chan_feerate * revoked_htlc_success.weight() as u64 / 1000;
connect_blocks(&nodes[1], TEST_FINAL_CLTV);
+ if anchors {
+ handle_bump_htlc_event(&nodes[1], 2);
+ }
let revoked_htlc_timeout = {
let mut txn = nodes[1].tx_broadcaster.unique_txn_broadcast();
assert_eq!(txn.len(), 2);
txn.remove(0)
}
};
- check_spends!(revoked_htlc_timeout, revoked_local_txn[0]);
+ check_spends!(revoked_htlc_timeout, revoked_local_txn[0], coinbase_tx);
assert_ne!(revoked_htlc_success.input[0].previous_output, revoked_htlc_timeout.input[0].previous_output);
assert_eq!(revoked_htlc_success.lock_time.0, 0);
assert_ne!(revoked_htlc_timeout.lock_time.0, 0);
check_closed_event!(nodes[0], 1, ClosureReason::CommitmentTxConfirmed, [nodes[1].node.get_our_node_id()], 1000000);
let to_remote_conf_height = nodes[0].best_block_info().1 + ANTI_REORG_DELAY - 1;
- let as_commitment_claim_txn = nodes[0].tx_broadcaster.txn_broadcasted.lock().unwrap().split_off(0);
- assert_eq!(as_commitment_claim_txn.len(), 1);
- check_spends!(as_commitment_claim_txn[0], revoked_local_txn[0]);
+ let revoked_to_self_claim = {
+ let mut as_commitment_claim_txn = nodes[0].tx_broadcaster.txn_broadcast();
+ assert_eq!(as_commitment_claim_txn.len(), if anchors { 2 } else { 1 });
+ if anchors {
+ assert_eq!(as_commitment_claim_txn[0].input.len(), 1);
+ assert_eq!(as_commitment_claim_txn[0].input[0].previous_output.vout, 4); // Separate to_remote claim
+ check_spends!(as_commitment_claim_txn[0], revoked_local_txn[0]);
+ assert_eq!(as_commitment_claim_txn[1].input.len(), 2);
+ assert_eq!(as_commitment_claim_txn[1].input[0].previous_output.vout, 2);
+ assert_eq!(as_commitment_claim_txn[1].input[1].previous_output.vout, 3);
+ check_spends!(as_commitment_claim_txn[1], revoked_local_txn[0]);
+ Some(as_commitment_claim_txn.remove(0))
+ } else {
+ assert_eq!(as_commitment_claim_txn[0].input.len(), 3);
+ assert_eq!(as_commitment_claim_txn[0].input[0].previous_output.vout, 2);
+ assert_eq!(as_commitment_claim_txn[0].input[1].previous_output.vout, 0);
+ assert_eq!(as_commitment_claim_txn[0].input[2].previous_output.vout, 1);
+ check_spends!(as_commitment_claim_txn[0], revoked_local_txn[0]);
+ None
+ }
+ };
	// The next two checks have the same balance set for A: even though we confirm a revoked HTLC
	// transaction, our balance tracking doesn't use the on-chain value, so the
	// `CounterpartyRevokedOutputClaimable` entry doesn't change.
+ let commitment_tx_fee = chan_feerate *
+ (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000;
+ let anchor_outputs_value = if anchors { channel::ANCHOR_OUTPUT_VALUE_SATOSHI * 2 } else { 0 };
let as_balances = sorted_vec(vec![Balance::ClaimableAwaitingConfirmations {
// to_remote output in B's revoked commitment
- amount_satoshis: 1_000_000 - 11_000 - 3_000 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000,
+ amount_satoshis: 1_000_000 - 11_000 - 3_000 - commitment_tx_fee - anchor_outputs_value,
confirmation_height: to_remote_conf_height,
}, Balance::CounterpartyRevokedOutputClaimable {
// to_self output in B's revoked commitment
mine_transaction(&nodes[0], &revoked_htlc_success);
let as_htlc_claim_tx = nodes[0].tx_broadcaster.txn_broadcasted.lock().unwrap().split_off(0);
assert_eq!(as_htlc_claim_tx.len(), 2);
+ assert_eq!(as_htlc_claim_tx[0].input.len(), 1);
check_spends!(as_htlc_claim_tx[0], revoked_htlc_success);
- check_spends!(as_htlc_claim_tx[1], revoked_local_txn[0]); // A has to generate a new claim for the remaining revoked
- // outputs (which no longer includes the spent HTLC output)
+ // A has to generate a new claim for the remaining revoked outputs (which no longer includes the
+ // spent HTLC output)
+ assert_eq!(as_htlc_claim_tx[1].input.len(), if anchors { 1 } else { 2 });
+ assert_eq!(as_htlc_claim_tx[1].input[0].previous_output.vout, 2);
+ if !anchors {
+ assert_eq!(as_htlc_claim_tx[1].input[1].previous_output.vout, 0);
+ }
+ check_spends!(as_htlc_claim_tx[1], revoked_local_txn[0]);
assert_eq!(as_balances,
sorted_vec(nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
assert_eq!(as_htlc_claim_tx[0].output.len(), 1);
- fuzzy_assert_eq(as_htlc_claim_tx[0].output[0].value,
- 3_000 - chan_feerate * (revoked_htlc_success.weight() + as_htlc_claim_tx[0].weight()) as u64 / 1000);
+ let as_revoked_htlc_success_claim_fee = chan_feerate * as_htlc_claim_tx[0].weight() as u64 / 1000;
+ if anchors {
+ // With anchors, B can pay for revoked_htlc_success's fee with additional inputs, rather
+ // than with the HTLC itself.
+ fuzzy_assert_eq(as_htlc_claim_tx[0].output[0].value,
+ 3_000 - as_revoked_htlc_success_claim_fee);
+ } else {
+ fuzzy_assert_eq(as_htlc_claim_tx[0].output[0].value,
+ 3_000 - revoked_htlc_success_fee - as_revoked_htlc_success_claim_fee);
+ }
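The branch above captures the fee-accounting difference being asserted: without anchors, the HTLC-Success transaction's fee comes out of the HTLC value itself, so the eventual revocation claim recovers less; with anchors, that fee was paid from the broadcaster's separate inputs. A hedged sketch of the two cases, with illustrative fee values:

```rust
// Sketch of the two fee-accounting cases asserted above. With anchors, the
// broadcaster funds the HTLC transaction's fee from separate inputs, so the
// revocation claimer only loses its own claim transaction's fee. The fee
// numbers below are illustrative, not derived from LDK.
fn revoked_htlc_claim_value(htlc_sat: u64, htlc_tx_fee: u64,
                            claim_tx_fee: u64, anchors: bool) -> u64 {
    if anchors {
        htlc_sat - claim_tx_fee
    } else {
        htlc_sat - htlc_tx_fee - claim_tx_fee
    }
}

fn main() {
    // Mirrors the 3_000-sat HTLC above: the anchors claim recovers more
    // because the HTLC tx fee was paid externally.
    assert_eq!(revoked_htlc_claim_value(3_000, 150, 200, false), 2_650);
    assert_eq!(revoked_htlc_claim_value(3_000, 150, 200, true), 2_800);
}
```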
mine_transaction(&nodes[0], &as_htlc_claim_tx[0]);
assert_eq!(sorted_vec(vec![Balance::ClaimableAwaitingConfirmations {
// to_remote output in B's revoked commitment
- amount_satoshis: 1_000_000 - 11_000 - 3_000 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000,
+ amount_satoshis: 1_000_000 - 11_000 - 3_000 - commitment_tx_fee - anchor_outputs_value,
confirmation_height: to_remote_conf_height,
}, Balance::CounterpartyRevokedOutputClaimable {
// to_self output in B's revoked commitment
sorted_vec(nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
connect_blocks(&nodes[0], ANTI_REORG_DELAY - 3);
- test_spendable_output(&nodes[0], &revoked_local_txn[0]);
+ test_spendable_output(&nodes[0], &revoked_local_txn[0], false);
assert_eq!(sorted_vec(vec![Balance::CounterpartyRevokedOutputClaimable {
// to_self output to B
amount_satoshis: 10_000,
sorted_vec(nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
connect_blocks(&nodes[0], 2);
- test_spendable_output(&nodes[0], &as_htlc_claim_tx[0]);
+ test_spendable_output(&nodes[0], &as_htlc_claim_tx[0], false);
assert_eq!(sorted_vec(vec![Balance::CounterpartyRevokedOutputClaimable {
// to_self output in B's revoked commitment
amount_satoshis: 10_000,
}
mine_transaction(&nodes[0], &revoked_htlc_timeout);
- let as_second_htlc_claim_tx = nodes[0].tx_broadcaster.txn_broadcasted.lock().unwrap().split_off(0);
- assert_eq!(as_second_htlc_claim_tx.len(), 2);
-
- check_spends!(as_second_htlc_claim_tx[0], revoked_htlc_timeout);
- check_spends!(as_second_htlc_claim_tx[1], revoked_local_txn[0]);
+ let (revoked_htlc_timeout_claim, revoked_to_self_claim) = {
+ let mut as_second_htlc_claim_tx = nodes[0].tx_broadcaster.txn_broadcast();
+ assert_eq!(as_second_htlc_claim_tx.len(), if anchors { 1 } else { 2 });
+ if anchors {
+ assert_eq!(as_second_htlc_claim_tx[0].input.len(), 1);
+ assert_eq!(as_second_htlc_claim_tx[0].input[0].previous_output.vout, 0);
+ check_spends!(as_second_htlc_claim_tx[0], revoked_htlc_timeout);
+ (as_second_htlc_claim_tx.remove(0), revoked_to_self_claim.unwrap())
+ } else {
+ assert_eq!(as_second_htlc_claim_tx[0].input.len(), 1);
+ assert_eq!(as_second_htlc_claim_tx[0].input[0].previous_output.vout, 0);
+ check_spends!(as_second_htlc_claim_tx[0], revoked_htlc_timeout);
+ assert_eq!(as_second_htlc_claim_tx[1].input.len(), 1);
+ assert_eq!(as_second_htlc_claim_tx[1].input[0].previous_output.vout, 2);
+ check_spends!(as_second_htlc_claim_tx[1], revoked_local_txn[0]);
+ (as_second_htlc_claim_tx.remove(0), as_second_htlc_claim_tx.remove(0))
+ }
+ };
// Connect blocks to finalize the HTLC resolution with the HTLC-Timeout transaction. In a
// previous iteration of the revoked balance handling this would result in us "forgetting" that
}]),
sorted_vec(nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
- mine_transaction(&nodes[0], &as_second_htlc_claim_tx[0]);
+ mine_transaction(&nodes[0], &revoked_htlc_timeout_claim);
assert_eq!(sorted_vec(vec![Balance::CounterpartyRevokedOutputClaimable {
// to_self output in B's revoked commitment
amount_satoshis: 10_000,
}, Balance::ClaimableAwaitingConfirmations {
- amount_satoshis: as_second_htlc_claim_tx[0].output[0].value,
+ amount_satoshis: revoked_htlc_timeout_claim.output[0].value,
confirmation_height: nodes[0].best_block_info().1 + ANTI_REORG_DELAY - 1,
}]),
sorted_vec(nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
- mine_transaction(&nodes[0], &as_second_htlc_claim_tx[1]);
+ mine_transaction(&nodes[0], &revoked_to_self_claim);
assert_eq!(sorted_vec(vec![Balance::ClaimableAwaitingConfirmations {
// to_self output in B's revoked commitment
- amount_satoshis: as_second_htlc_claim_tx[1].output[0].value,
+ amount_satoshis: revoked_to_self_claim.output[0].value,
confirmation_height: nodes[0].best_block_info().1 + ANTI_REORG_DELAY - 1,
}, Balance::ClaimableAwaitingConfirmations {
- amount_satoshis: as_second_htlc_claim_tx[0].output[0].value,
+ amount_satoshis: revoked_htlc_timeout_claim.output[0].value,
confirmation_height: nodes[0].best_block_info().1 + ANTI_REORG_DELAY - 2,
}]),
sorted_vec(nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
connect_blocks(&nodes[0], ANTI_REORG_DELAY - 2);
- test_spendable_output(&nodes[0], &as_second_htlc_claim_tx[0]);
+ test_spendable_output(&nodes[0], &revoked_htlc_timeout_claim, false);
connect_blocks(&nodes[0], 1);
- test_spendable_output(&nodes[0], &as_second_htlc_claim_tx[1]);
+ test_spendable_output(&nodes[0], &revoked_to_self_claim, false);
assert_eq!(nodes[0].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances(), Vec::new());
}
#[test]
-fn test_revoked_counterparty_aggregated_claims() {
+fn test_revoked_counterparty_htlc_tx_balances() {
+ do_test_revoked_counterparty_htlc_tx_balances(false);
+ do_test_revoked_counterparty_htlc_tx_balances(true);
+}
+
+fn do_test_revoked_counterparty_aggregated_claims(anchors: bool) {
// Tests `get_claimable_balances` for revoked counterparty commitment transactions when
// claiming with an aggregated claim transaction.
let mut chanmon_cfgs = create_chanmon_cfgs(2);
// transaction which, from the point of view of our keys_manager, is revoked.
chanmon_cfgs[1].keys_manager.disable_revocation_policy_check = true;
let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
- let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
+ let mut user_config = test_default_channel_config();
+ if anchors {
+ user_config.channel_handshake_config.negotiate_anchors_zero_fee_htlc_tx = true;
+ user_config.manually_accept_inbound_channels = true;
+ }
+ let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[Some(user_config), Some(user_config)]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+ let coinbase_tx = Transaction {
+ version: 2,
+ lock_time: PackedLockTime::ZERO,
+ input: vec![TxIn { ..Default::default() }],
+ output: vec![TxOut {
+ value: Amount::ONE_BTC.to_sat(),
+ script_pubkey: nodes[0].wallet_source.get_change_script().unwrap(),
+ }],
+ };
+ nodes[0].wallet_source.add_utxo(bitcoin::OutPoint { txid: coinbase_tx.txid(), vout: 0 }, coinbase_tx.output[0].value);
+
let (_, _, chan_id, funding_tx) =
create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 1_000_000, 100_000_000);
let funding_outpoint = OutPoint { txid: funding_tx.txid(), index: 0 };
// Now get the latest commitment transaction from A and then update the fee to revoke it
let as_revoked_txn = get_local_commitment_txn!(nodes[0], chan_id);
- assert_eq!(as_revoked_txn.len(), 2);
+ assert_eq!(as_revoked_txn.len(), if anchors { 1 } else { 2 });
check_spends!(as_revoked_txn[0], funding_tx);
- check_spends!(as_revoked_txn[1], as_revoked_txn[0]); // The HTLC-Claim transaction
+ if !anchors {
+ check_spends!(as_revoked_txn[1], as_revoked_txn[0]); // The HTLC-Claim transaction
+ }
let channel_type_features = get_channel_type_features!(nodes[0], nodes[1], chan_id);
let chan_feerate = get_feerate!(nodes[0], nodes[1], chan_id) as u64;
check_closed_event!(nodes[1], 1, ClosureReason::CommitmentTxConfirmed, [nodes[0].node.get_our_node_id()], 1000000);
check_added_monitors!(nodes[1], 1);
- let mut claim_txn: Vec<_> = nodes[1].tx_broadcaster.txn_broadcasted.lock().unwrap().drain(..).filter(|tx| tx.input.iter().any(|inp| inp.previous_output.txid == as_revoked_txn[0].txid())).collect();
- // Currently the revoked commitment outputs are all claimed in one aggregated transaction
- assert_eq!(claim_txn.len(), 1);
- assert_eq!(claim_txn[0].input.len(), 3);
- check_spends!(claim_txn[0], as_revoked_txn[0]);
+ let mut claim_txn = nodes[1].tx_broadcaster.txn_broadcast();
+ assert_eq!(claim_txn.len(), if anchors { 2 } else { 1 });
+ let revoked_to_self_claim = if anchors {
+ assert_eq!(claim_txn[0].input.len(), 1);
+ assert_eq!(claim_txn[0].input[0].previous_output.vout, 5); // Separate to_remote claim
+ check_spends!(claim_txn[0], as_revoked_txn[0]);
+ assert_eq!(claim_txn[1].input.len(), 2);
+ assert_eq!(claim_txn[1].input[0].previous_output.vout, 2);
+ assert_eq!(claim_txn[1].input[1].previous_output.vout, 3);
+ check_spends!(claim_txn[1], as_revoked_txn[0]);
+ Some(claim_txn.remove(0))
+ } else {
+ assert_eq!(claim_txn[0].input.len(), 3);
+ assert_eq!(claim_txn[0].input[0].previous_output.vout, 3);
+ assert_eq!(claim_txn[0].input[1].previous_output.vout, 0);
+ assert_eq!(claim_txn[0].input[2].previous_output.vout, 1);
+ check_spends!(claim_txn[0], as_revoked_txn[0]);
+ None
+ };
let to_remote_maturity = nodes[1].best_block_info().1 + ANTI_REORG_DELAY - 1;
+ let commitment_tx_fee = chan_feerate *
+ (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000;
+ let anchor_outputs_value = if anchors { channel::ANCHOR_OUTPUT_VALUE_SATOSHI * 2 } else { 0 };
assert_eq!(sorted_vec(vec![Balance::ClaimableAwaitingConfirmations {
// to_remote output in A's revoked commitment
amount_satoshis: 100_000 - 4_000 - 3_000,
confirmation_height: to_remote_maturity,
}, Balance::CounterpartyRevokedOutputClaimable {
// to_self output in A's revoked commitment
- amount_satoshis: 1_000_000 - 100_000 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000,
+ amount_satoshis: 1_000_000 - 100_000 - commitment_tx_fee - anchor_outputs_value,
}, Balance::CounterpartyRevokedOutputClaimable { // HTLC 1
amount_satoshis: 4_000,
}, Balance::CounterpartyRevokedOutputClaimable { // HTLC 2
}]),
sorted_vec(nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
- // Confirm A's HTLC-Success tranasction which presumably raced B's claim, causing B to create a
+ // Confirm A's HTLC-Success transaction which presumably raced B's claim, causing B to create a
// new claim.
- mine_transaction(&nodes[1], &as_revoked_txn[1]);
+ if anchors {
+ mine_transaction(&nodes[0], &as_revoked_txn[0]);
+ check_closed_broadcast(&nodes[0], 1, true);
+ check_added_monitors(&nodes[0], 1);
+ check_closed_event!(&nodes[0], 1, ClosureReason::CommitmentTxConfirmed, false, [nodes[1].node.get_our_node_id()], 1_000_000);
+ handle_bump_htlc_event(&nodes[0], 1);
+ }
+ let htlc_success_claim = if anchors {
+ let mut txn = nodes[0].tx_broadcaster.txn_broadcast();
+ assert_eq!(txn.len(), 1);
+ check_spends!(txn[0], as_revoked_txn[0], coinbase_tx);
+ txn.pop().unwrap()
+ } else {
+ as_revoked_txn[1].clone()
+ };
+ mine_transaction(&nodes[1], &htlc_success_claim);
expect_payment_sent(&nodes[1], claimed_payment_preimage, None, true, false);
- let mut claim_txn_2: Vec<_> = nodes[1].tx_broadcaster.txn_broadcasted.lock().unwrap().clone();
- claim_txn_2.sort_unstable_by_key(|tx| if tx.input.iter().any(|inp| inp.previous_output.txid == as_revoked_txn[0].txid()) { 0 } else { 1 });
+
+ let mut claim_txn_2 = nodes[1].tx_broadcaster.txn_broadcast();
// Once B sees the HTLC-Success transaction it splits its claim transaction into two, though in
// theory it could re-aggregate the claims as well.
assert_eq!(claim_txn_2.len(), 2);
- assert_eq!(claim_txn_2[0].input.len(), 2);
- check_spends!(claim_txn_2[0], as_revoked_txn[0]);
- assert_eq!(claim_txn_2[1].input.len(), 1);
- check_spends!(claim_txn_2[1], as_revoked_txn[1]);
+ if anchors {
+ assert_eq!(claim_txn_2[0].input.len(), 1);
+ assert_eq!(claim_txn_2[0].input[0].previous_output.vout, 0);
+ check_spends!(claim_txn_2[0], &htlc_success_claim);
+ assert_eq!(claim_txn_2[1].input.len(), 1);
+ assert_eq!(claim_txn_2[1].input[0].previous_output.vout, 3);
+ check_spends!(claim_txn_2[1], as_revoked_txn[0]);
+ } else {
+ assert_eq!(claim_txn_2[0].input.len(), 1);
+ assert_eq!(claim_txn_2[0].input[0].previous_output.vout, 0);
+ check_spends!(claim_txn_2[0], as_revoked_txn[1]);
+ assert_eq!(claim_txn_2[1].input.len(), 2);
+ assert_eq!(claim_txn_2[1].input[0].previous_output.vout, 3);
+ assert_eq!(claim_txn_2[1].input[1].previous_output.vout, 1);
+ check_spends!(claim_txn_2[1], as_revoked_txn[0]);
+ }
assert_eq!(sorted_vec(vec![Balance::ClaimableAwaitingConfirmations {
// to_remote output in A's revoked commitment
confirmation_height: to_remote_maturity,
}, Balance::CounterpartyRevokedOutputClaimable {
// to_self output in A's revoked commitment
- amount_satoshis: 1_000_000 - 100_000 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000,
+ amount_satoshis: 1_000_000 - 100_000 - commitment_tx_fee - anchor_outputs_value,
}, Balance::CounterpartyRevokedOutputClaimable { // HTLC 1
amount_satoshis: 4_000,
}, Balance::CounterpartyRevokedOutputClaimable { // HTLC 2
sorted_vec(nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
connect_blocks(&nodes[1], 5);
- test_spendable_output(&nodes[1], &as_revoked_txn[0]);
+ test_spendable_output(&nodes[1], &as_revoked_txn[0], false);
assert_eq!(sorted_vec(vec![Balance::CounterpartyRevokedOutputClaimable {
// to_self output in A's revoked commitment
- amount_satoshis: 1_000_000 - 100_000 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000,
+ amount_satoshis: 1_000_000 - 100_000 - commitment_tx_fee - anchor_outputs_value,
}, Balance::CounterpartyRevokedOutputClaimable { // HTLC 1
amount_satoshis: 4_000,
}, Balance::CounterpartyRevokedOutputClaimable { // HTLC 2
}]),
sorted_vec(nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
- mine_transaction(&nodes[1], &claim_txn_2[1]);
+ mine_transaction(&nodes[1], &claim_txn_2[0]);
let htlc_2_claim_maturity = nodes[1].best_block_info().1 + ANTI_REORG_DELAY - 1;
assert_eq!(sorted_vec(vec![Balance::CounterpartyRevokedOutputClaimable {
// to_self output in A's revoked commitment
- amount_satoshis: 1_000_000 - 100_000 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000,
+ amount_satoshis: 1_000_000 - 100_000 - commitment_tx_fee - anchor_outputs_value,
}, Balance::CounterpartyRevokedOutputClaimable { // HTLC 1
amount_satoshis: 4_000,
}, Balance::ClaimableAwaitingConfirmations { // HTLC 2
- amount_satoshis: claim_txn_2[1].output[0].value,
+ amount_satoshis: claim_txn_2[0].output[0].value,
confirmation_height: htlc_2_claim_maturity,
}]),
sorted_vec(nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
connect_blocks(&nodes[1], 5);
- test_spendable_output(&nodes[1], &claim_txn_2[1]);
+ test_spendable_output(&nodes[1], &claim_txn_2[0], false);
assert_eq!(sorted_vec(vec![Balance::CounterpartyRevokedOutputClaimable {
// to_self output in A's revoked commitment
- amount_satoshis: 1_000_000 - 100_000 - chan_feerate *
- (channel::commitment_tx_base_weight(&channel_type_features) + 2 * channel::COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000,
+ amount_satoshis: 1_000_000 - 100_000 - commitment_tx_fee - anchor_outputs_value,
}, Balance::CounterpartyRevokedOutputClaimable { // HTLC 1
amount_satoshis: 4_000,
}]),
sorted_vec(nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances()));
- mine_transaction(&nodes[1], &claim_txn_2[0]);
+ if anchors {
+ mine_transactions(&nodes[1], &[&claim_txn_2[1], revoked_to_self_claim.as_ref().unwrap()]);
+ } else {
+ mine_transaction(&nodes[1], &claim_txn_2[1]);
+ }
let rest_claim_maturity = nodes[1].best_block_info().1 + ANTI_REORG_DELAY - 1;
- assert_eq!(vec![Balance::ClaimableAwaitingConfirmations {
- amount_satoshis: claim_txn_2[0].output[0].value,
- confirmation_height: rest_claim_maturity,
- }],
- nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances());
+ if anchors {
+ assert_eq!(vec![Balance::ClaimableAwaitingConfirmations {
+ amount_satoshis: claim_txn_2[1].output[0].value,
+ confirmation_height: rest_claim_maturity,
+ }, Balance::ClaimableAwaitingConfirmations {
+ amount_satoshis: revoked_to_self_claim.as_ref().unwrap().output[0].value,
+ confirmation_height: rest_claim_maturity,
+ }],
+ nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances());
+ } else {
+ assert_eq!(vec![Balance::ClaimableAwaitingConfirmations {
+ amount_satoshis: claim_txn_2[1].output[0].value,
+ confirmation_height: rest_claim_maturity,
+ }],
+ nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances());
+ }
assert!(nodes[1].node.get_and_clear_pending_events().is_empty()); // We shouldn't fail the payment until we spend the output
connect_blocks(&nodes[1], 5);
expect_payment_failed!(nodes[1], revoked_payment_hash, false);
- test_spendable_output(&nodes[1], &claim_txn_2[0]);
+ if anchors {
+ let events = nodes[1].chain_monitor.chain_monitor.get_and_clear_pending_events();
+ assert_eq!(events.len(), 2);
+ for (i, event) in events.into_iter().enumerate() {
+ if let Event::SpendableOutputs { outputs, .. } = event {
+ assert_eq!(outputs.len(), 1);
+ let spend_tx = nodes[1].keys_manager.backing.spend_spendable_outputs(
+ &[&outputs[0]], Vec::new(), Builder::new().push_opcode(opcodes::all::OP_RETURN).into_script(),
+ 253, None, &Secp256k1::new()
+ ).unwrap();
+ check_spends!(spend_tx, if i == 0 { &claim_txn_2[1] } else { revoked_to_self_claim.as_ref().unwrap() });
+ } else { panic!(); }
+ }
+ } else {
+ test_spendable_output(&nodes[1], &claim_txn_2[1], false);
+ }
assert!(nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances().is_empty());
// Ensure that even if we connect more blocks, potentially replaying the entire chain if we're
assert!(nodes[1].chain_monitor.chain_monitor.get_monitor(funding_outpoint).unwrap().get_claimable_balances().is_empty());
}
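The balance assertions above repeatedly compute the `to_self` amount as `1_000_000 - 100_000 - commitment_tx_fee - anchor_outputs_value`. A standalone sketch of the underlying fee arithmetic (the weight constants below are illustrative placeholders, not the crate's actual values):

```rust
// Illustrative commitment-fee arithmetic; the weight constants below are
// placeholders, not rust-lightning's actual values.
const COMMITMENT_TX_BASE_WEIGHT: u64 = 724;
const COMMITMENT_TX_WEIGHT_PER_HTLC: u64 = 172;

/// Fee paid by a commitment transaction carrying `num_htlcs` HTLCs at
/// `feerate_per_kw` (satoshis per 1000 weight units), rounded down.
fn commitment_tx_fee(feerate_per_kw: u64, num_htlcs: u64) -> u64 {
    feerate_per_kw * (COMMITMENT_TX_BASE_WEIGHT + num_htlcs * COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000
}

fn main() {
    let chan_feerate = 253;
    let fee = commitment_tx_fee(chan_feerate, 2);
    // to_self balance = funding - pushed - fee (anchor output value would
    // also be subtracted on an anchors channel).
    let to_self = 1_000_000u64 - 100_000 - fee;
    assert!(to_self < 900_000);
    println!("fee={} to_self={}", fee, to_self);
}
```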
+#[test]
+fn test_revoked_counterparty_aggregated_claims() {
+ do_test_revoked_counterparty_aggregated_claims(false);
+ do_test_revoked_counterparty_aggregated_claims(true);
+}
+
fn do_test_restored_packages_retry(check_old_monitor_retries_after_upgrade: bool) {
// Tests that we'll retry packages that were previously timelocked after we've restored them.
let chanmon_cfgs = create_chanmon_cfgs(2);
&LowerBoundedFeeEstimator::new(node_cfgs[0].fee_estimator), &nodes[0].logger
);
get_monitor!(nodes[1], chan_id).provide_payment_preimage(
- &payment_hash_1, &payment_preimage_1, &node_cfgs[0].tx_broadcaster,
+ &payment_hash_1, &payment_preimage_1, &node_cfgs[1].tx_broadcaster,
&LowerBoundedFeeEstimator::new(node_cfgs[1].fee_estimator), &nodes[1].logger
);
assert!(nodes[1].chain_monitor.chain_monitor.get_and_clear_pending_events().is_empty());
let spendable_output_events = nodes[0].chain_monitor.chain_monitor.get_and_clear_pending_events();
- assert_eq!(spendable_output_events.len(), 2);
- for event in spendable_output_events.iter() {
+ assert_eq!(spendable_output_events.len(), 4);
+ for event in spendable_output_events {
if let Event::SpendableOutputs { outputs, channel_id } = event {
assert_eq!(outputs.len(), 1);
assert!(vec![chan_b.2, chan_a.2].contains(&channel_id.unwrap()));
&[&outputs[0]], Vec::new(), Script::new_op_return(&[]), 253, None, &Secp256k1::new(),
).unwrap();
- check_spends!(spend_tx, revoked_claim_transactions.get(&spend_tx.input[0].previous_output.txid).unwrap());
+ if let SpendableOutputDescriptor::StaticPaymentOutput(_) = &outputs[0] {
+ check_spends!(spend_tx, &revoked_commitment_a, &revoked_commitment_b);
+ } else {
+ check_spends!(spend_tx, revoked_claim_transactions.get(&spend_tx.input[0].previous_output.txid).unwrap());
+ }
} else {
panic!("unexpected event");
}
// revoked commitment which Bob has the preimage for.
assert_eq!(nodes[1].chain_monitor.chain_monitor.get_claimable_balances(&[]).len(), 6);
}
+
+fn do_test_anchors_monitor_fixes_counterparty_payment_script_on_reload(confirm_commitment_before_reload: bool) {
+ // Tests that we'll fix a ChannelMonitor's `counterparty_payment_script` for an anchor outputs
+ // channel upon deserialization.
+ let chanmon_cfgs = create_chanmon_cfgs(2);
+ let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
+ let persister;
+ let chain_monitor;
+ let mut user_config = test_default_channel_config();
+ user_config.channel_handshake_config.negotiate_anchors_zero_fee_htlc_tx = true;
+ user_config.manually_accept_inbound_channels = true;
+ let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[Some(user_config), Some(user_config)]);
+ let node_deserialized;
+ let mut nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+
+ let (_, _, chan_id, funding_tx) = create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 100_000, 50_000_000);
+
+ // Set the monitor's `counterparty_payment_script` to a dummy P2WPKH script.
+ let secp = Secp256k1::new();
+ let privkey = bitcoin::PrivateKey::from_slice(&[1; 32], bitcoin::Network::Testnet).unwrap();
+ let pubkey = bitcoin::PublicKey::from_private_key(&secp, &privkey);
+ let p2wpkh_script = Script::new_v0_p2wpkh(&pubkey.wpubkey_hash().unwrap());
+ get_monitor!(nodes[1], chan_id).set_counterparty_payment_script(p2wpkh_script.clone());
+ assert_eq!(get_monitor!(nodes[1], chan_id).get_counterparty_payment_script(), p2wpkh_script);
+
+ // Confirm the counterparty's commitment and reload the monitor (either before or after) such
+ // that we arrive at the correct `counterparty_payment_script` after the reload.
+ nodes[0].node.force_close_broadcasting_latest_txn(&chan_id, &nodes[1].node.get_our_node_id()).unwrap();
+ check_added_monitors(&nodes[0], 1);
+ check_closed_broadcast(&nodes[0], 1, true);
+ check_closed_event!(&nodes[0], 1, ClosureReason::HolderForceClosed, false,
+ [nodes[1].node.get_our_node_id()], 100000);
+
+ let commitment_tx = {
+ let mut txn = nodes[0].tx_broadcaster.unique_txn_broadcast();
+ assert_eq!(txn.len(), 1);
+ assert_eq!(txn[0].output.len(), 4);
+ check_spends!(txn[0], funding_tx);
+ txn.pop().unwrap()
+ };
+
+ mine_transaction(&nodes[0], &commitment_tx);
+ let commitment_tx_conf_height = if confirm_commitment_before_reload {
+ // We should expect our round trip serialization check to fail as we're writing the monitor
+ // with the incorrect P2WPKH script but reading it with the correct P2WSH script.
+ *nodes[1].chain_monitor.expect_monitor_round_trip_fail.lock().unwrap() = Some(chan_id);
+ let commitment_tx_conf_height = block_from_scid(&mine_transaction(&nodes[1], &commitment_tx));
+ let serialized_monitor = get_monitor!(nodes[1], chan_id).encode();
+ reload_node!(nodes[1], user_config, &nodes[1].node.encode(), &[&serialized_monitor], persister, chain_monitor, node_deserialized);
+ commitment_tx_conf_height
+ } else {
+ let serialized_monitor = get_monitor!(nodes[1], chan_id).encode();
+ reload_node!(nodes[1], user_config, &nodes[1].node.encode(), &[&serialized_monitor], persister, chain_monitor, node_deserialized);
+ let commitment_tx_conf_height = block_from_scid(&mine_transaction(&nodes[1], &commitment_tx));
+ check_added_monitors(&nodes[1], 1);
+ check_closed_broadcast(&nodes[1], 1, true);
+ commitment_tx_conf_height
+ };
+ check_closed_event!(&nodes[1], 1, ClosureReason::CommitmentTxConfirmed, false,
+ [nodes[0].node.get_our_node_id()], 100000);
+ assert!(get_monitor!(nodes[1], chan_id).get_counterparty_payment_script().is_v0_p2wsh());
+
+ connect_blocks(&nodes[0], ANTI_REORG_DELAY - 1);
+ connect_blocks(&nodes[1], ANTI_REORG_DELAY - 1);
+
+ if confirm_commitment_before_reload {
+ // If we saw the commitment before our `counterparty_payment_script` was fixed, we'll never
+ // get the spendable output event for the `to_remote` output, so we'll need to get it
+ // manually via `get_spendable_outputs`.
+ check_added_monitors(&nodes[1], 1);
+ let outputs = get_monitor!(nodes[1], chan_id).get_spendable_outputs(&commitment_tx, commitment_tx_conf_height);
+ assert_eq!(outputs.len(), 1);
+ let spend_tx = nodes[1].keys_manager.backing.spend_spendable_outputs(
+ &[&outputs[0]], Vec::new(), Builder::new().push_opcode(opcodes::all::OP_RETURN).into_script(),
+ 253, None, &secp
+ ).unwrap();
+ check_spends!(spend_tx, &commitment_tx);
+ } else {
+ test_spendable_output(&nodes[1], &commitment_tx, false);
+ }
+}
+
+#[test]
+fn test_anchors_monitor_fixes_counterparty_payment_script_on_reload() {
+ do_test_anchors_monitor_fixes_counterparty_payment_script_on_reload(false);
+ do_test_anchors_monitor_fixes_counterparty_payment_script_on_reload(true);
+}
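The reload fix above swaps a dummy P2WPKH `counterparty_payment_script` for the P2WSH script that anchors channels require, checked via `is_v0_p2wsh()`. A standalone sketch of the two witness-program shapes being distinguished (these predicates are hand-rolled stand-ins for the `bitcoin` crate's methods):

```rust
/// Hand-rolled stand-ins for rust-bitcoin's script predicates: a v0
/// witness program is OP_0 followed by a single 20-byte (P2WPKH) or
/// 32-byte (P2WSH) push.
fn is_v0_p2wpkh(script: &[u8]) -> bool {
    script.len() == 22 && script[0] == 0x00 && script[1] == 0x14
}

fn is_v0_p2wsh(script: &[u8]) -> bool {
    script.len() == 34 && script[0] == 0x00 && script[1] == 0x20
}

fn main() {
    // Dummy 20-byte pubkey hash and 32-byte script hash payloads.
    let p2wpkh: Vec<u8> = [vec![0x00, 0x14], vec![0xab; 20]].concat();
    let p2wsh: Vec<u8> = [vec![0x00, 0x20], vec![0xcd; 32]].concat();
    assert!(is_v0_p2wpkh(&p2wpkh) && !is_v0_p2wsh(&p2wpkh));
    assert!(is_v0_p2wsh(&p2wsh) && !is_v0_p2wpkh(&p2wsh));
    println!("ok");
}
```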
use crate::sign::{NodeSigner, Recipient};
use crate::prelude::*;
+#[cfg(feature = "std")]
use core::convert::TryFrom;
use core::fmt;
use core::fmt::Debug;
use core::ops::Deref;
+#[cfg(feature = "std")]
use core::str::FromStr;
use crate::io::{self, Cursor, Read};
use crate::io_extras::read_to_end;
}
}
-fn parse_onion_address(host: &str, port: u16) -> Result<SocketAddress, SocketAddressParseError> {
+/// Parses an OnionV3 host and port into a [`SocketAddress::OnionV3`].
+///
+/// The host part must end with ".onion".
+pub fn parse_onion_address(host: &str, port: u16) -> Result<SocketAddress, SocketAddressParseError> {
if host.ends_with(".onion") {
let domain = &host[..host.len() - ".onion".len()];
if domain.len() != 56 {
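The newly-public `parse_onion_address` requires the host to end in `.onion` with a 56-character domain, the base32 length used by Tor v3 onion services. A minimal standalone sketch of that shape check (a hypothetical helper; the real function also decodes the base32 payload before building the `SocketAddress::OnionV3`):

```rust
/// Simplified shape check mirroring the ".onion" validation above.
/// Hypothetical standalone helper; the real parser also decodes the
/// base32 payload into the ed25519 pubkey, checksum, and version.
fn looks_like_onion_v3(host: &str) -> bool {
    match host.strip_suffix(".onion") {
        // Tor v3 onion domains are exactly 56 base32 characters.
        Some(domain) => domain.len() == 56
            && domain.chars().all(|c| matches!(c, 'a'..='z' | '2'..='7')),
        None => false,
    }
}

fn main() {
    // 56 'a' characters followed by ".onion" passes the shape check.
    let host = format!("{}.onion", "a".repeat(56));
    assert!(looks_like_onion_v3(&host));
    assert!(!looks_like_onion_v3("example.com"));
    println!("ok");
}
```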
/// Note that if this field is non-empty, it will contain strictly increasing TLVs, each
/// represented by a `(u64, Vec<u8>)` for its type number and serialized value respectively.
/// This is validated when setting this field using [`Self::with_custom_tlvs`].
+ #[cfg(not(c_bindings))]
pub fn custom_tlvs(&self) -> &Vec<(u64, Vec<u8>)> {
&self.custom_tlvs
}
+ /// Gets the custom TLVs that will be sent or have been received.
+ ///
+ /// Custom TLVs allow sending extra application-specific data with a payment. They provide
+ /// additional flexibility on top of payment metadata, as while other implementations may
+ /// require `payment_metadata` to reflect metadata provided in an invoice, custom TLVs
+ /// do not have this restriction.
+ ///
+ /// Note that if this field is non-empty, it will contain strictly increasing TLVs, each
+ /// represented by a `(u64, Vec<u8>)` for its type number and serialized value respectively.
+ /// This is validated when setting this field using [`Self::with_custom_tlvs`].
+ #[cfg(c_bindings)]
+ pub fn custom_tlvs(&self) -> Vec<(u64, Vec<u8>)> {
+ self.custom_tlvs.clone()
+ }
+
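The `custom_tlvs` docs above note the field always holds strictly increasing TLV type numbers, validated by `with_custom_tlvs`. A minimal sketch of that invariant check (a hypothetical helper, not the crate's actual validation code):

```rust
/// Returns true if the TLV records are sorted by strictly increasing
/// type number -- the invariant documented for `custom_tlvs`.
fn tlvs_strictly_increasing(tlvs: &[(u64, Vec<u8>)]) -> bool {
    tlvs.windows(2).all(|w| w[0].0 < w[1].0)
}

fn main() {
    let good = vec![(65536, vec![1]), (65537, vec![2, 3])];
    // A duplicate type number violates strict monotonicity.
    let dup = vec![(65536, vec![1]), (65536, vec![2])];
    assert!(tlvs_strictly_increasing(&good));
    assert!(!tlvs_strictly_increasing(&dup));
    println!("ok");
}
```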
/// When we have received some HTLC(s) towards an MPP payment, as we receive further HTLC(s) we
/// have to make sure that some fields match exactly across the parts. For those that aren't
/// required to match, if they don't match we should remove them so as to not expose data
&self, pending_events: &Mutex<VecDeque<(events::Event, Option<EventCompletionAction>)>>)
{
let mut pending_outbound_payments = self.pending_outbound_payments.lock().unwrap();
+ #[cfg(not(invreqfailed))]
+ let pending_events = pending_events.lock().unwrap();
+ #[cfg(invreqfailed)]
let mut pending_events = pending_events.lock().unwrap();
pending_outbound_payments.retain(|payment_id, payment| {
// If an outbound payment was completed, and no pending HTLCs remain, we should remove it
if *timer_ticks_without_response <= INVOICE_REQUEST_TIMEOUT_TICKS {
true
} else {
+ #[cfg(invreqfailed)]
pending_events.push_back(
(events::Event::InvoiceRequestFailed { payment_id: *payment_id }, None)
);
payment.remove();
}
} else if let PendingOutboundPayment::AwaitingInvoice { .. } = payment.get() {
+ #[cfg(invreqfailed)]
pending_events.lock().unwrap().push_back((events::Event::InvoiceRequestFailed {
payment_id,
}, None));
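The retain loop above drops `AwaitingInvoice` payments once `INVOICE_REQUEST_TIMEOUT_TICKS` timer ticks pass without a response, surfacing `InvoiceRequestFailed` when the `invreqfailed` cfg is set. A standalone sketch of that timeout bookkeeping (simplified state; the constant's value here is assumed for illustration, the real one lives in `outbound_payment`):

```rust
// Assumed value for illustration; see outbound_payment for the real constant.
const INVOICE_REQUEST_TIMEOUT_TICKS: u8 = 3;

struct AwaitingInvoice { timer_ticks_without_response: u8 }

/// One timer tick: bump each pending request's counter and drop the ones
/// that have timed out, returning the payment ids that failed.
fn tick(pending: &mut Vec<(u64, AwaitingInvoice)>) -> Vec<u64> {
    let mut failed = Vec::new();
    pending.retain_mut(|(id, p)| {
        p.timer_ticks_without_response += 1;
        if p.timer_ticks_without_response <= INVOICE_REQUEST_TIMEOUT_TICKS {
            true
        } else {
            failed.push(*id);
            false
        }
    });
    failed
}

fn main() {
    let mut pending = vec![(42, AwaitingInvoice { timer_ticks_without_response: 0 })];
    // The payment survives up to the timeout, then is removed and reported.
    for _ in 0..INVOICE_REQUEST_TIMEOUT_TICKS { assert!(tick(&mut pending).is_empty()); }
    assert_eq!(tick(&mut pending), vec![42]);
    assert!(pending.is_empty());
    println!("ok");
}
```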
use crate::ln::channelmanager::{PaymentId, RecipientOnionFields};
use crate::ln::features::{ChannelFeatures, NodeFeatures};
use crate::ln::msgs::{ErrorAction, LightningError};
- use crate::ln::outbound_payment::{Bolt12PaymentError, INVOICE_REQUEST_TIMEOUT_TICKS, OutboundPayments, Retry, RetryableSendFailure};
+ use crate::ln::outbound_payment::{Bolt12PaymentError, OutboundPayments, Retry, RetryableSendFailure};
+ #[cfg(invreqfailed)]
+ use crate::ln::outbound_payment::INVOICE_REQUEST_TIMEOUT_TICKS;
use crate::offers::invoice::DEFAULT_RELATIVE_EXPIRY;
use crate::offers::offer::OfferBuilder;
use crate::offers::test_utils::*;
}
#[test]
+ #[cfg(invreqfailed)]
fn removes_stale_awaiting_invoice() {
let pending_events = Mutex::new(VecDeque::new());
let outbound_payments = OutboundPayments::new();
}
#[test]
+ #[cfg(invreqfailed)]
fn removes_abandoned_awaiting_invoice() {
let pending_events = Mutex::new(VecDeque::new());
let outbound_payments = OutboundPayments::new();
// Check for unknown channel id error.
let unknown_chan_id_err = nodes[1].node.forward_intercepted_htlc(intercept_id, &ChannelId::from_bytes([42; 32]), nodes[2].node.get_our_node_id(), expected_outbound_amount_msat).unwrap_err();
assert_eq!(unknown_chan_id_err , APIError::ChannelUnavailable {
- err: format!("Channel with id {} not found for the passed counterparty node_id {}.",
+ err: format!("Channel with id {} not found for the passed counterparty node_id {}",
log_bytes!([42; 32]), nodes[2].node.get_our_node_id()) });
if test == InterceptTest::Fail {
payment_hash, Some(payment_secret), events.pop().unwrap(), true, None).unwrap();
match payment_claimable {
Event::PaymentClaimable { onion_fields, .. } => {
- assert_eq!(onion_fields.unwrap().custom_tlvs(), &custom_tlvs);
+ assert_eq!(&onion_fields.unwrap().custom_tlvs()[..], &custom_tlvs[..]);
},
_ => panic!("Unexpected event"),
};
pub type SimpleArcPeerManager<SD, M, T, F, C, L> = PeerManager<
SD,
Arc<SimpleArcChannelManager<M, T, F, L>>,
- Arc<P2PGossipSync<Arc<NetworkGraph<Arc<L>>>, Arc<C>, Arc<L>>>,
+ Arc<P2PGossipSync<Arc<NetworkGraph<Arc<L>>>, C, Arc<L>>>,
Arc<SimpleArcOnionMessenger<L>>,
Arc<L>,
IgnoringMessageHandler,
///
/// This is not exported to bindings users as general type aliases don't make sense in bindings.
pub type SimpleRefPeerManager<
- 'a, 'b, 'c, 'd, 'e, 'f, 'g, 'h, 'i, 'j, 'k, 'l, 'm, 'n, SD, M, T, F, C, L
+ 'a, 'b, 'c, 'd, 'e, 'f, 'logger, 'h, 'i, 'j, 'graph, SD, M, T, F, C, L
> = PeerManager<
SD,
- &'n SimpleRefChannelManager<'a, 'b, 'c, 'd, 'e, 'f, 'g, 'm, M, T, F, L>,
- &'f P2PGossipSync<&'g NetworkGraph<&'f L>, &'h C, &'f L>,
- &'i SimpleRefOnionMessenger<'g, 'm, 'n, L>,
- &'f L,
+ &'j SimpleRefChannelManager<'a, 'b, 'c, 'd, 'e, 'graph, 'logger, 'i, M, T, F, L>,
+ &'f P2PGossipSync<&'graph NetworkGraph<&'logger L>, C, &'logger L>,
+ &'h SimpleRefOnionMessenger<'logger, 'i, 'j, L>,
+ &'logger L,
IgnoringMessageHandler,
&'c KeysManager
>;
use crate::util::config::UserConfig;
use crate::util::string::UntrustedString;
-use bitcoin::{PackedLockTime, Transaction, TxOut};
use bitcoin::hash_types::BlockHash;
use crate::prelude::*;
//! Further functional tests which test blockchain reorganizations.
+use crate::chain::chaininterface::LowerBoundedFeeEstimator;
use crate::chain::channelmonitor::{ANTI_REORG_DELAY, LATENCY_GRACE_PERIOD_BLOCKS};
use crate::chain::transaction::OutPoint;
use crate::chain::Confirm;
-use crate::events::{Event, MessageSendEventsProvider, ClosureReason, HTLCDestination};
+use crate::events::{Event, MessageSendEventsProvider, ClosureReason, HTLCDestination, MessageSendEvent};
use crate::ln::msgs::{ChannelMessageHandler, Init};
use crate::util::test_utils;
use crate::util::ser::Writeable;
do_test_to_remote_after_local_detection(ConnectStyle::TransactionsFirstReorgsOnlyTip);
do_test_to_remote_after_local_detection(ConnectStyle::FullBlockViaListen);
}
+
+#[test]
+fn test_htlc_preimage_claim_holder_commitment_after_counterparty_commitment_reorg() {
+ // We detect a counterparty commitment confirmed onchain, followed by a reorg and a
+ // confirmation of a holder commitment. Then, if we learn of the preimage for an HTLC in
+ // both commitments, we test that we only claim from the currently confirmed commitment.
+ let chanmon_cfgs = create_chanmon_cfgs(2);
+ let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
+ let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
+ let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+
+ let (_, _, chan_id, funding_tx) = create_announced_chan_between_nodes(&nodes, 0, 1);
+
+ // Route an HTLC which we will claim onchain with the preimage.
+ let (payment_preimage, payment_hash, ..) = route_payment(&nodes[0], &[&nodes[1]], 1_000_000);
+
+ // Force close with the latest counterparty commitment, confirm it, and reorg it with the latest
+ // holder commitment.
+ nodes[0].node.force_close_broadcasting_latest_txn(&chan_id, &nodes[1].node.get_our_node_id()).unwrap();
+ check_closed_broadcast(&nodes[0], 1, true);
+ check_added_monitors(&nodes[0], 1);
+ check_closed_event(&nodes[0], 1, ClosureReason::HolderForceClosed, false, &[nodes[1].node.get_our_node_id()], 100000);
+
+ nodes[1].node.force_close_broadcasting_latest_txn(&chan_id, &nodes[0].node.get_our_node_id()).unwrap();
+ check_closed_broadcast(&nodes[1], 1, true);
+ check_added_monitors(&nodes[1], 1);
+ check_closed_event(&nodes[1], 1, ClosureReason::HolderForceClosed, false, &[nodes[0].node.get_our_node_id()], 100000);
+
+ let mut txn = nodes[0].tx_broadcaster.txn_broadcast();
+ assert_eq!(txn.len(), 1);
+ let commitment_tx_a = txn.pop().unwrap();
+ check_spends!(commitment_tx_a, funding_tx);
+
+ let mut txn = nodes[1].tx_broadcaster.txn_broadcast();
+ assert_eq!(txn.len(), 1);
+ let commitment_tx_b = txn.pop().unwrap();
+ check_spends!(commitment_tx_b, funding_tx);
+
+ mine_transaction(&nodes[0], &commitment_tx_a);
+ mine_transaction(&nodes[1], &commitment_tx_a);
+
+ disconnect_blocks(&nodes[0], 1);
+ disconnect_blocks(&nodes[1], 1);
+
+ mine_transaction(&nodes[0], &commitment_tx_b);
+ mine_transaction(&nodes[1], &commitment_tx_b);
+
+ // Provide the preimage now, such that we only claim from the holder commitment (since it's
+ // currently confirmed) and not the counterparty's.
+ get_monitor!(nodes[1], chan_id).provide_payment_preimage(
+ &payment_hash, &payment_preimage, &nodes[1].tx_broadcaster,
+ &LowerBoundedFeeEstimator(nodes[1].fee_estimator), &nodes[1].logger
+ );
+
+ let mut txn = nodes[1].tx_broadcaster.txn_broadcast();
+ assert_eq!(txn.len(), 1);
+ let htlc_success_tx = txn.pop().unwrap();
+ check_spends!(htlc_success_tx, commitment_tx_b);
+}
+
+#[test]
+fn test_htlc_preimage_claim_prev_counterparty_commitment_after_current_counterparty_commitment_reorg() {
+ // We detect a counterparty commitment confirmed onchain, followed by a reorg and a
+ // confirmation of the previous (still unrevoked) counterparty commitment. Then, if we learn
+ // of the preimage for an HTLC in both commitments, we test that we only claim from the
+ // currently confirmed commitment.
+ let chanmon_cfgs = create_chanmon_cfgs(2);
+ let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
+ let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
+ let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+
+ let (_, _, chan_id, funding_tx) = create_announced_chan_between_nodes(&nodes, 0, 1);
+
+ // Route an HTLC which we will claim onchain with the preimage.
+ let (payment_preimage, payment_hash, ..) = route_payment(&nodes[0], &[&nodes[1]], 1_000_000);
+
+ // Obtain the current commitment, which will become the previous after a fee update.
+ let prev_commitment_a = &get_local_commitment_txn!(nodes[0], chan_id)[0];
+
+ *nodes[0].fee_estimator.sat_per_kw.lock().unwrap() *= 4;
+ nodes[0].node.timer_tick_occurred();
+ check_added_monitors(&nodes[0], 1);
+ let mut msg_events = nodes[0].node.get_and_clear_pending_msg_events();
+ assert_eq!(msg_events.len(), 1);
+ let (update_fee, commit_sig) = if let MessageSendEvent::UpdateHTLCs { node_id, mut updates } = msg_events.pop().unwrap() {
+ assert_eq!(node_id, nodes[1].node.get_our_node_id());
+ (updates.update_fee.take().unwrap(), updates.commitment_signed)
+ } else {
+ panic!("Unexpected message send event");
+ };
+
+ // Handle the fee update on the other side, but don't send the last RAA such that the previous
+ // commitment is still valid (unrevoked).
+ nodes[1].node.handle_update_fee(&nodes[0].node.get_our_node_id(), &update_fee);
+ let _last_revoke_and_ack = commitment_signed_dance!(nodes[1], nodes[0], commit_sig, false, true, false, true);
+
+ // Force close with the latest commitment, confirm it, and reorg it with the previous commitment.
+ nodes[0].node.force_close_broadcasting_latest_txn(&chan_id, &nodes[1].node.get_our_node_id()).unwrap();
+ check_closed_broadcast(&nodes[0], 1, true);
+ check_added_monitors(&nodes[0], 1);
+ check_closed_event(&nodes[0], 1, ClosureReason::HolderForceClosed, false, &[nodes[1].node.get_our_node_id()], 100000);
+
+ let mut txn = nodes[0].tx_broadcaster.txn_broadcast();
+ assert_eq!(txn.len(), 1);
+ let current_commitment_a = txn.pop().unwrap();
+ assert_ne!(current_commitment_a.txid(), prev_commitment_a.txid());
+ check_spends!(current_commitment_a, funding_tx);
+
+ mine_transaction(&nodes[0], &current_commitment_a);
+ mine_transaction(&nodes[1], &current_commitment_a);
+
+ check_closed_broadcast(&nodes[1], 1, true);
+ check_added_monitors(&nodes[1], 1);
+ check_closed_event(&nodes[1], 1, ClosureReason::CommitmentTxConfirmed, false, &[nodes[0].node.get_our_node_id()], 100000);
+
+ disconnect_blocks(&nodes[0], 1);
+ disconnect_blocks(&nodes[1], 1);
+
+ mine_transaction(&nodes[0], &prev_commitment_a);
+ mine_transaction(&nodes[1], &prev_commitment_a);
+
+ // Provide the preimage now, such that we only claim from the previous commitment (since it's
+ // currently confirmed) and not the latest.
+ get_monitor!(nodes[1], chan_id).provide_payment_preimage(
+ &payment_hash, &payment_preimage, &nodes[1].tx_broadcaster,
+ &LowerBoundedFeeEstimator(nodes[1].fee_estimator), &nodes[1].logger
+ );
+
+ let mut txn = nodes[1].tx_broadcaster.txn_broadcast();
+ assert_eq!(txn.len(), 1);
+ let htlc_preimage_tx = txn.pop().unwrap();
+ check_spends!(htlc_preimage_tx, prev_commitment_a);
+ // Make sure it was indeed a preimage claim and not a revocation claim since the previous
+ // commitment (still unrevoked) is the currently confirmed closing transaction.
+ assert_eq!(htlc_preimage_tx.input[0].witness.second_to_last().unwrap(), &payment_preimage.0[..]);
+}
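Both reorg tests above assert the monitor claims HTLCs only from the commitment currently confirmed on-chain, never from a disconnected alternative. A toy model of that selection rule (deliberately simplified; the real monitor tracks per-transaction confirmation state):

```rust
#[derive(PartialEq, Clone, Copy, Debug)]
struct Txid(u8);

/// Pick which commitment to claim from: only the one that is currently
/// confirmed, never an alternative that was reorged out.
fn claim_target(confirmed: Txid, candidates: &[Txid]) -> Option<Txid> {
    candidates.iter().copied().find(|c| *c == confirmed)
}

fn main() {
    let (holder, counterparty) = (Txid(1), Txid(2));
    // After the reorg the holder commitment is the confirmed one, so the
    // counterparty commitment is not a valid claim target.
    assert_eq!(claim_target(holder, &[counterparty, holder]), Some(holder));
    assert_eq!(claim_target(Txid(3), &[counterparty, holder]), None);
    println!("ok");
}
```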
nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 1_000_000, 100_000, 0, None).unwrap();
let open_chan = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
- // P2WSH
+ // Create a dummy P2WPKH script
let script = Builder::new().push_int(0)
.push_slice(&[0; 20])
.into_script();
/// The chains that may be used when paying a requested invoice (e.g., bitcoin mainnet).
/// Payments must be denominated in units of the minimal lightning-payable unit (e.g., msats)
/// for the selected chain.
- pub fn chains(&$self) -> Vec<$crate::bitcoin::blockdata::constants::ChainHash> {
+ pub fn chains(&$self) -> Vec<bitcoin::blockdata::constants::ChainHash> {
$contents.chains()
}
}
/// The public key used by the recipient to sign invoices.
- pub fn signing_pubkey(&$self) -> $crate::bitcoin::secp256k1::PublicKey {
+ pub fn signing_pubkey(&$self) -> bitcoin::secp256k1::PublicKey {
$contents.signing_pubkey()
}
} }
fn read_custom_message<R: io::Read>(&self, message_type: u64, buffer: &mut R) -> Result<Option<Self::CustomMessage>, msgs::DecodeError>;
}
+
+/// Create an onion message with contents `message` to the destination of `path`.
+/// Returns (introduction_node_id, onion_msg)
+pub fn create_onion_message<ES: Deref, NS: Deref, T: CustomOnionMessageContents>(
+ entropy_source: &ES, node_signer: &NS, secp_ctx: &Secp256k1<secp256k1::All>,
+ path: OnionMessagePath, message: OnionMessageContents<T>, reply_path: Option<BlindedPath>,
+) -> Result<(PublicKey, msgs::OnionMessage), SendError>
+where
+ ES::Target: EntropySource,
+ NS::Target: NodeSigner,
+{
+ let OnionMessagePath { intermediate_nodes, mut destination } = path;
+ if let Destination::BlindedPath(BlindedPath { ref blinded_hops, .. }) = destination {
+ if blinded_hops.len() < 2 {
+ return Err(SendError::TooFewBlindedHops);
+ }
+ }
+
+ if message.tlv_type() < 64 { return Err(SendError::InvalidMessage) }
+
+ // If we are sending straight to a blinded path and we are the introduction node, we need to
+ // advance the blinded path by 1 hop so the second hop is the new introduction node.
+ if intermediate_nodes.len() == 0 {
+ if let Destination::BlindedPath(ref mut blinded_path) = destination {
+ let our_node_id = node_signer.get_node_id(Recipient::Node)
+ .map_err(|()| SendError::GetNodeIdFailed)?;
+ if blinded_path.introduction_node_id == our_node_id {
+ advance_path_by_one(blinded_path, node_signer, &secp_ctx)
+ .map_err(|()| SendError::BlindedPathAdvanceFailed)?;
+ }
+ }
+ }
+
+ let blinding_secret_bytes = entropy_source.get_secure_random_bytes();
+ let blinding_secret = SecretKey::from_slice(&blinding_secret_bytes[..]).expect("RNG is busted");
+ let (introduction_node_id, blinding_point) = if intermediate_nodes.len() != 0 {
+ (intermediate_nodes[0], PublicKey::from_secret_key(&secp_ctx, &blinding_secret))
+ } else {
+ match destination {
+ Destination::Node(pk) => (pk, PublicKey::from_secret_key(&secp_ctx, &blinding_secret)),
+ Destination::BlindedPath(BlindedPath { introduction_node_id, blinding_point, .. }) =>
+ (introduction_node_id, blinding_point),
+ }
+ };
+ let (packet_payloads, packet_keys) = packet_payloads_and_keys(
+ &secp_ctx, &intermediate_nodes, destination, message, reply_path, &blinding_secret)
+ .map_err(|e| SendError::Secp256k1(e))?;
+
+ let prng_seed = entropy_source.get_secure_random_bytes();
+ let onion_routing_packet = construct_onion_message_packet(
+ packet_payloads, packet_keys, prng_seed).map_err(|()| SendError::TooBigPacket)?;
+
+ Ok((introduction_node_id, msgs::OnionMessage {
+ blinding_point,
+ onion_routing_packet
+ }))
+}
+
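The free `create_onion_message` above picks the introduction node from either the first intermediate hop or, failing that, the destination's own entry point. A toy model of that selection (simplified types standing in for LDK's `Destination` and blinded-path structures):

```rust
// Simplified stand-ins for LDK's Destination/BlindedPath types.
#[derive(Clone, Copy, PartialEq, Debug)]
struct NodeId(u8);

enum Destination {
    Node(NodeId),
    BlindedPath { introduction_node_id: NodeId },
}

/// Mirrors the introduction-node choice in `create_onion_message`: the
/// first intermediate hop if any, otherwise the destination's entry point.
fn introduction_node(intermediate_nodes: &[NodeId], destination: &Destination) -> NodeId {
    if let Some(first) = intermediate_nodes.first() {
        *first
    } else {
        match destination {
            Destination::Node(pk) => *pk,
            Destination::BlindedPath { introduction_node_id } => *introduction_node_id,
        }
    }
}

fn main() {
    let hops = [NodeId(1), NodeId(2)];
    // With intermediate hops, the first hop receives the onion message.
    assert_eq!(introduction_node(&hops, &Destination::Node(NodeId(9))), NodeId(1));
    // Sending straight to a blinded path, its introduction node does.
    let dest = Destination::BlindedPath { introduction_node_id: NodeId(7) };
    assert_eq!(introduction_node(&[], &dest), NodeId(7));
    println!("ok");
}
```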
impl<ES: Deref, NS: Deref, L: Deref, MR: Deref, OMH: Deref, CMH: Deref>
OnionMessenger<ES, NS, L, MR, OMH, CMH>
where
&self, path: OnionMessagePath, message: OnionMessageContents<T>,
reply_path: Option<BlindedPath>
) -> Result<(), SendError> {
- let (introduction_node_id, onion_msg) = Self::create_onion_message(
- &self.entropy_source,
- &self.node_signer,
- &self.secp_ctx,
- path,
- message,
- reply_path
+ let (introduction_node_id, onion_msg) = create_onion_message(
+ &self.entropy_source, &self.node_signer, &self.secp_ctx,
+ path, message, reply_path
)?;
let mut pending_per_peer_msgs = self.pending_messages.lock().unwrap();
}
}
- /// Create an onion message with contents `message` to the destination of `path`.
- /// Returns (introduction_node_id, onion_msg)
- pub fn create_onion_message<T: CustomOnionMessageContents>(
- entropy_source: &ES,
- node_signer: &NS,
- secp_ctx: &Secp256k1<secp256k1::All>,
- path: OnionMessagePath,
- message: OnionMessageContents<T>,
- reply_path: Option<BlindedPath>,
- ) -> Result<(PublicKey, msgs::OnionMessage), SendError> {
- let OnionMessagePath { intermediate_nodes, mut destination } = path;
- if let Destination::BlindedPath(BlindedPath { ref blinded_hops, .. }) = destination {
- if blinded_hops.len() < 2 {
- return Err(SendError::TooFewBlindedHops);
- }
- }
-
- if message.tlv_type() < 64 { return Err(SendError::InvalidMessage) }
-
- // If we are sending straight to a blinded path and we are the introduction node, we need to
- // advance the blinded path by 1 hop so the second hop is the new introduction node.
- if intermediate_nodes.len() == 0 {
- if let Destination::BlindedPath(ref mut blinded_path) = destination {
- let our_node_id = node_signer.get_node_id(Recipient::Node)
- .map_err(|()| SendError::GetNodeIdFailed)?;
- if blinded_path.introduction_node_id == our_node_id {
- advance_path_by_one(blinded_path, node_signer, &secp_ctx)
- .map_err(|()| SendError::BlindedPathAdvanceFailed)?;
- }
- }
- }
-
- let blinding_secret_bytes = entropy_source.get_secure_random_bytes();
- let blinding_secret = SecretKey::from_slice(&blinding_secret_bytes[..]).expect("RNG is busted");
- let (introduction_node_id, blinding_point) = if intermediate_nodes.len() != 0 {
- (intermediate_nodes[0], PublicKey::from_secret_key(&secp_ctx, &blinding_secret))
- } else {
- match destination {
- Destination::Node(pk) => (pk, PublicKey::from_secret_key(&secp_ctx, &blinding_secret)),
- Destination::BlindedPath(BlindedPath { introduction_node_id, blinding_point, .. }) =>
- (introduction_node_id, blinding_point),
- }
- };
- let (packet_payloads, packet_keys) = packet_payloads_and_keys(
- &secp_ctx, &intermediate_nodes, destination, message, reply_path, &blinding_secret)
- .map_err(|e| SendError::Secp256k1(e))?;
-
- let prng_seed = entropy_source.get_secure_random_bytes();
- let onion_routing_packet = construct_onion_message_packet(
- packet_payloads, packet_keys, prng_seed).map_err(|()| SendError::TooBigPacket)?;
-
- Ok((introduction_node_id, msgs::OnionMessage {
- blinding_point,
- onion_routing_packet
- }))
- }
-
fn respond_with_onion_message<T: CustomOnionMessageContents>(
&self, response: OnionMessageContents<T>, path_id: Option<[u8; 32]>,
reply_path: Option<BlindedPath>
/// [`find_route`].
///
/// [`ScoreLookUp`]: crate::routing::scoring::ScoreLookUp
-pub struct ScorerAccountingForInFlightHtlcs<'a, SP: Sized, Sc: 'a + ScoreLookUp<ScoreParams = SP>, S: Deref<Target = Sc>> {
+pub struct ScorerAccountingForInFlightHtlcs<'a, S: Deref> where S::Target: ScoreLookUp {
scorer: S,
// Maps a channel's short channel id and its direction to the liquidity used up.
inflight_htlcs: &'a InFlightHtlcs,
}
-impl<'a, SP: Sized, Sc: ScoreLookUp<ScoreParams = SP>, S: Deref<Target = Sc>> ScorerAccountingForInFlightHtlcs<'a, SP, Sc, S> {
+impl<'a, S: Deref> ScorerAccountingForInFlightHtlcs<'a, S> where S::Target: ScoreLookUp {
/// Initialize a new `ScorerAccountingForInFlightHtlcs`.
pub fn new(scorer: S, inflight_htlcs: &'a InFlightHtlcs) -> Self {
ScorerAccountingForInFlightHtlcs {
}
}
-#[cfg(c_bindings)]
-impl<'a, SP: Sized, Sc: ScoreLookUp<ScoreParams = SP>, S: Deref<Target = Sc>> Writeable for ScorerAccountingForInFlightHtlcs<'a, SP, Sc, S> {
- fn write<W: Writer>(&self, writer: &mut W) -> Result<(), io::Error> { self.scorer.write(writer) }
-}
-
-impl<'a, SP: Sized, Sc: 'a + ScoreLookUp<ScoreParams = SP>, S: Deref<Target = Sc>> ScoreLookUp for ScorerAccountingForInFlightHtlcs<'a, SP, Sc, S> {
- type ScoreParams = Sc::ScoreParams;
+impl<'a, S: Deref> ScoreLookUp for ScorerAccountingForInFlightHtlcs<'a, S> where S::Target: ScoreLookUp {
+ type ScoreParams = <S::Target as ScoreLookUp>::ScoreParams;
fn channel_penalty_msat(&self, short_channel_id: u64, source: &NodeId, target: &NodeId, usage: ChannelUsage, score_params: &Self::ScoreParams) -> u64 {
if let Some(used_liquidity) = self.inflight_htlcs.used_liquidity_msat(
source, target, short_channel_id
inbound_scid_alias: None,
channel_value_satoshis: 0,
user_channel_id: 0,
+ balance_msat: 0,
outbound_capacity_msat,
next_outbound_htlc_limit_msat: outbound_capacity_msat,
next_outbound_htlc_minimum_msat: 0,
outbound_scid_alias: None,
channel_value_satoshis: 10_000_000_000,
user_channel_id: 0,
+ balance_msat: 10_000_000_000,
outbound_capacity_msat: 10_000_000_000,
next_outbound_htlc_minimum_msat: 0,
next_outbound_htlc_limit_msat: 10_000_000_000,
/// `ScoreLookUp` is used to determine the penalty for a given channel.
///
/// Scoring is in terms of fees willing to be paid in order to avoid routing through a channel.
-pub trait ScoreLookUp $(: $supertrait)* {
+pub trait ScoreLookUp {
/// A configurable type which should contain various passed-in parameters for configuring the scorer,
/// on a per-routefinding-call basis through to the scorer methods,
/// which are used to determine the parameters for the suitability of channels for use.
}
/// `ScoreUpdate` is used to update the scorer's internal state after a payment attempt.
-pub trait ScoreUpdate $(: $supertrait)* {
+pub trait ScoreUpdate {
/// Handles updating channel penalties after failing to route through a channel.
fn payment_path_failed(&mut self, path: &Path, short_channel_id: u64);
fn probe_successful(&mut self, path: &Path);
}
-impl<SP: Sized, S: ScoreLookUp<ScoreParams = SP>, T: Deref<Target=S> $(+ $supertrait)*> ScoreLookUp for T {
- type ScoreParams = SP;
+/// A trait which can both lookup and update routing channel penalty scores.
+///
+/// This is used in places where both bounds are required and implemented for all types which
+/// implement [`ScoreLookUp`] and [`ScoreUpdate`].
+///
+/// Bindings users may need to manually implement this for their custom scoring implementations.
+pub trait Score : ScoreLookUp + ScoreUpdate $(+ $supertrait)* {}
+
+#[cfg(not(c_bindings))]
+impl<T: ScoreLookUp + ScoreUpdate $(+ $supertrait)*> Score for T {}
+
+#[cfg(not(c_bindings))]
+impl<S: ScoreLookUp, T: Deref<Target=S>> ScoreLookUp for T {
+ type ScoreParams = S::ScoreParams;
fn channel_penalty_msat(
&self, short_channel_id: u64, source: &NodeId, target: &NodeId, usage: ChannelUsage, score_params: &Self::ScoreParams
) -> u64 {
}
}
-impl<S: ScoreUpdate, T: DerefMut<Target=S> $(+ $supertrait)*> ScoreUpdate for T {
+#[cfg(not(c_bindings))]
+impl<S: ScoreUpdate, T: DerefMut<Target=S>> ScoreUpdate for T {
fn payment_path_failed(&mut self, path: &Path, short_channel_id: u64) {
self.deref_mut().payment_path_failed(path, short_channel_id)
}
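The blanket impls above follow a common Rust pattern: a combined `Score` umbrella trait that any type implementing both halves gets for free. A minimal sketch of that pattern, using simplified stand-in traits rather than the real LDK definitions:

```rust
// Simplified sketch of the new `Score` umbrella trait: any type
// implementing both `ScoreLookUp` and `ScoreUpdate` implements `Score`
// automatically via the blanket impl. Trait signatures and the scorer
// below are illustrative stand-ins, not the real LDK API.
trait ScoreLookUp {
    fn channel_penalty_msat(&self, short_channel_id: u64) -> u64;
}
trait ScoreUpdate {
    fn payment_path_failed(&mut self, short_channel_id: u64);
}
trait Score: ScoreLookUp + ScoreUpdate {}
impl<T: ScoreLookUp + ScoreUpdate> Score for T {}

// Generic code can now bound on the single `Score` trait.
fn penalize_and_score<S: Score>(scorer: &mut S, scid: u64) -> u64 {
    scorer.payment_path_failed(scid);
    scorer.channel_penalty_msat(scid)
}

struct FlatScorer(u64);
impl ScoreLookUp for FlatScorer {
    fn channel_penalty_msat(&self, _scid: u64) -> u64 { self.0 }
}
impl ScoreUpdate for FlatScorer {
    fn payment_path_failed(&mut self, _scid: u64) { self.0 += 100; }
}

fn main() {
    let mut scorer = FlatScorer(500);
    // The failure bumps the penalty before it is read back.
    assert_eq!(penalize_and_score(&mut scorer, 42), 600);
}
```

As the diff notes, the blanket impl is gated on `not(c_bindings)`, so bindings users implement `Score` manually.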
#[cfg(not(c_bindings))]
impl<'a, T> WriteableScore<'a> for T where T: LockableScore<'a> + Writeable {}
#[cfg(not(c_bindings))]
-impl<'a, T: 'a + ScoreLookUp + ScoreUpdate> LockableScore<'a> for Mutex<T> {
+impl<'a, T: Score + 'a> LockableScore<'a> for Mutex<T> {
type ScoreUpdate = T;
type ScoreLookUp = T;
}
#[cfg(not(c_bindings))]
-impl<'a, T: 'a + ScoreUpdate + ScoreLookUp> LockableScore<'a> for RefCell<T> {
+impl<'a, T: Score + 'a> LockableScore<'a> for RefCell<T> {
type ScoreUpdate = T;
type ScoreLookUp = T;
}
#[cfg(not(c_bindings))]
-impl<'a, SP:Sized, T: 'a + ScoreUpdate + ScoreLookUp<ScoreParams = SP>> LockableScore<'a> for RwLock<T> {
+impl<'a, T: Score + 'a> LockableScore<'a> for RwLock<T> {
type ScoreUpdate = T;
type ScoreLookUp = T;
#[cfg(c_bindings)]
/// A concrete implementation of [`LockableScore`] which supports multi-threading.
-pub struct MultiThreadedLockableScore<T: ScoreLookUp + ScoreUpdate> {
+pub struct MultiThreadedLockableScore<T: Score> {
score: RwLock<T>,
}
#[cfg(c_bindings)]
-impl<'a, SP:Sized, T: 'a + ScoreLookUp<ScoreParams = SP> + ScoreUpdate> LockableScore<'a> for MultiThreadedLockableScore<T> {
+impl<'a, T: Score + 'a> LockableScore<'a> for MultiThreadedLockableScore<T> {
type ScoreUpdate = T;
type ScoreLookUp = T;
type WriteLocked = MultiThreadedScoreLockWrite<'a, Self::ScoreUpdate>;
}
#[cfg(c_bindings)]
-impl<T: ScoreUpdate + ScoreLookUp> Writeable for MultiThreadedLockableScore<T> {
+impl<T: Score> Writeable for MultiThreadedLockableScore<T> {
fn write<W: Writer>(&self, writer: &mut W) -> Result<(), io::Error> {
self.score.read().unwrap().write(writer)
}
}
#[cfg(c_bindings)]
-impl<'a, T: 'a + ScoreUpdate + ScoreLookUp> WriteableScore<'a> for MultiThreadedLockableScore<T> {}
+impl<'a, T: Score + 'a> WriteableScore<'a> for MultiThreadedLockableScore<T> {}
#[cfg(c_bindings)]
-impl<T: ScoreLookUp + ScoreUpdate> MultiThreadedLockableScore<T> {
+impl<T: Score> MultiThreadedLockableScore<T> {
/// Creates a new [`MultiThreadedLockableScore`] given an underlying [`Score`].
pub fn new(score: T) -> Self {
MultiThreadedLockableScore { score: RwLock::new(score) }
#[cfg(c_bindings)]
/// A locked `MultiThreadedLockableScore`.
-pub struct MultiThreadedScoreLockRead<'a, T: ScoreLookUp>(RwLockReadGuard<'a, T>);
+pub struct MultiThreadedScoreLockRead<'a, T: Score>(RwLockReadGuard<'a, T>);
#[cfg(c_bindings)]
/// A locked `MultiThreadedLockableScore`.
-pub struct MultiThreadedScoreLockWrite<'a, T: ScoreUpdate>(RwLockWriteGuard<'a, T>);
+pub struct MultiThreadedScoreLockWrite<'a, T: Score>(RwLockWriteGuard<'a, T>);
#[cfg(c_bindings)]
-impl<'a, T: 'a + ScoreLookUp> Deref for MultiThreadedScoreLockRead<'a, T> {
+impl<'a, T: 'a + Score> Deref for MultiThreadedScoreLockRead<'a, T> {
type Target = T;
fn deref(&self) -> &Self::Target {
}
#[cfg(c_bindings)]
-impl<'a, T: 'a + ScoreUpdate> Writeable for MultiThreadedScoreLockWrite<'a, T> {
+impl<'a, T: Score> ScoreLookUp for MultiThreadedScoreLockRead<'a, T> {
+ type ScoreParams = T::ScoreParams;
+ fn channel_penalty_msat(&self, short_channel_id: u64, source: &NodeId,
+ target: &NodeId, usage: ChannelUsage, score_params: &Self::ScoreParams
+ ) -> u64 {
+ self.0.channel_penalty_msat(short_channel_id, source, target, usage, score_params)
+ }
+}
+
+#[cfg(c_bindings)]
+impl<'a, T: Score> Writeable for MultiThreadedScoreLockWrite<'a, T> {
fn write<W: Writer>(&self, writer: &mut W) -> Result<(), io::Error> {
self.0.write(writer)
}
}
#[cfg(c_bindings)]
-impl<'a, T: 'a + ScoreUpdate> Deref for MultiThreadedScoreLockWrite<'a, T> {
+impl<'a, T: 'a + Score> Deref for MultiThreadedScoreLockWrite<'a, T> {
type Target = T;
fn deref(&self) -> &Self::Target {
}
#[cfg(c_bindings)]
-impl<'a, T: 'a + ScoreUpdate> DerefMut for MultiThreadedScoreLockWrite<'a, T> {
+impl<'a, T: 'a + Score> DerefMut for MultiThreadedScoreLockWrite<'a, T> {
fn deref_mut(&mut self) -> &mut Self::Target {
self.0.deref_mut()
}
}
+#[cfg(c_bindings)]
+impl<'a, T: Score> ScoreUpdate for MultiThreadedScoreLockWrite<'a, T> {
+ fn payment_path_failed(&mut self, path: &Path, short_channel_id: u64) {
+ self.0.payment_path_failed(path, short_channel_id)
+ }
+
+ fn payment_path_successful(&mut self, path: &Path) {
+ self.0.payment_path_successful(path)
+ }
+
+ fn probe_failed(&mut self, path: &Path, short_channel_id: u64) {
+ self.0.probe_failed(path, short_channel_id)
+ }
+
+ fn probe_successful(&mut self, path: &Path) {
+ self.0.probe_successful(path)
+ }
+}
+
/// Proposed use of a channel passed as a parameter to [`ScoreLookUp::channel_penalty_msat`].
#[derive(Clone, Copy, Debug, PartialEq)]
}
}
+#[cfg(c_bindings)]
+impl<G: Deref<Target = NetworkGraph<L>>, L: Deref, T: Time> Score for ProbabilisticScorerUsingTime<G, L, T>
+where L::Target: Logger {}
+
mod approx {
const BITS: u32 = 64;
const HIGHEST_BIT: u32 = BITS - 1;
(12, channel_value_satoshis, required),
});
+pub(crate) const P2WPKH_WITNESS_WEIGHT: u64 = 1 /* num stack items */ +
+ 1 /* sig length */ +
+ 73 /* sig including sighash flag */ +
+ 1 /* pubkey length */ +
+ 33 /* pubkey */;
+
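As a sanity check on the arithmetic, the constant above sums its five witness components to 109 weight units (each witness byte costs one weight unit). A self-contained sketch reproducing the sum:

```rust
// Reproduces the component sum of `P2WPKH_WITNESS_WEIGHT` above; the
// 73-byte figure is the worst-case DER-encoded signature plus the
// sighash-flag byte.
const NUM_STACK_ITEMS: u64 = 1;
const SIG_LEN_PREFIX: u64 = 1;
const MAX_SIG_WITH_SIGHASH: u64 = 73;
const PUBKEY_LEN_PREFIX: u64 = 1;
const COMPRESSED_PUBKEY: u64 = 33;

const P2WPKH_WITNESS_WEIGHT: u64 = NUM_STACK_ITEMS + SIG_LEN_PREFIX
    + MAX_SIG_WITH_SIGHASH + PUBKEY_LEN_PREFIX + COMPRESSED_PUBKEY;

fn main() {
    assert_eq!(P2WPKH_WITNESS_WEIGHT, 109);
}
```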
/// Information about a spendable output to our "payment key".
///
/// See [`SpendableOutputDescriptor::StaticPaymentOutput`] for more details on how to spend this.
pub channel_keys_id: [u8; 32],
	/// The value of the channel which this transaction spends.
pub channel_value_satoshis: u64,
+ /// The necessary channel parameters that need to be provided to the re-derived signer through
+ /// [`ChannelSigner::provide_channel_parameters`].
+ ///
+ /// Added as optional, but always `Some` if the descriptor was produced in v0.0.117 or later.
+ pub channel_transaction_parameters: Option<ChannelTransactionParameters>,
}
impl StaticPaymentOutputDescriptor {
+ /// Returns the `witness_script` of the spendable output.
+ ///
+ /// Note that this will only return `Some` for [`StaticPaymentOutputDescriptor`]s that
+ /// originated from an anchor outputs channel, as they take the form of a P2WSH script.
+ pub fn witness_script(&self) -> Option<Script> {
+ self.channel_transaction_parameters.as_ref()
+ .and_then(|channel_params|
+ if channel_params.channel_type_features.supports_anchors_zero_fee_htlc_tx() {
+ let payment_point = channel_params.holder_pubkeys.payment_point;
+ Some(chan_utils::get_to_countersignatory_with_anchors_redeemscript(&payment_point))
+ } else {
+ None
+ }
+ )
+ }
+
/// The maximum length a well-formed witness spending one of these should have.
/// Note: If you have the grind_signatures feature enabled, this will be at least 1 byte
/// shorter.
- // Calculated as 1 byte legnth + 73 byte signature, 1 byte empty vec push, 1 byte length plus
- // redeemscript push length.
- pub const MAX_WITNESS_LENGTH: usize = 1 + 73 + 34;
+ pub fn max_witness_length(&self) -> usize {
+ if self.channel_transaction_parameters.as_ref()
+ .map(|channel_params| channel_params.channel_type_features.supports_anchors_zero_fee_htlc_tx())
+ .unwrap_or(false)
+ {
+ let witness_script_weight = 1 /* pubkey push */ + 33 /* pubkey */ +
+ 1 /* OP_CHECKSIGVERIFY */ + 1 /* OP_1 */ + 1 /* OP_CHECKSEQUENCEVERIFY */;
+ 1 /* num witness items */ + 1 /* sig push */ + 73 /* sig including sighash flag */ +
+ 1 /* witness script push */ + witness_script_weight
+ } else {
+ P2WPKH_WITNESS_WEIGHT as usize
+ }
+ }
}
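The two branches of `max_witness_length` work out to 113 weight units for the anchors (P2WSH) path and 109 for the legacy P2WPKH path. A sketch that mirrors the branch logic, with a plain bool standing in for the channel-type feature check:

```rust
// Mirrors the arithmetic of `max_witness_length` above; the bool is a
// simplification of `supports_anchors_zero_fee_htlc_tx()`.
fn max_witness_length(supports_anchors: bool) -> usize {
    if supports_anchors {
        // P2WSH spend: the witness script itself rides in the witness.
        let witness_script_weight = 1 /* pubkey push */ + 33 /* pubkey */
            + 1 /* OP_CHECKSIGVERIFY */ + 1 /* OP_1 */ + 1 /* OP_CHECKSEQUENCEVERIFY */;
        1 /* num witness items */ + 1 /* sig push */ + 73 /* sig incl. sighash flag */
            + 1 /* witness script push */ + witness_script_weight
    } else {
        109 // P2WPKH_WITNESS_WEIGHT from the diff above
    }
}

fn main() {
    assert_eq!(max_witness_length(true), 113);
    assert_eq!(max_witness_length(false), 109);
}
```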
impl_writeable_tlv_based!(StaticPaymentOutputDescriptor, {
(0, outpoint, required),
(2, output, required),
(4, channel_keys_id, required),
(6, channel_value_satoshis, required),
+ (7, channel_transaction_parameters, option),
});
/// Describes the necessary information to spend a spendable output.
/// [`DelayedPaymentOutputDescriptor::to_self_delay`] contained here to
/// [`chan_utils::get_revokeable_redeemscript`].
DelayedPaymentOutput(DelayedPaymentOutputDescriptor),
- /// An output to a P2WPKH, spendable exclusively by our payment key (i.e., the private key
- /// which corresponds to the `payment_point` in [`ChannelSigner::pubkeys`]). The witness
- /// in the spending input is, thus, simply:
+ /// An output spendable exclusively by our payment key (i.e., the private key that corresponds
+ /// to the `payment_point` in [`ChannelSigner::pubkeys`]). The output type depends on the
+ /// channel type negotiated.
+ ///
+ /// On an anchor outputs channel, the witness in the spending input is:
+ /// ```bitcoin
+ /// <BIP 143 signature> <witness script>
+ /// ```
+ ///
+ /// Otherwise, it is:
/// ```bitcoin
/// <BIP 143 signature> <payment key>
/// ```
///
/// These are generally the result of our counterparty having broadcast the current state,
- /// allowing us to claim the non-HTLC-encumbered outputs immediately.
+ /// allowing us to claim the non-HTLC-encumbered outputs immediately, or after one confirmation
+ /// in the case of anchor outputs channels.
StaticPaymentOutput(StaticPaymentOutputDescriptor),
}
///
/// Note that this does not include any signatures, just the information required to
/// construct the transaction and sign it.
+ ///
+ /// This is not exported to bindings users as there is no standard serialization for an input.
+ /// See [`Self::create_spendable_outputs_psbt`] instead.
pub fn to_psbt_input(&self) -> bitcoin::psbt::Input {
match self {
SpendableOutputDescriptor::StaticOutput { output, .. } => {
match outp {
SpendableOutputDescriptor::StaticPaymentOutput(descriptor) => {
if !output_set.insert(descriptor.outpoint) { return Err(()); }
+ let sequence =
+ if descriptor.channel_transaction_parameters.as_ref()
+ .map(|channel_params| channel_params.channel_type_features.supports_anchors_zero_fee_htlc_tx())
+ .unwrap_or(false)
+ {
+ Sequence::from_consensus(1)
+ } else {
+ Sequence::ZERO
+ };
input.push(TxIn {
previous_output: descriptor.outpoint.into_bitcoin_outpoint(),
script_sig: Script::new(),
- sequence: Sequence::ZERO,
+ sequence,
witness: Witness::new(),
});
- witness_weight += StaticPaymentOutputDescriptor::MAX_WITNESS_LENGTH;
+ witness_weight += descriptor.max_witness_length();
#[cfg(feature = "grind_signatures")]
{ witness_weight -= 1; } // Guarantees a low R signature
input_value += descriptor.output.value;
/// Returns the counterparty's pubkeys.
///
- /// Will panic if [`ChannelSigner::provide_channel_parameters`] has not been called before.
- pub fn counterparty_pubkeys(&self) -> &ChannelPublicKeys { &self.get_channel_parameters().counterparty_parameters.as_ref().unwrap().pubkeys }
+ /// Will return `None` if [`ChannelSigner::provide_channel_parameters`] has not been called.
+	/// In general, this is safe to `unwrap` only within a [`ChannelSigner`] implementation.
+ pub fn counterparty_pubkeys(&self) -> Option<&ChannelPublicKeys> {
+ self.get_channel_parameters()
+ .and_then(|params| params.counterparty_parameters.as_ref().map(|params| ¶ms.pubkeys))
+ }
+
/// Returns the `contest_delay` value specified by our counterparty and applied on holder-broadcastable
/// transactions, i.e., the amount of time that we have to wait to recover our funds if we
/// broadcast a transaction.
///
- /// Will panic if [`ChannelSigner::provide_channel_parameters`] has not been called before.
- pub fn counterparty_selected_contest_delay(&self) -> u16 { self.get_channel_parameters().counterparty_parameters.as_ref().unwrap().selected_contest_delay }
+ /// Will return `None` if [`ChannelSigner::provide_channel_parameters`] has not been called.
+	/// In general, this is safe to `unwrap` only within a [`ChannelSigner`] implementation.
+ pub fn counterparty_selected_contest_delay(&self) -> Option<u16> {
+ self.get_channel_parameters()
+ .and_then(|params| params.counterparty_parameters.as_ref().map(|params| params.selected_contest_delay))
+ }
+
/// Returns the `contest_delay` value specified by us and applied on transactions broadcastable
/// by our counterparty, i.e., the amount of time that they have to wait to recover their funds
/// if they broadcast a transaction.
///
- /// Will panic if [`ChannelSigner::provide_channel_parameters`] has not been called before.
- pub fn holder_selected_contest_delay(&self) -> u16 { self.get_channel_parameters().holder_selected_contest_delay }
+ /// Will return `None` if [`ChannelSigner::provide_channel_parameters`] has not been called.
+	/// In general, this is safe to `unwrap` only within a [`ChannelSigner`] implementation.
+ pub fn holder_selected_contest_delay(&self) -> Option<u16> {
+ self.get_channel_parameters().map(|params| params.holder_selected_contest_delay)
+ }
+
/// Returns whether the holder is the initiator.
///
- /// Will panic if [`ChannelSigner::provide_channel_parameters`] has not been called before.
- pub fn is_outbound(&self) -> bool { self.get_channel_parameters().is_outbound_from_holder }
+ /// Will return `None` if [`ChannelSigner::provide_channel_parameters`] has not been called.
+	/// In general, this is safe to `unwrap` only within a [`ChannelSigner`] implementation.
+ pub fn is_outbound(&self) -> Option<bool> {
+ self.get_channel_parameters().map(|params| params.is_outbound_from_holder)
+ }
+
/// Funding outpoint
///
- /// Will panic if [`ChannelSigner::provide_channel_parameters`] has not been called before.
- pub fn funding_outpoint(&self) -> &OutPoint { self.get_channel_parameters().funding_outpoint.as_ref().unwrap() }
+ /// Will return `None` if [`ChannelSigner::provide_channel_parameters`] has not been called.
+	/// In general, this is safe to `unwrap` only within a [`ChannelSigner`] implementation.
+ pub fn funding_outpoint(&self) -> Option<&OutPoint> {
+ self.get_channel_parameters().map(|params| params.funding_outpoint.as_ref()).flatten()
+ }
+
/// Returns a [`ChannelTransactionParameters`] for this channel, to be used when verifying or
/// building transactions.
///
- /// Will panic if [`ChannelSigner::provide_channel_parameters`] has not been called before.
- pub fn get_channel_parameters(&self) -> &ChannelTransactionParameters {
- self.channel_parameters.as_ref().unwrap()
+ /// Will return `None` if [`ChannelSigner::provide_channel_parameters`] has not been called.
+	/// In general, this is safe to `unwrap` only within a [`ChannelSigner`] implementation.
+ pub fn get_channel_parameters(&self) -> Option<&ChannelTransactionParameters> {
+ self.channel_parameters.as_ref()
}
+
/// Returns the channel type features of the channel parameters. Should be helpful for
/// determining a channel's category, i. e. legacy/anchors/taproot/etc.
///
- /// Will panic if [`ChannelSigner::provide_channel_parameters`] has not been called before.
- pub fn channel_type_features(&self) -> &ChannelTypeFeatures {
- &self.get_channel_parameters().channel_type_features
+ /// Will return `None` if [`ChannelSigner::provide_channel_parameters`] has not been called.
+	/// In general, this is safe to `unwrap` only within a [`ChannelSigner`] implementation.
+ pub fn channel_type_features(&self) -> Option<&ChannelTypeFeatures> {
+ self.get_channel_parameters().map(|params| ¶ms.channel_type_features)
}
+
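The accessors above all share one shape: they return `None` until `ChannelSigner::provide_channel_parameters` has run, and signing paths `expect` the value with a shared error message. A minimal sketch of that pattern, where `Params` and `Signer` are hypothetical stand-ins for `ChannelTransactionParameters` and `InMemorySigner`:

```rust
// Hypothetical stand-in types illustrating the fallible-accessor pattern
// from the diff above; not the real LDK structs.
const MISSING_PARAMS_ERR: &str =
    "ChannelSigner::provide_channel_parameters must be called before signing operations";

struct Params { holder_selected_contest_delay: u16 }

struct Signer { channel_parameters: Option<Params> }

impl Signer {
    // `None` until parameters are provided, mirroring the new API.
    fn get_channel_parameters(&self) -> Option<&Params> {
        self.channel_parameters.as_ref()
    }
    fn holder_selected_contest_delay(&self) -> Option<u16> {
        self.get_channel_parameters().map(|p| p.holder_selected_contest_delay)
    }
}

fn main() {
    let unprovided = Signer { channel_parameters: None };
    assert!(unprovided.holder_selected_contest_delay().is_none());

    let provided = Signer {
        channel_parameters: Some(Params { holder_selected_contest_delay: 144 }),
    };
    // Within a signing operation the parameters are known to be present,
    // so `expect(MISSING_PARAMS_ERR)` is the intended usage.
    assert_eq!(provided.holder_selected_contest_delay().expect(MISSING_PARAMS_ERR), 144);
}
```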
/// Sign the single input of `spend_tx` at index `input_idx`, which spends the output described
/// by `descriptor`, returning the witness stack for the input.
///
if !spend_tx.input[input_idx].script_sig.is_empty() { return Err(()); }
if spend_tx.input[input_idx].previous_output != descriptor.outpoint.into_bitcoin_outpoint() { return Err(()); }
- let remotepubkey = self.pubkeys().payment_point;
- let witness_script = bitcoin::Address::p2pkh(&::bitcoin::PublicKey{compressed: true, inner: remotepubkey}, Network::Testnet).script_pubkey();
+ let remotepubkey = bitcoin::PublicKey::new(self.pubkeys().payment_point);
+ // We cannot always assume that `channel_parameters` is set, so can't just call
+ // `self.channel_parameters()` or anything that relies on it
+ let supports_anchors_zero_fee_htlc_tx = self.channel_type_features()
+ .map(|features| features.supports_anchors_zero_fee_htlc_tx())
+ .unwrap_or(false);
+
+ let witness_script = if supports_anchors_zero_fee_htlc_tx {
+ chan_utils::get_to_countersignatory_with_anchors_redeemscript(&remotepubkey.inner)
+ } else {
+ Script::new_p2pkh(&remotepubkey.pubkey_hash())
+ };
let sighash = hash_to_message!(&sighash::SighashCache::new(spend_tx).segwit_signature_hash(input_idx, &witness_script, descriptor.output.value, EcdsaSighashType::All).unwrap()[..]);
let remotesig = sign_with_aux_rand(secp_ctx, &sighash, &self.payment_key, &self);
- let payment_script = bitcoin::Address::p2wpkh(&::bitcoin::PublicKey{compressed: true, inner: remotepubkey}, Network::Bitcoin).unwrap().script_pubkey();
+ let payment_script = if supports_anchors_zero_fee_htlc_tx {
+ witness_script.to_v0_p2wsh()
+ } else {
+ Script::new_v0_p2wpkh(&remotepubkey.wpubkey_hash().unwrap())
+ };
if payment_script != descriptor.output.script_pubkey { return Err(()); }
let mut witness = Vec::with_capacity(2);
witness.push(remotesig.serialize_der().to_vec());
witness[0].push(EcdsaSighashType::All as u8);
- witness.push(remotepubkey.serialize().to_vec());
+ if supports_anchors_zero_fee_htlc_tx {
+ witness.push(witness_script.to_bytes());
+ } else {
+ witness.push(remotepubkey.to_bytes());
+ }
Ok(witness)
}
}
}
+const MISSING_PARAMS_ERR: &'static str = "ChannelSigner::provide_channel_parameters must be called before signing operations";
+
impl EcdsaChannelSigner for InMemorySigner {
fn sign_counterparty_commitment(&self, commitment_tx: &CommitmentTransaction, _preimages: Vec<PaymentPreimage>, secp_ctx: &Secp256k1<secp256k1::All>) -> Result<(Signature, Vec<Signature>), ()> {
let trusted_tx = commitment_tx.trust();
let keys = trusted_tx.keys();
let funding_pubkey = PublicKey::from_secret_key(secp_ctx, &self.funding_key);
- let channel_funding_redeemscript = make_funding_redeemscript(&funding_pubkey, &self.counterparty_pubkeys().funding_pubkey);
+ let counterparty_keys = self.counterparty_pubkeys().expect(MISSING_PARAMS_ERR);
+ let channel_funding_redeemscript = make_funding_redeemscript(&funding_pubkey, &counterparty_keys.funding_pubkey);
let built_tx = trusted_tx.built_transaction();
let commitment_sig = built_tx.sign_counterparty_commitment(&self.funding_key, &channel_funding_redeemscript, self.channel_value_satoshis, secp_ctx);
let mut htlc_sigs = Vec::with_capacity(commitment_tx.htlcs().len());
for htlc in commitment_tx.htlcs() {
- let channel_parameters = self.get_channel_parameters();
- let htlc_tx = chan_utils::build_htlc_transaction(&commitment_txid, commitment_tx.feerate_per_kw(), self.holder_selected_contest_delay(), htlc, &channel_parameters.channel_type_features, &keys.broadcaster_delayed_payment_key, &keys.revocation_key);
- let htlc_redeemscript = chan_utils::get_htlc_redeemscript(&htlc, self.channel_type_features(), &keys);
- let htlc_sighashtype = if self.channel_type_features().supports_anchors_zero_fee_htlc_tx() { EcdsaSighashType::SinglePlusAnyoneCanPay } else { EcdsaSighashType::All };
+ let channel_parameters = self.get_channel_parameters().expect(MISSING_PARAMS_ERR);
+ let holder_selected_contest_delay =
+ self.holder_selected_contest_delay().expect(MISSING_PARAMS_ERR);
+ let chan_type = &channel_parameters.channel_type_features;
+ let htlc_tx = chan_utils::build_htlc_transaction(&commitment_txid, commitment_tx.feerate_per_kw(), holder_selected_contest_delay, htlc, chan_type, &keys.broadcaster_delayed_payment_key, &keys.revocation_key);
+ let htlc_redeemscript = chan_utils::get_htlc_redeemscript(&htlc, chan_type, &keys);
+ let htlc_sighashtype = if chan_type.supports_anchors_zero_fee_htlc_tx() { EcdsaSighashType::SinglePlusAnyoneCanPay } else { EcdsaSighashType::All };
let htlc_sighash = hash_to_message!(&sighash::SighashCache::new(&htlc_tx).segwit_signature_hash(0, &htlc_redeemscript, htlc.amount_msat / 1000, htlc_sighashtype).unwrap()[..]);
let holder_htlc_key = chan_utils::derive_private_key(&secp_ctx, &keys.per_commitment_point, &self.htlc_base_key);
htlc_sigs.push(sign(secp_ctx, &htlc_sighash, &holder_htlc_key));
fn sign_holder_commitment_and_htlcs(&self, commitment_tx: &HolderCommitmentTransaction, secp_ctx: &Secp256k1<secp256k1::All>) -> Result<(Signature, Vec<Signature>), ()> {
let funding_pubkey = PublicKey::from_secret_key(secp_ctx, &self.funding_key);
- let funding_redeemscript = make_funding_redeemscript(&funding_pubkey, &self.counterparty_pubkeys().funding_pubkey);
+ let counterparty_keys = self.counterparty_pubkeys().expect(MISSING_PARAMS_ERR);
+ let funding_redeemscript = make_funding_redeemscript(&funding_pubkey, &counterparty_keys.funding_pubkey);
let trusted_tx = commitment_tx.trust();
let sig = trusted_tx.built_transaction().sign_holder_commitment(&self.funding_key, &funding_redeemscript, self.channel_value_satoshis, &self, secp_ctx);
- let channel_parameters = self.get_channel_parameters();
+ let channel_parameters = self.get_channel_parameters().expect(MISSING_PARAMS_ERR);
let htlc_sigs = trusted_tx.get_htlc_sigs(&self.htlc_base_key, &channel_parameters.as_holder_broadcastable(), &self, secp_ctx)?;
Ok((sig, htlc_sigs))
}
#[cfg(any(test,feature = "unsafe_revoked_tx_signing"))]
fn unsafe_sign_holder_commitment_and_htlcs(&self, commitment_tx: &HolderCommitmentTransaction, secp_ctx: &Secp256k1<secp256k1::All>) -> Result<(Signature, Vec<Signature>), ()> {
let funding_pubkey = PublicKey::from_secret_key(secp_ctx, &self.funding_key);
- let funding_redeemscript = make_funding_redeemscript(&funding_pubkey, &self.counterparty_pubkeys().funding_pubkey);
+ let counterparty_keys = self.counterparty_pubkeys().expect(MISSING_PARAMS_ERR);
+ let funding_redeemscript = make_funding_redeemscript(&funding_pubkey, &counterparty_keys.funding_pubkey);
let trusted_tx = commitment_tx.trust();
let sig = trusted_tx.built_transaction().sign_holder_commitment(&self.funding_key, &funding_redeemscript, self.channel_value_satoshis, &self, secp_ctx);
- let channel_parameters = self.get_channel_parameters();
+ let channel_parameters = self.get_channel_parameters().expect(MISSING_PARAMS_ERR);
let htlc_sigs = trusted_tx.get_htlc_sigs(&self.htlc_base_key, &channel_parameters.as_holder_broadcastable(), &self, secp_ctx)?;
Ok((sig, htlc_sigs))
}
let per_commitment_point = PublicKey::from_secret_key(secp_ctx, &per_commitment_key);
let revocation_pubkey = chan_utils::derive_public_revocation_key(&secp_ctx, &per_commitment_point, &self.pubkeys().revocation_basepoint);
let witness_script = {
- let counterparty_delayedpubkey = chan_utils::derive_public_key(&secp_ctx, &per_commitment_point, &self.counterparty_pubkeys().delayed_payment_basepoint);
- chan_utils::get_revokeable_redeemscript(&revocation_pubkey, self.holder_selected_contest_delay(), &counterparty_delayedpubkey)
+ let counterparty_keys = self.counterparty_pubkeys().expect(MISSING_PARAMS_ERR);
+ let holder_selected_contest_delay =
+ self.holder_selected_contest_delay().expect(MISSING_PARAMS_ERR);
+ let counterparty_delayedpubkey = chan_utils::derive_public_key(&secp_ctx, &per_commitment_point, &counterparty_keys.delayed_payment_basepoint);
+ chan_utils::get_revokeable_redeemscript(&revocation_pubkey, holder_selected_contest_delay, &counterparty_delayedpubkey)
};
let mut sighash_parts = sighash::SighashCache::new(justice_tx);
let sighash = hash_to_message!(&sighash_parts.segwit_signature_hash(input, &witness_script, amount, EcdsaSighashType::All).unwrap()[..]);
let per_commitment_point = PublicKey::from_secret_key(secp_ctx, &per_commitment_key);
let revocation_pubkey = chan_utils::derive_public_revocation_key(&secp_ctx, &per_commitment_point, &self.pubkeys().revocation_basepoint);
let witness_script = {
- let counterparty_htlcpubkey = chan_utils::derive_public_key(&secp_ctx, &per_commitment_point, &self.counterparty_pubkeys().htlc_basepoint);
+ let counterparty_keys = self.counterparty_pubkeys().expect(MISSING_PARAMS_ERR);
+ let counterparty_htlcpubkey = chan_utils::derive_public_key(&secp_ctx, &per_commitment_point, &counterparty_keys.htlc_basepoint);
let holder_htlcpubkey = chan_utils::derive_public_key(&secp_ctx, &per_commitment_point, &self.pubkeys().htlc_basepoint);
- chan_utils::get_htlc_redeemscript_with_explicit_keys(&htlc, self.channel_type_features(), &counterparty_htlcpubkey, &holder_htlcpubkey, &revocation_pubkey)
+ let chan_type = self.channel_type_features().expect(MISSING_PARAMS_ERR);
+ chan_utils::get_htlc_redeemscript_with_explicit_keys(&htlc, chan_type, &counterparty_htlcpubkey, &holder_htlcpubkey, &revocation_pubkey)
};
let mut sighash_parts = sighash::SighashCache::new(justice_tx);
let sighash = hash_to_message!(&sighash_parts.segwit_signature_hash(input, &witness_script, amount, EcdsaSighashType::All).unwrap()[..]);
fn sign_counterparty_htlc_transaction(&self, htlc_tx: &Transaction, input: usize, amount: u64, per_commitment_point: &PublicKey, htlc: &HTLCOutputInCommitment, secp_ctx: &Secp256k1<secp256k1::All>) -> Result<Signature, ()> {
let htlc_key = chan_utils::derive_private_key(&secp_ctx, &per_commitment_point, &self.htlc_base_key);
let revocation_pubkey = chan_utils::derive_public_revocation_key(&secp_ctx, &per_commitment_point, &self.pubkeys().revocation_basepoint);
- let counterparty_htlcpubkey = chan_utils::derive_public_key(&secp_ctx, &per_commitment_point, &self.counterparty_pubkeys().htlc_basepoint);
+ let counterparty_keys = self.counterparty_pubkeys().expect(MISSING_PARAMS_ERR);
+ let counterparty_htlcpubkey = chan_utils::derive_public_key(&secp_ctx, &per_commitment_point, &counterparty_keys.htlc_basepoint);
let htlcpubkey = chan_utils::derive_public_key(&secp_ctx, &per_commitment_point, &self.pubkeys().htlc_basepoint);
- let witness_script = chan_utils::get_htlc_redeemscript_with_explicit_keys(&htlc, self.channel_type_features(), &counterparty_htlcpubkey, &htlcpubkey, &revocation_pubkey);
+ let chan_type = self.channel_type_features().expect(MISSING_PARAMS_ERR);
+ let witness_script = chan_utils::get_htlc_redeemscript_with_explicit_keys(&htlc, chan_type, &counterparty_htlcpubkey, &htlcpubkey, &revocation_pubkey);
let mut sighash_parts = sighash::SighashCache::new(htlc_tx);
let sighash = hash_to_message!(&sighash_parts.segwit_signature_hash(input, &witness_script, amount, EcdsaSighashType::All).unwrap()[..]);
Ok(sign_with_aux_rand(secp_ctx, &sighash, &htlc_key, &self))
fn sign_closing_transaction(&self, closing_tx: &ClosingTransaction, secp_ctx: &Secp256k1<secp256k1::All>) -> Result<Signature, ()> {
let funding_pubkey = PublicKey::from_secret_key(secp_ctx, &self.funding_key);
- let channel_funding_redeemscript = make_funding_redeemscript(&funding_pubkey, &self.counterparty_pubkeys().funding_pubkey);
+ let counterparty_funding_key = &self.counterparty_pubkeys().expect(MISSING_PARAMS_ERR).funding_pubkey;
+ let channel_funding_redeemscript = make_funding_redeemscript(&funding_pubkey, counterparty_funding_key);
Ok(closing_tx.trust().sign(&self.funding_key, &channel_funding_redeemscript, self.channel_value_satoshis, secp_ctx))
}
SpendableOutputDescriptor::StaticPaymentOutput(descriptor) => {
let input_idx = psbt.unsigned_tx.input.iter().position(|i| i.previous_output == descriptor.outpoint.into_bitcoin_outpoint()).ok_or(())?;
if keys_cache.is_none() || keys_cache.as_ref().unwrap().1 != descriptor.channel_keys_id {
- keys_cache = Some((
- self.derive_channel_keys(descriptor.channel_value_satoshis, &descriptor.channel_keys_id),
- descriptor.channel_keys_id));
+ let mut signer = self.derive_channel_keys(descriptor.channel_value_satoshis, &descriptor.channel_keys_id);
+ if let Some(channel_params) = descriptor.channel_transaction_parameters.as_ref() {
+ signer.provide_channel_parameters(channel_params);
+ }
+ keys_cache = Some((signer, descriptor.channel_keys_id));
}
let witness = Witness::from_vec(keys_cache.as_ref().unwrap().0.sign_counterparty_payment_input(&psbt.unsigned_tx, input_idx, &descriptor, &secp_ctx)?);
psbt.inputs[input_idx].final_script_witness = Some(witness);
}
}
+ #[allow(unused)]
pub(crate) fn as_mut_ecdsa(&mut self) -> Option<&mut ECS> {
match self {
ChannelSignerType::Ecdsa(ecs) => Some(ecs)
pub fn new(
kv_store: K, logger: L, maximum_pending_updates: u64, entropy_source: ES,
signer_provider: SP,
- ) -> Self
- where
- ES::Target: EntropySource + Sized,
- SP::Target: SignerProvider + Sized,
- {
+ ) -> Self {
MonitorUpdatingPersister {
kv_store,
logger,
/// It is extremely important that your [`KVStore::read`] implementation uses the
/// [`io::ErrorKind::NotFound`] variant correctly. For more information, please see the
/// documentation for [`MonitorUpdatingPersister`].
- pub fn read_all_channel_monitors_with_updates<B: Deref, F: Deref + Clone>(
- &self, broadcaster: B, fee_estimator: F,
+ pub fn read_all_channel_monitors_with_updates<B: Deref, F: Deref>(
+ &self, broadcaster: &B, fee_estimator: &F,
) -> Result<Vec<(BlockHash, ChannelMonitor<<SP::Target as SignerProvider>::Signer>)>, io::Error>
where
- ES::Target: EntropySource + Sized,
- SP::Target: SignerProvider + Sized,
B::Target: BroadcasterInterface,
F::Target: FeeEstimator,
{
let mut res = Vec::with_capacity(monitor_list.len());
for monitor_key in monitor_list {
res.push(self.read_channel_monitor_with_updates(
- &broadcaster,
- fee_estimator.clone(),
+ broadcaster,
+ fee_estimator,
monitor_key,
)?)
}
///
/// Loading a large number of monitors will be faster if done in parallel. You can use this
/// function to accomplish this. Take care to limit the number of parallel readers.
- pub fn read_channel_monitor_with_updates<B: Deref, F: Deref + Clone>(
- &self, broadcaster: &B, fee_estimator: F, monitor_key: String,
+ pub fn read_channel_monitor_with_updates<B: Deref, F: Deref>(
+ &self, broadcaster: &B, fee_estimator: &F, monitor_key: String,
) -> Result<(BlockHash, ChannelMonitor<<SP::Target as SignerProvider>::Signer>), io::Error>
where
- ES::Target: EntropySource + Sized,
- SP::Target: SignerProvider + Sized,
B::Target: BroadcasterInterface,
F::Target: FeeEstimator,
{
Err(err) => return Err(err),
};
- monitor.update_monitor(&update, broadcaster, fee_estimator.clone(), &self.logger)
+ monitor.update_monitor(&update, broadcaster, fee_estimator, &self.logger)
.map_err(|e| {
log_error!(
self.logger,
// Check that the persisted channel data is empty before any channels are
// open.
let mut persisted_chan_data_0 = persister_0.read_all_channel_monitors_with_updates(
- broadcaster_0, &chanmon_cfgs[0].fee_estimator).unwrap();
+ &broadcaster_0, &&chanmon_cfgs[0].fee_estimator).unwrap();
assert_eq!(persisted_chan_data_0.len(), 0);
let mut persisted_chan_data_1 = persister_1.read_all_channel_monitors_with_updates(
- broadcaster_1, &chanmon_cfgs[1].fee_estimator).unwrap();
+ &broadcaster_1, &&chanmon_cfgs[1].fee_estimator).unwrap();
assert_eq!(persisted_chan_data_1.len(), 0);
// Helper to make sure the channel is on the expected update ID.
macro_rules! check_persisted_data {
($expected_update_id: expr) => {
persisted_chan_data_0 = persister_0.read_all_channel_monitors_with_updates(
- broadcaster_0, &chanmon_cfgs[0].fee_estimator).unwrap();
+ &broadcaster_0, &&chanmon_cfgs[0].fee_estimator).unwrap();
// check that we stored only one monitor
assert_eq!(persisted_chan_data_0.len(), 1);
for (_, mon) in persisted_chan_data_0.iter() {
}
}
persisted_chan_data_1 = persister_1.read_all_channel_monitors_with_updates(
- broadcaster_1, &chanmon_cfgs[1].fee_estimator).unwrap();
+ &broadcaster_1, &&chanmon_cfgs[1].fee_estimator).unwrap();
assert_eq!(persisted_chan_data_1.len(), 1);
for (_, mon) in persisted_chan_data_1.iter() {
assert_eq!(mon.get_latest_update_id(), $expected_update_id);
check_persisted_data!(CLOSED_CHANNEL_UPDATE_ID);
// Make sure the expected number of stale updates is present.
- let persisted_chan_data = persister_0.read_all_channel_monitors_with_updates(broadcaster_0, &chanmon_cfgs[0].fee_estimator).unwrap();
+ let persisted_chan_data = persister_0.read_all_channel_monitors_with_updates(&broadcaster_0, &&chanmon_cfgs[0].fee_estimator).unwrap();
let (_, monitor) = &persisted_chan_data[0];
let monitor_name = MonitorName::from(monitor.get_funding_txo().0);
// The channel should have 0 updates, as it wrote a full monitor and consolidated.
// Check that the persisted channel data is empty before any channels are
// open.
- let persisted_chan_data = persister_0.read_all_channel_monitors_with_updates(broadcaster_0, &chanmon_cfgs[0].fee_estimator).unwrap();
+ let persisted_chan_data = persister_0.read_all_channel_monitors_with_updates(&broadcaster_0, &&chanmon_cfgs[0].fee_estimator).unwrap();
assert_eq!(persisted_chan_data.len(), 0);
// Create some initial channel
send_payment(&nodes[1], &vec![&nodes[0]][..], 4_000_000);
// Get the monitor and make a fake stale update at update_id=1 (lowest height of an update possible)
- let persisted_chan_data = persister_0.read_all_channel_monitors_with_updates(broadcaster_0, &chanmon_cfgs[0].fee_estimator).unwrap();
+ let persisted_chan_data = persister_0.read_all_channel_monitors_with_updates(&broadcaster_0, &&chanmon_cfgs[0].fee_estimator).unwrap();
let (_, monitor) = &persisted_chan_data[0];
let monitor_name = MonitorName::from(monitor.get_funding_txo().0);
persister_0
use crate::io::{self, Read, Seek, Write};
use crate::io_extras::{copy, sink};
use core::hash::Hash;
-use crate::sync::Mutex;
+use crate::sync::{Mutex, RwLock};
use core::cmp;
use core::convert::TryFrom;
use core::ops::Deref;
}
}
+impl<T: Readable> Readable for RwLock<T> {
+ fn read<R: Read>(r: &mut R) -> Result<Self, DecodeError> {
+ let t: T = Readable::read(r)?;
+ Ok(RwLock::new(t))
+ }
+}
+impl<T: Writeable> Writeable for RwLock<T> {
+ fn write<W: Writer>(&self, w: &mut W) -> Result<(), io::Error> {
+ self.read().unwrap().write(w)
+ }
+}
+
impl<A: Readable, B: Readable> Readable for (A, B) {
fn read<R: Read>(r: &mut R) -> Result<Self, DecodeError> {
let a: A = Readable::read(r)?;
}
}
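The new `RwLock` impls above serialize the inner value through a read guard and rebuild the lock on deserialization. A self-contained sketch of the same pattern using toy `Writeable`/`Readable` traits (stand-ins for LDK's, just to make the block runnable):

```rust
use std::io::{self, Read, Write};
use std::sync::RwLock;

// Toy stand-ins for LDK's `Writeable`/`Readable` traits, just to show the
// pattern in the hunk above: serialize through a read guard, deserialize
// into a freshly constructed lock.
trait Writeable {
    fn write<W: Write>(&self, w: &mut W) -> io::Result<()>;
}
trait Readable: Sized {
    fn read<R: Read>(r: &mut R) -> io::Result<Self>;
}

impl Writeable for u32 {
    fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
        w.write_all(&self.to_be_bytes())
    }
}
impl Readable for u32 {
    fn read<R: Read>(r: &mut R) -> io::Result<Self> {
        let mut buf = [0u8; 4];
        r.read_exact(&mut buf)?;
        Ok(u32::from_be_bytes(buf))
    }
}

// A lock serializes as its contents: take the read guard, write the inner
// value; on read, rebuild the wrapper around the decoded value.
impl<T: Writeable> Writeable for RwLock<T> {
    fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
        self.read().unwrap().write(w)
    }
}
impl<T: Readable> Readable for RwLock<T> {
    fn read<R: Read>(r: &mut R) -> io::Result<Self> {
        Ok(RwLock::new(T::read(r)?))
    }
}

fn main() -> io::Result<()> {
    let lock = RwLock::new(42u32);
    let mut bytes = Vec::new();
    // Fully qualified to avoid clashing with the inherent `RwLock::write`.
    Writeable::write(&lock, &mut bytes)?;
    let restored: RwLock<u32> = Readable::read(&mut bytes.as_slice())?;
    assert_eq!(*restored.read().unwrap(), 42);
    Ok(())
}
```

Note the design choice mirrored here: writing under the read guard means serialization observes a consistent snapshot, and no lock state itself is ever persisted.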
- pub fn channel_type_features(&self) -> &ChannelTypeFeatures { self.inner.channel_type_features() }
+ pub fn channel_type_features(&self) -> &ChannelTypeFeatures { self.inner.channel_type_features().unwrap() }
#[cfg(test)]
pub fn get_enforcement_state(&self) -> MutexGuard<EnforcementState> {
fn sign_holder_commitment_and_htlcs(&self, commitment_tx: &HolderCommitmentTransaction, secp_ctx: &Secp256k1<secp256k1::All>) -> Result<(Signature, Vec<Signature>), ()> {
let trusted_tx = self.verify_holder_commitment_tx(commitment_tx, secp_ctx);
let commitment_txid = trusted_tx.txid();
- let holder_csv = self.inner.counterparty_selected_contest_delay();
+ let holder_csv = self.inner.counterparty_selected_contest_delay().unwrap();
let state = self.state.lock().unwrap();
let commitment_number = trusted_tx.commitment_number();
}
fn sign_closing_transaction(&self, closing_tx: &ClosingTransaction, secp_ctx: &Secp256k1<secp256k1::All>) -> Result<Signature, ()> {
- closing_tx.verify(self.inner.funding_outpoint().into_bitcoin_outpoint())
+ closing_tx.verify(self.inner.funding_outpoint().unwrap().into_bitcoin_outpoint())
.expect("derived different closing transaction");
Ok(self.inner.sign_closing_transaction(closing_tx, secp_ctx).unwrap())
}
impl TestChannelSigner {
fn verify_counterparty_commitment_tx<'a, T: secp256k1::Signing + secp256k1::Verification>(&self, commitment_tx: &'a CommitmentTransaction, secp_ctx: &Secp256k1<T>) -> TrustedCommitmentTransaction<'a> {
- commitment_tx.verify(&self.inner.get_channel_parameters().as_counterparty_broadcastable(),
- self.inner.counterparty_pubkeys(), self.inner.pubkeys(), secp_ctx)
- .expect("derived different per-tx keys or built transaction")
+ commitment_tx.verify(
+ &self.inner.get_channel_parameters().unwrap().as_counterparty_broadcastable(),
+ self.inner.counterparty_pubkeys().unwrap(), self.inner.pubkeys(), secp_ctx
+ ).expect("derived different per-tx keys or built transaction")
}
fn verify_holder_commitment_tx<'a, T: secp256k1::Signing + secp256k1::Verification>(&self, commitment_tx: &'a CommitmentTransaction, secp_ctx: &Secp256k1<T>) -> TrustedCommitmentTransaction<'a> {
- commitment_tx.verify(&self.inner.get_channel_parameters().as_holder_broadcastable(),
- self.inner.pubkeys(), self.inner.counterparty_pubkeys(), secp_ctx)
- .expect("derived different per-tx keys or built transaction")
+ commitment_tx.verify(
+ &self.inner.get_channel_parameters().unwrap().as_holder_broadcastable(),
+ self.inner.pubkeys(), self.inner.counterparty_pubkeys().unwrap(), secp_ctx
+ ).expect("derived different per-tx keys or built transaction")
}
}
use crate::chain::chaininterface::ConfirmationTarget;
use crate::chain::chaininterface::FEERATE_FLOOR_SATS_PER_KW;
use crate::chain::chainmonitor;
-use crate::chain::chainmonitor::MonitorUpdateId;
+use crate::chain::chainmonitor::{MonitorUpdateId, UpdateOrigin};
use crate::chain::channelmonitor;
use crate::chain::channelmonitor::MonitorEvent;
use crate::chain::transaction::OutPoint;
// Since the path is reversed, the last element in our iteration is the first
// hop.
if idx == path.hops.len() - 1 {
- scorer.channel_penalty_msat(hop.short_channel_id, &NodeId::from_pubkey(payer), &NodeId::from_pubkey(&hop.pubkey), usage, &());
+ scorer.channel_penalty_msat(hop.short_channel_id, &NodeId::from_pubkey(payer), &NodeId::from_pubkey(&hop.pubkey), usage, &Default::default());
} else {
let curr_hop_path_idx = path.hops.len() - 1 - idx;
- scorer.channel_penalty_msat(hop.short_channel_id, &NodeId::from_pubkey(&path.hops[curr_hop_path_idx - 1].pubkey), &NodeId::from_pubkey(&hop.pubkey), usage, &());
+ scorer.channel_penalty_msat(hop.short_channel_id, &NodeId::from_pubkey(&path.hops[curr_hop_path_idx - 1].pubkey), &NodeId::from_pubkey(&hop.pubkey), usage, &Default::default());
}
}
}
let logger = TestLogger::new();
find_route(
payer, params, &self.network_graph, first_hops, &logger,
- &ScorerAccountingForInFlightHtlcs::new(self.scorer.read().unwrap(), &inflight_htlcs), &(),
+ &ScorerAccountingForInFlightHtlcs::new(self.scorer.read().unwrap(), &inflight_htlcs), &Default::default(),
&[42; 32]
)
}
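The call sites above switch the score-params argument from `&()` to `&Default::default()` because the parameter is now a real, defaultable struct chosen via an associated type. A sketch of why that compiles, with stand-in names (not LDK's actual `ScoreLookUp`/`ProbabilisticScoringFeeParameters` definitions):

```rust
// Stand-ins showing why `&Default::default()` works at the call sites above:
// the score-params argument is a defaultable struct picked by the trait's
// associated type, rather than hard-coded unit.
#[derive(Default)]
struct ScoringParams {
    base_penalty_msat: u64,
}

trait ScoreLookUp {
    type ScoreParams;
    fn channel_penalty_msat(&self, scid: u64, params: &Self::ScoreParams) -> u64;
}

struct FixedPenaltyScorer;

impl ScoreLookUp for FixedPenaltyScorer {
    type ScoreParams = ScoringParams;
    fn channel_penalty_msat(&self, _scid: u64, params: &Self::ScoreParams) -> u64 {
        params.base_penalty_msat
    }
}

fn main() {
    let scorer = FixedPenaltyScorer;
    // `Default::default()` is inferred as `ScoringParams` from the trait's
    // associated type, so call sites need no concrete type annotation.
    assert_eq!(scorer.channel_penalty_msat(42, &Default::default()), 0);
}
```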
/// ChannelForceClosed event for the given channel_id with should_broadcast set to the given
/// boolean.
pub expect_channel_force_closed: Mutex<Option<(ChannelId, bool)>>,
+ /// If this is set to Some(), the next round trip serialization check will not hold after an
+ /// update_channel call (not watch_channel) for the given channel_id.
+ pub expect_monitor_round_trip_fail: Mutex<Option<ChannelId>>,
}
impl<'a> TestChainMonitor<'a> {
pub fn new(chain_source: Option<&'a TestChainSource>, broadcaster: &'a chaininterface::BroadcasterInterface, logger: &'a TestLogger, fee_estimator: &'a TestFeeEstimator, persister: &'a chainmonitor::Persist<TestChannelSigner>, keys_manager: &'a TestKeysInterface) -> Self {
chain_monitor: chainmonitor::ChainMonitor::new(chain_source, broadcaster, logger, fee_estimator, persister),
keys_manager,
expect_channel_force_closed: Mutex::new(None),
+ expect_monitor_round_trip_fail: Mutex::new(None),
}
}
monitor.write(&mut w).unwrap();
let new_monitor = <(BlockHash, channelmonitor::ChannelMonitor<TestChannelSigner>)>::read(
&mut io::Cursor::new(&w.0), (self.keys_manager, self.keys_manager)).unwrap().1;
- assert!(new_monitor == *monitor);
+ if let Some(chan_id) = self.expect_monitor_round_trip_fail.lock().unwrap().take() {
+ assert_eq!(chan_id, funding_txo.to_channel_id());
+ assert!(new_monitor != *monitor);
+ } else {
+ assert!(new_monitor == *monitor);
+ }
self.added_monitors.lock().unwrap().push((funding_txo, new_monitor));
update_res
}
chain::ChannelMonitorUpdateStatus::Completed
}
- fn update_persisted_channel(&self, funding_txo: OutPoint, update: Option<&channelmonitor::ChannelMonitorUpdate>, _data: &channelmonitor::ChannelMonitor<Signer>, update_id: MonitorUpdateId) -> chain::ChannelMonitorUpdateStatus {
+ fn update_persisted_channel(&self, funding_txo: OutPoint, _update: Option<&channelmonitor::ChannelMonitorUpdate>, _data: &channelmonitor::ChannelMonitor<Signer>, update_id: MonitorUpdateId) -> chain::ChannelMonitorUpdateStatus {
let mut ret = chain::ChannelMonitorUpdateStatus::Completed;
if let Some(update_ret) = self.update_rets.lock().unwrap().pop_front() {
ret = update_ret;
}
- if update.is_none() {
+ let is_chain_sync = if let UpdateOrigin::ChainSync(_) = update_id.contents { true } else { false };
+ if is_chain_sync {
self.chain_sync_monitor_persistences.lock().unwrap().entry(funding_txo).or_insert(HashSet::new()).insert(update_id);
} else {
self.offchain_monitor_updates.lock().unwrap().entry(funding_txo).or_insert(HashSet::new()).insert(update_id);
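The hunk above stops inferring "chain sync" from `update.is_none()` and instead inspects the update id's origin. A self-contained sketch of tracking persistences keyed by an origin enum — `UpdateOrigin` and the struct fields are simplified stand-ins for LDK's:

```rust
use std::collections::{HashMap, HashSet};

// Simplified stand-in for LDK's `UpdateOrigin`: a monitor update comes
// either from an off-chain channel update or from chain sync.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum UpdateOrigin {
    OffChain(u64),
    ChainSync(u64),
}

#[derive(Default)]
struct TrackingPersister {
    chain_sync_persistences: HashMap<u32, HashSet<UpdateOrigin>>,
    offchain_updates: HashMap<u32, HashSet<UpdateOrigin>>,
}

impl TrackingPersister {
    fn update_persisted_channel(&mut self, funding_txo: u32, update_id: UpdateOrigin) {
        // Branch on the origin itself rather than on whether an update
        // object happened to be present.
        let is_chain_sync = matches!(update_id, UpdateOrigin::ChainSync(_));
        if is_chain_sync {
            self.chain_sync_persistences.entry(funding_txo).or_default().insert(update_id);
        } else {
            self.offchain_updates.entry(funding_txo).or_default().insert(update_id);
        }
    }
}

fn main() {
    let mut p = TrackingPersister::default();
    p.update_persisted_channel(0, UpdateOrigin::OffChain(1));
    p.update_persisted_channel(0, UpdateOrigin::ChainSync(100));
    assert_eq!(p.offchain_updates[&0].len(), 1);
    assert_eq!(p.chain_sync_persistences[&0].len(), 1);
}
```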
+++ /dev/null
-## Backwards Compatibility
-
-* Since custom HTLC TLV support was added in 0.0.117, downgrading to an earlier version may cause you to unintentionally accept payments with features you don't understand.
+++ /dev/null
-## Backwards Compatibility
-
-* Users migrating custom persistence backends from the pre-v0.0.117 `KVStorePersister` interface can use a concatenation of `[{primary_namespace}/[{secondary_namespace}/]]{key}` to recover a `key` compatible with the data model previously assumed by `KVStorePersister::persist`.
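The `[{primary_namespace}/[{secondary_namespace}/]]{key}` scheme above can be sketched as a small helper; the function name is ours, not LDK's:

```rust
// Recover a legacy `KVStorePersister`-style key from the new `KVStore`
// namespaces, per the `[{primary_namespace}/[{secondary_namespace}/]]{key}`
// concatenation described above. Helper name is illustrative, not LDK API.
fn legacy_key(primary_namespace: &str, secondary_namespace: &str, key: &str) -> String {
    let mut path = String::new();
    if !primary_namespace.is_empty() {
        path.push_str(primary_namespace);
        path.push('/');
        // A secondary namespace only appears nested inside a primary one.
        if !secondary_namespace.is_empty() {
            path.push_str(secondary_namespace);
            path.push('/');
        }
    }
    path.push_str(key);
    path
}

fn main() {
    assert_eq!(legacy_key("monitors", "", "deadbeef_1"), "monitors/deadbeef_1");
    assert_eq!(legacy_key("", "", "manager"), "manager");
}
```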
+++ /dev/null
-## Backwards Compatibility
-
-* The `MonitorUpdatingPersister` can read monitors stored conventionally, such as with the `KVStorePersister` from previous LDK versions. You can use this to migrate _to_ the `MonitorUpdatingPersister`; just "point" `MonitorUpdatingPersister` to existing, fully updated `ChannelMonitors`, and it will read them and work from there. However, downgrading is more complex. Monitors stored with `MonitorUpdatingPersister` have a prepended sentinel value that prevents them from being deserialized by previous `Persist` implementations. This is to ensure that they are not accidentally read and used while pending updates are still stored and not applied, as this could result in penalty transactions. Users who wish to downgrade should perform the following steps:
- * Make a backup copy of all channel state.
- * Ensure all updates are applied to the monitors. This may be done by loading all the existing data with the `MonitorUpdatingPersister::read_all_channel_monitors_with_updates` function. You can then write the resulting `ChannelMonitor`s using your previous `Persist` implementation.
\ No newline at end of file
+++ /dev/null
-* `NetAddress` has been renamed to `SocketAddress`. The fields `IPv4` and `IPv6` have also been renamed to `TcpIpV4` and `TcpIpV6` (#2358).
+++ /dev/null
-* In several APIs, `channel_id` parameters have been changed from the type `[u8; 32]` to the newly introduced `ChannelId` type from the `ln` module (`lightning::ln::ChannelId`) (#2485).
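The change above is the usual newtype migration: a sketch with a minimal stand-in for `lightning::ln::ChannelId` (the constructor and function below are illustrative, not LDK's API):

```rust
// Minimal stand-in for `lightning::ln::ChannelId`, a newtype wrapping the
// raw `[u8; 32]` that APIs previously accepted directly.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct ChannelId(pub [u8; 32]);

impl ChannelId {
    fn from_bytes(data: [u8; 32]) -> Self {
        ChannelId(data)
    }
}

// APIs now take the newtype, so an unrelated 32-byte array can no longer be
// passed where a channel id is expected.
fn describe_channel(id: &ChannelId) -> String {
    let hex: String = id.0.iter().map(|b| format!("{:02x}", b)).collect();
    format!("channel {}", hex)
}

fn main() {
    let raw = [0u8; 32];
    let id = ChannelId::from_bytes(raw);
    assert!(describe_channel(&id).starts_with("channel 0000"));
}
```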
+++ /dev/null
-* The `AvailableBalances::balance_msat` field has been removed in favor of `ChannelMonitor::get_claimable_balances`. `ChannelDetails` serialized with versions of LDK >= 0.0.117 will have their `balance_msat` field set to `next_outbound_htlc_limit_msat` when read by versions of LDK prior to 0.0.117 (#2476).
+++ /dev/null
-## Backwards Compatibility
-
-* `Route` objects written with LDK versions prior to 0.0.117 won't be retryable after being deserialized with LDK 0.0.117 or above.