run: |
sudo apt-get -y install shellcheck
shellcheck ci/ci-tests.sh
+ - name: Set RUSTFLAGS to deny warnings
+ if: "matrix.toolchain == '1.63.0'"
+ run: echo "RUSTFLAGS=-D warnings" >> "$GITHUB_ENV"
- name: Run CI script
shell: bash # Default on Windows is PowerShell
run: CI_MINIMIZE_DISK_USAGE=1 ./ci/ci-tests.sh
+# 0.0.119 - Dec 15, 2023 - "Spring Cleaning for Christmas"
+
+## API Updates
+ * The LDK crate ecosystem MSRV has been increased to 1.63 (#2681).
+ * The `bitcoin` dependency has been updated to version 0.30 (#2740).
+ * `lightning-invoice::payment::*` have been replaced with parameter generation
+ via `payment_parameters_from[_zero_amount]_invoice` (#2727).
+ * `{CoinSelection,Wallet}Source::sign_tx` are now `sign_psbt`, providing more
+ information, incl spent outputs, about the transaction being signed (#2775).
+ * Logger `Record`s now include `channel_id` and `peer_id` fields. These are
+ opportunistically filled in when a log record is specific to a given channel
+ and/or peer, and may occasionally be spuriously empty (#2314).
+ * When sending onion messages or replying to them (e.g. for BOLT12 payments),
+ a new `Event::ConnectionNeeded` may be raised, indicating a direct connection
+ should be made to a payee or an introduction point. This event is expected to
+ be removed once onion message forwarding is widespread in the network (#2723)
+ * Scoring data decay now happens via `ScoreUpdate::time_passed`, called from
+ `lightning-background-processor`. `process_events_async` now takes a new
+ time-fetch function (see the sketch after this list), and `ScoreUpdate`
+ methods now take the current time as a `Duration` argument. This avoids
+ fetching time during pathfinding (#2656).
+ * Receiving payments to multi-hop blinded paths is now supported (#2688).
+ * `MessageRouter` and `Router` now feature methods to generate blinded paths to
+ the local node for incoming messages and payments. `Router` now extends
+ `MessageRouter`, and both are used in `ChannelManager` when processing or
+ creating BOLT12 structures to generate multi-hop blinded paths (#1781).
+ * `lightning-transaction-sync` now supports Electrum-based sync (#2685).
+ * `Confirm::get_relevant_txids` now returns the height at which a transaction
+ was confirmed. This can be used to assist in reorg detection (#2685).
+ * `ConfirmationTarget::MaxAllowedNonAnchorChannelRemoteFee` has been removed.
+ Non-anchor channel feerates are bounded indirectly through
+ `ChannelConfig::max_dust_htlc_exposure` (#2696).
+ * `lightning-invoice` `Description`s now rely on `UntrustedString` for
+ sanitization (#2730).
+ * `ScoreLookUp::channel_penalty_msat` now uses `CandidateRouteHop` (#2551).
+ * The `EcdsaChannelSigner` trait was moved to `lightning::sign::ecdsa` (#2512).
+ * `SignerProvider::get_destination_script` now takes `channel_keys_id` (#2744)
+ * `SpendableOutputDescriptor::StaticOutput` now has `channel_keys_id` (#2749).
+ * `EcdsaChannelSigner::sign_counterparty_commitment` now takes HTLC preimages
+ for both inbound and outbound HTLCs (#2753).
+ * `ClaimedHTLC` now includes a `counterparty_skimmed_fee_msat` field (#2715).
+ * `peel_payment_onion` was added to decode an encrypted onion for a payment
+ without receiving an HTLC. This allows stateless verification of whether a
+ theoretical payment would be accepted prior to receipt (#2700).
+ * `create_payment_onion` was added to construct an encrypted onion for a
+ payment path without sending an HTLC immediately (#2677).
+ * Various keys used in channels are now wrapped to provide type-safety for
+ specific usages of the keys (#2675).
+ * `TaggedHash` now includes the raw `tag` and `merkle_root` (#2687).
+ * `Offer::is_expired_no_std` was added (#2689).
+ * `PaymentPurpose::preimage()` was added (#2768).
+ * `temporary_channel_id` can now be specified in `create_channel` (#2699).
+ * Wire definitions for splicing messages were added (#2544).
+ * Various `lightning-invoice` structs now impl `Display`, have pub fields, or
+ impl `From` (#2730).
+ * The `Hash` trait is now implemented for more structs, incl P2P msgs (#2716).
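+
+ As a rough sketch of the scorer-decay change above: on `std` builds the new
+ `fetch_time` closure handed to `process_events_async` can simply read the
+ system clock, while returning `None` (e.g. on `no_std`) disables time-based
+ decay and automatic graph pruning. The call's other arguments are unchanged
+ and elided here.
+
+ ```rust
+ use std::time::{Duration, SystemTime};
+
+ // Passed as the new `fetch_time` argument to `process_events_async`;
+ // any `Fn() -> Option<Duration>` closure works equally well.
+ fn fetch_time() -> Option<Duration> {
+     SystemTime::now().duration_since(SystemTime::UNIX_EPOCH).ok()
+ }
+ ```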
+
+## Performance Improvements
+ * Memory allocations (though not memory usage) have been substantially reduced,
+ meaning less overhead and hopefully less memory fragmentation (#2708, #2779).
+
+## Bug Fixes
+ * Since 0.0.117, calling `close_channel*` on a channel which has not yet been
+ funded would result in an infinite loop and hang (#2760).
+ * Since 0.0.116, sending payments whose recipient-bound onion data was too
+ large to fit in the onion may have caused corruption which resulted in
+ payment failure (#2752).
+ * Cooperatively closing a channel that still had HTLC outputs may have
+ spuriously force-closed the channel (#2529).
+ * In LDK versions 0.0.116 through 0.0.118, in rare cases where skimmed fees
+ were present on shutdown, the `ChannelManager` could fail to deserialize
+ (#2735).
+ * `ChannelConfig::max_dust_htlc_exposure` values which, converted to absolute
+ fees, exceeded 2^63 - 1 would result in an overflow and could lead to
+ spurious payment failures or channel closures (#2722).
+ * When operating with provably-stale state, LDK panics to avoid funds loss.
+ This may not have happened when LDK was behind by exactly one state, leading
+ instead to a revoked broadcast and funds loss (#2721).
+ * Fixed a bug where decoding `Txid`s from Bitcoin Core JSON-RPC responses using
+ `lightning-block-sync` would not properly byte-swap the hash. Note that LDK
+ does not use this API internally (#2796).
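+
+ For context on the `Txid` byte-swap fix above: `lightning-block-sync` now
+ parses txids via `bitcoin::Txid::from_str`, which expects the usual
+ display-order hex. A minimal stand-alone sketch, reusing the txid from the
+ test added in this release and depending only on the `bitcoin` crate:
+
+ ```rust
+ use std::str::FromStr;
+ use bitcoin::Txid;
+
+ fn main() {
+     // Display-order hex, as returned by bitcoind's JSON-RPC interface.
+     let hex = "7934f775149929a8b742487129a7c3a535dfb612f0b726cc67bc10bc2628f906";
+     let txid = Txid::from_str(hex).expect("64 hex chars");
+     // Round-tripping preserves the string, i.e. no byte-order surprises.
+     assert_eq!(txid.to_string(), hex);
+ }
+ ```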
+
+## Backwards Compatibility
+ * `ChannelManager`s written with LDK 0.0.119 are no longer readable by versions
+ of LDK prior to 0.0.113. Users wishing to downgrade to LDK 0.0.112 or before
+ can read a 0.0.119-serialized `ChannelManager` with a version of LDK from
+ 0.0.113 to 0.0.118, re-serialize it, and then downgrade (#2708).
+ * Nodes that upgrade to 0.0.119 and subsequently downgrade after receiving a
+ payment to a blinded path may leak recipient information if one or more of
+ those HTLCs later fails (#2688).
+ * Similarly, forwarding a blinded HTLC and subsequently downgrading to an LDK
+ version prior to 0.0.119 may result in leaking the path information to the
+ payment sender (#2540).
+
+In total, this release features 148 files changed, 13780 insertions, 6279
+deletions in 280 commits from 22 authors, in alphabetical order:
+ * Arik Sosman
+ * Chris Waterson
+ * Elias Rohrer
+ * Evan Feenstra
+ * Gursharan Singh
+ * Jeffrey Czyz
+ * John Cantrell
+ * Lalitmohansharma1
+ * Matt Corallo
+ * Matthew Rheaume
+ * Orbital
+ * Rachel Malonson
+ * Valentine Wallace
+ * Willem Van Lint
+ * Wilmer Paulino
+ * alexanderwiederin
+ * benthecarman
+ * henghonglee
+ * jbesraa
+ * olegkubrakov
+ * optout
+ * shaavan
+
+
# 0.0.118 - Oct 23, 2023 - "Just the Twelve Sinks"
## API Updates
lightning_persister::fs_store::bench::bench_sends,
lightning_rapid_gossip_sync::bench::bench_reading_full_graph_from_file,
lightning::routing::gossip::benches::read_network_graph,
- lightning::routing::gossip::benches::write_network_graph);
+ lightning::routing::gossip::benches::write_network_graph,
+ lightning::routing::scoring::benches::decay_100k_channel_bounds);
criterion_main!(benches);
pass
elif feature == "electrum":
pass
+ elif feature == "time":
+ pass
elif feature == "_test_utils":
pass
elif feature == "_test_vectors":
pass
elif cfg == "taproot":
pass
+ elif cfg == "async_signing":
+ pass
elif cfg == "require_route_graph_test":
pass
else:
[ "$RUSTC_MINOR_VERSION" -lt 65 ] && cargo update -p reqwest --precise "0.11.20" --verbose
# Starting with version 1.10.0, the `regex` crate has an MSRV of rustc 1.65.0.
[ "$RUSTC_MINOR_VERSION" -lt 65 ] && cargo update -p regex --precise "1.9.6" --verbose
+ # Starting with version 0.5.9 (there is no .6-.8), the `home` crate has an MSRV of rustc 1.70.0.
+ [ "$RUSTC_MINOR_VERSION" -lt 70 ] && cargo update -p home --precise "0.5.5" --verbose
DOWNLOAD_ELECTRS_AND_BITCOIND
- RUSTFLAGS="--cfg no_download" cargo test --verbose --color always --features esplora-blocking
- RUSTFLAGS="--cfg no_download" cargo check --verbose --color always --features esplora-blocking
- RUSTFLAGS="--cfg no_download" cargo test --verbose --color always --features esplora-async
- RUSTFLAGS="--cfg no_download" cargo check --verbose --color always --features esplora-async
- RUSTFLAGS="--cfg no_download" cargo test --verbose --color always --features esplora-async-https
- RUSTFLAGS="--cfg no_download" cargo check --verbose --color always --features esplora-async-https
- RUSTFLAGS="--cfg no_download" cargo test --verbose --color always --features electrum
- RUSTFLAGS="--cfg no_download" cargo check --verbose --color always --features electrum
+ RUSTFLAGS="$RUSTFLAGS --cfg no_download" cargo test --verbose --color always --features esplora-blocking
+ RUSTFLAGS="$RUSTFLAGS --cfg no_download" cargo check --verbose --color always --features esplora-blocking
+ RUSTFLAGS="$RUSTFLAGS --cfg no_download" cargo test --verbose --color always --features esplora-async
+ RUSTFLAGS="$RUSTFLAGS --cfg no_download" cargo check --verbose --color always --features esplora-async
+ RUSTFLAGS="$RUSTFLAGS --cfg no_download" cargo test --verbose --color always --features esplora-async-https
+ RUSTFLAGS="$RUSTFLAGS --cfg no_download" cargo check --verbose --color always --features esplora-async-https
+ RUSTFLAGS="$RUSTFLAGS --cfg no_download" cargo test --verbose --color always --features electrum
+ RUSTFLAGS="$RUSTFLAGS --cfg no_download" cargo check --verbose --color always --features electrum
popd
fi
echo -e "\n\nBuilding with all Log-Limiting features"
pushd lightning
grep '^max_level_' Cargo.toml | awk '{ print $1 }'| while read -r FEATURE; do
- cargo check --verbose --color always --features "$FEATURE"
+ RUSTFLAGS="$RUSTFLAGS -A unused_variables -A unused_macros -A unused_imports -A dead_code" cargo check --verbose --color always --features "$FEATURE"
done
popd
for DIR in lightning lightning-invoice lightning-rapid-gossip-sync; do
# check if there is a conflict between no-std and the c_bindings cfg
- RUSTFLAGS="--cfg=c_bindings" cargo test -p $DIR --verbose --color always --no-default-features --features=no-std
+ RUSTFLAGS="$RUSTFLAGS --cfg=c_bindings" cargo test -p $DIR --verbose --color always --no-default-features --features=no-std
done
-RUSTFLAGS="--cfg=c_bindings" cargo test --verbose --color always
+RUSTFLAGS="$RUSTFLAGS --cfg=c_bindings" cargo test --verbose --color always
# Note that outbound_commitment_test only runs in this mode because of hardcoded signature values
pushd lightning
popd
fi
-echo -e "\n\nTest Taproot builds"
-pushd lightning
-RUSTFLAGS="$RUSTFLAGS --cfg=taproot" cargo test --verbose --color always -p lightning
-popd
+echo -e "\n\nTest cfg-flag builds"
+RUSTFLAGS="--cfg=taproot" cargo test --verbose --color always -p lightning
+RUSTFLAGS="--cfg=async_signing" cargo test --verbose --color always -p lightning
use bitcoin::hashes::sha256d::Hash as Sha256dHash;
use bitcoin::hash_types::{BlockHash, WPubkeyHash};
+use lightning::blinded_path::BlindedPath;
+use lightning::blinded_path::payment::ReceiveTlvs;
use lightning::chain;
use lightning::chain::{BestBlock, ChannelMonitorUpdateStatus, chainmonitor, channelmonitor, Confirm, Watch};
use lightning::chain::channelmonitor::{ChannelMonitor, MonitorEvent};
use lightning::ln::msgs::{self, CommitmentUpdate, ChannelMessageHandler, DecodeError, UpdateAddHTLC, Init};
use lightning::ln::script::ShutdownScript;
use lightning::ln::functional_test_utils::*;
-use lightning::offers::invoice::UnsignedBolt12Invoice;
+use lightning::offers::invoice::{BlindedPayInfo, UnsignedBolt12Invoice};
use lightning::offers::invoice_request::UnsignedInvoiceRequest;
+use lightning::onion_message::{Destination, MessageRouter, OnionMessagePath};
use lightning::util::test_channel_signer::{TestChannelSigner, EnforcementState};
use lightning::util::errors::APIError;
use lightning::util::logger::Logger;
use crate::utils::test_logger::{self, Output};
use crate::utils::test_persister::TestPersister;
-use bitcoin::secp256k1::{Message, PublicKey, SecretKey, Scalar, Secp256k1};
+use bitcoin::secp256k1::{Message, PublicKey, SecretKey, Scalar, Secp256k1, self};
use bitcoin::secp256k1::ecdh::SharedSecret;
use bitcoin::secp256k1::ecdsa::{RecoverableSignature, Signature};
use bitcoin::secp256k1::schnorr;
action: msgs::ErrorAction::IgnoreError
})
}
+
+ fn create_blinded_payment_paths<
+ ES: EntropySource + ?Sized, T: secp256k1::Signing + secp256k1::Verification
+ >(
+ &self, _recipient: PublicKey, _first_hops: Vec<ChannelDetails>, _tlvs: ReceiveTlvs,
+ _amount_msats: u64, _entropy_source: &ES, _secp_ctx: &Secp256k1<T>
+ ) -> Result<Vec<(BlindedPayInfo, BlindedPath)>, ()> {
+ unreachable!()
+ }
+}
+
+impl MessageRouter for FuzzRouter {
+ fn find_path(
+ &self, _sender: PublicKey, _peers: Vec<PublicKey>, _destination: Destination
+ ) -> Result<OnionMessagePath, ()> {
+ unreachable!()
+ }
+
+ fn create_blinded_paths<
+ ES: EntropySource + ?Sized, T: secp256k1::Signing + secp256k1::Verification
+ >(
+ &self, _recipient: PublicKey, _peers: Vec<PublicKey>, _entropy_source: &ES,
+ _secp_ctx: &Secp256k1<T>
+ ) -> Result<Vec<BlindedPath>, ()> {
+ unreachable!()
+ }
}
pub struct TestBroadcaster {}
use bitcoin::hashes::sha256d::Hash as Sha256dHash;
use bitcoin::hash_types::{Txid, BlockHash, WPubkeyHash};
+use lightning::blinded_path::BlindedPath;
+use lightning::blinded_path::payment::ReceiveTlvs;
use lightning::chain;
use lightning::chain::{BestBlock, ChannelMonitorUpdateStatus, Confirm, Listen};
use lightning::chain::chaininterface::{BroadcasterInterface, ConfirmationTarget, FeeEstimator};
use lightning::ln::msgs::{self, DecodeError};
use lightning::ln::script::ShutdownScript;
use lightning::ln::functional_test_utils::*;
-use lightning::offers::invoice::UnsignedBolt12Invoice;
+use lightning::offers::invoice::{BlindedPayInfo, UnsignedBolt12Invoice};
use lightning::offers::invoice_request::UnsignedInvoiceRequest;
+use lightning::onion_message::{Destination, MessageRouter, OnionMessagePath};
use lightning::routing::gossip::{P2PGossipSync, NetworkGraph};
use lightning::routing::utxo::UtxoLookup;
use lightning::routing::router::{InFlightHtlcs, PaymentParameters, Route, RouteParameters, Router};
use crate::utils::test_logger;
use crate::utils::test_persister::TestPersister;
-use bitcoin::secp256k1::{Message, PublicKey, SecretKey, Scalar, Secp256k1};
+use bitcoin::secp256k1::{Message, PublicKey, SecretKey, Scalar, Secp256k1, self};
use bitcoin::secp256k1::ecdh::SharedSecret;
use bitcoin::secp256k1::ecdsa::{RecoverableSignature, Signature};
use bitcoin::secp256k1::schnorr;
action: msgs::ErrorAction::IgnoreError
})
}
+
+ fn create_blinded_payment_paths<
+ ES: EntropySource + ?Sized, T: secp256k1::Signing + secp256k1::Verification
+ >(
+ &self, _recipient: PublicKey, _first_hops: Vec<ChannelDetails>, _tlvs: ReceiveTlvs,
+ _amount_msats: u64, _entropy_source: &ES, _secp_ctx: &Secp256k1<T>
+ ) -> Result<Vec<(BlindedPayInfo, BlindedPath)>, ()> {
+ unreachable!()
+ }
+}
+
+impl MessageRouter for FuzzRouter {
+ fn find_path(
+ &self, _sender: PublicKey, _peers: Vec<PublicKey>, _destination: Destination
+ ) -> Result<OnionMessagePath, ()> {
+ unreachable!()
+ }
+
+ fn create_blinded_paths<
+ ES: EntropySource + ?Sized, T: secp256k1::Signing + secp256k1::Verification
+ >(
+ &self, _recipient: PublicKey, _peers: Vec<PublicKey>, _entropy_source: &ES,
+ _secp_ctx: &Secp256k1<T>
+ ) -> Result<Vec<BlindedPath>, ()> {
+ unreachable!()
+ }
}
struct TestBroadcaster {
// Imports that need to be added manually
use bitcoin::bech32::u5;
use bitcoin::blockdata::script::ScriptBuf;
-use bitcoin::secp256k1::{PublicKey, Scalar, Secp256k1, SecretKey};
+use bitcoin::secp256k1::{PublicKey, Scalar, Secp256k1, SecretKey, self};
use bitcoin::secp256k1::ecdh::SharedSecret;
use bitcoin::secp256k1::ecdsa::RecoverableSignature;
use bitcoin::secp256k1::schnorr;
-use lightning::sign::{Recipient, KeyMaterial, EntropySource, NodeSigner, SignerProvider};
+use lightning::blinded_path::BlindedPath;
use lightning::ln::features::InitFeatures;
use lightning::ln::msgs::{self, DecodeError, OnionMessageHandler};
use lightning::ln::script::ShutdownScript;
use lightning::offers::invoice::UnsignedBolt12Invoice;
use lightning::offers::invoice_request::UnsignedInvoiceRequest;
+use lightning::sign::{Recipient, KeyMaterial, EntropySource, NodeSigner, SignerProvider};
use lightning::util::test_channel_signer::TestChannelSigner;
use lightning::util::logger::Logger;
use lightning::util::ser::{Readable, Writeable, Writer};
first_node_addresses: None,
})
}
+
+ fn create_blinded_paths<
+ ES: EntropySource + ?Sized, T: secp256k1::Signing + secp256k1::Verification
+ >(
+ &self, _recipient: PublicKey, _peers: Vec<PublicKey>, _entropy_source: &ES,
+ _secp_ctx: &Secp256k1<T>
+ ) -> Result<Vec<BlindedPath>, ()> {
+ unreachable!()
+ }
}
struct TestOffersMessageHandler {}
[package]
name = "lightning-background-processor"
-version = "0.0.118"
+version = "0.0.119"
authors = ["Valentine Wallace <vwallace@protonmail.com>"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning"
[dependencies]
bitcoin = { version = "0.30.2", default-features = false }
-lightning = { version = "0.0.118", path = "../lightning", default-features = false }
-lightning-rapid-gossip-sync = { version = "0.0.118", path = "../lightning-rapid-gossip-sync", default-features = false }
+lightning = { version = "0.0.119", path = "../lightning", default-features = false }
+lightning-rapid-gossip-sync = { version = "0.0.119", path = "../lightning-rapid-gossip-sync", default-features = false }
[dev-dependencies]
-tokio = { version = "1.14", features = [ "macros", "rt", "rt-multi-thread", "sync", "time" ] }
-lightning = { version = "0.0.118", path = "../lightning", features = ["_test_utils"] }
-lightning-invoice = { version = "0.26.0", path = "../lightning-invoice" }
-lightning-persister = { version = "0.0.118", path = "../lightning-persister" }
+tokio = { version = "1.35", features = [ "macros", "rt", "rt-multi-thread", "sync", "time" ] }
+lightning = { version = "0.0.119", path = "../lightning", features = ["_test_utils"] }
+lightning-invoice = { version = "0.27.0", path = "../lightning-invoice" }
+lightning-persister = { version = "0.0.119", path = "../lightning-persister" }
use lightning::sign::{EntropySource, NodeSigner, SignerProvider};
use lightning::events::{Event, PathFailure};
#[cfg(feature = "std")]
-use lightning::events::{EventHandler, EventsProvider};
+use lightning::events::EventHandler;
+#[cfg(any(feature = "std", feature = "futures"))]
+use lightning::events::EventsProvider;
+
use lightning::ln::channelmanager::ChannelManager;
use lightning::ln::msgs::OnionMessageHandler;
use lightning::ln::peer_handler::APeerManager;
const NETWORK_PRUNE_TIMER: u64 = 60 * 60;
#[cfg(not(test))]
-const SCORER_PERSIST_TIMER: u64 = 60 * 60;
+const SCORER_PERSIST_TIMER: u64 = 60 * 5;
#[cfg(test)]
const SCORER_PERSIST_TIMER: u64 = 1;
/// Updates scorer based on event and returns whether an update occurred so we can decide whether
/// to persist.
fn update_scorer<'a, S: 'static + Deref<Target = SC> + Send + Sync, SC: 'a + WriteableScore<'a>>(
- scorer: &'a S, event: &Event
+ scorer: &'a S, event: &Event, duration_since_epoch: Duration,
) -> bool {
match event {
Event::PaymentPathFailed { ref path, short_channel_id: Some(scid), .. } => {
let mut score = scorer.write_lock();
- score.payment_path_failed(path, *scid);
+ score.payment_path_failed(path, *scid, duration_since_epoch);
},
Event::PaymentPathFailed { ref path, payment_failed_permanently: true, .. } => {
// Reached if the destination explicitly failed it back. We treat this as a successful probe
// because the payment made it all the way to the destination with sufficient liquidity.
let mut score = scorer.write_lock();
- score.probe_successful(path);
+ score.probe_successful(path, duration_since_epoch);
},
Event::PaymentPathSuccessful { path, .. } => {
let mut score = scorer.write_lock();
- score.payment_path_successful(path);
+ score.payment_path_successful(path, duration_since_epoch);
},
Event::ProbeSuccessful { path, .. } => {
let mut score = scorer.write_lock();
- score.probe_successful(path);
+ score.probe_successful(path, duration_since_epoch);
},
Event::ProbeFailed { path, short_channel_id: Some(scid), .. } => {
let mut score = scorer.write_lock();
- score.probe_failed(path, *scid);
+ score.probe_failed(path, *scid, duration_since_epoch);
},
_ => return false,
}
$channel_manager: ident, $process_channel_manager_events: expr,
$peer_manager: ident, $process_onion_message_handler_events: expr, $gossip_sync: ident,
$logger: ident, $scorer: ident, $loop_exit_check: expr, $await: expr, $get_timer: expr,
- $timer_elapsed: expr, $check_slow_await: expr
+ $timer_elapsed: expr, $check_slow_await: expr, $time_fetch: expr,
) => { {
log_trace!($logger, "Calling ChannelManager's timer_tick_occurred on startup");
$channel_manager.timer_tick_occurred();
let mut last_scorer_persist_call = $get_timer(SCORER_PERSIST_TIMER);
let mut last_rebroadcast_call = $get_timer(REBROADCAST_TIMER);
let mut have_pruned = false;
+ let mut have_decayed_scorer = false;
loop {
$process_channel_manager_events;
if should_prune {
// The network graph must not be pruned while rapid sync completion is pending
if let Some(network_graph) = $gossip_sync.prunable_network_graph() {
- #[cfg(feature = "std")] {
+ if let Some(duration_since_epoch) = $time_fetch() {
log_trace!($logger, "Pruning and persisting network graph.");
- network_graph.remove_stale_channels_and_tracking();
- }
- #[cfg(not(feature = "std"))] {
+ network_graph.remove_stale_channels_and_tracking_with_time(duration_since_epoch.as_secs());
+ } else {
log_warn!($logger, "Not pruning network graph, consider enabling `std` or doing so manually with remove_stale_channels_and_tracking_with_time.");
log_trace!($logger, "Persisting network graph.");
}
last_prune_call = $get_timer(prune_timer);
}
+ if !have_decayed_scorer {
+ if let Some(ref scorer) = $scorer {
+ if let Some(duration_since_epoch) = $time_fetch() {
+ log_trace!($logger, "Calling time_passed on scorer at startup");
+ scorer.write_lock().time_passed(duration_since_epoch);
+ }
+ }
+ have_decayed_scorer = true;
+ }
+
if $timer_elapsed(&mut last_scorer_persist_call, SCORER_PERSIST_TIMER) {
if let Some(ref scorer) = $scorer {
- log_trace!($logger, "Persisting scorer");
+ if let Some(duration_since_epoch) = $time_fetch() {
+ log_trace!($logger, "Calling time_passed and persisting scorer");
+ scorer.write_lock().time_passed(duration_since_epoch);
+ } else {
+ log_trace!($logger, "Persisting scorer");
+ }
if let Err(e) = $persister.persist_scorer(&scorer) {
log_error!($logger, "Error: Failed to persist scorer, check your disk and permissions {}", e)
}
/// are unsure, you should set the flag, as the performance impact of it is minimal unless there
/// are hundreds or thousands of simultaneous process calls running.
///
+/// The `fetch_time` parameter should return the current wall clock time, if one is available. If
+/// no time is available, some features may be disabled; however, the node will still operate fine.
+///
/// For example, in order to process background events in a [Tokio](https://tokio.rs/) task, you
/// could setup `process_events_async` like this:
/// ```
/// # use lightning::io;
/// # use std::sync::{Arc, RwLock};
/// # use std::sync::atomic::{AtomicBool, Ordering};
+/// # use std::time::SystemTime;
/// # use lightning_background_processor::{process_events_async, GossipSync};
/// # struct MyStore {}
/// # impl lightning::util::persist::KVStore for MyStore {
/// Some(background_scorer),
/// sleeper,
/// mobile_interruptable_platform,
+/// || Some(SystemTime::now().duration_since(SystemTime::UNIX_EPOCH).unwrap())
/// )
/// .await
/// .expect("Failed to process events");
S: 'static + Deref<Target = SC> + Send + Sync,
SC: for<'b> WriteableScore<'b>,
SleepFuture: core::future::Future<Output = bool> + core::marker::Unpin,
- Sleeper: Fn(Duration) -> SleepFuture
+ Sleeper: Fn(Duration) -> SleepFuture,
+ FetchTime: Fn() -> Option<Duration>,
>(
persister: PS, event_handler: EventHandler, chain_monitor: M, channel_manager: CM,
gossip_sync: GossipSync<PGS, RGS, G, UL, L>, peer_manager: PM, logger: L, scorer: Option<S>,
- sleeper: Sleeper, mobile_interruptable_platform: bool,
+ sleeper: Sleeper, mobile_interruptable_platform: bool, fetch_time: FetchTime,
) -> Result<(), lightning::io::Error>
where
UL::Target: 'static + UtxoLookup,
let scorer = &scorer;
let logger = &logger;
let persister = &persister;
+ let fetch_time = &fetch_time;
async move {
if let Some(network_graph) = network_graph {
handle_network_graph_update(network_graph, &event)
}
if let Some(ref scorer) = scorer {
- if update_scorer(scorer, &event) {
- log_trace!(logger, "Persisting scorer after update");
- if let Err(e) = persister.persist_scorer(&scorer) {
- log_error!(logger, "Error: Failed to persist scorer, check your disk and permissions {}", e)
+ if let Some(duration_since_epoch) = fetch_time() {
+ if update_scorer(scorer, &event, duration_since_epoch) {
+ log_trace!(logger, "Persisting scorer after update");
+ if let Err(e) = persister.persist_scorer(&scorer) {
+ log_error!(logger, "Error: Failed to persist scorer, check your disk and permissions {}", e)
+ }
}
}
}
task::Poll::Ready(exit) => { should_break = exit; true },
task::Poll::Pending => false,
}
- }, mobile_interruptable_platform
+ }, mobile_interruptable_platform, fetch_time,
)
}
where
PM::Target: APeerManager + Send + Sync,
{
- use lightning::events::EventsProvider;
-
let events = core::cell::RefCell::new(Vec::new());
peer_manager.onion_message_handler().process_pending_events(&|e| events.borrow_mut().push(e));
handle_network_graph_update(network_graph, &event)
}
if let Some(ref scorer) = scorer {
- if update_scorer(scorer, &event) {
+ use std::time::SystemTime;
+ let duration_since_epoch = SystemTime::now().duration_since(SystemTime::UNIX_EPOCH)
+ .expect("Time should be sometime after 1970");
+ if update_scorer(scorer, &event, duration_since_epoch) {
log_trace!(logger, "Persisting scorer after update");
if let Err(e) = persister.persist_scorer(&scorer) {
log_error!(logger, "Error: Failed to persist scorer, check your disk and permissions {}", e)
channel_manager.get_event_or_persistence_needed_future(),
chain_monitor.get_update_future()
).wait_timeout(Duration::from_millis(100)); },
- |_| Instant::now(), |time: &Instant, dur| time.elapsed().as_secs() > dur, false
+ |_| Instant::now(), |time: &Instant, dur| time.elapsed().as_secs() > dur, false,
+ || {
+ use std::time::SystemTime;
+ Some(SystemTime::now().duration_since(SystemTime::UNIX_EPOCH)
+ .expect("Time should be sometime after 1970"))
+ },
)
});
Self { stop_thread: stop_thread_clone, thread_handle: Some(handle) }
}
impl ScoreUpdate for TestScorer {
- fn payment_path_failed(&mut self, actual_path: &Path, actual_short_channel_id: u64) {
+ fn payment_path_failed(&mut self, actual_path: &Path, actual_short_channel_id: u64, _: Duration) {
if let Some(expectations) = &mut self.event_expectations {
match expectations.pop_front().unwrap() {
TestResult::PaymentFailure { path, short_channel_id } => {
}
}
- fn payment_path_successful(&mut self, actual_path: &Path) {
+ fn payment_path_successful(&mut self, actual_path: &Path, _: Duration) {
if let Some(expectations) = &mut self.event_expectations {
match expectations.pop_front().unwrap() {
TestResult::PaymentFailure { path, .. } => {
}
}
- fn probe_failed(&mut self, actual_path: &Path, _: u64) {
+ fn probe_failed(&mut self, actual_path: &Path, _: u64, _: Duration) {
if let Some(expectations) = &mut self.event_expectations {
match expectations.pop_front().unwrap() {
TestResult::PaymentFailure { path, .. } => {
}
}
}
- fn probe_successful(&mut self, actual_path: &Path) {
+ fn probe_successful(&mut self, actual_path: &Path, _: Duration) {
if let Some(expectations) = &mut self.event_expectations {
match expectations.pop_front().unwrap() {
TestResult::PaymentFailure { path, .. } => {
}
}
}
+ fn time_passed(&mut self, _: Duration) {}
}
#[cfg(c_bindings)]
tokio::time::sleep(dur).await;
false // Never exit
})
- }, false,
+ }, false, || Some(Duration::ZERO),
);
match bp_future.await {
Ok(_) => panic!("Expected error persisting manager"),
loop {
let log_entries = nodes[0].logger.lines.lock().unwrap();
- let expected_log = "Persisting scorer".to_string();
+ let expected_log = "Calling time_passed and persisting scorer".to_string();
if log_entries.get(&("lightning_background_processor", expected_log)).is_some() {
break
}
_ = exit_receiver.changed() => true,
}
})
- }, false,
+ }, false, || Some(Duration::from_secs(1696300000)),
);
let t1 = tokio::spawn(bp_future);
_ = exit_receiver.changed() => true,
}
})
- }, false,
+ }, false, || Some(Duration::ZERO),
);
let t1 = tokio::spawn(bp_future);
let t2 = tokio::spawn(async move {
[package]
name = "lightning-block-sync"
-version = "0.0.118"
+version = "0.0.119"
authors = ["Jeffrey Czyz", "Matt Corallo"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning"
[dependencies]
bitcoin = "0.30.2"
hex = { package = "hex-conservative", version = "0.1.1", default-features = false }
-lightning = { version = "0.0.118", path = "../lightning" }
-tokio = { version = "1.0", features = [ "io-util", "net", "time", "rt" ], optional = true }
+lightning = { version = "0.0.119", path = "../lightning" }
+tokio = { version = "1.35", features = [ "io-util", "net", "time", "rt" ], optional = true }
serde_json = { version = "1.0", optional = true }
chunked_transfer = { version = "1.4", optional = true }
[dev-dependencies]
-lightning = { version = "0.0.118", path = "../lightning", features = ["_test_utils"] }
-tokio = { version = "1.14", features = [ "macros", "rt" ] }
+lightning = { version = "0.0.119", path = "../lightning", features = ["_test_utils"] }
+tokio = { version = "1.35", features = [ "macros", "rt" ] }
impl TryInto<Txid> for JsonResponse {
type Error = std::io::Error;
fn try_into(self) -> std::io::Result<Txid> {
- match self.0.as_str() {
- None => Err(std::io::Error::new(
- std::io::ErrorKind::InvalidData,
- "expected JSON string",
- )),
- Some(hex_data) => match Vec::<u8>::from_hex(hex_data) {
- Err(_) => Err(std::io::Error::new(
- std::io::ErrorKind::InvalidData,
- "invalid hex data",
- )),
- Ok(txid_data) => match encode::deserialize(&txid_data) {
- Err(_) => Err(std::io::Error::new(
- std::io::ErrorKind::InvalidData,
- "invalid txid",
- )),
- Ok(txid) => Ok(txid),
- },
- },
- }
+ let hex_data = self.0.as_str().ok_or(Self::Error::new(std::io::ErrorKind::InvalidData, "expected JSON string"))?;
+ Txid::from_str(hex_data).map_err(|err| Self::Error::new(std::io::ErrorKind::InvalidData, err.to_string()))
}
}
/// The REST `getutxos` endpoint returns a whole pile of data we don't care about and one bit we do
/// - whether the `hit bitmap` field had any entries. Thus we condense the result down into only
/// that.
+#[cfg(feature = "rest-client")]
pub(crate) struct GetUtxosResponse {
pub(crate) hit_bitmap_nonempty: bool
}
+#[cfg(feature = "rest-client")]
impl TryInto<GetUtxosResponse> for JsonResponse {
type Error = std::io::Error;
match TryInto::<Txid>::try_into(response) {
Err(e) => {
assert_eq!(e.kind(), std::io::ErrorKind::InvalidData);
- assert_eq!(e.get_ref().unwrap().to_string(), "invalid hex data");
+ assert_eq!(e.get_ref().unwrap().to_string(), "bad hex string length 6 (expected 64)");
}
Ok(_) => panic!("Expected error"),
}
match TryInto::<Txid>::try_into(response) {
Err(e) => {
assert_eq!(e.kind(), std::io::ErrorKind::InvalidData);
- assert_eq!(e.get_ref().unwrap().to_string(), "invalid txid");
+ assert_eq!(e.get_ref().unwrap().to_string(), "bad hex string length 4 (expected 64)");
}
Ok(_) => panic!("Expected error"),
}
}
}
+ #[test]
+ fn into_txid_from_bitcoind_rpc_json_response() {
+ let mut rpc_response = serde_json::json!(
+ {"error": "", "id": "770", "result": "7934f775149929a8b742487129a7c3a535dfb612f0b726cc67bc10bc2628f906"}
+ );
+ let r: std::io::Result<Txid> = JsonResponse(rpc_response.get_mut("result").unwrap().take())
+ .try_into();
+ assert_eq!(
+ r.unwrap().to_string(),
+ "7934f775149929a8b742487129a7c3a535dfb612f0b726cc67bc10bc2628f906"
+ );
+ }
+
// TryInto<Transaction> can be used in two ways, first with plain hex response where data is
// the hex encoded transaction (e.g. as a result of getrawtransaction) or as a JSON object
// where the hex encoded transaction can be found in the hex field of the object (if present)
[package]
name = "lightning-custom-message"
-version = "0.0.118"
+version = "0.0.119"
authors = ["Jeffrey Czyz"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning"
[dependencies]
bitcoin = "0.30.2"
-lightning = { version = "0.0.118", path = "../lightning" }
+lightning = { version = "0.0.119", path = "../lightning" }
[package]
name = "lightning-invoice"
description = "Data structures to parse and serialize BOLT11 lightning invoices"
-version = "0.26.0"
+version = "0.27.0"
authors = ["Sebastian Geisler <sgeisler@wh2.tu-dresden.de>"]
documentation = "https://docs.rs/lightning-invoice/"
license = "MIT OR Apache-2.0"
[features]
default = ["std"]
no-std = ["hashbrown", "lightning/no-std"]
-std = ["bitcoin_hashes/std", "num-traits/std", "lightning/std", "bech32/std"]
+std = ["bitcoin/std", "num-traits/std", "lightning/std", "bech32/std"]
[dependencies]
bech32 = { version = "0.9.0", default-features = false }
-lightning = { version = "0.0.118", path = "../lightning", default-features = false }
+lightning = { version = "0.0.119", path = "../lightning", default-features = false }
secp256k1 = { version = "0.27.0", default-features = false, features = ["recovery", "alloc"] }
num-traits = { version = "0.2.8", default-features = false }
-bitcoin_hashes = { version = "0.12.0", default-features = false }
hashbrown = { version = "0.8", optional = true }
serde = { version = "1.0.118", optional = true }
bitcoin = { version = "0.30.2", default-features = false }
[dev-dependencies]
-lightning = { version = "0.0.118", path = "../lightning", default-features = false, features = ["_test_utils"] }
+lightning = { version = "0.0.119", path = "../lightning", default-features = false, features = ["_test_utils"] }
hex = { package = "hex-conservative", version = "0.1.1", default-features = false }
serde_json = { version = "1"}
use bitcoin::{PubkeyHash, ScriptHash};
use bitcoin::address::WitnessVersion;
-use bitcoin_hashes::Hash;
-use bitcoin_hashes::sha256;
+use bitcoin::hashes::Hash;
+use bitcoin::hashes::sha256;
use crate::prelude::*;
use lightning::ln::PaymentSecret;
use lightning::routing::gossip::RoutingFees;
17 => {
let pkh = match PubkeyHash::from_slice(&bytes) {
Ok(pkh) => pkh,
- Err(bitcoin_hashes::Error::InvalidLength(_, _)) => return Err(Bolt11ParseError::InvalidPubKeyHashLength),
+ Err(bitcoin::hashes::Error::InvalidLength(_, _)) => return Err(Bolt11ParseError::InvalidPubKeyHashLength),
};
Ok(Fallback::PubKeyHash(pkh))
}
18 => {
let sh = match ScriptHash::from_slice(&bytes) {
Ok(sh) => sh,
- Err(bitcoin_hashes::Error::InvalidLength(_, _)) => return Err(Bolt11ParseError::InvalidScriptHashLength),
+ Err(bitcoin::hashes::Error::InvalidLength(_, _)) => return Err(Bolt11ParseError::InvalidScriptHashLength),
};
Ok(Fallback::ScriptHash(sh))
}
use crate::de::Bolt11ParseError;
use secp256k1::PublicKey;
use bech32::u5;
- use bitcoin_hashes::sha256;
+ use bitcoin::hashes::sha256;
use std::str::FromStr;
const CHARSET_REV: [i8; 128] = [
use bech32::FromBase32;
use bitcoin::{PubkeyHash, ScriptHash};
use bitcoin::address::WitnessVersion;
- use bitcoin_hashes::Hash;
+ use bitcoin::hashes::Hash;
let cases = vec![
(
pub mod utils;
extern crate bech32;
-extern crate bitcoin_hashes;
#[macro_use] extern crate lightning;
extern crate num_traits;
extern crate secp256k1;
use bech32::u5;
use bitcoin::{Address, Network, PubkeyHash, ScriptHash};
use bitcoin::address::{Payload, WitnessProgram, WitnessVersion};
-use bitcoin_hashes::{Hash, sha256};
+use bitcoin::hashes::{Hash, sha256};
use lightning::ln::features::Bolt11InvoiceFeatures;
use lightning::util::invoice::construct_invoice_preimage;
mod ser;
mod tb;
+#[allow(unused_imports)]
mod prelude {
#[cfg(feature = "hashbrown")]
extern crate hashbrown;
use crate::prelude::*;
-/// Sync compat for std/no_std
-#[cfg(not(feature = "std"))]
-mod sync;
-
/// Errors that indicate what is wrong with the invoice. They have some granularity for debug
/// reasons, but should generally result in an "invalid BOLT11 invoice" message for the user.
#[allow(missing_docs)]
/// extern crate secp256k1;
/// extern crate lightning;
/// extern crate lightning_invoice;
-/// extern crate bitcoin_hashes;
+/// extern crate bitcoin;
///
-/// use bitcoin_hashes::Hash;
-/// use bitcoin_hashes::sha256;
+/// use bitcoin::hashes::Hash;
+/// use bitcoin::hashes::sha256;
///
/// use secp256k1::Secp256k1;
/// use secp256k1::SecretKey;
/// The encoded route has to be <1024 5bit characters long (<=639 bytes or <=12 hops)
///
#[derive(Clone, Debug, Hash, Eq, PartialEq, Ord, PartialOrd)]
-pub struct PrivateRoute(pub RouteHint);
+pub struct PrivateRoute(RouteHint);
/// Tag constants as specified in BOLT11
#[allow(missing_docs)]
#[cfg(test)]
mod test {
use bitcoin::ScriptBuf;
- use bitcoin_hashes::sha256;
+ use bitcoin::hashes::sha256;
use std::str::FromStr;
#[test]
use lightning::routing::router::RouteHintHop;
use secp256k1::Secp256k1;
use secp256k1::{SecretKey, PublicKey};
- use std::time::{UNIX_EPOCH, Duration};
+ use std::time::Duration;
let secp_ctx = Secp256k1::new();
assert_eq!(invoice.currency(), Currency::BitcoinTestnet);
#[cfg(feature = "std")]
assert_eq!(
- invoice.timestamp().duration_since(UNIX_EPOCH).unwrap().as_secs(),
+ invoice.timestamp().duration_since(SystemTime::UNIX_EPOCH).unwrap().as_secs(),
1234567
);
assert_eq!(invoice.payee_pub_key(), Some(&public_key));
//! Convenient utilities for paying Lightning invoices.
use crate::Bolt11Invoice;
-use crate::bitcoin_hashes::Hash;
+use bitcoin::hashes::Hash;
use lightning::ln::PaymentHash;
use lightning::ln::channelmanager::RecipientOnionFields;
mod tests {
use super::*;
use crate::{InvoiceBuilder, Currency};
- use bitcoin_hashes::sha256::Hash as Sha256;
- use lightning::events::Event;
- use lightning::ln::channelmanager::{Retry, PaymentId};
- use lightning::ln::msgs::ChannelMessageHandler;
+ use bitcoin::hashes::sha256::Hash as Sha256;
use lightning::ln::PaymentSecret;
- use lightning::ln::functional_test_utils::*;
use lightning::routing::router::Payee;
use secp256k1::{SecretKey, PublicKey, Secp256k1};
- use std::time::{SystemTime, Duration};
+ use core::time::Duration;
+ #[cfg(feature = "std")]
+ use std::time::SystemTime;
fn duration_since_epoch() -> Duration {
#[cfg(feature = "std")]
#[test]
#[cfg(feature = "std")]
fn payment_metadata_end_to_end() {
+ use lightning::events::Event;
+ use lightning::ln::channelmanager::{Retry, PaymentId};
+ use lightning::ln::msgs::ChannelMessageHandler;
+ use lightning::ln::functional_test_utils::*;
// Test that a payment metadata read from an invoice passed to `pay_invoice` makes it all
// the way out through the `PaymentClaimable` event.
let chanmon_cfgs = create_chanmon_cfgs(2);
+++ /dev/null
-use core::cell::{RefCell, RefMut};
-use core::ops::{Deref, DerefMut};
-
-pub type LockResult<Guard> = Result<Guard, ()>;
-
-pub struct Mutex<T: ?Sized> {
- inner: RefCell<T>
-}
-
-#[must_use = "if unused the Mutex will immediately unlock"]
-pub struct MutexGuard<'a, T: ?Sized + 'a> {
- lock: RefMut<'a, T>,
-}
-
-impl<T: ?Sized> Deref for MutexGuard<'_, T> {
- type Target = T;
-
- fn deref(&self) -> &T {
- &self.lock.deref()
- }
-}
-
-impl<T: ?Sized> DerefMut for MutexGuard<'_, T> {
- fn deref_mut(&mut self) -> &mut T {
- self.lock.deref_mut()
- }
-}
-
-impl<T> Mutex<T> {
- pub fn new(inner: T) -> Mutex<T> {
- Mutex { inner: RefCell::new(inner) }
- }
-
- pub fn lock<'a>(&'a self) -> LockResult<MutexGuard<'a, T>> {
- Ok(MutexGuard { lock: self.inner.borrow_mut() })
- }
-}
use crate::{prelude::*, Description, Bolt11InvoiceDescription, Sha256};
use bech32::ToBase32;
-use bitcoin_hashes::Hash;
+use bitcoin::hashes::Hash;
use lightning::chain;
use lightning::chain::chaininterface::{BroadcasterInterface, FeeEstimator};
use lightning::sign::{Recipient, NodeSigner, SignerProvider, EntropySource};
#[cfg(test)]
mod test {
- use core::cell::RefCell;
use core::time::Duration;
use crate::{Currency, Description, Bolt11InvoiceDescription, SignOrCreationError, CreationError};
- use bitcoin_hashes::{Hash, sha256};
- use bitcoin_hashes::sha256::Hash as Sha256;
+ use bitcoin::hashes::{Hash, sha256};
+ use bitcoin::hashes::sha256::Hash as Sha256;
use lightning::sign::PhantomKeysManager;
- use lightning::events::{MessageSendEvent, MessageSendEventsProvider, Event, EventsProvider};
- use lightning::ln::{PaymentPreimage, PaymentHash};
+ use lightning::events::{MessageSendEvent, MessageSendEventsProvider};
+ use lightning::ln::PaymentHash;
+ #[cfg(feature = "std")]
+ use lightning::ln::PaymentPreimage;
use lightning::ln::channelmanager::{PhantomRouteHints, MIN_FINAL_CLTV_EXPIRY_DELTA, PaymentId, RecipientOnionFields, Retry};
use lightning::ln::functional_test_utils::*;
use lightning::ln::msgs::ChannelMessageHandler;
#[cfg(feature = "std")]
fn do_test_multi_node_receive(user_generated_pmt_hash: bool) {
+ use lightning::events::{Event, EventsProvider};
+ use core::cell::RefCell;
+
let mut chanmon_cfgs = create_chanmon_cfgs(3);
let seed_1 = [42u8; 32];
let seed_2 = [43u8; 32];
extern crate bech32;
-extern crate bitcoin_hashes;
extern crate lightning;
extern crate lightning_invoice;
extern crate secp256k1;
use bitcoin::address::WitnessVersion;
use bitcoin::{PubkeyHash, ScriptHash};
use bitcoin::hashes::hex::FromHex;
-use bitcoin_hashes::{sha256, Hash};
+use bitcoin::hashes::{sha256, Hash};
use lightning::ln::PaymentSecret;
use lightning::routing::gossip::RoutingFees;
use lightning::routing::router::{RouteHint, RouteHintHop};
[package]
name = "lightning-net-tokio"
-version = "0.0.118"
+version = "0.0.119"
authors = ["Matt Corallo"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning/"
[dependencies]
bitcoin = "0.30.2"
-lightning = { version = "0.0.118", path = "../lightning" }
-tokio = { version = "1.0", features = [ "rt", "sync", "net", "time" ] }
+lightning = { version = "0.0.119", path = "../lightning" }
+tokio = { version = "1.35", features = [ "rt", "sync", "net", "time" ] }
[dev-dependencies]
-tokio = { version = "1.14", features = [ "macros", "rt", "rt-multi-thread", "sync", "net", "time" ] }
-lightning = { version = "0.0.118", path = "../lightning", features = ["_test_utils"] }
+tokio = { version = "1.35", features = [ "macros", "rt", "rt-multi-thread", "sync", "net", "time" ] }
+lightning = { version = "0.0.119", path = "../lightning", features = ["_test_utils"] }
[package]
name = "lightning-persister"
-version = "0.0.118"
+version = "0.0.119"
authors = ["Valentine Wallace", "Matt Corallo"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning"
[dependencies]
bitcoin = "0.30.2"
-lightning = { version = "0.0.118", path = "../lightning" }
+lightning = { version = "0.0.119", path = "../lightning" }
[target.'cfg(windows)'.dependencies]
windows-sys = { version = "0.48.0", default-features = false, features = ["Win32_Storage_FileSystem", "Win32_Foundation"] }
criterion = { version = "0.4", optional = true, default-features = false }
[dev-dependencies]
-lightning = { version = "0.0.118", path = "../lightning", features = ["_test_utils"] }
+lightning = { version = "0.0.119", path = "../lightning", features = ["_test_utils"] }
bitcoin = { version = "0.30.2", default-features = false }
use lightning::util::persist::read_channel_monitors;
use std::fs;
use std::str::FromStr;
- #[cfg(target_os = "windows")]
- use {
- lightning::get_event_msg,
- lightning::ln::msgs::ChannelMessageHandler,
- };
impl Drop for FilesystemStore {
fn drop(&mut self) {
[package]
name = "lightning-rapid-gossip-sync"
-version = "0.0.118"
+version = "0.0.119"
authors = ["Arik Sosman <git@arik.io>"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning"
std = ["lightning/std"]
[dependencies]
-lightning = { version = "0.0.118", path = "../lightning", default-features = false }
+lightning = { version = "0.0.119", path = "../lightning", default-features = false }
bitcoin = { version = "0.30.2", default-features = false }
[target.'cfg(ldk_bench)'.dependencies]
criterion = { version = "0.4", optional = true, default-features = false }
[dev-dependencies]
-lightning = { version = "0.0.118", path = "../lightning", features = ["_test_utils"] }
+lightning = { version = "0.0.119", path = "../lightning", features = ["_test_utils"] }
[package]
name = "lightning-transaction-sync"
-version = "0.0.118"
+version = "0.0.119"
authors = ["Elias Rohrer"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning"
rustdoc-args = ["--cfg", "docsrs"]
[features]
-default = []
+default = ["time"]
+time = []
esplora-async = ["async-interface", "esplora-client/async", "futures"]
esplora-async-https = ["esplora-async", "esplora-client/async-https-rustls"]
esplora-blocking = ["esplora-client/blocking"]
async-interface = []
[dependencies]
-lightning = { version = "0.0.118", path = "../lightning", default-features = false, features = ["std"] }
+lightning = { version = "0.0.119", path = "../lightning", default-features = false, features = ["std"] }
bitcoin = { version = "0.30.2", default-features = false }
bdk-macros = "0.6"
futures = { version = "0.3", optional = true }
electrum-client = { version = "0.18.0", optional = true }
[dev-dependencies]
-lightning = { version = "0.0.118", path = "../lightning", default-features = false, features = ["std", "_test_utils"] }
-tokio = { version = "1.14.0", features = ["full"] }
+lightning = { version = "0.0.119", path = "../lightning", default-features = false, features = ["std", "_test_utils"] }
+tokio = { version = "1.35.0", features = ["full"] }
[target.'cfg(not(no_download))'.dev-dependencies]
electrsd = { version = "0.26.0", default-features = false, features = ["legacy", "esplora_a33e97e1", "bitcoind_25_0"] }
let mut sync_state = self.sync_state.lock().unwrap();
log_trace!(self.logger, "Starting transaction sync.");
+ #[cfg(feature = "time")]
let start_time = Instant::now();
let mut num_confirmed = 0;
let mut num_unconfirmed = 0;
sync_state.pending_sync = false;
}
}
+ #[cfg(feature = "time")]
log_debug!(self.logger,
"Finished transaction sync at tip {} in {}ms: {} confirmed, {} unconfirmed.",
tip_header.block_hash(), start_time.elapsed().as_millis(), num_confirmed,
num_unconfirmed);
+ #[cfg(not(feature = "time"))]
+ log_debug!(self.logger,
+ "Finished transaction sync at tip {}: {} confirmed, {} unconfirmed.",
+ tip_header.block_hash(), num_confirmed, num_unconfirmed);
Ok(())
}
#[cfg(not(feature = "async-interface"))]
use esplora_client::blocking::BlockingClient;
-use std::time::Instant;
use std::collections::HashSet;
use core::ops::Deref;
let mut sync_state = self.sync_state.lock().await;
log_trace!(self.logger, "Starting transaction sync.");
- let start_time = Instant::now();
+ #[cfg(feature = "time")]
+ let start_time = std::time::Instant::now();
let mut num_confirmed = 0;
let mut num_unconfirmed = 0;
sync_state.pending_sync = false;
}
}
+ #[cfg(feature = "time")]
log_debug!(self.logger, "Finished transaction sync at tip {} in {}ms: {} confirmed, {} unconfirmed.",
tip_hash, start_time.elapsed().as_millis(), num_confirmed, num_unconfirmed);
+ #[cfg(not(feature = "time"))]
+ log_debug!(self.logger, "Finished transaction sync at tip {}: {} confirmed, {} unconfirmed.",
+ tip_hash, num_confirmed, num_unconfirmed);
Ok(())
}
[package]
name = "lightning"
-version = "0.0.118"
+version = "0.0.119"
authors = ["Matt Corallo"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/lightningdevkit/rust-lightning/"
# Override signing to not include randomness when generating signatures for test vectors.
_test_vectors = []
-no-std = ["hashbrown", "bitcoin/no-std", "core2/alloc"]
+no-std = ["hashbrown", "bitcoin/no-std", "core2/alloc", "libm"]
std = ["bitcoin/std"]
# Generates low-r bitcoin signatures, which saves 1 byte in 50% of the cases
backtrace = { version = "0.3", optional = true }
core2 = { version = "0.3.0", optional = true, default-features = false }
+libm = { version = "0.2", optional = true, default-features = false }
[dev-dependencies]
regex = "1.5.6"
use crate::blinded_path::utils;
use crate::io;
use crate::ln::PaymentSecret;
+use crate::ln::channelmanager::CounterpartyForwardingInfo;
use crate::ln::features::BlindedHopFeatures;
use crate::ln::msgs::DecodeError;
use crate::offers::invoice::BlindedPayInfo;
pub htlc_minimum_msat: u64,
}
+impl From<CounterpartyForwardingInfo> for PaymentRelay {
+ fn from(info: CounterpartyForwardingInfo) -> Self {
+ let CounterpartyForwardingInfo {
+ fee_base_msat, fee_proportional_millionths, cltv_expiry_delta
+ } = info;
+ Self { cltv_expiry_delta, fee_proportional_millionths, fee_base_msat }
+ }
+}
+
impl Writeable for ForwardTlvs {
fn write<W: Writer>(&self, w: &mut W) -> Result<(), io::Error> {
encode_tlv_stream!(w, {
use crate::events;
use crate::events::{Event, EventHandler};
use crate::util::atomic_counter::AtomicCounter;
-use crate::util::logger::Logger;
+use crate::util::logger::{Logger, WithContext};
use crate::util::errors::APIError;
use crate::util::wakers::{Future, Notifier};
use crate::ln::channelmanager::ChannelDetails;
let monitors = self.monitors.read().unwrap();
match monitors.get(&funding_txo) {
None => {
- log_error!(self.logger, "Failed to update channel monitor: no such monitor registered");
+ let logger = WithContext::from(&self.logger, update.counterparty_node_id, Some(funding_txo.to_channel_id()));
+ log_error!(logger, "Failed to update channel monitor: no such monitor registered");
// We should never ever trigger this from within ChannelManager. Technically a
// user could use this object with some proxying in between which makes this
#[must_use]
pub struct ChannelMonitorUpdate {
pub(crate) updates: Vec<ChannelMonitorUpdateStep>,
+ /// Historically, [`ChannelMonitor`]s didn't know their counterparty node id. However,
+ /// `ChannelManager` really wants to know it so that it can easily look up the corresponding
+ /// channel. For now, this results in a temporary map in `ChannelManager` to look up channels
+ /// by only the funding outpoint.
+ ///
+ /// To eventually remove that, we repeat the counterparty node id here so that we can upgrade
+ /// `ChannelMonitor`s to become aware of the counterparty node id if they were generated prior
+ /// to when it was stored directly in them.
+ pub(crate) counterparty_node_id: Option<PublicKey>,
/// The sequence number of this update. Updates *must* be replayed in-order according to this
/// sequence number (and updates may panic if they are not). The update_id values are strictly
/// increasing and increase by one for each new update, with two exceptions specified below.
for update_step in self.updates.iter() {
update_step.write(w)?;
}
- write_tlv_fields!(w, {});
+ write_tlv_fields!(w, {
+ (1, self.counterparty_node_id, option),
+ });
Ok(())
}
}
updates.push(upd);
}
}
- read_tlv_fields!(r, {});
- Ok(Self { update_id, updates })
+ let mut counterparty_node_id = None;
+ read_tlv_fields!(r, {
+ (1, counterparty_node_id, option),
+ });
+ Ok(Self { update_id, counterparty_node_id, updates })
}
}
log_info!(logger, "Applying update to monitor {}, bringing update_id from {} to {} with {} change(s).",
log_funding_info!(self), self.latest_update_id, updates.update_id, updates.updates.len());
}
+
+ if updates.counterparty_node_id.is_some() {
+ if self.counterparty_node_id.is_none() {
+ self.counterparty_node_id = updates.counterparty_node_id;
+ } else {
+ debug_assert_eq!(self.counterparty_node_id, updates.counterparty_node_id);
+ }
+ }
+
// ChannelMonitor updates may be applied after force close if we receive a preimage for a
// broadcasted commitment transaction HTLC output that we'd like to claim on-chain. If this
// is the case, we no longer have guaranteed access to the monitor's update ID, so we use a
#[cfg(any(test, feature = "_test_utils"))] extern crate regex;
#[cfg(not(feature = "std"))] extern crate core2;
+#[cfg(not(feature = "std"))] extern crate libm;
#[cfg(ldk_bench)] extern crate criterion;
}
/// The maximum length of a script returned by get_revokeable_redeemscript.
-// Calculated as 6 bytes of opcodes, 1 byte push plus 2 bytes for contest_delay, and two public
-// keys of 33 bytes (+ 1 push).
-pub const REVOKEABLE_REDEEMSCRIPT_MAX_LENGTH: usize = 6 + 3 + 34*2;
+// Calculated as 6 bytes of opcodes, 1 byte push plus 3 bytes for contest_delay, and two public
+// keys of 33 bytes (+ 1 push). Generally, pushes are only 2 bytes (for values below 0x7fff, i.e.
+// around 7 months), however, a 7 month contest delay shouldn't result in being unable to reclaim
+// on-chain funds.
+pub const REVOKEABLE_REDEEMSCRIPT_MAX_LENGTH: usize = 6 + 4 + 34*2;
/// A script either spendable by the revocation
/// key or the broadcaster_delayed_payment_key and satisfying the relative-locktime OP_CSV constraint.
total_fee_sat: u64, // the total fee included in the transaction
num_nondust_htlcs: usize, // the number of HTLC outputs (dust HTLCs *non*-included)
htlcs_included: Vec<(HTLCOutputInCommitment, Option<&'a HTLCSource>)>, // the list of HTLCs (dust HTLCs *included*) which were not ignored when building the transaction
- local_balance_msat: u64, // local balance before fees but considering dust limits
- remote_balance_msat: u64, // remote balance before fees but considering dust limits
+ local_balance_msat: u64, // local balance before fees *not* considering dust limits
+ remote_balance_msat: u64, // remote balance before fees *not* considering dust limits
outbound_htlc_preimages: Vec<PaymentPreimage>, // preimages for successful offered HTLCs since last commitment
inbound_htlc_preimages: Vec<PaymentPreimage>, // preimages for successful received HTLCs since last commitment
}
/// The result of a shutdown that should be handled.
#[must_use]
pub(crate) struct ShutdownResult {
+ pub(crate) closure_reason: ClosureReason,
/// A channel monitor update to apply.
pub(crate) monitor_update: Option<(PublicKey, OutPoint, ChannelMonitorUpdate)>,
/// A list of dropped outbound HTLCs that can safely be failed backwards immediately.
/// propagated to the remainder of the batch.
pub(crate) unbroadcasted_batch_funding_txid: Option<Txid>,
pub(crate) channel_id: ChannelId,
+ pub(crate) user_channel_id: u128,
+ pub(crate) channel_capacity_satoshis: u64,
pub(crate) counterparty_node_id: PublicKey,
+ pub(crate) unbroadcasted_funding_tx: Option<Transaction>,
}
/// If the majority of the channel's funds are to the fundee and the initiator holds only just
}
}
- let mut value_to_self_msat: i64 = (self.value_to_self_msat - local_htlc_total_msat) as i64 + value_to_self_msat_offset;
+ let value_to_self_msat: i64 = (self.value_to_self_msat - local_htlc_total_msat) as i64 + value_to_self_msat_offset;
assert!(value_to_self_msat >= 0);
// Note that in case they have several just-awaiting-last-RAA fulfills in-progress (ie
// AwaitingRemoteRevokeToRemove or AwaitingRemovedRemoteRevoke) we may have allowed them to
// "violate" their reserve value by couting those against it. Thus, we have to convert
// everything to i64 before subtracting as otherwise we can overflow.
- let mut value_to_remote_msat: i64 = (self.channel_value_satoshis * 1000) as i64 - (self.value_to_self_msat as i64) - (remote_htlc_total_msat as i64) - value_to_self_msat_offset;
+ let value_to_remote_msat: i64 = (self.channel_value_satoshis * 1000) as i64 - (self.value_to_self_msat as i64) - (remote_htlc_total_msat as i64) - value_to_self_msat_offset;
assert!(value_to_remote_msat >= 0);
#[cfg(debug_assertions)]
htlcs_included.sort_unstable_by_key(|h| h.0.transaction_output_index.unwrap());
htlcs_included.append(&mut included_dust_htlcs);
- // For the stats, trimmed-to-0 the value in msats accordingly
- value_to_self_msat = if (value_to_self_msat * 1000) < broadcaster_dust_limit_satoshis as i64 { 0 } else { value_to_self_msat };
- value_to_remote_msat = if (value_to_remote_msat * 1000) < broadcaster_dust_limit_satoshis as i64 { 0 } else { value_to_remote_msat };
-
CommitmentStats {
tx,
feerate_per_kw,
/// will sign and send to our counterparty.
/// If an Err is returned, it is a ChannelError::Close (for get_funding_created)
fn build_remote_transaction_keys(&self) -> TxCreationKeys {
- //TODO: Ensure that the payment_key derived here ends up in the library users' wallet as we
- //may see payments to it!
let revocation_basepoint = &self.get_holder_pubkeys().revocation_basepoint;
let htlc_basepoint = &self.get_holder_pubkeys().htlc_basepoint;
let counterparty_pubkeys = self.get_counterparty_pubkeys();
if let Some(feerate) = outbound_feerate_update {
feerate_per_kw = cmp::max(feerate_per_kw, feerate);
}
- cmp::max(2530, feerate_per_kw * 1250 / 1000)
+ let feerate_plus_quarter = feerate_per_kw.checked_mul(1250).map(|v| v / 1000);
+ cmp::max(2530, feerate_plus_quarter.unwrap_or(u32::max_value()))
}
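// As a rough illustration of the buffered feerate above (assumed numbers): a current
// feerate_per_kw of 10_000 sat/kw gives max(2530, 10_000 * 1250 / 1000) = 12_500 sat/kw,
// a low 1_000 sat/kw is floored to 2_530 sat/kw, and a multiplication that would overflow
// u32 now saturates to u32::MAX via `unwrap_or` rather than wrapping or panicking.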
/// Get forwarding information for the counterparty.
res
}
- fn if_unbroadcasted_funding<F, O>(&self, f: F) -> Option<O>
- where F: Fn() -> Option<O> {
+ fn if_unbroadcasted_funding<F, O>(&self, f: F) -> Option<O> where F: Fn() -> Option<O> {
match self.channel_state {
ChannelState::FundingNegotiated => f(),
- ChannelState::AwaitingChannelReady(flags) => if flags.is_set(AwaitingChannelReadyFlags::WAITING_FOR_BATCH) {
- f()
- } else {
- None
- },
+ ChannelState::AwaitingChannelReady(flags) =>
+ if flags.is_set(AwaitingChannelReadyFlags::WAITING_FOR_BATCH) ||
+ flags.is_set(FundedStateFlags::MONITOR_UPDATE_IN_PROGRESS.into())
+ {
+ f()
+ } else {
+ None
+ },
_ => None,
}
}
/// those explicitly stated to be allowed after shutdown completes, eg some simple getters).
/// Also returns the list of payment_hashes for channels which we can safely fail backwards
/// immediately (others we will have to allow to time out).
- pub fn force_shutdown(&mut self, should_broadcast: bool) -> ShutdownResult {
+ pub fn force_shutdown(&mut self, should_broadcast: bool, closure_reason: ClosureReason) -> ShutdownResult {
// Note that we MUST only generate a monitor update that indicates force-closure - we're
// called during initialization prior to the chain_monitor in the encompassing ChannelManager
// being fully configured in some cases. Thus, it's likely any monitor events we generate will
self.latest_monitor_update_id = CLOSED_CHANNEL_UPDATE_ID;
Some((self.get_counterparty_node_id(), funding_txo, ChannelMonitorUpdate {
update_id: self.latest_monitor_update_id,
+ counterparty_node_id: Some(self.counterparty_node_id),
updates: vec![ChannelMonitorUpdateStep::ChannelForceClosed { should_broadcast }],
}))
} else { None }
} else { None };
let unbroadcasted_batch_funding_txid = self.unbroadcasted_batch_funding_txid();
+ let unbroadcasted_funding_tx = self.unbroadcasted_funding();
self.channel_state = ChannelState::ShutdownComplete;
self.update_time_counter += 1;
ShutdownResult {
+ closure_reason,
monitor_update,
dropped_outbound_htlcs,
unbroadcasted_batch_funding_txid,
channel_id: self.channel_id,
+ user_channel_id: self.user_id,
+ channel_capacity_satoshis: self.channel_value_satoshis,
counterparty_node_id: self.counterparty_node_id,
+ unbroadcasted_funding_tx,
}
}
.ok();
if funding_signed.is_none() {
- log_trace!(logger, "Counterparty commitment signature not available for funding_signed message; setting signer_pending_funding");
- self.signer_pending_funding = true;
+ #[cfg(not(async_signing))] {
+ panic!("Failed to get signature for funding_signed");
+ }
+ #[cfg(async_signing)] {
+ log_trace!(logger, "Counterparty commitment signature not available for funding_signed message; setting signer_pending_funding");
+ self.signer_pending_funding = true;
+ }
} else if self.signer_pending_funding {
log_trace!(logger, "Counterparty commitment signature available for funding_signed message; clearing signer_pending_funding");
self.signer_pending_funding = false;
self.context.latest_monitor_update_id += 1;
let monitor_update = ChannelMonitorUpdate {
update_id: self.context.latest_monitor_update_id,
+ counterparty_node_id: Some(self.context.counterparty_node_id),
updates: vec![ChannelMonitorUpdateStep::PaymentPreimage {
payment_preimage: payment_preimage_arg.clone(),
}],
self.context.channel_state.clear_waiting_for_batch();
}
+ /// Unsets the existing funding information.
+ ///
+ /// This must only be used if the channel has not yet completed funding and has not been used.
+ ///
+ /// Further, the channel must be immediately shut down after this with a call to
+ /// [`ChannelContext::force_shutdown`].
+ pub fn unset_funding_info(&mut self, temporary_channel_id: ChannelId) {
+ debug_assert!(matches!(
+ self.context.channel_state, ChannelState::AwaitingChannelReady(_)
+ ));
+ self.context.channel_transaction_parameters.funding_outpoint = None;
+ self.context.channel_id = temporary_channel_id;
+ }
+
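// A minimal caller-side sketch of the documented contract above (assumed usage, mirroring
// the `fail_chan!` handling of `funding_created` later in this diff):
//
//   chan.unset_funding_info(temporary_channel_id);
//   let shutdown_res = chan.context.force_shutdown(
//       false, ClosureReason::ProcessingError { err: "funding conflict".to_owned() });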
/// Handles a channel_ready message from our peer. If we've already sent our channel_ready
/// and the channel is now usable (and public), this may generate an announcement_signatures to
/// reply with.
self.context.latest_monitor_update_id += 1;
let mut monitor_update = ChannelMonitorUpdate {
update_id: self.context.latest_monitor_update_id,
+ counterparty_node_id: Some(self.context.counterparty_node_id),
updates: vec![ChannelMonitorUpdateStep::LatestHolderCommitmentTXInfo {
commitment_tx: holder_commitment_tx,
htlc_outputs: htlcs_and_sigs,
let mut monitor_update = ChannelMonitorUpdate {
update_id: self.context.latest_monitor_update_id + 1, // We don't increment this yet!
+ counterparty_node_id: Some(self.context.counterparty_node_id),
updates: Vec::new(),
};
self.context.latest_monitor_update_id += 1;
let mut monitor_update = ChannelMonitorUpdate {
update_id: self.context.latest_monitor_update_id,
+ counterparty_node_id: Some(self.context.counterparty_node_id),
updates: vec![ChannelMonitorUpdateStep::CommitmentSecret {
idx: self.context.cur_counterparty_commitment_transaction_number + 1,
secret: msg.per_commitment_secret,
/// Indicates that the signer may have some signatures for us, so we should retry if we're
/// blocked.
- #[allow(unused)]
+ #[cfg(async_signing)]
pub fn signer_maybe_unblocked<L: Deref>(&mut self, logger: &L) -> SignerResumeUpdates where L::Target: Logger {
let commitment_update = if self.context.signer_pending_commitment_update {
self.get_last_commitment_update_for_send(logger).ok()
}
update
} else {
- if !self.context.signer_pending_commitment_update {
- log_trace!(logger, "Commitment update awaiting signer: setting signer_pending_commitment_update");
- self.context.signer_pending_commitment_update = true;
+ #[cfg(not(async_signing))] {
+ panic!("Failed to get signature for new commitment state");
+ }
+ #[cfg(async_signing)] {
+ if !self.context.signer_pending_commitment_update {
+ log_trace!(logger, "Commitment update awaiting signer: setting signer_pending_commitment_update");
+ self.context.signer_pending_commitment_update = true;
+ }
+ return Err(());
}
- return Err(());
};
Ok(msgs::CommitmentUpdate {
update_add_htlcs, update_fulfill_htlcs, update_fail_htlcs, update_fail_malformed_htlcs, update_fee,
self.context.latest_monitor_update_id += 1;
let monitor_update = ChannelMonitorUpdate {
update_id: self.context.latest_monitor_update_id,
+ counterparty_node_id: Some(self.context.counterparty_node_id),
updates: vec![ChannelMonitorUpdateStep::ShutdownScript {
scriptpubkey: self.get_closing_scriptpubkey(),
}],
if let Some((last_fee, sig)) = self.context.last_sent_closing_fee {
if last_fee == msg.fee_satoshis {
let shutdown_result = ShutdownResult {
+ closure_reason: ClosureReason::CooperativeClosure,
monitor_update: None,
dropped_outbound_htlcs: Vec::new(),
unbroadcasted_batch_funding_txid: self.context.unbroadcasted_batch_funding_txid(),
channel_id: self.context.channel_id,
+ user_channel_id: self.context.user_id,
+ channel_capacity_satoshis: self.context.channel_value_satoshis,
counterparty_node_id: self.context.counterparty_node_id,
+ unbroadcasted_funding_tx: self.context.unbroadcasted_funding(),
};
let tx = self.build_signed_closing_transaction(&mut closing_tx, &msg.signature, &sig);
self.context.channel_state = ChannelState::ShutdownComplete;
.map_err(|_| ChannelError::Close("External signer refused to sign closing transaction".to_owned()))?;
let (signed_tx, shutdown_result) = if $new_fee == msg.fee_satoshis {
let shutdown_result = ShutdownResult {
+ closure_reason: ClosureReason::CooperativeClosure,
monitor_update: None,
dropped_outbound_htlcs: Vec::new(),
unbroadcasted_batch_funding_txid: self.context.unbroadcasted_batch_funding_txid(),
channel_id: self.context.channel_id,
+ user_channel_id: self.context.user_id,
+ channel_capacity_satoshis: self.context.channel_value_satoshis,
counterparty_node_id: self.context.counterparty_node_id,
+ unbroadcasted_funding_tx: self.context.unbroadcasted_funding(),
};
self.context.channel_state = ChannelState::ShutdownComplete;
self.context.update_time_counter += 1;
self.context.latest_monitor_update_id += 1;
let monitor_update = ChannelMonitorUpdate {
update_id: self.context.latest_monitor_update_id,
+ counterparty_node_id: Some(self.context.counterparty_node_id),
updates: vec![ChannelMonitorUpdateStep::LatestCounterpartyCommitmentTXInfo {
commitment_txid: counterparty_commitment_txid,
htlc_outputs: htlcs.clone(),
self.context.latest_monitor_update_id += 1;
let monitor_update = ChannelMonitorUpdate {
update_id: self.context.latest_monitor_update_id,
+ counterparty_node_id: Some(self.context.counterparty_node_id),
updates: vec![ChannelMonitorUpdateStep::ShutdownScript {
scriptpubkey: self.get_closing_scriptpubkey(),
}],
let funding_created = self.get_funding_created_msg(logger);
if funding_created.is_none() {
- if !self.context.signer_pending_funding {
- log_trace!(logger, "funding_created awaiting signer; setting signer_pending_funding");
- self.context.signer_pending_funding = true;
+ #[cfg(not(async_signing))] {
+ panic!("Failed to get signature for new funding creation");
+ }
+ #[cfg(async_signing)] {
+ if !self.context.signer_pending_funding {
+ log_trace!(logger, "funding_created awaiting signer; setting signer_pending_funding");
+ self.context.signer_pending_funding = true;
+ }
}
}
/// Indicates that the signer may have some signatures for us, so we should retry if we're
/// blocked.
- #[allow(unused)]
+ #[cfg(async_signing)]
pub fn signer_maybe_unblocked<L: Deref>(&mut self, logger: &L) -> Option<msgs::FundingCreated> where L::Target: Logger {
if self.context.signer_pending_funding && self.context.is_outbound() {
log_trace!(logger, "Signer unblocked a funding_created");
pub unfunded_context: UnfundedChannelContext,
}
+/// Fetches the [`ChannelTypeFeatures`] that will be used for a channel built from a given
+/// [`msgs::OpenChannel`].
+pub(super) fn channel_type_from_open_channel(
+ msg: &msgs::OpenChannel, their_features: &InitFeatures,
+ our_supported_features: &ChannelTypeFeatures
+) -> Result<ChannelTypeFeatures, ChannelError> {
+ if let Some(channel_type) = &msg.channel_type {
+ if channel_type.supports_any_optional_bits() {
+ return Err(ChannelError::Close("Channel Type field contained optional bits - this is not allowed".to_owned()));
+ }
+
+ // We only support the channel types defined by the `ChannelManager` in
+ // `provided_channel_type_features`. The channel type must always support
+ // `static_remote_key`.
+ if !channel_type.requires_static_remote_key() {
+ return Err(ChannelError::Close("Channel Type was not understood - we require static remote key".to_owned()));
+ }
+ // Make sure we support all of the features behind the channel type.
+ if !channel_type.is_subset(our_supported_features) {
+ return Err(ChannelError::Close("Channel Type contains unsupported features".to_owned()));
+ }
+ let announced_channel = (msg.channel_flags & 1) == 1;
+ if channel_type.requires_scid_privacy() && announced_channel {
+ return Err(ChannelError::Close("SCID Alias/Privacy Channel Type cannot be set on a public channel".to_owned()));
+ }
+ Ok(channel_type.clone())
+ } else {
+ let channel_type = ChannelTypeFeatures::from_init(&their_features);
+ if channel_type != ChannelTypeFeatures::only_static_remote_key() {
+ return Err(ChannelError::Close("Only static_remote_key is supported for non-negotiated channel types".to_owned()));
+ }
+ Ok(channel_type)
+ }
+}
+
impl<SP: Deref> InboundV1Channel<SP> where SP::Target: SignerProvider {
/// Creates a new channel from a remote sides' request for one.
/// Assumes chain_hash has already been checked and corresponds with what we expect!
// First check the channel type is known, failing before we do anything else if we don't
// support this channel type.
- let channel_type = if let Some(channel_type) = &msg.channel_type {
- if channel_type.supports_any_optional_bits() {
- return Err(ChannelError::Close("Channel Type field contained optional bits - this is not allowed".to_owned()));
- }
-
- // We only support the channel types defined by the `ChannelManager` in
- // `provided_channel_type_features`. The channel type must always support
- // `static_remote_key`.
- if !channel_type.requires_static_remote_key() {
- return Err(ChannelError::Close("Channel Type was not understood - we require static remote key".to_owned()));
- }
- // Make sure we support all of the features behind the channel type.
- if !channel_type.is_subset(our_supported_features) {
- return Err(ChannelError::Close("Channel Type contains unsupported features".to_owned()));
- }
- if channel_type.requires_scid_privacy() && announced_channel {
- return Err(ChannelError::Close("SCID Alias/Privacy Channel Type cannot be set on a public channel".to_owned()));
- }
- channel_type.clone()
- } else {
- let channel_type = ChannelTypeFeatures::from_init(&their_features);
- if channel_type != ChannelTypeFeatures::only_static_remote_key() {
- return Err(ChannelError::Close("Only static_remote_key is supported for non-negotiated channel types".to_owned()));
- }
- channel_type
- };
+ let channel_type = channel_type_from_open_channel(msg, their_features, our_supported_features)?;
let channel_keys_id = signer_provider.generate_channel_keys_id(true, msg.funding_satoshis, user_id);
let holder_signer = signer_provider.derive_channel_signer(msg.funding_satoshis, channel_keys_id);
use bitcoin::hashes::sha256::Hash as Sha256;
macro_rules! doc_comment {
- ($x:expr, $($tt:tt)*) => {
- #[doc = $x]
- $($tt)*
- };
+ ($x:expr, $($tt:tt)*) => {
+ #[doc = $x]
+ $($tt)*
+ };
}
macro_rules! basepoint_impl {
- ($BasepointT:ty) => {
- impl $BasepointT {
- /// Get inner Public Key
- pub fn to_public_key(&self) -> PublicKey {
- self.0
- }
- }
-
- impl From<PublicKey> for $BasepointT {
- fn from(value: PublicKey) -> Self {
- Self(value)
- }
- }
-
- }
+ ($BasepointT:ty) => {
+ impl $BasepointT {
+ /// Get inner Public Key
+ pub fn to_public_key(&self) -> PublicKey {
+ self.0
+ }
+ }
+
+ impl From<PublicKey> for $BasepointT {
+ fn from(value: PublicKey) -> Self {
+ Self(value)
+ }
+ }
+
+ }
}
macro_rules! key_impl {
- ($BasepointT:ty, $KeyName:expr) => {
- doc_comment! {
- concat!("Generate ", $KeyName, " using per_commitment_point"),
- pub fn from_basepoint<T: secp256k1::Signing>(
- secp_ctx: &Secp256k1<T>,
- basepoint: &$BasepointT,
- per_commitment_point: &PublicKey,
- ) -> Self {
- Self(derive_public_key(secp_ctx, per_commitment_point, &basepoint.0))
- }
- }
-
- doc_comment! {
- concat!("Generate ", $KeyName, " from privkey"),
- pub fn from_secret_key<T: secp256k1::Signing>(secp_ctx: &Secp256k1<T>, sk: &SecretKey) -> Self {
- Self(PublicKey::from_secret_key(&secp_ctx, &sk))
- }
- }
-
- /// Get inner Public Key
- pub fn to_public_key(&self) -> PublicKey {
- self.0
- }
- }
+ ($BasepointT:ty, $KeyName:expr) => {
+ doc_comment! {
+ concat!("Derive a public ", $KeyName, " using one node's `per_commitment_point` and its countersignatory's `basepoint`"),
+ pub fn from_basepoint<T: secp256k1::Signing>(
+ secp_ctx: &Secp256k1<T>,
+ countersignatory_basepoint: &$BasepointT,
+ per_commitment_point: &PublicKey,
+ ) -> Self {
+ Self(derive_public_key(secp_ctx, per_commitment_point, &countersignatory_basepoint.0))
+ }
+ }
+
+ doc_comment! {
+ concat!("Build a ", $KeyName, " directly from an already-derived private key"),
+ pub fn from_secret_key<T: secp256k1::Signing>(secp_ctx: &Secp256k1<T>, sk: &SecretKey) -> Self {
+ Self(PublicKey::from_secret_key(&secp_ctx, &sk))
+ }
+ }
+
+ /// Get inner Public Key
+ pub fn to_public_key(&self) -> PublicKey {
+ self.0
+ }
+ }
}
macro_rules! key_read_write {
- ($SelfT:ty) => {
- impl Writeable for $SelfT {
- fn write<W: Writer>(&self, w: &mut W) -> Result<(), io::Error> {
- self.0.serialize().write(w)
- }
- }
-
- impl Readable for $SelfT {
- fn read<R: io::Read>(r: &mut R) -> Result<Self, DecodeError> {
- let key: PublicKey = Readable::read(r)?;
- Ok(Self(key))
- }
- }
- }
+ ($SelfT:ty) => {
+ impl Writeable for $SelfT {
+ fn write<W: Writer>(&self, w: &mut W) -> Result<(), io::Error> {
+ self.0.serialize().write(w)
+ }
+ }
+
+ impl Readable for $SelfT {
+ fn read<R: io::Read>(r: &mut R) -> Result<Self, DecodeError> {
+ let key: PublicKey = Readable::read(r)?;
+ Ok(Self(key))
+ }
+ }
+ }
}
-/// Master key used in conjunction with per_commitment_point to generate [`local_delayedpubkey`](https://github.com/lightning/bolts/blob/master/03-transactions.md#key-derivation) for the latest state of a channel.
-/// A watcher can be given a [DelayedPaymentBasepoint] to generate per commitment [DelayedPaymentKey] to create justice transactions.
+/// Base key used in conjunction with a `per_commitment_point` to generate a [`DelayedPaymentKey`].
+///
+/// The delayed payment key is used to pay the commitment state broadcaster their
+/// non-HTLC-encumbered funds after a delay to give their counterparty a chance to punish if the
+/// state broadcasted was previously revoked.
#[derive(PartialEq, Eq, Clone, Copy, Debug, Hash)]
pub struct DelayedPaymentBasepoint(pub PublicKey);
basepoint_impl!(DelayedPaymentBasepoint);
key_read_write!(DelayedPaymentBasepoint);
-/// [delayedpubkey](https://github.com/lightning/bolts/blob/master/03-transactions.md#localpubkey-local_htlcpubkey-remote_htlcpubkey-local_delayedpubkey-and-remote_delayedpubkey-derivation)
-/// To allow a counterparty to contest a channel state published by a node, Lightning protocol sets delays for some of the outputs, before can be spend.
-/// For example a commitment transaction has to_local output encumbered by a delay, negotiated at the channel establishment flow.
-/// To spend from such output a node has to generate a script using, among others, a local delayed payment key.
+
+/// A derived key built from a [`DelayedPaymentBasepoint`] and `per_commitment_point`.
+///
+/// The delayed payment key is used to pay the commitment state broadcaster their
+/// non-HTLC-encumbered funds after a delay. This delay gives their counterparty a chance to
+/// punish and claim all the channel funds if the state broadcasted was previously revoked.
+///
+/// See [the BOLT specs](https://github.com/lightning/bolts/blob/master/03-transactions.md#localpubkey-local_htlcpubkey-remote_htlcpubkey-local_delayedpubkey-and-remote_delayedpubkey-derivation)
+/// for more information on key derivation details.
#[derive(PartialEq, Eq, Clone, Copy, Debug)]
pub struct DelayedPaymentKey(pub PublicKey);
impl DelayedPaymentKey {
- key_impl!(DelayedPaymentBasepoint, "delayedpubkey");
+ key_impl!(DelayedPaymentBasepoint, "delayedpubkey");
}
key_read_write!(DelayedPaymentKey);
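// An illustrative sketch of deriving a per-commitment delayed payment key from its
// basepoint via the generated `from_basepoint` helper (arbitrary example keys; real
// key material comes from the channel signer). `HtlcKey` derivation from an
// `HtlcBasepoint` follows the same pattern.
#[cfg(test)]
#[test]
fn example_derive_delayed_payment_key() {
    use bitcoin::secp256k1::{PublicKey, Secp256k1, SecretKey};

    let secp_ctx = Secp256k1::new();
    let basepoint = DelayedPaymentBasepoint::from(PublicKey::from_secret_key(
        &secp_ctx, &SecretKey::from_slice(&[0x42; 32]).unwrap()));
    let per_commitment_point = PublicKey::from_secret_key(
        &secp_ctx, &SecretKey::from_slice(&[0x43; 32]).unwrap());
    let _delayed_key = DelayedPaymentKey::from_basepoint(&secp_ctx, &basepoint, &per_commitment_point);
}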
-/// Master key used in conjunction with per_commitment_point to generate a [localpubkey](https://github.com/lightning/bolts/blob/master/03-transactions.md#key-derivation) for the latest state of a channel.
-/// Also used to generate a commitment number in a commitment transaction or as a Payment Key for a remote node (not us) in an anchor output if `option_static_remotekey` is enabled.
-/// Shared by both nodes in a channel establishment message flow.
-#[derive(PartialEq, Eq, Clone, Copy, Debug, Hash)]
-pub struct PaymentBasepoint(pub PublicKey);
-basepoint_impl!(PaymentBasepoint);
-key_read_write!(PaymentBasepoint);
-
-
-/// [localpubkey](https://github.com/lightning/bolts/blob/master/03-transactions.md#localpubkey-local_htlcpubkey-remote_htlcpubkey-local_delayedpubkey-and-remote_delayedpubkey-derivation) is a child key of a payment basepoint,
-/// that enables a secure hash-lock for off-chain payments without risk of funds getting stuck or stolen. A payment key is normally shared with a counterparty so that it can generate
-/// a commitment transaction's to_remote ouput, which our node can claim in case the counterparty force closes the channel.
-#[derive(PartialEq, Eq, Clone, Copy, Debug)]
-pub struct PaymentKey(pub PublicKey);
-
-impl PaymentKey {
- key_impl!(PaymentBasepoint, "localpubkey");
-}
-key_read_write!(PaymentKey);
-
-/// Master key used in conjunction with per_commitment_point to generate [htlcpubkey](https://github.com/lightning/bolts/blob/master/03-transactions.md#key-derivation) for the latest state of a channel.
+/// Base key used in conjunction with a `per_commitment_point` to generate an [`HtlcKey`].
+///
+/// HTLC keys are used to ensure only the recipient of an HTLC can claim it on-chain with the HTLC
+/// preimage and that only the sender of an HTLC can claim it on-chain after it has timed out.
+/// Thus, both channel counterparties' HTLC keys will appear in each HTLC output's script.
#[derive(PartialEq, Eq, Clone, Copy, Debug, Hash)]
pub struct HtlcBasepoint(pub PublicKey);
basepoint_impl!(HtlcBasepoint);
key_read_write!(HtlcBasepoint);
-
-/// [htlcpubkey](https://github.com/lightning/bolts/blob/master/03-transactions.md#localpubkey-local_htlcpubkey-remote_htlcpubkey-local_delayedpubkey-and-remote_delayedpubkey-derivation) is a child key of an htlc basepoint,
-/// that enables secure routing of payments in onion scheme without a risk of them getting stuck or diverted. It is used to claim the funds in successful or timed out htlc outputs.
+/// A derived key built from a [`HtlcBasepoint`] and `per_commitment_point`.
+///
+/// HTLC keys are used to ensure only the recipient of an HTLC can claim it on-chain with the HTLC
+/// preimage and that only the sender of an HTLC can claim it on-chain after it has timed out.
+/// Thus, both channel counterparties' HTLC keys will appear in each HTLC output's script.
+///
+/// See [the BOLT specs](https://github.com/lightning/bolts/blob/master/03-transactions.md#localpubkey-local_htlcpubkey-remote_htlcpubkey-local_delayedpubkey-and-remote_delayedpubkey-derivation)
+/// for more information on key derivation details.
#[derive(PartialEq, Eq, Clone, Copy, Debug)]
pub struct HtlcKey(pub PublicKey);
impl HtlcKey {
- key_impl!(HtlcBasepoint, "htlcpubkey");
+ key_impl!(HtlcBasepoint, "htlcpubkey");
}
key_read_write!(HtlcKey);
sha.input(&per_commitment_point.serialize());
sha.input(&base_point.serialize());
let res = Sha256::from_engine(sha).to_byte_array();
-
let hashkey = PublicKey::from_secret_key(&secp_ctx,
&SecretKey::from_slice(&res).expect("Hashes should always be valid keys unless SHA-256 is broken"));
key_read_write!(RevocationBasepoint);
-/// [htlcpubkey](https://github.com/lightning/bolts/blob/master/03-transactions.md#localpubkey-local_htlcpubkey-remote_htlcpubkey-local_delayedpubkey-and-remote_delayedpubkey-derivation) is a child key of a revocation basepoint,
-/// that enables a node to create a justice transaction punishing a counterparty for an attempt to steal funds. Used to in generation of commitment and htlc outputs.
+/// The revocation key is used to allow a channel party to revoke their state - giving their
+/// counterparty the required material to claim all of their funds if they broadcast that state.
+///
+/// Each commitment transaction has a revocation key, based on the basepoint and
+/// `per_commitment_point`, which is used in both commitment and HTLC transactions.
+///
+/// See [the BOLT spec](https://github.com/lightning/bolts/blob/master/03-transactions.md#revocationpubkey-derivation)
+/// for derivation details.
#[derive(PartialEq, Eq, Clone, Copy, Debug, Hash)]
pub struct RevocationKey(pub PublicKey);
impl RevocationKey {
- /// Derives a per-commitment-transaction revocation public key from its constituent parts. This is
- /// the public equivalend of derive_private_revocation_key - using only public keys to derive a
- /// public key instead of private keys.
- ///
- /// Only the cheating participant owns a valid witness to propagate a revoked
- /// commitment transaction, thus per_commitment_point always come from cheater
- /// and revocation_base_point always come from punisher, which is the broadcaster
- /// of the transaction spending with this key knowledge.
- ///
- /// Note that this is infallible iff we trust that at least one of the two input keys are randomly
- /// generated (ie our own).
- pub fn from_basepoint<T: secp256k1::Verification>(
- secp_ctx: &Secp256k1<T>,
- basepoint: &RevocationBasepoint,
- per_commitment_point: &PublicKey,
- ) -> Self {
- let rev_append_commit_hash_key = {
- let mut sha = Sha256::engine();
- sha.input(&basepoint.to_public_key().serialize());
- sha.input(&per_commitment_point.serialize());
-
- Sha256::from_engine(sha).to_byte_array()
- };
- let commit_append_rev_hash_key = {
- let mut sha = Sha256::engine();
- sha.input(&per_commitment_point.serialize());
- sha.input(&basepoint.to_public_key().serialize());
-
- Sha256::from_engine(sha).to_byte_array()
- };
-
- let countersignatory_contrib = basepoint.to_public_key().mul_tweak(&secp_ctx, &Scalar::from_be_bytes(rev_append_commit_hash_key).unwrap())
- .expect("Multiplying a valid public key by a hash is expected to never fail per secp256k1 docs");
- let broadcaster_contrib = (&per_commitment_point).mul_tweak(&secp_ctx, &Scalar::from_be_bytes(commit_append_rev_hash_key).unwrap())
- .expect("Multiplying a valid public key by a hash is expected to never fail per secp256k1 docs");
- let pk = countersignatory_contrib.combine(&broadcaster_contrib)
- .expect("Addition only fails if the tweak is the inverse of the key. This is not possible when the tweak commits to the key.");
- Self(pk)
- }
-
- /// Get inner Public Key
- pub fn to_public_key(&self) -> PublicKey {
- self.0
- }
+ /// Derives a per-commitment-transaction revocation public key from one party's per-commitment
+ /// point and the other party's [`RevocationBasepoint`]. This is the public equivalent of
+ /// [`chan_utils::derive_private_revocation_key`] - using only public keys to derive a public
+ /// key instead of private keys.
+ ///
+ /// Note that this is infallible iff we trust that at least one of the two input keys is randomly
+ /// generated (i.e., our own).
+ ///
+ /// [`chan_utils::derive_private_revocation_key`]: crate::ln::chan_utils::derive_private_revocation_key
+ pub fn from_basepoint<T: secp256k1::Verification>(
+ secp_ctx: &Secp256k1<T>,
+ countersignatory_basepoint: &RevocationBasepoint,
+ per_commitment_point: &PublicKey,
+ ) -> Self {
+ let rev_append_commit_hash_key = {
+ let mut sha = Sha256::engine();
+ sha.input(&countersignatory_basepoint.to_public_key().serialize());
+ sha.input(&per_commitment_point.serialize());
+
+ Sha256::from_engine(sha).to_byte_array()
+ };
+ let commit_append_rev_hash_key = {
+ let mut sha = Sha256::engine();
+ sha.input(&per_commitment_point.serialize());
+ sha.input(&countersignatory_basepoint.to_public_key().serialize());
+
+ Sha256::from_engine(sha).to_byte_array()
+ };
+
+ let countersignatory_contrib = countersignatory_basepoint.to_public_key().mul_tweak(&secp_ctx, &Scalar::from_be_bytes(rev_append_commit_hash_key).unwrap())
+ .expect("Multiplying a valid public key by a hash is expected to never fail per secp256k1 docs");
+ let broadcaster_contrib = (&per_commitment_point).mul_tweak(&secp_ctx, &Scalar::from_be_bytes(commit_append_rev_hash_key).unwrap())
+ .expect("Multiplying a valid public key by a hash is expected to never fail per secp256k1 docs");
+ let pk = countersignatory_contrib.combine(&broadcaster_contrib)
+ .expect("Addition only fails if the tweak is the inverse of the key. This is not possible when the tweak commits to the key.");
+ Self(pk)
+ }
+
+ /// Get inner Public Key
+ pub fn to_public_key(&self) -> PublicKey {
+ self.0
+ }
}
key_read_write!(RevocationKey);
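// An illustrative sketch of deriving the per-commitment revocation public key from the
// punisher's `RevocationBasepoint` and the broadcaster's `per_commitment_point`
// (arbitrary example keys; real key material comes from the channel signer).
#[cfg(test)]
#[test]
fn example_derive_revocation_key() {
    use bitcoin::secp256k1::{PublicKey, Secp256k1, SecretKey};

    let secp_ctx = Secp256k1::new();
    let countersignatory_basepoint = RevocationBasepoint::from(PublicKey::from_secret_key(
        &secp_ctx, &SecretKey::from_slice(&[0x44; 32]).unwrap()));
    let per_commitment_point = PublicKey::from_secret_key(
        &secp_ctx, &SecretKey::from_slice(&[0x45; 32]).unwrap());
    let _revocation_key = RevocationKey::from_basepoint(&secp_ctx, &countersignatory_basepoint, &per_commitment_point);
}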
-
#[cfg(test)]
mod test {
- use bitcoin::secp256k1::{Secp256k1, SecretKey, PublicKey};
- use bitcoin::hashes::hex::FromHex;
- use super::derive_public_key;
+ use bitcoin::secp256k1::{Secp256k1, SecretKey, PublicKey};
+ use bitcoin::hashes::hex::FromHex;
+ use super::derive_public_key;
- #[test]
+ #[test]
fn test_key_derivation() {
// Test vectors from BOLT 3 Appendix E:
let secp_ctx = Secp256k1::new();
assert_eq!(per_commitment_point.serialize()[..], <Vec<u8>>::from_hex("025f7117a78150fe2ef97db7cfc83bd57b2e2c0d0dd25eaf467a4a1c2a45ce1486").unwrap()[..]);
assert_eq!(derive_public_key(&secp_ctx, &per_commitment_point, &base_point).serialize()[..],
- <Vec<u8>>::from_hex("0235f2dbfaa89b57ec7b055afe29849ef7ddfeb1cefdb9ebdc43f5494984db29e5").unwrap()[..]);
+ <Vec<u8>>::from_hex("0235f2dbfaa89b57ec7b055afe29849ef7ddfeb1cefdb9ebdc43f5494984db29e5").unwrap()[..]);
}
}
// Since this struct is returned in `list_channels` methods, expose it here in case users want to
// construct one themselves.
use crate::ln::{inbound_payment, ChannelId, PaymentHash, PaymentPreimage, PaymentSecret};
-use crate::ln::channel::{Channel, ChannelPhase, ChannelContext, ChannelError, ChannelUpdateStatus, ShutdownResult, UnfundedChannelContext, UpdateFulfillCommitFetch, OutboundV1Channel, InboundV1Channel, WithChannelContext};
+use crate::ln::channel::{self, Channel, ChannelPhase, ChannelContext, ChannelError, ChannelUpdateStatus, ShutdownResult, UnfundedChannelContext, UpdateFulfillCommitFetch, OutboundV1Channel, InboundV1Channel, WithChannelContext};
use crate::ln::features::{Bolt12InvoiceFeatures, ChannelFeatures, ChannelTypeFeatures, InitFeatures, NodeFeatures};
#[cfg(any(feature = "_test_utils", test))]
use crate::ln::features::Bolt11InvoiceFeatures;
-use crate::routing::gossip::NetworkGraph;
-use crate::routing::router::{BlindedTail, DefaultRouter, InFlightHtlcs, Path, Payee, PaymentParameters, Route, RouteParameters, Router};
-use crate::routing::scoring::{ProbabilisticScorer, ProbabilisticScoringFeeParameters};
+use crate::routing::router::{BlindedTail, InFlightHtlcs, Path, Payee, PaymentParameters, Route, RouteParameters, Router};
use crate::ln::onion_payment::{check_incoming_htlc_cltv, create_recv_pending_htlc_info, create_fwd_pending_htlc_info, decode_incoming_update_add_htlc_onion, InboundOnionErr, NextPacketDetails};
use crate::ln::msgs;
use crate::ln::onion_utils;
use crate::offers::offer::{DerivedMetadata, Offer, OfferBuilder};
use crate::offers::parse::Bolt12SemanticError;
use crate::offers::refund::{Refund, RefundBuilder};
-use crate::onion_message::{Destination, OffersMessage, OffersMessageHandler, PendingOnionMessage, new_pending_onion_message};
-use crate::sign::{EntropySource, KeysManager, NodeSigner, Recipient, SignerProvider};
+use crate::onion_message::{Destination, MessageRouter, OffersMessage, OffersMessageHandler, PendingOnionMessage, new_pending_onion_message};
+use crate::sign::{EntropySource, NodeSigner, Recipient, SignerProvider};
use crate::sign::ecdsa::WriteableEcdsaChannelSigner;
use crate::util::config::{UserConfig, ChannelConfig, ChannelConfigUpdate};
use crate::util::wakers::{Future, Notifier};
use crate::util::ser::{BigSize, FixedLengthReader, Readable, ReadableArgs, MaybeReadable, Writeable, Writer, VecWriter};
use crate::util::logger::{Level, Logger, WithContext};
use crate::util::errors::APIError;
+#[cfg(not(c_bindings))]
+use {
+ crate::routing::router::DefaultRouter,
+ crate::routing::gossip::NetworkGraph,
+ crate::routing::scoring::{ProbabilisticScorer, ProbabilisticScoringFeeParameters},
+ crate::sign::KeysManager,
+};
use alloc::collections::{btree_map, BTreeMap};
struct MsgHandleErrInternal {
err: msgs::LightningError,
- chan_id: Option<(ChannelId, u128)>, // If Some a channel of ours has been closed
+ closes_channel: bool,
shutdown_finish: Option<(ShutdownResult, Option<msgs::ChannelUpdate>)>,
- channel_capacity: Option<u64>,
}
impl MsgHandleErrInternal {
#[inline]
},
},
},
- chan_id: None,
+ closes_channel: false,
shutdown_finish: None,
- channel_capacity: None,
}
}
#[inline]
fn from_no_close(err: msgs::LightningError) -> Self {
- Self { err, chan_id: None, shutdown_finish: None, channel_capacity: None }
+ Self { err, closes_channel: false, shutdown_finish: None }
}
#[inline]
- fn from_finish_shutdown(err: String, channel_id: ChannelId, user_channel_id: u128, shutdown_res: ShutdownResult, channel_update: Option<msgs::ChannelUpdate>, channel_capacity: u64) -> Self {
+ fn from_finish_shutdown(err: String, channel_id: ChannelId, shutdown_res: ShutdownResult, channel_update: Option<msgs::ChannelUpdate>) -> Self {
let err_msg = msgs::ErrorMessage { channel_id, data: err.clone() };
let action = if shutdown_res.monitor_update.is_some() {
// We have a closing `ChannelMonitorUpdate`, which means the channel was funded and we
};
Self {
err: LightningError { err, action },
- chan_id: Some((channel_id, user_channel_id)),
+ closes_channel: true,
shutdown_finish: Some((shutdown_res, channel_update)),
- channel_capacity: Some(channel_capacity)
}
}
#[inline]
},
},
},
- chan_id: None,
+ closes_channel: false,
shutdown_finish: None,
- channel_capacity: None,
}
}
fn closes_channel(&self) -> bool {
- self.chan_id.is_some()
+ self.closes_channel
}
}
// |
// |__`peer_state`
// |
-// |__`id_to_peer`
+// |__`outpoint_to_peer`
// |
// |__`short_to_chan_info`
// |
/// See `ChannelManager` struct-level documentation for lock order requirements.
outbound_scid_aliases: Mutex<HashSet<u64>>,
- /// `channel_id` -> `counterparty_node_id`.
- ///
- /// Only `channel_id`s are allowed as keys in this map, and not `temporary_channel_id`s. As
- /// multiple channels with the same `temporary_channel_id` to different peers can exist,
- /// allowing `temporary_channel_id`s in this map would cause collisions for such channels.
+ /// Channel funding outpoint -> `counterparty_node_id`.
///
/// Note that this map should only be used for `MonitorEvent` handling, to be able to access
/// the corresponding channel for the event, as we only have access to the `channel_id` during
/// required to access the channel with the `counterparty_node_id`.
///
/// See `ChannelManager` struct-level documentation for lock order requirements.
- id_to_peer: Mutex<HashMap<ChannelId, PublicKey>>,
+ #[cfg(not(test))]
+ outpoint_to_peer: Mutex<HashMap<OutPoint, PublicKey>>,
+ #[cfg(test)]
+ pub(crate) outpoint_to_peer: Mutex<HashMap<OutPoint, PublicKey>>,
/// SCIDs (and outbound SCID aliases) -> `counterparty_node_id`s and `channel_id`s.
///
match $internal {
Ok(msg) => Ok(msg),
- Err(MsgHandleErrInternal { err, chan_id, shutdown_finish, channel_capacity }) => {
+ Err(MsgHandleErrInternal { err, shutdown_finish, .. }) => {
let mut msg_events = Vec::with_capacity(2);
if let Some((shutdown_res, update_option)) = shutdown_finish {
+ let counterparty_node_id = shutdown_res.counterparty_node_id;
+ let channel_id = shutdown_res.channel_id;
+ let logger = WithContext::from(
+ &$self.logger, Some(counterparty_node_id), Some(channel_id),
+ );
+ log_error!(logger, "Force-closing channel: {}", err.err);
+
$self.finish_close_channel(shutdown_res);
if let Some(update) = update_option {
msg_events.push(events::MessageSendEvent::BroadcastChannelUpdate {
msg: update
});
}
- if let Some((channel_id, user_channel_id)) = chan_id {
- $self.pending_events.lock().unwrap().push_back((events::Event::ChannelClosed {
- channel_id, user_channel_id,
- reason: ClosureReason::ProcessingError { err: err.err.clone() },
- counterparty_node_id: Some($counterparty_node_id),
- channel_capacity_sats: channel_capacity,
- }, None));
- }
+ } else {
+ log_error!($self.logger, "Got non-closing error: {}", err.err);
}
- let logger = WithContext::from(
- &$self.logger, Some($counterparty_node_id), chan_id.map(|(chan_id, _)| chan_id)
- );
- log_error!(logger, "{}", err.err);
if let msgs::ErrorAction::IgnoreError = err.action {
} else {
msg_events.push(events::MessageSendEvent::HandleError {
macro_rules! update_maps_on_chan_removal {
($self: expr, $channel_context: expr) => {{
- $self.id_to_peer.lock().unwrap().remove(&$channel_context.channel_id());
+ if let Some(outpoint) = $channel_context.get_funding_txo() {
+ $self.outpoint_to_peer.lock().unwrap().remove(&outpoint);
+ }
let mut short_to_chan_info = $self.short_to_chan_info.write().unwrap();
if let Some(short_id) = $channel_context.get_short_channel_id() {
short_to_chan_info.remove(&short_id);
let logger = WithChannelContext::from(&$self.logger, &$channel.context);
log_error!(logger, "Closing channel {} due to close-required error: {}", $channel_id, msg);
update_maps_on_chan_removal!($self, $channel.context);
- let shutdown_res = $channel.context.force_shutdown(true);
- let user_id = $channel.context.get_user_id();
- let channel_capacity_satoshis = $channel.context.get_value_satoshis();
-
- (true, MsgHandleErrInternal::from_finish_shutdown(msg, *$channel_id, user_id,
- shutdown_res, $channel_update, channel_capacity_satoshis))
+ let reason = ClosureReason::ProcessingError { err: msg.clone() };
+ let shutdown_res = $channel.context.force_shutdown(true, reason);
+ let err =
+ MsgHandleErrInternal::from_finish_shutdown(msg, *$channel_id, shutdown_res, $channel_update);
+ (true, err)
},
}
};
forward_htlcs: Mutex::new(HashMap::new()),
claimable_payments: Mutex::new(ClaimablePayments { claimable_payments: HashMap::new(), pending_claiming_payments: HashMap::new() }),
pending_intercepted_htlcs: Mutex::new(HashMap::new()),
- id_to_peer: Mutex::new(HashMap::new()),
+ outpoint_to_peer: Mutex::new(HashMap::new()),
short_to_chan_info: FairRwLock::new(HashMap::new()),
our_network_pubkey: node_signer.get_node_id(Recipient::Node).unwrap(),
fn list_funded_channels_with_filter<Fn: FnMut(&(&ChannelId, &Channel<SP>)) -> bool + Copy>(&self, f: Fn) -> Vec<ChannelDetails> {
// Allocate our best estimate of the number of channels we have in the `res`
// Vec. Sadly the `short_to_chan_info` map doesn't cover channels without
- // a scid or a scid alias, and the `id_to_peer` shouldn't be used outside
+ // a scid or a scid alias, and the `outpoint_to_peer` shouldn't be used outside
// of the ChannelMonitor handling. Therefore reallocations may still occur, but is
// unlikely as the `short_to_chan_info` map often contains 2 entries for
// the same channel.
pub fn list_channels(&self) -> Vec<ChannelDetails> {
// Allocate our best estimate of the number of channels we have in the `res`
// Vec. Sadly the `short_to_chan_info` map doesn't cover channels without
- // a scid or a scid alias, and the `id_to_peer` shouldn't be used outside
+ // a scid or a scid alias, and the `outpoint_to_peer` shouldn't be used outside
// of the ChannelMonitor handling. Therefore reallocations may still occur, but is
// unlikely as the `short_to_chan_info` map often contains 2 entries for
// the same channel.
.collect()
}
- /// Helper function that issues the channel close events
- fn issue_channel_close_events(&self, context: &ChannelContext<SP>, closure_reason: ClosureReason) {
- let mut pending_events_lock = self.pending_events.lock().unwrap();
- match context.unbroadcasted_funding() {
- Some(transaction) => {
- pending_events_lock.push_back((events::Event::DiscardFunding {
- channel_id: context.channel_id(), transaction
- }, None));
- },
- None => {},
- }
- pending_events_lock.push_back((events::Event::ChannelClosed {
- channel_id: context.channel_id(),
- user_channel_id: context.get_user_id(),
- reason: closure_reason,
- counterparty_node_id: Some(context.get_counterparty_node_id()),
- channel_capacity_sats: Some(context.get_value_satoshis()),
- }, None));
- }
-
fn close_channel_internal(&self, channel_id: &ChannelId, counterparty_node_id: &PublicKey, target_feerate_sats_per_1000_weight: Option<u32>, override_shutdown_script: Option<ShutdownScript>) -> Result<(), APIError> {
let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self);
peer_state_lock, peer_state, per_peer_state, chan);
}
} else {
- self.issue_channel_close_events(chan_phase_entry.get().context(), ClosureReason::HolderForceClosed);
let mut chan_phase = remove_channel_phase!(self, chan_phase_entry);
- shutdown_result = Some(chan_phase.context_mut().force_shutdown(false));
+ shutdown_result = Some(chan_phase.context_mut().force_shutdown(false, ClosureReason::HolderForceClosed));
}
},
hash_map::Entry::Vacant(_) => {
let logger = WithContext::from(
&self.logger, Some(shutdown_res.counterparty_node_id), Some(shutdown_res.channel_id),
);
- log_debug!(logger, "Finishing closure of channel with {} HTLCs to fail", shutdown_res.dropped_outbound_htlcs.len());
+
+ log_debug!(logger, "Finishing closure of channel due to {} with {} HTLCs to fail",
+ shutdown_res.closure_reason, shutdown_res.dropped_outbound_htlcs.len());
for htlc_source in shutdown_res.dropped_outbound_htlcs.drain(..) {
let (source, payment_hash, counterparty_node_id, channel_id) = htlc_source;
let reason = HTLCFailReason::from_failure_code(0x4000 | 8);
let mut peer_state = peer_state_mutex.lock().unwrap();
if let Some(mut chan) = peer_state.channel_by_id.remove(&channel_id) {
update_maps_on_chan_removal!(self, &chan.context());
- self.issue_channel_close_events(&chan.context(), ClosureReason::FundingBatchClosure);
- shutdown_results.push(chan.context_mut().force_shutdown(false));
+ shutdown_results.push(chan.context_mut().force_shutdown(false, ClosureReason::FundingBatchClosure));
}
}
has_uncompleted_channel = Some(has_uncompleted_channel.map_or(!state, |v| v || !state));
"Closing a batch where all channels have completed initial monitor update",
);
}
+
+ {
+ let mut pending_events = self.pending_events.lock().unwrap();
+ pending_events.push_back((events::Event::ChannelClosed {
+ channel_id: shutdown_res.channel_id,
+ user_channel_id: shutdown_res.user_channel_id,
+ reason: shutdown_res.closure_reason,
+ counterparty_node_id: Some(shutdown_res.counterparty_node_id),
+ channel_capacity_sats: Some(shutdown_res.channel_capacity_satoshis),
+ }, None));
+
+ if let Some(transaction) = shutdown_res.unbroadcasted_funding_tx {
+ pending_events.push_back((events::Event::DiscardFunding {
+ channel_id: shutdown_res.channel_id, transaction
+ }, None));
+ }
+ }
for shutdown_result in shutdown_results.drain(..) {
self.finish_close_channel(shutdown_result);
}
let logger = WithContext::from(&self.logger, Some(*peer_node_id), Some(*channel_id));
if let hash_map::Entry::Occupied(chan_phase_entry) = peer_state.channel_by_id.entry(channel_id.clone()) {
log_error!(logger, "Force-closing channel {}", channel_id);
- self.issue_channel_close_events(&chan_phase_entry.get().context(), closure_reason);
let mut chan_phase = remove_channel_phase!(self, chan_phase_entry);
mem::drop(peer_state);
mem::drop(per_peer_state);
match chan_phase {
ChannelPhase::Funded(mut chan) => {
- self.finish_close_channel(chan.context.force_shutdown(broadcast));
+ self.finish_close_channel(chan.context.force_shutdown(broadcast, closure_reason));
(self.get_channel_update_for_broadcast(&chan).ok(), chan.context.get_counterparty_node_id())
},
ChannelPhase::UnfundedOutboundV1(_) | ChannelPhase::UnfundedInboundV1(_) => {
- self.finish_close_channel(chan_phase.context_mut().force_shutdown(false));
+ self.finish_close_channel(chan_phase.context_mut().force_shutdown(false, closure_reason));
// Unfunded channel has no update
(None, chan_phase.context().get_counterparty_node_id())
},
let mut peer_state_lock = peer_state_mutex.lock().unwrap();
let peer_state = &mut *peer_state_lock;
- let (chan, msg_opt) = match peer_state.channel_by_id.remove(temporary_channel_id) {
+ let funding_txo;
+ let (mut chan, msg_opt) = match peer_state.channel_by_id.remove(temporary_channel_id) {
Some(ChannelPhase::UnfundedOutboundV1(mut chan)) => {
- let funding_txo = find_funding_output(&chan, &funding_transaction)?;
+ funding_txo = find_funding_output(&chan, &funding_transaction)?;
let logger = WithChannelContext::from(&self.logger, &chan.context);
let funding_res = chan.get_funding_created(funding_transaction, funding_txo, is_batch_funding, &&logger)
.map_err(|(mut chan, e)| if let ChannelError::Close(msg) = e {
let channel_id = chan.context.channel_id();
- let user_id = chan.context.get_user_id();
- let shutdown_res = chan.context.force_shutdown(false);
- let channel_capacity = chan.context.get_value_satoshis();
- (chan, MsgHandleErrInternal::from_finish_shutdown(msg, channel_id, user_id, shutdown_res, None, channel_capacity))
+ let reason = ClosureReason::ProcessingError { err: msg.clone() };
+ let shutdown_res = chan.context.force_shutdown(false, reason);
+ (chan, MsgHandleErrInternal::from_finish_shutdown(msg, channel_id, shutdown_res, None))
} else { unreachable!(); });
match funding_res {
Ok(funding_msg) => (chan, funding_msg),
panic!("Generated duplicate funding txid?");
},
hash_map::Entry::Vacant(e) => {
- let mut id_to_peer = self.id_to_peer.lock().unwrap();
- if id_to_peer.insert(chan.context.channel_id(), chan.context.get_counterparty_node_id()).is_some() {
- panic!("id_to_peer map already contained funding txid, which shouldn't be possible");
+ let mut outpoint_to_peer = self.outpoint_to_peer.lock().unwrap();
+ match outpoint_to_peer.entry(funding_txo) {
+ hash_map::Entry::Vacant(e) => { e.insert(chan.context.get_counterparty_node_id()); },
+ hash_map::Entry::Occupied(o) => {
+ let err = format!(
+ "An existing channel using outpoint {} is open with peer {}",
+ funding_txo, o.get()
+ );
+ mem::drop(outpoint_to_peer);
+ mem::drop(peer_state_lock);
+ mem::drop(per_peer_state);
+ let reason = ClosureReason::ProcessingError { err: err.clone() };
+ self.finish_close_channel(chan.context.force_shutdown(true, reason));
+ return Err(APIError::ChannelUnavailable { err });
+ }
}
e.insert(ChannelPhase::UnfundedOutboundV1(chan));
}
.and_then(|mut peer_state| peer_state.channel_by_id.remove(&channel_id))
.map(|mut chan| {
update_maps_on_chan_removal!(self, &chan.context());
- self.issue_channel_close_events(&chan.context(), ClosureReason::ProcessingError { err: e.clone() });
- shutdown_results.push(chan.context_mut().force_shutdown(false));
+ let closure_reason = ClosureReason::ProcessingError { err: e.clone() };
+ shutdown_results.push(chan.context_mut().force_shutdown(false, closure_reason));
});
}
}
log_error!(logger,
"Force-closing pending channel with ID {} for not establishing in a timely manner", chan_id);
update_maps_on_chan_removal!(self, &context);
- self.issue_channel_close_events(&context, ClosureReason::HolderForceClosed);
- shutdown_channels.push(context.force_shutdown(false));
+ shutdown_channels.push(context.force_shutdown(false, ClosureReason::HolderForceClosed));
pending_msg_events.push(MessageSendEvent::HandleError {
node_id: counterparty_node_id,
action: msgs::ErrorAction::SendErrorMessage {
}
let preimage_update = ChannelMonitorUpdate {
update_id: CLOSED_CHANNEL_UPDATE_ID,
+ counterparty_node_id: None,
updates: vec![ChannelMonitorUpdateStep::PaymentPreimage {
payment_preimage,
}],
Some(cp_id) => cp_id.clone(),
None => {
// TODO: Once we can rely on the counterparty_node_id from the
- // monitor event, this and the id_to_peer map should be removed.
- let id_to_peer = self.id_to_peer.lock().unwrap();
- match id_to_peer.get(&funding_txo.to_channel_id()) {
+ // monitor event, this and the outpoint_to_peer map should be removed.
+ let outpoint_to_peer = self.outpoint_to_peer.lock().unwrap();
+ match outpoint_to_peer.get(&funding_txo) {
Some(cp_id) => cp_id.clone(),
None => return,
}
}
fn do_accept_inbound_channel(&self, temporary_channel_id: &ChannelId, counterparty_node_id: &PublicKey, accept_0conf: bool, user_channel_id: u128) -> Result<(), APIError> {
+
+ let logger = WithContext::from(&self.logger, Some(*counterparty_node_id), Some(*temporary_channel_id));
let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self);
let peers_without_funded_channels =
self.peers_without_funded_channels(|peer| { peer.total_channel_count() > 0 });
let per_peer_state = self.per_peer_state.read().unwrap();
let peer_state_mutex = per_peer_state.get(counterparty_node_id)
- .ok_or_else(|| APIError::ChannelUnavailable { err: format!("Can't find a peer matching the passed counterparty node_id {}", counterparty_node_id) })?;
+ .ok_or_else(|| {
+ let err_str = format!("Can't find a peer matching the passed counterparty node_id {}", counterparty_node_id);
+ log_error!(logger, "{}", err_str);
+
+ APIError::ChannelUnavailable { err: err_str }
+ })?;
let mut peer_state_lock = peer_state_mutex.lock().unwrap();
let peer_state = &mut *peer_state_lock;
let is_only_peer_channel = peer_state.total_channel_count() == 1;
InboundV1Channel::new(&self.fee_estimator, &self.entropy_source, &self.signer_provider,
counterparty_node_id.clone(), &self.channel_type_features(), &peer_state.latest_features,
&unaccepted_channel.open_channel_msg, user_channel_id, &self.default_configuration, best_block_height,
- &self.logger, accept_0conf).map_err(|e| APIError::ChannelUnavailable { err: e.to_string() })
+ &self.logger, accept_0conf).map_err(|e| {
+ let err_str = e.to_string();
+ log_error!(logger, "{}", err_str);
+
+ APIError::ChannelUnavailable { err: err_str }
+ })
+ }
+ _ => {
+ let err_str = "No such channel awaiting to be accepted.".to_owned();
+ log_error!(logger, "{}", err_str);
+
+ Err(APIError::APIMisuseError { err: err_str })
}
- _ => Err(APIError::APIMisuseError { err: "No such channel awaiting to be accepted.".to_owned() })
}?;
if accept_0conf {
}
};
peer_state.pending_msg_events.push(send_msg_err_event);
- return Err(APIError::APIMisuseError { err: "Please use accept_inbound_channel_from_trusted_peer_0conf to accept channels with zero confirmations.".to_owned() });
+ let err_str = "Please use accept_inbound_channel_from_trusted_peer_0conf to accept channels with zero confirmations.".to_owned();
+ log_error!(logger, "{}", err_str);
+
+ return Err(APIError::APIMisuseError { err: err_str });
} else {
// If this peer already has some channels, a new channel won't increase our number of peers
// with unfunded channels, so as long as we aren't over the maximum number of unfunded
}
};
peer_state.pending_msg_events.push(send_msg_err_event);
- return Err(APIError::APIMisuseError { err: "Too many peers with unfunded channels, refusing to accept new ones".to_owned() });
+ let err_str = "Too many peers with unfunded channels, refusing to accept new ones".to_owned();
+ log_error!(logger, "{}", err_str);
+
+ return Err(APIError::APIMisuseError { err: err_str });
}
}
// If we're doing manual acceptance checks on the channel, then defer creation until we're sure we want to accept.
if self.default_configuration.manually_accept_inbound_channels {
+ let channel_type = channel::channel_type_from_open_channel(
+ &msg, &peer_state.latest_features, &self.channel_type_features()
+ ).map_err(|e|
+ MsgHandleErrInternal::from_chan_no_close(e, msg.temporary_channel_id)
+ )?;
let mut pending_events = self.pending_events.lock().unwrap();
pending_events.push_back((events::Event::OpenChannelRequest {
temporary_channel_id: msg.temporary_channel_id.clone(),
counterparty_node_id: counterparty_node_id.clone(),
funding_satoshis: msg.funding_satoshis,
push_msat: msg.push_msat,
- channel_type: msg.channel_type.clone().unwrap(),
+ channel_type,
}, None));
peer_state.inbound_channel_request_by_id.insert(channel_id, InboundChannelRequest {
open_channel_msg: msg.clone(),
let mut peer_state_lock = peer_state_mutex.lock().unwrap();
let peer_state = &mut *peer_state_lock;
- let (chan, funding_msg_opt, monitor) =
+ let (mut chan, funding_msg_opt, monitor) =
match peer_state.channel_by_id.remove(&msg.temporary_channel_id) {
Some(ChannelPhase::UnfundedInboundV1(inbound_chan)) => {
let logger = WithChannelContext::from(&self.logger, &inbound_chan.context);
match inbound_chan.funding_created(msg, best_block, &self.signer_provider, &&logger) {
Ok(res) => res,
- Err((mut inbound_chan, err)) => {
+ Err((inbound_chan, err)) => {
// We've already removed this inbound channel from the map in `PeerState`
// above so at this point we just need to clean up any lingering entries
// concerning this channel as it is safe to do so.
- update_maps_on_chan_removal!(self, &inbound_chan.context);
- let user_id = inbound_chan.context.get_user_id();
- let shutdown_res = inbound_chan.context.force_shutdown(false);
- return Err(MsgHandleErrInternal::from_finish_shutdown(format!("{}", err),
- msg.temporary_channel_id, user_id, shutdown_res, None, inbound_chan.context.get_value_satoshis()));
+ debug_assert!(matches!(err, ChannelError::Close(_)));
+ // Really we should be returning the channel_id the peer expects based
+ // on their funding info here, but they're horribly confused anyway, so
+ // there's not a lot we can do to save them.
+ return Err(convert_chan_phase_err!(self, err, &mut ChannelPhase::UnfundedInboundV1(inbound_chan), &msg.temporary_channel_id).1);
},
}
},
- Some(ChannelPhase::Funded(_)) | Some(ChannelPhase::UnfundedOutboundV1(_)) => {
- return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got an unexpected funding_created message from peer with counterparty_node_id {}", counterparty_node_id), msg.temporary_channel_id));
+ Some(mut phase) => {
+ let err_msg = format!("Got an unexpected funding_created message from peer with counterparty_node_id {}", counterparty_node_id);
+ let err = ChannelError::Close(err_msg);
+ return Err(convert_chan_phase_err!(self, err, &mut phase, &msg.temporary_channel_id).1);
},
None => return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.temporary_channel_id))
};
- match peer_state.channel_by_id.entry(chan.context.channel_id()) {
+ let funded_channel_id = chan.context.channel_id();
+
+ macro_rules! fail_chan { ($err: expr) => { {
+ // Note that at this point we've filled in the funding outpoint on our
+ // channel, but its actually in conflict with another channel. Thus, if
+ // we call `convert_chan_phase_err` immediately (thus calling
+ // `update_maps_on_chan_removal`), we'll remove the existing channel
+ // from `outpoint_to_peer`. Thus, we must first unset the funding outpoint
+ // on the channel.
+ let err = ChannelError::Close($err.to_owned());
+ chan.unset_funding_info(msg.temporary_channel_id);
+ return Err(convert_chan_phase_err!(self, err, chan, &funded_channel_id, UNFUNDED_CHANNEL).1);
+ } } }
+
+ match peer_state.channel_by_id.entry(funded_channel_id) {
hash_map::Entry::Occupied(_) => {
- Err(MsgHandleErrInternal::send_err_msg_no_close(
- "Already had channel with the new channel_id".to_owned(),
- chan.context.channel_id()
- ))
+ fail_chan!("Already had channel with the new channel_id");
},
hash_map::Entry::Vacant(e) => {
- let mut id_to_peer_lock = self.id_to_peer.lock().unwrap();
- match id_to_peer_lock.entry(chan.context.channel_id()) {
+ let mut outpoint_to_peer_lock = self.outpoint_to_peer.lock().unwrap();
+ match outpoint_to_peer_lock.entry(monitor.get_funding_txo().0) {
hash_map::Entry::Occupied(_) => {
- return Err(MsgHandleErrInternal::send_err_msg_no_close(
- "The funding_created message had the same funding_txid as an existing channel - funding is not possible".to_owned(),
- chan.context.channel_id()))
+ fail_chan!("The funding_created message had the same funding_txid as an existing channel - funding is not possible");
},
hash_map::Entry::Vacant(i_e) => {
let monitor_res = self.chain_monitor.watch_channel(monitor.get_funding_txo().0, monitor);
if let Ok(persist_state) = monitor_res {
i_e.insert(chan.context.get_counterparty_node_id());
- mem::drop(id_to_peer_lock);
+ mem::drop(outpoint_to_peer_lock);
// There's no problem signing a counterparty's funding transaction if our monitor
// hasn't persisted to disk yet - we can't lose money on a transaction that we haven't
} else {
let logger = WithChannelContext::from(&self.logger, &chan.context);
log_error!(logger, "Persisting initial ChannelMonitor failed, implying the funding outpoint was duplicated");
- let channel_id = match funding_msg_opt {
- Some(msg) => msg.channel_id,
- None => chan.context.channel_id(),
- };
- return Err(MsgHandleErrInternal::send_err_msg_no_close(
- "The funding_created message had the same funding_txid as an existing channel - funding is not possible".to_owned(),
- channel_id));
+ fail_chan!("Duplicate funding outpoint");
}
}
}
let res =
chan.funding_signed(&msg, best_block, &self.signer_provider, &&logger);
match res {
- Ok((chan, monitor)) => {
+ Ok((mut chan, monitor)) => {
if let Ok(persist_status) = self.chain_monitor.watch_channel(chan.context.get_funding_txo().unwrap(), monitor) {
// We really should be able to insert here without doing a second
// lookup, but sadly rust stdlib doesn't currently allow keeping
Ok(())
} else {
let e = ChannelError::Close("Channel funding outpoint was a duplicate".to_owned());
+ // We weren't able to watch the channel to begin with, so no
+ // updates should be made on it. Previously, full_stack_target
+ // found an (unreachable) panic when the monitor update contained
+ // within `shutdown_finish` was applied.
+ chan.unset_funding_info(msg.channel_id);
return Err(convert_chan_phase_err!(self, e, &mut ChannelPhase::Funded(chan), &msg.channel_id).1);
}
},
let context = phase.context_mut();
let logger = WithChannelContext::from(&self.logger, context);
log_error!(logger, "Immediately closing unfunded channel {} as peer asked to cooperatively shut it down (which is unnecessary)", &msg.channel_id);
- self.issue_channel_close_events(&context, ClosureReason::CounterpartyCoopClosedUnfundedChannel);
let mut chan = remove_channel_phase!(self, chan_phase_entry);
- finish_shutdown = Some(chan.context_mut().force_shutdown(false));
+ finish_shutdown = Some(chan.context_mut().force_shutdown(false, ClosureReason::CounterpartyCoopClosedUnfundedChannel));
},
}
} else {
msg: update
});
}
- self.issue_channel_close_events(&chan.context, ClosureReason::CooperativeClosure);
}
mem::drop(per_peer_state);
if let Some(shutdown_result) = shutdown_result {
Some(cp_id) => Some(cp_id),
None => {
// TODO: Once we can rely on the counterparty_node_id from the
- // monitor event, this and the id_to_peer map should be removed.
- let id_to_peer = self.id_to_peer.lock().unwrap();
- id_to_peer.get(&funding_outpoint.to_channel_id()).cloned()
+ // monitor event, this and the outpoint_to_peer map should be removed.
+ let outpoint_to_peer = self.outpoint_to_peer.lock().unwrap();
+ outpoint_to_peer.get(&funding_outpoint).cloned()
}
};
if let Some(counterparty_node_id) = counterparty_node_id_opt {
let pending_msg_events = &mut peer_state.pending_msg_events;
if let hash_map::Entry::Occupied(chan_phase_entry) = peer_state.channel_by_id.entry(funding_outpoint.to_channel_id()) {
if let ChannelPhase::Funded(mut chan) = remove_channel_phase!(self, chan_phase_entry) {
- failed_channels.push(chan.context.force_shutdown(false));
+ failed_channels.push(chan.context.force_shutdown(false, ClosureReason::HolderForceClosed));
if let Ok(update) = self.get_channel_update_for_broadcast(&chan) {
pending_msg_events.push(events::MessageSendEvent::BroadcastChannelUpdate {
msg: update
});
}
- self.issue_channel_close_events(&chan.context, ClosureReason::HolderForceClosed);
pending_msg_events.push(events::MessageSendEvent::HandleError {
node_id: chan.context.get_counterparty_node_id(),
action: msgs::ErrorAction::DisconnectPeer {
/// attempted in every channel, or in the specifically provided channel.
///
/// [`ChannelSigner`]: crate::sign::ChannelSigner
- #[cfg(test)] // This is only implemented for one signer method, and should be private until we
- // actually finish implementing it fully.
+ #[cfg(async_signing)]
pub fn signer_unblocked(&self, channel_opt: Option<(PublicKey, ChannelId)>) {
let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self);
});
}
- self.issue_channel_close_events(&chan.context, ClosureReason::CooperativeClosure);
-
log_info!(logger, "Broadcasting {}", log_tx!(tx));
self.tx_broadcaster.broadcast_transactions(&[&tx]);
update_maps_on_chan_removal!(self, &chan.context);
///
/// # Privacy
///
- /// Uses a one-hop [`BlindedPath`] for the offer with [`ChannelManager::get_our_node_id`] as the
- /// introduction node and a derived signing pubkey for recipient privacy. As such, currently,
- /// the node must be announced. Otherwise, there is no way to find a path to the introduction
- /// node in order to send the [`InvoiceRequest`].
+ /// Uses [`MessageRouter::create_blinded_paths`] to construct a [`BlindedPath`] for the offer.
+ /// However, if one is not found, uses a one-hop [`BlindedPath`] with
+ /// [`ChannelManager::get_our_node_id`] as the introduction node instead. In the latter case,
+	/// the node must be announced; otherwise, there is no way to find a path to the introduction node in
+ /// order to send the [`InvoiceRequest`].
+ ///
+ /// Also, uses a derived signing pubkey in the offer for recipient privacy.
///
/// # Limitations
///
/// Requires a direct connection to the introduction node in the responding [`InvoiceRequest`]'s
/// reply path.
///
+ /// # Errors
+ ///
+ /// Errors if the parameterized [`Router`] is unable to create a blinded path for the offer.
+ ///
/// This is not exported to bindings users as builder patterns don't map outside of move semantics.
///
/// [`Offer`]: crate::offers::offer::Offer
/// [`InvoiceRequest`]: crate::offers::invoice_request::InvoiceRequest
pub fn create_offer_builder(
&self, description: String
- ) -> OfferBuilder<DerivedMetadata, secp256k1::All> {
+ ) -> Result<OfferBuilder<DerivedMetadata, secp256k1::All>, Bolt12SemanticError> {
let node_id = self.get_our_node_id();
let expanded_key = &self.inbound_payment_key;
let entropy = &*self.entropy_source;
let secp_ctx = &self.secp_ctx;
- let path = self.create_one_hop_blinded_path();
- OfferBuilder::deriving_signing_pubkey(description, node_id, expanded_key, entropy, secp_ctx)
+ let path = self.create_blinded_path().map_err(|_| Bolt12SemanticError::MissingPaths)?;
+ let builder = OfferBuilder::deriving_signing_pubkey(
+ description, node_id, expanded_key, entropy, secp_ctx
+ )
.chain_hash(self.chain_hash)
- .path(path)
+ .path(path);
+
+ Ok(builder)
}
/// Creates a [`RefundBuilder`] such that the [`Refund`] it builds is recognized by the
///
/// # Privacy
///
- /// Uses a one-hop [`BlindedPath`] for the refund with [`ChannelManager::get_our_node_id`] as
- /// the introduction node and a derived payer id for payer privacy. As such, currently, the
- /// node must be announced. Otherwise, there is no way to find a path to the introduction node
- /// in order to send the [`Bolt12Invoice`].
+ /// Uses [`MessageRouter::create_blinded_paths`] to construct a [`BlindedPath`] for the refund.
+ /// However, if one is not found, uses a one-hop [`BlindedPath`] with
+ /// [`ChannelManager::get_our_node_id`] as the introduction node instead. In the latter case,
+	/// the node must be announced; otherwise, there is no way to find a path to the introduction node in
+ /// order to send the [`Bolt12Invoice`].
+ ///
+ /// Also, uses a derived payer id in the refund for payer privacy.
///
/// # Limitations
///
///
/// # Errors
///
- /// Errors if a duplicate `payment_id` is provided given the caveats in the aforementioned link
- /// or if `amount_msats` is invalid.
+ /// Errors if:
+ /// - a duplicate `payment_id` is provided given the caveats in the aforementioned link,
+ /// - `amount_msats` is invalid, or
+ /// - the parameterized [`Router`] is unable to create a blinded path for the refund.
///
/// This is not exported to bindings users as builder patterns don't map outside of move semantics.
///
/// [`Refund`]: crate::offers::refund::Refund
/// [`Bolt12Invoice`]: crate::offers::invoice::Bolt12Invoice
/// [`Bolt12Invoice::payment_paths`]: crate::offers::invoice::Bolt12Invoice::payment_paths
+ /// [Avoiding Duplicate Payments]: #avoiding-duplicate-payments
pub fn create_refund_builder(
&self, description: String, amount_msats: u64, absolute_expiry: Duration,
payment_id: PaymentId, retry_strategy: Retry, max_total_routing_fee_msat: Option<u64>
let expanded_key = &self.inbound_payment_key;
let entropy = &*self.entropy_source;
let secp_ctx = &self.secp_ctx;
- let path = self.create_one_hop_blinded_path();
+ let path = self.create_blinded_path().map_err(|_| Bolt12SemanticError::MissingPaths)?;
let builder = RefundBuilder::deriving_payer_id(
description, node_id, expanded_key, entropy, secp_ctx, amount_msats, payment_id
)?
///
/// # Errors
///
- /// Errors if a duplicate `payment_id` is provided given the caveats in the aforementioned link
- /// or if the provided parameters are invalid for the offer.
+ /// Errors if:
+ /// - a duplicate `payment_id` is provided given the caveats in the aforementioned link,
+	/// - the provided parameters are invalid for the offer, or
+ /// - the parameterized [`Router`] is unable to create a blinded reply path for the invoice
+ /// request.
///
/// [`InvoiceRequest`]: crate::offers::invoice_request::InvoiceRequest
/// [`InvoiceRequest::quantity`]: crate::offers::invoice_request::InvoiceRequest::quantity
None => builder,
Some(payer_note) => builder.payer_note(payer_note),
};
-
let invoice_request = builder.build_and_sign()?;
- let reply_path = self.create_one_hop_blinded_path();
+ let reply_path = self.create_blinded_path().map_err(|_| Bolt12SemanticError::MissingPaths)?;
let expiration = StaleExpiration::TimerTicks(1);
self.pending_outbound_payments
/// node meeting the aforementioned criteria, but there's no guarantee that they will be
/// received and no retries will be made.
///
+ /// # Errors
+ ///
+ /// Errors if the parameterized [`Router`] is unable to create a blinded payment path or reply
+ /// path for the invoice.
+ ///
/// [`Bolt12Invoice`]: crate::offers::invoice::Bolt12Invoice
pub fn request_refund_payment(&self, refund: &Refund) -> Result<(), Bolt12SemanticError> {
let expanded_key = &self.inbound_payment_key;
match self.create_inbound_payment(Some(amount_msats), relative_expiry, None) {
Ok((payment_hash, payment_secret)) => {
- let payment_paths = vec![
- self.create_one_hop_blinded_payment_path(payment_secret),
- ];
+ let payment_paths = self.create_blinded_payment_paths(amount_msats, payment_secret)
+ .map_err(|_| Bolt12SemanticError::MissingPaths)?;
+
#[cfg(not(feature = "no-std"))]
let builder = refund.respond_using_derived_keys(
payment_paths, payment_hash, expanded_key, entropy
payment_paths, payment_hash, created_at, expanded_key, entropy
)?;
let invoice = builder.allow_mpp().build_and_sign(secp_ctx)?;
- let reply_path = self.create_one_hop_blinded_path();
+ let reply_path = self.create_blinded_path()
+ .map_err(|_| Bolt12SemanticError::MissingPaths)?;
let mut pending_offers_messages = self.pending_offers_messages.lock().unwrap();
if refund.paths().is_empty() {
inbound_payment::get_payment_preimage(payment_hash, payment_secret, &self.inbound_payment_key)
}
- /// Creates a one-hop blinded path with [`ChannelManager::get_our_node_id`] as the introduction
- /// node.
- fn create_one_hop_blinded_path(&self) -> BlindedPath {
+ /// Creates a blinded path by delegating to [`MessageRouter::create_blinded_paths`].
+ ///
+ /// Errors if the `MessageRouter` errors or returns an empty `Vec`.
+ fn create_blinded_path(&self) -> Result<BlindedPath, ()> {
+ let recipient = self.get_our_node_id();
let entropy_source = self.entropy_source.deref();
let secp_ctx = &self.secp_ctx;
- BlindedPath::one_hop_for_message(self.get_our_node_id(), entropy_source, secp_ctx).unwrap()
+
+ let peers = self.per_peer_state.read().unwrap()
+ .iter()
+ .filter(|(_, peer)| peer.lock().unwrap().latest_features.supports_onion_messages())
+ .map(|(node_id, _)| *node_id)
+ .collect::<Vec<_>>();
+
+ self.router
+ .create_blinded_paths(recipient, peers, entropy_source, secp_ctx)
+ .and_then(|paths| paths.into_iter().next().ok_or(()))
}
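A standalone sketch of the "first path or error" step above: the router's `Result<Vec<_>, ()>` is collapsed to a single path, treating an empty `Vec` the same as a router failure, which is what the doc comment on `create_blinded_path` describes. Plain types stand in for the router result; nothing here is LDK API.

```rust
fn first_or_err<T>(paths: Result<Vec<T>, ()>) -> Result<T, ()> {
	paths.and_then(|paths| paths.into_iter().next().ok_or(()))
}

fn main() {
	assert_eq!(first_or_err(Ok(vec![7, 8])), Ok(7));
	assert_eq!(first_or_err::<u32>(Ok(vec![])), Err(()));
	assert_eq!(first_or_err::<u32>(Err(())), Err(()));
}
```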
- /// Creates a one-hop blinded path with [`ChannelManager::get_our_node_id`] as the introduction
- /// node.
- fn create_one_hop_blinded_payment_path(
- &self, payment_secret: PaymentSecret
- ) -> (BlindedPayInfo, BlindedPath) {
+ /// Creates multi-hop blinded payment paths for the given `amount_msats` by delegating to
+ /// [`Router::create_blinded_payment_paths`].
+ fn create_blinded_payment_paths(
+ &self, amount_msats: u64, payment_secret: PaymentSecret
+ ) -> Result<Vec<(BlindedPayInfo, BlindedPath)>, ()> {
let entropy_source = self.entropy_source.deref();
let secp_ctx = &self.secp_ctx;
+ let first_hops = self.list_usable_channels();
let payee_node_id = self.get_our_node_id();
- let max_cltv_expiry = self.best_block.read().unwrap().height() + LATENCY_GRACE_PERIOD_BLOCKS;
+ let max_cltv_expiry = self.best_block.read().unwrap().height() + CLTV_FAR_FAR_AWAY
+ + LATENCY_GRACE_PERIOD_BLOCKS;
let payee_tlvs = ReceiveTlvs {
payment_secret,
payment_constraints: PaymentConstraints {
htlc_minimum_msat: 1,
},
};
- // TODO: Err for overflow?
- BlindedPath::one_hop_for_payment(
- payee_node_id, payee_tlvs, entropy_source, secp_ctx
- ).unwrap()
+ self.router.create_blinded_payment_paths(
+ payee_node_id, first_hops, payee_tlvs, amount_msats, entropy_source, secp_ctx
+ )
}
/// Gets a fake short channel id for use in receiving [phantom node payments]. These fake scids
update_maps_on_chan_removal!(self, &channel.context);
// It looks like our counterparty went on-chain or funding transaction was
// reorged out of the main chain. Close the channel.
- failed_channels.push(channel.context.force_shutdown(true));
+ let reason_message = format!("{}", reason);
+ failed_channels.push(channel.context.force_shutdown(true, reason));
if let Ok(update) = self.get_channel_update_for_broadcast(&channel) {
pending_msg_events.push(events::MessageSendEvent::BroadcastChannelUpdate {
msg: update
});
}
- let reason_message = format!("{}", reason);
- self.issue_channel_close_events(&channel.context, reason);
pending_msg_events.push(events::MessageSendEvent::HandleError {
node_id: channel.context.get_counterparty_node_id(),
action: msgs::ErrorAction::DisconnectPeer {
};
// Clean up for removal.
update_maps_on_chan_removal!(self, &context);
- self.issue_channel_close_events(&context, ClosureReason::DisconnectedPeer);
- failed_channels.push(context.force_shutdown(false));
+ failed_channels.push(context.force_shutdown(false, ClosureReason::DisconnectedPeer));
false
});
// Note that we don't bother generating any events for pre-accept channels -
let pending_msg_events = &mut peer_state.pending_msg_events;
peer_state.channel_by_id.iter_mut().filter_map(|(_, phase)|
- if let ChannelPhase::Funded(chan) = phase { Some(chan) } else {
- // Since unfunded channel maps are cleared upon disconnecting a peer, and they're not persisted
- // (so won't be recovered after a crash), they shouldn't exist here and we would never need to
- // worry about closing and removing them.
- debug_assert!(false);
- None
- }
+ if let ChannelPhase::Funded(chan) = phase { Some(chan) } else { None }
).for_each(|chan| {
let logger = WithChannelContext::from(&self.logger, &chan.context);
pending_msg_events.push(events::MessageSendEvent::SendChannelReestablish {
let amount_msats = match InvoiceBuilder::<DerivedSigningPubkey>::amount_msats(
&invoice_request
) {
- Ok(amount_msats) => Some(amount_msats),
+ Ok(amount_msats) => amount_msats,
Err(error) => return Some(OffersMessage::InvoiceError(error.into())),
};
let invoice_request = match invoice_request.verify(expanded_key, secp_ctx) {
return Some(OffersMessage::InvoiceError(error.into()));
},
};
- let relative_expiry = DEFAULT_RELATIVE_EXPIRY.as_secs() as u32;
- match self.create_inbound_payment(amount_msats, relative_expiry, None) {
- Ok((payment_hash, payment_secret)) if invoice_request.keys.is_some() => {
- let payment_paths = vec![
- self.create_one_hop_blinded_payment_path(payment_secret),
- ];
- #[cfg(not(feature = "no-std"))]
- let builder = invoice_request.respond_using_derived_keys(
- payment_paths, payment_hash
- );
- #[cfg(feature = "no-std")]
- let created_at = Duration::from_secs(
- self.highest_seen_timestamp.load(Ordering::Acquire) as u64
- );
- #[cfg(feature = "no-std")]
- let builder = invoice_request.respond_using_derived_keys_no_std(
- payment_paths, payment_hash, created_at
- );
- match builder.and_then(|b| b.allow_mpp().build_and_sign(secp_ctx)) {
- Ok(invoice) => Some(OffersMessage::Invoice(invoice)),
- Err(error) => Some(OffersMessage::InvoiceError(error.into())),
- }
- },
- Ok((payment_hash, payment_secret)) => {
- let payment_paths = vec![
- self.create_one_hop_blinded_payment_path(payment_secret),
- ];
- #[cfg(not(feature = "no-std"))]
- let builder = invoice_request.respond_with(payment_paths, payment_hash);
- #[cfg(feature = "no-std")]
- let created_at = Duration::from_secs(
- self.highest_seen_timestamp.load(Ordering::Acquire) as u64
- );
- #[cfg(feature = "no-std")]
- let builder = invoice_request.respond_with_no_std(
- payment_paths, payment_hash, created_at
- );
- let response = builder.and_then(|builder| builder.allow_mpp().build())
- .map_err(|e| OffersMessage::InvoiceError(e.into()))
- .and_then(|invoice|
- match invoice.sign(|invoice| self.node_signer.sign_bolt12_invoice(invoice)) {
- Ok(invoice) => Ok(OffersMessage::Invoice(invoice)),
- Err(SignError::Signing(())) => Err(OffersMessage::InvoiceError(
- InvoiceError::from_string("Failed signing invoice".to_string())
- )),
- Err(SignError::Verification(_)) => Err(OffersMessage::InvoiceError(
- InvoiceError::from_string("Failed invoice signature verification".to_string())
- )),
- });
- match response {
- Ok(invoice) => Some(invoice),
- Err(error) => Some(error),
- }
+ let relative_expiry = DEFAULT_RELATIVE_EXPIRY.as_secs() as u32;
+ let (payment_hash, payment_secret) = match self.create_inbound_payment(
+ Some(amount_msats), relative_expiry, None
+ ) {
+ Ok((payment_hash, payment_secret)) => (payment_hash, payment_secret),
+ Err(()) => {
+ let error = Bolt12SemanticError::InvalidAmount;
+ return Some(OffersMessage::InvoiceError(error.into()));
},
+ };
+
+ let payment_paths = match self.create_blinded_payment_paths(
+ amount_msats, payment_secret
+ ) {
+ Ok(payment_paths) => payment_paths,
Err(()) => {
- Some(OffersMessage::InvoiceError(Bolt12SemanticError::InvalidAmount.into()))
+ let error = Bolt12SemanticError::MissingPaths;
+ return Some(OffersMessage::InvoiceError(error.into()));
},
+ };
+
+ #[cfg(feature = "no-std")]
+ let created_at = Duration::from_secs(
+ self.highest_seen_timestamp.load(Ordering::Acquire) as u64
+ );
+
+ if invoice_request.keys.is_some() {
+ #[cfg(not(feature = "no-std"))]
+ let builder = invoice_request.respond_using_derived_keys(
+ payment_paths, payment_hash
+ );
+ #[cfg(feature = "no-std")]
+ let builder = invoice_request.respond_using_derived_keys_no_std(
+ payment_paths, payment_hash, created_at
+ );
+ match builder.and_then(|b| b.allow_mpp().build_and_sign(secp_ctx)) {
+ Ok(invoice) => Some(OffersMessage::Invoice(invoice)),
+ Err(error) => Some(OffersMessage::InvoiceError(error.into())),
+ }
+ } else {
+ #[cfg(not(feature = "no-std"))]
+ let builder = invoice_request.respond_with(payment_paths, payment_hash);
+ #[cfg(feature = "no-std")]
+ let builder = invoice_request.respond_with_no_std(
+ payment_paths, payment_hash, created_at
+ );
+ let response = builder.and_then(|builder| builder.allow_mpp().build())
+ .map_err(|e| OffersMessage::InvoiceError(e.into()))
+ .and_then(|invoice|
+ match invoice.sign(|invoice| self.node_signer.sign_bolt12_invoice(invoice)) {
+ Ok(invoice) => Ok(OffersMessage::Invoice(invoice)),
+ Err(SignError::Signing(())) => Err(OffersMessage::InvoiceError(
+ InvoiceError::from_string("Failed signing invoice".to_string())
+ )),
+ Err(SignError::Verification(_)) => Err(OffersMessage::InvoiceError(
+ InvoiceError::from_string("Failed invoice signature verification".to_string())
+ )),
+ });
+ match response {
+ Ok(invoice) => Some(invoice),
+ Err(error) => Some(error),
+ }
}
},
OffersMessage::Invoice(invoice) => {
let channel_count: u64 = Readable::read(reader)?;
let mut funding_txo_set = HashSet::with_capacity(cmp::min(channel_count as usize, 128));
let mut funded_peer_channels: HashMap<PublicKey, HashMap<ChannelId, ChannelPhase<SP>>> = HashMap::with_capacity(cmp::min(channel_count as usize, 128));
- let mut id_to_peer = HashMap::with_capacity(cmp::min(channel_count as usize, 128));
+ let mut outpoint_to_peer = HashMap::with_capacity(cmp::min(channel_count as usize, 128));
let mut short_to_chan_info = HashMap::with_capacity(cmp::min(channel_count as usize, 128));
let mut channel_closures = VecDeque::new();
let mut close_background_events = Vec::new();
log_error!(logger, " The ChannelMonitor for channel {} is at counterparty commitment transaction number {} but the ChannelManager is at counterparty commitment transaction number {}.",
&channel.context.channel_id(), monitor.get_cur_counterparty_commitment_number(), channel.get_cur_counterparty_commitment_transaction_number());
}
- let mut shutdown_result = channel.context.force_shutdown(true);
+ let mut shutdown_result = channel.context.force_shutdown(true, ClosureReason::OutdatedChannelManager);
if shutdown_result.unbroadcasted_batch_funding_txid.is_some() {
return Err(DecodeError::InvalidValue);
}
if let Some(short_channel_id) = channel.context.get_short_channel_id() {
short_to_chan_info.insert(short_channel_id, (channel.context.get_counterparty_node_id(), channel.context.channel_id()));
}
- if channel.context.is_funding_broadcast() {
- id_to_peer.insert(channel.context.channel_id(), channel.context.get_counterparty_node_id());
+ if let Some(funding_txo) = channel.context.get_funding_txo() {
+ outpoint_to_peer.insert(funding_txo, channel.context.get_counterparty_node_id());
}
match funded_peer_channels.entry(channel.context.get_counterparty_node_id()) {
hash_map::Entry::Occupied(mut entry) => {
// If we were persisted and shut down while the initial ChannelMonitor persistence
// was in-progress, we never broadcasted the funding transaction and can still
// safely discard the channel.
- let _ = channel.context.force_shutdown(false);
+ let _ = channel.context.force_shutdown(false, ClosureReason::DisconnectedPeer);
channel_closures.push_back((events::Event::ChannelClosed {
channel_id: channel.context.channel_id(),
user_channel_id: channel.context.get_user_id(),
&funding_txo.to_channel_id());
let monitor_update = ChannelMonitorUpdate {
update_id: CLOSED_CHANNEL_UPDATE_ID,
+ counterparty_node_id: None,
updates: vec![ChannelMonitorUpdateStep::ChannelForceClosed { should_broadcast: true }],
};
close_background_events.push(BackgroundEvent::ClosedMonitorUpdateRegeneratedOnStartup((*funding_txo, monitor_update)));
// We only rebuild the pending payments map if we were most recently serialized by
// 0.0.102+
for (_, monitor) in args.channel_monitors.iter() {
- let counterparty_opt = id_to_peer.get(&monitor.get_funding_txo().0.to_channel_id());
+ let counterparty_opt = outpoint_to_peer.get(&monitor.get_funding_txo().0);
if counterparty_opt.is_none() {
let logger = WithChannelMonitor::from(&args.logger, monitor);
for (htlc_source, (htlc, _)) in monitor.get_pending_or_resolved_outbound_htlcs() {
// without the new monitor persisted - we'll end up right back here on
// restart.
let previous_channel_id = claimable_htlc.prev_hop.outpoint.to_channel_id();
- if let Some(peer_node_id) = id_to_peer.get(&previous_channel_id){
+ if let Some(peer_node_id) = outpoint_to_peer.get(&claimable_htlc.prev_hop.outpoint) {
let peer_state_mutex = per_peer_state.get(peer_node_id).unwrap();
let mut peer_state_lock = peer_state_mutex.lock().unwrap();
let peer_state = &mut *peer_state_lock;
forward_htlcs: Mutex::new(forward_htlcs),
claimable_payments: Mutex::new(ClaimablePayments { claimable_payments, pending_claiming_payments: pending_claiming_payments.unwrap() }),
outbound_scid_aliases: Mutex::new(outbound_scid_aliases),
- id_to_peer: Mutex::new(id_to_peer),
+ outpoint_to_peer: Mutex::new(outpoint_to_peer),
short_to_chan_info: FairRwLock::new(short_to_chan_info),
fake_scid_rand_bytes: fake_scid_rand_bytes.unwrap(),
}
#[test]
- fn test_id_to_peer_coverage() {
- // Test that the `ChannelManager:id_to_peer` contains channels which have been assigned
+ fn test_outpoint_to_peer_coverage() {
+		// Test that the `ChannelManager::outpoint_to_peer` map contains channels which have been assigned
// a `channel_id` (i.e. have had the funding tx created), and that they are removed once
// the channel is successfully closed.
let chanmon_cfgs = create_chanmon_cfgs(2);
let accept_channel = get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id());
nodes[0].node.handle_accept_channel(&nodes[1].node.get_our_node_id(), &accept_channel);
- let (temporary_channel_id, tx, _funding_output) = create_funding_transaction(&nodes[0], &nodes[1].node.get_our_node_id(), 1_000_000, 42);
+ let (temporary_channel_id, tx, funding_output) = create_funding_transaction(&nodes[0], &nodes[1].node.get_our_node_id(), 1_000_000, 42);
let channel_id = ChannelId::from_bytes(tx.txid().to_byte_array());
{
- // Ensure that the `id_to_peer` map is empty until either party has received the
+ // Ensure that the `outpoint_to_peer` map is empty until either party has received the
		// funding transaction, and has the real `channel_id`.
- assert_eq!(nodes[0].node.id_to_peer.lock().unwrap().len(), 0);
- assert_eq!(nodes[1].node.id_to_peer.lock().unwrap().len(), 0);
+ assert_eq!(nodes[0].node.outpoint_to_peer.lock().unwrap().len(), 0);
+ assert_eq!(nodes[1].node.outpoint_to_peer.lock().unwrap().len(), 0);
}
nodes[0].node.funding_transaction_generated(&temporary_channel_id, &nodes[1].node.get_our_node_id(), tx.clone()).unwrap();
{
- // Assert that `nodes[0]`'s `id_to_peer` map is populated with the channel as soon as
+		// Assert that `nodes[0]`'s `outpoint_to_peer` map is populated with the channel as soon
// as it has the funding transaction.
- let nodes_0_lock = nodes[0].node.id_to_peer.lock().unwrap();
+ let nodes_0_lock = nodes[0].node.outpoint_to_peer.lock().unwrap();
assert_eq!(nodes_0_lock.len(), 1);
- assert!(nodes_0_lock.contains_key(&channel_id));
+ assert!(nodes_0_lock.contains_key(&funding_output));
}
- assert_eq!(nodes[1].node.id_to_peer.lock().unwrap().len(), 0);
+ assert_eq!(nodes[1].node.outpoint_to_peer.lock().unwrap().len(), 0);
let funding_created_msg = get_event_msg!(nodes[0], MessageSendEvent::SendFundingCreated, nodes[1].node.get_our_node_id());
nodes[1].node.handle_funding_created(&nodes[0].node.get_our_node_id(), &funding_created_msg);
{
- let nodes_0_lock = nodes[0].node.id_to_peer.lock().unwrap();
+ let nodes_0_lock = nodes[0].node.outpoint_to_peer.lock().unwrap();
assert_eq!(nodes_0_lock.len(), 1);
- assert!(nodes_0_lock.contains_key(&channel_id));
+ assert!(nodes_0_lock.contains_key(&funding_output));
}
expect_channel_pending_event(&nodes[1], &nodes[0].node.get_our_node_id());
{
- // Assert that `nodes[1]`'s `id_to_peer` map is populated with the channel as soon as
- // as it has the funding transaction.
- let nodes_1_lock = nodes[1].node.id_to_peer.lock().unwrap();
+ // Assert that `nodes[1]`'s `outpoint_to_peer` map is populated with the channel as
+ // soon as it has the funding transaction.
+ let nodes_1_lock = nodes[1].node.outpoint_to_peer.lock().unwrap();
assert_eq!(nodes_1_lock.len(), 1);
- assert!(nodes_1_lock.contains_key(&channel_id));
+ assert!(nodes_1_lock.contains_key(&funding_output));
}
check_added_monitors!(nodes[1], 1);
let funding_signed = get_event_msg!(nodes[1], MessageSendEvent::SendFundingSigned, nodes[0].node.get_our_node_id());
let closing_signed_node_0 = get_event_msg!(nodes[0], MessageSendEvent::SendClosingSigned, nodes[1].node.get_our_node_id());
nodes[1].node.handle_closing_signed(&nodes[0].node.get_our_node_id(), &closing_signed_node_0);
{
- // Assert that the channel is kept in the `id_to_peer` map for both nodes until the
+ // Assert that the channel is kept in the `outpoint_to_peer` map for both nodes until the
		// channel can be fully closed by both parties (i.e. no outstanding HTLCs exist, the
		// fee for the closing transaction has been negotiated and each party has the other
		// party's signature for the fee-negotiated closing transaction).
- let nodes_0_lock = nodes[0].node.id_to_peer.lock().unwrap();
+ let nodes_0_lock = nodes[0].node.outpoint_to_peer.lock().unwrap();
assert_eq!(nodes_0_lock.len(), 1);
- assert!(nodes_0_lock.contains_key(&channel_id));
+ assert!(nodes_0_lock.contains_key(&funding_output));
}
{
// At this stage, `nodes[1]` has proposed a fee for the closing transaction in the
// `handle_closing_signed` call above. As `nodes[1]` has not yet received the signature
// from `nodes[0]` for the closing transaction with the proposed fee, the channel is
- // kept in the `nodes[1]`'s `id_to_peer` map.
- let nodes_1_lock = nodes[1].node.id_to_peer.lock().unwrap();
+ // kept in the `nodes[1]`'s `outpoint_to_peer` map.
+ let nodes_1_lock = nodes[1].node.outpoint_to_peer.lock().unwrap();
assert_eq!(nodes_1_lock.len(), 1);
- assert!(nodes_1_lock.contains_key(&channel_id));
+ assert!(nodes_1_lock.contains_key(&funding_output));
}
nodes[0].node.handle_closing_signed(&nodes[1].node.get_our_node_id(), &get_event_msg!(nodes[1], MessageSendEvent::SendClosingSigned, nodes[0].node.get_our_node_id()));
// `nodes[0]` accepts `nodes[1]`'s proposed fee for the closing transaction, and
// therefore has all it needs to fully close the channel (both signatures for the
// closing transaction).
- // Assert that the channel is removed from `nodes[0]`'s `id_to_peer` map as it can be
+ // Assert that the channel is removed from `nodes[0]`'s `outpoint_to_peer` map as it can be
// fully closed by `nodes[0]`.
- assert_eq!(nodes[0].node.id_to_peer.lock().unwrap().len(), 0);
+ assert_eq!(nodes[0].node.outpoint_to_peer.lock().unwrap().len(), 0);
- // Assert that the channel is still in `nodes[1]`'s `id_to_peer` map, as `nodes[1]`
+ // Assert that the channel is still in `nodes[1]`'s `outpoint_to_peer` map, as `nodes[1]`
// doesn't have `nodes[0]`'s signature for the closing transaction yet.
- let nodes_1_lock = nodes[1].node.id_to_peer.lock().unwrap();
+ let nodes_1_lock = nodes[1].node.outpoint_to_peer.lock().unwrap();
assert_eq!(nodes_1_lock.len(), 1);
- assert!(nodes_1_lock.contains_key(&channel_id));
+ assert!(nodes_1_lock.contains_key(&funding_output));
}
let (_nodes_0_update, closing_signed_node_0) = get_closing_signed_broadcast!(nodes[0].node, nodes[1].node.get_our_node_id());
nodes[1].node.handle_closing_signed(&nodes[0].node.get_our_node_id(), &closing_signed_node_0.unwrap());
{
- // Assert that the channel has now been removed from both parties `id_to_peer` map once
+		// Assert that the channel has now been removed from both parties' `outpoint_to_peer` maps once
// they both have everything required to fully close the channel.
- assert_eq!(nodes[1].node.id_to_peer.lock().unwrap().len(), 0);
+ assert_eq!(nodes[1].node.outpoint_to_peer.lock().unwrap().len(), 0);
}
let (_nodes_1_update, _none) = get_closing_signed_broadcast!(nodes[1].node, nodes[0].node.get_our_node_id());
//! (see [BOLT-4](https://github.com/lightning/bolts/blob/master/04-onion-routing.md#basic-multi-part-payments) for more information).
//! - `Wumbo` - requires/supports that a node create large channels. Called `option_support_large_channel` in the spec.
//! (see [BOLT-2](https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#the-open_channel-message) for more information).
+//! - `AnchorsZeroFeeHtlcTx` - requires/supports that commitment transactions include anchor outputs
+//! and HTLC transactions are pre-signed with zero fee (see
+//! [BOLT-3](https://github.com/lightning/bolts/blob/master/03-transactions.md) for more
+//! information).
+//! - `RouteBlinding` - requires/supports that a node can relay payments over blinded paths
+//! (see [BOLT-4](https://github.com/lightning/bolts/blob/master/04-onion-routing.md#route-blinding) for more information).
//! - `ShutdownAnySegwit` - requires/supports that future segwit versions are allowed in `shutdown`
//! (see [BOLT-2](https://github.com/lightning/bolts/blob/master/02-peer-protocol.md) for more information).
//! - `OnionMessages` - requires/supports forwarding onion messages
//! for more info).
//! - `Keysend` - send funds to a node without an invoice
//! (see the [`Keysend` feature assignment proposal](https://github.com/lightning/bolts/issues/605#issuecomment-606679798) for more information).
-//! - `AnchorsZeroFeeHtlcTx` - requires/supports that commitment transactions include anchor outputs
-//! and HTLC transactions are pre-signed with zero fee (see
-//! [BOLT-3](https://github.com/lightning/bolts/blob/master/03-transactions.md) for more
-//! information).
//!
//! LDK knows about the following features, but does not support them:
//! - `AnchorsNonzeroFeeHtlcTx` - the initial version of anchor outputs, which was later found to be
// Byte 2
BasicMPP | Wumbo | AnchorsNonzeroFeeHtlcTx | AnchorsZeroFeeHtlcTx,
// Byte 3
- ShutdownAnySegwit | Taproot,
+ RouteBlinding | ShutdownAnySegwit | Taproot,
// Byte 4
OnionMessages,
// Byte 5
// Byte 2
BasicMPP | Wumbo | AnchorsNonzeroFeeHtlcTx | AnchorsZeroFeeHtlcTx,
// Byte 3
- ShutdownAnySegwit | Taproot,
+ RouteBlinding | ShutdownAnySegwit | Taproot,
// Byte 4
OnionMessages,
// Byte 5
define_feature!(23, AnchorsZeroFeeHtlcTx, [InitContext, NodeContext, ChannelTypeContext],
"Feature flags for `option_anchors_zero_fee_htlc_tx`.", set_anchors_zero_fee_htlc_tx_optional,
set_anchors_zero_fee_htlc_tx_required, supports_anchors_zero_fee_htlc_tx, requires_anchors_zero_fee_htlc_tx);
+ define_feature!(25, RouteBlinding, [InitContext, NodeContext],
+ "Feature flags for `option_route_blinding`.", set_route_blinding_optional,
+ set_route_blinding_required, supports_route_blinding, requires_route_blinding);
define_feature!(27, ShutdownAnySegwit, [InitContext, NodeContext],
"Feature flags for `opt_shutdown_anysegwit`.", set_shutdown_any_segwit_optional,
set_shutdown_any_segwit_required, supports_shutdown_anysegwit, requires_shutdown_anysegwit);
}
impl<T: sealed::Context> Hash for Features<T> {
fn hash<H: Hasher>(&self, hasher: &mut H) {
- self.flags.hash(hasher);
+ let mut nonzero_flags = &self.flags[..];
+ while nonzero_flags.last() == Some(&0) {
+ nonzero_flags = &nonzero_flags[..nonzero_flags.len() - 1];
+ }
+ nonzero_flags.hash(hasher);
}
}
impl<T: sealed::Context> PartialEq for Features<T> {
fn eq(&self, o: &Self) -> bool {
- self.flags.eq(&o.flags)
+ let mut o_iter = o.flags.iter();
+ let mut self_iter = self.flags.iter();
+ loop {
+ match (o_iter.next(), self_iter.next()) {
+ (Some(o), Some(us)) => if o != us { return false },
+ (Some(b), None) | (None, Some(b)) => if *b != 0 { return false },
+ (None, None) => return true,
+ }
+ }
}
}
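A self-contained sketch of the trailing-zero-insensitive comparison above: two feature-flag byte vectors are treated as equal if they only differ by trailing zero bytes, the same rule the new `Hash` impl applies by trimming zeros before hashing. The helper below is illustrative, not LDK code.

```rust
fn flags_eq(a: &[u8], b: &[u8]) -> bool {
	let (mut a_iter, mut b_iter) = (a.iter(), b.iter());
	loop {
		match (a_iter.next(), b_iter.next()) {
			(Some(x), Some(y)) => if x != y { return false },
			(Some(z), None) | (None, Some(z)) => if *z != 0 { return false },
			(None, None) => return true,
		}
	}
}

fn main() {
	assert!(flags_eq(&[0b0000_0001, 0, 0], &[0b0000_0001])); // excess zeros ignored
	assert!(!flags_eq(&[0b0000_0001, 0b0000_0001], &[0b0000_0001]));
}
```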
impl<T: sealed::Context> PartialOrd for Features<T> {
init_features.set_basic_mpp_optional();
init_features.set_wumbo_optional();
init_features.set_anchors_zero_fee_htlc_tx_optional();
+ init_features.set_route_blinding_optional();
init_features.set_shutdown_any_segwit_optional();
init_features.set_onion_messages_optional();
init_features.set_channel_type_optional();
// Check that the flags are as expected:
// - option_data_loss_protect (req)
// - var_onion_optin (req) | static_remote_key (req) | payment_secret(req)
- // - basic_mpp | wumbo | anchors_zero_fee_htlc_tx
- // - opt_shutdown_anysegwit
+ // - basic_mpp | wumbo | option_anchors_zero_fee_htlc_tx
+ // - option_route_blinding | opt_shutdown_anysegwit
// - onion_messages
// - option_channel_type | option_scid_alias
// - option_zeroconf
assert_eq!(node_features.flags[0], 0b00000001);
assert_eq!(node_features.flags[1], 0b01010001);
assert_eq!(node_features.flags[2], 0b10001010);
- assert_eq!(node_features.flags[3], 0b00001000);
+ assert_eq!(node_features.flags[3], 0b00001010);
assert_eq!(node_features.flags[4], 0b10000000);
assert_eq!(node_features.flags[5], 0b10100000);
assert_eq!(node_features.flags[6], 0b00001000);
assert!(!converted_features.supports_any_optional_bits());
assert!(converted_features.requires_static_remote_key());
}
+
+ #[test]
+ #[cfg(feature = "std")]
+ fn test_excess_zero_bytes_ignored() {
+ // Checks that `Hash` and `PartialEq` ignore excess zero bytes, which may appear due to
+	// feature conversion or because a peer serialized their features poorly.
+ use std::collections::hash_map::DefaultHasher;
+ use std::hash::{Hash, Hasher};
+
+ let mut zerod_features = InitFeatures::empty();
+ zerod_features.flags = vec![0];
+ let empty_features = InitFeatures::empty();
+ assert!(empty_features.flags.is_empty());
+
+ assert_eq!(zerod_features, empty_features);
+
+ let mut zerod_hash = DefaultHasher::new();
+ zerod_features.hash(&mut zerod_hash);
+ let mut empty_hash = DefaultHasher::new();
+ empty_features.hash(&mut empty_hash);
+ assert_eq!(zerod_hash.finish(), empty_hash.finish());
+ }
}
(tx, as_channel_ready.channel_id)
}
-pub fn create_chan_between_nodes_with_value_init<'a, 'b, 'c>(node_a: &Node<'a, 'b, 'c>, node_b: &Node<'a, 'b, 'c>, channel_value: u64, push_msat: u64) -> Transaction {
+pub fn exchange_open_accept_chan<'a, 'b, 'c>(node_a: &Node<'a, 'b, 'c>, node_b: &Node<'a, 'b, 'c>, channel_value: u64, push_msat: u64) -> ChannelId {
let create_chan_id = node_a.node.create_channel(node_b.node.get_our_node_id(), channel_value, push_msat, 42, None, None).unwrap();
let open_channel_msg = get_event_msg!(node_a, MessageSendEvent::SendOpenChannel, node_b.node.get_our_node_id());
assert_eq!(open_channel_msg.temporary_channel_id, create_chan_id);
node_a.node.handle_accept_channel(&node_b.node.get_our_node_id(), &accept_channel_msg);
assert_ne!(node_b.node.list_channels().iter().find(|channel| channel.channel_id == create_chan_id).unwrap().user_channel_id, 0);
+ create_chan_id
+}
+
+pub fn create_chan_between_nodes_with_value_init<'a, 'b, 'c>(node_a: &Node<'a, 'b, 'c>, node_b: &Node<'a, 'b, 'c>, channel_value: u64, push_msat: u64) -> Transaction {
+ let create_chan_id = exchange_open_accept_chan(node_a, node_b, channel_value, push_msat);
sign_funding_transaction(node_a, node_b, channel_value, create_chan_id)
}
pub reason: Option<ClosureReason>,
}
+impl ExpectedCloseEvent {
+ pub fn from_id_reason(channel_id: ChannelId, discard_funding: bool, reason: ClosureReason) -> Self {
+ Self {
+ channel_capacity_sats: None,
+ channel_id: Some(channel_id),
+ counterparty_node_id: None,
+ discard_funding,
+ reason: Some(reason),
+ }
+ }
+}
+
/// Check that multiple channel closing events have been issued.
pub fn check_closed_events(node: &Node, expected_close_events: &[ExpectedCloseEvent]) {
let closed_events_count = expected_close_events.len();
//check to see if the funder, who sent the update_fee request, can afford the new fee (funder_balance >= fee+channel_reserve)
//Should produce and error.
nodes[1].node.handle_commitment_signed(&nodes[0].node.get_our_node_id(), &commit_signed_msg);
- nodes[1].logger.assert_log("lightning::ln::channelmanager", "Funding remote cannot afford proposed new fee".to_string(), 1);
+ nodes[1].logger.assert_log_contains("lightning::ln::channelmanager", "Funding remote cannot afford proposed new fee", 3);
check_added_monitors!(nodes[1], 1);
check_closed_broadcast!(nodes[1], true);
check_closed_event!(nodes[1], 1, ClosureReason::ProcessingError { err: String::from("Funding remote cannot afford proposed new fee") },
nodes[0].node.handle_update_add_htlc(&nodes[1].node.get_our_node_id(), &msg);
// Check that the payment failed and the channel is closed in response to the malicious UpdateAdd.
- nodes[0].logger.assert_log("lightning::ln::channelmanager", "Cannot accept HTLC that would put our balance under counterparty-announced channel reserve value".to_string(), 1);
+ nodes[0].logger.assert_log_contains("lightning::ln::channelmanager", "Cannot accept HTLC that would put our balance under counterparty-announced channel reserve value", 3);
assert_eq!(nodes[0].node.list_channels().len(), 0);
let err_msg = check_closed_broadcast!(nodes[0], true).unwrap();
assert_eq!(err_msg.data, "Cannot accept HTLC that would put our balance under counterparty-announced channel reserve value");
nodes[1].node.handle_update_add_htlc(&nodes[0].node.get_our_node_id(), &msg);
// Check that the payment failed and the channel is closed in response to the malicious UpdateAdd.
- nodes[1].logger.assert_log("lightning::ln::channelmanager", "Remote HTLC add would put them under remote reserve value".to_string(), 1);
+ nodes[1].logger.assert_log_contains("lightning::ln::channelmanager", "Remote HTLC add would put them under remote reserve value", 3);
assert_eq!(nodes[1].node.list_channels().len(), 1);
let err_msg = check_closed_broadcast!(nodes[1], true).unwrap();
assert_eq!(err_msg.data, "Remote HTLC add would put them under remote reserve value");
let events = nodes[1].node.get_and_clear_pending_events();
assert_eq!(events.len(), if deliver_bs_raa { 3 + nodes.len() - 1 } else { 4 + nodes.len() });
- match events[0] {
- Event::ChannelClosed { reason: ClosureReason::CommitmentTxConfirmed, .. } => { },
- _ => panic!("Unexepected event"),
- }
- match events[1] {
- Event::PaymentPathFailed { ref payment_hash, .. } => {
- assert_eq!(*payment_hash, fourth_payment_hash);
- },
- _ => panic!("Unexpected event"),
- }
- match events[2] {
- Event::PaymentFailed { ref payment_hash, .. } => {
- assert_eq!(*payment_hash, fourth_payment_hash);
- },
- _ => panic!("Unexpected event"),
- }
+ assert!(events.iter().any(|ev| matches!(
+ ev,
+ Event::ChannelClosed { reason: ClosureReason::CommitmentTxConfirmed, .. }
+ )));
+ assert!(events.iter().any(|ev| matches!(
+ ev,
+ Event::PaymentPathFailed { ref payment_hash, .. } if *payment_hash == fourth_payment_hash
+ )));
+ assert!(events.iter().any(|ev| matches!(
+ ev,
+ Event::PaymentFailed { ref payment_hash, .. } if *payment_hash == fourth_payment_hash
+ )));
nodes[1].node.process_pending_htlc_forwards();
check_added_monitors!(nodes[1], 1);
updates.update_add_htlcs[0].amount_msat = 0;
nodes[1].node.handle_update_add_htlc(&nodes[0].node.get_our_node_id(), &updates.update_add_htlcs[0]);
- nodes[1].logger.assert_log("lightning::ln::channelmanager", "Remote side tried to send a 0-msat HTLC".to_string(), 1);
+ nodes[1].logger.assert_log_contains("lightning::ln::channelmanager", "Remote side tried to send a 0-msat HTLC", 3);
check_closed_broadcast!(nodes[1], true).unwrap();
check_added_monitors!(nodes[1], 1);
check_closed_event!(nodes[1], 1, ClosureReason::ProcessingError { err: "Remote side tried to send a 0-msat HTLC".to_string() },
}
}
+#[test]
+fn test_peer_funding_sidechannel() {
+ // Test that if a peer somehow learns which txid we'll use for our channel funding before we
+ // receive `funding_transaction_generated` the peer cannot cause us to crash. We'd previously
+	// receive `funding_transaction_generated`, the peer cannot cause us to crash. We'd previously
+ // the txid and panicked if the peer tried to open a redundant channel to us with the same
+ // funding outpoint.
+ //
+ // While this assumption is generally safe, some users may have out-of-band protocols where
+ // they notify their LSP about a funding outpoint first, or this may be violated in the future
+ // with collaborative transaction construction protocols, i.e. dual-funding.
+ let chanmon_cfgs = create_chanmon_cfgs(3);
+ let node_cfgs = create_node_cfgs(3, &chanmon_cfgs);
+ let node_chanmgrs = create_node_chanmgrs(3, &node_cfgs, &[None, None, None]);
+ let nodes = create_network(3, &node_cfgs, &node_chanmgrs);
+
+ let temp_chan_id_ab = exchange_open_accept_chan(&nodes[0], &nodes[1], 1_000_000, 0);
+ let temp_chan_id_ca = exchange_open_accept_chan(&nodes[2], &nodes[0], 1_000_000, 0);
+
+ let (_, tx, funding_output) =
+ create_funding_transaction(&nodes[0], &nodes[1].node.get_our_node_id(), 1_000_000, 42);
+
+ let cs_funding_events = nodes[2].node.get_and_clear_pending_events();
+ assert_eq!(cs_funding_events.len(), 1);
+ match cs_funding_events[0] {
+ Event::FundingGenerationReady { .. } => {}
+ _ => panic!("Unexpected event {:?}", cs_funding_events),
+ }
+
+ nodes[2].node.funding_transaction_generated_unchecked(&temp_chan_id_ca, &nodes[0].node.get_our_node_id(), tx.clone(), funding_output.index).unwrap();
+ let funding_created_msg = get_event_msg!(nodes[2], MessageSendEvent::SendFundingCreated, nodes[0].node.get_our_node_id());
+ nodes[0].node.handle_funding_created(&nodes[2].node.get_our_node_id(), &funding_created_msg);
+ get_event_msg!(nodes[0], MessageSendEvent::SendFundingSigned, nodes[2].node.get_our_node_id());
+ expect_channel_pending_event(&nodes[0], &nodes[2].node.get_our_node_id());
+ check_added_monitors!(nodes[0], 1);
+
+ let res = nodes[0].node.funding_transaction_generated(&temp_chan_id_ab, &nodes[1].node.get_our_node_id(), tx.clone());
+ let err_msg = format!("{:?}", res.unwrap_err());
+ assert!(err_msg.contains("An existing channel using outpoint "));
+ assert!(err_msg.contains(" is open with peer"));
+ // Even though the last funding_transaction_generated errored, it still generated a
+ // SendFundingCreated. However, when the peer responds with a funding_signed it will send the
+ // appropriate error message.
+ let as_funding_created = get_event_msg!(nodes[0], MessageSendEvent::SendFundingCreated, nodes[1].node.get_our_node_id());
+ nodes[1].node.handle_funding_created(&nodes[0].node.get_our_node_id(), &as_funding_created);
+ check_added_monitors!(nodes[1], 1);
+ expect_channel_pending_event(&nodes[1], &nodes[0].node.get_our_node_id());
+ let reason = ClosureReason::ProcessingError { err: format!("An existing channel using outpoint {} is open with peer {}", funding_output, nodes[2].node.get_our_node_id()), };
+ check_closed_events(&nodes[0], &[ExpectedCloseEvent::from_id_reason(funding_output.to_channel_id(), true, reason)]);
+
+ let funding_signed = get_event_msg!(nodes[1], MessageSendEvent::SendFundingSigned, nodes[0].node.get_our_node_id());
+ nodes[0].node.handle_funding_signed(&nodes[1].node.get_our_node_id(), &funding_signed);
+ get_err_msg(&nodes[0], &nodes[1].node.get_our_node_id());
+}
+
+#[test]
+fn test_duplicate_conflicting_funding_from_second_peer() {
+ // Test that if a user tries to fund a channel with a funding outpoint they'd previously used
+ // we don't try to remove the previous ChannelMonitor. This is largely a test to ensure we
+	// don't regress in the fuzzer, as such funding getting past our outpoint-matches checks
+	// implies the user (and our counterparty) has reused cryptographic keys across channels, which
+	// we require the user not to do.
+ let chanmon_cfgs = create_chanmon_cfgs(4);
+ let node_cfgs = create_node_cfgs(4, &chanmon_cfgs);
+ let node_chanmgrs = create_node_chanmgrs(4, &node_cfgs, &[None, None, None, None]);
+ let nodes = create_network(4, &node_cfgs, &node_chanmgrs);
+
+ let temp_chan_id = exchange_open_accept_chan(&nodes[0], &nodes[1], 1_000_000, 0);
+
+ let (_, tx, funding_output) =
+ create_funding_transaction(&nodes[0], &nodes[1].node.get_our_node_id(), 1_000_000, 42);
+
+ // Now that we have a funding outpoint, create a dummy `ChannelMonitor` and insert it into
+ // nodes[0]'s ChainMonitor so that the initial `ChannelMonitor` write fails.
+ let dummy_chan_id = create_chan_between_nodes(&nodes[2], &nodes[3]).3;
+ let dummy_monitor = get_monitor!(nodes[2], dummy_chan_id).clone();
+ nodes[0].chain_monitor.chain_monitor.watch_channel(funding_output, dummy_monitor).unwrap();
+
+ nodes[0].node.funding_transaction_generated(&temp_chan_id, &nodes[1].node.get_our_node_id(), tx.clone()).unwrap();
+
+ let mut funding_created_msg = get_event_msg!(nodes[0], MessageSendEvent::SendFundingCreated, nodes[1].node.get_our_node_id());
+ nodes[1].node.handle_funding_created(&nodes[0].node.get_our_node_id(), &funding_created_msg);
+ let funding_signed_msg = get_event_msg!(nodes[1], MessageSendEvent::SendFundingSigned, nodes[0].node.get_our_node_id());
+ check_added_monitors!(nodes[1], 1);
+ expect_channel_pending_event(&nodes[1], &nodes[0].node.get_our_node_id());
+
+ nodes[0].node.handle_funding_signed(&nodes[1].node.get_our_node_id(), &funding_signed_msg);
+ // At this point, the channel should be closed, after having generated one monitor write (the
+ // watch_channel call which failed), but zero monitor updates.
+ check_added_monitors!(nodes[0], 1);
+ get_err_msg(&nodes[0], &nodes[1].node.get_our_node_id());
+ let err_reason = ClosureReason::ProcessingError { err: "Channel funding outpoint was a duplicate".to_owned() };
+ check_closed_events(&nodes[0], &[ExpectedCloseEvent::from_id_reason(funding_signed_msg.channel_id, true, err_reason)]);
+}
+
+#[test]
+fn test_duplicate_funding_err_in_funding() {
+ // Test that if we have a live channel with one peer, then another peer comes along and tries
+ // to create a second channel with the same txid we'll fail and not overwrite the
+	// to create a second channel with the same txid, we'll fail and not overwrite the
+ //
+ // This was previously broken.
+ let chanmon_cfgs = create_chanmon_cfgs(3);
+ let node_cfgs = create_node_cfgs(3, &chanmon_cfgs);
+ let node_chanmgrs = create_node_chanmgrs(3, &node_cfgs, &[None, None, None]);
+ let nodes = create_network(3, &node_cfgs, &node_chanmgrs);
+
+ let (_, _, _, real_channel_id, funding_tx) = create_chan_between_nodes(&nodes[0], &nodes[1]);
+ let real_chan_funding_txo = chain::transaction::OutPoint { txid: funding_tx.txid(), index: 0 };
+ assert_eq!(real_chan_funding_txo.to_channel_id(), real_channel_id);
+
+ nodes[2].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None, None).unwrap();
+ let mut open_chan_msg = get_event_msg!(nodes[2], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
+ let node_c_temp_chan_id = open_chan_msg.temporary_channel_id;
+ open_chan_msg.temporary_channel_id = real_channel_id;
+ nodes[1].node.handle_open_channel(&nodes[2].node.get_our_node_id(), &open_chan_msg);
+ let mut accept_chan_msg = get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[2].node.get_our_node_id());
+ accept_chan_msg.temporary_channel_id = node_c_temp_chan_id;
+ nodes[2].node.handle_accept_channel(&nodes[1].node.get_our_node_id(), &accept_chan_msg);
+
+ // Now that we have a second channel with the same funding txo, send a bogus funding message
+ // and let nodes[1] remove the inbound channel.
+ let (_, funding_tx, _) = create_funding_transaction(&nodes[2], &nodes[1].node.get_our_node_id(), 100_000, 42);
+
+ nodes[2].node.funding_transaction_generated(&node_c_temp_chan_id, &nodes[1].node.get_our_node_id(), funding_tx).unwrap();
+
+ let mut funding_created_msg = get_event_msg!(nodes[2], MessageSendEvent::SendFundingCreated, nodes[1].node.get_our_node_id());
+ funding_created_msg.temporary_channel_id = real_channel_id;
+ // Make the signature invalid by changing the funding output
+ funding_created_msg.funding_output_index += 10;
+ nodes[1].node.handle_funding_created(&nodes[2].node.get_our_node_id(), &funding_created_msg);
+ get_err_msg(&nodes[1], &nodes[2].node.get_our_node_id());
+ let err = "Invalid funding_created signature from peer".to_owned();
+ let reason = ClosureReason::ProcessingError { err };
+ let expected_closing = ExpectedCloseEvent::from_id_reason(real_channel_id, false, reason);
+ check_closed_events(&nodes[1], &[expected_closing]);
+
+ assert_eq!(
+ *nodes[1].node.outpoint_to_peer.lock().unwrap().get(&real_chan_funding_txo).unwrap(),
+ nodes[0].node.get_our_node_id()
+ );
+}
+
#[test]
fn test_duplicate_chan_id() {
// Test that if a given peer tries to open a channel with the same channel_id as one that is
chan.get_funding_created(tx.clone(), funding_outpoint, false, &&logger).map_err(|_| ()).unwrap()
},
_ => panic!("Unexpected ChannelPhase variant"),
- }
+ }.unwrap()
};
check_added_monitors!(nodes[0], 0);
- nodes[1].node.handle_funding_created(&nodes[0].node.get_our_node_id(), &funding_created.unwrap());
+ nodes[1].node.handle_funding_created(&nodes[0].node.get_our_node_id(), &funding_created);
// At this point we'll look up if the channel_id is present and immediately fail the channel
// without trying to persist the `ChannelMonitor`.
check_added_monitors!(nodes[1], 0);
+ check_closed_events(&nodes[1], &[
+ ExpectedCloseEvent::from_id_reason(funding_created.temporary_channel_id, false, ClosureReason::ProcessingError {
+ err: "Already had channel with the new channel_id".to_owned()
+ })
+ ]);
+
// ...still, nodes[1] will reject the duplicate channel.
{
let events = nodes[1].node.get_and_clear_pending_msg_events();
#[cfg(test)]
#[allow(unused_mut)]
mod shutdown_tests;
-#[cfg(test)]
+#[cfg(all(test, async_signing))]
#[allow(unused_mut)]
mod async_signer_tests;
fn provided_init_features(&self, their_node_id: &PublicKey) -> InitFeatures;
}
+#[derive(Clone)]
+#[cfg_attr(test, derive(Debug, PartialEq))]
+/// Information communicated in the onion to the recipient for multi-part tracking and proof that
+/// the payment is associated with an invoice.
+pub struct FinalOnionHopData {
+ /// When sending a multi-part payment, this secret is used to identify a payment across HTLCs.
+ /// Because it is generated by the recipient and included in the invoice, it also provides
+ /// proof to the recipient that the payment was sent by someone with the generated invoice.
+ pub payment_secret: PaymentSecret,
+ /// The intended total amount that this payment is for.
+ ///
+ /// Message serialization may panic if this value is more than 21 million Bitcoin.
+ pub total_msat: u64,
+}
+
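A hedged construction sketch for the now-public struct above. It assumes the path `lightning::ln::msgs::FinalOnionHopData` and the `PaymentSecret` newtype over a 32-byte array; anything not shown in the patch is an assumption.

```rust
use lightning::ln::PaymentSecret;
use lightning::ln::msgs::FinalOnionHopData; // assumed public re-export location

fn main() {
	let hop_data = FinalOnionHopData {
		payment_secret: PaymentSecret([42; 32]),
		total_msat: 250_000, // intended total across all HTLC parts
	};
	assert_eq!(hop_data.total_msat, 250_000);
}
```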
mod fuzzy_internal_msgs {
use bitcoin::secp256k1::PublicKey;
use crate::blinded_path::payment::{PaymentConstraints, PaymentRelay};
use crate::prelude::*;
use crate::ln::{PaymentPreimage, PaymentSecret};
use crate::ln::features::BlindedHopFeatures;
+ use super::FinalOnionHopData;
// These types aren't intended to be pub, but are exposed for direct fuzzing (as we deserialize
// them from untrusted input):
- #[derive(Clone)]
- #[cfg_attr(test, derive(Debug, PartialEq))]
- pub struct FinalOnionHopData {
- pub payment_secret: PaymentSecret,
- /// The total value, in msat, of the payment as received by the ultimate recipient.
- /// Message serialization may panic if this value is more than 21 million Bitcoin.
- pub total_msat: u64,
- }
pub enum InboundOnionPayload {
Forward {
use crate::ln::functional_test_utils::*;
use crate::routing::gossip::NodeId;
#[cfg(feature = "std")]
-use {
- crate::util::time::tests::SinceEpoch,
- std::time::{SystemTime, Instant, Duration}
-};
+use std::time::{SystemTime, Instant, Duration};
+#[cfg(not(feature = "no-std"))]
+use crate::util::time::tests::SinceEpoch;
#[test]
fn mpp_failure() {
use bitcoin::blockdata::constants::ChainHash;
use bitcoin::secp256k1::{self, Secp256k1, SecretKey, PublicKey};
-use crate::sign::{KeysManager, NodeSigner, Recipient};
+use crate::sign::{NodeSigner, Recipient};
use crate::events::{EventHandler, EventsProvider, MessageSendEvent, MessageSendEventsProvider};
use crate::ln::ChannelId;
use crate::ln::features::{InitFeatures, NodeFeatures};
#[cfg(not(c_bindings))]
use crate::onion_message::{SimpleArcOnionMessenger, SimpleRefOnionMessenger};
use crate::onion_message::{CustomOnionMessageHandler, OffersMessage, OffersMessageHandler, OnionMessageContents, PendingOnionMessage};
-use crate::routing::gossip::{NetworkGraph, P2PGossipSync, NodeId, NodeAlias};
+use crate::routing::gossip::{NodeId, NodeAlias};
use crate::util::atomic_counter::AtomicCounter;
use crate::util::logger::{Logger, WithContext};
use crate::util::string::PrintableString;
use crate::prelude::*;
use crate::io;
use alloc::collections::VecDeque;
-use crate::sync::{Arc, Mutex, MutexGuard, FairRwLock};
+use crate::sync::{Mutex, MutexGuard, FairRwLock};
use core::sync::atomic::{AtomicBool, AtomicU32, AtomicI32, Ordering};
use core::{cmp, hash, fmt, mem};
use core::ops::Deref;
use core::convert::Infallible;
-#[cfg(feature = "std")] use std::error;
+#[cfg(feature = "std")]
+use std::error;
+#[cfg(not(c_bindings))]
+use {
+ crate::routing::gossip::{NetworkGraph, P2PGossipSync},
+ crate::sign::KeysManager,
+ crate::sync::Arc,
+};
use bitcoin::hashes::sha256::Hash as Sha256;
use bitcoin::hashes::sha256::HashEngine as Sha256Engine;
use crate::sign::EntropySource;
use crate::chain::transaction::OutPoint;
use crate::events::{ClosureReason, Event, HTLCDestination, MessageSendEvent, MessageSendEventsProvider};
-use crate::ln::channelmanager::{ChannelManager, ChannelManagerReadArgs, PaymentId, Retry, RecipientOnionFields};
+use crate::ln::channelmanager::{ChannelManager, ChannelManagerReadArgs, PaymentId, RecipientOnionFields};
use crate::ln::msgs;
use crate::ln::msgs::{ChannelMessageHandler, RoutingMessageHandler, ErrorAction};
-use crate::routing::router::{RouteParameters, PaymentParameters};
use crate::util::test_channel_signer::TestChannelSigner;
use crate::util::test_utils;
use crate::util::errors::APIError;
use crate::util::ser::{Writeable, ReadableArgs};
use crate::util::config::UserConfig;
-use crate::util::string::UntrustedString;
use bitcoin::hash_types::BlockHash;
#[cfg(feature = "std")]
fn do_test_data_loss_protect(reconnect_panicing: bool, substantially_old: bool, not_stale: bool) {
+ use crate::routing::router::{RouteParameters, PaymentParameters};
+ use crate::ln::channelmanager::Retry;
+ use crate::util::string::UntrustedString;
// When we get a data_loss_protect proving we're behind, we immediately panic as the
// chain::Watch API requirements have been violated (e.g. the user restored from a backup). The
// panic message informs the user they should force-close without broadcasting, which is tested
use crate::events::{Event, EventsProvider};
use crate::ln::features::InitFeatures;
use crate::ln::msgs::{self, DecodeError, OnionMessageHandler, SocketAddress};
-use crate::sign::{NodeSigner, Recipient};
+use crate::sign::{EntropySource, NodeSigner, Recipient};
use crate::util::ser::{FixedLengthReader, LengthReadable, Writeable, Writer};
use crate::util::test_utils;
use super::{CustomOnionMessageHandler, Destination, MessageRouter, OffersMessage, OffersMessageHandler, OnionMessageContents, OnionMessagePath, OnionMessenger, PendingOnionMessage, SendError};
use bitcoin::network::constants::Network;
use bitcoin::hashes::hex::FromHex;
-use bitcoin::secp256k1::{PublicKey, Secp256k1, SecretKey};
+use bitcoin::secp256k1::{PublicKey, Secp256k1, SecretKey, self};
use crate::io;
use crate::io_extras::read_to_end;
Some(vec![SocketAddress::TcpIpV4 { addr: [127, 0, 0, 1], port: 1000 }]),
})
}
+
+ fn create_blinded_paths<
+ ES: EntropySource + ?Sized, T: secp256k1::Signing + secp256k1::Verification
+ >(
+ &self, _recipient: PublicKey, _peers: Vec<PublicKey>, _entropy_source: &ES,
+ _secp_ctx: &Secp256k1<T>
+ ) -> Result<Vec<BlindedPath>, ()> {
+ unreachable!()
+ }
}
struct TestOffersMessageHandler {}
use crate::blinded_path::message::{advance_path_by_one, ForwardTlvs, ReceiveTlvs};
use crate::blinded_path::utils;
use crate::events::{Event, EventHandler, EventsProvider};
-use crate::sign::{EntropySource, KeysManager, NodeSigner, Recipient};
+use crate::sign::{EntropySource, NodeSigner, Recipient};
#[cfg(not(c_bindings))]
use crate::ln::channelmanager::{SimpleArcChannelManager, SimpleRefChannelManager};
use crate::ln::features::{InitFeatures, NodeFeatures};
use crate::ln::msgs::{self, OnionMessage, OnionMessageHandler, SocketAddress};
use crate::ln::onion_utils;
-use crate::ln::peer_handler::IgnoringMessageHandler;
use crate::routing::gossip::{NetworkGraph, NodeId};
pub use super::packet::OnionMessageContents;
use super::packet::ParsedOnionMessageContents;
use core::fmt;
use core::ops::Deref;
use crate::io;
-use crate::sync::{Arc, Mutex};
+use crate::sync::Mutex;
use crate::prelude::*;
+#[cfg(not(c_bindings))]
+use {
+ crate::sign::KeysManager,
+ crate::ln::peer_handler::IgnoringMessageHandler,
+ crate::sync::Arc,
+};
+
pub(super) const MAX_TIMER_TICKS: usize = 2;
/// A sender, receiver and forwarder of [`OnionMessage`]s.
/// # extern crate bitcoin;
/// # use bitcoin::hashes::_export::_core::time::Duration;
/// # use bitcoin::hashes::hex::FromHex;
-/// # use bitcoin::secp256k1::{PublicKey, Secp256k1, SecretKey};
+/// # use bitcoin::secp256k1::{PublicKey, Secp256k1, SecretKey, self};
/// # use lightning::blinded_path::BlindedPath;
-/// # use lightning::sign::KeysManager;
+/// # use lightning::sign::{EntropySource, KeysManager};
/// # use lightning::ln::peer_handler::IgnoringMessageHandler;
/// # use lightning::onion_message::{OnionMessageContents, Destination, MessageRouter, OnionMessagePath, OnionMessenger};
/// # use lightning::util::logger::{Logger, Record};
/// # first_node_addresses: None,
/// # })
/// # }
+/// # fn create_blinded_paths<ES: EntropySource + ?Sized, T: secp256k1::Signing + secp256k1::Verification>(
+/// # &self, _recipient: PublicKey, _peers: Vec<PublicKey>, _entropy_source: &ES, _secp_ctx: &Secp256k1<T>
+/// # ) -> Result<Vec<BlindedPath>, ()> {
+/// # unreachable!()
+/// # }
/// # }
/// # let seed = [42u8; 32];
/// # let time = Duration::from_secs(123456);
///
/// These are obtained when released from [`OnionMessenger`]'s handlers after which they are
/// enqueued for sending.
-pub type PendingOnionMessage<T: OnionMessageContents> = (T, Destination, Option<BlindedPath>);
+pub type PendingOnionMessage<T> = (T, Destination, Option<BlindedPath>);
pub(crate) fn new_pending_onion_message<T: OnionMessageContents>(
contents: T, destination: Destination, reply_path: Option<BlindedPath>
fn find_path(
&self, sender: PublicKey, peers: Vec<PublicKey>, destination: Destination
) -> Result<OnionMessagePath, ()>;
+
+ /// Creates [`BlindedPath`]s to the `recipient` node. The nodes in `peers` are assumed to be
+ /// direct peers with the `recipient`.
+ fn create_blinded_paths<
+ ES: EntropySource + ?Sized, T: secp256k1::Signing + secp256k1::Verification
+ >(
+ &self, recipient: PublicKey, peers: Vec<PublicKey>, entropy_source: &ES,
+ secp_ctx: &Secp256k1<T>
+ ) -> Result<Vec<BlindedPath>, ()>;
}
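// Illustrative caller-side sketch (editor's addition, not part of the diff): a node asks its
// `MessageRouter` for blinded paths to itself, e.g. to advertise in an offer or reply path.
// `our_node_id`, `peers`, and the helper name are assumptions for illustration only;
// `Secp256k1::new()` satisfies the `Signing + Verification` bound required above.
fn blinded_paths_to_self<MR: MessageRouter, ES: EntropySource + ?Sized>(
	router: &MR, our_node_id: PublicKey, peers: Vec<PublicKey>, entropy_source: &ES,
) -> Result<Vec<BlindedPath>, ()> {
	let secp_ctx = Secp256k1::new();
	router.create_blinded_paths(our_node_id, peers, entropy_source, &secp_ctx)
}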
/// A [`MessageRouter`] that can only route to a directly connected [`Destination`].
}
}
}
+
+ fn create_blinded_paths<
+ ES: EntropySource + ?Sized, T: secp256k1::Signing + secp256k1::Verification
+ >(
+ &self, recipient: PublicKey, peers: Vec<PublicKey>, entropy_source: &ES,
+ secp_ctx: &Secp256k1<T>
+ ) -> Result<Vec<BlindedPath>, ()> {
+ // Limit the number of blinded paths that are computed.
+ const MAX_PATHS: usize = 3;
+
+ // Ensure peers have at least three channels so that it is more difficult to infer the
+ // recipient's node_id.
+ const MIN_PEER_CHANNELS: usize = 3;
+
+ let network_graph = self.network_graph.deref().read_only();
+ let paths = peers.iter()
+ // Limit to peers with announced channels
+ .filter(|pubkey|
+ network_graph
+ .node(&NodeId::from_pubkey(pubkey))
+ .map(|info| &info.channels[..])
+ .map(|channels| channels.len() >= MIN_PEER_CHANNELS)
+ .unwrap_or(false)
+ )
+ .map(|pubkey| vec![*pubkey, recipient])
+ .map(|node_pks| BlindedPath::new_for_message(&node_pks, entropy_source, secp_ctx))
+ .take(MAX_PATHS)
+ .collect::<Result<Vec<_>, _>>();
+
+ match paths {
+ Ok(paths) if !paths.is_empty() => Ok(paths),
+ _ => {
+ if network_graph.nodes().contains_key(&NodeId::from_pubkey(&recipient)) {
+ BlindedPath::one_hop_for_message(recipient, entropy_source, secp_ctx)
+ .map(|path| vec![path])
+ } else {
+ Err(())
+ }
+ },
+ }
+ }
}
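// Self-contained sketch (editor's addition, simplified types) of the selection strategy in
// `create_blinded_paths` above: keep only peers with at least MIN_PEER_CHANNELS announced
// channels, cap the result at MAX_PATHS, and fall back to a one-hop path (represented here by
// an empty introduction list) only when the recipient itself is announced in the graph.
fn select_intro_peers(
	peer_channel_counts: &[(u64, usize)], recipient_is_announced: bool,
) -> Result<Vec<Vec<u64>>, ()> {
	const MAX_PATHS: usize = 3;
	const MIN_PEER_CHANNELS: usize = 3;
	let paths: Vec<Vec<u64>> = peer_channel_counts.iter()
		.filter(|(_, channels)| *channels >= MIN_PEER_CHANNELS)
		.map(|(peer, _)| vec![*peer])
		.take(MAX_PATHS)
		.collect();
	if !paths.is_empty() { Ok(paths) }
	else if recipient_is_announced { Ok(vec![vec![]]) }
	else { Err(()) }
}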
/// A path for sending an [`OnionMessage`].
use crate::offers::invoice::Bolt12Invoice;
use crate::offers::parse::Bolt12ParseError;
use crate::onion_message::OnionMessageContents;
-use crate::onion_message::messenger::PendingOnionMessage;
use crate::util::logger::Logger;
use crate::util::ser::{Readable, ReadableArgs, Writeable, Writer};
+#[cfg(not(c_bindings))]
+use crate::onion_message::messenger::PendingOnionMessage;
use crate::prelude::*;
//! The router finds paths within a [`NetworkGraph`] for a payment.
-use bitcoin::secp256k1::PublicKey;
+use bitcoin::secp256k1::{PublicKey, Secp256k1, self};
use bitcoin::hashes::Hash;
use bitcoin::hashes::sha256::Hash as Sha256;
use crate::blinded_path::{BlindedHop, BlindedPath};
+use crate::blinded_path::payment::{ForwardNode, ForwardTlvs, PaymentConstraints, PaymentRelay, ReceiveTlvs};
use crate::ln::PaymentHash;
use crate::ln::channelmanager::{ChannelDetails, PaymentId};
-use crate::ln::features::{Bolt11InvoiceFeatures, Bolt12InvoiceFeatures, ChannelFeatures, NodeFeatures};
+use crate::ln::features::{BlindedHopFeatures, Bolt11InvoiceFeatures, Bolt12InvoiceFeatures, ChannelFeatures, NodeFeatures};
use crate::ln::msgs::{DecodeError, ErrorAction, LightningError, MAX_VALUE_MSAT};
use crate::offers::invoice::{BlindedPayInfo, Bolt12Invoice};
+use crate::onion_message::{DefaultMessageRouter, Destination, MessageRouter, OnionMessagePath};
use crate::routing::gossip::{DirectedChannelInfo, EffectiveCapacity, ReadOnlyNetworkGraph, NetworkGraph, NodeId, RoutingFees};
use crate::routing::scoring::{ChannelUsage, LockableScore, ScoreLookUp};
+use crate::sign::EntropySource;
use crate::util::ser::{Writeable, Readable, ReadableArgs, Writer};
use crate::util::logger::{Level, Logger};
use crate::util::chacha20::ChaCha20;
use core::ops::Deref;
/// A [`Router`] implemented using [`find_route`].
-pub struct DefaultRouter<G: Deref<Target = NetworkGraph<L>>, L: Deref, S: Deref, SP: Sized, Sc: ScoreLookUp<ScoreParams = SP>> where
+pub struct DefaultRouter<G: Deref<Target = NetworkGraph<L>> + Clone, L: Deref, S: Deref, SP: Sized, Sc: ScoreLookUp<ScoreParams = SP>> where
L::Target: Logger,
S::Target: for <'a> LockableScore<'a, ScoreLookUp = Sc>,
{
logger: L,
random_seed_bytes: Mutex<[u8; 32]>,
scorer: S,
- score_params: SP
+ score_params: SP,
+ message_router: DefaultMessageRouter<G, L>,
}
-impl<G: Deref<Target = NetworkGraph<L>>, L: Deref, S: Deref, SP: Sized, Sc: ScoreLookUp<ScoreParams = SP>> DefaultRouter<G, L, S, SP, Sc> where
+impl<G: Deref<Target = NetworkGraph<L>> + Clone, L: Deref, S: Deref, SP: Sized, Sc: ScoreLookUp<ScoreParams = SP>> DefaultRouter<G, L, S, SP, Sc> where
L::Target: Logger,
S::Target: for <'a> LockableScore<'a, ScoreLookUp = Sc>,
{
/// Creates a new router.
pub fn new(network_graph: G, logger: L, random_seed_bytes: [u8; 32], scorer: S, score_params: SP) -> Self {
let random_seed_bytes = Mutex::new(random_seed_bytes);
- Self { network_graph, logger, random_seed_bytes, scorer, score_params }
+ let message_router = DefaultMessageRouter::new(network_graph.clone());
+ Self { network_graph, logger, random_seed_bytes, scorer, score_params, message_router }
}
}
-impl< G: Deref<Target = NetworkGraph<L>>, L: Deref, S: Deref, SP: Sized, Sc: ScoreLookUp<ScoreParams = SP>> Router for DefaultRouter<G, L, S, SP, Sc> where
+impl<G: Deref<Target = NetworkGraph<L>> + Clone, L: Deref, S: Deref, SP: Sized, Sc: ScoreLookUp<ScoreParams = SP>> Router for DefaultRouter<G, L, S, SP, Sc> where
L::Target: Logger,
S::Target: for <'a> LockableScore<'a, ScoreLookUp = Sc>,
{
&random_seed_bytes
)
}
+
+ fn create_blinded_payment_paths<
+ ES: EntropySource + ?Sized, T: secp256k1::Signing + secp256k1::Verification
+ >(
+ &self, recipient: PublicKey, first_hops: Vec<ChannelDetails>, tlvs: ReceiveTlvs,
+ amount_msats: u64, entropy_source: &ES, secp_ctx: &Secp256k1<T>
+ ) -> Result<Vec<(BlindedPayInfo, BlindedPath)>, ()> {
+ // Limit the number of blinded paths that are computed.
+ const MAX_PAYMENT_PATHS: usize = 3;
+
+ // Ensure peers have at least three channels so that it is more difficult to infer the
+ // recipient's node_id.
+ const MIN_PEER_CHANNELS: usize = 3;
+
+ let network_graph = self.network_graph.deref().read_only();
+ let paths = first_hops.into_iter()
+ .filter(|details| details.counterparty.features.supports_route_blinding())
+ .filter(|details| amount_msats <= details.inbound_capacity_msat)
+ .filter(|details| amount_msats >= details.inbound_htlc_minimum_msat.unwrap_or(0))
+ .filter(|details| amount_msats <= details.inbound_htlc_maximum_msat.unwrap_or(u64::MAX))
+ .filter(|details| network_graph
+ .node(&NodeId::from_pubkey(&details.counterparty.node_id))
+ .map(|node_info| node_info.channels.len() >= MIN_PEER_CHANNELS)
+ .unwrap_or(false)
+ )
+ .filter_map(|details| {
+ let short_channel_id = match details.get_inbound_payment_scid() {
+ Some(short_channel_id) => short_channel_id,
+ None => return None,
+ };
+ let payment_relay: PaymentRelay = match details.counterparty.forwarding_info {
+ Some(forwarding_info) => forwarding_info.into(),
+ None => return None,
+ };
+
+ // Avoid exposing esoteric CLTV expiry deltas
+ let cltv_expiry_delta = match payment_relay.cltv_expiry_delta {
+ 0..=40 => 40u32,
+ 41..=80 => 80u32,
+ 81..=144 => 144u32,
+ 145..=216 => 216u32,
+ _ => return None,
+ };
+
+ let payment_constraints = PaymentConstraints {
+ max_cltv_expiry: tlvs.payment_constraints.max_cltv_expiry + cltv_expiry_delta,
+ htlc_minimum_msat: details.inbound_htlc_minimum_msat.unwrap_or(0),
+ };
+ Some(ForwardNode {
+ tlvs: ForwardTlvs {
+ short_channel_id,
+ payment_relay,
+ payment_constraints,
+ features: BlindedHopFeatures::empty(),
+ },
+ node_id: details.counterparty.node_id,
+ htlc_maximum_msat: details.inbound_htlc_maximum_msat.unwrap_or(u64::MAX),
+ })
+ })
+ .map(|forward_node| {
+ BlindedPath::new_for_payment(
+ &[forward_node], recipient, tlvs.clone(), u64::MAX, entropy_source, secp_ctx
+ )
+ })
+ .take(MAX_PAYMENT_PATHS)
+ .collect::<Result<Vec<_>, _>>();
+
+ match paths {
+ Ok(paths) if !paths.is_empty() => Ok(paths),
+ _ => {
+ if network_graph.nodes().contains_key(&NodeId::from_pubkey(&recipient)) {
+ BlindedPath::one_hop_for_payment(recipient, tlvs, entropy_source, secp_ctx)
+ .map(|path| vec![path])
+ } else {
+ Err(())
+ }
+ },
+ }
+ }
+}
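// Illustrative sketch (editor's addition, not LDK API) of the CLTV-delta rounding used in
// `create_blinded_payment_paths` above: forwarding deltas are rounded up to one of a few
// common values so the blinded path does not leak a channel's exact configuration, and
// unusually large deltas cause the channel to be skipped entirely.
fn bucketed_cltv_expiry_delta(cltv_expiry_delta: u16) -> Option<u32> {
	match cltv_expiry_delta {
		0..=40 => Some(40),
		41..=80 => Some(80),
		81..=144 => Some(144),
		145..=216 => Some(216),
		_ => None,
	}
}
// For example, a channel configured with a 72-block delta is advertised as 80 blocks:
// assert_eq!(bucketed_cltv_expiry_delta(72), Some(80));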
+
+impl< G: Deref<Target = NetworkGraph<L>> + Clone, L: Deref, S: Deref, SP: Sized, Sc: ScoreLookUp<ScoreParams = SP>> MessageRouter for DefaultRouter<G, L, S, SP, Sc> where
+ L::Target: Logger,
+ S::Target: for <'a> LockableScore<'a, ScoreLookUp = Sc>,
+{
+ fn find_path(
+ &self, sender: PublicKey, peers: Vec<PublicKey>, destination: Destination
+ ) -> Result<OnionMessagePath, ()> {
+ self.message_router.find_path(sender, peers, destination)
+ }
+
+ fn create_blinded_paths<
+ ES: EntropySource + ?Sized, T: secp256k1::Signing + secp256k1::Verification
+ >(
+ &self, recipient: PublicKey, peers: Vec<PublicKey>, entropy_source: &ES,
+ secp_ctx: &Secp256k1<T>
+ ) -> Result<Vec<BlindedPath>, ()> {
+ self.message_router.create_blinded_paths(recipient, peers, entropy_source, secp_ctx)
+ }
}
/// A trait defining behavior for routing a payment.
-pub trait Router {
+pub trait Router: MessageRouter {
/// Finds a [`Route`] for a payment between the given `payer` and a payee.
///
/// The `payee` and the payment's value are given in [`RouteParameters::payment_params`]
) -> Result<Route, LightningError> {
self.find_route(payer, route_params, first_hops, inflight_htlcs)
}
+
+ /// Creates [`BlindedPath`]s for payment to the `recipient` node. The channels in `first_hops`
+ /// are assumed to be with the `recipient`'s peers. The payment secret and any constraints are
+ /// given in `tlvs`.
+ fn create_blinded_payment_paths<
+ ES: EntropySource + ?Sized, T: secp256k1::Signing + secp256k1::Verification
+ >(
+ &self, recipient: PublicKey, first_hops: Vec<ChannelDetails>, tlvs: ReceiveTlvs,
+ amount_msats: u64, entropy_source: &ES, secp_ctx: &Secp256k1<T>
+ ) -> Result<Vec<(BlindedPayInfo, BlindedPath)>, ()>;
}
/// [`ScoreLookUp`] implementation that factors in in-flight HTLC liquidity.
#[cfg(any(ldk_bench, not(any(test, fuzzing))))]
const _GRAPH_NODE_FIXED_SIZE: usize = core::mem::size_of::<RouteGraphNode>() - 64;
+/// A [`CandidateRouteHop::FirstHop`] entry.
+#[derive(Clone, Debug)]
+pub struct FirstHopCandidate<'a> {
+ /// Channel details of the first hop
+ ///
+ /// [`ChannelDetails::get_outbound_payment_scid`] MUST be `Some` (indicating the channel
+ /// has been funded and is able to pay), and accessor methods may panic otherwise.
+ ///
+ /// [`find_route`] validates this prior to constructing a [`CandidateRouteHop`].
+ pub details: &'a ChannelDetails,
+ /// The node id of the payer, which is also the source side of this candidate route hop.
+ pub payer_node_id: &'a NodeId,
+}
+
+/// A [`CandidateRouteHop::PublicHop`] entry.
+#[derive(Clone, Debug)]
+pub struct PublicHopCandidate<'a> {
+ /// Information about the channel, including potentially its capacity and
+ /// direction-specific information.
+ pub info: DirectedChannelInfo<'a>,
+ /// The short channel ID of the channel, i.e. the identifier by which we refer to this
+ /// channel.
+ pub short_channel_id: u64,
+}
+
+/// A [`CandidateRouteHop::PrivateHop`] entry.
+#[derive(Clone, Debug)]
+pub struct PrivateHopCandidate<'a> {
+ /// Information about the private hop communicated via BOLT 11.
+ pub hint: &'a RouteHintHop,
+ /// Node id of the next hop in BOLT 11 route hint.
+ pub target_node_id: &'a NodeId
+}
+
+/// A [`CandidateRouteHop::Blinded`] entry.
+#[derive(Clone, Debug)]
+pub struct BlindedPathCandidate<'a> {
+ /// Information about the blinded path including the fee, HTLC amount limits, and
+ /// cryptographic material required to build an HTLC through the given path.
+ pub hint: &'a (BlindedPayInfo, BlindedPath),
+ /// Index of the hint in the original list of blinded hints.
+ ///
+ /// This is used to cheaply uniquely identify this blinded path, even though we don't have
+ /// a short channel ID for this hop.
+ hint_idx: usize,
+}
+
+/// A [`CandidateRouteHop::OneHopBlinded`] entry.
+#[derive(Clone, Debug)]
+pub struct OneHopBlindedPathCandidate<'a> {
+ /// Information about the blinded path including the fee, HTLC amount limits, and
+ /// cryptographic material required to build an HTLC terminating with the given path.
+ ///
+ /// Note that the [`BlindedPayInfo`] is ignored here.
+ pub hint: &'a (BlindedPayInfo, BlindedPath),
+ /// Index of the hint in the original list of blinded hints.
+ ///
+ /// This is used to cheaply uniquely identify this blinded path, even though we don't have
+ /// a short channel ID for this hop.
+ hint_idx: usize,
+}
+
/// A wrapper around the various hop representations.
///
/// Can be used to examine the properties of a hop,
#[derive(Clone, Debug)]
pub enum CandidateRouteHop<'a> {
/// A hop from the payer, where the outbound liquidity is known.
- FirstHop {
- /// Channel details of the first hop
- ///
- /// [`ChannelDetails::get_outbound_payment_scid`] MUST be `Some` (indicating the channel
- /// has been funded and is able to pay), and accessor methods may panic otherwise.
- ///
- /// [`find_route`] validates this prior to constructing a [`CandidateRouteHop`].
- details: &'a ChannelDetails,
- /// The node id of the payer, which is also the source side of this candidate route hop.
- payer_node_id: &'a NodeId,
- },
+ FirstHop(FirstHopCandidate<'a>),
/// A hop found in the [`ReadOnlyNetworkGraph`].
- PublicHop {
- /// Information about the channel, including potentially its capacity and
- /// direction-specific information.
- info: DirectedChannelInfo<'a>,
- /// The short channel ID of the channel, i.e. the identifier by which we refer to this
- /// channel.
- short_channel_id: u64,
- },
+ PublicHop(PublicHopCandidate<'a>),
/// A private hop communicated by the payee, generally via a BOLT 11 invoice.
///
/// Because BOLT 11 route hints can take multiple hops to get to the destination, this may not
/// terminate at the payee.
- PrivateHop {
- /// Information about the private hop communicated via BOLT 11.
- hint: &'a RouteHintHop,
- /// Node id of the next hop in BOLT 11 route hint.
- target_node_id: &'a NodeId
- },
+ PrivateHop(PrivateHopCandidate<'a>),
/// A blinded path which starts with an introduction point and ultimately terminates with the
/// payee.
///
///
/// Because blinded paths are "all or nothing", and we cannot use just one part of a blinded
/// path, the full path is treated as a single [`CandidateRouteHop`].
- Blinded {
- /// Information about the blinded path including the fee, HTLC amount limits, and
- /// cryptographic material required to build an HTLC through the given path.
- hint: &'a (BlindedPayInfo, BlindedPath),
- /// Index of the hint in the original list of blinded hints.
- ///
- /// This is used to cheaply uniquely identify this blinded path, even though we don't have
- /// a short channel ID for this hop.
- hint_idx: usize,
- },
+ Blinded(BlindedPathCandidate<'a>),
/// Similar to [`Self::Blinded`], but the path here only has one hop.
///
/// While we treat this similarly to [`CandidateRouteHop::Blinded`] in many respects (e.g.
///
	/// This primarily exists to track that we need to include a blinded path at the end of our
/// [`Route`], even though it doesn't actually add an additional hop in the payment.
- OneHopBlinded {
- /// Information about the blinded path including the fee, HTLC amount limits, and
- /// cryptographic material required to build an HTLC terminating with the given path.
- ///
- /// Note that the [`BlindedPayInfo`] is ignored here.
- hint: &'a (BlindedPayInfo, BlindedPath),
- /// Index of the hint in the original list of blinded hints.
- ///
- /// This is used to cheaply uniquely identify this blinded path, even though we don't have
- /// a short channel ID for this hop.
- hint_idx: usize,
- },
+ OneHopBlinded(OneHopBlindedPathCandidate<'a>),
}
impl<'a> CandidateRouteHop<'a> {
#[inline]
fn short_channel_id(&self) -> Option<u64> {
match self {
- CandidateRouteHop::FirstHop { details, .. } => details.get_outbound_payment_scid(),
- CandidateRouteHop::PublicHop { short_channel_id, .. } => Some(*short_channel_id),
- CandidateRouteHop::PrivateHop { hint, .. } => Some(hint.short_channel_id),
- CandidateRouteHop::Blinded { .. } => None,
- CandidateRouteHop::OneHopBlinded { .. } => None,
+ CandidateRouteHop::FirstHop(hop) => hop.details.get_outbound_payment_scid(),
+ CandidateRouteHop::PublicHop(hop) => Some(hop.short_channel_id),
+ CandidateRouteHop::PrivateHop(hop) => Some(hop.hint.short_channel_id),
+ CandidateRouteHop::Blinded(_) => None,
+ CandidateRouteHop::OneHopBlinded(_) => None,
}
}
#[inline]
pub fn globally_unique_short_channel_id(&self) -> Option<u64> {
match self {
- CandidateRouteHop::FirstHop { details, .. } => if details.is_public { details.short_channel_id } else { None },
- CandidateRouteHop::PublicHop { short_channel_id, .. } => Some(*short_channel_id),
- CandidateRouteHop::PrivateHop { .. } => None,
- CandidateRouteHop::Blinded { .. } => None,
- CandidateRouteHop::OneHopBlinded { .. } => None,
+ CandidateRouteHop::FirstHop(hop) => if hop.details.is_public { hop.details.short_channel_id } else { None },
+ CandidateRouteHop::PublicHop(hop) => Some(hop.short_channel_id),
+ CandidateRouteHop::PrivateHop(_) => None,
+ CandidateRouteHop::Blinded(_) => None,
+ CandidateRouteHop::OneHopBlinded(_) => None,
}
}
// NOTE: This may alloc memory so avoid calling it in a hot code path.
fn features(&self) -> ChannelFeatures {
match self {
- CandidateRouteHop::FirstHop { details, .. } => details.counterparty.features.to_context(),
- CandidateRouteHop::PublicHop { info, .. } => info.channel().features.clone(),
- CandidateRouteHop::PrivateHop { .. } => ChannelFeatures::empty(),
- CandidateRouteHop::Blinded { .. } => ChannelFeatures::empty(),
- CandidateRouteHop::OneHopBlinded { .. } => ChannelFeatures::empty(),
+ CandidateRouteHop::FirstHop(hop) => hop.details.counterparty.features.to_context(),
+ CandidateRouteHop::PublicHop(hop) => hop.info.channel().features.clone(),
+ CandidateRouteHop::PrivateHop(_) => ChannelFeatures::empty(),
+ CandidateRouteHop::Blinded(_) => ChannelFeatures::empty(),
+ CandidateRouteHop::OneHopBlinded(_) => ChannelFeatures::empty(),
}
}
#[inline]
pub fn cltv_expiry_delta(&self) -> u32 {
match self {
- CandidateRouteHop::FirstHop { .. } => 0,
- CandidateRouteHop::PublicHop { info, .. } => info.direction().cltv_expiry_delta as u32,
- CandidateRouteHop::PrivateHop { hint, .. } => hint.cltv_expiry_delta as u32,
- CandidateRouteHop::Blinded { hint, .. } => hint.0.cltv_expiry_delta as u32,
- CandidateRouteHop::OneHopBlinded { .. } => 0,
+ CandidateRouteHop::FirstHop(_) => 0,
+ CandidateRouteHop::PublicHop(hop) => hop.info.direction().cltv_expiry_delta as u32,
+ CandidateRouteHop::PrivateHop(hop) => hop.hint.cltv_expiry_delta as u32,
+ CandidateRouteHop::Blinded(hop) => hop.hint.0.cltv_expiry_delta as u32,
+ CandidateRouteHop::OneHopBlinded(_) => 0,
}
}
#[inline]
pub fn htlc_minimum_msat(&self) -> u64 {
match self {
- CandidateRouteHop::FirstHop { details, .. } => details.next_outbound_htlc_minimum_msat,
- CandidateRouteHop::PublicHop { info, .. } => info.direction().htlc_minimum_msat,
- CandidateRouteHop::PrivateHop { hint, .. } => hint.htlc_minimum_msat.unwrap_or(0),
- CandidateRouteHop::Blinded { hint, .. } => hint.0.htlc_minimum_msat,
+ CandidateRouteHop::FirstHop(hop) => hop.details.next_outbound_htlc_minimum_msat,
+ CandidateRouteHop::PublicHop(hop) => hop.info.direction().htlc_minimum_msat,
+ CandidateRouteHop::PrivateHop(hop) => hop.hint.htlc_minimum_msat.unwrap_or(0),
+ CandidateRouteHop::Blinded(hop) => hop.hint.0.htlc_minimum_msat,
CandidateRouteHop::OneHopBlinded { .. } => 0,
}
}
#[inline]
pub fn fees(&self) -> RoutingFees {
match self {
- CandidateRouteHop::FirstHop { .. } => RoutingFees {
+ CandidateRouteHop::FirstHop(_) => RoutingFees {
base_msat: 0, proportional_millionths: 0,
},
- CandidateRouteHop::PublicHop { info, .. } => info.direction().fees,
- CandidateRouteHop::PrivateHop { hint, .. } => hint.fees,
- CandidateRouteHop::Blinded { hint, .. } => {
+ CandidateRouteHop::PublicHop(hop) => hop.info.direction().fees,
+ CandidateRouteHop::PrivateHop(hop) => hop.hint.fees,
+ CandidateRouteHop::Blinded(hop) => {
RoutingFees {
- base_msat: hint.0.fee_base_msat,
- proportional_millionths: hint.0.fee_proportional_millionths
+ base_msat: hop.hint.0.fee_base_msat,
+ proportional_millionths: hop.hint.0.fee_proportional_millionths
}
},
- CandidateRouteHop::OneHopBlinded { .. } =>
+ CandidateRouteHop::OneHopBlinded(_) =>
RoutingFees { base_msat: 0, proportional_millionths: 0 },
}
}
/// cached!
fn effective_capacity(&self) -> EffectiveCapacity {
match self {
- CandidateRouteHop::FirstHop { details, .. } => EffectiveCapacity::ExactLiquidity {
- liquidity_msat: details.next_outbound_htlc_limit_msat,
+ CandidateRouteHop::FirstHop(hop) => EffectiveCapacity::ExactLiquidity {
+ liquidity_msat: hop.details.next_outbound_htlc_limit_msat,
},
- CandidateRouteHop::PublicHop { info, .. } => info.effective_capacity(),
- CandidateRouteHop::PrivateHop { hint: RouteHintHop { htlc_maximum_msat: Some(max), .. }, .. } =>
+ CandidateRouteHop::PublicHop(hop) => hop.info.effective_capacity(),
+ CandidateRouteHop::PrivateHop(PrivateHopCandidate { hint: RouteHintHop { htlc_maximum_msat: Some(max), .. }, .. }) =>
EffectiveCapacity::HintMaxHTLC { amount_msat: *max },
- CandidateRouteHop::PrivateHop { hint: RouteHintHop { htlc_maximum_msat: None, .. }, .. } =>
+ CandidateRouteHop::PrivateHop(PrivateHopCandidate { hint: RouteHintHop { htlc_maximum_msat: None, .. }, .. }) =>
EffectiveCapacity::Infinite,
- CandidateRouteHop::Blinded { hint, .. } =>
- EffectiveCapacity::HintMaxHTLC { amount_msat: hint.0.htlc_maximum_msat },
- CandidateRouteHop::OneHopBlinded { .. } => EffectiveCapacity::Infinite,
+ CandidateRouteHop::Blinded(hop) =>
+ EffectiveCapacity::HintMaxHTLC { amount_msat: hop.hint.0.htlc_maximum_msat },
+ CandidateRouteHop::OneHopBlinded(_) => EffectiveCapacity::Infinite,
}
}
#[inline]
fn id(&self) -> CandidateHopId {
match self {
- CandidateRouteHop::Blinded { hint_idx, .. } => CandidateHopId::Blinded(*hint_idx),
- CandidateRouteHop::OneHopBlinded { hint_idx, .. } => CandidateHopId::Blinded(*hint_idx),
+ CandidateRouteHop::Blinded(hop) => CandidateHopId::Blinded(hop.hint_idx),
+ CandidateRouteHop::OneHopBlinded(hop) => CandidateHopId::Blinded(hop.hint_idx),
_ => CandidateHopId::Clear((self.short_channel_id().unwrap(), self.source() < self.target().unwrap())),
}
}
fn blinded_path(&self) -> Option<&'a BlindedPath> {
match self {
- CandidateRouteHop::Blinded { hint, .. } | CandidateRouteHop::OneHopBlinded { hint, .. } => {
+ CandidateRouteHop::Blinded(BlindedPathCandidate { hint, .. }) | CandidateRouteHop::OneHopBlinded(OneHopBlindedPathCandidate { hint, .. }) => {
Some(&hint.1)
},
_ => None,
#[inline]
pub fn source(&self) -> NodeId {
match self {
- CandidateRouteHop::FirstHop { payer_node_id, .. } => **payer_node_id,
- CandidateRouteHop::PublicHop { info, .. } => *info.source(),
- CandidateRouteHop::PrivateHop { hint, .. } => hint.src_node_id.into(),
- CandidateRouteHop::Blinded { hint, .. } => hint.1.introduction_node_id.into(),
- CandidateRouteHop::OneHopBlinded { hint, .. } => hint.1.introduction_node_id.into(),
+ CandidateRouteHop::FirstHop(hop) => *hop.payer_node_id,
+ CandidateRouteHop::PublicHop(hop) => *hop.info.source(),
+ CandidateRouteHop::PrivateHop(hop) => hop.hint.src_node_id.into(),
+ CandidateRouteHop::Blinded(hop) => hop.hint.1.introduction_node_id.into(),
+ CandidateRouteHop::OneHopBlinded(hop) => hop.hint.1.introduction_node_id.into(),
}
}
/// Returns the target node id of this hop, if known.
#[inline]
pub fn target(&self) -> Option<NodeId> {
match self {
- CandidateRouteHop::FirstHop { details, .. } => Some(details.counterparty.node_id.into()),
- CandidateRouteHop::PublicHop { info, .. } => Some(*info.target()),
- CandidateRouteHop::PrivateHop { target_node_id, .. } => Some(**target_node_id),
- CandidateRouteHop::Blinded { .. } => None,
- CandidateRouteHop::OneHopBlinded { .. } => None,
+ CandidateRouteHop::FirstHop(hop) => Some(hop.details.counterparty.node_id.into()),
+ CandidateRouteHop::PublicHop(hop) => Some(*hop.info.target()),
+ CandidateRouteHop::PrivateHop(hop) => Some(*hop.target_node_id),
+ CandidateRouteHop::Blinded(_) => None,
+ CandidateRouteHop::OneHopBlinded(_) => None,
}
}
}
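// Illustrative sketch (editor's addition, not part of the diff): with the public accessors
// above, code that inspects a candidate hop, such as a custom scorer or a logging helper, can
// stay variant-agnostic rather than destructuring `CandidateRouteHop` directly.
fn describe_hop(hop: &CandidateRouteHop) -> String {
	format!(
		"hop from {:?} towards {:?} (scid {:?}): cltv_expiry_delta {}, htlc_minimum {} msat",
		hop.source(), hop.target(), hop.globally_unique_short_channel_id(),
		hop.cltv_expiry_delta(), hop.htlc_minimum_msat(),
	)
}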
impl<'a> fmt::Display for LoggedCandidateHop<'a> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self.0 {
- CandidateRouteHop::Blinded { hint, .. } | CandidateRouteHop::OneHopBlinded { hint, .. } => {
+ CandidateRouteHop::Blinded(BlindedPathCandidate { hint, .. }) | CandidateRouteHop::OneHopBlinded(OneHopBlindedPathCandidate { hint, .. }) => {
"blinded route hint with introduction node id ".fmt(f)?;
hint.1.introduction_node_id.fmt(f)?;
" and blinding point ".fmt(f)?;
hint.1.blinding_point.fmt(f)
},
- CandidateRouteHop::FirstHop { .. } => {
+ CandidateRouteHop::FirstHop(_) => {
"first hop with SCID ".fmt(f)?;
self.0.short_channel_id().unwrap().fmt(f)
},
- CandidateRouteHop::PrivateHop { .. } => {
+ CandidateRouteHop::PrivateHop(_) => {
"route hint with SCID ".fmt(f)?;
self.0.short_channel_id().unwrap().fmt(f)
},
|scid| payment_params.previously_failed_channels.contains(&scid));
let (should_log_candidate, first_hop_details) = match $candidate {
- CandidateRouteHop::FirstHop { details, .. } => (true, Some(details)),
- CandidateRouteHop::PrivateHop { .. } => (true, None),
- CandidateRouteHop::Blinded { .. } => (true, None),
- CandidateRouteHop::OneHopBlinded { .. } => (true, None),
+ CandidateRouteHop::FirstHop(hop) => (true, Some(hop.details)),
+ CandidateRouteHop::PrivateHop(_) => (true, None),
+ CandidateRouteHop::Blinded(_) => (true, None),
+ CandidateRouteHop::OneHopBlinded(_) => (true, None),
_ => (false, None),
};
if !skip_node {
if let Some(first_channels) = first_hop_targets.get(&$node_id) {
for details in first_channels {
- let candidate = CandidateRouteHop::FirstHop {
+ let candidate = CandidateRouteHop::FirstHop(FirstHopCandidate {
details, payer_node_id: &our_node_id,
- };
+ });
add_entry!(&candidate, fee_to_target_msat,
$next_hops_value_contribution,
next_hops_path_htlc_minimum_msat, next_hops_path_penalty_msat,
if let Some((directed_channel, source)) = chan.as_directed_to(&$node_id) {
if first_hops.is_none() || *source != our_node_id {
if directed_channel.direction().enabled {
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info: directed_channel,
short_channel_id: *chan_id,
- };
+ });
add_entry!(&candidate,
fee_to_target_msat,
$next_hops_value_contribution,
// place where it could be added.
payee_node_id_opt.map(|payee| first_hop_targets.get(&payee).map(|first_channels| {
for details in first_channels {
- let candidate = CandidateRouteHop::FirstHop {
+ let candidate = CandidateRouteHop::FirstHop(FirstHopCandidate {
details, payer_node_id: &our_node_id,
- };
+ });
let added = add_entry!(&candidate, 0, path_value_msat,
0, 0u64, 0, 0).is_some();
log_trace!(logger, "{} direct route to payee via {}",
network_nodes.get(&intro_node_id).is_some();
if !have_intro_node_in_graph || our_node_id == intro_node_id { continue }
let candidate = if hint.1.blinded_hops.len() == 1 {
- CandidateRouteHop::OneHopBlinded { hint, hint_idx }
- } else { CandidateRouteHop::Blinded { hint, hint_idx } };
+ CandidateRouteHop::OneHopBlinded(OneHopBlindedPathCandidate { hint, hint_idx })
+ } else { CandidateRouteHop::Blinded(BlindedPathCandidate { hint, hint_idx }) };
let mut path_contribution_msat = path_value_msat;
if let Some(hop_used_msat) = add_entry!(&candidate,
0, path_contribution_msat, 0, 0_u64, 0, 0)
sort_first_hop_channels(first_channels, &used_liquidities, recommended_value_msat,
our_node_pubkey);
for details in first_channels {
- let first_hop_candidate = CandidateRouteHop::FirstHop {
+ let first_hop_candidate = CandidateRouteHop::FirstHop(FirstHopCandidate {
details, payer_node_id: &our_node_id,
- };
+ });
let blinded_path_fee = match compute_fees(path_contribution_msat, candidate.fees()) {
Some(fee) => fee,
None => continue
let candidate = network_channels
.get(&hop.short_channel_id)
.and_then(|channel| channel.as_directed_to(&target))
- .map(|(info, _)| CandidateRouteHop::PublicHop {
+ .map(|(info, _)| CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: hop.short_channel_id,
- })
- .unwrap_or_else(|| CandidateRouteHop::PrivateHop { hint: hop, target_node_id: target });
+ }))
+ .unwrap_or_else(|| CandidateRouteHop::PrivateHop(PrivateHopCandidate { hint: hop, target_node_id: target }));
if let Some(hop_used_msat) = add_entry!(&candidate,
aggregate_next_hops_fee_msat, aggregate_path_contribution_msat,
sort_first_hop_channels(first_channels, &used_liquidities,
recommended_value_msat, our_node_pubkey);
for details in first_channels {
- let first_hop_candidate = CandidateRouteHop::FirstHop {
+ let first_hop_candidate = CandidateRouteHop::FirstHop(FirstHopCandidate {
details, payer_node_id: &our_node_id,
- };
+ });
add_entry!(&first_hop_candidate,
aggregate_next_hops_fee_msat, aggregate_path_contribution_msat,
aggregate_next_hops_path_htlc_minimum_msat, aggregate_next_hops_path_penalty_msat,
sort_first_hop_channels(first_channels, &used_liquidities,
recommended_value_msat, our_node_pubkey);
for details in first_channels {
- let first_hop_candidate = CandidateRouteHop::FirstHop {
+ let first_hop_candidate = CandidateRouteHop::FirstHop(FirstHopCandidate {
details, payer_node_id: &our_node_id,
- };
+ });
add_entry!(&first_hop_candidate,
aggregate_next_hops_fee_msat,
aggregate_path_contribution_msat,
let target = ordered_hops.last().unwrap().0.candidate.target().unwrap_or(maybe_dummy_payee_node_id);
if let Some(first_channels) = first_hop_targets.get(&target) {
for details in first_channels {
- if let CandidateRouteHop::FirstHop { details: last_hop_details, .. }
+ if let CandidateRouteHop::FirstHop(FirstHopCandidate { details: last_hop_details, .. })
= ordered_hops.last().unwrap().0.candidate
{
if details.get_outbound_payment_scid() == last_hop_details.get_outbound_payment_scid() {
.filter(|(h, _)| h.candidate.short_channel_id().is_some())
{
let target = hop.candidate.target().expect("target is defined when short_channel_id is defined");
- let maybe_announced_channel = if let CandidateRouteHop::PublicHop { .. } = hop.candidate {
+ let maybe_announced_channel = if let CandidateRouteHop::PublicHop(_) = hop.candidate {
// If we sourced the hop from the graph we're sure the target node is announced.
true
- } else if let CandidateRouteHop::FirstHop { details, .. } = hop.candidate {
+ } else if let CandidateRouteHop::FirstHop(first_hop) = &hop.candidate {
// If this is a first hop we also know if it's announced.
- details.is_public
+ first_hop.details.is_public
} else {
// If we sourced it any other way, we double-check the network graph to see if
// there are announced channels between the endpoints. If so, the hop might be
use crate::routing::utxo::UtxoResult;
use crate::routing::router::{get_route, build_route_from_hops_internal, add_random_cltv_offset, default_node_features,
BlindedTail, InFlightHtlcs, Path, PaymentParameters, Route, RouteHint, RouteHintHop, RouteHop, RoutingFees,
- DEFAULT_MAX_TOTAL_CLTV_EXPIRY_DELTA, MAX_PATH_LENGTH_ESTIMATE, RouteParameters, CandidateRouteHop};
+ DEFAULT_MAX_TOTAL_CLTV_EXPIRY_DELTA, MAX_PATH_LENGTH_ESTIMATE, RouteParameters, CandidateRouteHop, PublicHopCandidate};
use crate::routing::scoring::{ChannelUsage, FixedPenaltyScorer, ScoreLookUp, ProbabilisticScorer, ProbabilisticScoringFeeParameters, ProbabilisticScoringDecayParameters};
use crate::routing::test_utils::{add_channel, add_or_update_node, build_graph, build_line_graph, id_to_feature_flags, get_nodes, update_channel};
use crate::chain::transaction::OutPoint;
let channels = network_graph.channels();
let channel = channels.get(&5).unwrap();
let info = channel.as_directed_from(&NodeId::from_pubkey(&nodes[3])).unwrap();
- let candidate: CandidateRouteHop = CandidateRouteHop::PublicHop {
+ let candidate: CandidateRouteHop = CandidateRouteHop::PublicHop(PublicHopCandidate {
info: info.0,
short_channel_id: 5,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, &scorer_params), 456);
// Then check we can get a normal route
pub(crate) mod bench_utils {
use super::*;
use std::fs::File;
+ use std::time::Duration;
use bitcoin::hashes::Hash;
use bitcoin::secp256k1::{PublicKey, Secp256k1, SecretKey};
if let Ok(route) = route_res {
for path in route.paths {
if seed & 0x80 == 0 {
- scorer.payment_path_successful(&path);
+ scorer.payment_path_successful(&path, Duration::ZERO);
} else {
let short_channel_id = path.hops[path.hops.len() / 2].short_channel_id;
- scorer.payment_path_failed(&path, short_channel_id);
+ scorer.payment_path_failed(&path, short_channel_id, Duration::ZERO);
}
seed = seed.overflowing_mul(6364136223846793005).0.overflowing_add(1).0;
}
use crate::ln::msgs::DecodeError;
use crate::routing::gossip::{EffectiveCapacity, NetworkGraph, NodeId};
-use crate::routing::router::{Path, CandidateRouteHop};
+use crate::routing::router::{Path, CandidateRouteHop, PublicHopCandidate};
use crate::util::ser::{Readable, ReadableArgs, Writeable, Writer};
use crate::util::logger::Logger;
-use crate::util::time::Time;
use crate::prelude::*;
use core::{cmp, fmt};
-use core::cell::{RefCell, RefMut, Ref};
use core::convert::TryInto;
use core::ops::{Deref, DerefMut};
use core::time::Duration;
use crate::io::{self, Read};
-use crate::sync::{Mutex, MutexGuard, RwLock, RwLockReadGuard, RwLockWriteGuard};
+use crate::sync::{RwLock, RwLockReadGuard, RwLockWriteGuard};
+#[cfg(not(c_bindings))]
+use {
+ core::cell::{RefCell, RefMut, Ref},
+ crate::sync::{Mutex, MutexGuard},
+};
/// We define Score ever-so-slightly differently based on whether we are being built for C bindings
/// or not. For users, `LockableScore` must somehow be writeable to disk. For Rust users, this is
/// `ScoreUpdate` is used to update the scorer's internal state after a payment attempt.
pub trait ScoreUpdate {
/// Handles updating channel penalties after failing to route through a channel.
- fn payment_path_failed(&mut self, path: &Path, short_channel_id: u64);
+ fn payment_path_failed(&mut self, path: &Path, short_channel_id: u64, duration_since_epoch: Duration);
/// Handles updating channel penalties after successfully routing along a path.
- fn payment_path_successful(&mut self, path: &Path);
+ fn payment_path_successful(&mut self, path: &Path, duration_since_epoch: Duration);
/// Handles updating channel penalties after a probe over the given path failed.
- fn probe_failed(&mut self, path: &Path, short_channel_id: u64);
+ fn probe_failed(&mut self, path: &Path, short_channel_id: u64, duration_since_epoch: Duration);
/// Handles updating channel penalties after a probe over the given path succeeded.
- fn probe_successful(&mut self, path: &Path);
+ fn probe_successful(&mut self, path: &Path, duration_since_epoch: Duration);
+
+ /// Scorers may wish to reduce their certainty of channel liquidity information over time.
+ /// Thus, this method is provided to allow scorers to observe the passage of time - the holder
+ /// of this object should call this method regularly (generally via the
+ /// `lightning-background-processor` crate).
+ fn time_passed(&mut self, duration_since_epoch: Duration);
}
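// Hedged sketch (editor's addition, not LDK code) of how a scorer's holder is expected to
// drive the new `time_passed` method: fetch the current time as an offset from the unix epoch
// (done by `lightning-background-processor` on the caller's behalf in practice) and hand it
// to any `ScoreUpdate` implementation on a regular timer.
#[cfg(feature = "std")]
fn advance_scorer_clock<S: ScoreUpdate>(scorer: &mut S) {
	use std::time::{SystemTime, UNIX_EPOCH};
	let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap_or(Duration::ZERO);
	scorer.time_passed(now);
}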
/// A trait which can both lookup and update routing channel penalty scores.
#[cfg(not(c_bindings))]
impl<S: ScoreUpdate, T: DerefMut<Target=S>> ScoreUpdate for T {
- fn payment_path_failed(&mut self, path: &Path, short_channel_id: u64) {
- self.deref_mut().payment_path_failed(path, short_channel_id)
+ fn payment_path_failed(&mut self, path: &Path, short_channel_id: u64, duration_since_epoch: Duration) {
+ self.deref_mut().payment_path_failed(path, short_channel_id, duration_since_epoch)
+ }
+
+ fn payment_path_successful(&mut self, path: &Path, duration_since_epoch: Duration) {
+ self.deref_mut().payment_path_successful(path, duration_since_epoch)
}
- fn payment_path_successful(&mut self, path: &Path) {
- self.deref_mut().payment_path_successful(path)
+ fn probe_failed(&mut self, path: &Path, short_channel_id: u64, duration_since_epoch: Duration) {
+ self.deref_mut().probe_failed(path, short_channel_id, duration_since_epoch)
}
- fn probe_failed(&mut self, path: &Path, short_channel_id: u64) {
- self.deref_mut().probe_failed(path, short_channel_id)
+ fn probe_successful(&mut self, path: &Path, duration_since_epoch: Duration) {
+ self.deref_mut().probe_successful(path, duration_since_epoch)
}
- fn probe_successful(&mut self, path: &Path) {
- self.deref_mut().probe_successful(path)
+ fn time_passed(&mut self, duration_since_epoch: Duration) {
+ self.deref_mut().time_passed(duration_since_epoch)
}
}
} }
#[cfg(c_bindings)]
impl<'a, T: Score> ScoreUpdate for MultiThreadedScoreLockWrite<'a, T> {
- fn payment_path_failed(&mut self, path: &Path, short_channel_id: u64) {
- self.0.payment_path_failed(path, short_channel_id)
+ fn payment_path_failed(&mut self, path: &Path, short_channel_id: u64, duration_since_epoch: Duration) {
+ self.0.payment_path_failed(path, short_channel_id, duration_since_epoch)
}
- fn payment_path_successful(&mut self, path: &Path) {
- self.0.payment_path_successful(path)
+ fn payment_path_successful(&mut self, path: &Path, duration_since_epoch: Duration) {
+ self.0.payment_path_successful(path, duration_since_epoch)
}
- fn probe_failed(&mut self, path: &Path, short_channel_id: u64) {
- self.0.probe_failed(path, short_channel_id)
+ fn probe_failed(&mut self, path: &Path, short_channel_id: u64, duration_since_epoch: Duration) {
+ self.0.probe_failed(path, short_channel_id, duration_since_epoch)
}
- fn probe_successful(&mut self, path: &Path) {
- self.0.probe_successful(path)
+ fn probe_successful(&mut self, path: &Path, duration_since_epoch: Duration) {
+ self.0.probe_successful(path, duration_since_epoch)
+ }
+
+ fn time_passed(&mut self, duration_since_epoch: Duration) {
+ self.0.time_passed(duration_since_epoch)
}
}
}
impl ScoreUpdate for FixedPenaltyScorer {
-	fn payment_path_failed(&mut self, _path: &Path, _short_channel_id: u64) {}
-	fn payment_path_successful(&mut self, _path: &Path) {}
-	fn probe_failed(&mut self, _path: &Path, _short_channel_id: u64) {}
-	fn probe_successful(&mut self, _path: &Path) {}
+	fn payment_path_failed(&mut self, _path: &Path, _short_channel_id: u64, _duration_since_epoch: Duration) {}
+
+	fn payment_path_successful(&mut self, _path: &Path, _duration_since_epoch: Duration) {}
+
+	fn probe_failed(&mut self, _path: &Path, _short_channel_id: u64, _duration_since_epoch: Duration) {}
+
+	fn probe_successful(&mut self, _path: &Path, _duration_since_epoch: Duration) {}
+
+	fn time_passed(&mut self, _duration_since_epoch: Duration) {}
}
impl Writeable for FixedPenaltyScorer {
}
}
-#[cfg(not(feature = "no-std"))]
-type ConfiguredTime = crate::util::time::MonotonicTime;
-#[cfg(feature = "no-std")]
-use crate::util::time::Eternity;
-#[cfg(feature = "no-std")]
-type ConfiguredTime = Eternity;
-
/// [`ScoreLookUp`] implementation using channel success probability distributions.
///
/// Channels are tracked with upper and lower liquidity bounds - when an HTLC fails at a channel,
/// formula, but using the history of a channel rather than our latest estimates for the liquidity
/// bounds.
///
-/// # Note
-///
-/// Mixing the `no-std` feature between serialization and deserialization results in undefined
-/// behavior.
-///
/// [1]: https://arxiv.org/abs/2107.05322
/// [`liquidity_penalty_multiplier_msat`]: ProbabilisticScoringFeeParameters::liquidity_penalty_multiplier_msat
/// [`liquidity_penalty_amount_multiplier_msat`]: ProbabilisticScoringFeeParameters::liquidity_penalty_amount_multiplier_msat
/// [`liquidity_offset_half_life`]: ProbabilisticScoringDecayParameters::liquidity_offset_half_life
/// [`historical_liquidity_penalty_multiplier_msat`]: ProbabilisticScoringFeeParameters::historical_liquidity_penalty_multiplier_msat
/// [`historical_liquidity_penalty_amount_multiplier_msat`]: ProbabilisticScoringFeeParameters::historical_liquidity_penalty_amount_multiplier_msat
-pub type ProbabilisticScorer<G, L> = ProbabilisticScorerUsingTime::<G, L, ConfiguredTime>;
-
-/// Probabilistic [`ScoreLookUp`] implementation.
-///
-/// This is not exported to bindings users generally all users should use the [`ProbabilisticScorer`] type alias.
-pub struct ProbabilisticScorerUsingTime<G: Deref<Target = NetworkGraph<L>>, L: Deref, T: Time>
+pub struct ProbabilisticScorer<G: Deref<Target = NetworkGraph<L>>, L: Deref>
where L::Target: Logger {
decay_params: ProbabilisticScoringDecayParameters,
network_graph: G,
logger: L,
- // TODO: Remove entries of closed channels.
- channel_liquidities: HashMap<u64, ChannelLiquidity<T>>,
+ channel_liquidities: HashMap<u64, ChannelLiquidity>,
}
/// Parameters for configuring [`ProbabilisticScorer`].
///
/// Default value: 14 days
///
- /// [`historical_estimated_channel_liquidity_probabilities`]: ProbabilisticScorerUsingTime::historical_estimated_channel_liquidity_probabilities
+ /// [`historical_estimated_channel_liquidity_probabilities`]: ProbabilisticScorer::historical_estimated_channel_liquidity_probabilities
pub historical_no_updates_half_life: Duration,
/// Whenever this amount of time elapses since the last update to a channel's liquidity bounds,
/// Direction is defined in terms of [`NodeId`] partial ordering, where the source node is the
/// first node in the ordering of the channel's counterparties. Thus, swapping the two liquidity
/// offset fields gives the opposite direction.
-struct ChannelLiquidity<T: Time> {
+struct ChannelLiquidity {
/// Lower channel liquidity bound in terms of an offset from zero.
min_liquidity_offset_msat: u64,
/// Upper channel liquidity bound in terms of an offset from the effective capacity.
max_liquidity_offset_msat: u64,
- /// Time when the liquidity bounds were last modified.
- last_updated: T,
-
min_liquidity_offset_history: HistoricalBucketRangeTracker,
max_liquidity_offset_history: HistoricalBucketRangeTracker,
+
+ /// Time when either liquidity bound was last modified as an offset since the unix epoch.
+ last_updated: Duration,
+
+ /// Time when the historical liquidity bounds were last modified as an offset against the unix
+ /// epoch.
+ offset_history_last_updated: Duration,
}
-/// A snapshot of [`ChannelLiquidity`] in one direction assuming a certain channel capacity and
-/// decayed with a given half life.
-struct DirectedChannelLiquidity<L: Deref<Target = u64>, BRT: Deref<Target = HistoricalBucketRangeTracker>, T: Time, U: Deref<Target = T>> {
+/// A snapshot of [`ChannelLiquidity`] in one direction assuming a certain channel capacity.
+struct DirectedChannelLiquidity<L: Deref<Target = u64>, BRT: Deref<Target = HistoricalBucketRangeTracker>, T: Deref<Target = Duration>> {
min_liquidity_offset_msat: L,
max_liquidity_offset_msat: L,
liquidity_history: HistoricalMinMaxBuckets<BRT>,
capacity_msat: u64,
- last_updated: U,
- now: T,
- decay_params: ProbabilisticScoringDecayParameters,
+ last_updated: T,
+ offset_history_last_updated: T,
}
-impl<G: Deref<Target = NetworkGraph<L>>, L: Deref, T: Time> ProbabilisticScorerUsingTime<G, L, T> where L::Target: Logger {
+impl<G: Deref<Target = NetworkGraph<L>>, L: Deref> ProbabilisticScorer<G, L> where L::Target: Logger {
/// Creates a new scorer using the given scoring parameters for sending payments from a node
/// through a network graph.
pub fn new(decay_params: ProbabilisticScoringDecayParameters, network_graph: G, logger: L) -> Self {
}
#[cfg(test)]
- fn with_channel(mut self, short_channel_id: u64, liquidity: ChannelLiquidity<T>) -> Self {
+ fn with_channel(mut self, short_channel_id: u64, liquidity: ChannelLiquidity) -> Self {
assert!(self.channel_liquidities.insert(short_channel_id, liquidity).is_none());
self
}
/// Note that this writes roughly one line per channel for which we have a liquidity estimate,
/// which may be a substantial amount of log output.
pub fn debug_log_liquidity_stats(&self) {
- let now = T::now();
-
let graph = self.network_graph.read_only();
for (scid, liq) in self.channel_liquidities.iter() {
if let Some(chan_debug) = graph.channels().get(scid) {
let log_direction = |source, target| {
if let Some((directed_info, _)) = chan_debug.as_directed_to(target) {
let amt = directed_info.effective_capacity().as_msat();
- let dir_liq = liq.as_directed(source, target, amt, self.decay_params);
+ let dir_liq = liq.as_directed(source, target, amt);
- let (min_buckets, max_buckets) = dir_liq.liquidity_history
- .get_decayed_buckets(now, *dir_liq.last_updated,
- self.decay_params.historical_no_updates_half_life)
- .unwrap_or(([0; 32], [0; 32]));
+ let min_buckets = &dir_liq.liquidity_history.min_liquidity_offset_history.buckets;
+ let max_buckets = &dir_liq.liquidity_history.max_liquidity_offset_history.buckets;
log_debug!(self.logger, core::concat!(
"Liquidity from {} to {} via {} is in the range ({}, {}).\n",
if let Some(liq) = self.channel_liquidities.get(&scid) {
if let Some((directed_info, source)) = chan.as_directed_to(target) {
let amt = directed_info.effective_capacity().as_msat();
- let dir_liq = liq.as_directed(source, target, amt, self.decay_params);
+ let dir_liq = liq.as_directed(source, target, amt);
return Some((dir_liq.min_liquidity_msat(), dir_liq.max_liquidity_msat()));
}
}
/// in the top and bottom bucket, and roughly with similar (recent) frequency.
///
/// Because the datapoints are decayed slowly over time, values will eventually return to
- /// `Some(([1; 32], [1; 32]))` and then to `None` once no datapoints remain.
+ /// `Some(([0; 32], [0; 32]))` or `None` if no data remains for a channel.
///
/// In order to fetch a single success probability from the buckets provided here, as used in
/// the scoring model, see [`Self::historical_estimated_payment_success_probability`].
if let Some(liq) = self.channel_liquidities.get(&scid) {
if let Some((directed_info, source)) = chan.as_directed_to(target) {
let amt = directed_info.effective_capacity().as_msat();
- let dir_liq = liq.as_directed(source, target, amt, self.decay_params);
+ let dir_liq = liq.as_directed(source, target, amt);
- let (min_buckets, mut max_buckets) =
- dir_liq.liquidity_history.get_decayed_buckets(
- dir_liq.now, *dir_liq.last_updated,
- self.decay_params.historical_no_updates_half_life
- )?;
+ let min_buckets = dir_liq.liquidity_history.min_liquidity_offset_history.buckets;
+ let mut max_buckets = dir_liq.liquidity_history.max_liquidity_offset_history.buckets;
// Note that the liquidity buckets are an offset from the edge, so we inverse
// the max order to get the probabilities from zero.
if let Some(liq) = self.channel_liquidities.get(&scid) {
if let Some((directed_info, source)) = chan.as_directed_to(target) {
let capacity_msat = directed_info.effective_capacity().as_msat();
- let dir_liq = liq.as_directed(source, target, capacity_msat, self.decay_params);
+ let dir_liq = liq.as_directed(source, target, capacity_msat);
return dir_liq.liquidity_history.calculate_success_probability_times_billion(
- dir_liq.now, *dir_liq.last_updated,
- self.decay_params.historical_no_updates_half_life, ¶ms, amount_msat,
- capacity_msat
+ ¶ms, amount_msat, capacity_msat
).map(|p| p as f64 / (1024 * 1024 * 1024) as f64);
}
}
}
}
-impl<T: Time> ChannelLiquidity<T> {
- #[inline]
- fn new() -> Self {
+impl ChannelLiquidity {
+ fn new(last_updated: Duration) -> Self {
Self {
min_liquidity_offset_msat: 0,
max_liquidity_offset_msat: 0,
min_liquidity_offset_history: HistoricalBucketRangeTracker::new(),
max_liquidity_offset_history: HistoricalBucketRangeTracker::new(),
- last_updated: T::now(),
+ last_updated,
+ offset_history_last_updated: last_updated,
}
}
/// Returns a view of the channel liquidity directed from `source` to `target` assuming
/// `capacity_msat`.
fn as_directed(
- &self, source: &NodeId, target: &NodeId, capacity_msat: u64, decay_params: ProbabilisticScoringDecayParameters
- ) -> DirectedChannelLiquidity<&u64, &HistoricalBucketRangeTracker, T, &T> {
+ &self, source: &NodeId, target: &NodeId, capacity_msat: u64,
+ ) -> DirectedChannelLiquidity<&u64, &HistoricalBucketRangeTracker, &Duration> {
let (min_liquidity_offset_msat, max_liquidity_offset_msat, min_liquidity_offset_history, max_liquidity_offset_history) =
if source < target {
(&self.min_liquidity_offset_msat, &self.max_liquidity_offset_msat,
},
capacity_msat,
last_updated: &self.last_updated,
- now: T::now(),
- decay_params: decay_params,
+ offset_history_last_updated: &self.offset_history_last_updated,
}
}
/// Returns a mutable view of the channel liquidity directed from `source` to `target` assuming
/// `capacity_msat`.
fn as_directed_mut(
- &mut self, source: &NodeId, target: &NodeId, capacity_msat: u64, decay_params: ProbabilisticScoringDecayParameters
- ) -> DirectedChannelLiquidity<&mut u64, &mut HistoricalBucketRangeTracker, T, &mut T> {
+ &mut self, source: &NodeId, target: &NodeId, capacity_msat: u64,
+ ) -> DirectedChannelLiquidity<&mut u64, &mut HistoricalBucketRangeTracker, &mut Duration> {
let (min_liquidity_offset_msat, max_liquidity_offset_msat, min_liquidity_offset_history, max_liquidity_offset_history) =
if source < target {
(&mut self.min_liquidity_offset_msat, &mut self.max_liquidity_offset_msat,
},
capacity_msat,
last_updated: &mut self.last_updated,
- now: T::now(),
- decay_params: decay_params,
+ offset_history_last_updated: &mut self.offset_history_last_updated,
+ }
+ }
+
+ fn decayed_offset(
+ &self, offset: u64, duration_since_epoch: Duration,
+ decay_params: ProbabilisticScoringDecayParameters,
+ ) -> u64 {
+ let half_life = decay_params.liquidity_offset_half_life.as_secs_f64();
+ if half_life != 0.0 {
+ let elapsed_time = duration_since_epoch.saturating_sub(self.last_updated).as_secs_f64();
+ ((offset as f64) * powf64(0.5, elapsed_time / half_life)) as u64
+ } else {
+ 0
}
}
}
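// Worked example (editor's addition, illustrative only) of the half-life decay applied in
// `decayed_offset` above: after two half-lives the remembered liquidity offset falls to
// roughly a quarter of its original value.
fn decayed(offset_msat: u64, elapsed_secs: f64, half_life_secs: f64) -> u64 {
	if half_life_secs == 0.0 { return 0; }
	((offset_msat as f64) * f64::powf(0.5, elapsed_secs / half_life_secs)) as u64
}
// decayed(1_000_000, 12.0 * 3600.0, 6.0 * 3600.0) ~= 250_000 msat (two 6-hour half-lives).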
(numerator, denominator)
}
-impl<L: Deref<Target = u64>, BRT: Deref<Target = HistoricalBucketRangeTracker>, T: Time, U: Deref<Target = T>> DirectedChannelLiquidity< L, BRT, T, U> {
+impl<L: Deref<Target = u64>, BRT: Deref<Target = HistoricalBucketRangeTracker>, T: Deref<Target = Duration>>
+DirectedChannelLiquidity< L, BRT, T> {
/// Returns a liquidity penalty for routing the given HTLC `amount_msat` through the channel in
/// this direction.
fn penalty_msat(&self, amount_msat: u64, score_params: &ProbabilisticScoringFeeParameters) -> u64 {
if score_params.historical_liquidity_penalty_multiplier_msat != 0 ||
score_params.historical_liquidity_penalty_amount_multiplier_msat != 0 {
if let Some(cumulative_success_prob_times_billion) = self.liquidity_history
- .calculate_success_probability_times_billion(self.now, *self.last_updated,
- self.decay_params.historical_no_updates_half_life, score_params, amount_msat,
- self.capacity_msat)
+ .calculate_success_probability_times_billion(
+ score_params, amount_msat, self.capacity_msat)
{
let historical_negative_log10_times_2048 = approx::negative_log10_times_2048(cumulative_success_prob_times_billion + 1, 1024 * 1024 * 1024);
res = res.saturating_add(Self::combined_penalty_msat(amount_msat,
/// Returns the lower bound of the channel liquidity balance in this direction.
#[inline(always)]
fn min_liquidity_msat(&self) -> u64 {
- self.decayed_offset_msat(*self.min_liquidity_offset_msat)
+ *self.min_liquidity_offset_msat
}
/// Returns the upper bound of the channel liquidity balance in this direction.
#[inline(always)]
fn max_liquidity_msat(&self) -> u64 {
self.capacity_msat
- .saturating_sub(self.decayed_offset_msat(*self.max_liquidity_offset_msat))
- }
-
- fn decayed_offset_msat(&self, offset_msat: u64) -> u64 {
- let half_life = self.decay_params.liquidity_offset_half_life.as_secs();
- if half_life != 0 {
- // Decay the offset by the appropriate number of half lives. If half of the next half
- // life has passed, approximate an additional three-quarter life to help smooth out the
- // decay.
- let elapsed_time = self.now.duration_since(*self.last_updated).as_secs();
- let half_decays = elapsed_time / (half_life / 2);
- let decays = half_decays / 2;
- let decayed_offset_msat = offset_msat.checked_shr(decays as u32).unwrap_or(0);
- if half_decays % 2 == 0 {
- decayed_offset_msat
- } else {
- // 11_585 / 16_384 ~= core::f64::consts::FRAC_1_SQRT_2
- // 16_384 == 2^14
- (decayed_offset_msat as u128 * 11_585 / 16_384) as u64
- }
- } else {
- 0
- }
+ .saturating_sub(*self.max_liquidity_offset_msat)
}
}
-impl<L: DerefMut<Target = u64>, BRT: DerefMut<Target = HistoricalBucketRangeTracker>, T: Time, U: DerefMut<Target = T>> DirectedChannelLiquidity<L, BRT, T, U> {
+impl<L: DerefMut<Target = u64>, BRT: DerefMut<Target = HistoricalBucketRangeTracker>, T: DerefMut<Target = Duration>>
+DirectedChannelLiquidity<L, BRT, T> {
/// Adjusts the channel liquidity balance bounds when failing to route `amount_msat`.
- fn failed_at_channel<Log: Deref>(&mut self, amount_msat: u64, chan_descr: fmt::Arguments, logger: &Log) where Log::Target: Logger {
+ fn failed_at_channel<Log: Deref>(
+ &mut self, amount_msat: u64, duration_since_epoch: Duration, chan_descr: fmt::Arguments, logger: &Log
+ ) where Log::Target: Logger {
let existing_max_msat = self.max_liquidity_msat();
if amount_msat < existing_max_msat {
log_debug!(logger, "Setting max liquidity of {} from {} to {}", chan_descr, existing_max_msat, amount_msat);
- self.set_max_liquidity_msat(amount_msat);
+ self.set_max_liquidity_msat(amount_msat, duration_since_epoch);
} else {
log_trace!(logger, "Max liquidity of {} is {} (already less than or equal to {})",
chan_descr, existing_max_msat, amount_msat);
}
- self.update_history_buckets(0);
+ self.update_history_buckets(0, duration_since_epoch);
}
/// Adjusts the channel liquidity balance bounds when failing to route `amount_msat` downstream.
- fn failed_downstream<Log: Deref>(&mut self, amount_msat: u64, chan_descr: fmt::Arguments, logger: &Log) where Log::Target: Logger {
+ fn failed_downstream<Log: Deref>(
+ &mut self, amount_msat: u64, duration_since_epoch: Duration, chan_descr: fmt::Arguments, logger: &Log
+ ) where Log::Target: Logger {
let existing_min_msat = self.min_liquidity_msat();
if amount_msat > existing_min_msat {
log_debug!(logger, "Setting min liquidity of {} from {} to {}", existing_min_msat, chan_descr, amount_msat);
- self.set_min_liquidity_msat(amount_msat);
+ self.set_min_liquidity_msat(amount_msat, duration_since_epoch);
} else {
log_trace!(logger, "Min liquidity of {} is {} (already greater than or equal to {})",
chan_descr, existing_min_msat, amount_msat);
}
- self.update_history_buckets(0);
+ self.update_history_buckets(0, duration_since_epoch);
}
/// Adjusts the channel liquidity balance bounds when successfully routing `amount_msat`.
- fn successful<Log: Deref>(&mut self, amount_msat: u64, chan_descr: fmt::Arguments, logger: &Log) where Log::Target: Logger {
+ fn successful<Log: Deref>(&mut self,
+ amount_msat: u64, duration_since_epoch: Duration, chan_descr: fmt::Arguments, logger: &Log
+ ) where Log::Target: Logger {
let max_liquidity_msat = self.max_liquidity_msat().checked_sub(amount_msat).unwrap_or(0);
log_debug!(logger, "Subtracting {} from max liquidity of {} (setting it to {})", amount_msat, chan_descr, max_liquidity_msat);
- self.set_max_liquidity_msat(max_liquidity_msat);
- self.update_history_buckets(amount_msat);
+ self.set_max_liquidity_msat(max_liquidity_msat, duration_since_epoch);
+ self.update_history_buckets(amount_msat, duration_since_epoch);
}
/// Updates the history buckets for this channel. Because the history buckets track what we now
/// know about the channel's state *prior to our payment* (i.e. what we assume is "steady
/// state"), we allow the caller to set an offset applied to our liquidity bounds which
/// represents the amount of the successful payment we just made.
- fn update_history_buckets(&mut self, bucket_offset_msat: u64) {
- let half_lives = self.now.duration_since(*self.last_updated).as_secs()
- .checked_div(self.decay_params.historical_no_updates_half_life.as_secs())
- .map(|v| v.try_into().unwrap_or(u32::max_value())).unwrap_or(u32::max_value());
- self.liquidity_history.min_liquidity_offset_history.time_decay_data(half_lives);
- self.liquidity_history.max_liquidity_offset_history.time_decay_data(half_lives);
-
- let min_liquidity_offset_msat = self.decayed_offset_msat(*self.min_liquidity_offset_msat);
+ fn update_history_buckets(&mut self, bucket_offset_msat: u64, duration_since_epoch: Duration) {
self.liquidity_history.min_liquidity_offset_history.track_datapoint(
- min_liquidity_offset_msat + bucket_offset_msat, self.capacity_msat
+ *self.min_liquidity_offset_msat + bucket_offset_msat, self.capacity_msat
);
- let max_liquidity_offset_msat = self.decayed_offset_msat(*self.max_liquidity_offset_msat);
self.liquidity_history.max_liquidity_offset_history.track_datapoint(
- max_liquidity_offset_msat.saturating_sub(bucket_offset_msat), self.capacity_msat
+ self.max_liquidity_offset_msat.saturating_sub(bucket_offset_msat), self.capacity_msat
);
+ *self.offset_history_last_updated = duration_since_epoch;
}
/// Adjusts the lower bound of the channel liquidity balance in this direction.
- fn set_min_liquidity_msat(&mut self, amount_msat: u64) {
+ fn set_min_liquidity_msat(&mut self, amount_msat: u64, duration_since_epoch: Duration) {
*self.min_liquidity_offset_msat = amount_msat;
- *self.max_liquidity_offset_msat = if amount_msat > self.max_liquidity_msat() {
- 0
- } else {
- self.decayed_offset_msat(*self.max_liquidity_offset_msat)
- };
- *self.last_updated = self.now;
+ if amount_msat > self.max_liquidity_msat() {
+ *self.max_liquidity_offset_msat = 0;
+ }
+ *self.last_updated = duration_since_epoch;
}
/// Adjusts the upper bound of the channel liquidity balance in this direction.
- fn set_max_liquidity_msat(&mut self, amount_msat: u64) {
+ fn set_max_liquidity_msat(&mut self, amount_msat: u64, duration_since_epoch: Duration) {
*self.max_liquidity_offset_msat = self.capacity_msat.checked_sub(amount_msat).unwrap_or(0);
- *self.min_liquidity_offset_msat = if amount_msat < self.min_liquidity_msat() {
- 0
- } else {
- self.decayed_offset_msat(*self.min_liquidity_offset_msat)
- };
- *self.last_updated = self.now;
+ if amount_msat < *self.min_liquidity_offset_msat {
+ *self.min_liquidity_offset_msat = 0;
+ }
+ *self.last_updated = duration_since_epoch;
}
}
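// Hypothetical worked example for the `update_history_buckets` doc comment above; all numbers
// are assumptions chosen for illustration, not values taken from this release.
#[test]
fn history_buckets_record_pre_payment_state_sketch() {
	let (capacity_msat, amount_msat) = (1_000u64, 200u64);
	let (min_offset, mut max_offset) = (100u64, 300u64); // live bounds: [100, 700] msat
	// `successful()` first tightens the live upper bound to 700 - 200 = 500 msat, which grows
	// the max offset by the amount just sent.
	max_offset += amount_msat;
	assert_eq!(capacity_msat - max_offset, 500);
	// `update_history_buckets(amount_msat)` then shifts both datapoints back by the amount, so
	// the history records the pre-payment bounds [300, 700] msat rather than the post-payment
	// [100, 500].
	assert_eq!(min_offset + amount_msat, 300); // min-liquidity datapoint
	assert_eq!(max_offset - amount_msat, 300); // max offset of 300 == a 700 msat upper bound
}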
-impl<G: Deref<Target = NetworkGraph<L>>, L: Deref, T: Time> ScoreLookUp for ProbabilisticScorerUsingTime<G, L, T> where L::Target: Logger {
+impl<G: Deref<Target = NetworkGraph<L>>, L: Deref> ScoreLookUp for ProbabilisticScorer<G, L> where L::Target: Logger {
type ScoreParams = ProbabilisticScoringFeeParameters;
fn channel_penalty_msat(
&self, candidate: &CandidateRouteHop, usage: ChannelUsage, score_params: &ProbabilisticScoringFeeParameters
) -> u64 {
let (scid, target) = match candidate {
- CandidateRouteHop::PublicHop { info, short_channel_id } => {
+ CandidateRouteHop::PublicHop(PublicHopCandidate { info, short_channel_id }) => {
(short_channel_id, info.target())
},
_ => return 0,
let capacity_msat = usage.effective_capacity.as_msat();
self.channel_liquidities
.get(&scid)
- .unwrap_or(&ChannelLiquidity::new())
- .as_directed(&source, &target, capacity_msat, self.decay_params)
+ .unwrap_or(&ChannelLiquidity::new(Duration::ZERO))
+ .as_directed(&source, &target, capacity_msat)
.penalty_msat(amount_msat, score_params)
.saturating_add(anti_probing_penalty_msat)
.saturating_add(base_penalty_msat)
}
}
-impl<G: Deref<Target = NetworkGraph<L>>, L: Deref, T: Time> ScoreUpdate for ProbabilisticScorerUsingTime<G, L, T> where L::Target: Logger {
- fn payment_path_failed(&mut self, path: &Path, short_channel_id: u64) {
+impl<G: Deref<Target = NetworkGraph<L>>, L: Deref> ScoreUpdate for ProbabilisticScorer<G, L> where L::Target: Logger {
+ fn payment_path_failed(&mut self, path: &Path, short_channel_id: u64, duration_since_epoch: Duration) {
let amount_msat = path.final_value_msat();
log_trace!(self.logger, "Scoring path through to SCID {} as having failed at {} msat", short_channel_id, amount_msat);
let network_graph = self.network_graph.read_only();
if at_failed_channel {
self.channel_liquidities
.entry(hop.short_channel_id)
- .or_insert_with(ChannelLiquidity::new)
- .as_directed_mut(source, &target, capacity_msat, self.decay_params)
- .failed_at_channel(amount_msat, format_args!("SCID {}, towards {:?}", hop.short_channel_id, target), &self.logger);
+ .or_insert_with(|| ChannelLiquidity::new(duration_since_epoch))
+ .as_directed_mut(source, &target, capacity_msat)
+ .failed_at_channel(amount_msat, duration_since_epoch,
+ format_args!("SCID {}, towards {:?}", hop.short_channel_id, target), &self.logger);
} else {
self.channel_liquidities
.entry(hop.short_channel_id)
- .or_insert_with(ChannelLiquidity::new)
- .as_directed_mut(source, &target, capacity_msat, self.decay_params)
- .failed_downstream(amount_msat, format_args!("SCID {}, towards {:?}", hop.short_channel_id, target), &self.logger);
+ .or_insert_with(|| ChannelLiquidity::new(duration_since_epoch))
+ .as_directed_mut(source, &target, capacity_msat)
+ .failed_downstream(amount_msat, duration_since_epoch,
+ format_args!("SCID {}, towards {:?}", hop.short_channel_id, target), &self.logger);
}
} else {
log_debug!(self.logger, "Not able to penalize channel with SCID {} as we do not have graph info for it (likely a route-hint last-hop).",
}
}
- fn payment_path_successful(&mut self, path: &Path) {
+ fn payment_path_successful(&mut self, path: &Path, duration_since_epoch: Duration) {
let amount_msat = path.final_value_msat();
log_trace!(self.logger, "Scoring path through SCID {} as having succeeded at {} msat.",
path.hops.split_last().map(|(hop, _)| hop.short_channel_id).unwrap_or(0), amount_msat);
let capacity_msat = channel.effective_capacity().as_msat();
self.channel_liquidities
.entry(hop.short_channel_id)
- .or_insert_with(ChannelLiquidity::new)
- .as_directed_mut(source, &target, capacity_msat, self.decay_params)
- .successful(amount_msat, format_args!("SCID {}, towards {:?}", hop.short_channel_id, target), &self.logger);
+ .or_insert_with(|| ChannelLiquidity::new(duration_since_epoch))
+ .as_directed_mut(source, &target, capacity_msat)
+ .successful(amount_msat, duration_since_epoch,
+ format_args!("SCID {}, towards {:?}", hop.short_channel_id, target), &self.logger);
} else {
log_debug!(self.logger, "Not able to learn for channel with SCID {} as we do not have graph info for it (likely a route-hint last-hop).",
hop.short_channel_id);
}
}
- fn probe_failed(&mut self, path: &Path, short_channel_id: u64) {
- self.payment_path_failed(path, short_channel_id)
+ fn probe_failed(&mut self, path: &Path, short_channel_id: u64, duration_since_epoch: Duration) {
+ self.payment_path_failed(path, short_channel_id, duration_since_epoch)
}
- fn probe_successful(&mut self, path: &Path) {
- self.payment_path_failed(path, u64::max_value())
+ fn probe_successful(&mut self, path: &Path, duration_since_epoch: Duration) {
+ self.payment_path_failed(path, u64::max_value(), duration_since_epoch)
+ }
+
+ fn time_passed(&mut self, duration_since_epoch: Duration) {
+ let decay_params = self.decay_params;
+ self.channel_liquidities.retain(|_scid, liquidity| {
+ liquidity.min_liquidity_offset_msat =
+ liquidity.decayed_offset(liquidity.min_liquidity_offset_msat, duration_since_epoch, decay_params);
+ liquidity.max_liquidity_offset_msat =
+ liquidity.decayed_offset(liquidity.max_liquidity_offset_msat, duration_since_epoch, decay_params);
+ liquidity.last_updated = duration_since_epoch;
+
+ let elapsed_time =
+ duration_since_epoch.saturating_sub(liquidity.offset_history_last_updated);
+ if elapsed_time > decay_params.historical_no_updates_half_life {
+ let half_life = decay_params.historical_no_updates_half_life.as_secs_f64();
+ if half_life != 0.0 {
+ let divisor = powf64(2048.0, elapsed_time.as_secs_f64() / half_life) as u64;
+ for bucket in liquidity.min_liquidity_offset_history.buckets.iter_mut() {
+ *bucket = ((*bucket as u64) * 1024 / divisor) as u16;
+ }
+ for bucket in liquidity.max_liquidity_offset_history.buckets.iter_mut() {
+ *bucket = ((*bucket as u64) * 1024 / divisor) as u16;
+ }
+ liquidity.offset_history_last_updated = duration_since_epoch;
+ }
+ }
+ liquidity.min_liquidity_offset_msat != 0 || liquidity.max_liquidity_offset_msat != 0 ||
+ liquidity.min_liquidity_offset_history.buckets != [0; 32] ||
+ liquidity.max_liquidity_offset_history.buckets != [0; 32]
+ });
}
}
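// Minimal sketch of the historical-bucket decay performed in `time_passed` above, assuming a
// 10-second half-life purely for illustration. Each pass scales every fixed-point bucket by
// 1024 / 2048^(elapsed / half_life); since a pass only fires once more than one half-life has
// elapsed and then resets `offset_history_last_updated`, in practice each pass roughly halves
// the buckets.
fn sketch_bucket_decay(bucket: u16, elapsed_secs: f64, half_life_secs: f64) -> u16 {
	let divisor = 2048f64.powf(elapsed_secs / half_life_secs) as u64;
	((bucket as u64) * 1024 / divisor) as u16
}

#[test]
fn sketch_bucket_decay_halves_after_one_half_life() {
	assert_eq!(sketch_bucket_decay(32, 10.0, 10.0), 16);
}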
#[cfg(c_bindings)]
-impl<G: Deref<Target = NetworkGraph<L>>, L: Deref, T: Time> Score for ProbabilisticScorerUsingTime<G, L, T>
+impl<G: Deref<Target = NetworkGraph<L>>, L: Deref> Score for ProbabilisticScorer<G, L>
where L::Target: Logger {}
+#[cfg(feature = "std")]
+#[inline]
+fn powf64(n: f64, exp: f64) -> f64 {
+ n.powf(exp)
+}
+#[cfg(not(feature = "std"))]
+fn powf64(n: f64, exp: f64) -> f64 {
+ libm::powf(n as f32, exp as f32) as f64
+}
+
mod approx {
const BITS: u32 = 64;
const HIGHEST_BIT: u32 = BITS - 1;
/// in each of 32 buckets.
#[derive(Clone, Copy)]
pub(super) struct HistoricalBucketRangeTracker {
- buckets: [u16; 32],
+ pub(super) buckets: [u16; 32],
}
/// Buckets are stored in fixed point numbers with a 5 bit fractional part. Thus, the value
self.buckets[bucket] = self.buckets[bucket].saturating_add(BUCKET_FIXED_POINT_ONE);
}
}
- /// Decay all buckets by the given number of half-lives. Used to more aggressively remove old
- /// datapoints as we receive newer information.
- #[inline]
- pub(super) fn time_decay_data(&mut self, half_lives: u32) {
- for e in self.buckets.iter_mut() {
- *e = e.checked_shr(half_lives).unwrap_or(0);
- }
- }
}
impl_writeable_tlv_based!(HistoricalBucketRangeTracker, { (0, buckets, required) });
}
impl<D: Deref<Target = HistoricalBucketRangeTracker>> HistoricalMinMaxBuckets<D> {
- pub(super) fn get_decayed_buckets<T: Time>(&self, now: T, last_updated: T, half_life: Duration)
- -> Option<([u16; 32], [u16; 32])> {
- let (_, required_decays) = self.get_total_valid_points(now, last_updated, half_life)?;
-
- let mut min_buckets = *self.min_liquidity_offset_history;
- min_buckets.time_decay_data(required_decays);
- let mut max_buckets = *self.max_liquidity_offset_history;
- max_buckets.time_decay_data(required_decays);
- Some((min_buckets.buckets, max_buckets.buckets))
- }
#[inline]
- pub(super) fn get_total_valid_points<T: Time>(&self, now: T, last_updated: T, half_life: Duration)
- -> Option<(u64, u32)> {
- let required_decays = now.duration_since(last_updated).as_secs()
- .checked_div(half_life.as_secs())
- .map_or(u32::max_value(), |decays| cmp::min(decays, u32::max_value() as u64) as u32);
+ pub(super) fn calculate_success_probability_times_billion(
+ &self, params: &ProbabilisticScoringFeeParameters, amount_msat: u64,
+ capacity_msat: u64
+ ) -> Option<u64> {
+ // If historical penalties are enabled, we try to calculate a probability of success
+ // given our historical distribution of min- and max-liquidity bounds in a channel.
+ // To do so, we walk the set of historical liquidity bucket (min, max) combinations
+ // (where min_idx < max_idx, as having a minimum above our maximum is an invalid
+ // state). For each pair, we calculate the probability as if the bucket's corresponding
+ // min- and max- liquidity bounds were our current liquidity bounds and then multiply
+ // that probability by the weight of the selected buckets.
+ let payment_pos = amount_to_pos(amount_msat, capacity_msat);
+ if payment_pos >= POSITION_TICKS { return None; }
let mut total_valid_points_tracked = 0;
for (min_idx, min_bucket) in self.min_liquidity_offset_history.buckets.iter().enumerate() {
// If the total valid points is smaller than 1.0 (i.e. 32 in our fixed-point scheme),
// treat it as if we were fully decayed.
const FULLY_DECAYED: u16 = BUCKET_FIXED_POINT_ONE * BUCKET_FIXED_POINT_ONE;
- if total_valid_points_tracked.checked_shr(required_decays).unwrap_or(0) < FULLY_DECAYED.into() {
+ if total_valid_points_tracked < FULLY_DECAYED.into() {
return None;
}
- Some((total_valid_points_tracked, required_decays))
- }
-
- #[inline]
- pub(super) fn calculate_success_probability_times_billion<T: Time>(
- &self, now: T, last_updated: T, half_life: Duration,
- params: &ProbabilisticScoringFeeParameters, amount_msat: u64, capacity_msat: u64
- ) -> Option<u64> {
- // If historical penalties are enabled, we try to calculate a probability of success
- // given our historical distribution of min- and max-liquidity bounds in a channel.
- // To do so, we walk the set of historical liquidity bucket (min, max) combinations
- // (where min_idx < max_idx, as having a minimum above our maximum is an invalid
- // state). For each pair, we calculate the probability as if the bucket's corresponding
- // min- and max- liquidity bounds were our current liquidity bounds and then multiply
- // that probability by the weight of the selected buckets.
- let payment_pos = amount_to_pos(amount_msat, capacity_msat);
- if payment_pos >= POSITION_TICKS { return None; }
-
- // Check if all our buckets are zero, once decayed and treat it as if we had no data. We
- // don't actually use the decayed buckets, though, as that would lose precision.
- let (total_valid_points_tracked, _)
- = self.get_total_valid_points(now, last_updated, half_life)?;
-
let mut cumulative_success_prob_times_billion = 0;
// Special-case the 0th min bucket - it generally means we failed a payment, so only
// consider the highest (i.e. largest-offset-from-max-capacity) max bucket for all
}
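// Minimal sketch of the bucket-pair walk described in the comment inside
// `calculate_success_probability_times_billion` above. `pair_success_prob` stands in for the
// per-pair liquidity model and is an assumption of this sketch, not an LDK API.
fn sketch_historical_success_prob(
	min_buckets: &[u16; 32], max_buckets: &[u16; 32],
	pair_success_prob: impl Fn(usize, usize) -> f64,
) -> Option<f64> {
	// Weight each (min, max) combination by the product of the datapoints in the two buckets;
	// only pairs describing a non-empty liquidity range are valid, hence `take(32 - min_idx)`
	// (max buckets count down from capacity).
	let mut total_weight = 0u64;
	for (min_idx, min_pts) in min_buckets.iter().enumerate() {
		for max_pts in max_buckets.iter().take(32 - min_idx) {
			total_weight += (*min_pts as u64) * (*max_pts as u64);
		}
	}
	if total_weight == 0 { return None; }
	let mut prob = 0.0;
	for (min_idx, min_pts) in min_buckets.iter().enumerate() {
		for (max_idx, max_pts) in max_buckets.iter().enumerate().take(32 - min_idx) {
			let weight = (*min_pts as u64) * (*max_pts as u64);
			prob += pair_success_prob(min_idx, max_idx) * (weight as f64 / total_weight as f64);
		}
	}
	Some(prob)
}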
use bucketed_history::{LegacyHistoricalBucketRangeTracker, HistoricalBucketRangeTracker, HistoricalMinMaxBuckets};
-impl<G: Deref<Target = NetworkGraph<L>>, L: Deref, T: Time> Writeable for ProbabilisticScorerUsingTime<G, L, T> where L::Target: Logger {
+impl<G: Deref<Target = NetworkGraph<L>>, L: Deref> Writeable for ProbabilisticScorer<G, L> where L::Target: Logger {
#[inline]
fn write<W: Writer>(&self, w: &mut W) -> Result<(), io::Error> {
write_tlv_fields!(w, {
}
}
-impl<G: Deref<Target = NetworkGraph<L>>, L: Deref, T: Time>
-ReadableArgs<(ProbabilisticScoringDecayParameters, G, L)> for ProbabilisticScorerUsingTime<G, L, T> where L::Target: Logger {
+impl<G: Deref<Target = NetworkGraph<L>>, L: Deref>
+ReadableArgs<(ProbabilisticScoringDecayParameters, G, L)> for ProbabilisticScorer<G, L> where L::Target: Logger {
#[inline]
fn read<R: Read>(
r: &mut R, args: (ProbabilisticScoringDecayParameters, G, L)
}
}
-impl<T: Time> Writeable for ChannelLiquidity<T> {
+impl Writeable for ChannelLiquidity {
#[inline]
fn write<W: Writer>(&self, w: &mut W) -> Result<(), io::Error> {
- let duration_since_epoch = T::duration_since_epoch() - self.last_updated.elapsed();
write_tlv_fields!(w, {
(0, self.min_liquidity_offset_msat, required),
// 1 was the min_liquidity_offset_history in octile form
(2, self.max_liquidity_offset_msat, required),
// 3 was the max_liquidity_offset_history in octile form
- (4, duration_since_epoch, required),
+ (4, self.last_updated, required),
(5, Some(self.min_liquidity_offset_history), option),
(7, Some(self.max_liquidity_offset_history), option),
+ (9, self.offset_history_last_updated, required),
});
Ok(())
}
}
-impl<T: Time> Readable for ChannelLiquidity<T> {
+impl Readable for ChannelLiquidity {
#[inline]
fn read<R: Read>(r: &mut R) -> Result<Self, DecodeError> {
let mut min_liquidity_offset_msat = 0;
let mut legacy_max_liq_offset_history: Option<LegacyHistoricalBucketRangeTracker> = None;
let mut min_liquidity_offset_history: Option<HistoricalBucketRangeTracker> = None;
let mut max_liquidity_offset_history: Option<HistoricalBucketRangeTracker> = None;
- let mut duration_since_epoch = Duration::from_secs(0);
+ let mut last_updated = Duration::from_secs(0);
+ let mut offset_history_last_updated = None;
read_tlv_fields!(r, {
(0, min_liquidity_offset_msat, required),
(1, legacy_min_liq_offset_history, option),
(2, max_liquidity_offset_msat, required),
(3, legacy_max_liq_offset_history, option),
- (4, duration_since_epoch, required),
+ (4, last_updated, required),
(5, min_liquidity_offset_history, option),
(7, max_liquidity_offset_history, option),
+ (9, offset_history_last_updated, option),
});
- // On rust prior to 1.60 `Instant::duration_since` will panic if time goes backwards.
- // We write `last_updated` as wallclock time even though its ultimately an `Instant` (which
- // is a time from a monotonic clock usually represented as an offset against boot time).
- // Thus, we have to construct an `Instant` by subtracting the difference in wallclock time
- // from the one that was written. However, because `Instant` can panic if we construct one
- // in the future, we must handle wallclock time jumping backwards, which we do by simply
- // using `Instant::now()` in that case.
- let wall_clock_now = T::duration_since_epoch();
- let now = T::now();
- let last_updated = if wall_clock_now > duration_since_epoch {
- now - (wall_clock_now - duration_since_epoch)
- } else { now };
+
if min_liquidity_offset_history.is_none() {
if let Some(legacy_buckets) = legacy_min_liq_offset_history {
min_liquidity_offset_history = Some(legacy_buckets.into_current());
min_liquidity_offset_history: min_liquidity_offset_history.unwrap(),
max_liquidity_offset_history: max_liquidity_offset_history.unwrap(),
last_updated,
+ offset_history_last_updated: offset_history_last_updated.unwrap_or(last_updated),
})
}
}
#[cfg(test)]
mod tests {
- use super::{ChannelLiquidity, HistoricalBucketRangeTracker, ProbabilisticScoringFeeParameters, ProbabilisticScoringDecayParameters, ProbabilisticScorerUsingTime};
+ use super::{ChannelLiquidity, HistoricalBucketRangeTracker, ProbabilisticScoringFeeParameters, ProbabilisticScoringDecayParameters, ProbabilisticScorer};
use crate::blinded_path::{BlindedHop, BlindedPath};
use crate::util::config::UserConfig;
- use crate::util::time::Time;
- use crate::util::time::tests::SinceEpoch;
use crate::ln::channelmanager;
use crate::ln::msgs::{ChannelAnnouncement, ChannelUpdate, UnsignedChannelAnnouncement, UnsignedChannelUpdate};
use crate::routing::gossip::{EffectiveCapacity, NetworkGraph, NodeId};
- use crate::routing::router::{BlindedTail, Path, RouteHop, CandidateRouteHop};
+ use crate::routing::router::{BlindedTail, Path, RouteHop, CandidateRouteHop, PublicHopCandidate};
use crate::routing::scoring::{ChannelUsage, ScoreLookUp, ScoreUpdate};
use crate::util::ser::{ReadableArgs, Writeable};
use crate::util::test_utils::{self, TestLogger};
// `ProbabilisticScorer` tests
- /// A probabilistic scorer for testing with time that can be manually advanced.
- type ProbabilisticScorer<'a> = ProbabilisticScorerUsingTime::<&'a NetworkGraph<&'a TestLogger>, &'a TestLogger, SinceEpoch>;
-
fn sender_privkey() -> SecretKey {
SecretKey::from_slice(&[41; 32]).unwrap()
}
#[test]
fn liquidity_bounds_directed_from_lowest_node_id() {
let logger = TestLogger::new();
- let last_updated = SinceEpoch::now();
+ let last_updated = Duration::ZERO;
+ let offset_history_last_updated = Duration::ZERO;
let network_graph = network_graph(&logger);
let decay_params = ProbabilisticScoringDecayParameters::default();
let mut scorer = ProbabilisticScorer::new(decay_params, &network_graph, &logger)
.with_channel(42,
ChannelLiquidity {
- min_liquidity_offset_msat: 700, max_liquidity_offset_msat: 100, last_updated,
+ min_liquidity_offset_msat: 700, max_liquidity_offset_msat: 100,
+ last_updated, offset_history_last_updated,
min_liquidity_offset_history: HistoricalBucketRangeTracker::new(),
max_liquidity_offset_history: HistoricalBucketRangeTracker::new(),
})
.with_channel(43,
ChannelLiquidity {
- min_liquidity_offset_msat: 700, max_liquidity_offset_msat: 100, last_updated,
+ min_liquidity_offset_msat: 700, max_liquidity_offset_msat: 100,
+ last_updated, offset_history_last_updated,
min_liquidity_offset_history: HistoricalBucketRangeTracker::new(),
max_liquidity_offset_history: HistoricalBucketRangeTracker::new(),
});
// Update minimum liquidity.
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&source, &target, 1_000, decay_params);
+ .as_directed(&source, &target, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 100);
assert_eq!(liquidity.max_liquidity_msat(), 300);
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&target, &source, 1_000, decay_params);
+ .as_directed(&target, &source, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 700);
assert_eq!(liquidity.max_liquidity_msat(), 900);
scorer.channel_liquidities.get_mut(&42).unwrap()
- .as_directed_mut(&source, &target, 1_000, decay_params)
- .set_min_liquidity_msat(200);
+ .as_directed_mut(&source, &target, 1_000)
+ .set_min_liquidity_msat(200, Duration::ZERO);
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&source, &target, 1_000, decay_params);
+ .as_directed(&source, &target, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 200);
assert_eq!(liquidity.max_liquidity_msat(), 300);
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&target, &source, 1_000, decay_params);
+ .as_directed(&target, &source, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 700);
assert_eq!(liquidity.max_liquidity_msat(), 800);
// Update maximum liquidity.
let liquidity = scorer.channel_liquidities.get(&43).unwrap()
- .as_directed(&target, &recipient, 1_000, decay_params);
+ .as_directed(&target, &recipient, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 700);
assert_eq!(liquidity.max_liquidity_msat(), 900);
let liquidity = scorer.channel_liquidities.get(&43).unwrap()
- .as_directed(&recipient, &target, 1_000, decay_params);
+ .as_directed(&recipient, &target, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 100);
assert_eq!(liquidity.max_liquidity_msat(), 300);
scorer.channel_liquidities.get_mut(&43).unwrap()
- .as_directed_mut(&target, &recipient, 1_000, decay_params)
- .set_max_liquidity_msat(200);
+ .as_directed_mut(&target, &recipient, 1_000)
+ .set_max_liquidity_msat(200, Duration::ZERO);
let liquidity = scorer.channel_liquidities.get(&43).unwrap()
- .as_directed(&target, &recipient, 1_000, decay_params);
+ .as_directed(&target, &recipient, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 0);
assert_eq!(liquidity.max_liquidity_msat(), 200);
let liquidity = scorer.channel_liquidities.get(&43).unwrap()
- .as_directed(&recipient, &target, 1_000, decay_params);
+ .as_directed(&recipient, &target, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 800);
assert_eq!(liquidity.max_liquidity_msat(), 1000);
}
#[test]
fn resets_liquidity_upper_bound_when_crossed_by_lower_bound() {
let logger = TestLogger::new();
- let last_updated = SinceEpoch::now();
+ let last_updated = Duration::ZERO;
+ let offset_history_last_updated = Duration::ZERO;
let network_graph = network_graph(&logger);
let decay_params = ProbabilisticScoringDecayParameters::default();
let mut scorer = ProbabilisticScorer::new(decay_params, &network_graph, &logger)
.with_channel(42,
ChannelLiquidity {
- min_liquidity_offset_msat: 200, max_liquidity_offset_msat: 400, last_updated,
+ min_liquidity_offset_msat: 200, max_liquidity_offset_msat: 400,
+ last_updated, offset_history_last_updated,
min_liquidity_offset_history: HistoricalBucketRangeTracker::new(),
max_liquidity_offset_history: HistoricalBucketRangeTracker::new(),
});
// Check initial bounds.
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&source, &target, 1_000, decay_params);
+ .as_directed(&source, &target, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 400);
assert_eq!(liquidity.max_liquidity_msat(), 800);
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&target, &source, 1_000, decay_params);
+ .as_directed(&target, &source, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 200);
assert_eq!(liquidity.max_liquidity_msat(), 600);
// Reset from source to target.
scorer.channel_liquidities.get_mut(&42).unwrap()
- .as_directed_mut(&source, &target, 1_000, decay_params)
- .set_min_liquidity_msat(900);
+ .as_directed_mut(&source, &target, 1_000)
+ .set_min_liquidity_msat(900, Duration::ZERO);
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&source, &target, 1_000, decay_params);
+ .as_directed(&source, &target, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 900);
assert_eq!(liquidity.max_liquidity_msat(), 1_000);
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&target, &source, 1_000, decay_params);
+ .as_directed(&target, &source, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 0);
assert_eq!(liquidity.max_liquidity_msat(), 100);
// Reset from target to source.
scorer.channel_liquidities.get_mut(&42).unwrap()
- .as_directed_mut(&target, &source, 1_000, decay_params)
- .set_min_liquidity_msat(400);
+ .as_directed_mut(&target, &source, 1_000)
+ .set_min_liquidity_msat(400, Duration::ZERO);
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&source, &target, 1_000, decay_params);
+ .as_directed(&source, &target, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 0);
assert_eq!(liquidity.max_liquidity_msat(), 600);
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&target, &source, 1_000, decay_params);
+ .as_directed(&target, &source, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 400);
assert_eq!(liquidity.max_liquidity_msat(), 1_000);
}
#[test]
fn resets_liquidity_lower_bound_when_crossed_by_upper_bound() {
let logger = TestLogger::new();
- let last_updated = SinceEpoch::now();
+ let last_updated = Duration::ZERO;
+ let offset_history_last_updated = Duration::ZERO;
let network_graph = network_graph(&logger);
let decay_params = ProbabilisticScoringDecayParameters::default();
let mut scorer = ProbabilisticScorer::new(decay_params, &network_graph, &logger)
.with_channel(42,
ChannelLiquidity {
- min_liquidity_offset_msat: 200, max_liquidity_offset_msat: 400, last_updated,
+ min_liquidity_offset_msat: 200, max_liquidity_offset_msat: 400,
+ last_updated, offset_history_last_updated,
min_liquidity_offset_history: HistoricalBucketRangeTracker::new(),
max_liquidity_offset_history: HistoricalBucketRangeTracker::new(),
});
// Check initial bounds.
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&source, &target, 1_000, decay_params);
+ .as_directed(&source, &target, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 400);
assert_eq!(liquidity.max_liquidity_msat(), 800);
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&target, &source, 1_000, decay_params);
+ .as_directed(&target, &source, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 200);
assert_eq!(liquidity.max_liquidity_msat(), 600);
// Reset from source to target.
scorer.channel_liquidities.get_mut(&42).unwrap()
- .as_directed_mut(&source, &target, 1_000, decay_params)
- .set_max_liquidity_msat(300);
+ .as_directed_mut(&source, &target, 1_000)
+ .set_max_liquidity_msat(300, Duration::ZERO);
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&source, &target, 1_000, decay_params);
+ .as_directed(&source, &target, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 0);
assert_eq!(liquidity.max_liquidity_msat(), 300);
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&target, &source, 1_000, decay_params);
+ .as_directed(&target, &source, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 700);
assert_eq!(liquidity.max_liquidity_msat(), 1_000);
// Reset from target to source.
scorer.channel_liquidities.get_mut(&42).unwrap()
- .as_directed_mut(&target, &source, 1_000, decay_params)
- .set_max_liquidity_msat(600);
+ .as_directed_mut(&target, &source, 1_000)
+ .set_max_liquidity_msat(600, Duration::ZERO);
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&source, &target, 1_000, decay_params);
+ .as_directed(&source, &target, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 400);
assert_eq!(liquidity.max_liquidity_msat(), 1_000);
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&target, &source, 1_000, decay_params);
+ .as_directed(&target, &source, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 0);
assert_eq!(liquidity.max_liquidity_msat(), 600);
}
let network_graph = network_graph.read_only();
let channel = network_graph.channel(42).unwrap();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 0);
let usage = ChannelUsage { amount_msat: 10_240, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 0);
#[test]
fn constant_penalty_outside_liquidity_bounds() {
let logger = TestLogger::new();
- let last_updated = SinceEpoch::now();
+ let last_updated = Duration::ZERO;
+ let offset_history_last_updated = Duration::ZERO;
let network_graph = network_graph(&logger);
let params = ProbabilisticScoringFeeParameters {
liquidity_penalty_multiplier_msat: 1_000,
let scorer = ProbabilisticScorer::new(decay_params, &network_graph, &logger)
.with_channel(42,
ChannelLiquidity {
- min_liquidity_offset_msat: 40, max_liquidity_offset_msat: 40, last_updated,
+ min_liquidity_offset_msat: 40, max_liquidity_offset_msat: 40,
+ last_updated, offset_history_last_updated,
min_liquidity_offset_history: HistoricalBucketRangeTracker::new(),
max_liquidity_offset_history: HistoricalBucketRangeTracker::new(),
});
};
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 0);
let usage = ChannelUsage { amount_msat: 50, ..usage };
assert_ne!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 0);
let successful_path = payment_path_for_amount(200);
let channel = &network_graph.read_only().channel(42).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 41,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 301);
- scorer.payment_path_failed(&failed_path, 41);
+ scorer.payment_path_failed(&failed_path, 41, Duration::ZERO);
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 301);
- scorer.payment_path_successful(&successful_path);
+ scorer.payment_path_successful(&successful_path, Duration::ZERO);
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 301);
}
};
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 128);
let usage = ChannelUsage { amount_msat: 500, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 301);
let usage = ChannelUsage { amount_msat: 750, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 602);
- scorer.payment_path_failed(&path, 43);
+ scorer.payment_path_failed(&path, 43, Duration::ZERO);
let usage = ChannelUsage { amount_msat: 250, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 0);
};
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 128);
let usage = ChannelUsage { amount_msat: 500, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 301);
let usage = ChannelUsage { amount_msat: 750, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 602);
- scorer.payment_path_failed(&path, 42);
+ scorer.payment_path_failed(&path, 42, Duration::ZERO);
let usage = ChannelUsage { amount_msat: 250, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 300);
};
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&node_a).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 128);
// Note that a default liquidity bound is used for B -> C as no channel exists
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&node_b).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 43,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 128);
let channel = network_graph.read_only().channel(44).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&node_c).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 44,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 128);
- scorer.payment_path_failed(&Path { hops: path, blinded_tail: None }, 43);
+ scorer.payment_path_failed(&Path { hops: path, blinded_tail: None }, 43, Duration::ZERO);
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&node_a).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 80);
// Note that a default liquidity bound is used for B -> C as no channel exists
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&node_b).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 43,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 128);
let channel = network_graph.read_only().channel(44).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&node_c).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 44,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 128);
}
let channel_42 = network_graph.get(&42).unwrap();
let channel_43 = network_graph.get(&43).unwrap();
let (info, _) = channel_42.as_directed_from(&source).unwrap();
- let candidate_41 = CandidateRouteHop::PublicHop {
+ let candidate_41 = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 41,
- };
+ });
let (info, target) = channel_42.as_directed_from(&source).unwrap();
- let candidate_42 = CandidateRouteHop::PublicHop {
+ let candidate_42 = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
let (info, _) = channel_43.as_directed_from(&target).unwrap();
- let candidate_43 = CandidateRouteHop::PublicHop {
+ let candidate_43 = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 43,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate_41, usage, ¶ms), 128);
assert_eq!(scorer.channel_penalty_msat(&candidate_42, usage, ¶ms), 128);
assert_eq!(scorer.channel_penalty_msat(&candidate_43, usage, ¶ms), 128);
- scorer.payment_path_successful(&payment_path_for_amount(500));
+ scorer.payment_path_successful(&payment_path_for_amount(500), Duration::ZERO);
assert_eq!(scorer.channel_penalty_msat(&candidate_41, usage, ¶ms), 128);
assert_eq!(scorer.channel_penalty_msat(&candidate_42, usage, ¶ms), 300);
};
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 0);
let usage = ChannelUsage { amount_msat: 1_023, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 2_000);
- scorer.payment_path_failed(&payment_path_for_amount(768), 42);
- scorer.payment_path_failed(&payment_path_for_amount(128), 43);
+ scorer.payment_path_failed(&payment_path_for_amount(768), 42, Duration::ZERO);
+ scorer.payment_path_failed(&payment_path_for_amount(128), 43, Duration::ZERO);
// Initial penalties
let usage = ChannelUsage { amount_msat: 128, ..usage };
let usage = ChannelUsage { amount_msat: 896, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), u64::max_value());
- // No decay
- SinceEpoch::advance(Duration::from_secs(4));
- let usage = ChannelUsage { amount_msat: 128, ..usage };
- assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 0);
- let usage = ChannelUsage { amount_msat: 256, ..usage };
- assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 93);
- let usage = ChannelUsage { amount_msat: 768, ..usage };
- assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 1_479);
- let usage = ChannelUsage { amount_msat: 896, ..usage };
- assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), u64::max_value());
-
// Half decay (i.e., three-quarter life)
- SinceEpoch::advance(Duration::from_secs(1));
+ scorer.time_passed(Duration::from_secs(5));
let usage = ChannelUsage { amount_msat: 128, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 22);
let usage = ChannelUsage { amount_msat: 256, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), u64::max_value());
// One decay (i.e., half life)
- SinceEpoch::advance(Duration::from_secs(5));
+ scorer.time_passed(Duration::from_secs(10));
let usage = ChannelUsage { amount_msat: 64, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 0);
let usage = ChannelUsage { amount_msat: 128, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), u64::max_value());
// Fully decay liquidity lower bound.
- SinceEpoch::advance(Duration::from_secs(10 * 7));
+ scorer.time_passed(Duration::from_secs(10 * 8));
let usage = ChannelUsage { amount_msat: 0, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 0);
let usage = ChannelUsage { amount_msat: 1, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), u64::max_value());
// Fully decay liquidity upper bound.
- SinceEpoch::advance(Duration::from_secs(10));
+ scorer.time_passed(Duration::from_secs(10 * 9));
let usage = ChannelUsage { amount_msat: 0, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 0);
let usage = ChannelUsage { amount_msat: 1_024, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), u64::max_value());
- SinceEpoch::advance(Duration::from_secs(10));
+ scorer.time_passed(Duration::from_secs(10 * 10));
let usage = ChannelUsage { amount_msat: 0, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 0);
let usage = ChannelUsage { amount_msat: 1_024, ..usage };
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), u64::max_value());
}
- #[test]
- fn decays_liquidity_bounds_without_shift_overflow() {
- let logger = TestLogger::new();
- let network_graph = network_graph(&logger);
- let params = ProbabilisticScoringFeeParameters {
- liquidity_penalty_multiplier_msat: 1_000,
- ..ProbabilisticScoringFeeParameters::zero_penalty()
- };
- let decay_params = ProbabilisticScoringDecayParameters {
- liquidity_offset_half_life: Duration::from_secs(10),
- ..ProbabilisticScoringDecayParameters::default()
- };
- let mut scorer = ProbabilisticScorer::new(decay_params, &network_graph, &logger);
- let source = source_node_id();
- let usage = ChannelUsage {
- amount_msat: 256,
- inflight_htlc_msat: 0,
- effective_capacity: EffectiveCapacity::Total { capacity_msat: 1_024, htlc_maximum_msat: 1_000 },
- };
- let channel = network_graph.read_only().channel(42).unwrap().to_owned();
- let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
- info,
- short_channel_id: 42,
- };
- assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 125);
-
- scorer.payment_path_failed(&payment_path_for_amount(512), 42);
- assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 281);
-
- // An unchecked right shift 64 bits or more in DirectedChannelLiquidity::decayed_offset_msat
- // would cause an overflow.
- SinceEpoch::advance(Duration::from_secs(10 * 64));
- assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 125);
-
- SinceEpoch::advance(Duration::from_secs(10));
- assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 125);
- }
-
#[test]
fn restricts_liquidity_bounds_after_decay() {
let logger = TestLogger::new();
};
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 300);
// More knowledge gives higher confidence (256, 768), meaning a lower penalty.
- scorer.payment_path_failed(&payment_path_for_amount(768), 42);
- scorer.payment_path_failed(&payment_path_for_amount(256), 43);
+ scorer.payment_path_failed(&payment_path_for_amount(768), 42, Duration::ZERO);
+ scorer.payment_path_failed(&payment_path_for_amount(256), 43, Duration::ZERO);
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 281);
// Decaying knowledge gives less confidence (128, 896), meaning a higher penalty.
- SinceEpoch::advance(Duration::from_secs(10));
+ scorer.time_passed(Duration::from_secs(10));
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 291);
// Reducing the upper bound gives more confidence (128, 832) that the payment amount (512)
// is closer to the upper bound, meaning a higher penalty.
- scorer.payment_path_successful(&payment_path_for_amount(64));
+ scorer.payment_path_successful(&payment_path_for_amount(64), Duration::from_secs(10));
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 331);
// Increasing the lower bound gives more confidence (256, 832) that the payment amount (512)
// is closer to the lower bound, meaning a lower penalty.
- scorer.payment_path_failed(&payment_path_for_amount(256), 43);
+ scorer.payment_path_failed(&payment_path_for_amount(256), 43, Duration::from_secs(10));
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 245);
// Further decaying affects the lower bound more than the upper bound (128, 928).
- SinceEpoch::advance(Duration::from_secs(10));
+ scorer.time_passed(Duration::from_secs(20));
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 280);
}
effective_capacity: EffectiveCapacity::Total { capacity_msat: 1_000, htlc_maximum_msat: 1_000 },
};
- scorer.payment_path_failed(&payment_path_for_amount(500), 42);
+ scorer.payment_path_failed(&payment_path_for_amount(500), 42, Duration::ZERO);
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), u64::max_value());
- SinceEpoch::advance(Duration::from_secs(10));
+ scorer.time_passed(Duration::from_secs(10));
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 473);
- scorer.payment_path_failed(&payment_path_for_amount(250), 43);
+ scorer.payment_path_failed(&payment_path_for_amount(250), 43, Duration::from_secs(10));
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 300);
let mut serialized_scorer = Vec::new();
let mut serialized_scorer = io::Cursor::new(&serialized_scorer);
let deserialized_scorer =
- <ProbabilisticScorer>::read(&mut serialized_scorer, (decay_params, &network_graph, &logger)).unwrap();
+ <ProbabilisticScorer<_, _>>::read(&mut serialized_scorer, (decay_params, &network_graph, &logger)).unwrap();
assert_eq!(deserialized_scorer.channel_penalty_msat(&candidate, usage, ¶ms), 300);
}
- #[test]
- fn decays_persisted_liquidity_bounds() {
+ fn do_decays_persisted_liquidity_bounds(decay_before_reload: bool) {
let logger = TestLogger::new();
let network_graph = network_graph(&logger);
let params = ProbabilisticScoringFeeParameters {
effective_capacity: EffectiveCapacity::Total { capacity_msat: 1_000, htlc_maximum_msat: 1_000 },
};
- scorer.payment_path_failed(&payment_path_for_amount(500), 42);
+ scorer.payment_path_failed(&payment_path_for_amount(500), 42, Duration::ZERO);
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), u64::max_value());
+ if decay_before_reload {
+ scorer.time_passed(Duration::from_secs(10));
+ }
+
let mut serialized_scorer = Vec::new();
scorer.write(&mut serialized_scorer).unwrap();
- SinceEpoch::advance(Duration::from_secs(10));
-
let mut serialized_scorer = io::Cursor::new(&serialized_scorer);
- let deserialized_scorer =
- <ProbabilisticScorer>::read(&mut serialized_scorer, (decay_params, &network_graph, &logger)).unwrap();
+ let mut deserialized_scorer =
+ <ProbabilisticScorer<_, _>>::read(&mut serialized_scorer, (decay_params, &network_graph, &logger)).unwrap();
+ if !decay_before_reload {
+ scorer.time_passed(Duration::from_secs(10));
+ deserialized_scorer.time_passed(Duration::from_secs(10));
+ }
assert_eq!(deserialized_scorer.channel_penalty_msat(&candidate, usage, ¶ms), 473);
- scorer.payment_path_failed(&payment_path_for_amount(250), 43);
+ scorer.payment_path_failed(&payment_path_for_amount(250), 43, Duration::from_secs(10));
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 300);
- SinceEpoch::advance(Duration::from_secs(10));
+ deserialized_scorer.time_passed(Duration::from_secs(20));
assert_eq!(deserialized_scorer.channel_penalty_msat(&candidate, usage, ¶ms), 370);
}
+ #[test]
+ fn decays_persisted_liquidity_bounds() {
+ do_decays_persisted_liquidity_bounds(false);
+ do_decays_persisted_liquidity_bounds(true);
+ }
+
#[test]
fn scores_realistic_payments() {
// Shows the scores of "realistic" sends of 100k sats over channels of 1-10m sats (with a
};
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 11497);
let usage = ChannelUsage {
effective_capacity: EffectiveCapacity::Total { capacity_msat: 1_950_000_000, htlc_maximum_msat: 1_000 }, ..usage
let scorer = ProbabilisticScorer::new(ProbabilisticScoringDecayParameters::default(), &network_graph, &logger);
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 58);
let params = ProbabilisticScoringFeeParameters {
let scorer = ProbabilisticScorer::new(ProbabilisticScoringDecayParameters::default(), &network_graph, &logger);
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 300);
let params = ProbabilisticScoringFeeParameters {
let decay_params = ProbabilisticScoringDecayParameters::zero_penalty();
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
let scorer = ProbabilisticScorer::new(decay_params, &network_graph, &logger);
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 80_000);
}
let network_graph = network_graph.read_only();
let channel = network_graph.channel(42).unwrap();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_ne!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), u64::max_value());
let usage = ChannelUsage { inflight_htlc_msat: 251, ..usage };
let network_graph = network_graph.read_only();
let channel = network_graph.channel(42).unwrap();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), base_penalty_msat);
let usage = ChannelUsage { amount_msat: 1_000, ..usage };
let network_graph = network_graph.read_only();
let channel = network_graph.channel(42).unwrap();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
// With no historical data the normal liquidity penalty calculation is used.
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 168);
assert_eq!(scorer.historical_estimated_payment_success_probability(42, &target, 42, ¶ms),
None);
- scorer.payment_path_failed(&payment_path_for_amount(1), 42);
+ scorer.payment_path_failed(&payment_path_for_amount(1), 42, Duration::ZERO);
{
let network_graph = network_graph.read_only();
let channel = network_graph.channel(42).unwrap();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 2048);
assert_eq!(scorer.channel_penalty_msat(&candidate, usage_1, ¶ms), 249);
// Even after we tell the scorer we definitely have enough available liquidity, it will
// still remember that there was some failure in the past, and assign a non-0 penalty.
- scorer.payment_path_failed(&payment_path_for_amount(1000), 43);
+ scorer.payment_path_failed(&payment_path_for_amount(1000), 43, Duration::ZERO);
{
let network_graph = network_graph.read_only();
let channel = network_graph.channel(42).unwrap();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 105);
}
// Advance the time forward 16 half-lives (which the docs claim will ensure all data is
// gone), and check that we're back to where we started.
- SinceEpoch::advance(Duration::from_secs(10 * 16));
+ scorer.time_passed(Duration::from_secs(10 * 16));
{
let network_graph = network_graph.read_only();
let channel = network_graph.channel(42).unwrap();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 168);
}
// Once fully decayed we still have data, but it's all-0s. In the future we may remove the
// data entirely instead.
assert_eq!(scorer.historical_estimated_channel_liquidity_probabilities(42, &target),
- None);
+ Some(([0; 32], [0; 32])));
assert_eq!(scorer.historical_estimated_payment_success_probability(42, &target, 1, ¶ms), None);
- let mut usage = ChannelUsage {
+ let usage = ChannelUsage {
amount_msat: 100,
inflight_htlc_msat: 1024,
effective_capacity: EffectiveCapacity::Total { capacity_msat: 1_024, htlc_maximum_msat: 1_024 },
};
- scorer.payment_path_failed(&payment_path_for_amount(1), 42);
+ scorer.payment_path_failed(&payment_path_for_amount(1), 42, Duration::from_secs(10 * 16));
{
let network_graph = network_graph.read_only();
let channel = network_graph.channel(42).unwrap();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 2050);
- usage.inflight_htlc_msat = 0;
- assert_eq!(scorer.channel_penalty_msat(&candidate, usage, ¶ms), 866);
let usage = ChannelUsage {
amount_msat: 1,
}
// Advance to decay all liquidity offsets to zero.
- SinceEpoch::advance(Duration::from_secs(60 * 60 * 10));
+ scorer.time_passed(Duration::from_secs(10 * (16 + 60 * 60)));
+
+ // Once even the bounds have decayed, information about the channel should be removed
+ // entirely.
+ assert_eq!(scorer.historical_estimated_channel_liquidity_probabilities(42, &target),
+ None);
// Use a path in the opposite direction, which has zero for htlc_maximum_msat. This will
// ensure that the effective capacity is zero to test division-by-zero edge cases.
path_hop(source_pubkey(), 42, 1),
path_hop(sender_pubkey(), 41, 0),
];
- scorer.payment_path_failed(&Path { hops: path, blinded_tail: None }, 42);
+ scorer.payment_path_failed(&Path { hops: path, blinded_tail: None }, 42, Duration::from_secs(10 * (16 + 60 * 60)));
}
#[test]
let network_graph = network_graph.read_only();
let channel = network_graph.channel(42).unwrap();
let (info, _) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, &params), 0);
// Check we receive anti-probing penalty for htlc_maximum_msat == channel_capacity.
};
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, target) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, &params), 300);
let mut path = payment_path_for_amount(768);
// final value is taken into account.
assert!(scorer.channel_liquidities.get(&42).is_none());
- scorer.payment_path_failed(&path, 42);
+ scorer.payment_path_failed(&path, 42, Duration::ZERO);
path.blinded_tail.as_mut().unwrap().final_value_msat = 256;
- scorer.payment_path_failed(&path, 43);
+ scorer.payment_path_failed(&path, 43, Duration::ZERO);
let liquidity = scorer.channel_liquidities.get(&42).unwrap()
- .as_directed(&source, &target, 1_000, decay_params);
+ .as_directed(&source, &target, 1_000);
assert_eq!(liquidity.min_liquidity_msat(), 256);
assert_eq!(liquidity.max_liquidity_msat(), 768);
}
};
let channel = network_graph.read_only().channel(42).unwrap().to_owned();
let (info, target) = channel.as_directed_from(&source).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info,
short_channel_id: 42,
- };
+ });
// With no historical data the normal liquidity penalty calculation is used, which results
// in a success probability of ~75%.
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, &params), 1269);
None);
// Fail to pay once, and then check the buckets and penalty.
- scorer.payment_path_failed(&payment_path_for_amount(amount_msat), 42);
+ scorer.payment_path_failed(&payment_path_for_amount(amount_msat), 42, Duration::ZERO);
// The penalty should be the maximum penalty, as the payment we're scoring is now in the
// same bucket which is the only maximum datapoint.
assert_eq!(scorer.channel_penalty_msat(&candidate, usage, &params),
// ...but once we see a failure, we consider the payment to be substantially less likely,
// even though not a probability of zero, as we still look at the second max bucket which
// now shows 31.
- scorer.payment_path_failed(&payment_path_for_amount(amount_msat), 42);
+ scorer.payment_path_failed(&payment_path_for_amount(amount_msat), 42, Duration::ZERO);
assert_eq!(scorer.historical_estimated_channel_liquidity_probabilities(42, &target),
Some(([63, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[32, 31, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])));
Some(0.0));
}
}
+
+#[cfg(ldk_bench)]
+pub mod benches {
+ use super::*;
+ use criterion::Criterion;
+ use crate::routing::router::{bench_utils, RouteHop};
+ use crate::util::test_utils::TestLogger;
+ use crate::ln::features::{ChannelFeatures, NodeFeatures};
+
+ pub fn decay_100k_channel_bounds(bench: &mut Criterion) {
+ let logger = TestLogger::new();
+ let network_graph = bench_utils::read_network_graph(&logger).unwrap();
+ let mut scorer = ProbabilisticScorer::new(Default::default(), &network_graph, &logger);
+ // Score a number of random channels
+ let mut seed: u64 = 0xdeadbeef;
+ for _ in 0..100_000 {
+ seed = seed.overflowing_mul(6364136223846793005).0.overflowing_add(1).0;
+ let (victim, victim_dst, amt) = {
+ let rong = network_graph.read_only();
+ let channels = rong.channels();
+ let chan = channels.unordered_iter()
+ .skip((seed as usize) % channels.len())
+ .next().unwrap();
+ seed = seed.overflowing_mul(6364136223846793005).0.overflowing_add(1).0;
+ let amt = seed % chan.1.capacity_sats.map(|c| c * 1000)
+ .or(chan.1.one_to_two.as_ref().map(|info| info.htlc_maximum_msat))
+ .or(chan.1.two_to_one.as_ref().map(|info| info.htlc_maximum_msat))
+ .unwrap_or(1_000_000_000).saturating_add(1);
+ (*chan.0, chan.1.node_two, amt)
+ };
+ let path = Path {
+ hops: vec![RouteHop {
+ pubkey: victim_dst.as_pubkey().unwrap(),
+ node_features: NodeFeatures::empty(),
+ short_channel_id: victim,
+ channel_features: ChannelFeatures::empty(),
+ fee_msat: amt,
+ cltv_expiry_delta: 42,
+ maybe_announced_channel: true,
+ }],
+ blinded_tail: None
+ };
+ seed = seed.overflowing_mul(6364136223846793005).0.overflowing_add(1).0;
+ if seed % 1 == 0 {
+ scorer.probe_failed(&path, victim, Duration::ZERO);
+ } else {
+ scorer.probe_successful(&path, Duration::ZERO);
+ }
+ }
+ let mut cur_time = Duration::ZERO;
+ cur_time += Duration::from_millis(1);
+ scorer.time_passed(cur_time);
+ bench.bench_function("decay_100k_channel_bounds", |b| b.iter(|| {
+ cur_time += Duration::from_millis(1);
+ scorer.time_passed(cur_time);
+ }));
+ }
+}
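The tests and benchmark above drive decay of the scorer's liquidity data explicitly through `time_passed`, passing in the current time as a `Duration`. Below is a minimal sketch of calling it from application code; it assumes `std`, a scorer implementing `ScoreUpdate`, and uses wall-clock time since the Unix epoch purely for illustration (the helper name is not an LDK API).

// Illustrative sketch only: feed the scorer the current time so its liquidity bounds and
// historical buckets decay. Any consistent Duration-since-a-fixed-epoch works; wall-clock
// time since the Unix epoch is used here for simplicity.
use std::time::{Duration, SystemTime, UNIX_EPOCH};

use lightning::routing::scoring::ScoreUpdate;

fn decay_scorer<S: ScoreUpdate>(scorer: &mut S) {
    let now: Duration = SystemTime::now().duration_since(UNIX_EPOCH).unwrap_or(Duration::ZERO);
    scorer.time_passed(now);
}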
Ok(MutexGuard { lock: self.inner.borrow_mut() })
}
- pub fn try_lock<'a>(&'a self) -> LockResult<MutexGuard<'a, T>> {
- Ok(MutexGuard { lock: self.inner.borrow_mut() })
- }
-
pub fn into_inner(self) -> LockResult<T> {
Ok(self.inner.into_inner())
}
}
#[inline]
+#[allow(unused_variables)]
pub fn sign_with_aux_rand<C: Signing, ES: Deref>(
ctx: &Secp256k1<C>, msg: &Message, sk: &SecretKey, entropy_source: &ES
) -> Signature where ES::Target: EntropySource {
///
/// # Pruning stale channel updates
///
-/// Stale updates are pruned when a full monitor is written. The old monitor is first read, and if
-/// that succeeds, updates in the range between the old and new monitors are deleted. The `lazy`
-/// flag is used on the [`KVStore::remove`] method, so there are no guarantees that the deletions
+/// Stale updates are pruned once the consolidation threshold given by `maximum_pending_updates`
+/// is reached: monitor updates with ids in the range between the latest `update_id` and
+/// `update_id - maximum_pending_updates` are deleted.
+/// The `lazy` flag is used on the [`KVStore::remove`] method, so there are no guarantees that the deletions
/// will complete. However, stale updates are not a problem for data integrity, since updates are
/// only read that are higher than the stored [`ChannelMonitor`]'s `update_id`.
///
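As a rough illustration of the range arithmetic described above, here is a minimal sketch (not the persister's actual code) of which update ids become eligible for deletion after a full monitor write. It assumes the closed-channel sentinel `CLOSED_CHANNEL_UPDATE_ID` is `u64::MAX`, matching its use elsewhere in this patch.

// Illustrative sketch only: returns the inclusive range of stale update ids to remove after
// a full monitor write. For a closing monitor we only know the previously stored monitor's
// update_id, so the range is guessed from it; otherwise it trails the latest update_id by
// `maximum_pending_updates`.
const CLOSED_CHANNEL_UPDATE_ID: u64 = u64::MAX;

fn stale_update_range(
    latest_update_id: u64, old_monitor_update_id: Option<u64>, maximum_pending_updates: u64,
) -> Option<(u64, u64)> {
    if latest_update_id == CLOSED_CHANNEL_UPDATE_ID {
        // Updates are never written at the sentinel id itself, so cap the end just below it.
        old_monitor_update_id.map(|start| {
            (start, core::cmp::min(start.saturating_add(maximum_pending_updates), CLOSED_CHANNEL_UPDATE_ID - 1))
        })
    } else {
        Some((latest_update_id.saturating_sub(maximum_pending_updates), latest_update_id))
    }
}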
) -> chain::ChannelMonitorUpdateStatus {
// Determine the proper key for this monitor
let monitor_name = MonitorName::from(funding_txo);
- let maybe_old_monitor = self.read_monitor(&monitor_name);
- match maybe_old_monitor {
- Ok((_, ref old_monitor)) => {
- // Check that this key isn't already storing a monitor with a higher update_id
- // (collision)
- if old_monitor.get_latest_update_id() > monitor.get_latest_update_id() {
- log_error!(
- self.logger,
- "Tried to write a monitor at the same outpoint {} with a higher update_id!",
- monitor_name.as_str()
- );
- return chain::ChannelMonitorUpdateStatus::UnrecoverableError;
- }
- }
- // This means the channel monitor is new.
- Err(ref e) if e.kind() == io::ErrorKind::NotFound => {}
- _ => return chain::ChannelMonitorUpdateStatus::UnrecoverableError,
- }
// Serialize and write the new monitor
let mut monitor_bytes = Vec::with_capacity(
MONITOR_UPDATING_PERSISTER_PREPEND_SENTINEL.len() + monitor.serialized_length(),
&monitor_bytes,
) {
Ok(_) => {
- // Assess cleanup. Typically, we'll clean up only between the last two known full
- // monitors.
- if let Ok((_, old_monitor)) = maybe_old_monitor {
- let start = old_monitor.get_latest_update_id();
- let end = if monitor.get_latest_update_id() == CLOSED_CHANNEL_UPDATE_ID {
- // We don't want to clean the rest of u64, so just do possible pending
- // updates. Note that we never write updates at
- // `CLOSED_CHANNEL_UPDATE_ID`.
- cmp::min(
- start.saturating_add(self.maximum_pending_updates),
- CLOSED_CHANNEL_UPDATE_ID - 1,
- )
- } else {
- monitor.get_latest_update_id().saturating_sub(1)
- };
- // We should bother cleaning up only if there's at least one update
- // expected.
- for update_id in start..=end {
- let update_name = UpdateName::from(update_id);
- #[cfg(debug_assertions)]
- {
- if let Ok(update) =
- self.read_monitor_update(&monitor_name, &update_name)
- {
- // Assert that we are reading what we think we are.
- debug_assert_eq!(update.update_id, update_name.0);
- } else if update_id != start && monitor.get_latest_update_id() != CLOSED_CHANNEL_UPDATE_ID
- {
- // We're deleting something we should know doesn't exist.
- panic!(
- "failed to read monitor update {}",
- update_name.as_str()
- );
- }
- // On closed channels, we will unavoidably try to read
- // non-existent updates since we have to guess at the range of
- // stale updates, so do nothing.
- }
- if let Err(e) = self.kv_store.remove(
- CHANNEL_MONITOR_UPDATE_PERSISTENCE_PRIMARY_NAMESPACE,
- monitor_name.as_str(),
- update_name.as_str(),
- true,
- ) {
- log_error!(
- self.logger,
- "error cleaning up channel monitor updates for monitor {}, reason: {}",
- monitor_name.as_str(),
- e
- );
- };
- }
- };
chain::ChannelMonitorUpdateStatus::Completed
}
Err(e) => {
log_error!(
self.logger,
- "error writing channel monitor {}/{}/{} reason: {}",
+ "Failed to write ChannelMonitor {}/{}/{} reason: {}",
CHANNEL_MONITOR_PERSISTENCE_PRIMARY_NAMESPACE,
CHANNEL_MONITOR_PERSISTENCE_SECONDARY_NAMESPACE,
monitor_name.as_str(),
Err(e) => {
log_error!(
self.logger,
- "error writing channel monitor update {}/{}/{} reason: {}",
+ "Failed to write ChannelMonitorUpdate {}/{}/{} reason: {}",
CHANNEL_MONITOR_UPDATE_PERSISTENCE_PRIMARY_NAMESPACE,
monitor_name.as_str(),
update_name.as_str(),
}
}
} else {
- // We could write this update, but it meets criteria of our design that call for a full monitor write.
- self.persist_new_channel(funding_txo, monitor, monitor_update_call_id)
+ let monitor_name = MonitorName::from(funding_txo);
+ // In the case of a channel-close monitor update, we need to read the old monitor before
+ // persisting the new one in order to determine the cleanup range.
+ let maybe_old_monitor = match monitor.get_latest_update_id() {
+ CLOSED_CHANNEL_UPDATE_ID => self.read_monitor(&monitor_name).ok(),
+ _ => None
+ };
+
+ // We could write this update, but it meets the criteria of our design that call for a full monitor write.
+ let monitor_update_status = self.persist_new_channel(funding_txo, monitor, monitor_update_call_id);
+
+ if let chain::ChannelMonitorUpdateStatus::Completed = monitor_update_status {
+ let cleanup_range = if monitor.get_latest_update_id() == CLOSED_CHANNEL_UPDATE_ID {
+ // If there is an error while reading the old monitor, we skip cleanup.
+ maybe_old_monitor.map(|(_, ref old_monitor)| {
+ let start = old_monitor.get_latest_update_id();
+ // We never persist an update with update_id = CLOSED_CHANNEL_UPDATE_ID
+ let end = cmp::min(
+ start.saturating_add(self.maximum_pending_updates),
+ CLOSED_CHANNEL_UPDATE_ID - 1,
+ );
+ (start, end)
+ })
+ } else {
+ let end = monitor.get_latest_update_id();
+ let start = end.saturating_sub(self.maximum_pending_updates);
+ Some((start, end))
+ };
+
+ if let Some((start, end)) = cleanup_range {
+ self.cleanup_in_range(monitor_name, start, end);
+ }
+ }
+
+ monitor_update_status
}
} else {
// There is no update given, so we must persist a new monitor.
}
}
+impl<K: Deref, L: Deref, ES: Deref, SP: Deref> MonitorUpdatingPersister<K, L, ES, SP>
+where
+ ES::Target: EntropySource + Sized,
+ K::Target: KVStore,
+ L::Target: Logger,
+ SP::Target: SignerProvider + Sized
+{
+ // Cleans up monitor updates for the given monitor in the range `start..=end`.
+ fn cleanup_in_range(&self, monitor_name: MonitorName, start: u64, end: u64) {
+ for update_id in start..=end {
+ let update_name = UpdateName::from(update_id);
+ if let Err(e) = self.kv_store.remove(
+ CHANNEL_MONITOR_UPDATE_PERSISTENCE_PRIMARY_NAMESPACE,
+ monitor_name.as_str(),
+ update_name.as_str(),
+ true,
+ ) {
+ log_error!(
+ self.logger,
+ "Failed to clean up channel monitor updates for monitor {}, reason: {}",
+ monitor_name.as_str(),
+ e
+ );
+ };
+ }
+ }
+}
+
/// A struct representing a name for a monitor.
#[derive(Debug)]
struct MonitorName(String);
#[test]
fn persister_with_real_monitors() {
// This value is used later to limit how many iterations we perform.
- let test_max_pending_updates = 7;
+ let persister_0_max_pending_updates = 7;
+ // Intentionally set this to a smaller value to test a different alignment.
+ let persister_1_max_pending_updates = 3;
let chanmon_cfgs = create_chanmon_cfgs(4);
let persister_0 = MonitorUpdatingPersister {
kv_store: &TestStore::new(false),
logger: &TestLogger::new(),
- maximum_pending_updates: test_max_pending_updates,
+ maximum_pending_updates: persister_0_max_pending_updates,
entropy_source: &chanmon_cfgs[0].keys_manager,
signer_provider: &chanmon_cfgs[0].keys_manager,
};
let persister_1 = MonitorUpdatingPersister {
kv_store: &TestStore::new(false),
logger: &TestLogger::new(),
- // Intentionally set this to a smaller value to test a different alignment.
- maximum_pending_updates: 3,
+ maximum_pending_updates: persister_1_max_pending_updates,
entropy_source: &chanmon_cfgs[1].keys_manager,
signer_provider: &chanmon_cfgs[1].keys_manager,
};
node_cfgs[1].chain_monitor = chain_mon_1;
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
-
let broadcaster_0 = &chanmon_cfgs[2].tx_broadcaster;
let broadcaster_1 = &chanmon_cfgs[3].tx_broadcaster;
for (_, mon) in persisted_chan_data_0.iter() {
// check that when we read it, we got the right update id
assert_eq!(mon.get_latest_update_id(), $expected_update_id);
- // if the CM is at the correct update id without updates, ensure no updates are stored
+
+ // if the CM is at the consolidation threshold, ensure no updates are stored.
let monitor_name = MonitorName::from(mon.get_funding_txo().0);
- let (_, cm_0) = persister_0.read_monitor(&monitor_name).unwrap();
- if cm_0.get_latest_update_id() == $expected_update_id {
+ if mon.get_latest_update_id() % persister_0_max_pending_updates == 0
+ || mon.get_latest_update_id() == CLOSED_CHANNEL_UPDATE_ID {
assert_eq!(
persister_0.kv_store.list(CHANNEL_MONITOR_UPDATE_PERSISTENCE_PRIMARY_NAMESPACE,
monitor_name.as_str()).unwrap().len(),
for (_, mon) in persisted_chan_data_1.iter() {
assert_eq!(mon.get_latest_update_id(), $expected_update_id);
let monitor_name = MonitorName::from(mon.get_funding_txo().0);
- let (_, cm_1) = persister_1.read_monitor(&monitor_name).unwrap();
- if cm_1.get_latest_update_id() == $expected_update_id {
+ // if the CM is at the consolidation threshold, ensure no updates are stored.
+ if mon.get_latest_update_id() % persister_1_max_pending_updates == 0
+ || mon.get_latest_update_id() == CLOSED_CHANNEL_UPDATE_ID {
assert_eq!(
persister_1.kv_store.list(CHANNEL_MONITOR_UPDATE_PERSISTENCE_PRIMARY_NAMESPACE,
monitor_name.as_str()).unwrap().len(),
// Send a few more payments to try all the alignments of max pending updates with
// updates for a payment sent and received.
let mut sender = 0;
- for i in 3..=test_max_pending_updates * 2 {
+ for i in 3..=persister_0_max_pending_updates * 2 {
let receiver;
if sender == 0 {
sender = 1;
// You may not use this file except in accordance with one or both of these
// licenses.
+use crate::blinded_path::BlindedPath;
+use crate::blinded_path::payment::ReceiveTlvs;
use crate::chain;
use crate::chain::WatchedOutput;
use crate::chain::chaininterface;
use crate::chain::channelmonitor;
use crate::chain::channelmonitor::MonitorEvent;
use crate::chain::transaction::OutPoint;
-use crate::routing::router::CandidateRouteHop;
+use crate::routing::router::{CandidateRouteHop, FirstHopCandidate, PublicHopCandidate, PrivateHopCandidate};
use crate::sign;
use crate::events;
use crate::events::bump_transaction::{WalletSource, Utxo};
use crate::ln::ChannelId;
-use crate::ln::channelmanager;
+use crate::ln::channelmanager::{ChannelDetails, self};
use crate::ln::chan_utils::CommitmentTransaction;
use crate::ln::features::{ChannelFeatures, InitFeatures, NodeFeatures};
use crate::ln::{msgs, wire};
use crate::ln::msgs::LightningError;
use crate::ln::script::ShutdownScript;
-use crate::offers::invoice::UnsignedBolt12Invoice;
+use crate::offers::invoice::{BlindedPayInfo, UnsignedBolt12Invoice};
use crate::offers::invoice_request::UnsignedInvoiceRequest;
+use crate::onion_message::{Destination, MessageRouter, OnionMessagePath};
use crate::routing::gossip::{EffectiveCapacity, NetworkGraph, NodeId, RoutingFees};
use crate::routing::utxo::{UtxoLookup, UtxoLookupError, UtxoResult};
use crate::routing::router::{find_route, InFlightHtlcs, Path, Route, RouteParameters, RouteHintHop, Router, ScorerAccountingForInFlightHtlcs};
use bitcoin::hash_types::{BlockHash, Txid};
use bitcoin::sighash::{SighashCache, EcdsaSighashType};
-use bitcoin::secp256k1::{PublicKey, Scalar, Secp256k1, SecretKey};
+use bitcoin::secp256k1::{PublicKey, Scalar, Secp256k1, SecretKey, self};
use bitcoin::secp256k1::ecdh::SharedSecret;
use bitcoin::secp256k1::ecdsa::{RecoverableSignature, Signature};
use bitcoin::secp256k1::schnorr;
impl<'a> Router for TestRouter<'a> {
fn find_route(
- &self, payer: &PublicKey, params: &RouteParameters, first_hops: Option<&[&channelmanager::ChannelDetails]>,
+ &self, payer: &PublicKey, params: &RouteParameters, first_hops: Option<&[&ChannelDetails]>,
inflight_htlcs: InFlightHtlcs
) -> Result<Route, msgs::LightningError> {
if let Some((find_route_query, find_route_res)) = self.next_routes.lock().unwrap().pop_front() {
if let Some(first_hops) = first_hops {
if let Some(idx) = first_hops.iter().position(|h| h.get_outbound_payment_scid() == Some(hop.short_channel_id)) {
let node_id = NodeId::from_pubkey(payer);
- let candidate = CandidateRouteHop::FirstHop {
+ let candidate = CandidateRouteHop::FirstHop(FirstHopCandidate {
details: first_hops[idx],
payer_node_id: &node_id,
- };
+ });
scorer.channel_penalty_msat(&candidate, usage, &());
continue;
}
let network_graph = self.network_graph.read_only();
if let Some(channel) = network_graph.channel(hop.short_channel_id) {
let (directed, _) = channel.as_directed_to(&NodeId::from_pubkey(&hop.pubkey)).unwrap();
- let candidate = CandidateRouteHop::PublicHop {
+ let candidate = CandidateRouteHop::PublicHop(PublicHopCandidate {
info: directed,
short_channel_id: hop.short_channel_id,
- };
+ });
scorer.channel_penalty_msat(&candidate, usage, &());
} else {
let target_node_id = NodeId::from_pubkey(&hop.pubkey);
htlc_minimum_msat: None,
htlc_maximum_msat: None,
};
- let candidate = CandidateRouteHop::PrivateHop {
+ let candidate = CandidateRouteHop::PrivateHop(PrivateHopCandidate {
hint: &route_hint,
target_node_id: &target_node_id,
- };
+ });
scorer.channel_penalty_msat(&candidate, usage, &());
}
prev_hop_node = &hop.pubkey;
&[42; 32]
)
}
+
+ fn create_blinded_payment_paths<
+ ES: EntropySource + ?Sized, T: secp256k1::Signing + secp256k1::Verification
+ >(
+ &self, _recipient: PublicKey, _first_hops: Vec<ChannelDetails>, _tlvs: ReceiveTlvs,
+ _amount_msats: u64, _entropy_source: &ES, _secp_ctx: &Secp256k1<T>
+ ) -> Result<Vec<(BlindedPayInfo, BlindedPath)>, ()> {
+ unreachable!()
+ }
+}
+
+impl<'a> MessageRouter for TestRouter<'a> {
+ fn find_path(
+ &self, _sender: PublicKey, _peers: Vec<PublicKey>, _destination: Destination
+ ) -> Result<OnionMessagePath, ()> {
+ unreachable!()
+ }
+
+ fn create_blinded_paths<
+ ES: EntropySource + ?Sized, T: secp256k1::Signing + secp256k1::Verification
+ >(
+ &self, _recipient: PublicKey, _peers: Vec<PublicKey>, _entropy_source: &ES,
+ _secp_ctx: &Secp256k1<T>
+ ) -> Result<Vec<BlindedPath>, ()> {
+ unreachable!()
+ }
}
impl<'a> Drop for TestRouter<'a> {
}
impl ScoreUpdate for TestScorer {
- fn payment_path_failed(&mut self, _actual_path: &Path, _actual_short_channel_id: u64) {}
+ fn payment_path_failed(&mut self, _actual_path: &Path, _actual_short_channel_id: u64, _duration_since_epoch: Duration) {}
- fn payment_path_successful(&mut self, _actual_path: &Path) {}
+ fn payment_path_successful(&mut self, _actual_path: &Path, _duration_since_epoch: Duration) {}
- fn probe_failed(&mut self, _actual_path: &Path, _: u64) {}
+ fn probe_failed(&mut self, _actual_path: &Path, _: u64, _duration_since_epoch: Duration) {}
- fn probe_successful(&mut self, _actual_path: &Path) {}
+ fn probe_successful(&mut self, _actual_path: &Path, _duration_since_epoch: Duration) {}
+ fn time_passed(&mut self, _duration_since_epoch: Duration) {}
}
impl Drop for TestScorer {
+++ /dev/null
- * `ChannelManager`s written with LDK 0.0.119 are no longer readable by versions
- of LDK prior to 0.0.113. Users wishing to downgrade to LDK 0.0.112 or before
- can read an 0.0.119-serialized `ChannelManager` with a version of LDK from
- 0.0.113 to 0.0.118, re-serialize it, and then downgrade.
+++ /dev/null
-## API Updates
-
-- The `Confirm::get_relevant_txids()` call now also returns the height under which LDK expects the respective transaction to be confirmed.
+++ /dev/null
-## Backwards Compatibility
-
-* Nodes that upgrade to 0.0.119 and subsequently downgrade after receiving a payment to a blinded
- path may lose privacy if one or more of those HTLCs fails.
+++ /dev/null
-## Backwards Compat
-
-* Forwarding a blinded HTLC and subsequently downgrading to an LDK version prior to 0.0.119 may
- result in a forwarding failure or an HTLC being failed backwards with an unblinded error.
+++ /dev/null
-## Bug fixes
-
-* In LDK versions 0.0.116 through 0.0.118, in rare cases where skimmed fees are present on shutdown
- the `ChannelManager` may fail to deserialize on startup.