Matt Corallo [Wed, 10 Jan 2024 22:24:43 +0000 (22:24 +0000)]
Drop `PeerManager` type bound on `UtxoLookup` entirely
In 67659677d4fdb6bf418d66dfa37c61706425232b we relaxed the bounds
set on `UtxoLookup` to enable those using a `RoutingMessageHandler`
other than `P2PGossipSync` to use `UtxoLookup`. Sadly, because that
required naming a concrete `PeerManager` type which does *not* use
`UtxoLookup` in its `RoutingMessageHandler` type, this broke users
who were directly using `P2PGossipSync`.
We could split `UtxoLookup` into two, with different bounds, for
the two use-cases, but instead here we simply switch to storing a
reference to the `PeerManager` via a `dyn Fn` which allows us to
wake the `PeerManager` when we need to.
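A minimal sketch of the approach, with illustrative names rather than the actual LDK fields:

    // Illustrative only: hold a type-erased callback instead of a concrete
    // `PeerManager`, so the lookup type need not be generic over it.
    struct UtxoResolver {
        // Called when an async UTXO lookup completes and the peer manager
        // should re-poll its message handlers for pending gossip.
        wake_peer_manager: Box<dyn Fn() + Send + Sync>,
    }

    impl UtxoResolver {
        fn lookup_completed(&self) {
            // ...hand the lookup result back to the gossip sync...
            (self.wake_peer_manager)();
        }
    }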
Matt Corallo [Fri, 15 Dec 2023 22:32:24 +0000 (22:32 +0000)]
Use correct default value when comparing to `htlc_maximum_msat`
62f866965436fff1a8e98ee655a8a6dcbb8716c1 added two
`htlc_maximum_msat.unwrap_or`s, but used a default value of 0,
spuriously causing all HTLCs to fail if we don't have an htlc
maximum value. This should be mostly harmless, but we should fix it
anyway.
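Roughly, the difference (a simplified illustration, not the exact routing code):

    // A missing `htlc_maximum_msat` means "no limit", so the comparison must
    // default to u64::MAX; defaulting to 0 would spuriously fail every HTLC.
    fn amount_within_max(amount_msat: u64, htlc_maximum_msat: Option<u64>) -> bool {
        amount_msat <= htlc_maximum_msat.unwrap_or(u64::MAX)
    }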
Matt Corallo [Fri, 15 Dec 2023 18:23:42 +0000 (18:23 +0000)]
Make `FinalOnionHopData` public
In 4b5db8c3ce, `channelmanager::PendingHTLCRouting` was made
public, exposing a `FinalOnionHopData` field to the world. However,
`FinalOnionHopData` was left crate-private, making the enum
impossible to construct.
There isn't a strong reason for this (even though the
`FinalOnionHopData` API is somewhat confusing, being separated from
the rest of the onion structs), so we expose it here.
Matt Corallo [Fri, 15 Dec 2023 18:14:56 +0000 (18:14 +0000)]
Drop explicit `bitcoin_hashes` dependency in `lightning-invoice`
Since `lightning-invoice` now depends on the `bitcoin` crate
directly, also depending on the `bitcoin_hashes` crate is redundant
and just means we confuse users by setting the `std` flag only on
`bitcoin`. Thus, we drop the explicit dependency here and replace
it with `bitcoin::hashes`.
Matt Corallo [Thu, 14 Dec 2023 22:49:58 +0000 (22:49 +0000)]
Un-export the `PrivateRoute` inner field as there are invariants
When we made the `PrivateRoute` inner `RouteHint` `pub`, we failed
to note that the `PrivateRoute::new` constructor actually verifies
a length invariant. Thus, we un-export the inner field and force
users to go back through the `new` fn.
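Construction now has to go through the checked constructor, roughly as follows (a sketch; the import paths are quoted from memory):

    use lightning::routing::router::RouteHint;
    use lightning_invoice::PrivateRoute;

    // `PrivateRoute::new` enforces the length invariant and returns an error
    // instead of letting callers build an over-long route hint directly.
    fn make_private_route(hint: RouteHint) -> Option<PrivateRoute> {
        PrivateRoute::new(hint).ok()
    }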
Jeffrey Czyz [Fri, 15 Dec 2023 03:19:57 +0000 (21:19 -0600)]
Use one-hop blinded paths only for announced nodes
To avoid exposing a node's identity in a blinded path, only create
one-hop blinded paths if the node has been announced, and thus has
public channels. Otherwise, there is no way to route a payment to the
node, exposing its identity needlessly.
Jeffrey Czyz [Fri, 8 Dec 2023 21:54:21 +0000 (15:54 -0600)]
Multi-hop blinded payment paths in ChannelManager
When constructing blinded payment paths for Bolt12Invoice, delegate to
Router::create_blinded_payment_paths which may produce multi-hop blinded
paths. Fall back to one-hop blinded paths if the Router fails or returns
no paths.
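The fallback logic is roughly the following (hypothetical helper, generic over the path type; not the actual ChannelManager code):

    // Prefer the Router's multi-hop blinded paths, but fall back to a
    // one-hop path terminating at us if it errors or returns nothing.
    fn paths_or_fallback<P>(router_paths: Result<Vec<P>, ()>, one_hop: P) -> Vec<P> {
        match router_paths {
            Ok(paths) if !paths.is_empty() => paths,
            _ => vec![one_hop],
        }
    }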
Jeffrey Czyz [Fri, 8 Dec 2023 18:03:06 +0000 (12:03 -0600)]
Add create_blinded_payment_paths to Router
The Router trait is used to find a Route for paying a node. Expand the
interface with a create_blinded_payment_paths method for creating such
paths to a recipient node.
Provide an implementation for DefaultRouter that creates two-hop
blinded paths where the recipient's peers serve as the introduction
nodes.
Jeffrey Czyz [Fri, 3 Mar 2023 15:38:45 +0000 (09:38 -0600)]
Multi-hop blinded paths in ChannelManager
When constructing blinded paths for Offer and Refund, delegate to
MessageRouter::create_blinded_paths which may produce multi-hop blinded
paths. Fall back to one-hop blinded paths if the MessageRouter fails or
returns no paths.
Likewise, do the same for InvoiceRequest and Bolt12Invoice reply paths.
Jeffrey Czyz [Fri, 15 Dec 2023 21:37:18 +0000 (15:37 -0600)]
Use CLTV_FAR_FAR_AWAY in PaymentConstraints
When finding a route through a blinded path, a random CLTV offset may be
added to the path in order to preserve privacy. This needs to be
accounted for in the blinded path's PaymentConstraints. Add
CLTV_FAR_FAR_AWAY to the max_cltv_expiry constraint to allow for such
offsets.
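Schematically (a simplified sketch; the constant's value here is quoted from memory):

    // Leave room for the sender's random shadow CLTV offset when computing
    // the blinded path's maximum CLTV expiry.
    const CLTV_FAR_FAR_AWAY: u32 = 14 * 24 * 6; // roughly two weeks of blocks

    fn max_cltv_expiry(best_block_height: u32, min_final_cltv_expiry_delta: u32) -> u32 {
        best_block_height + min_final_cltv_expiry_delta + CLTV_FAR_FAR_AWAY
    }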
Matt Corallo [Thu, 30 Nov 2023 00:04:09 +0000 (00:04 +0000)]
Consistently clean up when failing in `internal_funding_created`
When we fail to accept a counterparty's funding for various
reasons, we should ensure we call the correct cleanup methods in
`internal_funding_created` to remove the temporary data for the
channel in our various internal structs (primarily the SCID alias
map).
This adds the missing cleanup, using `convert_chan_phase_err`
consistently in all the error paths.
This also ensures we get a `ChannelClosed` event when relevant.
Jeffrey Czyz [Thu, 7 Dec 2023 21:53:15 +0000 (15:53 -0600)]
Require any Router also implements MessageRouter
ChannelManager is parameterized by a Router in order to find routes when
sending and retrying payments. For the offers flow, it needs to be able
to construct blinded paths (e.g., in the offer and in reply paths).
Instead of adding yet another parameter to ChannelManager, require that
any Router also implements MessageRouter. Implement this for
DefaultRouter by delegating to a DefaultMessageRouter.
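In outline, the supertrait-plus-delegation pattern looks like this (a sketch with placeholder method signatures, not the real trait definitions):

    // Making MessageRouter a supertrait of Router lets ChannelManager keep a
    // single `R: Router` parameter while still being able to build blinded
    // reply paths through the same object.
    trait MessageRouter {
        fn create_blinded_paths(&self) -> Result<Vec<String>, ()>;
    }

    trait Router: MessageRouter {
        fn find_route(&self) -> Result<String, ()>;
    }

    struct DefaultMessageRouter;
    impl MessageRouter for DefaultMessageRouter {
        fn create_blinded_paths(&self) -> Result<Vec<String>, ()> { Ok(Vec::new()) }
    }

    struct DefaultRouter { message_router: DefaultMessageRouter }
    impl MessageRouter for DefaultRouter {
        fn create_blinded_paths(&self) -> Result<Vec<String>, ()> {
            // Delegate to the inner DefaultMessageRouter.
            self.message_router.create_blinded_paths()
        }
    }
    impl Router for DefaultRouter {
        fn find_route(&self) -> Result<String, ()> { Ok(String::from("route")) }
    }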
Jeffrey Czyz [Thu, 7 Dec 2023 21:48:43 +0000 (15:48 -0600)]
Add create_blinded_paths to MessageRouter
The MessageRouter trait is used to find an OnionMessagePath to a
Destination (e.g., to a BlindedPath). Expand the interface with a
create_blinded_paths method for creating such paths to a recipient.
Provide a default implementation creating two-hop blinded paths where
the recipient's peers serve as introduction nodes.
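A rough sketch of the default two-hop construction (hypothetical helper using placeholder node ids; the real code works over LDK's network graph and blinded path types):

    // Build candidate two-hop paths: each connected peer acts as the
    // introduction node, forwarding to the recipient as the final hop.
    fn two_hop_paths(recipient: u64, connected_peers: &[u64]) -> Vec<Vec<u64>> {
        connected_peers.iter().map(|peer| vec![*peer, recipient]).collect()
    }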
Jeffrey Czyz [Thu, 2 Mar 2023 20:57:07 +0000 (14:57 -0600)]
Add RouteBlinding feature flag
The RouteBlinding feature flag signals support for relaying payments
over blinded paths. It is used for paying BOLT 12 invoices, which are
required to include at least one blinded path.
Gursharan Singh [Fri, 8 Dec 2023 19:28:19 +0000 (11:28 -0800)]
Stop cleaning monitor updates on new block connect
Previously, we cleaned up monitor updates both at the consolidation
threshold and on new block connects. With this change we will only
clean up when our consolidation criteria are met. Also, we remove the
monitor read from the cleanup logic in the case of update consolidation.
Note: In case of channel-closing monitor update, we still need to
read the old monitor before persisting the new one in order to
determine the cleanup range.
Matt Corallo [Wed, 29 Nov 2023 00:33:16 +0000 (00:33 +0000)]
Drop half-life-based bucket decay in `update_history_buckets`
Because we decay the bucket information in the background, there's
not much reason to try to decay the buckets immediately prior to
updating them, and removing that also lets us clean up a good bit of
dead code, which we do here.
Matt Corallo [Wed, 29 Nov 2023 00:31:00 +0000 (00:31 +0000)]
Make scorer decay + persistence more frequent
There are some edge cases in our scoring where the information really
should have been decayed prior to an update but hasn't been yet.
Rather than try to fix them exactly, we instead decay the scorer a
bit more often, which largely solves them and also gives us somewhat
more accurate bounds on our channels, allowing us to reuse channels
at amounts similar to what just failed immediately, but at a
substantial penalty.
Matt Corallo [Thu, 12 Oct 2023 18:23:51 +0000 (18:23 +0000)]
Drop warning about mixing `no-std` and `std` `ProbabilisticScorer`s
Now that the `no-std` and `std` `ProbabilisticScorer`s both serialize
time as a `Duration` since the UNIX epoch and don't care about time
except when decaying, we don't need to warn users not to mix the
scorers across `no-std` and `std` flags.
Matt Corallo [Mon, 9 Oct 2023 01:52:20 +0000 (01:52 +0000)]
Drop now-unused `T: Time` bound on `ProbabilisticScorer`
Now that we don't access time via the `Time` trait in
`ProbabilisticScorer`, we can finally drop the `Time` bound
entirely, removing the `ProbabilisticScorerUsingTime` and type
alias indirection and replacing it with a simple struct.
Matt Corallo [Mon, 9 Oct 2023 01:44:33 +0000 (01:44 +0000)]
Use `Duration` based time info in scoring rather than `Time`
In the coming commits, the `T: Time` bound on `ProbabilisticScorer`
will be removed. In order to enable that, we need to switch over to
using the `ScoreUpdate`-provided current time (as a `Duration`
since the unix epoch), making the `T` bound entirely unused.
Matt Corallo [Mon, 9 Oct 2023 01:15:18 +0000 (01:15 +0000)]
Pipe `Duration`-based time information through scoring pipeline
In the coming commits, the `T: Time` bound on `ProbabilisticScorer`
will be removed. In order to enable that, we need to pass the
current time (as a `Duration` since the unix epoch) through the
score updating pipeline, allowing us to keep the
`*last_updated_time` fields up-to-date as we go.
Matt Corallo [Wed, 29 Nov 2023 03:07:54 +0000 (03:07 +0000)]
Update history bucket last_update time immediately on update
Now that we aren't decaying during scoring, it no longer matters when
we set the last_updated time in the history bucket logic, so we
should simply update it as soon as we've updated the history
buckets.
Matt Corallo [Mon, 9 Oct 2023 01:11:10 +0000 (01:11 +0000)]
Stop decaying liquidity information during bounds-based scoring
Because scoring is an incredibly performance-sensitive operation,
doing liquidity information decay (and especially fetching the
current time!) during scoring isn't really a great idea. Now that
we decay liquidity information in the background, we don't have any
reason to decay during scoring, and we ultimately remove it
entirely here.
Matt Corallo [Mon, 9 Oct 2023 02:14:21 +0000 (02:14 +0000)]
Stop decaying historical liquidity information during scoring
Because scoring is an incredibly performance-sensitive operation,
doing liquidity information decay (and especially fetching the
current time!) during scoring isn't really a great idea. Now that
we decay liquidity information in the background, we don't have any
reason to decay during scoring, and we remove the historical bucket
liquidity decaying here.
Matt Corallo [Mon, 2 Oct 2023 20:07:21 +0000 (20:07 +0000)]
Impl decaying in `ProbabilisticScorer::decay_liquidity_certainty`
This implements decaying in the `ProbabilisticScorer`'s
`ScoreLookup::decay_liquidity_certainty` implementation, using
floats for accuracy since we're no longer particularly
time-sensitive. Further, it (finally) removes score entries which
have decayed to zero.
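The decay itself is a plain half-life computation, roughly as follows (simplified sketch):

    // Scale a liquidity value by the number of elapsed half-lives; floats
    // are fine here since this now runs in the background rather than on
    // every score lookup.
    fn decayed(value: u64, elapsed_secs: f64, half_life_secs: f64) -> u64 {
        let remaining = (value as f64) * 0.5f64.powf(elapsed_secs / half_life_secs);
        // Entries which decay to (near) zero can simply be removed.
        remaining as u64
    }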
Matt Corallo [Mon, 2 Oct 2023 19:44:36 +0000 (19:44 +0000)]
Track historical liquidity update time separately from the bounds
In the next commit, we'll start to use the new
`ScoreUpdate::decay_liquidity_certainty` to decay our bounds in the
background. This will result in the `last_updated` field getting
updated regularly on decay, rather than only on update. While this
isn't an issue for the regular liquidity bounds, it poses a problem
for the historical liquidity buckets, which are decayed on a
separate (and by default much longer) timer. If we didn't move to
tracking their decays separately, we'd never let the `last_updated`
field get old enough for the historical buckets to decay at all.
Instead, here we introduce a new `Duration` in `ChannelLiquidity`
which tracks the time the historical liquidity buckets were last
updated. We initialize it to a copy of `last_updated` on
deserialization if it is missing.
Matt Corallo [Mon, 2 Oct 2023 19:14:26 +0000 (19:14 +0000)]
Add a scoring decay method to the `ScoreUpdate` trait
Rather than relying on fetching the current time during
routefinding, here we introduce a new trait method on `ScoreUpdate`
to handle decaying. This largely mirrors what we do with the
`NetworkGraph`, and allows us to take on much more expensive
operations (floating point exponentiation) in our decaying.
Matt Corallo [Wed, 13 Dec 2023 22:55:32 +0000 (22:55 +0000)]
cfg-gate async signing logic
We are intending to release without having completed our async
signing logic, which sadly means we need to cfg-gate it to ensure
we restore the previous state of panicking on signer errors, rather
than putting us in a stuck state with no way to recover.
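The gating pattern looks roughly like this (illustrative; the cfg flag name and surrounding types are placeholders):

    // Keep the async path compiled out unless the custom cfg is set, so the
    // default build keeps the previous panic-on-signer-error behavior.
    fn handle_signer_result<S>(res: Result<S, ()>) -> Option<S> {
        match res {
            Ok(sig) => Some(sig),
            #[cfg(async_signing)]
            Err(()) => None, // signer not ready yet; retry once it is available
            #[cfg(not(async_signing))]
            Err(()) => panic!("Signer failed and async signing is not enabled"),
        }
    }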
Matt Corallo [Wed, 1 Nov 2023 01:16:12 +0000 (01:16 +0000)]
Depend on `libm` in `no-std` for `powf`(64)
In the next commits we'll need `f64`'s `powf`, which is only
available in `std`. For `no-std`, here we depend on `libm` (a
`rust-lang` org project), which we can use for `powf`.
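A sketch of the shim (names here are illustrative):

    // Route f64 exponentiation through std when available, and through the
    // `libm` crate otherwise.
    #[cfg(feature = "std")]
    fn powf64(n: f64, e: f64) -> f64 { n.powf(e) }

    #[cfg(not(feature = "std"))]
    fn powf64(n: f64, e: f64) -> f64 { libm::pow(n, e) }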
Add redundant blinded HTLC failure check for posterity.
Although this new check is unreachable right now, it helps prevent potential
future errors where we incorrectly fail blinded HTLCs with an unblinded error.
If a blinded HTLC errors when added to a Channel, such as if the recipient has
already sent a shutdown message, they should malformed-fail backwards with
error code INVALID_ONION_BLINDING and a zeroed out onion hash per BOLT 4.
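Per BOLT 4 that failure looks roughly like the following sketch (the message struct here is simplified; the error code is BADONION | PERM | 24):

    const BADONION: u16 = 0x8000;
    const PERM: u16 = 0x4000;
    const INVALID_ONION_BLINDING: u16 = BADONION | PERM | 24;

    // Simplified stand-in for the wire message a non-intro blinded hop sends.
    struct UpdateFailMalformedHtlc {
        htlc_id: u64,
        sha256_of_onion: [u8; 32],
        failure_code: u16,
    }

    fn blinded_failure(htlc_id: u64) -> UpdateFailMalformedHtlc {
        UpdateFailMalformedHtlc {
            htlc_id,
            // Zeroed onion hash, per the blinded-path error handling rules.
            sha256_of_onion: [0; 32],
            failure_code: INVALID_ONION_BLINDING,
        }
    }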
Fix blinded recipient fail on receive reqs violation
If a blinded HTLC does not satisfy the receiver's requirements, e.g. bad CLTV
or amount, they should malformed-fail backwards with error code
INVALID_ONION_BLINDING and a zeroed out onion hash per BOLT 4.
Fix blinded recipient fail on onion decode failure
If a recipient behind a multihop blinded path fails to decode their onion
payload, they should fail backwards with error code INVALID_ONION_BLINDING and
a zeroed out onion hash per BOLT 4.
And use it in the multihop blinded path receive failure test. It will be used in
the next commit to test receiving an invalid blinded final onion payload.
We can't use the existing get_route test util here because blinded payments
rely on the sender adding a random shadow CLTV offset to the final hop; without
this the payment will be failed with cltv-expiry-too-soon.
If a blinded recipient of a multihop blinded path needs to fail back a
malformed HTLC, they should use error code INVALID_ONION_BLINDING and a zeroed
out onion hash per BOLT 4.
Support failing blinded non-intro HTLCs after RAA processing.
If an HTLC fails after its RAA is processed, it is failed back with
ChannelManager::fail_htlc_backwards_internal. This method will now correctly
inform the channel that this HTLC is blinded and that it should
construct an update_malformed message accordingly.
Adapt Channel::fail_htlc for failing with malformed OR update_fail_htlc.
Useful for failing blinded payments back with malformed, and will also be
useful in the future when we move onion decoding into
process_pending_htlc_forwards, after which Channel::fail_htlc will be used for
all malformed htlcs.
Set up Channel::fail_htlc to be able to return update_malformed
Currently it returns only update_fail, but we'll want it to be able to return
update_malformed as well in upcoming commits. We'll use this for correctly
failing blinded received HTLCs backwards with malformed and
invalid_onion_blinding error per BOLT 4.
Channel: add holding cell HTLC variant for blinded HTLCs.
For context, blinded HTLCs where we are not the intro node must always be
failed back with malformed and invalid_onion_blinding error per BOLT 4.
Prior to supporting blinded payments, the only way for an update_malformed to
be returned from Channel was if an onion was actually found to be malformed
during initial update_add processing. This meant that any malformed HTLCs would
never live in the holding cell but instead would be returned directly upon
initial RAA processing.
Now, we need to be able to store these HTLCs in the holding cell because the
HTLC failure necessitating an update_malformed may come long after the RAA is
initially processed, and we may not be in a state to send the update_malformed
message at that time.
Therefore, add a new holding cell HTLC variant for blinded non-intro node
HTLCs, which will signal to Channel to fail with malformed and the correct
error code.
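Schematically, the new variant just needs to carry enough to build the malformed failure later (a hypothetical shape, not LDK's exact enum):

    // A holding-cell entry for a blinded (non-intro) HTLC that must be failed
    // back as malformed once we are in a state to send the message.
    enum HoldingCellHtlcUpdate {
        FailHtlc { htlc_id: u64, err_packet: Vec<u8> },
        FailMalformedHtlc { htlc_id: u64, failure_code: u16, sha256_of_onion: [u8; 32] },
    }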
Set HTLCPreviousHopData::blinded for blinded received HTLCs.
Will be used in the next commit(s) to let us know to fail blinded received
HTLCs backwards with the malformed and invalid_onion_blinding error per BOLT 4.
Jeffrey Czyz [Tue, 12 Dec 2023 14:46:10 +0000 (08:46 -0600)]
Relax OnionMessenger::peer_disconnected assertion
When a peer is connected, OnionMessenger tracks it only if it supports
onion messages. On disconnect, we debug_assert that the peer was in
the ConnectedPeer state, failing when it is in the PendingConnection
state.
However, we were mistakenly asserting for peers that we were not
tracking (i.e., that don't support onion messages). Relax the check to
not fail on the latter.
Matt Corallo [Wed, 29 Nov 2023 21:59:38 +0000 (21:59 +0000)]
Set `counterparty_node_id` on `ChannelMonitor`s as they're updated
Historically, `ChannelMonitor`s had no idea who their counterparty
was. This was fine, until `ChannelManager` started indexing by
peer, at which point it needed to know the counterparty when it saw
a `ChannelMonitorUpdate` complete. To address this, a "temporary"
map from channel ID to peer was added, but no upgrade path was
created for existing `ChannelMonitor`s to not rely on this map.
This commit adds such an upgrade path, setting the
`counterparty_node_id` on all `ChannelMonitor`s as they're updated,
allowing us to eventually break backwards compatibility and remove
`ChannelManager::outpoint_to_peer`.
Matt Corallo [Wed, 29 Nov 2023 21:39:46 +0000 (21:39 +0000)]
Move channel -> peer tracking to `OutPoint`s from Channel IDs
For backwards compatibility reasons, we need to track a mapping
from channels to their counterparty's node id. To reduce diff, this
map was previously keyed by channel IDs, converting the `OutPoint`s
to channel IDs before using the map.
This worked fine, but is somewhat brittle - because we allow
redundant channel IDs across different peers, we had to avoid
insertion until we had a real channel ID, and thus also had to be
careful to avoid removal unless we were using a real channel ID,
rather than a temporary one.
This brittleness actually crept into the handling of errors in
funding acceptance, allowing a remote party to get us to remove an
entry by sending an overlapping temporary channel ID alongside a
separate real channel ID.
Luckily, this map is used relatively infrequently, only in the case
where we see a monitor update completion from a rather ancient
monitor which is unaware of the counterparty node.
Even after this change, the channel -> peer tracking storage is
still somewhat brittle, as we rely on entries not being added until
we are confident no conflicting `OutPoint`s have been used across
channels, and similarly not removing unless that check has
completed.
Wilmer Paulino [Fri, 3 Nov 2023 19:43:06 +0000 (12:43 -0700)]
Refactor commitment broadcast to always go through OnchainTxHandler
Currently, our holder commitment broadcast only goes through the
`OnchainTxHandler` for anchor outputs channels because we can actually
bump the commitment transaction fees with it. For non-anchor outputs
channels, we would just broadcast once directly via the
`ChannelForceClosed` monitor update, without going through the
`OnchainTxHandler`.
As we add support for async signing, we need to be tolerant of signing
failures. A signing failure of our holder commitment will currently
panic, but once the panic is removed, we must be able to retry signing
once the signer is available. We can easily achieve this via the
existing `OnchainTxHandler::rebroadcast_pending_claims`, but this
requires that we first queue our holder commitment as a claim. This
commit ensures we do so everywhere we need to broadcast a holder
commitment transaction, regardless of the channel type.
Wilmer Paulino [Tue, 31 Oct 2023 08:12:58 +0000 (01:12 -0700)]
Cancel previous commitment claims on newly confirmed commitment
Once a commitment transaction is broadcast/confirms, we may need to
claim some of the HTLCs in it. These claims are sent as requests to the
`OnchainTxHandler`, which will bump their feerate as they remain
unconfirmed. When said commitment transaction becomes unconfirmed
though, and another commitment confirms instead, i.e., a reorg happens,
the `OnchainTxHandler` doesn't have any insight into whether these
claims are still valid or not, so it continues attempting to claim the
HTLCs from the previous commitment (now unconfirmed) forever, along with
the HTLCs from the newly confirmed commitment.
Improve block connection and filtered txid logging
Implement the Display trait for OutPoint and use it in the codebase when logging monitored outpoints.
Additionally, add trace logging for best_block_update and confirmed transactions.
Solves #2348
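The Display impl is essentially "txid:index", along these lines (a sketch with an illustrative struct, not the actual LDK OutPoint type):

    use core::fmt;

    struct OutPoint {
        txid: [u8; 32],
        index: u16,
    }

    impl fmt::Display for OutPoint {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            // Txids are conventionally rendered in reversed byte order.
            for b in self.txid.iter().rev() {
                write!(f, "{:02x}", b)?;
            }
            write!(f, ":{}", self.index)
        }
    }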
Matt Corallo [Fri, 8 Dec 2023 23:05:37 +0000 (23:05 +0000)]
Add CI test that `#[cfg]` tags are from a defined set
Rust is fairly relaxed in checking the validity of arguments
passed to #[cfg]. While it should probably be more strict when
checking features, it cannot be strict when checking loose cfg
tags, because those can be anything and are simply passed to rustc
via unconstrained arguments.
Thus, we do it for rustc manually, by scanning all our source and
checking that all our cfg tags match a known cfg tag.
Error if onion payloads exceed max length on packet construction.
Ensure that if we call construct_onion_packet and friends with payloads that are
too large for the allotted packet length, we fail to construct. Previously,
senders would happily construct invalid packets by array-shifting the final
node's HMAC out of the packet when adding an intermediate onion layer, causing
the receiver to error with "final payload provided for us as an intermediate
node."