Matt Corallo [Sat, 21 Sep 2024 04:23:09 +0000 (04:23 +0000)]
Avoid a `short_to_chan_info` read lock in `claim_funds_from_hop`
In 453ed11f80b40f28b6e95a74b1f7ed2cd7f012ad we started tracking the
counterparty's `node_id` in `HTLCPreviousHopData`, however we were
still trying to look it up using `prev_short_channel_id` in
`claim_funds_from_hop`.
Because we now usually have the counterparty's `node_id` directly
accessible, we should skip the `prev_short_channel_id` lookup.
This will also be more important in the next commit where we need
to look up state for our counterparty to generate
`ChannelMonitorUpdate`s whether we have a live channel or not.
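A rough sketch of the resulting lookup order, using hypothetical stand-in types rather than LDK's actual structs: prefer the `node_id` stored in the hop data and only fall back to the SCID map for data written before it was tracked.

```rust
use std::collections::HashMap;

/// Hypothetical stand-ins for LDK's internal types, for illustration only.
type NodeId = [u8; 33];
struct HTLCPreviousHopData {
    prev_short_channel_id: u64,
    // Newer versions track the counterparty's node_id directly.
    counterparty_node_id: Option<NodeId>,
}

/// Prefer the directly-stored node_id; fall back to the (lock-guarded in LDK)
/// SCID -> (node_id, channel_id) map only for data written by older versions.
fn counterparty_for_claim(
    hop: &HTLCPreviousHopData,
    short_to_chan_info: &HashMap<u64, (NodeId, [u8; 32])>,
) -> Option<NodeId> {
    hop.counterparty_node_id.or_else(|| {
        short_to_chan_info
            .get(&hop.prev_short_channel_id)
            .map(|(node_id, _chan_id)| *node_id)
    })
}
```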
Matt Corallo [Sun, 29 Sep 2024 19:30:48 +0000 (19:30 +0000)]
Add missing `update_maps_on_chan_removal` call in signer restore
When a channel is closed, we have to call
`update_maps_on_chan_removal` in the same per-peer-state lock as
the removal of the `ChannelPhase` object. We forgot to do so in
`ChannelManager::signer_unblocked` leaving dangling references to
the channel.
We also take this opportunity to include more context in the
channel-closure log in `ChannelManager::signer_unblocked` and add
documentation to `update_maps_on_chan_removal` and
`finish_close_channel` to hopefully avoid this issue in the future.
Matt Corallo [Sun, 29 Sep 2024 15:22:29 +0000 (15:22 +0000)]
Pass the `peer_state` lock through to `update_maps_on_chan_removal`
`update_maps_on_chan_removal` is used to perform `ChannelManager`
state updates when a channel is being removed, prior to dropping
the `peer_state` lock. In a future commit we'll use it to update
fields in the `per_peer_state`, but in order to do so we'll need to
have access to that state in the macro.
Here we get set up for this by passing the per-peer state to
`update_maps_on_chan_removal`, which is sadly a fairly large patch.
Matt Corallo [Sun, 15 Sep 2024 17:24:19 +0000 (17:24 +0000)]
Doc the on-upgrade `ChannelMonitor` startup persistence semantics
Because the new startup `ChannelMonitor` persistence semantics rely
on new information stored in `ChannelMonitor` only for claims made
in the upgraded code, users upgrading from previous versions of LDK
must apply the old `ChannelMonitor` persistence semantics at least
once (as the old code will be used to handle partial claims).
Matt Corallo [Thu, 20 Jun 2024 15:17:10 +0000 (15:17 +0000)]
Stop relying on `ChannelMonitor` persistence after manager read
When we discover we've only partially claimed an MPP HTLC during
`ChannelManager` reading, we need to add the payment preimage to
all other `ChannelMonitor`s that were a part of the payment.
We previously did this with a direct call on the `ChannelMonitor`,
requiring that users write the full `ChannelMonitor` to disk to
ensure the updated information made it there.
This adds quite a bit of delay during initial startup - fully
resilvering each `ChannelMonitor` just to handle this one case is
incredibly excessive.
Over the past few commits we dropped the need to pass HTLCs
directly to the `ChannelMonitor`s using the background events to
provide `ChannelMonitorUpdate`s instead.
Thus, here we finally drop the requirement to resilver
`ChannelMonitor`s on startup.
Matt Corallo [Mon, 30 Sep 2024 20:09:01 +0000 (20:09 +0000)]
Replay MPP claims via background events using new CM metadata
When we claim an MPP payment, then crash before persisting all the
relevant `ChannelMonitor`s, we rely on the payment data being
available in the `ChannelManager` on restart to re-claim any parts
that haven't yet been claimed. This is fine as long as the
`ChannelManager` was persisted before the `PaymentClaimable` event
was processed, which is generally the case in our
`lightning-background-processor`, but may not be in other cases or
in a somewhat rare race.
In order to fix this, we need to track where all the MPP parts of
a payment are in the `ChannelMonitor`, allowing us to re-claim any
missing pieces without reference to any `ChannelManager` data.
Further, in order to properly generate a `PaymentClaimed` event
against the re-started claim, we have to store various payment
metadata with the HTLC list as well.
Here we finally implement claiming using the new MPP part list and
metadata stored in `ChannelMonitor`s. In doing so, we use much more
of the existing HTLC-claiming pipeline in `ChannelManager`,
utilizing the on-startup background events flow as well as properly
re-applying the RAA-blockers to ensure preimages cannot be lost.
Matt Corallo [Sun, 15 Sep 2024 23:27:35 +0000 (23:27 +0000)]
Handle duplicate payment claims during initialization
In the next commit we'll start using (much of) the normal HTLC
claim pipeline to replay payment claims on startup. In order to do
so, however, we have to properly handle cases where we get a
`DuplicateClaim` back from the channel for an inbound-payment HTLC.
Here we do so, handling the `MonitorUpdateCompletionAction` and
allowing an already-completed RAA blocker.
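A minimal sketch of the idempotent handling this requires, with a hypothetical `ClaimResult` enum standing in for the channel's actual return type: a duplicate claim is treated as success, and its completion action is only queued if it is not already pending.

```rust
/// Hypothetical, simplified claim result, not LDK's actual type.
enum ClaimResult {
    /// The preimage was newly applied to the channel.
    Claimed,
    /// The channel had already seen this claim (e.g. a startup replay).
    DuplicateClaim,
}

/// Treat a duplicate claim as success: the completion action (event
/// generation, RAA-blocker bookkeeping) may already be queued or even
/// finished, so we only enqueue it if it is not already present.
fn handle_claim(res: ClaimResult, pending_actions: &mut Vec<String>, action: String) {
    match res {
        ClaimResult::Claimed => pending_actions.push(action),
        ClaimResult::DuplicateClaim => {
            if !pending_actions.contains(&action) {
                pending_actions.push(action);
            }
        },
    }
}
```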
Matt Corallo [Mon, 16 Sep 2024 00:16:51 +0000 (00:16 +0000)]
Move payment claim initialization to an fn on `ClaimablePayments`
Here we wrap the logic which moves claimable payments from
`claimable_payments` to `pending_claiming_payments` in a new
utility function on `ClaimablePayments`. This will allow us to call
this new logic during `ChannelManager` deserialization in a few
commits.
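A simplified illustration of the shape of such a utility method, assuming hypothetical field and method names (LDK's actual types differ): both the claim path and deserialization can then call the same helper.

```rust
use std::collections::HashMap;

type PaymentHash = [u8; 32];

/// Simplified stand-in for the two maps the commit talks about; field and
/// method names here are illustrative, not LDK's exact API.
#[derive(Default)]
struct ClaimablePayments {
    claimable_payments: HashMap<PaymentHash, Vec<u64>>,
    pending_claiming_payments: HashMap<PaymentHash, Vec<u64>>,
}

impl ClaimablePayments {
    /// Move a payment from "claimable" to "being claimed", returning its HTLC
    /// parts. Having this as a method lets both the normal claim path and
    /// `ChannelManager` deserialization share the same logic.
    fn begin_claiming(&mut self, hash: &PaymentHash) -> Option<Vec<u64>> {
        let htlcs = self.claimable_payments.remove(hash)?;
        self.pending_claiming_payments.insert(*hash, htlcs.clone());
        Some(htlcs)
    }
}
```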
Matt Corallo [Mon, 30 Sep 2024 19:42:51 +0000 (19:42 +0000)]
Move `ChannelManager`-read preimage relay to after struct build
In a coming commit we'll use the existing `ChannelManager` claim
flow to claim HTLCs which we found partially claimed on startup,
necessitating having a full `ChannelManager` when we go to do so.
Here we move the re-claim logic down in the `ChannelManager`-read
logic so that the fully-built `ChannelManager` is available when we do.
Matt Corallo [Mon, 16 Sep 2024 00:07:48 +0000 (00:07 +0000)]
Store info about claimed payments, incl HTLCs in `ChannelMonitor`s
When we claim an MPP payment, then crash before persisting all the
relevant `ChannelMonitor`s, we rely on the payment data being
available in the `ChannelManager` on restart to re-claim any parts
that haven't yet been claimed. This is fine as long as the
`ChannelManager` was persisted before the `PaymentClaimable` event
was processed, which is generally the case in our
`lightning-background-processor`, but may not be in other cases or
in a somewhat rare race.
In order to fix this, we need to track where all the MPP parts of
a payment are in the `ChannelMonitor`, allowing us to re-claim any
missing pieces without reference to any `ChannelManager` data.
Further, in order to properly generate a `PaymentClaimed` event
against the re-started claim, we have to store various payment
metadata with the HTLC list as well.
Here we store the required MPP parts and metadata in
`ChannelMonitor`s and make them available to `ChannelManager` on
load.
Matt Corallo [Sun, 15 Sep 2024 23:50:31 +0000 (23:50 +0000)]
Pass info about claimed payments, incl HTLCs to `ChannelMonitor`s
When we claim an MPP payment, then crash before persisting all the
relevant `ChannelMonitor`s, we rely on the payment data being
available in the `ChannelManager` on restart to re-claim any parts
that haven't yet been claimed. This is fine as long as the
`ChannelManager` was persisted before the `PaymentClaimable` event
was processed, which is generally the case in our
`lightning-background-processor`, but may not be in other cases or
in a somewhat rare race.
In order to fix this, we need to track where all the MPP parts of
a payment are in the `ChannelMonitor`, allowing us to re-claim any
missing pieces without reference to any `ChannelManager` data.
Further, in order to properly generate a `PaymentClaimed` event
against the re-started claim, we have to store various payment
metadata with the HTLC list as well.
Here we take the first step, building a list of MPP parts and
metadata in `ChannelManager` and passing it through to
`ChannelMonitor` in the `ChannelMonitorUpdate`s.
Matt Corallo [Fri, 14 Jun 2024 14:10:38 +0000 (14:10 +0000)]
Use a struct to track MPP parts pending claiming
When we started tracking which channels had MPP parts claimed
durably on-disk in their `ChannelMonitor`, we did so with a tuple.
This was fine in that it was only ever accessed in two places, but
as we will start tracking it through to the `ChannelMonitor`s
themselves in the coming commit(s), it is useful to have it in a
struct instead.
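An illustrative before/after of the tuple-to-struct change, with made-up field names standing in for the real ones:

```rust
/// Before: an anonymous tuple whose fields are easy to mix up at call sites.
type MPPClaimTuple = ([u8; 33], [u8; 32], u64);

/// After: the same data with named fields (names are illustrative, not LDK's).
struct MPPClaimHTLCSource {
    counterparty_node_id: [u8; 33],
    funding_txo: [u8; 32],
    htlc_id: u64,
}

impl From<MPPClaimTuple> for MPPClaimHTLCSource {
    fn from((counterparty_node_id, funding_txo, htlc_id): MPPClaimTuple) -> Self {
        Self { counterparty_node_id, funding_txo, htlc_id }
    }
}
```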
Matt Corallo [Mon, 30 Sep 2024 21:02:53 +0000 (21:02 +0000)]
Add missing `inbound_payment_id_secret` write in `ChannelManager`
In aa09c33a1719944769ba98624bfe18ea33083f44 we added a new secret
in `ChannelManager` with which to derive inbound `PaymentId`s. We
added read support for the new field, but forgot to add writing
support for it. Here we fix this oversight.
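A minimal sketch of the bug class being fixed here (hand-rolled serialization for illustration, not LDK's TLV macros): a field that is read on deserialization but never written is silently lost across a persistence round-trip.

```rust
/// Hypothetical, simplified manager state with one optional secret field.
struct ManagerState {
    inbound_payment_id_secret: Option<[u8; 32]>,
}

impl ManagerState {
    fn write(&self, out: &mut Vec<u8>) {
        // The fix: actually emit the field we already know how to read back.
        if let Some(secret) = &self.inbound_payment_id_secret {
            out.push(1);
            out.extend_from_slice(secret);
        } else {
            out.push(0);
        }
    }

    fn read(data: &[u8]) -> Self {
        let secret = match data.first().copied() {
            Some(1) if data.len() >= 33 => {
                let mut s = [0u8; 32];
                s.copy_from_slice(&data[1..33]);
                Some(s)
            },
            _ => None,
        };
        Self { inbound_payment_id_secret: secret }
    }
}
```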
Matt Corallo [Wed, 23 Oct 2024 15:59:23 +0000 (15:59 +0000)]
Allow `clippy::unwrap-or-default` because it's usually wrong
`or_default` is generally less readable than writing out the value
we're constructing, as `Default` is opaque but explicit constructors
generally are not. Thus, we ignore the clippy lint (ideally we
could invert it and ban the use of `Default` in the crate entirely
but alas).
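Concretely, the allow looks something like the following (clippy's canonical lint name uses underscores, `clippy::unwrap_or_default`); the example illustrates the readability trade-off the commit describes:

```rust
// Crate-level allow as the commit describes.
#![allow(clippy::unwrap_or_default)]

fn lookup(key: u32) -> Option<Vec<u8>> {
    if key == 0 { None } else { Some(vec![1, 2, 3]) }
}

fn main() {
    // Clippy would nudge this toward `.unwrap_or_default()`, which hides what
    // is actually being constructed; the explicit `Vec::new` keeps the
    // constructed value obvious at the call site.
    let bytes = lookup(0).unwrap_or_else(Vec::new);
    println!("{bytes:?}");
}
```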
olegkubrakov [Thu, 17 Oct 2024 21:28:12 +0000 (14:28 -0700)]
Make monitor update name public
These structs are meant for the `MonitorUpdatingPersister` implementation, but some
external implementations may still reuse them, so we make them public here.
Duncan Dean [Fri, 4 Oct 2024 14:35:43 +0000 (16:35 +0200)]
DRY `funding_created()` and `funding_signed()` for V1 channels
There is a decent amount of shared code in these two methods so we make
an attempt to share that code here by introducing the
`InitialRemoteCommitmentReceiver` trait. This trait will also come in
handy when we need similar commitment_signed handling behaviour for
dual-funded channels.
Jeffrey Czyz [Fri, 18 Oct 2024 22:42:18 +0000 (17:42 -0500)]
Use total_inflight_amount_msat for probability fns
Rename parameters used when calculating success probability to make it
clear that the total amount in-flight should be used rather than the
payment amount.
Jeffrey Czyz [Thu, 10 Oct 2024 23:20:25 +0000 (18:20 -0500)]
Correct base_penalty_amount_multiplier_msat docs
Commit df52da7b31494c7ec77a705cca4c44bc840f8a95 modified
ProbabilisticScorer to apply some penalty amount multipliers to the
total amount flowing over the channel. However, the commit updated the
docs for base_penalty_amount_multiplier_msat even though that behavior
didn't change. This commit reverts those doc changes.
Jeffrey Czyz [Thu, 10 Oct 2024 23:01:23 +0000 (18:01 -0500)]
Don't over-penalize channels with inflight HTLCs
Commit df52da7b31494c7ec77a705cca4c44bc840f8a95 modified
ProbabilisticScorer to apply some penalty amount multipliers (e.g.,
liquidity_penalty_amount_multiplier_msat) to the total amount flowing
over the channel (i.e., including inflight HTLCs), not just the payment
in question. This led to over-penalizing in-use channels. Instead, only
apply the total amount when calculating success probability.
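A crude, illustrative stand-in for the distinction being drawn (not LDK's actual scoring math): success probability considers everything in flight over the channel, while the amount-based multiplier scales only with the payment being routed.

```rust
/// Simplified sketch: parameter names echo the commit, the math does not.
fn penalty_msat(
    payment_amount_msat: u64,
    inflight_htlcs_msat: u64,
    channel_capacity_msat: u64,
    liquidity_penalty_amount_multiplier_msat: u64,
) -> u64 {
    // Success probability should be based on the *total* amount flowing over
    // the channel, including unrelated in-flight HTLCs...
    let total_inflight_amount_msat = payment_amount_msat + inflight_htlcs_msat;
    let failure_ratio =
        total_inflight_amount_msat as f64 / channel_capacity_msat.max(1) as f64;
    let base_penalty = (failure_ratio * 1_000.0) as u64;
    // ...but the amount multiplier applies to the payment amount only, so a
    // channel is not over-penalized merely because it is already in use.
    base_penalty + payment_amount_msat * liquidity_penalty_amount_multiplier_msat / 1_000_000
}
```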
Matt Corallo [Fri, 18 Oct 2024 15:57:25 +0000 (15:57 +0000)]
Add a test for the fee-bump rate of timeout HTLC claims on cp txn
In a previous commit we updated the fee-bump-rate of claims against
HTLC timeouts on counterparty commitment transactions so that
instead of immediately attempting to bump every block, we consider
the fact that we actually have at least `MIN_CLTV_EXPIRY_DELTA`
blocks to do so, and bump at the appropriate rate given that.
Here we test that by adding an extra check to an existing test
that we do not bump in the very next block after the HTLC timeout
claim was initially broadcasted.
Matt Corallo [Wed, 18 Sep 2024 18:20:46 +0000 (18:20 +0000)]
Set correct `counterparty_spendable_height` for outb local HTLCs
For outbound HTLCs, the counterparty can spend the output
immediately. This fixes the `counterparty_spendable_height` in the
`PackageTemplate` claiming outbound HTLCs on local commitment
transactions, which was previously spuriously set to the HTLC
timeout (at which point *we* can claim the HTLC).
Matt Corallo [Thu, 17 Oct 2024 19:38:19 +0000 (19:38 +0000)]
Stop exporting `lightning::ln::features`
Now that the module only contains some implementations of
serialization for the `lightning_types::features` structs, there's
no reason for it to be public.
Matt Corallo [Tue, 20 Aug 2024 02:22:22 +0000 (02:22 +0000)]
Add a test of gossip message buffer limiting in `PeerManager`
This adds a simple test that the gossip message buffer in
`PeerManager` is limited, including the new behavior of bypassing
the limit when the broadcast comes from the
`ChannelMessageHandler`.
Matt Corallo [Tue, 20 Aug 2024 01:57:06 +0000 (01:57 +0000)]
Add a constructor for the test `SocketDescriptor` and `hang_writes`
In testing, it's useful to be able to tell the `SocketDescriptor` to
pretend the system network buffer is full, which we enable here by
adding a new `hang_writes` flag. To simplify construction, we also
add a new constructor and move existing tests to it.
Matt Corallo [Mon, 24 Jun 2024 20:24:36 +0000 (20:24 +0000)]
Reliably deliver gossip messages from our `ChannelMessageHandler`
When our `ChannelMessageHandler` creates gossip broadcast
`MessageSendEvent`s, we generally want these to be reliably
delivered to all our peers, even if there's not much buffer space
available.
Here we do this by passing an extra flag to `forward_broadcast_msg`
which indicates where the message came from, then ignoring the
buffer-full criteria when the flag is set.
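A simplified sketch of the flag-based bypass, with hypothetical buffer types and a made-up limit constant rather than `PeerManager`'s real internals:

```rust
/// Hypothetical, simplified per-peer outbound buffer.
struct Peer {
    outbound_buffer: Vec<Vec<u8>>,
}

const OUTBOUND_BUFFER_LIMIT_DROP_GOSSIP: usize = 10;

/// Gossip we merely relay is dropped when a peer's buffer is full, but gossip
/// originating from our own `ChannelMessageHandler` (e.g. our own
/// channel_update) skips the buffer-full check so it is reliably delivered.
fn forward_broadcast_msg(peers: &mut [Peer], msg: &[u8], from_chan_handler: bool) {
    for peer in peers.iter_mut() {
        let buffer_full = peer.outbound_buffer.len() >= OUTBOUND_BUFFER_LIMIT_DROP_GOSSIP;
        if buffer_full && !from_chan_handler {
            continue;
        }
        peer.outbound_buffer.push(msg.to_vec());
    }
}
```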
Matt Corallo [Wed, 18 Sep 2024 16:48:24 +0000 (16:48 +0000)]
Rename `soonest_conf_deadline` to `counterparty_spendable_height`
This renames the field in `PackageTemplate` which describes the
height at which a counterparty can make a claim to an output to
match its actual use.
Previously it had been set based on when a counterparty can claim
an output but also used for other purposes. In the previous commit
we cleaned up its use for fee-bumping-rate, so here we can rename
it as it is now only used as the `counterparty_spendable_height`.
Matt Corallo [Wed, 18 Sep 2024 16:00:20 +0000 (16:00 +0000)]
Clean up `PackageTemplate::get_height_timer` to consider type
`PackageTemplate::get_height_timer` is used to decide when to next
bump our feerate on claims which need to make it on chain within
some window. It does so by comparing the current height with some
deadline and increasing the bump rate as the deadline approaches.
However, the deadline used is the `counterparty_spendable_height`,
which is the height at which the counterparty might be able to
spend the same output, irrespective of why. This doesn't make sense
for all output types: for example, outbound HTLCs are spendable by
our counterparty immediately (by revealing the preimage), but we
don't need to get our HTLC timeout claims confirmed immediately,
as we actually have `MIN_CLTV_EXPIRY_DELTA` blocks before the inbound
edge of a forwarded HTLC becomes claimable by our (other)
counterparty.
Thus, here, we adapt `get_height_timer` to look at the type of
output being claimed, and adjust the rate at which we bump the fee
according to the real deadline.
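Schematically, the deadline selection might look like the following simplified sketch; the enum, field names, and arithmetic are illustrative only, not LDK's actual package handling:

```rust
/// Illustrative only: pick a fee-bump deadline based on what is being
/// claimed, rather than always using the height at which the counterparty
/// could spend the output.
enum ClaimKind {
    /// Revoked counterparty output: must confirm before they can sweep it.
    Revoked { counterparty_spendable_height: u32 },
    /// Our timeout claim on an outbound HTLC: the counterparty can spend
    /// immediately with the preimage, but our real deadline is the inbound
    /// edge's CLTV, roughly `min_cltv_expiry_delta` blocks further out.
    OutboundHtlcTimeout { htlc_expiry: u32, min_cltv_expiry_delta: u32 },
}

fn bump_deadline(claim: &ClaimKind) -> u32 {
    match claim {
        ClaimKind::Revoked { counterparty_spendable_height } => *counterparty_spendable_height,
        ClaimKind::OutboundHtlcTimeout { htlc_expiry, min_cltv_expiry_delta } => {
            htlc_expiry + min_cltv_expiry_delta
        },
    }
}
```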
Matt Corallo [Fri, 6 Sep 2024 00:33:45 +0000 (00:33 +0000)]
Stop passing current height to `PackageTemplate::build_package`
Now that we don't store the confirmation height of the inputs
being spent, passing the current height to
`PackageTemplate::build_package` is useless - we only use it to set
the height at which we should next bump the fee, but we just want
it to be "next block", so we might as well use `0` and avoid the
extra argument. Further, in one case we were already passing `0`,
so passing the argument is just confusing as we can't rely on it
being set.
Note that this does remove an assertion that we never merge
packages that were created at different heights, and in the future
we may wish to do that (as there's no specific reason not to), but
we do not currently change the behavior.
Matt Corallo [Thu, 5 Sep 2024 23:48:02 +0000 (23:48 +0000)]
Drop unused `PackageTemplate::height_original`
This has never been used, and it's set to a fixed value of zero for
HTLCs on local commitment transactions, making it impossible to rely
on, so we might as well remove it.
Elias Rohrer [Sat, 21 Sep 2024 04:51:21 +0000 (13:51 +0900)]
Add `lightning-macros` crate
Previously, we used the `bdk_macros` dependency for some simple proc
macros in `lightning-transaction-sync`. However, post-1.0 BDK doesn't
further maintain this crate and will at some point probably yank it
together with the old `bdk` crate that was split up.
Here, we create a new crate for utility proc macros and ~~steal~~ add
what we currently use (slightly modified for the latest `syn` version's
API though). In the future we may want to expand this crate, e.g., for
some `maybe_async` macros in the context of an `async KVStore`
implementation.
This function was very confusing - it's used to determine by when
we have to stop aggregating this claim with others as it starts to
be at risk of pinning due to the counterparty's ability to spend
the output.
It is never used as a timelock for a transaction, and thus its
name is very confusing.
Instead we rename it `counterparty_spendable_height`.
Matt Corallo [Thu, 5 Sep 2024 21:06:16 +0000 (21:06 +0000)]
Rename claim cleaning match bool for accuracy
We don't actually care if a confirmed transaction claimed other
outputs, only that it claimed a superset of the outputs in the
pending claim we're looking at. Thus, the variable detecting this
is renamed from `are_sets_equal` to `is_claim_subset_of_tx`.
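In other words, the check is a subset test rather than an equality test; a simplified stand-alone version, with plain tuples standing in for outpoints:

```rust
use std::collections::HashSet;

/// What the renamed boolean actually checks: every outpoint in our pending
/// claim is spent by the confirmed transaction; the transaction spending
/// *additional* outputs is irrelevant. (Simplified types for illustration.)
fn is_claim_subset_of_tx(claim_outpoints: &HashSet<(u64, u32)>, tx_inputs: &[(u64, u32)]) -> bool {
    let tx_set: HashSet<_> = tx_inputs.iter().copied().collect();
    claim_outpoints.is_subset(&tx_set)
}
```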
This reverts commit 85eb8145fba1dbf3b9348d9142cc105ee13db33b.
Logging here can be overly verbose and, moreover, in case of event
handling failure, we loop back without any added delay.
Previously, the `ChainListenerSet` `Listen` implementation wouldn't
forward to the listeners' `block_connected` implementation outside of
tests. This would result in the default implementation of
`Listen::block_connected` being used and the listeners' implementation
never being called.
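A minimal reproduction of the bug pattern (names mirror, but greatly simplify, the `Listen` trait): if the container only forwards the filtered variant, a listener's own `block_connected` override is never invoked.

```rust
trait Listen {
    fn filtered_block_connected(&self, height: u32);
    fn block_connected(&self, height: u32) {
        // Default implementation simply defers to the filtered variant.
        self.filtered_block_connected(height);
    }
}

struct ListenerSet(Vec<Box<dyn Listen>>);

impl Listen for ListenerSet {
    fn filtered_block_connected(&self, height: u32) {
        for l in &self.0 {
            l.filtered_block_connected(height);
        }
    }
    // The fix: forward `block_connected` too, so listeners with their own
    // `block_connected` implementation actually get called.
    fn block_connected(&self, height: u32) {
        for l in &self.0 {
            l.block_connected(height);
        }
    }
}
```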
Matt Corallo [Thu, 3 Oct 2024 16:54:20 +0000 (16:54 +0000)]
Hold a reference to byte arrays when serializing to bech32
When we serialize from a byte array to bech32 in
`lightning-invoice`, we can either copy the array itself into the
iterator or hold a reference to the array and iterate through that.
In aa2f6b47df312f026213d0ceaaff20ffe955c377 we opted to copy the
array into the iterator, which is fine for the current array sizes
we're working with, but does result in additional memory on the
stack if, in the future, we end up writing large arrays.
Instead, here, we switch to using the slice serialization code when
writing arrays, (very marginally) reducing code size and reducing
stack usage.
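The stack-usage difference can be seen in a small standalone sketch, independent of the `bech32` crate's actual API: consuming the array by value copies all of its bytes into the iterator, while borrowing it only stores a pointer and length.

```rust
/// By-value: the iterator's state contains a copy of all N bytes.
fn write_bytes_by_value<const N: usize>(arr: [u8; N]) -> impl Iterator<Item = u8> {
    arr.into_iter()
}

/// By-reference: the iterator only holds a slice, keeping stack usage flat
/// even for large arrays.
fn write_bytes_by_ref<const N: usize>(arr: &[u8; N]) -> impl Iterator<Item = u8> + '_ {
    arr[..].iter().copied()
}
```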
Matt Corallo [Thu, 3 Oct 2024 16:54:14 +0000 (16:54 +0000)]
Marginally reduce allocations in `lightning-invoice`
In aa2f6b47df312f026213d0ceaaff20ffe955c377 we refactored
`lightning-invoice` de/serialization to use the new version of
`bech32`, but in order to keep the public API the same we
introduced one allocation we could have skipped.
Instead, here, we replace the public `Utf8Error` with
`FromUtf8Error` which contains the original data which failed
conversion, removing an allocation in the process.
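The reason `FromUtf8Error` avoids the allocation is that `String::from_utf8` takes the `Vec<u8>` by value and hands the same buffer back inside the error on failure; a minimal standalone example (the `parse_hrp` name is made up for illustration):

```rust
/// `String::from_utf8` consumes the Vec and, on failure, returns it inside
/// the error, so the caller can recover the bytes without cloning them.
fn parse_hrp(bytes: Vec<u8>) -> Result<String, std::string::FromUtf8Error> {
    String::from_utf8(bytes)
}

fn main() {
    match parse_hrp(vec![0x6c, 0x6e, 0xff]) {
        Ok(s) => println!("hrp: {s}"),
        // `into_bytes()` returns the original, still-owned buffer.
        Err(e) => println!("invalid utf8 in {:?}", e.into_bytes()),
    }
}
```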
In aa2f6b47df312f026213d0ceaaff20ffe955c377 we refactored
`lightning-invoice` de/serialization to use the new version of
`bech32`, but ended up adding one unnecessary allocation in our
offers logic, which we drop here.
Matt Corallo [Thu, 3 Oct 2024 16:53:56 +0000 (16:53 +0000)]
Marginally reduce allocations in `lightning-invoice`
In aa2f6b47df312f026213d0ceaaff20ffe955c377 we refactored
`lightning-invoice` de/serialization to use the new version of
`bech32`, also reducing some trivial unnecessary allocations when
we did so.
Here we drop a few additional allocations which came up in review.
Matt Corallo [Wed, 2 Oct 2024 18:21:33 +0000 (18:21 +0000)]
Allow a `DNSResolverMessageHandler` to set `dns_resolver` feature
A `DNSResolverMessageHandler` which handles resolution requests
will generally want the `NodeFeatures` included in the node's
`node_announcement` to include `dns_resolver`, indicating to the
world that it provides that service. Here we enable this by
requesting extra feature flags from the `DNSResolverMessageHandler`
in the features `OnionMessenger`, in turn, provides to
`PeerManager` (which builds the `node_announcement`).
Matt Corallo [Wed, 2 Oct 2024 18:12:38 +0000 (18:12 +0000)]
Add support for parsing the `dns_resolver` feature bit
This feature bit is used to indicate that a node will make DNS
queries on behalf of onion message senders, returning DNSSEC TXT
proofs for the requested names.
It is used to signal support for bLIP 32 resolution and can be used
to find nodes from which we can try to resolve BIP 353 HRNs.
Duncan Dean [Fri, 6 Sep 2024 10:26:19 +0000 (12:26 +0200)]
Add an `explicit_type` TLV syntax for avoiding certain cases of type inference
This new syntax is used to fix "dependency on fallback of `!` -> `()`"
warnings: it avoids cases where code only compiles because the never
type (`!`) falls back to the unit type (`()`). Under Rust edition
2024's fallback behaviour this would become a compile error.
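For context, a minimal illustration of the fallback issue (adapted from the rustc lint docs for `dependency_on_unit_never_type_fallback`, not from LDK's TLV macros):

```rust
fn main() {
    if true {
        // `return` has type `!`, which in some cases triggers never-type fallback.
        return
    } else {
        // The type produced by this call is not written out, so it is inferred
        // from the other branch. With the old fallback it becomes `()` (which
        // implements `Default`); under Rust 2024 it falls back to `!`, which
        // does not, so this stops compiling without an explicit type.
        Default::default()
    };
}
```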
Matt Corallo [Mon, 24 Jun 2024 20:21:08 +0000 (20:21 +0000)]
Use a `MessageSendEvent`-handling fn rather than a single loop
Rather than building a single `Vec` of `MessageSendEvent`s to
handle then iterating over them, we move the body of the loop into
a lambda and run the loop twice. In some cases, this may save a
single allocation, but more importantly it sets us up for the next
commit, which needs to know which handler the
`MessageSendEvent` it is processing came from.
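A rough sketch of the restructuring, using a made-up event enum rather than LDK's `MessageSendEvent`: run one handling closure per source instead of first collecting everything into a single `Vec`, so the originating handler is known for each event.

```rust
/// Illustrative event type, not LDK's real `MessageSendEvent`.
enum MessageSendEvent {
    BroadcastChannelUpdate(u64),
    SendPing(u64),
}

fn process_events(chan_events: Vec<MessageSendEvent>, route_events: Vec<MessageSendEvent>) {
    // One closure holds the loop body; it is run once per event source, so it
    // knows which handler produced the events without an extra allocation.
    let handle = |events: Vec<MessageSendEvent>, from_chan_handler: bool| {
        for event in events {
            match event {
                MessageSendEvent::BroadcastChannelUpdate(scid) => {
                    println!("broadcast {scid} (from_chan_handler={from_chan_handler})")
                },
                MessageSendEvent::SendPing(node) => println!("ping {node}"),
            }
        }
    };
    handle(chan_events, true);
    handle(route_events, false);
}
```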
Matt Corallo [Thu, 12 Sep 2024 15:17:15 +0000 (15:17 +0000)]
Call `ChannelMessageHandler::message_received` without peer lock
While `message_received` purports to be called on every message
prior to processing it, doing so for `Init` messages means we have to
call `message_received` while holding the per-peer mutex, which
can cause some lock contention.
Instead, here, we call `message_received` after processing `Init`
messages (which is probably more useful anyway - the peer isn't
really "connected" until we've processed the `Init` messages),
allowing us to call it unlocked.
Matt Corallo [Thu, 12 Sep 2024 15:13:11 +0000 (15:13 +0000)]
Check that we aren't reading a second message in BOLT 12 retry test
`creates_and_pays_for_offer_with_retry` intends to check that we
re-send a BOLT 12 `invoice_request` in response to a
`message_received` call, but doesn't actually test that there were
no messages in the outbound buffer after the initial send, which we
do here.