Matthew Rheaume [Tue, 5 Nov 2024 00:11:37 +0000 (16:11 -0800)]
Updated docs on `PeerManager::process_events`.
Try to make it clearer that there are downsides to relying solely on
`lightning-net-tokio`, and that it's still recommended to occasionally
call this function from a separate loop.
Jeffrey Czyz [Mon, 12 Aug 2024 21:54:55 +0000 (16:54 -0500)]
Parse experimental invoice TLV records
The BOLT12 spec defines an experimental TLV range that is allowed in
offer and invoice_request messages. The remaining TLV-space is for
experimental use in invoice messages. Allow this range when parsing an
invoice and include it when signing one.
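As a rough illustration of what allowing the range means when parsing (the
numeric bounds and names below are placeholders, not taken from the BOLT12
spec or LDK), a parser might partition records by type like so:

    // Hypothetical sketch only: split parsed TLV records into spec-defined
    // and experimental ranges. EXPERIMENTAL_TYPES is a placeholder, not the
    // exact bounds from the BOLT12 spec.
    const EXPERIMENTAL_TYPES: std::ops::Range<u64> = 1_000_000_000..2_000_000_000;

    struct ParsedRecord { tlv_type: u64, value: Vec<u8> }

    fn split_experimental(records: Vec<ParsedRecord>) -> (Vec<ParsedRecord>, Vec<ParsedRecord>) {
        // Non-experimental records first, experimental ones second.
        records.into_iter().partition(|r| !EXPERIMENTAL_TYPES.contains(&r.tlv_type))
    }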
Jeffrey Czyz [Thu, 8 Aug 2024 21:50:26 +0000 (16:50 -0500)]
Test verification with experimental invreq TLVs
Payer metadata is generated from the invreq TLVs and should include
those in the experimental range. When verifying invoice messages, these
TLVs must be included. Modify the BOLT12 verification tests to cover
them.
Jeffrey Czyz [Thu, 8 Aug 2024 16:44:03 +0000 (11:44 -0500)]
Parse experimental invreq TLV records
The BOLT12 spec defines an experimental TLV range that is allowed in
invoice_request messages. Allow this range when parsing an invoice
request and include those bytes in any invoice. Also include those bytes
when verifying that a Bolt12Invoice is for a valid InvoiceRequest.
Jeffrey Czyz [Tue, 6 Aug 2024 21:21:32 +0000 (16:21 -0500)]
Test verification with experimental offer TLVs
Offer metadata is generated from the offer TLVs and should include
those in the experimental range. When verifying invoice request and
invoice messages, these TLVs must be included. Similarly, OfferId
construction should include these TLVs as well. Modify the BOLT12
verification tests to cover these TLVs.
Jeffrey Czyz [Mon, 5 Aug 2024 23:51:32 +0000 (18:51 -0500)]
Parse experimental offer TLV records
The BOLT12 spec defines an experimental TLV range that is allowed in
offer messages. Allow this range when parsing an offer and include those
bytes in any invoice requests. Also include those bytes when computing
an OfferId and verifying that an InvoiceRequest is for a valid Offer.
Jeffrey Czyz [Thu, 17 Oct 2024 22:51:54 +0000 (17:51 -0500)]
Include experimental TLV records when verifying
Upcoming commits will allow parsing BOLT12 messages that include TLV
records in the experimental range. Include these ranges when verifying
messages since they will be included in the message bytes.
Passing bytes directly to InvoiceContents::verify improves readability
as then a TlvStream for each TLV record range can be created from the
bytes instead of needing to clone the TlvStream upfront. In an upcoming
commit, the experimental TLV record range will utilize this.
Add a utility function for iterating over Offer TLV records contained in
any valid TLV stream bytes. Using a common function ensures that
experimental TLV records are included once they are supported.
Jeffrey Czyz [Fri, 9 Aug 2024 23:36:24 +0000 (18:36 -0500)]
Separate bytes for experimental TLVs
When constructing UnsignedInvoiceRequest or UnsignedBolt12Invoice, use a
separate field for experimental TLV bytes. This allows for properly
inserting the signature TLVs before the experimental TLVs when signing.
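The ordering constraint can be pictured with a small sketch (struct and
function names are illustrative, not LDK's actual definitions): keeping the
experimental bytes in their own field lets the signature TLV be appended
between the two ranges.

    // Hedged sketch of the byte layout only.
    struct UnsignedMessageSketch {
        bytes: Vec<u8>,              // non-experimental TLV records
        experimental_bytes: Vec<u8>, // experimental-range TLV records
    }

    fn signed_message_bytes(msg: &UnsignedMessageSketch, signature_tlv: &[u8]) -> Vec<u8> {
        let mut out = msg.bytes.clone();
        out.extend_from_slice(signature_tlv);           // signature TLV first...
        out.extend_from_slice(&msg.experimental_bytes); // ...then the experimental records
        out
    }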
Factor invoice requests into payment path length limiting
Async payments include the original invoice request in the payment onion.
Since invreqs may include blinded paths, it's important to factor them into our
max path length calculations, as they may take up a significant portion of
the 1300-byte onion.
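Conceptually the budgeting looks like the sketch below; the only number taken
from the text is the 1300-byte onion data size, everything else is a
placeholder.

    // Rough sketch: the serialized invreq eats into the fixed onion budget.
    const ONION_DATA_LEN: usize = 1300;

    fn remaining_hop_payload_budget(invreq_len: Option<usize>, fixed_overhead: usize) -> usize {
        ONION_DATA_LEN.saturating_sub(fixed_overhead + invreq_len.unwrap_or(0))
    }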
Include invreq in payment onion when retrying async payments
While in the last commit we began including invoice requests in async payment
onions on initial send, further work is needed to include them on retry. Here
we begin storing invreqs in our retry data, and pass them along for inclusion
in the onion on payment retry.
Per BOLTs PR 1149, when paying a static invoice we need to include our original
invoice request in the HTLC onion since the recipient wouldn't have received it
previously.
Include invreq in payment onion when sending async payments
Past commits have set us up to include invoice requests in outbound async
payment onions. Here we actually pull the invoice request from where it's
stored in outbound_payments and pass it into the correct utility for inclusion
in the onion on initial send.
Per BOLTs PR 1149, when paying a static invoice we need to include our original
invoice request in the HTLC onion since the recipient wouldn't have received it
previously.
Store invreqs in StaticInvoiceReceived outbound payments
When transitioning outbound payments from AwaitingInvoice to
StaticInvoiceReceived, include the invreq in the new state's outbound payment
storage for future inclusion in an async payment onion.
Per BOLTs PR 1149, when paying a static invoice we need to include our original
invoice request in the HTLC onion since the recipient wouldn't have received it
previously.
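Schematically (variant and field names simplified, not the actual
outbound_payments definitions), the state now carries the invreq along:

    // Hedged sketch: the new state keeps the serialized invoice request.
    enum OutboundPaymentSketch {
        AwaitingInvoice,
        StaticInvoiceReceived {
            invoice_request: Vec<u8>, // kept for later inclusion in the payment onion
        },
    }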
Add a new invoice request parameter to outbound_payments and channelmanager
send-to-route internal utils. As of this commit the invreq will always be
passed in as None, to be updated in future commits.
Per BOLTs PR 1149, when paying a static invoice we need to include our original
invoice request in the HTLC onion since the recipient wouldn't have received it
previously.
Support including invreqs when building payment onions
Add a new invoice request parameter to onion_utils::create_payment_onion. As of
this commit it will always be passed in as None, to be updated in future
commits.
Per BOLTs PR 1149, when paying a static invoice we need to include our original
invoice request in the HTLC onion since the recipient wouldn't have received it
previously.
Support including invreqs when building onion payloads
Add a new invoice request parameter to onion_utils::build_onion_payloads.
As of this commit it will always be passed in as None, to be updated in future
commits.
Per BOLTs PR 1149, when paying a static invoice we need to include our original
invoice request in the HTLC onion since the recipient wouldn't have received it
previously.
Support encoding invreqs in outbound onion payloads
Per BOLTs PR 1149, when paying a static invoice we need to include our original
invoice request in the HTLC onion since the recipient wouldn't have received it
previously.
We use an experimental TLV type for this new onion payload field, since the
spec is still not merged in the BOLTs.
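A toy encoder shows the shape of the change; the real onion payload writer
uses BigSize encoding and a spec-assigned type number, neither of which is
reproduced here.

    // Only write the record when an invoice request is present.
    fn write_optional_invreq_tlv(out: &mut Vec<u8>, tlv_type: u64, invreq_bytes: Option<&[u8]>) {
        if let Some(bytes) = invreq_bytes {
            out.extend_from_slice(&tlv_type.to_be_bytes());
            out.extend_from_slice(&(bytes.len() as u64).to_be_bytes());
            out.extend_from_slice(bytes);
        }
    }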
Stop taking &self in outbound_payments' create_inbound_payment
The method doesn't actually use its &self parameter, and this makes it more
obvious that we aren't going to deadlock by calling the method if the
outbound_payments lock is already acquired.
Prefix AsyncPaymentsMessageHandler methods with handle_*
"Release" is overloaded in the trait's release_pending_messages method, since
the latter releases pending async payments onion messages to the peer manager,
whereas the release_held_htlc method handles the release_held_htlc onion
message by attempting to send an HTLC to the recipient.
Add new PaymentFailureReason::BlindedPathCreationFailed
RouteNotFound did not fit here because that error is reserved for failing to
find a route for a payment, whereas here we are failing to create a blinded
path back to ourselves.
TlvRecord has a few fields, but comparing only the record_bytes is
sufficient for equality since the other fields are initialized from it.
Remove the Eq and PartialEq derives as they compare these other fields.
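One way to keep equality limited to the record bytes would be a manual impl
along these lines (field names are approximate; whether a manual impl is added
or the comparison is simply dropped is not stated here):

    struct TlvRecordSketch {
        tlv_type: u64,
        length: u64,
        record_bytes: Vec<u8>,
    }

    impl PartialEq for TlvRecordSketch {
        fn eq(&self, other: &Self) -> bool {
            // The other fields are derived from record_bytes, so comparing it
            // alone is sufficient.
            self.record_bytes == other.record_bytes
        }
    }
    impl Eq for TlvRecordSketch {}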
Jeffrey Czyz [Fri, 2 Aug 2024 16:54:42 +0000 (11:54 -0500)]
Add optional lifetime to tlv_stream macro
Using the tlv_stream macro without a type needing a reference results in
a compilation error because of an unused lifetime parameter. To avoid
this, add an optional lifetime parameter to the macro. This allows for
experimental TLVs, which will be empty initially, and TLVs of entirely
primitive types.
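A simplified stand-in for the macro shows why the optional lifetime helps
(this is not the real tlv_stream! macro):

    // With the lifetime made optional, a stream of purely primitive (or no)
    // fields no longer triggers an unused-lifetime-parameter error.
    macro_rules! tlv_stream_sketch {
        ($name:ident $(<$lt:lifetime>)?, { $($field:ident: $ty:ty),* $(,)? }) => {
            struct $name$(<$lt>)? { $($field: $ty),* }
        };
    }

    tlv_stream_sketch!(ExperimentalOfferTlvStreamSketch, {});                // no lifetime needed
    tlv_stream_sketch!(OfferTlvStreamSketch<'a>, { description: &'a [u8] }); // borrows, so 'a is used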
Matt Corallo [Sun, 13 Oct 2024 17:14:23 +0000 (17:14 +0000)]
Re-broadcast `channel_announcement`s every six blocks for a week
When we first get a public channel confirmed at six blocks, we
broadcast a `channel_announcement` once and then move on. As long
as it makes it into our local network graph that should be okay, as
we should send peers our network graph contents as they seek to
sync. However, it's possible an ill-timed shutdown could cause this
to fail, and relying on peers to do a full historical sync from us
may delay `channel_announcement` propagation.
Instead, here, we re-broadcast our `channel_announcement`s every
six blocks for a week, which should be way more than robust enough
to get them properly across the P2P network.
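Back-of-the-envelope, the schedule amounts to a check like the one below; the
~1008-block week (144 blocks/day * 7) is an assumption of this sketch rather
than a constant quoted from the code.

    // Re-announce every 6 blocks from 6 confirmations up to roughly one week.
    fn should_rebroadcast_announcement(confirmations: u32) -> bool {
        const BLOCKS_PER_WEEK: u32 = 144 * 7;
        confirmations >= 6 && confirmations <= 6 + BLOCKS_PER_WEEK && confirmations % 6 == 0
    }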
Matt Corallo [Sun, 15 Sep 2024 17:24:19 +0000 (17:24 +0000)]
Doc the on-upgrade `ChannelMonitor` startup persistence semantics
Because the new startup `ChannelMonitor` persistence semantics rely
on new information stored in `ChannelMonitor` only for claims made
in the upgraded code, users upgrading from a previous version of LDK
must apply the old `ChannelMonitor` persistence semantics at least
once (as the old code will be used to handle partial claims).
Matt Corallo [Thu, 20 Jun 2024 15:17:10 +0000 (15:17 +0000)]
Stop relying on `ChannelMonitor` persistence after manager read
When we discover we've only partially claimed an MPP HTLC during
`ChannelManager` reading, we need to add the payment preimage to
all other `ChannelMonitor`s that were a part of the payment.
We previously did this with a direct call on the `ChannelMonitor`,
requiring users write the full `ChannelMonitor` to disk to ensure
that updated information made it.
This adds quite a bit of delay during initial startup - fully
resilvering each `ChannelMonitor` just to handle this one case is
incredibly excessive.
Over the past few commits we dropped the need to pass HTLCs
directly to the `ChannelMonitor`s, using the background events to
provide `ChannelMonitorUpdate`s instead.
Thus, here we finally drop the requirement to resilver
`ChannelMonitor`s on startup.
Matt Corallo [Mon, 30 Sep 2024 20:09:01 +0000 (20:09 +0000)]
Replay MPP claims via background events using new CM metadata
When we claim an MPP payment, then crash before persisting all the
relevant `ChannelMonitor`s, we rely on the payment data being
available in the `ChannelManager` on restart to re-claim any parts
that haven't yet been claimed. This is fine as long as the
`ChannelManager` was persisted before the `PaymentClaimable` event
was processed, which is generally the case in our
`lightning-background-processor`, but may not be in other cases or
in a somewhat rare race.
In order to fix this, we need to track where all the MPP parts of
a payment are in the `ChannelMonitor`, allowing us to re-claim any
missing pieces without reference to any `ChannelManager` data.
Further, in order to properly generate a `PaymentClaimed` event
against the re-started claim, we have to store various payment
metadata with the HTLC list as well.
Here we finally implement claiming using the new MPP part list and
metadata stored in `ChannelMonitor`s. In doing so, we use much more
of the existing HTLC-claiming pipeline in `ChannelManager`,
utilizing the on-startup background events flow as well as properly
re-applying the RAA-blockers to ensure preimages cannot be lost.
Matt Corallo [Sun, 15 Sep 2024 23:27:35 +0000 (23:27 +0000)]
Handle duplicate payment claims during initialization
In the next commit we'll start using (much of) the normal HTLC
claim pipeline to replay payment claims on startup. In order to do
so, however, we have to properly handle cases where we get a
`DuplicateClaim` back from the channel for an inbound-payment HTLC.
Here we do so, handling the `MonitorUpdateCompletionAction` and
allowing an already-completed RAA blocker.
Matt Corallo [Mon, 16 Sep 2024 00:16:51 +0000 (00:16 +0000)]
Move payment claim initialization to an fn on `ClaimablePayments`
Here we wrap the logic which moves claimable payments from
`claimable_payments` to `pending_claiming_payments` in a new
utility function on `ClaimablePayments`. This will allow us to call
this new logic during `ChannelManager` deserialization in a few
commits.
Matt Corallo [Mon, 30 Sep 2024 19:42:51 +0000 (19:42 +0000)]
Move `ChannelManager`-read preimage relay to after struct build
In a coming commit we'll use the existing `ChannelManager` claim
flow to claim HTLCs which we found partially claimed on startup,
necessitating having a full `ChannelManager` when we go to do so.
Here we move the re-claim logic down in the `ChannelManager`-read
logic so that the full `ChannelManager` is available at that point.
Matt Corallo [Mon, 16 Sep 2024 00:07:48 +0000 (00:07 +0000)]
Store info about claimed payments, incl HTLCs in `ChannelMonitor`s
When we claim an MPP payment, then crash before persisting all the
relevant `ChannelMonitor`s, we rely on the payment data being
available in the `ChannelManager` on restart to re-claim any parts
that haven't yet been claimed. This is fine as long as the
`ChannelManager` was persisted before the `PaymentClaimable` event
was processed, which is generally the case in our
`lightning-background-processor`, but may not be in other cases or
in a somewhat rare race.
In order to fix this, we need to track where all the MPP parts of
a payment are in the `ChannelMonitor`, allowing us to re-claim any
missing pieces without reference to any `ChannelManager` data.
Further, in order to properly generate a `PaymentClaimed` event
against the re-started claim, we have to store various payment
metadata with the HTLC list as well.
Here we store the required MPP parts and metadata in
`ChannelMonitor`s and make them available to `ChannelManager` on
load.
Matt Corallo [Sun, 15 Sep 2024 23:50:31 +0000 (23:50 +0000)]
Pass info about claimed payments, incl HTLCs to `ChannelMonitor`s
When we claim an MPP payment, then crash before persisting all the
relevant `ChannelMonitor`s, we rely on the payment data being
available in the `ChannelManager` on restart to re-claim any parts
that haven't yet been claimed. This is fine as long as the
`ChannelManager` was persisted before the `PaymentClaimable` event
was processed, which is generally the case in our
`lightning-background-processor`, but may not be in other cases or
in a somewhat rare race.
In order to fix this, we need to track where all the MPP parts of
a payment are in the `ChannelMonitor`, allowing us to re-claim any
missing pieces without reference to any `ChannelManager` data.
Further, in order to properly generate a `PaymentClaimed` event
against the re-started claim, we have to store various payment
metadata with the HTLC list as well.
Here we take the first step, building a list of MPP parts and
metadata in `ChannelManager` and passing it through to
`ChannelMonitor` in the `ChannelMonitorUpdate`s.
Matt Corallo [Fri, 14 Jun 2024 14:10:38 +0000 (14:10 +0000)]
Use a struct to track MPP parts pending claiming
When we started tracking which channels had MPP parts claimed
durably on-disk in their `ChannelMonitor`, we did so with a tuple.
This was fine in that it was only ever accessed in two places, but
as we will start tracking it through to the `ChannelMonitor`s
themselves in the coming commit(s), it is useful to have it in a
struct instead.
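The shape of the change is roughly tuple-to-named-struct; the fields below
are guesses at what identifies an MPP part, not the actual definition.

    // Before: an anonymous tuple. After: named fields that can grow as the
    // data flows into the ChannelMonitor.
    struct MppClaimPartSketch {
        counterparty_node_id: [u8; 33],
        channel_id: [u8; 32],
        htlc_id: u64,
    }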
Matt Corallo [Mon, 30 Sep 2024 21:02:53 +0000 (21:02 +0000)]
Add missing `inbound_payment_id_secret` write in `ChannelManager`
In aa09c33a1719944769ba98624bfe18ea33083f44 we added a new secret
in `ChannelManager` with which to derive inbound `PaymentId`s. We
added read support for the new field, but forgot to add writing
support for it. Here we fix this oversight.
Matt Corallo [Wed, 23 Oct 2024 15:59:23 +0000 (15:59 +0000)]
Allow `clippy::unwrap-or-default` because it's usually wrong
`or_default` is generally less readable than writing out the value
we're constructing, as `Default` is opaque but explicit constructors
generally are not. Thus, we ignore the clippy lint (ideally we
could invert it and ban the use of `Default` in the crate entirely
but alas).
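In attribute form the allow is simply the following (shown crate-level here;
exactly where it lives in the crate is not specified by this log):

    #![allow(clippy::unwrap_or_default)]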
olegkubrakov [Thu, 17 Oct 2024 21:28:12 +0000 (14:28 -0700)]
Make monitor update name public
These structs are meant for the MonitorUpdatingPersister implementation, but some
external implementations may still reuse them, so we make them public.
Duncan Dean [Fri, 4 Oct 2024 14:35:43 +0000 (16:35 +0200)]
DRY `funding_created()` and `funding_signed()` for V1 channels
There is a decent amount of shared code in these two methods so we make
an attempt to share that code here by introducing the
`InitialRemoteCommitmentReceiver` trait. This trait will also come in
handy when we need similar commitment_signed handling behaviour for
dual-funded channels.
Jeffrey Czyz [Fri, 18 Oct 2024 22:42:18 +0000 (17:42 -0500)]
Use total_inflight_amount_msat for probability fns
Rename parameters used when calculating success probability to make it
clear that the total amount in-flight should be used rather than the
payment amount.
Jeffrey Czyz [Thu, 10 Oct 2024 23:20:25 +0000 (18:20 -0500)]
Correct base_penalty_amount_multiplier_msat docs
Commit df52da7b31494c7ec77a705cca4c44bc840f8a95 modified
ProbabilisticScorer to apply some penalty amount multipliers to the
total amount flowing over the channel. However, the commit updated the
docs for base_penalty_amount_multiplier_msat even though that behavior
didn't change. This commit reverts those docs.
Jeffrey Czyz [Thu, 10 Oct 2024 23:01:23 +0000 (18:01 -0500)]
Don't over-penalize channels with inflight HTLCs
Commit df52da7b31494c7ec77a705cca4c44bc840f8a95 modified
ProbabilisticScorer to apply some penalty amount multipliers (e.g.,
liquidity_penalty_amount_multiplier_msat) to the total amount flowing
over the channel (i.e., including inflight HTLCs), not just the payment
in question. This led to over-penalizing in-use channels. Instead, only
apply the total amount when calculating success probability.
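A hedged toy model of the distinction (not the actual ProbabilisticScorer
math): the total in-flight amount drives the success-probability estimate,
while the amount-based penalty multipliers stay tied to the payment itself.

    fn success_probability_sketch(total_inflight_amount_msat: u64, capacity_msat: u64) -> f64 {
        if capacity_msat == 0 {
            return 0.0;
        }
        let remaining = capacity_msat.saturating_sub(total_inflight_amount_msat) as f64;
        (remaining / capacity_msat as f64).clamp(0.0, 1.0)
    }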
Matt Corallo [Fri, 18 Oct 2024 15:57:25 +0000 (15:57 +0000)]
Add a test for the fee-bump rate of timeout HTLC claims on cp txn
In a previous commit we updated the fee-bump rate of claims against
HTLC timeouts on counterparty commitment transactions: instead of
immediately attempting to bump every block, we consider the fact
that we actually have at least `MIN_CLTV_EXPIRY_DELTA` blocks to do
so, and bump at the appropriate rate given that.
Here we test that by adding an extra check to an existing test
that we do not bump in the very next block after the HTLC timeout
claim was initially broadcasted.
Matt Corallo [Wed, 18 Sep 2024 18:20:46 +0000 (18:20 +0000)]
Set correct `counterparty_spendable_height` for outb local HTLCs
For outbound HTLCs, the counterparty can spend the output
immediately. This fixes the `counterparty_spendable_height` in the
`PackageTemplate` claiming outbound HTLCs on local commitment
transactions, which was previously spuriously set to the HTLC
timeout (at which point *we* can claim the HTLC).
Matt Corallo [Thu, 17 Oct 2024 19:38:19 +0000 (19:38 +0000)]
Stop exporting `lightning::ln::features`
Now that the module only contains some implementations of
serialization for the `lightning_types::features` structs, there's
no reason for it to be public.
Matt Corallo [Tue, 20 Aug 2024 02:22:22 +0000 (02:22 +0000)]
Add a test of gossip message buffer limiting in `PeerManager`
This adds a simple test that the gossip message buffer in
`PeerManager` is limited, including the new behavior of bypassing
the limit when the broadcast comes from the
`ChannelMessageHandler`.
Matt Corallo [Tue, 20 Aug 2024 01:57:06 +0000 (01:57 +0000)]
Add a constructor for the test `SocketDescriptor` and `hang_writes`
In testing, it's useful to be able to tell the `SocketDescriptor` to
pretend the system network buffer is full, which we add here by
creating a new `hang_writes` flag. To simplify construction, we also
add a new constructor and move existing tests over to it.
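A toy version of the flag (not the actual test type) might look like:

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Mutex;

    // When hang_writes is set, writes report zero bytes accepted, as if the
    // OS network buffer were full.
    struct SocketDescriptorSketch {
        hang_writes: AtomicBool,
        outbound_data: Mutex<Vec<u8>>,
    }

    impl SocketDescriptorSketch {
        fn send_data(&self, data: &[u8]) -> usize {
            if self.hang_writes.load(Ordering::Acquire) {
                return 0; // pretend nothing was written
            }
            self.outbound_data.lock().unwrap().extend_from_slice(data);
            data.len()
        }
    }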
Matt Corallo [Mon, 24 Jun 2024 20:24:36 +0000 (20:24 +0000)]
Reliably deliver gossip messages from our `ChannelMessageHandler`
When our `ChannelMessageHandler` creates gossip broadcast
`MessageSendEvent`s, we generally want these to be reliably
delivered to all our peers, even if there's not much buffer space
available.
Here we do this by passing an extra flag to `forward_broadcast_msg`
which indicates where the message came from, then ignoring the
buffer-full criteria when the flag is set.
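The gating boils down to a check like this (names are assumptions for
illustration, not the actual PeerManager internals):

    // Broadcasts originating from our own ChannelMessageHandler skip the
    // buffer-full backpressure check; all others remain subject to it.
    fn should_forward_gossip(from_channel_message_handler: bool, peer_buffer_full: bool) -> bool {
        from_channel_message_handler || !peer_buffer_full
    }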
Matt Corallo [Wed, 18 Sep 2024 16:48:24 +0000 (16:48 +0000)]
Rename `soonest_conf_deadline` to `counterparty_spendable_height`
This renames the field in `PackageTemplate` which describes the
height at which a counterparty can claim an output, to match its
actual use.
Previously it had been set based on when a counterparty can claim
an output, but was also used for other purposes. In the previous commit
we cleaned up its use for the fee-bumping rate, so here we can rename
it, as it is now only used as the `counterparty_spendable_height`.