Jeffrey Czyz [Tue, 31 Jan 2023 16:44:19 +0000 (10:44 -0600)]
Re-write CustomMessageHandler documentation
Documentation for CustomMessageHandler wasn't clear about how it relates to
PeerManager and contained some grammatical and factual errors. Re-write
the docs and link to the lightning_custom_message crate.
Jeffrey Czyz [Tue, 3 Jan 2023 17:24:30 +0000 (11:24 -0600)]
Macro for composing custom message handlers
BOLT 1 specifies a custom message type range for use with experimental
or application-specific messages. While a `CustomMessageHandler` can be
defined to support more than one message type, defining such a handler
requires a significant amount of boilerplate and can be error-prone.
Add a crate exporting a `composite_custom_message_handler` macro for
easily composing pre-defined custom message handlers. The resulting
handler can be further composed with other custom message handlers using
the same macro.
This requires a separate crate since the macro needs to support "or"
patterns in macro_rules, which are only available in edition 2021.
Otherwise, a crate defining a handler for a set of custom messages could
not easily be reused with another custom message handler. Doing so would
require explicitly duplicating the reused handlers' type IDs, but those
may change when the crate is updated.
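As a rough, hand-rolled sketch of the boilerplate the macro automates (the `Foo`/`Bar` message types and handlers below are invented for illustration and are not part of the crate):
```
// Hypothetical message and handler types, standing in for two independent
// custom-message implementations one might want to compose.
struct FooMessage;
struct BarMessage;
struct FooHandler;
struct BarHandler;

impl FooHandler { fn handle(&self, _msg: FooMessage) { /* application logic */ } }
impl BarHandler { fn handle(&self, _msg: BarMessage) { /* application logic */ } }

// The composed enum and handler the macro would generate: one message type id
// per variant, with dispatch to the handler owning that type id.
enum FooBarMessage {
    Foo(FooMessage), // e.g. experimental type 32768
    Bar(BarMessage), // e.g. experimental type 32769
}

struct FooBarHandler {
    foo: FooHandler,
    bar: BarHandler,
}

impl FooBarHandler {
    fn handle_custom_message(&self, msg: FooBarMessage) {
        match msg {
            FooBarMessage::Foo(m) => self.foo.handle(m),
            FooBarMessage::Bar(m) => self.bar.handle(m),
        }
    }
}
```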
Elias Rohrer [Wed, 23 Nov 2022 08:33:37 +0000 (09:33 +0100)]
Add transaction sync crate
This crate provides utilities for syncing LDK via the transaction-based
`Confirm` interface. The initial implementation facilitates
synchronization with an Esplora backend server.
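A highly simplified, hypothetical model of the polling loop such a client runs against `Confirm` implementors (all names below are illustrative placeholders, not the crate's actual API):
```
use std::collections::HashSet;

// Illustrative placeholder for an Esplora-backed sync client.
struct SyncClient {
    watched_txids: HashSet<String>,
}

impl SyncClient {
    // One polling round: learn which transactions are relevant, query the
    // Esplora backend for their confirmation status, and feed the results
    // back through the `Confirm` interface of the registered listeners.
    fn poll_once(&mut self, relevant_txids: Vec<String>) {
        self.watched_txids.extend(relevant_txids);
        for _txid in &self.watched_txids {
            // 1. Query GET /tx/{txid}/status on the Esplora server (omitted).
            // 2. Newly confirmed -> report via `Confirm::transactions_confirmed`.
            // 3. Reorged out -> report via `Confirm::transaction_unconfirmed`.
        }
        // Finally, report the current chain tip via `Confirm::best_block_updated`.
    }
}
```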
Matt Corallo [Mon, 30 Jan 2023 17:56:46 +0000 (17:56 +0000)]
Don't apply gossip backpressure to non-channel-announcing peers
When we apply the new gossip-async-check backpressure on peer
connections, if a peer has never sent us a `channel_announcement`
at all, we really shouldn't delay reading their messages.
We do this by tracking, on a per-peer basis, whether they've sent
us a `channel_announcement`, and resetting that state whenever we're
not backlogged.
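A minimal sketch of the per-peer state this implies (field and method names are invented for illustration):
```
// Hypothetical per-peer state: only peers that have sent a
// `channel_announcement` since we last drained the gossip backlog are
// subject to backpressure.
struct PeerState {
    sent_channel_announcement_since_backlogged: bool,
}

impl PeerState {
    fn should_pause_reads(&mut self, backlogged: bool) -> bool {
        if !backlogged {
            // Not backlogged: reset the flag so this peer starts fresh.
            self.sent_channel_announcement_since_backlogged = false;
        }
        // Only pause reads for peers that contributed to the backlog.
        backlogged && self.sent_channel_announcement_since_backlogged
    }
}
```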
Matt Corallo [Sun, 22 Jan 2023 18:08:33 +0000 (18:08 +0000)]
Apply backpressure when we have too many gossip checks in-flight
Now that the `RoutingMessageHandler` can signal that it needs to
apply message backpressure, we implement it here in the
`PeerManager`. There's not much complexity here, aside from noting
that we need to add the ability to call `send_data` with no data
to indicate that reading should resume (and track when we may need
to make such calls when updating the routing-backpressure state).
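The call shape being described, sketched against a simplified stand-in for the socket-descriptor interface (not the exact `SocketDescriptor` signature):
```
// Simplified stand-in: `send_data` both queues outbound bytes and carries a
// flag telling the socket whether reading from the peer should (re)start.
trait Descriptor {
    fn send_data(&mut self, data: &[u8], resume_read: bool) -> usize;
}

fn resume_reading<D: Descriptor>(descriptor: &mut D) {
    // No bytes to write; we only want to signal that the routing-backpressure
    // condition has cleared and reads from this peer may resume.
    descriptor.send_data(&[], true);
}
```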
Matt Corallo [Sun, 22 Jan 2023 05:12:45 +0000 (05:12 +0000)]
Allow `RoutingMessageHandler` to signal backpressure
Now that we allow `handle_channel_announcement` to (indirectly)
spawn async tasks which will complete later, we have to ensure it
can apply backpressure all the way up to the TCP socket to ensure
we don't end up with too many buffers allocated for UTXO
validation.
We do this by adding a new method to `RoutingMessageHandler` which
allows it to signal if there are "many" checks pending and
`channel_announcement` messages should be delayed. The actual
`PeerManager` implementation thereof is done in the next commit.
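Conceptually, the new hook looks something like the following sketch (the trait and method names here are illustrative, not necessarily the ones that shipped):
```
// The gossip handler reports whether it has "many" UTXO checks in flight;
// the peer handler then delays reading further `channel_announcement`s from
// peers while that is true.
trait GossipBackpressure {
    fn processing_queue_high(&self) -> bool;
}

fn should_delay_channel_announcements<H: GossipBackpressure>(handler: &H) -> bool {
    handler.processing_queue_high()
}
```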
Matt Corallo [Sun, 22 Jan 2023 04:14:58 +0000 (04:14 +0000)]
Forward gossip messages which were verified asynchronously
Gossip messages which were verified against the chain
asynchronously should still be forwarded to peers, but must now go
out via a new `P2PGossipSync` parameter in the
`AccessResolver::resolve` method, allowing us to wire them up to
the `P2PGossipSync`'s `MessageSendEventsProvider` implementation.
Matt Corallo [Sun, 22 Jan 2023 03:41:28 +0000 (03:41 +0000)]
Add the ability to broadcast gossip msgs via the event pipeline
When we process gossip messages asynchronously we may find that we
want to forward a gossip message to a peer after we've returned
from the existing `handle_*` method. In order to do so, we need to
be able to send arbitrary loose gossip messages back to the
`PeerManager` via `MessageSendEvent`.
This commit modifies `MessageSendEvent` in order to support this.
Matt Corallo [Tue, 7 Feb 2023 20:38:20 +0000 (20:38 +0000)]
Process `channel_update`/`node_announcement` async if needed
If we have a `channel_announcement` which is waiting on a UTXO
lookup before we can process it, and we receive a `channel_update`
or `node_announcement` for the same channel or a node which is a
part of the channel, we have to wait until the lookup completes
before we can decide whether we want to accept the new message.
Here, we store the new message in the pending lookup state and
process it asynchronously like the original `channel_announcement`.
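A rough sketch of the pending-lookup bookkeeping this describes, with invented type names:
```
// Hypothetical placeholders for the gossip message types involved.
struct ChannelAnnouncement;
struct ChannelUpdate;
struct NodeAnnouncement;

// While a `channel_announcement`'s UTXO lookup is in flight, any
// `channel_update`s or `node_announcement`s touching the same channel are
// parked alongside it and replayed once the lookup resolves.
struct PendingChannelLookup {
    announcement: ChannelAnnouncement,
    pending_channel_updates: Vec<ChannelUpdate>,
    pending_node_announcements: Vec<NodeAnnouncement>,
}
```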
Matt Corallo [Wed, 8 Feb 2023 22:11:56 +0000 (22:11 +0000)]
Track in-flight `channel_announcement` lookups and avoid duplicates
If we receive two `channel_announcement`s for the same channel at
the same time, we shouldn't spawn a second UTXO lookup for an
identical message. This likely isn't too rare: if we start syncing
the graph from two peers at the same time, it's quite likely that
we'll receive the same messages around the same time.
In order to avoid this we keep a hash map of all the pending
`channel_announcement` messages by SCID and simply ignore duplicate
message lookups.
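A minimal sketch of the deduplication, assuming the pending lookups are keyed by short channel ID:
```
use std::collections::hash_map::{Entry, HashMap};

struct PendingLookup; // placeholder for the in-flight UTXO check state

// Returns true if a new lookup should be spawned, false if one is already in
// flight for this short channel ID.
fn maybe_start_lookup(pending: &mut HashMap<u64, PendingLookup>, scid: u64) -> bool {
    match pending.entry(scid) {
        // Duplicate `channel_announcement`: a lookup is already running.
        Entry::Occupied(_) => false,
        // First time we see this SCID: record it and spawn the lookup.
        Entry::Vacant(e) => {
            e.insert(PendingLookup);
            true
        }
    }
}
```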
Matt Corallo [Wed, 8 Feb 2023 22:06:11 +0000 (22:06 +0000)]
Add an async resolution option to `ChainAccess::get_utxo`
For those operating in an async environment, requiring
`ChainAccess::get_utxo` return information about the requested UTXO
synchronously is incredibly painful. Requesting information about a
random UTXO is likely to go over the network, and likely to be a
rather slow request.
Thus, here, we change the return type of `get_utxo` to have both a
synchronous and an asynchronous form. The asynchronous form requires
the user to construct an `AccessFuture` which they `clone` and pass
back to us. Internally, an `AccessFuture` has an `Arc` to the
`channel_announcement` message which we need to process. When the
user completes their lookup, they call `resolve` on their
`AccessFuture`, from which we pull the `channel_announcement` and
then apply it to the network graph.
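A simplified sketch of the two-form return type and the completion flow (type names are illustrative and the real API differs in detail):
```
use std::sync::{Arc, Mutex};

struct TxOut; // placeholder for the funding output data
struct LookupError;

// Shared state behind the future, filled in by the user once their (possibly
// network-bound) UTXO lookup completes.
#[derive(Clone)]
struct AccessFuture {
    state: Arc<Mutex<Option<Result<TxOut, LookupError>>>>,
}

impl AccessFuture {
    fn new() -> Self {
        AccessFuture { state: Arc::new(Mutex::new(None)) }
    }
    // Called by the user when the lookup finishes; the real implementation
    // then pulls the pending `channel_announcement` out and applies it to
    // the network graph.
    fn resolve(&self, result: Result<TxOut, LookupError>) {
        *self.state.lock().unwrap() = Some(result);
    }
}

// `get_utxo` may answer immediately or hand back a future to resolve later.
enum UtxoResult {
    Sync(Result<TxOut, LookupError>),
    Async(AccessFuture),
}
```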
Matt Corallo [Sat, 21 Jan 2023 03:28:35 +0000 (03:28 +0000)]
Move `chain::Access` to `routing` and rename it `UtxoLookup`
The `chain::Access` trait (and the `chain::AccessError` enum) is a
bit strange - it only really makes sense if users import it via the
`chain` module, otherwise they're left with a trait just called
`Access`. Worse, for bindings users it's always just called
`Access`, in part because many downstream languages don't have a
mechanism to import a module and then refer to it.
Further, it's stuck dangling in the `chain` top-level mod.rs file,
sitting in a module that doesn't use it at all (it's only used in
`routing::gossip`).
Instead, we give it its full name - `UtxoLookup` (and rename the
error enum `UtxoLookupError`) and put it in a new
`routing::utxo` module, next to `routing::gossip`.
Alec Chen [Mon, 6 Feb 2023 21:33:02 +0000 (15:33 -0600)]
Swap `PublicKey` for `NodeId` in `UnsignedNodeAnnouncement`
Also swaps `PublicKey` for `NodeId` in `get_next_node_announcement`
and `InitSyncTracker` to avoid unnecessary deserialization that came
from changing `UnsignedNodeAnnouncement`.
Alec Chen [Mon, 6 Feb 2023 20:41:18 +0000 (14:41 -0600)]
Swap `PublicKey` for `NodeId` in `UnsignedChannelAnnouncement`
Adds the macro `get_pubkey_from_node_id`
to parse `PublicKey`s back from `NodeId`s for signature
verification, as well as `make_funding_redeemscript_from_slices`
to avoid parsing back and forth between types.
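The conversions involved look roughly like this sketch built directly on the `secp256k1` crate (LDK's own `NodeId` helpers differ in detail):
```
use secp256k1::PublicKey;

// A node id is essentially the 33-byte compressed serialization of the key,
// cheap to store and compare without deserializing.
struct NodeId([u8; 33]);

impl NodeId {
    fn from_pubkey(pk: &PublicKey) -> Self {
        NodeId(pk.serialize())
    }
    // Only parse back into a full `PublicKey` when we actually need to verify
    // a signature over the announcement.
    fn to_pubkey(&self) -> Result<PublicKey, secp256k1::Error> {
        PublicKey::from_slice(&self.0)
    }
}
```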
jurvis [Mon, 5 Dec 2022 01:24:08 +0000 (17:24 -0800)]
Expose pending payments through ChannelManager
Adds a new method, `list_recent_payments`, to `ChannelManager` that
returns an array of `RecentPaymentDetails` containing the payment
status (Fulfilled/Retryable/Abandoned) and its total amount across all
paths.
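A rough usage sketch against a simplified stand-in for the returned type (variant names follow this description; the real enum's fields differ):
```
// Simplified stand-in for the details returned by `list_recent_payments`:
// the payment's status plus its total amount across all paths.
enum RecentPaymentDetails {
    Fulfilled { total_msat: u64 },
    Retryable { total_msat: u64 },
    Abandoned { total_msat: u64 },
}

// E.g., summing the value of payments that are still retryable/in flight.
fn pending_amount_msat(recent: &[RecentPaymentDetails]) -> u64 {
    recent
        .iter()
        .filter_map(|p| match p {
            RecentPaymentDetails::Retryable { total_msat } => Some(*total_msat),
            _ => None,
        })
        .sum()
}
```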
Matt Corallo [Sat, 28 Jan 2023 02:14:26 +0000 (02:14 +0000)]
Use only the failed amount when retrying payments, not the full amt
When we landed the initial in-`ChannelManager` payment retries, we
stored the `RouteParameters` in the payment info and then re-used
it as-is for new payments. `RouteParameters` is intended to store
the information specific to the *route*; `PaymentParameters` exists
to store information specific to a payment.
Worse, because we don't recalculate the amount stored in the
`RouteParameters` before fetching a new route with it, we end up
attempting to retry the full payment amount, rather than only the
failed part.
This issue brought to you by having redundant data in
data structures, part 5,001.
Matt Corallo [Fri, 27 Jan 2023 23:01:39 +0000 (23:01 +0000)]
Move retry-limiting to `retry_payment_with_route`
The documentation for `Retry` is very clear that it counts the
number of failed paths, not discrete retries. When we added
retries internally in `ChannelManager`, we switched to counting
the number of discrete retries, even if multiple paths failed and
were replaced with multiple MPP HTLCs.
Because we are now rewriting retries, we take this opportunity to
reduce the places where retries are analyzed; specifically, a good
chunk of code is removed from `pay_internal`.
Because we now retry multiple failed paths with a single retry,
we keep the new behavior and update the docs on `Retry` to
describe it.
Matt Corallo [Fri, 27 Jan 2023 20:31:10 +0000 (20:31 +0000)]
Test the `RouteParameters` passed to `TestRouter`
`TestRouter` allows us to simply select the `Route` that will be
returned in the next `find_route` call, but it does so without any
checking of what was *requested* for the call. This makes it a
somewhat dubious test utility as it very helpfully ensures we
ignore errors in the routes we're looking for.
Instead, we require users of `TestRouter` to pass a `RouteParameters`
to `expect_find_route`, which we compare against the requested
parameters passed to `find_route`.
Matt Corallo [Fri, 27 Jan 2023 20:28:20 +0000 (20:28 +0000)]
Use the `PaymentParameter` final CLTV delta over `RouteParameters`
`PaymentParams` is all about the parameters for a payment, i.e. the
parameters which are static across all the paths of a payment.
`RouteParameters` is about the information specific to a given
`Route` (i.e. a set of paths, among multiple potential sets of
paths for a payment). The CLTV delta thus doesn't belong in
`RouteParameters` but instead in `PaymentParameters`.
Worse, because `RouteParameters` is built from the information in
the last hops of a `Route`, when we deliberately inflate the CLTV
delta in path-finding, retries of the payment will have the final
CLTV delta double-inflated as it inflates starting from the final
CLTV delta used in the last attempt.
When we calculate the `final_cltv_expiry_delta` to put in the
`RouteParameters` returned via events after a payment failure, we
should re-use the new one in the `PaymentParameters`, rather than
the one that was in the route itself.
Matt Corallo [Fri, 27 Jan 2023 19:24:52 +0000 (19:24 +0000)]
Move the final CLTV delta to `PaymentParameters` from `RouteParams`
`PaymentParams` is all about the parameters for a payment, i.e. the
parameters which are static across all the paths of a payment.
`RouteParameters` is about the information specific to a given
`Route` (i.e. a set of paths, among multiple potential sets of
paths for a payment). The CLTV delta thus doesn't belong in
`RouteParameters` but instead in `PaymentParameters`.
Worse, because `RouteParameters` is built from the information in
the last hops of a `Route`, when we deliberately inflate the CLTV
delta in path-finding, retries of the payment will have the final
CLTV delta double-inflated as it inflates starting from the final
CLTV delta used in the last attempt.
By moving the CLTV delta to `PaymentParameters` we avoid this
issue, leaving only the sought amount in the `RouteParameters`.
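After this change the split looks roughly like the following sketch (field sets abbreviated and simplified for illustration):
```
// Parameters that are static across every path and every retry of a payment,
// now including the final CLTV expiry delta.
struct PaymentParameters {
    payee_pubkey: [u8; 33],
    final_cltv_expiry_delta: u32,
    // ... route hints, expiry, features, etc.
}

// Parameters specific to one routing attempt: just the payment parameters
// plus the amount still being sought.
struct RouteParameters {
    payment_params: PaymentParameters,
    final_value_msat: u64,
}
```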
Elias Rohrer [Tue, 31 Jan 2023 23:15:46 +0000 (17:15 -0600)]
Add version note in `Confirm` docs
While `ChannelManager` will now only return previously confirmed
transactions, we can't ensure the same for `ChainMonitor`, as we need to
maintain backwards compatibility with versions prior to 0.0.113, in which
we started tracking the block hash in `ChannelMonitor`s. We therefore
add a note to the docs stating that users need to track confirmations on
their own for channels created prior to 0.0.113.
Elias Rohrer [Tue, 31 Jan 2023 23:07:31 +0000 (17:07 -0600)]
Return only `Some(block_hash)` in CM rel. txids
As of now the `Confirm::get_relevant_txids()` docs state that it won't
return any transactions for which we hadn't previously seen a
confirmation. To align its functionality a bit more with the docs, at
least for `ChannelManager`, we only return values for which we had
previously registered a confirmation block hash.
Matt Corallo [Mon, 16 Jan 2023 23:18:39 +0000 (23:18 +0000)]
Calc decayed buckets to decide if we have valid historical points
When calculating whether the historical data tracker has enough data
to assign a score once the pending decays are applied, we previously
calculated the decayed point total while walking the buckets, since we
don't otherwise use the decayed buckets (to avoid losing precision).
That is fine, except that, as written, it decayed individual buckets
multiple times.
Instead, here we actually calculate the full set of decayed buckets
and use those to decide if we have valid points. This uses some
additional stack space and may in fact be slower, but it will be
useful in the next commit and shouldn't be a huge change.
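A small sketch of the bucket decay being described, assuming each elapsed half-life halves every bucket (the real tracker's bucket count and validity threshold differ):
```
// Decay all buckets at once by the number of elapsed half-lives, then decide
// whether enough weight remains to produce a score. Computing the full
// decayed array (rather than decaying lazily while walking) avoids applying
// the decay to any individual bucket more than once.
fn decayed_buckets(buckets: &[u16; 8], half_lives: u32) -> [u16; 8] {
    let mut decayed = [0u16; 8];
    for (d, b) in decayed.iter_mut().zip(buckets.iter()) {
        // Shifting a u16 by 16 or more would overflow; treat large decays as
        // fully decayed instead.
        *d = if half_lives >= 16 { 0 } else { *b >> half_lives };
    }
    decayed
}

fn has_valid_points(buckets: &[u16; 8], half_lives: u32) -> bool {
    let total: u32 = decayed_buckets(buckets, half_lives).iter().map(|b| *b as u32).sum();
    total > 0 // the real threshold is a tracker constant, not simply non-zero
}
```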
Jeffrey Czyz [Thu, 19 Jan 2023 00:58:20 +0000 (18:58 -0600)]
Disallow offer_metadata in Refund
The offer_metadata was optional but is redundant with invreq_metadata
(i.e., payer_metadata) for refunds. It is now disallowed in the spec and
was already unsupported by RefundBuilder.
Jeffrey Czyz [Wed, 18 Jan 2023 23:29:31 +0000 (17:29 -0600)]
Allow quantity in Refund
The spec always allowed this but the reason was unclear. It's useful if
the refund is for an invoice paid for an offer where a quantity was given
in the request. The description in the refund would be from the offer,
which may have given a unit for each item. So allowing a quantity makes
it clear how many items the refund is for.
Jeffrey Czyz [Wed, 18 Jan 2023 22:44:16 +0000 (16:44 -0600)]
Support explicit quantity_max = 1 in Offer
The spec was modified to allow setting offer_quantity_max explicitly to
one. This is to support a use case where more than one item is supported
but only one item is left in the inventory. Introduce a Quantity::One
variant to replace Quantity::Bounded(1) so the latter can be used for the
explicit setting.
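The resulting variants look roughly like this sketch (see the `Quantity` type in the offers module for the real definition and docs):
```
use std::num::NonZeroU64;

// Sketch of the quantity setting on an offer. With a dedicated `One` variant,
// `Bounded(1)` is free to carry the explicit `offer_quantity_max = 1` case
// (e.g. multiple items supported, but only one left in inventory).
enum Quantity {
    /// Up to `n` items may be requested, where `n` may explicitly be one.
    Bounded(NonZeroU64),
    /// Any positive number of items may be requested.
    Unbounded,
    /// Exactly one item, with no quantity expected in the request.
    One,
}
```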
Matt Corallo [Thu, 26 Jan 2023 02:21:31 +0000 (02:21 +0000)]
Remove the `ChannelMonitor` secp context
`ChannelMonitor` indirectly already has a context - the
`OnchainTxHandler` has one. This makes it trivial to remove the
existing one, so we do so for a free memory usage reduction.
Matt Corallo [Thu, 26 Jan 2023 02:23:08 +0000 (02:23 +0000)]
Implement `PartialEq` for `ChannelMonitor`
It turns out `#[derive(PartialEq)]` will automatically bound the
generated `PartialEq` implementation by requiring the struct's generic
parameters to also be `PartialEq`. This means that to use an
auto-derived `ChannelMonitor` `PartialEq` the `EcdsaSigner` used must
also be `PartialEq`, but for the use cases we have today for a
`ChannelMonitor` `PartialEq` it doesn't really matter - we use it
internally in tests and downstream users wanted similar test-only
usage.
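A self-contained illustration of the derive behavior in question (types here are stand-ins, not LDK's):
```
// `#[derive(PartialEq)]` adds an `S: PartialEq` bound to the generated impl,
// so `Monitor<S>` is only comparable when the signer type is comparable.
#[derive(PartialEq)]
struct Monitor<S> {
    signer: S,
    latest_update_id: u64,
}

#[derive(PartialEq)]
struct TestSigner(u8);

fn main() {
    let a = Monitor { signer: TestSigner(1), latest_update_id: 7 };
    let b = Monitor { signer: TestSigner(1), latest_update_id: 7 };
    assert!(a == b); // fine: TestSigner is PartialEq
    // With a non-PartialEq signer type, `a == b` would not compile.
}
```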
Matt Corallo [Wed, 25 Jan 2023 02:56:13 +0000 (02:56 +0000)]
Make `test_duplicate_payment_hash_one_failure_one_success` robust
`test_duplicate_payment_hash_one_failure_one_success` currently
fails if the "wrong" HTLC is picked to be claimed. Given the HTLCs
are identical, there's no way to figure out which we should claim.
The test instead relies on a magic value - the first one is the
right one... unless we change our CSPRNG implementation. When we
try to do so, the test randomly fails.
Here we change one HTLC to a lower amount so we can figure out
which transaction to broadcast to make the test robust against
CSPRNG changes.
Alec Chen [Wed, 25 Jan 2023 18:27:59 +0000 (12:27 -0600)]
Add FailureCode enum and ChannelManager::fail_htlc_backwards_with_reason
FailureCode specifies which error code and data to send to peers
when failing back an HTLC.
ChannelManager::fail_htlc_backwards_with_reason allows a user to pick
the FailureCode (and thus the error code and corresponding data) sent
to peers when failing back an HTLC. This function is mentioned in the
Event::PaymentClaimable docs.
ChannelManager::get_htlc_fail_reason_from_failure_code was also
added to assist with this function.
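A simplified stand-in for the enum and the BOLT 4 error codes its variants correspond to (a sketch; the shipped type carries more detail):
```
enum FailureCode {
    TemporaryNodeFailure,
    RequiredNodeFeatureMissing,
    IncorrectOrUnknownPaymentDetails,
}

// The BOLT 4 error codes these illustrative variants map to when the HTLC is
// failed back (NODE = 0x2000, PERM = 0x4000).
fn error_code(code: &FailureCode) -> u16 {
    match code {
        FailureCode::TemporaryNodeFailure => 0x2000 | 2,
        FailureCode::RequiredNodeFeatureMissing => 0x4000 | 0x2000 | 3,
        FailureCode::IncorrectOrUnknownPaymentDetails => 0x4000 | 15,
    }
}
```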
Decode onion fail outside of outbound_payments lock
It's not ideal to do all this computation while the lock is held. We also want
to decode the failure *before* taking the lock, so we can store the failed scid
in the relevant outbound for retry in the next commit(s).
Matt Corallo [Wed, 25 Jan 2023 17:42:20 +0000 (17:42 +0000)]
Clean up `compute_fees` and add a saturating variant
Often when we call `compute_fees` we really just want it to
saturate and we deal with `u64::max_value` later. In that case,
we're much better off doing the saturating in `compute_fees` itself, as
it can use CMOVs rather than branching at each step and then
`unwrap_or`ing at the call site.
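A minimal sketch of the saturating computation, using the standard base-plus-proportional fee formula (types and names simplified):
```
// Fee = base + amount * proportional_millionths / 1_000_000, saturating at
// u64::MAX instead of returning an Option, so callers that only care about
// "too big" need not branch and `unwrap_or` at every call site.
fn compute_fees_saturating(amount_msat: u64, base_msat: u32, prop_millionths: u32) -> u64 {
    amount_msat
        .checked_mul(prop_millionths as u64)
        .map(|prop| prop / 1_000_000)
        .unwrap_or(u64::MAX)
        .saturating_add(base_msat as u64)
}
```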
Matt Corallo [Thu, 19 Jan 2023 17:59:10 +0000 (17:59 +0000)]
Swap `IndexedMap` implementation for a `HashMap`+B-Tree
Our network graph has to be iterable in a deterministic order and
with the ability to iterate over a specific range. Thus,
historically, we've used a `BTreeMap` to do the iteration. This is
fine, except our map needs to also provide high performance lookups
in order to make route-finding fast. Sadly, `BTreeMap`s are quite
slow due to the branching penalty.
Here we replace the implementation of our `IndexedMap` with a
`HashMap` to store the elements themselves and a `BTreeSet` to store
the key set in sorted order for iteration.
As of this commit on the same hardware as the above few commits,
the benchmark results are:
```
test routing::router::benches::generate_mpp_routes_with_probabilistic_scorer ... bench: 109,544,993 ns/iter (+/- 27,553,574)
test routing::router::benches::generate_mpp_routes_with_zero_penalty_scorer ... bench: 81,164,590 ns/iter (+/- 55,422,930)
test routing::router::benches::generate_routes_with_probabilistic_scorer ... bench: 34,726,569 ns/iter (+/- 9,646,345)
test routing::router::benches::generate_routes_with_zero_penalty_scorer ... bench: 22,772,355 ns/iter (+/- 9,574,418)
```
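A condensed sketch of the structure described (the real `IndexedMap` has more methods and uses LDK's hashing wrappers):
```
use std::collections::{BTreeSet, HashMap};
use std::hash::Hash;
use std::ops::RangeBounds;

// Keys live twice: once in the `HashMap` for O(1) lookups during routing and
// once in the `BTreeSet` for deterministic, range-based iteration.
struct IndexedMap<K: Ord + Hash + Clone, V> {
    map: HashMap<K, V>,
    keys: BTreeSet<K>,
}

impl<K: Ord + Hash + Clone, V> IndexedMap<K, V> {
    fn new() -> Self {
        IndexedMap { map: HashMap::new(), keys: BTreeSet::new() }
    }
    fn insert(&mut self, key: K, value: V) -> Option<V> {
        self.keys.insert(key.clone());
        self.map.insert(key, value)
    }
    fn get(&self, key: &K) -> Option<&V> {
        self.map.get(key) // hot path: hash lookup, no tree walk
    }
    // Iterate a sub-range of keys in sorted order (e.g. for gossip sync
    // queries); collected into a Vec here for simplicity.
    fn range<R: RangeBounds<K>>(&self, range: R) -> Vec<(&K, &V)> {
        self.keys.range(range).map(|k| (k, &self.map[k])).collect()
    }
}
```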
Matt Corallo [Tue, 25 Oct 2022 03:50:07 +0000 (03:50 +0000)]
Add a new `IndexedMap` type and use it in network graph storage
Our network graph has to be iterable in a deterministic order and
with the ability to iterate over a specific range. Thus,
historically, we've used a `BTreeMap` to do the iteration. This is
fine, except our map needs to also provide high performance lookups
in order to make route-finding fast. Sadly, `BTreeMap`s are quite
slow due to the branching penalty.
Here we replace the `BTreeMap`s in the network graph with a dummy wrapper.
In the next commit the internals thereof will be replaced with a
`HashMap`-based implementation.