Fred Walker [Sat, 18 Mar 2023 14:11:29 +0000 (10:11 -0400)]
Update route hint selection to prefer lowest channel above minimum
This commit updates the way that we choose our preferred channel per
counterparty when selecting route hints. Previously, we would choose
the largest usable channel above our requested minimum.
This change updates selection to prefer the smallest channel above the
minimum amount (if provided, plus a scaling factor of 10% to allow
some margin for error). This is the off-chain version of not "grinding
our change" - we want to supply route hints for channels that have
enough inbound to facilitate the receive, but not overload our
high-inbound channels with smaller payments (since we may need that
large chunk of inbound for larger payments later on).
If there is no minimum amount provided, we err on the side of caution
and just select our highest inbound channels so that payments of any
size have a chance of succeeding. Likewise, if we have no channels above
the minimum, we select the largest channel to maximize our chances of
receiving the payment in multiple parts.
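A minimal sketch of the selection rule described above, using an
illustrative stand-in for the per-channel data (not LDK's actual types
or field names):

```rust
/// Illustrative stand-in for the per-channel data relevant here.
struct Channel {
    inbound_capacity_msat: u64,
}

/// Pick the per-counterparty channel to use as a route hint: the smallest
/// channel whose inbound capacity covers the requested minimum (scaled up
/// by 10% for some margin of error), falling back to the largest channel.
fn preferred_hint_channel(channels: &[Channel], min_inbound_msat: Option<u64>) -> Option<&Channel> {
    let largest = channels.iter().max_by_key(|c| c.inbound_capacity_msat);
    match min_inbound_msat {
        // No minimum given: err on the side of caution and pick the largest.
        None => largest,
        Some(min) => {
            let scaled_min = min.saturating_add(min / 10); // 10% margin
            channels
                .iter()
                .filter(|c| c.inbound_capacity_msat >= scaled_min)
                .min_by_key(|c| c.inbound_capacity_msat)
                // Nothing above the minimum: use the largest channel to
                // maximize the chance of receiving in multiple parts.
                .or(largest)
        }
    }
}
```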
Matt Corallo [Thu, 30 Mar 2023 22:11:22 +0000 (22:11 +0000)]
Remove `futures` dependency in `lightning-background-processor`
As `futures` apparently makes no guarantees on MSRVs even in patch
releases, we really can't rely on it at all, and while it currently
has an acceptable MSRV without the macros feature, it's best to just
remove it wholesale.
Luckily, removing it is relatively trivial, even if it requires
the most trivial of unsafe tags.
Matt Corallo [Thu, 30 Mar 2023 21:52:03 +0000 (21:52 +0000)]
Replace `futures` `select` with our own select enum to fix MSRV
`futures` recently broke our MSRV by bumping the `syn` major
version in a patch release. This makes it impractical for us to
use; instead, here we replace the usage of its `select_biased` macro
with a trivial enum.
Given its simplicity we likely should have done this without ever
taking the dependency.
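Conceptually, the replacement is a tiny hand-rolled future that polls
its two inner futures in a fixed order and reports which finished; a
simplified sketch (the names and the `()`-output bound are illustrative,
not the actual code):

```rust
use core::future::Future;
use core::pin::Pin;
use core::task::{Context, Poll};

/// Which of the two selected-over futures finished first.
enum SelectorOutput {
    A,
    B,
}

/// A tiny, biased two-future selector: always polls `a` before `b`.
struct Selector<A: Future<Output = ()> + Unpin, B: Future<Output = ()> + Unpin> {
    a: A,
    b: B,
}

impl<A: Future<Output = ()> + Unpin, B: Future<Output = ()> + Unpin> Future for Selector<A, B> {
    type Output = SelectorOutput;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<SelectorOutput> {
        if let Poll::Ready(()) = Pin::new(&mut self.a).poll(cx) {
            return Poll::Ready(SelectorOutput::A);
        }
        if let Poll::Ready(()) = Pin::new(&mut self.b).poll(cx) {
            return Poll::Ready(SelectorOutput::B);
        }
        Poll::Pending
    }
}
```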
Matt Corallo [Thu, 30 Mar 2023 18:51:56 +0000 (18:51 +0000)]
Avoid connection-per-RPC-call again by caching connections
In general, only one request will be in flight at a time in
`lightning-block-sync`. Ideally we'd only ever have one connection,
but managing that is awkward without the `futures` mutex type, which
we just dropped.
Here we solve this narrowly for the one-request-at-a-time case by
caching the connection and taking the connection out of the cache
while we work on it.
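The cache is conceptually just an `Option` behind a standard
(non-async) mutex, with the connection taken out while a request is in
flight; a rough sketch, where `Connection` stands in for the real HTTP
connection type:

```rust
use std::sync::Mutex;

/// Stand-in for the actual HTTP client connection type.
struct Connection;

struct RpcClient {
    cached_connection: Mutex<Option<Connection>>,
}

impl RpcClient {
    fn send_request(&self) {
        // Take the cached connection out (if any); a concurrent caller
        // simply opens its own rather than blocking on an async-unaware lock.
        let conn = self.cached_connection.lock().unwrap().take()
            .unwrap_or_else(|| Connection /* open a new connection here */);
        // ... perform the request over `conn` ...
        // Put the connection back for the next request to reuse.
        *self.cached_connection.lock().unwrap() = Some(conn);
    }
}
```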
Matt Corallo [Thu, 30 Mar 2023 18:33:04 +0000 (18:33 +0000)]
Drop `futures` dependency from `lightning-block-sync`
Somehow I'd understood that `futures` had reasonable MSRV
guarantees (e.g. at least Debian stable), but apparently that isn't
actually the case, as they bumped it to upgrade to syn (with
apparently no actual features or bugfixes added as a result?) with
no minor version bump or any available alternative (unlike Tokio,
which does LTS releases).
Luckily it's relatively easy to just drop the `futures` dependency -
it means a new connection for each request, which is annoying, but
certainly not the end of the world, and it's easier than trying to
deal with pinning `futures`.
See https://github.com/rust-lang/futures-rs/pull/2733
Wilmer Paulino [Wed, 22 Mar 2023 18:46:05 +0000 (11:46 -0700)]
Ignore lockorder violation on same callsite with different construction
As long as the lock order on such locks is still valid, we should allow
them regardless of whether they were constructed at the same location or
not. Note that we can only really enforce this if we require one lock
call per line, or if we have access to symbol columns (as we do on Linux
and macOS). We opt for a smaller patch by relying on the latter.
This was previously triggered by some recent test changes to
`test_manager_serialize_deserialize_inconsistent_monitor`. When the
test ends and each node is dropped, causing us to persist it, we'd detect
a possible lockorder violation deadlock across three different `Mutex`
instances that are held at the same location when serializing our
`per_peer_states` in `ChannelManager::write`.
The presumed lockorder violation happens because the first `Mutex` held
shares the same construction location with the third one, while the
second `Mutex` has a different construction location. When we hold the
second one, we consider the first as a dependency, and then consider the
second as a dependency when holding the third, causing a circular
dependency (since the third shares the same construction location as the
first).
This isn't a lockorder violation that could actually result in a
deadlock though, as the inline comment notes, since we are under a
dependent write lock which no one else can have access to.
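A simplified illustration of the shape of the pattern (not the actual
`ChannelManager` code): two `Mutex`es constructed at the same callsite
bracket one constructed elsewhere, all taken while an exclusive outer
lock is held.

```rust
use std::sync::{Mutex, RwLock};

fn main() {
    // Both of these are constructed at the same callsite (inside the map).
    let per_peer: Vec<Mutex<u32>> = (0u32..2).map(Mutex::new).collect();
    // This one is constructed at a different callsite.
    let other = Mutex::new(0u32);
    let outer = RwLock::new(());

    // While the outer write lock is held, no other thread can take these
    // locks, so the apparent A -> B -> A' cycle cannot actually deadlock.
    let _guard = outer.write().unwrap();
    let _a = per_peer[0].lock().unwrap();
    let _b = other.lock().unwrap();
    let _c = per_peer[1].lock().unwrap();
}
```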
Alec Chen [Tue, 7 Mar 2023 01:51:44 +0000 (19:51 -0600)]
Use onion amount `amt_to_forward` for MPP set calculation
If routing nodes take less fees and pay the final node more than
`amt_to_forward`, the receiver may see that `total_msat` has been met
before all of the sender's intended HTLCs have arrived. The receiver
may then prematurely claim the payment and release the payment hash,
allowing routing nodes to claim the remaining HTLCs. Using the onion
value `amt_to_forward` to determine when `total_msat` has been met
allows the sender to control the set total.
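In other words, the "have we received the whole MPP?" check now sums
the sender-controlled onion amounts rather than the possibly-inflated
HTLC values; a hedged sketch with illustrative field names:

```rust
/// Illustrative per-HTLC data for a pending MPP part.
struct PendingPart {
    /// Amount the sender committed to in the onion payload.
    onion_amt_to_forward_msat: u64,
    /// Amount actually delivered by the HTLC (may be larger if routing
    /// nodes took less fees than the sender budgeted for).
    htlc_value_msat: u64,
}

/// The payment only becomes claimable once the sender-controlled onion
/// amounts (not the possibly-inflated HTLC values) reach `total_msat`.
fn mpp_complete(parts: &[PendingPart], total_msat: u64) -> bool {
    parts.iter().map(|p| p.onion_amt_to_forward_msat).sum::<u64>() >= total_msat
}
```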
Alec Chen [Tue, 7 Mar 2023 01:33:45 +0000 (19:33 -0600)]
Allow overshooting final cltv_expiry
Final nodes previously had stricter requirements on HTLC contents
matching onion value compared to intermediate nodes. This allowed
for probing, i.e. the last intermediate node could overshoot the
value by a small amount and conclude from the acceptance or rejection
of the HTLC whether the next node was the destination. The same
applies to the msat amount, though that change was already present.
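Concretely, the final-hop expiry check becomes a lower bound rather
than an exact match; a rough sketch of the relaxed condition (not the
exact LDK code):

```rust
/// Before, the HTLC's expiry had to match the onion value exactly, which
/// let the last intermediate node probe by overshooting slightly. Now the
/// HTLC merely has to provide at least the expiry given in the onion.
fn final_hop_cltv_acceptable(htlc_cltv_expiry: u32, onion_outgoing_cltv_value: u32) -> bool {
    htlc_cltv_expiry >= onion_outgoing_cltv_value
}
```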
Alec Chen [Wed, 1 Mar 2023 00:42:39 +0000 (18:42 -0600)]
Allow overshooting `total_msat` for an MPP
While retrying a failed path of an MPP, a node may want to overshoot
the `total_msat` in order to use a path with an `htlc_minimum_msat`
greater than the remaining value being sent. This commit no longer
fails MPPs that overshoot the `total_msat`; however, it does fail
HTLCs with the same payment hash that are received *after* a
payment has become claimable.
Alec Chen [Thu, 23 Mar 2023 19:34:57 +0000 (14:34 -0500)]
Add `total_value_received` to `ClaimableHTLC` for claim validation
This is pre-work for allowing nodes to overshoot onion values and
changing validation for MPP completion. This adds a field to
`ClaimableHTLC`, separate from the onion values, that represents the
actual received amount reported in `PaymentClaimable`, which is what
we want to validate against when a user goes to claim.
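A simplified sketch of the distinction being drawn (field names are
illustrative, not the full LDK struct):

```rust
/// Illustrative subset of the fields involved.
struct ClaimableHTLC {
    /// Amount committed to in the onion payload (sender-controlled).
    onion_amt_to_forward_msat: u64,
    /// The amount actually received, which may exceed the onion value;
    /// this is what `PaymentClaimable` reports and what a claim is
    /// validated against.
    total_value_received_msat: Option<u64>,
}
```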
Wilmer Paulino [Tue, 7 Mar 2023 01:01:21 +0000 (17:01 -0800)]
Expose HTLC transaction locktime in BumpTransactionEvent::HTLCResolution
While users could easily figure it out based on the set of HTLC
descriptors included within, we already track it within the
`OnchainTxHandler`, so we might as well expose it to users as a
nice-to-have. It's also yet another thing they must get right to ensure
their HTLC transaction broadcasts are valid.
Wilmer Paulino [Tue, 28 Mar 2023 19:09:13 +0000 (12:09 -0700)]
Set transaction locktime on malleable packages to discourage fee sniping
This applies to all malleable packages on channels pre-dating
anchors, and to malleable packages for counterparty commitments
post-anchors. Malleable packages for holder commitments post-anchors
should have their transaction locktime applied manually by the
consumer of `BumpTransactionEvent::HTLCResolution` events.
Wilmer Paulino [Tue, 28 Mar 2023 19:03:19 +0000 (12:03 -0700)]
Re-work PackageSolvingData::absolute_tx_timelock
Previously, this would return the earliest height at which the output
could be confirmed, which no longer seems useful. The only use of the
method was to determine whether we should delay a package to a future
block. Instead, we choose to return the absolute locktime the
transaction spending the output should have, which better corresponds to
the method name and still supports the delay functionality mentioned.
Doing so also allows us to expose the locktime required for HTLC
transactions we need to broadcast based on our own commitments for
anchor channels.
Elias Rohrer [Mon, 27 Mar 2023 11:19:15 +0000 (13:19 +0200)]
Move test `bitcoind`/`electrsd` out of `OnceCell`
`OnceCell` doesn't call `drop`, which makes the spawned
`bitcoind`/`electrsd` instances linger around after our tests have
finished. To fix this, we move them out of `OnceCell` and let every test
that needs them spawn their own instances. This additionally lets us
drop the `OnceCell` dev dependency.
Wilmer Paulino [Tue, 7 Mar 2023 21:57:01 +0000 (13:57 -0800)]
Move events.rs into its own top-level module
This is largely motivated by some follow-up work for anchors that will
introduce an event handler for `BumpTransaction` events, which we can
now include in this new top-level `events` module.
Wilmer Paulino [Tue, 28 Feb 2023 18:45:48 +0000 (10:45 -0800)]
Avoid refusing ChannelMonitorUpdates we expect to receive after closing
There is no need to fill the user's logs with errors that are expected
to be hit based on specific edge cases, like providing preimages after
a monitor has seen a confirmed commitment on-chain.
This doesn't really change our behavior – we still apply and persist the
state changes resulting from processing these updates regardless of
whether they succeed or not.
Wilmer Paulino [Tue, 28 Feb 2023 18:45:48 +0000 (10:45 -0800)]
Queue BackgroundEvent to force close channels upon ChannelManager::read
This results in a new, potentially redundant, `ChannelMonitorUpdate`
that must be applied to `ChannelMonitor`s to broadcast the holder's
latest commitment transaction.
This is a behavior change for anchor channels since their commitments
may require additional fees to be attached through a child anchor
transaction. Recall that anchor transactions are only generated by the
event consumer after processing a `BumpTransactionEvent::ChannelClose`
event, which is yielded after applying a
`ChannelMonitorUpdateStep::ChannelForceClosed` monitor update. Assuming
the node operator is not watching the mempool to generate these anchor
transactions without LDK, an anchor channel which we had to fail when
deserializing our `ChannelManager` would have its commitment transaction
broadcast by itself, potentially exposing the node operator to loss of
funds if the commitment transaction's fee is not enough to be accepted
into the network's mempools.
Wilmer Paulino [Tue, 28 Feb 2023 18:45:44 +0000 (10:45 -0800)]
Use CLOSED_CHANNEL_UPDATE_ID in force closing ChannelMonitorUpdates
Currently, all that is required to force close a channel is to broadcast
either of the available commitment transactions, but this changes with
anchor outputs – commitment transactions may need to have
additional fees attached in order to confirm in a timely manner. While
we may be able to just queue a new update using the channel's next
available update ID, this may result in a violation of the
`ChannelMonitor` API (each update ID must strictly increase by 1) if the
channel had updates that were persisted by its `ChannelMonitor`, but not
the `ChannelManager`. Therefore, we choose to re-purpose the existing
`CLOSED_CHANNEL_UPDATE_ID` update ID to also apply to
`ChannelMonitorUpdate`s that will force close their respective channel
by broadcasting the holder's latest commitment transaction.
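A sketch of the resulting update, using simplified local stand-ins for
the LDK types (the sentinel value and the exact step contents are
illustrative):

```rust
/// Simplified local stand-ins for the LDK types involved.
const CLOSED_CHANNEL_UPDATE_ID: u64 = u64::MAX;

struct ChannelMonitorUpdate {
    update_id: u64,
    updates: Vec<ChannelMonitorUpdateStep>,
}

enum ChannelMonitorUpdateStep {
    ChannelForceClosed { should_broadcast: bool },
}

fn force_close_update() -> ChannelMonitorUpdate {
    ChannelMonitorUpdate {
        // Reuse the sentinel ID so we never violate the "update IDs increase
        // by exactly one" rule when monitor and manager persistence diverged.
        update_id: CLOSED_CHANNEL_UPDATE_ID,
        updates: vec![ChannelMonitorUpdateStep::ChannelForceClosed { should_broadcast: true }],
    }
}
```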
Remove redundant `addresses` field from `NodeAnnouncementInfo`
...replacing it with an accessor `addresses()`.
Besides removing a redundant data structure already present on the inner
`NodeAnnouncement`, this change makes it possible to discover new address
types upon deserialization thanks to `UnsignedNodeAnnouncement`'s
implementation.
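A sketch of the accessor's shape with stand-in types (the real
implementation reads the addresses off the inner announcement):

```rust
/// Simplified stand-in types.
struct NetAddress;
struct UnsignedNodeAnnouncement {
    addresses: Vec<NetAddress>,
}
struct NodeAnnouncementInfo {
    announcement_message: Option<UnsignedNodeAnnouncement>,
}

impl NodeAnnouncementInfo {
    /// Addresses are no longer stored redundantly; they are read off the
    /// inner announcement, so newly-added address types survive a
    /// serialization round trip.
    fn addresses(&self) -> &[NetAddress] {
        self.announcement_message
            .as_ref()
            .map(|ann| ann.addresses.as_slice())
            .unwrap_or(&[])
    }
}
```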
Wilmer Paulino [Tue, 14 Feb 2023 23:49:37 +0000 (15:49 -0800)]
Maintain order of yielded claim events
Since the claim events are stored internally within a HashMap, they will
be yielded in a random order once dispatched. Claim events may be
invalidated if a conflicting claim has confirmed on-chain and we need to
generate a new claim event; the randomized order could result in the
new claim event being handled prior to the previous. To maintain the
order in which the claim events are generated, we track them in a Vec
instead and ensure only one instance of a PackageId ever exists
within it.
This would have certain performance implications, but since we're
bounded by the total number of HTLCs in a commitment anyway, we're
comfortable with taking the cost.
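Conceptually, the `HashMap` becomes a `Vec` with manual de-duplication
on the claim's package ID; a simplified sketch with stand-in types:

```rust
/// Stand-ins for the real identifier and event types.
type PackageId = [u8; 32];
struct ClaimEvent;

struct PendingClaimEvents {
    // A Vec keeps events in the order they were generated; a HashMap would
    // yield them in a random order when dispatched.
    events: Vec<(PackageId, ClaimEvent)>,
}

impl PendingClaimEvents {
    /// Insert or replace, so only a single entry per `PackageId` ever
    /// exists while generation order is preserved.
    fn upsert(&mut self, id: PackageId, event: ClaimEvent) {
        if let Some(existing) = self.events.iter_mut().find(|(eid, _)| *eid == id) {
            existing.1 = event;
        } else {
            self.events.push((id, event));
        }
    }
}
```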
Wilmer Paulino [Wed, 15 Feb 2023 00:24:30 +0000 (16:24 -0800)]
Clarify OnchainEvent::Claim behavior
The previous documentation was slightly incorrect: a `Claim` can also be
from the counterparty if they happened to claim the same exact set of
outputs as a claiming transaction we generated.
Wilmer Paulino [Tue, 14 Feb 2023 23:35:40 +0000 (15:35 -0800)]
Implement PartialEq manually
Since we don't store `pending_claim_events` within `OnchainTxHandler` as
they'll be regenerated on restarts, we opt to implement `PartialEq`
manually such that the field is no longer considered.
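That is, the derived implementation is replaced by one that compares
everything except the regenerated field; a sketch of the shape, with an
illustrative (much reduced) field set:

```rust
/// Illustrative, much-reduced field set.
struct OnchainTxHandler {
    latest_height: u32,
    pending_claim_events: Vec<u64>, // regenerated on restart, so not compared
}

impl PartialEq for OnchainTxHandler {
    fn eq(&self, other: &Self) -> bool {
        // Compare persisted state only; `pending_claim_events` is rebuilt
        // after a restart and must not affect equality.
        self.latest_height == other.latest_height
    }
}
```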
Matt Corallo [Sun, 19 Mar 2023 23:39:56 +0000 (23:39 +0000)]
Drop `serde` dependency from `lightning-block-sync`
`serde` doesn't bother with MSRVs, so it's expected to break
frequently. Yesterday, the `derive` feature had its MSRV broken in
a patch version without care.
Luckily it's trivial for us to remove the `serde` dependency in
`lightning-block-sync`, using only `serde_json` for the JSON
deserialization part. It even ends up net-negative on LoC.
Matt Corallo [Thu, 9 Feb 2023 19:02:04 +0000 (19:02 +0000)]
Include a route hint for public, not-yet-announced channels
If we have a public channel which doesn't yet have six
confirmations, the network can't possibly know about it as we cannot
have announced it yet. However, because we refuse to include
route-hints if we have any public channels, we will generate
invoices that no one can pay.
Thus, if we have any public, not-yet-announced channels, include
them as a route-hint.
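The inclusion condition is roughly as below (an illustrative sketch,
not LDK's actual per-channel fields):

```rust
/// Illustrative per-channel flags.
struct ChannelInfo {
    is_public: bool,
    is_announced: bool,
}

/// A channel belongs in an invoice's route hints if it is private, or if
/// it is public but the network cannot know about it yet because it has
/// not been announced (e.g. fewer than six confirmations).
fn include_in_route_hints(chan: &ChannelInfo) -> bool {
    !chan.is_public || !chan.is_announced
}
```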
Matt Corallo [Wed, 15 Mar 2023 18:08:35 +0000 (18:08 +0000)]
Bump MSRV to 1.48
1.48.0 was released at the end of 2020, nearly 2.5 years ago. It
has been the rustc available on Debian stable since bullseye,
released in 2021. Supporting Debian oldstable for more than a year
seems like more than sufficient time for Debian folks to upgrade,
and bullseye is set to become `oldstable` later this year with the
release of `bookworm`, likely this summer.
This also allows us to clean up our MSRV substantially, having a
single MSRV across our crates rather than a number of separate
ones. Sadly, Windows already requires 1.49.
Matt Corallo [Thu, 9 Mar 2023 19:23:58 +0000 (19:23 +0000)]
Correct `outbound_payment` route-fetch calls to pass the hash + ID
`Route::get_route_with_id` exists to provide users payment-specific
data when fetching a route, however we were failing to call it when
we have such info, opting for the simple `get_route` instead. This
defeats the purpose of the additional-metadata method, which we
swap to using here.
Elias Rohrer [Wed, 8 Mar 2023 11:05:57 +0000 (12:05 +0100)]
Support HTTPS Esplora endpoints via new feature
To support HTTPS endpoints, the async HTTP library `reqwest` needs one of
the `-tls` features enabled. While users could specify this in their
own cargo dependencies, we here provide a new `esplora-async-https`
feature for convenience.
Elias Rohrer [Tue, 7 Mar 2023 10:19:41 +0000 (11:19 +0100)]
Add `list_channels_by_counterparty` method
While we already provide a `list_channels` method, it could result in
quite a large `Vec<ChannelDetails>`. Here, we provide the means to query
our channels by `counterparty_node_id` and DRY up the code.
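A simplified sketch of the new query's shape, with the types reduced to
the one field that matters here (the method name follows this commit's
title; everything else is illustrative):

```rust
/// Stand-ins reduced to the one field that matters for this query.
#[derive(Clone)]
struct ChannelDetails {
    counterparty_node_id: [u8; 33],
}

struct Channels {
    channels: Vec<ChannelDetails>,
}

impl Channels {
    /// Return only the channels with the given counterparty, avoiding a
    /// full `list_channels` allocation when the caller already knows the peer.
    fn list_channels_by_counterparty(&self, counterparty_node_id: &[u8; 33]) -> Vec<ChannelDetails> {
        self.channels
            .iter()
            .filter(|c| c.counterparty_node_id == *counterparty_node_id)
            .cloned()
            .collect()
    }
}
```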
Matt Corallo [Tue, 7 Mar 2023 18:06:12 +0000 (18:06 +0000)]
Avoid `poll`ing completed futures in the `background-processor`
`poll`ing completed futures invokes undefined behavior in Rust
(panics, etc., obviously not memory corruption as it's not unsafe).
Sadly, in our futures-based version of
`lightning-background-processor` we have one case where we can
`poll` a completed future - if the timer for the network graph
prune + persist completes without a network graph to prune +
persist we'll happily poll the same future over and over again,
likely panicking in user code.
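The fix is conceptually a "fuse": remember that the future completed
and never poll it again; a sketch of the idea (not the actual
background-processor code):

```rust
use core::future::Future;
use core::pin::Pin;
use core::task::{Context, Poll};

/// Wraps a future and refuses to poll it again after completion, rather
/// than invoking whatever behavior (panics, etc.) the inner future has.
struct FusedFuture<F: Future<Output = ()> + Unpin> {
    inner: F,
    done: bool,
}

impl<F: Future<Output = ()> + Unpin> FusedFuture<F> {
    fn poll_if_pending(&mut self, cx: &mut Context<'_>) -> Poll<()> {
        if self.done {
            // Already finished: report readiness without touching `inner`.
            return Poll::Ready(());
        }
        match Pin::new(&mut self.inner).poll(cx) {
            Poll::Ready(()) => {
                self.done = true;
                Poll::Ready(())
            }
            Poll::Pending => Poll::Pending,
        }
    }
}
```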