Elias Rohrer [Mon, 27 Mar 2023 11:19:15 +0000 (13:19 +0200)]
Move test `bitcoind`/`electrsd` out of `OnceCell`
`OnceCell` doesn't call `drop`, which makes the spawned
`bitcoind`/`electrsd` instances linger around after our tests have
finished. To fix this, we move them out of `OnceCell` and let every test
that needs them spawn their own instances. This additionally lets us drop
the `OnceCell` dev dependency.
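For illustration, a minimal sketch (not LDK test code) of why a value parked in a static `OnceCell` never has `drop` run:

```rust
use once_cell::sync::OnceCell;

struct Daemon;

impl Drop for Daemon {
    fn drop(&mut self) {
        // Statics are never dropped at process exit, so for a value living
        // in a static `OnceCell` this line is never reached - a spawned
        // daemon would simply keep running after the tests finish.
        println!("shutting down daemon");
    }
}

static DAEMON: OnceCell<Daemon> = OnceCell::new();

fn main() {
    DAEMON.get_or_init(|| Daemon);
    // `main` returns, the process exits, and `Daemon::drop` never runs.
}
```

Letting each test own its instances instead means they are dropped, and the daemons shut down, as each test finishes.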
Wilmer Paulino [Tue, 7 Mar 2023 21:57:01 +0000 (13:57 -0800)]
Move events.rs into its own top-level module
This is largely motivated by some follow-up work for anchors that will
introduce an event handler for `BumpTransaction` events, which we can
now include in this new top-level `events` module.
Remove redundant `addresses` field from `NodeAnnouncementInfo`
...replacing it with an accessor `addresses()`.
Besides removing a data structure that duplicates what's already present
on the inner `NodeAnnouncement`, this change makes it possible to discover
new address types upon deserialization, thanks to
`UnsignedNodeAnnouncement`'s implementation.
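A rough sketch of the shape of this change, using stand-in types rather than the real LDK definitions:

```rust
// Stand-in types; the real LDK structs differ.
struct UnsignedNodeAnnouncement { addresses: Vec<String> }
struct NodeAnnouncement { contents: UnsignedNodeAnnouncement }
struct NodeAnnouncementInfo { announcement_message: Option<NodeAnnouncement> }

impl NodeAnnouncementInfo {
    // Accessor replacing the removed duplicate field: addresses are read
    // from the inner announcement rather than being stored a second time.
    fn addresses(&self) -> &[String] {
        self.announcement_message
            .as_ref()
            .map(|ann| ann.contents.addresses.as_slice())
            .unwrap_or(&[])
    }
}
```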
Wilmer Paulino [Tue, 14 Feb 2023 23:49:37 +0000 (15:49 -0800)]
Maintain order of yielded claim events
Since the claim events are stored internally within a HashMap, they will
be yielded in a random order once dispatched. Claim events may be
invalidated if a conflicting claim has confirmed on-chain and we need to
generate a new claim event; the randomized order could result in the
new claim event being handled prior to the previous one. To maintain
the order in which the claim events are generated, we track them in a
Vec instead and ensure only one instance of a PackageId ever exists
within it.
This has certain performance implications, but since we're bounded by
the total number of HTLCs in a commitment transaction anyway, we're
comfortable taking the cost.
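A minimal sketch of the Vec-based bookkeeping (names are illustrative, not the actual `OnchainTxHandler` fields):

```rust
type PackageId = [u8; 32];

struct ClaimEvent; // stand-in for the real event type

/// Pending claim events, kept in generation order. Before pushing a new
/// event we drop any existing entry for the same package, so at most one
/// event per `PackageId` is ever queued. The linear scan is acceptable
/// since we're bounded by the HTLC count of a commitment transaction.
struct PendingClaims(Vec<(PackageId, ClaimEvent)>);

impl PendingClaims {
    fn upsert(&mut self, id: PackageId, event: ClaimEvent) {
        self.0.retain(|(existing, _)| *existing != id);
        self.0.push((id, event));
    }
}
```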
Wilmer Paulino [Wed, 15 Feb 2023 00:24:30 +0000 (16:24 -0800)]
Clarify OnchainEvent::Claim behavior
The previous documentation was slightly incorrect: a `Claim` can also
come from the counterparty if they happened to claim the exact same set
of outputs as a claiming transaction we generated.
Wilmer Paulino [Tue, 14 Feb 2023 23:35:40 +0000 (15:35 -0800)]
Implement PartialEq manually
Since we don't persist `pending_claim_events` within `OnchainTxHandler`
(they'll be regenerated on restart), we opt to implement `PartialEq`
manually so that the field is no longer considered.
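The pattern looks roughly like this (field names illustrative):

```rust
struct OnchainTxHandler {
    destination_script: Vec<u8>,
    // Regenerated on restart, so deliberately excluded from equality.
    pending_claim_events: Vec<u8>,
}

impl PartialEq for OnchainTxHandler {
    fn eq(&self, other: &Self) -> bool {
        // Compare only the persisted state; `pending_claim_events` is
        // intentionally ignored.
        self.destination_script == other.destination_script
    }
}
```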
Matt Corallo [Sun, 19 Mar 2023 23:39:56 +0000 (23:39 +0000)]
Drop `serde` dependency from `lightning-block-sync`
`serde` doesn't bother with MSRVs, so it's expected to break
frequently. Yesterday, the `derive` feature had its MSRV broken in
a patch version without care.
Luckily, it's trivial for us to remove the `serde` dependency in
`lightning-block-sync`, using only `serde_json` for the JSON
deserialization part. It even ends up net-negative on LoC.
Matt Corallo [Thu, 9 Feb 2023 19:02:04 +0000 (19:02 +0000)]
Include a route hint for public, not-yet-announced channels
If we have a public channel which doesn't yet have six
confirmations, the network can't possibly know about it, as we cannot
have announced it yet. However, because we refuse to include
route-hints if we have any public channels, we will generate
invoices that no one can pay.
Thus, if we have any public, not-yet-announced channels, include them
as route hints.
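A simplified sketch of the per-channel rule (the real `lightning-invoice` logic also weighs the rest of the channel set; `is_announced` is an assumed flag):

```rust
struct ChannelDetails {
    is_public: bool,
    // Assumed flag: the channel has enough confirmations to have been
    // announced to the network.
    is_announced: bool,
}

/// A public channel that can't yet be announced is invisible to the
/// network, so it needs a route hint just like a private channel does.
fn needs_route_hint(chan: &ChannelDetails) -> bool {
    !chan.is_public || !chan.is_announced
}
```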
Matt Corallo [Wed, 15 Mar 2023 18:08:35 +0000 (18:08 +0000)]
Bump MSRV to 1.48
1.48.0 was released at the end of 2020, nearly 2.5 years ago. It
has been the rustc available on Debian stable since bullseye,
released in 2021. Supporting Debian oldstable for more than a year
seems like more than enough time for Debian folks to upgrade,
and bullseye is set to become `oldstable` later this year with the
release of `bookworm`, likely this summer.
This also allows us to clean up our MSRV substantially, having a
single MSRV across our crates rather than a number of separate
ones. Sadly, Windows already requires 1.49.
Matt Corallo [Thu, 9 Mar 2023 19:23:58 +0000 (19:23 +0000)]
Correct `outbound_payment` route-fetch calls to pass the hash + ID
`Route::get_route_with_id` exists to provide users with payment-specific
data when fetching a route; however, we were failing to call it when
we had such info, opting for the simple `get_route` instead. This
defeats the purpose of the additional-metadata method, which we
switch to using here.
Elias Rohrer [Wed, 8 Mar 2023 11:05:57 +0000 (12:05 +0100)]
Support HTTPS Esplora endpoints via new feature
To support HTTPS endpoints, the async HTTP library `reqwest` needs one of
the `-tls` features enabled. While users could specify this in their own
cargo dependencies, we provide a new `esplora-async-https` feature here
for convenience.
Elias Rohrer [Tue, 7 Mar 2023 10:19:41 +0000 (11:19 +0100)]
Add `list_channels_by_counterparty` method
While we already provide a `list_channels` method, it could result in
quite a large `Vec<ChannelDetails>`. Here, we provide the means to query
our channels by `counterparty_node_id` and DRY up the code.
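Conceptually, the new method is a filter over the existing channel list; a simplified sketch with stand-in types:

```rust
type PublicKey = [u8; 33]; // stand-in for the real secp256k1 key type

struct ChannelDetails { counterparty_node_id: PublicKey }

struct ChannelManager { channels: Vec<ChannelDetails> }

impl ChannelManager {
    /// Like `list_channels`, but restricted to one peer, avoiding a
    /// potentially large `Vec<ChannelDetails>` on busy nodes.
    fn list_channels_by_counterparty(&self, peer: &PublicKey) -> Vec<&ChannelDetails> {
        self.channels
            .iter()
            .filter(|c| &c.counterparty_node_id == peer)
            .collect()
    }
}
```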
Matt Corallo [Tue, 7 Mar 2023 18:06:12 +0000 (18:06 +0000)]
Avoid `poll`ing completed futures in the `background-processor`
`poll`ing completed futures invokes undefined behavior in Rust
(panics, etc.; obviously not memory corruption, as it's not `unsafe`).
Sadly, in our futures-based version of
`lightning-background-processor` we have one case where we can
`poll` a completed future - if the timer for the network graph
prune + persist completes without a network graph to prune +
persist we'll happily poll the same future over and over again,
likely panicking in user code.
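One common guard against re-polling, sketched here with an `Option`-wrapped future (illustrative, not the actual `background-processor` code):

```rust
use std::future::Future;

/// Hold the future in an `Option` and `take()` it before awaiting, so a
/// completed future can never be polled again on a later timer tick.
async fn run_once<F: Future<Output = ()>>(slot: &mut Option<F>) {
    if let Some(fut) = slot.take() {
        fut.await;
    }
    // A later call finds `None` and returns instead of re-polling.
}
```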
Wilmer Paulino [Wed, 22 Feb 2023 19:46:21 +0000 (11:46 -0800)]
Update same amount and preimage test vector
The amount for HTLC #6 was updated in the spec's test vectors, but the
"same amount and preimage" test vector itself was not updated, even
though the new HTLC amount resulted in a different commitment
transaction, and thus, different signatures.
Wilmer Paulino [Wed, 22 Feb 2023 19:45:43 +0000 (11:45 -0800)]
Add missing test vector for anchors_zero_fee_htlc_tx
Tests the case where only one anchor output exists for the funder in the
commitment transaction due to the remote having a dust balance (in this
case, 0).
Matt Corallo [Sat, 4 Mar 2023 01:16:57 +0000 (01:16 +0000)]
Make `fuzz_threaded_connections` more robust
In `fuzz_threaded_connections`, if one thread is being run while
another is starved, and the running thread manages to call
`timer_tick_occurred` twice after the starved thread constructs the
inbound connection but before it delivers the first bytes, we'll
receive an immediate error and `unwrap` it, causing failure.
The fix is trivial, simply remove the unwrap and return if we're
already disconnected when we do the initial read.
While we're here, we also reduce the frequency of the
`timer_tick_occurred` calls to give us a chance to occasionally
deliver some additional messages.
Jeffrey Czyz [Fri, 6 Jan 2023 04:00:31 +0000 (22:00 -0600)]
Guard against division by zero in scorer
Since a node may announce an `htlc_maximum_msat` of zero for a channel,
adding one to the denominator in the bucket formulas prevents the
division-by-zero panic from ever happening. While the routing algorithm may never
select such a channel to score, this precaution may still be useful in
case the algorithm changes or if the scorer is used with a different
routing algorithm.
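For illustration, a simplified bucket formula with the guard in place (not the scorer's exact math):

```rust
/// Map an amount into one of eight buckets relative to the channel's
/// `htlc_maximum_msat`. The `+ 1` keeps the division defined even when a
/// node announces an `htlc_maximum_msat` of zero.
fn bucket_index(amount_msat: u64, htlc_maximum_msat: u64) -> u8 {
    (amount_msat.saturating_mul(8) / htlc_maximum_msat.saturating_add(1)).min(7) as u8
}
```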
Matt Corallo [Fri, 3 Mar 2023 20:03:57 +0000 (20:03 +0000)]
Expose the node secret key in `{Phantom,}KeysManager`
When we removed the private keys from the signing interface we
forgot to re-add them in the public interface of our own
implementations, which users may need.
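Usage would then look something like the following sketch (the accessor name is assumed from this change; the module path matches the LDK of this era):

```rust
use lightning::chain::keysinterface::KeysManager;
use std::time::SystemTime;

fn main() {
    let seed = [42u8; 32]; // use real randomness in production
    let now = SystemTime::now()
        .duration_since(SystemTime::UNIX_EPOCH)
        .unwrap();
    let keys_manager = KeysManager::new(&seed, now.as_secs(), now.subsec_nanos());
    // Accessor name assumed from the commit's intent; check the actual API.
    let _node_secret = keys_manager.get_node_secret_key();
}
```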
Matt Corallo [Fri, 3 Mar 2023 05:14:04 +0000 (05:14 +0000)]
Do not auto-select the lightning `std` feature from tx-sync crate
We have some downstream folks who are using LDK in wasm compiled
via the normal Rust wasm path. To ensure nothing breaks, they want
to use `no-std` on the lightning crate, disabling time calls as
those panic. However, the HTTP logic in
`lightning-transaction-sync` gets automatically stubbed out by the
HTTP client crates when targeting wasm via `wasm_bindgen`, so it
works fine despite the std restrictions.
In order to make both work, `lightning-transaction-sync` can remain
`std`, but needs to not automatically enable the `std` flag on the
`lightning` crate, i.e., by setting `default-features = false`. We do
so here.
Matt Corallo [Fri, 3 Mar 2023 01:24:24 +0000 (01:24 +0000)]
Pass `FailureCode` to `fail_htlc_backwards` by ownership
`FailureCode` is a trivial enum with no body, so we shouldn't be
passing it by reference. It's sufficiently strange that the Java
bindings aren't happy with it, which is fine; we should just fix it
here.
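Since the enum is fieldless it can be `Copy` and passed by value; a sketch of the idea (not the exact LDK signature):

```rust
#[derive(Clone, Copy)]
enum FailureCode {
    TemporaryNodeFailure,
    RequiredNodeFeatureMissing,
    IncorrectOrUnknownPaymentDetails,
}

// The enum is trivially copyable, so take it by value rather than by
// reference (illustrative signature).
fn fail_htlc_backwards_with_reason(failure_code: FailureCode) {
    match failure_code {
        FailureCode::TemporaryNodeFailure => { /* ... */ }
        FailureCode::RequiredNodeFeatureMissing => { /* ... */ }
        FailureCode::IncorrectOrUnknownPaymentDetails => { /* ... */ }
    }
}
```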
Matt Corallo [Wed, 22 Feb 2023 02:40:59 +0000 (02:40 +0000)]
Track claimed outbound HTLCs in ChannelMonitors
When we receive an update_fulfill_htlc message, we immediately try
to "claim" the HTLC against the HTLCSource. If there is one, this
works great, we immediately generate a `ChannelMonitorUpdate` for
the corresponding inbound HTLC and persist that before we ever get
to processing our counterparty's `commitment_signed` and persisting
the corresponding `ChannelMonitorUpdate`.
However, if there isn't one (and this is the first successful HTLC
for a payment we sent), we immediately generate a `PaymentSent`
event and queue it up for the user. Then, a millisecond later, we
receive the `commitment_signed` from our peer, removing the HTLC
from the latest local commitment transaction as a side-effect of
the `ChannelMonitorUpdate` applied.
If the user has processed the `PaymentSent` event by that point,
great, we're done. However, if they have not, and we crash prior to
persisting the `ChannelManager`, on startup we get confused about
the state of the payment. We'll force-close the channel for being
stale, and see an HTLC which was removed and is no longer present
in the latest commitment transaction (which we're broadcasting).
Because we claim corresponding inbound HTLCs before updating a
`ChannelMonitor`, we assume such HTLCs have failed - attempting to
fail after having claimed should be a noop. However, in the
sent-payment case we now generate a `PaymentFailed` event for the
user, allowing an HTLC to complete without giving the user a
preimage.
Here we address this issue by storing the payment preimages for
claimed outbound HTLCs in the `ChannelMonitor`, in addition to the
existing inbound HTLC preimages already stored there. This allows
us to fix the specific issue described by checking for a preimage
and switching the type of event generated in response. In addition,
it reduces the risk of future confusion by ensuring we don't fail
HTLCs which were claimed but not fully committed to before a crash.
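A rough sketch of the added bookkeeping (names illustrative):

```rust
use std::collections::HashMap;

type SentHtlcId = u64;           // stand-in for the real HTLC identifier
type PaymentPreimage = [u8; 32];

struct ChannelMonitor {
    // Preimages for claimed *outbound* HTLCs, tracked alongside the
    // inbound preimages already stored, so that on a post-crash replay we
    // can tell a claimed HTLC from a failed one.
    counterparty_fulfilled_htlcs: HashMap<SentHtlcId, PaymentPreimage>,
}

impl ChannelMonitor {
    /// On replay: with a preimage we surface `PaymentSent`; without one we
    /// fall back to treating the HTLC as failed.
    fn outbound_htlc_preimage(&self, htlc: SentHtlcId) -> Option<&PaymentPreimage> {
        self.counterparty_fulfilled_htlcs.get(&htlc)
    }
}
```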
It does not, however, fully fix the issue here - because the
preimages are removed after the HTLC has been fully removed from
available commitment transactions, if we are substantially delayed
in persisting the `ChannelManager` (from the time we receive the
`update_fulfill_htlc` until after a full commitment-signed dance
completes) we may still hit this issue. The full fix for this issue
is to delay the persistence of the `ChannelMonitorUpdate` until
after the `PaymentSent` event has been processed. This avoids the
issue entirely, ensuring we process the event before updating the
`ChannelMonitor`, the same as we ensure the upstream HTLC has been
claimed before updating the `ChannelMonitor` for forwarded
payments.
The full solution will be implemented in later work; however, this
change still makes sense at that point as well - if we were to
delay the initial `commitment_signed` `ChannelMonitorUpdate` until
after the `PaymentSent` event has been processed (which likely
requires a database update on the users' end), we'd hold our
`commitment_signed` + `revoke_and_ack` response for two DB writes
(i.e. `fsync()` calls), making our commitment transaction
processing a full `fsync` slower. By making this change first, we
can instead delay the `ChannelMonitorUpdate` from the
counterparty's final `revoke_and_ack` message until the event has
been processed, giving us a full network roundtrip to do so and
avoiding delaying our response as long as an `fsync` is faster than
a network roundtrip.