Matt Corallo [Sun, 26 Jun 2022 01:44:21 +0000 (01:44 +0000)]
Concretize `WriteableScore` into `MultiThreadedLockableScore`
In general, the bindings don't handle blanket implementations well -
they generate concrete implementations for everything, and don't
build up enough context to recognize a blanket implementation and
avoid duplicating it while still allowing users to access structs
via all their implemented traits.
Thus, implementing `WriteableScore` for all `LockableScore`s that
also implement `Writeable` is particularly impractical to map in
bindings.
Further, because `Score` already requires `Writeable`, having a
separate `WriteableScore` doesn't really make any sense.
Here we simply remove `WriteableScore` (in `c_bindings` mode)
entirely and push users through `MultiThreadedLockableScore` in the
higher-level traits that require `Score`.
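A minimal sketch of the blanket-implementation pattern being removed
here, with simplified stand-in traits (the real LDK traits carry more
methods and generics):

```rust
// Simplified stand-ins for the LDK traits, for illustration only.
trait Writeable { fn write(&self) -> Vec<u8>; }
trait LockableScore { /* locking methods elided */ }

// A blanket impl like this covers every `LockableScore` that is also
// `Writeable` -- trivial in Rust, but the bindings generator would have
// to duplicate it for each concrete type it generates.
trait WriteableScore: LockableScore + Writeable {}
impl<T: LockableScore + Writeable> WriteableScore for T {}
```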
Matt Corallo [Fri, 17 Dec 2021 22:32:24 +0000 (22:32 +0000)]
(Bindings Only) Concretize LockableScore as MultiThreadedLockableScore
We don't really care about more than this in bindings - calling
into a custom `Score` is likely too slow to be practical anyway,
so this is also a performance improvement.
Works around https://github.com/rust-lang/rust/issues/90448
Elias Rohrer [Fri, 10 Jun 2022 08:45:57 +0000 (10:45 +0200)]
Check release build profile in CI
So far, CI did not check the code in the `release` build profile,
which could let release-only breakage go uncaught. To fix this, we now
add a new CI job that runs checks in the `release` profile.
Jeffrey Czyz [Thu, 2 Jun 2022 21:48:32 +0000 (14:48 -0700)]
Support only one GossipSync in BackgroundProcessor
BackgroundProcessor can take both an optional P2PGossipSync and an
optional RapidGossipSync, but this combination is easy to misuse: each
holds a reference to a NetworkGraph, the two references could point to
different graphs, yet only one is actually used.
Instead, allow passing one object wrapped in a GossipSync enum. Also,
fix a bug where the NetworkGraph is not persisted on shutdown if only a
RapidGossipSync is given.
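A hedged sketch of the enum-based API this introduces; the actual LDK
`GossipSync` is generic over more parameters:

```rust
// Illustrative sketch: one enum carries whichever gossip sync the user
// runs, so only a single NetworkGraph reference is ever involved.
enum GossipSync<P, R> {
    /// Gossip sync via the P2P network, as defined in BOLT 7.
    P2P(P),
    /// Gossip sync via a rapid-gossip-sync server.
    Rapid(R),
    /// No gossip sync.
    None,
}
```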
Jeffrey Czyz [Fri, 3 Jun 2022 05:59:14 +0000 (22:59 -0700)]
Implement EventHandler for NetworkGraph
Instead of implementing EventHandler for P2PGossipSync, implement it on
NetworkGraph. This allows RapidGossipSync to handle events, too, by
delegating to its NetworkGraph.
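A hedged sketch of the delegation, with simplified stand-in types:

```rust
// Simplified stand-ins for the LDK types, for illustration only.
struct Event;
struct NetworkGraph;

trait EventHandler {
    fn handle_event(&self, event: &Event);
}

impl EventHandler for NetworkGraph {
    fn handle_event(&self, _event: &Event) {
        // e.g. apply a NetworkUpdate carried by a payment-failure event
    }
}

struct RapidGossipSync {
    network_graph: NetworkGraph,
}

impl EventHandler for RapidGossipSync {
    fn handle_event(&self, event: &Event) {
        // Delegate, so both gossip-sync styles share one implementation.
        self.network_graph.handle_event(event);
    }
}
```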
Jeffrey Czyz [Sat, 4 Jun 2022 04:35:37 +0000 (21:35 -0700)]
Parameterize NetworkGraph with Logger
P2PGossipSync logs before delegating to NetworkGraph in its
EventHandler. In order to share this handling with RapidGossipSync,
NetworkGraph needs to take a logger so that it can implement
EventHandler instead.
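A hedged sketch of the parameterization, following LDK's Deref-based
generics convention (the `Logger` trait here is simplified):

```rust
use std::ops::Deref;

trait Logger {
    fn log(&self, msg: &str);
}

struct NetworkGraph<L: Deref> where L::Target: Logger {
    logger: L,
    // channel and node maps elided
}

impl<L: Deref> NetworkGraph<L> where L::Target: Logger {
    fn channel_failed(&self, short_channel_id: u64) {
        // Logging that previously lived in P2PGossipSync's EventHandler
        // can now happen here, shared with RapidGossipSync.
        self.logger.log(&format!("Marking channel {} failed", short_channel_id));
    }
}
```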
Jeffrey Czyz [Fri, 3 Jun 2022 04:37:59 +0000 (21:37 -0700)]
Move Secp256k1 context to NetworkGraph
P2PGossipSync has a Secp256k1 context field, which it only uses to pass
to NetworkGraph methods. Move the field to NetworkGraph so other callers
don't need to pass in a Secp256k1 context.
Jeffrey Czyz [Wed, 1 Jun 2022 17:28:34 +0000 (10:28 -0700)]
Rename NetGraphMsgHandler to P2PGossipSync
NetGraphMsgHandler implements RoutingMessageHandler to handle gossip
messages defined in BOLT 7 and maintains a view of the network by
updating NetworkGraph. Rename it to P2PGossipSync, which better
describes its purpose, and to contrast with RapidGossipSync.
Jeffrey Czyz [Fri, 5 Nov 2021 17:55:25 +0000 (12:55 -0500)]
Rename ChannelClosed to ChannelFailure
A NetworkUpdate indicating ChannelClosed actually corresponds to a
channel failure as described in BOLT 4:
0x2000 (NODE): node failure (otherwise channel)
Rename the enum variant to ChannelFailure and rename NetworkGraph
methods close_channel_from_update and fail_node to channel_failed and
node_failed, respectively.
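A minimal sketch of the BOLT 4 bit check behind the naming:

```rust
// Per BOLT 4, the 0x2000 (NODE) bit marks a node failure; without it,
// the failure refers to a channel -- hence ChannelFailure.
const NODE: u16 = 0x2000;

fn is_node_failure(failure_code: u16) -> bool {
    failure_code & NODE != 0
}
```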
Arik Sosman [Wed, 1 Jun 2022 22:26:07 +0000 (15:26 -0700)]
Indicate ongoing rapid sync to background processor.
Create a wrapper struct for rapid gossip sync that can be passed to
BackgroundProcessor's start method, allowing it to only start pruning
the network graph upon rapid gossip sync's completion.
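A hedged sketch of such a wrapper using an atomic completion flag; the
real LDK type carries more state:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

struct RapidGossipSync {
    is_initial_sync_complete: AtomicBool,
}

impl RapidGossipSync {
    fn new() -> Self {
        Self { is_initial_sync_complete: AtomicBool::new(false) }
    }

    /// Called once the rapid gossip data has been applied.
    fn sync_complete(&self) {
        self.is_initial_sync_complete.store(true, Ordering::Release);
    }

    /// The background processor polls this before pruning the graph.
    fn is_initial_sync_complete(&self) -> bool {
        self.is_initial_sync_complete.load(Ordering::Acquire)
    }
}
```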
Matt Corallo [Thu, 2 Jun 2022 03:37:16 +0000 (03:37 +0000)]
Do not panic on early tx broadcasts in fuzzing
If the user broadcasts a funding transaction before the
counterparty provides a `funding_signed` we will panic in
`check_get_channel_ready`. This is expected - the user did
something which may lead to loss of funds, and we *really* need to
let them know.
However, the fuzzer can do this and we shouldn't treat it as a bug;
it's a totally expected panic. Thus, we disable the panic in fuzzing
builds.
Thanks to Chaincode for providing fuzzing resources which managed
to hit this panic.
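A minimal sketch of the cfg-gated panic (the function name is
illustrative):

```rust
// In production builds this misuse panics -- the user may lose funds --
// but under the fuzzer the same input is expected and must not abort.
fn check_funding_broadcast_safety(have_funding_signed: bool) {
    if !have_funding_signed {
        #[cfg(not(fuzzing))]
        panic!("Funding transaction broadcast before counterparty sent funding_signed");
    }
}
```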
Matt Corallo [Mon, 30 May 2022 17:50:02 +0000 (17:50 +0000)]
Re-export `core2::io` or `std::io` depending on feature flags
This is useful in bindings as the `lightning::io` module is used in
the public interface, but also useful for users who want to refer
to the `io` module as used in `lightning`, irrespective of the
feature flags.
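A hedged sketch of the feature-gated re-export (exact feature names in
LDK may differ):

```rust
// Users can then write `use lightning::io;` regardless of feature flags.
#[cfg(feature = "std")]
pub use std::io;
#[cfg(not(feature = "std"))]
pub use core2::io;
```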
Matt Corallo [Tue, 19 Apr 2022 22:06:50 +0000 (22:06 +0000)]
Drop return value from `fail_htlc_backwards`, clarify docs
`ChannelManager::fail_htlc_backwards`' bool return value is quite
confusing - just because it returns false doesn't mean the payment
wasn't (already) failed. Worse, in some race cases around shutdown
where a payment was claimed before an unclean shutdown and then
retried on startup, `fail_htlc_backwards` could return true even
though a duplicate copy of the same payment was claimed, while the
claim event had not yet been seen by the user.
While it's possible to use it correctly, having a return value at all
is somewhat confusing and definitely lends itself to misuse.
Instead, we should push users towards a model where they don't care
if `fail_htlc_backwards` succeeds - either they've locally marked
the payment as failed (prior to seeing any `PaymentReceived`
events) and will fail any attempts to pay it, or they have not and
the payment is still receivable until its timeout time is reached.
We can revisit this decision based on user feedback, but will need
to very carefully document the potential failure modes here if we
do.
Matt Corallo [Tue, 19 Apr 2022 21:46:44 +0000 (21:46 +0000)]
Do additional pre-flight checks before claiming a payment
As additional sanity checks, before claiming a payment, we check
that we have the full amount available in `claimable_htlcs` that
the payment should be for. Concretely, this prevents one
somewhat-absurd edge case where a user may receive an MPP payment,
wait many *blocks* before claiming it, allowing us to fail the
pending HTLCs and the sender to retry some subset of the payment
before we go to claim. More generally, this is just good
belt-and-suspenders against any edge cases we may have missed.
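A minimal sketch of the amount check, with illustrative names rather
than the actual `ChannelManager` internals:

```rust
struct ClaimableHTLC { value_msat: u64 }

fn should_claim(parts: &[ClaimableHTLC], expected_amt_msat: u64) -> bool {
    let claimable: u64 = parts.iter().map(|htlc| htlc.value_msat).sum();
    // If some MPP parts were failed back and the sender retried only a
    // subset, the sum no longer matches; refuse to claim in that case.
    claimable == expected_amt_msat
}
```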
Matt Corallo [Wed, 4 May 2022 18:12:09 +0000 (18:12 +0000)]
Provide a redundant `Event::PaymentClaimed` on restart if needed
If we crashed during a payment claim and then detected a partial
claim on restart, we should ensure the user is aware that the
payment has been claimed. We do so here by using the new
partial-claim detection logic to create a `PaymentClaimed` event.
Matt Corallo [Mon, 2 May 2022 15:23:52 +0000 (15:23 +0000)]
Add test of 0conf channels getting the funding transaction reorg'd
In a previous version of the 0-conf code we did not correctly
handle 0-conf channels getting the funding transaction reorg'd out
(and the real SCID possibly changing on us).
Matt Corallo [Fri, 1 Apr 2022 01:36:38 +0000 (01:36 +0000)]
Expose outbound SCID alias in `ChannelDetails` and use in routing
This supports routing outbound over 0-conf channels by utilizing
the outbound SCID alias that we assign to all channels to refer to
the selected channel when routing.
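A hedged sketch of the SCID selection, with simplified field layout:

```rust
struct ChannelDetails {
    /// Set once the funding transaction has sufficient confirmations.
    short_channel_id: Option<u64>,
    /// An alias we assign at channel open, usable before confirmation.
    outbound_scid_alias: Option<u64>,
}

// Prefer the real SCID when available, else fall back to the alias,
// allowing 0-conf channels to appear in routes.
fn scid_for_routing(chan: &ChannelDetails) -> Option<u64> {
    chan.short_channel_id.or(chan.outbound_scid_alias)
}
```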
Matt Corallo [Tue, 1 Feb 2022 21:57:01 +0000 (21:57 +0000)]
Lock outbound channels at 0conf if the peer indicates support for it
If our peer sets a minimum depth of 0, and we're configured to trust
ourselves not to double-spend our own funding transactions, send a
funding_locked message immediately after funding_signed.
Note that some special care has to be taken around the
`channel_state` values - `ChannelFunded` no longer implies the
funding transaction is confirmed on-chain. Thus, for example, the
should-we-re-broadcast logic has to now accept `channel_state`
values greater than `ChannelFunded` as indicating we may still need
to re-broadcast our funding transaction, unless `minimum_depth` is
greater than 0.
Further note that this starts writing `Channel` objects with a
`MIN_SERIALIZATION_VERSION` of 2. Thus, LDK versions prior to
0.0.99 (July 2021) will now refuse to read serialized
Channels/ChannelManagers.
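A hedged sketch of the adjusted re-broadcast condition (names
illustrative):

```rust
// `ChannelFunded` no longer implies on-chain confirmation: with a
// minimum_depth of 0 the funding tx may still need re-broadcasting.
fn may_need_funding_rebroadcast(
    channel_is_funded: bool,
    minimum_depth: u32,
    funding_confirmations: u32,
) -> bool {
    channel_is_funded && minimum_depth == 0 && funding_confirmations == 0
}
```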
Matt Corallo [Fri, 4 Mar 2022 21:24:39 +0000 (21:24 +0000)]
Handle cases where a channel is in use w/o an SCID in ChannelManager
In the next few commits we add support for 0conf channels, allowing
us to have an active channel with HTLC and other updates flying
prior to having an SCID available. This would break several
assumptions made in `ChannelManager`, which we address here by
looking at SCID aliases in addition to SCIDs.
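A hedged sketch of an alias-aware lookup; the map names are
illustrative, not the actual `ChannelManager` fields:

```rust
use std::collections::HashMap;

struct ChannelId([u8; 32]);

struct ScidMaps {
    short_to_chan: HashMap<u64, ChannelId>,
    alias_to_chan: HashMap<u64, ChannelId>,
}

impl ScidMaps {
    // Forwarding and routing lookups consult the alias map as well, so
    // a 0-conf channel is usable before it has a real SCID.
    fn lookup(&self, scid: u64) -> Option<&ChannelId> {
        self.short_to_chan.get(&scid)
            .or_else(|| self.alias_to_chan.get(&scid))
    }
}
```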
Elias Rohrer [Wed, 25 May 2022 23:44:22 +0000 (16:44 -0700)]
Allow building of a route from given hops
Implements `build_route_from_hops`, which provides a simple way to build
a route from us (payer) to the target node (payee) via the given hops
(which should exclude the payer, but include the payee). This may be
useful, e.g., for probing the chosen path.
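A hedged sketch of the hop-list shape this expects; `NodeId`, `Route`,
and the body are simplified stand-ins (the real API also takes route
parameters, the network graph, a logger, and random seed bytes):

```rust
#[derive(Clone)]
struct NodeId([u8; 33]);
struct Route { hops: Vec<NodeId> }

fn build_route_from_hops(payer: &NodeId, hops: &[NodeId]) -> Route {
    // `hops` excludes the payer but includes the payee as its last
    // entry; the real implementation resolves the channels between
    // successive hops from the network graph.
    let _ = payer;
    Route { hops: hops.to_vec() }
}
```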
Matt Corallo [Mon, 18 Apr 2022 15:42:11 +0000 (15:42 +0000)]
Ensure all HTLCs for a claimed payment are claimed on startup
While the HTLC-claim process happens across all MPP parts under one
lock, this doesn't imply that they are claimed fully atomically on
disk. Ultimately, an application can crash after persisting one
`ChannelMonitorUpdate` out of multiple monitor updates needed for
the full claim.
Previously, this would leave us in a very bad state - because of
the all-channels-available check in `claim_funds` we'd refuse to
claim the payment again on restart (even though the
`PaymentReceived` event will be passed to the user again), and we'd
end up having partially claimed the payment!
The fix for the consistency part of this issue is pretty
straightforward - just check for this condition on startup and
complete the claim across all channels/`ChannelMonitor`s if we
detect it.
This still leaves us in a confused state from the perspective of
the user, however - we've actually claimed a payment but when they
call `claim_funds` we return `false` indicating it could not be
claimed.
Matt Corallo [Tue, 24 May 2022 22:02:15 +0000 (22:02 +0000)]
Correct bogus references to `revocation_point` in `ChannelMonitor`
The `ChannelMonitor` had a field for the counterparty's
`cur_revocation_points`. Somewhat confusingly, this actually stored
the counterparty's *per-commitment* points, not the (derived)
revocation points.
Here we correct this by simply renaming the references as
appropriate. Note the update in `channel.rs` makes the variable
names align correctly.
Matt Corallo [Thu, 19 May 2022 00:56:16 +0000 (00:56 +0000)]
Rename HTLC `onchain_value_satoshis` to `htlc_value_satoshis`
In `HTLCUpdate` and `OnchainEvent` tracking, we store the HTLC
value (rounded down to whole satoshis). This is somewhat
confusingly referred to as the `onchain_value_satoshis` even though
it refers to the commitment transaction output value, not the value
available on chain (which may have been reduced by an
HTLC-Timeout/HTLC-Success transaction).
Matt Corallo [Sun, 24 Apr 2022 20:30:50 +0000 (20:30 +0000)]
Rename HTLC `input_idx` fields to `commitment_tx_output_idx`
Several fields used in tracking on-chain HTLC outputs were
named `input_idx` despite referring to the output index in the
commitment transaction. Here they are all renamed
`commitment_tx_output_idx` for clarity.
For direct channels, the channel liquidity is known with certainty. Use
this knowledge in ProbabilisticScorer by either penalizing with the
per-hop penalty or u64::max_value depending on the amount.
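A minimal sketch of that scoring rule (illustrative names, not the
`ProbabilisticScorer` internals):

```rust
fn direct_channel_penalty_msat(
    amount_msat: u64,
    known_liquidity_msat: u64,
    base_penalty_msat: u64,
) -> u64 {
    if amount_msat <= known_liquidity_msat {
        // The payment certainly fits: only the per-hop penalty applies.
        base_penalty_msat
    } else {
        // The payment certainly cannot fit: rule the channel out.
        u64::max_value()
    }
}
```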
Scorers could benefit from having the channel's EffectiveCapacity rather
than a u64 msat value. For instance, ProbabilisticScorer can give a more
accurate penalty when given the ExactLiquidity variant. Pass a struct
wrapping the effective capacity, the proposed amount, and any in-flight
HTLC value.
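A hedged sketch of the wrapper and capacity variants; the actual LDK
types carry more variants and fields:

```rust
enum EffectiveCapacity {
    /// The liquidity is known exactly, e.g. for a direct channel.
    ExactLiquidity { liquidity_msat: u64 },
    /// Capped by the channel's maximum HTLC amount.
    MaximumHTLC { amount_msat: u64 },
    /// The channel's total capacity, from the funding output.
    Total { capacity_msat: u64 },
    /// Nothing is known about the channel.
    Unknown,
}

struct ChannelUsage {
    /// The proposed amount to send over the channel.
    amount_msat: u64,
    /// The value of outstanding in-flight HTLCs on the channel.
    inflight_htlc_msat: u64,
    /// The channel's capacity, as known to the scorer.
    effective_capacity: EffectiveCapacity,
}
```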
Jeffrey Czyz [Tue, 17 May 2022 21:57:55 +0000 (16:57 -0500)]
Use correct penalty and CLTV delta in route hints
For route hints, the aggregate next hops path penalty and CLTV delta
should be computed after considering each hop rather than before.
Otherwise, these aggregate values will include values from the current
hop, too.
Jeffrey Czyz [Tue, 17 May 2022 21:43:36 +0000 (16:43 -0500)]
Use the correct amount when scoring route hints
When scoring route hints, the amount passed to the scorer should include
any fees needed for subsequent hops. This worked correctly for
single-hop hints since there are no further hops, but not for
multi-hop hints (except at the final hop).
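A hedged sketch of the backwards walk as fixed by this and the
previous commit (flat fees for simplicity; the real code also handles
proportional fees):

```rust
struct HintHop { fee_msat: u64, cltv_expiry_delta: u32 }

fn score_hop(_amount_msat: u64, _aggregate_cltv_delta: u32) {
    // stand-in for the scorer call
}

fn walk_hint_backwards(hops: &[HintHop], final_value_msat: u64) {
    let mut amount_to_transfer_msat = final_value_msat;
    let mut aggregate_cltv_delta = 0u32;
    // Walk from the payee backwards toward the payer.
    for hop in hops.iter().rev() {
        // Score with values that exclude the current hop itself...
        score_hop(amount_to_transfer_msat, aggregate_cltv_delta);
        // ...and only then fold this hop's fee and CLTV delta into the
        // aggregates used for the hops preceding it.
        amount_to_transfer_msat += hop.fee_msat;
        aggregate_cltv_delta += hop.cltv_expiry_delta;
    }
}
```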
Jeffrey Czyz [Sun, 23 Jan 2022 23:25:38 +0000 (17:25 -0600)]
Distinguish maximum HTLC from effective capacity
Using EffectiveCapacity in scoring gives more accurate success
probabilities when the maximum HTLC value is less than the channel
capacity. Change EffectiveCapacity to prefer the channel's capacity
over its maximum HTLC limit, but still use the latter for route finding.
Matt Corallo [Wed, 4 May 2022 17:49:09 +0000 (17:49 +0000)]
Store an `events::PaymentPurpose` with each claimable payment
In fc77c57c3c6e165d26cb5c1f5d1afee0ecd02589 we stopped using the
`FinalOnionHopData` in `OnionPayload::Invoice` directly and intend
to remove it eventually. However, in the next few commits we need
access to the payment secret when claiming a payment, as we create
a new `PaymentPurpose` during the claim process for a new event.
In order to get access to a `PaymentPurpose` without having access
to the `FinalOnionHopData` we here change the storage of
`claimable_htlcs` to store a single `PaymentPurpose` explicitly
with each set of claimable HTLCs.
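A hedged sketch of the storage change, with simplified stand-in types:

```rust
use std::collections::HashMap;

#[derive(PartialEq, Eq, Hash)]
struct PaymentHash([u8; 32]);

enum PaymentPurpose {
    InvoicePayment { payment_secret: [u8; 32] },
    SpontaneousPayment,
}

struct ClaimableHTLC { value_msat: u64 }

// The purpose is stored once per payment, so the claim path can build
// its PaymentClaimed event without touching FinalOnionHopData.
type ClaimableHtlcs = HashMap<PaymentHash, (PaymentPurpose, Vec<ClaimableHTLC>)>;
```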
Matt Corallo [Wed, 4 May 2022 16:29:29 +0000 (16:29 +0000)]
Enable removal of `OnionPayload::Invoice::_legacy_hop_data` later
In fc77c57c3c6e165d26cb5c1f5d1afee0ecd02589 we stopped using the
`FinalOnionHopData` in `OnionPayload::Invoice` directly and renamed
it `_legacy_hop_data` with the intent of removing it in a few
versions. However, we continue to check that it was included in the
serialized data, meaning we would not be able to remove it without
breaking ability to serialize full `ChannelManager`s.
This fixes that by making the `_legacy_hop_data` an `Option` which
we will happily handle just fine if it's `None`.