Elias Rohrer [Wed, 25 May 2022 23:44:22 +0000 (16:44 -0700)]
Allow building of a route from given hops
Implements `build_route_from_hops`, which provides a simple way to build
a route from us (payer) to the target node (payee) via the given hops
(which should exclude the payer, but include the payee). This may be
useful, e.g., for probing the chosen path.
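A rough standalone sketch of the core idea, using toy types (the real function presumably takes our node id, the hop pubkeys, and LDK's network graph and routing parameters):

```rust
use std::collections::HashMap;

type NodeId = u32;
type ChannelId = u64;

// Illustrative only: given an ordered hop list (payee last), look up a
// channel for each consecutive node pair, failing if any pair is not
// directly connected. This mirrors the idea, not LDK's actual signature.
fn build_route_from_hops(
    payer: NodeId,
    hops: &[NodeId],
    channels: &HashMap<(NodeId, NodeId), ChannelId>,
) -> Option<Vec<ChannelId>> {
    let mut prev = payer;
    let mut path = Vec::with_capacity(hops.len());
    for &hop in hops {
        path.push(*channels.get(&(prev, hop))?);
        prev = hop;
    }
    Some(path)
}
```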
Matt Corallo [Tue, 24 May 2022 22:02:15 +0000 (22:02 +0000)]
Correct bogus references to `revocation_point` in `ChannelMonitor`
The `ChannelMonitor` had a field for the counterparty's
`cur_revocation_points`. Somewhat confusingly, this actually stored
the counterparty's *per-commitment* points, not the (derived)
revocation points.
Here we correct this by simply renaming the references as
appropriate. Note the update in `channel.rs` makes the variable
names align correctly.
Matt Corallo [Thu, 19 May 2022 00:56:16 +0000 (00:56 +0000)]
Rename HTLC `onchain_value_satoshis` to `htlc_value_satoshis`
In `HTLCUpdate` and `OnchainEvent` tracking, we store the HTLC
value (rounded down to whole satoshis). This is somewhat
confusingly referred to as the `onchain_value_satoshis` even though
it refers to the commitment transaction output value, not the value
available on chain (which may have been reduced by an
HTLC-Timeout/HTLC-Success transaction).
Matt Corallo [Sun, 24 Apr 2022 20:30:50 +0000 (20:30 +0000)]
Rename HTLC `input_idx` fields to `commitment_tx_output_idx`
Several fields used in tracking on-chain HTLC outputs were
named `input_idx` despite referring to the output index in the
commitment transaction. Here they are all renamed
`commitment_tx_output_idx` for clarity.
For direct channels, the channel liquidity is known with certainty. Use
this knowledge in ProbabilisticScorer by penalizing with either the
per-hop penalty or u64::max_value, depending on whether the amount fits
within the known liquidity.
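A sketch of that special case (names here are hypothetical): with exactly known liquidity there is no probability to estimate, so the penalty is either the flat base penalty or effectively infinite.

```rust
fn direct_channel_penalty_msat(
    amount_msat: u64,
    exact_liquidity_msat: u64,
    base_penalty_msat: u64,
) -> u64 {
    // The payment either fits with certainty or cannot fit at all.
    if amount_msat <= exact_liquidity_msat {
        base_penalty_msat
    } else {
        u64::max_value()
    }
}
```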
Scorers could benefit from having the channel's EffectiveCapacity rather
than a u64 msat value. For instance, ProbabilisticScorer can give a more
accurate penalty when given the ExactLiquidity variant. Pass a struct
wrapping the effective capacity, the proposed amount, and any in-flight
HTLC value.
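A hypothetical shape for such a struct (field and variant names are assumptions for illustration):

```rust
enum EffectiveCapacity {
    // Liquidity is known exactly, e.g. for our own direct channels.
    ExactLiquidity { liquidity_msat: u64 },
    // Only the channel's advertised HTLC maximum is known.
    MaximumHTLC { amount_msat: u64 },
    // The channel's total funding capacity is known.
    Total { capacity_msat: u64 },
    // Nothing is known about the channel.
    Unknown,
}

struct ChannelUsage {
    // The proposed amount to send over the channel.
    amount_msat: u64,
    // Value already claimed by in-flight HTLCs over the channel.
    inflight_htlc_msat: u64,
    // The channel's capacity, as best known.
    effective_capacity: EffectiveCapacity,
}
```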
Jeffrey Czyz [Tue, 17 May 2022 21:57:55 +0000 (16:57 -0500)]
Use correct penalty and CLTV delta in route hints
For route hints, the aggregate next hops path penalty and CLTV delta
should be computed after considering each hop rather than before.
Otherwise, these aggregate values will include values from the current
hop, too.
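A sketch of the fixed ordering with toy types (the scorer stand-in and names are hypothetical): walk the hint backwards, score each hop against aggregates built from later hops only, and fold the current hop in afterwards.

```rust
struct HintHop { cltv_expiry_delta: u32 }

// Stand-in for the real scorer; hypothetical signature.
fn score(_hop: &HintHop, _agg_penalty_msat: u64, _agg_cltv_delta: u32) -> u64 { 0 }

fn aggregate_after_each_hop(hint: &[HintHop]) -> (u64, u32) {
    let (mut agg_penalty_msat, mut agg_cltv_delta) = (0u64, 0u32);
    for hop in hint.iter().rev() {
        // Score with aggregates from hops *after* this one only...
        let penalty = score(hop, agg_penalty_msat, agg_cltv_delta);
        // ...then fold this hop in for the next (earlier) hop to see.
        agg_penalty_msat = agg_penalty_msat.saturating_add(penalty);
        agg_cltv_delta = agg_cltv_delta.saturating_add(hop.cltv_expiry_delta);
    }
    (agg_penalty_msat, agg_cltv_delta)
}
```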
Jeffrey Czyz [Tue, 17 May 2022 21:43:36 +0000 (16:43 -0500)]
Use the correct amount when scoring route hints
When scoring route hints, the amount passed to the scorer should include
any fees needed for subsequent hops. This worked correctly for
single-hop hints, since there are no further hops, but not for
multi-hop hints (except for the final hop).
Jeffrey Czyz [Sun, 23 Jan 2022 23:25:38 +0000 (17:25 -0600)]
Distinguish maximum HTLC from effective capacity
Using EffectiveCapacity in scoring gives more accurate success
probabilities when the maximum HTLC value is less than the channel
capacity. Change EffectiveCapacity to prefer the channel's capacity
over its maximum HTLC limit, but still use the latter for route finding.
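Reusing the hypothetical `EffectiveCapacity` from the sketch above, the construction change might look like this: when the funding capacity is known, report it for scoring even if the advertised `htlc_maximum_msat` is lower, since route-finding still respects the latter separately.

```rust
fn effective_capacity(
    capacity_msat: Option<u64>,
    htlc_maximum_msat: Option<u64>,
) -> EffectiveCapacity {
    match (capacity_msat, htlc_maximum_msat) {
        // Prefer the total capacity for success-probability estimates.
        (Some(capacity_msat), _) => EffectiveCapacity::Total { capacity_msat },
        // Fall back to the HTLC maximum when the capacity is unknown.
        (None, Some(amount_msat)) => EffectiveCapacity::MaximumHTLC { amount_msat },
        (None, None) => EffectiveCapacity::Unknown,
    }
}
```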
Matt Corallo [Sun, 15 May 2022 19:13:43 +0000 (19:13 +0000)]
Randomize the `ConnectStyle` during tests
We have a bunch of fancy infrastructure to ensure we can connect
blocks using all our different connection interfaces, but we only
bother to use it in a few select tests.
This expands our use of `ConnectStyle` to most of our tests by
simply randomizing the style in each test. This makes our tests
non-deterministic, but we print the connection style at start so
that it's easy to reproduce a failure deterministically.
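A minimal sketch of the pattern (the variant set is abbreviated and the selection mechanism is an assumption): pick a style pseudo-randomly per test, but always print the choice so a failing run can be replayed deterministically.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

#[derive(Clone, Copy, Debug)]
enum ConnectStyle { BestBlockFirst, TransactionsFirst, FullBlockViaListen }

fn random_connect_style() -> ConnectStyle {
    const STYLES: [ConnectStyle; 3] = [
        ConnectStyle::BestBlockFirst,
        ConnectStyle::TransactionsFirst,
        ConnectStyle::FullBlockViaListen,
    ];
    let nanos = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().subsec_nanos();
    let style = STYLES[nanos as usize % STYLES.len()];
    // Print the choice up front so failures can be reproduced.
    eprintln!("Using connect style: {:?}", style);
    style
}
```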
Matt Corallo [Sun, 15 May 2022 19:38:01 +0000 (19:38 +0000)]
Make tests more robust against different connection styles
In the next commit we'll randomize the `ConnectStyle` used in each
test. However, some tests are slightly too prescriptive, which we
address here in a few places.
This update also includes a minor refactor: the return type of
`pending_monitor_events` has been changed to a `Vec` of tuples, pairing
each `Vec` of `MonitorEvent`s with the `OutPoint` of the funding
transaction it belongs to.
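A rough sketch of the new shape (toy types and a hypothetical trait name; the real types carry more data):

```rust
struct OutPoint { txid: [u8; 32], index: u16 }
enum MonitorEvent {}

trait Watch {
    // Before: fn pending_monitor_events(&self) -> Vec<MonitorEvent>;
    // After: each batch of events names the funding outpoint it belongs to.
    fn pending_monitor_events(&self) -> Vec<(OutPoint, Vec<MonitorEvent>)>;
}
```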
We've also renamed `source/sink_channel_id` to `prev/next_channel_id`
for the sake of clarity.
As the `counterparty_node_id` is now required to be passed back to the
`ChannelManager` to accept or reject an inbound channel request, the
documentation is updated to reflect that.
Matt Corallo [Mon, 9 May 2022 23:03:50 +0000 (23:03 +0000)]
Pull secp256k1 contexts from per-peer to per-PeerManager
Instead of including a `Secp256k1` context per
`PeerChannelEncryptor`, which is relatively expensive memory-wise
and nontrivial CPU-wise to construct, we should keep one for all
peers and simply reuse it.
This is relatively trivial, so we do so in this commit. While we're at
it, we also take the opportunity to randomize the new PeerManager
context.
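A sketch of the pattern (not the exact LDK field layout; `secp256k1::rand` assumes the crate's `rand` feature): construct the context once, randomize it, and share it across all peers.

```rust
use secp256k1::rand::thread_rng;
use secp256k1::{All, Secp256k1};

struct PeerManager {
    // One context for every peer, rather than one per PeerChannelEncryptor.
    secp_ctx: Secp256k1<All>,
}

impl PeerManager {
    fn new() -> Self {
        let mut secp_ctx = Secp256k1::new();
        // Re-randomization hardens against side-channel attacks.
        secp_ctx.randomize(&mut thread_rng());
        PeerManager { secp_ctx }
    }
}
```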
Matt Corallo [Wed, 6 Oct 2021 16:58:56 +0000 (16:58 +0000)]
Create a simple `FairRwLock` to avoid readers starving writers
Because we handle messages (which can take some time, persisting
things to disk or validating cryptographic signatures) with the
top-level read lock, but require the top-level write lock to
connect new peers or handle disconnection, we are particularly
sensitive to writer starvation issues.
Rust's libstd RwLock does not provide any fairness guarantees,
using whatever the OS provides as-is. On Linux, pthreads defaults
to starving writers, which Rust's RwLock exposes to us (without
any configurability).
Here we work around that issue by blocking readers if there are
pending writers, optimizing for readable code over
perfectly-optimized blocking.
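A sketch of that trade-off (details may differ from the real implementation): track waiting writers in a counter, and make readers yield to them by briefly contending on the write lock whenever a writer is pending.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{RwLock, RwLockReadGuard, RwLockWriteGuard};

pub struct FairRwLock<T> {
    lock: RwLock<T>,
    waiting_writers: AtomicUsize,
}

impl<T> FairRwLock<T> {
    pub fn new(t: T) -> Self {
        FairRwLock { lock: RwLock::new(t), waiting_writers: AtomicUsize::new(0) }
    }

    pub fn write(&self) -> RwLockWriteGuard<T> {
        // Announce the pending writer before blocking on the lock.
        self.waiting_writers.fetch_add(1, Ordering::AcqRel);
        let guard = self.lock.write().unwrap();
        self.waiting_writers.fetch_sub(1, Ordering::AcqRel);
        guard
    }

    pub fn read(&self) -> RwLockReadGuard<T> {
        if self.waiting_writers.load(Ordering::Acquire) != 0 {
            // Queue behind any pending writer instead of starving it.
            let _write_guard = self.lock.write().unwrap();
        }
        self.lock.read().unwrap()
    }
}
```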
Matt Corallo [Sun, 3 Oct 2021 21:44:52 +0000 (21:44 +0000)]
Wake reader future when we fail to flush socket buffer
This avoids any extra calls to `read_event` after a write fails to
flush the write buffer fully, as is required by the PeerManager
API (though it isn't critical).
Matt Corallo [Wed, 6 Oct 2021 04:45:07 +0000 (04:45 +0000)]
Limit blocked PeerManager::process_events waiters to two
Only one instance of PeerManager::process_events can run at a time,
and each run always finishes all available work before returning.
Thus, having several threads blocked on the process_events lock
doesn't accomplish anything but blocking more threads.
Here we limit the number of blocked calls on process_events to two
- one processing events and one blocked at the top which will
process all available events after the first completes.
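A sketch of the limiting pattern (hypothetical names): a counter tracks calls currently inside process_events; once one is running and one is queued, further callers can return immediately, since the queued call will observe everything they would have processed.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

struct EventProcessor {
    // Calls currently inside process_events, running or blocked.
    blocked_event_processors: AtomicUsize,
    event_processing_lock: Mutex<()>,
}

impl EventProcessor {
    fn process_events(&self) {
        // fetch_add returns the prior count: >= 2 means one runner and
        // one queued caller already exist, so this call adds nothing.
        if self.blocked_event_processors.fetch_add(1, Ordering::AcqRel) >= 2 {
            self.blocked_event_processors.fetch_sub(1, Ordering::AcqRel);
            return;
        }
        let _guard = self.event_processing_lock.lock().unwrap();
        // ... drain and handle all currently-available events ...
        self.blocked_event_processors.fetch_sub(1, Ordering::AcqRel);
    }
}
```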
Matt Corallo [Sat, 25 Sep 2021 22:24:23 +0000 (22:24 +0000)]
Avoid taking the peers write lock during event processing
Because the peers write lock "blocks the world", and happens after
each read event, always taking the write lock has pretty severe
impacts on parallelism. Instead, here, we only take the global
write lock if we have to disconnect a peer.
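A sketch of the pattern with toy types (a `String` stands in for real peer state): do the work under the read lock, only noting which peers failed, then escalate to the write lock at the end, and only if someone must actually be removed.

```rust
use std::collections::HashMap;
use std::sync::RwLock;

fn process_peer_events(
    peers: &RwLock<HashMap<u64, String>>,
    handle: impl Fn(&String) -> Result<(), ()>,
) {
    let mut to_disconnect = Vec::new();
    // Event handling happens under the shared read lock.
    for (id, peer) in peers.read().unwrap().iter() {
        if handle(peer).is_err() {
            to_disconnect.push(*id);
        }
    }
    // Only a real disconnection requires blocking the world.
    if !to_disconnect.is_empty() {
        let mut peers = peers.write().unwrap();
        for id in to_disconnect {
            peers.remove(&id);
        }
    }
}
```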
Matt Corallo [Wed, 6 Oct 2021 04:29:19 +0000 (04:29 +0000)]
[net-tokio] Call PeerManager::process_events without blocking reads
Unlike very ancient versions of lightning-net-tokio, this does not
rely on a single global process_events future, but instead has one
per connection. This could still cause significant contention, so
we'll ensure only two process_events calls can exist at once in
the next few commits.
Matt Corallo [Wed, 6 Oct 2021 06:58:15 +0000 (06:58 +0000)]
Process messages with only the top-level read lock held
Users are required to only ever call `read_event` serially
per-peer, thus we actually don't need any locks while we're
processing messages - we can only be processing messages in one
thread per-peer.
That said, we do need to ensure that another thread doesn't
disconnect the peer we're processing messages for, as that could
result in a peer_disconnected call while we're processing a
message for the same peer - somewhat nonsensical.
This significantly improves parallelism especially during gossip
processing as it avoids waiting on the entire set of individual
peer locks to forward a gossip message while several other threads
are validating gossip messages with their individual peer locks
held.
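A sketch of the lock layout with hypothetical types: the global map is only write-locked to add or remove peers, while message processing takes the global read lock plus the one peer's own mutex, so different peers proceed in parallel.

```rust
use std::collections::HashMap;
use std::sync::{Mutex, RwLock};

struct Peer { /* per-peer read buffer, encryptor, etc. */ }

struct PeerHolder {
    peers: RwLock<HashMap<u64, Mutex<Peer>>>,
}

impl PeerHolder {
    fn read_event(&self, peer_id: u64, data: &[u8]) {
        // Shared lock: many peers can be processed at once.
        let peers = self.peers.read().unwrap();
        if let Some(peer) = peers.get(&peer_id) {
            // Exclusive only per-peer, matching the serial read_event rule.
            let _peer = peer.lock().unwrap();
            let _ = data; // ... decrypt and handle messages here ...
        }
    }
}
```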
Matt Corallo [Fri, 30 Jul 2021 18:03:28 +0000 (18:03 +0000)]
Process messages from peers in parallel in `PeerManager`.
This adds the required locking to process messages from different
peers simultaneously in `PeerManager`. Note that channel messages
are still processed under a global lock in `ChannelManager`, and
most work is still processed under a global lock in gossip message
handling, but parallelizing message deserialization and message
decryption is somewhat helpful.
Add a test that ensures that channels are closed with
`ClosureReason::DisconnectedPeer` if the peer disconnects before the
funding transaction has been broadcast.
Set `ChannelUpdate` `htlc_maximum_msat` using the peer's value
Use the `counterparty_max_htlc_value_in_flight_msat` value, and not the
`holder_max_htlc_value_in_flight_msat` value when creating the
`htlc_maximum_msat` value for `ChannelUpdate` messages.
BOLT 7 specifies that the field MUST be less than or equal to the
`max_htlc_value_in_flight_msat` received from the peer, a requirement
we are not guaranteed to meet when using the holder value.
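A sketch of the constraint (hypothetical helper): the advertised maximum must respect what the counterparty said it will allow in flight, not our own limit.

```rust
use std::cmp;

fn htlc_maximum_msat_for_update(
    channel_value_msat: u64,
    counterparty_max_htlc_value_in_flight_msat: u64,
) -> u64 {
    // Never advertise more than the peer will accept in flight.
    cmp::min(channel_value_msat, counterparty_max_htlc_value_in_flight_msat)
}
```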
Add a config field
`ChannelHandshakeConfig::max_inbound_htlc_value_in_flight_percent_of_channel`
which sets the percentage of the channel value we cap the total value of
outstanding inbound HTLCs to.
This field can be set to a value between 1 and 100, expressed as a
whole percentage of the channel value.
Note that:
* If configured to a value other than the default of 10, any new
channels created with the non-default value will cause versions of LDK
prior to 0.0.104 to refuse to read the `ChannelManager`.
* This caps the total value for inbound HTLCs in-flight only, and
there's currently no way to configure the cap for the total value of
outbound HTLCs in-flight.
* The requirements for your node to be online to ensure the safety of
HTLC-encumbered funds are different from those for non-HTLC-encumbered
funds.
This makes this an important knob to restrict exposure to loss due to
being offline for too long. See
`ChannelHandshakeConfig::our_to_self_delay` and
`ChannelConfig::cltv_expiry_delta` for more information.
Default value: 10.
Minimum value: 1, any values less than 1 will be treated as 1 instead.
Maximum value: 100, any values larger than 100 will be treated as 100
instead.
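A sketch of how the percentage becomes a msat cap (hypothetical helper): clamp to [1, 100], then take that share of the channel value. Since 1 sat = 1,000 msat, value_sats * 1,000 * percent / 100 simplifies to value_sats * 10 * percent.

```rust
fn holder_max_htlc_value_in_flight_msat(
    channel_value_satoshis: u64,
    configured_percent: u8,
) -> u64 {
    // Out-of-range values are treated as the nearest bound.
    let percent = configured_percent.max(1).min(100) as u64;
    channel_value_satoshis * 10 * percent
}
```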
Matt Corallo [Mon, 2 May 2022 20:45:17 +0000 (20:45 +0000)]
Reject outbound channels if the total reserve is larger than funding
In 2826af75a5761859dedcddc870de0753ae4ecde4 we fixed a fuzz crash in
which the total reserve values in a channel were greater than the
funding amount; that fix added a check when an incoming channel is
accepted.
This, however, did not fix the same issue for outbound channels,
where a peer can accept a channel with a nonsense reserve value in
the `accept_channel` message. The `full_stack_target` fuzzer
eventually found its way into the same issue, which this resolves.
Thanks (again) to Chaincode Labs for providing the fuzzing
resources which found this bug!
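A sketch of the check now applied to outbound channels as well, on receipt of `accept_channel` (hypothetical names): both reserves must fit within the funding amount or the channel is rejected.

```rust
fn check_reserves(
    funding_satoshis: u64,
    holder_selected_reserve_satoshis: u64,
    counterparty_selected_reserve_satoshis: u64,
) -> Result<(), &'static str> {
    // A nonsense reserve from the peer must not exceed the funding value.
    if holder_selected_reserve_satoshis
        .saturating_add(counterparty_selected_reserve_satoshis) > funding_satoshis
    {
        return Err("total channel reserve exceeds funding value");
    }
    Ok(())
}
```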