Jeffrey Czyz [Fri, 20 Jan 2023 18:31:17 +0000 (12:31 -0600)]
Fuzz test for parsing Offer
An offer is serialized as a TLV stream and encoded in bech32 without a
checksum. Add a fuzz test that parses the unencoded TLV stream and
deserializes the underlying Offer. Then compare the original bytes with
those obtained by re-serializing the Offer.
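A minimal sketch of the round-trip check, with a trivial placeholder `Offer`
standing in for the real lightning::offers type:

    // Round-trip fuzz sketch. `Offer` here is only a placeholder; the
    // parse-then-reserialize-and-compare shape is the point.
    struct Offer(Vec<u8>);

    impl Offer {
        // Placeholder parse; the real code reads a TLV stream.
        fn try_from_bytes(data: &[u8]) -> Result<Self, ()> {
            if data.is_empty() { Err(()) } else { Ok(Offer(data.to_vec())) }
        }
        // Placeholder re-serialization back to bytes.
        fn to_bytes(&self) -> Vec<u8> { self.0.clone() }
    }

    pub fn do_test(data: &[u8]) {
        if let Ok(offer) = Offer::try_from_bytes(data) {
            // Whatever parsed must re-serialize to the exact original bytes.
            assert_eq!(offer.to_bytes(), data);
        }
    }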
Jeffrey Czyz [Tue, 31 Jan 2023 20:35:49 +0000 (14:35 -0600)]
Make separate no-std version for invoice response
Both Refund::respond_with and InvoiceRequest::respond_with take a
created_at Duration (the time since the Unix epoch) in no-std. However,
this can cause problems if two downstream dependencies want to use the
lightning crate with different feature flags set. Instead, define
respond_with_no_std versions of each method in addition to a std-only
respond_with version.
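A rough sketch of the split (placeholder types, not the actual offers API):
the no-std variant takes the timestamp explicitly, while the std variant can
read the clock itself and delegate.

    use core::time::Duration;

    // Placeholders standing in for the real invoice-response types.
    struct InvoiceRequest;
    struct Invoice { created_at: Duration }

    impl InvoiceRequest {
        // std builds can read the system clock themselves...
        #[cfg(feature = "std")]
        fn respond_with(&self) -> Invoice {
            let created_at = std::time::SystemTime::now()
                .duration_since(std::time::SystemTime::UNIX_EPOCH)
                .expect("system time before Unix epoch");
            self.respond_with_no_std(created_at)
        }

        // ...while no-std callers must supply the time since the Unix epoch.
        fn respond_with_no_std(&self, created_at: Duration) -> Invoice {
            Invoice { created_at }
        }
    }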
Prior to this, we returned PaymentSendFailure from the auto-retry
send-payment methods. This implied that we might return a PartialFailure
from them, which has never been the case, so it makes sense to rework the
errors to be a better fit for these methods.
We're taking error handling in a totally different direction now to make it
more asynchronous, see send_payment_internal for more information.
Matt Corallo [Sun, 19 Feb 2023 00:13:51 +0000 (00:13 +0000)]
Don't generate a `ChannelMonitorUpdate` for closed chans on shutdown
The `Channel::get_shutdown` docs are very clear - if the channel
jumps to `Shutdown` as a result of not being funded when we go to
initiate shutdown, we should not generate a `ChannelMonitorUpdate`,
as there's no need to bother with the shutdown script - we're
force-closing anyway.
However, this wasn't actually implemented, potentially causing a
spurious monitor update for no reason.
Matt Corallo [Sat, 3 Dec 2022 03:15:04 +0000 (03:15 +0000)]
Use the new monitor persistence flow for `funding_created` handling
Building on the previous commits, this finishes our transition to
doing all message-sending in the monitor update completion
pipeline, unifying our immediate- and async- `ChannelMonitor`
update and persistence flows.
Matt Corallo [Mon, 6 Feb 2023 23:03:38 +0000 (23:03 +0000)]
Use new monitor persistence flow in funding_signed handling
In the previous commit, we moved all our `ChannelMonitorUpdate`
pipelines to use a new async path via the
`handle_new_monitor_update` macro. This avoids having two message
sending pathways and simply sends messages in the "monitor update
completed" flow, which is shared between sync and async monitor
updates.
Here we reuse the new macro for handling `funding_signed` messages
when doing an initial `ChannelMonitor` persistence. This provides
a similar benefit, simplifying the code a trivial amount, but
importantly allows us to fully remove the original
`handle_monitor_update_res` macro.
Matt Corallo [Wed, 11 Jan 2023 21:37:57 +0000 (21:37 +0000)]
Always process `ChannelMonitorUpdate`s asynchronously
We currently have two codepaths on most channel update functions -
most methods return a set of messages to send a peer iff the
`ChannelMonitorUpdate` succeeds, but if it does not we push the
messages back into the `Channel` and then pull them back out when
the `ChannelMonitorUpdate` completes and send them then. This adds
a substantial amount of complexity in very critical codepaths.
Instead, here we swap all our channel update codepaths to
immediately set the channel-update-required flag and only return a
`ChannelMonitorUpdate` to the `ChannelManager`. Internally in the
`Channel` we store a queue of `ChannelMonitorUpdate`s, which will
become critical in future work to surface pending
`ChannelMonitorUpdate`s to users at startup so they can complete.
This leaves some redundant work in `Channel` to be cleaned up
later. Specifically, we still generate the messages which we will
now ignore and regenerate later.
This commit updates the `ChannelMonitorUpdate` pipeline across all
the places we generate them.
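Conceptually, the change amounts to something like the following (field and
method names are illustrative, not the actual `Channel` internals):

    // A Channel keeps a queue of monitor updates that were handed to the
    // ChannelManager but whose persistence has not yet completed.
    struct ChannelMonitorUpdate { update_id: u64 }

    struct Channel {
        latest_monitor_update_id: u64,
        pending_monitor_updates: Vec<ChannelMonitorUpdate>,
    }

    impl Channel {
        // Update paths now just queue an update and return it; messages to
        // the peer are fetched later, in the update-completed flow.
        fn queue_monitor_update(&mut self) -> &ChannelMonitorUpdate {
            self.latest_monitor_update_id += 1;
            self.pending_monitor_updates.push(ChannelMonitorUpdate {
                update_id: self.latest_monitor_update_id,
            });
            self.pending_monitor_updates.last().unwrap()
        }

        // Once persistence completes we drop updates up to that id and only
        // then generate any messages for the peer.
        fn monitor_update_completed(&mut self, update_id: u64) {
            self.pending_monitor_updates.retain(|upd| upd.update_id > update_id);
        }
    }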
Matt Corallo [Sat, 3 Dec 2022 05:38:24 +0000 (05:38 +0000)]
Move TODO from `handle_monitor_update_res` into `Channel`
The TODO mentioned in `handle_monitor_update_res` about how we
might forget about HTLCs in case of permanent monitor update
failure still applies in spite of all our changes. If a channel is
drop'd, monitor-pending updates may in general be lost if the
monitor update failed to persist.
This was always the case, and is ultimately the general form of the
specific TODO, so we simply leave comments there.
Matt Corallo [Fri, 27 Jan 2023 06:14:18 +0000 (06:14 +0000)]
Handle `MonitorUpdateCompletionAction`s after monitor update sync
In a previous PR, we added a `MonitorUpdateCompletionAction` enum
which described actions to take after a `ChannelMonitorUpdate`
persistence completes. At the time, it was only used to execute
actions in-line, however in the next commit we'll start (correctly)
leaving the existing actions until after monitor updates complete.
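In rough terms, the deferred handling looks like this (simplified, with
made-up variants rather than the real `MonitorUpdateCompletionAction` ones):

    // Actions queued while a ChannelMonitorUpdate is in flight and executed
    // only once its persistence completes.
    enum MonitorUpdateCompletionAction {
        EmitEvent { event: String },       // stand-in for an events::Event
        ResumePayment { payment_id: u64 }, // stand-in for payment follow-up
    }

    fn handle_monitor_update_completion(actions: Vec<MonitorUpdateCompletionAction>) {
        for action in actions {
            match action {
                MonitorUpdateCompletionAction::EmitEvent { event } => {
                    // Surface the event only now that the update is durable.
                    println!("emitting event: {event}");
                }
                MonitorUpdateCompletionAction::ResumePayment { payment_id } => {
                    // Kick off whatever work was blocked on the update.
                    println!("resuming payment {payment_id}");
                }
            }
        }
    }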
Matt Corallo [Thu, 26 Jan 2023 04:47:25 +0000 (04:47 +0000)]
Limit the number of pending un-funded inbound channels
Because we store some (not large, but not zero) state per-peer,
it's useful to limit the number of peers we have connected, at
least with some buffer.
Much more importantly, each channel has a relatively large cost,
especially around the `ChannelMonitor`s we have to build for each.
Thus, here, we limit the number of channels per-peer which aren't
(yet) on-chain, as well as limit the number of (inbound) peers
which don't have a (funded-on-chain) channel.
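The shape of the check, with made-up constants and counters rather than the
real `ChannelManager` fields:

    const MAX_UNFUNDED_CHANNELS_PER_PEER: usize = 4;
    const MAX_UNFUNDED_CHANNEL_PEERS: usize = 50;

    struct PeerState { unfunded_channels: usize, funded_channels: usize }

    fn accept_inbound_channel(
        peer: &PeerState,
        peers_without_funded_channels: usize,
    ) -> bool {
        // Reject if this peer already has too many channels that aren't
        // (yet) confirmed on-chain...
        if peer.unfunded_channels >= MAX_UNFUNDED_CHANNELS_PER_PEER {
            return false;
        }
        // ...or if too many inbound peers have no funded channel at all.
        if peer.funded_channels == 0
            && peers_without_funded_channels >= MAX_UNFUNDED_CHANNEL_PEERS
        {
            return false;
        }
        true
    }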
Matt Corallo [Tue, 21 Feb 2023 19:10:43 +0000 (19:10 +0000)]
Remove the `peer_disconnected` `no_connection_possible` flag
Long ago, we used the `no_connection_possible` flag to signal that a
peer has some unknown feature set or that some other condition
prevents us from ever connecting to the given peer. In that case we'd
automatically force-close all channels with the given peer. This was
somewhat surprising to users, so we removed the automatic force-close,
leaving the flag serving no LDK-internal purpose.
Distilling the concept of "can we connect to this peer again in the
future" to a simple flag turns out to be rife with edge cases, so
users actually using the flag to force-close channels would likely
see surprising behavior.
Thus, there's really not a lot of reason to keep the flag, especially
given it's untested and likely to be broken in subtle ways anyway.
Matt Corallo [Wed, 15 Feb 2023 01:23:20 +0000 (01:23 +0000)]
Correct `funding_transaction_generated` err msg and fix fuzz check
This fixes new errors in `full_stack_target` pointed out by
Chaincode's generous fuzzing infrastructure. Specifically, there's
no reason to check the error message in the
`funding_transaction_generated` return value - it can only return
a failure if the channel has closed since the funding transaction
was generated (which is fine) or if the signer refuses to sign
(which can't happen in fuzzing).
Matt Corallo [Wed, 15 Feb 2023 01:20:38 +0000 (01:20 +0000)]
Correct the "is peer live" checks in `PeerManager`
In general, we should be checking if a `Peer` has `their_features`
set as the "is this peer connected and have they finished the
handshake" flag as it indicates an `Init` message was received.
While none of these appear to be reachable bugs, there were a
number of places where we checked other flags for this purpose,
which may lead to sending messages before `Init` in the future.
Here we clean these cases up to always use the correct check (via
the new util method).
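Roughly, the check keys off whether an `Init` message has been received
(sketch; the helper name here is illustrative):

    struct InitFeatures;

    struct Peer {
        their_features: Option<InitFeatures>,
    }

    impl Peer {
        // A peer only counts as fully connected once we've received their
        // Init message, i.e. once their_features has been set.
        fn handshake_complete(&self) -> bool {
            self.their_features.is_some()
        }
    }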
Matt Corallo [Wed, 15 Feb 2023 01:13:57 +0000 (01:13 +0000)]
Fix (and DRY) the conditionals before calling `peer_disconnected`
If we have a peer that sends a non-`Init` first message, we'll call
`peer_disconnected` without ever having called `peer_connected`
(which has to wait until we have an `Init` message). This is a
violation of our API guarantees, though it should generally not be an
issue.
Because this bug was repeated in a few places, we also take this
opportunity to DRY up the logic which checks the peer state before
calling `peer_disconnected`.
Found by the new `ChannelManager` assertions and the
`full_stack_target` fuzzer.
Matt Corallo [Sat, 3 Dec 2022 04:25:37 +0000 (04:25 +0000)]
Add a new monitor update result handling macro
Over the next few commits, this macro will replace the
`handle_monitor_update_res` macro. It takes a different approach -
instead of receiving the message(s) that need to be re-sent after
the monitor update completes and pushing them back into the
channel, we'll not get the messages from the channel at all until
we're ready for them.
This will unify our message sending into only actually fetching +
sending messages in the common monitor-update-completed code,
rather than both there *and* in the functions that call `Channel`
when new messages are originated.
Matt Corallo [Mon, 19 Dec 2022 20:41:42 +0000 (20:41 +0000)]
Add storage for `ChannelMonitorUpdate`s in `Channel`s
In order to support fully async `ChannelMonitor` updating, we need
to ensure that we can replay `ChannelMonitorUpdate`s if we shut
down after persisting a `ChannelManager` but without completing a
`ChannelMonitorUpdate` persistence. In order to support that we
(obviously) have to store the `ChannelMonitorUpdate`s in the
`ChannelManager`, which we do here inside the `Channel`.
We do so now because in the coming commits we will start using the
async persistence flow for all updates, and while we won't yet
support fully async monitor updating it's nice to get some of the
foundational structures in place now.
Matt Corallo [Wed, 30 Nov 2022 18:49:44 +0000 (18:49 +0000)]
Track actions to execute after a `ChannelMonitor` is updated
When a `ChannelMonitor` update completes, we may need to take some
further action, such as exposing an `Event` to the user or initiating
another `ChannelMonitorUpdate`. This commit adds the basic structure
to track such actions and serialize them as required.
Note that while this does introduce a new map, written as an even
value which users cannot opt out of, the map is only filled in when
users use asynchronous `ChannelMonitor` updates. As these are still
considered beta, breaking downgrades for such users is considered
acceptable here.
Matt Corallo [Thu, 1 Dec 2022 00:25:32 +0000 (00:25 +0000)]
Add an infallible no-sign version of send_commitment_no_status_check
In the coming commits we'll move to async `ChannelMonitorUpdate`
application, which means we'll want to generate a
`ChannelMonitorUpdate` (including a new counterparty commitment
transaction) before we actually send it to our counterparty. To do
that today we'd have to actually sign the commitment transaction
by calling the signer, then drop it, apply the
`ChannelMonitorUpdate`, then re-sign the commitment transaction to
send it to our peer.
In this commit we instead split `send_commitment_no_status_check`
and `send_commitment_no_state_update` into `build_` and `send_`
variants, allowing us to generate new counterparty commitment
transactions without actually signing, then build them for sending,
with signatures, later.
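An illustrative shape of the split (placeholder types; not the real signer
or message types):

    struct CommitmentData { feerate: u32 }
    struct ChannelMonitorUpdate;
    struct CommitmentSigned { signature: Vec<u8> }

    struct Channel { feerate: u32 }

    impl Channel {
        // No signer involvement here, so this step can't fail on signing and
        // can run before we're ready to send anything.
        fn build_commitment_no_status_check(&mut self) -> (CommitmentData, ChannelMonitorUpdate) {
            (CommitmentData { feerate: self.feerate }, ChannelMonitorUpdate)
        }

        // Signing happens only when we're actually ready to send the message.
        fn send_commitment_no_state_update(&self, commitment: &CommitmentData) -> CommitmentSigned {
            let _ = commitment.feerate;
            CommitmentSigned { signature: vec![0u8; 64] } // placeholder signature
        }
    }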
Matt Corallo [Fri, 3 Feb 2023 23:05:58 +0000 (23:05 +0000)]
Fix (and test) threaded payment retries
The new in-`ChannelManager` retries logic does retries as two
separate steps, under two separate locks - first it calculates
the amount that needs to be retried, then it actually sends it.
Because the first step doesn't update the amount, a second thread
may come along and calculate the same amount and end up retrying
duplicatively.
Because we generally shouldn't ever be processing retries at the
same time, the fix is trivial - simply take a lock at the top of
the retry loop and hold it until we're done.
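The fix amounts to serializing the whole retry pass behind one lock,
something like (names illustrative):

    use std::sync::Mutex;

    struct OutboundPayments {
        retry_lock: Mutex<()>,
    }

    impl OutboundPayments {
        fn retry_pending_payments(&self) {
            // Taken at the top of the retry loop and held until we're done,
            // so a concurrent caller waits rather than duplicating the send.
            let _retry_guard = self.retry_lock.lock().unwrap();
            // ... compute the amount still needing retry and send it ...
        }
    }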
Matt Corallo [Tue, 7 Feb 2023 19:46:08 +0000 (19:46 +0000)]
Test if a given mutex is locked by the current thread in tests
In anticipation of the next commit(s) adding threaded tests, we
need to ensure our lockorder checks work fine with multiple
threads. Sadly, we currently have tests of the form
`assert!(mutex.try_lock().is_ok())` to assert that a given mutex is
not locked by the caller of a function.
The fix is rather simple given we already track mutexes locked by a
thread in our `debug_sync` logic - simply replace the check with a
new extension trait which (for test builds) checks the locked state
by only looking at what was locked by the current thread.
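A simplified sketch of the idea, using a thread-local set instead of the
real `debug_sync` bookkeeping (the lock wrappers, not shown, would insert
and remove entries as locks are taken and released):

    use std::cell::RefCell;
    use std::collections::HashSet;
    use std::sync::Mutex;

    thread_local! {
        // Which mutexes (by address) the *current* thread has locked.
        static LOCKS_HELD: RefCell<HashSet<usize>> = RefCell::new(HashSet::new());
    }

    // Ask whether *this* thread holds the lock, rather than try_lock(),
    // which also fails if some other thread holds it.
    trait LockTestExt {
        fn held_by_current_thread(&self) -> bool;
    }

    impl<T> LockTestExt for Mutex<T> {
        fn held_by_current_thread(&self) -> bool {
            let key = self as *const _ as usize;
            LOCKS_HELD.with(|held| held.borrow().contains(&key))
        }
    }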
We're no longer supporting manual retries, since
ChannelManager::send_payment_with_retry can be parameterized by a retry
strategy.
This commit also updates all docs related to retry_payment and abandon_payment.
Since these docs frequently overlap with changes in preceding commits where we
start abandoning payments on behalf of the user, all the docs are updated in
one go.
Jeffrey Czyz [Tue, 31 Jan 2023 16:44:19 +0000 (10:44 -0600)]
Re-write CustomMessageHandler documentation
Documentation for CustomMessageHandler wasn't clear about how it relates
to PeerManager and contained some grammatical and factual errors. Re-write
the docs and link to the lightning_custom_message crate.
Jeffrey Czyz [Tue, 3 Jan 2023 17:24:30 +0000 (11:24 -0600)]
Macro for composing custom message handlers
BOLT 1 specifies a custom message type range for use with experimental
or application-specific messages. While a `CustomMessageHandler` can be
defined to support more than one message type, defining such a handler
requires a significant amount of boilerplate and can be error prone.
Add a crate exporting a `composite_custom_message_handler` macro for
easily composing pre-defined custom message handlers. The resulting
handler can be further composed with other custom message handlers using
the same macro.
This requires a separate crate since the macro needs to support "or"
patterns in macro_rules, which are only available in edition 2021.
Otherwise, a crate defining a handler for a set of custom messages could
not easily be reused with another custom message handler. Doing so would
require explicitly duplicating the reused handlers' type ids, but those
may change when the crate is updated.
When a peer disconnects but still has channels, the peer's `peer_state`
entry in the `per_peer_state` is not removed by the `peer_disconnected`
function. If the channels with that peer are later closed while the peer
is still disconnected (i.e. force-closed), we therefore need to remove
the peer from `peer_state` separately.
To remove the peers separately, we push such peers to a separate HashSet
that holds peers awaiting removal, and remove the peers on a timer to
limit the negative effects on parallelism as much as possible.
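A rough sketch of the bookkeeping (names illustrative):

    use std::collections::HashSet;

    struct ChannelManager {
        // Counterparty node ids parked for removal when their last channel
        // closed while they were disconnected.
        pending_peers_awaiting_removal: HashSet<[u8; 33]>,
    }

    impl ChannelManager {
        fn channel_closed_while_disconnected(&mut self, counterparty_node_id: [u8; 33]) {
            self.pending_peers_awaiting_removal.insert(counterparty_node_id);
        }

        // Called from the timer tick: drop the per_peer_state entries for
        // peers with no channels left, off the hot paths.
        fn remove_stale_peers(&mut self) {
            for _node_id in self.pending_peers_awaiting_removal.drain() {
                // remove the peer's entry from per_peer_state here
            }
        }
    }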
Updates multiple instances of the `ChannelManager` docs related to the
previous change that moved the storage of the channels to the
`per_peer_state`. This docs update corrects some grammar errors and
incorrect information, as well as clarifies documentation that was
confusing.
Elias Rohrer [Wed, 23 Nov 2022 08:33:37 +0000 (09:33 +0100)]
Add transaction sync crate
This crate provides utilities for syncing LDK via the transaction-based
`Confirm` interface. The initial implementation facilitates
synchronization with an Esplora backend server.
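Very roughly, such a sync feeds backend chain data into everything
implementing the `Confirm` interface; the trait below is a simplified
stand-in for the real one, and the real client additionally handles reorgs
and the registration of relevant txids and outputs:

    struct Header;
    struct Transaction;

    // Simplified stand-in for lightning's Confirm interface.
    trait Confirm {
        fn transactions_confirmed(&self, header: &Header, txs: &[Transaction], height: u32);
        fn transaction_unconfirmed(&self, txid: &[u8; 32]);
        fn best_block_updated(&self, header: &Header, height: u32);
    }

    fn sync(confirmables: &[&dyn Confirm]) {
        // 1) report transactions reorged out via transaction_unconfirmed(),
        // 2) report newly confirmed relevant transactions via transactions_confirmed(),
        // 3) finally announce the new chain tip via best_block_updated().
        let tip = Header;
        for c in confirmables {
            c.best_block_updated(&tip, 0 /* placeholder height */);
        }
    }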
Matt Corallo [Mon, 30 Jan 2023 17:56:46 +0000 (17:56 +0000)]
Don't apply gossip backpressure to non-channel-announcing peers
When we apply the new gossip-async-check backpressure on peer
connections, if a peer has never sent us a `channel_announcement`
at all, we really shouldn't delay reading their messages.
We do this by tracking, on a per-peer basis, whether they've sent us
a channel_announcement, and by resetting that state whenever we're
not backlogged.
Matt Corallo [Sun, 22 Jan 2023 18:08:33 +0000 (18:08 +0000)]
Apply backpressure when we have too many gossip checks in-flight
Now that the `RoutingMessageHandler` can signal that it needs to
apply message backpressure, we implement it here in the
`PeerManager`. There's nothing complicated here, aside from noting
that we need the ability to call `send_data` with no data to indicate
that reading should resume (and to track when we may need to make such
calls when updating the routing-backpressure state).
Matt Corallo [Sun, 22 Jan 2023 05:12:45 +0000 (05:12 +0000)]
Allow `RoutingMessageHandler` to signal backpressure
Now that we allow `handle_channel_announcement` to (indirectly)
spawn async tasks which will complete later, we have to ensure it
can apply backpressure all the way up to the TCP socket to ensure
we don't end up with too many buffers allocated for UTXO
validation.
We do this by adding a new method to `RoutingMessageHandler` which
allows it to signal if there are "many" checks pending and
`channel_announcement` messages should be delayed. The actual
`PeerManager` implementation thereof is done in the next commit.
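The signal itself is simple; a sketch with illustrative names:

    use std::sync::atomic::{AtomicUsize, Ordering};

    const MAX_PENDING_CHECKS: usize = 32;

    struct RoutingHandler {
        pending_utxo_checks: AtomicUsize,
    }

    impl RoutingHandler {
        // The PeerManager consults this before reading more gossip; when it
        // returns true it stops reading from gossip-heavy peers until the
        // backlog drains (resuming reads via an empty send_data call).
        fn processing_queue_high(&self) -> bool {
            self.pending_utxo_checks.load(Ordering::Acquire) > MAX_PENDING_CHECKS
        }
    }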
Matt Corallo [Sun, 22 Jan 2023 04:14:58 +0000 (04:14 +0000)]
Forward gossip messages which were verified asynchronously
Gossip messages which were verified against the chain
asynchronously should still be forwarded to peers, but must now go
out via a new `P2PGossipSync` parameter in the
`AccessResolver::resolve` method, allowing us to wire them up to
the `P2PGossipSync`'s `MessageSendEventsProvider` implementation.
Matt Corallo [Sun, 22 Jan 2023 03:41:28 +0000 (03:41 +0000)]
Add the ability to broadcast gossip msgs via the event pipeline
When we process gossip messages asynchronously we may find that we
want to forward a gossip message to a peer after we've returned
from the existing `handle_*` method. In order to do so, we need to
be able to send arbitrary loose gossip messages back to the
`PeerManager` via `MessageSendEvent`.
This commit modifies `MessageSendEvent` in order to support this.
Matt Corallo [Tue, 7 Feb 2023 20:38:20 +0000 (20:38 +0000)]
Process `channel_update`/`node_announcement` async if needed
If we have a `channel_announcement` which is waiting on a UTXO
lookup before we can process it, and we receive a `channel_update`
or `node_announcement` for the same channel or for a node which is a
part of the channel, we have to wait until the lookup completes
before we can decide whether to accept the new message.
Here, we store the new message in the pending lookup state and
process it asynchronously like the original `channel_announcement`.
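A simplified sketch of the buffering (types and fields are stand-ins):

    use std::collections::HashMap;

    struct ChannelAnnouncement;
    struct ChannelUpdate;
    struct NodeAnnouncement;

    struct PendingChecks {
        // In-flight UTXO lookups, keyed by short channel id, each carrying
        // any gossip that arrived while the lookup was pending.
        pending: HashMap<u64, PendingLookup>,
    }

    struct PendingLookup {
        announcement: ChannelAnnouncement,
        deferred_updates: Vec<ChannelUpdate>,
        deferred_node_announcements: Vec<NodeAnnouncement>,
    }

    impl PendingChecks {
        // Returns true if the update was parked behind an in-flight lookup
        // and will be replayed once the lookup resolves.
        fn maybe_defer_update(&mut self, scid: u64, upd: ChannelUpdate) -> bool {
            if let Some(lookup) = self.pending.get_mut(&scid) {
                lookup.deferred_updates.push(upd);
                true
            } else {
                false // no lookup pending; process the update immediately
            }
        }
    }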