Matt Corallo [Tue, 6 Sep 2022 20:56:24 +0000 (20:56 +0000)]
Clarify and consolidate event handling requirements
We've seen a bit of user confusion about the requirements for event
handling, largely because the idempotency and consistency
requirements weren't super clearly phrased. While we're at it, we
also consolidate some documentation out of the event handling
function onto the trait itself.
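To illustrate the idempotency requirement, here is an application-level sketch only; LDK does not assign event ids, so the `event_id` key below is a hypothetical value the application derives itself (e.g. by hashing the serialized event):

    use std::collections::HashSet;

    // Track which events have already been fully processed so that a replay
    // of the same events after a crash or restart becomes a no-op.
    struct DedupHandler {
        seen: HashSet<u64>, // in practice this set would be persisted durably
    }

    impl DedupHandler {
        fn handle(&mut self, event_id: u64, process: impl FnOnce()) {
            // `insert` returns true only the first time we see this id.
            if self.seen.insert(event_id) {
                process();
            }
        }
    }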
Matt Corallo [Fri, 2 Sep 2022 21:10:43 +0000 (21:10 +0000)]
Correct payment resolution after on chain failure of dust HTLCs
Previously, we wouldn't mark a dust HTLC as permanently resolved if
the commitment transaction went on chain. This resulted in us
always considering the HTLC as pending on restart, when we load the
pending payments set from the monitors.
Matt Corallo [Mon, 5 Sep 2022 16:28:11 +0000 (16:28 +0000)]
Ensure we log private channel_updates at a non-GOSSIP log level
If we receive a channel_update for one of our private channels, we
will not log the message at the usual TRACE log level as the
message falls into the gossip range. However, for our own channels
these updates aren't *just* gossip: we store the info and it changes
how we generate invoices. Thus, we add a log in `ChannelManager`
at the DEBUG log level.
Matt Corallo [Tue, 9 Aug 2022 04:15:21 +0000 (04:15 +0000)]
Add a `Future` which can receive manager persistence events
This allows users who don't wish to block a full thread to receive
persistence events.
The `Future` added here is really just a trivial list of callbacks,
but from that we can build a (somewhat inefficient)
std::future::Future implementation and can (at least once a mapping
for Box<dyn Trait> is added) include the future in no-std bindings
as well.
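A minimal sketch of the callback-list idea (names are illustrative, not LDK's actual types):

    use std::sync::{Arc, Mutex};

    #[derive(Default)]
    struct FutureState {
        complete: bool,
        callbacks: Vec<Box<dyn FnOnce() + Send>>,
    }

    #[derive(Clone, Default)]
    struct Notifier(Arc<Mutex<FutureState>>);

    impl Notifier {
        // Register a callback; if persistence was already needed, fire immediately.
        fn register(&self, cb: Box<dyn FnOnce() + Send>) {
            let mut state = self.0.lock().unwrap();
            if state.complete {
                drop(state);
                cb();
            } else {
                state.callbacks.push(cb);
            }
        }

        // Called when the manager needs persistence: wake every waiter.
        fn notify(&self) {
            let mut state = self.0.lock().unwrap();
            state.complete = true;
            for cb in state.callbacks.drain(..) {
                cb();
            }
        }
    }

A `std::future::Future` can then be layered on top by registering the task's `Waker` as one of these callbacks.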
Matt Corallo [Fri, 2 Sep 2022 21:57:32 +0000 (21:57 +0000)]
Handle monotonic clock going backwards during runtime
We've had some users complain that `duration_since` is panicking
for them. This is possible if the machine being run on is buggy and
the "monotonic clock" goes backwards, which sadly some ancient
systems can do.
Rust addressed this issue in 1.60 by forcing
`Instant::duration_since` to not panic if the machine is buggy
(and time goes backwards), but for users on older rust versions we
do the same by hand here.
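The hand-rolled fallback amounts to the following pattern (a sketch using only std APIs):

    use std::time::{Duration, Instant};

    // Tolerate a buggy monotonic clock by treating "time went backwards"
    // as zero elapsed time instead of panicking.
    fn elapsed_since(earlier: Instant, now: Instant) -> Duration {
        // `checked_duration_since` returns None if `now` is before `earlier`,
        // which should only happen on platforms with a broken monotonic clock.
        now.checked_duration_since(earlier).unwrap_or(Duration::from_secs(0))
    }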
jurvis [Sun, 28 Aug 2022 06:07:50 +0000 (23:07 -0700)]
Make payment tests more realistic
Made sure that every hop has a unique recipient. When we simulate
calling `channel_penalty_msat` in `TestRouter`'s `find_route`, we now use
the actual previous-hop node ids instead of just the payer's.
jurvis [Tue, 30 Aug 2022 05:50:44 +0000 (22:50 -0700)]
Keep track of inflight HTLCs across payments
Added two methods, `process_path_inflight_htlcs` and
`remove_path_inflight_htlcs`, that update the `payment_cache` map with
path information for payments that may have failed, succeeded, or been
given up on.
Introduced `AccountForInflightHtlcs`, which will wrap our user-provided
scorer. We move the `S: Score` type parameterization from the `Router` to
`find_route`, so we can use our newly introduced
`AccountForInflightHtlcs`.
`AccountForInflightHtlcs` keeps track of a map of inflight HTLCs, keyed by
short channel id and direction, and gives us the value that is currently
being used up.
This map will in turn be populated prior to calling `find_route`, where
we’ll use `create_inflight_map`, to generate a current map of all
inflight HTLCs based on what was stored in `payment_cache`.
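A simplified sketch of the wrapper idea (the `Score` trait below is a stand-in with made-up parameters, not LDK's exact signature):

    use std::collections::HashMap;

    trait Score {
        fn channel_penalty_msat(&self, scid: u64, source_lt_target: bool,
            amount_msat: u64, capacity_msat: u64) -> u64;
    }

    struct AccountForInflightHtlcs<'a, S: Score> {
        scorer: &'a S,
        // Keyed by (short_channel_id, direction); value is msat already in flight.
        inflight_htlcs: HashMap<(u64, bool), u64>,
    }

    impl<'a, S: Score> Score for AccountForInflightHtlcs<'a, S> {
        fn channel_penalty_msat(&self, scid: u64, source_lt_target: bool,
            amount_msat: u64, capacity_msat: u64) -> u64 {
            // Treat liquidity already used by in-flight HTLCs as part of the
            // candidate amount so the wrapped scorer penalizes it accordingly.
            let used = self.inflight_htlcs
                .get(&(scid, source_lt_target)).copied().unwrap_or(0);
            self.scorer.channel_penalty_msat(
                scid, source_lt_target, amount_msat + used, capacity_msat)
        }
    }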
jurvis [Tue, 30 Aug 2022 05:49:24 +0000 (22:49 -0700)]
Change `payment_cache` to accept `PaymentInfo`
Introduces a new `PaymentInfo` struct that contains both the previous
`attempts` count that was tracked as well as the paths that are also
currently inflight.
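Roughly, the cache value now looks like this (field and type names are approximations for illustration only):

    // One entry per pending payment in `payment_cache`.
    struct PaymentInfo {
        attempts: usize,               // retry attempts so far
        paths: Vec<Vec<RouteHopLike>>, // one inner Vec per in-flight path
    }

    struct RouteHopLike {
        short_channel_id: u64,
        fee_msat: u64,
    }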
In this commit, we check if a peer's outbound buffer has room for onion
messages, and if so, pull them from an implementer of a new trait,
`OnionMessageProvider`.
This makes sure channel messages are prioritized over OMs, and OMs are
prioritized over gossip.
The onion_message module remains private until further rate limiting is added.
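The pull model looks roughly like this (the trait and function shapes are illustrative, not the exact API):

    trait OnionMessageProviderLike {
        // Return the next onion message destined for this peer, if any.
        fn next_onion_message_for_peer(&self, peer_node_id: [u8; 33]) -> Option<Vec<u8>>;
    }

    fn maybe_queue_om<P: OnionMessageProviderLike>(
        provider: &P,
        peer_node_id: [u8; 33],
        outbound_buffer: &mut Vec<Vec<u8>>,
        limit: usize,
    ) {
        // Only pull an onion message when the outbound buffer has room, so
        // OMs never crowd out higher-priority channel messages.
        if outbound_buffer.len() < limit {
            if let Some(msg) = provider.next_onion_message_for_peer(peer_node_id) {
                outbound_buffer.push(msg);
            }
        }
    }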
Add boilerplate for sending and receiving onion messages in PeerManager
Adds the boilerplate needed for PeerManager and OnionMessenger to work
together, with some corresponding docs and misc updates mostly due to the
PeerManager public API changing.
Separate gossip broadcasts into their own queue in PeerManager
This allows us to better prioritize channel messages over gossip broadcasts and
lays groundwork for rate limiting onion messages more simply, since they won't
be competing with gossip broadcasts for space in the main message queue.
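The two-queue layout, in sketch form (names illustrative):

    use std::collections::VecDeque;

    struct PeerBuffers {
        pending_outbound_buffer: VecDeque<Vec<u8>>, // channel messages and replies
        gossip_broadcast_buffer: VecDeque<Vec<u8>>, // broadcast-forwarded gossip
    }

    impl PeerBuffers {
        // Channel messages always drain first; gossip broadcasts only go out
        // when nothing higher-priority is waiting.
        fn next_buffer_to_send(&mut self) -> Option<Vec<u8>> {
            self.pending_outbound_buffer
                .pop_front()
                .or_else(|| self.gossip_broadcast_buffer.pop_front())
        }
    }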
Matt Corallo [Wed, 17 Aug 2022 20:15:23 +0000 (20:15 +0000)]
Expose a `Balance` for inbound HTLCs even without a preimage
If we don't currently have the preimage for an inbound HTLC, that
does not guarantee we can never claim it, but instead only that we
cannot claim it unless we receive the preimage from the channel we
forwarded the HTLC out on.
Thus, we cannot consider a channel to have no claimable balances if
the only remaining output on the commitment transaction is an
inbound HTLC for which we do not have the preimage, as we may be
able to claim it in the future.
This commit addresses this issue by adding a new `Balance` variant
- `MaybePreimageClaimableHTLCAwaitingTimeout`, which is generated
until the HTLC output is spent.
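For illustration, a stand-in for the relevant `Balance` variants (names and fields approximate the real API, which may differ):

    enum Balance {
        ClaimableOnChannelClose { claimable_amount_satoshis: u64 },
        MaybePreimageClaimableHTLCAwaitingTimeout {
            claimable_amount_satoshis: u64,
            expiry_height: u32,
        },
    }

    // Upper bound on what we could claim if we later learn the preimages
    // for the remaining inbound HTLCs.
    fn max_claimable_sats(balances: &[Balance]) -> u64 {
        balances.iter().map(|b| match b {
            Balance::ClaimableOnChannelClose { claimable_amount_satoshis } =>
                *claimable_amount_satoshis,
            Balance::MaybePreimageClaimableHTLCAwaitingTimeout {
                claimable_amount_satoshis, .. } => *claimable_amount_satoshis,
        }).sum()
    }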
Elias Rohrer [Wed, 24 Aug 2022 11:59:58 +0000 (13:59 +0200)]
Export and document all `log` macros.
Previously, only the `log_error` and `log_trace` macros had been exported.
This change exports the macros of all log levels, which enables them to
be used downstream.
Previously, we were decoding payload lengths as a VarInt. Per the spec, this is
wrong -- it should be decoded as a BigSize. This bug also exists in our
payment payload decoding, to be fixed separately.
Upcoming reply path tests caught this bug because we hadn't encoded a payload
larger than 253 bytes before, so we hadn't hit the problem that VarInts are
encoded as little-endian whereas BigSizes are encoded as big-endian.
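For reference, BigSize uses the same one-byte thresholds as Bitcoin's CompactSize/VarInt but writes multi-byte values big-endian: encoding 300 yields [0xfd, 0x01, 0x2c], where a VarInt would yield [0xfd, 0x2c, 0x01].

    fn encode_big_size(value: u64) -> Vec<u8> {
        match value {
            0..=0xfc => vec![value as u8],
            0xfd..=0xffff => {
                let mut v = vec![0xfd];
                v.extend_from_slice(&(value as u16).to_be_bytes()); // big-endian
                v
            }
            0x1_0000..=0xffff_ffff => {
                let mut v = vec![0xfe];
                v.extend_from_slice(&(value as u32).to_be_bytes());
                v
            }
            _ => {
                let mut v = vec![0xff];
                v.extend_from_slice(&value.to_be_bytes());
                v
            }
        }
    }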
Matt Corallo [Sun, 21 Aug 2022 20:34:22 +0000 (20:34 +0000)]
Avoid querying the chain for outputs for channels we already have
If we receive a ChannelAnnouncement message but we already have the
channel, there's no reason to do a chain lookup. Instead of
immediately calling the user-provided `chain::Access` when handling
a ChannelAnnouncement, we first check if we have the corresponding
channel in the graph.
Note that if we do have the corresponding channel but it was not
previously checked against the blockchain, we should still check
with the `chain::Access` and update if necessary.
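The decision boils down to something like this (types simplified for illustration):

    use std::collections::HashMap;

    struct ChannelInfoLite { verified_on_chain: bool }

    fn needs_chain_lookup(
        channels: &HashMap<u64, ChannelInfoLite>,
        short_channel_id: u64,
        have_chain_access: bool,
    ) -> bool {
        if !have_chain_access { return false; }
        match channels.get(&short_channel_id) {
            // Already known and previously checked against the chain: skip it.
            Some(info) if info.verified_on_chain => false,
            // Known but unverified, or entirely new: do the (slow) lookup.
            _ => true,
        }
    }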
Matt Corallo [Mon, 15 Aug 2022 19:30:32 +0000 (19:30 +0000)]
Provide guidance on ChannelMonitorUpdate serialized size
Users need to make decisions about storage sizing and we need to
have advice on the maximum size of various things users need to
store. ChannelMonitorUpdates are likely the worst case of this,
they're usually at max a few KB, but can get up to a few hundred
KB for commitment transactions that have 400+ HTLCs pending.
We had one user report an update (likely) going over 400 KiB. It isn't
immediately obvious to me that this is practical, but it's within a few
multiples of trivially-reachable sizes, so it likely did occur. To be on
the safe side, we simply recommend users ensure they can support
"upwards of 1 MiB" here.
Matt Corallo [Sat, 13 Aug 2022 17:29:06 +0000 (17:29 +0000)]
Correct the on-chain script checked in gossip verification
The `bitcoin_key_1` and `bitcoin_key_2` fields in
`channel_announcement` messages are sorted according to node_ids
rather than the keys themselves, however the on-chain funding
script is sorted according to the bitcoin keys themselves. Thus,
with some probability, we end up checking that the on-chain script
matches the wrong script and rejecting the channel announcement.
The correct solution is to use our existing channel funding script
generation function, which ensures we always match what we generate.
This was found in testing the Java bindings, where a test checks
that returning the generated funding script in `chain::Access`
results in the constructed channel ending up in our network graph.
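A sketch of the ordering in question, building the BOLT 3 `2 <pubkey1> <pubkey2> 2 OP_CHECKMULTISIG` script by hand with the funding pubkeys sorted by their compressed serialization:

    fn funding_redeemscript(a: &[u8; 33], b: &[u8; 33]) -> Vec<u8> {
        // Sort by the raw compressed pubkey bytes, not by node_id order.
        let (first, second) = if a[..] < b[..] { (a, b) } else { (b, a) };
        let mut script = Vec::with_capacity(71);
        script.push(0x52);                      // OP_2
        script.push(33);
        script.extend_from_slice(first);
        script.push(33);
        script.extend_from_slice(second);
        script.push(0x52);                      // OP_2
        script.push(0xae);                      // OP_CHECKMULTISIG
        script
    }

In practice the fix reuses LDK's existing funding-script helper rather than rebuilding the script like this.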
Also update the fuzz ChaCha20Poly1305 to not mark as finished after a single
encrypt_in_place. This is because more bytes may still need to be encrypted,
causing us to panic at the assertion that finished == false when we go to
encrypt more.
Also fix unused_mut warning in messenger + add log on OM forward for testing
Matt Corallo [Sat, 16 Jul 2022 20:41:45 +0000 (20:41 +0000)]
Move per-HTLC logic out of get_claimable_balances into a helper
Val suggested this as an obvious cleanup to separate per-HTLC logic
from the total commitment transaction logic, separating the large
function into two.
Matt Corallo [Tue, 24 May 2022 23:57:56 +0000 (23:57 +0000)]
Expose counterparty-revoked-outputs in `get_claimable_balance`
This uses the various new tracking added in the prior commits to
expose a new `Balance` type - `CounterpartyRevokedOutputClaimable`.
Some nontrivial work is required, however, as we now have to track
HTLC outputs as spendable in a transaction that comes *after* an
HTLC-Success/HTLC-Timeout transaction, which we previously didn't
need to do. Thus, we have to check if an
`onchain_events_awaiting_threshold_conf` event spends a commitment
transaction's HTLC output while walking events. Further, because
we now need to track HTLC outputs after the
HTLC-Success/HTLC-Timeout confirms, and because we have to track
the counterparty's `to_self` output as a contentious output which
could be claimed by either party, we have to examine the
`OnchainTxHandler`'s set of outputs to spend when determining if
certain outputs are still spendable.
Two new tests are added which test various different transaction
formats, and hopefully provide good test coverage of the various
revoked output paths.
Matt Corallo [Tue, 17 May 2022 20:45:17 +0000 (20:45 +0000)]
Scan `onchain_events_awaiting_threshold_conf` once in balance calc
Instead of a series of different
`onchain_events_awaiting_threshold_conf.iter()...` calls to scan
for HTLC status in balance calculation, pull them all out into one
`for ... { match ... }` to do it once and simplify the code
somewhat.
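The shape of the refactor, in miniature (names simplified):

    enum OnchainEventKind {
        HtlcClaimed { htlc_id: u64 },
        HtlcTimedOut { htlc_id: u64 },
        Other,
    }

    // One pass over the pending events instead of several separate
    // `.iter().find(...)` scans.
    fn summarize(events: &[OnchainEventKind]) -> (Vec<u64>, Vec<u64>) {
        let (mut claimed, mut timed_out) = (Vec::new(), Vec::new());
        for event in events {
            match event {
                OnchainEventKind::HtlcClaimed { htlc_id } => claimed.push(*htlc_id),
                OnchainEventKind::HtlcTimedOut { htlc_id } => timed_out.push(*htlc_id),
                OnchainEventKind::Other => {}
            }
        }
        (claimed, timed_out)
    }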
Matt Corallo [Sat, 21 May 2022 01:11:52 +0000 (01:11 +0000)]
Track the txid that resolves HTLCs even after resolution completes
We need this information when we look up if we still need to spend
a revoked output from an HTLC-Success/HTLC-Timeout transaction for
balance calculation.
Matt Corallo [Thu, 19 May 2022 01:50:37 +0000 (01:50 +0000)]
Track HTLC-Success/HTLC-Timeout claims of revoked outputs
When a counterparty broadcasts a revoked commitment transaction,
followed immediately by HTLC-Success/-Timeout spends thereof, we'd
like to have an `onchain_events_awaiting_threshold_conf` entry
for them.
This does so using the `HTLCSpendConfirmation` entry, giving it
(slightly) new meaning. Because all existing uses of
`HTLCSpendConfirmation` already check if the relevant commitment
transaction is revoked first, this should be trivially backwards
compatible.
We will ultimately figure out if something is being spent via the
`OnchainTxHandler`, but to do so we need to look up the output via
the HTLC transaction txid, which this allows us to do.
Matt Corallo [Tue, 17 May 2022 23:57:52 +0000 (23:57 +0000)]
Fix off-by-one in test_onchain_htlc_claim_reorg_remote_commitment
The test intended to disconnect a transaction previously connected
but didn't disconnect enough blocks to do so, leading to it
confirming two conflicting transactions.
In the next few commits this will become an assertion failure.
Matt Corallo [Sat, 30 Apr 2022 20:29:31 +0000 (20:29 +0000)]
Track counterparty payout info in counterparty commitment txn
When handling a revoked counterparty commitment transaction which
was broadcast on-chain, we occasionally need to look up which
output (and its value) was to the counterparty (the `to_self`
output). This will allow us to generate `Balance`s for the user for
the revoked output.
Matt Corallo [Fri, 13 May 2022 05:11:14 +0000 (05:11 +0000)]
Store the full event transaction in `OnchainEvent` structs
When we see a transaction which generates some `OnchainEvent`, it's
useful to have the full transaction around for later analysis.
Specifically, it lets us check the list of outputs which were spent
in the transaction, allowing us to look up, e.g., which HTLC
outpoint was spent in a given transaction.
This will be used in a few commits to do exactly that - figure out
which HTLC a given `OnchainEvent` corresponds with.
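Concretely, having the full transaction lets us do a lookup along these lines (types simplified):

    #[derive(Clone, Copy, PartialEq, Eq)]
    struct OutPoint { txid: [u8; 32], vout: u32 }
    struct TxIn { previous_output: OutPoint }
    struct Transaction { input: Vec<TxIn> }

    // Which of our known HTLC outpoints, if any, did this transaction spend?
    fn spent_htlc_index(tx: &Transaction, htlc_outpoints: &[OutPoint]) -> Option<usize> {
        htlc_outpoints.iter().position(|op| {
            tx.input.iter().any(|txin| txin.previous_output == *op)
        })
    }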
Matt Corallo [Tue, 9 Aug 2022 21:26:16 +0000 (21:26 +0000)]
Backfill gossip without buffering directly in LDK
Instead of backfilling gossip by buffering (up to) ten messages at
a time, only buffer one message at a time, as the peers' outbound
socket buffer drains. This moves the outbound backfill messages out
of `PeerHandler` and into the operating system buffer, where it
arguably belongs.
Not buffering causes us to walk the gossip B-Trees somewhat more
often, but avoids allocating vecs for the responses. While it's
probably (without having benchmarked it) a net performance loss, it
simplifies buffer tracking and leaves us with more room to play
with the buffer sizing constants as we add onion message forwarding,
which is an important win.
Note that because we change how often we check if we're out of
messages to send before pinging, we slightly change how many
messages are exchanged at once, impacting the
`test_do_attempt_write_data` constants.
Elias Rohrer [Thu, 11 Aug 2022 12:27:45 +0000 (14:27 +0200)]
Drop return value from `Filter::register_output`
This commit removes the return value from `Filter::register_output`, as
creating a suitable value almost always entails blocking operations
(e.g., lookups via network request), which conflicts with the
requirement that user calls should avoid blocking at all costs.
Removing the return value also rendered quite a bit of test code for
dependent transaction handling superfluous, which is therefore also
removed with this commit.
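A downstream implementation can now look roughly like this (types are simplified stand-ins for `Filter`/`WatchedOutput`): record the output and defer any slow dependency lookups to a background task instead of blocking in the callback.

    use std::sync::Mutex;

    struct WatchedOutputLite {
        txid: [u8; 32],
        index: u16,
        script_pubkey: Vec<u8>,
    }

    struct OutputRegistry {
        pending: Mutex<Vec<WatchedOutputLite>>,
    }

    impl OutputRegistry {
        // No return value: just queue the output to watch; a background
        // chain-sync task can later perform any network lookups needed for
        // dependent transactions.
        fn register_output(&self, output: WatchedOutputLite) {
            self.pending.lock().unwrap().push(output);
        }
    }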
Prior to this change, we could have failed to decode a valid payload of size
>253. This is because we were decoding the length (a BigSize, big-endian) as a
VarInt (little-endian).
Use util methods in `Peer` to decide when to forward
This consolidates our various checks on peer buffer space into the
`Peer` impl itself, making the thresholds at which we stop taking
various actions on a peer more readable as a whole.
This commit was primarily authored by `Valentine Wallace
<vwallace@protonmail.com>` with some amendments by `Matt Corallo
<git@bluematt.me>`.
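The consolidated checks amount to small helpers on the peer, along these lines (constants and names are illustrative):

    const OUTBOUND_BUFFER_LIMIT_DROP_GOSSIP: usize = 10;

    struct PeerLite { pending_outbound_buffer_len: usize }

    impl PeerLite {
        // Only forward broadcast gossip while the outbound buffer is shallow.
        fn should_buffer_gossip_broadcast(&self) -> bool {
            self.pending_outbound_buffer_len < OUTBOUND_BUFFER_LIMIT_DROP_GOSSIP
        }
        // Onion messages yield to channel messages but take priority over gossip.
        fn should_buffer_onion_message(&self) -> bool {
            self.pending_outbound_buffer_len == 0
        }
    }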
Matt Corallo [Tue, 9 Aug 2022 01:57:09 +0000 (01:57 +0000)]
Move PersistenceNotifier to a new util module
It was always somewhat strange to have a bunch of notification
logic in `channelmanager`, and with the next commit adding a bunch
more, it's moved here first.
Matt Corallo [Sun, 7 Aug 2022 19:02:33 +0000 (19:02 +0000)]
Update libfuzzer-sys to new upstream inclusion method
Dunno why they changed it, but the old "depend directly on git"
thing that cargo-fuzz used forever is now deprecated and that
repo is archived; they've now moved to another repo and publish
properly on crates.io.
Fix possible incomplete read bug on onion packet decode
Pre-existing to this PR, we were reading next packet bytes with io::Read::read,
which is not guaranteed to read all the bytes we need, only guaranteed to read
*some* bytes.
We fix this by using read_exact, which is guaranteed to read all of the
next-hop packet bytes.
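The difference in a nutshell, using only std APIs:

    use std::io::{self, Read};

    // `read` may return fewer bytes than requested, so filling a fixed-size
    // buffer (like the next-hop onion packet) requires `read_exact`, which
    // loops until the buffer is full or fails with UnexpectedEof.
    fn read_next_packet<R: Read>(reader: &mut R, len: usize) -> io::Result<Vec<u8>> {
        let mut buf = vec![0u8; len];
        reader.read_exact(&mut buf)?;
        Ok(buf)
    }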