Matt Corallo [Sun, 30 May 2021 23:24:43 +0000 (23:24 +0000)]
Use TLV instead of explicit length for VecReadWrapper termination
VecReadWrapper is only used in TLVs so there is no need to prepend
a length before writing/reading the objects - we can instead simply
read until we reach the end of the TLV stream.
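A minimal sketch of the idea, using hypothetical stand-in types (a
simplified Readable and element type, not the crate's real ones): keep
reading elements until the reader, already bounded to the containing
TLV's value, runs dry, with no element count written up front.

    use std::io::{self, Read};

    // Hypothetical stand-in for the crate's Readable trait.
    trait Readable: Sized {
        fn read<R: Read>(r: &mut R) -> io::Result<Self>;
    }

    impl Readable for u16 {
        fn read<R: Read>(r: &mut R) -> io::Result<Self> {
            let mut buf = [0u8; 2];
            r.read_exact(&mut buf)?;
            Ok(u16::from_be_bytes(buf))
        }
    }

    // Length-free wrapper: read elements until the (TLV-bounded) reader
    // is exhausted.
    struct VecReadWrapper<T: Readable>(Vec<T>);

    impl<T: Readable> VecReadWrapper<T> {
        fn read_to_end<R: Read>(r: &mut R) -> io::Result<Self> {
            let mut values = Vec::new();
            loop {
                // Pull one byte to detect end-of-stream before parsing.
                let mut peek = [0u8; 1];
                if r.read(&mut peek)? == 0 {
                    break; // TLV value fully consumed
                }
                let mut chained = peek.as_slice().chain(&mut *r);
                values.push(T::read(&mut chained)?);
            }
            Ok(VecReadWrapper(values))
        }
    }

    fn main() -> io::Result<()> {
        // Two big-endian u16s back to back, no element count prefix.
        let bytes = [0x00u8, 0x2a, 0x01, 0x00];
        let wrapper = VecReadWrapper::<u16>::read_to_end(&mut &bytes[..])?;
        assert_eq!(wrapper.0, vec![42, 256]);
        Ok(())
    }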
Matt Corallo [Fri, 28 May 2021 14:07:39 +0000 (14:07 +0000)]
Add additional inline hints in serialization methods
This substantially improves deserialization performance when LLVM
decides not to inline many short methods, e.g. when not building
with LTO/codegen-units=1.
Even with the default bench params of LTO/codegen-units=1, the
serialization benchmarks on an Intel 2687W v3 take:
test routing::network_graph::benches::read_network_graph ... bench: 1,955,616,225 ns/iter (+/- 4,135,777)
test routing::network_graph::benches::write_network_graph ... bench: 165,905,275 ns/iter (+/- 118,798)
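As an illustration of the kind of hint added (hypothetical stand-in
trait and type here, not the crate's actual serialization code):

    use std::io::{self, Write};

    // Hypothetical stand-in for the crate's Writeable trait.
    trait Writeable {
        fn write<W: Write>(&self, w: &mut W) -> Result<(), io::Error>;
    }

    struct ShortChannelId(u64);

    impl Writeable for ShortChannelId {
        // Small, hot method: the #[inline] hint lets it be inlined into
        // callers in other codegen units even without LTO or
        // codegen-units=1.
        #[inline]
        fn write<W: Write>(&self, w: &mut W) -> Result<(), io::Error> {
            w.write_all(&self.0.to_be_bytes())
        }
    }

    fn main() -> Result<(), io::Error> {
        let mut buf = Vec::new();
        ShortChannelId(42).write(&mut buf)?;
        assert_eq!(buf, 42u64.to_be_bytes());
        Ok(())
    }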
Matt Corallo [Sat, 29 May 2021 18:32:53 +0000 (18:32 +0000)]
Impl `serialized_length()` without `LengthCalculatingWriter`
With the new `serialized_length()` method potentially being
significantly more efficient than `LengthCalculatingWriter`, this
commit ensures we call `serialized_length()` when calculating
the length of a larger struct.
Specifically, prior to this commit a call to
`serialized_length()` on a large object serialized with
`impl_writeable`, `impl_writeable_len_match`, or
`encode_varint_length_prefixed_tlv` (and
`impl_writeable_tlv_based`) would always serialize all inner fields
of that object using `LengthCalculatingWriter`. This would ignore
any `serialized_length()` overrides by inner fields. Instead, we
override `serialized_length()` on all of the above by calculating
the serialized size using calls to `serialized_length()` on inner
fields.
Further, writes to `LengthCalculatingWriter` should never fail as
its `write` method never returns an error. Thus, any write failures
indicate a bug in an object's write method or in our
object-creation sanity checking. We `.expect()` such write calls
here.
As of this commit, on an Intel 2687W v3, the serialization
benchmarks take:
test routing::network_graph::benches::read_network_graph ... bench: 2,039,451,296 ns/iter (+/- 4,329,821)
test routing::network_graph::benches::write_network_graph ... bench: 166,685,412 ns/iter (+/- 352,537)
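A minimal sketch of what the generated overrides amount to, using a
hypothetical stand-in trait and stub types rather than the real macros:

    // Hypothetical, minimal trait used only for this sketch.
    trait SerializedLength {
        fn serialized_length(&self) -> usize;
    }

    struct PublicKeyStub;              // always 33 bytes on the wire
    struct PaymentHashStub([u8; 32]);  // always 32 bytes

    impl SerializedLength for PublicKeyStub {
        fn serialized_length(&self) -> usize { 33 }
    }
    impl SerializedLength for PaymentHashStub {
        fn serialized_length(&self) -> usize { self.0.len() }
    }

    // The override for a larger struct sums the fields' own
    // serialized_length() calls instead of re-serializing the whole
    // object through LengthCalculatingWriter (which would ignore any
    // per-field overrides).
    struct HtlcStub { payee: PublicKeyStub, payment_hash: PaymentHashStub }
    impl SerializedLength for HtlcStub {
        fn serialized_length(&self) -> usize {
            self.payee.serialized_length() + self.payment_hash.serialized_length()
        }
    }

    fn main() {
        let htlc = HtlcStub { payee: PublicKeyStub, payment_hash: PaymentHashStub([0; 32]) };
        assert_eq!(htlc.serialized_length(), 65);
    }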
Matt Corallo [Sat, 29 May 2021 18:24:16 +0000 (18:24 +0000)]
Avoid calling libsecp serialization fns when calculating length
When writing out libsecp256k1 objects during serialization in a
TLV, we potentially calculate the TLV length twice before
performing the actual serialization (once when calculating the
total TLV-stream length and once when calculating the length of the
secp256k1-object-containing TLV). Because the lengths of secp256k1
objects are constant, we'd ideally like LLVM to entirely optimize
out those calls and simply know the expected length. However,
without cross-language LTO, there is no way for LLVM to verify that
there are no side-effects of the calls to libsecp256k1, leaving
LLVM with no way to optimize them out.
This commit adds a new method to `Writeable` which returns the
length of an object once serialized. It is implemented by default
using `LengthCalculatingWriter` (which LLVM generally optimizes out
for Rust objects) and overrides it for libsecp256k1 objects.
As of this commit, on an Intel 2687W v3, the serialization
benchmarks take:
test routing::network_graph::benches::read_network_graph ... bench: 2,035,402,164 ns/iter (+/- 1,855,357)
test routing::network_graph::benches::write_network_graph ... bench: 308,235,267 ns/iter (+/- 140,202)
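A minimal sketch of the new method, assuming simplified stand-ins for
`Writeable` and `LengthCalculatingWriter` (the real trait differs):

    use std::io::{self, Write};

    // Counting writer: in-memory, so its writes never fail.
    struct LengthCalculatingWriter(usize);
    impl Write for LengthCalculatingWriter {
        fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
            self.0 += buf.len();
            Ok(buf.len())
        }
        fn flush(&mut self) -> io::Result<()> { Ok(()) }
    }

    trait Writeable {
        fn write<W: Write>(&self, w: &mut W) -> io::Result<()>;

        // Default: serialize into the counting writer, which LLVM can
        // usually optimize out for plain Rust objects. A failure here
        // indicates a bug in `write`, hence the expect().
        fn serialized_length(&self) -> usize {
            let mut len_calc = LengthCalculatingWriter(0);
            self.write(&mut len_calc).expect("in-memory writes never fail");
            len_calc.0
        }
    }

    // A compressed-public-key-like object that is always 33 bytes when
    // serialized: the override returns the constant directly, avoiding
    // an (opaque-to-LLVM) serialization call just to learn the length.
    struct CompressedPubKeyStub([u8; 33]);
    impl Writeable for CompressedPubKeyStub {
        fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
            w.write_all(&self.0)
        }
        fn serialized_length(&self) -> usize { 33 }
    }

    fn main() {
        assert_eq!(CompressedPubKeyStub([2; 33]).serialized_length(), 33);
    }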
Matt Corallo [Fri, 28 May 2021 01:40:22 +0000 (01:40 +0000)]
Drop byte_utils in favor of native `to/from_be_bytes` methods
Now that our MSRV supports the native methods, we have no need
for the helpers anymore. Because LLVM was already matching our
byte_utils methods as byteswap functions, this should have no
impact on generated (optimized) code.
This removes most of the byte_utils usage, though some remains to
keep the patch size reasonable.
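For reference, the native methods now used directly in place of the
hand-rolled byte_utils helpers:

    fn main() {
        // Serialize a u64 as big-endian bytes and read it back, as the
        // byte_utils helpers used to do by hand.
        let bytes: [u8; 8] = 0x0102030405060708u64.to_be_bytes();
        assert_eq!(bytes, [1, 2, 3, 4, 5, 6, 7, 8]);
        assert_eq!(u64::from_be_bytes(bytes), 0x0102030405060708);
    }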
Matt Corallo [Fri, 28 May 2021 14:16:20 +0000 (14:16 +0000)]
Add bench profiles to Cargo.toml to force codegen-units=1
This makes a small difference for NetworkGraph deserialization
as it enables more inlining across different files, hopefully
better matching user performance as well.
As of this commit, on an Intel 2687W v3, the serialization
benchmarks take:
test routing::network_graph::benches::read_network_graph ... bench: 2,037,875,071 ns/iter (+/- 760,370)
test routing::network_graph::benches::write_network_graph ... bench: 320,561,557 ns/iter (+/- 176,343)
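Roughly the shape of the profile section this adds to Cargo.toml (the
values shown here are illustrative, not necessarily the exact settings
committed):

    [profile.bench]
    opt-level = 3
    codegen-units = 1
    lto = true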
Matt Corallo [Wed, 21 Apr 2021 20:47:24 +0000 (20:47 +0000)]
Correct Channel outbound HTLC serialization
Channel serialization should happen "as if
remove_uncommitted_htlcs_and_mark_paused had just been called".
This is true for the most part, but outbound RemoteRemoved HTLCs
were being serialized as normal, even though
`remove_uncommitted_htlcs_and_mark_paused` resets them to
`Committed`.
This led to a bug identified by the `chanmon_consistency_target`
fuzzer wherein, if we receive an update_*_htlc message but not the
corresponding commitment_signed prior to a serialization roundtrip,
we'd force-close the channel due to the peer "attempting to
fail/claim an HTLC which was already failed/claimed".
Matt Corallo [Wed, 21 Apr 2021 17:03:57 +0000 (17:03 +0000)]
[fuzz] Expand chanmon_consistency to do single message delivery
While trying to debug the issue ultimately tracked down to a
`PeerHandler` locking bug in #891, the ability to deliver only
individual messages at a time in chanmon_consistency looked
important. Specifically, it initially appeared there may be a race
when an update_add_htlc was delivered, then a node sent a payment,
and only after that, the corresponding commitment-signed was
delivered.
This commit adds such an ability, greatly expanding the potential
for chanmon_consistency to identify channel state machine bugs.
Matt Corallo [Sun, 30 May 2021 17:04:21 +0000 (17:04 +0000)]
Add new `(C-not implementable)` tag on `EventsProvider`
This makes it so that users cannot usefully implement their own
`EventsProvider`, which would require substantial new logic in the
bindings generator (for generic methods). In the case of
`EventsProvider`, because there are no Rust methods which accept an
`EventsProvider` as an argument, this is perfectly OK as the
generated code would be entirely unused anyway.
Matt Corallo [Wed, 19 May 2021 21:47:42 +0000 (21:47 +0000)]
Delay broadcast of PackageTemplate packages until their locktime
This stores transaction templates temporarily until their locktime
is reached, avoiding broadcasting (or RBF bumping) transactions
prior to their locktime. For those broadcasting transactions
(potentially indirectly) via Bitcoin Core RPC, this ensures no
automated rebroadcast of transactions on the client side is
required to get transactions confirmed.
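A minimal sketch of the gating logic, with a hypothetical stub type and
fields standing in for the real PackageTemplate:

    struct PackageTemplateStub { locktime: u32, claim_id: u64 }

    // Split claims into those whose locktime has been reached (broadcast
    // now) and those to hold back and retry on a later block.
    fn broadcastable_now(
        packages: Vec<PackageTemplateStub>,
        current_height: u32,
    ) -> (Vec<PackageTemplateStub>, Vec<PackageTemplateStub>) {
        packages.into_iter().partition(|pkg| pkg.locktime <= current_height)
    }

    fn main() {
        let packages = vec![
            PackageTemplateStub { locktime: 100, claim_id: 1 },
            PackageTemplateStub { locktime: 150, claim_id: 2 },
        ];
        let (now, held) = broadcastable_now(packages, 120);
        assert_eq!(now.len(), 1);
        assert_eq!(held[0].claim_id, 2);
    }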
Matt Corallo [Wed, 26 May 2021 15:47:29 +0000 (15:47 +0000)]
Add assertions to ensure we don't use an invalid package_amount
This somewhat cleans up the public API of PackageSolvingData to
make it harder to get an invalid amount and use it, adding further
debug assertions to check it at test time.
Matt Corallo [Tue, 25 May 2021 21:18:30 +0000 (21:18 +0000)]
Add a macro which implements Readable/Writeable using TLVs only
This also includes a `VecWriteWrapper` and `VecReadWrapper` which
implement serialization for any `Readable`/`Writeable` type that is
in a Vec. We do this instead of implementing `Readable`/`Writeable`
directly as there isn't always a universally-defined way to serialize
a Vec, and this makes things more explicit.
Finally, this tweaks the existing macros (and the new macros) to
support a trailing `,` after a list, e.g.
`write_tlv_fields!(stream, {(0, a),}, {});`, whereas previously the
trailing `,` after the `(0, a)` would be a compile error.
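A toy illustration (not the real macro) of how an optional trailing `,`
in the field list can be accepted via a `$(,)?` matcher:

    // Toy macro: collects (type, value) pairs into a Vec, accepting an
    // optional trailing comma after the list.
    macro_rules! write_fields {
        ($stream: expr, { $(($t: expr, $field: expr)),* $(,)? }) => {{
            $(
                $stream.push(($t, $field));
            )*
        }};
    }

    fn main() {
        let mut stream: Vec<(u8, u64)> = Vec::new();
        let a = 42u64;
        // Both forms compile; previously the trailing `,` was an error.
        write_fields!(stream, { (0, a) });
        write_fields!(stream, { (0, a), });
        assert_eq!(stream, vec![(0, 42), (0, 42)]);
    }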
Jeffrey Czyz [Thu, 27 May 2021 15:53:14 +0000 (08:53 -0700)]
Cache socket address in HttpClient for reconnect
If the HttpClient attempts to reconnect to a bitcoind that is no longer
running, the client fails to get the address from the stream. Cache the
address when connecting to prevent this.
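A minimal sketch of the approach, using a hypothetical stub rather than
the real HttpClient:

    use std::io;
    use std::net::{SocketAddr, TcpStream, ToSocketAddrs};

    // Remember the address we connected to so a reconnect does not have
    // to ask the (possibly dead) stream for its peer address.
    struct HttpClientStub {
        address: SocketAddr,
        stream: TcpStream,
    }

    impl HttpClientStub {
        fn connect<A: ToSocketAddrs>(endpoint: A) -> io::Result<Self> {
            let address = endpoint
                .to_socket_addrs()?
                .next()
                .ok_or_else(|| io::Error::new(io::ErrorKind::InvalidInput, "could not resolve"))?;
            let stream = TcpStream::connect(address)?;
            Ok(Self { address, stream })
        }

        fn reconnect(&mut self) -> io::Result<()> {
            // Drop the old connection and dial the cached address rather
            // than calling peer_addr() on a stream whose remote is gone.
            let _ = self.stream.shutdown(std::net::Shutdown::Both);
            self.stream = TcpStream::connect(self.address)?;
            Ok(())
        }
    }

    fn main() {
        // Attempt to connect to a local RPC port; fails harmlessly if
        // nothing is listening.
        match HttpClientStub::connect("127.0.0.1:8332") {
            Ok(mut client) => { let _ = client.reconnect(); }
            Err(e) => eprintln!("connect failed: {}", e),
        }
    }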
Jeffrey Czyz [Wed, 26 May 2021 17:57:39 +0000 (10:57 -0700)]
Parse RPC errors as JSON content
Bitcoin Core's JSON RPC server returns errors as HTTP error responses
with JSON content in the body. Parse this content as JSON to give a more
meaningful error. Otherwise, "binary" is given because the content
contains ASCII control characters.
Jeffrey Czyz [Wed, 26 May 2021 17:51:16 +0000 (10:51 -0700)]
Define an HttpError for returning error contents
Return an HTTP error response as a status code and contents. This allows
clients to interpret the response as desired (e.g., the contents as a
JSON-formatted error).
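A sketch of the shape of such an error type (names and fields here are
illustrative, not necessarily the exact definition):

    // Surface the HTTP status and raw body so callers can, e.g., parse
    // the body as a JSON-RPC error object.
    #[derive(Debug)]
    struct HttpError {
        status_code: String,
        contents: Vec<u8>,
    }

    impl std::fmt::Display for HttpError {
        fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
            let contents = String::from_utf8_lossy(&self.contents);
            write!(f, "HTTP {}: {}", self.status_code, contents)
        }
    }

    impl std::error::Error for HttpError {}

    fn main() {
        let err = HttpError {
            status_code: "500".to_string(),
            contents: br#"{"error": {"code": -32603, "message": "Internal error"}}"#.to_vec(),
        };
        println!("{}", err);
    }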
Antoine Riard [Fri, 7 May 2021 23:36:50 +0000 (19:36 -0400)]
Introduce PackageTemplate, a replacement for InputMaterial
PackageTemplate aims to replace InputMaterial, introducing a clean
interface for manipulating a wide range of claim types without
OnchainTxHandler being aware of the specific content of each.
Antoine Riard [Tue, 16 Jun 2020 00:11:01 +0000 (20:11 -0400)]
Add package.rs file
Package.rs aims to gather the interfaces used to communicate between
the on-chain channel transaction parser (ChannelMonitor) and the
output-claiming logic (OnchainTxHandler). These interfaces are data
structures, generated per-case by ChannelMonitor and consumed
blindly by OnchainTxHandler.
Matt Corallo [Thu, 6 May 2021 01:15:35 +0000 (01:15 +0000)]
Process announcement_signatures messages in Channel and store sigs
Previously we handled most of the logic of announcement_signatures
in ChannelManager, rather than Channel. This is somewhat unique as
far as our message processing goes, but it also avoided having to
pass the node_secret in to the Channel.
Eventually, we'll move the node_secret behind the signer anyway, so
there isn't much reason for this, and storing the
announcement_signatures-provided signatures in the Channel allows
us to recreate the channel_announcement later for rebroadcast,
which may be useful.
Matt Corallo [Wed, 5 May 2021 22:56:42 +0000 (22:56 +0000)]
Append backwards-compat TLVs to serialization of larger structs
Currently our serialization is very compact, and contains version
numbers to indicate which versions the code can read a given
serialized struct. However, if you want to add a new field without
needlessly breaking the ability of previous versions of the code to
read the struct, there is not a good way to do so.
This adds dummy, currently empty, TLVs to the major structs we
serialize out for users, providing an easy place to put new
optional fields without breaking previous versions.
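A minimal sketch of the pattern with a simplified, hypothetical struct
and TLV encoding (the real format differs): append a trailing TLV
stream, empty today, so new optional fields have somewhere to live
that older readers simply skip.

    use std::io::{self, Read, Write};

    struct ThingStub { value: u64 }

    fn write_thing<W: Write>(thing: &ThingStub, w: &mut W) -> io::Result<()> {
        // Legacy fixed fields first, unchanged for old readers.
        w.write_all(&thing.value.to_be_bytes())?;
        // Trailing length-prefixed TLV stream; currently no fields.
        let tlv_stream: Vec<u8> = Vec::new();
        w.write_all(&(tlv_stream.len() as u16).to_be_bytes())?;
        w.write_all(&tlv_stream)
    }

    fn read_thing<R: Read>(r: &mut R) -> io::Result<ThingStub> {
        let mut value_bytes = [0u8; 8];
        r.read_exact(&mut value_bytes)?;
        // Read and (for now) ignore the trailing TLV stream; a future
        // version can pull new optional fields out of it.
        let mut len_bytes = [0u8; 2];
        r.read_exact(&mut len_bytes)?;
        let mut tlv_stream = vec![0u8; u16::from_be_bytes(len_bytes) as usize];
        r.read_exact(&mut tlv_stream)?;
        Ok(ThingStub { value: u64::from_be_bytes(value_bytes) })
    }

    fn main() -> io::Result<()> {
        let mut buf = Vec::new();
        write_thing(&ThingStub { value: 7 }, &mut buf)?;
        assert_eq!(read_thing(&mut &buf[..])?.value, 7);
        Ok(())
    }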
Matt Corallo [Tue, 9 Feb 2021 20:22:44 +0000 (15:22 -0500)]
Read monitors from our KeysInterface in chanmon_consistency_fuzz
If the fuzz target is failing due to a channel force-close, the
immediately-visible error is that we're signing a stale state. This
is because the ChannelMonitorUpdateStep::ChannelForceClosed event
results in a signature in the test clone which was deserialized
using an OnlyReadsKeysInterface. Instead, we need to deserialize
using the full KeysInterface instance.
Matt Corallo [Fri, 20 Nov 2020 20:49:53 +0000 (15:49 -0500)]
Stop failing back HTLCs on peer disconnection
Previously, if we got disconnected from a peer while there were
HTLCs pending forwarding in the holding cell, we'd clear them and
fail them all backwards. This is largely fine, but since we now
have support for handling such HTLCs on reconnect, we might as well
not fail them, instead relying on our timeout logic to fail them
backwards if it takes too long to forward them.
Matt Corallo [Wed, 21 Apr 2021 02:37:02 +0000 (02:37 +0000)]
[fuzz] Handle monitor updates during get_and_clear_pending_msg_events
Because we may now generate a monitor update during
get_and_clear_pending_msg_events calls, we need to ensure we
re-serialize the relevant ChannelManager before attempting to
reload it, if such a monitor update occurred.
Matt Corallo [Thu, 18 Mar 2021 22:03:30 +0000 (18:03 -0400)]
Free holding cell on monitor-updating-restored when there's no upd
If there are no pending channel update messages when monitor updating
is restored (though there may be an RAA to send), and we're
connected to our peer and not awaiting a remote RAA, we need to
free anything in our holding cell.
However, we don't want to immediately free the holding cell during
channel_monitor_updated as it presents a somewhat bug-prone case of
reentrancy:
a) it would re-enter user code around a monitor update while being
called from user code notifying us of the same monitor being
updated, making deadlocks very likely (in fact, our fuzzers
would have a bug here!),
b) the re-entrancy only occurs in a very rare case, making it
likely users will not hit it in testing, only deadlocking in
production.
Thus, we add a holding-cell-free pass over each channel in
get_and_clear_pending_msg_events. This fits in nicely with the
anticipated usage - users almost certainly need to process new
network messages immediately after monitor updating has been
restored to send messages which were not sent originally when the
monitor updating was paused.
Without this, chanmon_fail_consistency was able to find a stuck
condition where we sit on an HTLC failure in our holding cell and
don't ever handle it (at least until we have other actions to take
which empty the holding cell).
Matt Corallo [Thu, 18 Mar 2021 22:23:05 +0000 (18:23 -0400)]
DRY ChannelError conversion macros
Both break_chan_entry and try_chan_entry do almost identical work,
only differing in whether they `break` or `return` in response to an
error. Because we will now also need an option to do neither, we
break out the common code into a shared `convert_chan_err` macro.
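A toy sketch of the shape of this refactor, with simplified types and
macros that only mirror the break-vs-return split (not the real
channel-handling code):

    #[derive(Debug)]
    enum ChannelError { Ignore(String), Close(String) }

    // The shared conversion body, factored out of both entry macros.
    macro_rules! convert_chan_err {
        ($err: expr) => {
            match $err {
                ChannelError::Ignore(msg) => format!("ignoring channel error: {}", msg),
                ChannelError::Close(msg) => format!("force-closing channel: {}", msg),
            }
        };
    }

    // `return` flavor: bail out of the whole function on error.
    macro_rules! try_chan_entry {
        ($res: expr) => {
            match $res {
                Ok(val) => val,
                Err(e) => return Err(convert_chan_err!(e)),
            }
        };
    }

    // `break` flavor: stop the enclosing loop, recording the error.
    macro_rules! break_chan_entry {
        ($res: expr, $err_slot: ident) => {
            match $res {
                Ok(val) => val,
                Err(e) => {
                    $err_slot = Some(convert_chan_err!(e));
                    break
                }
            }
        };
    }

    fn total_or_err(results: Vec<Result<u32, ChannelError>>) -> Result<u32, String> {
        let mut total = 0;
        for res in results {
            total += try_chan_entry!(res);
        }
        Ok(total)
    }

    fn total_until_err(results: Vec<Result<u32, ChannelError>>) -> (u32, Option<String>) {
        let mut total = 0;
        let mut error = None;
        for res in results {
            total += break_chan_entry!(res, error);
        }
        (total, error)
    }

    fn main() {
        let all_ok: Vec<Result<u32, ChannelError>> = vec![Ok(1), Ok(2)];
        assert_eq!(total_or_err(all_ok), Ok(3));

        let with_err = vec![Ok(1), Err(ChannelError::Close("peer misbehaved".into())), Ok(3)];
        let (total, err) = total_until_err(with_err);
        assert_eq!(total, 1);
        assert!(err.is_some());

        let ignored = vec![Err(ChannelError::Ignore("benign".into()))];
        assert!(total_or_err(ignored).is_err());
    }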
Matt Corallo [Tue, 24 Nov 2020 00:12:31 +0000 (19:12 -0500)]
[fuzz] Allow SendAnnouncementSigs events in chanmon_consistency
Because the channel restoration code for peer reconnection and for
channel monitor updating was merged, we now sometimes generate
(somewhat spurious) announcement signatures when restoring channel
monitor updating. This should not result in a fuzzing failure.
Matt Corallo [Tue, 24 Nov 2020 00:12:19 +0000 (19:12 -0500)]
[fuzz] Be more strict about msg events in chanmon_consistency
This fails chanmon_consistency on IgnoreError error events and on
messages left over to be sent to a just-disconnected peer, which
should have been drained.
These should never appear, so we treat them as a fuzzer failure case.