When refactoring, structure your PR to make it easy to review and don't
hesitate to split it into multiple small, focused PRs.
-The Minimum Supported Rust Version (MSRV) currently is 1.41.1 (enforced by
-our GitHub Actions). Also, the compatibility for LDK object serialization is
-currently ensured back to and including crate version 0.0.99 (see the
-[changelog](CHANGELOG.md)).
+The Minimum Supported Rust Version (MSRV) currently is 1.48.0 (enforced by
+our GitHub Actions). We support reading serialized LDK objects written by any
+version of LDK 0.0.99 and above. We support LDK versions 0.0.113 and above
+reading serialized LDK objects written by modern LDK. Any expected issues with
+upgrades or downgrades should be mentioned in the [changelog](CHANGELOG.md).
Commits should cover both the issue fixed and the solution's rationale. These
[guidelines](https://chris.beams.io/posts/git-commit/) should be kept in mind.
# Fuzzing
-Fuzz tests generate a ton of random parameter arguments to the program and then validate that none cause it to crash.
+Fuzz tests generate a ton of random parameter arguments to the program and then validate that none
+cause it to crash.
## How does it work?
-Typically, Travis CI will run `travis-fuzz.sh` on one of the environments the automated tests are configured for.
-This is the most time-consuming component of the continuous integration workflow, so it is recommended that you detect
-issues locally, and Travis merely acts as a sanity check. Fuzzing is further only effective with
-a lot of CPU time, indicating that if crash scenarios are discovered on Travis with its low
-runtime constraints, the crash is caused relatively easily.
+Typically, CI will run `ci-fuzz.sh` on one of the environments the automated tests are
+configured for. Fuzzing is only effective with a lot of CPU time, so if a crash
+scenario is discovered on CI despite its low runtime constraints, the crash is likely
+triggered relatively easily.
## How do I run fuzz tests locally?
-You typically won't need to run the entire combination of different fuzzing tools. For local execution, `honggfuzz`
-should be more than sufficient.
+We support multiple fuzzing engines such as `honggfuzz`, `libFuzzer` and `AFL`. You typically won't
+need to run the entire suite of different fuzzing tools. For local execution, `honggfuzz` should be
+more than sufficient.
### Setup
-
+#### Honggfuzz
To install `honggfuzz`, simply run
```shell
cargo install --force honggfuzz --version "0.5.52"
```
+#### cargo-fuzz / libFuzzer
+To install `cargo-fuzz`, simply run
+
+```shell
+cargo update
+cargo install --force cargo-fuzz
+```
+
### Execution
-To run the Hongg fuzzer, do
+#### Honggfuzz
+To run fuzzing using `honggfuzz`, do
```shell
export CPU_COUNT=1 # replace as needed
(Or, for a prettier output, replace the last line with `cargo --color always hfuzz run $TARGET`.)
+#### cargo-fuzz / libFuzzer
+To run fuzzing using `cargo-fuzz / libFuzzer`, run
+
+```shell
+rustup install nightly # Note: libFuzzer requires a nightly version of rust.
+cargo +nightly fuzz run --features "libfuzzer_fuzz" msg_ping_target
+```
+Note: If you encounter a `SIGKILL` during a run or build, check the kernel logs for OOM events and
+consider increasing the RAM available to your VM.
+
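If builds or runs die with a `SIGKILL`, a quick way to look for OOM-killer activity is to grep the
kernel log. This is only a sketch: log locations vary by distro, and `dmesg` may require elevated
privileges on some systems.

```shell
# Look for OOM-killer messages in the kernel log (may need root on some systems).
if dmesg 2>/dev/null | grep -qi "oom"; then
    echo "OOM events found in kernel log"
else
    echo "no OOM events visible (or dmesg requires elevated privileges)"
fi
```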
+If you wish to just build the `libFuzzer` fuzzing binaries without running them:
+```shell
+cargo +nightly fuzz build --features "libfuzzer_fuzz" msg_ping_target
+# Generates binary artifact in path ./target/aarch64-unknown-linux-gnu/release/msg_ping_target
+# Exact path depends on your system architecture.
+```
+You can upload the build artifact generated above to `ClusterFuzz` for distributed fuzzing.
+
+### List Fuzzing Targets
To see a list of available fuzzing targets, run:
```shell
ls ./src/bin/
```
-## A fuzz test failed on Travis, what do I do?
+## A fuzz test failed, what do I do?
-You're trying to create a PR, but need to find the underlying cause of that pesky fuzz failure blocking the merge?
+You're trying to create a PR, but need to find the underlying cause of that pesky fuzz failure
+blocking the merge?
Worry not, for this is easily traced.
-If your Travis output log looks like this:
+If your output log looks like this:
```
Size:639 (i,b,hw,ed,ip,cmp): 0/0/0/0/0/1, Tot:0/0/0/2036/5/28604
… # a lot of lines in between
-<0x0000555555565559> [func:UNKNOWN file: line:0 module:/home/travis/build/rust-bitcoin/rust-lightning/fuzz/hfuzz_target/x86_64-unknown-linux-gnu/release/full_stack_target]
+<0x0000555555565559> [func:UNKNOWN file: line:0 module:./rust-lightning/fuzz/hfuzz_target/x86_64-unknown-linux-gnu/release/full_stack_target]
<0x0000000000000000> [func:UNKNOWN file: line:0 module:UNKNOWN]
=====================================================================
2d3136383734090101010101010101010101010101010101010101010101
010101010100040101010101010101010101010103010101010100010101
0069d07c319a4961
-The command "if [ "$(rustup show | grep default | grep stable)" != "" ]; then cd fuzz && cargo test --verbose && ./travis-fuzz.sh; fi" exited with 1.
+The command "if [ "$(rustup show | grep default | grep stable)" != "" ]; then cd fuzz && cargo test --verbose && ./ci-fuzz.sh; fi" exited with 1.
```
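The hex dump at the bottom of such a log is the raw fuzz input that triggered the crash. As a
sketch (the `test_cases/full_stack` path and file name are illustrative), you can decode it back
into a binary test case; the generated fuzz targets' `run_test_cases` test picks up any files
placed under `test_cases/<target_name>/`:

```shell
# Decode the printed hex dump back into the raw bytes honggfuzz fed the target.
mkdir -p test_cases/full_stack
printf '%s' \
    "2d3136383734090101010101010101010101010101010101010101010101" \
    "010101010100040101010101010101010101010103010101010100010101" \
    "0069d07c319a4961" \
  | xxd -r -p > test_cases/full_stack/crash_input
wc -c < test_cases/full_stack/crash_input  # size in bytes of the reconstructed input
```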
Note that the penultimate stack trace line ends in `release/full_stack_target]`. That indicates that
GEN_TEST msg_tx_init_rbf msg_targets::
GEN_TEST msg_tx_ack_rbf msg_targets::
GEN_TEST msg_tx_abort msg_targets::
+
+GEN_TEST msg_stfu msg_targets::
+
+GEN_TEST msg_splice msg_targets::
+GEN_TEST msg_splice_ack msg_targets::
+GEN_TEST msg_splice_locked msg_targets::
--- /dev/null
+// This file is Copyright its original authors, visible in version control
+// history.
+//
+// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
+// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
+// You may not use this file except in accordance with one or both of these
+// licenses.
+
+// This file is auto-generated by gen_target.sh based on target_template.txt
+// To modify it, modify target_template.txt and run gen_target.sh instead.
+
+#![cfg_attr(feature = "libfuzzer_fuzz", no_main)]
+
+#[cfg(not(fuzzing))]
+compile_error!("Fuzz targets need cfg=fuzzing");
+
+extern crate lightning_fuzz;
+use lightning_fuzz::msg_targets::msg_splice_ack::*;
+
+#[cfg(feature = "afl")]
+#[macro_use] extern crate afl;
+#[cfg(feature = "afl")]
+fn main() {
+ fuzz!(|data| {
+ msg_splice_ack_run(data.as_ptr(), data.len());
+ });
+}
+
+#[cfg(feature = "honggfuzz")]
+#[macro_use] extern crate honggfuzz;
+#[cfg(feature = "honggfuzz")]
+fn main() {
+ loop {
+ fuzz!(|data| {
+ msg_splice_ack_run(data.as_ptr(), data.len());
+ });
+ }
+}
+
+#[cfg(feature = "libfuzzer_fuzz")]
+#[macro_use] extern crate libfuzzer_sys;
+#[cfg(feature = "libfuzzer_fuzz")]
+fuzz_target!(|data: &[u8]| {
+ msg_splice_ack_run(data.as_ptr(), data.len());
+});
+
+#[cfg(feature = "stdin_fuzz")]
+fn main() {
+ use std::io::Read;
+
+ let mut data = Vec::with_capacity(8192);
+ std::io::stdin().read_to_end(&mut data).unwrap();
+ msg_splice_ack_run(data.as_ptr(), data.len());
+}
+
+#[test]
+fn run_test_cases() {
+ use std::fs;
+ use std::io::Read;
+ use lightning_fuzz::utils::test_logger::StringBuffer;
+
+ use std::sync::{atomic, Arc};
+ {
+ let data: Vec<u8> = vec![0];
+ msg_splice_ack_run(data.as_ptr(), data.len());
+ }
+ let mut threads = Vec::new();
+ let threads_running = Arc::new(atomic::AtomicUsize::new(0));
+ if let Ok(tests) = fs::read_dir("test_cases/msg_splice_ack") {
+ for test in tests {
+ let mut data: Vec<u8> = Vec::new();
+ let path = test.unwrap().path();
+ fs::File::open(&path).unwrap().read_to_end(&mut data).unwrap();
+ threads_running.fetch_add(1, atomic::Ordering::AcqRel);
+
+ let thread_count_ref = Arc::clone(&threads_running);
+ let main_thread_ref = std::thread::current();
+ threads.push((path.file_name().unwrap().to_str().unwrap().to_string(),
+ std::thread::spawn(move || {
+ let string_logger = StringBuffer::new();
+
+ let panic_logger = string_logger.clone();
+ let res = if ::std::panic::catch_unwind(move || {
+ msg_splice_ack_test(&data, panic_logger);
+ }).is_err() {
+ Some(string_logger.into_string())
+ } else { None };
+ thread_count_ref.fetch_sub(1, atomic::Ordering::AcqRel);
+ main_thread_ref.unpark();
+ res
+ })
+ ));
+ while threads_running.load(atomic::Ordering::Acquire) > 32 {
+ std::thread::park();
+ }
+ }
+ }
+ let mut failed_outputs = Vec::new();
+ for (test, thread) in threads.drain(..) {
+ if let Some(output) = thread.join().unwrap() {
+ println!("\nOutput of {}:\n{}\n", test, output);
+ failed_outputs.push(test);
+ }
+ }
+ if !failed_outputs.is_empty() {
+ println!("Test cases which failed: ");
+ for case in failed_outputs {
+ println!("{}", case);
+ }
+ panic!();
+ }
+}
--- /dev/null
+// This file is Copyright its original authors, visible in version control
+// history.
+//
+// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
+// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
+// You may not use this file except in accordance with one or both of these
+// licenses.
+
+// This file is auto-generated by gen_target.sh based on target_template.txt
+// To modify it, modify target_template.txt and run gen_target.sh instead.
+
+#![cfg_attr(feature = "libfuzzer_fuzz", no_main)]
+
+#[cfg(not(fuzzing))]
+compile_error!("Fuzz targets need cfg=fuzzing");
+
+extern crate lightning_fuzz;
+use lightning_fuzz::msg_targets::msg_splice_locked::*;
+
+#[cfg(feature = "afl")]
+#[macro_use] extern crate afl;
+#[cfg(feature = "afl")]
+fn main() {
+ fuzz!(|data| {
+ msg_splice_locked_run(data.as_ptr(), data.len());
+ });
+}
+
+#[cfg(feature = "honggfuzz")]
+#[macro_use] extern crate honggfuzz;
+#[cfg(feature = "honggfuzz")]
+fn main() {
+ loop {
+ fuzz!(|data| {
+ msg_splice_locked_run(data.as_ptr(), data.len());
+ });
+ }
+}
+
+#[cfg(feature = "libfuzzer_fuzz")]
+#[macro_use] extern crate libfuzzer_sys;
+#[cfg(feature = "libfuzzer_fuzz")]
+fuzz_target!(|data: &[u8]| {
+ msg_splice_locked_run(data.as_ptr(), data.len());
+});
+
+#[cfg(feature = "stdin_fuzz")]
+fn main() {
+ use std::io::Read;
+
+ let mut data = Vec::with_capacity(8192);
+ std::io::stdin().read_to_end(&mut data).unwrap();
+ msg_splice_locked_run(data.as_ptr(), data.len());
+}
+
+#[test]
+fn run_test_cases() {
+ use std::fs;
+ use std::io::Read;
+ use lightning_fuzz::utils::test_logger::StringBuffer;
+
+ use std::sync::{atomic, Arc};
+ {
+ let data: Vec<u8> = vec![0];
+ msg_splice_locked_run(data.as_ptr(), data.len());
+ }
+ let mut threads = Vec::new();
+ let threads_running = Arc::new(atomic::AtomicUsize::new(0));
+ if let Ok(tests) = fs::read_dir("test_cases/msg_splice_locked") {
+ for test in tests {
+ let mut data: Vec<u8> = Vec::new();
+ let path = test.unwrap().path();
+ fs::File::open(&path).unwrap().read_to_end(&mut data).unwrap();
+ threads_running.fetch_add(1, atomic::Ordering::AcqRel);
+
+ let thread_count_ref = Arc::clone(&threads_running);
+ let main_thread_ref = std::thread::current();
+ threads.push((path.file_name().unwrap().to_str().unwrap().to_string(),
+ std::thread::spawn(move || {
+ let string_logger = StringBuffer::new();
+
+ let panic_logger = string_logger.clone();
+ let res = if ::std::panic::catch_unwind(move || {
+ msg_splice_locked_test(&data, panic_logger);
+ }).is_err() {
+ Some(string_logger.into_string())
+ } else { None };
+ thread_count_ref.fetch_sub(1, atomic::Ordering::AcqRel);
+ main_thread_ref.unpark();
+ res
+ })
+ ));
+ while threads_running.load(atomic::Ordering::Acquire) > 32 {
+ std::thread::park();
+ }
+ }
+ }
+ let mut failed_outputs = Vec::new();
+ for (test, thread) in threads.drain(..) {
+ if let Some(output) = thread.join().unwrap() {
+ println!("\nOutput of {}:\n{}\n", test, output);
+ failed_outputs.push(test);
+ }
+ }
+ if !failed_outputs.is_empty() {
+ println!("Test cases which failed: ");
+ for case in failed_outputs {
+ println!("{}", case);
+ }
+ panic!();
+ }
+}
--- /dev/null
+// This file is Copyright its original authors, visible in version control
+// history.
+//
+// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
+// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
+// You may not use this file except in accordance with one or both of these
+// licenses.
+
+// This file is auto-generated by gen_target.sh based on target_template.txt
+// To modify it, modify target_template.txt and run gen_target.sh instead.
+
+#![cfg_attr(feature = "libfuzzer_fuzz", no_main)]
+
+#[cfg(not(fuzzing))]
+compile_error!("Fuzz targets need cfg=fuzzing");
+
+extern crate lightning_fuzz;
+use lightning_fuzz::msg_targets::msg_splice::*;
+
+#[cfg(feature = "afl")]
+#[macro_use] extern crate afl;
+#[cfg(feature = "afl")]
+fn main() {
+ fuzz!(|data| {
+ msg_splice_run(data.as_ptr(), data.len());
+ });
+}
+
+#[cfg(feature = "honggfuzz")]
+#[macro_use] extern crate honggfuzz;
+#[cfg(feature = "honggfuzz")]
+fn main() {
+ loop {
+ fuzz!(|data| {
+ msg_splice_run(data.as_ptr(), data.len());
+ });
+ }
+}
+
+#[cfg(feature = "libfuzzer_fuzz")]
+#[macro_use] extern crate libfuzzer_sys;
+#[cfg(feature = "libfuzzer_fuzz")]
+fuzz_target!(|data: &[u8]| {
+ msg_splice_run(data.as_ptr(), data.len());
+});
+
+#[cfg(feature = "stdin_fuzz")]
+fn main() {
+ use std::io::Read;
+
+ let mut data = Vec::with_capacity(8192);
+ std::io::stdin().read_to_end(&mut data).unwrap();
+ msg_splice_run(data.as_ptr(), data.len());
+}
+
+#[test]
+fn run_test_cases() {
+ use std::fs;
+ use std::io::Read;
+ use lightning_fuzz::utils::test_logger::StringBuffer;
+
+ use std::sync::{atomic, Arc};
+ {
+ let data: Vec<u8> = vec![0];
+ msg_splice_run(data.as_ptr(), data.len());
+ }
+ let mut threads = Vec::new();
+ let threads_running = Arc::new(atomic::AtomicUsize::new(0));
+ if let Ok(tests) = fs::read_dir("test_cases/msg_splice") {
+ for test in tests {
+ let mut data: Vec<u8> = Vec::new();
+ let path = test.unwrap().path();
+ fs::File::open(&path).unwrap().read_to_end(&mut data).unwrap();
+ threads_running.fetch_add(1, atomic::Ordering::AcqRel);
+
+ let thread_count_ref = Arc::clone(&threads_running);
+ let main_thread_ref = std::thread::current();
+ threads.push((path.file_name().unwrap().to_str().unwrap().to_string(),
+ std::thread::spawn(move || {
+ let string_logger = StringBuffer::new();
+
+ let panic_logger = string_logger.clone();
+ let res = if ::std::panic::catch_unwind(move || {
+ msg_splice_test(&data, panic_logger);
+ }).is_err() {
+ Some(string_logger.into_string())
+ } else { None };
+ thread_count_ref.fetch_sub(1, atomic::Ordering::AcqRel);
+ main_thread_ref.unpark();
+ res
+ })
+ ));
+ while threads_running.load(atomic::Ordering::Acquire) > 32 {
+ std::thread::park();
+ }
+ }
+ }
+ let mut failed_outputs = Vec::new();
+ for (test, thread) in threads.drain(..) {
+ if let Some(output) = thread.join().unwrap() {
+ println!("\nOutput of {}:\n{}\n", test, output);
+ failed_outputs.push(test);
+ }
+ }
+ if !failed_outputs.is_empty() {
+ println!("Test cases which failed: ");
+ for case in failed_outputs {
+ println!("{}", case);
+ }
+ panic!();
+ }
+}
--- /dev/null
+// This file is Copyright its original authors, visible in version control
+// history.
+//
+// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
+// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
+// You may not use this file except in accordance with one or both of these
+// licenses.
+
+// This file is auto-generated by gen_target.sh based on target_template.txt
+// To modify it, modify target_template.txt and run gen_target.sh instead.
+
+#![cfg_attr(feature = "libfuzzer_fuzz", no_main)]
+
+#[cfg(not(fuzzing))]
+compile_error!("Fuzz targets need cfg=fuzzing");
+
+extern crate lightning_fuzz;
+use lightning_fuzz::msg_targets::msg_stfu::*;
+
+#[cfg(feature = "afl")]
+#[macro_use] extern crate afl;
+#[cfg(feature = "afl")]
+fn main() {
+ fuzz!(|data| {
+ msg_stfu_run(data.as_ptr(), data.len());
+ });
+}
+
+#[cfg(feature = "honggfuzz")]
+#[macro_use] extern crate honggfuzz;
+#[cfg(feature = "honggfuzz")]
+fn main() {
+ loop {
+ fuzz!(|data| {
+ msg_stfu_run(data.as_ptr(), data.len());
+ });
+ }
+}
+
+#[cfg(feature = "libfuzzer_fuzz")]
+#[macro_use] extern crate libfuzzer_sys;
+#[cfg(feature = "libfuzzer_fuzz")]
+fuzz_target!(|data: &[u8]| {
+ msg_stfu_run(data.as_ptr(), data.len());
+});
+
+#[cfg(feature = "stdin_fuzz")]
+fn main() {
+ use std::io::Read;
+
+ let mut data = Vec::with_capacity(8192);
+ std::io::stdin().read_to_end(&mut data).unwrap();
+ msg_stfu_run(data.as_ptr(), data.len());
+}
+
+#[test]
+fn run_test_cases() {
+ use std::fs;
+ use std::io::Read;
+ use lightning_fuzz::utils::test_logger::StringBuffer;
+
+ use std::sync::{atomic, Arc};
+ {
+ let data: Vec<u8> = vec![0];
+ msg_stfu_run(data.as_ptr(), data.len());
+ }
+ let mut threads = Vec::new();
+ let threads_running = Arc::new(atomic::AtomicUsize::new(0));
+ if let Ok(tests) = fs::read_dir("test_cases/msg_stfu") {
+ for test in tests {
+ let mut data: Vec<u8> = Vec::new();
+ let path = test.unwrap().path();
+ fs::File::open(&path).unwrap().read_to_end(&mut data).unwrap();
+ threads_running.fetch_add(1, atomic::Ordering::AcqRel);
+
+ let thread_count_ref = Arc::clone(&threads_running);
+ let main_thread_ref = std::thread::current();
+ threads.push((path.file_name().unwrap().to_str().unwrap().to_string(),
+ std::thread::spawn(move || {
+ let string_logger = StringBuffer::new();
+
+ let panic_logger = string_logger.clone();
+ let res = if ::std::panic::catch_unwind(move || {
+ msg_stfu_test(&data, panic_logger);
+ }).is_err() {
+ Some(string_logger.into_string())
+ } else { None };
+ thread_count_ref.fetch_sub(1, atomic::Ordering::AcqRel);
+ main_thread_ref.unpark();
+ res
+ })
+ ));
+ while threads_running.load(atomic::Ordering::Acquire) > 32 {
+ std::thread::park();
+ }
+ }
+ }
+ let mut failed_outputs = Vec::new();
+ for (test, thread) in threads.drain(..) {
+ if let Some(output) = thread.join().unwrap() {
+ println!("\nOutput of {}:\n{}\n", test, output);
+ failed_outputs.push(test);
+ }
+ }
+ if !failed_outputs.is_empty() {
+ println!("Test cases which failed: ");
+ for case in failed_outputs {
+ println!("{}", case);
+ }
+ panic!();
+ }
+}
inner,
state,
disable_revocation_policy_check: false,
+ available: Arc::new(Mutex::new(true)),
})
}
features: $source.init_features(), networks: None, remote_network_address: None
}, false).unwrap();
- $source.create_channel($dest.get_our_node_id(), 100_000, 42, 0, None).unwrap();
+ $source.create_channel($dest.get_our_node_id(), 100_000, 42, 0, None, None).unwrap();
let open_channel = {
let events = $source.get_and_clear_pending_msg_events();
assert_eq!(events.len(), 1);
let their_key = get_pubkey!();
let chan_value = slice_to_be24(get_slice!(3)) as u64;
let push_msat_value = slice_to_be24(get_slice!(3)) as u64;
- if channelmanager.create_channel(their_key, chan_value, push_msat_value, 0, None).is_err() { return; }
+ if channelmanager.create_channel(their_key, chan_value, push_msat_value, 0, None, None).is_err() { return; }
},
6 => {
let mut channels = channelmanager.list_channels();
GEN_TEST lightning::ln::msgs::TxInitRbf test_msg_simple ""
GEN_TEST lightning::ln::msgs::TxAckRbf test_msg_simple ""
GEN_TEST lightning::ln::msgs::TxAbort test_msg_simple ""
+
+GEN_TEST lightning::ln::msgs::Stfu test_msg_simple ""
+
+GEN_TEST lightning::ln::msgs::Splice test_msg_simple ""
+GEN_TEST lightning::ln::msgs::SpliceAck test_msg_simple ""
+GEN_TEST lightning::ln::msgs::SpliceLocked test_msg_simple ""
pub mod msg_tx_init_rbf;
pub mod msg_tx_ack_rbf;
pub mod msg_tx_abort;
+pub mod msg_stfu;
+pub mod msg_splice;
+pub mod msg_splice_ack;
+pub mod msg_splice_locked;
--- /dev/null
+// This file is Copyright its original authors, visible in version control
+// history.
+//
+// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
+// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
+// You may not use this file except in accordance with one or both of these
+// licenses.
+
+// This file is auto-generated by gen_target.sh based on msg_target_template.txt
+// To modify it, modify msg_target_template.txt and run gen_target.sh instead.
+
+use crate::msg_targets::utils::VecWriter;
+use crate::utils::test_logger;
+
+#[inline]
+pub fn msg_splice_test<Out: test_logger::Output>(data: &[u8], _out: Out) {
+ test_msg_simple!(lightning::ln::msgs::Splice, data);
+}
+
+#[no_mangle]
+pub extern "C" fn msg_splice_run(data: *const u8, datalen: usize) {
+ let data = unsafe { std::slice::from_raw_parts(data, datalen) };
+ test_msg_simple!(lightning::ln::msgs::Splice, data);
+}
--- /dev/null
+// This file is Copyright its original authors, visible in version control
+// history.
+//
+// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
+// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
+// You may not use this file except in accordance with one or both of these
+// licenses.
+
+// This file is auto-generated by gen_target.sh based on msg_target_template.txt
+// To modify it, modify msg_target_template.txt and run gen_target.sh instead.
+
+use crate::msg_targets::utils::VecWriter;
+use crate::utils::test_logger;
+
+#[inline]
+pub fn msg_splice_ack_test<Out: test_logger::Output>(data: &[u8], _out: Out) {
+ test_msg_simple!(lightning::ln::msgs::SpliceAck, data);
+}
+
+#[no_mangle]
+pub extern "C" fn msg_splice_ack_run(data: *const u8, datalen: usize) {
+ let data = unsafe { std::slice::from_raw_parts(data, datalen) };
+ test_msg_simple!(lightning::ln::msgs::SpliceAck, data);
+}
--- /dev/null
+// This file is Copyright its original authors, visible in version control
+// history.
+//
+// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
+// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
+// You may not use this file except in accordance with one or both of these
+// licenses.
+
+// This file is auto-generated by gen_target.sh based on msg_target_template.txt
+// To modify it, modify msg_target_template.txt and run gen_target.sh instead.
+
+use crate::msg_targets::utils::VecWriter;
+use crate::utils::test_logger;
+
+#[inline]
+pub fn msg_splice_locked_test<Out: test_logger::Output>(data: &[u8], _out: Out) {
+ test_msg_simple!(lightning::ln::msgs::SpliceLocked, data);
+}
+
+#[no_mangle]
+pub extern "C" fn msg_splice_locked_run(data: *const u8, datalen: usize) {
+ let data = unsafe { std::slice::from_raw_parts(data, datalen) };
+ test_msg_simple!(lightning::ln::msgs::SpliceLocked, data);
+}
--- /dev/null
+// This file is Copyright its original authors, visible in version control
+// history.
+//
+// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
+// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
+// You may not use this file except in accordance with one or both of these
+// licenses.
+
+// This file is auto-generated by gen_target.sh based on msg_target_template.txt
+// To modify it, modify msg_target_template.txt and run gen_target.sh instead.
+
+use crate::msg_targets::utils::VecWriter;
+use crate::utils::test_logger;
+
+#[inline]
+pub fn msg_stfu_test<Out: test_logger::Output>(data: &[u8], _out: Out) {
+ test_msg_simple!(lightning::ln::msgs::Stfu, data);
+}
+
+#[no_mangle]
+pub extern "C" fn msg_stfu_run(data: *const u8, datalen: usize) {
+ let data = unsafe { std::slice::from_raw_parts(data, datalen) };
+ test_msg_simple!(lightning::ln::msgs::Stfu, data);
+}
use bitcoin::secp256k1::schnorr;
use lightning::sign::{Recipient, KeyMaterial, EntropySource, NodeSigner, SignerProvider};
+use lightning::ln::features::InitFeatures;
use lightning::ln::msgs::{self, DecodeError, OnionMessageHandler};
use lightning::ln::script::ShutdownScript;
use lightning::offers::invoice::UnsignedBolt12Invoice;
&keys_manager, &keys_manager, logger, &message_router, &offers_msg_handler,
&custom_msg_handler
);
- let mut pk = [2; 33]; pk[1] = 0xff;
- let peer_node_id_not_used = PublicKey::from_slice(&pk).unwrap();
- onion_messenger.handle_onion_message(&peer_node_id_not_used, &msg);
+
+ let peer_node_id = {
+ let mut secret_bytes = [0; 32];
+ secret_bytes[31] = 2;
+ let secret = SecretKey::from_slice(&secret_bytes).unwrap();
+ PublicKey::from_secret_key(&Secp256k1::signing_only(), &secret)
+ };
+
+ let mut features = InitFeatures::empty();
+ features.set_onion_messages_optional();
+ let init = msgs::Init { features, networks: None, remote_network_address: None };
+
+ onion_messenger.peer_connected(&peer_node_id, &init, false).unwrap();
+ onion_messenger.handle_onion_message(&peer_node_id, &msg);
}
}
#[test]
fn test_no_onion_message_breakage() {
- let two_unblinded_hops_om = "020000000000000000000000000000000000000000000000000000000000000e01055600020000000000000000000000000000000000000000000000000000000000000e0135043304210202020202020202020202020202020202020202020202020202020202020202026d000000000000000000000000000000eb0000000000000000000000000000000000000000000000000000000000000036041096000000000000000000000000000000fd1092202a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004800000000000000000000000000000000000000000000000000000000000000";
+ let one_hop_om = "\
+ 020000000000000000000000000000000000000000000000000000000000000e01055600020000000000000\
+ 000000000000000000000000000000000000000000000000e01ae0276020000000000000000000000000000\
+ 000000000000000000000000000000000002020000000000000000000000000000000000000000000000000\
+ 000000000000e0101022a0000000000000000000000000000014551231950b75fc4402da1732fc9bebf0010\
+ 9500000000000000000000000000000004106d000000000000000000000000000000fd1092202a2a2a2a2a2\
+ a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a0000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000005600000000000000000000000000000000000000000000\
+ 000000000000000000";
+ let logger = TrackingLogger { lines: Mutex::new(HashMap::new()) };
+ super::do_test(&::hex::decode(one_hop_om).unwrap(), &logger);
+ {
+ let log_entries = logger.lines.lock().unwrap();
+ assert_eq!(log_entries.get(&("lightning::onion_message::messenger".to_string(),
+ "Received an onion message with path_id None and a reply_path".to_string())), Some(&1));
+ assert_eq!(log_entries.get(&("lightning::onion_message::messenger".to_string(),
+ "Sending onion message when responding to Custom onion message with path_id None".to_string())), Some(&1));
+ }
+
+ let two_unblinded_hops_om = "\
+ 020000000000000000000000000000000000000000000000000000000000000e01055600020000000000000\
+ 000000000000000000000000000000000000000000000000e01350433042102020202020202020202020202\
+ 02020202020202020202020202020202020202026d000000000000000000000000000000eb0000000000000\
+ 000000000000000000000000000000000000000000000000036041096000000000000000000000000000000\
+ fd1092202a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000004800000000000000000000000000000000000000000000\
+ 000000000000000000";
let logger = TrackingLogger { lines: Mutex::new(HashMap::new()) };
super::do_test(&::hex::decode(two_unblinded_hops_om).unwrap(), &logger);
{
	let log_entries = logger.lines.lock().unwrap();
assert_eq!(log_entries.get(&("lightning::onion_message::messenger".to_string(), "Forwarding an onion message to peer 020202020202020202020202020202020202020202020202020202020202020202".to_string())), Some(&1));
}
- let two_unblinded_two_blinded_om = "020000000000000000000000000000000000000000000000000000000000000e01055600020000000000000000000000000000000000000000000000000000000000000e0135043304210202020202020202020202020202020202020202020202020202020202020202026d0000000000000000000000000000009e0000000000000000000000000000000000000000000000000000000000000058045604210203030303030303030303030303030303030303030303030303030303030303020821020000000000000000000000000000000000000000000000000000000000000e0196000000000000000000000000000000e9000000000000000000000000000000000000000000000000000000000000003504330421020404040404040404040404040404040404040404040404040404040404040402ca00000000000000000000000000000042000000000000000000000000000000000000000000000000000000000000003604103f000000000000000000000000000000fd1092202a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004800000000000000000000000000000000000000000000000000000000000000";
+ let two_unblinded_two_blinded_om = "\
+ 020000000000000000000000000000000000000000000000000000000000000e01055600020000000000000\
+ 000000000000000000000000000000000000000000000000e01350433042102020202020202020202020202\
+ 02020202020202020202020202020202020202026d0000000000000000000000000000009e0000000000000\
+ 000000000000000000000000000000000000000000000000058045604210203030303030303030303030303\
+ 030303030303030303030303030303030303020821020000000000000000000000000000000000000000000\
+ 000000000000000000e0196000000000000000000000000000000e900000000000000000000000000000000\
+ 000000000000000000000000000000350433042102040404040404040404040404040404040404040404040\
+ 4040404040404040402ca000000000000000000000000000000420000000000000000000000000000000000\
+ 00000000000000000000000000003604103f000000000000000000000000000000fd1092202a2a2a2a2a2a2\
+ a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000004800000000000000000000000000000000000000000000\
+ 000000000000000000";
let logger = TrackingLogger { lines: Mutex::new(HashMap::new()) };
super::do_test(&::hex::decode(two_unblinded_two_blinded_om).unwrap(), &logger);
{
	let log_entries = logger.lines.lock().unwrap();
assert_eq!(log_entries.get(&("lightning::onion_message::messenger".to_string(), "Forwarding an onion message to peer 020202020202020202020202020202020202020202020202020202020202020202".to_string())), Some(&1));
}
- let three_blinded_om = "020000000000000000000000000000000000000000000000000000000000000e01055600020000000000000000000000000000000000000000000000000000000000000e0135043304210202020202020202020202020202020202020202020202020202020202020202026d000000000000000000000000000000b20000000000000000000000000000000000000000000000000000000000000035043304210203030303030303030303030303030303030303030303030303030303030303029600000000000000000000000000000033000000000000000000000000000000000000000000000000000000000000003604104e000000000000000000000000000000fd1092202a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004800000000000000000000000000000000000000000000000000000000000000";
+ let three_blinded_om = "\
+ 020000000000000000000000000000000000000000000000000000000000000e01055600020000000000000\
+ 000000000000000000000000000000000000000000000000e01350433042102020202020202020202020202\
+ 02020202020202020202020202020202020202026d000000000000000000000000000000b20000000000000\
+ 000000000000000000000000000000000000000000000000035043304210203030303030303030303030303\
+ 030303030303030303030303030303030303029600000000000000000000000000000033000000000000000\
+ 000000000000000000000000000000000000000000000003604104e000000000000000000000000000000fd\
+ 1092202a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a2a00000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\
+ 000000000000000000000000000000000000000004800000000000000000000000000000000000000000000\
+ 000000000000000000";
let logger = TrackingLogger { lines: Mutex::new(HashMap::new()) };
super::do_test(&::hex::decode(three_blinded_om).unwrap(), &logger);
{
// You may not use this file except in accordance with one or both of these
// licenses.
-use lightning::ln::peer_channel_encryptor::PeerChannelEncryptor;
+use lightning::ln::peer_channel_encryptor::{PeerChannelEncryptor, MessageBuf};
use lightning::util::test_utils::TestNodeSigner;
use bitcoin::secp256k1::{Secp256k1, PublicKey, SecretKey};
assert!(crypter.is_ready_for_encryption());
crypter
};
+ let mut buf = [0; 65536 + 16];
loop {
if get_slice!(1)[0] == 0 {
- crypter.encrypt_buffer(get_slice!(slice_to_be16(get_slice!(2))));
+ crypter.encrypt_buffer(MessageBuf::from_encoded(&get_slice!(slice_to_be16(get_slice!(2)))));
} else {
let len = match crypter.decrypt_length_header(get_slice!(16+2)) {
Ok(len) => len,
Err(_) => return,
};
- match crypter.decrypt_message(get_slice!(len as usize + 16)) {
+ buf[..len as usize + 16].copy_from_slice(&get_slice!(len as usize + 16));
+ match crypter.decrypt_message(&mut buf[..len as usize + 16]) {
Ok(_) => {},
Err(_) => return,
}
void msg_tx_init_rbf_run(const unsigned char* data, size_t data_len);
void msg_tx_ack_rbf_run(const unsigned char* data, size_t data_len);
void msg_tx_abort_run(const unsigned char* data, size_t data_len);
+void msg_stfu_run(const unsigned char* data, size_t data_len);
+void msg_splice_run(const unsigned char* data, size_t data_len);
+void msg_splice_ack_run(const unsigned char* data, size_t data_len);
+void msg_splice_locked_run(const unsigned char* data, size_t data_len);
macro_rules! begin_open_channel {
($node_a: expr, $node_b: expr, $channel_value: expr) => {{
- $node_a.node.create_channel($node_b.node.get_our_node_id(), $channel_value, 100, 42, None).unwrap();
+ $node_a.node.create_channel($node_b.node.get_our_node_id(), $channel_value, 100, 42, None, None).unwrap();
$node_b.node.handle_open_channel(&$node_a.node.get_our_node_id(), &get_event_msg!($node_a, MessageSendEvent::SendOpenChannel, $node_b.node.get_our_node_id()));
$node_a.node.handle_accept_channel(&$node_b.node.get_our_node_id(), &get_event_msg!($node_b, MessageSendEvent::SendAcceptChannel, $node_a.node.get_our_node_id()));
}}
//! Convenient utilities for paying Lightning invoices.
use crate::Bolt11Invoice;
-use crate::prelude::*;
use bitcoin_hashes::Hash;
-use lightning::chain;
-use lightning::chain::chaininterface::{BroadcasterInterface, FeeEstimator};
-use lightning::sign::{NodeSigner, SignerProvider, EntropySource};
use lightning::ln::PaymentHash;
-use lightning::ln::channelmanager::{AChannelManager, ChannelManager, PaymentId, Retry, RetryableSendFailure, RecipientOnionFields, ProbeSendFailure};
-use lightning::routing::router::{PaymentParameters, RouteParameters, Router};
-use lightning::util::logger::Logger;
+use lightning::ln::channelmanager::RecipientOnionFields;
+use lightning::routing::router::{PaymentParameters, RouteParameters};
-use core::fmt::Debug;
-use core::ops::Deref;
-use core::time::Duration;
-
-/// Pays the given [`Bolt11Invoice`], retrying if needed based on [`Retry`].
-///
-/// [`Bolt11Invoice::payment_hash`] is used as the [`PaymentId`], which ensures idempotency as long
-/// as the payment is still pending. If the payment succeeds, you must ensure that a second payment
-/// with the same [`PaymentHash`] is never sent.
-///
-/// If you wish to use a different payment idempotency token, see [`pay_invoice_with_id`].
-pub fn pay_invoice<C: Deref>(
- invoice: &Bolt11Invoice, retry_strategy: Retry, channelmanager: C
-) -> Result<PaymentId, PaymentError>
-where C::Target: AChannelManager,
-{
- let payment_id = PaymentId(invoice.payment_hash().into_inner());
- pay_invoice_with_id(invoice, payment_id, retry_strategy, channelmanager.get_cm())
- .map(|()| payment_id)
-}
-
-/// Pays the given [`Bolt11Invoice`] with a custom idempotency key, retrying if needed based on
-/// [`Retry`].
-///
-/// Note that idempotency is only guaranteed as long as the payment is still pending. Once the
-/// payment completes or fails, no idempotency guarantees are made.
+/// Builds the necessary parameters to pay or pre-flight probe the given zero-amount
+/// [`Bolt11Invoice`] using [`ChannelManager::send_payment`] or
+/// [`ChannelManager::send_preflight_probes`].
///
-/// You should ensure that the [`Bolt11Invoice::payment_hash`] is unique and the same
-/// [`PaymentHash`] has never been paid before.
+/// Prior to paying, you must ensure that the [`Bolt11Invoice::payment_hash`] is unique and the
+/// same [`PaymentHash`] has never been paid before.
///
-/// See [`pay_invoice`] for a variant which uses the [`PaymentHash`] for the idempotency token.
-pub fn pay_invoice_with_id<C: Deref>(
- invoice: &Bolt11Invoice, payment_id: PaymentId, retry_strategy: Retry, channelmanager: C
-) -> Result<(), PaymentError>
-where C::Target: AChannelManager,
-{
- let amt_msat = invoice.amount_milli_satoshis().ok_or(PaymentError::Invoice("amount missing"))?;
- pay_invoice_using_amount(invoice, amt_msat, payment_id, retry_strategy, channelmanager.get_cm())
-}
-
-/// Pays the given zero-value [`Bolt11Invoice`] using the given amount, retrying if needed based on
-/// [`Retry`].
-///
-/// [`Bolt11Invoice::payment_hash`] is used as the [`PaymentId`], which ensures idempotency as long
-/// as the payment is still pending. If the payment succeeds, you must ensure that a second payment
-/// with the same [`PaymentHash`] is never sent.
+/// Will always succeed unless the invoice has an amount specified, in which case
+/// [`payment_parameters_from_invoice`] should be used.
///
-/// If you wish to use a different payment idempotency token, see
-/// [`pay_zero_value_invoice_with_id`].
-pub fn pay_zero_value_invoice<C: Deref>(
- invoice: &Bolt11Invoice, amount_msats: u64, retry_strategy: Retry, channelmanager: C
-) -> Result<PaymentId, PaymentError>
-where C::Target: AChannelManager,
-{
- let payment_id = PaymentId(invoice.payment_hash().into_inner());
- pay_zero_value_invoice_with_id(invoice, amount_msats, payment_id, retry_strategy,
- channelmanager)
- .map(|()| payment_id)
+/// [`ChannelManager::send_payment`]: lightning::ln::channelmanager::ChannelManager::send_payment
+/// [`ChannelManager::send_preflight_probes`]: lightning::ln::channelmanager::ChannelManager::send_preflight_probes
+pub fn payment_parameters_from_zero_amount_invoice(invoice: &Bolt11Invoice, amount_msat: u64)
+-> Result<(PaymentHash, RecipientOnionFields, RouteParameters), ()> {
+ if invoice.amount_milli_satoshis().is_some() {
+ Err(())
+ } else {
+ Ok(params_from_invoice(invoice, amount_msat))
+ }
}
-/// Pays the given zero-value [`Bolt11Invoice`] using the given amount and custom idempotency key,
-/// retrying if needed based on [`Retry`].
+/// Builds the necessary parameters to pay or pre-flight probe the given [`Bolt11Invoice`] using
+/// [`ChannelManager::send_payment`] or [`ChannelManager::send_preflight_probes`].
///
-/// Note that idempotency is only guaranteed as long as the payment is still pending. Once the
-/// payment completes or fails, no idempotency guarantees are made.
+/// Prior to paying, you must ensure that the [`Bolt11Invoice::payment_hash`] is unique and the
+/// same [`PaymentHash`] has never been paid before.
///
-/// You should ensure that the [`Bolt11Invoice::payment_hash`] is unique and the same
-/// [`PaymentHash`] has never been paid before.
+/// Will always succeed unless the invoice has no amount specified, in which case
+/// [`payment_parameters_from_zero_amount_invoice`] should be used.
///
-/// See [`pay_zero_value_invoice`] for a variant which uses the [`PaymentHash`] for the
-/// idempotency token.
-pub fn pay_zero_value_invoice_with_id<C: Deref>(
- invoice: &Bolt11Invoice, amount_msats: u64, payment_id: PaymentId, retry_strategy: Retry,
- channelmanager: C
-) -> Result<(), PaymentError>
-where C::Target: AChannelManager,
-{
- if invoice.amount_milli_satoshis().is_some() {
- Err(PaymentError::Invoice("amount unexpected"))
+/// [`ChannelManager::send_payment`]: lightning::ln::channelmanager::ChannelManager::send_payment
+/// [`ChannelManager::send_preflight_probes`]: lightning::ln::channelmanager::ChannelManager::send_preflight_probes
+pub fn payment_parameters_from_invoice(invoice: &Bolt11Invoice)
+-> Result<(PaymentHash, RecipientOnionFields, RouteParameters), ()> {
+ if let Some(amount_msat) = invoice.amount_milli_satoshis() {
+ Ok(params_from_invoice(invoice, amount_msat))
} else {
- pay_invoice_using_amount(invoice, amount_msats, payment_id, retry_strategy,
- channelmanager.get_cm())
+ Err(())
}
}
-fn pay_invoice_using_amount<P: Deref>(
- invoice: &Bolt11Invoice, amount_msats: u64, payment_id: PaymentId, retry_strategy: Retry,
- payer: P
-) -> Result<(), PaymentError> where P::Target: Payer {
+fn params_from_invoice(invoice: &Bolt11Invoice, amount_msat: u64)
+-> (PaymentHash, RecipientOnionFields, RouteParameters) {
let payment_hash = PaymentHash((*invoice.payment_hash()).into_inner());
+
let mut recipient_onion = RecipientOnionFields::secret_only(*invoice.payment_secret());
recipient_onion.payment_metadata = invoice.payment_metadata().map(|v| v.clone());
- let mut payment_params = PaymentParameters::from_node_id(invoice.recover_payee_pub_key(),
- invoice.min_final_cltv_expiry_delta() as u32)
- .with_expiry_time(expiry_time_from_unix_epoch(invoice).as_secs())
- .with_route_hints(invoice.route_hints()).unwrap();
- if let Some(features) = invoice.features() {
- payment_params = payment_params.with_bolt11_features(features.clone()).unwrap();
- }
- let route_params = RouteParameters::from_payment_params_and_value(payment_params, amount_msats);
-
- payer.send_payment(payment_hash, recipient_onion, payment_id, route_params, retry_strategy)
-}
-
-/// Sends payment probes over all paths of a route that would be used to pay the given invoice.
-///
-/// See [`ChannelManager::send_preflight_probes`] for more information.
-pub fn preflight_probe_invoice<C: Deref>(
- invoice: &Bolt11Invoice, channelmanager: C, liquidity_limit_multiplier: Option<u64>,
-) -> Result<Vec<(PaymentHash, PaymentId)>, ProbingError>
-where C::Target: AChannelManager,
-{
- let amount_msat = if let Some(invoice_amount_msat) = invoice.amount_milli_satoshis() {
- invoice_amount_msat
- } else {
- return Err(ProbingError::Invoice("Failed to send probe as no amount was given in the invoice."));
- };
let mut payment_params = PaymentParameters::from_node_id(
- invoice.recover_payee_pub_key(),
- invoice.min_final_cltv_expiry_delta() as u32,
- )
- .with_expiry_time(expiry_time_from_unix_epoch(invoice).as_secs())
- .with_route_hints(invoice.route_hints())
- .unwrap();
-
- if let Some(features) = invoice.features() {
- payment_params = payment_params.with_bolt11_features(features.clone()).unwrap();
- }
- let route_params = RouteParameters::from_payment_params_and_value(payment_params, amount_msat);
-
- channelmanager.get_cm().send_preflight_probes(route_params, liquidity_limit_multiplier)
- .map_err(ProbingError::Sending)
-}
-
-/// Sends payment probes over all paths of a route that would be used to pay the given zero-value
-/// invoice using the given amount.
-///
-/// See [`ChannelManager::send_preflight_probes`] for more information.
-pub fn preflight_probe_zero_value_invoice<C: Deref>(
- invoice: &Bolt11Invoice, amount_msat: u64, channelmanager: C,
- liquidity_limit_multiplier: Option<u64>,
-) -> Result<Vec<(PaymentHash, PaymentId)>, ProbingError>
-where C::Target: AChannelManager,
-{
- if invoice.amount_milli_satoshis().is_some() {
- return Err(ProbingError::Invoice("amount unexpected"));
+ invoice.recover_payee_pub_key(),
+ invoice.min_final_cltv_expiry_delta() as u32
+ )
+ .with_route_hints(invoice.route_hints()).unwrap();
+ if let Some(expiry) = invoice.expires_at() {
+ payment_params = payment_params.with_expiry_time(expiry.as_secs());
}
-
- let mut payment_params = PaymentParameters::from_node_id(
- invoice.recover_payee_pub_key(),
- invoice.min_final_cltv_expiry_delta() as u32,
- )
- .with_expiry_time(expiry_time_from_unix_epoch(invoice).as_secs())
- .with_route_hints(invoice.route_hints())
- .unwrap();
-
if let Some(features) = invoice.features() {
payment_params = payment_params.with_bolt11_features(features.clone()).unwrap();
}
- let route_params = RouteParameters::from_payment_params_and_value(payment_params, amount_msat);
-
- channelmanager.get_cm().send_preflight_probes(route_params, liquidity_limit_multiplier)
- .map_err(ProbingError::Sending)
-}
-fn expiry_time_from_unix_epoch(invoice: &Bolt11Invoice) -> Duration {
- invoice.signed_invoice.raw_invoice.data.timestamp.0 + invoice.expiry_time()
-}
-
-/// An error that may occur when making a payment.
-#[derive(Clone, Debug, PartialEq, Eq)]
-pub enum PaymentError {
- /// An error resulting from the provided [`Bolt11Invoice`] or payment hash.
- Invoice(&'static str),
- /// An error occurring when sending a payment.
- Sending(RetryableSendFailure),
-}
-
-/// An error that may occur when sending a payment probe.
-#[derive(Clone, Debug, PartialEq, Eq)]
-pub enum ProbingError {
- /// An error resulting from the provided [`Bolt11Invoice`].
- Invoice(&'static str),
- /// An error occurring when sending a payment probe.
- Sending(ProbeSendFailure),
-}
-
-/// A trait defining behavior of a [`Bolt11Invoice`] payer.
-///
-/// Useful for unit testing internal methods.
-trait Payer {
- /// Sends a payment over the Lightning Network using the given [`Route`].
- ///
- /// [`Route`]: lightning::routing::router::Route
- fn send_payment(
- &self, payment_hash: PaymentHash, recipient_onion: RecipientOnionFields,
- payment_id: PaymentId, route_params: RouteParameters, retry_strategy: Retry
- ) -> Result<(), PaymentError>;
-}
-
-impl<M: Deref, T: Deref, ES: Deref, NS: Deref, SP: Deref, F: Deref, R: Deref, L: Deref> Payer for ChannelManager<M, T, ES, NS, SP, F, R, L>
-where
- M::Target: chain::Watch<<SP::Target as SignerProvider>::Signer>,
- T::Target: BroadcasterInterface,
- ES::Target: EntropySource,
- NS::Target: NodeSigner,
- SP::Target: SignerProvider,
- F::Target: FeeEstimator,
- R::Target: Router,
- L::Target: Logger,
-{
- fn send_payment(
- &self, payment_hash: PaymentHash, recipient_onion: RecipientOnionFields,
- payment_id: PaymentId, route_params: RouteParameters, retry_strategy: Retry
- ) -> Result<(), PaymentError> {
- self.send_payment(payment_hash, recipient_onion, payment_id, route_params, retry_strategy)
- .map_err(PaymentError::Sending)
- }
+ let route_params = RouteParameters::from_payment_params_and_value(payment_params, amount_msat);
+ (payment_hash, recipient_onion, route_params)
}
#[cfg(test)]
use crate::{InvoiceBuilder, Currency};
use bitcoin_hashes::sha256::Hash as Sha256;
use lightning::events::Event;
+ use lightning::ln::channelmanager::{Retry, PaymentId};
use lightning::ln::msgs::ChannelMessageHandler;
- use lightning::ln::{PaymentPreimage, PaymentSecret};
+ use lightning::ln::PaymentSecret;
use lightning::ln::functional_test_utils::*;
- use secp256k1::{SecretKey, Secp256k1};
- use std::collections::VecDeque;
+ use lightning::routing::router::Payee;
+ use secp256k1::{SecretKey, PublicKey, Secp256k1};
use std::time::{SystemTime, Duration};
- struct TestPayer {
- expectations: core::cell::RefCell<VecDeque<Amount>>,
- }
-
- impl TestPayer {
- fn new() -> Self {
- Self {
- expectations: core::cell::RefCell::new(VecDeque::new()),
- }
- }
-
- fn expect_send(self, value_msat: Amount) -> Self {
- self.expectations.borrow_mut().push_back(value_msat);
- self
- }
-
- fn check_value_msats(&self, actual_value_msats: Amount) {
- let expected_value_msats = self.expectations.borrow_mut().pop_front();
- if let Some(expected_value_msats) = expected_value_msats {
- assert_eq!(actual_value_msats, expected_value_msats);
- } else {
- panic!("Unexpected amount: {:?}", actual_value_msats);
- }
- }
- }
-
- #[derive(Clone, Debug, PartialEq, Eq)]
- struct Amount(u64); // msat
-
- impl Payer for TestPayer {
- fn send_payment(
- &self, _payment_hash: PaymentHash, _recipient_onion: RecipientOnionFields,
- _payment_id: PaymentId, route_params: RouteParameters, _retry_strategy: Retry
- ) -> Result<(), PaymentError> {
- self.check_value_msats(Amount(route_params.final_value_msat));
- Ok(())
- }
- }
-
- impl Drop for TestPayer {
- fn drop(&mut self) {
- if std::thread::panicking() {
- return;
- }
-
- if !self.expectations.borrow().is_empty() {
- panic!("Unsatisfied payment expectations: {:?}", self.expectations.borrow());
- }
- }
- }
-
fn duration_since_epoch() -> Duration {
#[cfg(feature = "std")]
let duration_since_epoch =
duration_since_epoch
}
- fn invoice(payment_preimage: PaymentPreimage) -> Bolt11Invoice {
- let payment_hash = Sha256::hash(&payment_preimage.0);
+ #[test]
+ fn invoice_test() {
+ let payment_hash = Sha256::hash(&[0; 32]);
let private_key = SecretKey::from_slice(&[42; 32]).unwrap();
+ let secp_ctx = Secp256k1::new();
+ let public_key = PublicKey::from_secret_key(&secp_ctx, &private_key);
- InvoiceBuilder::new(Currency::Bitcoin)
+ let invoice = InvoiceBuilder::new(Currency::Bitcoin)
.description("test".into())
.payment_hash(payment_hash)
.payment_secret(PaymentSecret([0; 32]))
.min_final_cltv_expiry_delta(144)
.amount_milli_satoshis(128)
.build_signed(|hash| {
- Secp256k1::new().sign_ecdsa_recoverable(hash, &private_key)
+ secp_ctx.sign_ecdsa_recoverable(hash, &private_key)
})
- .unwrap()
+ .unwrap();
+
+ assert!(payment_parameters_from_zero_amount_invoice(&invoice, 42).is_err());
+
+ let (hash, onion, params) = payment_parameters_from_invoice(&invoice).unwrap();
+ assert_eq!(&hash.0[..], &payment_hash[..]);
+ assert_eq!(onion.payment_secret, Some(PaymentSecret([0; 32])));
+ assert_eq!(params.final_value_msat, 128);
+ match params.payment_params.payee {
+ Payee::Clear { node_id, .. } => {
+ assert_eq!(node_id, public_key);
+ },
+ _ => panic!(),
+ }
}
- fn zero_value_invoice(payment_preimage: PaymentPreimage) -> Bolt11Invoice {
- let payment_hash = Sha256::hash(&payment_preimage.0);
+ #[test]
+ fn zero_value_invoice_test() {
+ let payment_hash = Sha256::hash(&[0; 32]);
let private_key = SecretKey::from_slice(&[42; 32]).unwrap();
+ let secp_ctx = Secp256k1::new();
+ let public_key = PublicKey::from_secret_key(&secp_ctx, &private_key);
- InvoiceBuilder::new(Currency::Bitcoin)
+ let invoice = InvoiceBuilder::new(Currency::Bitcoin)
.description("test".into())
.payment_hash(payment_hash)
.payment_secret(PaymentSecret([0; 32]))
.duration_since_epoch(duration_since_epoch())
.min_final_cltv_expiry_delta(144)
.build_signed(|hash| {
- Secp256k1::new().sign_ecdsa_recoverable(hash, &private_key)
+ secp_ctx.sign_ecdsa_recoverable(hash, &private_key)
})
- .unwrap()
- }
+ .unwrap();
- #[test]
- fn pays_invoice() {
- let payment_id = PaymentId([42; 32]);
- let payment_preimage = PaymentPreimage([1; 32]);
- let invoice = invoice(payment_preimage);
- let final_value_msat = invoice.amount_milli_satoshis().unwrap();
-
- let payer = TestPayer::new().expect_send(Amount(final_value_msat));
- pay_invoice_using_amount(&invoice, final_value_msat, payment_id, Retry::Attempts(0), &payer).unwrap();
- }
+ assert!(payment_parameters_from_invoice(&invoice).is_err());
- #[test]
- fn pays_zero_value_invoice() {
- let payment_id = PaymentId([42; 32]);
- let payment_preimage = PaymentPreimage([1; 32]);
- let invoice = zero_value_invoice(payment_preimage);
- let amt_msat = 10_000;
-
- let payer = TestPayer::new().expect_send(Amount(amt_msat));
- pay_invoice_using_amount(&invoice, amt_msat, payment_id, Retry::Attempts(0), &payer).unwrap();
- }
-
- #[test]
- fn fails_paying_zero_value_invoice_with_amount() {
- let chanmon_cfgs = create_chanmon_cfgs(1);
- let node_cfgs = create_node_cfgs(1, &chanmon_cfgs);
- let node_chanmgrs = create_node_chanmgrs(1, &node_cfgs, &[None]);
- let nodes = create_network(1, &node_cfgs, &node_chanmgrs);
-
- let payment_preimage = PaymentPreimage([1; 32]);
- let invoice = invoice(payment_preimage);
- let amt_msat = 10_000;
-
- match pay_zero_value_invoice(&invoice, amt_msat, Retry::Attempts(0), nodes[0].node) {
- Err(PaymentError::Invoice("amount unexpected")) => {},
- _ => panic!()
+ let (hash, onion, params) = payment_parameters_from_zero_amount_invoice(&invoice, 42).unwrap();
+ assert_eq!(&hash.0[..], &payment_hash[..]);
+ assert_eq!(onion.payment_secret, Some(PaymentSecret([0; 32])));
+ assert_eq!(params.final_value_msat, 42);
+ match params.payment_params.payee {
+ Payee::Clear { node_id, .. } => {
+ assert_eq!(node_id, public_key);
+ },
+ _ => panic!(),
}
}
})
.unwrap();
- pay_invoice(&invoice, Retry::Attempts(0), nodes[0].node).unwrap();
+ let (hash, onion, params) = payment_parameters_from_invoice(&invoice).unwrap();
+ nodes[0].node.send_payment(hash, onion, PaymentId(hash.0), params, Retry::Attempts(0)).unwrap();
check_added_monitors(&nodes[0], 1);
let send_event = SendEvent::from_node(&nodes[0]);
nodes[1].node.handle_update_add_htlc(&nodes[0].node.get_our_node_id(), &send_event.msgs[0]);
// is never handled, the `channel.counterparty.forwarding_info` is never assigned.
let mut private_chan_cfg = UserConfig::default();
private_chan_cfg.channel_handshake_config.announced_channel = false;
- let temporary_channel_id = nodes[2].node.create_channel(nodes[0].node.get_our_node_id(), 1_000_000, 500_000_000, 42, Some(private_chan_cfg)).unwrap();
+ let temporary_channel_id = nodes[2].node.create_channel(nodes[0].node.get_our_node_id(), 1_000_000, 500_000_000, 42, None, Some(private_chan_cfg)).unwrap();
let open_channel = get_event_msg!(nodes[2], MessageSendEvent::SendOpenChannel, nodes[0].node.get_our_node_id());
nodes[0].node.handle_open_channel(&nodes[2].node.get_our_node_id(), &open_channel);
let accept_channel = get_event_msg!(nodes[0], MessageSendEvent::SendAcceptChannel, nodes[2].node.get_our_node_id());
// is never handled, the `channel.counterparty.forwarding_info` is never assigned.
let mut private_chan_cfg = UserConfig::default();
private_chan_cfg.channel_handshake_config.announced_channel = false;
- let temporary_channel_id = nodes[1].node.create_channel(nodes[3].node.get_our_node_id(), 1_000_000, 500_000_000, 42, Some(private_chan_cfg)).unwrap();
+ let temporary_channel_id = nodes[1].node.create_channel(nodes[3].node.get_our_node_id(), 1_000_000, 500_000_000, 42, None, Some(private_chan_cfg)).unwrap();
let open_channel = get_event_msg!(nodes[1], MessageSendEvent::SendOpenChannel, nodes[3].node.get_our_node_id());
nodes[3].node.handle_open_channel(&nodes[1].node.get_our_node_id(), &open_channel);
let accept_channel = get_event_msg!(nodes[3], MessageSendEvent::SendAcceptChannel, nodes[1].node.get_our_node_id());
task::RawWakerVTable::new(clone_socket_waker, wake_socket_waker, wake_socket_waker_by_ref, drop_socket_waker);
fn clone_socket_waker(orig_ptr: *const ()) -> task::RawWaker {
- write_avail_to_waker(orig_ptr as *const mpsc::Sender<()>)
+ let new_waker = unsafe { Arc::from_raw(orig_ptr as *const mpsc::Sender<()>) };
+ let res = write_avail_to_waker(&new_waker);
+ // Avoid decrementing the refcount when `new_waker` is dropped by converting it back via `into_raw`.
+ let _ = Arc::into_raw(new_waker);
+ res
}
// When waking, an error should be fine. Most likely we got two send_datas in a row, both of which
// failed to fully write, but we only need to call write_buffer_space_avail() once. Otherwise, the
}
fn wake_socket_waker_by_ref(orig_ptr: *const ()) {
let sender_ptr = orig_ptr as *const mpsc::Sender<()>;
- let sender = unsafe { (*sender_ptr).clone() };
+ let sender = unsafe { &*sender_ptr };
let _ = sender.try_send(());
}
fn drop_socket_waker(orig_ptr: *const ()) {
- let _orig_box = unsafe { Box::from_raw(orig_ptr as *mut mpsc::Sender<()>) };
- // _orig_box is now dropped
+ let _orig_arc = unsafe { Arc::from_raw(orig_ptr as *mut mpsc::Sender<()>) };
+ // _orig_arc is now dropped
}
-fn write_avail_to_waker(sender: *const mpsc::Sender<()>) -> task::RawWaker {
- let new_box = Box::leak(Box::new(unsafe { (*sender).clone() }));
- let new_ptr = new_box as *const mpsc::Sender<()>;
+fn write_avail_to_waker(sender: &Arc<mpsc::Sender<()>>) -> task::RawWaker {
+ let new_ptr = Arc::into_raw(Arc::clone(&sender));
task::RawWaker::new(new_ptr as *const (), &SOCK_WAKER_VTABLE)
}
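The waker plumbing above keeps one `Arc<mpsc::Sender<()>>` alive per waker by converting the `Arc` to and from a raw pointer at each vtable call, so cloning a waker only bumps a refcount instead of allocating. A minimal standalone sketch of the same pattern — using an `AtomicUsize` wake counter as a stand-in for the `mpsc::Sender`, purely for illustration:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::task::{RawWaker, RawWakerVTable, Waker};

static VTABLE: RawWakerVTable =
	RawWakerVTable::new(clone_waker, wake_waker, wake_waker_by_ref, drop_waker);

unsafe fn clone_waker(ptr: *const ()) -> RawWaker {
	let arc = Arc::from_raw(ptr as *const AtomicUsize);
	let new_ptr = Arc::into_raw(Arc::clone(&arc));
	// Don't decrement the original reference's count when `arc` drops here.
	let _ = Arc::into_raw(arc);
	RawWaker::new(new_ptr as *const (), &VTABLE)
}

unsafe fn wake_waker(ptr: *const ()) {
	wake_waker_by_ref(ptr);
	drop_waker(ptr); // `wake` consumes the waker, so release its reference.
}

unsafe fn wake_waker_by_ref(ptr: *const ()) {
	(*(ptr as *const AtomicUsize)).fetch_add(1, Ordering::SeqCst);
}

unsafe fn drop_waker(ptr: *const ()) {
	drop(Arc::from_raw(ptr as *const AtomicUsize));
}

fn demo() -> (usize, usize) {
	let counter = Arc::new(AtomicUsize::new(0));
	let raw = RawWaker::new(Arc::into_raw(Arc::clone(&counter)) as *const (), &VTABLE);
	let waker = unsafe { Waker::from_raw(raw) };
	let waker2 = waker.clone(); // bumps the Arc refcount, no allocation
	waker2.wake();              // consumes waker2: counter += 1, refcount -= 1
	waker.wake_by_ref();        // counter += 1, waker still usable
	drop(waker);
	(counter.load(Ordering::SeqCst), Arc::strong_count(&counter))
}

fn main() {
	let (wakes, refs) = demo();
	assert_eq!(wakes, 2);
	assert_eq!(refs, 1); // every waker reference has been released
	println!("wakes: {}, refs: {}", wakes, refs);
}
```

The important invariant (which the fix in this hunk restores) is that each `Arc::from_raw` in a vtable function is balanced by an `Arc::into_raw` or an intentional drop, so the refcount exactly tracks the number of live wakers.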
/// type in the template of PeerHandler.
pub struct SocketDescriptor {
conn: Arc<Mutex<Connection>>,
+ // We store a copy of the mpsc::Sender which wakes the read task in an Arc here. While we
+ // could simply clone the sender and store a copy in each waker, that would require an
+ // allocation for each waker. Instead, we `Arc::clone` it, creating a new reference, and
+ // store the resulting pointer in the waker.
+ write_avail_sender: Arc<mpsc::Sender<()>>,
id: u64,
}
impl SocketDescriptor {
fn new(conn: Arc<Mutex<Connection>>) -> Self {
- let id = conn.lock().unwrap().id;
- Self { conn, id }
+ let (id, write_avail_sender) = {
+ let us = conn.lock().unwrap();
+ (us.id, Arc::new(us.write_avail.clone()))
+ };
+ Self { conn, id, write_avail_sender }
}
}
impl peer_handler::SocketDescriptor for SocketDescriptor {
let _ = us.read_waker.try_send(());
}
if data.is_empty() { return 0; }
- let waker = unsafe { task::Waker::from_raw(write_avail_to_waker(&us.write_avail)) };
+ let waker = unsafe { task::Waker::from_raw(write_avail_to_waker(&self.write_avail_sender)) };
let mut ctx = task::Context::from_waker(&waker);
let mut written_len = 0;
loop {
Self {
conn: Arc::clone(&self.conn),
id: self.id,
+ write_avail_sender: Arc::clone(&self.write_avail_sender),
}
}
}
fn handle_channel_update(&self, _their_node_id: &PublicKey, _msg: &ChannelUpdate) {}
fn handle_open_channel_v2(&self, _their_node_id: &PublicKey, _msg: &OpenChannelV2) {}
fn handle_accept_channel_v2(&self, _their_node_id: &PublicKey, _msg: &AcceptChannelV2) {}
+ fn handle_stfu(&self, _their_node_id: &PublicKey, _msg: &Stfu) {}
+ fn handle_splice(&self, _their_node_id: &PublicKey, _msg: &Splice) {}
+ fn handle_splice_ack(&self, _their_node_id: &PublicKey, _msg: &SpliceAck) {}
+ fn handle_splice_locked(&self, _their_node_id: &PublicKey, _msg: &SpliceLocked) {}
fn handle_tx_add_input(&self, _their_node_id: &PublicKey, _msg: &TxAddInput) {}
fn handle_tx_add_output(&self, _their_node_id: &PublicKey, _msg: &TxAddOutput) {}
fn handle_tx_remove_input(&self, _their_node_id: &PublicKey, _msg: &TxRemoveInput) {}
criterion = { version = "0.4", optional = true, default-features = false }
[target.'cfg(taproot)'.dependencies]
-musig2 = { git = "https://github.com/arik-so/rust-musig2", rev = "27797d7" }
+musig2 = { git = "https://github.com/arik-so/rust-musig2", rev = "dc05904" }
use crate::ln::onion_utils;
use crate::onion_message::Destination;
use crate::util::chacha20poly1305rfc::ChaChaPolyWriteAdapter;
-use crate::util::ser::{Readable, VecWriter, Writeable};
+use crate::util::ser::{Readable, Writeable};
use crate::io;
use crate::prelude::*;
/// Encrypt TLV payload to be used as a [`crate::blinded_path::BlindedHop::encrypted_payload`].
fn encrypt_payload<P: Writeable>(payload: P, encrypted_tlvs_rho: [u8; 32]) -> Vec<u8> {
- let mut writer = VecWriter(Vec::new());
let write_adapter = ChaChaPolyWriteAdapter::new(encrypted_tlvs_rho, &payload);
- write_adapter.write(&mut writer).expect("In-memory writes cannot fail");
- writer.0
+ write_adapter.encode()
}
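The refactor above collapses the manual `VecWriter` round-trip into a single `encode()` call. A minimal sketch of that pattern — the `Writeable` trait below is a simplified stand-in for illustration, not LDK's actual `util::ser` API:

```rust
use std::io::{self, Write};

// Simplified stand-in for a serialization trait: implementors define
// `write`, and the provided `encode` collects the bytes into a Vec.
trait Writeable {
	fn write<W: Write>(&self, writer: &mut W) -> io::Result<()>;

	// In-memory writes to a Vec cannot fail, so callers no longer need
	// to thread an intermediate writer through and unwrap it themselves.
	fn encode(&self) -> Vec<u8> {
		let mut buf = Vec::new();
		self.write(&mut buf).expect("in-memory writes cannot fail");
		buf
	}
}

// A toy payload: a big-endian u16 length prefix followed by the bytes.
struct Payload<'a>(&'a [u8]);

impl<'a> Writeable for Payload<'a> {
	fn write<W: Write>(&self, writer: &mut W) -> io::Result<()> {
		writer.write_all(&(self.0.len() as u16).to_be_bytes())?;
		writer.write_all(self.0)
	}
}

fn main() {
	let encoded = Payload(b"abc").encode();
	assert_eq!(encoded, vec![0, 3, b'a', b'b', b'c']);
	println!("{:?}", encoded);
}
```

With such a provided method, call sites like `encrypt_payload` shrink to one line and the `VecWriter` import can be dropped entirely.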
/// Blinded path encrypted payloads may be padded to ensure they are equal length.
/// A monitor event containing an HTLCUpdate.
HTLCEvent(HTLCUpdate),
- /// A monitor event that the Channel's commitment transaction was confirmed.
+ /// Indicates we broadcast the channel's latest commitment transaction and thus closed the
+ /// channel.
HolderForceClosed(OutPoint),
/// Indicates a [`ChannelMonitor`] update has completed. See
pub cltv_expiry: u32,
/// The amount (in msats) of this part of an MPP.
pub value_msat: u64,
+ /// The extra fee our counterparty skimmed off the top of this HTLC, if any.
+ ///
+ /// This value will always be 0 for [`ClaimedHTLC`]s serialized with LDK versions prior to
+ /// 0.0.119.
+ pub counterparty_skimmed_fee_msat: u64,
}
impl_writeable_tlv_based!(ClaimedHTLC, {
(0, channel_id, required),
+ (1, counterparty_skimmed_fee_msat, (default_value, 0u64)),
(2, user_channel_id, required),
(4, cltv_expiry, required),
(6, value_msat, required),
/// The message which should be sent.
msg: msgs::FundingSigned,
},
+ /// Used to indicate that a stfu message should be sent to the peer with the given node id.
+ SendStfu {
+ /// The node_id of the node which should receive this message
+ node_id: PublicKey,
+ /// The message which should be sent.
+ msg: msgs::Stfu,
+ },
+ /// Used to indicate that a splice message should be sent to the peer with the given node id.
+ SendSplice {
+ /// The node_id of the node which should receive this message
+ node_id: PublicKey,
+ /// The message which should be sent.
+ msg: msgs::Splice,
+ },
+ /// Used to indicate that a splice_ack message should be sent to the peer with the given node id.
+ SendSpliceAck {
+ /// The node_id of the node which should receive this message
+ node_id: PublicKey,
+ /// The message which should be sent.
+ msg: msgs::SpliceAck,
+ },
+ /// Used to indicate that a splice_locked message should be sent to the peer with the given node id.
+ SendSpliceLocked {
+ /// The node_id of the node which should receive this message
+ node_id: PublicKey,
+ /// The message which should be sent.
+ msg: msgs::SpliceLocked,
+ },
/// Used to indicate that a tx_add_input message should be sent to the peer with the given node_id.
SendTxAddInput {
/// The node_id of the node which should receive this message
--- /dev/null
+// This file is Copyright its original authors, visible in version control
+// history.
+//
+// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
+// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
+// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
+// You may not use this file except in accordance with one or both of these
+// licenses.
+
+//! Tests for asynchronous signing. These tests verify that the channel state machine behaves
+//! properly with a signer implementation that asynchronously derives signatures.
+
+use crate::events::{Event, MessageSendEvent, MessageSendEventsProvider};
+use crate::ln::functional_test_utils::*;
+use crate::ln::msgs::ChannelMessageHandler;
+use crate::ln::channelmanager::{PaymentId, RecipientOnionFields};
+
+#[test]
+fn test_async_commitment_signature_for_funding_created() {
+ // Simulate acquiring the signature for `funding_created` asynchronously.
+ let chanmon_cfgs = create_chanmon_cfgs(2);
+ let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
+ let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
+ let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, None).unwrap();
+
+ // nodes[0] --- open_channel --> nodes[1]
+ let mut open_chan_msg = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
+ nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_chan_msg);
+
+ // nodes[0] <-- accept_channel --- nodes[1]
+ nodes[0].node.handle_accept_channel(&nodes[1].node.get_our_node_id(), &get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id()));
+
+ // nodes[0] --- funding_created --> nodes[1]
+ //
+ // But! Let's make node[0]'s signer be unavailable: we should *not* send a funding_created
+ // message...
+ let (temporary_channel_id, tx, _) = create_funding_transaction(&nodes[0], &nodes[1].node.get_our_node_id(), 100000, 42);
+ nodes[0].set_channel_signer_available(&nodes[1].node.get_our_node_id(), &temporary_channel_id, false);
+ nodes[0].node.funding_transaction_generated(&temporary_channel_id, &nodes[1].node.get_our_node_id(), tx.clone()).unwrap();
+ check_added_monitors(&nodes[0], 0);
+
+ assert!(nodes[0].node.get_and_clear_pending_msg_events().is_empty());
+
+ // Now re-enable the signer and simulate a retry. The temporary_channel_id won't work anymore so
+ // we have to dig out the real channel ID.
+ let chan_id = {
+ let channels = nodes[0].node.list_channels();
+ assert_eq!(channels.len(), 1, "expected one channel, not {}", channels.len());
+ channels[0].channel_id
+ };
+
+ nodes[0].set_channel_signer_available(&nodes[1].node.get_our_node_id(), &chan_id, true);
+ nodes[0].node.signer_unblocked(Some((nodes[1].node.get_our_node_id(), chan_id)));
+
+ let mut funding_created_msg = get_event_msg!(nodes[0], MessageSendEvent::SendFundingCreated, nodes[1].node.get_our_node_id());
+ nodes[1].node.handle_funding_created(&nodes[0].node.get_our_node_id(), &funding_created_msg);
+ check_added_monitors(&nodes[1], 1);
+ expect_channel_pending_event(&nodes[1], &nodes[0].node.get_our_node_id());
+
+ // nodes[0] <-- funding_signed --- nodes[1]
+ let funding_signed_msg = get_event_msg!(nodes[1], MessageSendEvent::SendFundingSigned, nodes[0].node.get_our_node_id());
+ nodes[0].node.handle_funding_signed(&nodes[1].node.get_our_node_id(), &funding_signed_msg);
+ check_added_monitors(&nodes[0], 1);
+ expect_channel_pending_event(&nodes[0], &nodes[1].node.get_our_node_id());
+}
+
+#[test]
+fn test_async_commitment_signature_for_funding_signed() {
+ // Simulate acquiring the signature for `funding_signed` asynchronously.
+ let chanmon_cfgs = create_chanmon_cfgs(2);
+ let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
+ let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
+ let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, None).unwrap();
+
+ // nodes[0] --- open_channel --> nodes[1]
+ let mut open_chan_msg = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
+ nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_chan_msg);
+
+ // nodes[0] <-- accept_channel --- nodes[1]
+ nodes[0].node.handle_accept_channel(&nodes[1].node.get_our_node_id(), &get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id()));
+
+ // nodes[0] --- funding_created --> nodes[1]
+ let (temporary_channel_id, tx, _) = create_funding_transaction(&nodes[0], &nodes[1].node.get_our_node_id(), 100000, 42);
+ nodes[0].node.funding_transaction_generated(&temporary_channel_id, &nodes[1].node.get_our_node_id(), tx.clone()).unwrap();
+ check_added_monitors(&nodes[0], 0);
+
+ let mut funding_created_msg = get_event_msg!(nodes[0], MessageSendEvent::SendFundingCreated, nodes[1].node.get_our_node_id());
+
+ // Now let's make node[1]'s signer be unavailable while handling the `funding_created`. It should
+ // *not* send a `funding_signed`...
+ nodes[1].set_channel_signer_available(&nodes[0].node.get_our_node_id(), &temporary_channel_id, false);
+ nodes[1].node.handle_funding_created(&nodes[0].node.get_our_node_id(), &funding_created_msg);
+ check_added_monitors(&nodes[1], 1);
+
+ assert!(nodes[1].node.get_and_clear_pending_msg_events().is_empty());
+
+ // Now re-enable the signer and simulate a retry. The temporary_channel_id won't work anymore so
+ // we have to dig out the real channel ID.
+ let chan_id = {
+ let channels = nodes[0].node.list_channels();
+ assert_eq!(channels.len(), 1, "expected one channel, not {}", channels.len());
+ channels[0].channel_id
+ };
+ nodes[1].set_channel_signer_available(&nodes[0].node.get_our_node_id(), &chan_id, true);
+ nodes[1].node.signer_unblocked(Some((nodes[0].node.get_our_node_id(), chan_id)));
+
+ expect_channel_pending_event(&nodes[1], &nodes[0].node.get_our_node_id());
+
+ // nodes[0] <-- funding_signed --- nodes[1]
+ let funding_signed_msg = get_event_msg!(nodes[1], MessageSendEvent::SendFundingSigned, nodes[0].node.get_our_node_id());
+ nodes[0].node.handle_funding_signed(&nodes[1].node.get_our_node_id(), &funding_signed_msg);
+ check_added_monitors(&nodes[0], 1);
+ expect_channel_pending_event(&nodes[0], &nodes[1].node.get_our_node_id());
+}
+
+#[test]
+fn test_async_commitment_signature_for_commitment_signed() {
+ let chanmon_cfgs = create_chanmon_cfgs(2);
+ let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
+ let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
+ let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+ let (_, _, chan_id, _) = create_announced_chan_between_nodes(&nodes, 0, 1);
+
+ // Send a payment.
+ let src = &nodes[0];
+ let dst = &nodes[1];
+ let (route, our_payment_hash, _our_payment_preimage, our_payment_secret) = get_route_and_payment_hash!(src, dst, 8000000);
+ src.node.send_payment_with_route(&route, our_payment_hash,
+ RecipientOnionFields::secret_only(our_payment_secret), PaymentId(our_payment_hash.0)).unwrap();
+ check_added_monitors!(src, 1);
+
+ // Pass the payment along the route.
+ let payment_event = {
+ let mut events = src.node.get_and_clear_pending_msg_events();
+ assert_eq!(events.len(), 1);
+ SendEvent::from_event(events.remove(0))
+ };
+ assert_eq!(payment_event.node_id, dst.node.get_our_node_id());
+ assert_eq!(payment_event.msgs.len(), 1);
+
+ dst.node.handle_update_add_htlc(&src.node.get_our_node_id(), &payment_event.msgs[0]);
+
+ // Mark dst's signer as unavailable and handle src's commitment_signed: while dst won't yet have a
+ // `commitment_signed` of its own to offer, it should publish a `revoke_and_ack`.
+ dst.set_channel_signer_available(&src.node.get_our_node_id(), &chan_id, false);
+ dst.node.handle_commitment_signed(&src.node.get_our_node_id(), &payment_event.commitment_msg);
+ check_added_monitors(dst, 1);
+
+ get_event_msg!(dst, MessageSendEvent::SendRevokeAndACK, src.node.get_our_node_id());
+
+ // Mark dst's signer as available and retry: we now expect to see dst's `commitment_signed`.
+ dst.set_channel_signer_available(&src.node.get_our_node_id(), &chan_id, true);
+ dst.node.signer_unblocked(Some((src.node.get_our_node_id(), chan_id)));
+
+ let events = dst.node.get_and_clear_pending_msg_events();
+ assert_eq!(events.len(), 1, "expected one message, got {}", events.len());
+ if let MessageSendEvent::UpdateHTLCs { ref node_id, .. } = events[0] {
+ assert_eq!(node_id, &src.node.get_our_node_id());
+ } else {
+ panic!("expected UpdateHTLCs message, not {:?}", events[0]);
+ };
+}
+
+#[test]
+fn test_async_commitment_signature_for_funding_signed_0conf() {
+ // Simulate acquiring the signature for `funding_signed` asynchronously for a zero-conf channel.
+ let mut manually_accept_config = test_default_channel_config();
+ manually_accept_config.manually_accept_inbound_channels = true;
+
+ let chanmon_cfgs = create_chanmon_cfgs(2);
+ let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
+ let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, Some(manually_accept_config)]);
+ let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+
+ // nodes[0] --- open_channel --> nodes[1]
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, None).unwrap();
+ let open_channel = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
+
+ nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_channel);
+
+ {
+ let events = nodes[1].node.get_and_clear_pending_events();
+ assert_eq!(events.len(), 1, "Expected one event, got {}", events.len());
+ match &events[0] {
+ Event::OpenChannelRequest { temporary_channel_id, .. } => {
+ nodes[1].node.accept_inbound_channel_from_trusted_peer_0conf(
+ temporary_channel_id, &nodes[0].node.get_our_node_id(), 0)
+ .expect("Unable to accept inbound zero-conf channel");
+ },
+ ev => panic!("Expected OpenChannelRequest, not {:?}", ev)
+ }
+ }
+
+ // nodes[0] <-- accept_channel --- nodes[1]
+ let accept_channel = get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id());
+ assert_eq!(accept_channel.minimum_depth, 0, "Expected minimum depth of 0");
+ nodes[0].node.handle_accept_channel(&nodes[1].node.get_our_node_id(), &accept_channel);
+
+ // nodes[0] --- funding_created --> nodes[1]
+ let (temporary_channel_id, tx, _) = create_funding_transaction(&nodes[0], &nodes[1].node.get_our_node_id(), 100000, 42);
+ nodes[0].node.funding_transaction_generated(&temporary_channel_id, &nodes[1].node.get_our_node_id(), tx.clone()).unwrap();
+ check_added_monitors(&nodes[0], 0);
+
+ let mut funding_created_msg = get_event_msg!(nodes[0], MessageSendEvent::SendFundingCreated, nodes[1].node.get_our_node_id());
+
+ // Now let's make node[1]'s signer be unavailable while handling the `funding_created`. It should
+ // *not* send a `funding_signed`...
+ nodes[1].set_channel_signer_available(&nodes[0].node.get_our_node_id(), &temporary_channel_id, false);
+ nodes[1].node.handle_funding_created(&nodes[0].node.get_our_node_id(), &funding_created_msg);
+ check_added_monitors(&nodes[1], 1);
+
+ assert!(nodes[1].node.get_and_clear_pending_msg_events().is_empty());
+
+ // Now re-enable the signer and simulate a retry. The temporary_channel_id won't work anymore so
+ // we have to dig out the real channel ID.
+ let chan_id = {
+ let channels = nodes[0].node.list_channels();
+ assert_eq!(channels.len(), 1, "expected one channel, not {}", channels.len());
+ channels[0].channel_id
+ };
+
+ // At this point, we basically expect the channel to open like a normal zero-conf channel.
+ nodes[1].set_channel_signer_available(&nodes[0].node.get_our_node_id(), &chan_id, true);
+ nodes[1].node.signer_unblocked(Some((nodes[0].node.get_our_node_id(), chan_id)));
+
+ let (funding_signed, channel_ready_1) = {
+ let events = nodes[1].node.get_and_clear_pending_msg_events();
+ assert_eq!(events.len(), 2);
+ let funding_signed = match &events[0] {
+ MessageSendEvent::SendFundingSigned { msg, .. } => msg.clone(),
+ ev => panic!("Expected SendFundingSigned, not {:?}", ev)
+ };
+ let channel_ready = match &events[1] {
+ MessageSendEvent::SendChannelReady { msg, .. } => msg.clone(),
+ ev => panic!("Expected SendChannelReady, not {:?}", ev)
+ };
+ (funding_signed, channel_ready)
+ };
+
+ nodes[0].node.handle_funding_signed(&nodes[1].node.get_our_node_id(), &funding_signed);
+ expect_channel_pending_event(&nodes[0], &nodes[1].node.get_our_node_id());
+ expect_channel_pending_event(&nodes[1], &nodes[0].node.get_our_node_id());
+ check_added_monitors(&nodes[0], 1);
+
+ let channel_ready_0 = get_event_msg!(nodes[0], MessageSendEvent::SendChannelReady, nodes[1].node.get_our_node_id());
+
+ nodes[0].node.handle_channel_ready(&nodes[1].node.get_our_node_id(), &channel_ready_1);
+ expect_channel_ready_event(&nodes[0], &nodes[1].node.get_our_node_id());
+
+ nodes[1].node.handle_channel_ready(&nodes[0].node.get_our_node_id(), &channel_ready_0);
+ expect_channel_ready_event(&nodes[1], &nodes[0].node.get_our_node_id());
+
+ let channel_update_0 = get_event_msg!(nodes[0], MessageSendEvent::SendChannelUpdate, nodes[1].node.get_our_node_id());
+ let channel_update_1 = get_event_msg!(nodes[1], MessageSendEvent::SendChannelUpdate, nodes[0].node.get_our_node_id());
+
+ nodes[0].node.handle_channel_update(&nodes[1].node.get_our_node_id(), &channel_update_1);
+ nodes[1].node.handle_channel_update(&nodes[0].node.get_our_node_id(), &channel_update_0);
+
+ assert_eq!(nodes[0].node.list_usable_channels().len(), 1);
+ assert_eq!(nodes[1].node.list_usable_channels().len(), 1);
+}
+
+#[test]
+fn test_async_commitment_signature_for_peer_disconnect() {
+ let chanmon_cfgs = create_chanmon_cfgs(2);
+ let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
+ let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
+ let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+ let (_, _, chan_id, _) = create_announced_chan_between_nodes(&nodes, 0, 1);
+
+ // Send a payment.
+ let src = &nodes[0];
+ let dst = &nodes[1];
+ let (route, our_payment_hash, _our_payment_preimage, our_payment_secret) = get_route_and_payment_hash!(src, dst, 8000000);
+ src.node.send_payment_with_route(&route, our_payment_hash,
+ RecipientOnionFields::secret_only(our_payment_secret), PaymentId(our_payment_hash.0)).unwrap();
+ check_added_monitors!(src, 1);
+
+ // Pass the payment along the route.
+ let payment_event = {
+ let mut events = src.node.get_and_clear_pending_msg_events();
+ assert_eq!(events.len(), 1);
+ SendEvent::from_event(events.remove(0))
+ };
+ assert_eq!(payment_event.node_id, dst.node.get_our_node_id());
+ assert_eq!(payment_event.msgs.len(), 1);
+
+ dst.node.handle_update_add_htlc(&src.node.get_our_node_id(), &payment_event.msgs[0]);
+
+ // Mark dst's signer as unavailable and handle src's commitment_signed: while dst won't yet have a
+ // `commitment_signed` of its own to offer, it should publish a `revoke_and_ack`.
+ dst.set_channel_signer_available(&src.node.get_our_node_id(), &chan_id, false);
+ dst.node.handle_commitment_signed(&src.node.get_our_node_id(), &payment_event.commitment_msg);
+ check_added_monitors(dst, 1);
+
+ get_event_msg!(dst, MessageSendEvent::SendRevokeAndACK, src.node.get_our_node_id());
+
+ // Now disconnect and reconnect the peers.
+ src.node.peer_disconnected(&dst.node.get_our_node_id());
+ dst.node.peer_disconnected(&src.node.get_our_node_id());
+ let mut reconnect_args = ReconnectArgs::new(&nodes[0], &nodes[1]);
+ reconnect_args.send_channel_ready = (false, false);
+ reconnect_args.pending_raa = (true, false);
+ reconnect_nodes(reconnect_args);
+
+ // Mark dst's signer as available and retry: we now expect to see dst's `commitment_signed`.
+ dst.set_channel_signer_available(&src.node.get_our_node_id(), &chan_id, true);
+ dst.node.signer_unblocked(Some((src.node.get_our_node_id(), chan_id)));
+
+ {
+ let events = dst.node.get_and_clear_pending_msg_events();
+ assert_eq!(events.len(), 1, "expected one message, got {}", events.len());
+ if let MessageSendEvent::UpdateHTLCs { ref node_id, .. } = events[0] {
+ assert_eq!(node_id, &src.node.get_our_node_id());
+ } else {
+ panic!("expected UpdateHTLCs message, not {:?}", events[0]);
+ };
+ }
+}
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
let mut nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 43, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 43, None, None).unwrap();
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id()));
nodes[0].node.handle_accept_channel(&nodes[1].node.get_our_node_id(), &get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id()));
let mut nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 43, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 43, None, None).unwrap();
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id()));
let events = nodes[1].node.get_and_clear_pending_events();
let mut nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 43, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 43, None, None).unwrap();
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id()));
let events = nodes[1].node.get_and_clear_pending_events();
use crate::sign::{EcdsaChannelSigner, WriteableEcdsaChannelSigner, EntropySource, ChannelSigner, SignerProvider, NodeSigner, Recipient};
use crate::events::ClosureReason;
use crate::routing::gossip::NodeId;
-use crate::util::ser::{Readable, ReadableArgs, Writeable, Writer, VecWriter};
+use crate::util::ser::{Readable, ReadableArgs, Writeable, Writer};
use crate::util::logger::Logger;
use crate::util::errors::APIError;
use crate::util::config::{UserConfig, ChannelConfig, LegacyChannelConfig, ChannelHandshakeConfig, ChannelHandshakeLimits, MaxDustHTLCExposure};
use crate::io;
use crate::prelude::*;
use core::{cmp,mem,fmt};
+use core::convert::TryInto;
use core::ops::Deref;
#[cfg(any(test, fuzzing, debug_assertions))]
use crate::sync::Mutex;
pub announcement_sigs: Option<msgs::AnnouncementSignatures>,
}
+/// The return value of `signer_maybe_unblocked`
+#[allow(unused)]
+pub(super) struct SignerResumeUpdates {
+ pub commitment_update: Option<msgs::CommitmentUpdate>,
+ pub funding_signed: Option<msgs::FundingSigned>,
+ pub funding_created: Option<msgs::FundingCreated>,
+ pub channel_ready: Option<msgs::ChannelReady>,
+}
+
/// The return value of `channel_reestablish`
pub(super) struct ReestablishResponses {
pub channel_ready: Option<msgs::ChannelReady>,
monitor_pending_failures: Vec<(HTLCSource, PaymentHash, HTLCFailReason)>,
monitor_pending_finalized_fulfills: Vec<HTLCSource>,
+ /// If we went to send a commitment update (i.e. some messages followed by [`msgs::CommitmentSigned`])
+ /// but our signer (initially) refused to give us a signature, we should retry at some point in
+ /// the future when the signer indicates it may have a signature for us.
+ ///
+ /// This flag is set in such a case. Note that we don't need to persist this as we'll end up
+ /// setting it again as a side-effect of [`Channel::channel_reestablish`].
+ signer_pending_commitment_update: bool,
+ /// Similar to [`Self::signer_pending_commitment_update`] but we're waiting to send either a
+ /// [`msgs::FundingCreated`] or [`msgs::FundingSigned`] depending on if this channel is
+ /// outbound or inbound.
+ signer_pending_funding: bool,
+
// pending_update_fee is filled when sending and receiving update_fee.
//
// Because it follows the same commitment flow as HTLCs, `FeeUpdateState` is either `Outbound`
#[cfg(not(test))]
closing_fee_limits: Option<(u64, u64)>,
+ /// If we remove an HTLC (or fee update), commit, and receive our counterparty's
+ /// `revoke_and_ack`, we remove all knowledge of said HTLC (or fee update). However, the latest
+ /// local commitment transaction that we can broadcast still contains the HTLC (or old fee)
+ /// until we receive a further `commitment_signed`. Thus we are not eligible for initiating the
+ /// `closing_signed` negotiation if we're expecting a counterparty `commitment_signed`.
+ ///
+ /// To ensure we don't send a `closing_signed` too early, we track this state here, waiting
+ /// until we see a `commitment_signed` before doing so.
+ ///
+ /// We don't bother to persist this - we anticipate this state won't last longer than a few
+ /// milliseconds, so any accidental force-closes here should be exceedingly rare.
+ expecting_peer_commitment_signed: bool,
+
/// The hash of the block in which the funding transaction was included.
funding_tx_confirmed_in: Option<BlockHash>,
funding_tx_confirmation_height: u32,
self.outbound_scid_alias
}
+ /// Returns the holder signer for this channel.
+ #[cfg(test)]
+ pub fn get_signer(&self) -> &ChannelSignerType<<SP::Target as SignerProvider>::Signer> {
+ &self.holder_signer
+ }
+
/// Only allowed immediately after deserialization if get_outbound_scid_alias returns 0,
/// indicating we were written by LDK prior to 0.0.106 which did not set outbound SCID aliases
/// or prior to any channel actions during `Channel` initialization.
match self.config.options.max_dust_htlc_exposure {
MaxDustHTLCExposure::FeeRateMultiplier(multiplier) => {
let feerate_per_kw = fee_estimator.bounded_sat_per_1000_weight(
- ConfirmationTarget::OnChainSweep);
- feerate_per_kw as u64 * multiplier
+ ConfirmationTarget::OnChainSweep) as u64;
+ feerate_per_kw.saturating_mul(multiplier)
},
MaxDustHTLCExposure::FixedLimitMsat(limit) => limit,
}
context.holder_dust_limit_satoshis + dust_buffer_feerate * htlc_timeout_tx_weight(context.get_channel_type()) / 1000)
};
let on_counterparty_dust_htlc_exposure_msat = inbound_stats.on_counterparty_tx_dust_exposure_msat + outbound_stats.on_counterparty_tx_dust_exposure_msat;
- if on_counterparty_dust_htlc_exposure_msat as i64 + htlc_success_dust_limit as i64 * 1000 - 1 > max_dust_htlc_exposure_msat as i64 {
+ if on_counterparty_dust_htlc_exposure_msat as i64 + htlc_success_dust_limit as i64 * 1000 - 1 > max_dust_htlc_exposure_msat.try_into().unwrap_or(i64::max_value()) {
remaining_msat_below_dust_exposure_limit =
Some(max_dust_htlc_exposure_msat.saturating_sub(on_counterparty_dust_htlc_exposure_msat));
dust_exposure_dust_limit_msat = cmp::max(dust_exposure_dust_limit_msat, htlc_success_dust_limit * 1000);
}
let on_holder_dust_htlc_exposure_msat = inbound_stats.on_holder_tx_dust_exposure_msat + outbound_stats.on_holder_tx_dust_exposure_msat;
- if on_holder_dust_htlc_exposure_msat as i64 + htlc_timeout_dust_limit as i64 * 1000 - 1 > max_dust_htlc_exposure_msat as i64 {
+ if on_holder_dust_htlc_exposure_msat as i64 + htlc_timeout_dust_limit as i64 * 1000 - 1 > max_dust_htlc_exposure_msat.try_into().unwrap_or(i64::max_value()) {
remaining_msat_below_dust_exposure_limit = Some(cmp::min(
remaining_msat_below_dust_exposure_limit.unwrap_or(u64::max_value()),
max_dust_htlc_exposure_msat.saturating_sub(on_holder_dust_htlc_exposure_msat)));
unbroadcasted_batch_funding_txid,
}
}
+
+ /// Only allowed after [`Self::channel_transaction_parameters`] is set.
+ fn get_funding_created_msg<L: Deref>(&mut self, logger: &L) -> Option<msgs::FundingCreated> where L::Target: Logger {
+ let counterparty_keys = self.build_remote_transaction_keys();
+ let counterparty_initial_commitment_tx = self.build_commitment_transaction(self.cur_counterparty_commitment_transaction_number, &counterparty_keys, false, false, logger).tx;
+ let signature = match &self.holder_signer {
+ // TODO (taproot|arik): move match into calling method for Taproot
+ ChannelSignerType::Ecdsa(ecdsa) => {
+ ecdsa.sign_counterparty_commitment(&counterparty_initial_commitment_tx, Vec::new(), &self.secp_ctx)
+ .map(|(sig, _)| sig).ok()?
+ }
+ };
+
+ if self.signer_pending_funding {
+ log_trace!(logger, "Counterparty commitment signature ready for funding_created message: clearing signer_pending_funding");
+ self.signer_pending_funding = false;
+ }
+
+ Some(msgs::FundingCreated {
+ temporary_channel_id: self.temporary_channel_id.unwrap(),
+ funding_txid: self.channel_transaction_parameters.funding_outpoint.as_ref().unwrap().txid,
+ funding_output_index: self.channel_transaction_parameters.funding_outpoint.as_ref().unwrap().index,
+ signature,
+ #[cfg(taproot)]
+ partial_signature_with_nonce: None,
+ #[cfg(taproot)]
+ next_local_nonce: None,
+ })
+ }
+
+ /// Only allowed after [`Self::channel_transaction_parameters`] is set.
+ fn get_funding_signed_msg<L: Deref>(&mut self, logger: &L) -> (CommitmentTransaction, Option<msgs::FundingSigned>) where L::Target: Logger {
+ let counterparty_keys = self.build_remote_transaction_keys();
+ let counterparty_initial_commitment_tx = self.build_commitment_transaction(self.cur_counterparty_commitment_transaction_number + 1, &counterparty_keys, false, false, logger).tx;
+
+ let counterparty_trusted_tx = counterparty_initial_commitment_tx.trust();
+ let counterparty_initial_bitcoin_tx = counterparty_trusted_tx.built_transaction();
+ log_trace!(logger, "Initial counterparty tx for channel {} is: txid {} tx {}",
+ &self.channel_id(), counterparty_initial_bitcoin_tx.txid, encode::serialize_hex(&counterparty_initial_bitcoin_tx.transaction));
+
+ match &self.holder_signer {
+ // TODO (arik): move match into calling method for Taproot
+ ChannelSignerType::Ecdsa(ecdsa) => {
+ let funding_signed = ecdsa.sign_counterparty_commitment(&counterparty_initial_commitment_tx, Vec::new(), &self.secp_ctx)
+ .map(|(signature, _)| msgs::FundingSigned {
+ channel_id: self.channel_id(),
+ signature,
+ #[cfg(taproot)]
+ partial_signature_with_nonce: None,
+ })
+ .ok();
+
+ if funding_signed.is_none() {
+ log_trace!(logger, "Counterparty commitment signature not available for funding_signed message; setting signer_pending_funding");
+ self.signer_pending_funding = true;
+ } else if self.signer_pending_funding {
+ log_trace!(logger, "Counterparty commitment signature available for funding_signed message; clearing signer_pending_funding");
+ self.signer_pending_funding = false;
+ }
+
+ // We sign the "counterparty" commitment transaction, allowing them to broadcast the tx if they wish.
+ (counterparty_initial_commitment_tx, funding_signed)
+ }
+ }
+ }
}
// Internal utility functions for channels
};
self.context.cur_holder_commitment_transaction_number -= 1;
+ self.context.expecting_peer_commitment_signed = false;
// Note that if we need_commitment & !AwaitingRemoteRevoke we'll call
// build_commitment_no_status_check() next which will reset this to RAAFirst.
self.context.resend_order = RAACommitmentOrder::CommitmentFirst;
self.context.monitor_pending_revoke_and_ack = true;
if need_commitment && (self.context.channel_state & (ChannelState::AwaitingRemoteRevoke as u32)) == 0 {
// If we were going to send a commitment_signed after the RAA, go ahead and do all
- // the corresponding HTLC status updates so that get_last_commitment_update
- // includes the right HTLCs.
+ // the corresponding HTLC status updates so that
+ // get_last_commitment_update_for_send includes the right HTLCs.
self.context.monitor_pending_commitment_signed = true;
let mut additional_update = self.build_commitment_no_status_check(logger);
// build_commitment_no_status_check may bump latest_monitor_id but we want them to be
// Take references explicitly so that we can hold multiple references to self.context.
let pending_inbound_htlcs: &mut Vec<_> = &mut self.context.pending_inbound_htlcs;
let pending_outbound_htlcs: &mut Vec<_> = &mut self.context.pending_outbound_htlcs;
+ let expecting_peer_commitment_signed = &mut self.context.expecting_peer_commitment_signed;
// We really shouldn't have two passes here, but retain gives a non-mutable ref (Rust bug)
pending_inbound_htlcs.retain(|htlc| {
if let &InboundHTLCRemovalReason::Fulfill(_) = reason {
value_to_self_msat_diff += htlc.amount_msat as i64;
}
+ *expecting_peer_commitment_signed = true;
false
} else { true }
});
if let OutboundHTLCState::LocalAnnounced(_) = htlc.state {
log_trace!(logger, " ...promoting outbound LocalAnnounced {} to Committed", &htlc.payment_hash);
htlc.state = OutboundHTLCState::Committed;
+ *expecting_peer_commitment_signed = true;
}
if let &mut OutboundHTLCState::AwaitingRemoteRevokeToRemove(ref mut outcome) = &mut htlc.state {
log_trace!(logger, " ...promoting outbound AwaitingRemoteRevokeToRemove {} to AwaitingRemovedRemoteRevoke", &htlc.payment_hash);
log_trace!(logger, " ...promoting outbound fee update {} to Committed", feerate);
self.context.feerate_per_kw = feerate;
self.context.pending_update_fee = None;
+ self.context.expecting_peer_commitment_signed = true;
},
FeeUpdateState::RemoteAnnounced => { debug_assert!(!self.context.is_outbound()); },
FeeUpdateState::AwaitingRemoteRevokeToAnnounce => {
// cells) while we can't update the monitor, so we just return what we have.
if require_commitment {
self.context.monitor_pending_commitment_signed = true;
- // When the monitor updating is restored we'll call get_last_commitment_update(),
- // which does not update state, but we're definitely now awaiting a remote revoke
- // before we can step forward any more, so set it here.
+ // When the monitor updating is restored we'll call
+ // get_last_commitment_update_for_send(), which does not update state, but we're
+ // definitely now awaiting a remote revoke before we can step forward any more, so
+ // set it here.
let mut additional_update = self.build_commitment_no_status_check(logger);
// build_commitment_no_status_check may bump latest_monitor_id but we want them to be
// strictly increasing by one, so decrement it here.
Some(self.get_last_revoke_and_ack())
} else { None };
let commitment_update = if self.context.monitor_pending_commitment_signed {
- self.mark_awaiting_response();
- Some(self.get_last_commitment_update(logger))
+ self.get_last_commitment_update_for_send(logger).ok()
} else { None };
+ if commitment_update.is_some() {
+ self.mark_awaiting_response();
+ }
self.context.monitor_pending_revoke_and_ack = false;
self.context.monitor_pending_commitment_signed = false;
Ok(())
}
+ /// Indicates that the signer may have some signatures for us, so we should retry if we're
+ /// blocked.
+ #[allow(unused)]
+ pub fn signer_maybe_unblocked<L: Deref>(&mut self, logger: &L) -> SignerResumeUpdates where L::Target: Logger {
+ let commitment_update = if self.context.signer_pending_commitment_update {
+ self.get_last_commitment_update_for_send(logger).ok()
+ } else { None };
+ let funding_signed = if self.context.signer_pending_funding && !self.context.is_outbound() {
+ self.context.get_funding_signed_msg(logger).1
+ } else { None };
+ let channel_ready = if funding_signed.is_some() {
+ self.check_get_channel_ready(0)
+ } else { None };
+ let funding_created = if self.context.signer_pending_funding && self.context.is_outbound() {
+ self.context.get_funding_created_msg(logger)
+ } else { None };
+
+ log_trace!(logger, "Signer unblocked with {} commitment_update, {} funding_signed, {} funding_created, and {} channel_ready",
+ if commitment_update.is_some() { "a" } else { "no" },
+ if funding_signed.is_some() { "a" } else { "no" },
+ if funding_created.is_some() { "a" } else { "no" },
+ if channel_ready.is_some() { "a" } else { "no" });
+
+ SignerResumeUpdates {
+ commitment_update,
+ funding_signed,
+ funding_created,
+ channel_ready,
+ }
+ }
+
fn get_last_revoke_and_ack(&self) -> msgs::RevokeAndACK {
let next_per_commitment_point = self.context.holder_signer.as_ref().get_per_commitment_point(self.context.cur_holder_commitment_transaction_number, &self.context.secp_ctx);
let per_commitment_secret = self.context.holder_signer.as_ref().release_commitment_secret(self.context.cur_holder_commitment_transaction_number + 2);
}
}
- fn get_last_commitment_update<L: Deref>(&self, logger: &L) -> msgs::CommitmentUpdate where L::Target: Logger {
+ /// Gets the last commitment update for immediate sending to our peer.
+ fn get_last_commitment_update_for_send<L: Deref>(&mut self, logger: &L) -> Result<msgs::CommitmentUpdate, ()> where L::Target: Logger {
let mut update_add_htlcs = Vec::new();
let mut update_fulfill_htlcs = Vec::new();
let mut update_fail_htlcs = Vec::new();
})
} else { None };
- log_trace!(logger, "Regenerated latest commitment update in channel {} with{} {} update_adds, {} update_fulfills, {} update_fails, and {} update_fail_malformeds",
+ log_trace!(logger, "Regenerating latest commitment update in channel {} with{} {} update_adds, {} update_fulfills, {} update_fails, and {} update_fail_malformeds",
&self.context.channel_id(), if update_fee.is_some() { " update_fee," } else { "" },
update_add_htlcs.len(), update_fulfill_htlcs.len(), update_fail_htlcs.len(), update_fail_malformed_htlcs.len());
- msgs::CommitmentUpdate {
+ let commitment_signed = if let Ok(update) = self.send_commitment_no_state_update(logger).map(|(cu, _)| cu) {
+ if self.context.signer_pending_commitment_update {
+ log_trace!(logger, "Commitment update generated: clearing signer_pending_commitment_update");
+ self.context.signer_pending_commitment_update = false;
+ }
+ update
+ } else {
+ if !self.context.signer_pending_commitment_update {
+ log_trace!(logger, "Commitment update awaiting signer: setting signer_pending_commitment_update");
+ self.context.signer_pending_commitment_update = true;
+ }
+ return Err(());
+ };
+ Ok(msgs::CommitmentUpdate {
update_add_htlcs, update_fulfill_htlcs, update_fail_htlcs, update_fail_malformed_htlcs, update_fee,
- commitment_signed: self.send_commitment_no_state_update(logger).expect("It looks like we failed to re-generate a commitment_signed we had previously sent?").0,
- }
+ commitment_signed,
+ })
}
/// Gets the `Shutdown` message we should send our peer on reconnect, if any.
Ok(ReestablishResponses {
channel_ready, shutdown_msg, announcement_sigs,
raa: required_revoke,
- commitment_update: Some(self.get_last_commitment_update(logger)),
+ commitment_update: self.get_last_commitment_update_for_send(logger).ok(),
order: self.context.resend_order.clone(),
})
}
-> Result<(Option<msgs::ClosingSigned>, Option<Transaction>, Option<ShutdownResult>), ChannelError>
where F::Target: FeeEstimator, L::Target: Logger
{
+ // If we're waiting on a monitor persistence, that implies we're also waiting to send some
+ // message to our counterparty (probably a `revoke_and_ack`). In such a case, we shouldn't
+ // initiate `closing_signed` negotiation until we're clear of all pending messages. Note
+ // that closing_negotiation_ready checks this case (as well as a few others).
if self.context.last_sent_closing_fee.is_some() || !self.closing_negotiation_ready() {
return Ok((None, None, None));
}
return Ok((None, None, None));
}
+ // If we're waiting on a counterparty `commitment_signed` to clear some updates from our
+ // local commitment transaction, we can't yet initiate `closing_signed` negotiation.
+ if self.context.expecting_peer_commitment_signed {
+ return Ok((None, None, None));
+ }
+
let (our_min_fee, our_max_fee) = self.calculate_closing_fee_limits(fee_estimator);
assert!(self.context.shutdown_scriptpubkey.is_some());
return None;
}
+ // If we're still awaiting the signature on the funding transaction, we're not yet ready to
+ // send a channel_ready.
+ if self.context.signer_pending_funding {
+ return None;
+ }
+
// Note that we don't include ChannelState::WaitingForBatch as we don't want to send
// channel_ready until the entire batch is ready.
let non_shutdown_state = self.context.channel_state & (!MULTI_STATE_FLAGS);
}
let res = ecdsa.sign_counterparty_commitment(&commitment_stats.tx, commitment_stats.preimages, &self.context.secp_ctx)
- .map_err(|_| ChannelError::Close("Failed to get signatures for new commitment_signed".to_owned()))?;
+ .map_err(|_| ChannelError::Ignore("Failed to get signatures for new commitment_signed".to_owned()))?;
signature = res.0;
htlc_signatures = res.1;
pub fn new<ES: Deref, F: Deref>(
fee_estimator: &LowerBoundedFeeEstimator<F>, entropy_source: &ES, signer_provider: &SP, counterparty_node_id: PublicKey, their_features: &InitFeatures,
channel_value_satoshis: u64, push_msat: u64, user_id: u128, config: &UserConfig, current_chain_height: u32,
- outbound_scid_alias: u64
+ outbound_scid_alias: u64, temporary_channel_id: Option<ChannelId>
) -> Result<OutboundV1Channel<SP>, APIError>
where ES::Target: EntropySource,
F::Target: FeeEstimator
Err(_) => return Err(APIError::ChannelUnavailable { err: "Failed to get destination script".to_owned()}),
};
- let temporary_channel_id = ChannelId::temporary_from_entropy_source(entropy_source);
+ let temporary_channel_id = temporary_channel_id.unwrap_or_else(|| ChannelId::temporary_from_entropy_source(entropy_source));
Ok(Self {
context: ChannelContext {
monitor_pending_failures: Vec::new(),
monitor_pending_finalized_fulfills: Vec::new(),
+ signer_pending_commitment_update: false,
+ signer_pending_funding: false,
+
#[cfg(debug_assertions)]
holder_max_commitment_tx_output: Mutex::new((channel_value_satoshis * 1000 - push_msat, push_msat)),
#[cfg(debug_assertions)]
last_sent_closing_fee: None,
pending_counterparty_closing_signed: None,
+ expecting_peer_commitment_signed: false,
closing_fee_limits: None,
target_closing_feerate_sats_per_kw: None,
})
}
- /// If an Err is returned, it is a ChannelError::Close (for get_funding_created)
- fn get_funding_created_signature<L: Deref>(&mut self, logger: &L) -> Result<Signature, ChannelError> where L::Target: Logger {
- let counterparty_keys = self.context.build_remote_transaction_keys();
- let counterparty_initial_commitment_tx = self.context.build_commitment_transaction(self.context.cur_counterparty_commitment_transaction_number, &counterparty_keys, false, false, logger).tx;
- match &self.context.holder_signer {
- // TODO (taproot|arik): move match into calling method for Taproot
- ChannelSignerType::Ecdsa(ecdsa) => {
- Ok(ecdsa.sign_counterparty_commitment(&counterparty_initial_commitment_tx, Vec::new(), &self.context.secp_ctx)
- .map_err(|_| ChannelError::Close("Failed to get signatures for new commitment_signed".to_owned()))?.0)
- }
- }
- }
-
/// Updates channel state with knowledge of the funding transaction's txid/index, and generates
/// a funding_created message for the remote peer.
/// Panics if called at some time other than immediately after initial handshake, if called twice,
/// Do NOT broadcast the funding transaction until after a successful funding_signed call!
/// If an Err is returned, it is a ChannelError::Close.
pub fn get_funding_created<L: Deref>(mut self, funding_transaction: Transaction, funding_txo: OutPoint, is_batch_funding: bool, logger: &L)
- -> Result<(Channel<SP>, msgs::FundingCreated), (Self, ChannelError)> where L::Target: Logger {
+ -> Result<(Channel<SP>, Option<msgs::FundingCreated>), (Self, ChannelError)> where L::Target: Logger {
if !self.context.is_outbound() {
panic!("Tried to create outbound funding_created message on an inbound channel!");
}
self.context.channel_transaction_parameters.funding_outpoint = Some(funding_txo);
self.context.holder_signer.as_mut().provide_channel_parameters(&self.context.channel_transaction_parameters);
- let signature = match self.get_funding_created_signature(logger) {
- Ok(res) => res,
- Err(e) => {
- log_error!(logger, "Got bad signatures: {:?}!", e);
- self.context.channel_transaction_parameters.funding_outpoint = None;
- return Err((self, e));
- }
- };
-
- let temporary_channel_id = self.context.channel_id;
-
// Now that we're past error-generating stuff, update our local state:
self.context.channel_state = ChannelState::FundingCreated as u32;
self.context.funding_transaction = Some(funding_transaction);
self.context.is_batch_funding = Some(()).filter(|_| is_batch_funding);
+ let funding_created = self.context.get_funding_created_msg(logger);
+ if funding_created.is_none() && !self.context.signer_pending_funding {
+ log_trace!(logger, "funding_created awaiting signer; setting signer_pending_funding");
+ self.context.signer_pending_funding = true;
+ }
+
let channel = Channel {
context: self.context,
};
- Ok((channel, msgs::FundingCreated {
- temporary_channel_id,
- funding_txid: funding_txo.txid,
- funding_output_index: funding_txo.index,
- signature,
- #[cfg(taproot)]
- partial_signature_with_nonce: None,
- #[cfg(taproot)]
- next_local_nonce: None,
- }))
+ Ok((channel, funding_created))
}
fn get_initial_channel_type(config: &UserConfig, their_features: &InitFeatures) -> ChannelTypeFeatures {
monitor_pending_failures: Vec::new(),
monitor_pending_finalized_fulfills: Vec::new(),
+ signer_pending_commitment_update: false,
+ signer_pending_funding: false,
+
#[cfg(debug_assertions)]
holder_max_commitment_tx_output: Mutex::new((msg.push_msat, msg.funding_satoshis * 1000 - msg.push_msat)),
#[cfg(debug_assertions)]
last_sent_closing_fee: None,
pending_counterparty_closing_signed: None,
+ expecting_peer_commitment_signed: false,
closing_fee_limits: None,
target_closing_feerate_sats_per_kw: None,
self.generate_accept_channel_message()
}
- fn funding_created_signature<L: Deref>(&mut self, sig: &Signature, logger: &L) -> Result<(CommitmentTransaction, CommitmentTransaction, Signature), ChannelError> where L::Target: Logger {
+ fn check_funding_created_signature<L: Deref>(&mut self, sig: &Signature, logger: &L) -> Result<CommitmentTransaction, ChannelError> where L::Target: Logger {
let funding_script = self.context.get_funding_redeemscript();
let keys = self.context.build_holder_transaction_keys(self.context.cur_holder_commitment_transaction_number);
let initial_commitment_tx = self.context.build_commitment_transaction(self.context.cur_holder_commitment_transaction_number, &keys, true, false, logger).tx;
- {
- let trusted_tx = initial_commitment_tx.trust();
- let initial_commitment_bitcoin_tx = trusted_tx.built_transaction();
- let sighash = initial_commitment_bitcoin_tx.get_sighash_all(&funding_script, self.context.channel_value_satoshis);
- // They sign the holder commitment transaction...
- log_trace!(logger, "Checking funding_created tx signature {} by key {} against tx {} (sighash {}) with redeemscript {} for channel {}.",
- log_bytes!(sig.serialize_compact()[..]), log_bytes!(self.context.counterparty_funding_pubkey().serialize()),
- encode::serialize_hex(&initial_commitment_bitcoin_tx.transaction), log_bytes!(sighash[..]),
- encode::serialize_hex(&funding_script), &self.context.channel_id());
- secp_check!(self.context.secp_ctx.verify_ecdsa(&sighash, &sig, self.context.counterparty_funding_pubkey()), "Invalid funding_created signature from peer".to_owned());
- }
-
- let counterparty_keys = self.context.build_remote_transaction_keys();
- let counterparty_initial_commitment_tx = self.context.build_commitment_transaction(self.context.cur_counterparty_commitment_transaction_number, &counterparty_keys, false, false, logger).tx;
+ let trusted_tx = initial_commitment_tx.trust();
+ let initial_commitment_bitcoin_tx = trusted_tx.built_transaction();
+ let sighash = initial_commitment_bitcoin_tx.get_sighash_all(&funding_script, self.context.channel_value_satoshis);
+ // They sign the holder commitment transaction...
+ log_trace!(logger, "Checking funding_created tx signature {} by key {} against tx {} (sighash {}) with redeemscript {} for channel {}.",
+ log_bytes!(sig.serialize_compact()[..]), log_bytes!(self.context.counterparty_funding_pubkey().serialize()),
+ encode::serialize_hex(&initial_commitment_bitcoin_tx.transaction), log_bytes!(sighash[..]),
+ encode::serialize_hex(&funding_script), &self.context.channel_id());
+ secp_check!(self.context.secp_ctx.verify_ecdsa(&sighash, &sig, self.context.counterparty_funding_pubkey()), "Invalid funding_created signature from peer".to_owned());
- let counterparty_trusted_tx = counterparty_initial_commitment_tx.trust();
- let counterparty_initial_bitcoin_tx = counterparty_trusted_tx.built_transaction();
- log_trace!(logger, "Initial counterparty tx for channel {} is: txid {} tx {}",
- &self.context.channel_id(), counterparty_initial_bitcoin_tx.txid, encode::serialize_hex(&counterparty_initial_bitcoin_tx.transaction));
-
- match &self.context.holder_signer {
- // TODO (arik): move match into calling method for Taproot
- ChannelSignerType::Ecdsa(ecdsa) => {
- let counterparty_signature = ecdsa.sign_counterparty_commitment(&counterparty_initial_commitment_tx, Vec::new(), &self.context.secp_ctx)
- .map_err(|_| ChannelError::Close("Failed to get signatures for new commitment_signed".to_owned()))?.0;
-
- // We sign "counterparty" commitment transaction, allowing them to broadcast the tx if they wish.
- Ok((counterparty_initial_commitment_tx, initial_commitment_tx, counterparty_signature))
- }
- }
+ Ok(initial_commitment_tx)
}
pub fn funding_created<L: Deref>(
mut self, msg: &msgs::FundingCreated, best_block: BestBlock, signer_provider: &SP, logger: &L
- ) -> Result<(Channel<SP>, msgs::FundingSigned, ChannelMonitor<<SP::Target as SignerProvider>::Signer>), (Self, ChannelError)>
+ ) -> Result<(Channel<SP>, Option<msgs::FundingSigned>, ChannelMonitor<<SP::Target as SignerProvider>::Signer>), (Self, ChannelError)>
where
L::Target: Logger
{
let funding_txo = OutPoint { txid: msg.funding_txid, index: msg.funding_output_index };
self.context.channel_transaction_parameters.funding_outpoint = Some(funding_txo);
// This is an externally observable change before we finish all our checks. In particular
- // funding_created_signature may fail.
+ // check_funding_created_signature may fail.
self.context.holder_signer.as_mut().provide_channel_parameters(&self.context.channel_transaction_parameters);
- let (counterparty_initial_commitment_tx, initial_commitment_tx, signature) = match self.funding_created_signature(&msg.signature, logger) {
+ let initial_commitment_tx = match self.check_funding_created_signature(&msg.signature, logger) {
Ok(res) => res,
Err(ChannelError::Close(e)) => {
self.context.channel_transaction_parameters.funding_outpoint = None;
Err(e) => {
// The only error we know how to handle is ChannelError::Close, so we fall over here
// to make sure we don't continue with an inconsistent state.
- panic!("unexpected error type from funding_created_signature {:?}", e);
+ panic!("unexpected error type from check_funding_created_signature {:?}", e);
}
};
// Now that we're past error-generating stuff, update our local state:
+ self.context.channel_state = ChannelState::FundingSent as u32;
+ self.context.channel_id = funding_txo.to_channel_id();
+ self.context.cur_counterparty_commitment_transaction_number -= 1;
+ self.context.cur_holder_commitment_transaction_number -= 1;
+
+ let (counterparty_initial_commitment_tx, funding_signed) = self.context.get_funding_signed_msg(logger);
+
let funding_redeemscript = self.context.get_funding_redeemscript();
let funding_txo_script = funding_redeemscript.to_v0_p2wsh();
let obscure_factor = get_commitment_transaction_number_obscure_factor(&self.context.get_holder_pubkeys().payment_point, &self.context.get_counterparty_pubkeys().payment_point, self.context.is_outbound());
channel_monitor.provide_initial_counterparty_commitment_tx(
counterparty_initial_commitment_tx.trust().txid(), Vec::new(),
- self.context.cur_counterparty_commitment_transaction_number,
+ self.context.cur_counterparty_commitment_transaction_number + 1,
self.context.counterparty_cur_commitment_point.unwrap(), self.context.feerate_per_kw,
counterparty_initial_commitment_tx.to_broadcaster_value_sat(),
counterparty_initial_commitment_tx.to_countersignatory_value_sat(), logger);
- self.context.channel_state = ChannelState::FundingSent as u32;
- self.context.channel_id = funding_txo.to_channel_id();
- self.context.cur_counterparty_commitment_transaction_number -= 1;
- self.context.cur_holder_commitment_transaction_number -= 1;
-
- log_info!(logger, "Generated funding_signed for peer for channel {}", &self.context.channel_id());
+ log_info!(logger, "{} funding_signed for peer for channel {}",
+ if funding_signed.is_some() { "Generated" } else { "Waiting for signature on" }, &self.context.channel_id());
// Promote the channel to a full-fledged one now that we have updated the state and have a
// `ChannelMonitor`.
let mut channel = Channel {
context: self.context,
};
- let channel_id = channel.context.channel_id.clone();
let need_channel_ready = channel.check_get_channel_ready(0).is_some();
channel.monitor_updating_paused(false, false, need_channel_ready, Vec::new(), Vec::new(), Vec::new());
- Ok((channel, msgs::FundingSigned {
- channel_id,
- signature,
- #[cfg(taproot)]
- partial_signature_with_nonce: None,
- }, channel_monitor))
+ Ok((channel, funding_signed, channel_monitor))
}
}
const SERIALIZATION_VERSION: u8 = 3;
-const MIN_SERIALIZATION_VERSION: u8 = 2;
+const MIN_SERIALIZATION_VERSION: u8 = 3;
impl_writeable_tlv_based_enum!(InboundHTLCRemovalReason,;
(0, FailRelay),
self.context.latest_monitor_update_id.write(writer)?;
- let mut key_data = VecWriter(Vec::new());
- // TODO (taproot|arik): Introduce serialization distinction for non-ECDSA signers.
- self.context.holder_signer.as_ecdsa().expect("Only ECDSA signers may be serialized").write(&mut key_data)?;
- assert!(key_data.0.len() < core::usize::MAX);
- assert!(key_data.0.len() < core::u32::MAX as usize);
- (key_data.0.len() as u32).write(writer)?;
- writer.write_all(&key_data.0[..])?;
-
// Write out the old serialization for shutdown_pubkey for backwards compatibility, if
// deserialized from that format.
match self.context.shutdown_scriptpubkey.as_ref().and_then(|script| script.as_legacy_pubkey()) {
monitor_pending_failures,
monitor_pending_finalized_fulfills: monitor_pending_finalized_fulfills.unwrap(),
+ signer_pending_commitment_update: false,
+ signer_pending_funding: false,
+
pending_update_fee,
holding_cell_update_fee,
next_holder_htlc_id,
last_sent_closing_fee: None,
pending_counterparty_closing_signed: None,
+ expecting_peer_commitment_signed: false,
closing_fee_limits: None,
target_closing_feerate_sats_per_kw,
let secp_ctx = Secp256k1::new();
let node_id = PublicKey::from_secret_key(&secp_ctx, &SecretKey::from_slice(&[42; 32]).unwrap());
let config = UserConfig::default();
- match OutboundV1Channel::<&TestKeysInterface>::new(&LowerBoundedFeeEstimator::new(&TestFeeEstimator { fee_est: 253 }), &&keys_provider, &&keys_provider, node_id, &features, 10000000, 100000, 42, &config, 0, 42) {
+ match OutboundV1Channel::<&TestKeysInterface>::new(&LowerBoundedFeeEstimator::new(&TestFeeEstimator { fee_est: 253 }), &&keys_provider, &&keys_provider, node_id, &features, 10000000, 100000, 42, &config, 0, 42, None) {
Err(APIError::IncompatibleShutdownScript { script }) => {
assert_eq!(script.into_inner(), non_v0_segwit_shutdown_script.into_inner());
},
let node_a_node_id = PublicKey::from_secret_key(&secp_ctx, &SecretKey::from_slice(&[42; 32]).unwrap());
let config = UserConfig::default();
- let node_a_chan = OutboundV1Channel::<&TestKeysInterface>::new(&bounded_fee_estimator, &&keys_provider, &&keys_provider, node_a_node_id, &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42).unwrap();
+ let node_a_chan = OutboundV1Channel::<&TestKeysInterface>::new(&bounded_fee_estimator, &&keys_provider, &&keys_provider, node_a_node_id, &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42, None).unwrap();
// Now change the fee so we can check that the fee in the open_channel message is the
// same as the old fee.
// Create Node A's channel pointing to Node B's pubkey
let node_b_node_id = PublicKey::from_secret_key(&secp_ctx, &SecretKey::from_slice(&[42; 32]).unwrap());
let config = UserConfig::default();
- let mut node_a_chan = OutboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, node_b_node_id, &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42).unwrap();
+ let mut node_a_chan = OutboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, node_b_node_id, &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42, None).unwrap();
// Create Node B's channel by receiving Node A's open_channel message
// Make sure A's dust limit is as we expect.
}]};
let funding_outpoint = OutPoint{ txid: tx.txid(), index: 0 };
let (mut node_a_chan, funding_created_msg) = node_a_chan.get_funding_created(tx.clone(), funding_outpoint, false, &&logger).map_err(|_| ()).unwrap();
- let (_, funding_signed_msg, _) = node_b_chan.funding_created(&funding_created_msg, best_block, &&keys_provider, &&logger).map_err(|_| ()).unwrap();
+ let (_, funding_signed_msg, _) = node_b_chan.funding_created(&funding_created_msg.unwrap(), best_block, &&keys_provider, &&logger).map_err(|_| ()).unwrap();
// Node B --> Node A: funding signed
- let _ = node_a_chan.funding_signed(&funding_signed_msg, best_block, &&keys_provider, &&logger).unwrap();
+ let _ = node_a_chan.funding_signed(&funding_signed_msg.unwrap(), best_block, &&keys_provider, &&logger).unwrap();
// Put some inbound and outbound HTLCs in A's channel.
let htlc_amount_msat = 11_092_000; // put an amount below A's effective dust limit but above B's.
let node_id = PublicKey::from_secret_key(&secp_ctx, &SecretKey::from_slice(&[42; 32]).unwrap());
let config = UserConfig::default();
- let mut chan = OutboundV1Channel::<&TestKeysInterface>::new(&fee_est, &&keys_provider, &&keys_provider, node_id, &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42).unwrap();
+ let mut chan = OutboundV1Channel::<&TestKeysInterface>::new(&fee_est, &&keys_provider, &&keys_provider, node_id, &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42, None).unwrap();
let commitment_tx_fee_0_htlcs = commit_tx_fee_msat(chan.context.feerate_per_kw, 0, chan.context.get_channel_type());
let commitment_tx_fee_1_htlc = commit_tx_fee_msat(chan.context.feerate_per_kw, 1, chan.context.get_channel_type());
// Create Node A's channel pointing to Node B's pubkey
let node_b_node_id = PublicKey::from_secret_key(&secp_ctx, &SecretKey::from_slice(&[42; 32]).unwrap());
let config = UserConfig::default();
- let mut node_a_chan = OutboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, node_b_node_id, &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42).unwrap();
+ let mut node_a_chan = OutboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, node_b_node_id, &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42, None).unwrap();
// Create Node B's channel by receiving Node A's open_channel message
let open_channel_msg = node_a_chan.get_open_channel(chain_hash);
}]};
let funding_outpoint = OutPoint{ txid: tx.txid(), index: 0 };
let (mut node_a_chan, funding_created_msg) = node_a_chan.get_funding_created(tx.clone(), funding_outpoint, false, &&logger).map_err(|_| ()).unwrap();
- let (mut node_b_chan, funding_signed_msg, _) = node_b_chan.funding_created(&funding_created_msg, best_block, &&keys_provider, &&logger).map_err(|_| ()).unwrap();
+ let (mut node_b_chan, funding_signed_msg, _) = node_b_chan.funding_created(&funding_created_msg.unwrap(), best_block, &&keys_provider, &&logger).map_err(|_| ()).unwrap();
// Node B --> Node A: funding signed
- let _ = node_a_chan.funding_signed(&funding_signed_msg, best_block, &&keys_provider, &&logger).unwrap();
+ let _ = node_a_chan.funding_signed(&funding_signed_msg.unwrap(), best_block, &&keys_provider, &&logger).unwrap();
// Now disconnect the two nodes and check that the commitment point in
// Node B's channel_reestablish message is sane.
// Test that `OutboundV1Channel::new` creates a channel with the correct value for
// `holder_max_htlc_value_in_flight_msat`, when configured with a valid percentage value,
// which is set to the lower bound + 1 (2%) of the `channel_value`.
- let chan_1 = OutboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, outbound_node_id, &channelmanager::provided_init_features(&config_2_percent), 10000000, 100000, 42, &config_2_percent, 0, 42).unwrap();
+ let chan_1 = OutboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, outbound_node_id, &channelmanager::provided_init_features(&config_2_percent), 10000000, 100000, 42, &config_2_percent, 0, 42, None).unwrap();
let chan_1_value_msat = chan_1.context.channel_value_satoshis * 1000;
assert_eq!(chan_1.context.holder_max_htlc_value_in_flight_msat, (chan_1_value_msat as f64 * 0.02) as u64);
// Test with the upper bound - 1 of valid values (99%).
- let chan_2 = OutboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, outbound_node_id, &channelmanager::provided_init_features(&config_99_percent), 10000000, 100000, 42, &config_99_percent, 0, 42).unwrap();
+ let chan_2 = OutboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, outbound_node_id, &channelmanager::provided_init_features(&config_99_percent), 10000000, 100000, 42, &config_99_percent, 0, 42, None).unwrap();
let chan_2_value_msat = chan_2.context.channel_value_satoshis * 1000;
assert_eq!(chan_2.context.holder_max_htlc_value_in_flight_msat, (chan_2_value_msat as f64 * 0.99) as u64);
// Test that `OutboundV1Channel::new` uses the lower bound of the configurable percentage values (1%)
// if `max_inbound_htlc_value_in_flight_percent_of_channel` is set to a value less than 1.
- let chan_5 = OutboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, outbound_node_id, &channelmanager::provided_init_features(&config_0_percent), 10000000, 100000, 42, &config_0_percent, 0, 42).unwrap();
+ let chan_5 = OutboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, outbound_node_id, &channelmanager::provided_init_features(&config_0_percent), 10000000, 100000, 42, &config_0_percent, 0, 42, None).unwrap();
let chan_5_value_msat = chan_5.context.channel_value_satoshis * 1000;
assert_eq!(chan_5.context.holder_max_htlc_value_in_flight_msat, (chan_5_value_msat as f64 * 0.01) as u64);
// Test that `OutboundV1Channel::new` uses the upper bound of the configurable percentage values
// (100%) if `max_inbound_htlc_value_in_flight_percent_of_channel` is set to a larger value
// than 100.
- let chan_6 = OutboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, outbound_node_id, &channelmanager::provided_init_features(&config_101_percent), 10000000, 100000, 42, &config_101_percent, 0, 42).unwrap();
+ let chan_6 = OutboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, outbound_node_id, &channelmanager::provided_init_features(&config_101_percent), 10000000, 100000, 42, &config_101_percent, 0, 42, None).unwrap();
let chan_6_value_msat = chan_6.context.channel_value_satoshis * 1000;
assert_eq!(chan_6.context.holder_max_htlc_value_in_flight_msat, chan_6_value_msat);
let mut outbound_node_config = UserConfig::default();
outbound_node_config.channel_handshake_config.their_channel_reserve_proportional_millionths = (outbound_selected_channel_reserve_perc * 1_000_000.0) as u32;
- let chan = OutboundV1Channel::<&TestKeysInterface>::new(&&fee_est, &&keys_provider, &&keys_provider, outbound_node_id, &channelmanager::provided_init_features(&outbound_node_config), channel_value_satoshis, 100_000, 42, &outbound_node_config, 0, 42).unwrap();
+ let chan = OutboundV1Channel::<&TestKeysInterface>::new(&&fee_est, &&keys_provider, &&keys_provider, outbound_node_id, &channelmanager::provided_init_features(&outbound_node_config), channel_value_satoshis, 100_000, 42, &outbound_node_config, 0, 42, None).unwrap();
let expected_outbound_selected_chan_reserve = cmp::max(MIN_THEIR_CHAN_RESERVE_SATOSHIS, (chan.context.channel_value_satoshis as f64 * outbound_selected_channel_reserve_perc) as u64);
assert_eq!(chan.context.holder_selected_channel_reserve_satoshis, expected_outbound_selected_chan_reserve);
// Create Node A's channel pointing to Node B's pubkey
let node_b_node_id = PublicKey::from_secret_key(&secp_ctx, &SecretKey::from_slice(&[42; 32]).unwrap());
let config = UserConfig::default();
- let mut node_a_chan = OutboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, node_b_node_id, &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42).unwrap();
+ let mut node_a_chan = OutboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider, node_b_node_id, &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42, None).unwrap();
// Create Node B's channel by receiving Node A's open_channel message
// Make sure A's dust limit is as we expect.
}]};
let funding_outpoint = OutPoint{ txid: tx.txid(), index: 0 };
let (mut node_a_chan, funding_created_msg) = node_a_chan.get_funding_created(tx.clone(), funding_outpoint, false, &&logger).map_err(|_| ()).unwrap();
- let (_, funding_signed_msg, _) = node_b_chan.funding_created(&funding_created_msg, best_block, &&keys_provider, &&logger).map_err(|_| ()).unwrap();
+ let (_, funding_signed_msg, _) = node_b_chan.funding_created(&funding_created_msg.unwrap(), best_block, &&keys_provider, &&logger).map_err(|_| ()).unwrap();
// Node B --> Node A: funding signed
- let _ = node_a_chan.funding_signed(&funding_signed_msg, best_block, &&keys_provider, &&logger).unwrap();
+ let _ = node_a_chan.funding_signed(&funding_signed_msg.unwrap(), best_block, &&keys_provider, &&logger).unwrap();
// Make sure that receiving a channel update will update the Channel as expected.
let update = ChannelUpdate {
let counterparty_node_id = PublicKey::from_secret_key(&secp_ctx, &SecretKey::from_slice(&[42; 32]).unwrap());
let mut config = UserConfig::default();
config.channel_handshake_config.announced_channel = false;
- let mut chan = OutboundV1Channel::<&Keys>::new(&LowerBoundedFeeEstimator::new(&feeest), &&keys_provider, &&keys_provider, counterparty_node_id, &channelmanager::provided_init_features(&config), 10_000_000, 0, 42, &config, 0, 42).unwrap(); // Nothing uses their network key in this test
+ let mut chan = OutboundV1Channel::<&Keys>::new(&LowerBoundedFeeEstimator::new(&feeest), &&keys_provider, &&keys_provider, counterparty_node_id, &channelmanager::provided_init_features(&config), 10_000_000, 0, 42, &config, 0, 42, None).unwrap(); // Nothing uses their network key in this test
chan.context.holder_dust_limit_satoshis = 546;
chan.context.counterparty_selected_channel_reserve_satoshis = Some(0); // Filled in in accept_channel
let node_b_node_id = PublicKey::from_secret_key(&secp_ctx, &SecretKey::from_slice(&[42; 32]).unwrap());
let config = UserConfig::default();
let node_a_chan = OutboundV1Channel::<&TestKeysInterface>::new(&feeest, &&keys_provider, &&keys_provider,
- node_b_node_id, &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42).unwrap();
+ node_b_node_id, &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42, None).unwrap();
let mut channel_type_features = ChannelTypeFeatures::only_static_remote_key();
channel_type_features.set_zero_conf_required();
let channel_a = OutboundV1Channel::<&TestKeysInterface>::new(
&fee_estimator, &&keys_provider, &&keys_provider, node_id_b,
&channelmanager::provided_init_features(&UserConfig::default()), 10000000, 100000, 42,
- &config, 0, 42
+ &config, 0, 42, None
).unwrap();
assert!(!channel_a.context.channel_type.supports_anchors_zero_fee_htlc_tx());
let channel_a = OutboundV1Channel::<&TestKeysInterface>::new(
&fee_estimator, &&keys_provider, &&keys_provider, node_id_b,
- &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42
+ &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42,
+ None
).unwrap();
let open_channel_msg = channel_a.get_open_channel(ChainHash::using_genesis_block(network));
let channel_a = OutboundV1Channel::<&TestKeysInterface>::new(
&fee_estimator, &&keys_provider, &&keys_provider, node_id_b,
- &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42
+ &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42,
+ None
).unwrap();
// Set `channel_type` to `None` to force the implicit feature negotiation.
// B as it's not supported by LDK.
let channel_a = OutboundV1Channel::<&TestKeysInterface>::new(
&fee_estimator, &&keys_provider, &&keys_provider, node_id_b,
- &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42
+ &channelmanager::provided_init_features(&config), 10000000, 100000, 42, &config, 0, 42,
+ None
).unwrap();
let mut open_channel_msg = channel_a.get_open_channel(ChainHash::using_genesis_block(network));
// LDK.
let mut channel_a = OutboundV1Channel::<&TestKeysInterface>::new(
&fee_estimator, &&keys_provider, &&keys_provider, node_id_b, &simple_anchors_init,
- 10000000, 100000, 42, &config, 0, 42
+ 10000000, 100000, 42, &config, 0, 42, None
).unwrap();
let open_channel_msg = channel_a.get_open_channel(ChainHash::using_genesis_block(network));
&config,
0,
42,
+ None
).unwrap();
let open_channel_msg = node_a_chan.get_open_channel(ChainHash::using_genesis_block(network));
&&logger,
).map_err(|_| ()).unwrap();
let (mut node_b_chan, funding_signed_msg, _) = node_b_chan.funding_created(
- &funding_created_msg,
+ &funding_created_msg.unwrap(),
best_block,
&&keys_provider,
&&logger,
// Receive funding_signed, but the channel will be configured to hold sending channel_ready and
// broadcasting the funding transaction until the batch is ready.
let _ = node_a_chan.funding_signed(
- &funding_signed_msg,
+ &funding_signed_msg.unwrap(),
best_block,
&&keys_provider,
&&logger,
user_channel_id: val.prev_hop.user_channel_id.unwrap_or(0),
cltv_expiry: val.cltv_expiry,
value_msat: val.value,
+ counterparty_skimmed_fee_msat: val.counterparty_skimmed_fee_msat.unwrap_or(0),
}
}
}
/// connection is available, the outbound `open_channel` message may fail to send, resulting in
/// the channel eventually being silently forgotten (dropped on reload).
///
+ /// If `temporary_channel_id` is specified, it will be used as the temporary channel ID of the
+ /// channel. Otherwise, a random one will be generated for you.
+ ///
/// Returns the new Channel's temporary `channel_id`. This ID will appear as
/// [`Event::FundingGenerationReady::temporary_channel_id`] and in
/// [`ChannelDetails::channel_id`] until after
/// [`Event::FundingGenerationReady::user_channel_id`]: events::Event::FundingGenerationReady::user_channel_id
/// [`Event::FundingGenerationReady::temporary_channel_id`]: events::Event::FundingGenerationReady::temporary_channel_id
/// [`Event::ChannelClosed::channel_id`]: events::Event::ChannelClosed::channel_id
- pub fn create_channel(&self, their_network_key: PublicKey, channel_value_satoshis: u64, push_msat: u64, user_channel_id: u128, override_config: Option<UserConfig>) -> Result<ChannelId, APIError> {
+ pub fn create_channel(&self, their_network_key: PublicKey, channel_value_satoshis: u64, push_msat: u64, user_channel_id: u128, temporary_channel_id: Option<ChannelId>, override_config: Option<UserConfig>) -> Result<ChannelId, APIError> {
if channel_value_satoshis < 1000 {
return Err(APIError::APIMisuseError { err: format!("Channel value must be at least 1000 satoshis. It was {}", channel_value_satoshis) });
}
.ok_or_else(|| APIError::APIMisuseError{ err: format!("Not connected to node: {}", their_network_key) })?;
let mut peer_state = peer_state_mutex.lock().unwrap();
+
+ if let Some(temporary_channel_id) = temporary_channel_id {
+ if peer_state.channel_by_id.contains_key(&temporary_channel_id) {
+ return Err(APIError::APIMisuseError{ err: format!("Channel with temporary channel ID {} already exists!", temporary_channel_id)});
+ }
+ }
+
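The collision check added above can be sketched independently of LDK: a caller-supplied temporary channel ID is rejected when it already keys an entry in the per-peer channel map, and a random ID is used when the caller passed `None`. The `ChannelId` and `ApiMisuse` types below are simplified stand-ins, not the real LDK types:

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for LDK's ChannelId and APIError::APIMisuseError.
#[derive(PartialEq, Eq, Hash, Clone, Copy, Debug)]
struct ChannelId([u8; 32]);

#[derive(Debug, PartialEq)]
struct ApiMisuse(String);

/// Pick the temporary channel ID for a new outbound channel: honor the
/// caller's choice if it does not collide with an existing channel,
/// otherwise error; fall back to a freshly generated ID on `None`.
fn select_temporary_channel_id(
    channel_by_id: &HashMap<ChannelId, ()>,
    requested: Option<ChannelId>,
    random: ChannelId,
) -> Result<ChannelId, ApiMisuse> {
    match requested {
        Some(id) if channel_by_id.contains_key(&id) => Err(ApiMisuse(
            format!("Channel with temporary channel ID {:?} already exists!", id),
        )),
        Some(id) => Ok(id),
        None => Ok(random),
    }
}
```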
let channel = {
let outbound_scid_alias = self.create_and_insert_outbound_scid_alias();
let their_features = &peer_state.latest_features;
let config = if override_config.is_some() { override_config.as_ref().unwrap() } else { &self.default_configuration };
match OutboundV1Channel::new(&self.fee_estimator, &self.entropy_source, &self.signer_provider, their_network_key,
their_features, channel_value_satoshis, push_msat, user_channel_id, config,
- self.best_block.read().unwrap().height(), outbound_scid_alias)
+ self.best_block.read().unwrap().height(), outbound_scid_alias, temporary_channel_id)
{
Ok(res) => res,
Err(e) => {
let prng_seed = self.entropy_source.get_secure_random_bytes();
let session_priv = SecretKey::from_slice(&session_priv_bytes[..]).expect("RNG is busted");
- let onion_keys = onion_utils::construct_onion_keys(&self.secp_ctx, &path, &session_priv)
- .map_err(|_| APIError::InvalidRoute{err: "Pubkey along hop was maliciously selected".to_owned()})?;
- let (onion_payloads, htlc_msat, htlc_cltv) = onion_utils::build_onion_payloads(path, total_value, recipient_onion, cur_height, keysend_preimage)?;
-
- let onion_packet = onion_utils::construct_onion_packet(onion_payloads, onion_keys, prng_seed, payment_hash)
- .map_err(|_| APIError::InvalidRoute { err: "Route size too large considering onion data".to_owned()})?;
+ let (onion_packet, htlc_msat, htlc_cltv) = onion_utils::create_payment_onion(
+ &self.secp_ctx, &path, &session_priv, total_value, recipient_onion, cur_height,
+ payment_hash, keysend_preimage, prng_seed
+ )?;
let err: Result<(), _> = loop {
let (counterparty_node_id, id) = match self.short_to_chan_info.read().unwrap().get(&path.hops.first().unwrap().short_channel_id) {
let mut peer_state_lock = peer_state_mutex.lock().unwrap();
let peer_state = &mut *peer_state_lock;
- let (chan, msg) = match peer_state.channel_by_id.remove(temporary_channel_id) {
+ let (chan, msg_opt) = match peer_state.channel_by_id.remove(temporary_channel_id) {
Some(ChannelPhase::UnfundedOutboundV1(chan)) => {
let funding_txo = find_funding_output(&chan, &funding_transaction)?;
}),
};
- peer_state.pending_msg_events.push(events::MessageSendEvent::SendFundingCreated {
- node_id: chan.context.get_counterparty_node_id(),
- msg,
- });
+ if let Some(msg) = msg_opt {
+ peer_state.pending_msg_events.push(events::MessageSendEvent::SendFundingCreated {
+ node_id: chan.context.get_counterparty_node_id(),
+ msg,
+ });
+ }
match peer_state.channel_by_id.entry(chan.context.channel_id()) {
hash_map::Entry::Occupied(_) => {
panic!("Generated duplicate funding txid?");
err: format!("Channel with id {} for the passed counterparty node_id {} is still opening.",
next_hop_channel_id, next_node_id)
}),
- None => return Err(APIError::ChannelUnavailable {
- err: format!("Channel with id {} not found for the passed counterparty node_id {}",
- next_hop_channel_id, next_node_id)
- })
+ None => {
+ let error = format!("Channel with id {} not found for the passed counterparty node_id {}",
+ next_hop_channel_id, next_node_id);
+ log_error!(self.logger, "{} when attempting to forward intercepted HTLC", error);
+ return Err(APIError::ChannelUnavailable {
+ err: error
+ })
+ }
}
};
let mut peer_state_lock = peer_state_mutex.lock().unwrap();
let peer_state = &mut *peer_state_lock;
- let (chan, funding_msg, monitor) =
+ let (chan, funding_msg_opt, monitor) =
match peer_state.channel_by_id.remove(&msg.temporary_channel_id) {
Some(ChannelPhase::UnfundedInboundV1(inbound_chan)) => {
match inbound_chan.funding_created(msg, best_block, &self.signer_provider, &self.logger) {
None => return Err(MsgHandleErrInternal::send_err_msg_no_close(format!("Got a message for a channel from the wrong node! No such channel for the passed counterparty_node_id {}", counterparty_node_id), msg.temporary_channel_id))
};
- match peer_state.channel_by_id.entry(funding_msg.channel_id) {
+ match peer_state.channel_by_id.entry(chan.context.channel_id()) {
hash_map::Entry::Occupied(_) => {
- Err(MsgHandleErrInternal::send_err_msg_no_close("Already had channel with the new channel_id".to_owned(), funding_msg.channel_id))
+ Err(MsgHandleErrInternal::send_err_msg_no_close(
+ "Already had channel with the new channel_id".to_owned(),
+ chan.context.channel_id()
+ ))
},
hash_map::Entry::Vacant(e) => {
let mut id_to_peer_lock = self.id_to_peer.lock().unwrap();
hash_map::Entry::Occupied(_) => {
return Err(MsgHandleErrInternal::send_err_msg_no_close(
"The funding_created message had the same funding_txid as an existing channel - funding is not possible".to_owned(),
- funding_msg.channel_id))
+ chan.context.channel_id()))
},
hash_map::Entry::Vacant(i_e) => {
let monitor_res = self.chain_monitor.watch_channel(monitor.get_funding_txo().0, monitor);
// hasn't persisted to disk yet - we can't lose money on a transaction that we haven't
// accepted payment from yet. We do, however, need to wait to send our channel_ready
// until we have persisted our monitor.
- peer_state.pending_msg_events.push(events::MessageSendEvent::SendFundingSigned {
- node_id: counterparty_node_id.clone(),
- msg: funding_msg,
- });
+ if let Some(msg) = funding_msg_opt {
+ peer_state.pending_msg_events.push(events::MessageSendEvent::SendFundingSigned {
+ node_id: counterparty_node_id.clone(),
+ msg,
+ });
+ }
if let ChannelPhase::Funded(chan) = e.insert(ChannelPhase::Funded(chan)) {
handle_new_monitor_update!(self, persist_state, peer_state_lock, peer_state,
Ok(())
} else {
log_error!(self.logger, "Persisting initial ChannelMonitor failed, implying the funding outpoint was duplicated");
+ let channel_id = match funding_msg_opt {
+ Some(msg) => msg.channel_id,
+ None => chan.context.channel_id(),
+ };
return Err(MsgHandleErrInternal::send_err_msg_no_close(
"The funding_created message had the same funding_txid as an existing channel - funding is not possible".to_owned(),
- funding_msg.channel_id));
+ channel_id));
}
}
}
has_update
}
+ /// When a call to a [`ChannelSigner`] method returns an error, this indicates that the signer
+ /// is (temporarily) unavailable, and the operation should be retried later.
+ ///
+ /// This method performs that retry, checking for signer-pending messages either in every
+ /// channel or only in the specifically provided channel.
+ ///
+ /// [`ChannelSigner`]: crate::sign::ChannelSigner
+ #[cfg(test)] // This is only implemented for one signer method, and should be private until we
+ // actually finish implementing it fully.
+ pub fn signer_unblocked(&self, channel_opt: Option<(PublicKey, ChannelId)>) {
+ let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self);
+
+ let unblock_chan = |phase: &mut ChannelPhase<SP>, pending_msg_events: &mut Vec<MessageSendEvent>| {
+ let node_id = phase.context().get_counterparty_node_id();
+ if let ChannelPhase::Funded(chan) = phase {
+ let msgs = chan.signer_maybe_unblocked(&self.logger);
+ if let Some(updates) = msgs.commitment_update {
+ pending_msg_events.push(events::MessageSendEvent::UpdateHTLCs {
+ node_id,
+ updates,
+ });
+ }
+ if let Some(msg) = msgs.funding_signed {
+ pending_msg_events.push(events::MessageSendEvent::SendFundingSigned {
+ node_id,
+ msg,
+ });
+ }
+ if let Some(msg) = msgs.funding_created {
+ pending_msg_events.push(events::MessageSendEvent::SendFundingCreated {
+ node_id,
+ msg,
+ });
+ }
+ if let Some(msg) = msgs.channel_ready {
+ send_channel_ready!(self, pending_msg_events, chan, msg);
+ }
+ }
+ };
+
+ let per_peer_state = self.per_peer_state.read().unwrap();
+ if let Some((counterparty_node_id, channel_id)) = channel_opt {
+ if let Some(peer_state_mutex) = per_peer_state.get(&counterparty_node_id) {
+ let mut peer_state_lock = peer_state_mutex.lock().unwrap();
+ let peer_state = &mut *peer_state_lock;
+ if let Some(chan) = peer_state.channel_by_id.get_mut(&channel_id) {
+ unblock_chan(chan, &mut peer_state.pending_msg_events);
+ }
+ }
+ } else {
+ for (_cp_id, peer_state_mutex) in per_peer_state.iter() {
+ let mut peer_state_lock = peer_state_mutex.lock().unwrap();
+ let peer_state = &mut *peer_state_lock;
+ for (_, chan) in peer_state.channel_by_id.iter_mut() {
+ unblock_chan(chan, &mut peer_state.pending_msg_events);
+ }
+ }
+ }
+ }
+
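The retry flow above can be modeled in miniature: a channel records which messages are blocked on the signer (mirroring the `signer_pending_commitment_update` and `signer_pending_funding` flags added earlier in this diff) and surfaces them the next time the signer is unblocked. This is a simplified sketch with stand-in types, not LDK's actual `signer_maybe_unblocked`:

```rust
// Minimal model of the signer-unblocked retry: a channel tracks which
// messages are pending on the signer and emits them once it is available.
#[derive(Default)]
struct Channel {
    signer_pending_commitment_update: bool,
    signer_pending_funding: bool,
}

#[derive(Debug, PartialEq)]
enum Msg { CommitmentUpdate, FundingSigned }

impl Channel {
    /// Called when the signer becomes available again; returns any messages
    /// that were blocked on it, clearing the pending flags so that a second
    /// call produces nothing.
    fn signer_maybe_unblocked(&mut self) -> Vec<Msg> {
        let mut msgs = Vec::new();
        if std::mem::take(&mut self.signer_pending_commitment_update) {
            msgs.push(Msg::CommitmentUpdate);
        }
        if std::mem::take(&mut self.signer_pending_funding) {
            msgs.push(Msg::FundingSigned);
        }
        msgs
    }
}
```

As in the real code, the pending flags are cleared as part of generating the messages, so the retry is idempotent: calling it again while the signer stays available is a no-op.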
/// Check whether any channels have finished removing all pending updates after a shutdown
/// exchange and can now send a closing_signed.
/// Returns whether any closing_signed messages were generated.
});
}
+ fn handle_stfu(&self, counterparty_node_id: &PublicKey, msg: &msgs::Stfu) {
+ let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close(
+ "Quiescence not supported".to_owned(),
+ msg.channel_id.clone())), *counterparty_node_id);
+ }
+
+ fn handle_splice(&self, counterparty_node_id: &PublicKey, msg: &msgs::Splice) {
+ let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close(
+ "Splicing not supported".to_owned(),
+ msg.channel_id.clone())), *counterparty_node_id);
+ }
+
+ fn handle_splice_ack(&self, counterparty_node_id: &PublicKey, msg: &msgs::SpliceAck) {
+ let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close(
+ "Splicing not supported (splice_ack)".to_owned(),
+ msg.channel_id.clone())), *counterparty_node_id);
+ }
+
+ fn handle_splice_locked(&self, counterparty_node_id: &PublicKey, msg: &msgs::SpliceLocked) {
+ let _: Result<(), _> = handle_error!(self, Err(MsgHandleErrInternal::send_err_msg_no_close(
+ "Splicing not supported (splice_locked)".to_owned(),
+ msg.channel_id.clone())), *counterparty_node_id);
+ }
+
fn handle_shutdown(&self, counterparty_node_id: &PublicKey, msg: &msgs::Shutdown) {
let _persistence_guard = PersistenceNotifierGuard::notify_on_drop(self);
let _ = handle_error!(self, self.internal_shutdown(counterparty_node_id, msg), *counterparty_node_id);
// Common Channel Establishment
&events::MessageSendEvent::SendChannelReady { .. } => false,
&events::MessageSendEvent::SendAnnouncementSignatures { .. } => false,
+ // Quiescence
+ &events::MessageSendEvent::SendStfu { .. } => false,
+ // Splicing
+ &events::MessageSendEvent::SendSplice { .. } => false,
+ &events::MessageSendEvent::SendSpliceAck { .. } => false,
+ &events::MessageSendEvent::SendSpliceLocked { .. } => false,
// Interactive Transaction Construction
&events::MessageSendEvent::SendTxAddInput { .. } => false,
&events::MessageSendEvent::SendTxAddOutput { .. } => false,
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 1_000_000, 500_000_000, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 1_000_000, 500_000_000, 42, None, None).unwrap();
let open_channel = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_channel);
let accept_channel = get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id());
let intercept_id = InterceptId([0; 32]);
// Test the API functions.
- check_not_connected_to_peer_error(nodes[0].node.create_channel(unkown_public_key, 1_000_000, 500_000_000, 42, None), unkown_public_key);
+ check_not_connected_to_peer_error(nodes[0].node.create_channel(unkown_public_key, 1_000_000, 500_000_000, 42, None, None), unkown_public_key);
check_unkown_peer_error(nodes[0].node.accept_inbound_channel(&channel_id, &unkown_public_key, 42), unkown_public_key);
// Note that create_network connects the nodes together for us
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None, None).unwrap();
let mut open_channel_msg = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
let mut funding_tx = None;
open_channel_msg.temporary_channel_id);
// Of course, however, outbound channels are always allowed
- nodes[1].node.create_channel(last_random_pk, 100_000, 0, 42, None).unwrap();
+ nodes[1].node.create_channel(last_random_pk, 100_000, 0, 42, None, None).unwrap();
get_event_msg!(nodes[1], MessageSendEvent::SendOpenChannel, last_random_pk);
// If we fund the first channel, nodes[0] has a live on-chain channel with us, it is now
// Note that create_network connects the nodes together for us
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None, None).unwrap();
let mut open_channel_msg = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
for _ in 0..super::MAX_UNFUNDED_CHANS_PER_PEER {
open_channel_msg.temporary_channel_id);
// but we can still open an outbound channel.
- nodes[1].node.create_channel(nodes[0].node.get_our_node_id(), 100_000, 0, 42, None).unwrap();
+ nodes[1].node.create_channel(nodes[0].node.get_our_node_id(), 100_000, 0, 42, None, None).unwrap();
get_event_msg!(nodes[1], MessageSendEvent::SendOpenChannel, nodes[0].node.get_our_node_id());
// but even with such an outbound channel, additional inbound channels will still fail.
// Note that create_network connects the nodes together for us
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None, None).unwrap();
let mut open_channel_msg = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
// First, get us up to MAX_UNFUNDED_CHANNEL_PEERS so we can test at the edge
&[Some(anchors_cfg.clone()), Some(anchors_cfg.clone()), Some(anchors_manual_accept_cfg.clone())]);
let nodes = create_network(3, &node_cfgs, &node_chanmgrs);
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None, None).unwrap();
let open_channel_msg = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_channel_msg);
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[Some(anchors_config.clone()), Some(anchors_config.clone())]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 0, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 0, None, None).unwrap();
let open_channel_msg = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
assert!(open_channel_msg.channel_type.as_ref().unwrap().supports_anchors_zero_fee_htlc_tx());
node_b.peer_connected(&node_a.get_our_node_id(), &Init {
features: node_a.init_features(), networks: None, remote_network_address: None
}, false).unwrap();
- node_a.create_channel(node_b.get_our_node_id(), 8_000_000, 100_000_000, 42, None).unwrap();
+ node_a.create_channel(node_b.get_our_node_id(), 8_000_000, 100_000_000, 42, None, None).unwrap();
node_b.handle_open_channel(&node_a.get_our_node_id(), &get_event_msg!(node_a_holder, MessageSendEvent::SendOpenChannel, node_b.get_our_node_id()));
node_a.handle_accept_channel(&node_b.get_our_node_id(), &get_event_msg!(node_b_holder, MessageSendEvent::SendAcceptChannel, node_a.get_our_node_id()));
use crate::util::errors::APIError;
use crate::util::config::{UserConfig, MaxDustHTLCExposure};
use crate::util::ser::{ReadableArgs, Writeable};
+#[cfg(test)]
+use crate::util::logger::Logger;
use bitcoin::blockdata::block::{Block, BlockHeader};
use bitcoin::blockdata::transaction::{Transaction, TxOut};
pub fn get_block_header(&self, height: u32) -> BlockHeader {
self.blocks.lock().unwrap()[height as usize].0.header
}
+ /// Changes the channel signer's availability for the specified peer and channel.
+ ///
+ /// When `available` is set to `true`, the channel signer will behave normally. When set to
+ /// `false`, the channel signer will act like an off-line remote signer and will return `Err` for
+ /// several of the signing methods. Currently, only `get_per_commitment_point` and
+ /// `release_commitment_secret` are affected by this setting.
+ #[cfg(test)]
+ pub fn set_channel_signer_available(&self, peer_id: &PublicKey, chan_id: &ChannelId, available: bool) {
+ let per_peer_state = self.node.per_peer_state.read().unwrap();
+ let chan_lock = per_peer_state.get(peer_id).unwrap().lock().unwrap();
+		let signer = match chan_lock.channel_by_id.get(chan_id) {
+			Some(phase) => phase.context().get_signer(),
+			None => panic!("Couldn't find a channel with id {}", chan_id),
+		};
+ log_debug!(self.logger, "Setting channel signer for {} as available={}", chan_id, available);
+ signer.as_ecdsa().unwrap().set_available(available);
+ }
}
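The availability switch documented above can be modeled in isolation. The sketch below is a hypothetical, self-contained analogue — `TestSigner`, its `Result` types, and the stand-in point derivation are invented for illustration and do not match LDK's real `get_per_commitment_point` signature — showing the intended return-`Err`-while-unavailable behavior:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Toy model of a test signer whose signing calls can be switched off,
// mimicking an off-line remote signer.
struct TestSigner {
    available: AtomicBool,
}

impl TestSigner {
    fn set_available(&self, available: bool) {
        self.available.store(available, Ordering::Release);
    }

    // Stand-in for a signing method: fails whenever the signer is
    // marked unavailable, otherwise returns a dummy value.
    fn get_per_commitment_point(&self, idx: u64) -> Result<u64, ()> {
        if !self.available.load(Ordering::Acquire) {
            return Err(());
        }
        Ok(idx.wrapping_mul(31)) // placeholder, not real point derivation
    }
}

fn main() {
    let signer = TestSigner { available: AtomicBool::new(true) };
    assert!(signer.get_per_commitment_point(1).is_ok());
    signer.set_available(false);
    assert!(signer.get_per_commitment_point(1).is_err());
    println!("signer toggled");
}
```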
/// If we need an unsafe pointer to a `Node` (ie to reference it in a thread
MessageSendEvent::SendOpenChannelV2 { node_id, .. } => {
node_id == msg_node_id
},
+ MessageSendEvent::SendStfu { node_id, .. } => {
+ node_id == msg_node_id
+ },
+ MessageSendEvent::SendSplice { node_id, .. } => {
+ node_id == msg_node_id
+ },
+ MessageSendEvent::SendSpliceAck { node_id, .. } => {
+ node_id == msg_node_id
+ },
+ MessageSendEvent::SendSpliceLocked { node_id, .. } => {
+ node_id == msg_node_id
+ },
MessageSendEvent::SendTxAddInput { node_id, .. } => {
node_id == msg_node_id
},
pub fn check_added_monitors<CM: AChannelManager, H: NodeHolder<CM=CM>>(node: &H, count: usize) {
if let Some(chain_monitor) = node.chain_monitor() {
let mut added_monitors = chain_monitor.added_monitors.lock().unwrap();
- assert_eq!(added_monitors.len(), count);
+ let n = added_monitors.len();
+ assert_eq!(n, count, "expected {} monitors to be added, not {}", count, n);
added_monitors.clear();
}
}
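The new assertion binds the monitor count to a local first so the panic message can report both the expected and actual values. A runnable miniature of the same pattern, with a plain `Vec<u32>` standing in for the added-monitors list:

```rust
// Assert-with-message pattern: naming the length before asserting lets
// the failure message show both sides, which reads better in CI logs.
fn check_added(added_monitors: &mut Vec<u32>, count: usize) {
    let n = added_monitors.len();
    assert_eq!(n, count, "expected {} monitors to be added, not {}", count, n);
    added_monitors.clear();
}

fn main() {
    let mut added_monitors = vec![101, 102];
    check_added(&mut added_monitors, 2);
    // The helper clears the list after a successful check.
    assert!(added_monitors.is_empty());
    println!("monitor count ok");
}
```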
let initiator_channels = initiator.node.list_usable_channels().len();
let receiver_channels = receiver.node.list_usable_channels().len();
- initiator.node.create_channel(receiver.node.get_our_node_id(), 100_000, 10_001, 42, initiator_config).unwrap();
+ initiator.node.create_channel(receiver.node.get_our_node_id(), 100_000, 10_001, 42, None, initiator_config).unwrap();
let open_channel = get_event_msg!(initiator, MessageSendEvent::SendOpenChannel, receiver.node.get_our_node_id());
receiver.node.handle_open_channel(&initiator.node.get_our_node_id(), &open_channel);
}
pub fn create_chan_between_nodes_with_value_init<'a, 'b, 'c>(node_a: &Node<'a, 'b, 'c>, node_b: &Node<'a, 'b, 'c>, channel_value: u64, push_msat: u64) -> Transaction {
- let create_chan_id = node_a.node.create_channel(node_b.node.get_our_node_id(), channel_value, push_msat, 42, None).unwrap();
+ let create_chan_id = node_a.node.create_channel(node_b.node.get_our_node_id(), channel_value, push_msat, 42, None, None).unwrap();
let open_channel_msg = get_event_msg!(node_a, MessageSendEvent::SendOpenChannel, node_b.node.get_our_node_id());
assert_eq!(open_channel_msg.temporary_channel_id, create_chan_id);
assert_eq!(node_a.node.list_channels().iter().find(|channel| channel.channel_id == create_chan_id).unwrap().user_channel_id, 42);
pub fn create_unannounced_chan_between_nodes_with_value<'a, 'b, 'c, 'd>(nodes: &'a Vec<Node<'b, 'c, 'd>>, a: usize, b: usize, channel_value: u64, push_msat: u64) -> (msgs::ChannelReady, Transaction) {
let mut no_announce_cfg = test_default_channel_config();
no_announce_cfg.channel_handshake_config.announced_channel = false;
- nodes[a].node.create_channel(nodes[b].node.get_our_node_id(), channel_value, push_msat, 42, Some(no_announce_cfg)).unwrap();
+ nodes[a].node.create_channel(nodes[b].node.get_our_node_id(), channel_value, push_msat, 42, None, Some(no_announce_cfg)).unwrap();
let open_channel = get_event_msg!(nodes[a], MessageSendEvent::SendOpenChannel, nodes[b].node.get_our_node_id());
nodes[b].node.handle_open_channel(&nodes[a].node.get_our_node_id(), &open_channel);
let accept_channel = get_event_msg!(nodes[b], MessageSendEvent::SendAcceptChannel, nodes[a].node.get_our_node_id());
}
#[cfg(any(test, ldk_bench, feature = "_test_utils"))]
-pub fn expect_channel_pending_event<'a, 'b, 'c, 'd>(node: &'a Node<'b, 'c, 'd>, expected_counterparty_node_id: &PublicKey) {
+pub fn expect_channel_pending_event<'a, 'b, 'c, 'd>(node: &'a Node<'b, 'c, 'd>, expected_counterparty_node_id: &PublicKey) -> ChannelId {
let events = node.node.get_and_clear_pending_events();
assert_eq!(events.len(), 1);
- match events[0] {
- crate::events::Event::ChannelPending { ref counterparty_node_id, .. } => {
+ match &events[0] {
+ crate::events::Event::ChannelPending { channel_id, counterparty_node_id, .. } => {
assert_eq!(*expected_counterparty_node_id, *counterparty_node_id);
+ *channel_id
},
_ => panic!("Unexpected event"),
}
}
}
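The change above makes `expect_channel_pending_event` return the pending channel's id by matching on the event by reference and copying out one field. A toy, self-contained version of that pattern — the event enum and field types here are invented for illustration, not LDK's real `Event`:

```rust
#[allow(dead_code)]
enum Event {
    ChannelPending { channel_id: u64, counterparty_node_id: u8 },
    Other,
}

// Matching on `&events[0]` borrows the event; `*channel_id` copies the
// field out so the caller gets the id without cloning the whole event.
fn expect_channel_pending(events: &[Event], expected_counterparty: u8) -> u64 {
    assert_eq!(events.len(), 1);
    match &events[0] {
        Event::ChannelPending { channel_id, counterparty_node_id } => {
            assert_eq!(*counterparty_node_id, expected_counterparty);
            *channel_id
        },
        _ => panic!("Unexpected event"),
    }
}

fn main() {
    let events = vec![Event::ChannelPending { channel_id: 42, counterparty_node_id: 7 }];
    assert_eq!(expect_channel_pending(&events, 7), 42);
    println!("channel_id = 42");
}
```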
+#[cfg(any(test, feature = "_test_utils"))]
+pub fn expect_probe_successful_events(node: &Node, mut probe_results: Vec<(PaymentHash, PaymentId)>) {
+ let mut events = node.node.get_and_clear_pending_events();
+
+ for event in events.drain(..) {
+ match event {
+ Event::ProbeSuccessful { payment_hash: ev_ph, payment_id: ev_pid, ..} => {
+ let result_idx = probe_results.iter().position(|(payment_hash, payment_id)| *payment_hash == ev_ph && *payment_id == ev_pid);
+ assert!(result_idx.is_some());
+
+ probe_results.remove(result_idx.unwrap());
+ },
+			_ => panic!("Unexpected event"),
+ }
+	}
+
+ // Ensure that we received a ProbeSuccessful event for each probe result.
+ assert!(probe_results.is_empty());
+}
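The helper above removes each expected result as its matching event arrives, then asserts the list ends empty so every probe is matched exactly once. The same bookkeeping, stripped of LDK types (plain tuples stand in for `(PaymentHash, PaymentId)`):

```rust
fn main() {
    // Expected (hash, id) pairs; order need not match the event order.
    let mut expected = vec![("hash_a", 1u32), ("hash_b", 2u32)];
    let events = [("hash_b", 2u32), ("hash_a", 1u32)];

    for (h, id) in events {
        // Find the matching expectation and consume it.
        let idx = expected.iter().position(|(eh, eid)| *eh == h && *eid == id);
        assert!(idx.is_some());
        expected.remove(idx.unwrap());
    }

    // Every expectation was met exactly once.
    assert!(expected.is_empty());
    println!("all probes matched");
}
```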
+
pub struct PaymentFailedConditions<'a> {
pub(crate) expected_htlc_error_data: Option<(u16, &'a [u8])>,
pub(crate) expected_blamed_scid: Option<u64>,
payment_id
}
-pub fn do_pass_along_path<'a, 'b, 'c>(origin_node: &Node<'a, 'b, 'c>, expected_path: &[&Node<'a, 'b, 'c>], recv_value: u64, our_payment_hash: PaymentHash, our_payment_secret: Option<PaymentSecret>, ev: MessageSendEvent, payment_claimable_expected: bool, clear_recipient_events: bool, expected_preimage: Option<PaymentPreimage>) -> Option<Event> {
+fn fail_payment_along_path<'a, 'b, 'c>(expected_path: &[&Node<'a, 'b, 'c>]) {
+ let origin_node_id = expected_path[0].node.get_our_node_id();
+
+	// Iterate from the receiving node back to the origin node, handling each update_fail_htlc.
+ for (&node, &prev_node) in expected_path.iter().rev().zip(expected_path.iter().rev().skip(1)) {
+ let updates = get_htlc_update_msgs!(node, prev_node.node.get_our_node_id());
+ prev_node.node.handle_update_fail_htlc(&node.node.get_our_node_id(), &updates.update_fail_htlcs[0]);
+ check_added_monitors!(prev_node, 0);
+
+ let is_first_hop = origin_node_id == prev_node.node.get_our_node_id();
+ // We do not want to fail backwards on the first hop. All other hops should fail backwards.
+ commitment_signed_dance!(prev_node, node, updates.commitment_signed, !is_first_hop);
+ }
+}
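`fail_payment_along_path` walks `(node, prev_node)` pairs from the receiver back toward the origin by zipping a reversed iterator with a once-skipped copy of itself. A standalone demonstration of that pairing:

```rust
fn main() {
    let path = ["origin", "hop1", "receiver"];

    // Reversed: receiver, hop1, origin. Zipped with itself skipped by
    // one, each element is paired with the node one step closer to the
    // origin.
    let pairs: Vec<(&str, &str)> = path
        .iter()
        .rev()
        .zip(path.iter().rev().skip(1))
        .map(|(node, prev_node)| (*node, *prev_node))
        .collect();

    assert_eq!(pairs, vec![("receiver", "hop1"), ("hop1", "origin")]);
    println!("{:?}", pairs);
}
```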
+
+pub fn do_pass_along_path<'a, 'b, 'c>(origin_node: &Node<'a, 'b, 'c>, expected_path: &[&Node<'a, 'b, 'c>], recv_value: u64, our_payment_hash: PaymentHash, our_payment_secret: Option<PaymentSecret>, ev: MessageSendEvent, payment_claimable_expected: bool, clear_recipient_events: bool, expected_preimage: Option<PaymentPreimage>, is_probe: bool) -> Option<Event> {
let mut payment_event = SendEvent::from_event(ev);
let mut prev_node = origin_node;
let mut event = None;
for (idx, &node) in expected_path.iter().enumerate() {
+ let is_last_hop = idx == expected_path.len() - 1;
assert_eq!(node.node.get_our_node_id(), payment_event.node_id);
node.node.handle_update_add_htlc(&prev_node.node.get_our_node_id(), &payment_event.msgs[0]);
check_added_monitors!(node, 0);
- commitment_signed_dance!(node, prev_node, payment_event.commitment_msg, false);
- expect_pending_htlcs_forwardable!(node);
+ if is_last_hop && is_probe {
+ commitment_signed_dance!(node, prev_node, payment_event.commitment_msg, true, true);
+ } else {
+ commitment_signed_dance!(node, prev_node, payment_event.commitment_msg, false);
+ expect_pending_htlcs_forwardable!(node);
+ }
- if idx == expected_path.len() - 1 && clear_recipient_events {
+ if is_last_hop && clear_recipient_events {
let events_2 = node.node.get_and_clear_pending_events();
if payment_claimable_expected {
assert_eq!(events_2.len(), 1);
} else {
assert!(events_2.is_empty());
}
- } else if idx != expected_path.len() - 1 {
+ } else if !is_last_hop {
let mut events_2 = node.node.get_and_clear_pending_msg_events();
assert_eq!(events_2.len(), 1);
check_added_monitors!(node, 1);
}
pub fn pass_along_path<'a, 'b, 'c>(origin_node: &Node<'a, 'b, 'c>, expected_path: &[&Node<'a, 'b, 'c>], recv_value: u64, our_payment_hash: PaymentHash, our_payment_secret: Option<PaymentSecret>, ev: MessageSendEvent, payment_claimable_expected: bool, expected_preimage: Option<PaymentPreimage>) -> Option<Event> {
- do_pass_along_path(origin_node, expected_path, recv_value, our_payment_hash, our_payment_secret, ev, payment_claimable_expected, true, expected_preimage)
+ do_pass_along_path(origin_node, expected_path, recv_value, our_payment_hash, our_payment_secret, ev, payment_claimable_expected, true, expected_preimage, false)
+}
+
+pub fn send_probe_along_route<'a, 'b, 'c>(origin_node: &Node<'a, 'b, 'c>, expected_route: &[&[&Node<'a, 'b, 'c>]]) {
+ let mut events = origin_node.node.get_and_clear_pending_msg_events();
+ assert_eq!(events.len(), expected_route.len());
+
+ check_added_monitors!(origin_node, expected_route.len());
+
+ for path in expected_route.iter() {
+ let ev = remove_first_msg_event_to_node(&path[0].node.get_our_node_id(), &mut events);
+
+ do_pass_along_path(origin_node, path, 0, PaymentHash([0_u8; 32]), None, ev, false, false, None, true);
+ let nodes_to_fail_payment: Vec<_> = vec![origin_node].into_iter().chain(path.iter().cloned()).collect();
+
+ fail_payment_along_path(nodes_to_fail_payment.as_slice());
+ }
}
pub fn pass_along_route<'a, 'b, 'c>(origin_node: &Node<'a, 'b, 'c>, expected_route: &[&[&Node<'a, 'b, 'c>]], recv_value: u64, our_payment_hash: PaymentHash, our_payment_secret: PaymentSecret) {
let mut events = origin_node.node.get_and_clear_pending_msg_events();
assert_eq!(events.len(), expected_route.len());
+
for (path_idx, expected_path) in expected_route.iter().enumerate() {
let ev = remove_first_msg_event_to_node(&expected_path[0].node.get_our_node_id(), &mut events);
// Once we've gotten through all the HTLCs, the last one should result in a
- // PaymentClaimable (but each previous one should not!), .
+ // PaymentClaimable (but each previous one should not!).
let expect_payment = path_idx == expected_route.len() - 1;
pass_along_path(origin_node, expected_path, recv_value, our_payment_hash.clone(), Some(our_payment_secret), ev, expect_payment, None);
}
// If a expects a channel_ready, it better not think it has received a revoke_and_ack
// from b
for reestablish in reestablish_1.iter() {
- assert_eq!(reestablish.next_remote_commitment_number, 0);
+ let n = reestablish.next_remote_commitment_number;
+ assert_eq!(n, 0, "expected a->b next_remote_commitment_number to be 0, got {}", n);
}
}
if send_channel_ready.1 {
// If b expects a channel_ready, it better not think it has received a revoke_and_ack
// from a
for reestablish in reestablish_2.iter() {
- assert_eq!(reestablish.next_remote_commitment_number, 0);
+ let n = reestablish.next_remote_commitment_number;
+ assert_eq!(n, 0, "expected b->a next_remote_commitment_number to be 0, got {}", n);
}
}
if send_channel_ready.0 || send_channel_ready.1 {
// If we expect any channel_ready's, both sides better have set
// next_holder_commitment_number to 1
for reestablish in reestablish_1.iter() {
- assert_eq!(reestablish.next_local_commitment_number, 1);
+ let n = reestablish.next_local_commitment_number;
+ assert_eq!(n, 1, "expected a->b next_local_commitment_number to be 1, got {}", n);
}
for reestablish in reestablish_2.iter() {
- assert_eq!(reestablish.next_local_commitment_number, 1);
+ let n = reestablish.next_local_commitment_number;
+ assert_eq!(n, 1, "expected b->a next_local_commitment_number to be 1, got {}", n);
}
}
// Initialize channel opening.
let temp_chan_id = funding_node.node.create_channel(
other_node.node.get_our_node_id(), *channel_value_satoshis, *push_msat, *user_channel_id,
+ None,
*override_config,
).unwrap();
let open_channel_msg = get_event_msg!(funding_node, MessageSendEvent::SendOpenChannel, other_node.node.get_our_node_id());
let push_msat = (channel_value_sat - channel_reserve_satoshis) * 1000;
// Have node0 initiate a channel to node1 with aforementioned parameters
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), channel_value_sat, push_msat, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), channel_value_sat, push_msat, 42, None, None).unwrap();
// Extract the channel open message from node0 to node1
let open_channel_message = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- match nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), MAX_FUNDING_SATOSHIS_NO_WUMBO + 1, 0, 42, None) {
+ match nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), MAX_FUNDING_SATOSHIS_NO_WUMBO + 1, 0, 42, None, None) {
Err(APIError::APIMisuseError { err }) => {
assert_eq!(format!("funding_value must not exceed {}, it was {}", MAX_FUNDING_SATOSHIS_NO_WUMBO, MAX_FUNDING_SATOSHIS_NO_WUMBO + 1), err);
},
push_amt -= feerate_per_kw as u64 * (commitment_tx_base_weight(&channel_type_features) + 4 * COMMITMENT_TX_WEIGHT_PER_HTLC) / 1000 * 1000;
push_amt -= get_holder_selected_channel_reserve_satoshis(100_000, &default_config) * 1000;
- let temp_channel_id = nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, if send_from_initiator { 0 } else { push_amt }, 42, None).unwrap();
+ let temp_channel_id = nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, if send_from_initiator { 0 } else { push_amt }, 42, None, None).unwrap();
let mut open_channel_message = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
if !send_from_initiator {
open_channel_message.channel_reserve_satoshis = 0;
}
if steps & 0x0f == 0 { return; }
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, None).unwrap();
let open_channel = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
if steps & 0x0f == 1 { return; }
// HTLC.
let mut push_amt = 100_000_000;
push_amt -= commit_tx_fee_msat(feerate_per_kw, MIN_AFFORDABLE_HTLC_COUNT as u64, &channel_type_features);
- assert_eq!(nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, push_amt + 1, 42, None).unwrap_err(),
+ assert_eq!(nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, push_amt + 1, 42, None, None).unwrap_err(),
APIError::APIMisuseError { err: "Funding amount (356) can't even pay fee for initial commitment transaction fee of 357.".to_string() });
// During open, we don't have a "counterparty channel reserve" to check against, so that
// requirement only comes into play on the open_channel handling side.
push_amt -= get_holder_selected_channel_reserve_satoshis(100_000, &default_config) * 1000;
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, push_amt, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, push_amt, 42, None, None).unwrap();
let mut open_channel_msg = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
open_channel_msg.push_msat += 1;
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_channel_msg);
// Open a channel between `nodes[0]` and `nodes[1]`, for which the funding transaction is never
// broadcasted, even though it's created by `nodes[0]`.
- let expected_temporary_channel_id = nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 1_000_000, 500_000_000, 42, None).unwrap();
+ let expected_temporary_channel_id = nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 1_000_000, 500_000_000, 42, None, None).unwrap();
let open_channel = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_channel);
let accept_channel = get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id());
// BOLT #2 spec: Sending node must ensure temporary_channel_id is unique from any other channel ID with the same peer.
let channel_value_satoshis=10000;
let push_msat=10001;
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), channel_value_satoshis, push_msat, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), channel_value_satoshis, push_msat, 42, None, None).unwrap();
let node0_to_1_send_open_channel = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &node0_to_1_send_open_channel);
get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id());
// Create a second channel with the same random values. This used to panic due to a colliding
// channel_id, but now panics due to a colliding outbound SCID alias.
- assert!(nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), channel_value_satoshis, push_msat, 42, None).is_err());
+ assert!(nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), channel_value_satoshis, push_msat, 42, None, None).is_err());
}
#[test]
// BOLT #2 spec: Sending node must set funding_satoshis to less than 2^24 satoshis
-	let channel_value_satoshis=2^24;
+	let channel_value_satoshis = 1u64 << 24; // `2^24` is XOR in Rust (== 26), not two to the 24th
let push_msat=10001;
- assert!(nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), channel_value_satoshis, push_msat, 42, None).is_err());
+ assert!(nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), channel_value_satoshis, push_msat, 42, None, None).is_err());
// BOLT #2 spec: Sending node must set push_msat to equal or less than 1000 * funding_satoshis
let channel_value_satoshis=10000;
-	// Test when push_msat is equal to 1000 * funding_satoshis.
+	// Test when push_msat is one more than 1000 * funding_satoshis.
let push_msat=1000*channel_value_satoshis+1;
- assert!(nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), channel_value_satoshis, push_msat, 42, None).is_err());
+ assert!(nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), channel_value_satoshis, push_msat, 42, None, None).is_err());
-	// BOLT #2 spec: Sending node must set set channel_reserve_satoshis greater than or equal to dust_limit_satoshis
+	// BOLT #2 spec: Sending node must set channel_reserve_satoshis greater than or equal to dust_limit_satoshis
let channel_value_satoshis=10000;
let push_msat=10001;
- assert!(nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), channel_value_satoshis, push_msat, 42, None).is_ok()); //Create a valid channel
+ assert!(nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), channel_value_satoshis, push_msat, 42, None, None).is_ok()); //Create a valid channel
let node0_to_1_send_open_channel = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
assert!(node0_to_1_send_open_channel.channel_reserve_satoshis>=node0_to_1_send_open_channel.dust_limit_satoshis);
let channel_value_satoshis=1000000;
let push_msat=10001;
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), channel_value_satoshis, push_msat, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), channel_value_satoshis, push_msat, 42, None, None).unwrap();
let mut node0_to_1_send_open_channel = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
node0_to_1_send_open_channel.dust_limit_satoshis = 547;
node0_to_1_send_open_channel.channel_reserve_satoshis = 100001;
// We test config.our_to_self > BREAKDOWN_TIMEOUT is enforced in OutboundV1Channel::new()
if let Err(error) = OutboundV1Channel::new(&LowerBoundedFeeEstimator::new(&test_utils::TestFeeEstimator { sat_per_kw: Mutex::new(253) }),
&nodes[0].keys_manager, &nodes[0].keys_manager, nodes[1].node.get_our_node_id(), &nodes[1].node.init_features(), 1000000, 1000000, 0,
- &low_our_to_self_config, 0, 42)
+ &low_our_to_self_config, 0, 42, None)
{
match error {
APIError::APIMisuseError { err } => { assert!(regex::Regex::new(r"Configured with an unreasonable our_to_self_delay \(\d+\) putting user funds at risks").unwrap().is_match(err.as_str())); },
} else { assert!(false) }
// We test config.our_to_self > BREAKDOWN_TIMEOUT is enforced in InboundV1Channel::new()
- nodes[1].node.create_channel(nodes[0].node.get_our_node_id(), 1000000, 1000000, 42, None).unwrap();
+ nodes[1].node.create_channel(nodes[0].node.get_our_node_id(), 1000000, 1000000, 42, None, None).unwrap();
let mut open_channel = get_event_msg!(nodes[1], MessageSendEvent::SendOpenChannel, nodes[0].node.get_our_node_id());
open_channel.to_self_delay = 200;
if let Err(error) = InboundV1Channel::new(&LowerBoundedFeeEstimator::new(&test_utils::TestFeeEstimator { sat_per_kw: Mutex::new(253) }),
} else { assert!(false); }
-	// We test msg.to_self_delay <= config.their_to_self_delay is enforced in Chanel::accept_channel()
+	// We test msg.to_self_delay <= config.their_to_self_delay is enforced in Channel::accept_channel()
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 1000000, 1000000, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 1000000, 1000000, 42, None, None).unwrap();
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id()));
let mut accept_channel = get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id());
accept_channel.to_self_delay = 200;
check_closed_event!(nodes[0], 1, ClosureReason::ProcessingError { err: reason_msg }, [nodes[1].node.get_our_node_id()], 1000000);
// We test msg.to_self_delay <= config.their_to_self_delay is enforced in InboundV1Channel::new()
- nodes[1].node.create_channel(nodes[0].node.get_our_node_id(), 1000000, 1000000, 42, None).unwrap();
+ nodes[1].node.create_channel(nodes[0].node.get_our_node_id(), 1000000, 1000000, 42, None, None).unwrap();
let mut open_channel = get_event_msg!(nodes[1], MessageSendEvent::SendOpenChannel, nodes[0].node.get_our_node_id());
open_channel.to_self_delay = 200;
if let Err(error) = InboundV1Channel::new(&LowerBoundedFeeEstimator::new(&test_utils::TestFeeEstimator { sat_per_kw: Mutex::new(253) }),
let mut override_config = UserConfig::default();
override_config.channel_handshake_config.our_to_self_delay = 200;
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 16_000_000, 12_000_000, 42, Some(override_config)).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 16_000_000, 12_000_000, 42, None, Some(override_config)).unwrap();
// Assert the channel created by node0 is using the override config.
let res = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, Some(zero_config.clone())]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 16_000_000, 12_000_000, 42, Some(zero_config)).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 16_000_000, 12_000_000, 42, None, Some(zero_config)).unwrap();
let res = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
assert_eq!(res.htlc_minimum_msat, 1);
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, Some(manually_accept_conf.clone())]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- let temp_channel_id = nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, Some(manually_accept_conf)).unwrap();
+ let temp_channel_id = nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, Some(manually_accept_conf)).unwrap();
let res = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &res);
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, Some(manually_accept_conf.clone())]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, Some(manually_accept_conf)).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, Some(manually_accept_conf)).unwrap();
let res = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &res);
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, Some(manually_accept_conf.clone())]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, Some(manually_accept_conf)).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, Some(manually_accept_conf)).unwrap();
let res = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &res);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
// Create an initial channel
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, None).unwrap();
let mut open_chan_msg = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_chan_msg);
let accept_chan_msg = get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id());
let nodes = create_network(3, &node_cfgs, &node_chanmgrs);
-	// Create an first channel channel
+	// Create a first channel
- nodes[1].node.create_channel(nodes[0].node.get_our_node_id(), 100000, 10001, 42, None).unwrap();
+ nodes[1].node.create_channel(nodes[0].node.get_our_node_id(), 100000, 10001, 42, None, None).unwrap();
let mut open_chan_msg_chan_1_0 = get_event_msg!(nodes[1], MessageSendEvent::SendOpenChannel, nodes[0].node.get_our_node_id());
-	// Create an second channel
+	// Create a second channel
- nodes[2].node.create_channel(nodes[0].node.get_our_node_id(), 100000, 10001, 43, None).unwrap();
+ nodes[2].node.create_channel(nodes[0].node.get_our_node_id(), 100000, 10001, 43, None, None).unwrap();
let mut open_chan_msg_chan_2_0 = get_event_msg!(nodes[2], MessageSendEvent::SendOpenChannel, nodes[0].node.get_our_node_id());
// Modify the `OpenChannel` from `nodes[2]` to `nodes[0]` to ensure that it uses the same
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
// Create an initial channel
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, None).unwrap();
let mut open_chan_msg = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_chan_msg);
nodes[0].node.handle_accept_channel(&nodes[1].node.get_our_node_id(), &get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id()));
}
// Now try to create a second channel which has a duplicate funding output.
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, None).unwrap();
let open_chan_2_msg = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_chan_2_msg);
nodes[0].node.handle_accept_channel(&nodes[1].node.get_our_node_id(), &get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id()));
}
};
check_added_monitors!(nodes[0], 0);
- nodes[1].node.handle_funding_created(&nodes[0].node.get_our_node_id(), &funding_created);
+ nodes[1].node.handle_funding_created(&nodes[0].node.get_our_node_id(), &funding_created.unwrap());
// At this point we'll look up if the channel_id is present and immediately fail the channel
// without trying to persist the `ChannelMonitor`.
check_added_monitors!(nodes[1], 0);
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 10_000, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 10_000, 42, None, None).unwrap();
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id()));
nodes[0].node.handle_accept_channel(&nodes[1].node.get_our_node_id(), &get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id()));
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, None).unwrap();
let open_channel = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_channel);
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[Some(config), None]);
let mut nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 1_000_000, 500_000_000, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 1_000_000, 500_000_000, 42, None, None).unwrap();
let mut open_channel = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
open_channel.max_htlc_value_in_flight_msat = 50_000_000;
open_channel.max_accepted_htlcs = 60;
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- let temp_channel_id = nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None).unwrap();
+ let temp_channel_id = nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None, None).unwrap();
let open_channel_message = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_channel_message);
let accept_channel_message = get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id());
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- let temp_channel_id = nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None).unwrap();
+ let temp_channel_id = nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None, None).unwrap();
let open_channel_message = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_channel_message);
let accept_channel_message = get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id());
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- let temp_channel_id = nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None).unwrap();
+ let temp_channel_id = nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None, None).unwrap();
let open_channel_message = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_channel_message);
let accept_channel_message = get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id());
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- let temp_channel_id = nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None).unwrap();
+ let temp_channel_id = nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 0, 42, None, None).unwrap();
let open_channel_message = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_channel_message);
let accept_channel_message = get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id());
mod outbound_payment;
pub mod wire;
+pub use onion_utils::create_payment_onion;
// Older rustc (which we support) refuses to let us call the get_payment_preimage_hash!() macro
// without the node parameter being mut. This is incorrect, and thus newer rustcs will complain
// about an unnecessary mut. Thus, we silence the unused_mut warning in two test modules below.
#[cfg(test)]
#[allow(unused_mut)]
mod shutdown_tests;
+#[cfg(test)]
+#[allow(unused_mut)]
+mod async_signer_tests;
pub use self::peer_channel_encryptor::LN_MAX_MSG_LEN;
#[cfg(taproot)]
/// A partial signature that also contains the Musig2 nonce its signer used
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct PartialSignatureWithNonce(pub musig2::types::PartialSignature, pub musig2::types::PublicNonce);
/// An error in decoding a message or struct.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub enum DecodeError {
/// A version byte specified something we don't know how to handle.
///
/// An [`init`] message to be sent to or received from a peer.
///
/// [`init`]: https://github.com/lightning/bolts/blob/master/01-messaging.md#the-init-message
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct Init {
/// The relevant features which the sender supports.
pub features: InitFeatures,
/// An [`error`] message to be sent to or received from a peer.
///
/// [`error`]: https://github.com/lightning/bolts/blob/master/01-messaging.md#the-error-and-warning-messages
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct ErrorMessage {
/// The channel ID involved in the error.
///
/// A [`warning`] message to be sent to or received from a peer.
///
/// [`warning`]: https://github.com/lightning/bolts/blob/master/01-messaging.md#the-error-and-warning-messages
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct WarningMessage {
/// The channel ID involved in the warning.
///
/// A [`ping`] message to be sent to or received from a peer.
///
/// [`ping`]: https://github.com/lightning/bolts/blob/master/01-messaging.md#the-ping-and-pong-messages
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct Ping {
/// The desired response length.
pub ponglen: u16,
/// A [`pong`] message to be sent to or received from a peer.
///
/// [`pong`]: https://github.com/lightning/bolts/blob/master/01-messaging.md#the-ping-and-pong-messages
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct Pong {
/// The pong packet size.
///
/// Used in V1 channel establishment
///
/// [`open_channel`]: https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#the-open_channel-message
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct OpenChannel {
/// The genesis hash of the blockchain where the channel is to be opened
pub chain_hash: ChainHash,
/// Used in V2 channel establishment
///
// TODO(dual_funding): Add spec link for `open_channel2`.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct OpenChannelV2 {
/// The genesis hash of the blockchain where the channel is to be opened
pub chain_hash: ChainHash,
/// Used in V1 channel establishment
///
/// [`accept_channel`]: https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#the-accept_channel-message
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct AcceptChannel {
/// A temporary channel ID, until the funding outpoint is announced
pub temporary_channel_id: ChannelId,
/// Used in V2 channel establishment
///
// TODO(dual_funding): Add spec link for `accept_channel2`.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct AcceptChannelV2 {
/// The same `temporary_channel_id` received from the initiator's `open_channel2` message.
pub temporary_channel_id: ChannelId,
/// Used in V1 channel establishment
///
/// [`funding_created`]: https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#the-funding_created-message
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct FundingCreated {
/// A temporary channel ID, until the funding is established
pub temporary_channel_id: ChannelId,
/// Used in V1 channel establishment
///
/// [`funding_signed`]: https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#the-funding_signed-message
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct FundingSigned {
/// The channel ID
pub channel_id: ChannelId,
/// A [`channel_ready`] message to be sent to or received from a peer.
///
/// [`channel_ready`]: https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#the-channel_ready-message
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct ChannelReady {
/// The channel ID
pub channel_id: ChannelId,
pub short_channel_id_alias: Option<u64>,
}
+/// An stfu (quiescence) message to be sent by or received from the stfu initiator.
+// TODO(splicing): Add spec link for `stfu`; still in draft, using from https://github.com/lightning/bolts/pull/863
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub struct Stfu {
+ /// The channel ID where quiescence is intended
+ pub channel_id: ChannelId,
+ /// Initiator flag, 1 if initiating, 0 if replying to an stfu.
+ pub initiator: u8,
+}
+
+/// A splice message to be sent by or received from the stfu initiator (splice initiator).
+// TODO(splicing): Add spec link for `splice`; still in draft, using from https://github.com/lightning/bolts/pull/863
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub struct Splice {
+ /// The channel ID where splicing is intended
+ pub channel_id: ChannelId,
+ /// The genesis hash of the blockchain where the channel is intended to be spliced
+ pub chain_hash: ChainHash,
+ /// The intended change in channel capacity: the amount to be added (positive value)
+ /// or removed (negative value) by the sender (splice initiator) by splicing into/from the channel.
+ pub relative_satoshis: i64,
+ /// The feerate for the new funding transaction, set by the splice initiator
+ pub funding_feerate_perkw: u32,
+ /// The locktime for the new funding transaction
+ pub locktime: u32,
+ /// The key of the sender (splice initiator) controlling the new funding transaction
+ pub funding_pubkey: PublicKey,
+}
+
+/// A splice_ack message to be received by or sent to the splice initiator.
+// TODO(splicing): Add spec link for `splice_ack`; still in draft, using from https://github.com/lightning/bolts/pull/863
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub struct SpliceAck {
+ /// The channel ID where splicing is intended
+ pub channel_id: ChannelId,
+ /// The genesis hash of the blockchain where the channel is intended to be spliced
+ pub chain_hash: ChainHash,
+ /// The intended change in channel capacity: the amount to be added (positive value)
+ /// or removed (negative value) by the sender (splice acceptor) by splicing into/from the channel.
+ pub relative_satoshis: i64,
+ /// The key of the sender (splice acceptor) controlling the new funding transaction
+ pub funding_pubkey: PublicKey,
+}
+
+/// A splice_locked message to be sent to or received from a peer.
+///
+// TODO(splicing): Add spec link for `splice_locked`; still in draft, using from https://github.com/lightning/bolts/pull/863
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub struct SpliceLocked {
+ /// The channel ID
+ pub channel_id: ChannelId,
+}
+
/// A tx_add_input message for adding an input during interactive transaction construction
///
// TODO(dual_funding): Add spec link for `tx_add_input`.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct TxAddInput {
/// The channel ID
pub channel_id: ChannelId,
/// A tx_add_output message for adding an output during interactive transaction construction.
///
// TODO(dual_funding): Add spec link for `tx_add_output`.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct TxAddOutput {
/// The channel ID
pub channel_id: ChannelId,
/// A tx_remove_input message for removing an input during interactive transaction construction.
///
// TODO(dual_funding): Add spec link for `tx_remove_input`.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct TxRemoveInput {
/// The channel ID
pub channel_id: ChannelId,
/// A tx_remove_output message for removing an output during interactive transaction construction.
///
// TODO(dual_funding): Add spec link for `tx_remove_output`.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct TxRemoveOutput {
/// The channel ID
pub channel_id: ChannelId,
/// interactive transaction construction.
///
// TODO(dual_funding): Add spec link for `tx_complete`.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct TxComplete {
/// The channel ID
pub channel_id: ChannelId,
/// interactive transaction construction.
///
// TODO(dual_funding): Add spec link for `tx_signatures`.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct TxSignatures {
/// The channel ID
pub channel_id: ChannelId,
/// completed.
///
// TODO(dual_funding): Add spec link for `tx_init_rbf`.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct TxInitRbf {
/// The channel ID
pub channel_id: ChannelId,
/// completed.
///
// TODO(dual_funding): Add spec link for `tx_ack_rbf`.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct TxAckRbf {
/// The channel ID
pub channel_id: ChannelId,
/// A tx_abort message which signals the cancellation of an in-progress transaction negotiation.
///
// TODO(dual_funding): Add spec link for `tx_abort`.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct TxAbort {
/// The channel ID
pub channel_id: ChannelId,
/// A [`shutdown`] message to be sent to or received from a peer.
///
/// [`shutdown`]: https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#closing-initiation-shutdown
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct Shutdown {
/// The channel ID
pub channel_id: ChannelId,
///
/// This is provided in [`ClosingSigned`] by both sides to indicate the fee range they are willing
/// to use.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct ClosingSignedFeeRange {
/// The minimum absolute fee, in satoshis, which the sender is willing to place on the closing
/// transaction.
/// A [`closing_signed`] message to be sent to or received from a peer.
///
/// [`closing_signed`]: https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#closing-negotiation-closing_signed
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct ClosingSigned {
/// The channel ID
pub channel_id: ChannelId,
/// An [`update_add_htlc`] message to be sent to or received from a peer.
///
/// [`update_add_htlc`]: https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#adding-an-htlc-update_add_htlc
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct UpdateAddHTLC {
/// The channel ID
pub channel_id: ChannelId,
/// An onion message to be sent to or received from a peer.
///
// TODO: update with link to OM when they are merged into the BOLTs
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct OnionMessage {
/// Used in decrypting the onion packet's payload.
pub blinding_point: PublicKey,
/// An [`update_fulfill_htlc`] message to be sent to or received from a peer.
///
/// [`update_fulfill_htlc`]: https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#removing-an-htlc-update_fulfill_htlc-update_fail_htlc-and-update_fail_malformed_htlc
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct UpdateFulfillHTLC {
/// The channel ID
pub channel_id: ChannelId,
/// An [`update_fail_htlc`] message to be sent to or received from a peer.
///
/// [`update_fail_htlc`]: https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#removing-an-htlc-update_fulfill_htlc-update_fail_htlc-and-update_fail_malformed_htlc
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct UpdateFailHTLC {
/// The channel ID
pub channel_id: ChannelId,
/// An [`update_fail_malformed_htlc`] message to be sent to or received from a peer.
///
/// [`update_fail_malformed_htlc`]: https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#removing-an-htlc-update_fulfill_htlc-update_fail_htlc-and-update_fail_malformed_htlc
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct UpdateFailMalformedHTLC {
/// The channel ID
pub channel_id: ChannelId,
/// A [`commitment_signed`] message to be sent to or received from a peer.
///
/// [`commitment_signed`]: https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#committing-updates-so-far-commitment_signed
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct CommitmentSigned {
/// The channel ID
pub channel_id: ChannelId,
/// A [`revoke_and_ack`] message to be sent to or received from a peer.
///
/// [`revoke_and_ack`]: https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#completing-the-transition-to-the-updated-state-revoke_and_ack
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct RevokeAndACK {
/// The channel ID
pub channel_id: ChannelId,
/// An [`update_fee`] message to be sent to or received from a peer
///
/// [`update_fee`]: https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#updating-fees-update_fee
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct UpdateFee {
/// The channel ID
pub channel_id: ChannelId,
/// A [`channel_reestablish`] message to be sent to or received from a peer.
///
/// [`channel_reestablish`]: https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#message-retransmission
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct ChannelReestablish {
/// The channel ID
pub channel_id: ChannelId,
/// An [`announcement_signatures`] message to be sent to or received from a peer.
///
/// [`announcement_signatures`]: https://github.com/lightning/bolts/blob/master/07-routing-gossip.md#the-announcement_signatures-message
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct AnnouncementSignatures {
/// The channel ID
pub channel_id: ChannelId,
}
/// An address which can be used to connect to a remote peer.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub enum SocketAddress {
/// An IPv4 address and port on which the peer is listening.
TcpIpV4 {
}
/// [`SocketAddress`] error variants
-#[derive(Debug, Eq, PartialEq, Clone)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub enum SocketAddressParseError {
/// Socket address (IPv4/IPv6) parsing error
SocketAddrParse,
/// The unsigned part of a [`node_announcement`] message.
///
/// [`node_announcement`]: https://github.com/lightning/bolts/blob/master/07-routing-gossip.md#the-node_announcement-message
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct UnsignedNodeAnnouncement {
/// The advertised features
pub features: NodeFeatures,
pub(crate) excess_address_data: Vec<u8>,
pub(crate) excess_data: Vec<u8>,
}
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
/// A [`node_announcement`] message to be sent to or received from a peer.
///
/// [`node_announcement`]: https://github.com/lightning/bolts/blob/master/07-routing-gossip.md#the-node_announcement-message
/// The unsigned part of a [`channel_announcement`] message.
///
/// [`channel_announcement`]: https://github.com/lightning/bolts/blob/master/07-routing-gossip.md#the-channel_announcement-message
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct UnsignedChannelAnnouncement {
/// The advertised channel features
pub features: ChannelFeatures,
/// A [`channel_announcement`] message to be sent to or received from a peer.
///
/// [`channel_announcement`]: https://github.com/lightning/bolts/blob/master/07-routing-gossip.md#the-channel_announcement-message
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct ChannelAnnouncement {
/// Authentication of the announcement by the first public node
pub node_signature_1: Signature,
/// The unsigned part of a [`channel_update`] message.
///
/// [`channel_update`]: https://github.com/lightning/bolts/blob/master/07-routing-gossip.md#the-channel_update-message
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct UnsignedChannelUpdate {
/// The genesis hash of the blockchain where the channel is to be opened
pub chain_hash: ChainHash,
/// A [`channel_update`] message to be sent to or received from a peer.
///
/// [`channel_update`]: https://github.com/lightning/bolts/blob/master/07-routing-gossip.md#the-channel_update-message
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct ChannelUpdate {
/// A signature of the channel update
pub signature: Signature,
/// messages.
///
/// [`query_channel_range`]: https://github.com/lightning/bolts/blob/master/07-routing-gossip.md#the-query_channel_range-and-reply_channel_range-messages
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct QueryChannelRange {
/// The genesis hash of the blockchain being queried
pub chain_hash: ChainHash,
/// serialization and do not support `encoding_type=1` zlib serialization.
///
/// [`reply_channel_range`]: https://github.com/lightning/bolts/blob/master/07-routing-gossip.md#the-query_channel_range-and-reply_channel_range-messages
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct ReplyChannelRange {
/// The genesis hash of the blockchain being queried
pub chain_hash: ChainHash,
/// serialization and do not support `encoding_type=1` zlib serialization.
///
/// [`query_short_channel_ids`]: https://github.com/lightning/bolts/blob/master/07-routing-gossip.md#the-query_short_channel_idsreply_short_channel_ids_end-messages
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct QueryShortChannelIds {
/// The genesis hash of the blockchain being queried
pub chain_hash: ChainHash,
/// a perfect view of the network.
///
/// [`reply_short_channel_ids_end`]: https://github.com/lightning/bolts/blob/master/07-routing-gossip.md#the-query_short_channel_idsreply_short_channel_ids_end-messages
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct ReplyShortChannelIdsEnd {
/// The genesis hash of the blockchain that was queried
pub chain_hash: ChainHash,
/// `gossip_queries` feature has been negotiated.
///
/// [`gossip_timestamp_filter`]: https://github.com/lightning/bolts/blob/master/07-routing-gossip.md#the-gossip_timestamp_filter-message
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct GossipTimestampFilter {
/// The genesis hash of the blockchain for channel and node information
pub chain_hash: ChainHash,
}
/// Used to put an error message in a [`LightningError`].
-#[derive(Clone, Debug, PartialEq)]
+#[derive(Clone, Debug, Hash, PartialEq)]
pub enum ErrorAction {
/// The peer took some action which made us think they were useless. Disconnect them.
DisconnectPeer {
/// Struct used to return values from [`RevokeAndACK`] messages, containing a bunch of commitment
/// transaction updates if they were pending.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct CommitmentUpdate {
/// `update_add_htlc` messages which should be sent
pub update_add_htlcs: Vec<UpdateAddHTLC>,
/// Handle an incoming `closing_signed` message from the given peer.
fn handle_closing_signed(&self, their_node_id: &PublicKey, msg: &ClosingSigned);
+ // Quiescence
+ /// Handle an incoming `stfu` message from the given peer.
+ fn handle_stfu(&self, their_node_id: &PublicKey, msg: &Stfu);
+
+ // Splicing
+ /// Handle an incoming `splice` message from the given peer.
+ fn handle_splice(&self, their_node_id: &PublicKey, msg: &Splice);
+ /// Handle an incoming `splice_ack` message from the given peer.
+ fn handle_splice_ack(&self, their_node_id: &PublicKey, msg: &SpliceAck);
+ /// Handle an incoming `splice_locked` message from the given peer.
+ fn handle_splice_locked(&self, their_node_id: &PublicKey, msg: &SpliceLocked);
+
// Interactive channel construction
/// Handle an incoming `tx_add_input message` from the given peer.
fn handle_tx_add_input(&self, their_node_id: &PublicKey, msg: &TxAddInput);
#[cfg(not(fuzzing))]
pub(crate) use self::fuzzy_internal_msgs::*;
-#[derive(Clone)]
-pub(crate) struct OnionPacket {
- pub(crate) version: u8,
+/// BOLT 4 onion packet including hop data for the next peer.
+#[derive(Clone, Hash, PartialEq, Eq)]
+pub struct OnionPacket {
+ /// BOLT 4 version number.
+ pub version: u8,
/// In order to ensure we always return an error on onion decode in compliance with [BOLT
/// #4](https://github.com/lightning/bolts/blob/master/04-onion-routing.md), we have to
/// deserialize `OnionPacket`s contained in [`UpdateAddHTLC`] messages even if the ephemeral
/// public key (here) is bogus, so we hold a [`Result`] instead of a [`PublicKey`] as we'd
/// like.
- pub(crate) public_key: Result<PublicKey, secp256k1::Error>,
- pub(crate) hop_data: [u8; 20*65],
- pub(crate) hmac: [u8; 32],
+ pub public_key: Result<PublicKey, secp256k1::Error>,
+ /// The 1300-byte encrypted payload destined for the next hop.
+ pub hop_data: [u8; 20*65],
+ /// HMAC to verify the integrity of `hop_data`.
+ pub hmac: [u8; 32],
}
impl onion_utils::Packet for OnionPacket {
}
}
-impl Eq for OnionPacket { }
-impl PartialEq for OnionPacket {
- fn eq(&self, other: &OnionPacket) -> bool {
- for (i, j) in self.hop_data.iter().zip(other.hop_data.iter()) {
- if i != j { return false; }
- }
- self.version == other.version &&
- self.public_key == other.public_key &&
- self.hmac == other.hmac
- }
-}
-
impl fmt::Debug for OnionPacket {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.write_fmt(format_args!("OnionPacket version {} with hmac {:?}", self.version, &self.hmac[..]))
}
}
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub(crate) struct OnionErrorPacket {
// This really should be a constant size slice, but the spec lets these things be up to 128KB?
// (TODO) We limit it in decode to much lower...
(2, require_confirmed_inputs, option),
});
+impl_writeable_msg!(Stfu, {
+ channel_id,
+ initiator,
+}, {});
+
+impl_writeable_msg!(Splice, {
+ channel_id,
+ chain_hash,
+ relative_satoshis,
+ funding_feerate_perkw,
+ locktime,
+ funding_pubkey,
+}, {});
+
+impl_writeable_msg!(SpliceAck, {
+ channel_id,
+ chain_hash,
+ relative_satoshis,
+ funding_pubkey,
+}, {});
+
+impl_writeable_msg!(SpliceLocked, {
+ channel_id,
+}, {});
+
impl_writeable_msg!(TxAddInput, {
channel_id,
serial_id,
assert_eq!(encoded_value, target_value);
}
+ #[test]
+ fn encoding_splice() {
+ let secp_ctx = Secp256k1::new();
+ let (_, pubkey_1,) = get_keys_from!("0101010101010101010101010101010101010101010101010101010101010101", secp_ctx);
+ let splice = msgs::Splice {
+ chain_hash: ChainHash::from_hex("6fe28c0ab6f1b372c1a6a246ae63f74f931e8365e15a089c68d6190000000000").unwrap(),
+ channel_id: ChannelId::from_bytes([2; 32]),
+ relative_satoshis: 123456,
+ funding_feerate_perkw: 2000,
+ locktime: 0,
+ funding_pubkey: pubkey_1,
+ };
+ let encoded_value = splice.encode();
+ assert_eq!(hex::encode(encoded_value), "02020202020202020202020202020202020202020202020202020202020202026fe28c0ab6f1b372c1a6a246ae63f74f931e8365e15a089c68d6190000000000000000000001e240000007d000000000031b84c5567b126440995d3ed5aaba0565d71e1834604819ff9c17f5e9d5dd078f");
+ }
+
+ #[test]
+ fn encoding_stfu() {
+ let stfu = msgs::Stfu {
+ channel_id: ChannelId::from_bytes([2; 32]),
+ initiator: 1,
+ };
+ let encoded_value = stfu.encode();
+ assert_eq!(hex::encode(encoded_value), "020202020202020202020202020202020202020202020202020202020202020201");
+ }
+
+ #[test]
+ fn encoding_splice_ack() {
+ let secp_ctx = Secp256k1::new();
+ let (_, pubkey_1,) = get_keys_from!("0101010101010101010101010101010101010101010101010101010101010101", secp_ctx);
+ let splice = msgs::SpliceAck {
+ chain_hash: ChainHash::from_hex("6fe28c0ab6f1b372c1a6a246ae63f74f931e8365e15a089c68d6190000000000").unwrap(),
+ channel_id: ChannelId::from_bytes([2; 32]),
+ relative_satoshis: 123456,
+ funding_pubkey: pubkey_1,
+ };
+ let encoded_value = splice.encode();
+ assert_eq!(hex::encode(encoded_value), "02020202020202020202020202020202020202020202020202020202020202026fe28c0ab6f1b372c1a6a246ae63f74f931e8365e15a089c68d6190000000000000000000001e240031b84c5567b126440995d3ed5aaba0565d71e1834604819ff9c17f5e9d5dd078f");
+ }
+
+ #[test]
+ fn encoding_splice_locked() {
+ let splice = msgs::SpliceLocked {
+ channel_id: ChannelId::from_bytes([2; 32]),
+ };
+ let encoded_value = splice.encode();
+ assert_eq!(hex::encode(encoded_value), "0202020202020202020202020202020202020202020202020202020202020202");
+ }
+
#[test]
fn encoding_tx_add_input() {
let tx_add_input = msgs::TxAddInput {
}
/// Calculates a pubkey for the next hop, such as the next hop's packet pubkey or blinding point.
-pub(crate) fn next_hop_pubkey<T: secp256k1::Signing + secp256k1::Verification>(
+pub(crate) fn next_hop_pubkey<T: secp256k1::Verification>(
secp_ctx: &Secp256k1<T>, curr_pubkey: PublicKey, shared_secret: &[u8]
) -> Result<PublicKey, secp256k1::Error> {
let blinding_factor = {
}
}
+/// Build a payment onion, returning the first-hop msat and CLTV values as well.
+/// `cur_block_height` should be set to the best known block height + 1.
+pub fn create_payment_onion<T: secp256k1::Signing>(
+ secp_ctx: &Secp256k1<T>, path: &Path, session_priv: &SecretKey, total_msat: u64,
+ recipient_onion: RecipientOnionFields, cur_block_height: u32, payment_hash: &PaymentHash,
+ keysend_preimage: &Option<PaymentPreimage>, prng_seed: [u8; 32]
+) -> Result<(msgs::OnionPacket, u64, u32), APIError> {
+ let onion_keys = construct_onion_keys(&secp_ctx, &path, &session_priv)
+ .map_err(|_| APIError::InvalidRoute{
+ err: "Pubkey along hop was maliciously selected".to_owned()
+ })?;
+ let (onion_payloads, htlc_msat, htlc_cltv) = build_onion_payloads(
+ &path, total_msat, recipient_onion, cur_block_height, keysend_preimage
+ )?;
+ let onion_packet = construct_onion_packet(onion_payloads, onion_keys, prng_seed, payment_hash)
+ .map_err(|_| APIError::InvalidRoute{
+ err: "Route size too large considering onion data".to_owned()
+ })?;
+ Ok((onion_packet, htlc_msat, htlc_cltv))
+}
+
pub(crate) fn decode_next_untagged_hop<T, R: ReadableArgs<T>, N: NextPacketBytes>(shared_secret: [u8; 32], hop_data: &[u8], hmac_bytes: [u8; 32], read_args: T) -> Result<(R, Option<([u8; 32], N)>), OnionDecodeErr> {
decode_next_hop(shared_secret, hop_data, hmac_bytes, None, read_args)
}
create_announced_chan_between_nodes(&nodes, 0, 1);
create_announced_chan_between_nodes(&nodes, 1, 2);
- let (route, _, _, _) = get_route_and_payment_hash!(&nodes[0], nodes[2], 100_000);
+ let recv_value = 100_000;
+ let (route, payment_hash, _, _) = get_route_and_payment_hash!(&nodes[0], nodes[2], recv_value);
- let (payment_hash, payment_id) = nodes[0].node.send_probe(route.paths[0].clone()).unwrap();
+ let res = nodes[0].node.send_probe(route.paths[0].clone()).unwrap();
- // node[0] -- update_add_htlcs -> node[1]
- check_added_monitors!(nodes[0], 1);
- let updates = get_htlc_update_msgs!(nodes[0], nodes[1].node.get_our_node_id());
- let probe_event = SendEvent::from_commitment_update(nodes[1].node.get_our_node_id(), updates);
- nodes[1].node.handle_update_add_htlc(&nodes[0].node.get_our_node_id(), &probe_event.msgs[0]);
- check_added_monitors!(nodes[1], 0);
- commitment_signed_dance!(nodes[1], nodes[0], probe_event.commitment_msg, false);
- expect_pending_htlcs_forwardable!(nodes[1]);
+ let expected_route: &[&[&Node]] = &[&[&nodes[1], &nodes[2]]];
- // node[1] -- update_add_htlcs -> node[2]
- check_added_monitors!(nodes[1], 1);
- let updates = get_htlc_update_msgs!(nodes[1], nodes[2].node.get_our_node_id());
- let probe_event = SendEvent::from_commitment_update(nodes[1].node.get_our_node_id(), updates);
- nodes[2].node.handle_update_add_htlc(&nodes[1].node.get_our_node_id(), &probe_event.msgs[0]);
- check_added_monitors!(nodes[2], 0);
- commitment_signed_dance!(nodes[2], nodes[1], probe_event.commitment_msg, true, true);
+ send_probe_along_route(&nodes[0], expected_route);
- // node[1] <- update_fail_htlcs -- node[2]
- let updates = get_htlc_update_msgs!(nodes[2], nodes[1].node.get_our_node_id());
- nodes[1].node.handle_update_fail_htlc(&nodes[2].node.get_our_node_id(), &updates.update_fail_htlcs[0]);
- check_added_monitors!(nodes[1], 0);
- commitment_signed_dance!(nodes[1], nodes[2], updates.commitment_signed, true);
-
- // node[0] <- update_fail_htlcs -- node[1]
- let updates = get_htlc_update_msgs!(nodes[1], nodes[0].node.get_our_node_id());
- nodes[0].node.handle_update_fail_htlc(&nodes[1].node.get_our_node_id(), &updates.update_fail_htlcs[0]);
- check_added_monitors!(nodes[0], 0);
- commitment_signed_dance!(nodes[0], nodes[1], updates.commitment_signed, false);
+ expect_probe_successful_events(&nodes[0], vec![res]);
- let mut events = nodes[0].node.get_and_clear_pending_events();
- assert_eq!(events.len(), 1);
- match events.drain(..).next().unwrap() {
- crate::events::Event::ProbeSuccessful { payment_id: ev_pid, payment_hash: ev_ph, .. } => {
- assert_eq!(payment_id, ev_pid);
- assert_eq!(payment_hash, ev_ph);
- },
- _ => panic!(),
- };
assert!(!nodes[0].node.has_pending_payments());
}
+#[test]
+fn preflight_probes_yield_event_skip_private_hop() {
+ let chanmon_cfgs = create_chanmon_cfgs(5);
+ let node_cfgs = create_node_cfgs(5, &chanmon_cfgs);
+
+ // We relax the HTLC max-in-flight limit, as otherwise we'd always be limited by it.
+ let mut no_htlc_limit_config = test_default_channel_config();
+ no_htlc_limit_config.channel_handshake_config.max_inbound_htlc_value_in_flight_percent_of_channel = 100;
+
+ let user_configs = std::iter::repeat(no_htlc_limit_config).take(5).map(|c| Some(c)).collect::<Vec<Option<UserConfig>>>();
+ let node_chanmgrs = create_node_chanmgrs(5, &node_cfgs, &user_configs);
+ let nodes = create_network(5, &node_cfgs, &node_chanmgrs);
+
+ // Setup channel topology:
+ // N0 -(1M:0)- N1 -(1M:0)- N2 -(70k:0)- N3 -(50k:0)- N4
+
+ create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 1_000_000, 0);
+ create_announced_chan_between_nodes_with_value(&nodes, 1, 2, 1_000_000, 0);
+ create_announced_chan_between_nodes_with_value(&nodes, 2, 3, 70_000, 0);
+ create_unannounced_chan_between_nodes_with_value(&nodes, 3, 4, 50_000, 0);
+
+ let mut invoice_features = Bolt11InvoiceFeatures::empty();
+ invoice_features.set_basic_mpp_optional();
+
+ let payment_params = PaymentParameters::from_node_id(nodes[3].node.get_our_node_id(), TEST_FINAL_CLTV)
+ .with_bolt11_features(invoice_features).unwrap();
+
+ let recv_value = 50_000_000;
+ let route_params = RouteParameters::from_payment_params_and_value(payment_params, recv_value);
+ let res = nodes[0].node.send_preflight_probes(route_params, None).unwrap();
+
+ let expected_route: &[&[&Node]] = &[&[&nodes[1], &nodes[2], &nodes[3]]];
+
+ assert_eq!(res.len(), expected_route.len());
+
+ send_probe_along_route(&nodes[0], expected_route);
+
+ expect_probe_successful_events(&nodes[0], res.clone());
+
+ assert!(!nodes[0].node.has_pending_payments());
+}
+
+#[test]
+fn preflight_probes_yield_event() {
+ let chanmon_cfgs = create_chanmon_cfgs(4);
+ let node_cfgs = create_node_cfgs(4, &chanmon_cfgs);
+
+ // We relax the HTLC max-in-flight limit, as otherwise we'd always be limited by it.
+ let mut no_htlc_limit_config = test_default_channel_config();
+ no_htlc_limit_config.channel_handshake_config.max_inbound_htlc_value_in_flight_percent_of_channel = 100;
+
+ let user_configs = std::iter::repeat(no_htlc_limit_config).take(4).map(|c| Some(c)).collect::<Vec<Option<UserConfig>>>();
+ let node_chanmgrs = create_node_chanmgrs(4, &node_cfgs, &user_configs);
+ let nodes = create_network(4, &node_cfgs, &node_chanmgrs);
+
+ // Setup channel topology:
+ //          (1M:0)- N1 -(30k:0)
+ //         /                   \
+ //        N0                    N3
+ //         \                   /
+ //          (1M:0)- N2 -(70k:0)
+ //
+ create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 1_000_000, 0);
+ create_announced_chan_between_nodes_with_value(&nodes, 0, 2, 1_000_000, 0);
+ create_announced_chan_between_nodes_with_value(&nodes, 1, 3, 30_000, 0);
+ create_announced_chan_between_nodes_with_value(&nodes, 2, 3, 70_000, 0);
+
+ let mut invoice_features = Bolt11InvoiceFeatures::empty();
+ invoice_features.set_basic_mpp_optional();
+
+ let payment_params = PaymentParameters::from_node_id(nodes[3].node.get_our_node_id(), TEST_FINAL_CLTV)
+ .with_bolt11_features(invoice_features).unwrap();
+
+ let recv_value = 50_000_000;
+ let route_params = RouteParameters::from_payment_params_and_value(payment_params, recv_value);
+ let res = nodes[0].node.send_preflight_probes(route_params, None).unwrap();
+
+ let expected_route: &[&[&Node]] = &[&[&nodes[1], &nodes[3]], &[&nodes[2], &nodes[3]]];
+
+ assert_eq!(res.len(), expected_route.len());
+
+ send_probe_along_route(&nodes[0], expected_route);
+
+ expect_probe_successful_events(&nodes[0], res.clone());
+
+ assert!(!nodes[0].node.has_pending_payments());
+}
+
#[test]
fn preflight_probes_yield_event_and_skip() {
let chanmon_cfgs = create_chanmon_cfgs(5);
// \ /
// (70k:0)- N3 -(1M:0)
//
- let first_chan_update = create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 100_000, 0).0;
+ create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 100_000, 0);
create_announced_chan_between_nodes_with_value(&nodes, 1, 2, 30_000, 0);
create_announced_chan_between_nodes_with_value(&nodes, 1, 3, 70_000, 0);
create_announced_chan_between_nodes_with_value(&nodes, 2, 4, 1_000_000, 0);
let mut invoice_features = Bolt11InvoiceFeatures::empty();
invoice_features.set_basic_mpp_optional();
- let mut payment_params = PaymentParameters::from_node_id(nodes[4].node.get_our_node_id(), TEST_FINAL_CLTV)
+ let payment_params = PaymentParameters::from_node_id(nodes[4].node.get_our_node_id(), TEST_FINAL_CLTV)
.with_bolt11_features(invoice_features).unwrap();
- let route_params = RouteParameters::from_payment_params_and_value(payment_params, 80_000_000);
+ let recv_value = 80_000_000;
+ let route_params = RouteParameters::from_payment_params_and_value(payment_params, recv_value);
let res = nodes[0].node.send_preflight_probes(route_params, None).unwrap();
+ let expected_route: &[&[&Node]] = &[&[&nodes[1], &nodes[2], &nodes[4]]];
+
// We check that only one probe was sent, the other one was skipped due to limited liquidity.
assert_eq!(res.len(), 1);
- let log_msg = format!("Skipped sending payment probe to avoid putting channel {} under the liquidity limit.",
- first_chan_update.contents.short_channel_id);
- node_cfgs[0].logger.assert_log_contains("lightning::ln::channelmanager", &log_msg, 1);
- let (payment_hash, payment_id) = res.first().unwrap();
+ send_probe_along_route(&nodes[0], expected_route);
- // node[0] -- update_add_htlcs -> node[1]
- check_added_monitors!(nodes[0], 1);
- let probe_event = SendEvent::from_node(&nodes[0]);
- nodes[1].node.handle_update_add_htlc(&nodes[0].node.get_our_node_id(), &probe_event.msgs[0]);
- check_added_monitors!(nodes[1], 0);
- commitment_signed_dance!(nodes[1], nodes[0], probe_event.commitment_msg, false);
- expect_pending_htlcs_forwardable!(nodes[1]);
+ expect_probe_successful_events(&nodes[0], res.clone());
- // node[1] -- update_add_htlcs -> node[2]
- check_added_monitors!(nodes[1], 1);
- let probe_event = SendEvent::from_node(&nodes[1]);
- nodes[2].node.handle_update_add_htlc(&nodes[1].node.get_our_node_id(), &probe_event.msgs[0]);
- check_added_monitors!(nodes[2], 0);
- commitment_signed_dance!(nodes[2], nodes[1], probe_event.commitment_msg, false);
- expect_pending_htlcs_forwardable!(nodes[2]);
-
- // node[2] -- update_add_htlcs -> node[4]
- check_added_monitors!(nodes[2], 1);
- let probe_event = SendEvent::from_node(&nodes[2]);
- nodes[4].node.handle_update_add_htlc(&nodes[2].node.get_our_node_id(), &probe_event.msgs[0]);
- check_added_monitors!(nodes[4], 0);
- commitment_signed_dance!(nodes[4], nodes[2], probe_event.commitment_msg, true, true);
-
- // node[2] <- update_fail_htlcs -- node[4]
- let updates = get_htlc_update_msgs!(nodes[4], nodes[2].node.get_our_node_id());
- nodes[2].node.handle_update_fail_htlc(&nodes[4].node.get_our_node_id(), &updates.update_fail_htlcs[0]);
- check_added_monitors!(nodes[2], 0);
- commitment_signed_dance!(nodes[2], nodes[4], updates.commitment_signed, true);
-
- // node[1] <- update_fail_htlcs -- node[2]
- let updates = get_htlc_update_msgs!(nodes[2], nodes[1].node.get_our_node_id());
- nodes[1].node.handle_update_fail_htlc(&nodes[2].node.get_our_node_id(), &updates.update_fail_htlcs[0]);
- check_added_monitors!(nodes[1], 0);
- commitment_signed_dance!(nodes[1], nodes[2], updates.commitment_signed, true);
-
- // node[0] <- update_fail_htlcs -- node[1]
- let updates = get_htlc_update_msgs!(nodes[1], nodes[0].node.get_our_node_id());
- nodes[0].node.handle_update_fail_htlc(&nodes[1].node.get_our_node_id(), &updates.update_fail_htlcs[0]);
- check_added_monitors!(nodes[0], 0);
- commitment_signed_dance!(nodes[0], nodes[1], updates.commitment_signed, false);
-
- let mut events = nodes[0].node.get_and_clear_pending_events();
- assert_eq!(events.len(), 1);
- match events.drain(..).next().unwrap() {
- crate::events::Event::ProbeSuccessful { payment_id: ev_pid, payment_hash: ev_ph, .. } => {
- assert_eq!(*payment_id, ev_pid);
- assert_eq!(*payment_hash, ev_ph);
- },
- _ => panic!(),
- };
assert!(!nodes[0].node.has_pending_payments());
}
expect_payment_failed_conditions(&nodes[0], payment_hash, false, fail_conditions);
} else if test == InterceptTest::Forward {
// Check that we'll fail as expected when sending to a channel that isn't in `ChannelReady` yet.
- let temp_chan_id = nodes[1].node.create_channel(nodes[2].node.get_our_node_id(), 100_000, 0, 42, None).unwrap();
+ let temp_chan_id = nodes[1].node.create_channel(nodes[2].node.get_our_node_id(), 100_000, 0, 42, None, None).unwrap();
let unusable_chan_err = nodes[1].node.forward_intercepted_htlc(intercept_id, &temp_chan_id, nodes[2].node.get_our_node_id(), expected_outbound_amount_msat).unwrap_err();
assert_eq!(unusable_chan_err , APIError::ChannelUnavailable {
err: format!("Channel with id {} for the passed counterparty node_id {} is still opening.",
/// and [BOLT-1](https://github.com/lightning/bolts/blob/master/01-messaging.md#lightning-message-format):
pub const LN_MAX_MSG_LEN: usize = ::core::u16::MAX as usize; // Must be equal to 65535
+/// The (rough) size of the buffer to pre-allocate when encoding a message. Messages should
+/// reliably be smaller than this size by at least 32 bytes or so.
+pub const MSG_BUF_ALLOC_SIZE: usize = 2048;
+
// Sha256("Noise_XK_secp256k1_ChaChaPoly_SHA256")
const NOISE_CK: [u8; 32] = [0x26, 0x40, 0xf5, 0x2e, 0xeb, 0xcd, 0x9e, 0x88, 0x29, 0x58, 0x95, 0x1c, 0x79, 0x42, 0x50, 0xee, 0xdb, 0x28, 0x00, 0x2c, 0x05, 0xd7, 0xdc, 0x2e, 0xa0, 0xf1, 0x95, 0x40, 0x60, 0x42, 0xca, 0xf1];
// Sha256(NOISE_CK || "lightning")
res.extend_from_slice(&tag);
}
+ fn decrypt_in_place_with_ad(inout: &mut [u8], n: u64, key: &[u8; 32], h: &[u8]) -> Result<(), LightningError> {
+ let mut nonce = [0; 12];
+ nonce[4..].copy_from_slice(&n.to_le_bytes()[..]);
+
+ let mut chacha = ChaCha20Poly1305RFC::new(key, &nonce, h);
+ let (inout, tag) = inout.split_at_mut(inout.len() - 16);
+ if chacha.check_decrypt_in_place(inout, tag).is_err() {
+ return Err(LightningError{err: "Bad MAC".to_owned(), action: msgs::ErrorAction::DisconnectPeer{ msg: None }});
+ }
+ Ok(())
+ }
+
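The nonce built above follows the BOLT-8 transport convention: four zero bytes followed by the 64-bit message counter in little-endian order. A standalone sketch of just that layout (the helper name is ours, not LDK's):

```rust
/// Builds the 96-bit ChaCha20-Poly1305 nonce used by the BOLT-8 transport:
/// four zero bytes followed by the message counter in little-endian order.
/// (Illustrative sketch; the real code writes into a local `[0; 12]` array.)
fn nonce_from_counter(n: u64) -> [u8; 12] {
    let mut nonce = [0u8; 12];
    nonce[4..].copy_from_slice(&n.to_le_bytes());
    nonce
}
```

Because the counter occupies the low-order bytes little-endian, incrementing `n` changes byte 4 first, which is what keeps successive nonces unique without reallocating.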
#[inline]
fn decrypt_with_ad(res: &mut[u8], n: u64, key: &[u8; 32], h: &[u8], cyphertext: &[u8]) -> Result<(), LightningError> {
let mut nonce = [0; 12];
Ok(self.their_node_id.unwrap().clone())
}
- /// Encrypts the given pre-serialized message, returning the encrypted version.
- /// panics if msg.len() > 65535 or Noise handshake has not finished.
- pub fn encrypt_buffer(&mut self, msg: &[u8]) -> Vec<u8> {
- if msg.len() > LN_MAX_MSG_LEN {
+ /// Builds sendable bytes for a message.
+ ///
+ /// `msgbuf` must begin with 16 + 2 dummy/0 bytes, which will be filled with the encrypted
+ /// message length and its MAC. It should then be followed by the message bytes themselves
+ /// (including the two byte message type).
+ ///
+ /// For efficiency, the [`Vec::capacity`] should be at least 16 bytes larger than the
+ /// [`Vec::len`], to avoid reallocating for the message MAC, which will be appended to the vec.
+ fn encrypt_message_with_header_0s(&mut self, msgbuf: &mut Vec<u8>) {
+ let msg_len = msgbuf.len() - 16 - 2;
+ if msg_len > LN_MAX_MSG_LEN {
panic!("Attempted to encrypt message longer than 65535 bytes!");
}
- let mut res = Vec::with_capacity(msg.len() + 16*2 + 2);
- res.resize(msg.len() + 16*2 + 2, 0);
-
match self.noise_state {
NoiseState::Finished { ref mut sk, ref mut sn, ref mut sck, rk: _, rn: _, rck: _ } => {
if *sn >= 1000 {
*sn = 0;
}
- Self::encrypt_with_ad(&mut res[0..16+2], *sn, sk, &[0; 0], &(msg.len() as u16).to_be_bytes());
+ Self::encrypt_with_ad(&mut msgbuf[0..16+2], *sn, sk, &[0; 0], &(msg_len as u16).to_be_bytes());
*sn += 1;
- Self::encrypt_with_ad(&mut res[16+2..], *sn, sk, &[0; 0], msg);
+ Self::encrypt_in_place_with_ad(msgbuf, 16+2, *sn, sk, &[0; 0]);
*sn += 1;
},
_ => panic!("Tried to encrypt a message prior to noise handshake completion"),
}
+ }
- res
+ /// Encrypts the given pre-serialized message, returning the encrypted version.
+ /// Panics if `msg.len() > 65535` or the Noise handshake has not finished.
+ pub fn encrypt_buffer(&mut self, mut msg: MessageBuf) -> Vec<u8> {
+ self.encrypt_message_with_header_0s(&mut msg.0);
+ msg.0
}
/// Encrypts the given message, returning the encrypted version.
pub fn encrypt_message<M: wire::Type>(&mut self, message: &M) -> Vec<u8> {
// Allocate a buffer with 2KB, fitting most common messages. Reserve the first 16+2 bytes
// for the 2-byte message type prefix and its MAC.
- let mut res = VecWriter(Vec::with_capacity(2048));
+ let mut res = VecWriter(Vec::with_capacity(MSG_BUF_ALLOC_SIZE));
res.0.resize(16 + 2, 0);
wire::write(message, &mut res).expect("In-memory messages must never fail to serialize");
- let msg_len = res.0.len() - 16 - 2;
- if msg_len > LN_MAX_MSG_LEN {
- panic!("Attempted to encrypt message longer than 65535 bytes!");
- }
-
- match self.noise_state {
- NoiseState::Finished { ref mut sk, ref mut sn, ref mut sck, rk: _, rn: _, rck: _ } => {
- if *sn >= 1000 {
- let (new_sck, new_sk) = hkdf_extract_expand_twice(sck, sk);
- *sck = new_sck;
- *sk = new_sk;
- *sn = 0;
- }
-
- Self::encrypt_with_ad(&mut res.0[0..16+2], *sn, sk, &[0; 0], &(msg_len as u16).to_be_bytes());
- *sn += 1;
-
- Self::encrypt_in_place_with_ad(&mut res.0, 16+2, *sn, sk, &[0; 0]);
- *sn += 1;
- },
- _ => panic!("Tried to encrypt a message prior to noise handshake completion"),
- }
-
+ self.encrypt_message_with_header_0s(&mut res.0);
res.0
}
}
}
- /// Decrypts the given message.
+ /// Decrypts the given message up to `msg.len() - 16`. Bytes after `msg.len() - 16` will be left
+ /// undefined (as they contain the Poly1305 tag bytes).
+ ///
/// panics if msg.len() > 65535 + 16
- pub fn decrypt_message(&mut self, msg: &[u8]) -> Result<Vec<u8>, LightningError> {
+ pub fn decrypt_message(&mut self, msg: &mut [u8]) -> Result<(), LightningError> {
if msg.len() > LN_MAX_MSG_LEN + 16 {
panic!("Attempted to decrypt message longer than 65535 + 16 bytes!");
}
match self.noise_state {
NoiseState::Finished { sk: _, sn: _, sck: _, ref rk, ref mut rn, rck: _ } => {
- let mut res = Vec::with_capacity(msg.len() - 16);
- res.resize(msg.len() - 16, 0);
- Self::decrypt_with_ad(&mut res[..], *rn, rk, &[0; 0], msg)?;
+ Self::decrypt_in_place_with_ad(&mut msg[..], *rn, rk, &[0; 0])?;
*rn += 1;
-
- Ok(res)
+ Ok(())
},
_ => panic!("Tried to decrypt a message prior to noise handshake completion"),
}
}
}
+/// A buffer which stores an encoded message (including the two message-type bytes) with some
+/// padding to allow for future encryption/MACing.
+pub struct MessageBuf(Vec<u8>);
+impl MessageBuf {
+ /// Creates a new buffer from an encoded message (i.e. the two message-type bytes followed by
+ /// the message contents).
+ ///
+ /// Panics if the message is longer than 65535 bytes.
+ pub fn from_encoded(encoded_msg: &[u8]) -> Self {
+ if encoded_msg.len() > LN_MAX_MSG_LEN {
+ panic!("Attempted to encrypt message longer than 65535 bytes!");
+ }
+ // In addition to the message (containing the two message type bytes), we also have to add
+ // the message length header (and its MAC) and the message MAC.
+ let mut res = Vec::with_capacity(encoded_msg.len() + 16*2 + 2);
+ res.resize(encoded_msg.len() + 16 + 2, 0);
+ res[16 + 2..].copy_from_slice(&encoded_msg);
+ Self(res)
+ }
+}
+
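The capacity math in `from_encoded` mirrors the BOLT-8 wire framing: a 2-byte big-endian length, a 16-byte MAC over that length, the message ciphertext, and a final 16-byte message MAC. A hedged sketch of the size arithmetic (the helper is illustrative, not part of the LDK API):

```rust
/// On-wire size of one encrypted BOLT-8 message: 2-byte length header,
/// 16-byte length MAC, `msg_len` bytes of ciphertext, 16-byte message MAC.
/// (Illustrative helper; not LDK's actual function.)
fn encrypted_wire_len(msg_len: usize) -> usize {
    2 + 16 + msg_len + 16
}
```

This matches the unit test below, which expects `5 + 2*16 + 2` bytes on the wire for a 5-byte message.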
#[cfg(test)]
mod tests {
- use super::LN_MAX_MSG_LEN;
+ use super::{MessageBuf, LN_MAX_MSG_LEN};
use bitcoin::secp256k1::{PublicKey, SecretKey};
use bitcoin::secp256k1::Secp256k1;
for i in 0..1005 {
let msg = [0x68, 0x65, 0x6c, 0x6c, 0x6f];
- let res = outbound_peer.encrypt_buffer(&msg);
+ let mut res = outbound_peer.encrypt_buffer(MessageBuf::from_encoded(&msg));
assert_eq!(res.len(), 5 + 2*16 + 2);
let len_header = res[0..2+16].to_vec();
assert_eq!(inbound_peer.decrypt_length_header(&len_header[..]).unwrap() as usize, msg.len());
- assert_eq!(inbound_peer.decrypt_message(&res[2+16..]).unwrap()[..], msg[..]);
if i == 0 {
assert_eq!(res, hex::decode("cf2b30ddf0cf3f80e7c35a6e6730b59fe802473180f396d88a8fb0db8cbcf25d2f214cf9ea1d95").unwrap());
} else if i == 1001 {
assert_eq!(res, hex::decode("2ecd8c8a5629d0d02ab457a0fdd0f7b90a192cd46be5ecb6ca570bfc5e268338b1a16cf4ef2d36").unwrap());
}
+
+ inbound_peer.decrypt_message(&mut res[2+16..]).unwrap();
+ assert_eq!(res[2 + 16..res.len() - 16], msg[..]);
}
}
fn max_message_len_encryption() {
let mut outbound_peer = get_outbound_peer_for_initiator_test_vectors();
let msg = [4u8; LN_MAX_MSG_LEN + 1];
- outbound_peer.encrypt_buffer(&msg);
+ outbound_peer.encrypt_buffer(MessageBuf::from_encoded(&msg));
}
#[test]
let mut inbound_peer = get_inbound_peer_for_test_vectors();
// MSG should not exceed LN_MAX_MSG_LEN + 16
- let msg = [4u8; LN_MAX_MSG_LEN + 17];
- inbound_peer.decrypt_message(&msg).unwrap();
+ let mut msg = [4u8; LN_MAX_MSG_LEN + 17];
+ inbound_peer.decrypt_message(&mut msg).unwrap();
}
}
#[cfg(not(c_bindings))]
use crate::ln::channelmanager::{SimpleArcChannelManager, SimpleRefChannelManager};
use crate::util::ser::{VecWriter, Writeable, Writer};
-use crate::ln::peer_channel_encryptor::{PeerChannelEncryptor,NextNoiseStep};
+use crate::ln::peer_channel_encryptor::{PeerChannelEncryptor, NextNoiseStep, MessageBuf, MSG_BUF_ALLOC_SIZE};
use crate::ln::wire;
use crate::ln::wire::{Encode, Type};
#[cfg(not(c_bindings))]
use crate::prelude::*;
use crate::io;
-use alloc::collections::LinkedList;
+use alloc::collections::VecDeque;
use crate::sync::{Arc, Mutex, MutexGuard, FairRwLock};
use core::sync::atomic::{AtomicBool, AtomicU32, AtomicI32, Ordering};
use core::{cmp, hash, fmt, mem};
fn handle_closing_signed(&self, their_node_id: &PublicKey, msg: &msgs::ClosingSigned) {
ErroringMessageHandler::push_error(self, their_node_id, msg.channel_id);
}
+ fn handle_stfu(&self, their_node_id: &PublicKey, msg: &msgs::Stfu) {
+ ErroringMessageHandler::push_error(self, their_node_id, msg.channel_id);
+ }
+ fn handle_splice(&self, their_node_id: &PublicKey, msg: &msgs::Splice) {
+ ErroringMessageHandler::push_error(self, their_node_id, msg.channel_id);
+ }
+ fn handle_splice_ack(&self, their_node_id: &PublicKey, msg: &msgs::SpliceAck) {
+ ErroringMessageHandler::push_error(self, their_node_id, msg.channel_id);
+ }
+ fn handle_splice_locked(&self, their_node_id: &PublicKey, msg: &msgs::SpliceLocked) {
+ ErroringMessageHandler::push_error(self, their_node_id, msg.channel_id);
+ }
fn handle_update_add_htlc(&self, their_node_id: &PublicKey, msg: &msgs::UpdateAddHTLC) {
ErroringMessageHandler::push_error(self, their_node_id, msg.channel_id);
}
their_features: Option<InitFeatures>,
their_socket_address: Option<SocketAddress>,
- pending_outbound_buffer: LinkedList<Vec<u8>>,
+ pending_outbound_buffer: VecDeque<Vec<u8>>,
pending_outbound_buffer_first_msg_offset: usize,
/// Queue gossip broadcasts separately from `pending_outbound_buffer` so we can easily
/// prioritize channel messages over them.
///
/// Note that these messages are *not* encrypted/MAC'd, and are only serialized.
- gossip_broadcast_buffer: LinkedList<Vec<u8>>,
+ gossip_broadcast_buffer: VecDeque<MessageBuf>,
awaiting_write_event: bool,
pending_read_buffer: Vec<u8>,
macro_rules! encode_msg {
($msg: expr) => {{
- let mut buffer = VecWriter(Vec::new());
+ let mut buffer = VecWriter(Vec::with_capacity(MSG_BUF_ALLOC_SIZE));
wire::write($msg, &mut buffer).unwrap();
buffer.0
}}
their_features: None,
their_socket_address: remote_network_address,
- pending_outbound_buffer: LinkedList::new(),
+ pending_outbound_buffer: VecDeque::new(),
pending_outbound_buffer_first_msg_offset: 0,
- gossip_broadcast_buffer: LinkedList::new(),
+ gossip_broadcast_buffer: VecDeque::new(),
awaiting_write_event: false,
pending_read_buffer,
their_features: None,
their_socket_address: remote_network_address,
- pending_outbound_buffer: LinkedList::new(),
+ pending_outbound_buffer: VecDeque::new(),
pending_outbound_buffer_first_msg_offset: 0,
- gossip_broadcast_buffer: LinkedList::new(),
+ gossip_broadcast_buffer: VecDeque::new(),
awaiting_write_event: false,
pending_read_buffer,
}
if peer.should_buffer_gossip_broadcast() {
if let Some(msg) = peer.gossip_broadcast_buffer.pop_front() {
- peer.pending_outbound_buffer.push_back(peer.channel_encryptor.encrypt_buffer(&msg[..]));
+ peer.pending_outbound_buffer.push_back(peer.channel_encryptor.encrypt_buffer(msg));
}
}
if peer.should_buffer_gossip_backfill() {
if peer.pending_outbound_buffer_first_msg_offset == next_buff.len() {
peer.pending_outbound_buffer_first_msg_offset = 0;
peer.pending_outbound_buffer.pop_front();
+ const VEC_SIZE: usize = ::core::mem::size_of::<Vec<u8>>();
+ let large_capacity = peer.pending_outbound_buffer.capacity() > 4096 / VEC_SIZE;
+ let lots_of_slack = peer.pending_outbound_buffer.len()
+ < peer.pending_outbound_buffer.capacity() / 2;
+ if large_capacity && lots_of_slack {
+ peer.pending_outbound_buffer.shrink_to_fit();
+ }
} else {
peer.awaiting_write_event = true;
}
}
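The shrink logic added above releases queue memory once the `VecDeque`'s capacity is both past a rough 4KB-of-pointers threshold and more than half unused, so a burst of buffered messages doesn't pin memory forever. A self-contained sketch of the same heuristic (the function name is ours):

```rust
use std::collections::VecDeque;

/// Shrinks a message queue once it reserves more than ~4KB of `Vec` headers
/// and is less than half full. (Illustrative sketch of the heuristic; not
/// LDK's actual code path.)
fn maybe_shrink(buf: &mut VecDeque<Vec<u8>>) {
    const VEC_SIZE: usize = core::mem::size_of::<Vec<u8>>();
    let large_capacity = buf.capacity() > 4096 / VEC_SIZE;
    let lots_of_slack = buf.len() < buf.capacity() / 2;
    if large_capacity && lots_of_slack {
        buf.shrink_to_fit();
    }
}
```

Checking both conditions avoids thrashing: small queues are never shrunk, and a nearly-full large queue keeps its capacity for the next write.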
/// Append a message to a peer's pending outbound/write gossip broadcast buffer
- fn enqueue_encoded_gossip_broadcast(&self, peer: &mut Peer, encoded_message: Vec<u8>) {
+ fn enqueue_encoded_gossip_broadcast(&self, peer: &mut Peer, encoded_message: MessageBuf) {
peer.msgs_sent_since_pong += 1;
+ debug_assert!(peer.gossip_broadcast_buffer.len() <= OUTBOUND_BUFFER_LIMIT_DROP_GOSSIP);
peer.gossip_broadcast_buffer.push_back(encoded_message);
}
}
peer.pending_read_is_header = false;
} else {
- let msg_data = try_potential_handleerror!(peer,
- peer.channel_encryptor.decrypt_message(&peer.pending_read_buffer[..]));
- assert!(msg_data.len() >= 2);
+ debug_assert!(peer.pending_read_buffer.len() >= 2 + 16);
+ try_potential_handleerror!(peer,
+ peer.channel_encryptor.decrypt_message(&mut peer.pending_read_buffer[..]));
+
+ let mut reader = io::Cursor::new(&peer.pending_read_buffer[..peer.pending_read_buffer.len() - 16]);
+ let message_result = wire::read(&mut reader, &*self.message_handler.custom_message_handler);
// Reset read buffer
if peer.pending_read_buffer.capacity() > 8192 { peer.pending_read_buffer = Vec::new(); }
peer.pending_read_buffer.resize(18, 0);
peer.pending_read_is_header = true;
- let mut reader = io::Cursor::new(&msg_data[..]);
- let message_result = wire::read(&mut reader, &*self.message_handler.custom_message_handler);
let message = match message_result {
Ok(x) => x,
Err(e) => {
self.message_handler.chan_handler.handle_channel_ready(&their_node_id, &msg);
},
+ // Quiescence messages:
+ wire::Message::Stfu(msg) => {
+ self.message_handler.chan_handler.handle_stfu(&their_node_id, &msg);
+ },
+
+ // Splicing messages:
+ wire::Message::Splice(msg) => {
+ self.message_handler.chan_handler.handle_splice(&their_node_id, &msg);
+ },
+ wire::Message::SpliceAck(msg) => {
+ self.message_handler.chan_handler.handle_splice_ack(&their_node_id, &msg);
+ },
+ wire::Message::SpliceLocked(msg) => {
+ self.message_handler.chan_handler.handle_splice_locked(&their_node_id, &msg);
+ },
+
// Interactive transaction construction messages:
wire::Message::TxAddInput(msg) => {
self.message_handler.chan_handler.handle_tx_add_input(&their_node_id, &msg);
if except_node.is_some() && peer.their_node_id.as_ref().map(|(pk, _)| pk) == except_node {
continue;
}
- self.enqueue_encoded_gossip_broadcast(&mut *peer, encoded_msg.clone());
+ self.enqueue_encoded_gossip_broadcast(&mut *peer, MessageBuf::from_encoded(&encoded_msg));
}
},
wire::Message::NodeAnnouncement(ref msg) => {
if except_node.is_some() && peer.their_node_id.as_ref().map(|(pk, _)| pk) == except_node {
continue;
}
- self.enqueue_encoded_gossip_broadcast(&mut *peer, encoded_msg.clone());
+ self.enqueue_encoded_gossip_broadcast(&mut *peer, MessageBuf::from_encoded(&encoded_msg));
}
},
wire::Message::ChannelUpdate(ref msg) => {
if except_node.is_some() && peer.their_node_id.as_ref().map(|(pk, _)| pk) == except_node {
continue;
}
- self.enqueue_encoded_gossip_broadcast(&mut *peer, encoded_msg.clone());
+ self.enqueue_encoded_gossip_broadcast(&mut *peer, MessageBuf::from_encoded(&encoded_msg));
}
},
_ => debug_assert!(false, "We shouldn't attempt to forward anything but gossip messages"),
&msg.channel_id);
self.enqueue_message(&mut *get_peer_for_forwarding!(node_id), msg);
},
+ MessageSendEvent::SendStfu { ref node_id, ref msg } => {
+ log_debug!(self.logger, "Handling SendStfu event in peer_handler for node {} for channel {}",
+ log_pubkey!(node_id),
+ &msg.channel_id);
+ self.enqueue_message(&mut *get_peer_for_forwarding!(node_id), msg);
+ },
+ MessageSendEvent::SendSplice { ref node_id, ref msg } => {
+ log_debug!(self.logger, "Handling SendSplice event in peer_handler for node {} for channel {}",
+ log_pubkey!(node_id),
+ &msg.channel_id);
+ self.enqueue_message(&mut *get_peer_for_forwarding!(node_id), msg);
+ },
+ MessageSendEvent::SendSpliceAck { ref node_id, ref msg } => {
+ log_debug!(self.logger, "Handling SendSpliceAck event in peer_handler for node {} for channel {}",
+ log_pubkey!(node_id),
+ &msg.channel_id);
+ self.enqueue_message(&mut *get_peer_for_forwarding!(node_id), msg);
+ },
+ MessageSendEvent::SendSpliceLocked { ref node_id, ref msg } => {
+ log_debug!(self.logger, "Handling SendSpliceLocked event in peer_handler for node {} for channel {}",
+ log_pubkey!(node_id),
+ &msg.channel_id);
+ self.enqueue_message(&mut *get_peer_for_forwarding!(node_id), msg);
+ },
MessageSendEvent::SendTxAddInput { ref node_id, ref msg } => {
log_debug!(self.logger, "Handling SendTxAddInput event in peer_handler for node {} for channel {}",
log_pubkey!(node_id),
let mut scid_privacy_cfg = test_default_channel_config();
scid_privacy_cfg.channel_handshake_config.announced_channel = true;
scid_privacy_cfg.channel_handshake_config.negotiate_scid_privacy = true;
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, Some(scid_privacy_cfg)).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, Some(scid_privacy_cfg)).unwrap();
let mut open_channel = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
assert!(!open_channel.channel_type.as_ref().unwrap().supports_scid_privacy()); // we ignore `negotiate_scid_privacy` on pub channels
let mut scid_privacy_cfg = test_default_channel_config();
scid_privacy_cfg.channel_handshake_config.announced_channel = false;
scid_privacy_cfg.channel_handshake_config.negotiate_scid_privacy = true;
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, Some(scid_privacy_cfg)).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, Some(scid_privacy_cfg)).unwrap();
let init_open_channel = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
assert!(init_open_channel.channel_type.as_ref().unwrap().supports_scid_privacy());
let mut no_announce_cfg = test_default_channel_config();
no_announce_cfg.channel_handshake_config.announced_channel = false;
no_announce_cfg.channel_handshake_config.negotiate_scid_privacy = true;
- nodes[1].node.create_channel(nodes[2].node.get_our_node_id(), 100_000, 10_000, 42, Some(no_announce_cfg)).unwrap();
+ nodes[1].node.create_channel(nodes[2].node.get_our_node_id(), 100_000, 10_000, 42, None, Some(no_announce_cfg)).unwrap();
let mut open_channel = get_event_msg!(nodes[1], MessageSendEvent::SendOpenChannel, nodes[2].node.get_our_node_id());
assert!(open_channel.channel_type.as_ref().unwrap().requires_scid_privacy());
create_announced_chan_between_nodes_with_value(&nodes, 1, 2, 1_000_000, 0);
chan_config.channel_handshake_config.announced_channel = false;
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, Some(chan_config)).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, Some(chan_config)).unwrap();
let open_channel = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_channel);
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, None).unwrap();
let mut open_channel_msg = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
open_channel_msg.channel_type = Some(channel_type_features.clone());
// 2.1 First try the non-0conf method to manually accept
nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42,
- Some(manually_accept_conf)).unwrap();
+ None, Some(manually_accept_conf)).unwrap();
let mut open_channel_msg = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel,
nodes[1].node.get_our_node_id());
// 2.2 Try again with the 0conf method to manually accept
nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42,
- Some(manually_accept_conf)).unwrap();
+ None, Some(manually_accept_conf)).unwrap();
let mut open_channel_msg = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel,
nodes[1].node.get_our_node_id());
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, Some(manually_accept_conf)]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 10_001, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100_000, 10_001, 42, None, None).unwrap();
let open_channel = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_channel);
let push_msat = 10001;
let node_a = nodes.remove(0);
let node_b = nodes.remove(0);
- node_a.node.create_channel(node_b.node.get_our_node_id(), channel_value, push_msat, 42, None).unwrap();
+ node_a.node.create_channel(node_b.node.get_our_node_id(), channel_value, push_msat, 42, None, None).unwrap();
node_b.node.handle_open_channel(&node_a.node.get_our_node_id(), &get_event_msg!(node_a, MessageSendEvent::SendOpenChannel, node_b.node.get_our_node_id()));
node_a.node.handle_accept_channel(&node_b.node.get_our_node_id(), &get_event_msg!(node_b, MessageSendEvent::SendAcceptChannel, node_a.node.get_our_node_id()));
assert_eq!(send_events.len(), 2);
let node_1_msgs = remove_first_msg_event_to_node(&nodes[1].node.get_our_node_id(), &mut send_events);
let node_2_msgs = remove_first_msg_event_to_node(&nodes[2].node.get_our_node_id(), &mut send_events);
- do_pass_along_path(&nodes[0], &[&nodes[1], &nodes[3]], 15_000_000, payment_hash, Some(payment_secret), node_1_msgs, true, false, None);
- do_pass_along_path(&nodes[0], &[&nodes[2], &nodes[3]], 15_000_000, payment_hash, Some(payment_secret), node_2_msgs, true, false, None);
+ do_pass_along_path(&nodes[0], &[&nodes[1], &nodes[3]], 15_000_000, payment_hash, Some(payment_secret), node_1_msgs, true, false, None, false);
+ do_pass_along_path(&nodes[0], &[&nodes[2], &nodes[3]], 15_000_000, payment_hash, Some(payment_secret), node_2_msgs, true, false, None, false);
// Now that we have an MPP payment pending, get the latest encoded copies of nodes[3]'s
// monitors and ChannelManager, for use later, if we don't want to persist both monitors.
fn write<W: Writer>(&self, w: &mut W) -> Result<(), io::Error> {
self.0.write(w)
}
-
- fn serialized_length(&self) -> usize {
- self.0.serialized_length()
- }
}
impl Readable for ShutdownScript {
//! Tests of our shutdown and closing_signed negotiation logic.
use crate::sign::{EntropySource, SignerProvider};
+use crate::chain::ChannelMonitorUpdateStatus;
use crate::chain::transaction::OutPoint;
-use crate::events::{MessageSendEvent, MessageSendEventsProvider, ClosureReason};
+use crate::events::{MessageSendEvent, HTLCDestination, MessageSendEventsProvider, ClosureReason};
use crate::ln::channelmanager::{self, PaymentSendFailure, PaymentId, RecipientOnionFields, ChannelShutdownState, ChannelDetails};
use crate::routing::router::{PaymentParameters, get_route, RouteParameters};
use crate::ln::msgs;
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 1_000_000, 100_000, 0, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 1_000_000, 100_000, 0, None, None).unwrap();
let open_chan = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
// Create a dummy P2WPKH script
.into_script();
// Check script when handling an open_channel message
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, None).unwrap();
let mut open_channel = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
open_channel.shutdown_scriptpubkey = Some(anysegwit_shutdown_script.clone());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_channel);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
// Check script when handling an accept_channel message
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, None).unwrap();
let open_channel = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
nodes[1].node.handle_open_channel(&nodes[0].node.get_our_node_id(), &open_channel);
let mut accept_channel = get_event_msg!(nodes[1], MessageSendEvent::SendAcceptChannel, nodes[0].node.get_our_node_id());
let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
- nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None).unwrap();
+ nodes[0].node.create_channel(nodes[1].node.get_our_node_id(), 100000, 10001, 42, None, None).unwrap();
// Use a segwit v0 script with an unsupported witness program
let mut open_channel = get_event_msg!(nodes[0], MessageSendEvent::SendOpenChannel, nodes[1].node.get_our_node_id());
check_closed_event!(nodes[0], 1, ClosureReason::CooperativeClosure, [nodes[1].node.get_our_node_id()], 100000);
check_closed_event!(nodes[1], 1, ClosureReason::CooperativeClosure, [nodes[0].node.get_our_node_id()], 100000);
}
+
+fn do_outbound_update_no_early_closing_signed(use_htlc: bool) {
+ // Previously, if we had a pending inbound HTLC (or fee update) on a channel which had
+ // initiated shutdown, we'd send our initial closing_signed immediately after receiving the
+ // peer's last RAA to remove the HTLC/fee update, but before receiving their final
+ // commitment_signed for a commitment without the HTLC/with the new fee. This caused at
+ // least LDK peers to force-close, as we initiated closing_signed before the channel was
+ // actually fully empty of pending updates/HTLCs.
+ let chanmon_cfgs = create_chanmon_cfgs(2);
+ let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
+ let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
+ let nodes = create_network(2, &node_cfgs, &node_chanmgrs);
+
+ let chan_id = create_announced_chan_between_nodes(&nodes, 0, 1).2;
+
+ send_payment(&nodes[0], &[&nodes[1]], 1_000_000);
+ let payment_hash_opt = if use_htlc {
+ Some(route_payment(&nodes[1], &[&nodes[0]], 10_000).1)
+ } else {
+ None
+ };
+
+ if use_htlc {
+ nodes[0].node.fail_htlc_backwards(&payment_hash_opt.unwrap());
+ expect_pending_htlcs_forwardable_and_htlc_handling_failed!(nodes[0],
+ [HTLCDestination::FailedPayment { payment_hash: payment_hash_opt.unwrap() }]);
+ } else {
+ *chanmon_cfgs[0].fee_estimator.sat_per_kw.lock().unwrap() *= 10;
+ nodes[0].node.timer_tick_occurred();
+ }
+ let updates = get_htlc_update_msgs(&nodes[0], &nodes[1].node.get_our_node_id());
+ check_added_monitors(&nodes[0], 1);
+
+ nodes[1].node.close_channel(&chan_id, &nodes[0].node.get_our_node_id()).unwrap();
+ let node_0_shutdown = get_event_msg!(nodes[1], MessageSendEvent::SendShutdown, nodes[0].node.get_our_node_id());
+ nodes[0].node.close_channel(&chan_id, &nodes[1].node.get_our_node_id()).unwrap();
+ let node_1_shutdown = get_event_msg!(nodes[0], MessageSendEvent::SendShutdown, nodes[1].node.get_our_node_id());
+
+ nodes[0].node.handle_shutdown(&nodes[1].node.get_our_node_id(), &node_0_shutdown);
+ nodes[1].node.handle_shutdown(&nodes[0].node.get_our_node_id(), &node_1_shutdown);
+
+ if use_htlc {
+ nodes[1].node.handle_update_fail_htlc(&nodes[0].node.get_our_node_id(), &updates.update_fail_htlcs[0]);
+ } else {
+ nodes[1].node.handle_update_fee(&nodes[0].node.get_our_node_id(), &updates.update_fee.unwrap());
+ }
+ nodes[1].node.handle_commitment_signed(&nodes[0].node.get_our_node_id(), &updates.commitment_signed);
+ check_added_monitors(&nodes[1], 1);
+ let (bs_raa, bs_cs) = get_revoke_commit_msgs(&nodes[1], &nodes[0].node.get_our_node_id());
+
+ nodes[0].node.handle_revoke_and_ack(&nodes[1].node.get_our_node_id(), &bs_raa);
+ check_added_monitors(&nodes[0], 1);
+
+ // At this point the Channel on nodes[0] has no record of any HTLCs but the latest
+ // broadcastable commitment does contain the HTLC (but only the ChannelMonitor knows this).
+ // Thus, the channel should not yet initiate closing_signed negotiation (but previously did).
+ assert_eq!(nodes[0].node.get_and_clear_pending_msg_events(), Vec::new());
+
+ chanmon_cfgs[0].persister.set_update_ret(ChannelMonitorUpdateStatus::InProgress);
+ nodes[0].node.handle_commitment_signed(&nodes[1].node.get_our_node_id(), &bs_cs);
+ check_added_monitors(&nodes[0], 1);
+ assert_eq!(nodes[0].node.get_and_clear_pending_msg_events(), Vec::new());
+
+ expect_channel_shutdown_state!(nodes[0], chan_id, ChannelShutdownState::ResolvingHTLCs);
+ assert_eq!(nodes[0].node.get_and_clear_pending_msg_events(), Vec::new());
+ let (outpoint, latest_update, _) = nodes[0].chain_monitor.latest_monitor_update_id.lock().unwrap().get(&chan_id).unwrap().clone();
+ nodes[0].chain_monitor.chain_monitor.force_channel_monitor_updated(outpoint, latest_update);
+
+ let as_raa_closing_signed = nodes[0].node.get_and_clear_pending_msg_events();
+ assert_eq!(as_raa_closing_signed.len(), 2);
+
+ if let MessageSendEvent::SendRevokeAndACK { msg, .. } = &as_raa_closing_signed[0] {
+ nodes[1].node.handle_revoke_and_ack(&nodes[0].node.get_our_node_id(), &msg);
+ check_added_monitors(&nodes[1], 1);
+ if use_htlc {
+ expect_payment_failed!(nodes[1], payment_hash_opt.unwrap(), true);
+ }
+ } else { panic!("Unexpected message {:?}", as_raa_closing_signed[0]); }
+
+ if let MessageSendEvent::SendClosingSigned { msg, .. } = &as_raa_closing_signed[1] {
+ nodes[1].node.handle_closing_signed(&nodes[0].node.get_our_node_id(), &msg);
+ } else { panic!("Unexpected message {:?}", as_raa_closing_signed[1]); }
+
+ let bs_closing_signed = get_event_msg!(nodes[1], MessageSendEvent::SendClosingSigned, nodes[0].node.get_our_node_id());
+ nodes[0].node.handle_closing_signed(&nodes[1].node.get_our_node_id(), &bs_closing_signed);
+ let (_, as_2nd_closing_signed) = get_closing_signed_broadcast!(nodes[0].node, nodes[1].node.get_our_node_id());
+ nodes[1].node.handle_closing_signed(&nodes[0].node.get_our_node_id(), &as_2nd_closing_signed.unwrap());
+ let (_, node_1_none) = get_closing_signed_broadcast!(nodes[1].node, nodes[0].node.get_our_node_id());
+ assert!(node_1_none.is_none());
+
+ check_closed_event!(nodes[0], 1, ClosureReason::CooperativeClosure, [nodes[1].node.get_our_node_id()], 100000);
+ check_closed_event!(nodes[1], 1, ClosureReason::CooperativeClosure, [nodes[0].node.get_our_node_id()], 100000);
+}
+
+#[test]
+fn outbound_update_no_early_closing_signed() {
+ do_outbound_update_no_early_closing_signed(true);
+ do_outbound_update_no_early_closing_signed(false);
+}
AcceptChannelV2(msgs::AcceptChannelV2),
FundingCreated(msgs::FundingCreated),
FundingSigned(msgs::FundingSigned),
+ Stfu(msgs::Stfu),
+ Splice(msgs::Splice),
+ SpliceAck(msgs::SpliceAck),
+ SpliceLocked(msgs::SpliceLocked),
TxAddInput(msgs::TxAddInput),
TxAddOutput(msgs::TxAddOutput),
TxRemoveInput(msgs::TxRemoveInput),
&Message::AcceptChannelV2(ref msg) => msg.write(writer),
&Message::FundingCreated(ref msg) => msg.write(writer),
&Message::FundingSigned(ref msg) => msg.write(writer),
+ &Message::Stfu(ref msg) => msg.write(writer),
+ &Message::Splice(ref msg) => msg.write(writer),
+ &Message::SpliceAck(ref msg) => msg.write(writer),
+ &Message::SpliceLocked(ref msg) => msg.write(writer),
&Message::TxAddInput(ref msg) => msg.write(writer),
&Message::TxAddOutput(ref msg) => msg.write(writer),
&Message::TxRemoveInput(ref msg) => msg.write(writer),
&Message::AcceptChannelV2(ref msg) => msg.type_id(),
&Message::FundingCreated(ref msg) => msg.type_id(),
&Message::FundingSigned(ref msg) => msg.type_id(),
+ &Message::Stfu(ref msg) => msg.type_id(),
+ &Message::Splice(ref msg) => msg.type_id(),
+ &Message::SpliceAck(ref msg) => msg.type_id(),
+ &Message::SpliceLocked(ref msg) => msg.type_id(),
&Message::TxAddInput(ref msg) => msg.type_id(),
&Message::TxAddOutput(ref msg) => msg.type_id(),
&Message::TxRemoveInput(ref msg) => msg.type_id(),
msgs::FundingSigned::TYPE => {
Ok(Message::FundingSigned(Readable::read(buffer)?))
},
+ msgs::Splice::TYPE => {
+ Ok(Message::Splice(Readable::read(buffer)?))
+ },
+ msgs::Stfu::TYPE => {
+ Ok(Message::Stfu(Readable::read(buffer)?))
+ },
+ msgs::SpliceAck::TYPE => {
+ Ok(Message::SpliceAck(Readable::read(buffer)?))
+ },
+ msgs::SpliceLocked::TYPE => {
+ Ok(Message::SpliceLocked(Readable::read(buffer)?))
+ },
msgs::TxAddInput::TYPE => {
Ok(Message::TxAddInput(Readable::read(buffer)?))
},
fn type_id(&self) -> u16 { T::TYPE }
}
+impl Encode for msgs::Stfu {
+ const TYPE: u16 = 2;
+}
+
impl Encode for msgs::Init {
const TYPE: u16 = 16;
}
const TYPE: u16 = 65;
}
+impl Encode for msgs::Splice {
+ // TODO(splicing): Double-check against the finalized spec; the draft spec uses 74, which is likely wrong since 74 is tx_abort's type; CLN uses 75
+ const TYPE: u16 = 75;
+}
+
+impl Encode for msgs::SpliceAck {
+ const TYPE: u16 = 76;
+}
+
+impl Encode for msgs::SpliceLocked {
+ const TYPE: u16 = 77;
+}
+
impl Encode for msgs::TxAddInput {
const TYPE: u16 = 66;
}
bytes: self.bytes,
contents: self.contents,
signature,
+ tagged_hash: self.tagged_hash,
})
}
}
bytes: Vec<u8>,
contents: InvoiceContents,
signature: Signature,
+ tagged_hash: TaggedHash,
}
/// The contents of a [`Bolt12Invoice`] for responding to either an [`Offer`] or a [`Refund`].
/// Hash that was used for signing the invoice.
pub fn signable_hash(&self) -> [u8; 32] {
- merkle::message_digest(SIGNATURE_TAG, &self.bytes).as_ref().clone()
+ self.tagged_hash.as_digest().as_ref().clone()
}
/// Verifies that the invoice was for a request or refund created using the given key. Returns
None => return Err(Bolt12ParseError::InvalidSemantics(Bolt12SemanticError::MissingSignature)),
Some(signature) => signature,
};
- let message = TaggedHash::new(SIGNATURE_TAG, &bytes);
+ let tagged_hash = TaggedHash::new(SIGNATURE_TAG, &bytes);
let pubkey = contents.fields().signing_pubkey;
- merkle::verify_signature(&signature, message, pubkey)?;
+ merkle::verify_signature(&signature, &tagged_hash, pubkey)?;
- Ok(Bolt12Invoice { bytes, contents, signature })
+ Ok(Bolt12Invoice { bytes, contents, signature, tagged_hash })
}
}
assert_eq!(invoice.signing_pubkey(), recipient_pubkey());
let message = TaggedHash::new(SIGNATURE_TAG, &invoice.bytes);
- assert!(merkle::verify_signature(&invoice.signature, message, recipient_pubkey()).is_ok());
+ assert!(merkle::verify_signature(&invoice.signature, &message, recipient_pubkey()).is_ok());
let digest = Message::from_slice(&invoice.signable_hash()).unwrap();
let pubkey = recipient_pubkey().into();
assert_eq!(invoice.signing_pubkey(), recipient_pubkey());
let message = TaggedHash::new(SIGNATURE_TAG, &invoice.bytes);
- assert!(merkle::verify_signature(&invoice.signature, message, recipient_pubkey()).is_ok());
+ assert!(merkle::verify_signature(&invoice.signature, &message, recipient_pubkey()).is_ok());
assert_eq!(
invoice.as_tlv_stream(),
Some(signature) => signature,
};
let message = TaggedHash::new(SIGNATURE_TAG, &bytes);
- merkle::verify_signature(&signature, message, contents.payer_id)?;
+ merkle::verify_signature(&signature, &message, contents.payer_id)?;
Ok(InvoiceRequest { bytes, contents, signature })
}
assert_eq!(invoice_request.payer_note(), None);
let message = TaggedHash::new(SIGNATURE_TAG, &invoice_request.bytes);
- assert!(merkle::verify_signature(&invoice_request.signature, message, payer_pubkey()).is_ok());
+ assert!(merkle::verify_signature(&invoice_request.signature, &message, payer_pubkey()).is_ok());
assert_eq!(
invoice_request.as_tlv_stream(),
///
/// [BIP 340]: https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki
/// [BOLT 12]: https://github.com/rustyrussell/lightning-rfc/blob/guilt/offers/12-offer-encoding.md#signature-calculation
-#[derive(Debug, PartialEq)]
-pub struct TaggedHash(Message);
+#[derive(Clone, Debug, PartialEq)]
+pub struct TaggedHash {
+ tag: &'static str,
+ merkle_root: sha256::Hash,
+ digest: Message,
+}
impl TaggedHash {
/// Creates a tagged hash with the given parameters.
///
/// Panics if `tlv_stream` is not a well-formed TLV stream containing at least one TLV record.
- pub(super) fn new(tag: &str, tlv_stream: &[u8]) -> Self {
- Self(message_digest(tag, tlv_stream))
+ pub(super) fn new(tag: &'static str, tlv_stream: &[u8]) -> Self {
+ let tag_hash = sha256::Hash::hash(tag.as_bytes());
+ let merkle_root = root_hash(tlv_stream);
+ let digest = Message::from_slice(&tagged_hash(tag_hash, merkle_root)).unwrap();
+ Self {
+ tag,
+ merkle_root,
+ digest,
+ }
}
/// Returns the digest to sign.
pub fn as_digest(&self) -> &Message {
- &self.0
+ &self.digest
+ }
+
+ /// Returns the tag used in the tagged hash.
+ pub fn tag(&self) -> &str {
+ &self.tag
+ }
+
+ /// Returns the merkle root used in the tagged hash.
+ pub fn merkle_root(&self) -> sha256::Hash {
+ self.merkle_root
}
}
/// Verifies the signature with a pubkey over the given message using a tagged hash as the message
/// digest.
pub(super) fn verify_signature(
- signature: &Signature, message: TaggedHash, pubkey: PublicKey,
+ signature: &Signature, message: &TaggedHash, pubkey: PublicKey,
) -> Result<(), secp256k1::Error> {
let digest = message.as_digest();
let pubkey = pubkey.into();
secp_ctx.verify_schnorr(signature, digest, &pubkey)
}
-pub(super) fn message_digest(tag: &str, bytes: &[u8]) -> Message {
- let tag = sha256::Hash::hash(tag.as_bytes());
- let merkle_root = root_hash(bytes);
- Message::from_slice(&tagged_hash(tag, merkle_root)).unwrap()
-}
-
/// Computes a merkle root hash for the given data, which must be a well-formed TLV stream
/// containing at least one TLV record.
fn root_hash(data: &[u8]) -> sha256::Hash {
use super::{SIGNATURE_TYPES, TlvStream, WithoutSignatures};
use bitcoin::hashes::{Hash, sha256};
- use bitcoin::secp256k1::{KeyPair, Secp256k1, SecretKey};
+ use bitcoin::secp256k1::{KeyPair, Message, Secp256k1, SecretKey};
use bitcoin::secp256k1::schnorr::Signature;
use core::convert::Infallible;
use crate::offers::offer::{Amount, OfferBuilder};
use crate::offers::invoice_request::InvoiceRequest;
use crate::offers::parse::Bech32Encode;
+ use crate::offers::test_utils::{payer_pubkey, recipient_pubkey};
use crate::util::ser::Writeable;
#[test]
);
}
+ #[test]
+ fn compute_tagged_hash() {
+ let unsigned_invoice_request = OfferBuilder::new("foo".into(), recipient_pubkey())
+ .amount_msats(1000)
+ .build().unwrap()
+ .request_invoice(vec![1; 32], payer_pubkey()).unwrap()
+ .payer_note("bar".into())
+ .build().unwrap();
+
+ // Simply test that we can grab the tag and merkle root exposed by the accessor
+ // functions, then use them to successfully compute a tagged hash.
+ let tagged_hash = unsigned_invoice_request.as_ref();
+ let expected_digest = unsigned_invoice_request.as_ref().as_digest();
+ let tag = sha256::Hash::hash(tagged_hash.tag().as_bytes());
+ let actual_digest = Message::from_slice(&super::tagged_hash(tag, tagged_hash.merkle_root()))
+ .unwrap();
+ assert_eq!(*expected_digest, actual_digest);
+ }
+
#[test]
fn skips_encoding_signature_tlv_records() {
let secp_ctx = Secp256k1::new();
}
#[cfg(test)]
-mod bech32_tests {
- use super::{Bolt12ParseError, Offer};
- use bitcoin::bech32;
+mod bolt12_tests {
+ use super::{Bolt12ParseError, Bolt12SemanticError, Offer};
use crate::ln::msgs::DecodeError;
- #[test]
- fn encodes_offer_as_bech32_without_checksum() {
- let encoded_offer = "lno1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg";
- let offer = dbg!(encoded_offer.parse::<Offer>().unwrap());
- let reencoded_offer = offer.to_string();
- dbg!(reencoded_offer.parse::<Offer>().unwrap());
- assert_eq!(reencoded_offer, encoded_offer);
- }
-
#[test]
fn parses_bech32_encoded_offers() {
let offers = [
- // BOLT 12 test vectors
- "lno1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg",
- "l+no1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg",
- "lno1pqps7sjqpgt+yzm3qv4uxzmtsd3jjqer9wd3hy6tsw3+5k7msjzfpy7nz5yqcn+ygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd+5xvxg",
- "lno1pqps7sjqpgt+ yzm3qv4uxzmtsd3jjqer9wd3hy6tsw3+ 5k7msjzfpy7nz5yqcn+\nygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd+\r\n 5xvxg",
+ // Minimal bolt12 offer
+ "lno1pgx9getnwss8vetrw3hhyuckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg",
+
+ // for testnet
+ "lno1qgsyxjtl6luzd9t3pr62xr7eemp6awnejusgf6gw45q75vcfqqqqqqq2p32x2um5ypmx2cm5dae8x93pqthvwfzadd7jejes8q9lhc4rvjxd022zv5l44g6qah82ru5rdpnpj",
+
+ // for bitcoin (redundant)
+ "lno1qgsxlc5vp2m0rvmjcxn2y34wv0m5lyc7sdj7zksgn35dvxgqqqqqqqq2p32x2um5ypmx2cm5dae8x93pqthvwfzadd7jejes8q9lhc4rvjxd022zv5l44g6qah82ru5rdpnpj",
+
+ // for bitcoin or liquidv1
+ "lno1qfqpge38tqmzyrdjj3x2qkdr5y80dlfw56ztq6yd9sme995g3gsxqqm0u2xq4dh3kdevrf4zg6hx8a60jv0gxe0ptgyfc6xkryqqqqqqqq9qc4r9wd6zqan9vd6x7unnzcss9mk8y3wkklfvevcrszlmu23kfrxh49px20665dqwmn4p72pksese",
+
+ // with metadata
+ "lno1qsgqqqqqqqqqqqqqqqqqqqqqqqqqqzsv23jhxapqwejkxar0wfe3vggzamrjghtt05kvkvpcp0a79gmy3nt6jsn98ad2xs8de6sl9qmgvcvs",
+
+ // with amount
+ "lno1pqpzwyq2p32x2um5ypmx2cm5dae8x93pqthvwfzadd7jejes8q9lhc4rvjxd022zv5l44g6qah82ru5rdpnpj",
+
+ // with currency
+ "lno1qcp4256ypqpzwyq2p32x2um5ypmx2cm5dae8x93pqthvwfzadd7jejes8q9lhc4rvjxd022zv5l44g6qah82ru5rdpnpj",
+
+ // with expiry
+ "lno1pgx9getnwss8vetrw3hhyucwq3ay997czcss9mk8y3wkklfvevcrszlmu23kfrxh49px20665dqwmn4p72pksese",
+
+ // with issuer
+ "lno1pgx9getnwss8vetrw3hhyucjy358garswvaz7tmzdak8gvfj9ehhyeeqgf85c4p3xgsxjmnyw4ehgunfv4e3vggzamrjghtt05kvkvpcp0a79gmy3nt6jsn98ad2xs8de6sl9qmgvcvs",
+
+ // with quantity
+ "lno1pgx9getnwss8vetrw3hhyuc5qyz3vggzamrjghtt05kvkvpcp0a79gmy3nt6jsn98ad2xs8de6sl9qmgvcvs",
+
+ // with unlimited (or unknown) quantity
+ "lno1pgx9getnwss8vetrw3hhyuc5qqtzzqhwcuj966ma9n9nqwqtl032xeyv6755yeflt235pmww58egx6rxry",
+
+ // with single quantity (weird but valid)
+ "lno1pgx9getnwss8vetrw3hhyuc5qyq3vggzamrjghtt05kvkvpcp0a79gmy3nt6jsn98ad2xs8de6sl9qmgvcvs",
+
+ // with feature
+ "lno1pgx9getnwss8vetrw3hhyucvp5yqqqqqqqqqqqqqqqqqqqqkyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg",
+
+ // with blinded path via Bob (0x424242...), blinding 020202...
+ "lno1pgx9getnwss8vetrw3hhyucs5ypjgef743p5fzqq9nqxh0ah7y87rzv3ud0eleps9kl2d5348hq2k8qzqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgqpqqqqqqqqqqqqqqqqqqqqqqqqqqqzqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqqzq3zyg3zyg3zyg3vggzamrjghtt05kvkvpcp0a79gmy3nt6jsn98ad2xs8de6sl9qmgvcvs",
+
+ // ... and with second blinded path via Carol (0x434343...), blinding 020202...
+ "lno1pgx9getnwss8vetrw3hhyucsl5q5yqeyv5l2cs6y3qqzesrth7mlzrlp3xg7xhulusczm04x6g6nms9trspqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqqsqqqqqqqqqqqqqqqqqqqqqqqqqqpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqsqpqg3zyg3zyg3zygz0uc7h32x9s0aecdhxlk075kn046aafpuuyw8f5j652t3vha2yqrsyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqsqzqqqqqqqqqqqqqqqqqqqqqqqqqqqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqqyzyg3zyg3zyg3zzcss9mk8y3wkklfvevcrszlmu23kfrxh49px20665dqwmn4p72pksese",
+
+ // unknown odd field
+ "lno1pgx9getnwss8vetrw3hhyuckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxfppf5x2mrvdamk7unvvs",
];
for encoded_offer in &offers {
if let Err(e) = encoded_offer.parse::<Offer>() {
}
#[test]
- fn fails_parsing_bech32_encoded_offers_with_invalid_continuations() {
- let offers = [
- // BOLT 12 test vectors
- "lno1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg+",
- "lno1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg+ ",
- "+lno1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg",
- "+ lno1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg",
- "ln++o1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg",
- ];
- for encoded_offer in &offers {
- match encoded_offer.parse::<Offer>() {
- Ok(_) => panic!("Valid offer: {}", encoded_offer),
- Err(e) => assert_eq!(e, Bolt12ParseError::InvalidContinuation),
- }
- }
+ fn fails_parsing_bech32_encoded_offers() {
+ // Malformed: fields out of order
+ assert_eq!(
+ "lno1zcssyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszpgz5znzfgdzs".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::InvalidValue)),
+ );
- }
+ // Malformed: unknown even TLV type 78
+ assert_eq!(
+ "lno1pgz5znzfgdz3vggzqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpysgr0u2xq4dh3kdevrf4zg6hx8a60jv0gxe0ptgyfc6xkryqqqqqqqq".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::UnknownRequiredFeature)),
+ );
- #[test]
- fn fails_parsing_bech32_encoded_offer_with_invalid_hrp() {
- let encoded_offer = "lni1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg";
- match encoded_offer.parse::<Offer>() {
- Ok(_) => panic!("Valid offer: {}", encoded_offer),
- Err(e) => assert_eq!(e, Bolt12ParseError::InvalidBech32Hrp),
- }
- }
+ // Malformed: empty
+ assert_eq!(
+ "lno1".parse::<Offer>(),
+ Err(Bolt12ParseError::InvalidSemantics(Bolt12SemanticError::MissingDescription)),
+ );
- #[test]
- fn fails_parsing_bech32_encoded_offer_with_invalid_bech32_data() {
- let encoded_offer = "lno1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxo";
- match encoded_offer.parse::<Offer>() {
- Ok(_) => panic!("Valid offer: {}", encoded_offer),
- Err(e) => assert_eq!(e, Bolt12ParseError::Bech32(bech32::Error::InvalidChar('o'))),
- }
- }
+ // Malformed: truncated at type
+ assert_eq!(
+ "lno1pg".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::ShortRead)),
+ );
- #[test]
- fn fails_parsing_bech32_encoded_offer_with_invalid_tlv_data() {
- let encoded_offer = "lno1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxgqqqqq";
- match encoded_offer.parse::<Offer>() {
- Ok(_) => panic!("Valid offer: {}", encoded_offer),
- Err(e) => assert_eq!(e, Bolt12ParseError::Decode(DecodeError::InvalidValue)),
- }
+ // Malformed: truncated in length
+ assert_eq!(
+ "lno1pt7s".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::ShortRead)),
+ );
+
+ // Malformed: truncated after length
+ assert_eq!(
+ "lno1pgpq".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::ShortRead)),
+ );
+
+ // Malformed: truncated in description
+ assert_eq!(
+ "lno1pgpyz".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::ShortRead)),
+ );
+
+ // Malformed: invalid offer_chains length
+ assert_eq!(
+ "lno1qgqszzs9g9xyjs69zcssyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqsz".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::ShortRead)),
+ );
+
+ // Malformed: truncated currency UTF-8
+ assert_eq!(
+ "lno1qcqcqzs9g9xyjs69zcssyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqsz".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::ShortRead)),
+ );
+
+ // Malformed: invalid currency UTF-8
+ assert_eq!(
+ "lno1qcpgqsg2q4q5cj2rg5tzzqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqg".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::ShortRead)),
+ );
+
+ // Malformed: truncated description UTF-8
+ assert_eq!(
+ "lno1pgqcq93pqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqy".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::InvalidValue)),
+ );
+
+ // Malformed: invalid description UTF-8
+ assert_eq!(
+ "lno1pgpgqsgkyypqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqs".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::InvalidValue)),
+ );
+
+ // Malformed: truncated offer_paths
+ assert_eq!(
+ "lno1pgz5znzfgdz3qqgpzcssyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqsz".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::ShortRead)),
+ );
+
+ // Malformed: zero num_hops in blinded_path
+ assert_eq!(
+ "lno1pgz5znzfgdz3qqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqsqzcssyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqsz".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::ShortRead)),
+ );
+
+ // Malformed: truncated onionmsg_hop in blinded_path
+ assert_eq!(
+ "lno1pgz5znzfgdz3qqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqspqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqgkyypqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqs".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::ShortRead)),
+ );
+
+ // Malformed: bad first_node_id in blinded_path
+ assert_eq!(
+ "lno1pgz5znzfgdz3qqcrqvpsxqcrqvpsxqcrqvpsxqcrqvpsxqcrqvpsxqcrqvpsxqcrqvpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqspqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqgqzcssyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqsz".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::ShortRead)),
+ );
+
+ // Malformed: bad blinding in blinded_path
+ assert_eq!(
+ "lno1pgz5znzfgdz3qqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpsxqcrqvpsxqcrqvpsxqcrqvpsxqcrqvpsxqcrqvpsxqcrqvpsxqcpqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqgqzcssyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqsz".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::ShortRead)),
+ );
+
+ // Malformed: bad blinded_node_id in onionmsg_hop
+ assert_eq!(
+ "lno1pgz5znzfgdz3qqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqspqvpsxqcrqvpsxqcrqvpsxqcrqvpsxqcrqvpsxqcrqvpsxqcrqvpsxqgqzcssyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqsz".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::ShortRead)),
+ );
+
+ // Malformed: truncated issuer UTF-8
+ assert_eq!(
+ "lno1pgz5znzfgdz3yqvqzcssyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqsz".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::InvalidValue)),
+ );
+
+ // Malformed: invalid issuer UTF-8
+ assert_eq!(
+ "lno1pgz5znzfgdz3yq5qgytzzqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqg".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::InvalidValue)),
+ );
+
+ // Malformed: invalid offer_node_id
+ assert_eq!(
+ "lno1pgz5znzfgdz3vggzqvpsxqcrqvpsxqcrqvpsxqcrqvpsxqcrqvpsxqcrqvpsxqcrqvps".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::InvalidValue)),
+ );
+
+ // Contains type >= 80
+ assert_eq!(
+ "lno1pgz5znzfgdz3vggzqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgpqyqszqgp9qgr0u2xq4dh3kdevrf4zg6hx8a60jv0gxe0ptgyfc6xkryqqqqqqqq".parse::<Offer>(),
+ Err(Bolt12ParseError::Decode(DecodeError::InvalidValue)),
+ );
+
+ // TODO: Resolved in spec https://github.com/lightning/bolts/pull/798/files#r1334851959
+ // Contains unknown feature 22
+ assert!(
+ "lno1pgx9getnwss8vetrw3hhyucvqdqqqqqkyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg".parse::<Offer>().is_ok()
+ );
+
+ // Missing offer_description
+ assert_eq!(
+ "lno1zcss9mk8y3wkklfvevcrszlmu23kfrxh49px20665dqwmn4p72pksese".parse::<Offer>(),
+ Err(Bolt12ParseError::InvalidSemantics(Bolt12SemanticError::MissingDescription)),
+ );
+
+ // Missing offer_node_id
+ assert_eq!(
+ "lno1pgx9getnwss8vetrw3hhyuc".parse::<Offer>(),
+ Err(Bolt12ParseError::InvalidSemantics(Bolt12SemanticError::MissingSigningPubkey)),
+ );
}
}
Self::InvalidSignature(error)
}
}
+
+#[cfg(test)]
+mod bolt12_tests {
+ use super::Bolt12ParseError;
+ use crate::offers::offer::Offer;
+
+ #[test]
+ fn encodes_offer_as_bech32_without_checksum() {
+ let encoded_offer = "lno1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg";
+ let offer = dbg!(encoded_offer.parse::<Offer>().unwrap());
+ let reencoded_offer = offer.to_string();
+ dbg!(reencoded_offer.parse::<Offer>().unwrap());
+ assert_eq!(reencoded_offer, encoded_offer);
+ }
+
+ #[test]
+ fn parses_bech32_encoded_offers() {
+ let offers = [
+ // A complete string is valid
+ "lno1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg",
+
+ // + can join anywhere
+ "l+no1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg",
+
+ // Multiple + can join
+ "lno1pqps7sjqpgt+yzm3qv4uxzmtsd3jjqer9wd3hy6tsw3+5k7msjzfpy7nz5yqcn+ygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd+5xvxg",
+
+ // + can be followed by whitespace
+ "lno1pqps7sjqpgt+ yzm3qv4uxzmtsd3jjqer9wd3hy6tsw3+ 5k7msjzfpy7nz5yqcn+\nygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd+\r\n 5xvxg",
+ ];
+ for encoded_offer in &offers {
+ if let Err(e) = encoded_offer.parse::<Offer>() {
+ panic!("Invalid offer ({:?}): {}", e, encoded_offer);
+ }
+ }
+ }
+
+ #[test]
+ fn fails_parsing_bech32_encoded_offers_with_invalid_continuations() {
+ let offers = [
+ // + must be surrounded by bech32 characters
+ "lno1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg+",
+ "lno1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg+ ",
+ "+lno1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg",
+ "+ lno1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg",
+ "ln++o1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg",
+ ];
+ for encoded_offer in &offers {
+ match encoded_offer.parse::<Offer>() {
+ Ok(_) => panic!("Valid offer: {}", encoded_offer),
+ Err(e) => assert_eq!(e, Bolt12ParseError::InvalidContinuation),
+ }
+ }
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use super::Bolt12ParseError;
+ use bitcoin::bech32;
+ use crate::ln::msgs::DecodeError;
+ use crate::offers::offer::Offer;
+
+ #[test]
+ fn fails_parsing_bech32_encoded_offer_with_invalid_hrp() {
+ let encoded_offer = "lni1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxg";
+ match encoded_offer.parse::<Offer>() {
+ Ok(_) => panic!("Valid offer: {}", encoded_offer),
+ Err(e) => assert_eq!(e, Bolt12ParseError::InvalidBech32Hrp),
+ }
+ }
+
+ #[test]
+ fn fails_parsing_bech32_encoded_offer_with_invalid_bech32_data() {
+ let encoded_offer = "lno1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxo";
+ match encoded_offer.parse::<Offer>() {
+ Ok(_) => panic!("Valid offer: {}", encoded_offer),
+ Err(e) => assert_eq!(e, Bolt12ParseError::Bech32(bech32::Error::InvalidChar('o'))),
+ }
+ }
+
+ #[test]
+ fn fails_parsing_bech32_encoded_offer_with_invalid_tlv_data() {
+ let encoded_offer = "lno1pqps7sjqpgtyzm3qv4uxzmtsd3jjqer9wd3hy6tsw35k7msjzfpy7nz5yqcnygrfdej82um5wf5k2uckyypwa3eyt44h6txtxquqh7lz5djge4afgfjn7k4rgrkuag0jsd5xvxgqqqqq";
+ match encoded_offer.parse::<Offer>() {
+ Ok(_) => panic!("Valid offer: {}", encoded_offer),
+ Err(e) => assert_eq!(e, Bolt12ParseError::Decode(DecodeError::InvalidValue)),
+ }
+ }
+}
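The continuation rule these tests exercise (a `+`, optionally followed by whitespace, may join runs of bech32 characters, per BOLT 12's offer encoding) can be sketched as a standalone helper. `strip_continuations` is a hypothetical name for illustration, not LDK's actual parser:

```rust
/// Strips BOLT 12 continuation markers: a `+` (optionally followed by whitespace)
/// may join two runs of bech32 data. Returns `Err(())` when a `+` is not
/// surrounded by data, or when whitespace appears anywhere but after a `+`.
/// Hypothetical helper for illustration only.
fn strip_continuations(s: &str) -> Result<String, ()> {
    let mut out = String::with_capacity(s.len());
    let mut chars = s.chars().peekable();
    let mut last_was_data = false;
    while let Some(c) = chars.next() {
        if c == '+' {
            // `+` must follow bech32 data...
            if !last_was_data { return Err(()); }
            // ...then any run of whitespace is skipped...
            while let Some(&w) = chars.peek() {
                if w.is_whitespace() { chars.next(); } else { break; }
            }
            // ...and more bech32 data (not another `+`, not end-of-input) must follow.
            match chars.peek() {
                Some(&n) if n != '+' => { last_was_data = false; }
                _ => return Err(()),
            }
        } else if c.is_whitespace() {
            // Whitespace is only valid immediately after a `+`.
            return Err(());
        } else {
            out.push(c);
            last_was_data = true;
        }
    }
    if !last_was_data { return Err(()); }
    Ok(out)
}
```

This mirrors the cases above: `"l+no1…"` and `"lno1+ \n…"` join cleanly, while a leading, trailing, or doubled `+` fails.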
// Re-export structs so they can be imported with just the `onion_message::` module prefix.
pub use self::messenger::{CustomOnionMessageHandler, DefaultMessageRouter, Destination, MessageRouter, OnionMessageContents, OnionMessagePath, OnionMessenger, PeeledOnion, PendingOnionMessage, SendError};
+pub use self::messenger::{create_onion_message, peel_onion_message};
#[cfg(not(c_bindings))]
pub use self::messenger::{SimpleArcOnionMessenger, SimpleRefOnionMessenger};
pub use self::offers::{OffersMessage, OffersMessageHandler};
pub(super) const BIG_PACKET_HOP_DATA_LEN: usize = 32768;
/// Packet of hop data for next peer
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct Packet {
/// Bolt 04 version number
pub version: u8,
}
}
+fn message_sha256d_hash<M: Writeable>(msg: &M) -> Sha256dHash {
+ let mut engine = Sha256dHash::engine();
+ msg.write(&mut engine).expect("In-memory structs should not fail to serialize");
+ Sha256dHash::from_engine(engine)
+}
+
/// Verifies the signature of a [`NodeAnnouncement`].
///
/// Returns an error if it is invalid.
pub fn verify_node_announcement<C: Verification>(msg: &NodeAnnouncement, secp_ctx: &Secp256k1<C>) -> Result<(), LightningError> {
- let msg_hash = hash_to_message!(&Sha256dHash::hash(&msg.contents.encode()[..])[..]);
+ let msg_hash = hash_to_message!(&message_sha256d_hash(&msg.contents)[..]);
secp_verify_sig!(secp_ctx, &msg_hash, &msg.signature, &get_pubkey_from_node_id!(msg.contents.node_id, "node_announcement"), "node_announcement");
Ok(())
///
/// Returns an error if one of the signatures is invalid.
pub fn verify_channel_announcement<C: Verification>(msg: &ChannelAnnouncement, secp_ctx: &Secp256k1<C>) -> Result<(), LightningError> {
- let msg_hash = hash_to_message!(&Sha256dHash::hash(&msg.contents.encode()[..])[..]);
+ let msg_hash = hash_to_message!(&message_sha256d_hash(&msg.contents)[..]);
secp_verify_sig!(secp_ctx, &msg_hash, &msg.node_signature_1, &get_pubkey_from_node_id!(msg.contents.node_id_1, "channel_announcement"), "channel_announcement");
secp_verify_sig!(secp_ctx, &msg_hash, &msg.node_signature_2, &get_pubkey_from_node_id!(msg.contents.node_id_2, "channel_announcement"), "channel_announcement");
secp_verify_sig!(secp_ctx, &msg_hash, &msg.bitcoin_signature_1, &get_pubkey_from_node_id!(msg.contents.bitcoin_key_1, "channel_announcement"), "channel_announcement");
///
/// Since node aliases are provided by third parties, they are a potential avenue for injection
/// attacks. Care must be taken when processing.
-#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+#[derive(Clone, Copy, Debug, Hash, PartialEq, Eq)]
pub struct NodeAlias(pub [u8; 32]);
impl fmt::Display for NodeAlias {
let chain_hash: ChainHash = Readable::read(reader)?;
let channels_count: u64 = Readable::read(reader)?;
- let mut channels = IndexedMap::new();
+ // In Nov 2023 there were about 69K channels; we cap allocations to 1.5x that.
+ let mut channels = IndexedMap::with_capacity(cmp::min(channels_count as usize, 103500));
for _ in 0..channels_count {
let chan_id: u64 = Readable::read(reader)?;
let chan_info = Readable::read(reader)?;
channels.insert(chan_id, chan_info);
}
let nodes_count: u64 = Readable::read(reader)?;
- let mut nodes = IndexedMap::new();
+ // In Nov 2023 there were about 15,000 nodes; we cap allocations to 1.5x that.
+ let mut nodes = IndexedMap::with_capacity(cmp::min(nodes_count as usize, 22500));
for _ in 0..nodes_count {
let node_id = Readable::read(reader)?;
let node_info = Readable::read(reader)?;
} }
}
- let msg_hash = hash_to_message!(&Sha256dHash::hash(&msg.encode()[..])[..]);
+ let msg_hash = hash_to_message!(&message_sha256d_hash(&msg)[..]);
if msg.flags & 1 == 1 {
check_update_latest!(channel.two_to_one);
if let Some(sig) = sig {
}
}
+ #[allow(unused)]
pub(crate) fn as_ecdsa(&self) -> Option<&ECS> {
match self {
ChannelSignerType::Ecdsa(ecs) => Some(ecs)
}
}
- // Decrypt in place, without checking the tag. Use `finish_and_check_tag` to check it
- // later when decryption finishes.
- //
- // Should never be `pub` because the public API should always enforce tag checking.
+ /// Decrypts `input_output` in place and checks the tag, returning `Err(())` if it is
+ /// invalid. Unlike `decrypt_in_place`, this is safe to expose because the tag is always
+ /// checked.
+ pub fn check_decrypt_in_place(&mut self, input_output: &mut [u8], tag: &[u8]) -> Result<(), ()> {
+ self.decrypt_in_place(input_output);
+ if self.finish_and_check_tag(tag) { Ok(()) } else { Err(()) }
+ }
+
+ /// Decrypt in place, without checking the tag. Use `finish_and_check_tag` to check it
+ /// later when decryption finishes.
+ ///
+ /// Should never be `pub` because the public API should always enforce tag checking.
pub(super) fn decrypt_in_place(&mut self, input_output: &mut [u8]) {
debug_assert!(self.finished == false);
self.mac.input(input_output);
self.cipher.process_in_place(input_output);
}
- // If we were previously decrypting with `decrypt_in_place`, this method must be used to finish
- // decrypting and check the tag. Returns whether or not the tag is valid.
+ /// If we were previously decrypting with `decrypt_in_place`, this method must be used to
+ /// finish decrypting and check the tag. Returns whether or not the tag is valid.
pub(super) fn finish_and_check_tag(&mut self, tag: &[u8]) -> bool {
debug_assert!(self.finished == false);
self.finished = true;
true
}
+ /// Decrypts `input_output` in place and checks the tag, returning `Err(())` if it is
+ /// invalid.
+ pub fn check_decrypt_in_place(&mut self, input_output: &mut [u8], tag: &[u8]) -> Result<(), ()> {
+ self.decrypt_in_place(input_output);
+ if self.finish_and_check_tag(tag) { Ok(()) } else { Err(()) }
+ }
+
pub(super) fn decrypt_in_place(&mut self, _input: &mut [u8]) {
assert!(self.finished == false);
}
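The `decrypt_in_place`/`finish_and_check_tag` split above follows a common streaming-AEAD shape. A toy stand-in (an XOR cipher with an additive checksum, emphatically not real cryptography) shows the control flow that `check_decrypt_in_place` enforces: at the public boundary, decryption and the tag check are inseparable.

```rust
// Toy stand-in, NOT real crypto: demonstrates the decrypt-then-check-tag
// control flow, not ChaCha20-Poly1305 itself.
struct ToyAead { key: u8, mac: u64, finished: bool }

impl ToyAead {
    fn new(key: u8) -> Self { Self { key, mac: 0, finished: false } }

    // Decrypt without checking the tag; kept non-public in the real code.
    fn decrypt_in_place(&mut self, buf: &mut [u8]) {
        debug_assert!(!self.finished);
        for b in buf.iter_mut() {
            self.mac = self.mac.wrapping_add(*b as u64); // MAC covers the ciphertext
            *b ^= self.key;
        }
    }

    // Finishes a streaming decryption and reports whether the tag matched.
    fn finish_and_check_tag(&mut self, tag: &[u8]) -> bool {
        debug_assert!(!self.finished);
        self.finished = true;
        tag == &self.mac.to_be_bytes()[..]
    }

    // The only safe public entry point: callers cannot skip the tag check.
    fn check_decrypt_in_place(&mut self, buf: &mut [u8], tag: &[u8]) -> Result<(), ()> {
        self.decrypt_in_place(buf);
        if self.finish_and_check_tag(tag) { Ok(()) } else { Err(()) }
    }
}
```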
/// - The counterparty will get an [`HTLCIntercepted`] event upon payment forward, and call
/// [`forward_intercepted_htlc`] with less than the amount provided in
/// [`HTLCIntercepted::expected_outbound_amount_msat`]. The difference between the expected and
- /// actual forward amounts is their fee.
- // TODO: link to LSP JIT channel invoice generation spec when it's merged
+ /// actual forward amounts is their fee. See
+ /// <https://github.com/BitcoinAndLightningLayerSpecs/lsp/tree/main/LSPS2#flow-lsp-trusts-client-model>
+ /// for how this feature may be used in the LSP use case.
///
/// # Note
/// It's important for payee wallet software to verify that [`PaymentClaimable::amount_msat`] is
}
}
+ /// Constructs a new, empty map with the given capacity pre-allocated
+ pub fn with_capacity(capacity: usize) -> Self {
+ Self {
+ map: HashMap::with_capacity(capacity),
+ keys: Vec::with_capacity(capacity),
+ }
+ }
+
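`with_capacity` lets deserializers pre-allocate without trusting an attacker-controlled count, as the `NetworkGraph` reader does with `cmp::min`. The pattern in miniature, using a plain `HashMap` and hypothetical names:

```rust
use std::cmp;
use std::collections::HashMap;

// Hypothetical deserializer: `claimed_count` comes off the wire, so we bound
// the pre-allocation rather than trusting it. If the real count is larger than
// the cap, the map simply grows as entries are inserted.
fn read_map(data: &[(u64, u32)], claimed_count: u64) -> HashMap<u64, u32> {
    const CAP: usize = 22500; // ~1.5x the expected entry count
    let mut map = HashMap::with_capacity(cmp::min(claimed_count as usize, CAP));
    for &(k, v) in data.iter().take(claimed_count as usize) {
        map.insert(k, v);
    }
    map
}
```

Without the cap, a malicious serialization claiming billions of entries could trigger a huge allocation before a single entry is read.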
#[inline(always)]
/// Fetches the element with the given `key`, if one exists.
pub fn get(&self, key: &K) -> Option<&V> {
/// Writes `self` out to a `Vec<u8>`.
fn encode(&self) -> Vec<u8> {
- let mut msg = VecWriter(Vec::new());
+ let len = self.serialized_length();
+ let mut msg = VecWriter(Vec::with_capacity(len));
self.write(&mut msg).unwrap();
+ // Note that objects with interior mutability may change size between when we called
+ // serialized_length and when we called write. That's okay, but shouldn't happen during
+ // testing as most of our tests are not threaded.
+ #[cfg(test)]
+ debug_assert_eq!(len, msg.0.len());
msg.0
}
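The `encode` change above pre-sizes the buffer from `serialized_length` and then asserts the writer produced exactly that many bytes. A minimal sketch of the same pattern, with `Msg` standing in for any `Writeable` type:

```rust
// Stand-in for a `Writeable` type: a length-prefixed payload.
struct Msg { payload: Vec<u8> }

impl Msg {
    // The exact number of bytes `write` will emit: 2-byte length prefix + payload.
    fn serialized_length(&self) -> usize { 2 + self.payload.len() }

    fn write(&self, out: &mut Vec<u8>) {
        out.extend_from_slice(&(self.payload.len() as u16).to_be_bytes());
        out.extend_from_slice(&self.payload);
    }

    // Pre-allocate from serialized_length, then check the two stayed in sync.
    fn encode(&self) -> Vec<u8> {
        let len = self.serialized_length();
        let mut out = Vec::with_capacity(len);
        self.write(&mut out);
        debug_assert_eq!(out.len(), len);
        out
    }
}
```

Pre-sizing avoids repeated reallocation while serializing large structures, and the debug assertion catches any drift between the two methods.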
0u16.write(&mut msg).unwrap();
self.write(&mut msg).unwrap();
let len = msg.0.len();
+ debug_assert_eq!(len - 2, self.serialized_length());
msg.0[..2].copy_from_slice(&(len as u16 - 2).to_be_bytes());
msg.0
}
/// This serialization is used by [`BOLT 7`] hostnames.
///
/// [`BOLT 7`]: https://github.com/lightning/bolts/blob/master/07-routing-gossip.md
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct Hostname(String);
impl Hostname {
/// Returns the length of the hostname.
/// if the `Transaction`'s consensus-serialized length is <= u16::MAX.
///
/// Use [`TransactionU16LenLimited::into_transaction`] to convert into the contained `Transaction`.
-#[derive(Clone, Debug, PartialEq, Eq)]
+#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct TransactionU16LenLimited(Transaction);
impl TransactionU16LenLimited {
/// Channel state used for policy enforcement
pub state: Arc<Mutex<EnforcementState>>,
pub disable_revocation_policy_check: bool,
+ /// When `true` (the default), the signer will respond immediately with signatures. When `false`,
+ /// the signer will return an error indicating that it is unavailable.
+ pub available: Arc<Mutex<bool>>,
}
impl PartialEq for TestChannelSigner {
Self {
inner,
state,
- disable_revocation_policy_check: false
+ disable_revocation_policy_check: false,
+ available: Arc::new(Mutex::new(true)),
}
}
Self {
inner,
state,
- disable_revocation_policy_check
+ disable_revocation_policy_check,
+ available: Arc::new(Mutex::new(true)),
}
}
pub fn get_enforcement_state(&self) -> MutexGuard<EnforcementState> {
self.state.lock().unwrap()
}
+
+ /// Marks the signer's availability.
+ ///
+ /// When `true`, methods are forwarded to the underlying signer as normal. When `false`, some
+ /// methods will return `Err` indicating that the signer is unavailable. Intended to be used for
+ /// testing asynchronous signing.
+ #[cfg(test)]
+ pub fn set_available(&self, available: bool) {
+ *self.available.lock().unwrap() = available;
+ }
}
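The `available` flag pattern, a shared `Arc<Mutex<bool>>` consulted at the top of each signing method, can be shown in miniature. `ToySigner` is a stand-in for illustration, not the real `TestChannelSigner`:

```rust
use std::sync::{Arc, Mutex};

// Sketch of the availability-gating pattern: each signing method first
// consults a shared flag and errors out while the signer is "offline",
// letting tests exercise asynchronous-signing code paths.
struct ToySigner { available: Arc<Mutex<bool>> }

impl ToySigner {
    fn new() -> Self { Self { available: Arc::new(Mutex::new(true)) } }

    fn set_available(&self, available: bool) {
        *self.available.lock().unwrap() = available;
    }

    fn sign(&self, msg: &[u8]) -> Result<u64, ()> {
        if !*self.available.lock().unwrap() { return Err(()); }
        // Stand-in "signature": a trivial checksum.
        Ok(msg.iter().map(|&b| b as u64).sum())
    }
}
```

Because the flag lives behind an `Arc`, clones of the signer share it, so a test can flip availability on one handle and observe failures on another.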
impl ChannelSigner for TestChannelSigner {
self.verify_counterparty_commitment_tx(commitment_tx, secp_ctx);
{
+ if !*self.available.lock().unwrap() {
+ return Err(());
+ }
let mut state = self.state.lock().unwrap();
let actual_commitment_number = commitment_tx.commitment_number();
let last_commitment_number = state.last_counterparty_commitment;
}
fn validate_counterparty_revocation(&self, idx: u64, _secret: &SecretKey) -> Result<(), ()> {
+ if !*self.available.lock().unwrap() {
+ return Err(());
+ }
let mut state = self.state.lock().unwrap();
assert!(idx == state.last_counterparty_revoked_commitment || idx == state.last_counterparty_revoked_commitment - 1, "expecting to validate the current or next counterparty revocation - trying {}, current {}", idx, state.last_counterparty_revoked_commitment);
state.last_counterparty_revoked_commitment = idx;
}
fn sign_holder_commitment(&self, commitment_tx: &HolderCommitmentTransaction, secp_ctx: &Secp256k1<secp256k1::All>) -> Result<Signature, ()> {
+ if !*self.available.lock().unwrap() {
+ return Err(());
+ }
let trusted_tx = self.verify_holder_commitment_tx(commitment_tx, secp_ctx);
let state = self.state.lock().unwrap();
let commitment_number = trusted_tx.commitment_number();
fn handle_closing_signed(&self, _their_node_id: &PublicKey, msg: &msgs::ClosingSigned) {
self.received_msg(wire::Message::ClosingSigned(msg.clone()));
}
+ fn handle_stfu(&self, _their_node_id: &PublicKey, msg: &msgs::Stfu) {
+ self.received_msg(wire::Message::Stfu(msg.clone()));
+ }
+ fn handle_splice(&self, _their_node_id: &PublicKey, msg: &msgs::Splice) {
+ self.received_msg(wire::Message::Splice(msg.clone()));
+ }
+ fn handle_splice_ack(&self, _their_node_id: &PublicKey, msg: &msgs::SpliceAck) {
+ self.received_msg(wire::Message::SpliceAck(msg.clone()));
+ }
+ fn handle_splice_locked(&self, _their_node_id: &PublicKey, msg: &msgs::SpliceLocked) {
+ self.received_msg(wire::Message::SpliceLocked(msg.clone()));
+ }
fn handle_update_add_htlc(&self, _their_node_id: &PublicKey, msg: &msgs::UpdateAddHTLC) {
self.received_msg(wire::Message::UpdateAddHTLC(msg.clone()));
}
--- /dev/null
+ * `ChannelManager`s written with LDK 0.0.119 are no longer readable by versions
+ of LDK prior to 0.0.113. Users wishing to downgrade to LDK 0.0.112 or before
+ can read a 0.0.119-serialized `ChannelManager` with a version of LDK from
+ 0.0.113 to 0.0.118, re-serialize it, and then downgrade.