Previously, if a user called `PeerHandler::process_events()` from
two threads simultaneously, we'd race, which ended up sending
messages out of order in the real world.
Specifically, we first called `get_and_clear_pending_msg_events`,
then took the `peers` lock and pushed the messages we got into the
sending queue. Two threads could each fetch some set of messages to
send, then race each other to the `peers` lock and enqueue their
messages in arbitrary order.
Because we already hold the `peers` lock when calling most message
handler functions, we can simply take the lock before calling
`get_and_clear_pending_msg_events`, solving the race.
// buffer by doing things like announcing channels on another node. We should be willing to
// drop optional-ish messages when send buffers get full!
+ let mut peers_lock = self.peers.lock().unwrap();
let mut events_generated = self.message_handler.chan_handler.get_and_clear_pending_msg_events();
events_generated.append(&mut self.message_handler.route_handler.get_and_clear_pending_msg_events());
- let mut peers_lock = self.peers.lock().unwrap();
let peers = &mut *peers_lock;
for event in events_generated.drain(..) {
macro_rules! get_peer_for_forwarding {
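
To make the ordering guarantee concrete, here is a minimal standalone
sketch of why taking the `peers` lock before draining pending events
serializes the two steps. `EventSource`, `Peers`, and the `u64` events
are hypothetical stand-ins for the real message-event plumbing, not
the actual LDK types:

use std::sync::{Arc, Mutex};
use std::thread;

struct EventSource {
    // Stands in for the handlers' pending MessageSendEvents.
    pending: Mutex<Vec<u64>>,
}

struct Peers {
    // Stands in for the per-peer outbound message buffers.
    send_queue: Vec<u64>,
}

struct Handler {
    peers: Mutex<Peers>,
    source: EventSource,
}

impl Handler {
    fn process_events(&self) {
        // Take the peers lock *first*, so draining pending events and
        // pushing them onto the send queue happen atomically with
        // respect to other threads calling process_events().
        let mut peers = self.peers.lock().unwrap();
        let events: Vec<u64> =
            self.source.pending.lock().unwrap().drain(..).collect();
        for ev in events {
            peers.send_queue.push(ev);
        }
    }
}

fn main() {
    let handler = Arc::new(Handler {
        peers: Mutex::new(Peers { send_queue: Vec::new() }),
        source: EventSource { pending: Mutex::new((0..4).collect()) },
    });
    // Two threads race into process_events(), as in the bug report.
    let threads: Vec<_> = (0..2)
        .map(|_| {
            let h = Arc::clone(&handler);
            thread::spawn(move || h.process_events())
        })
        .collect();
    for t in threads {
        t.join().unwrap();
    }
    // Whichever thread wins the lock drains and enqueues all pending
    // events before the other can run, so order is always preserved.
    assert_eq!(handler.peers.lock().unwrap().send_queue, vec![0, 1, 2, 3]);
}

With the old ordering (draining pending events before locking
`peers`), two threads could each drain a batch and then interleave
their pushes; holding the lock across both steps makes
drain-then-push atomic.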