[peer_handler] Take the peers lock before getting messages to send
author Matt Corallo <git@bluematt.me>
Wed, 21 Apr 2021 21:50:41 +0000 (21:50 +0000)
committer Matt Corallo <git@bluematt.me>
Wed, 21 Apr 2021 22:03:45 +0000 (22:03 +0000)
Previously, if a user simultaneously called
`PeerManager::process_events()` from two threads, we'd race, which
ended up sending messages out-of-order in the real world.
Specifically, we first called `get_and_clear_pending_msg_events`,
then took the `peers` lock and pushed the messages we got into the
sending queue. Two threads could each fetch some set of messages to
send, but then race each other into the `peers` lock and enqueue
their batches in an arbitrary interleaving.

Because we already hold the `peers` lock when calling most message
handler functions, we can simply take the lock before calling
`get_and_clear_pending_msg_events` without introducing any new
lock-order hazards, solving the race: the fetch and the enqueue now
happen atomically with respect to other callers.
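
As a minimal, self-contained sketch (with hypothetical stand-in
types and u32 "messages", not the real `PeerManager` internals), the
racy fetch-then-lock pattern and the lock-then-fetch reordering this
commit applies look roughly like this:

    use std::sync::{Arc, Mutex};
    use std::thread;

    // Hypothetical stand-in for the message handler; the name and
    // contents are illustrative only.
    struct Handler {
        pending: Mutex<Vec<u32>>,
    }

    impl Handler {
        // Drains all pending messages, in the spirit of
        // get_and_clear_pending_msg_events.
        fn get_and_clear_pending_msg_events(&self) -> Vec<u32> {
            std::mem::take(&mut *self.pending.lock().unwrap())
        }
    }

    fn main() {
        let handler = Arc::new(Handler { pending: Mutex::new((0..8).collect()) });
        let send_queue: Arc<Mutex<Vec<u32>>> = Arc::new(Mutex::new(Vec::new()));

        let threads: Vec<_> = (0..2).map(|_| {
            let handler = Arc::clone(&handler);
            let send_queue = Arc::clone(&send_queue);
            thread::spawn(move || {
                // Racy ordering: fetch first, lock second. Two
                // threads can each drain a batch here...
                let events = handler.get_and_clear_pending_msg_events();
                // ...then win the send-queue lock in either order, so
                // the batches interleave arbitrarily.
                let mut queue = send_queue.lock().unwrap();
                queue.extend(events);

                // Fixed ordering (this commit): take the lock
                // *before* draining, so fetching and enqueueing are
                // atomic with respect to other callers:
                //
                //     let mut queue = send_queue.lock().unwrap();
                //     let events = handler.get_and_clear_pending_msg_events();
                //     queue.extend(events);
            })
        }).collect();

        for t in threads { t.join().unwrap(); }
        println!("{:?}", send_queue.lock().unwrap());
    }

The actual change in the diff below is exactly this reordering: the
`peers` lock acquisition moves above the call to
`get_and_clear_pending_msg_events`.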

lightning/src/ln/peer_handler.rs

index 9488a34db0d7e04d322b787bc47501be5cba0c80..d4eb9eae5e7910f7a0b4de52ab6591794210f8ea 100644
@@ -1020,9 +1020,9 @@ impl<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, L: Deref> PeerManager<D
                        // buffer by doing things like announcing channels on another node. We should be willing to
                        // drop optional-ish messages when send buffers get full!
 
+                       let mut peers_lock = self.peers.lock().unwrap();
                        let mut events_generated = self.message_handler.chan_handler.get_and_clear_pending_msg_events();
                        events_generated.append(&mut self.message_handler.route_handler.get_and_clear_pending_msg_events());
-                       let mut peers_lock = self.peers.lock().unwrap();
                        let peers = &mut *peers_lock;
                        for event in events_generated.drain(..) {
                                macro_rules! get_peer_for_forwarding {