`lightning-block-sync`'s REST and RPC clients both cache a connected
client to avoid an extra connection round-trip on each request.
Because only one caller can use the connection at a time, the cached
connection is `take()`n out of an `Option` behind a `Mutex`; if none
is present, we call `HttpClient::connect` to build a new one.
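As a rough sketch of that pattern (a minimal synchronous version with stand-in types; the real clients are async and the names here, like `RestClient::request`, are illustrative only):

```rust
use std::sync::Mutex;

// Hypothetical stand-ins for the real HttpClient/endpoint types.
struct HttpClient;

impl HttpClient {
	fn connect(_endpoint: &str) -> std::io::Result<HttpClient> {
		Ok(HttpClient) // stand-in for the real TCP connect
	}
}

struct RestClient {
	endpoint: String,
	// Only one request uses the connection at a time, so the cached
	// client lives in an Option behind a Mutex: take it out to use
	// it, put it back when done.
	client: Mutex<Option<HttpClient>>,
}

impl RestClient {
	fn request(&self) -> std::io::Result<()> {
		// Note: written as one statement here, mirroring the pre-fix
		// shape described next.
		let client = if let Some(client) = self.client.lock().unwrap().take() {
			client
		} else {
			HttpClient::connect(&self.endpoint)?
		};
		// ... issue the request over `client` ...
		*self.client.lock().unwrap() = Some(client); // return it to the cache
		Ok(())
	}
}
```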
However, this logic was all written as a single statement, so the
temporary `MutexGuard` lives until the end of that statement and the
client-cache lock is held across the `HttpClient::connect` call. This
can turn into quite a bit of contention when using these clients as
gossip verifiers, as we create many requests back-to-back during
startup.
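The culprit is Rust's temporary-lifetime rule: a temporary created in the scrutinee of an `if let` lives until the end of the whole `if`/`else` expression. A minimal, self-contained reproduction of the difference (with a hypothetical `Client` type; note the 2024 edition shortens this temporary's lifetime, so the first assert reflects pre-2024 behavior):

```rust
use std::sync::Mutex;

struct Client;

fn main() {
	let cache: Mutex<Option<Client>> = Mutex::new(None);

	// One-statement shape (the old code): the temporary `MutexGuard`
	// returned by `lock()` lives until the end of the entire `let`
	// statement, so the lock is still held inside the `else` branch,
	// i.e. across the slow connect (Rust editions before 2024).
	let _client = if let Some(client) = cache.lock().unwrap().take() {
		client
	} else {
		assert!(cache.try_lock().is_err()); // lock still held here
		Client // stand-in for HttpClient::connect(..)
	};

	// Two-statement shape (the fix): the guard is a temporary of the
	// first statement and is dropped at its trailing semicolon,
	// before the connect in the `else` branch runs.
	let reserved_client = cache.lock().unwrap().take();
	let _client = if let Some(client) = reserved_client {
		client
	} else {
		assert!(cache.try_lock().is_ok()); // lock already released
		Client
	};
}
```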
I noticed this because my node only seemed to be saturating one core
during startup, and a backtrace showed several threads blocked on
this mutex while hitting a Bitcoin Core node over REST on the same
LAN (but not the same machine).
In the REST client:

```diff
 {
 	let host = format!("{}:{}", self.endpoint.host(), self.endpoint.port());
 	let uri = format!("{}/{}", self.endpoint.path().trim_end_matches("/"), resource_path);
-	let mut client = if let Some(client) = self.client.lock().unwrap().take() {
+	let reserved_client = self.client.lock().unwrap().take();
+	let mut client = if let Some(client) = reserved_client {
 		client
 	} else {
 		HttpClient::connect(&self.endpoint)?
 	};
```

And in the RPC client:

```diff
 	"id": &self.id.fetch_add(1, Ordering::AcqRel).to_string()
 });
-let mut client = if let Some(client) = self.client.lock().unwrap().take() {
+let reserved_client = self.client.lock().unwrap().take();
+let mut client = if let Some(client) = reserved_client {
 	client
 } else {
 	HttpClient::connect(&self.endpoint)?
 };
```
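The fix is simply to move the `take()` into its own statement: the
`MutexGuard` returned by `lock()` is then a temporary of that
statement and is dropped at its trailing semicolon, so only the taken
`Option<HttpClient>` survives and the lock is released before any
`HttpClient::connect` call runs.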