Class ProbabilisticScorer


  • public class ProbabilisticScorer
    extends Object
    [`Score`] implementation using channel success probability distributions.

    Channels are tracked with upper and lower liquidity bounds - when an HTLC fails at a channel, we learn that the upper-bound on the available liquidity is lower than the amount of the HTLC. When a payment is forwarded through a channel (but fails later in the route), we learn the lower-bound on the channel's available liquidity must be at least the value of the HTLC.

    These bounds are then used to determine a success probability using the formula from *Optimally Reliable & Cheap Payment Flows on the Lightning Network* by Rene Pickhardt and Stefan Richter [[1]] (i.e. `(upper_bound - payment_amount) / (upper_bound - lower_bound)`).

    This probability is combined with the [`liquidity_penalty_multiplier_msat`] and [`liquidity_penalty_amount_multiplier_msat`] parameters to calculate a concrete penalty in milli-satoshis. The penalties, when added across all hops, have the property of being linear in terms of the entire path's success probability. This allows the router to directly compare penalties for different paths. See the documentation of those parameters for the exact formulas.

    The liquidity bounds are decayed by halving them every [`liquidity_offset_half_life`].

    Further, we track the history of our upper and lower liquidity bounds for each channel, allowing us to assign a second penalty (using [`historical_liquidity_penalty_multiplier_msat`] and [`historical_liquidity_penalty_amount_multiplier_msat`]) based on the same probability formula, but using the history of a channel rather than our latest estimates for the liquidity bounds.

    # Note

    Mixing the `no-std` feature between serialization and deserialization results in undefined behavior.
[1]: https://arxiv.org/abs/2107.05322
[`liquidity_penalty_multiplier_msat`]: ProbabilisticScoringParameters::liquidity_penalty_multiplier_msat
[`liquidity_penalty_amount_multiplier_msat`]: ProbabilisticScoringParameters::liquidity_penalty_amount_multiplier_msat
[`liquidity_offset_half_life`]: ProbabilisticScoringParameters::liquidity_offset_half_life
[`historical_liquidity_penalty_multiplier_msat`]: ProbabilisticScoringParameters::historical_liquidity_penalty_multiplier_msat
[`historical_liquidity_penalty_amount_multiplier_msat`]: ProbabilisticScoringParameters::historical_liquidity_penalty_amount_multiplier_msat
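The success-probability formula above can be illustrated with a small standalone sketch. This is not LDK code; the class and method names here are hypothetical, and the clamping behavior at the bounds is an assumption made for illustration (amounts at or below the lower bound are treated as certain to succeed, amounts at or above the upper bound as certain to fail):

```java
// Hypothetical sketch of the Pickhardt-Richter success-probability formula
// described in the class docs; not part of the LDK API.
public class SuccessProbability {
    // P(success) = (upper_bound - amount) / (upper_bound - lower_bound),
    // clamped to [0, 1] outside the tracked bounds (an assumption for
    // this sketch).
    static double successProbability(long upperBoundMsat, long lowerBoundMsat, long amountMsat) {
        if (amountMsat <= lowerBoundMsat) return 1.0;
        if (amountMsat >= upperBoundMsat) return 0.0;
        return (double) (upperBoundMsat - amountMsat) / (double) (upperBoundMsat - lowerBoundMsat);
    }

    public static void main(String[] args) {
        // With bounds [0, 1_000_000) msat, a 250_000 msat payment is
        // estimated to succeed with probability 0.75.
        System.out.println(successProbability(1_000_000, 0, 250_000)); // 0.75
    }
}
```

Note how narrowing the bounds (e.g. after observing a failed HTLC) lowers the estimated probability for amounts near the new upper bound.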
    • Method Detail

      • debug_log_liquidity_stats

        public void debug_log_liquidity_stats()
        Dump the contents of this scorer into the configured logger. Note that this writes roughly one line per channel for which we have a liquidity estimate, which may be a substantial amount of log output.
      • estimated_channel_liquidity_range

        public Option_C2Tuple_u64u64ZZ estimated_channel_liquidity_range​(long scid,
                                                                         NodeId target)
        Query the estimated minimum and maximum liquidity available for sending a payment over the channel with `scid` towards the given `target` node.
      • historical_estimated_channel_liquidity_probabilities

        public Option_C2Tuple_EightU16sEightU16sZZ historical_estimated_channel_liquidity_probabilities​(long scid,
                                                                                                        NodeId target)
        Query the historical estimated minimum and maximum liquidity available for sending a payment over the channel with `scid` towards the given `target` node.

        Returns two sets of 8 buckets. The first set describes the octiles for lower-bound liquidity estimates, the second set describes the octiles for upper-bound liquidity estimates. Each bucket describes the relative frequency at which we've seen a liquidity bound in the octile relative to the channel's total capacity, on an arbitrary scale. Because the values are slowly decayed, more recent data points are weighted more heavily than older data points.

        When scoring, the estimated probability that an upper-/lower-bound lies in a given octile relative to the channel's total capacity is calculated by dividing that bucket's value by the total of all buckets for the given bound. For example, a value of `[0, 0, 0, 0, 0, 0, 0, 32]` indicates that we believe the probability of a bound being in the top octile to be 100%, and have never (recently) seen it in any other octiles. A value of `[31, 0, 0, 0, 0, 0, 0, 32]` indicates we've seen the bound in both the top and bottom octiles, with roughly similar (recent) frequency.

        Because the data points are decayed slowly over time, values will eventually return to `Some(([0; 8], [0; 8]))`.
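The bucket-to-probability calculation described above can be sketched as follows. This is a standalone illustration, not LDK code; the class and method names are hypothetical:

```java
// Hypothetical sketch of normalizing the 8 octile buckets returned by
// historical_estimated_channel_liquidity_probabilities into probabilities;
// not part of the LDK API.
public class OctileBuckets {
    // The estimated probability that a bound lies in octile i is
    // buckets[i] divided by the sum of all eight bucket values.
    static double[] bucketProbabilities(int[] buckets) {
        long total = 0;
        for (int b : buckets) total += b;
        double[] probs = new double[buckets.length];
        if (total == 0) return probs; // no (recent) data: all zeros
        for (int i = 0; i < buckets.length; i++) {
            probs[i] = (double) buckets[i] / total;
        }
        return probs;
    }

    public static void main(String[] args) {
        // All mass in the top octile: probability 1.0 there.
        int[] top = {0, 0, 0, 0, 0, 0, 0, 32};
        System.out.println(bucketProbabilities(top)[7]); // 1.0

        // Mass split between bottom and top octiles: roughly 0.5 each
        // (31/63 and 32/63).
        int[] split = {31, 0, 0, 0, 0, 0, 0, 32};
        System.out.println(bucketProbabilities(split)[0]);
    }
}
```

Because the raw bucket values are on an arbitrary, slowly decaying scale, only their ratios are meaningful, which is why the division by the total is required before interpreting them as probabilities.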
      • add_banned

        public void add_banned​(NodeId node_id)
        Marks the node with the given `node_id` as banned, i.e., it will be avoided during path finding.
      • remove_banned

        public void remove_banned​(NodeId node_id)
        Removes the node with the given `node_id` from the list of nodes to avoid.
      • set_manual_penalty

        public void set_manual_penalty​(NodeId node_id,
                                       long penalty)
        Sets a manual penalty for the given node.
      • remove_manual_penalty

        public void remove_manual_penalty​(NodeId node_id)
        Removes the node with the given `node_id` from the list of manual penalties.
      • clear_manual_penalties

        public void clear_manual_penalties()
        Clears the list of manual penalties that are applied during path finding.
      • as_Score

        public Score as_Score()
        Constructs a new Score which calls the relevant methods on this_arg. This copies the `inner` pointer in this_arg and thus the returned Score must be freed before this_arg is freed.
      • write

        public byte[] write()
        Serialize the ProbabilisticScorer object into a byte array which can be read by ProbabilisticScorer_read.