* You MUST ensure that no ChannelMonitors for a given channel anywhere contain out-of-date
* information and are actively monitoring the chain.
*
- * Pending Events or updated HTLCs which have not yet been read out by
- * get_and_clear_pending_monitor_events or get_and_clear_pending_events are serialized to disk and
- * reloaded at deserialize-time. Thus, you must ensure that, when handling events, all events
- * gotten are fully handled before re-serializing the new state.
- *
* Note that the deserializer is only implemented for (BlockHash, ChannelMonitor), which
* tells you the last block hash which was block_connect()ed. You MUST rescan any blocks along
* the \"reorg path\" (ie disconnecting blocks until you find a common ancestor from both the
}
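The "reorg path" rescan described above can be sketched schematically. This is a minimal illustration of the idea, not the real bindings: all names (`ReorgPath`, `blocksToReplay`) and the string block hashes are hypothetical stand-ins; a real integration compares actual block hashes from its chain source and replays blocks through `block_connected`.

```java
import java.util.*;

// Schematic sketch (hypothetical helper, not part of org.ldk): after
// deserializing a ChannelMonitor you are told the last block hash that was
// block_connect()ed. To resume monitoring, walk back until you find a block
// still present in the current best chain (the common ancestor), then replay
// every block after it.
public class ReorgPath {
    // Returns the blocks (oldest-first) that must be replayed, given the
    // monitor's stale chain view and the current chain. Both lists are
    // ordered genesis-first; hashes are illustrative strings.
    public static List<String> blocksToReplay(List<String> monitorChain,
                                              List<String> currentChain) {
        int fork = 0;
        while (fork < monitorChain.size() && fork < currentChain.size()
                && monitorChain.get(fork).equals(currentChain.get(fork))) {
            fork++;
        }
        // Everything in the current chain past the common ancestor must be
        // (re-)connected so the monitor catches up.
        return currentChain.subList(fork, currentChain.size());
    }

    public static void main(String[] args) {
        List<String> monitor = Arrays.asList("g", "a", "b", "c");   // stale tip "c"
        List<String> current = Arrays.asList("g", "a", "b2", "c2"); // reorged chain
        System.out.println(blocksToReplay(monitor, current)); // [b2, c2]
    }
}
```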
/**
- * Gets the list of pending events which were generated by previous actions, clearing the list
- * in the process.
+ * Processes [`SpendableOutputs`] events produced from each [`ChannelMonitor`] upon maturity.
+ *
+ * For channels featuring anchor outputs, this method will also process [`BumpTransaction`]
+ * events produced from each [`ChannelMonitor`] while there is a balance to claim onchain
+ * within each channel. As the confirmation of a commitment transaction may be critical to the
+ * safety of funds, we recommend invoking this every 30 seconds, or lower if running in an
+ * environment with spotty connections, like on mobile.
*
- * This is called by the [`EventsProvider::process_pending_events`] implementation for
- * [`ChainMonitor`].
+ * An [`EventHandler`] may safely call back to the provider, though this shouldn't be needed in
+ * order to handle these events.
*
- * [`EventsProvider::process_pending_events`]: crate::events::EventsProvider::process_pending_events
- * [`ChainMonitor`]: crate::chain::chainmonitor::ChainMonitor
+ * [`SpendableOutputs`]: crate::events::Event::SpendableOutputs
+ * [`BumpTransaction`]: crate::events::Event::BumpTransaction
*/
- public Event[] get_and_clear_pending_events() {
- long[] ret = bindings.ChannelMonitor_get_and_clear_pending_events(this.ptr);
+ public void process_pending_events(org.ldk.structs.EventHandler handler) {
+ bindings.ChannelMonitor_process_pending_events(this.ptr, handler.ptr);
Reference.reachabilityFence(this);
- int ret_conv_7_len = ret.length;
- Event[] ret_conv_7_arr = new Event[ret_conv_7_len];
- for (int h = 0; h < ret_conv_7_len; h++) {
- long ret_conv_7 = ret[h];
- org.ldk.structs.Event ret_conv_7_hu_conv = org.ldk.structs.Event.constr_from_ptr(ret_conv_7);
- if (ret_conv_7_hu_conv != null) { ret_conv_7_hu_conv.ptrs_to.add(this); };
- ret_conv_7_arr[h] = ret_conv_7_hu_conv;
- }
- return ret_conv_7_arr;
+ Reference.reachabilityFence(handler);
+ if (this != null) { this.ptrs_to.add(handler); };
}
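The change above replaces the pull-style `get_and_clear_pending_events()` (return an array, caller iterates) with a push-style `process_pending_events(handler)` (caller supplies an `EventHandler`, the monitor drains its queue into it). A minimal self-contained sketch of that callback pattern, using hypothetical stand-in types rather than the real `org.ldk` classes:

```java
import java.util.*;

// Schematic sketch of the new callback-style API. EventHandler and Monitor
// here are illustrative stand-ins, not the org.ldk types.
public class ProcessEventsSketch {
    interface EventHandler { void handleEvent(String event); }

    static class Monitor {
        private final Deque<String> pending = new ArrayDeque<>();
        void queue(String event) { pending.add(event); }

        // Mirrors the shape of ChannelMonitor.process_pending_events: hand
        // every queued event to the handler, clearing the queue as we go.
        void processPendingEvents(EventHandler handler) {
            String ev;
            while ((ev = pending.poll()) != null) handler.handleEvent(ev);
        }
    }

    public static void main(String[] args) {
        Monitor m = new Monitor();
        m.queue("SpendableOutputs");
        m.queue("BumpTransaction");
        List<String> seen = new ArrayList<>();
        m.processPendingEvents(seen::add); // events arrive via the callback
        System.out.println(seen); // [SpendableOutputs, BumpTransaction]
    }
}
```

One consequence of the push style is visible in the binding itself: the handler is retained via `ptrs_to.add(handler)` so the native side can call back into it, whereas the old API only had to convert a returned array.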
/**
* Returns the set of txids that should be monitored for re-organization out of the chain.
*/
- public TwoTuple_TxidBlockHashZ[] get_relevant_txids() {
+ public TwoTuple_TxidCOption_BlockHashZZ[] get_relevant_txids() {
long[] ret = bindings.ChannelMonitor_get_relevant_txids(this.ptr);
Reference.reachabilityFence(this);
- int ret_conv_25_len = ret.length;
- TwoTuple_TxidBlockHashZ[] ret_conv_25_arr = new TwoTuple_TxidBlockHashZ[ret_conv_25_len];
- for (int z = 0; z < ret_conv_25_len; z++) {
- long ret_conv_25 = ret[z];
- TwoTuple_TxidBlockHashZ ret_conv_25_hu_conv = new TwoTuple_TxidBlockHashZ(null, ret_conv_25);
- if (ret_conv_25_hu_conv != null) { ret_conv_25_hu_conv.ptrs_to.add(this); };
- ret_conv_25_arr[z] = ret_conv_25_hu_conv;
+ int ret_conv_34_len = ret.length;
+ TwoTuple_TxidCOption_BlockHashZZ[] ret_conv_34_arr = new TwoTuple_TxidCOption_BlockHashZZ[ret_conv_34_len];
+ for (int i = 0; i < ret_conv_34_len; i++) {
+ long ret_conv_34 = ret[i];
+ TwoTuple_TxidCOption_BlockHashZZ ret_conv_34_hu_conv = new TwoTuple_TxidCOption_BlockHashZZ(null, ret_conv_34);
+ if (ret_conv_34_hu_conv != null) { ret_conv_34_hu_conv.ptrs_to.add(this); };
+ ret_conv_34_arr[i] = ret_conv_34_hu_conv;
}
- return ret_conv_25_arr;
+ return ret_conv_34_arr;
}
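The return type change above means each relevant txid now carries an *optional* block hash (`COption_BlockHashZ`) instead of a guaranteed one. A caller therefore has to branch on whether the confirming block is known. A hedged sketch of that consumption pattern, modeling the tuple as `Map.Entry<String, Optional<String>>` rather than the real generated tuple class:

```java
import java.util.*;

// Schematic sketch (hypothetical helper, not org.ldk code): partition the
// (txid, Optional<blockHash>) pairs by whether the confirming block is
// known, so the caller can watch known blocks for reorgs and rescan for
// the rest.
public class RelevantTxids {
    public static Map<Boolean, List<String>> partitionByKnownBlock(
            List<Map.Entry<String, Optional<String>>> relevant) {
        Map<Boolean, List<String>> out = new HashMap<>();
        out.put(true, new ArrayList<>());
        out.put(false, new ArrayList<>());
        for (Map.Entry<String, Optional<String>> e : relevant)
            out.get(e.getValue().isPresent()).add(e.getKey());
        return out;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Optional<String>>> relevant = Arrays.asList(
            new AbstractMap.SimpleEntry<>("txid1", Optional.of("block1")),
            new AbstractMap.SimpleEntry<>("txid2", Optional.<String>empty()));
        System.out.println(partitionByKnownBlock(relevant));
        // {false=[txid2], true=[txid1]}
    }
}
```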
/**