- * Note that for every new monitor, you **must** persist the new `ChannelMonitor`
- * to disk/backups. And, on every update, you **must** persist either the
- * `ChannelMonitorUpdate` or the updated monitor itself. Otherwise, there is risk
- * of situations such as revoking a transaction, then crashing before this
- * revocation can be persisted, then unintentionally broadcasting a revoked
- * transaction and losing money. This is a risk because previous channel states
- * are toxic, so it's important that whatever channel state is persisted is
- * kept up-to-date.
+ * Persistence can happen in one of two ways - synchronously, completing before the trait
+ * method calls return, or asynchronously, in the background.
+ *
+ * # For those implementing synchronous persistence
+ *
+ * If persistence completes fully (including any relevant `fsync()` calls), the implementation
+ * should return [`ChannelMonitorUpdateStatus::Completed`], indicating normal channel operation
+ * should continue.
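+ *
+ * For example, a synchronous implementation might look like the following sketch, in which
+ * the `write_monitor_to_disk` helper and its error type are hypothetical:
+ *
+ * ```ignore
+ * fn persist_new_channel(
+ *     &self, channel_id: OutPoint, data: &ChannelMonitor<ChannelSigner>,
+ *     _update_id: MonitorUpdateId,
+ * ) -> ChannelMonitorUpdateStatus {
+ *     // Write the full monitor and fsync before returning, so the state is
+ *     // durable by the time we report completion.
+ *     match self.write_monitor_to_disk(channel_id, data) {
+ *         Ok(()) => ChannelMonitorUpdateStatus::Completed,
+ *         // On failure, fall back to background retries as described below.
+ *         Err(_) => ChannelMonitorUpdateStatus::InProgress,
+ *     }
+ * }
+ * ```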
+ *
+ * If persistence fails for some reason, implementations should consider returning
+ * [`ChannelMonitorUpdateStatus::InProgress`] and retrying all pending persistence operations in
+ * the background with [`ChainMonitor::list_pending_monitor_updates`] and
+ * [`ChainMonitor::get_monitor`].
+ *
+ * Once a full [`ChannelMonitor`] has been persisted, all pending updates for that channel can
+ * be marked as complete via [`ChainMonitor::channel_monitor_updated`].
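+ *
+ * A background retry might thus look like the following sketch, where `store` and its
+ * `write_monitor_to_disk` method are hypothetical:
+ *
+ * ```ignore
+ * // Periodically retry all still-pending persistence operations.
+ * for (funding_txo, update_ids) in chain_monitor.list_pending_monitor_updates() {
+ *     // Re-persisting the full, current monitor covers every pending update for
+ *     // the channel, so each pending update can then be marked complete.
+ *     if let Ok(monitor) = chain_monitor.get_monitor(funding_txo) {
+ *         if store.write_monitor_to_disk(funding_txo, &*monitor).is_ok() {
+ *             for update_id in update_ids {
+ *                 let _ = chain_monitor.channel_monitor_updated(funding_txo, update_id);
+ *             }
+ *         }
+ *     }
+ * }
+ * ```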
+ *
+ * If at some point no further progress can be made towards persisting the pending updates, the
+ * node should simply shut down.
+ *
+ * If persistence has failed and cannot be retried further (e.g. because of an outage),
+ * [`ChannelMonitorUpdateStatus::UnrecoverableError`] can be used, though doing so results in
+ * an immediate panic and future operations in LDK generally failing.
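+ *
+ * Putting the failure paths together, a synchronous implementation might classify errors as
+ * in the following sketch, where `write_update_to_disk` and its error's `is_transient` method
+ * are hypothetical:
+ *
+ * ```ignore
+ * fn update_persisted_channel(
+ *     &self, channel_id: OutPoint, update: Option<&ChannelMonitorUpdate>,
+ *     data: &ChannelMonitor<ChannelSigner>, _update_id: MonitorUpdateId,
+ * ) -> ChannelMonitorUpdateStatus {
+ *     match self.write_update_to_disk(channel_id, update, data) {
+ *         Ok(()) => ChannelMonitorUpdateStatus::Completed,
+ *         // Transient failure - report InProgress and retry in the background.
+ *         Err(e) if e.is_transient() => ChannelMonitorUpdateStatus::InProgress,
+ *         // Nothing left to try (e.g. the backing storage is gone for good).
+ *         Err(_) => ChannelMonitorUpdateStatus::UnrecoverableError,
+ *     }
+ * }
+ * ```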
+ *
+ * # For those implementing asynchronous persistence
+ *
+ * All calls should generally spawn a background task and immediately return
+ * [`ChannelMonitorUpdateStatus::InProgress`]. Once the update completes,
+ * [`ChainMonitor::channel_monitor_updated`] should be called with the corresponding
+ * [`MonitorUpdateId`].
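+ *
+ * For instance, an asynchronous implementation might look like the following sketch, which
+ * assumes a tokio-style runtime plus hypothetical cloneable handles to the store and the
+ * [`ChainMonitor`]:
+ *
+ * ```ignore
+ * fn persist_new_channel(
+ *     &self, channel_id: OutPoint, data: &ChannelMonitor<ChannelSigner>,
+ *     update_id: MonitorUpdateId,
+ * ) -> ChannelMonitorUpdateStatus {
+ *     // Serialize the monitor now - the borrowed monitor itself cannot be
+ *     // moved into a spawned task.
+ *     let serialized = data.encode();
+ *     let (store, chain_monitor) = (self.store.clone(), self.chain_monitor.clone());
+ *     tokio::spawn(async move {
+ *         // Retry until the write succeeds, then mark the update complete.
+ *         while store.write(channel_id, &serialized).await.is_err() {}
+ *         let _ = chain_monitor.channel_monitor_updated(channel_id, update_id);
+ *     });
+ *     ChannelMonitorUpdateStatus::InProgress
+ * }
+ * ```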
+ *
+ * Note that unlike the direct [`chain::Watch`] interface,
+ * [`ChainMonitor::channel_monitor_updated`] must be called once for *each* update which occurs.
+ *
+ * If at some point no further progress can be made towards persisting a pending update, the node
+ * should simply shut down. Until then, the background task should either loop indefinitely, or
+ * persistence should be regularly retried with [`ChainMonitor::list_pending_monitor_updates`]
+ * and [`ChainMonitor::get_monitor`] (note that if a full monitor is persisted all pending
+ * monitor updates may be marked completed).
+ *
+ * # Using remote watchtowers
+ *
+ * Watchtowers may be updated as a part of an implementation of this trait, utilizing the async
+ * update process described above while the watchtower is being updated. The following methods are
+ * provided for building transactions for a watchtower:
+ * [`ChannelMonitor::initial_counterparty_commitment_tx`],
+ * [`ChannelMonitor::counterparty_commitment_txs_from_update`],
+ * [`ChannelMonitor::sign_to_local_justice_tx`], [`TrustedCommitmentTransaction::revokeable_output_index`],
+ * [`TrustedCommitmentTransaction::build_to_local_justice_tx`].
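+ *
+ * For example, on each update a watchtower client might build and sign justice transactions
+ * as in the following sketch, where `monitor`, `update`, `feerate_per_kw`,
+ * `destination_script`, and the `watchtower` handle are all hypothetical:
+ *
+ * ```ignore
+ * // For each counterparty commitment transaction in the update, pre-build a
+ * // justice transaction the watchtower can broadcast if it is ever revoked.
+ * for counterparty_commitment_tx in monitor.counterparty_commitment_txs_from_update(&update) {
+ *     let trusted_tx = counterparty_commitment_tx.trust();
+ *     if let Some(output_idx) = trusted_tx.revokeable_output_index() {
+ *         let value = trusted_tx.built_transaction().transaction.output[output_idx].value;
+ *         let unsigned = trusted_tx
+ *             .build_to_local_justice_tx(feerate_per_kw, destination_script.clone())?;
+ *         // The justice transaction has a single input, hence input index 0.
+ *         let signed = monitor.sign_to_local_justice_tx(
+ *             unsigned, 0, value, counterparty_commitment_tx.commitment_number())?;
+ *         watchtower.store_justice_tx(signed);
+ *     }
+ * }
+ * ```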
+ *
+ * [`TrustedCommitmentTransaction::revokeable_output_index`]: crate::ln::chan_utils::TrustedCommitmentTransaction::revokeable_output_index
+ * [`TrustedCommitmentTransaction::build_to_local_justice_tx`]: crate::ln::chan_utils::TrustedCommitmentTransaction::build_to_local_justice_tx