
Conversation

@valentinewallace (Contributor) commented Feb 9, 2026

Previously, the InboundUpdateAdd::Forwarded enum variant contained an
HTLCPreviousHopData, which had a lot of fields that were redundant with the
outer InboundHTLCOutput/Channel structs. Here we dedup those fields, which is
important because the pending InboundUpdateAdds are persisted whenever the
ChannelManager is persisted.

Based on #4303
Partially addresses #4286

valentinewallace and others added 14 commits February 6, 2026 14:02
We recently added support for reconstructing
ChannelManager::decode_update_add_htlcs on startup, using data present in the
Channels. However, we failed to prune HTLCs from this rebuilt map if a given
inbound HTLC had already been forwarded to the outbound edge and was sitting in
the outbound holding cell. This bug could have caused us to double-forward
HTLCs; fortunately, it never shipped.

As part of fixing this bug, we clean up the overall pruning approach by:
1. If the Channel is open, then it is the source of truth for what HTLCs are
outbound+pending (including pending in the holding cell)
2. If the Channel is closed, then the corresponding ChannelMonitor is the
source of truth for what HTLCs are outbound+pending

Previously, we would only consider the monitor's pending HTLCs, which ignored
holding cell HTLCs.
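The two-case rule above can be sketched as follows. This is an illustrative sketch only; the struct and field names (`pending_outbound_htlc_ids`, `holding_cell_htlc_ids`, etc.) are hypothetical, not LDK's actual types.

```rust
// Hypothetical sketch of the pruning rule: which HTLCs count as
// outbound+pending, so already-forwarded HTLCs can be pruned from the
// rebuilt decode_update_add_htlcs map. Names are illustrative.

struct Channel {
    is_open: bool,
    pending_outbound_htlc_ids: Vec<u64>,
    holding_cell_htlc_ids: Vec<u64>,
}

struct ChannelMonitor {
    pending_outbound_htlc_ids: Vec<u64>,
}

fn outbound_pending_htlcs(chan: &Channel, monitor: &ChannelMonitor) -> Vec<u64> {
    if chan.is_open {
        // Open channel: the Channel is the source of truth, and holding cell
        // HTLCs count as pending too (the previously missed case).
        let mut ids = chan.pending_outbound_htlc_ids.clone();
        ids.extend(chan.holding_cell_htlc_ids.iter().copied());
        ids
    } else {
        // Closed channel: fall back to the ChannelMonitor's pending set.
        monitor.pending_outbound_htlc_ids.clone()
    }
}

fn main() {
    let chan = Channel {
        is_open: true,
        pending_outbound_htlc_ids: vec![1],
        holding_cell_htlc_ids: vec![2],
    };
    let monitor = ChannelMonitor { pending_outbound_htlc_ids: vec![1] };
    // Holding-cell HTLC 2 is included, so it won't be double-forwarded.
    assert_eq!(outbound_pending_htlcs(&chan, &monitor), vec![1, 2]);
}
```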

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
This cleanup falls out of the changes made in the previous commit. Separated
out here for reviewability.
We recently added support for reconstructing
ChannelManager::decode_update_add_htlcs on startup, using data present in the
Channels. However, we failed to prune HTLCs from this rebuilt map if a given
HTLC was already forwarded+removed from the outbound edge and resolved in the
inbound edge's holding cell.

Here we fix this bug, which could have caused us to double-forward inbound
HTLCs; fortunately, it never shipped.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
In 0.3+, we are taking steps to remove the requirement of regularly persisting
the ChannelManager and instead rebuild the set of HTLC forwards (and the
manager generally) from Channel{Monitor} data.

We previously merged support for reconstructing the
ChannelManager::decode_update_add_htlcs map from channel data, using a new
HTLC onion field that will be present for inbound HTLCs received on 0.3+ only.

However, we now want to add support for pruning this field once it's no longer
needed so it doesn't get persisted every time the manager gets persisted. At
the same time, in a future LDK version we need to detect whether the field was
ever present to begin with to prevent upgrading with legacy HTLCs present.

We accomplish both by converting the plain update_add option that was
previously serialized to an enum that can indicate whether the HTLC is from
0.2- versus 0.3+-with-onion-pruned (a variant for the latter is added in the
next commit).

Actual pruning of the new update_add field is added in the next commit.
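The Option-to-enum conversion described above can be sketched roughly as below. The variant and type names are illustrative assumptions, not LDK's actual API; the point is only the shape of the change: a plain `Option` cannot distinguish "never had an onion" from "onion pruned", while an enum can.

```rust
// Hypothetical sketch: what was previously a serialized
// Option<update_add> becomes an enum so the two "no onion" cases are
// distinguishable. All names are illustrative.

enum InboundUpdateAdd {
    /// HTLC received on LDK 0.2 or earlier; no onion was ever stored
    /// (previously the `None` case of the serialized Option).
    Legacy,
    /// HTLC received on 0.3+ with its onion still held
    /// (previously the `Some` case).
    Present(Vec<u8>),
    /// HTLC received on 0.3+ whose onion was pruned after the HTLC was
    /// forwarded (variant added in the next commit).
    Pruned,
}

impl InboundUpdateAdd {
    /// A future LDK version can detect legacy HTLCs and refuse to upgrade
    /// while any are present.
    fn blocks_upgrade(&self) -> bool {
        matches!(self, InboundUpdateAdd::Legacy)
    }
}

fn main() {
    assert!(InboundUpdateAdd::Legacy.blocks_upgrade());
    assert!(!InboundUpdateAdd::Pruned.blocks_upgrade());
    assert!(!InboundUpdateAdd::Present(vec![0u8; 4]).blocks_upgrade());
}
```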
We store inbound committed HTLCs' onions in Channels for use in reconstructing
the pending HTLC set on ChannelManager read. If an HTLC has been forwarded to
the outbound edge, we no longer need to persist the inbound edge's onion and
can prune it here.
We recently merged (test-only, for now) support for the ChannelManager
reconstructing its set of pending HTLCs from Channel{Monitor} data, rather than
using its own persisted maps. But because we want test coverage of both the new
reconstruction codepaths as well as the old persisted map codepaths,
in tests we would decide between those two sets of codepaths randomly.

We now want to add some tests that require using the new codepaths, so here we
add an option to explicitly set whether to reconstruct or not rather than
choosing randomly.
Cleans it up a bit in preparation for adding a new variant in the next commit.
In a recent commit, we added support for pruning an inbound HTLC's persisted
onion once the HTLC has been irrevocably forwarded to the outbound edge.

Here, we add a check on startup that those inbound HTLCs were actually handled.
Specifically, we check that the inbound HTLC is either (a) currently present in
the outbound edge or (b) was removed via claim. If neither of those are true,
we infer that the HTLC was removed from the outbound edge via fail and fail the
inbound HTLC backwards.
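The inference described above reduces to a small decision rule. A minimal sketch, with a hypothetical function name (LDK's actual startup logic is more involved):

```rust
// Sketch of the startup check: an inbound HTLC whose onion was pruned was
// marked irrevocably forwarded. If it is neither (a) still present in the
// outbound edge nor (b) removed via claim, we infer the outbound edge
// failed it and fail the inbound HTLC backwards. Name is hypothetical.

fn should_fail_backwards(present_in_outbound_edge: bool, removed_via_claim: bool) -> bool {
    !(present_in_outbound_edge || removed_via_claim)
}

fn main() {
    assert!(!should_fail_backwards(true, false));  // (a) still in flight
    assert!(!should_fail_backwards(false, true));  // (b) removed via claim
    assert!(should_fail_backwards(false, false));  // inferred outbound fail
}
```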
In 0.3+, we are taking steps to remove the requirement of regularly persisting
the ChannelManager and instead rebuild the set of HTLC forwards (and the
manager generally) from Channel{Monitor} data.

We previously merged support for reconstructing the
ChannelManager::decode_update_add_htlcs map from channel data, using a new
HTLC onion field that will be present for inbound HTLCs received on 0.3+ only.
The plan is that in upcoming LDK versions, the manager will reconstruct this
map and the other forward/claimable/pending HTLC maps will automatically
repopulate themselves on the next call to process_pending_htlc_forwards.

As such, once we're in a future version that reconstructs the pending HTLC set,
we can stop persisting the legacy ChannelManager maps, such as forward_htlcs
and pending_intercepted_htlcs, since they will never be used.

For 0.3 to be compatible with this future version, in this commit we detect
that the manager was last written on a version of LDK that doesn't persist the
legacy maps. In that case, we don't try to read the old forwards map and run
the new reconstruction logic only.
Useful when using these macros in lightning-tests/upgrade_downgrade_tests
In the next commit, we want to dedup fields between the
InboundUpdateAdd::Forwarded's HTLCPreviousHopData and the outer
InboundHTLCOutput/Channel structs, since many fields are duplicated in both
places at the moment. As part of doing this cleanly, we first refactor the
method that retrieves these InboundUpdateAdds for reconstructing the set of
pending HTLCs during ChannelManager deserialization.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously, the InboundUpdateAdd::Forwarded enum variant contained an
HTLCPreviousHopData, which had a lot of fields that were redundant with the
outer InboundHTLCOutput/Channel structs. Here we dedup those fields, which is
important because the pending InboundUpdateAdds are persisted whenever the
ChannelManager is persisted.
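A before/after sketch of the dedup described above. The field names here are hypothetical, chosen only to show the shape of the change (which fields are recoverable from the outer structs is an assumption, not LDK's actual field list):

```rust
// Illustrative before/after of the dedup. Field names are hypothetical.

// Before: the Forwarded variant carried a full previous-hop struct whose
// identifiers duplicated data already on the outer
// InboundHTLCOutput/Channel structs.
struct PrevHopSketch {
    channel_id: [u8; 32],     // redundant: known from the outer Channel
    htlc_id: u64,             // redundant: known from the outer HTLC
    shared_secret: [u8; 32],  // not recoverable from the outer structs
}
enum InboundUpdateAddBefore {
    Forwarded(PrevHopSketch),
}

// After: only the non-redundant data is kept, shrinking what is written on
// every ChannelManager persist.
enum InboundUpdateAddAfter {
    Forwarded { shared_secret: [u8; 32] },
}

fn main() {
    use std::mem::size_of;
    // The deduped variant is smaller (in-memory size used as a rough proxy
    // for the persisted footprint).
    assert!(size_of::<InboundUpdateAddAfter>() < size_of::<InboundUpdateAddBefore>());
}
```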
@valentinewallace valentinewallace added this to the 0.3 milestone Feb 9, 2026
@valentinewallace valentinewallace self-assigned this Feb 9, 2026
@valentinewallace valentinewallace added the weekly goal Someone wants to land this this week label Feb 9, 2026
@ldk-reviews-bot

👋 Hi! I see this is a draft PR.
I'll wait to assign reviewers until you mark it as ready for review.
Just convert it out of draft status when you're ready for review!


codecov bot commented Feb 9, 2026

Codecov Report

❌ Patch coverage is 86.94639% with 56 lines in your changes missing coverage. Please review.
✅ Project coverage is 86.03%. Comparing base (1d90fce) to head (0d38512).
⚠️ Report is 16 commits behind head on main.

Files with missing lines                    Patch %   Lines
lightning/src/ln/channelmanager.rs          86.33%    30 Missing and 8 partials ⚠️
lightning/src/ln/channel.rs                 88.97%    12 Missing and 2 partials ⚠️
lightning/src/util/ser.rs                   40.00%    0 Missing and 3 partials ⚠️
lightning/src/ln/functional_test_utils.rs   94.73%    1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4405      +/-   ##
==========================================
- Coverage   86.03%   86.03%   -0.01%     
==========================================
  Files         156      156              
  Lines      103036   103374     +338     
  Branches   103036   103374     +338     
==========================================
+ Hits        88652    88941     +289     
- Misses      11876    11919      +43     
- Partials     2508     2514       +6     
Flag Coverage Δ
tests 86.03% <86.94%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.

