
Fix RTP NACK responder buffers not released after RemoveTrack/Stop#405

Closed
ivanlukomskiy wants to merge 1 commit intopion:masterfrom
ivanlukomskiy:fix-responder-interceptor-resources-cleanup

Conversation

Contributor

@ivanlukomskiy ivanlukomskiy commented Feb 14, 2026

Description

Currently, the RTP packet history allocated by ResponderInterceptor is retained after a sender is removed (RemoveTrack) while the PeerConnection stays active. This causes gradual memory growth in applications that dynamically add and remove outbound streams.

This change ensures that per-stream RTP history is properly cleaned up during UnbindLocalStream, preventing unnecessary memory retention while keeping the PeerConnection alive.

Reference issue

Fixes #404 (https://github.com/pion/interceptor/issues/404)


codecov bot commented Feb 14, 2026

Codecov Report

❌ Patch coverage is 0% with 6 lines in your changes missing coverage. Please review.
✅ Project coverage is 79.71%. Comparing base (40d68d9) to head (7f62d2b).
⚠️ Report is 23 commits behind head on master.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| internal/rtpbuffer/rtpbuffer.go | 0.00% | 5 Missing ⚠️ |
| pkg/nack/responder_interceptor.go | 0.00% | 1 Missing ⚠️ |

❌ Your patch status has failed because the patch coverage (0.00%) is below the target coverage (70.00%). You can increase the patch coverage or adjust the target coverage.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #405      +/-   ##
==========================================
+ Coverage   79.69%   79.71%   +0.02%     
==========================================
  Files          82       88       +6     
  Lines        4063     4590     +527     
==========================================
+ Hits         3238     3659     +421     
- Misses        659      748      +89     
- Partials      166      183      +17     
| Flag | Coverage Δ |
|---|---|
| go | 79.71% <0.00%> (+0.02%) ⬆️ |


Contributor

@y-kawawa y-kawawa left a comment


@ivanlukomskiy
Thanks for the fix! I believe there's an issue with the Reset() implementation though.

Currently it sets packets to nil without calling Release():

```go
// Reset clears the RTPBuffer of all packets and resets its state.
func (r *RTPBuffer) Reset() {
    for i := range r.packets {
        r.packets[i] = nil // Release() not called
    }
}
```

RetainablePacket uses reference counting — when Release() decrements count to 0, it calls onRelease to return the header and payload buffer back to sync.Pool. Without calling Release(), the header (func1) and payload buffer (func2) allocated by PacketFactoryCopy are never returned to the pool.

This is consistent with how RTPBuffer.Add() already handles slot replacement:

```go
prevPacket := r.packets[idx]
if prevPacket != nil {
    prevPacket.Release() // existing code properly releases
}
r.packets[idx] = packet
```

Suggested fix:

```go
func (r *RTPBuffer) Reset() {
    for i := range r.packets {
        if r.packets[i] != nil {
            r.packets[i].Release()
            r.packets[i] = nil
        }
    }
    r.highestAdded = 0
    r.started = false
}
```

We hit the same issue described in #404 in production — heap profiles show steady accumulation of NewPacketFactoryCopy.func1 (headerPool) and func2 (payloadPool), which would not be resolved without the Release() call.

@y-kawawa
Contributor

@ivanlukomskiy

Thanks for working on this!
I also opened a related PR (#409), where I added the original issue reporter as a co-author.
Your implementation looks great to me. I'm happy to close mine in favor of this one if it moves forward.
Please let me know if I can help with testing, review, or anything else!

@JoTurk
Member

JoTurk commented Mar 28, 2026

Thank you! Merged with fixes in #409 as b7cb342.

@JoTurk JoTurk closed this Mar 28, 2026
