[#13909][fix] Reuse hidden_states buffer across CUDA graph captures in Eagle3 #13930

Draft

achartier wants to merge 6 commits into NVIDIA:main from achartier:fix/eagle3-cuda-graph-memory

Conversation

@achartier
Collaborator

@coderabbitai summary

Description

Copy of !13920, addressing review comments.

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

Spurthi Sandiri and others added 6 commits May 8, 2026 16:35
…Eagle3

Previously, Eagle3OneModelSpecMetadata allocated a new hidden_states
buffer (max_num_tokens × hidden_size × num_capture_layers) per CUDA
graph capture. This caused memory to grow linearly with the number of
graphs captured, wasting ~2.75 GiB for 16 captures on Qwen3-235B.

Fix: Always create Eagle3ResourceManager in the one-model flow and
reuse its pre-allocated hidden_states buffer instead of allocating a
new one per capture.

Fixes: NVIDIA#13909

Signed-off-by: Spurthi Sandiri <spurthi@amazon.com>
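The fix in this commit can be sketched as follows. This is an illustrative stand-in, not the real TRT-LLM code: the class and function names are hypothetical, and a plain Python list stands in for the torch tensor of shape [max_num_tokens, hidden_size * num_capture_layers] that the real Eagle3ResourceManager pre-allocates.

```python
class Eagle3ResourceManagerSketch:
    """Hypothetical stand-in for the resource manager: pre-allocates one
    hidden_states buffer that every CUDA graph capture then reuses."""

    def __init__(self, max_num_tokens: int, hidden_size: int,
                 num_capture_layers: int) -> None:
        # Single allocation up front; the real code allocates a torch tensor
        # sized max_num_tokens x (hidden_size * num_capture_layers).
        self.hidden_states = [0.0] * (
            max_num_tokens * hidden_size * num_capture_layers)


def capture_buffer(manager: Eagle3ResourceManagerSketch) -> list:
    """Return the shared buffer instead of allocating a new one per capture,
    so memory stays constant regardless of how many graphs are captured."""
    return manager.hidden_states


mgr = Eagle3ResourceManagerSketch(max_num_tokens=8, hidden_size=4,
                                  num_capture_layers=3)
buffers = [capture_buffer(mgr) for _ in range(16)]
# All 16 "captures" see the same object: one allocation, not 16.
assert all(b is mgr.hidden_states for b in buffers)
```

With per-capture allocation, 16 captures would hold 16 independent buffers; with the shared manager they all alias one, which is where the ~2.75 GiB saving reported above comes from.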
Remove memory-debug logger.debug call and its logger import
Added for investigation; not needed in the final fix.

Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
Eagle3OneModelDynamicTreeResourceManager does not have a
hidden_states attribute, so the unconditional access added
in the previous commit would raise AttributeError in
dynamic-tree one-model mode. Fall back to allocating a fresh
tensor when the resource manager doesn't expose the buffer.

Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
When reusing the resource manager's hidden_states buffer, verify
the column dimension matches hidden_size * num_capture_layers.
Fails fast with a clear message if the buffer was allocated with
different capture layer settings.

Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
Include resource manager type, hidden_size, and capture layers
in the assertion to make mismatches easier to diagnose in
multi-model setups.

Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
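The shape check described in the two commits above might look like the sketch below; the helper name and the exact assertion text are illustrative assumptions, not the PR's actual code.

```python
def check_buffer_shape(manager: object, num_cols: int, hidden_size: int,
                       num_capture_layers: int) -> None:
    """Fail fast when the reused buffer's column dimension does not match
    hidden_size * num_capture_layers, naming the resource manager type so
    mismatches are easy to diagnose in multi-model setups."""
    expected = hidden_size * num_capture_layers
    assert num_cols == expected, (
        f"{type(manager).__name__}: hidden_states has {num_cols} columns, "
        f"expected hidden_size ({hidden_size}) * num_capture_layers "
        f"({num_capture_layers}) = {expected}; was the buffer allocated "
        f"with different capture layer settings?")


# A matching buffer passes silently.
check_buffer_shape(object(), num_cols=12, hidden_size=4, num_capture_layers=3)

# A mismatched buffer is rejected with a descriptive message.
try:
    check_buffer_shape(object(), num_cols=8, hidden_size=4,
                       num_capture_layers=3)
    mismatch_caught = False
except AssertionError:
    mismatch_caught = True
```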
Add hidden_states = None to Eagle3OneModelDynamicTreeResourceManager
so the attribute is always present, then replace the hasattr
duck-typing with a direct None check.

Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
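The final cleanup above, replacing hasattr duck-typing with an always-present attribute and a direct None check, can be sketched like this; the class bodies are illustrative assumptions, with only the attribute pattern taken from the commit message.

```python
class Eagle3OneModelDynamicTreeResourceManagerSketch:
    # The attribute is always present (defaulting to None), so callers can
    # use a direct None check instead of hasattr() duck-typing.
    hidden_states = None


class SharedBufferManagerSketch:
    """Hypothetical manager that does pre-allocate a shared buffer."""

    def __init__(self, size: int) -> None:
        self.hidden_states = [0.0] * size


def buffer_or_fresh(manager: object, size: int) -> list:
    """Reuse the manager's buffer when it exists; otherwise allocate a
    fresh tensor-like buffer (the dynamic-tree manager never pre-allocates)."""
    if manager.hidden_states is not None:
        return manager.hidden_states
    return [0.0] * size


dyn = Eagle3OneModelDynamicTreeResourceManagerSketch()
fresh = buffer_or_fresh(dyn, size=6)      # falls back to a new buffer

shared = SharedBufferManagerSketch(size=6)
reused = buffer_or_fresh(shared, size=6)  # aliases the shared buffer
```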
@achartier
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #47474 [ run ] triggered by Bot. Commit: b1becf4 Link to invocation
