[https://nvbugs/6094107][fix] Exclude PP send/recv from piecewise CUDA graph capture#13296

Closed
tensorrt-cicd wants to merge 1 commit into NVIDIA:main from tensorrt-cicd:repair-bot-bug6094107

Conversation

Collaborator

@tensorrt-cicd tensorrt-cicd commented Apr 21, 2026

Summary

  • Fix for NVBugs 6094107: [TensorRT-LLM][main]: TestDeepSeekV3Lite::test_fp8_block_scales_4gpus is stuck
  • Root cause: NCCL point-to-point communication ops (pp_send_tensors/pp_recv_tensors) were captured inside piecewise CUDA graph sections and could intermittently deadlock when replayed across pipeline-parallel ranks
  • Fix: exclude PP send/recv from piecewise CUDA graph capture so they always run eagerly
  • Automated fix generated by repair-bot

Test plan

  • Verify fix on the same GPU type as the original failure
  • Check for regressions in related tests

Links

Summary by CodeRabbit

  • Bug Fixes
    • Fixed handling of pipeline parallel communication operations in CUDA graph capture to ensure consistent exclusion from graph optimization.

[https://nvbugs/6094107][fix] Exclude PP send/recv from piecewise CUDA graph capture

The piecewise CUDA graph optimizer was capturing NCCL point-to-point
communication ops (pp_send_tensors/pp_recv_tensors) inside CUDA graph
sections. When replayed across pipeline-parallel ranks, these captured
NCCL operations could intermittently deadlock, causing the PP4 +
torch_compile + piecewise CUDA graph configuration to hang.

Add pp_send_tensors and pp_recv_tensors as graph-break points in the
piecewise optimizer so they always run eagerly, similar to how attention
custom ops are already excluded.
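The commit message above describes the graph-break mechanism: an excluded op is assigned its own partition so it runs eagerly instead of being baked into a captured CUDA graph section. The sketch below illustrates that partitioning idea in plain Python. The op names and the `partition_nodes` helper are hypothetical stand-ins, not the actual `piecewise_optimizer.py` code, which walks `torch.fx` graph nodes and matches custom ops such as `pp_send_tensors.default`.

```python
# Minimal sketch of the graph-break partitioning idea, assuming ops are
# plain strings. In TensorRT-LLM the real logic operates on torch.fx
# nodes; this helper and these op names are illustrative stand-ins.
GRAPH_BREAK_OPS = {"pp_send_tensors", "pp_recv_tensors", "attention"}

def partition_nodes(ops):
    """Assign a partition id to each op. Each graph-break op gets its own
    fresh partition (it will run eagerly, never inside a captured CUDA
    graph), and the ops after it start a new capturable partition."""
    ids = []
    pid = 0
    for op in ops:
        if op in GRAPH_BREAK_OPS:
            pid += 1          # isolate the excluded op in its own partition
            ids.append(pid)
            pid += 1          # the next capturable run starts a new partition
        else:
            ids.append(pid)   # capturable ops share the current partition
    return ids

# The matmul runs stay capturable; send/recv each land in their own partition.
print(partition_nodes(
    ["matmul", "pp_send_tensors", "matmul", "matmul", "pp_recv_tensors", "add"]))
# → [0, 1, 2, 2, 3, 4]
```

With this scheme, only partitions containing capturable ops are compiled into CUDA graphs; the isolated send/recv partitions are dispatched eagerly, which is how the fix avoids replaying captured NCCL point-to-point calls across ranks.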

Signed-off-by: tensorrt-cicd <90828364+tensorrt-cicd@users.noreply.github.com>
@coderabbitai
Contributor

coderabbitai Bot commented Apr 21, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: dacf91fe-2ed9-4bc8-a7d6-2d3b9d12e0de

📥 Commits

Reviewing files that changed from the base of the PR and between 6e5a339 and 9e05d24.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/compilation/piecewise_optimizer.py

📝 Walkthrough

Walkthrough

Updated the piecewise optimizer's graph partitioning logic to always exclude Pipeline Parallelism point-to-point communication operations from CUDA graph capture, regardless of the stop_partition setting, by assigning them distinct partition IDs.

Changes

Cohort / File(s): Pipeline Parallelism Op Exclusion — tensorrt_llm/_torch/compilation/piecewise_optimizer.py
Summary: Modified the graph partition exclusion logic to unconditionally detect pp_send_tensors.default and pp_recv_tensors.default operations and exclude them by assigning new partition IDs, removing the prior conditional behavior tied to stop_partition.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

  • Title check ✅ Passed: The title clearly and specifically summarizes the main change: excluding PP send/recv operations from piecewise CUDA graph capture to fix a deadlock issue.
  • Description check ✅ Passed: The PR description includes a summary explaining the root cause and fix, references the bug ticket, includes a test plan with verification steps, and provides relevant links.
  • Linked Issues check ✅ Passed: Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check ✅ Passed: Check skipped because no linked issues were found for this pull request.


@yuxianq
Collaborator

yuxianq commented Apr 22, 2026

/bot run --add-multi-gpu-test

@tensorrt-cicd
Collaborator Author

PR_Github #44854 [ run ] triggered by Bot. Commit: 9e05d24 Link to invocation

@tensorrt-cicd
Collaborator Author

PR_Github #44854 [ run ] completed with state SUCCESS. Commit: 9e05d24
/LLM/main/L0_MergeRequest_PR pipeline #35193 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

# PP send/recv must always be excluded from CUDA graph capture
# regardless of stop_partition, because capturing NCCL point-to-point
# communication in CUDA graphs can cause intermittent deadlocks.
if (not stop_partition and is_call_function(node, [
Collaborator


The explanation does not make sense, as the normal CUDA graph path does not exclude pp_send and pp_recv. There could be a deeper issue here.

@yuxianq
Collaborator

yuxianq commented May 8, 2026

Closing since we cannot reproduce this bug with the latest main; see #13891.

@yuxianq yuxianq closed this May 8, 2026
