
Conversation

@yuantailing yuantailing (Member) commented Jan 23, 2026

Summary by CodeRabbit

  • Improvements

    • Increased default support for large input requests in the executor.
  • Refactor

    • Optimized request scheduling and termination logic for better resource management during execution.


Description

The bug occurs only when all three of the following conditions are met:

  1. The overlap scheduler is used;
  2. capacity_scheduler_policy is MAX_UTILIZATION, and a pause happens;
  3. sampler_type: TRTLLMSampler is used.

Whether a pause happens depends on the dataset, the machine, and many other factors. To make reproduction easier, you can set a smaller free_gpu_memory_fraction. Both the mlperf dataset and a random dataset can trigger the bug, as long as the KV cache requirement is large enough to trigger a pause.
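
As an illustration, the three conditions plus a reduced memory fraction could be written as LLM-API-style options roughly like the sketch below. The key names are taken from the description above; the dict nesting and grouping are assumptions for readability, not a verified configuration:

# Hedged repro sketch: key names come from the description above;
# the nesting is an assumption, not a verified TensorRT-LLM config.
repro_options = dict(
    disable_overlap_scheduler=False,  # condition 1: overlap scheduler on
    scheduler_config=dict(
        capacity_scheduler_policy="MAX_UTILIZATION",  # condition 2
    ),
    sampler_type="TRTLLMSampler",  # condition 3
    kv_cache_config=dict(
        # Smaller fraction -> less KV cache -> pause() triggers sooner.
        free_gpu_memory_fraction=0.3,
    ),
)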

Reason (a minimal sketch follows this list):

  1. When GPU memory is not enough, some requests are evicted by pause(). Each paused request is then treated as a new request whose mPromptLen covers both the original prompt and the tokens generated so far.
  2. The overlap scheduler calls addNewToken(), which appends one token to every request's reqTokens, including the recently paused requests.
  3. When a paused request is resumed, it reaches the reqTokens.size() == promptLen check, but reqTokens has already grown by one token, so the assertion fails.
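
A minimal Python sketch of this sequence; the names mirror the description (reqTokens, promptLen), but this is an illustration of the bookkeeping, not the actual executor code:

# Illustration only: models the bookkeeping, not the real implementation.
class Request:
    def __init__(self, tokens):
        self.req_tokens = list(tokens)   # reqTokens
        self.prompt_len = len(tokens)    # mPromptLen

req = Request([101, 102, 103])           # original prompt
req.req_tokens.append(104)               # one generated token

# Step 1: pause() re-registers the request as new; its prompt length
# now includes the generated token.
req.prompt_len = len(req.req_tokens)     # promptLen == 4

# Step 2: the overlap scheduler still calls addNewToken() on it.
req.req_tokens.append(105)               # len(reqTokens) == 5

# Step 3: on resume, the new-request check fails.
assert len(req.req_tokens) == req.prompt_len  # AssertionError: 5 != 4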

Comparison with the non-overlap scheduler:
With disable_overlap_scheduler: true, I observed that a pause still happens but addNewToken() is not called on paused requests, so there is no error.

How to fix (a hedged sketch follows this list):

  1. Move pause() to after addNewToken().
  2. max_input_len: reqTokens is truncated to max_input_len when pause() is called. If max_input_len is shorter than the length of the original request, some input is lost. So I changed the default value of max_input_len to a very large sentinel (0x3fffffff) to avoid truncation.
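
Roughly, the reordering looks like the sketch below; the names are illustrative stand-ins, not the literal code in tensorrt_llm/_torch/pyexecutor/py_executor.py:

# Illustrative ordering only; names are stand-ins, not the literal code.
def executor_iteration(executor, scheduled_requests, requests_to_pause):
    sample_state = executor.forward_and_sample(scheduled_requests)
    executor.update_requests(sample_state)  # addNewToken() happens in here
    # Fix 1: pause only AFTER addNewToken(), so a paused request's
    # reqTokens is not extended again before it is re-queued.
    executor.pause_requests(requests_to_pause)

# Fix 2: an effectively unbounded default keeps pause() from truncating
# reqTokens below the original request length (value from this PR).
MAX_INPUT_LEN_DEFAULT = 0x3FFFFFFF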

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
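
For example, a run that disables fail-fast and restricts testing to a single stage could look like this (the stage name is illustrative):

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"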

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

Signed-off-by: Tailing Yuan <yuantailing@gmail.com>
@yuantailing yuantailing requested a review from a team as a code owner January 23, 2026 03:33
@yuantailing yuantailing requested a review from byshiue January 23, 2026 03:33
@yuantailing (Member Author)

/bot run

@coderabbitai coderabbitai bot (Contributor) commented Jan 23, 2026

📝 Walkthrough

PyExecutor initialization parameters and the control flow around request lifecycle management have been modified. The default max_input_len parameter increased substantially, and paused requests are now terminated before subsequent processing, with new helper methods introduced for separation of concerns.

Changes

Cohort: PyExecutor Request Lifecycle
File(s): tensorrt_llm/_torch/pyexecutor/py_executor.py
Summary: Default max_input_len parameter increased from 2048 to 0x3fffffff. Control flow modified to terminate paused requests before pausing them during the scheduling and overlap executor loops. New internal methods _terminate_requests() and _pause_requests() added for explicit handling of request termination vs. pausing logic.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 14.29%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Title check ✅ Passed: The title clearly and specifically identifies the fix, adjusting the timing of pause() for the overlap scheduler, directly matching the core change in the PR.
  • Description check ✅ Passed: The PR description clearly explains the bug, its root causes, reproduction conditions, and the fix. Specific technical details about the overlap scheduler, pause timing, and max_input_len changes are provided.



@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@tensorrt_llm/_torch/pyexecutor/py_executor.py`:
- Around lines 1497-1498: The current pause flow calls _terminate_requests (which uses _terminate_request) and thus removes result_wait_queues entries, breaking resumed requests. Add a new method (e.g., _release_resources_for_pause or _pause_release_resources) that frees execution resources but does NOT delete entries from result_wait_queues. Update _pause_requests and all call sites currently invoking _terminate_requests for pausing (including the calls near the existing _pause_requests usage) to use this new release-for-pause path instead of _terminate_requests, and ensure unit tests cover that paused requests retain their result_wait_queue entries and can resume delivery.
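
A hedged sketch of the proposed split (all method and attribute names here are hypothetical, not the actual PyExecutor API):

# Hypothetical sketch of the suggested terminate-vs-pause separation.
class ExecutorSketch:
    def __init__(self):
        self.result_wait_queues = {}  # request_id -> result wait queue

    def _free_execution_resources(self, req):
        pass  # placeholder: release KV cache blocks, slots, etc.

    def _terminate_requests(self, requests):
        for req in requests:
            self._free_execution_resources(req)
            # Terminated requests never deliver more results.
            del self.result_wait_queues[req.request_id]

    def _release_resources_for_pause(self, requests):
        for req in requests:
            # Free execution resources but KEEP the result_wait_queues
            # entry so the request can resume and still deliver results.
            self._free_execution_resources(req)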
🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

247-247: Name the new max_input_len sentinel for clarity.

The hex literal is a magic value; consider promoting it to a constant with a short comment so intent is obvious to callers and future maintainers.

♻️ Suggested refactor
+MAX_INPUT_LEN_DEFAULT = 0x3FFFFFFF  # effectively "no cap" for pause truncation
 ...
-                 max_input_len: int = 0x3fffffff,
+                 max_input_len: int = MAX_INPUT_LEN_DEFAULT,

@tensorrt-cicd (Collaborator)

PR_Github #33276 [ run ] triggered by Bot. Commit: 5a364c6

@tensorrt-cicd (Collaborator)

PR_Github #33276 [ run ] completed with state DISABLED
CI server is currently disabled for scheduled maintenance. Estimated completion time: 8 PM PST on 1/22.

@yuantailing (Member Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #33303 [ run ] triggered by Bot. Commit: 5a364c6

@tensorrt-cicd (Collaborator)

PR_Github #33303 [ run ] completed with state SUCCESS. Commit: 5a364c6
/LLM/main/L0_MergeRequest_PR pipeline #25712 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@yuantailing (Member Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #33330 [ run ] triggered by Bot. Commit: 5a364c6

@yuantailing (Member Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #33405 [ run ] triggered by Bot. Commit: 5a364c6

@tensorrt-cicd (Collaborator)

PR_Github #33405 [ run ] completed with state SUCCESS. Commit: 5a364c6
/LLM/main/L0_MergeRequest_PR pipeline #25784 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@yuantailing (Member Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #33432 [ run ] triggered by Bot. Commit: 5a364c6

@tensorrt-cicd (Collaborator)

PR_Github #33432 [ run ] completed with state SUCCESS. Commit: 5a364c6
/LLM/main/L0_MergeRequest_PR pipeline #25805 completed with status: 'SUCCESS'

Signed-off-by: Tailing Yuan <yuantailing@gmail.com>
@yuantailing (Member Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #33446 [ run ] triggered by Bot. Commit: e0e8bb8

@tensorrt-cicd (Collaborator)

PR_Github #33446 [ run ] completed with state SUCCESS. Commit: e0e8bb8
/LLM/main/L0_MergeRequest_PR pipeline #25816 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@yuantailing (Member Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #33462 [ run ] triggered by Bot. Commit: e0e8bb8
