
[None][fix] Raise server_waiting_timeout to 3600s for DSv4 disagg tests#14028

Open
Shixiaowei02 wants to merge 2 commits into
NVIDIA:feat/deepseek_v4from
Shixiaowei02:user/xiaoweis/dsv4_disagg_timeout

Conversation

@Shixiaowei02
Collaborator

@Shixiaowei02 Shixiaowei02 commented May 12, 2026

The V4-Flash safetensors prefetch (~148 GB at ~90 MB/s on shared scratch), plus autotuner and CUDA graph warmup, needs ~37 min per worker at TP=2, exceeding the default server_waiting_timeout of 2100 s (35 min). The wait loop times out, pytest.fail then short-circuits the with-block before the yield, and cleanup hangs in Popen.wait() until the 60-minute @pytest.mark.timeout kills the test.
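The arithmetic checks out: at ~90 MB/s, the ~148 GB prefetch alone takes roughly 148 000 MB / 90 MB/s ≈ 1650 s (~27 min), already most of the 2100 s budget before warmup even starts. The failure mode described above can be sketched as a generator-based context manager. This is a minimal illustration, not the actual test fixture: the command, readiness check, and function names are hypothetical, and a plain TimeoutError stands in for pytest.fail(), which likewise raises before the yield so the with-block body never runs.

```python
import contextlib
import subprocess
import time

# Mirrors the PR change: raised from the 2100 s default to 3600 s.
SERVER_WAITING_TIMEOUT = 3600


def wait_for_server(is_ready, timeout, poll_interval=0.05):
    """Poll is_ready() until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_ready():
            return True
        time.sleep(poll_interval)
    return False


@contextlib.contextmanager
def launch_server(cmd, is_ready, timeout=SERVER_WAITING_TIMEOUT):
    """Hypothetical stand-in for the disagg test fixture."""
    proc = subprocess.Popen(cmd)
    try:
        if not wait_for_server(is_ready, timeout):
            # In the real fixture this is pytest.fail(), which raises
            # here, before the yield -- the test body never executes.
            raise TimeoutError("server did not become ready in time")
        yield proc
    finally:
        # Terminating first keeps wait() from blocking until the child
        # exits on its own -- skipping terminate() is the hang the PR
        # description points at.
        proc.terminate()
        proc.wait()
```

Raising the timeout fixes the common case (a slow but healthy startup); the cleanup path hang is a separate issue that only surfaces once the wait loop gives up.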


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions).

  • Any new dependencies have been scanned for license and vulnerabilities.

  • CODEOWNERS updated if ownership changes.

  • Documentation updated as needed.

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

@Shixiaowei02 Shixiaowei02 force-pushed the user/xiaoweis/dsv4_disagg_timeout branch from 131fb75 to 8e4686e Compare May 12, 2026 04:55
@Shixiaowei02 Shixiaowei02 marked this pull request as ready for review May 12, 2026 04:56
@Shixiaowei02 Shixiaowei02 requested a review from a team as a code owner May 12, 2026 04:56
@Shixiaowei02
Collaborator Author

/bot run --add-multi-gpu-test --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #47876 [ run ] triggered by Bot. Commit: 8e4686e Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #47876 [ run ] completed with state SUCCESS. Commit: 8e4686e
/LLM/main/L0_MergeRequest_PR pipeline #37735 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

V4-Flash safetensors prefetch (~148 GB at ~90 MB/s on shared scratch)
plus autotuner and CUDA graph warmup needs ~37 min/worker at TP=2,
exceeding the 2100s (35 min) default. The wait loop times out, then
pytest.fail short-circuits the with-block before yield, and cleanup
hangs in Popen.wait() until the 60 min @pytest.mark.timeout kills it.

Signed-off-by: Xiaowei Shi <39303645+Shixiaowei02@users.noreply.github.com>
…DIA#14024)"

This reverts commit f2af68f.

Signed-off-by: Xiaowei Shi <39303645+Shixiaowei02@users.noreply.github.com>
@Shixiaowei02 Shixiaowei02 force-pushed the user/xiaoweis/dsv4_disagg_timeout branch from 8e4686e to 8410783 Compare May 12, 2026 07:48
@Shixiaowei02
Collaborator Author

/bot run --add-multi-gpu-test --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #47929 [ run ] triggered by Bot. Commit: 8410783 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #47929 [ run ] completed with state SUCCESS. Commit: 8410783
/LLM/main/L0_MergeRequest_PR pipeline #37775 completed with status: 'SUCCESS'

CI Report

Link to invocation



3 participants