
[#8542][feat] AutoDeploy: add DeepSeek-R1 FP8 perf-sanity test on 8x B200 post merge #14040

Draft
MrGeva wants to merge 1 commit into NVIDIA:main from nv-auto-deploy:feat/ad-perf-sanity-deepseek-r1-fp8-dgx-b200

Conversation

@MrGeva (Collaborator) commented May 12, 2026

Mirrors super_ad_blackwell-super_ad_ws4_1k1k but for DeepSeek-R1 0528 FP8 on a full DGX B200 (8 GPUs).

Changes:

  • `tests/scripts/perf-sanity/aggregated/deepseek_r1_fp8_ad_blackwell.yaml`: new perf-sanity config with one server config `r1_fp8_ad_ws8_1k1k` using the `_autodeploy` backend, `world_size: 8`, and the existing `examples/auto_deploy/model_registry/configs/deepseek-r1.yaml` (MLA + `trtllm_mla` cached attention + `fuse_rope_into_trtllm_mla` + multi-stream MoE + AllReduce-residual-RMSNorm fusion + sharding). The client config matches the reference: concurrency 64, 10 iterations, ISL = OSL = 1024, `openai` backend.
  • `tests/integration/test_lists/test-db/l0_dgx_b200.yml`: enroll the new test in the existing 8-GPU `stage: post_merge` / `backend: autodeploy` block, alongside the existing DeepSeek-R1-0528 accuracy registry test.
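
As a rough illustration of the values described above, the new config could look something like the sketch below. The field names are guesses, not the actual perf-sanity schema; only the values (backend, world size, config path, client settings) are taken from this PR.

```yaml
# Hypothetical sketch of
# tests/scripts/perf-sanity/aggregated/deepseek_r1_fp8_ad_blackwell.yaml.
# Field names are illustrative; the real perf-sanity schema may differ.
server_configs:
  - name: r1_fp8_ad_ws8_1k1k
    model: deepseek_r1_0528_fp8
    backend: _autodeploy
    world_size: 8
    extra_llm_api_options: examples/auto_deploy/model_registry/configs/deepseek-r1.yaml
client_configs:
  - backend: openai
    concurrency: 64
    iterations: 10
    isl: 1024
    osl: 1024
```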

The `deepseek_r1_0528_fp8` -> `DeepSeek-R1/DeepSeek-R1-0528/` mapping already exists in `test_perf_sanity.py::MODEL_PATH_DICT`, so no edit to the test driver is required.
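
A minimal sketch of how such a name-to-path lookup table is typically consulted. Only the `deepseek_r1_0528_fp8` entry is taken from this PR; the helper function and the `/models` root are assumptions for illustration, not the actual driver code.

```python
# Hypothetical excerpt of a model-name -> checkpoint-path table like
# test_perf_sanity.py::MODEL_PATH_DICT. Only the deepseek_r1_0528_fp8
# entry is from this PR; the rest is illustrative.
MODEL_PATH_DICT = {
    "deepseek_r1_0528_fp8": "DeepSeek-R1/DeepSeek-R1-0528/",
}


def resolve_model_path(model_name: str, models_root: str = "/models") -> str:
    """Map a perf-sanity model name to its on-disk checkpoint directory."""
    try:
        rel_path = MODEL_PATH_DICT[model_name]
    except KeyError:
        # An unmapped name would require an edit to the test driver.
        raise ValueError(f"unknown model name: {model_name!r}")
    return f"{models_root}/{rel_path}"


print(resolve_model_path("deepseek_r1_0528_fp8"))
```

Because the mapping is already present, enrolling the new test is purely a config/test-list change.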

Test ID:
`perf/test_perf_sanity.py::test_e2e[aggr_upload-deepseek_r1_fp8_ad_blackwell-r1_fp8_ad_ws8_1k1k]`
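
The bracketed part of that test ID follows pytest's usual convention of joining parameter IDs with hyphens. The sketch below illustrates that composition; the split into exactly these three parameters is an assumption, only the final ID string is taken from this PR.

```python
# Illustration of how pytest composes a parametrized node ID: parameter IDs
# are joined with "-" inside the brackets. The grouping into these three
# parameters is assumed; the resulting string matches the PR's test ID.
params = ["aggr_upload", "deepseek_r1_fp8_ad_blackwell", "r1_fp8_ad_ws8_1k1k"]
test_id = f"test_e2e[{'-'.join(params)}]"
print(test_id)
```

This is handy when selecting the test with `pytest -k` or copying the ID into a test list.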

@coderabbitai summary

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains the what and why. If using CodeRabbit's summary, please make sure it makes sense.
  • PR follows the TRT-LLM coding guidelines to the best of your knowledge.
  • Test cases are provided for new code paths (see test instructions).
  • Any new dependencies have been scanned for licenses and vulnerabilities.
  • CODEOWNERS updated if ownership changes.
  • Documentation updated as needed.
  • Tava architecture diagram updated if there is a significant design change in the PR.
  • The reviewers assigned automatically/manually are appropriate for the PR.
  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
@MrGeva changed the title from "[None][feat] AutoDeploy: add DeepSeek-R1 FP8 perf-sanity test on 8x B200" to "[#8542][feat] AutoDeploy: add DeepSeek-R1 FP8 perf-sanity test on 8x B200" on May 12, 2026
@MrGeva changed the title from "[#8542][feat] AutoDeploy: add DeepSeek-R1 FP8 perf-sanity test on 8x B200" to "[#8542][feat] AutoDeploy: add DeepSeek-R1 FP8 perf-sanity test on 8x B200 post merge" on May 12, 2026
@MrGeva (Collaborator, Author) commented May 12, 2026

/bot run --extra-stage "H100_PCIe-AutoDeploy-1" --disable-fail-fast

@MrGeva (Collaborator, Author) commented May 12, 2026

/bot run --extra-stage "DGX_B200-8_GPUs-AutoDeploy-Post-Merge-1" --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #47935 [ run ] triggered by Bot. Commit: 1320deb

@tensorrt-cicd (Collaborator)

PR_Github #47937 [ run ] triggered by Bot. Commit: 1320deb

