[https://nvbugs/6069543][fix] Lower accuracy threshold for H20 qwen3.5 test #13895

rosenrodt wants to merge 1 commit into NVIDIA:main
Conversation
📝 Walkthrough

This PR adds an H20 GPU-specific accuracy threshold for the Qwen 3.5 35B model by introducing a reference accuracy specification and conditional test logic. The change lets the test suite apply different acceptance criteria when evaluating the model on H20 hardware.

Changes: H20 GPU accuracy handling
🧹 Nitpick comments (1)
tests/integration/defs/accuracy/references/gsm8k.yaml (1)
205-206: Document how the H20 threshold was derived.
Please add a short inline note next to accuracy: 83.9 (for example: calibration date + run window/build IDs). Without provenance, future threshold changes are hard to audit and can mask drift.
Also, QA list updates look unnecessary here (no new/renamed integration test definition, so no change needed in tests/integration/test_lists/qa/llm_function_core.txt). As per coding guidelines, "Keep feedback actionable: suggest concrete list file names and whether coverage is sufficient, insufficient, or needs follow-up outside the PR."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/integration/defs/accuracy/references/gsm8k.yaml` around lines 205 - 206, The YAML entry uses extra_acc_spec: h20 with accuracy: 83.9 but lacks provenance; update the line with a short inline note after accuracy: 83.9 (e.g., " # derived: calibration YYYY-MM-DD; run window: [start:end]; build IDs: <build1>,<build2>") explaining how the H20 threshold was computed and which calibration/run/build produced it, and keep the extra_acc_spec key unchanged; also revert any edits to the QA list (llm_function_core.txt) — coverage is sufficient for this change and no QA list update is needed.
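The entry shape the reviewer is describing could look roughly like the sketch below. The model key and the provenance placeholders are illustrative assumptions; only the `extra_acc_spec: h20` key and the 83.9 value come from this PR.

```yaml
# Hypothetical sketch of the gsm8k.yaml reference entry; <model-name> and the
# provenance placeholders are illustrative, not the actual file contents.
<model-name>:
  - extra_acc_spec: h20
    accuracy: 83.9  # derived: calibration <YYYY-MM-DD>; run window: <start:end>; build IDs: <build1>,<build2>
```

Keeping the provenance inline next to the threshold makes later audits of threshold drift straightforward.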
📒 Files selected for processing (2)
- tests/integration/defs/accuracy/references/gsm8k.yaml
- tests/integration/defs/accuracy/test_llm_api_pytorch.py
Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>
/bot run

PR_Github #47373 [ run ] triggered by Bot. Commit:

PR_Github #47373 [ run ] completed with state
Description
For some reason, H20 shows a small and sometimes fluctuating accuracy gap relative to the H100/H200 BF16 MoE configuration, resulting in occasional failures. We lower the accuracy threshold so that any further regression can still be tracked.
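The GPU-conditional threshold selection described above can be sketched as follows. This is a hedged illustration, not the actual TensorRT-LLM test code: the function name, the device-name check, and the 85.0 baseline are assumptions; only the relaxed 83.9 H20 value comes from this PR.

```python
# Illustrative sketch of selecting a per-GPU accuracy threshold.
# ACCURACY_REFS values other than 83.9 are hypothetical placeholders.
ACCURACY_REFS = {
    "default": 85.0,  # assumed H100/H200 baseline (illustrative)
    "h20": 83.9,      # relaxed H20 threshold from this PR
}


def select_threshold(device_name: str) -> float:
    """Return the accuracy threshold for the given GPU device name."""
    # Token match avoids "H20" accidentally matching "H200".
    tokens = device_name.upper().split()
    if "H20" in tokens:
        return ACCURACY_REFS["h20"]
    return ACCURACY_REFS["default"]
```

A token-based match is used deliberately: a plain substring check (`"H20" in device_name`) would also match "H200" and silently relax the threshold on the wrong hardware.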
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update the tava architecture diagram if there is a significant design change in the PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment
/bot help.