[TRTLLM-12288][feat] Support NVFP4 W4A16 inference on Hopper for Nemotron H models #14009
tijyojwad wants to merge 2 commits into
Conversation
On GPUs without FP4 tensor cores (SM < 100, e.g. Hopper), dequantize NVFP4 weights to BF16 and use a standard matmul instead of nvfp4_gemm. Activations remain in BF16 throughout.

Changes:
- Add NVFP4W4A16LinearMethod: inherits NVFP4 weight storage, overrides apply() to dequantize weights to BF16 and call F.linear (sketched below)
- Route get_quant_method() to the W4A16 method when SM < 100
- Guard is_nvfp4 in NemotronHLayer with SM >= 100 to disable fused RMSNorm+NVFP4 and Fp4QuantizedTensor on Hopper
- MoE on Hopper: override the quant config to unquantized and wrap load_weights to dequantize NVFP4 expert weights to BF16 at load time
- Add unit tests for dequantization, linear forward, routing, and MoE dequantization

Signed-off-by: tijyojwad <1127155+tijyojwad@users.noreply.github.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
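A minimal sketch of that fallback apply(), assuming PyTorch and the shared dequantize_nvfp4() helper; the base class, module attribute names, and signatures are illustrative stand-ins, not the repo's exact API:

```python
import torch
import torch.nn.functional as F

class NVFP4LinearMethod: ...  # stand-in for the real NVFP4 base class

class NVFP4W4A16LinearMethod(NVFP4LinearMethod):
    def apply(self, module, input: torch.Tensor, bias=None):
        # Dequantize the packed FP4 weights to BF16, then run a plain
        # matmul; activations are already BF16, hence W4A16.
        # dequantize_nvfp4 is the shared helper from fp4_utils.py
        # (signature assumed; see the refactor commit below).
        weight_bf16 = dequantize_nvfp4(module.weight, module.weight_scale,
                                       dtype=torch.bfloat16)
        return F.linear(input, weight_bf16, bias)
```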
Move the duplicated E2M1_VALUES lookup table and dequantization logic from NVFP4W4A16LinearMethod (linear.py) and NemotronHMOE (modeling_nemotron_h.py) into a shared dequantize_nvfp4() function in fp4_utils.py. This makes the FP4 dequant utility reusable by any module that needs NVFP4 weight dequantization without duplicating the E2M1 LUT and nibble-unpacking code.

Signed-off-by: tijyojwad <1127155+tijyojwad@users.noreply.github.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
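A self-contained sketch of what such a shared helper could look like. NVFP4 typically uses 16-element blocks with per-block scales; the packing layout (low nibble first, uint8 storage) and the signature here are assumptions, not the actual fp4_utils.py API:

```python
import torch

# E2M1 lookup table: the 16 representable FP4 values, indexed by the
# 4-bit code (bit 3 is the sign bit).
E2M1_VALUES = torch.tensor(
    [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0,
     -0.0, -0.5, -1.0, -1.5, -2.0, -3.0, -4.0, -6.0])

def dequantize_nvfp4(packed: torch.Tensor, scales: torch.Tensor,
                     block_size: int = 16,
                     dtype: torch.dtype = torch.bfloat16) -> torch.Tensor:
    """Unpack two FP4 codes per uint8 and apply per-block scales."""
    lo = packed & 0x0F                     # first code of each pair
    hi = (packed >> 4) & 0x0F              # second code
    codes = torch.stack((lo, hi), dim=-1).reshape(*packed.shape[:-1], -1)
    values = E2M1_VALUES.to(packed.device)[codes.long()]
    # Broadcast one scale over each block of `block_size` elements.
    blocked = values.reshape(*values.shape[:-1], -1, block_size)
    blocked = blocked * scales.to(blocked.dtype).unsqueeze(-1)
    return blocked.reshape(codes.shape).to(dtype)
```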
📝 Walkthrough
This PR extends NVFP4 quantization support to GPUs without FP4 tensor cores by introducing a fallback dequantization path. NVFP4 weights are dequantized to BF16 at load or inference time, enabling W4A16 computation on legacy hardware (SM < 100) via a new linear method, with SM-aware routing and MoE integration.

Changes: NVFP4 W4A16 Legacy GPU Support
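As a rough illustration of the SM-aware routing (helper and class names are stand-ins, not the repo's API):

```python
import torch

class NVFP4LinearMethod: ...        # stand-ins for the real
class NVFP4W4A16LinearMethod: ...   # linear methods

def sm_version() -> int:
    # Hopper reports (9, 0) -> 90; Blackwell (10, 0) -> 100.
    major, minor = torch.cuda.get_device_capability()
    return major * 10 + minor

def nvfp4_quant_method():
    # Native FP4 GEMM needs FP4 tensor cores (SM >= 100); older GPUs
    # get the dequantize-to-BF16 fallback instead.
    if sm_version() >= 100:
        return NVFP4LinearMethod()
    return NVFP4W4A16LinearMethod()
```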
Description
On GPUs without FP4 tensor cores (SM < 100, e.g. Hopper), dequantize NVFP4 weights to BF16 and use a standard matmul instead of nvfp4_gemm. Activations remain in BF16 throughout.
Changes:
- Add NVFP4W4A16LinearMethod: dequantize weights to BF16 and call F.linear
- Route get_quant_method() to the W4A16 method when SM < 100
- Guard is_nvfp4 in NemotronHLayer with SM >= 100 on Hopper
- MoE on Hopper: dequantize NVFP4 expert weights to BF16 at load time (sketched below)
- Unit tests for dequant, linear forward, routing, and MoE dequant
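A hedged sketch of that load-time MoE hook; the method name, checkpoint-key pattern, and "_scale" suffix are all placeholders for the real NemotronHMOE integration:

```python
import torch

def wrap_moe_load_weights(moe_module, dequantize_nvfp4):
    """Wrap load_weights so NVFP4 expert weights arrive as BF16."""
    original_load = moe_module.load_weights

    def load_weights(weights: dict):
        # Packed NVFP4 tensors are stored as uint8 (key pattern assumed).
        packed_keys = [k for k, v in weights.items()
                       if k.endswith("weight") and v.dtype == torch.uint8]
        for name in packed_keys:
            scale = weights.pop(name + "_scale", None)  # key assumed
            if scale is not None:
                weights[name] = dequantize_nvfp4(weights[name], scale,
                                                 dtype=torch.bfloat16)
        return original_load(weights)

    moe_module.load_weights = load_weights
```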
Test Coverage
Unit tests cover NVFP4 dequantization, the W4A16 linear forward, SM-based quant-method routing, and MoE expert-weight dequantization at load time.
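For instance, a pytest-style check of the dequant path, assuming the illustrative dequantize_nvfp4 layout sketched earlier: codes 0..15 under a unit scale must reproduce the E2M1 table.

```python
import torch

def test_dequantize_nvfp4_identity_scale():
    # One 16-element block: codes 0..15 packed two per byte
    # (low nibble first), with a unit per-block scale.
    codes = torch.arange(16, dtype=torch.uint8)
    packed = (codes[1::2] << 4) | codes[0::2]
    out = dequantize_nvfp4(packed, torch.ones(1), dtype=torch.float32)
    expected = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0,
                             -0.0, -0.5, -1.0, -1.5, -2.0, -3.0,
                             -4.0, -6.0])
    torch.testing.assert_close(out, expected)
```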
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update the tava architecture diagram if there is a significant design change in the PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment /bot help.