
[Do Not merge]Draft:Add NVFP4 four-over-six (4o6) adaptive activation quantization#1050

Draft
Fridah-nv wants to merge 1 commit into main from fridah/4_6_act

Conversation

@Fridah-nv
Contributor

Implements the adaptive per-block scale selection strategy from arXiv:2512.02010. Each 16-element activation block independently chooses between a 4-bit or 6-bit FP8 block scale (MSE/MAE/abs_max criteria), reducing quantization error vs. uniform scale encoding without requiring Blackwell hardware.

New public API:

  • `nvfp4_4o6_fake_quant` in `modelopt.torch.quantization.calib.fouroversix`
  • `NVFP4_4O6_W4A4_CFG` config (standard NVFP4 weights + 4o6 activations)
  • `"nvfp4_4o6"` qformat in `hf_ptq.py`

Integration uses the existing `TensorQuantizer` backend mechanism (`register_quant_backend("nvfp4_4o6", ...)`), with dynamic per-inference amax.
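For reviewers unfamiliar with the scheme: the core idea is that each 16-element block is fake-quantized twice, once per candidate scale precision, and the reconstruction with lower error wins. The sketch below is illustrative only, not the PR's implementation: the function name, the stand-in uniform rounding of the block scale, and the MSE-only criterion are all assumptions; the real code uses FP8 scale encodings and also supports MAE/abs_max.

```python
import torch

# Positive half of the NVFP4 (E2M1) code book.
FP4_VALUES = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def _fp4_round(x: torch.Tensor) -> torch.Tensor:
    """Round each element to the nearest representable E2M1 value."""
    sign = torch.sign(x)
    mag = x.abs().unsqueeze(-1)
    idx = (mag - FP4_VALUES).abs().argmin(dim=-1)
    return sign * FP4_VALUES[idx]

def fouroversix_fake_quant(x: torch.Tensor, block: int = 16) -> torch.Tensor:
    """Per-block adaptive scale selection (MSE criterion) - a 4o6 sketch."""
    orig_shape = x.shape
    xb = x.reshape(-1, block)
    # Dynamic per-inference amax: computed from the activations themselves.
    amax = xb.abs().amax(dim=-1, keepdim=True).clamp_min(1e-12)
    best, best_err = None, None
    for scale_bits in (4, 6):  # candidate scale precisions
        levels = 2 ** scale_bits
        scale = amax / 6.0  # 6.0 = max E2M1 magnitude
        # Stand-in for the FP8 scale encoding: snap the scale to a coarse grid.
        q_scale = (torch.round(scale * levels) / levels).clamp_min(1e-12)
        deq = _fp4_round(xb / q_scale) * q_scale
        err = ((deq - xb) ** 2).mean(dim=-1, keepdim=True)
        if best is None:
            best, best_err = deq, err
        else:
            pick = err < best_err  # keep whichever candidate reconstructs better
            best = torch.where(pick, deq, best)
            best_err = torch.where(pick, err, best_err)
    return best.reshape(orig_shape)
```

Because the selection is per block, mixed precision emerges naturally: blocks whose amax happens to land on the coarse grid keep the cheaper 4-bit scale, while outlier-heavy blocks pay for the 6-bit one.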

What does this PR do?

Type of change: ?

Usage

# Add a code snippet demonstrating how to use this
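A likely usage shape, assuming the new config follows the standard ModelOpt PTQ calibration pattern (the import path for `NVFP4_4O6_W4A4_CFG` and the `calib_dataloader` are assumptions; only the config name and `mtq.quantize` come from this PR / the existing API):

```python
import modelopt.torch.quantization as mtq
from modelopt.torch.quantization import NVFP4_4O6_W4A4_CFG  # import path assumed

def forward_loop(model):
    # User-supplied calibration data; 4o6 amax is dynamic per inference,
    # so calibration mainly exercises the weight quantizers.
    for batch in calib_dataloader:
        model(batch)

model = mtq.quantize(model, NVFP4_4O6_W4A4_CFG, forward_loop)
```

Via the example script, this presumably corresponds to `python hf_ptq.py --qformat nvfp4_4o6 ...`.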

Testing

Before your PR is "Ready for review"

Make sure you read and follow Contributor guidelines and your commits are signed (git commit -s -S).

Make sure you read and follow the Security Best Practices (e.g. avoiding hardcoded trust_remote_code=True, torch.load(..., weights_only=False), pickle, etc.).

  • Is this change backward compatible?: ✅ / ❌ / N/A
  • If you copied code from any other sources or added a new PIP dependency, did you follow guidance in CONTRIBUTING.md: ✅ / ❌ / N/A
  • Did you write any new necessary tests?: ✅ / ❌ / N/A
  • Did you update Changelog?: ✅ / ❌ / N/A

Additional Information


Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
@copy-pr-bot

copy-pr-bot bot commented Mar 16, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.


@Fridah-nv Fridah-nv self-assigned this Mar 16, 2026
@coderabbitai
Contributor

coderabbitai bot commented Mar 16, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 3f8608c5-6143-4fc4-94e8-8f96e95e1e79


@codecov

codecov bot commented Mar 17, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 70.16%. Comparing base (1070d89) to head (04c15a3).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1050      +/-   ##
==========================================
+ Coverage   70.10%   70.16%   +0.06%     
==========================================
  Files         221      222       +1     
  Lines       25541    25606      +65     
==========================================
+ Hits        17905    17967      +62     
- Misses       7636     7639       +3     


