
Conversation

@Edwardf0t1 (Contributor) commented Jan 15, 2026

What does this PR do?

Type of change: Bugfix

Overview: Fix a missing NVFP4 weight amax attribute during export, which surfaces especially when the calibration size is small and some weight quantizers are never exercised during calibration. Context: sgl-project/sglang#14677 (comment)
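For context, a minimal sketch of the failure mode (class and function names here are illustrative assumptions, not taken from the ModelOpt source): with a small --calib_size on a large MoE model, some experts never receive calibration tokens, so their weight quantizers never record an amax and export fails.

```python
import torch

# Hypothetical stand-in for a quantizer whose statistics are only
# populated when calibration data actually flows through its module.
class WeightQuantizer:
    def __init__(self):
        self.amax = None  # stays None if the owning expert is never routed to

def export_weight_scale(quantizer: WeightQuantizer) -> torch.Tensor:
    # Simplified export-time path: it assumes calibration populated amax.
    if quantizer.amax is None:
        raise AttributeError(
            "weight quantizer has no amax; this expert saw no calibration data"
        )
    return quantizer.amax

q = WeightQuantizer()   # e.g. an expert missed by all 20 calibration samples
export_weight_scale(q)  # raises: the failure this PR fixes
```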

Usage

python3 hf_ptq.py --pyt_ckpt_path /home/scratch.jingyux_coreai/kimi-k2/models/Kimi-K2-Thinking-BF16 --qformat nvfp4_mlp_only --export_path /home/omniml_data_3/zhiyuc/checkpoints/Kimi-K2-Thinking-NVFP4 --kv_cache_qformat none --calib_size 20 --trust_remote_code --dataset cnn_dailymail

Testing

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes
  • Did you write any new necessary tests?: Yes/No
  • Did you add or update any necessary documentation?: Yes/No
  • Did you update Changelog?: Yes/No

Additional Information

Signed-off-by: Zhiyu Cheng <zhiyuc@nvidia.com>
@Edwardf0t1 Edwardf0t1 requested review from a team as code owners January 15, 2026 01:13
codecov bot commented Jan 15, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 74.18%. Comparing base (307fe71) to head (afec8d2).
⚠️ Report is 33 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #785      +/-   ##
==========================================
- Coverage   74.66%   74.18%   -0.48%     
==========================================
  Files         192      192              
  Lines       18975    19236     +261     
==========================================
+ Hits        14167    14271     +104     
- Misses       4808     4965     +157     


```python
# PR diff excerpt; the enclosing `if quantization_format in [...]:` list is elided
    QUANTIZATION_W4A8_NVFP4_FP8,
]:
    weight = getattr(module, weight_name)
    _ensure_weight_quantizer_calibrated(weight_quantizer, weight)
```
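For readers of this thread, a sketch of what a helper like _ensure_weight_quantizer_calibrated could do, under the assumption that weight amax is derivable from the weight tensor itself (the actual implementation in the PR may differ):

```python
import torch

def _ensure_weight_quantizer_calibrated(weight_quantizer, weight: torch.Tensor):
    """Backfill amax for weight quantizers that never saw calibration data.

    Unlike activation statistics, a weight's amax is data-independent: it
    can be recomputed from the materialized weight tensor at export time,
    e.g. for MoE experts that no calibration sample was routed to.
    """
    if getattr(weight_quantizer, "amax", None) is None:
        # Per-tensor reduction shown for simplicity; NVFP4 uses per-block
        # scales, so a faithful version would reduce over block axes only.
        weight_quantizer.amax = weight.detach().abs().amax()
```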
@meenchen (Contributor) commented:
qq, is this only NVFP4 specific? Do we need this for W4A8_AWQ (int4)?

Contributor commented:

And will we hit a similar issue with FP8 for get_activation_scaling_factor?

@Edwardf0t1 (Contributor, Author) commented Jan 22, 2026:

So far we have only seen this issue with NVFP4. I think we can include other cases as needed later; I expect FP8 to be fine.

@cjluo-nv (Collaborator) commented:

I think @meenchen's question is legitimate:

> Do you feel we need to do this for all the quant formats, not just for NVFP4?

And even with this weight calibration, the activation amax is still not present. How will this PR be able to generate a valid HF checkpoint?

@Edwardf0t1 (Contributor, Author) commented:

> I think @meenchen's question is legitimate:
>
> Do you feel we need to do this for all the quant formats, not just for NVFP4?
>
> And even with this weight calibration, the activation amax is still not present. How will this PR be able to generate a valid HF checkpoint?

I think we can include other cases as needed later.

"How will this PR be able to generate a valid HF checkpoint?" What do you mean? This patch has been tested by Google team, they were able to generate the kimi-k2-thinking nvfp4 checkpoint.

@cjluo-nv (Collaborator) commented:

> I think @meenchen's question is legitimate:
> Do you feel we need to do this for all the quant formats, not just for NVFP4?
> And even with this weight calibration, the activation amax is still not present. How will this PR be able to generate a valid HF checkpoint?
>
> I think we can include other cases as needed later.
>
> "How will this PR be able to generate a valid HF checkpoint?" What do you mean? This patch has been tested by the Google team; they were able to generate the Kimi-K2-Thinking NVFP4 checkpoint.

My question is:

If the weights were not quantized because the expert was never activated during calibration, then even if you now quantize the weights, the inputs were never quantized and the input scales are not available. How can the deployment framework deploy this checkpoint without complaining that the input scales are missing?
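To make the asymmetry behind this question concrete, a sketch with assumed names (not code from the PR): weight amax can be backfilled from the weights themselves, but activation amax only exists if calibration data actually flowed through the expert, so an exporter would still find it missing.

```python
def check_export_readiness(module):
    """Weight amax is recoverable at export time; activation amax is not."""
    # Weight scale: derivable from the weight tensor (what this PR backfills).
    if module.weight_quantizer.amax is None:
        module.weight_quantizer.amax = module.weight.detach().abs().amax()

    # Activation scale: requires inputs observed during calibration. If the
    # expert was never routed to, there is nothing to recover it from, and a
    # deployment framework may reject the checkpoint unless a default or
    # fallback input scale is written instead.
    if module.input_quantizer.amax is None:
        raise ValueError("input scale missing for uncalibrated expert")
```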
