[Recipes][LLM PTQ] Add nvfp4 MSE+FP8-cast-KV recipes (experts_only / mlp_only) + --recipe in example scripts #1407
First new file (the `experts_only` recipe, 48 lines):

```yaml
# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

imports:
  base_disable_all: configs/ptq/units/base_disable_all
  default_disabled_quantizers: configs/ptq/units/default_disabled_quantizers
  nvfp4: configs/numerics/nvfp4
  nvfp4_static: configs/numerics/nvfp4_static
  kv_fp8_cast: configs/ptq/units/kv_fp8_cast

metadata:
  recipe_type: ptq
  description: NVFP4 static weight (MSE FP8-scale sweep) and dynamic activation for expert layers only (W4A4), FP8 KV cache with constant amax.

quantize:
  algorithm:
    method: mse
    fp8_scale_sweep: true
    # layerwise=false required for VLMs where the decoder layers are nested under
    # `model.language_model.layers` (layerwise_calibrate can't find them otherwise).
    layerwise: false
  quant_cfg:
    - $import: base_disable_all
    - quantizer_name: '*mlp.experts*weight_quantizer'
      cfg:
        $import: nvfp4_static
    - quantizer_name: '*mlp.experts*input_quantizer'
      cfg:
        $import: nvfp4
    - quantizer_name: '*block_sparse_moe*weight_quantizer'
      cfg:
        $import: nvfp4_static
    - quantizer_name: '*block_sparse_moe*input_quantizer'
      cfg:
        $import: nvfp4
    - $import: kv_fp8_cast
    - $import: default_disabled_quantizers
```

Review thread on the license header:

Contributor: Stylistic nit: sibling recipes in this directory follow the

Contributor: +1

Contributor: agree.
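The `quantizer_name` entries in the recipe are glob-style wildcard patterns matched against fully qualified quantizer names. As an illustration only (the matching is done inside the quantization library; the module names below are made-up examples, not taken from the PR), shell `case` pattern matching shows which names a pattern like `*mlp.experts*weight_quantizer` would select:

```shell
#!/bin/sh
# Illustration: glob-match hypothetical quantizer names against one of the
# recipe's patterns. Only the expert *weight* quantizer under `mlp.experts`
# should match; input quantizers and `block_sparse_moe` names should not.
pattern='*mlp.experts*weight_quantizer'
matched=''
for name in \
  "model.layers.0.mlp.experts.3.down_proj.weight_quantizer" \
  "model.layers.0.mlp.experts.3.down_proj.input_quantizer" \
  "model.layers.0.self_attn.q_proj.weight_quantizer" \
  "model.layers.1.block_sparse_moe.experts.0.w1.weight_quantizer"
do
  case "$name" in
    $pattern) matched="$matched $name" ;;   # shell `case` performs glob matching
  esac
done
echo "matched:$matched"
```

Note that `*block_sparse_moe*` patterns are listed separately in the recipe precisely because those module paths do not contain the substring `mlp.experts`.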
Second new file (the `mlp_only` recipe, 54 lines):

```yaml
# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

imports:
  base_disable_all: configs/ptq/units/base_disable_all
  default_disabled_quantizers: configs/ptq/units/default_disabled_quantizers
  nvfp4: configs/numerics/nvfp4
  nvfp4_static: configs/numerics/nvfp4_static
  kv_fp8_cast: configs/ptq/units/kv_fp8_cast

metadata:
  recipe_type: ptq
  description: NVFP4 static weight (MSE FP8-scale sweep) and dynamic activation for MLP/MoE linear layers (W4A4), FP8 KV cache with constant amax.

quantize:
  algorithm:
    method: mse
    fp8_scale_sweep: true
    # layerwise=false required for VLMs where the decoder layers are nested under
    # `model.language_model.layers` (layerwise_calibrate can't find them otherwise).
    layerwise: false
  quant_cfg:
    - $import: base_disable_all
    - quantizer_name: '*mlp*weight_quantizer'
      cfg:
        $import: nvfp4_static
    - quantizer_name: '*mlp*input_quantizer'
      cfg:
        $import: nvfp4
    - quantizer_name: '*block_sparse_moe*weight_quantizer'
      cfg:
        $import: nvfp4_static
    - quantizer_name: '*block_sparse_moe*input_quantizer'
      cfg:
        $import: nvfp4
    - quantizer_name: '*.experts.*weight_quantizer'
      cfg:
        $import: nvfp4_static
    - quantizer_name: '*.experts.*input_quantizer'
      cfg:
        $import: nvfp4
    - $import: kv_fp8_cast
    - $import: default_disabled_quantizers
```
Review thread on a script change in this PR:

Contributor: Regression: deleting the `for qformat in $QFORMAT; do … done` loop also drops the implicit binding of the lowercase loop variable `$qformat`, which is still used below at `if [ "$qformat" == "bf16" ] || [ "$qformat" == "fp16" ]`. With the loop removed, `$qformat` is empty, so that bf16/fp16 shortcut (which symlinks the source model into `$SAVE_PATH` and marks `MODEL_CONFIG_EXIST=true`) will never trigger; users running `--quant=bf16` or `--quant=fp16` will now fall through to `python hf_ptq.py --qformat=bf16` instead. Either replace `$qformat` with `$QFORMAT` in that check, or add a dedicated `qformat="$QFORMAT"` assignment here.

Contributor: Do we still need the bf16/fp16 path anyway? Maybe we can deprecate them.

Contributor: Not sure if we still have use cases where we quantize fp32 to fp16.

Contributor: Yeah, I think we can delete. Let me add this to the PR.
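The second option from the review comment can be sketched as follows. This is a minimal standalone illustration, not the actual script: `QFORMAT`, `SAVE_PATH`, and `MODEL_CONFIG_EXIST` are the variable names quoted in the comment, while the argument parsing and symlink step of the real script are assumed and stubbed out.

```shell
#!/bin/sh
# Sketch: with the `for qformat in $QFORMAT` loop removed, rebind the
# lowercase variable explicitly so the bf16/fp16 shortcut still fires.
QFORMAT="bf16"                         # assumed to come from --quant=... parsing
SAVE_PATH="${TMPDIR:-/tmp}/qformat_demo"

qformat="$QFORMAT"                     # dedicated assignment replacing the old loop binding

MODEL_CONFIG_EXIST=false
if [ "$qformat" = "bf16" ] || [ "$qformat" = "fp16" ]; then
  # bf16/fp16 shortcut: no quantization needed; the real script symlinks
  # the source model into $SAVE_PATH here (stubbed as mkdir for the sketch).
  mkdir -p "$SAVE_PATH"
  MODEL_CONFIG_EXIST=true
fi
echo "MODEL_CONFIG_EXIST=$MODEL_CONFIG_EXIST"   # prints MODEL_CONFIG_EXIST=true
```

Without the `qformat="$QFORMAT"` line, the `if` test compares an empty string and the shortcut is silently skipped, which is exactly the regression the comment describes.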