Pull requests: intel/auto-round
rm duplicate args of the quantization extra config
#1334 opened Jan 23, 2026 by WeiweiZhang1 · 1 of 9 tasks
update release package and bump version
#1327 opened Jan 23, 2026 by chensuyue · 3 of 11 tasks
Autoround in vLLM Office Hours [documentation]
#1322 opened Jan 23, 2026 by yiliu30 · 1 of 18 tasks
enable glm4_moe_lite quantization & generation
#1321 opened Jan 22, 2026 by WeiweiZhang1 · 3 of 18 tasks
Delay materializing the replaced model weights until quantization
#1307 opened Jan 21, 2026 by yiliu30 · 6 of 7 tasks
Optimize FP8 layer conversion by skipping weight initialization
#1295 opened Jan 16, 2026 by Copilot (AI)
Robust FP8 layer detection for ignore_layers (#1283)
#1289 opened Jan 15, 2026 by scopophobic
Fix ignore_layers not working for FP8 models
#1286 opened Jan 15, 2026 by Copilot (AI) · 11 tasks done
[WIP][refactor quantizers][step 1] refactor rtn and tuning
#1278 opened Jan 14, 2026 by n1ck-guo
Filter: updated in the last three days (updated:>2026-01-20).