[None][chore] Remove glm_moe_dsa tokenizer WAR after Transformers 5.x upgrade #13901
longlee0622 wants to merge 1 commit into NVIDIA:main from
Conversation
[None][chore] Remove glm_moe_dsa tokenizer WAR after Transformers 5.x upgrade

The custom GlmMoeDsaTokenizer was added in NVIDIA#12586 to manually load GLM-5 checkpoints whose tokenizer_config.json uses transformers 5.x features (TokenizersBackend class, list-form extra_special_tokens) that were not supported by the pinned transformers 4.x.

With the dependency now bumped to transformers==5.3.0 (NVIDIA#12829), AutoTokenizer.from_pretrained() handles these checkpoints natively. This removes the workaround:

- Delete tensorrt_llm/tokenizer/glm_moe_dsa/.
- Drop the 'glm_moe_dsa' alias from TOKENIZER_ALIASES in both tokenizer/tokenizer.py and llmapi/llm_args.py. The 'deepseek_v32' alias and the --custom_tokenizer plumbing remain (used by the DeepSeek-V3.2 workaround for an unrelated issue).
- Drop custom_tokenizer="glm_moe_dsa" from the GLM-5 accuracy and guided-decoding tests, and from the deployment guide YAML.
- Update --custom_tokenizer help text to drop glm_moe_dsa from the examples (deepseek_v32 kept).

Verified end-to-end on 8xB200 with transformers==5.3.0:

- AutoTokenizer.from_pretrained() on /models/GLM-5-NVFP4 produces output equivalent to GlmMoeDsaTokenizer (vocab/special tokens/chat_template/encoded ids all match).
- trtllm-serve loads GLM-5-NVFP4 with custom_tokenizer=None and answers chat completions correctly.

Signed-off-by: Jonas Li <6110159+longlee0622@users.noreply.github.com>
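As a rough illustration of what the removal relies on, a minimal sanity check that transformers 5.x loads the GLM-5 checkpoint natively might look like the sketch below. The /models/GLM-5-NVFP4 path comes from the PR description; the sample input and printed fields are illustrative.

```python
# Minimal sketch: confirm transformers >= 5.x loads the GLM-5 checkpoint
# natively, with no custom tokenizer class. Assumes transformers==5.3.0 is
# installed and the checkpoint path from the PR description exists locally.
import transformers
from transformers import AutoTokenizer

assert transformers.__version__.startswith("5."), "requires transformers 5.x"

# Under the old pinned 4.x, this load failed because tokenizer_config.json
# uses 5.x-only features (TokenizersBackend class, list-form
# extra_special_tokens) -- hence the GlmMoeDsaTokenizer workaround.
tok = AutoTokenizer.from_pretrained("/models/GLM-5-NVFP4")

print(type(tok).__name__)
print(tok.encode("Hello, GLM-5!"))  # illustrative input
```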
📝 Walkthrough

This PR removes the GLM-Moe-Dsa tokenizer implementation, its public exports, registry entries, CLI help-text references, deployment guide examples, and test configurations. The change is a complete removal of a specific tokenizer type.

Changes
- GLM-Moe-Dsa Tokenizer Removal
🎯 Review effort: 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (inconclusive)
/bot run

PR_Github #47388 [ run ] triggered by Bot. Commit:

PR_Github #47388 [ run ] completed with state

/bot run --disable-fail-fast

/bot run --disable-fail-fast

PR_Github #47448 [ run ] triggered by Bot. Commit:

PR_Github #47447 [ run ] triggered by Bot. Commit:

PR_Github #47448 [ run ] completed with state

PR_Github #47447 [ run ] completed with state
The custom GlmMoeDsaTokenizer was added in #12586 to manually load GLM-5 checkpoints whose tokenizer_config.json uses transformers 5.x features (TokenizersBackend class, list-form extra_special_tokens) that were not supported by the pinned transformers 4.x.
With the dependency now bumped to transformers==5.3.0 (#12829), AutoTokenizer.from_pretrained() handles these checkpoints natively. This removes the workaround:

- Delete tensorrt_llm/tokenizer/glm_moe_dsa/.
- Drop the 'glm_moe_dsa' alias from TOKENIZER_ALIASES in both tokenizer/tokenizer.py and llmapi/llm_args.py. The 'deepseek_v32' alias and the --custom_tokenizer plumbing remain (used by the DeepSeek-V3.2 workaround for an unrelated issue); see the illustrative sketch after this list.
- Drop custom_tokenizer="glm_moe_dsa" from the GLM-5 accuracy and guided-decoding tests, and from the deployment guide YAML.
- Update --custom_tokenizer help text to drop glm_moe_dsa from the examples (deepseek_v32 kept).
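For illustration only, the alias-map change amounts to something like the following. The real registry lives in tokenizer/tokenizer.py and llmapi/llm_args.py; its exact shape is not shown in this PR, so all names below other than TOKENIZER_ALIASES, glm_moe_dsa, and deepseek_v32 are placeholders.

```python
# Hypothetical sketch of the TOKENIZER_ALIASES registry after this PR;
# this is not the actual tensorrt_llm code, just the shape of the change.
from typing import Callable
from transformers import AutoTokenizer

def _load_deepseek_v32(path: str):
    # Placeholder for the real DeepSeek-V3.2 workaround loader, which
    # stays in the tree because it addresses an unrelated issue.
    raise NotImplementedError("stand-in for the DeepSeek-V3.2 workaround")

TOKENIZER_ALIASES: dict[str, Callable] = {
    # "glm_moe_dsa": _load_glm_moe_dsa,  # removed: AutoTokenizer now handles GLM-5
    "deepseek_v32": _load_deepseek_v32,  # kept
}

def load_tokenizer(path: str, custom_tokenizer: str | None = None):
    """Resolve an optional --custom_tokenizer alias; default to AutoTokenizer."""
    if custom_tokenizer is not None:
        return TOKENIZER_ALIASES[custom_tokenizer](path)
    return AutoTokenizer.from_pretrained(path)
```

With the alias gone, GLM-5 takes the default AutoTokenizer path (custom_tokenizer=None), which is exactly what the end-to-end verification below exercises.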
Verified end-to-end on 8xB200 with transformers==5.3.0:

- AutoTokenizer.from_pretrained() on /models/GLM-5-NVFP4 produces output equivalent to GlmMoeDsaTokenizer (vocab/special tokens/chat_template/encoded ids all match).
- trtllm-serve loads GLM-5-NVFP4 with custom_tokenizer=None and answers chat completions correctly.
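A sketch of the kind of equivalence check described above, assuming a pre-removal checkout (the import path follows the deleted tensorrt_llm/tokenizer/glm_moe_dsa/ package, the from_pretrained entry point is assumed, and the sample prompts are illustrative):

```python
# Sketch of the vocab/special-tokens/chat-template/encoded-ids comparison:
# the removed GlmMoeDsaTokenizer versus a plain AutoTokenizer load under
# transformers==5.3.0. Must run on a checkout that predates this PR, since
# the custom class is deleted here.
from transformers import AutoTokenizer
from tensorrt_llm.tokenizer.glm_moe_dsa import GlmMoeDsaTokenizer  # assumed symbol

PATH = "/models/GLM-5-NVFP4"
old = GlmMoeDsaTokenizer.from_pretrained(PATH)  # assumed constructor
new = AutoTokenizer.from_pretrained(PATH)

assert old.get_vocab() == new.get_vocab()
assert old.all_special_tokens == new.all_special_tokens
assert old.chat_template == new.chat_template

for text in ["Hello, world!", "def f(x): return x + 1"]:  # illustrative prompts
    assert old.encode(text) == new.encode(text)
print("tokenizers match")
```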
Summary by CodeRabbit

Removed Features
- Removed the glm_moe_dsa tokenizer; configurations referencing the glm_moe_dsa alias require updates.

Documentation
- Deployment guide examples no longer reference glm_moe_dsa.
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions).
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment /bot help.