[None][fix] Fix accuracy regression in DeepSeek models #13924
taylor-yb-lee wants to merge 1 commit into NVIDIA:main
Conversation
auto_deploy: apply byte-level pre-tokenizer fix in init_tokenizer
DeepSeek-V3/R1 set tokenizer_class="LlamaTokenizer" in tokenizer_config.json
but ship a ByteLevel BPE tokenizer.json. Under transformers 5.x,
LlamaTokenizer.__init__ forces a Metaspace pre-tokenizer that silently
overrides the ByteLevel one, stripping spaces from prompts ("hello world"
-> "helloworld") and breaking the few-shot format that GSM8K strict-match
depends on (regression: gsm8k strict 95.30 -> 0.00, eval acc 95.38 -> 46.82
with no other change).
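For illustration, here is a minimal repro sketch of the symptom (the Hugging Face model id and the exact decode output are assumptions for the example, not taken from this PR):

```python
# Illustrative repro of the space-stripping symptom; not part of the change itself.
# The model id below is an assumption for the example.
from transformers import AutoTokenizer

# tokenizer_config.json points at LlamaTokenizer, so AutoTokenizer resolves to it.
tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1")

ids = tok("hello world")["input_ids"]
print(tok.decode(ids, skip_special_tokens=True))
# Expected "hello world"; with the Metaspace pre-tokenizer silently replacing the
# shipped ByteLevel one, the space is dropped and this prints "helloworld".
```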
PR NVIDIA#12829 added maybe_fix_byte_level_tokenizer in tensorrt_llm/tokenizer
and wired it into the pytorch backend via TransformersTokenizer.from_pretrained,
but AutoDeploy bypasses that path: AutoModelForCausalLMFactory.init_tokenizer
returns AutoTokenizer.from_pretrained(...) directly, and tokenizer_factory's
PreTrainedTokenizerBase branch wraps without re-running the fix.
Mirror the fix in init_tokenizer so AutoDeploy gets the same correction.
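A minimal sketch of what the mirrored fix could look like (the factory and method names come from the description above; the attribute names and the in-place calling convention of maybe_fix_byte_level_tokenizer are assumptions, not the actual diff):

```python
# Sketch only: self._model and self._tokenizer_kwargs are illustrative attribute
# names, and maybe_fix_byte_level_tokenizer is assumed to patch the tokenizer in place.
from transformers import AutoTokenizer

from tensorrt_llm.tokenizer import maybe_fix_byte_level_tokenizer


class AutoModelForCausalLMFactory:
    def init_tokenizer(self):
        tokenizer = AutoTokenizer.from_pretrained(self._model, **self._tokenizer_kwargs)
        # Re-apply the ByteLevel pre-tokenizer when tokenizer_config.json mislabels the
        # tokenizer class (as DeepSeek-V3/R1 do), mirroring the pytorch backend path.
        maybe_fix_byte_level_tokenizer(tokenizer)
        return tokenizer
```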
Verified on DeepSeek-R1-0528 with transformers 5.3.0:
gsm8k flex 95.072 / strict 95.072 / avg 95.07 (was 93.63 / 0.00 / 46.82),
MMLU 87.40, test PASSED.
Signed-off-by: Taylor Yeonbok Lee <249374542+taylor-yb-lee@users.noreply.github.com>
Description
auto_deploy: apply byte-level pre-tokenizer fix in init_tokenizer
DeepSeek-V3/R1 set tokenizer_class="LlamaTokenizer" in tokenizer_config.json but ship a ByteLevel BPE tokenizer.json. Under transformers 5.x, LlamaTokenizer.__init__ forces a Metaspace pre-tokenizer that silently overrides the ByteLevel one, stripping spaces from prompts ("hello world" -> "helloworld") and breaking the few-shot format that GSM8K strict-match depends on (regression: gsm8k strict 95.30 -> 0.00, eval acc 95.38 -> 46.82 with no other change).
PR #12829 added maybe_fix_byte_level_tokenizer in tensorrt_llm/tokenizer and wired it into the pytorch backend via TransformersTokenizer.from_pretrained, but AutoDeploy bypasses that path: AutoModelForCausalLMFactory.init_tokenizer returns AutoTokenizer.from_pretrained(...) directly, and tokenizer_factory's PreTrainedTokenizerBase branch wraps without re-running the fix.
Mirror the fix in init_tokenizer so AutoDeploy gets the same correction.
Verified on DeepSeek-R1-0528 with transformers 5.3.0
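As a rough post-fix sanity check (illustrative only; `factory` stands in for whichever AutoDeploy factory instance is in use, and this mirrors the repro above rather than an actual test in this PR):

```python
# Placeholder check: `factory` is a stand-in for the AutoDeploy factory instance.
tok = factory.init_tokenizer()
ids = tok("hello world")["input_ids"]
# With the ByteLevel pre-tokenizer restored, the round trip keeps the space
# instead of collapsing to "helloworld".
assert tok.decode(ids, skip_special_tokens=True) == "hello world"
```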
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment
/bot help.