[None][fix] Fix accuracy regression in DeepSeek models #13924

Open
taylor-yb-lee wants to merge 1 commit into NVIDIA:main from nv-auto-deploy:taylor/fix_tokenizer

Conversation

@taylor-yb-lee
Collaborator

@taylor-yb-lee taylor-yb-lee commented May 8, 2026

Summary by CodeRabbit

  • Bug Fixes
    • Improved tokenizer compatibility handling for HuggingFace models, addressing configuration mismatches with newer Transformers library versions.

Description

auto_deploy: apply byte-level pre-tokenizer fix in init_tokenizer

DeepSeek-V3/R1 set tokenizer_class="LlamaTokenizer" in tokenizer_config.json but ship a ByteLevel BPE tokenizer.json. Under transformers 5.x, LlamaTokenizer.__init__ forces a Metaspace pre-tokenizer that silently overrides the ByteLevel one, stripping spaces from prompts ("hello world" -> "helloworld") and breaking the few-shot format that GSM8K strict-match depends on (regression: gsm8k strict 95.30 -> 0.00, eval acc 95.38 -> 46.82 with no other change).
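A minimal toy sketch of why the override drops spaces. This is not the real HF `tokenizers` internals; the vocab and helper names are invented for illustration, and the real pre-tokenizers live in the Rust backend:

```python
# Toy model of the mismatch: a ByteLevel BPE vocab marks a leading space
# by folding it into the next word as "Ġ", while Metaspace replaces
# spaces with "▁". A Metaspace piece like "▁world" is absent from a
# ByteLevel vocab, so the space marker is effectively lost.

BYTE_LEVEL_VOCAB = {"hello", "Ġworld"}  # invented toy vocab

def byte_level_pretokenize(text: str) -> list[str]:
    """ByteLevel keeps the space, folded into the next word as 'Ġ'."""
    words = text.split(" ")
    return [words[0]] + ["Ġ" + w for w in words[1:]]

def metaspace_pretokenize(text: str) -> list[str]:
    """Metaspace marks word boundaries with '▁' instead."""
    return ["▁" + w for w in text.split(" ")]

def lookup(pieces: list[str]) -> list[str]:
    # '▁'-prefixed pieces are not in the ByteLevel vocab, so only the
    # bare word survives -- the space information is gone.
    return [p if p in BYTE_LEVEL_VOCAB else p.lstrip("▁") for p in pieces]

def decode(tokens: list[str]) -> str:
    # ByteLevel decoding only knows how to turn 'Ġ' back into a space.
    return "".join(tokens).replace("Ġ", " ")

print(decode(lookup(byte_level_pretokenize("hello world"))))  # hello world
print(decode(lookup(metaspace_pretokenize("hello world"))))   # helloworld
```

With the matching ByteLevel pre-tokenizer the round trip preserves the space; with the forced Metaspace one the decoded prompt collapses to "helloworld", which is exactly the few-shot-format breakage described above.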

PR #12829 added maybe_fix_byte_level_tokenizer in tensorrt_llm/tokenizer and wired it into the pytorch backend via TransformersTokenizer.from_pretrained, but AutoDeploy bypasses that path: AutoModelForCausalLMFactory.init_tokenizer returns AutoTokenizer.from_pretrained(...) directly, and tokenizer_factory's PreTrainedTokenizerBase branch wraps without re-running the fix.

Mirror the fix in init_tokenizer so AutoDeploy gets the same correction. Verified on DeepSeek-R1-0528 with transformers 5.3.0.
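The wiring of the fix can be sketched as follows. This uses stand-ins: the real maybe_fix_byte_level_tokenizer lives in tensorrt_llm/tokenizer and the real factory loads via AutoTokenizer.from_pretrained; the dict-based "tokenizer" and its keys below are purely illustrative:

```python
# Sketch of the AutoDeploy-side wiring only; all objects are stand-ins
# for the real tensorrt_llm classes.

def maybe_fix_byte_level_tokenizer(tokenizer):
    """Stand-in: restore the ByteLevel pre-tokenizer when the config
    forced Metaspace onto a ByteLevel BPE tokenizer (real logic: PR #12829)."""
    if (tokenizer.get("pre_tokenizer") == "Metaspace"
            and tokenizer.get("vocab_style") == "byte_level"):
        tokenizer["pre_tokenizer"] = "ByteLevel"
    return tokenizer

class AutoModelForCausalLMFactory:
    def __init__(self, load_fn):
        # load_fn stands in for AutoTokenizer.from_pretrained(...)
        self._load = load_fn

    def init_tokenizer(self):
        tokenizer = self._load()
        # The fix: post-process instead of returning the raw tokenizer,
        # since AutoDeploy bypasses TransformersTokenizer.from_pretrained
        # where the fix was originally wired in.
        return maybe_fix_byte_level_tokenizer(tokenizer)

broken = {"pre_tokenizer": "Metaspace", "vocab_style": "byte_level"}
fixed = AutoModelForCausalLMFactory(lambda: dict(broken)).init_tokenizer()
print(fixed["pre_tokenizer"])  # ByteLevel
```

The point of the design is that the correction runs at the single place AutoDeploy actually constructs its tokenizer, so no other call path needs to change.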

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

…okenizer

Verified on DeepSeek-R1-0528 with transformers 5.3.0:
  gsm8k flex 95.072 / strict 95.072 / avg 95.07 (was 93.63 / 0.00 / 46.82),
  MMLU 87.40, test PASSED.

Signed-off-by: Taylor Yeonbok Lee <249374542+taylor-yb-lee@users.noreply.github.com>
@taylor-yb-lee taylor-yb-lee marked this pull request as ready for review May 8, 2026 22:26
@taylor-yb-lee taylor-yb-lee requested a review from a team as a code owner May 8, 2026 22:27
@coderabbitai
Contributor

coderabbitai Bot commented May 8, 2026

Review Change Stack
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 96d3a67f-7741-4346-8b10-e3d9eb9c0f9d

📥 Commits

Reviewing files that changed from the base of the PR and between 43f4b94 and 3cb3e5c.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/auto_deploy/models/hf.py

📝 Walkthrough

Walkthrough

The change enhances AutoModelForCausalLMFactory.init_tokenizer to post-process tokenizers loaded from HuggingFace with a compatibility wrapper function. The method now applies maybe_fix_byte_level_tokenizer to address Transformers 5.x LlamaTokenizer and Metaspace pre-tokenizer behavior mismatches, and documents this compatibility fix inline.

Changes

Tokenizer Initialization Fix

File(s): tensorrt_llm/_torch/auto_deploy/models/hf.py
Summary: init_tokenizer wraps the AutoTokenizer.from_pretrained(...) result with maybe_fix_byte_level_tokenizer(...) to address Transformers 5.x LlamaTokenizer/Metaspace pre-tokenizer incompatibilities, including inline documentation of the compatibility fix.

🎯 2 (Simple) | ⏱️ ~8 minutes

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Title check (✅ Passed): The title mentions a fix for DeepSeek models and references the accuracy regression, which aligns with the main change (applying a tokenizer fix), but contains a typo ('accracy' instead of 'accuracy').
  • Description check (✅ Passed): The description clearly explains the issue (tokenizer mismatch in DeepSeek models), the root cause (Metaspace pre-tokenizer override under transformers 5.x), and the solution (mirroring the byte-level fix in init_tokenizer). However, the Test Coverage section is empty.
  • Docstring Coverage (✅ Passed): Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.
  • Linked Issues check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.




Comment @coderabbitai help to get the list of available commands and usage tips.
