
fix(basemodel): Format AssertionError message for max_seq_length vs max_total_token_num #1300

Merged

hiworldwzj merged 2 commits into main from wzj_fix, May 9, 2026

Conversation

@hiworldwzj
Collaborator

No description provided.

@hiworldwzj hiworldwzj merged commit 592cad2 into main on May 9, 2026
1 check passed
@hiworldwzj hiworldwzj deleted the wzj_fix branch on May 9, 2026 at 06:27
@gemini-code-assist Bot (Contributor) left a comment


Code Review

This pull request adds a validation check to the _check_mem_size method of the base model, ensuring that max_total_token_num is greater than or equal to max_seq_length unless the performance mode is set to 'personal'. The review feedback suggests including the actual values in the existing batch_max_tokens assertion for consistency, and fixing a logical discrepancy and a grammar issue in the newly added assertion message.
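For context, the validation described above can be sketched as follows. This is a hedged illustration, not the actual LightLLM source: the class name, constructor signature, and the `mode` attribute used to represent the 'personal' performance mode are assumptions; only the attribute names and the two assertions come from the PR.

```python
# Illustrative sketch of the _check_mem_size validation discussed in this PR.
# Class/constructor shape and the `mode` flag are assumptions for this example.
class ModelSketch:
    def __init__(self, max_total_token_num, max_seq_length, batch_max_tokens, mode="normal"):
        self.max_total_token_num = max_total_token_num
        self.max_seq_length = max_seq_length
        self.batch_max_tokens = batch_max_tokens
        self.mode = mode

    def _check_mem_size(self):
        # Pre-existing check: the token pool must exceed one batch's tokens.
        assert self.max_total_token_num > self.batch_max_tokens, (
            f"max_total_token_num must be > batch_max_tokens, "
            f"got max_total_token_num={self.max_total_token_num}, "
            f"batch_max_tokens={self.batch_max_tokens}."
        )
        # Check added by this PR: a single max-length sequence must also fit,
        # except when the performance mode is 'personal'.
        if self.mode != "personal":
            assert self.max_seq_length <= self.max_total_token_num, (
                f"max_total_token_num must be >= max_seq_length, "
                f"got max_total_token_num={self.max_total_token_num}, "
                f"max_seq_length={self.max_seq_length}."
            )

ModelSketch(4096, 2048, 1024)._check_mem_size()  # valid config: no error
```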

Comment on lines 216 to 218
assert (
    self.max_total_token_num > self.batch_max_tokens
), "max_total_token_num must be greater than batch_max_tokens"
Contributor


medium

For consistency with the detailed error message added below, consider including the actual values of max_total_token_num and batch_max_tokens in this assertion message. This helps users diagnose configuration issues more easily.

Suggested change
assert (
    self.max_total_token_num > self.batch_max_tokens
), "max_total_token_num must be greater than batch_max_tokens"
assert self.max_total_token_num > self.batch_max_tokens, (
    f"max_total_token_num must be > batch_max_tokens, "
    f"got max_total_token_num={self.max_total_token_num}, "
    f"batch_max_tokens={self.batch_max_tokens}. "
    f"Try setting --batch_max_tokens to a smaller value < {self.max_total_token_num}."
)

Comment on lines +225 to +230
assert self.max_seq_length <= self.max_total_token_num, (
    f"max_total_token_num must be >= max_seq_length, "
    f"got max_total_token_num={self.max_total_token_num}, "
    f"max_seq_length={self.max_seq_length}. "
    f"Try set --max_req_total_len a smaller value < {self.max_total_token_num}."
)
Contributor


medium

The error message's suggestion uses < but the condition is <=. Since max_seq_length may equal max_total_token_num, the suggested bound should reflect that. Also, 'Try set' should be 'Try setting' for correct grammar.

Suggested change
assert self.max_seq_length <= self.max_total_token_num, (
    f"max_total_token_num must be >= max_seq_length, "
    f"got max_total_token_num={self.max_total_token_num}, "
    f"max_seq_length={self.max_seq_length}. "
    f"Try set --max_req_total_len a smaller value < {self.max_total_token_num}."
)
assert self.max_seq_length <= self.max_total_token_num, (
    f"max_total_token_num must be >= max_seq_length, "
    f"got max_total_token_num={self.max_total_token_num}, "
    f"max_seq_length={self.max_seq_length}. "
    f"Try setting --max_req_total_len to a value <= {self.max_total_token_num}."
)
