
fix passing vllm model config's max_len to _generate_token_mode #3

Merged

wangshangsam merged 1 commit into CentML:mlperf-inf-mm-q3vl-v6.0 from soodoshll:qidongs/fix-max-tokens on Jan 29, 2026

Conversation

@soodoshll

Overview: Pass the vLLM model config's max_len through to `_generate_token_mode` (see the sketch after the issue list below).

Details:

Where should the reviewer start?

Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)

  • closes GitHub issue: #xxx
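
Since the diff itself isn't shown in this thread, here is a minimal sketch of the kind of change the title describes, assuming a vLLM-backed handler. Only vLLM's `LLM`, `SamplingParams`, and `model_config.max_model_len` are real APIs; the handler class, its attributes, and the `_generate_token_mode` signature are assumptions (the method name comes from the PR title).

```python
# Hypothetical sketch, not the actual diff from this PR.
from vllm import LLM, SamplingParams


class TokenModeHandler:
    """Assumed handler shape; only the vLLM calls below are real APIs."""

    def __init__(self, llm: LLM):
        self.llm = llm
        # Read the model's context limit from the vLLM model config
        # instead of relying on a hard-coded default.
        self.max_model_len = llm.llm_engine.model_config.max_model_len

    def _generate_token_mode(self, prompt: str, max_tokens: int):
        # Clamp the requested budget to the model's limit. Note that
        # max_model_len covers prompt + output tokens, so a real fix
        # would likely also subtract the prompt's token count.
        budget = min(max_tokens, self.max_model_len)
        params = SamplingParams(max_tokens=budget)
        return self.llm.generate([prompt], params)
```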

@github-actions

👋 Hi soodoshll! Thank you for contributing to CentML/dynamo.

Just a reminder: The NVIDIA Test Github Validation CI runs an essential subset of the testing framework to quickly catch errors. Your PR reviewers may elect to test the changes comprehensively before approving your changes.

🚀

@wangshangsam merged commit 9b0fc99 into CentML:mlperf-inf-mm-q3vl-v6.0 on Jan 29, 2026
10 of 14 checks passed