[https://nvbugs/6157892] [fix] MistralCommonImageProcessor text-only path #13916
evezhier wants to merge 1 commit into NVIDIA:main
Conversation
Signed-off-by: Olya Kozlova <okozlova@nvidia.com>
📝 Walkthrough — Changes: optional images parameter
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~2 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (inconclusive)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/models/modeling_mistral.py (1)
1-1: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win — Add/update NVIDIA copyright header in this modified file.
This file is modified in this PR and currently has no NVIDIA copyright header at the top.
As per coding guidelines: "All C++, Python, and other source files must contain NVIDIA copyright header with current modification year".
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tensorrt_llm/_torch/models/modeling_mistral.py` at line 1, Add or update the NVIDIA copyright header at the top of tensorrt_llm/_torch/models/modeling_mistral.py (above the existing "import copy" statement) so the modified file includes the required NVIDIA copyright header with the current modification year; insert the standard multi-line header block as used across the repo, ensuring it appears before any imports or code in the module.
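The comment references the repository's standard header without reproducing it. As a hedged illustration only — the exact wording and year used across TensorRT-LLM may differ — an SPDX-style header of the kind the guideline describes, plus a trivial check for its presence, might look like:

```python
# Hypothetical sketch; `HEADER` is an assumed form, not the repo's verbatim text.
HEADER = (
    "# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & "
    "AFFILIATES. All rights reserved.\n"
    "# SPDX-License-Identifier: Apache-2.0\n"
)


def has_nvidia_header(source: str) -> bool:
    """Return True if the module source begins with the copyright header."""
    return source.startswith(HEADER)


# A compliant module would start with the header, before any imports.
print(has_nvidia_header(HEADER + "import copy\n"))  # True
print(has_nvidia_header("import copy\n"))  # False
```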
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@tensorrt_llm/_torch/models/modeling_mistral.py`:
- Line 312: The method MistralCommonImageProcessor.__call__ uses a mutable
default parameter images=[], which can be shared across calls; change the
signature to use images=None and inside the method initialize it with images =
[] if images is None, updating any subsequent references to the parameter
accordingly (ensure no other logic depends on an empty-list default at function
definition time).
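A minimal sketch of why the finding matters: a mutable default is evaluated once, at function definition time, so every call that omits the argument shares the same list. The names below are illustrative, not from the PR.

```python
# Anti-pattern: the default list is created once and shared across calls.
def broken(item, images=[]):
    images.append(item)
    return images


print(broken(1))  # [1]
print(broken(2))  # [1, 2]  <- state from the previous call leaks in


# Recommended fix: default to None and initialize inside the body.
def fixed(item, images=None):
    if images is None:
        images = []
    images.append(item)
    return images


print(fixed(1))  # [1]
print(fixed(2))  # [2]
```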
---
Outside diff comments:
In `@tensorrt_llm/_torch/models/modeling_mistral.py`:
- Line 1: Add or update the NVIDIA copyright header at the top of
tensorrt_llm/_torch/models/modeling_mistral.py (above the existing "import copy"
statement) so the modified file includes the required NVIDIA copyright header
with the current modification year; insert the standard multi-line header block
as used across the repo, ensuring it appears before any imports or code in the
module.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: bc4f39f5-2abf-48cc-803a-c421816541ae
📒 Files selected for processing (1)
tensorrt_llm/_torch/models/modeling_mistral.py
  return ncols * nrows + nrows

- def __call__(self, text, images, **kwargs):
+ def __call__(self, text, images=[], **kwargs):
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
python - <<'PY'
import ast
from pathlib import Path
path = Path("tensorrt_llm/_torch/models/modeling_mistral.py")
tree = ast.parse(path.read_text())
for node in ast.walk(tree):
if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
for default in node.args.defaults:
if isinstance(default, (ast.List, ast.Dict, ast.Set)):
print(f"{path}:{default.lineno} -> {node.name} has mutable default")
PY
Repository: NVIDIA/TensorRT-LLM
Length of output: 1960
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# List files to understand the structure
fd "modeling_mistral.py" --type f
Repository: NVIDIA/TensorRT-LLM
Length of output: 166
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check file size first
wc -l tensorrt_llm/_torch/models/modeling_mistral.py
# Read around line 312 to get context
sed -n '300,320p' tensorrt_llm/_torch/models/modeling_mistral.py
Repository: NVIDIA/TensorRT-LLM
Length of output: 789
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Run AST analysis to find all mutable defaults in the file
python - <<'PY'
import ast
from pathlib import Path
path = Path("tensorrt_llm/_torch/models/modeling_mistral.py")
tree = ast.parse(path.read_text())
mutable_defaults = []
for node in ast.walk(tree):
if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
for i, default in enumerate(node.args.defaults):
if isinstance(default, (ast.List, ast.Dict, ast.Set)):
mutable_defaults.append({
'line': default.lineno,
'function': node.name,
'type': type(default).__name__
})
# Also check kw_defaults for keyword-only args
for i, default in enumerate(node.args.kw_defaults):
if default and isinstance(default, (ast.List, ast.Dict, ast.Set)):
mutable_defaults.append({
'line': default.lineno,
'function': node.name,
'type': type(default).__name__
})
if mutable_defaults:
for item in mutable_defaults:
print(f"Line {item['line']}: {item['function']} has mutable default ({item['type']})")
else:
print("No mutable defaults found")
PY
Repository: NVIDIA/TensorRT-LLM
Length of output: 109
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Get broader context to see the class name
sed -n '280,320p' tensorrt_llm/_torch/models/modeling_mistral.py | grep -E "^class |def __call__"
# Also check for the class definition
grep -n "class.*ImageProcessor" tensorrt_llm/_torch/models/modeling_mistral.py | head -5
Repository: NVIDIA/TensorRT-LLM
Length of output: 153
Fix mutable default argument images=[] in MistralCommonImageProcessor.__call__.
Line 312 uses images=[], which creates a shared mutable default across invocations. Change to images=None with conditional initialization.
Proposed fix
- def __call__(self, text, images=[], **kwargs):
+ def __call__(self, text, images=None, **kwargs):
+ if images is None:
+ images = []
  mm_items = []
🧰 Tools
🪛 Ruff (0.15.12)
[warning] 312-312: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
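The review scripts above run against a checked-out file; a self-contained variant of the same AST check, operating on an inline source string (the signature shown mirrors the one flagged at line 312, everything else is illustrative), can be sketched as:

```python
import ast

# Source string standing in for the file under review; Ruff reports this
# pattern as B006 (mutable argument default).
SOURCE = """
def __call__(self, text, images=[], **kwargs):
    return text
"""


def find_mutable_defaults(source: str):
    """Return (function_name, lineno) for every mutable argument default."""
    hits = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            defaults = list(node.args.defaults) + [
                d for d in node.args.kw_defaults if d is not None
            ]
            for default in defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    hits.append((node.name, default.lineno))
    return hits


print(find_mutable_defaults(SOURCE))  # [('__call__', 2)]
```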
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@tensorrt_llm/_torch/models/modeling_mistral.py` at line 312, The method
MistralCommonImageProcessor.__call__ uses a mutable default parameter images=[],
which can be shared across calls; change the signature to use images=None and
inside the method initialize it with images = [] if images is None, updating any
subsequent references to the parameter accordingly (ensure no other logic
depends on an empty-list default at function definition time).
Summary by CodeRabbit
Description
Allows a text-only path (images optional) for MistralCommonImageProcessor
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment
/bot help.