[Qwen2VL] Fix missing min_pixels/max_pixels attributes in fast image processor#43548

Closed
tomaszcichy98 wants to merge 1 commit into huggingface:main from tomaszcichy98:fix-qwen2vl-fast-processor-min-max-pixels

Conversation

Contributor

@tomaszcichy98 tomaszcichy98 commented Jan 28, 2026

What does this PR do?

Fixes a bug where Qwen2VLImageProcessorFast doesn't set min_pixels and max_pixels instance attributes, breaking compatibility with code that expects these attributes.

The Problem

The slow processor (Qwen2VLImageProcessor) sets self.min_pixels and self.max_pixels as instance attributes. The fast processor converts these to size["shortest_edge"] and size["longest_edge"] but never sets the original attributes.
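A simplified sketch of the mismatch (the classes below are illustrative stand-ins, not the actual transformers code; the default values are the ones from the verification run further down):

```python
# Illustrative stand-ins for the two processors (not the real implementation):

class SlowProcessorSketch:
    """Behaves like Qwen2VLImageProcessor: sets instance attributes."""
    def __init__(self, min_pixels=3136, max_pixels=12845056):
        self.min_pixels = min_pixels
        self.max_pixels = max_pixels

class FastProcessorSketch:
    """Behaves like Qwen2VLImageProcessorFast before this PR:
    the values survive only inside the size dict."""
    min_pixels = None  # class-level defaults, never overwritten per instance
    max_pixels = None

    def __init__(self, min_pixels=3136, max_pixels=12845056):
        self.size = {"shortest_edge": min_pixels, "longest_edge": max_pixels}

print(SlowProcessorSketch().min_pixels)  # 3136
print(FastProcessorSketch().min_pixels)  # None -- the attribute was never set
```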

This causes vLLM (and potentially other libraries) to fail when loading Qwen2-VL/Qwen2.5-Omni models:

File "transformers/models/qwen2_vl/image_processing_qwen2_vl.py", line 92, in smart_resize
    if h_bar * w_bar > max_pixels:
TypeError: '>' not supported between instances of 'int' and 'NoneType'
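The comparison itself is the entire failure mode; Python 3 refuses to order an int against None:

```python
# Minimal reproduction of the failing comparison, independent of transformers:
h_bar, w_bar = 56, 56
max_pixels = None  # what smart_resize effectively receives here
h_bar * w_bar > max_pixels
# TypeError: '>' not supported between instances of 'int' and 'NoneType'
```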

Why config values don't help

Even if preprocessor_config.json contains min_pixels and max_pixels values, this doesn't fix the issue because:

  1. use_fast: false in the config is ignored; transformers 5.0 always auto-upgrades to the fast processor
  2. The fast processor receives these values as kwargs but stores them only in the self.size dict, not as instance attributes
  3. Code accessing processor.min_pixels gets None instead of the config value

The Fix

Set min_pixels and max_pixels attributes after super().__init__() to maintain backward compatibility:

```python
super().__init__(size=size, **kwargs)
self.min_pixels = self.size.get("shortest_edge")
self.max_pixels = self.size.get("longest_edge")
```
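Because the attributes are read back out of self.size after super().__init__() has finished normalizing it, self.size remains the single source of truth and the two views cannot disagree at construction time.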

Verification

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")

# Before the fix
print(processor.image_processor.min_pixels)  # None
print(processor.image_processor.max_pixels)  # None

# After the fix
print(processor.image_processor.min_pixels)  # 3136
print(processor.image_processor.max_pixels)  # 12845056
```
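The after-fix values are the ones from the checkpoint's preprocessor_config.json, surfaced as instance attributes again.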

…processor

The fast image processor converts min_pixels/max_pixels to size dict entries
but doesn't set them as instance attributes. This breaks compatibility with
code that expects these attributes (e.g., vLLM).

The slow processor (Qwen2VLImageProcessor) sets these attributes, but the
fast processor (Qwen2VLImageProcessorFast) only sets self.size.

This causes vLLM to fail with:
  TypeError: '>' not supported between instances of 'int' and 'NoneType'

Fix by setting min_pixels/max_pixels attributes after super().__init__().
@tomaszcichy98 tomaszcichy98 force-pushed the fix-qwen2vl-fast-processor-min-max-pixels branch from 4f0a445 to 7ce8dab on January 28, 2026 10:55
@github-actions
Contributor

[For maintainers] Suggested jobs to run (before merge)

run-slow: glm_image, qwen2_vl, video_llama_3

@Rocketknight1
Member

cc @zucchini-nlp maybe?

Member

@zucchini-nlp zucchini-nlp left a comment

It was removed as part of the v5 release in favor of self.size. I believe the error is raised from within vLLM code?

We can ask the vLLM team to update the code on their side; not sure if they have already bumped to v5.
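For reference, a version-tolerant lookup on the caller's side could look like the sketch below (names are illustrative, not vLLM's actual code):

```python
def get_pixel_bounds(image_processor):
    """Read the pixel bounds from either the pre-v5 attributes or the
    v5 size dict, whichever is populated."""
    size = getattr(image_processor, "size", None) or {}
    min_pixels = getattr(image_processor, "min_pixels", None)
    max_pixels = getattr(image_processor, "max_pixels", None)
    if min_pixels is None:
        min_pixels = size.get("shortest_edge")
    if max_pixels is None:
        max_pixels = size.get("longest_edge")
    return min_pixels, max_pixels
```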

@zucchini-nlp
Member

fyi @hmellor, can be added in vllm-project/vllm#30566 if it's indeed not working

@tomaszcichy98
Contributor Author

> It was removed as part of the v5 release in favor of self.size. I believe the error is raised from within vLLM code?
>
> We can ask the vLLM team to update the code on their side; not sure if they have already bumped to v5.

Yeah it is coming from vLLM @zucchini-nlp

@hmellor
Member

hmellor commented Jan 29, 2026

This is already fixed in vLLM: vllm-project/vllm#33208

@hmellor hmellor closed this Jan 29, 2026