
feat(text-metrics): split oneig_reasoning#648

Open
davidberenstein1957 wants to merge 1 commit into feat/vlm-pr-3c-text-score-pair from feat/vlm-pr-3d-oneig-reasoning

Conversation

@davidberenstein1957
Member

Summary

  • Split oneig_reasoning into its own stacked PR
  • Add OneIGReasoningMetric implementation and lazy MetricRegistry loading
  • Add OneIG knowledge-reasoning benchmark wiring and reasoning-focused tests

Test plan

  • Run uv run pytest tests/evaluation/test_text_metrics.py -k oneig_reasoning


Adds oneig_reasoning metric, lazy registry loading, and focused benchmark/test wiring as the final text-metric stack branch.

Made-with: Cursor

@cursor (bot) left a comment


Cursor Bugbot has reviewed your changes and found 3 potential issues.


Comment @cursor review or bugbot run to trigger another review on this PR

Reviewed by Cursor Bugbot for commit e586366.

model_name=self.model_name,
llm_model_name=self.llm_model_name,
device=self.device,
)


Scorer not cached, models reload every batch

High Severity

_get_scorer creates a brand new _LLM2CLIPScorer instance on every update() call when no mock scorer is injected (the production path). Because _LLM2CLIPScorer._load_models lazily loads models by checking self._clip_model is not None, and each new instance starts with self._clip_model = None, the multi-gigabyte LLM2CLIP checkpoints are re-loaded from disk on every batch. The method needs to cache the newly created scorer in self._scorer before returning it.



self.model_name,
self.llm_model_name,
)
dtype = torch.bfloat16 if self.device == "cuda" else torch.float32


Device check misses "cuda:N" device strings

Medium Severity

The dtype selection (self.device == "cuda") and the autocast guard in score both use exact string equality, so they fail to match indexed device strings like "cuda:0". When the framework's device resolution produces "cuda:0" (which set_to_best_available_device("cuda") does via _resolve_cuda_device), the model loads in float32 instead of bfloat16 and autocast is skipped. The nearby attn_impl line correctly uses startswith("cuda:") but these two checks do not.

Additional locations: 1


input_pixels = self._processor(images=pil_images, return_tensors="pt").pixel_values.to(self.device)
captions = [text_prompt]
text_features = self._l2v.encode(captions, convert_to_tensor=True, device=self.device).to(self.device)
text_features = self._clip_model.get_text_features(text_features)


Text features computed outside torch.no_grad() context

Medium Severity

In _LLM2CLIPScorer.score, self._clip_model.get_text_features(text_features) runs outside torch.no_grad(), while the analogous get_image_features call is correctly wrapped. The model's parameters still have requires_grad=True (only train(mode=False) was called, not requires_grad_(False)), so PyTorch builds a full computation graph for the text projection forward pass that is never used for backprop. This wastes GPU memory and is inconsistent with the image feature computation on the very next line.
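A minimal demonstration of the underlying issue, with `torch.nn.Linear` standing in for the CLIP text head: `eval()` only flips the training flag, so forwards still record an autograd graph until wrapped in `torch.no_grad()`:

```python
import torch

model = torch.nn.Linear(4, 4)
model.eval()  # sets training=False, but params still have requires_grad=True

x = torch.randn(2, 4)

y_graph = model(x)        # autograd records a graph: y_graph.grad_fn is set
with torch.no_grad():
    y_nograd = model(x)   # no graph recorded: y_nograd.grad_fn is None
```

Wrapping `get_text_features` the same way as the adjacent `get_image_features` call avoids building that unused graph.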


