
feat(text-metrics): split qa_accuracy #645

Open
davidberenstein1957 wants to merge 1 commit into feat/vlm-pr-2-infrastructure from feat/vlm-pr-3a-qa-accuracy

Conversation

@davidberenstein1957 Member

Summary

  • Split qa_accuracy into its own stacked branch/PR
  • Add QAAccuracyMetric implementation
  • Wire GenEval benchmark to qa_accuracy + clip_score

Test plan

  • Run `uv run pytest tests/evaluation/test_text_metrics.py -k qa_accuracy`


Isolates qa_accuracy metric implementation and GenEval benchmark wiring so it can be reviewed independently before stacking the remaining text metrics.

Made-with: Cursor

@cursor (bot) left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.


Reviewed by Cursor Bugbot for commit 15db155.

```diff
         "between prompts and generated images via VQA-style questions."
     ),
-    metrics=["clip_score"],  # §3.2: Mask2Former; not in Pruna
+    metrics=["qa_accuracy", "clip_score"],  # strict QA + CLIP score
```

GenEval benchmark uses lenient mean aggregation

Medium Severity

The GenEval benchmark's qa_accuracy metric is intended for "strict QA" (all-or-nothing aggregation). However, Task.from_benchmark doesn't pass the all_or_nothing kwarg, so QAAccuracyMetric falls back to its default mean aggregation. This leads to inflated scores that are not comparable to the paper's reference results.
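The gap between the two aggregation modes can be sketched as follows. This is an illustrative function, not the actual QAAccuracyMetric API; the name `aggregate` and its signature are assumptions for the example.

```python
def aggregate(per_question_correct: list[bool], all_or_nothing: bool) -> float:
    """Aggregate per-question VQA results into one score for an image."""
    if all_or_nothing:
        # Strict QA: the image scores 1.0 only if every question is correct.
        return float(all(per_question_correct))
    # Lenient mean: partial credit per question, which inflates the score.
    return sum(per_question_correct) / len(per_question_correct)

# Three of four questions answered correctly:
answers = [True, True, True, False]
aggregate(answers, all_or_nothing=True)   # 0.0 under strict QA
aggregate(answers, all_or_nothing=False)  # 0.75 under mean aggregation
```

This is why the default matters: any image with even one failed question drops to 0.0 under strict QA, so mean-aggregated numbers will always be at least as high.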


```python
            raise ValueError(
                f"qa_accuracy aggregation must be one of {{'mean', 'all_or_nothing'}}. Got: {self.aggregation!r}."
            )
        self.metric_units = type(self).metric_units
```

Redundant metric_units self-assignment in __init__

Low Severity

Setting self.metric_units = type(self).metric_units is a no-op because metric_units is already declared as a class attribute and nothing earlier in __init__ (neither super().__init__ nor the surrounding code) overrides it. The line just shadows the class attribute with the same value and adds maintenance noise.
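A minimal sketch of why the assignment is a no-op, using a simplified stand-in class (not the real QAAccuracyMetric):

```python
class Metric:
    metric_units = "fraction"  # class attribute

    def __init__(self):
        # Copies the class attribute onto the instance with the identical
        # value; attribute lookup already falls back to the class, so
        # reading m.metric_units gives the same result with or without this.
        self.metric_units = type(self).metric_units

m = Metric()
m.metric_units  # "fraction" either way
```

The only observable effect is an extra entry in the instance `__dict__`, which the lookup rules make indistinguishable from the class attribute it shadows.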

