
feat(e2e-tests): stacked e2e after split metrics#641

Open
davidberenstein1957 wants to merge 1 commit into feat/vlm-pr-4c-img-edit-score from feat/vlm-pr-5-e2e-tests

Conversation


@davidberenstein1957 davidberenstein1957 commented Apr 25, 2026

Summary

  • Rebased the existing e2e branch on top of the fully split metric stack
  • Keeps e2e/integration-focused changes isolated from per-metric PRs
  • Removes overlap with metric-specific unit coverage now handled in dedicated PRs

Test plan

  • Run uv run pytest tests/evaluation/test_vlm_e2e.py tests/evaluation/test_task.py


@cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 4 potential issues.


Reviewed by Cursor Bugbot for commit 7f24f9d.

Comment thread on pyproject.toml:
"peft>=0.18.0,<0.19.0",
"trl<=0.21.0",
"termcolor==2.3.0",
"realesrgan",


Package realesrgan moved from optional to core dependency

High Severity

realesrgan was moved from the [upscale] optional dependency group into core dependencies, and the [upscale] extra was deleted entirely. This forces every user to install realesrgan and its heavy transitive dependencies (basicsr, facexlib, gfpgan, etc.) even if they never use upscaling. This PR is about VLM e2e tests and has no reason to change this. Likely an accidental inclusion from a rebase or merge.
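A minimal sketch of the suggested fix, assuming (as the comment states) that `realesrgan` previously lived only in an `[upscale]` extra; the surrounding layout is illustrative, not copied from the actual file:

```toml
# Hypothetical pyproject.toml fragment: restore realesrgan as an
# optional dependency instead of a core one.
[project.optional-dependencies]
upscale = [
    "realesrgan",
]
```

Users who need upscaling would then opt in with `pip install pruna[upscale]`, while everyone else avoids the heavy transitive dependencies (basicsr, facexlib, gfpgan).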



Comment thread on pyproject.toml:
 [project]
 name = "pruna"
-version = "0.3.3"
+version = "0.3.2"


Version downgraded and Python 3.13 support dropped

High Severity

version was downgraded from "0.3.3" to "0.3.2" and requires-python was tightened from ">=3.10,<3.14" to ">=3.10,<3.13", dropping Python 3.13 support. The PR description says "pyproject.toml — Already updated in PR-2", suggesting these regressions were accidentally included during a rebase or merge conflict resolution.



Comment thread on pyproject.toml:
evaluation = [
"outlines>1.2.0,<2.0.0",
"litellm>=1.0.0",
]


evaluation extra silently drops lmharness and rapidata

Medium Severity

The [evaluation] optional extra was redefined from ["pruna[rapidata]", "pruna[lmharness]"] to ["outlines>1.2.0,<2.0.0", "litellm>=1.0.0"]. Users running pip install pruna[evaluation] will no longer get lm-eval or rapidata. The [rapidata] extra was also completely removed. This is a silent backward-incompatible change to the package's public install interface.
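Based on the values quoted in the comment, restoring the previous install behavior would look roughly like the self-referencing extras pattern below; the pins for the `rapidata` and `lmharness` extras themselves are illustrative, since their exact contents are not shown here:

```toml
[project.optional-dependencies]
rapidata = ["rapidata"]    # illustrative; actual pins unknown
lmharness = ["lm-eval"]    # illustrative; actual pins unknown
# An extra can reference the package's own extras, so installing
# pruna[evaluation] pulls in both of the above, and the new direct
# dependencies can be kept alongside:
evaluation = [
    "pruna[rapidata]",
    "pruna[lmharness]",
    "outlines>1.2.0,<2.0.0",
    "litellm>=1.0.0",
]
```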



{"img_size": 224},
),
"DrawBench": (setup_drawbench_dataset, "prompt_collate", {}),
"DrawBench": (setup_drawbench_dataset, "prompt_with_auxiliaries_collate", {}),


DrawBench/GenAIBench collate change alters return type

Medium Severity

DrawBench and GenAIBench collate functions changed from prompt_collate (returns (prompts, None)) to prompt_with_auxiliaries_collate (returns (prompts, list[dict])). Any existing code consuming these datasets and expecting gt=None (e.g., model inference handlers, metric update calls that check for None ground truth) will now receive a list of dicts, potentially causing unexpected behavior.
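The contract change can be illustrated with a hypothetical sketch; the real pruna collate functions may differ in signature and field names, and `run_inference` is an invented consumer, not code from the repository:

```python
# Hypothetical sketch of the two collate return shapes described in
# the review comment; not pruna's actual implementation.

def prompt_collate(batch):
    """Old behavior: prompts only, ground truth is None."""
    return [sample["prompt"] for sample in batch], None

def prompt_with_auxiliaries_collate(batch):
    """New behavior: prompts plus one auxiliary dict per sample."""
    prompts = [sample["prompt"] for sample in batch]
    auxiliaries = [
        {k: v for k, v in sample.items() if k != "prompt"} for sample in batch
    ]
    return prompts, auxiliaries

# A consumer written against the old contract silently changes paths:
def run_inference(model_fn, batch, collate):
    prompts, gt = collate(batch)
    if gt is None:
        return model_fn(prompts)       # old DrawBench/GenAIBench path
    return model_fn(prompts), gt       # path previously reserved for gt datasets
```

With the new collate, `gt` is a `list[dict]` rather than `None`, so any `if gt is None:` branch in downstream handlers or metric update calls now takes the other path.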



- Add _vlm_batch_snapshot_helpers for test data generation
- Add end-to-end tests for metric interactions
- Add datamodule support for VLM evaluation
- Add task-level VLM metric integration
- Add VLM timing/profiling support
- Strip VLM task routing kwargs in TorchMetricWrapper
- Update docs with VLM evaluation guide
- Update data loaders for image/caption support
- Add integration with evaluation agent for VLM metric selection
davidberenstein1957 changed the title from "feat(e2e): comprehensive VLM metric integration and testing" to "feat(e2e-tests): stacked e2e after split metrics" on Apr 28, 2026
davidberenstein1957 changed the base branch from main to feat/vlm-pr-4c-img-edit-score on April 28, 2026, 13:04