
Conversation

Contributor

@mollyxu mollyxu commented Dec 17, 2025

Remove remaining skipped tests from collection in internal environment

Prevent internal CI failures from skipped tests by excluding skipped tests from collection in internal environment.

TorchCodec tests were flaky or broken internally because:

  • Tests were using runtime pytest.skip() calls, so they were collected but then skipped during execution

Follow up to #1068

Changes Made

1. Replaced all instances of pytest.skip() with @pytest.mark.skipif()

2. Updated conftest.py to document our test collection logic

3. Updated all_supported_devices in conftest.py to accept an optional list of marks to add to CUDA device params

4. Removed skip that was no longer applicable in test_10bit_videos
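The core swap described in the list above can be sketched as follows. This is an illustrative example, not TorchCodec's actual test code: the test names are invented, and `in_fbcode` is a stand-in for the real internal-environment check referenced later in the thread.

```python
import pytest


def in_fbcode() -> bool:
    # Stand-in for the real internal-environment check (hypothetical body).
    return False


# Before: a runtime skip. The test is still collected, then skipped
# during execution, which is what tripped up the internal CI.
def test_decode_runtime_skip():
    if in_fbcode():
        pytest.skip("not supported internally")


# After: a collection-time mark. conftest.py can inspect the mark and
# exclude the test from collection entirely in the internal environment.
@pytest.mark.skipif(in_fbcode(), reason="not supported internally")
def test_decode_marked_skip():
    assert True
```

The difference matters because `pytest.skip()` only fires once the test body runs, whereas `skipif` marks are visible to collection hooks before any test executes.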

@meta-cla meta-cla bot added the CLA Signed label Dec 17, 2025

meta-codesync bot commented Dec 19, 2025

@mollyxu has imported this pull request. If you are a Meta employee, you can view this in D89555188.

@mollyxu mollyxu marked this pull request as ready for review December 23, 2025 19:57
# This just validates that we can decode 10-bit videos.
# TODO validate against the ref that the decoded frames are correct

if device == "cuda:beta" and asset is H264_10BITS:
Contributor Author
No longer applicable because the BETA interface falls back to the CPU now.

)


def _get_format_encode_params(formats, encode_params=VIDEO_ENCODE_PARAMS):
@mollyxu mollyxu Jan 5, 2026

To skip tests that depend on both format and encoding parameters
(e.g. avi / flv with yuv444p).

Contributor

Ideally, we would handle all parametrization for a test in one place.

Adding a helper function makes it harder to add new test cases and to understand what is being tested. If we cannot fit the logic into a pytest.param, it might be best to let these test cases appear skipped on CI.

This is one of the downsides of having a test with many parameters - we cannot easily isolate the parameter combinations that fail.

in_fbcode(),
reason="AV1 CUDA not supported internally",
),
),
Contributor

Nice job on these! It is easier to understand where the test is skipped now that the skip occurs in the parametrization.
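The pattern being praised here, with the `in_fbcode()` / "AV1 CUDA not supported internally" fragment visible in the diff context above, looks roughly like this. The test body and the surrounding parameter list are assumptions; only the skipif condition and reason come from the diff.

```python
import pytest


def in_fbcode() -> bool:
    # Stand-in for the real internal-environment check (hypothetical body).
    return False


@pytest.mark.parametrize(
    "device",
    [
        "cpu",
        # The skip lives in the parametrization, so only the CUDA
        # combination is excluded and the skip is visible at collection.
        pytest.param(
            "cuda",
            marks=pytest.mark.skipif(
                in_fbcode(),
                reason="AV1 CUDA not supported internally",
            ),
        ),
    ],
)
def test_get_frame_at_av1(device):
    assert device in ("cpu", "cuda")
```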

Contributor

@Dan-Flores Dan-Flores left a comment

Thanks for working on this @mollyxu! Let's focus these changes on the cases where we can simply swap out pytest.skip. I left some comments on the changes where I think the value tradeoff for increased complexity is less clear.



def all_supported_devices():
def all_supported_devices(*, cuda_marks: list[pytest.MarkDecorator] | None = None):
Contributor

I see how this covers the test skip in test_get_frame_at_av1; it's a tricky case.

Since there's only one case where we need the cuda_marks argument, I'm not sure it's worth exposing at the moment. While other usages of all_supported_devices are not affected by the new arg, it's not obvious that this function should be used to add marks to the cuda device params.

In this case, it might be best to keep the pytest.skip in test_get_frame_at_av1.

