Remove remaining tests from collection #1142
Conversation
```python
# This just validates that we can decode 10-bit videos.
# TODO validate against the ref that the decoded frames are correct
if device == "cuda:beta" and asset is H264_10BITS:
```
No longer applicable, because the BETA interface falls back to the CPU now.
```python
def _get_format_encode_params(formats, encode_params=VIDEO_ENCODE_PARAMS):
```
To skip tests that depend on both the format and the encoding parameters (i.e. avi / flv with yuv444p).
Ideally, we would handle all parametrization for a test in one place.
Adding a helper function makes it harder to add new test cases and to understand what is being tested. If we cannot fit the logic into a pytest.param, it might be best to let these test cases appear skipped on CI.
This is one of the downsides of having a test with many parameters: we cannot easily isolate the parameter combinations that fail.
```python
        in_fbcode(),
        reason="AV1 CUDA not supported internally",
    ),
),
```
Nice job on these! It is easier to understand where the test is skipped now that the skip occurs in the parametrization.
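The pattern being praised here is skipping a case via `pytest.param(..., marks=...)` rather than calling `pytest.skip()` inside the test body. A minimal sketch (the test name and the `in_fbcode` stub are hypothetical stand-ins for the repo's real helpers):

```python
import pytest


def in_fbcode():
    # Stand-in for the real environment check used in the test suite.
    return False


@pytest.mark.parametrize(
    "codec",
    [
        "h264",
        # The skip condition is attached to this one parameter, so the
        # decision is visible at collection time in the parametrization.
        pytest.param(
            "av1",
            marks=pytest.mark.skipif(
                in_fbcode(), reason="AV1 CUDA not supported internally"
            ),
        ),
    ],
)
def test_decode(codec):
    assert codec in ("h264", "av1")
```

Because the mark is evaluated during collection, tooling that filters collected items can see (and exclude) the skipped case without ever running the test body.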
Dan-Flores left a comment:
Thanks for working on this @mollyxu! Let's focus these changes on the cases where we can simply swap out pytest.skip. I left some comments on the changes where I think the value tradeoff for increased complexity is less clear.
```diff
-def all_supported_devices():
+def all_supported_devices(*, cuda_marks: list[pytest.MarkDecorator] | None = None):
```
I see how this covers the test skip in test_get_frame_at_av1; it's a tricky case.
Since there's only one case where we need the cuda_marks argument, I'm not sure it's worth exposing at the moment. While other usages of all_supported_devices are not affected by the new arg, it's not obvious that this function should be used to add marks to the cuda device params.
In this case, it might be best to keep the pytest.skip in test_get_frame_at_av1.
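For context, a helper like the one under discussion might look roughly as follows. This is a hypothetical sketch, not the actual conftest.py code: the real device list and typing may differ.

```python
import pytest


def all_supported_devices(*, cuda_marks=None):
    # Hypothetical sketch: return device params for parametrization,
    # optionally attaching extra marks (e.g. skipif) to the CUDA entry.
    params = [pytest.param("cpu", id="cpu")]
    params.append(pytest.param("cuda", id="cuda", marks=cuda_marks or []))
    return params
```

The reviewer's concern is visible here: every caller now sees a keyword argument that only one test needs, and nothing about the name suggests it attaches marks specifically to the CUDA params.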
Remove remaining skipped tests from collection in internal environment
Prevent internal CI failures from skipped tests by excluding skipped tests from collection in internal environment.
TorchCodec tests were flaky / broken internally because tests used pytest.skip() calls, so they were collected but then skipped during execution.
Follow up to #1068
Changes Made
1. Replaced all instances of `pytest.skip()` with `@pytest.mark.skipif()`
2. Updated `conftest.py` to document our test collection logic
3. Updated `all_supported_devices` in `conftest.py` to accept an optional list of marks to add to cuda devices
4. Removed a skip that was no longer applicable in `test_10bit_videos`
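The core of change 1 is moving the skip decision from the test body to a mark, so it is visible at collection time. A before/after sketch (the test names and the `cuda_available` stub are hypothetical):

```python
import pytest


def cuda_available():
    # Hypothetical stand-in for the real capability check.
    return False


# Before: the test is always collected; the skip only happens when the
# body runs and pytest.skip() raises.
def test_cuda_decode_before():
    if not cuda_available():
        pytest.skip("CUDA not available")


# After: the condition is attached as a skipif mark, so collection-time
# tooling can exclude the test without executing its body.
@pytest.mark.skipif(not cuda_available(), reason="CUDA not available")
def test_cuda_decode_after():
    pass
```

This is what lets the internal environment drop skipped tests from collection entirely: a `skipif` mark can be inspected on the collected item, while a `pytest.skip()` call cannot be seen until the test runs.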