
[PyTorch] Remove internal PyTorch testing helper #2969

Open
timmoon10 wants to merge 3 commits into NVIDIA:main from timmoon10:tmoon/debug-fused-optimizer-test

Conversation

@timmoon10
Collaborator

Description

We have been experiencing test failures in test_fused_optimizer.py due to an import error when importing torch.testing._internal.common_device_type. Fortunately, largeTensorTest is simple to reimplement, so I figure that's better than depending on unstable internal tools.
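
For illustration, here is a minimal sketch of the kind of inline guard that can replace the decorator. The helper name is hypothetical and the 60 GB threshold is taken from the review summary below; the actual change inlines the check directly inside test_large_tensor.

```python
import gc

import pytest
import torch


def skip_if_insufficient_cuda_memory(required_bytes: int) -> None:
    """Skip the current test unless the GPU has `required_bytes` of free memory."""
    # Flush Python garbage and the CUDA caching allocator so that
    # mem_get_info reports memory that is actually available.
    gc.collect()
    torch.cuda.empty_cache()
    free_bytes, _total_bytes = torch.cuda.memory.mem_get_info()
    if free_bytes < required_bytes:
        pytest.skip("Insufficient available GPU memory")


# Roughly equivalent to the old @largeTensorTest("60GB", "cuda") decorator:
# skip_if_insufficient_cuda_memory(60 * 1024**3)
```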

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

  • Remove internal PyTorch testing helper

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Signed-off-by: Tim Moon <tmoon@nvidia.com>
@timmoon10 added the bug (Something isn't working) label on May 8, 2026
@greptile-apps
Contributor

greptile-apps Bot commented May 8, 2026

Greptile Summary

This PR replaces the unstable torch.testing._internal.common_device_type.largeTensorTest decorator — which was causing import failures — with a self-contained inline guard inside test_large_tensor. The reimplementation flushes Python GC and the CUDA allocator cache before querying free memory, matching the behavior of the upstream PyTorch implementation.

  • The @largeTensorTest("60GB", "cuda") decorator is dropped; the test now explicitly calls gc.collect(), torch.cuda.empty_cache(), and checks torch.cuda.memory.mem_get_info()[0] before proceeding, skipping via pytest.skip if memory is insufficient.
  • No test logic is changed — only the mechanism for detecting and skipping when there is not enough GPU memory.

Confidence Score: 5/5

Safe to merge — the change removes a flaky internal dependency and replaces it with a straightforward, self-contained memory guard that correctly flushes the CUDA allocator cache before querying free memory.

The change is narrow and low-risk: one import removed, one decorator replaced with three equivalent lines. The reimplementation includes gc.collect() and torch.cuda.empty_cache() before the memory check, which is the correct approach and matches the upstream PyTorch reference. No test logic is altered.

No files require special attention.

Important Files Changed

Filename: tests/pytorch/test_fused_optimizer.py
Overview: Removes the unstable internal PyTorch import and reimplements the large-tensor skip logic inline with gc.collect(), torch.cuda.empty_cache(), and a mem_get_info() check.

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[test_large_tensor called] --> B[gc.collect]
    B --> C[torch.cuda.empty_cache]
    C --> D{mem_get_info free >= 60 GB?}
    D -- No --> E[pytest.skip Insufficient available memory]
    D -- Yes --> F[Allocate large tensors 2x 2359332864 fp16]
    F --> G[Run FusedAdam optimizer step]
    G --> H[Assert close vs torch.optim.Adam]
    H --> I[torch.cuda.synchronize]
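
Read together with the change, the flow above corresponds roughly to a test body like the following. This is a hedged sketch, not the actual test code: the FusedAdam import path, learning rate, and tolerances are assumptions, while the tensor size and the comparison against torch.optim.Adam come from the chart.

```python
import torch
from torch.testing import assert_close

# Import path is an assumption; Transformer Engine ships a fused Adam optimizer.
from transformer_engine.pytorch.optimizers import FusedAdam


def run_large_tensor_comparison(numel: int = 2359332864) -> None:
    # The inline memory guard sketched in the description runs before this point.
    param = torch.rand(numel, dtype=torch.float16, device="cuda", requires_grad=True)
    ref_param = param.detach().clone().requires_grad_(True)

    opt = FusedAdam([param], lr=1e-3)
    ref_opt = torch.optim.Adam([ref_param], lr=1e-3)

    # Apply the same gradient to both optimizers.
    grad = torch.rand_like(param)
    param.grad = grad
    ref_param.grad = grad.clone()

    opt.step()
    ref_opt.step()

    # The fused optimizer should match the PyTorch reference within tolerance.
    assert_close(param, ref_param, rtol=1e-3, atol=1e-3)
    torch.cuda.synchronize()
```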

Reviews (2): Last reviewed commit: "[pre-commit.ci] auto fixes from pre-comm..."

Comment thread: tests/pytorch/test_fused_optimizer.py
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Signed-off-by: Tim Moon <4406448+timmoon10@users.noreply.github.com>
@timmoon10
Collaborator Author

/te-ci pytorch


Labels

bug (Something isn't working)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant