fix: Additional tests optimization when running on github actions. #293
base: main
Conversation
Merge Protections: Your pull request matches the following merge protections and will not be merged until they are valid.

🟢 Enforce conventional commit — Wonderful, this rule succeeded. Make sure that we follow https://www.conventionalcommits.org/en/v1.0.0/
…-computing/mellea into fix/286-tests-cleanup
```diff
  hf_model_name="ibm-granite/granite-4.0-micro",
- ollama_name="ibm/granite4:micro",
+ ollama_name="granite4:micro",
+ openai_name="granite4:micro",  # setting this just for testing purposes.
```
Since this is user facing, I don't think we should set this for testing purposes. We should just explicitly refer to .ollama_name when instantiating the backend/session.
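Rather than overloading `openai_name` for test convenience, tests can refer to the provider-specific field explicitly when building the backend. A minimal sketch of that pattern, using hypothetical `ModelIdentifier` and `OllamaBackend` stand-ins (illustrative only, not mellea's actual API):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class ModelIdentifier:
    """Stand-in for a model-identifier record with per-provider names."""

    hf_model_name: str
    ollama_name: str
    # Leave openai_name unset rather than reusing the Ollama name.
    openai_name: Optional[str] = None


GRANITE_4_MICRO = ModelIdentifier(
    hf_model_name="ibm-granite/granite-4.0-micro",
    ollama_name="granite4:micro",
)


class OllamaBackend:
    """Stand-in backend that is told explicitly which model name to use."""

    def __init__(self, model_name: str):
        self.model_name = model_name


# The test site picks the provider-specific field itself, so no
# user-facing field has to carry a testing-only value:
backend = OllamaBackend(model_name=GRANITE_4_MICRO.ollama_name)
```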
test/conftest.py (outdated)
```python
@pytest.fixture(autouse=True, scope="function")
def aggressive_cleanup():
    """Aggressive memory cleanup after each test to prevent OOM on CI runners."""
    yield
    # Only run aggressive cleanup in CI where memory is constrained
    if int(os.environ.get("CICD", 0)) != 1:
        return

    # Cleanup after each test
    gc.collect()
    gc.collect()

    # If torch is available, clear CUDA cache
    try:
        import torch

        if torch.cuda.is_available():
            torch.cuda.empty_cache()
            torch.cuda.synchronize()
    except ImportError:
        pass


@pytest.fixture(autouse=True, scope="module")
def cleanup_module_fixtures():
    """Cleanup module-scoped fixtures to free memory between test modules."""
    yield
    # Only run aggressive cleanup in CI where memory is constrained
    if int(os.environ.get("CICD", 0)) != 1:
        return

    # Cleanup after module
    gc.collect()
    gc.collect()
    gc.collect()

    # If torch is available, clear CUDA cache
    try:
        import torch

        if torch.cuda.is_available():
            torch.cuda.empty_cache()
            torch.cuda.synchronize()
    except ImportError:
        pass
```
Can we extract this logic into a single function and then just create two pytest fixtures that call it?
done
This PR aims to fix #286, with some additional input from @jakelorocco. For now, it:
- conftest.py
- qualitative