feat: show elapsed time after model downloads #427
Conversation
📝 Walkthrough
Adds elapsed-time tracking to the model download command: starts a monotonic timer only when a download proceeds, formats the duration via a new `_format_elapsed` helper, and prints the elapsed time on completion.
```mermaid
sequenceDiagram
    participant CLI as "CLI"
    participant Cmd as "models.download\n(command)"
    participant HF as "HuggingFace\ndownloader"
    participant Gen as "Generic\ndownloader"
    participant FS as "Filesystem"
    CLI->>Cmd: invoke download
    Cmd->>Cmd: check file exists
    alt file exists
        Cmd-->>CLI: return early (no timer)
    else file not exists
        Cmd->>Cmd: start time.monotonic()
        opt huggingface path
            Cmd->>HF: download model
            HF->>FS: write file
            HF-->>Cmd: done
        end
        opt generic path
            Cmd->>Gen: download_file(...)
            Gen->>FS: write file
            Gen-->>Cmd: done
        end
        Cmd->>Cmd: stop timer
        Cmd-->>CLI: print "Done in <elapsed>"
    end
```
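The formatting helper itself is not shown in this thread. A minimal sketch consistent with the `Xs` / `Xm Ys` / `Xh Ym Zs` contract described in the PR summary might look like this; the name `format_elapsed` and the exact thresholds are assumptions, not the PR's actual code:

```python
import time


def format_elapsed(seconds: float) -> str:
    """Render a duration as "Xs", "Xm Ys", or "Xh Ym Zs"."""
    # Round before comparing so inputs like 59.99 fall through to the
    # minute branch instead of printing "60.0s".
    rounded = round(seconds, 1)
    if rounded < 60.0:
        return f"{rounded:.1f}s"
    minutes, secs = divmod(int(round(seconds)), 60)
    hours, minutes = divmod(minutes, 60)
    if hours:
        return f"{hours}h {minutes}m {secs}s"
    return f"{minutes}m {secs}s"


# Usage mirroring the flow above: measure only the download itself.
start = time.monotonic()
# ... download happens here ...
print(f"Done in {format_elapsed(time.monotonic() - start)}")
```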
🚥 Pre-merge checks: ✅ Passed checks (2 passed)
Codecov Report
✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##             main     #427      +/-   ##
==========================================
+ Coverage   76.36%   76.43%   +0.07%
==========================================
  Files          35       35
  Lines        4261     4274      +13
==========================================
+ Hits         3254     3267      +13
  Misses       1007     1007
```
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/comfy_cli/command/models/test_models.py (1)
355-372: 🧹 Nitpick | 🔵 Trivial — Assert the CLI exit code in this integration test.
You already assert output content; adding success status makes this check less flaky and more diagnostic if behavior regresses.
✅ Suggested test hardening
```diff
 result = runner.invoke(
     app,
     [
         "download",
         "--url",
         "http://example.com/model.bin",
         "--downloader",
         "aria2",
         "--filename",
         "model.bin",
     ],
 )
+assert result.exit_code == 0
 assert mock_dl.called
 _, kwargs = mock_dl.call_args
 assert kwargs.get("downloader") == "aria2"
 assert "Done in " in result.output
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/comfy_cli/command/models/test_models.py` around lines 355-372, add an assertion that the CLI exited successfully by checking `result.exit_code == 0` after calling `runner.invoke(app, [...])` in the test (near the existing assertions that inspect `mock_dl` and `result.output`); this ensures the integration test asserts a successful exit status in addition to checking "Done in " and the downloader argument from `mock_dl.call_args`.

comfy_cli/command/models/models.py (1)
322-359: ⚠️ Potential issue | 🟠 Major — Start timing at the actual download call, not before preflight steps.
At line 322, timing starts before Hugging Face auth probing and a potential runtime `pip install` of `huggingface_hub`. That over-reports download duration and can be very misleading on first run.
⏱️ Suggested fix
```diff
-    start_time = time.monotonic()
-
     if is_huggingface_url and check_unauthorized(url, headers):
         if hf_api_token is None:
             print(
                 f"Unauthorized access to Hugging Face model. Please set the Hugging Face API token using `comfy model download --set-hf-api-token` or via the `{constants.HF_API_TOKEN_ENV_KEY}` environment variable"
             )
             return
         else:
             try:
                 import huggingface_hub
             except ImportError:
                 print("huggingface_hub not found. Installing...")
                 import subprocess

                 from comfy_cli.resolve_python import resolve_workspace_python

                 python = resolve_workspace_python(str(get_workspace()))
                 subprocess.check_call([python, "-m", "pip", "install", "huggingface_hub"])
                 import huggingface_hub

             print(f"Downloading model {model_id} from Hugging Face...")
+            start_time = time.monotonic()
             output_path = huggingface_hub.hf_hub_download(
                 repo_id=repo_id,
                 filename=hf_filename,
                 subfolder=hf_folder_name,
                 revision=hf_branch_name,
                 token=hf_api_token,
                 local_dir=get_workspace() / relative_path,
                 cache_dir=get_workspace() / relative_path,
             )
+            elapsed = time.monotonic() - start_time
             print(f"Model downloaded successfully to: {output_path}")
+            print(f"Done in {_format_elapsed(elapsed)}")
     else:
         print(f"Start downloading URL: {url} into {local_filepath}")
+        start_time = time.monotonic()
         download_file(url, local_filepath, headers, downloader=resolved_downloader)
-
-    elapsed = time.monotonic() - start_time
-    print(f"Done in {_format_elapsed(elapsed)}")
+        elapsed = time.monotonic() - start_time
+        print(f"Done in {_format_elapsed(elapsed)}")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@comfy_cli/command/models/models.py` around lines 322 - 359, The timing starts too early (start_time variable) before preflight actions like auth checks and optional pip install, so move the start_time assignment to immediately before the actual download call(s): set start_time right before calling huggingface_hub.hf_hub_download in the Hugging Face branch and right before calling download_file in the fallback branch, then compute elapsed and call _format_elapsed as currently done; update references to start_time so no timing is measured across the auth/install steps.
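The pattern the reviewer asks for — starting the monotonic clock immediately before the transfer and nowhere else — can be sketched generically. This wrapper is illustrative only, not code from the PR:

```python
import time


def run_timed(fn, *args, **kwargs):
    # Start the clock right before the measured work, so preflight steps
    # (auth probing, optional pip installs) are excluded from "elapsed".
    start = time.monotonic()
    result = fn(*args, **kwargs)
    elapsed = time.monotonic() - start
    return result, elapsed


# Hypothetical usage with a stand-in download function:
result, elapsed = run_timed(lambda url: "ok", "http://example.com/model.bin")
print(f"Done in {elapsed:.1f}s")
```

`time.monotonic()` is used rather than `time.time()` because it cannot jump backwards if the system clock is adjusted mid-download.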
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 485192e0-e7a3-40fb-9da4-3f08913ddfc1
📒 Files selected for processing (2)
comfy_cli/command/models/models.py
tests/comfy_cli/command/models/test_models.py
Force-pushed e522960 to 76865c9
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/comfy_cli/command/models/test_models.py (1)
351-367: 🧹 Nitpick | 🔵 Trivial — Assert exit code before checking output text.
Line 367 validates output content, but this can mask failures if the command exits non-zero with partial output. Add an explicit `result.exit_code == 0` assertion first.
Proposed test hardening
```diff
 result = runner.invoke(
     app,
     [
         "download",
         "--url",
         "http://example.com/model.bin",
         "--downloader",
         "aria2",
         "--filename",
         "model.bin",
     ],
 )
 assert mock_dl.called
 _, kwargs = mock_dl.call_args
 assert kwargs.get("downloader") == "aria2"
+assert result.exit_code == 0
 assert "Done in " in result.output
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/comfy_cli/command/models/test_models.py` around lines 351 - 367, The test calls runner.invoke(...) and then checks result.output but does not assert the process succeeded; add an explicit assert that result.exit_code == 0 immediately after the runner.invoke call (before inspecting result.output or mock calls) to ensure failures with non-zero exit codes are caught; locate the invocation of runner.invoke in the test (variable name result) and insert the exit code assertion there.
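The assertion ordering being suggested can be seen in isolation with a stdlib-only sketch. `FakeResult` is a hypothetical stand-in for the result object `runner.invoke()` returns; only the `exit_code` / `output` / `call_args` pattern is taken from the review:

```python
from unittest.mock import MagicMock


class FakeResult:
    # Stand-in for the object runner.invoke() returns: it exposes
    # exit_code and output, which is all these assertions need.
    def __init__(self, exit_code: int, output: str):
        self.exit_code = exit_code
        self.output = output


# Simulate the patched downloader being called by the command.
mock_dl = MagicMock()
mock_dl("http://example.com/model.bin", "model.bin", None, downloader="aria2")
result = FakeResult(exit_code=0, output="Done in 1.2s\n")

# Assert success first so a non-zero exit fails fast,
# before any string matching on possibly partial output.
assert result.exit_code == 0
assert mock_dl.called
_, kwargs = mock_dl.call_args
assert kwargs.get("downloader") == "aria2"
assert "Done in " in result.output
```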
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@comfy_cli/command/models/models.py`:
- Around lines 39-40: The seconds branch can print "60.0s" for inputs like 59.95 because the raw float is compared but a rounded value is formatted. Round first and compare the rounded value: compute `rounded_seconds = round(seconds, 1)` (or use `round(seconds, 1)` directly in the condition), then use that rounded value for both the comparison and the return `f"{rounded_seconds:.1f}s"`, so values that round to 60.0 fall through to the minute formatting.
In `@tests/comfy_cli/command/models/test_models.py`:
- Around lines 310-311: Remove the extra blank line around the `TestFormatElapsed` class definition to satisfy the formatter, then run the project formatter (ruff / black) to verify.
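The rounding issue flagged for lines 39-40 can be demonstrated in a few lines. The helper name `seconds_label` here is illustrative; the real helper lives in `models.py`:

```python
def seconds_label(seconds: float) -> str:
    # The buggy shape compared the raw float but printed the rounded one:
    #   if seconds < 60: return f"{seconds:.1f}s"   -> "60.0s" for 59.99
    # The fix: round first, then compare the rounded value, so anything
    # that rounds to 60.0 falls through to minute formatting.
    rounded = round(seconds, 1)
    if rounded < 60.0:
        return f"{rounded:.1f}s"
    minutes, secs = divmod(int(round(seconds)), 60)
    return f"{minutes}m {secs}s"


print(seconds_label(59.9))   # safely below the boundary
print(seconds_label(59.99))  # rounds to 60.0, falls through to minutes
```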
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 83ddea12-1d7a-41ec-8ba2-b691d16ae35b
📒 Files selected for processing (2)
comfy_cli/command/models/models.py
tests/comfy_cli/command/models/test_models.py
class TestFormatElapsed:
Remove the extra blank line to fix the failing formatter check.
Line 310 has spacing that triggers the `ruff format --diff` pipeline failure; please run the formatter on this file.
🧰 Tools
🪛 Pylint (4.0.5)
[convention] 311-311: Missing class docstring
(C0115)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/comfy_cli/command/models/test_models.py` around lines 310 - 311, Remove
the extra blank line inside or immediately after the TestFormatElapsed class
definition to satisfy the formatter; locate the class TestFormatElapsed in
tests/comfy_cli/command/models/test_models.py and ensure there is no unintended
empty line between the class declaration and its first member or the next code
block, then run the project formatter (ruff / black) to verify the change.
Print "Done in Xs" / "Xm Ys" / "Xh Ym Zs" after successful model downloads, covering both the httpx/aria2 and HuggingFace Hub paths. Closes #421
Force-pushed 76865c9 to 0505896
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/comfy_cli/command/models/test_models.py (1)
353-370: 🧹 Nitpick | 🔵 Trivial — Assert command success before checking output text.
Add an exit-code assertion so this test fails fast on command errors instead of only string matching.
✅ Suggested test hardening
```diff
 result = runner.invoke(
     app,
     [
         "download",
         "--url",
         "http://example.com/model.bin",
         "--downloader",
         "aria2",
         "--filename",
         "model.bin",
     ],
 )
 assert mock_dl.called
 _, kwargs = mock_dl.call_args
 assert kwargs.get("downloader") == "aria2"
+assert result.exit_code == 0
 assert "Done in " in result.output
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/comfy_cli/command/models/test_models.py` around lines 353-370, add an explicit assertion that the CLI command succeeded by checking `result.exit_code == 0` right after invoking `runner.invoke(app, [...])` and before any assertions on `result.output`; update the test containing `runner.invoke`, `result`, and `mock_dl` to assert the command exit code is 0 so failures surface immediately before checking "Done in " and `mock_dl` behavior.

comfy_cli/command/models/models.py (1)
323-360: ⚠️ Potential issue | 🟡 Minor — Start timing at actual download kickoff to keep elapsed time accurate.
Right now the stopwatch starts before pre-download work (auth probe/import/install path). That can over-report “download” duration.
⏱️ Proposed adjustment
```diff
-    start_time = time.monotonic()
-
     if is_huggingface_url and check_unauthorized(url, headers):
         if hf_api_token is None:
             print(
                 f"Unauthorized access to Hugging Face model. Please set the Hugging Face API token using `comfy model download --set-hf-api-token` or via the `{constants.HF_API_TOKEN_ENV_KEY}` environment variable"
             )
             return
         else:
             try:
                 import huggingface_hub
             except ImportError:
                 print("huggingface_hub not found. Installing...")
                 import subprocess

                 from comfy_cli.resolve_python import resolve_workspace_python

                 python = resolve_workspace_python(str(get_workspace()))
                 subprocess.check_call([python, "-m", "pip", "install", "huggingface_hub"])
                 import huggingface_hub

             print(f"Downloading model {model_id} from Hugging Face...")
+            start_time = time.monotonic()
             output_path = huggingface_hub.hf_hub_download(
                 repo_id=repo_id,
                 filename=hf_filename,
                 subfolder=hf_folder_name,
                 revision=hf_branch_name,
                 token=hf_api_token,
                 local_dir=get_workspace() / relative_path,
                 cache_dir=get_workspace() / relative_path,
             )
             print(f"Model downloaded successfully to: {output_path}")
     else:
         print(f"Start downloading URL: {url} into {local_filepath}")
+        start_time = time.monotonic()
         download_file(url, local_filepath, headers, downloader=resolved_downloader)
     elapsed = time.monotonic() - start_time
     print(f"Done in {_format_elapsed(elapsed)}")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@comfy_cli/command/models/models.py` around lines 323 - 360, The timing currently starts before pre-download work; move the start_time = time.monotonic() so it is set immediately before the actual network download: for the Hugging Face branch set start_time right before the call to huggingface_hub.hf_hub_download (and keep the printed "Downloading model..." message), and for the generic branch set start_time right before calling download_file; remove the earlier start_time declaration at the top so elapsed (used with _format_elapsed) measures only the download phase.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 228bd338-6afa-4b88-86d2-af44a73c8915
📒 Files selected for processing (2)
comfy_cli/command/models/models.py
tests/comfy_cli/command/models/test_models.py
Summary
Print elapsed time ("Done in Xs" / "Xm Ys" / "Xh Ym Zs") after successful model downloads. Closes #421
Test plan
Tests for `_format_elapsed` covering boundary values (sub-minute, minutes, hours)