
feat: improve concurrency safety, cross-platform support and dynamic versioning #6

Open
Fahl-Design wants to merge 4 commits into main from feat/concurrency-and-versioning

Conversation


@Fahl-Design Fahl-Design commented Mar 6, 2026

Summary of Changes

1. Shell Update & Session Isolation (Root Cause Fix)

  • Fixed PVM_DIR Export: The pvm env command now explicitly exports PVM_DIR, ensuring the shell wrapper can correctly locate its configuration files.
  • Session-Specific Updates: Implemented isolated environment update files using the shell's PID (e.g., .env_update_1234). This prevents race conditions and "version bleeding" between different terminal tabs.
  • Safety Guards: Added directory presence checks in shell wrappers to prevent rm or cat operations on invalid paths.
  • Silent Operations: Bypassed user-defined aliases by using command rm and redirected all temporary file cleanup output to /dev/null.
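
The session-isolation idea above can be sketched in Rust roughly as follows; the function name `env_update_path` and the fallback behavior are illustrative assumptions, not pvm's exact implementation:

```rust
use std::path::{Path, PathBuf};

// Illustrative sketch: derive a session-specific env-update file from the
// shell's PID so two terminal tabs never share one file. The function name
// and the fallback to the current process ID are assumptions.
fn env_update_path(pvm_dir: &Path, shell_pid: Option<&str>) -> PathBuf {
    match shell_pid {
        // Shell wrapper exported its own PID (e.g. via PVM_SHELL_PID).
        Some(pid) => pvm_dir.join(format!(".env_update_{pid}")),
        // Fallback: scope the file to this pvm process instead of the shell.
        None => pvm_dir.join(format!(".env_update_{}", std::process::id())),
    }
}

fn main() {
    println!("{}", env_update_path(Path::new("/tmp/pvm"), Some("1234")).display());
}
```

Because each shell session gets its own file, a `pvm use` in one tab cannot overwrite the pending environment update of another tab.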

2. Concurrency & Reliability

  • File Locking: Integrated the fs4 crate to implement advisory file locking on shared state files (.env_update, .update_check_guard, remote_cache.json). This ensures data integrity when multiple PVM instances run simultaneously.
  • JSON API Integration: Refactored network.rs to use the remote JSON API instead of HTML scraping. This is faster, more robust, and allowed for the removal of the scraper dependency.
  • Enhanced Progress Reporting: Added a real-time download progress bar with accurate byte counts and ETA for PHP version installations.

3. Cross-Platform Support

  • Dynamic Detection: Replaced hardcoded "linux-x86_64" strings with dynamic OS and Architecture detection (std::env::consts). PVM now natively supports macOS (Intel/Silicon) and ARM64 Linux (e.g., Raspberry Pi).
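
A minimal sketch of such detection, using `std::env::consts`; the target-string spellings below are illustrative, not necessarily the exact names pvm emits:

```rust
// Illustrative mapping from std::env::consts values to a release-artifact
// target string. The exact artifact names are assumptions.
fn artifact_target(os: &str, arch: &str) -> Option<&'static str> {
    match (os, arch) {
        ("linux", "x86_64") => Some("linux-x86_64"),
        ("linux", "aarch64") => Some("linux-arm64"),
        ("macos", "x86_64") => Some("macos-x86_64"),
        ("macos", "aarch64") => Some("macos-arm64"),
        _ => None, // unsupported platform: report an error instead of guessing
    }
}

fn main() {
    // OS and ARCH are fixed at compile time for the host target.
    match artifact_target(std::env::consts::OS, std::env::consts::ARCH) {
        Some(t) => println!("downloading artifacts for {t}"),
        None => eprintln!("unsupported platform"),
    }
}
```

Returning `Option` (or a `Result`) for unknown combinations keeps the failure explicit rather than silently falling back to `linux-x86_64`.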

4. Modernization & Maintainability

  • Dynamic Versioning: Created a build.rs script that injects the current Git commit ID and build timestamp into the version string.
  • Robust Versioning: Integrated the semver crate for standard-compliant version parsing and sorting, replacing brittle manual string splitting.
  • Centralized Constants: Moved all magic strings and environment variable names into a single src/constants.rs file.
  • Dependency Audit: Updated all dependencies to their latest stable versions and removed unused crates (scraper, regex, thiserror, log).
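
The numeric fallback behind the version sorting can be sketched as follows (the real helper tries semver parsing first; this shows only the fallback, with an assumed function name):

```rust
// Sketch of the numeric fallback: parse dot-separated components as
// integers so "8.10.0" sorts after "8.2.1". A plain lexicographic string
// sort would order them the other way around.
fn sort_versions(versions: &mut [String]) {
    fn parts(s: &str) -> Vec<u32> {
        s.split('.').filter_map(|p| p.parse().ok()).collect()
    }
    versions.sort_by(|a, b| parts(a).cmp(&parts(b)));
}

fn main() {
    let mut v: Vec<String> = ["8.10.0", "8.2.1", "7.4.33"]
        .iter()
        .map(|s| s.to_string())
        .collect();
    sort_versions(&mut v);
    println!("{v:?}");
}
```

This is exactly the class of bug that "brittle manual string splitting" invites and that delegating to the semver crate avoids for fully qualified versions.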

Summary by CodeRabbit

  • New Features

    • Per-directory PHP version file support for local version management.
    • Build-time version info embedded into CLI (shows tag/commit and timestamp).
  • Improvements

    • Safer, atomic environment and PATH updates with cross-platform handling.
    • Shell wrappers use a dynamic env-file flow for more reliable environment loading.
    • Remote listings and downloads use JSON/streaming with progress and better diagnostics.
    • Improved version sorting for diverse version formats.
  • Tests / Chores

    • Updated tests, toolchain config, and VCS ignore for local version file.

@Fahl-Design Fahl-Design self-assigned this Mar 6, 2026
…versioning

This change introduces:
- File locking for shared state files using fs4
- Dynamic versioning with commit ID and build time via build.rs
- Cross-platform support for macOS and ARM architectures
- Session-specific environment updates using PVM_SHELL_PID
- Safety guards and silent removal of temp files in shell wrappers
- Centralized constants and semantic version sorting logic
- Bug fixes in interactive menu and shell file discovery
@Fahl-Design Fahl-Design force-pushed the feat/concurrency-and-versioning branch from 9d28092 to e213c57 Compare March 6, 2026 20:53

coderabbitai bot commented Mar 6, 2026

📝 Walkthrough

Walkthrough

Adds a build script embedding a git/timestamp PVM_VERSION, centralizes runtime strings as constants, replaces HTML scraping with JSON remote listings, implements PID-scoped atomic env-update files with file locking, adds version-sorting utilities, and updates related commands, shells, tests, and toolchain/dependencies.

Changes

Cohort / File(s) Summary
Build & Toolchain
build.rs, Cargo.toml, rust-toolchain.toml, src/cli.rs
Adds build.rs that emits PVM_VERSION (git + build time); switches CLI version to env!("PVM_VERSION"); updates dependencies (adds fs4, futures-util, semver, chrono; bumps tokio/tempfile; removes log/regex/scraper/thiserror); pins toolchain to 1.85.0.
Constants & Module Entry
src/constants.rs, src/main.rs
New src/constants.rs exposing PVM_DIR_VAR, MULTISHELL_PATH_VAR, ENV_UPDATE_FILE, REMOTE_CACHE_FILE, UPDATE_CHECK_GUARD_FILE, PHP_VERSION_FILE, BASE_URL; main.rs exposes new modules.
Filesystem & Env Utilities
src/fs.rs, src/utils.rs
Replaces hard-coded env names with constants; adds get_env_update_path(Option<PathBuf>) and write_env_file_locked for PID/override env-update path and locked writes; delegates version sorting to new sort_versions(&mut [String]) using semver with numeric fallback.
Commands: env/install/use
src/commands/env.rs, src/commands/install.rs, src/commands/use_cmd.rs
Commands now compute pvm_dir, use PHP_VERSION_FILE constant for per-dir version, use MULTISHELL_PATH_VAR, write env updates via get_env_update_path(None) and write_env_file_locked, and apply OS-aware PATH separators.
Shell Wrappers
src/shell.rs
Refactors Bash/Zsh/Fish wrapper templates to use directory-backed, PID-scoped env-update files (via PVM_ENV_UPDATE_PATH/PVM_SHELL_PID), conditionally eval produced env file and clean up; templates reference new constants.
Network / Remote listing & download
src/network.rs
Replaces HTML scraping with JSON-based remote file listing (RemoteFile), introduces target-triple mapping, cached JSON with locked reads/writes and 24h expiry, streaming download with progress, and improved error contexts.
Update Guarding
src/update.rs
Adds .update_check_guard lock-based timestamp guard to rate-limit update checks (~24h), honors PVM_UPDATE_MODE=disabled, and writes timestamp with exclusive lock.
Tests & Ignore
tests/cli.rs, .gitignore
Test updated to assert PVM_DIR export; .gitignore now ignores .php-version.
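
The update-check guard described above reduces to a small decision function; this sketch omits the locking and file I/O, and the parameter names are illustrative:

```rust
// Sketch of the ~24h rate-limit decision for update checks. The threshold
// and the PVM_UPDATE_MODE=disabled escape hatch follow the description
// above; everything else (locking, guard-file I/O) is omitted.
fn should_check(last_check: Option<u64>, now: u64, mode: Option<&str>) -> bool {
    const DAY_SECS: u64 = 24 * 60 * 60;
    if mode == Some("disabled") {
        return false; // user opted out of update checks entirely
    }
    match last_check {
        Some(t) => now.saturating_sub(t) >= DAY_SECS,
        None => true, // no guard file yet: first run always checks
    }
}

fn main() {
    println!("{}", should_check(Some(0), 90_000, None));
}
```

Keeping the decision pure like this makes the guard easy to unit-test independently of the locked file that stores the timestamp.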

Sequence Diagram

```mermaid
sequenceDiagram
    participant Shell as Shell Wrapper
    participant PVM as pvm CLI
    participant FS as File System
    participant Lock as File Lock

    Shell->>PVM: invoke pvm with PVM_SHELL_PID / PVM_ENV_UPDATE_PATH
    PVM->>FS: call get_env_update_path(override=None)
    FS-->>PVM: return PID-specific env-update path
    PVM->>Lock: open env-update file (exclusive lock)
    Lock-->>PVM: lock acquired
    PVM->>FS: write environment exports to env-update file (buffered)
    PVM->>Lock: flush & release lock
    Lock-->>PVM: lock released
    PVM-->>Shell: exit code + path to env-update file
    Shell->>FS: read env-update file contents
    FS-->>Shell: environment export lines
    Shell->>Shell: eval exports into shell environment
    Shell->>FS: delete env-update file
    FS-->>Shell: cleanup complete
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 I dug a build from git and time, a tiny version seed,
Locked the burrow's env updates so every write proceeds,
From HTML to JSON fields I hopped with semver in tow,
Shells now pass their PIDs and dance — the paths all set to go,
A carrot for CI, a nibble for the devs — hooray, we grow!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning. Docstring coverage is 14.29%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Title check — ✅ Passed. The pull request title accurately summarizes the main objectives of the changeset: improving concurrency safety with file locking, adding cross-platform support through dynamic OS/architecture detection, and implementing dynamic versioning via build.rs.


@Fahl-Design
Member Author

@coderabbitai review


coderabbitai bot commented Mar 6, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
Cargo.toml (1)

4-4: ⚠️ Potential issue | 🟡 Minor

Ensure CI and local toolchains support Rust 1.85.0 or later.

The edition = "2024" requires rustc 1.85.0 (stable) or later. Verify your CI pipeline and development environment are configured to use this minimum version.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Cargo.toml` at line 4, The Cargo.toml now sets edition = "2024" which
requires rustc 1.85.0+; update your toolchain configuration and CI to pin/use
Rust 1.85.0 or later (e.g., update any rust-toolchain, GitHub Actions workflow
steps, CI images, and Dockerfiles) so local and CI builds match the new edition;
verify rustup/toolchain settings and CI matrix entries reference 1.85.0+ and
rerun the pipeline to confirm compatibility.
🧹 Nitpick comments (3)
build.rs (2)

20-20: CI detection is limited to GitHub Actions.

The GITHUB_ACTIONS check excludes other CI environments (GitLab CI, CircleCI, Jenkins, etc.). Consider using the more universal CI environment variable, which is set by most CI systems.

💡 Proposed broader CI detection
-    let is_ci = std::env::var("GITHUB_ACTIONS").is_ok();
+    // CI env var is set by most CI systems (GitHub Actions, GitLab CI, CircleCI, Travis, etc.)
+    let is_ci = std::env::var("CI").is_ok() || std::env::var("GITHUB_ACTIONS").is_ok();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@build.rs` at line 20, The CI detection currently sets is_ci by checking
GITHUB_ACTIONS only; update it to detect a wider range of CI environments by
checking the generic "CI" env var (and optionally fall back to other CI-specific
vars) instead of or in addition to "GITHUB_ACTIONS" so is_ci becomes true for
most CI systems; locate the declaration of the is_ci variable and replace the
std::env::var("GITHUB_ACTIONS").is_ok() expression with a check for
std::env::var("CI").is_ok() (or a small OR chain of known CI vars) to broaden
detection.

3-41: Add cargo:rerun-if-changed directives for build efficiency.

Without rerun directives, this build script executes on every compilation, invoking git and date commands unnecessarily. Adding directives improves incremental build times.

⚡ Proposed fix to add rerun directives
 fn main() {
+    // Only rerun if git state or Cargo.toml changes
+    println!("cargo:rerun-if-changed=.git/HEAD");
+    println!("cargo:rerun-if-changed=.git/refs/heads/");
+    println!("cargo:rerun-if-changed=Cargo.toml");
+
     let commit_hash = Command::new("git")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@build.rs` around lines 3 - 41, In main(), add cargo rerun directives so the
build script only reruns when relevant inputs change: emit
println!("cargo:rerun-if-changed=.git/HEAD") and
println!("cargo:rerun-if-changed=.git/refs/heads") (or other git files you rely
on) to avoid invoking git/date on every compile, and emit environment-change
directives like println!("cargo:rerun-if-env-changed=GITHUB_ACTIONS") and
println!("cargo:rerun-if-env-changed=CARGO_PKG_VERSION") so changes to those
vars or package version trigger a rerun; place these near the top of main()
before computing commit_hash, build_time, and version.
src/utils.rs (1)

10-18: Mixed semver/non-semver comparison may produce inconsistent ordering.

When one string is valid semver and the other is not, both are compared using the numeric fallback, discarding valid semver metadata. For example, comparing "8.2.0-alpha" (valid semver with prerelease) against "8.2" (invalid semver) uses the fallback, yielding [8,2,0] vs [8,2], treating the prerelease as greater—contradicting semver semantics where 8.2.0-alpha < 8.2.0.

Consider handling the mixed case explicitly or documenting this limitation.

💡 Proposed fix to handle mixed case
         match (a_sem, b_sem) {
             (Ok(av), Ok(bv)) => av.cmp(&bv),
+            (Ok(_), Err(_)) => std::cmp::Ordering::Greater, // valid semver sorts after non-semver
+            (Err(_), Ok(_)) => std::cmp::Ordering::Less,
             _ => {
                 // Fallback for non-semver strings (e.g., "8.2")
                 let a_parts: Vec<u32> = a.split('.').filter_map(|s| s.parse().ok()).collect();
                 let b_parts: Vec<u32> = b.split('.').filter_map(|s| s.parse().ok()).collect();
                 a_parts.cmp(&b_parts)
             }
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils.rs` around lines 10 - 18, The current match over (a_sem, b_sem)
falls through to the numeric fallback for mixed semver/non-semver cases and
loses semver prerelease semantics; update the match to handle mixed cases
explicitly by adding arms for (Ok(av), Err(_)) and (Err(_), Ok(bv)): when one
side parsed as semver, split the non-semver string into numeric parts and
compare major/minor/patch from av (use av.major,av.minor,av.patch) to those
numeric parts; if numeric parts are equal but av has a prerelease, treat the
semver prerelease as less than the non-semver “release” (i.e., return Less), and
mirror the logic for the reversed case using bv, ensuring a and b variables and
the existing match block are used to locate the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.php-version:
- Line 1: The repository includes a committed .php-version file which is
machine-specific and can confuse contributors or cause merge conflicts; remove
the tracked .php-version from the repo (git rm --cached .php-version or delete
and commit), add ".php-version" to .gitignore to prevent re-committing, and if
keeping it intentionally, add a short note in the project README explaining why
this specific PHP version is required; reference the .php-version file when
making these changes.

In `@build.rs`:
- Around line 12-18: The current build_time computation uses
Command::new("date") (see build_time variable in build.rs) which is POSIX-only
and breaks on Windows; replace this shell call with a cross-platform timestamp
from Rust (add chrono as a build-dependency and compute build_time via
chrono::Utc::now().format(...).to_string()) so the timestamp is produced
reliably on all platforms and remove the Command::new("date") logic.

In `@src/commands/install.rs`:
- Around line 81-94: The current install path in install.rs swallows all I/O
errors when writing the env-update file (using .ok() on lock_exclusive,
writeln!, writer.flush, and unlock), causing a reported "successful switch" even
when the write fails; change this block to treat the write as fallible like
src/commands/use_cmd.rs: use crate::fs::get_env_update_path() as you already do,
but propagate errors (replace the .ok() calls with proper Result handling and
use ? to return the error or map it into the function's error type) for
file.open(), file.lock_exclusive(), writeln!(...), writer.flush(), and
file.unlock() so the install command returns an error if the env-update write
fails.

In `@src/commands/use_cmd.rs`:
- Around line 92-98: The code currently calls std::fs::write(PHP_VERSION_FILE,
&version).ok() and then unconditionally prints a success eprintln; change this
to check the Result from std::fs::write (e.g., match or if let Err(e)) so you
only print the success message when write returns Ok, and print or warn
(eprintln) with the error and mention PHP_VERSION_FILE when it fails so the user
knows the pin update was not written; update the write call handling and the
eprintln block around PHP_VERSION_FILE and version.bold() accordingly.

In `@src/fs.rs`:
- Around line 64-72: get_env_update_path currently rebuilds the .env_update
filename from PVM_SHELL_PID which still collides for concurrent backgrounded
invocations; instead accept a per-invocation path from the caller. Change the
signature of get_env_update_path (and any callers in src/shell.rs) to accept an
Option<PathBuf> or &Path (e.g., get_env_update_path(override: Option<&Path>))
and if provided return that path immediately, otherwise fall back to the
existing pvm_dir + PVM_SHELL_PID logic; update the shell wrapper code that
spawns each invocation to create a unique temp filename and pass that PathBuf
into get_env_update_path (or set and pass through an explicit env var like
PVM_ENV_UPDATE_PATH) so each invocation uses a unique .env_update file instead
of reconstructing from PVM_SHELL_PID.

In `@src/network.rs`:
- Around line 109-116: The cache file is currently truncated by File::create
before acquiring the exclusive lock, creating a race; replace the
File::create(&cache_path) call with opening via std::fs::OpenOptions
(create=true, read=true, write=true) so the file is not truncated on open, then
call file.lock_exclusive(), call file.set_len(0) to clear the file while holding
the lock, then create the std::io::BufWriter from the locked file and
write_all/flush, and finally call file.unlock(); keep references to
serde_json::to_string(&versions), pvm_dir, cache_path, lock_exclusive, set_len,
BufWriter, write_all, flush, and unlock when making the change.

In `@src/update.rs`:
- Around line 18-45: The guard file is being truncated at open which makes
read_to_string() always return empty; remove truncate(true) from the OpenOptions
chain used to open the file (the let file =
std::fs::OpenOptions::new()...open(&guard_file)? block) so the existing
timestamp can be read, then after deciding to update, call file.set_len(0) and
rewind before writing the new timestamp (the current set_len(0),
BufWriter::new(&file), writeln!(writer, "{}", now) and writer.flush() sequence)
and keep the existing lock_exclusive()/unlock() usage around the
read-and-possibly-write sequence.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 632cbd06-1e14-494d-8cb9-2e803f4dbca3

📥 Commits

Reviewing files that changed from the base of the PR and between 544d4d8 and e213c57.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (15)
  • .php-version
  • Cargo.toml
  • build.rs
  • src/cli.rs
  • src/commands/env.rs
  • src/commands/install.rs
  • src/commands/use_cmd.rs
  • src/constants.rs
  • src/fs.rs
  • src/main.rs
  • src/network.rs
  • src/shell.rs
  • src/update.rs
  • src/utils.rs
  • tests/cli.rs

.php-version (outdated)
@@ -0,0 +1 @@
+8.4.18
\ No newline at end of file

⚠️ Potential issue | 🟡 Minor

Consider removing .php-version from the repository.

This file is typically generated per-project by the pvm init or pvm use commands. Committing it to the version manager's own repository may:

  1. Confuse users who clone the repo, as pvm might try to use PHP 8.4.18 when working on the tool itself.
  2. Create merge conflicts if developers have different local PHP versions.

Consider adding .php-version to .gitignore instead, or document why it's intentionally included.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.php-version at line 1, The repository includes a committed .php-version
file which is machine-specific and can confuse contributors or cause merge
conflicts; remove the tracked .php-version from the repo (git rm --cached
.php-version or delete and commit), add ".php-version" to .gitignore to prevent
re-committing, and if keeping it intentionally, add a short note in the project
README explaining why this specific PHP version is required; reference the
.php-version file when making these changes.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (1)
src/shell.rs (1)

49-55: ⚠️ Potential issue | 🟠 Major

The wrapper temp filename can still collide.

In Bash and Zsh, $$ is the shell PID even in subshells, and Fish's $fish_pid is also the shell PID. With only a second-resolution timestamp added, concurrent pvm invocations from the same shell can still generate the same env_file and clobber/remove each other's exports. Generate the path with mktemp (or another per-invocation random suffix) in each wrapper instead of assembling it manually. (gnu.org)

Example pattern for Bash/Zsh
-      local env_file="${PVM_DIR}/.env_update_$(date +%s)_$$"
-      [[ -f "$env_file" ]] && command rm -f "$env_file" 2>/dev/null
+      local env_file
+      env_file="$(mktemp "${PVM_DIR}/.env_update.XXXXXX")" || return 1

Also applies to: 104-110, 157-165

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/shell.rs` around lines 49 - 55, The temp env file generation using local
env_file=\"${{{}}}/{}_$(date +%s)_$$\" is racy; replace the manual filename
assembly with a per-invocation secure temp file created by mktemp (e.g. call
mktemp to produce env_file), use that path for PVM_ENV_UPDATE_PATH before
invoking command pvm, and keep the existing eval/cleanup logic but remove the
old collision-prone construction; update the same pattern wherever env_file is
built (the blocks referencing env_file, PVM_ENV_UPDATE_PATH and command pvm) and
ensure you still remove the temp file after eval and on error (and add a trap if
needed) so each wrapper invocation gets a unique file.
🧹 Nitpick comments (2)
src/update.rs (1)

39-45: Consider propagating errors from critical I/O operations.

Using .ok() on set_len, rewind, and flush silently ignores potential I/O failures. While unlikely, if these fail, the guard file may be left in an inconsistent state. At minimum, propagating the unlock() error (or logging failures) would improve debuggability.

♻️ Optional: Propagate errors for better reliability
     // Write the new timestamp to prevent spam on next commands
-    file.set_len(0).ok();
-    file.rewind().ok();
+    file.set_len(0)?;
+    file.rewind()?;
     let mut writer = std::io::BufWriter::new(&file);
-    writeln!(writer, "{}", now).ok();
-    writer.flush().ok();
-    file.unlock().ok();
+    writeln!(writer, "{}", now)?;
+    writer.flush()?;
+    file.unlock()?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/update.rs` around lines 39 - 45, Replace the silent .ok() calls on
critical I/O operations so errors are propagated or at least surfaced: remove
.ok() on file.set_len, file.rewind, writer.flush, and file.unlock in the guard
update logic (the block creating BufWriter and writing the timestamp) and use
the ? operator to return a Result from the containing function (or map_err/log
the error) so failures aren’t ignored; ensure the containing function signature
is updated to return Result if needed and specifically propagate or handle
file.unlock() error instead of discarding it.
src/network.rs (1)

178-180: Consider handling missing Content-Length header gracefully.

Some servers or proxies may not provide a Content-Length header (e.g., with chunked transfer encoding). Currently, this causes a hard error. Consider falling back to an indeterminate progress bar instead.

♻️ Optional: Support downloads without Content-Length
-    let total_size = response
-        .content_length()
-        .context("Failed to get content length from server")?;
-
-    let pb = ProgressBar::new(total_size);
-    pb.set_style(ProgressStyle::default_bar()
-        .template("{spinner:.green} [{elapsed_precise}] [{wide_bar:.cyan/blue}] {bytes}/{total_bytes} ({eta})")?
-        .progress_chars("#>-"));
+    let total_size = response.content_length();
+
+    let pb = if let Some(size) = total_size {
+        let pb = ProgressBar::new(size);
+        pb.set_style(ProgressStyle::default_bar()
+            .template("{spinner:.green} [{elapsed_precise}] [{wide_bar:.cyan/blue}] {bytes}/{total_bytes} ({eta})")?
+            .progress_chars("#>-"));
+        pb
+    } else {
+        let pb = ProgressBar::new_spinner();
+        pb.set_style(ProgressStyle::default_spinner()
+            .template("{spinner:.green} [{elapsed_precise}] {bytes}")?);
+        pb
+    };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` around lines 178 - 180, The code currently unwraps
response.content_length() into total_size which errors when Content-Length is
missing; change total_size to an Option<u64> (e.g., let total_size =
response.content_length().ok(); or use .context only for unexpected errors) and
adjust the progress bar creation/usage (the progress bar variable, e.g., pb) to
accept an indeterminate/unknown length when total_size is None (create spinner
or progress bar with unknown length) and ensure the download/read loop updates
bytes read without assuming a known total; update any places that used
total_size as u64 to handle the Option (match or if let Some(total) { set length
} else { use indeterminate mode }).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/commands/install.rs`:
- Around line 73-74: The file src/commands/install.rs is not formatted; run
rustfmt (cargo fmt) to fix style so the Release workflow passes; specifically
format the code around the calls to crate::fs::resolve_local_version and
crate::fs::get_version_bin_dir in the install command (e.g., the let v =
crate::fs::resolve_local_version(&resolved_version)?; and let bin_dir =
crate::fs::get_version_bin_dir(&v)?; lines) and commit the reformatted file.
- Around line 82-91: Open the file write sequence in src/commands/install.rs and
remove the truncate(true) flag from OpenOptions so the file is opened without
truncation, then acquire the advisory lock via file.lock_exclusive() before
clearing the file and call file.set_len(0)? to truncate while holding the lock,
followed by writing the contents (using the existing BufWriter/writeln!/flush)
and file.unlock()?; apply the same change to the duplicated sequence in
src/commands/use_cmd.rs or refactor both into a shared helper (e.g., a
write_env_file_locked function) to centralize opening (without truncate),
locking (file.lock_exclusive()), truncating (file.set_len(0)?), writing
(writeln!/flush), and unlocking (file.unlock()?).

In `@src/network.rs`:
- Around line 49-61: The nested conditional block around the cache check
(variables read_res, cache_path, modified, elapsed, CACHE_DURATION, and the
serde_json::from_str call that returns Ok(versions)) is misformatted; run
rustfmt (cargo fmt) or reformat that block so it matches rustfmt style (reflow
indentation and spacing in the if-let chains) to satisfy cargo fmt --check in
CI.
- Around line 93-112: The code block around get_target_triple()/suffix/versions
has formatting that fails cargo fmt; run `cargo fmt` or reformat the snippet so
it matches rustfmt style (fix spacing/indentation around the let target =
get_target_triple()?; let suffix = format!("-cli-{}.tar.gz", target); the for
file in res { ... } block and the call to crate::utils::sort_versions(&mut
versions)) to satisfy rustfmt and CI.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 843fd8fc-6613-4da5-b1f4-6e91e5b0a9eb

📥 Commits

Reviewing files that changed from the base of the PR and between e213c57 and af0ef46.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (11)
  • .gitignore
  • Cargo.toml
  • build.rs
  • rust-toolchain.toml
  • src/commands/install.rs
  • src/commands/use_cmd.rs
  • src/fs.rs
  • src/network.rs
  • src/shell.rs
  • src/update.rs
  • src/utils.rs
✅ Files skipped from review due to trivial changes (1)
  • .gitignore
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/utils.rs
  • build.rs

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
src/network.rs (2)

177-179: Download will fail if server omits Content-Length header.

Some servers or CDN configurations may not provide a Content-Length header, especially for dynamically served content. The current code requires it and will error out with "Failed to get content length from server".

Consider falling back to an indeterminate progress bar (spinner) when content_length() returns None:

Suggested approach
-    let total_size = response
-        .content_length()
-        .context("Failed to get content length from server")?;
-
-    let pb = ProgressBar::new(total_size);
-    pb.set_style(ProgressStyle::default_bar()
-        .template("{spinner:.green} [{elapsed_precise}] [{wide_bar:.cyan/blue}] {bytes}/{total_bytes} ({eta})")?
-        .progress_chars("#>-"));
+    let total_size = response.content_length();
+    
+    let pb = if let Some(size) = total_size {
+        let bar = ProgressBar::new(size);
+        bar.set_style(ProgressStyle::default_bar()
+            .template("{spinner:.green} [{elapsed_precise}] [{wide_bar:.cyan/blue}] {bytes}/{total_bytes} ({eta})")?
+            .progress_chars("#>-"));
+        bar
+    } else {
+        let bar = ProgressBar::new_spinner();
+        bar.set_style(ProgressStyle::default_spinner()
+            .template("{spinner:.green} [{elapsed_precise}] {bytes} downloaded")?);
+        bar
+    };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` around lines 177 - 179, The code currently unwraps
response.content_length() into total_size which errors when the header is
missing; change the logic to treat content_length() as an Option: if Some(len)
keep using total_size as before, otherwise create an indeterminate progress
indicator (spinner) instead of a length-based progress bar. Update all uses of
total_size (and the progress bar creation/updates, e.g., where you construct the
ProgressBar or call set_length) to handle the None case (use spinner.start() /
spinner.inc() or a ProgressStyle::template for unknown length) and avoid
returning an error from the download function when content_length() is None so
downloads succeed against servers without Content-Length.

186-196: Minor: downloaded variable is redundant.

The downloaded variable tracks the same value as buffer.len(). You can simplify by using buffer.len() as u64 directly:

Simplified approach
-    let mut downloaded: u64 = 0;
     let mut stream = response.bytes_stream();
     let mut buffer = Vec::new();

     while let Some(item) = stream.next().await {
         let chunk = item.context("Error while downloading chunk")?;
         buffer.extend_from_slice(&chunk);
-        let new = std::cmp::min(downloaded + (chunk.len() as u64), total_size);
-        downloaded = new;
-        pb.set_position(new);
+        pb.set_position(buffer.len() as u64);
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` around lines 186 - 196, Remove the redundant downloaded
variable and compute progress directly from buffer.len(); specifically delete
the declaration "let mut downloaded: u64 = 0" and the assignment "downloaded =
new", and within the loop replace the min/update logic with let new =
std::cmp::min(buffer.len() as u64 + (chunk.len() as u64), total_size) (or simply
compute new = std::cmp::min(buffer.len() as u64, total_size) after extending
buffer) and call pb.set_position(new); keep the existing stream iteration
(response.bytes_stream(), stream.next().await) and error handling (context)
unchanged.
src/update.rs (1)

60-64: Silently ignoring network errors may hide connectivity issues.

When network::resolve_version fails, the function returns Ok(None) without any indication of the failure. While this is acceptable for an optional update check, users with persistent network issues won't receive any feedback.

Consider logging a debug message when the resolution fails, or at minimum document this silent-failure behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/update.rs` around lines 60 - 64, The call to
network::resolve_version(&minor_prefix).await currently ignores errors and
returns Ok(None) silently; update the handling so that on Err(e) you emit a
debug/log message (e.g., using tracing::debug! or log::debug!) that includes the
error and context (minor_prefix and target_version) and then continue to return
Ok(None) to preserve behavior; modify the match/if-let around
network::resolve_version to capture Err(e) and log it rather than dropping it
silently.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/commands/use_cmd.rs`:
- Around line 120-130: The current pattern in use_cmd.rs opens the env file with
.truncate(true) before calling file.lock_exclusive(), which causes a race where
the file is truncated prior to acquiring the exclusive lock; remove the
.truncate(true) flag from the OpenOptions call, call file.lock_exclusive()
immediately after open, then call file.set_len(0)? to truncate the file while
holding the lock, write and flush via the existing writer, and finally unlock;
additionally, factor the logic into a shared helper (e.g., write_env_file_locked
in src::fs) used by both use_cmd.rs and install.rs to avoid duplication and
ensure the correct locking/truncation sequence.

---

Nitpick comments:
In `@src/network.rs`:
- Around line 177-179: The code currently unwraps response.content_length() into
total_size which errors when the header is missing; change the logic to treat
content_length() as an Option: if Some(len) keep using total_size as before,
otherwise create an indeterminate progress indicator (spinner) instead of a
length-based progress bar. Update all uses of total_size (and the progress bar
creation/updates, e.g., where you construct the ProgressBar or call set_length)
to handle the None case (use spinner.start() / spinner.inc() or a
ProgressStyle::template for unknown length) and avoid returning an error from
the download function when content_length() is None so downloads succeed against
servers without Content-Length.
- Around line 186-196: Remove the redundant downloaded variable and compute
progress directly from buffer.len(); specifically delete the declaration "let
mut downloaded: u64 = 0" and the assignment "downloaded = new", and within the
loop replace the min/update logic with let new = std::cmp::min(buffer.len() as
u64 + (chunk.len() as u64), total_size) (or simply compute new =
std::cmp::min(buffer.len() as u64, total_size) after extending buffer) and call
pb.set_position(new); keep the existing stream iteration
(response.bytes_stream(), stream.next().await) and error handling (context)
unchanged.

In `@src/update.rs`:
- Around line 60-64: The call to network::resolve_version(&minor_prefix).await
currently ignores errors and returns Ok(None) silently; update the handling so
that on Err(e) you emit a debug/log message (e.g., using tracing::debug! or
log::debug!) that includes the error and context (minor_prefix and
target_version) and then continue to return Ok(None) to preserve behavior;
modify the match/if-let around network::resolve_version to capture Err(e) and
log it rather than dropping it silently.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: d8ce650f-1cfb-48f0-8541-1861082d4c4c

📥 Commits

Reviewing files that changed from the base of the PR and between af0ef46 and 07cf984.

📒 Files selected for processing (4)
  • src/commands/install.rs
  • src/commands/use_cmd.rs
  • src/network.rs
  • src/update.rs

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/commands/use_cmd.rs (2)

58-63: ⚠️ Potential issue | 🟠 Major

This flow ignores the answer to install's "use now" prompt.

Install::call() already asks whether the new version should be used immediately (src/commands/install.rs, lines 57–97), but version = newer_version happens unconditionally afterward, so use_cmd still switches even if the user answered "no" there. This path needs a non-interactive install helper, or install should return whether it already performed the switch.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/use_cmd.rs` around lines 58 - 63, Install::call currently
prompts the user and the caller in use_cmd.rs ignores its answer; modify
Install::call (or add a new method on Install) to return a Result<bool, _> (or
similar) that indicates whether the install performed the "use now" switch, then
update the call site where install_cmd.call().await is invoked to capture that
boolean and only set version = newer_version when it is true; ensure the new
return type is propagated or provide a non-interactive helper like
Install::call_non_interactive that performs install without prompting and
returns whether it switched so use_cmd can decide whether to update its version
variable.

119-131: ⚠️ Potential issue | 🔴 Critical

Remove the unsafe environment mutation; rely on the env-update file that is already being written.

The unsafe block here uses the same unsound std::env::set_var pattern as in install.rs. In Rust 2024, std::env::set_var is explicitly marked unsafe because it mutates process-wide global state without proving single-threaded or exclusive access—the standard library docs state that on non-Windows platforms in multithreaded programs, the only safe option is to not use set_var/remove_var at all, since no thread can reliably guarantee exclusive access to the environment.

The code already correctly writes env exports to the file (lines 112–116), which will be evaluated by the shell hook wrapper. Remove the unsafe block (lines 119–131) and let child processes read the environment from the file instead of mutating the current process's global environment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/use_cmd.rs` around lines 119 - 131, Remove the unsafe
process-wide environment mutation in use_cmd.rs: delete the unsafe block that
calls std::env::set_var for MULTISHELL_PATH_VAR and PATH (the code that reads
std::env::var_os("PATH"), builds new_path with bin_dir, and calls
std::env::set_var("PATH", ...)). Rely on the env-update file already written
earlier (the env exports written at lines ~112–116) so child shells pick up
bin_dir via the shell hook wrapper instead of mutating the current process; do
not replace it with any other std::env::set_var/remove_var calls.
src/fs.rs (1)

188-229: ⚠️ Potential issue | 🟠 Major

Env-override tests use unsafe mutation without proper multi-threaded protection.

In Rust 2024, std::env::set_var and std::env::remove_var require that no other thread concurrently reads or writes the environment. A local Mutex only serializes access within the two tests that use it; it cannot prevent other tests running in parallel (which is the default in cargo test) or library code from reading PVM_DIR concurrently. On non-Windows platforms, this is unsound.

Move these tests to subprocess-based integration tests (e.g., using assert_cmd), serialize the entire test suite with --test-threads=1, or refactor the code under test to accept configuration without mutating the process environment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/fs.rs` around lines 188 - 229, The tests
test_get_pvm_dir_with_env_override and test_list_installed_versions mutate the
process env unsafely via ENV_LOCK and unsafe std::env::set_var/remove_var;
change them into subprocess-based integration tests instead: remove the ENV_LOCK
and unsafe env mutations from src/fs.rs tests and create integration tests
(e.g., in tests/ or using assert_cmd) that spawn a child process with PVM_DIR
set via Command::env("PVM_DIR", temp_dir.path()), create the versions dirs in
that temp_dir, run a small test helper or the binary which calls get_pvm_dir(),
get_versions_dir(), and list_installed_versions(), and assert on its stdout/exit
code; this avoids global env mutation and resolves the Rust 2024 thread-safety
issue.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/network.rs`:
- Line 38: The remote cache file currently uses a single key (REMOTE_CACHE_FILE)
at cache_path (let cache_path = pvm_dir.join(REMOTE_CACHE_FILE)) causing
cross-target pollution; modify caching in the functions that read/write the
versions cache to include the target triple from get_target_triple() (e.g.,
append or prepend it to REMOTE_CACHE_FILE to form a target-scoped filename) or
embed the target triple inside the cached payload and validate it on load
(reject and refresh if mismatched), and update all places that construct
cache_path and the load/save logic so the cached versions used by functions
handling versions are always scoped per target triple.
- Around line 80-91: The HTTP clients are created with Client::new() with no
timeouts; replace those calls with a reqwest::Client built via
reqwest::Client::builder() (ClientBuilder) and set both connect_timeout(...) and
timeout(...) to reasonable durations, then call build() to obtain the Client;
apply this change to both locations in this file that currently call
Client::new() so both the connect and overall request timeouts are enforced.

In `@src/update.rs`:
- Around line 31-35: The throttle check using `now - last_check` can underflow
if `last_check` is in the future; change the subtraction to a safe form (e.g.,
use `now.saturating_sub(last_check)` or `checked_sub` and handle None) when
comparing against `86400` in the block that parses `last_check` from `contents`,
so the guard in the `if let Ok(last_check)` branch correctly skips the update
when appropriate and still unlocks the file via
`fs4::fs_std::FileExt::unlock(&file)` before returning `Ok(None)`.

---

Outside diff comments:
In `@src/commands/use_cmd.rs`:
- Around line 58-63: Install::call currently prompts the user and the caller in
use_cmd.rs ignores its answer; modify Install::call (or add a new method on
Install) to return a Result<bool, _> (or similar) that indicates whether the
install performed the "use now" switch, then update the call site where
install_cmd.call().await is invoked to capture that boolean and only set version
= newer_version when it is true; ensure the new return type is propagated or
provide a non-interactive helper like Install::call_non_interactive that
performs install without prompting and returns whether it switched so use_cmd
can decide whether to update its version variable.
- Around line 119-131: Remove the unsafe process-wide environment mutation in
use_cmd.rs: delete the unsafe block that calls std::env::set_var for
MULTISHELL_PATH_VAR and PATH (the code that reads std::env::var_os("PATH"),
builds new_path with bin_dir, and calls std::env::set_var("PATH", ...)). Rely on
the env-update file already written earlier (the env exports written at lines
~112–116) so child shells pick up bin_dir via the shell hook wrapper instead of
mutating the current process; do not replace it with any other
std::env::set_var/remove_var calls.

In `@src/fs.rs`:
- Around line 188-229: The tests test_get_pvm_dir_with_env_override and
test_list_installed_versions mutate the process env unsafely via ENV_LOCK and
unsafe std::env::set_var/remove_var; change them into subprocess-based
integration tests instead: remove the ENV_LOCK and unsafe env mutations from
src/fs.rs tests and create integration tests (e.g., in tests/ or using
assert_cmd) that spawn a child process with PVM_DIR set via
Command::env("PVM_DIR", temp_dir.path()), create the versions dirs in that
temp_dir, run a small test helper or the binary which calls get_pvm_dir(),
get_versions_dir(), and list_installed_versions(), and assert on its stdout/exit
code; this avoids global env mutation and resolves the Rust 2024 thread-safety
issue.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: af88eca4-d900-479a-95d9-af23f75828b9

📥 Commits

Reviewing files that changed from the base of the PR and between 07cf984 and 29f0b8e.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
  • Cargo.toml
  • src/commands/install.rs
  • src/commands/use_cmd.rs
  • src/fs.rs
  • src/network.rs
  • src/update.rs

     pub async fn get_available_versions() -> Result<Vec<String>> {
         let pvm_dir = crate::fs::get_pvm_dir()?;
-        let cache_path = pvm_dir.join(CACHE_FILE);
+        let cache_path = pvm_dir.join(REMOTE_CACHE_FILE);

⚠️ Potential issue | 🟠 Major

Scope the version cache by target triple.

versions is now filtered by get_target_triple(), but the cache key is still the single remote_cache.json. Any shared PVM_DIR used from another architecture/OS can reuse the wrong filtered list for 24 hours, which can hide valid versions or resolve one that later 404s on download. Key the cache by target, or store the target in the cached payload and reject mismatches.

Suggested fix
-    let cache_path = pvm_dir.join(REMOTE_CACHE_FILE);
+    let target = get_target_triple()?;
+    let cache_path = pvm_dir.join(format!("{REMOTE_CACHE_FILE}.{target}"));
...
-    let target = get_target_triple()?;
     let suffix = format!("-cli-{}.tar.gz", target);

Also applies to: 94-110

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` at line 38, The remote cache file currently uses a single key
(REMOTE_CACHE_FILE) at cache_path (let cache_path =
pvm_dir.join(REMOTE_CACHE_FILE)) causing cross-target pollution; modify caching
in the functions that read/write the versions cache to include the target triple
from get_target_triple() (e.g., append or prepend it to REMOTE_CACHE_FILE to
form a target-scoped filename) or embed the target triple inside the cached
payload and validate it on load (reject and refresh if mismatched), and update
all places that construct cache_path and the load/save logic so the cached
versions used by functions handling versions are always scoped per target
triple.

Comment on lines 80 to +91

     let client = Client::new();
-    let res = client.get(BASE_URL).send().await?.text().await?;
+    let json_url = format!("{}?format=json", BASE_URL);
+    let res = client
+        .get(json_url)
+        .send()
+        .await
+        .context("Failed to fetch version list from remote")?
+        .error_for_status()
+        .context("Remote server returned an error when fetching version list")?
+        .json::<Vec<RemoteFile>>()
+        .await
+        .context("Failed to parse remote version JSON")?;

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed (repository: WebProject-xyz/php-version-manager):

# Inspect src/network.rs around the flagged regions
wc -l src/network.rs
head -30 src/network.rs
sed -n '75,95p' src/network.rs | cat -n
sed -n '165,180p' src/network.rs | cat -n

# Locate every Client::new() call site
rg "Client::new\(\)" src/network.rs -n -A 5 -B 2
rg "Client::new\(\)" src/network.rs -n

# Check the pinned reqwest version and any existing builder usage
cat Cargo.toml | grep -A 2 reqwest
rg "ClientBuilder\|client\(" src/network.rs -n

🌐 Web query:

reqwest 0.13 ClientBuilder timeout connect_timeout API documentation

💡 Result:

In reqwest 0.13.x, these are the relevant ClientBuilder timeout APIs:

Async client (reqwest::Client::builder())

  • timeout(Duration): total request deadline, from “start connecting” until the response body is fully read (default: no timeout). [1]
  • read_timeout(Duration): per-read timeout that resets after each successful read (default: no timeout). [1]
  • connect_timeout(Duration): timeout for only the connect phase (default: None/no timeout). [1]

Blocking client (reqwest::blocking::Client::builder())

  • timeout(T: Into<Option<Duration>>) (e.g., Some(d) / None): timeout for connect, read, and write operations (default: 30 seconds; None disables). [2]
  • connect_timeout(T: Into<Option<Duration>>) (e.g., Some(d) / None): timeout for only the connect phase (default: None). [2]

Example (async):

use std::time::Duration;

let client = reqwest::Client::builder()
    .connect_timeout(Duration::from_secs(3))
    .timeout(Duration::from_secs(10))
    .build()?;

Example (blocking):

use std::time::Duration;

let client = reqwest::blocking::Client::builder()
    .connect_timeout(Some(Duration::from_secs(3)))
    .timeout(Some(Duration::from_secs(10)))
    .build()?;

Sources: reqwest::ClientBuilder (async) docs [1], reqwest::blocking::ClientBuilder docs [2].

[1] (docs.rs)
[2] (docs.rs)


Add bounded timeouts to both HTTP clients.

Both calls use Client::new() without timeouts, which allows hung connections to block indefinitely. Use ClientBuilder to set both connect_timeout() and timeout() on both occurrences at lines 80 and 168.

Suggested fix
-    let client = Client::new();
+    let client = Client::builder()
+        .connect_timeout(Duration::from_secs(10))
+        .timeout(Duration::from_secs(300))
+        .build()
+        .context("Failed to initialize HTTP client")?;

Apply to both call sites.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` around lines 80 - 91, The HTTP clients are created with
Client::new() with no timeouts; replace those calls with a reqwest::Client built
via reqwest::Client::builder() (ClientBuilder) and set both connect_timeout(...)
and timeout(...) to reasonable durations, then call build() to obtain the
Client; apply this change to both locations in this file that currently call
Client::new() so both the connect and overall request timeouts are enforced.

Comment on lines +31 to +35
if !contents.is_empty() {
if let Ok(last_check) = contents.trim().parse::<u64>() {
if now - last_check < 86400 {
fs4::fs_std::FileExt::unlock(&file).ok();
return Ok(None);

⚠️ Potential issue | 🟡 Minor

Handle future timestamps in the guard file.

If the stored timestamp is ahead of now because of clock skew, a restored VM snapshot, or manual edits, now - last_check underflows and the throttle decision becomes wrong. Use saturating_sub or checked_sub before comparing with 86400.

Suggested fix
-            if now - last_check < 86400 {
+            if now.saturating_sub(last_check) < 86400 {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/update.rs` around lines 31 - 35, The throttle check using `now -
last_check` can underflow if `last_check` is in the future; change the
subtraction to a safe form (e.g., use `now.saturating_sub(last_check)` or
`checked_sub` and handle None) when comparing against `86400` in the block that
parses `last_check` from `contents`, so the guard in the `if let Ok(last_check)`
branch correctly skips the update when appropriate and still unlocks the file
via `fs4::fs_std::FileExt::unlock(&file)` before returning `Ok(None)`.
