feat: improve concurrency safety, cross-platform support and dynamic versioning #6
Fahl-Design wants to merge 4 commits into main from
Conversation
…versioning

This change introduces:
- File locking for shared state files using fs4
- Dynamic versioning with commit ID and build time via build.rs
- Cross-platform support for macOS and ARM architectures
- Session-specific environment updates using PVM_SHELL_PID
- Safety guards and silent removal of temp files in shell wrappers
- Centralized constants and semantic version sorting logic
- Bug fixes in interactive menu and shell file discovery
9d28092 to e213c57
📝 Walkthrough

Adds a build script embedding a git/timestamp PVM_VERSION, centralizes runtime strings as constants, replaces HTML scraping with JSON remote listings, implements PID-scoped atomic env-update files with file locking, adds version-sorting utilities, and updates related commands, shells, tests, and toolchain/dependencies.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Shell as Shell Wrapper
    participant PVM as pvm CLI
    participant FS as File System
    participant Lock as File Lock
    Shell->>PVM: invoke pvm with PVM_SHELL_PID / PVM_ENV_UPDATE_PATH
    PVM->>FS: call get_env_update_path(override=None)
    FS-->>PVM: return PID-specific env-update path
    PVM->>Lock: open env-update file (exclusive lock)
    Lock-->>PVM: lock acquired
    PVM->>FS: write environment exports to env-update file (buffered)
    PVM->>Lock: flush & release lock
    Lock-->>PVM: lock released
    PVM-->>Shell: exit code + path to env-update file
    Shell->>FS: read env-update file contents
    FS-->>Shell: environment export lines
    Shell->>Shell: eval exports into shell environment
    Shell->>FS: delete env-update file
    FS-->>Shell: cleanup complete
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
Cargo.toml (1)
4-4: ⚠️ Potential issue | 🟡 Minor

Ensure CI and local toolchains support Rust 1.85.0 or later.

The `edition = "2024"` requires rustc 1.85.0 (stable) or later. Verify your CI pipeline and development environment are configured to use this minimum version.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Cargo.toml` at line 4, The Cargo.toml now sets edition = "2024" which requires rustc 1.85.0+; update your toolchain configuration and CI to pin/use Rust 1.85.0 or later (e.g., update any rust-toolchain, GitHub Actions workflow steps, CI images, and Dockerfiles) so local and CI builds match the new edition; verify rustup/toolchain settings and CI matrix entries reference 1.85.0+ and rerun the pipeline to confirm compatibility.
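A later revision of this PR adds a `rust-toolchain.toml` (it appears in the second review's file list). A minimal pin along the lines this comment suggests might look like the fragment below; the exact channel value is an assumption, not the PR's actual file:

```toml
[toolchain]
channel = "1.85.0"  # edition 2024 requires rustc 1.85.0 or later
```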
🧹 Nitpick comments (3)
build.rs (2)
20-20: CI detection is limited to GitHub Actions.

The `GITHUB_ACTIONS` check excludes other CI environments (GitLab CI, CircleCI, Jenkins, etc.). Consider using the more universal `CI` environment variable, which is set by most CI systems.

💡 Proposed broader CI detection

```diff
-    let is_ci = std::env::var("GITHUB_ACTIONS").is_ok();
+    // CI env var is set by most CI systems (GitHub Actions, GitLab CI, CircleCI, Travis, etc.)
+    let is_ci = std::env::var("CI").is_ok() || std::env::var("GITHUB_ACTIONS").is_ok();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@build.rs` at line 20, The CI detection currently sets is_ci by checking GITHUB_ACTIONS only; update it to detect a wider range of CI environments by checking the generic "CI" env var (and optionally fall back to other CI-specific vars) instead of or in addition to "GITHUB_ACTIONS" so is_ci becomes true for most CI systems; locate the declaration of the is_ci variable and replace the std::env::var("GITHUB_ACTIONS").is_ok() expression with a check for std::env::var("CI").is_ok() (or a small OR chain of known CI vars) to broaden detection.
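As a standalone illustration of the broadened check, here is a sketch (names assumed, not the PR's actual `build.rs`) that keeps the detection testable by injecting the environment lookup:

```rust
// Sketch: broader CI detection. `is_ci_from` takes the lookup as a closure so
// the logic can be exercised without mutating the process environment.
fn is_ci_from(var_is_set: impl Fn(&str) -> bool) -> bool {
    // `CI` is set by most CI systems; GITHUB_ACTIONS is kept for explicitness.
    var_is_set("CI") || var_is_set("GITHUB_ACTIONS")
}

fn main() {
    let in_ci = is_ci_from(|name| std::env::var(name).is_ok());
    println!("building in CI: {in_ci}");
}
```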
3-41: Add `cargo:rerun-if-changed` directives for build efficiency.

Without rerun directives, this build script executes on every compilation, invoking `git` and `date` commands unnecessarily. Adding directives improves incremental build times.

⚡ Proposed fix to add rerun directives

```diff
 fn main() {
+    // Only rerun if git state or Cargo.toml changes
+    println!("cargo:rerun-if-changed=.git/HEAD");
+    println!("cargo:rerun-if-changed=.git/refs/heads/");
+    println!("cargo:rerun-if-changed=Cargo.toml");
+
     let commit_hash = Command::new("git")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@build.rs` around lines 3 - 41, In main(), add cargo rerun directives so the build script only reruns when relevant inputs change: emit println!("cargo:rerun-if-changed=.git/HEAD") and println!("cargo:rerun-if-changed=.git/refs/heads") (or other git files you rely on) to avoid invoking git/date on every compile, and emit environment-change directives like println!("cargo:rerun-if-env-changed=GITHUB_ACTIONS") and println!("cargo:rerun-if-env-changed=CARGO_PKG_VERSION") so changes to those vars or package version trigger a rerun; place these near the top of main() before computing commit_hash, build_time, and version.

src/utils.rs (1)
10-18: Mixed semver/non-semver comparison may produce inconsistent ordering.

When one string is valid semver and the other is not, both are compared using the numeric fallback, discarding valid semver metadata. For example, comparing `"8.2.0-alpha"` (valid semver with prerelease) against `"8.2"` (invalid semver) uses the fallback, yielding `[8, 2, 0]` vs `[8, 2]`, treating the prerelease as greater, contradicting semver semantics where `8.2.0-alpha < 8.2.0`. Consider handling the mixed case explicitly or documenting this limitation.

💡 Proposed fix to handle mixed case

```diff
     match (a_sem, b_sem) {
         (Ok(av), Ok(bv)) => av.cmp(&bv),
+        (Ok(_), Err(_)) => std::cmp::Ordering::Greater, // valid semver sorts after non-semver
+        (Err(_), Ok(_)) => std::cmp::Ordering::Less,
         _ => {
             // Fallback for non-semver strings (e.g., "8.2")
             let a_parts: Vec<u32> = a.split('.').filter_map(|s| s.parse().ok()).collect();
             let b_parts: Vec<u32> = b.split('.').filter_map(|s| s.parse().ok()).collect();
             a_parts.cmp(&b_parts)
         }
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils.rs` around lines 10 - 18, The current match over (a_sem, b_sem) falls through to the numeric fallback for mixed semver/non-semver cases and loses semver prerelease semantics; update the match to handle mixed cases explicitly by adding arms for (Ok(av), Err(_)) and (Err(_), Ok(bv)): when one side parsed as semver, split the non-semver string into numeric parts and compare major/minor/patch from av (use av.major,av.minor,av.patch) to those numeric parts; if numeric parts are equal but av has a prerelease, treat the semver prerelease as less than the non-semver “release” (i.e., return Less), and mirror the logic for the reversed case using bv, ensuring a and b variables and the existing match block are used to locate the change.
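For contrast, here is a self-contained sketch of a mixed-aware comparator that needs no semver crate. All names are illustrative; this is not the PR's `sort_versions` implementation, just a demonstration of the ordering the comment asks for ("8.2" padded to compare like "8.2.0", with a prerelease sorting before the plain release):

```rust
use std::cmp::Ordering;

// Compare "8.2.0-alpha"-style strings: numeric parts first (zero-padded),
// then prerelease, where a prerelease is less than the bare release.
fn compare_versions(a: &str, b: &str) -> Ordering {
    let (a_num, a_pre) = split_prerelease(a);
    let (b_num, b_pre) = split_prerelease(b);
    let a_parts = numeric_parts(a_num);
    let b_parts = numeric_parts(b_num);
    // Pad with zeros so "8.2" compares like "8.2.0".
    let len = a_parts.len().max(b_parts.len());
    let pad = |v: &[u32]| {
        let mut v = v.to_vec();
        v.resize(len, 0);
        v
    };
    match pad(&a_parts).cmp(&pad(&b_parts)) {
        Ordering::Equal => match (a_pre, b_pre) {
            (Some(ap), Some(bp)) => ap.cmp(bp),
            (Some(_), None) => Ordering::Less, // 8.2.0-alpha < 8.2.0
            (None, Some(_)) => Ordering::Greater,
            (None, None) => Ordering::Equal,
        },
        other => other,
    }
}

fn split_prerelease(s: &str) -> (&str, Option<&str>) {
    match s.split_once('-') {
        Some((num, pre)) => (num, Some(pre)),
        None => (s, None),
    }
}

fn numeric_parts(s: &str) -> Vec<u32> {
    s.split('.').filter_map(|p| p.parse().ok()).collect()
}

fn main() {
    let mut versions = vec!["8.2", "8.2.0-alpha", "8.10.1", "8.2.1"];
    versions.sort_by(|a, b| compare_versions(a, b));
    println!("{versions:?}");
}
```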
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.php-version:
- Line 1: The repository includes a committed .php-version file which is
machine-specific and can confuse contributors or cause merge conflicts; remove
the tracked .php-version from the repo (git rm --cached .php-version or delete
and commit), add ".php-version" to .gitignore to prevent re-committing, and if
keeping it intentionally, add a short note in the project README explaining why
this specific PHP version is required; reference the .php-version file when
making these changes.
In `@build.rs`:
- Around line 12-18: The current build_time computation uses
Command::new("date") (see build_time variable in build.rs) which is POSIX-only
and breaks on Windows; replace this shell call with a cross-platform timestamp
from Rust (add chrono as a build-dependency and compute build_time via
chrono::Utc::now().format(...).to_string()) so the timestamp is produced
reliably on all platforms and remove the Command::new("date") logic.
In `@src/commands/install.rs`:
- Around line 81-94: The current install path in install.rs swallows all I/O
errors when writing the env-update file (using .ok() on lock_exclusive,
writeln!, writer.flush, and unlock), causing a reported "successful switch" even
when the write fails; change this block to treat the write as fallible like
src/commands/use_cmd.rs: use crate::fs::get_env_update_path() as you already do,
but propagate errors (replace the .ok() calls with proper Result handling and
use ? to return the error or map it into the function's error type) for
file.open(), file.lock_exclusive(), writeln!(...), writer.flush(), and
file.unlock() so the install command returns an error if the env-update write
fails.
In `@src/commands/use_cmd.rs`:
- Around line 92-98: The code currently calls std::fs::write(PHP_VERSION_FILE,
&version).ok() and then unconditionally prints a success eprintln; change this
to check the Result from std::fs::write (e.g., match or if let Err(e)) so you
only print the success message when write returns Ok, and print or warn
(eprintln) with the error and mention PHP_VERSION_FILE when it fails so the user
knows the pin update was not written; update the write call handling and the
eprintln block around PHP_VERSION_FILE and version.bold() accordingly.
In `@src/fs.rs`:
- Around line 64-72: get_env_update_path currently rebuilds the .env_update
filename from PVM_SHELL_PID which still collides for concurrent backgrounded
invocations; instead accept a per-invocation path from the caller. Change the
signature of get_env_update_path (and any callers in src/shell.rs) to accept an
Option<PathBuf> or &Path (e.g., get_env_update_path(override: Option<&Path>))
and if provided return that path immediately, otherwise fall back to the
existing pvm_dir + PVM_SHELL_PID logic; update the shell wrapper code that
spawns each invocation to create a unique temp filename and pass that PathBuf
into get_env_update_path (or set and pass through an explicit env var like
PVM_ENV_UPDATE_PATH) so each invocation uses a unique .env_update file instead
of reconstructing from PVM_SHELL_PID.
In `@src/network.rs`:
- Around line 109-116: The cache file is currently truncated by File::create
before acquiring the exclusive lock, creating a race; replace the
File::create(&cache_path) call with opening via std::fs::OpenOptions
(create=true, read=true, write=true) so the file is not truncated on open, then
call file.lock_exclusive(), call file.set_len(0) to clear the file while holding
the lock, then create the std::io::BufWriter from the locked file and
write_all/flush, and finally call file.unlock(); keep references to
serde_json::to_string(&versions), pvm_dir, cache_path, lock_exclusive, set_len,
BufWriter, write_all, flush, and unlock when making the change.
In `@src/update.rs`:
- Around line 18-45: The guard file is being truncated at open which makes
read_to_string() always return empty; remove truncate(true) from the OpenOptions
chain used to open the file (the let file =
std::fs::OpenOptions::new()...open(&guard_file)? block) so the existing
timestamp can be read, then after deciding to update, call file.set_len(0) and
rewind before writing the new timestamp (the current set_len(0),
BufWriter::new(&file), writeln!(writer, "{}", now) and writer.flush() sequence)
and keep the existing lock_exclusive()/unlock() usage around the
read-and-possibly-write sequence.
---
Outside diff comments:
In `@Cargo.toml`:
- Line 4: The Cargo.toml now sets edition = "2024" which requires rustc 1.85.0+;
update your toolchain configuration and CI to pin/use Rust 1.85.0 or later
(e.g., update any rust-toolchain, GitHub Actions workflow steps, CI images, and
Dockerfiles) so local and CI builds match the new edition; verify
rustup/toolchain settings and CI matrix entries reference 1.85.0+ and rerun the
pipeline to confirm compatibility.
---
Nitpick comments:
In `@build.rs`:
- Line 20: The CI detection currently sets is_ci by checking GITHUB_ACTIONS
only; update it to detect a wider range of CI environments by checking the
generic "CI" env var (and optionally fall back to other CI-specific vars)
instead of or in addition to "GITHUB_ACTIONS" so is_ci becomes true for most CI
systems; locate the declaration of the is_ci variable and replace the
std::env::var("GITHUB_ACTIONS").is_ok() expression with a check for
std::env::var("CI").is_ok() (or a small OR chain of known CI vars) to broaden
detection.
- Around line 3-41: In main(), add cargo rerun directives so the build script
only reruns when relevant inputs change: emit
println!("cargo:rerun-if-changed=.git/HEAD") and
println!("cargo:rerun-if-changed=.git/refs/heads") (or other git files you rely
on) to avoid invoking git/date on every compile, and emit environment-change
directives like println!("cargo:rerun-if-env-changed=GITHUB_ACTIONS") and
println!("cargo:rerun-if-env-changed=CARGO_PKG_VERSION") so changes to those
vars or package version trigger a rerun; place these near the top of main()
before computing commit_hash, build_time, and version.
In `@src/utils.rs`:
- Around line 10-18: The current match over (a_sem, b_sem) falls through to the
numeric fallback for mixed semver/non-semver cases and loses semver prerelease
semantics; update the match to handle mixed cases explicitly by adding arms for
(Ok(av), Err(_)) and (Err(_), Ok(bv)): when one side parsed as semver, split the
non-semver string into numeric parts and compare major/minor/patch from av (use
av.major,av.minor,av.patch) to those numeric parts; if numeric parts are equal
but av has a prerelease, treat the semver prerelease as less than the non-semver
“release” (i.e., return Less), and mirror the logic for the reversed case using
bv, ensuring a and b variables and the existing match block are used to locate
the change.
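The per-invocation override described for `src/fs.rs` above can be sketched with the standard library alone. Signature and variable names are assumptions (the real code derives `pvm_dir` from its own config), but the precedence is the one the comment asks for: explicit path, then `PVM_ENV_UPDATE_PATH`, then the PID-scoped fallback:

```rust
use std::path::{Path, PathBuf};

// Resolve the env-update file path: caller override > env var > PID fallback.
fn get_env_update_path(override_path: Option<&Path>, pvm_dir: &Path) -> PathBuf {
    if let Some(p) = override_path {
        return p.to_path_buf();
    }
    if let Ok(p) = std::env::var("PVM_ENV_UPDATE_PATH") {
        return PathBuf::from(p);
    }
    // Fallback keeps the existing PVM_SHELL_PID behavior.
    let pid = std::env::var("PVM_SHELL_PID")
        .unwrap_or_else(|_| std::process::id().to_string());
    pvm_dir.join(format!(".env_update_{pid}"))
}

fn main() {
    let dir = Path::new("/tmp/pvm");
    println!("{}", get_env_update_path(None, dir).display());
    println!("{}", get_env_update_path(Some(Path::new("/tmp/custom")), dir).display());
}
```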
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 632cbd06-1e14-494d-8cb9-2e803f4dbca3
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (15)
- .php-version
- Cargo.toml
- build.rs
- src/cli.rs
- src/commands/env.rs
- src/commands/install.rs
- src/commands/use_cmd.rs
- src/constants.rs
- src/fs.rs
- src/main.rs
- src/network.rs
- src/shell.rs
- src/update.rs
- src/utils.rs
- tests/cli.rs
.php-version (Outdated)

```diff
@@ -0,0 +1 @@
+8.4.18
\ No newline at end of file
```
Consider removing .php-version from the repository.
This file is typically generated per-project by the `pvm init` or `pvm use` commands. Committing it to the version manager's own repository may:

- Confuse users who clone the repo, as `pvm` might try to use PHP 8.4.18 when working on the tool itself.
- Create merge conflicts if developers have different local PHP versions.
Consider adding .php-version to .gitignore instead, or document why it's intentionally included.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.php-version at line 1, The repository includes a committed .php-version
file which is machine-specific and can confuse contributors or cause merge
conflicts; remove the tracked .php-version from the repo (git rm --cached
.php-version or delete and commit), add ".php-version" to .gitignore to prevent
re-committing, and if keeping it intentionally, add a short note in the project
README explaining why this specific PHP version is required; reference the
.php-version file when making these changes.
Actionable comments posted: 4
♻️ Duplicate comments (1)
src/shell.rs (1)
49-55: ⚠️ Potential issue | 🟠 Major

The wrapper temp filename can still collide.

In Bash and Zsh, `$$` is the shell PID even in subshells, and Fish's `$fish_pid` is also the shell PID. With only a second-resolution timestamp added, concurrent `pvm` invocations from the same shell can still generate the same `env_file` and clobber/remove each other's exports. Generate the path with `mktemp` (or another per-invocation random suffix) in each wrapper instead of assembling it manually. (gnu.org)

Example pattern for Bash/Zsh

```diff
-    local env_file="${PVM_DIR}/.env_update_$(date +%s)_$$"
-    [[ -f "$env_file" ]] && command rm -f "$env_file" 2>/dev/null
+    local env_file
+    env_file="$(mktemp "${PVM_DIR}/.env_update.XXXXXX")" || return 1
```

Also applies to: 104-110, 157-165
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/shell.rs` around lines 49 - 55, The temp env file generation using local env_file=\"${{{}}}/{}_$(date +%s)_$$\" is racy; replace the manual filename assembly with a per-invocation secure temp file created by mktemp (e.g. call mktemp to produce env_file), use that path for PVM_ENV_UPDATE_PATH before invoking command pvm, and keep the existing eval/cleanup logic but remove the old collision-prone construction; update the same pattern wherever env_file is built (the blocks referencing env_file, PVM_ENV_UPDATE_PATH and command pvm) and ensure you still remove the temp file after eval and on error (and add a trap if needed) so each wrapper invocation gets a unique file.
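A fuller sketch of the wrapper pattern (Bash/Zsh): `PVM_DIR` and `PVM_ENV_UPDATE_PATH` come from the PR, while `PVM_BIN` is a stand-in introduced here so the sketch can run without a real `pvm` binary on PATH:

```shell
# Wrapper sketch: mktemp gives each invocation a unique env-update file,
# unlike $$-based names that collide across concurrent invocations.
pvm() {
    local env_file rc
    env_file="$(mktemp "${PVM_DIR:-/tmp}/.env_update.XXXXXX")" || return 1
    PVM_ENV_UPDATE_PATH="$env_file" command "${PVM_BIN:-pvm}" "$@"
    rc=$?
    # Apply any exports the CLI wrote, then clean up even when it wrote none.
    [ -s "$env_file" ] && . "$env_file"
    command rm -f -- "$env_file"
    return $rc
}

# Demo: exercise the wrapper with a no-op stand-in binary.
PVM_DIR="$(mktemp -d)"
PVM_BIN=true
pvm use 8.4 && echo "wrapper ok, temp dir is empty again"
```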
🧹 Nitpick comments (2)
src/update.rs (1)
39-45: Consider propagating errors from critical I/O operations.

Using `.ok()` on `set_len`, `rewind`, and `flush` silently ignores potential I/O failures. While unlikely, if these fail, the guard file may be left in an inconsistent state. At minimum, propagating the `unlock()` error (or logging failures) would improve debuggability.

♻️ Optional: Propagate errors for better reliability

```diff
     // Write the new timestamp to prevent spam on next commands
-    file.set_len(0).ok();
-    file.rewind().ok();
+    file.set_len(0)?;
+    file.rewind()?;
     let mut writer = std::io::BufWriter::new(&file);
-    writeln!(writer, "{}", now).ok();
-    writer.flush().ok();
-    file.unlock().ok();
+    writeln!(writer, "{}", now)?;
+    writer.flush()?;
+    file.unlock()?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/update.rs` around lines 39 - 45, Replace the silent .ok() calls on critical I/O operations so errors are propagated or at least surfaced: remove .ok() on file.set_len, file.rewind, writer.flush, and file.unlock in the guard update logic (the block creating BufWriter and writing the timestamp) and use the ? operator to return a Result from the containing function (or map_err/log the error) so failures aren’t ignored; ensure the containing function signature is updated to return Result if needed and specifically propagate or handle file.unlock() error instead of discarding it.

src/network.rs (1)
178-180: Consider handling missing `Content-Length` header gracefully.

Some servers or proxies may not provide a `Content-Length` header (e.g., with chunked transfer encoding). Currently, this causes a hard error. Consider falling back to an indeterminate progress bar instead.

♻️ Optional: Support downloads without Content-Length

```diff
-    let total_size = response
-        .content_length()
-        .context("Failed to get content length from server")?;
-
-    let pb = ProgressBar::new(total_size);
-    pb.set_style(ProgressStyle::default_bar()
-        .template("{spinner:.green} [{elapsed_precise}] [{wide_bar:.cyan/blue}] {bytes}/{total_bytes} ({eta})")?
-        .progress_chars("#>-"));
+    let total_size = response.content_length();
+
+    let pb = if let Some(size) = total_size {
+        let pb = ProgressBar::new(size);
+        pb.set_style(ProgressStyle::default_bar()
+            .template("{spinner:.green} [{elapsed_precise}] [{wide_bar:.cyan/blue}] {bytes}/{total_bytes} ({eta})")?
+            .progress_chars("#>-"));
+        pb
+    } else {
+        let pb = ProgressBar::new_spinner();
+        pb.set_style(ProgressStyle::default_spinner()
+            .template("{spinner:.green} [{elapsed_precise}] {bytes}")?);
+        pb
+    };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/network.rs` around lines 178 - 180, The code currently unwraps response.content_length() into total_size which errors when Content-Length is missing; change total_size to an Option<u64> (e.g., let total_size = response.content_length().ok(); or use .context only for unexpected errors) and adjust the progress bar creation/usage (the progress bar variable, e.g., pb) to accept an indeterminate/unknown length when total_size is None (create spinner or progress bar with unknown length) and ensure the download/read loop updates bytes read without assuming a known total; update any places that used total_size as u64 to handle the Option (match or if let Some(total) { set length } else { use indeterminate mode }).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/commands/install.rs`:
- Around line 73-74: The file src/commands/install.rs is not formatted; run
rustfmt (cargo fmt) to fix style so the Release workflow passes; specifically
format the code around the calls to crate::fs::resolve_local_version and
crate::fs::get_version_bin_dir in the install command (e.g., the let v =
crate::fs::resolve_local_version(&resolved_version)?; and let bin_dir =
crate::fs::get_version_bin_dir(&v)?; lines) and commit the reformatted file.
- Around line 82-91: Open the file write sequence in src/commands/install.rs and
remove the truncate(true) flag from OpenOptions so the file is opened without
truncation, then acquire the advisory lock via file.lock_exclusive() before
clearing the file and call file.set_len(0)? to truncate while holding the lock,
followed by writing the contents (using the existing BufWriter/writeln!/flush)
and file.unlock()?; apply the same change to the duplicated sequence in
src/commands/use_cmd.rs or refactor both into a shared helper (e.g., a
write_env_file_locked function) to centralize opening (without truncate),
locking (file.lock_exclusive()), truncating (file.set_len(0)?), writing
(writeln!/flush), and unlocking (file.unlock()?).
In `@src/network.rs`:
- Around line 49-61: The nested conditional block around the cache check
(variables read_res, cache_path, modified, elapsed, CACHE_DURATION, and the
serde_json::from_str call that returns Ok(versions)) is misformatted; run
rustfmt (cargo fmt) or reformat that block so it matches rustfmt style (reflow
indentation and spacing in the if-let chains) to satisfy cargo fmt --check in
CI.
- Around line 93-112: The code block around get_target_triple()/suffix/versions
has formatting that fails cargo fmt; run `cargo fmt` or reformat the snippet so
it matches rustfmt style (fix spacing/indentation around the let target =
get_target_triple()?; let suffix = format!("-cli-{}.tar.gz", target); the for
file in res { ... } block and the call to crate::utils::sort_versions(&mut
versions)) to satisfy rustfmt and CI.
---
Duplicate comments:
In `@src/shell.rs`:
- Around line 49-55: The temp env file generation using local
env_file=\"${{{}}}/{}_$(date +%s)_$$\" is racy; replace the manual filename
assembly with a per-invocation secure temp file created by mktemp (e.g. call
mktemp to produce env_file), use that path for PVM_ENV_UPDATE_PATH before
invoking command pvm, and keep the existing eval/cleanup logic but remove the
old collision-prone construction; update the same pattern wherever env_file is
built (the blocks referencing env_file, PVM_ENV_UPDATE_PATH and command pvm) and
ensure you still remove the temp file after eval and on error (and add a trap if
needed) so each wrapper invocation gets a unique file.
---
Nitpick comments:
In `@src/network.rs`:
- Around line 178-180: The code currently unwraps response.content_length() into
total_size which errors when Content-Length is missing; change total_size to an
Option<u64> (e.g., let total_size = response.content_length().ok(); or use
.context only for unexpected errors) and adjust the progress bar creation/usage
(the progress bar variable, e.g., pb) to accept an indeterminate/unknown length
when total_size is None (create spinner or progress bar with unknown length) and
ensure the download/read loop updates bytes read without assuming a known total;
update any places that used total_size as u64 to handle the Option (match or if
let Some(total) { set length } else { use indeterminate mode }).
In `@src/update.rs`:
- Around line 39-45: Replace the silent .ok() calls on critical I/O operations
so errors are propagated or at least surfaced: remove .ok() on file.set_len,
file.rewind, writer.flush, and file.unlock in the guard update logic (the block
creating BufWriter and writing the timestamp) and use the ? operator to return a
Result from the containing function (or map_err/log the error) so failures
aren’t ignored; ensure the containing function signature is updated to return
Result if needed and specifically propagate or handle file.unlock() error
instead of discarding it.
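The open-without-truncate sequence recommended for `src/update.rs` can be sketched with the standard library alone. The advisory lock (fs4's `lock_exclusive`/`unlock` in the PR) is elided as comments here, and the function name is an assumption:

```rust
use std::io::{Read, Seek, Write};
use std::path::Path;

// Read the existing timestamp, then truncate and rewrite it in place.
// A real implementation holds an exclusive advisory lock around this whole
// read-then-write sequence; those calls are elided in this sketch.
fn refresh_guard(path: &Path, now: u64) -> std::io::Result<Option<u64>> {
    let mut file = std::fs::OpenOptions::new()
        .create(true)
        .read(true)
        .write(true)
        // No .truncate(true): truncating at open would erase the old value.
        .open(path)?;
    // file.lock_exclusive()?; // fs4 (elided)
    let mut contents = String::new();
    file.read_to_string(&mut contents)?;
    let previous = contents.trim().parse().ok();
    file.set_len(0)?; // clear only after reading, while the lock is held
    file.rewind()?;
    writeln!(file, "{now}")?;
    file.flush()?;
    // file.unlock()?; // fs4 (elided)
    Ok(previous)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("pvm_update_guard_demo");
    let previous = refresh_guard(&path, 1_700_000_000)?;
    println!("previous timestamp: {previous:?}");
    std::fs::remove_file(&path)
}
```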
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 843fd8fc-6613-4da5-b1f4-6e91e5b0a9eb
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (11)
- .gitignore
- Cargo.toml
- build.rs
- rust-toolchain.toml
- src/commands/install.rs
- src/commands/use_cmd.rs
- src/fs.rs
- src/network.rs
- src/shell.rs
- src/update.rs
- src/utils.rs
✅ Files skipped from review due to trivial changes (1)
- .gitignore
🚧 Files skipped from review as they are similar to previous changes (2)
- src/utils.rs
- build.rs
Actionable comments posted: 1
🧹 Nitpick comments (3)
src/network.rs (2)
177-179: Download will fail if server omits `Content-Length` header.

Some servers or CDN configurations may not provide a `Content-Length` header, especially for dynamically served content. The current code requires it and will error out with "Failed to get content length from server".

Consider falling back to an indeterminate progress bar (spinner) when `content_length()` returns `None`:

Suggested approach

```diff
-    let total_size = response
-        .content_length()
-        .context("Failed to get content length from server")?;
-
-    let pb = ProgressBar::new(total_size);
-    pb.set_style(ProgressStyle::default_bar()
-        .template("{spinner:.green} [{elapsed_precise}] [{wide_bar:.cyan/blue}] {bytes}/{total_bytes} ({eta})")?
-        .progress_chars("#>-"));
+    let total_size = response.content_length();
+
+    let pb = if let Some(size) = total_size {
+        let bar = ProgressBar::new(size);
+        bar.set_style(ProgressStyle::default_bar()
+            .template("{spinner:.green} [{elapsed_precise}] [{wide_bar:.cyan/blue}] {bytes}/{total_bytes} ({eta})")?
+            .progress_chars("#>-"));
+        bar
+    } else {
+        let bar = ProgressBar::new_spinner();
+        bar.set_style(ProgressStyle::default_spinner()
+            .template("{spinner:.green} [{elapsed_precise}] {bytes} downloaded")?);
+        bar
+    };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/network.rs` around lines 177 - 179, The code currently unwraps response.content_length() into total_size which errors when the header is missing; change the logic to treat content_length() as an Option: if Some(len) keep using total_size as before, otherwise create an indeterminate progress indicator (spinner) instead of a length-based progress bar. Update all uses of total_size (and the progress bar creation/updates, e.g., where you construct the ProgressBar or call set_length) to handle the None case (use spinner.start() / spinner.inc() or a ProgressStyle::template for unknown length) and avoid returning an error from the download function when content_length() is None so downloads succeed against servers without Content-Length.
186-196: Minor: `downloaded` variable is redundant.

The `downloaded` variable tracks the same value as `buffer.len()`. You can simplify by using `buffer.len() as u64` directly:

Simplified approach

```diff
-    let mut downloaded: u64 = 0;
     let mut stream = response.bytes_stream();
     let mut buffer = Vec::new();

     while let Some(item) = stream.next().await {
         let chunk = item.context("Error while downloading chunk")?;
         buffer.extend_from_slice(&chunk);
-        let new = std::cmp::min(downloaded + (chunk.len() as u64), total_size);
-        downloaded = new;
-        pb.set_position(new);
+        pb.set_position(buffer.len() as u64);
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/network.rs` around lines 186 - 196, Remove the redundant downloaded variable and compute progress directly from buffer.len(); specifically delete the declaration "let mut downloaded: u64 = 0" and the assignment "downloaded = new", and within the loop replace the min/update logic with let new = std::cmp::min(buffer.len() as u64 + (chunk.len() as u64), total_size) (or simply compute new = std::cmp::min(buffer.len() as u64, total_size) after extending buffer) and call pb.set_position(new); keep the existing stream iteration (response.bytes_stream(), stream.next().await) and error handling (context) unchanged.

src/update.rs (1)
60-64: Silently ignoring network errors may hide connectivity issues.

When `network::resolve_version` fails, the function returns `Ok(None)` without any indication of the failure. While this is acceptable for an optional update check, users with persistent network issues won't receive any feedback.

Consider logging a debug message when the resolution fails, or at minimum document this silent-failure behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/update.rs` around lines 60 - 64, The call to network::resolve_version(&minor_prefix).await currently ignores errors and returns Ok(None) silently; update the handling so that on Err(e) you emit a debug/log message (e.g., using tracing::debug! or log::debug!) that includes the error and context (minor_prefix and target_version) and then continue to return Ok(None) to preserve behavior; modify the match/if-let around network::resolve_version to capture Err(e) and log it rather than dropping it silently.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/commands/use_cmd.rs`:
- Around line 120-130: The current pattern in use_cmd.rs opens the env file with
.truncate(true) before calling file.lock_exclusive(), which causes a race where
the file is truncated prior to acquiring the exclusive lock; remove the
.truncate(true) flag from the OpenOptions call, call file.lock_exclusive()
immediately after open, then call file.set_len(0)? to truncate the file while
holding the lock, write and flush via the existing writer, and finally unlock;
additionally, factor the logic into a shared helper (e.g., write_env_file_locked
in src::fs) used by both use_cmd.rs and install.rs to avoid duplication and
ensure the correct locking/truncation sequence.
---
Nitpick comments:
In `@src/network.rs`:
- Around line 177-179: The code currently unwraps response.content_length() into
total_size which errors when the header is missing; change the logic to treat
content_length() as an Option: if Some(len) keep using total_size as before,
otherwise create an indeterminate progress indicator (spinner) instead of a
length-based progress bar. Update all uses of total_size (and the progress bar
creation/updates, e.g., where you construct the ProgressBar or call set_length)
to handle the None case (use spinner.start() / spinner.inc() or a
ProgressStyle::template for unknown length) and avoid returning an error from
the download function when content_length() is None so downloads succeed against
servers without Content-Length.
- Around line 186-196: Remove the redundant downloaded variable and compute
progress directly from buffer.len(); specifically delete the declaration "let
mut downloaded: u64 = 0" and the assignment "downloaded = new", and within the
loop replace the min/update logic with let new = std::cmp::min(buffer.len() as
u64 + (chunk.len() as u64), total_size) (or simply compute new =
std::cmp::min(buffer.len() as u64, total_size) after extending buffer) and call
pb.set_position(new); keep the existing stream iteration
(response.bytes_stream(), stream.next().await) and error handling (context)
unchanged.
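The Content-Length handling suggested above can be sketched without pulling in reqwest or indicatif. The `Progress` enum below is a crate-free, illustrative stand-in for choosing between a length-based bar (indicatif's `ProgressBar::new(len)`) and an indeterminate spinner (`ProgressBar::new_spinner()`) when the header is absent.

```rust
/// Stand-in for the bar-vs-spinner decision: a known total gets a
/// clamped position, an unknown total just counts bytes.
#[derive(Debug, PartialEq)]
enum Progress {
    Bar { total: u64, position: u64 },
    Spinner { position: u64 },
}

impl Progress {
    /// Map the optional Content-Length onto a progress mode instead of
    /// erroring when the header is missing.
    fn for_content_length(len: Option<u64>) -> Self {
        match len {
            Some(total) => Progress::Bar { total, position: 0 },
            None => Progress::Spinner { position: 0 },
        }
    }

    /// Advance by a chunk, clamping to the total when one is known.
    fn advance(&mut self, chunk: u64) {
        match self {
            Progress::Bar { total, position } => {
                *position = (*position + chunk).min(*total);
            }
            Progress::Spinner { position } => *position += chunk,
        }
    }
}
```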
In `@src/update.rs`:
- Around line 60-64: The call to network::resolve_version(&minor_prefix).await
currently ignores errors and returns Ok(None) silently; update the handling so
that on Err(e) you emit a debug/log message (e.g., using tracing::debug! or
log::debug!) that includes the error and context (minor_prefix and
target_version) and then continue to return Ok(None) to preserve behavior;
modify the match/if-let around network::resolve_version to capture Err(e) and
log it rather than dropping it silently.
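The log-and-fall-back pattern the prompt describes can be isolated into a small function. This is a crate-free sketch: `eprintln!` stands in for `log::debug!`/`tracing::debug!`, and `resolve_or_none` is a hypothetical wrapper, not the PR's actual function.

```rust
/// Capture the error from a version-resolution attempt, log it with
/// context, and still return None to preserve the existing behavior.
fn resolve_or_none(result: Result<Option<String>, String>, minor_prefix: &str) -> Option<String> {
    match result {
        Ok(resolved) => resolved,
        Err(e) => {
            // eprintln! stands in for a debug-level logger here
            eprintln!("debug: resolve_version({minor_prefix}) failed: {e}");
            None
        }
    }
}
```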
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d8ce650f-1cfb-48f0-8541-1861082d4c4c
📒 Files selected for processing (4)
src/commands/install.rs, src/commands/use_cmd.rs, src/network.rs, src/update.rs
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/commands/use_cmd.rs (2)
58-63: ⚠️ Potential issue | 🟠 Major: This flow ignores the answer to install's "use now" prompt.
`Install::call()` already asks whether the new version should be used immediately in `src/commands/install.rs` (lines 57-97). `version = newer_version` happens unconditionally afterward, so `use_cmd` still switches even if the user answered "no" there. This path needs a non-interactive install helper, or `install` should return whether it already performed the switch.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/use_cmd.rs` around lines 58 - 63, Install::call currently prompts the user and the caller in use_cmd.rs ignores its answer; modify Install::call (or add a new method on Install) to return a Result<bool, _> (or similar) that indicates whether the install performed the "use now" switch, then update the call site where install_cmd.call().await is invoked to capture that boolean and only set version = newer_version when it is true; ensure the new return type is propagated or provide a non-interactive helper like Install::call_non_interactive that performs install without prompting and returns whether it switched so use_cmd can decide whether to update its version variable.
119-131: ⚠️ Potential issue | 🔴 Critical: Remove the unsafe environment mutation; rely on the env-update file approach already being written.
The `unsafe` block here uses the same unsound `std::env::set_var` pattern as in `install.rs`. In Rust 2024, `std::env::set_var` is explicitly marked `unsafe` because it mutates process-wide global state without proving single-threaded or exclusive access: the standard library docs state that on non-Windows platforms in multithreaded programs, the only safe option is to not use `set_var`/`remove_var` at all, since no thread can reliably guarantee exclusive access to the environment.
The code already correctly writes env exports to the file (lines 112-116), which will be evaluated by the shell hook wrapper. Remove the `unsafe` block (lines 119-131) and let child processes read the environment from the file instead of mutating the current process's global environment.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/use_cmd.rs` around lines 119 - 131, Remove the unsafe process-wide environment mutation in use_cmd.rs: delete the unsafe block that calls std::env::set_var for MULTISHELL_PATH_VAR and PATH (the code that reads std::env::var_os("PATH"), builds new_path with bin_dir, and calls std::env::set_var("PATH", ...)). Rely on the env-update file already written earlier (the env exports written at lines ~112–116) so child shells pick up bin_dir via the shell hook wrapper instead of mutating the current process; do not replace it with any other std::env::set_var/remove_var calls.
src/fs.rs (1)
188-229: ⚠️ Potential issue | 🟠 Major: Env-override tests use unsafe mutation without proper multi-threaded protection.
In Rust 2024, `std::env::set_var` and `std::env::remove_var` require that no other thread concurrently reads or writes the environment. A local `Mutex` only serializes access within the two tests that use it; it cannot prevent other tests running in parallel (which is the default in `cargo test`) or library code from reading `PVM_DIR` concurrently. On non-Windows platforms, this is unsound.
Move these tests to subprocess-based integration tests (e.g., using `assert_cmd`), serialize the entire test suite with `--test-threads=1`, or refactor the code under test to accept configuration without mutating the process environment.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/fs.rs` around lines 188 - 229, The tests test_get_pvm_dir_with_env_override and test_list_installed_versions mutate the process env unsafely via ENV_LOCK and unsafe std::env::set_var/remove_var; change them into subprocess-based integration tests instead: remove the ENV_LOCK and unsafe env mutations from src/fs.rs tests and create integration tests (e.g., in tests/ or using assert_cmd) that spawn a child process with PVM_DIR set via Command::env("PVM_DIR", temp_dir.path()), create the versions dirs in that temp_dir, run a small test helper or the binary which calls get_pvm_dir(), get_versions_dir(), and list_installed_versions(), and assert on its stdout/exit code; this avoids global env mutation and resolves the Rust 2024 thread-safety issue.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/network.rs`:
- Line 38: The remote cache file currently uses a single key (REMOTE_CACHE_FILE)
at cache_path (let cache_path = pvm_dir.join(REMOTE_CACHE_FILE)) causing
cross-target pollution; modify caching in the functions that read/write the
versions cache to include the target triple from get_target_triple() (e.g.,
append or prepend it to REMOTE_CACHE_FILE to form a target-scoped filename) or
embed the target triple inside the cached payload and validate it on load
(reject and refresh if mismatched), and update all places that construct
cache_path and the load/save logic so the cached versions used by functions
handling versions are always scoped per target triple.
- Around line 80-91: The HTTP clients are created with Client::new() with no
timeouts; replace those calls with a reqwest::Client built via
reqwest::Client::builder() (ClientBuilder) and set both connect_timeout(...) and
timeout(...) to reasonable durations, then call build() to obtain the Client;
apply this change to both locations in this file that currently call
Client::new() so both the connect and overall request timeouts are enforced.
In `@src/update.rs`:
- Around line 31-35: The throttle check using `now - last_check` can underflow
if `last_check` is in the future; change the subtraction to a safe form (e.g.,
use `now.saturating_sub(last_check)` or `checked_sub` and handle None) when
comparing against `86400` in the block that parses `last_check` from `contents`,
so the guard in the `if let Ok(last_check)` branch correctly skips the update
when appropriate and still unlocks the file via
`fs4::fs_std::FileExt::unlock(&file)` before returning `Ok(None)`.
---
Outside diff comments:
In `@src/commands/use_cmd.rs`:
- Around line 58-63: Install::call currently prompts the user and the caller in
use_cmd.rs ignores its answer; modify Install::call (or add a new method on
Install) to return a Result<bool, _> (or similar) that indicates whether the
install performed the "use now" switch, then update the call site where
install_cmd.call().await is invoked to capture that boolean and only set version
= newer_version when it is true; ensure the new return type is propagated or
provide a non-interactive helper like Install::call_non_interactive that
performs install without prompting and returns whether it switched so use_cmd
can decide whether to update its version variable.
- Around line 119-131: Remove the unsafe process-wide environment mutation in
use_cmd.rs: delete the unsafe block that calls std::env::set_var for
MULTISHELL_PATH_VAR and PATH (the code that reads std::env::var_os("PATH"),
builds new_path with bin_dir, and calls std::env::set_var("PATH", ...)). Rely on
the env-update file already written earlier (the env exports written at lines
~112–116) so child shells pick up bin_dir via the shell hook wrapper instead of
mutating the current process; do not replace it with any other
std::env::set_var/remove_var calls.
In `@src/fs.rs`:
- Around line 188-229: The tests test_get_pvm_dir_with_env_override and
test_list_installed_versions mutate the process env unsafely via ENV_LOCK and
unsafe std::env::set_var/remove_var; change them into subprocess-based
integration tests instead: remove the ENV_LOCK and unsafe env mutations from
src/fs.rs tests and create integration tests (e.g., in tests/ or using
assert_cmd) that spawn a child process with PVM_DIR set via
Command::env("PVM_DIR", temp_dir.path()), create the versions dirs in that
temp_dir, run a small test helper or the binary which calls get_pvm_dir(),
get_versions_dir(), and list_installed_versions(), and assert on its stdout/exit
code; this avoids global env mutation and resolves the Rust 2024 thread-safety
issue.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: af88eca4-d900-479a-95d9-af23f75828b9
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (6)
Cargo.toml, src/commands/install.rs, src/commands/use_cmd.rs, src/fs.rs, src/network.rs, src/update.rs
  pub async fn get_available_versions() -> Result<Vec<String>> {
      let pvm_dir = crate::fs::get_pvm_dir()?;
-     let cache_path = pvm_dir.join(CACHE_FILE);
+     let cache_path = pvm_dir.join(REMOTE_CACHE_FILE);
Scope the version cache by target triple.
versions is now filtered by get_target_triple(), but the cache key is still the single remote_cache.json. Any shared PVM_DIR used from another architecture/OS can reuse the wrong filtered list for 24 hours, which can hide valid versions or resolve one that later 404s on download. Key the cache by target, or store the target in the cached payload and reject mismatches.
Suggested fix
- let cache_path = pvm_dir.join(REMOTE_CACHE_FILE);
+ let target = get_target_triple()?;
+ let cache_path = pvm_dir.join(format!("{REMOTE_CACHE_FILE}.{target}"));
...
- let target = get_target_triple()?;
  let suffix = format!("-cli-{}.tar.gz", target);
Also applies to: 94-110
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/network.rs` at line 38, The remote cache file currently uses a single key
(REMOTE_CACHE_FILE) at cache_path (let cache_path =
pvm_dir.join(REMOTE_CACHE_FILE)) causing cross-target pollution; modify caching
in the functions that read/write the versions cache to include the target triple
from get_target_triple() (e.g., append or prepend it to REMOTE_CACHE_FILE to
form a target-scoped filename) or embed the target triple inside the cached
payload and validate it on load (reject and refresh if mismatched), and update
all places that construct cache_path and the load/save logic so the cached
versions used by functions handling versions are always scoped per target
triple.
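One way to scope the cache per target, sketched crate-free: `OS`/`ARCH` from `std::env::consts` stand in for the PR's `get_target_triple()`, and the exact filename format is illustrative.

```rust
use std::env::consts::{ARCH, OS};

// Illustrative constant; the PR keeps this in src/constants.rs.
const REMOTE_CACHE_FILE: &str = "remote_cache.json";

/// Build a cache filename scoped by target so a shared PVM_DIR used
/// from another OS/architecture never reuses the wrong filtered list.
fn target_scoped_cache_name() -> String {
    format!("{REMOTE_CACHE_FILE}.{OS}-{ARCH}")
}
```

The alternative the comment mentions, embedding the target triple inside the cached payload and rejecting mismatches on load, avoids extra files at the cost of a read-then-refresh round trip.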
  let client = Client::new();
- let res = client.get(BASE_URL).send().await?.text().await?;
+ let json_url = format!("{}?format=json", BASE_URL);
+ let res = client
+     .get(json_url)
+     .send()
+     .await
+     .context("Failed to fetch version list from remote")?
+     .error_for_status()
+     .context("Remote server returned an error when fetching version list")?
+     .json::<Vec<RemoteFile>>()
+     .await
+     .context("Failed to parse remote version JSON")?;
🧩 Analysis chain
🏁 Script executed:
# First, let's check the file size and examine the relevant sections
wc -l src/network.rs
Repository: WebProject-xyz/php-version-manager
Length of output: 96
🏁 Script executed:
# Read the file to see the context around lines 80-91 and 169-175
cat -n src/network.rs | head -20
Repository: WebProject-xyz/php-version-manager
Length of output: 735
🏁 Script executed:
# Search for all Client::new() calls in the repository
rg "Client::new\(\)" src/network.rs -n -A 5 -B 2
Repository: WebProject-xyz/php-version-manager
Length of output: 650
🏁 Script executed:
# Check imports at the top of the file
head -30 src/network.rs
Repository: WebProject-xyz/php-version-manager
Length of output: 899
🏁 Script executed:
# View full context around lines 80-91
sed -n '75,95p' src/network.rs | cat -n
Repository: WebProject-xyz/php-version-manager
Length of output: 946
🏁 Script executed:
# View full context around lines 169-175
sed -n '165,180p' src/network.rs | cat -n
Repository: WebProject-xyz/php-version-manager
Length of output: 808
🏁 Script executed:
# Count all occurrences of Client::new()
rg "Client::new\(\)" src/network.rs | wc -l
Repository: WebProject-xyz/php-version-manager
Length of output: 79
🏁 Script executed:
# Find all occurrences with line numbers
rg "Client::new\(\)" src/network.rs -n
Repository: WebProject-xyz/php-version-manager
Length of output: 148
🏁 Script executed:
# Check Cargo.toml for reqwest version
cat Cargo.toml | grep -A 2 reqwest
Repository: WebProject-xyz/php-version-manager
Length of output: 224
🏁 Script executed:
# Search for any existing client configuration patterns
rg "ClientBuilder\|client\(" src/network.rs -n
Repository: WebProject-xyz/php-version-manager
Length of output: 60
🌐 Web query:
reqwest 0.13 ClientBuilder timeout connect_timeout API documentation
💡 Result:
In reqwest 0.13.x, these are the relevant ClientBuilder timeout APIs:
Async client (reqwest::Client::builder())
- `timeout(Duration)`: total request deadline, from "start connecting" until the response body is fully read (default: no timeout). [1]
- `read_timeout(Duration)`: per-read timeout that resets after each successful read (default: no timeout). [1]
- `connect_timeout(Duration)`: timeout for only the connect phase (default: None/no timeout). [1]
Blocking client (reqwest::blocking::Client::builder())
- `timeout(T: Into<Option<Duration>>)` (e.g., `Some(d)`/`None`): timeout for connect, read, and write operations (default: 30 seconds; `None` disables). [2]
- `connect_timeout(T: Into<Option<Duration>>)` (e.g., `Some(d)`/`None`): timeout for only the connect phase (default: None). [2]
Example (async):
use std::time::Duration;
let client = reqwest::Client::builder()
.connect_timeout(Duration::from_secs(3))
.timeout(Duration::from_secs(10))
    .build()?;
Example (blocking):
use std::time::Duration;
let client = reqwest::blocking::Client::builder()
.connect_timeout(Some(Duration::from_secs(3)))
.timeout(Some(Duration::from_secs(10)))
    .build()?;
Sources: reqwest::ClientBuilder (async) docs [1], reqwest::blocking::ClientBuilder docs [2].
Citations:
- 1: https://docs.rs/reqwest/latest/reqwest/struct.ClientBuilder.html
- 2: https://docs.rs/reqwest/latest/reqwest/blocking/struct.ClientBuilder.html
Add bounded timeouts to both HTTP clients.
Both calls use Client::new() without timeouts, which allows hung connections to block indefinitely. Use ClientBuilder to set both connect_timeout() and timeout() on both occurrences at lines 80 and 168.
Suggested fix
- let client = Client::new();
+ let client = Client::builder()
+ .connect_timeout(Duration::from_secs(10))
+ .timeout(Duration::from_secs(300))
+ .build()
+ .context("Failed to initialize HTTP client")?;
Apply to both call sites.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/network.rs` around lines 80 - 91, The HTTP clients are created with
Client::new() with no timeouts; replace those calls with a reqwest::Client built
via reqwest::Client::builder() (ClientBuilder) and set both connect_timeout(...)
and timeout(...) to reasonable durations, then call build() to obtain the
Client; apply this change to both locations in this file that currently call
Client::new() so both the connect and overall request timeouts are enforced.
  if !contents.is_empty() {
      if let Ok(last_check) = contents.trim().parse::<u64>() {
          if now - last_check < 86400 {
              fs4::fs_std::FileExt::unlock(&file).ok();
              return Ok(None);
Handle future timestamps in the guard file.
If the stored timestamp is ahead of now because of clock skew, a restored VM snapshot, or manual edits, now - last_check underflows and the throttle decision becomes wrong. Use saturating_sub or checked_sub before comparing with 86400.
Suggested fix
- if now - last_check < 86400 {
+ if now.saturating_sub(last_check) < 86400 {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/update.rs` around lines 31 - 35, The throttle check using `now -
last_check` can underflow if `last_check` is in the future; change the
subtraction to a safe form (e.g., use `now.saturating_sub(last_check)` or
`checked_sub` and handle None) when comparing against `86400` in the block that
parses `last_check` from `contents`, so the guard in the `if let Ok(last_check)`
branch correctly skips the update when appropriate and still unlocks the file
via `fs4::fs_std::FileExt::unlock(&file)` before returning `Ok(None)`.
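The guard with the suggested fix applied can be isolated into a small pure function (the name is illustrative; the PR keeps this logic inline in `update.rs`):

```rust
/// Returns true when the last check is recent enough to skip a new one.
/// saturating_sub treats a future timestamp (clock skew, a restored VM
/// snapshot, manual edits) as "checked just now" instead of underflowing.
fn should_skip_update_check(now: u64, last_check: u64) -> bool {
    now.saturating_sub(last_check) < 86_400
}
```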
Summary of Changes
1. Shell Update & Session Isolation (Root Cause Fix)
- The `pvm env` command now explicitly exports `PVM_DIR`, ensuring the shell wrapper can correctly locate its configuration files.
- Environment update files are now session-specific (e.g., `.env_update_1234`). This prevents race conditions and "version bleeding" between different terminal tabs.
- Shell wrappers now include safety guards against `rm` or `cat` operations on invalid paths.
- Switched to `command rm` and redirected all temporary file cleanup output to `/dev/null`.
2. Concurrency & Reliability
- Introduced the `fs4` crate to implement advisory file locking on shared state files (`.env_update`, `.update_check_guard`, `remote_cache.json`). This ensures data integrity when multiple PVM instances run simultaneously.
- Refactored `network.rs` to use the remote JSON API instead of HTML scraping. This is faster, more robust, and allowed for the removal of the `scraper` dependency.
3. Cross-Platform Support
- Target detection is now dynamic (via `std::env::consts`). PVM now natively supports macOS (Intel/Silicon) and ARM64 Linux (e.g., Raspberry Pi).
4. Modernization & Maintainability
- Added a `build.rs` script that injects the current Git commit ID and build timestamp into the version string.
- Adopted the `semver` crate for standard-compliant version parsing and sorting, replacing brittle manual string splitting.
- Centralized magic strings into a new `src/constants.rs` file.
- Removed unused dependencies (`scraper`, `regex`, `thiserror`, `log`).
Summary by CodeRabbit
New Features
Improvements
Tests / Chores
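The semantic version sorting mentioned in the summary can be sketched crate-free. The PR adopts the `semver` crate instead, so the parser below is only an illustration of the component-wise numeric comparison that replaces lexicographic string sorting (where "8.10.0" would wrongly sort before "8.2.1").

```rust
/// Parse a dotted numeric version into components; non-numeric parts
/// fall back to 0. A stand-in for semver::Version parsing.
fn version_key(v: &str) -> Vec<u64> {
    v.trim_start_matches('v')
        .split('.')
        .map(|part| part.parse::<u64>().unwrap_or(0))
        .collect()
}

/// Sort versions by comparing their numeric components in order.
fn sort_versions(mut versions: Vec<String>) -> Vec<String> {
    versions.sort_by_key(|v| version_key(v));
    versions
}
```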