📝 Walkthrough

Adds a large, Node-facing filesystem API implemented in Rust via napi: many new modules (access, stat, read/write, copy, cp, glob, readdir, rm, symlink, etc.), new Dirent/Stats types, extensive AVA tests and benchmarks, documentation/contributing updates, JS re-exports, and reference JS utilities.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant JS as "JavaScript caller"
    participant Module as "module.exports (index.js)"
    participant Native as "nativeBinding (N-API)"
    participant Rust as "Rust implementation (src/*.rs)"
    participant FS as "Filesystem"
    JS->>Module: call fs.glob(pattern, options)
    Module->>Native: nativeBinding.glob(pattern, options)
    Native->>Rust: invoke glob task (sync or async)
    Rust->>FS: traverse filesystem (WalkBuilder, gitignore, exclude, concurrency)
    FS-->>Rust: entries + metadata
    Rust-->>Native: return Either<Vec<string>, Vec<Dirent>>
    Native-->>Module: propagate result / Promise
    Module-->>JS: resolve with strings or Dirent objects
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Poem
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 5
🧹 Nitpick comments (11)
benchmark/glob.ts (1)
33-33: Use a neutral method to check directory existence.

Using `fast-glob` to check for `node_modules` existence creates a circular dependency in the benchmark, as you're using one of the libraries being benchmarked to control the benchmark flow. This could introduce bias or side effects. Apply this diff to use Node's built-in filesystem check:

```diff
+import { existsSync } from 'fs'
+
 // ...
-const hasNodeModules = fastGlob.sync('node_modules').length > 0
+const hasNodeModules = existsSync('node_modules')
```

__test__/glob.spec.ts (1)
50-70: Improve git_ignore test pattern for more reliable verification.

The test uses pattern `target/**/*.d`, but Rust build artifacts (in `target/`) typically don't include `.d` files. This may cause the test to always pass with empty results, reducing its effectiveness. Consider using a pattern that matches actual Rust build artifacts:

```diff
-const ignoredFiles = globSync('target/**/*.d', { cwd: CWD })
+const ignoredFiles = globSync('target/**/*.rlib', { cwd: CWD })
 // Should be empty or very few if ignored (actually cargo ignores target/)
 // But wait, standard_filters includes .ignore and .gitignore.
 // If we force git_ignore: false, we should see files
-const includedFiles = globSync('target/**/*.d', { cwd: CWD, git_ignore: false })
+const includedFiles = globSync('target/**/*.rlib', { cwd: CWD, git_ignore: false })
```

Alternatively, use `target/**/*` to match all files for a more comprehensive test.

src/types.rs (3)
1-1: Reconsider the blanket dead_code suppression.

The module-level `#![allow(dead_code)]` directive suppresses warnings for all unused code, which can hide genuinely unused items. If specific items need this attribute, apply it at the item level. Otherwise, consider removing it to maintain code hygiene.
7-14: Define constants for file_type values.

The `file_type` field uses magic numbers (1-7) with meanings documented only in a comment. Consider defining module-level constants to improve readability and maintainability. Add constants at the module level:

```rust
const FILE_TYPE_FILE: u8 = 1;
const FILE_TYPE_DIR: u8 = 2;
const FILE_TYPE_SYMLINK: u8 = 3;
const FILE_TYPE_BLOCK: u8 = 4;
const FILE_TYPE_CHAR: u8 = 5;
const FILE_TYPE_FIFO: u8 = 6;
const FILE_TYPE_SOCKET: u8 = 7;
const FILE_TYPE_UNKNOWN: u8 = 0;
```

Then update the methods:

```diff
 #[napi(js_name = "isFile")]
 pub fn is_file(&self) -> bool {
-  self.file_type == 1
+  self.file_type == FILE_TYPE_FILE
 }
```
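As a sanity check of the constant-based mapping, here is a minimal standalone sketch; the constant names and the `file_type_id` helper are the reviewer's proposal, not the crate's current API:

```rust
use std::fs;

// Reviewer-proposed constants (illustrative; not yet in src/types.rs).
const FILE_TYPE_FILE: u8 = 1;
const FILE_TYPE_DIR: u8 = 2;
const FILE_TYPE_SYMLINK: u8 = 3;
const FILE_TYPE_UNKNOWN: u8 = 0;

// Maps std::fs::FileType to the numeric id stored on Dirent.
fn file_type_id(ft: std::fs::FileType) -> u8 {
    if ft.is_file() {
        FILE_TYPE_FILE
    } else if ft.is_dir() {
        FILE_TYPE_DIR
    } else if ft.is_symlink() {
        FILE_TYPE_SYMLINK
    } else {
        FILE_TYPE_UNKNOWN
    }
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir();
    assert_eq!(file_type_id(fs::metadata(&dir)?.file_type()), FILE_TYPE_DIR);

    let file = dir.join("hyperfs_type_demo.txt");
    fs::write(&file, b"x")?;
    assert_eq!(file_type_id(fs::metadata(&file)?.file_type()), FILE_TYPE_FILE);
    fs::remove_file(&file)?;
    Ok(())
}
```

Comparisons against named constants then read the same in the predicates and in tests.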
53-57: Clarify deprecation status and optimize if retained.

The comment marks this as a "Deprecated alias," but the public API in `index.d.ts` (line 13) exposes `path` as a standard getter without deprecation warnings. Additionally, cloning `parent_path` on every call is inefficient. Since `parent_path` is already exposed as a readonly field, consider:

- If deprecated: Document the replacement API and add deprecation annotations
- If retained: Return a reference instead of cloning

If retained, return a reference:

```diff
-#[napi(getter)]
-pub fn path(&self) -> String {
-  self.parent_path.clone()
-}
+#[napi(getter)]
+pub fn path(&self) -> &str {
+  &self.parent_path
+}
```

src/glob.rs (6)
24-30: Consider implementing Default for GlobOptions.

The manual construction of default options is verbose. Implementing the `Default` trait would simplify this. Add to `GlobOptions`:

```rust
impl Default for GlobOptions {
  fn default() -> Self {
    Self {
      cwd: None,
      with_file_types: None,
      exclude: None,
      concurrency: None,
      git_ignore: None,
    }
  }
}
```

Then simplify line 24:

```diff
-let opts = options.unwrap_or(GlobOptions {
-  cwd: None,
-  with_file_types: None,
-  exclude: None,
-  concurrency: None,
-  git_ignore: None,
-});
+let opts = options.unwrap_or_default();
```
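For reference, since every field is an `Option`, `#[derive(Default)]` produces the same all-`None` default as the manual `impl`; a standalone sketch (the struct fields mirror the review, not necessarily the crate's exact definition):

```rust
// Mirror of the GlobOptions fields from the review (illustrative).
#[derive(Debug, Default, PartialEq)]
struct GlobOptions {
    cwd: Option<String>,
    with_file_types: Option<bool>,
    exclude: Option<Vec<String>>,
    concurrency: Option<u32>,
    git_ignore: Option<bool>,
}

// unwrap_or_default() replaces the verbose manual construction.
fn resolve_options(options: Option<GlobOptions>) -> GlobOptions {
    options.unwrap_or_default()
}

fn main() {
    let opts = resolve_options(None);
    assert_eq!(opts, GlobOptions::default());
    assert!(opts.git_ignore.is_none());

    let custom = resolve_options(Some(GlobOptions {
        concurrency: Some(8),
        ..Default::default()
    }));
    assert_eq!(custom.concurrency, Some(8));
}
```

The struct-update syntax (`..Default::default()`) also lets callers override a single field without spelling out the rest.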
32-34: Document the default concurrency value.

The default concurrency of 4 threads is reasonable but arbitrary. Consider documenting why this value was chosen or making it configurable based on available CPU cores. You could use `num_cpus::get()` for a dynamic default, or simply document the rationale in a comment:

```diff
+// Default to 4 threads for balanced performance across different systems
 let concurrency = opts.concurrency.unwrap_or(4) as usize;
```
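If a dynamic default is preferred without adding the `num_cpus` dependency, `std::thread::available_parallelism` (stable since Rust 1.59) can serve; a hedged sketch, with the `effective_concurrency` helper being illustrative:

```rust
use std::thread;

// CPU-aware fallback: an explicit request wins, then detected parallelism,
// then the current hard-coded 4 if detection fails.
fn effective_concurrency(requested: Option<u32>) -> usize {
    requested.map(|n| n as usize).unwrap_or_else(|| {
        thread::available_parallelism()
            .map(|n| n.get())
            .unwrap_or(4)
    })
}

fn main() {
    // Explicit request is honored as-is.
    assert_eq!(effective_concurrency(Some(2)), 2);
    // The detected default is always at least one thread.
    assert!(effective_concurrency(None) >= 1);
}
```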
57-61: Clarify git_ignore option semantics.

The `git_ignore` option name might be ambiguous. The default value `true` enables respecting `.gitignore` files (via `standard_filters`), but the name could be read as "ignore git files" rather than "use gitignore rules." Consider renaming to `respect_git_ignore` or `use_git_ignore` for clarity, or add a comment:

```diff
 builder
   .overrides(overrides) // Apply glob patterns
-  .standard_filters(opts.git_ignore.unwrap_or(true)) // Automatically handle .gitignore, .ignore etc
+  .standard_filters(opts.git_ignore.unwrap_or(true)) // When true, respect .gitignore, .ignore, etc.
   .threads(concurrency); // Core: Enable multithreading with one line!
```
107-116: Validate path component extraction edge cases.

The `unwrap_or` fallbacks for `parent_path` and `file_name` might produce unexpected results for edge cases (root-level files, special paths). Consider validating that these components exist before proceeding. Add validation or explicit error handling:

```rust
let parent_path = relative_path
  .parent()
  .map(|p| p.to_string_lossy().to_string())
  .unwrap_or_else(|| ".".to_string());
let name = relative_path
  .file_name()
  .map(|n| n.to_string_lossy().to_string())
  .ok_or_else(|| /* handle missing filename */)?;
```
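The edge cases in question can be observed directly; a small standalone demo of `Path::parent`/`Path::file_name` behavior (the `split_path` helper is illustrative, not the crate's code):

```rust
use std::path::Path;

// parent() falls back to "." and file_name() is surfaced as an Option,
// making both edge cases explicit instead of silently defaulting.
fn split_path(p: &Path) -> (String, Option<String>) {
    let parent = p
        .parent()
        .map(|p| p.to_string_lossy().to_string())
        .unwrap_or_else(|| ".".to_string());
    let name = p.file_name().map(|n| n.to_string_lossy().to_string());
    (parent, name)
}

fn main() {
    // Top-level relative file: parent() is Some("") (empty), not None.
    assert_eq!(
        split_path(Path::new("file.txt")),
        ("".to_string(), Some("file.txt".to_string()))
    );
    // The root has neither a parent nor a file name, so both fallbacks trigger.
    assert_eq!(split_path(Path::new("/")), (".".to_string(), None));
    // ".." also reports no file_name.
    assert_eq!(split_path(Path::new("..")).1, None);
}
```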
134-136: Handle or log walk errors.

Silently ignoring errors during filesystem traversal can hide permission issues, broken symlinks, or other problems. Consider logging these errors or collecting them for the caller. At minimum, add a comment explaining why errors are ignored:

```diff
 Err(_) => {
-  // Handle errors or ignore permission errors
+  // Ignore permission errors and inaccessible paths to continue traversal
 }
```

Or collect errors for optional reporting:

```rust
Err(e) => {
  // Log or collect error for debugging
  eprintln!("Warning: Failed to access path: {}", e);
}
```
143-152: Improve error messages for lock failures.

The generic "Lock error" and "Mutex error" messages don't provide useful debugging information. These errors typically indicate a logic bug rather than a runtime failure. Enhance error messages:

```diff
 let final_results = Arc::try_unwrap(result_dirents)
-  .map_err(|_| Error::from_reason("Lock error"))?
+  .map_err(|_| Error::from_reason("Failed to unwrap Arc: reference count > 1"))?
   .into_inner()
-  .map_err(|_| Error::from_reason("Mutex error"))?;
+  .map_err(|e| Error::from_reason(format!("Failed to acquire mutex lock: {}", e)))?;
```
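A minimal reproduction of the collect-then-unwrap pattern shows when each of these two failure paths can actually fire (the `collect` function is illustrative, not the crate's code):

```rust
use std::sync::{Arc, Mutex};

// Arc::try_unwrap only succeeds once every clone has been dropped;
// Mutex::into_inner only fails if the mutex was poisoned by a panic.
fn collect() -> Result<Vec<u32>, String> {
    let results = Arc::new(Mutex::new(Vec::new()));
    {
        let worker = Arc::clone(&results);
        worker
            .lock()
            .map_err(|e| format!("Failed to acquire mutex lock: {}", e))?
            .push(42);
        // `worker` is dropped here, so the strong count returns to 1.
    }
    Arc::try_unwrap(results)
        .map_err(|_| "Failed to unwrap Arc: reference count > 1".to_string())?
        .into_inner()
        .map_err(|e| format!("Failed to acquire mutex lock: {}", e))
}

fn main() {
    assert_eq!(collect().unwrap(), vec![42]);
}
```

Since `try_unwrap` failing means a worker clone is still alive (a logic bug), a descriptive message is exactly what a debugger wants to see.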
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (15)
- .github/ISSUE_TEMPLATE.md (1 hunks)
- .husky/pre-commit (0 hunks)
- Cargo.toml (1 hunks)
- PR_TEMPLATE.md (0 hunks)
- __test__/glob.spec.ts (1 hunks)
- __test__/readdir.spec.ts (2 hunks)
- benchmark/glob.ts (1 hunks)
- index.d.ts (1 hunks)
- index.js (1 hunks)
- package.json (2 hunks)
- src/glob.rs (1 hunks)
- src/lib.rs (1 hunks)
- src/readdir.rs (3 hunks)
- src/types.rs (1 hunks)
- src/utils.rs (1 hunks)
💤 Files with no reviewable changes (2)
- PR_TEMPLATE.md
- .husky/pre-commit
🧰 Additional context used
🧬 Code graph analysis (7)
src/lib.rs (3)
- src/glob.rs (1): glob (177-179)
- src/readdir.rs (1): readdir (179-181)
- src/rm.rs (1): rm (122-124)

benchmark/glob.ts (2)
- benchmark/bench.ts (1): runBenchmarks (7-39)
- benchmark/rm.ts (4): runGroup (32-85), run (87-95), dir (88-88), dir (89-89)

index.d.ts (1)
- src/glob.rs (1): glob (177-179)

index.js (1)
- benchmark/readdir.ts (3): readdirSync (25-25), readdirSync (19-19), readdirSync (31-31)

src/glob.rs (4)
- src/utils.rs (1): get_file_type_id (4-22)
- src/types.rs (1): path (55-57)
- index.d.ts (1): GlobOptions (18-24)
- src/readdir.rs (2): compute (170-172), resolve (173-175)

__test__/glob.spec.ts (1)
- src/glob.rs (1): glob (177-179)

src/readdir.rs (1)
- src/utils.rs (1): get_file_type_id (4-22)
🪛 LanguageTool
.github/ISSUE_TEMPLATE.md
[style] ~10-~10: Consider using a different verb for a more formal wording.
Context: ...] 🐛 Bug fix (non-breaking change which fixes an issue) - [ ]
(FIX_RESOLVE)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: stable - x86_64-pc-windows-msvc - node@22
🔇 Additional comments (11)
.github/ISSUE_TEMPLATE.md (1)
1-35: Template structure and guidance are well-designed.

The issue template provides comprehensive sections with clear categorization of change types, helpful placeholder instructions, and an actionable checklist. The emoji indicators enhance visual clarity and user engagement. Users can easily identify which type of change they're submitting and provide necessary context.
This is a solid replacement for the PR template, though you may want to verify that the title suggestion `[FEAT/BUG]` adequately covers all change types listed (e.g., performance improvements, refactoring, documentation updates).

package.json (2)
110-110: Verify the packageManager constraint change.

The packageManager field was changed from an exact version constraint to a range (`pnpm@^9 || pnpm@^10`). Please confirm:
- Is this intentional to support both pnpm 9 and 10?
- If this was unintentional, what was the original constraint?
67-68: Both `fast-glob` and `glob` are correctly placed as devDependencies.

The `glob` and `globSync` functions exported by this package are Rust native bindings (not JavaScript wrappers around npm packages). The npm `fast-glob` and `glob` packages are used exclusively in benchmarking (benchmark/glob.ts) to compare performance against native implementations. Consumers using the public API will not encounter runtime errors, as they receive the Rust-compiled bindings directly.

Likely an incorrect or invalid review comment.
index.js (1)
575-577: LGTM!

The new exports (Dirent, glob, globSync) follow the existing pattern and are properly exposed from the native binding.
src/lib.rs (1)
13-23: LGTM!

The module additions (glob, types, utils) follow the existing pattern. The selective re-export of `glob::*` and `types::*` while keeping `utils` internal is appropriate for encapsulation.

__test__/readdir.spec.ts (1)
28-46: LGTM!

The test updates correctly reflect the Dirent API change from properties to methods (`isDirectory()`, `isFile()`), aligning with Node.js fs.Dirent conventions.

src/readdir.rs (1)
1-2: LGTM!

The refactor to use the external `Dirent` from `crate::types` and the `get_file_type_id` utility is correctly implemented. The usage is consistent across both single-directory and recursive paths.

Also applies to: 86-86, 125-125
Cargo.toml (1)
12-12: No action required; version 0.4.25 is the latest release of the `ignore` crate with no known security advisories.

The dependency is appropriately specified and current as of October 2025.
index.d.ts (1)
3-14: LGTM: Clean Dirent API.

The Dirent class API is well-designed with readonly properties and comprehensive file-type predicate methods that align with the Rust implementation.
src/glob.rs (2)
9-17: LGTM: GlobOptions structure is correct.

The options structure properly mirrors the TypeScript interface definition and uses appropriate Option types for all fields.
43-51: LGTM: Exclude pattern handling is correct.

The approach of prepending "!" to exclude patterns aligns with the `ignore` crate's override syntax.
```diff
@@ -0,0 +1,35 @@
+---
+name: Feature Request / Bug Fix
+about: Common template for pr and bugfix
```
Fix inconsistent capitalization and phrasing in the front matter.
The about field uses inconsistent capitalization and abbreviated terms. Standardize to match the title field format.
Apply this diff:
```diff
-about: Common template for pr and bugfix
+about: Common template for feature requests and bug fixes
```
```json
"glob": "^13.0.0",
"husky": "^9.1.7",
"lint-staged": "^16.1.6",
"mitata": "^1.0.34",
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Find benchmark directory structure and files
find . -type d -name benchmark -o -name benchmarks 2>/dev/null | head -20
```

Repository: CoderSerio/hyper-fs
Length of output: 74
🏁 Script executed:

```shell
# Search for tinybench and mitata imports across the codebase
echo "=== tinybench usage ==="
rg -i "tinybench" --type ts --type js --type json 2>/dev/null | head -20
echo ""
echo "=== mitata usage ==="
rg -i "mitata" --type ts --type js --type json 2>/dev/null | head -20
```

Repository: CoderSerio/hyper-fs
Length of output: 358
🏁 Script executed:

```shell
# List all benchmark files
ls -la benchmark/
echo ""
echo "=== Check rm.ts imports ==="
head -20 benchmark/rm.ts
echo ""
echo "=== Confirm tinybench not imported anywhere ==="
rg "from ['\"]\s*tinybench" --type ts --type js 2>/dev/null || echo "No tinybench imports found"
```

Repository: CoderSerio/hyper-fs
Length of output: 1240
Remove tinybench from devDependencies—only mitata is used.
The codebase uses only mitata for benchmarking (imported in benchmark/glob.ts and benchmark/readdir.ts). tinybench is not imported or referenced anywhere and should be removed from package.json.
```rust
} else if cfg!(unix) && ft.is_block_device() {
  4
} else if cfg!(unix) && ft.is_char_device() {
  5
} else if cfg!(unix) && ft.is_fifo() {
  6
} else if cfg!(unix) && ft.is_socket() {
  7
```
Fix Unix-specific code to compile on all platforms.
The runtime cfg!(unix) check won't prevent compilation errors on non-Unix platforms. When building on Windows, the compiler will fail because is_block_device(), is_char_device(), etc., don't exist (since FileTypeExt is not in scope).
Use compile-time `#[cfg(unix)]` gating instead. Note that an attribute cannot be attached to an `else if` branch, so the Unix-only checks are best moved into a cfg-gated helper; a sketch under that assumption (the `unix_file_type_id` name is illustrative):

```rust
} else if ft.is_symlink() {
  3
} else {
  // Delegate the platform-specific cases to a cfg-gated helper.
  unix_file_type_id(&ft)
}

// Unix-only branches compile only on Unix targets, where FileTypeExt exists.
#[cfg(unix)]
fn unix_file_type_id(ft: &std::fs::FileType) -> u8 {
  use std::os::unix::fs::FileTypeExt;
  if ft.is_block_device() {
    4
  } else if ft.is_char_device() {
    5
  } else if ft.is_fifo() {
    6
  } else if ft.is_socket() {
    7
  } else {
    0
  }
}

// On non-Unix targets these special file types don't exist.
#[cfg(not(unix))]
fn unix_file_type_id(_ft: &std::fs::FileType) -> u8 {
  0
}
```

Committable suggestion skipped: line range outside the PR's diff.
Actionable comments posted: 13
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
README.md (1)
44-52: ⚠️ Potential issue | 🟡 Minor

Stale return type for `readdir` — `isDir: boolean` should reflect the current `Dirent` class.

The documented return type shows `isDir: boolean` as a property, but the actual `Dirent` class (in `index.d.ts`) uses method predicates like `isDirectory(): boolean`, `isFile(): boolean`, etc., and has `name` and `parentPath` fields. Update the README to match.

Suggested update:

```diff
 **Return Type**:
-string[]
-  | {
-      name: string,       // ✅
-      parentPath: string, // ✅
-      isDir: boolean      // ✅
-    }[]
+string[] | Dirent[]
```
- Where `Dirent` has `name: string`, `parentPath: string`, and predicate methods `isFile(): boolean`, `isDirectory(): boolean`, `isSymbolicLink(): boolean`, etc.

🟡 Minor comments (16)
CONTRIBUTING.md-300-303 (1)
300-303: ⚠️ Potential issue | 🟡 Minor

Incorrect error-handling test example — `symlinkSync` with a non-existent target does not throw on POSIX.

On Linux/macOS, `symlink(2)` does not check whether the target path exists; creating a symlink to a missing path silently succeeds (dangling symlink). The test as written would fail. A correct ENOENT case for `symlinkSync` is when the destination directory doesn't exist, e.g.:

```ts
test('symlinkSync: should throw ENOENT when destination dir is missing', (t) => {
  t.throws(
    () => symlinkSync('/some/target', '/no/such/dir/link'),
    { message: /ENOENT/ }
  )
})
```
CONTRIBUTING.md-11-11 (1)
11-11: ⚠️ Potential issue | 🟡 Minor

Broken TOC anchor — `#编写-rust-实现` has no matching heading.

No heading in the document generates the anchor `编写-rust-实现`. The Rust implementation content lives under `### 第二步:创建 Rust 源文件` (inside `## 开发一个新 API 的完整流程`), so there is no standalone target for this link. Either remove this TOC entry or update the fragment to point to the existing section.

✏️ Proposed fix (option: repoint to the parent section)

```diff
-- [编写 Rust 实现](`#编写-rust-实现`)
+- [编写 Rust 实现](`#开发一个新-api-的完整流程`)
```
CONTRIBUTING.md-69-69 (1)
69-69: ⚠️ Potential issue | 🟡 Minor

Fenced code block is missing a language identifier (MD040).

The directory tree block should declare a language to satisfy markdownlint and improve tooling support.

✏️ Proposed fix

````diff
-```
+```text
 hyper-fs/
````
.github/ISSUE_TEMPLATE.md-27-39 (1)
27-39: ⚠️ Potential issue | 🟡 Minor

Minor text quality issues.

- Line 27: The `## Related Issues:` heading has a trailing colon that no other heading uses; remove it for consistency.
- Line 39: "passed all the test sample" → "passed all tests" (grammatical).

✏️ Proposed fixes

```diff
-## Related Issues:
+## Related Issues
```

```diff
-- [ ] I have run `pnpm test` and passed all the test sample
+- [ ] I have run `pnpm test` and all tests pass
```
src/truncate.rs-1-54 (1)
1-54: ⚠️ Potential issue | 🟡 Minor

CI failure: `cargo fmt` formatting issues detected in this file.

The pipeline reports that `cargo fmt -- --check` found formatting changes needed in `src/truncate.rs`. Please run `cargo fmt` to fix the code style before merging.
src/mkdir.rs-1-5 (1)
1-5: ⚠️ Potential issue | 🟡 Minor

CI failure: `cargo fmt` check failed for this file.

Please run `cargo fmt` to fix the code style.
src/read_file.rs-1-5 (1)
1-5: ⚠️ Potential issue | 🟡 Minor

CI failure: `cargo fmt` check failed for this file.

The pipeline reports formatting changes are needed. Please run `cargo fmt` to fix the code style.
src/stat.rs-100-105 (1)
100-105: ⚠️ Potential issue | 🟡 Minor

All metadata errors are mapped to ENOENT, masking permission-denied and other errors.

Same pattern seen in `src/unlink.rs`. If the file exists but is inaccessible, this produces a misleading "no such file or directory" message. Check `e.kind()` to differentiate error types.

Proposed fix:

```diff
-  let meta = meta.map_err(|_| {
-    Error::from_reason(format!(
-      "ENOENT: no such file or directory, stat '{}'",
-      path.to_string_lossy()
-    ))
+  let meta = meta.map_err(|e| {
+    if e.kind() == std::io::ErrorKind::NotFound {
+      Error::from_reason(format!(
+        "ENOENT: no such file or directory, stat '{}'",
+        path.to_string_lossy()
+      ))
+    } else if e.kind() == std::io::ErrorKind::PermissionDenied {
+      Error::from_reason(format!(
+        "EACCES: permission denied, stat '{}'",
+        path.to_string_lossy()
+      ))
+    } else {
+      Error::from_reason(format!("{}", e))
+    }
   })?;
```

src/unlink.rs-11-16 (1)
11-16: ⚠️ Potential issue | 🟡 Minor

`symlink_metadata` error is always mapped to ENOENT, masking other error kinds.

If the actual error is permission denied or another I/O error (not just "not found"), this will still produce an ENOENT message, which is misleading. Consider checking `e.kind()` similar to other modules (e.g., `read_file_impl`).

Proposed fix:

```diff
-  let meta = fs::symlink_metadata(path).map_err(|_| {
-    Error::from_reason(format!(
-      "ENOENT: no such file or directory, unlink '{}'",
-      path.to_string_lossy()
-    ))
+  let meta = fs::symlink_metadata(path).map_err(|e| {
+    if e.kind() == std::io::ErrorKind::NotFound {
+      Error::from_reason(format!(
+        "ENOENT: no such file or directory, unlink '{}'",
+        path.to_string_lossy()
+      ))
+    } else if e.kind() == std::io::ErrorKind::PermissionDenied {
+      Error::from_reason(format!(
+        "EACCES: permission denied, unlink '{}'",
+        path.to_string_lossy()
+      ))
+    } else {
+      Error::from_reason(format!("{}", e))
+    }
   })?;
```
src/write_file.rs-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor

Fix formatting: CI pipeline reports `cargo fmt` failure.

The pipeline indicates formatting changes are needed in this file. Please run `cargo fmt` before merging.
src/access.rs-31-69 (1)
31-69: ⚠️ Potential issue | 🟡 Minor

Supplementary groups are not checked — permission may be denied incorrectly.

The implementation only checks the process's primary GID (`libc::getgid()`), but POSIX `access(2)` also considers supplementary groups. A user who is in a supplementary group that has access will be incorrectly denied here. This is a known limitation of reimplementing permission checks in userspace. For full correctness, consider using `libc::access(path, mode)` directly on Unix.
index.d.ts-38-41 (1)
38-41: ⚠️ Potential issue | 🟡 Minor

Return `Date` objects from `atime`, `mtime`, `ctime`, and `birthtime` getters for Node.js compatibility.

The current implementation returns `number` for these getters (in `src/types.rs`, lines 139-157), but Node.js `fs.Stats` returns `Date` instances. The `*Ms` fields correctly return numbers (milliseconds), but the non-`Ms` getters should convert these milliseconds to JavaScript `Date` objects.
__test__/chmod.spec.ts-30-32 (1)
30-32: ⚠️ Potential issue | 🟡 Minor

Hardcoded `/tmp/` in ENOENT test is inconsistent and non-portable.

The two permission tests above it skip on Windows, but this test doesn't — and it still hardcodes a Unix-only path. Use `join(tmpdir(), ...)` (already imported) and either add a Windows guard or rely on `tmpdir()` to produce a valid-but-nonexistent path.

🛡️ Proposed fix

```diff
 test('chmodSync: should throw on non-existent file', (t) => {
-  t.throws(() => chmodSync('/tmp/no-such-file-' + Date.now(), 0o644), { message: /ENOENT/ })
+  t.throws(() => chmodSync(join(tmpdir(), 'no-such-file-' + Date.now()), 0o644), { message: /ENOENT/ })
 })
```
__test__/truncate.spec.ts-30-31 (1)
30-31: ⚠️ Potential issue | 🟡 Minor

Hardcoded `/tmp/` breaks portability on Windows.

Every other path in this file is constructed with `tmpdir()`, but this path is hardcoded to a Unix-specific root. Use `join(tmpdir(), ...)` for consistency and portability.

🛡️ Proposed fix

```diff
-t.throws(() => truncateSync('/tmp/no-such-file-' + Date.now()), { message: /ENOENT/ })
+t.throws(() => truncateSync(join(tmpdir(), 'no-such-file-' + Date.now())), { message: /ENOENT/ })
```
Verify each finding against the current code and only fix it if needed. In `@__test__/truncate.spec.ts` around lines 30 - 31, The test uses a hardcoded Unix path in the call to truncateSync which breaks on Windows; update the test to build the path with tmpdir() and path.join (use join(tmpdir(), 'no-such-file-' + Date.now())) so the nonexistent-path assertion still checks ENOENT portably; modify the truncateSync call in the test to use join and import or reference join/tmpdir() as used elsewhere in the file.__test__/stat.spec.ts-47-50 (1)
47-50: ⚠️ Potential issue | 🟡 Minor

`lstatSync` test doesn't actually validate symlink-specific behavior.

`./package.json` is a regular file — `lstat` and `stat` return identical results for it. The "without following symlinks" contract is never exercised. Consider creating a temporary symlink in the test (pointing to `package.json`) and asserting that `lstatSync` on the link returns `isSymbolicLink() === true` while `statSync` returns `false`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/stat.spec.ts` around lines 47 - 50, The test for lstatSync currently uses a regular file and doesn't exercise symlink behavior; update the test 'lstatSync: should return stats without following symlinks' to create a temporary symlink (pointing to './package.json'), call lstatSync on the symlink and assert s.isSymbolicLink() === true, then call statSync on the same path and assert isSymbolicLink() === false, and finally remove the created symlink in a teardown/cleanup step to avoid leaving artifacts.src/rmdir.rs-16-17 (1)
16-17: ⚠️ Potential issue | 🟡 Minor

`AlreadyExists` is not the correct `ErrorKind` for ENOTEMPTY — it maps to `EEXIST`.

In stable Rust, `std::fs::remove_dir` on a non-empty directory yields `ErrorKind::Other` (ENOTEMPTY has no dedicated `ErrorKind` variant). The string check `contains("not empty")` already handles this correctly; the `|| e.kind() == std::io::ErrorKind::AlreadyExists` branch is semantically wrong (EEXIST ≠ ENOTEMPTY) and dead code. Remove it.

🐛 Proposed fix

```diff
-        } else if e.to_string().contains("not empty")
-          || e.kind() == std::io::ErrorKind::AlreadyExists
-        {
+        } else if e.to_string().contains("not empty") {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/rmdir.rs` around lines 16 - 17, Remove the incorrect AlreadyExists check in the error handling branch that detects ENOTEMPTY: replace the conditional that reads `else if e.to_string().contains("not empty") || e.kind() == std::io::ErrorKind::AlreadyExists` with a single check using the string test only (e.to_string().contains("not empty")), since `std::fs::remove_dir` on a non-empty directory maps to ErrorKind::Other and the AlreadyExists variant (EEXIST) is incorrect and dead code; update the branch around the `e` error handling where `remove_dir` is called in src/rmdir.rs to only rely on the "not empty" string check.🧹 Nitpick comments (21)
CONTRIBUTING.md (1)
362-364: Running `npx ava` directly is inconsistent with the "always use `package.json` scripts" guidance (line 51).

The guide recommends against running tools outside pnpm scripts. Consider replacing with an equivalent pnpm filter or noting the distinction explicitly.

✏️ Proposed fix

```diff
-npx ava __test__/stat.spec.ts  # run only the stat tests
+pnpm exec ava __test__/stat.spec.ts  # run only the stat tests
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CONTRIBUTING.md` around lines 362 - 364, Replace the direct npx invocation with the repository-standard pnpm script usage: change the example command "npx ava __test__/stat.spec.ts" to either "pnpm test -- __test__/stat.spec.ts" (to pass the spec path through the existing test script) or add a package.json script like "test:stat": "ava __test__/stat.spec.ts" and show "pnpm run test:stat" in the CONTRIBUTING.md; update the line containing "npx ava __test__/stat.spec.ts" accordingly.src/truncate.rs (1)
7-24: Silently clamping negative `len` to 0 diverges from Node.js behavior.

Node.js `fs.truncate` throws `ERR_OUT_OF_RANGE` for negative lengths. Here, negative values are silently clamped to 0 via `.max(0)`. If Node.js compatibility is a goal, consider throwing an error for negative lengths instead.

♻️ Suggested approach

```diff
 fn truncate_impl(path_str: String, len: Option<i64>) -> Result<()> {
   let path = Path::new(&path_str);
-  let len = len.unwrap_or(0).max(0) as u64;
+  let raw_len = len.unwrap_or(0);
+  if raw_len < 0 {
+    return Err(Error::from_reason(format!(
+      "ERR_OUT_OF_RANGE: The value of \"len\" is out of range. It must be >= 0. Received {}",
+      raw_len
+    )));
+  }
+  let len = raw_len as u64;
```
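The suggested range check can be exercised standalone. The sketch below uses a hypothetical helper, `validate_len`, which is not part of the crate; it only illustrates rejecting negative lengths (mirroring Node.js's `ERR_OUT_OF_RANGE`) instead of clamping them:

```rust
// Hypothetical helper illustrating the suggested validation; not the crate's API.
fn validate_len(len: Option<i64>) -> Result<u64, String> {
    let raw = len.unwrap_or(0);
    if raw < 0 {
        // Reject instead of silently clamping with .max(0)
        return Err(format!(
            "ERR_OUT_OF_RANGE: The value of \"len\" is out of range. It must be >= 0. Received {}",
            raw
        ));
    }
    Ok(raw as u64)
}

fn main() {
    assert_eq!(validate_len(None), Ok(0));
    assert_eq!(validate_len(Some(10)), Ok(10));
    assert!(validate_len(Some(-1)).is_err());
}
```

A caller that previously got a silent zero-length truncate for `Some(-1)` now receives an error it can surface to JavaScript.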
Verify each finding against the current code and only fix it if needed. In `@src/truncate.rs` around lines 7 - 24, The truncate_impl function currently clamps negative len to 0; change it to detect when len.is_some() and the provided i64 value is negative and return an error instead of clamping. Specifically, in truncate_impl (parameters path_str: String, len: Option<i64>) validate the Option<i64> up front: if let Some(n) = len and n < 0 then return Err(Error::from_reason("ERR_OUT_OF_RANGE: invalid length")) (or a matching Node.js-style message), otherwise proceed converting the non-negative value to u64 and continuing with OpenOptions::new().write(true).open(path) and file.set_len(len_u64).src/chown.rs (2)
30-39: Non-Unix `chown` silently no-ops — consider documenting or warning.

On non-Unix platforms, the function only validates path existence and silently succeeds without changing ownership. This is a reasonable compromise since Windows lacks uid/gid semantics, but callers should be aware. Consider adding a doc comment or logging a warning.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/chown.rs` around lines 30 - 39, The non-Unix branch in chown.rs (the #[cfg(not(unix))] block that references uid, gid and checks path.exists()) currently silently no-ops after validating the path; update the implementation to make this behavior explicit by either adding a clear doc comment on the chown function explaining that ownership changes are not performed on non-Unix platforms or emitting a warning/log when callers request a uid/gid change on non-Unix systems (e.g., using the existing logger or a new warning path), and ensure the message references the attempted uid/gid and the path for clarity.
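A minimal sketch of making the non-Unix no-op explicit, assuming the cfg-gated structure described above (`chown_impl` and the warning text are illustrative, not the crate's actual code; the Unix branch elides the real `libc::chown` call):

```rust
use std::path::Path;

#[cfg(not(unix))]
fn chown_impl(path: &Path, _uid: u32, _gid: u32) -> Result<(), String> {
    if !path.exists() {
        return Err(format!("ENOENT: no such file or directory, chown '{}'", path.display()));
    }
    // uid/gid are intentionally ignored: Windows has no uid/gid semantics.
    eprintln!("warning: chown is a no-op on this platform ('{}')", path.display());
    Ok(())
}

#[cfg(unix)]
fn chown_impl(path: &Path, _uid: u32, _gid: u32) -> Result<(), String> {
    if !path.exists() {
        return Err(format!("ENOENT: no such file or directory, chown '{}'", path.display()));
    }
    // Real code would call libc::chown here; elided in this sketch.
    Ok(())
}

fn main() {
    // Both branches still validate the path before deciding what to do.
    assert!(chown_impl(Path::new("definitely-missing-path"), 0, 0).is_err());
}
```

The warning (or a doc comment saying the same thing) makes the platform difference visible to callers instead of leaving it implicit.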
15-27: All non-`NotFound` errors are mapped to `EPERM`, losing the original errno.

While `EPERM` is the most common `chown` failure, other errors like `EACCES`, `EIO`, or `ELOOP` would be misreported. Consider preserving the original error:

♻️ Suggested improvement

```diff
     if e.kind() == std::io::ErrorKind::NotFound {
       return Err(Error::from_reason(format!(
         "ENOENT: no such file or directory, chown '{}'",
         path.to_string_lossy()
       )));
     }
-    return Err(Error::from_reason(format!(
-      "EPERM: operation not permitted, chown '{}'",
-      path.to_string_lossy()
-    )));
+    return Err(Error::from_reason(format!(
+      "{}, chown '{}'",
+      e, path.to_string_lossy()
+    )));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/chown.rs` around lines 15 - 27, The current error branch in chown.rs maps every non-NotFound std::io::Error to a generic "EPERM" message; update the ret != 0 handling to preserve and surface the original errno: keep the NotFound branch as-is, but for other errors obtain the raw errno (e.raw_os_error()), convert it to a readable name/message (e.g., via nix::errno::Errno::from_i32 or e.to_string()), and include that info in the Error::from_reason string instead of hardcoding "EPERM" while still showing the path via path.to_string_lossy() so the real error (EACCES/EIO/ELOOP/etc.) is not lost.__test__/unlink.spec.ts (1)
7-13: Temp directories are never cleaned up.

`createTempFile` creates a temp directory but only the file is unlinked by the tests. The parent directory persists. Consider adding cleanup in a teardown or using `t.teardown()`.

♻️ Proposed improvement

```diff
 function createTempFile(): string {
   const dir = join(tmpdir(), `hyper-fs-test-unlink-${Date.now()}-${Math.random().toString(36).slice(2)}`)
   mkdirSync(dir, { recursive: true })
   const file = join(dir, 'test.txt')
   writeFileSync(file, 'hello')
-  return file
+  return file // caller should clean up `dir` after test
 }
```

In each test, add cleanup:

```ts
t.teardown(() => nodeRmdirSync(dirname(file), { recursive: true }))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/unlink.spec.ts` around lines 7 - 13, createTempFile currently creates a temp directory but tests only unlink the file, leaving the parent dir behind; update tests that call createTempFile to register a teardown that removes the temp directory (e.g., use t.teardown to call nodeRmdirSync(dirname(file), { recursive: true })) or modify createTempFile to return the directory path so callers can clean it up; ensure you reference the createTempFile helper and call nodeRmdirSync (or equivalent recursive remove) on dirname(file) during teardown to fully clean up temp dirs.src/types.rs (1)
12-13: Consider replacing magic numbers with named constants for `file_type`.

The comment on Line 12 documents the mapping, but the predicates in Lines 19–51 use raw integer literals. Defining constants (e.g., `const FILE_TYPE_FILE: u8 = 1;`) would improve readability and reduce the risk of typos.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/types.rs` around lines 12 - 13, Introduce named constants for the file type mapping and replace raw literals in predicates: define u8 constants such as FILE_TYPE_FILE = 1, FILE_TYPE_DIR = 2, FILE_TYPE_SYMLINK = 3, FILE_TYPE_BLOCK = 4, FILE_TYPE_CHAR = 5, FILE_TYPE_FIFO = 6, FILE_TYPE_SOCKET = 7 (and FILE_TYPE_UNKNOWN = 0) near the struct that contains the file_type field, then update all predicate checks that reference numeric literals (the predicates that inspect file_type) to use these constants instead of magic numbers to improve readability and prevent typos.__test__/mkdir.spec.ts (2)
11-16: Consider using `t.teardown()` for reliable cleanup.

If an assertion fails before the `rmdirSync` call, the temp directory is leaked. AVA's `t.teardown()` ensures cleanup runs regardless of test outcome. This applies to all tests in this file.

Example for one test

```diff
 test('mkdirSync: should create a directory', (t) => {
   const dir = tmpPath('basic')
+  t.teardown(() => { try { rmdirSync(dir) } catch {} })
   mkdirSync(dir)
   t.true(existsSync(dir))
-  rmdirSync(dir)
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/mkdir.spec.ts` around lines 11 - 16, The test currently creates a temp directory with tmpPath('basic') and removes it with rmdirSync(dir) at the end of the test, which can leak the directory if an assertion fails; replace the manual cleanup with AVA's t.teardown() by registering a teardown callback that calls rmdirSync(dir) (or a safe recursive removal) immediately after creating dir so that mkdirSync, tmpPath, and the teardown guarantee cleanup regardless of test outcome.
25-35: Missing assertion on the return value of recursive `mkdirSync`.

Per Node.js behavior (and `mkdir_impl` in `src/mkdir.rs`), `mkdirSync` with `{ recursive: true }` should return the first directory path created (or `undefined` if it already existed). Consider asserting the return value to validate this behavior.

Example

```diff
 test('mkdirSync: recursive should create nested dirs', (t) => {
   const dir = tmpPath('recursive')
   const nested = join(dir, 'a', 'b', 'c')
-  mkdirSync(nested, { recursive: true })
+  const result = mkdirSync(nested, { recursive: true })
+  t.is(result, dir, 'should return the first created directory')
   t.true(existsSync(nested))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/mkdir.spec.ts` around lines 25 - 35, The test should assert mkdirSync's return value for recursive creation: when calling mkdirSync(nested, { recursive: true }) capture its return (e.g., const result = mkdirSync(nested, { recursive: true })) and assert it equals the first directory actually created (join(dir, 'a')) per mkdir_impl behavior; optionally also assert that calling mkdirSync again returns undefined when the path already exists. Ensure you reference mkdirSync and mkdir_impl in the change.reference/internal/fs/watchers.js (1)
1-455: This appears to be a vendored copy of Node.js internal `fs/watchers` module.

Since this is under `reference/`, the code is presumably kept for reference rather than executed within this project. Consider adding a header comment noting the Node.js version this was sourced from, to help future contributors understand the baseline and track upstream changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@reference/internal/fs/watchers.js` around lines 1 - 455, This file is a vendored copy of Node.js's internal fs/watchers implementation (exports include createIgnoreMatcher, FSWatcher, StatWatcher, kFSWatchStart, kFSStatWatcherStart, watch), so add a clear header comment at the very top of the file indicating the original Node.js version (and optionally the upstream commit or URL) that this was sourced from and that it is kept for reference only; place the comment above the 'use strict' line and ensure it mentions that the file is vendored, the Node.js version/tag, and any notes about updating/tracking upstream changes.src/read_file.rs (1)
40-51: Only `utf8`/`utf-8` encoding is supported; all others are rejected.

Node.js `readFile` supports many encodings (`ascii`, `latin1`, `base64`, `hex`, etc.). Currently, passing any of these will produce `"Unknown encoding: ..."`. If this is intentional for an initial release, consider documenting the limitation. Otherwise, at minimum `ascii`/`latin1` are common and relatively easy to add.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/read_file.rs` around lines 40 - 51, The match on opts.encoding.as_deref() currently only handles "utf8"/"utf-8" and rejects other encodings; extend it to support at least "ascii" and "latin1"/"latin-1": for "ascii" check that all bytes in data are <= 0x7F and return Either::A(String) or map an Error::from_reason on invalid bytes, and for "latin1"/"latin-1" convert bytes to a String by mapping each byte to a char (e.g. data.iter().map(|&b| b as char).collect()) and return Either::A; keep the existing String::from_utf8 branch for utf8, preserve the Unknown encoding error via Error::from_reason for others, and leave the None branch returning Either::B(Buffer::from(data)) unchanged.src/mkdir.rs (1)
21-28: TOCTOU race: `path.exists()` check before `create_dir_all` can produce incorrect return values.

Between the `path.exists()` check (Line 27) and the actual `create_dir_all` call (Line 42), another process could create or remove the directory. The early return of `Ok(None)` when the path exists could be stale. This is a minor concern given the FS-API context, but worth noting — `create_dir_all` itself is idempotent so the real risk is only the return value accuracy, not a failure.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mkdir.rs` around lines 21 - 28, Remove the TOCTOU-prone early check of path.exists() inside the recursive branch: do not return Ok(None) before calling std::fs::create_dir_all; instead call create_dir_all unconditionally and then determine the return value afterward (e.g., check path.exists() after create_dir_all completes) so the decision reflects the post-operation state; update the logic around the recursive branch that currently uses path.exists() and std::fs::create_dir_all to make the existence check occur after create_dir_all completes.src/write_file.rs (1)
87-94: Unnecessary clone of `self.data` inside `compute`.

`self.data` is a `Vec<u8>` that is only used once during `compute`. You can use `std::mem::take` to move it out instead of cloning, avoiding an allocation for potentially large files. The same applies to `AppendFileTask::compute` at Line 154.

Proposed fix

```diff
 fn compute(&mut self) -> Result<Self::Output> {
-  let data_clone = self.data.clone();
+  let data = std::mem::take(&mut self.data);
   write_file_impl(
     self.path.clone(),
-    Either::B(Buffer::from(data_clone)),
+    Either::B(Buffer::from(data)),
     self.options.clone(),
   )
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/write_file.rs` around lines 87 - 94, The compute method is cloning self.data unnecessarily; replace cloning with moving the Vec<u8> out using std::mem::take (or std::mem::replace) so you pass ownership into write_file_impl without allocation—update compute in the current struct (the function named compute that calls write_file_impl with Either::B(Buffer::from(...))) to use let data = std::mem::take(&mut self.data) and pass that instead; apply the same change to AppendFileTask::compute (the compute implementation around Line 154) to avoid cloning there as well.__test__/rename.spec.ts (1)
31-41: Consider verifying file content in the async rename test for parity with sync.

The sync test (Line 23) verifies content preservation with `readFileSync`, but the async test does not. Minor gap but would strengthen the async test.

Suggested addition

```diff
   t.false(existsSync(src))
   t.true(existsSync(dest))
+  t.is(readFileSync(dest, 'utf8'), 'data')
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/rename.spec.ts` around lines 31 - 41, The async rename test "rename: async should rename a file" currently checks presence/absence of src/dest but not that file contents were preserved; update the test that calls rename(src, dest) to also read the destination file (use readFileSync or an async read) and assert its contents equal "data" (same check as the sync test) so the async test mirrors the sync test's content verification for functions rename, readFileSync, src and dest.__test__/realpath.spec.ts (1)
27-27: Redundant inner `await` in `throwsAsync` callback.

`t.throwsAsync` receives a function and awaits the promise it returns; the extra `await` inside the `async` arrow is a no-op.

♻️ Simplify the callback

```diff
-  await t.throwsAsync(async () => await realpath('./no-such-path'), { message: /ENOENT/ })
+  await t.throwsAsync(() => realpath('./no-such-path'), { message: /ENOENT/ })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/realpath.spec.ts` at line 27, The test callback passed to t.throwsAsync unnecessarily contains an inner await; simplify the async arrow by returning the promise from realpath directly (remove the redundant await inside the function passed to t.throwsAsync) so change the callback from async () => await realpath('./no-such-path') to async () => realpath('./no-such-path') (or simply () => realpath('./no-such-path')) while keeping the use of t.throwsAsync and the ENOENT assertion intact.__test__/write_file.spec.ts (1)
2-2: Use `node:fs` `readFileSync` to verify write results instead of the library's own `readFileSync`.

Verifying writes via the library under test creates a cross-dependency: a bug in `readFileSync` could cause write tests to fail or pass spuriously. Other tests in this suite (e.g., `truncate.spec.ts`, `rmdir.spec.ts`) already use native Node APIs for independent verification — this file should follow the same pattern.

♻️ Proposed fix

```diff
 import { writeFileSync, writeFile, readFileSync, appendFileSync, appendFile } from '../index.js'
-import { existsSync, unlinkSync, mkdirSync } from 'node:fs'
+import { mkdirSync, readFileSync as nodeReadFileSync } from 'node:fs'
```

Then replace all `readFileSync(file, ...)` calls used only for verification with `nodeReadFileSync(file, ...)`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/write_file.spec.ts` at line 2, The tests import the module's readFileSync and then use it to verify file writes, creating a cross-dependency; update the import line that currently brings in readFileSync from '../index.js' (alongside writeFileSync/writeFile/appendFile/appendFileSync) to instead import the native Node verifier — import readFileSync from 'node:fs' under an alias like nodeReadFileSync — and then replace all verification uses of readFileSync(file, ...) in this spec with nodeReadFileSync(file, ...) so verification relies on Node's fs rather than the library under test.__test__/read_file.spec.ts (1)
36-36: Redundant inner `await` in `throwsAsync` callback.

Same pattern as `realpath.spec.ts` — the inner `await` is unnecessary.

♻️ Simplify the callback

```diff
-  await t.throwsAsync(async () => await readFile('./no-such-file'), { message: /ENOENT/ })
+  await t.throwsAsync(() => readFile('./no-such-file'), { message: /ENOENT/ })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/read_file.spec.ts` at line 36, The test currently uses an unnecessary inner await inside the t.throwsAsync callback; update the throwsAsync invocation so the callback returns the promise directly (e.g., change async () => await readFile('./no-such-file') to () => readFile('./no-such-file') or async () => readFile('./no-such-file')) so t.throwsAsync receives the promise from readFile; reference the t.throwsAsync call and the readFile invocation in the test to locate and simplify the callback.__test__/copy_file.spec.ts (1)
37-37: Use a named constant instead of the magic number `1` for `COPYFILE_EXCL`.

The numeric `1` is not self-documenting. Use the Node constant to make the intent explicit.

♻️ Proposed fix

```diff
-  t.throws(() => copyFileSync(src, dest, 1), { message: /EEXIST/ })
+  t.throws(() => copyFileSync(src, dest, nodeFs.constants.COPYFILE_EXCL), { message: /EEXIST/ })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/copy_file.spec.ts` at line 37, Replace the magic number 1 used as the flags argument to copyFileSync with the Node constant COPYFILE_EXCL to make the intent explicit; update the test to call copyFileSync(src, dest, fs.constants.COPYFILE_EXCL) or import { COPYFILE_EXCL } from 'fs' and use COPYFILE_EXCL, ensuring the test still asserts the EEXIST error when the dest exists and adjusting any imports as needed.src/rename.rs (1)
37-40: `RenameTask` fields don't need to be `pub`.

The `Task` impl accesses them through `self` within the same module. Making them `pub` unnecessarily widens visibility. Same pattern applies to the analogous task structs in `chmod.rs`, `rmdir.rs`, and `copy_file.rs`.

♻️ Proposed refactor

```diff
-pub struct RenameTask {
-  pub old_path: String,
-  pub new_path: String,
-}
+pub struct RenameTask {
+  old_path: String,
+  new_path: String,
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/rename.rs` around lines 37 - 40, The fields on the RenameTask struct (old_path and new_path) are unnecessarily public; change "pub old_path: String" and "pub new_path: String" to private fields (remove the pub) in RenameTask and do the same for the equivalent structs in chmod.rs, rmdir.rs, and copy_file.rs, then rebuild and fix any external access points (prefer using the Task impl's self access or add accessor methods if truly needed) so visibility is as narrow as possible.index.js (1)
575-618: All property re-assignments are no-ops after `module.exports = nativeBinding`.

Line 574 sets `module.exports = nativeBinding`, making `module.exports` and `nativeBinding` the same object reference. Every subsequent `module.exports.foo = nativeBinding.foo` is therefore a self-assignment (`nativeBinding.foo = nativeBinding.foo`) with no effect. The entire block can be removed; consumers already access these bindings through the object set on line 574.

If the intent is to document the public API surface, an alternative is to destructure and re-export via an ESM wrapper or to rely solely on `index.d.ts`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@index.js` around lines 575 - 618, The block reassigning properties after the statement module.exports = nativeBinding is redundant because module.exports already references nativeBinding; remove the entire repeated property-assignment block (the lines starting with module.exports.Dirent ... through module.exports.writeFileSync) to avoid no-op self-assignments, or if you intended to explicitly list the public API, instead export a new object that destructures the specific symbols from nativeBinding before assigning to module.exports (refer to the existing module.exports = nativeBinding and the listed symbols like Dirent, Stats, access, readFileSync, writeFile, etc., to locate which props to keep or document).src/chmod.rs (1)
47-50: `ChmodTask` fields don't need to be `pub` — same as `RenameTask`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/chmod.rs` around lines 47 - 50, ChmodTask's struct fields (path and mode) should not be public like they are now; change pub path: String and pub mode: u32 to private fields (remove the pub) to match RenameTask's encapsulation, then update any external code that accessed ChmodTask.path or ChmodTask.mode to use the appropriate constructor or accessor methods (create getters or a new public constructor on ChmodTask if needed) so all usages compile.__test__/access.spec.ts (1)
4-6: Import access mode constants from `node:fs` rather than hardcoding.

These values match `node:fs` `constants` today but are undocumented magic numbers. If the library exposes its own constants in the future they should be tested against those; in the interim, import from `node:fs`.

♻️ Proposed refactor

```diff
 import test from 'ava'
 import { accessSync, access } from '../index.js'
+import { constants as fsConstants } from 'node:fs'

-const F_OK = 0
-const R_OK = 4
-const W_OK = 2
+const { F_OK, R_OK, W_OK } = fsConstants
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/access.spec.ts` around lines 4 - 6, Replace the hardcoded magic numbers for F_OK, R_OK, and W_OK in __test__/access.spec.ts with the official Node constants from node:fs; remove the const F_OK = 0 / R_OK = 4 / W_OK = 2 definitions and instead import the constants (e.g. via "import { constants as fsConstants } from 'node:fs'" or importing the named constants) and use fsConstants.F_OK, fsConstants.R_OK, and fsConstants.W_OK (or the named imports) wherever F_OK, R_OK, and W_OK are referenced in the tests.
```rust
let meta = std::fs::symlink_metadata(path).map_err(|_| {
  Error::from_reason(format!(
    "ENOENT: no such file or directory, access '{}'",
    path.to_string_lossy()
  ))
})?;
```
symlink_metadata does not follow symlinks — diverges from Node.js access() behavior.
Node.js access() delegates to the POSIX access(2) syscall, which follows symlinks (resolves to the target). Using symlink_metadata (equivalent to lstat) means this implementation checks the symlink itself. Consequences:
- A dangling symlink will pass `F_OK` here but fail in Node.js.
- Permission bits checked are those of the symlink, not the target.
Consider using std::fs::metadata (which follows symlinks) instead, or ideally call libc::access / libc::faccessat directly on Unix to match Node.js semantics exactly.
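The lstat-vs-stat divergence is easy to observe with std alone. The sketch below (Unix-only; the temp-path name is chosen here for illustration) creates a dangling symlink and shows that `symlink_metadata` succeeds on it while `metadata`, which follows the link the way POSIX `access(2)` does, fails:

```rust
#[cfg(unix)]
fn dangling_link_checks() -> std::io::Result<(bool, bool)> {
    use std::os::unix::fs::symlink;
    let dir = std::env::temp_dir().join("hyper-fs-access-demo");
    std::fs::create_dir_all(&dir)?;
    let link = dir.join("dangling");
    let _ = std::fs::remove_file(&link); // clean up any previous run
    symlink(dir.join("no-such-target"), &link)?;
    Ok((
        std::fs::symlink_metadata(&link).is_ok(), // lstat view: the link itself exists
        std::fs::metadata(&link).is_ok(),         // stat view: the target is missing
    ))
}

#[cfg(not(unix))]
fn dangling_link_checks() -> std::io::Result<(bool, bool)> {
    Ok((true, false)) // symlink creation works differently on Windows; skipped in this sketch
}

fn main() -> std::io::Result<()> {
    let (lstat_ok, stat_ok) = dangling_link_checks()?;
    assert!(lstat_ok, "symlink_metadata (lstat) sees the dangling link");
    assert!(!stat_ok, "metadata (stat) follows the link and fails");
    Ok(())
}
```

An implementation built on `symlink_metadata` would report the dangling link as accessible, which is exactly the divergence described above.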
Proposed fix (minimal)
```diff
-  let meta = std::fs::symlink_metadata(path).map_err(|_| {
+  let meta = std::fs::metadata(path).map_err(|_| {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-let meta = std::fs::symlink_metadata(path).map_err(|_| {
+let meta = std::fs::metadata(path).map_err(|_| {
   Error::from_reason(format!(
     "ENOENT: no such file or directory, access '{}'",
     path.to_string_lossy()
   ))
 })?;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/access.rs` around lines 19 - 24, The current use of
std::fs::symlink_metadata in src/access.rs causes lstat semantics (does not
follow symlinks) which diverges from Node.js access(); change the call site that
invokes std::fs::symlink_metadata(path) to use std::fs::metadata(path) so
symlinks are followed and dangling symlinks are treated like Node.js, and update
the surrounding error mapping (the Error::from_reason message around this block)
if necessary; alternatively, for exact POSIX semantics on Unix, replace the
logic in the access handling function with a direct libc::access or
libc::faccessat call and map errno into the same Error::from_reason format used
elsewhere.
```rust
pub const COPYFILE_EXCL: u32 = 1;
pub const COPYFILE_FICLONE: u32 = 2;
pub const COPYFILE_FICLONE_FORCE: u32 = 4;
```
🧩 Analysis chain
🏁 Script executed:
cat -n src/copy_file.rs | head -50

Repository: CoderSerio/hyper-fs
Length of output: 1723
🏁 Script executed:
rg "COPYFILE_FICLONE|COPYFILE_EXCL|COPYFILE_FICLONE_FORCE" --type rust

Repository: CoderSerio/hyper-fs
Length of output: 294
🏁 Script executed:
find . -name "*.rs" -type f | head -20

Repository: CoderSerio/hyper-fs
Length of output: 374
🏁 Script executed:
cat -n src/lib.rs

Repository: CoderSerio/hyper-fs
Length of output: 1117
🌐 Web query:
napi-rs constant export #[napi] attribute requirement
💡 Result:
To export a Rust constant from napi-rs to JavaScript, it must be a public constant annotated with `#[napi]`:

```rust
use napi_derive::napi;

#[napi]
pub const MY_CONST: u32 = 42;
```

- `#[napi]` can be applied to `const` items (the macro explicitly parses `syn::Item::Const`). [2]
- Only `pub const` is allowed; otherwise the macro errors with "only public const allowed". [1]
- Some `#[napi(...)]` options are not valid on consts (e.g. `ts_type`, `ts_args_type`, `ts_return_type`, `custom_finalize`). [1]
Sources: [1] [2]
🌐 Web query:
Node.js fs.copyFile COPYFILE_FICLONE_FORCE behavior COW not supported ENOTSUP
💡 Result:
fs.copyFile(src, dest, fs.constants.COPYFILE_FICLONE_FORCE) tells Node to only perform a copy-on-write “reflink/clone” copy; no fallback to a byte-for-byte copy is allowed. So if the OS/filesystem can’t do CoW cloning for that file, the call fails with an error such as ENOTSUP (“operation not supported”). Node’s docs explicitly describe this “must reflink or fail” behavior. [1]
Under the hood, Node delegates to libuv (uv_fs_copyfile) which implements:
- `...FICLONE`: try CoW reflink; if not supported or errors, fall back to a normal copy.
- `...FICLONE_FORCE`: try CoW reflink; if not supported or errors, return an error (no fallback). [2]
So ENOTSUP (or sometimes another errno) generally means the reflink primitive isn’t available for that source/destination on that filesystem/path combination (e.g., ext4 typically won’t support reflinks; Btrfs/XFS often do; cross-filesystem clones can also fail). At the syscall level on Linux, reflink cloning is done via ioctl(FICLONE/FICLONERANGE), which fails when reflink isn’t supported. [3]
One more nuance: libuv notes that starting with libuv 1.33.0, FICLONE_FORCE returns the actual underlying error, whereas older libuv versions mapped all such failures to ENOTSUP. [2]
Sources: Node.js fs.copyFile docs (flags semantics) [1]; libuv uv_fs_copyfile docs (fallback vs force + error mapping history) [2]; Linux ioctl_ficlonerange(2) (reflink ioctl errors) [3].
Add #[napi] attributes to export constants and implement correct COPYFILE_FICLONE_FORCE behavior.
Two issues must be fixed:

- Missing `#[napi]` attributes: Without these annotations, all three constants are invisible to JavaScript. napi-rs only exports constants marked with `#[napi]`.
- Behavioral correctness for `COPYFILE_FICLONE_FORCE`: The current implementation silently falls back to a regular copy when copy-on-write cloning is unavailable. However, `COPYFILE_FICLONE_FORCE` must fail with `ENOTSUP` when reflink cloning is not supported; it should never silently degrade to a byte-for-byte copy. The `COPYFILE_FICLONE` flag should perform the fallback, but `COPYFILE_FICLONE_FORCE` must not.
♻️ Proposed fix for the missing annotations
+#[napi]
pub const COPYFILE_EXCL: u32 = 1;
+#[napi]
pub const COPYFILE_FICLONE: u32 = 2;
+#[napi]
pub const COPYFILE_FICLONE_FORCE: u32 = 4;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:
pub const COPYFILE_EXCL: u32 = 1;
pub const COPYFILE_FICLONE: u32 = 2;
pub const COPYFILE_FICLONE_FORCE: u32 = 4;
After:
#[napi]
pub const COPYFILE_EXCL: u32 = 1;
#[napi]
pub const COPYFILE_FICLONE: u32 = 2;
#[napi]
pub const COPYFILE_FICLONE_FORCE: u32 = 4;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/copy_file.rs` around lines 8 - 10, Add #[napi] to the three exported
constants (COPYFILE_EXCL, COPYFILE_FICLONE, COPYFILE_FICLONE_FORCE) so napi-rs
will expose them to JS, and update the copy logic in the copy implementation
(the function handling the copy operation—e.g., copy_file / copyfile handler) to
honor the clone flags: always check COPYFILE_EXCL as before; if COPYFILE_FICLONE
or COPYFILE_FICLONE_FORCE is set, attempt a reflink/reflink-clone first and on
success return; if reflink fails and COPYFILE_FICLONE is set, fall back to
fs::copy() (byte-for-byte), but if COPYFILE_FICLONE_FORCE is set, return an
ENOTSUP error instead of doing a fallback copy. Also ensure the code
distinguishes the two flags rather than unconditionally calling fs::copy().
if mode & COPYFILE_EXCL != 0 && dest.exists() {
  return Err(Error::from_reason(format!(
    "EEXIST: file already exists, copyfile '{}' -> '{}'",
    src.to_string_lossy(),
    dest.to_string_lossy()
  )));
}
TOCTOU race condition defeats the COPYFILE_EXCL exclusivity guarantee.
dest.exists() and fs::copy() are not atomic. A concurrent writer can create dest between the check and the copy, silently overwriting it. Node.js uses O_EXCL at the OS level. On Unix, the proper fix is to open dest with O_CREAT | O_EXCL before copying, or use std::fs::OpenOptions::create_new(true).
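A minimal std-only sketch of the atomic alternative (copy_excl is an illustrative name, not the crate's actual function): create_new(true) maps to O_CREAT | O_EXCL, so "does dest exist?" and "create dest" happen in a single syscall and the race window disappears.

```rust
use std::fs::{File, OpenOptions};
use std::io;
use std::path::Path;

// Exclusive-create copy: fails with AlreadyExists if dest is present,
// without a separate (racy) existence check.
fn copy_excl(src: &Path, dest: &Path) -> io::Result<u64> {
    let mut out = OpenOptions::new().write(true).create_new(true).open(dest)?;
    let mut inp = File::open(src)?;
    io::copy(&mut inp, &mut out) // returns bytes copied
}
```

The caller can then map io::ErrorKind::AlreadyExists to the Node-style EEXIST message.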
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/copy_file.rs` around lines 17 - 23, The current CHECK using dest.exists()
before calling fs::copy() is racy for COPYFILE_EXCL; replace the existence check
and non-atomic fs::copy() path with an atomic create_new open: when mode has
COPYFILE_EXCL, attempt to open dest using
std::fs::OpenOptions::new().write(true).create_new(true).open(dest) to get a
File (this enforces O_CREAT|O_EXCL), then open src (File::open(src)) and stream
contents with std::io::copy(&mut src_file, &mut dest_file); remove the
dest.exists() branch and handle the error from create_new to return the EEXIST
Error::from_reason as before; ensure you preserve error handling for other modes
and clean up on other failures.
fs::create_dir_all(path).map_err(|e| {
  Error::from_reason(format!(
    "ENOENT: no such file or directory, mkdir '{}'",
    e
  ))
})?;
Bug: Error message includes the I/O error text instead of the path.
The format string inserts e (the std::io::Error) where it should insert the path. This produces messages like "ENOENT: no such file or directory, mkdir 'Permission denied (os error 13)'" instead of the intended path.
Proposed fix
fs::create_dir_all(path).map_err(|e| {
- Error::from_reason(format!(
- "ENOENT: no such file or directory, mkdir '{}'",
- e
- ))
+ if e.kind() == std::io::ErrorKind::PermissionDenied {
+ Error::from_reason(format!(
+ "EACCES: permission denied, mkdir '{}'",
+ path.to_string_lossy()
+ ))
+ } else {
+ Error::from_reason(format!(
+ "ENOENT: no such file or directory, mkdir '{}'",
+ path.to_string_lossy()
+ ))
+ }
})?;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:
fs::create_dir_all(path).map_err(|e| {
  Error::from_reason(format!(
    "ENOENT: no such file or directory, mkdir '{}'",
    e
  ))
})?;
After:
fs::create_dir_all(path).map_err(|e| {
  if e.kind() == std::io::ErrorKind::PermissionDenied {
    Error::from_reason(format!(
      "EACCES: permission denied, mkdir '{}'",
      path.to_string_lossy()
    ))
  } else {
    Error::from_reason(format!(
      "ENOENT: no such file or directory, mkdir '{}'",
      path.to_string_lossy()
    ))
  }
})?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/mkdir.rs` around lines 42 - 47, The error message is formatting the
std::io::Error (`e`) into the path slot; update the map_err closure around
fs::create_dir_all(path) so the formatted message uses the path (e.g.,
path.display() or path.to_string_lossy()) instead of `e`, and if desired include
the original io error separately (e.g., append `: {}` with `e`) when calling
Error::from_reason(format!(...)) so the message becomes "ENOENT: no such file or
directory, mkdir '<path>'" (and optionally ": <io error>").
let c_path = CString::new(path.to_string_lossy().as_bytes())
  .map_err(|_| Error::from_reason("Invalid path"))?;
to_string_lossy() can mangle non-UTF-8 paths on Unix.
Unix paths are byte sequences, not necessarily valid UTF-8. Using to_string_lossy() replaces invalid bytes with U+FFFD, which produces a different (incorrect) CString. Use OsStr::as_bytes() instead for lossless conversion.
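A Unix-only sketch of the lossless conversion (path_to_cstring is an illustrative helper, not the file's actual code): as_bytes() hands over the raw path bytes unchanged, and CString::new can only fail on an interior NUL, which is not a valid path byte anyway.

```rust
#[cfg(unix)]
fn path_to_cstring(path: &std::path::Path) -> Option<std::ffi::CString> {
    use std::os::unix::ffi::OsStrExt;
    // Raw bytes in, raw bytes out: no U+FFFD substitution.
    std::ffi::CString::new(path.as_os_str().as_bytes()).ok()
}
```

For a path containing an invalid-UTF-8 byte, this round-trips exactly, while to_string_lossy() would alter it.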
Proposed fix
+ use std::os::unix::ffi::OsStrExt;
- let c_path = CString::new(path.to_string_lossy().as_bytes())
+ let c_path = CString::new(path.as_os_str().as_bytes())
.map_err(|_| Error::from_reason("Invalid path"))?;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:
let c_path = CString::new(path.to_string_lossy().as_bytes())
  .map_err(|_| Error::from_reason("Invalid path"))?;
After:
use std::os::unix::ffi::OsStrExt;
let c_path = CString::new(path.as_os_str().as_bytes())
  .map_err(|_| Error::from_reason("Invalid path"))?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utimes.rs` around lines 32 - 33, The code builds a CString from path
using to_string_lossy() which mangles non-UTF-8 paths; change the CString
construction in src/utimes.rs to use the raw bytes from the OS string instead
(e.g. use path.as_os_str().as_bytes() on Unix) and feed that into CString::new,
keeping the existing map_err handling; ensure you import
std::os::unix::ffi::OsStrExt if not already present so as_bytes() is available
and the conversion is lossless for Unix paths.
let atime_sec = atime as i64;
let atime_nsec = ((atime - atime as i64 as f64) * 1_000_000_000.0) as i64;
let mtime_sec = mtime as i64;
let mtime_nsec = ((mtime - mtime as i64 as f64) * 1_000_000_000.0) as i64;
Negative timestamps produce invalid tv_nsec values violating POSIX.
For pre-epoch times (negative atime/mtime), the nanoseconds calculation yields a negative tv_nsec. POSIX requires tv_nsec to be in [0, 999_999_999]. For example, -1.5 seconds → tv_sec=-1, tv_nsec=-500_000_000, which utimensat may reject with EINVAL.
The correct decomposition for negative times needs to floor the seconds and keep nsec positive:
Proposed fix
- let atime_sec = atime as i64;
- let atime_nsec = ((atime - atime as i64 as f64) * 1_000_000_000.0) as i64;
- let mtime_sec = mtime as i64;
- let mtime_nsec = ((mtime - mtime as i64 as f64) * 1_000_000_000.0) as i64;
+ let atime_sec = atime.floor() as i64;
+ let atime_nsec = ((atime - atime.floor()) * 1_000_000_000.0) as i64;
+ let mtime_sec = mtime.floor() as i64;
+ let mtime_nsec = ((mtime - mtime.floor()) * 1_000_000_000.0) as i64;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:
let atime_sec = atime as i64;
let atime_nsec = ((atime - atime as i64 as f64) * 1_000_000_000.0) as i64;
let mtime_sec = mtime as i64;
let mtime_nsec = ((mtime - mtime as i64 as f64) * 1_000_000_000.0) as i64;
After:
let atime_sec = atime.floor() as i64;
let atime_nsec = ((atime - atime.floor()) * 1_000_000_000.0) as i64;
let mtime_sec = mtime.floor() as i64;
let mtime_nsec = ((mtime - mtime.floor()) * 1_000_000_000.0) as i64;
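The floor-based decomposition suggested above can be exercised in isolation. split_timespec is an illustrative helper, not the file's actual function; it floors the seconds so the fractional part, and hence tv_nsec, is always in the POSIX-required range 0..=999_999_999.

```rust
// Split a fractional Unix timestamp (seconds) into (tv_sec, tv_nsec)
// with tv_nsec always non-negative, even for pre-epoch times.
fn split_timespec(t: f64) -> (i64, i64) {
    let sec = t.floor() as i64;
    let nsec = ((t - t.floor()) * 1_000_000_000.0).round() as i64;
    if nsec == 1_000_000_000 {
        (sec + 1, 0) // rounding landed exactly on a second boundary
    } else {
        (sec, nsec)
    }
}
```

For example, -1.5 s becomes tv_sec = -2, tv_nsec = 500_000_000, which utimensat accepts, instead of the invalid (-1, -500_000_000).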
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utimes.rs` around lines 35 - 38, The current decomposition of atime/mtime
into atime_sec/atime_nsec and mtime_sec/mtime_nsec produces negative tv_nsec for
pre-epoch times; update the logic in src/utimes.rs to compute seconds by
flooring the floating timestamp (use a floor operation rather than simple cast)
and compute nanoseconds as (time - sec) * 1_000_000_000, ensuring tv_nsec is in
[0, 999_999_999]; if you get a negative nsec, subtract 1 from the seconds and
add 1_000_000_000 to nsec (apply this for both atime_* and mtime_* before
calling utimensat) so that utimensat receives valid POSIX-compliant
tv_sec/tv_nsec pairs.
#[cfg(not(unix))]
{
  use std::fs;
  let atime_sys = to_system_time(atime);
  let mtime_sys = to_system_time(mtime);
  let file = fs::OpenOptions::new()
    .write(true)
    .open(path)
    .map_err(|e| Error::from_reason(e.to_string()))?;
  file
    .set_modified(mtime_sys)
    .map_err(|e| Error::from_reason(e.to_string()))?;
  let _ = atime_sys; // Windows doesn't easily support setting atime via std
}
🧩 Analysis chain
🏁 Script executed:
find . -name "utimes.rs" -type f
Repository: CoderSerio/hyper-fs
Length of output: 78
🏁 Script executed:
cat -n src/utimes.rs
Repository: CoderSerio/hyper-fs
Length of output: 3528
🌐 Web query:
Rust filetime crate set_file_times Windows SetFileTime support
💡 Result:
Yes. On Windows, filetime’s set_file_times / set_file_mtime / set_file_atime are implemented by opening the target with write access and calling the Win32 SetFileTime API (passing NULL for the creation-time parameter, and NULL for whichever of atime/mtime you’re not changing). (docs.rs)
Notes:
- Permissions: SetFileTime requires a handle with FILE_WRITE_ATTRIBUTES, so filetime opens the path with .write(true) (and uses FILE_FLAG_BACKUP_SEMANTICS so it can also work with directories). (docs.rs)
- Creation time: the filetime crate doesn't set creation/birth time (it always passes NULL for creation time). If you need Windows creation-time support, use something like filetime_creation (it uses SetFileTime to set creation time on Windows). (docs.rs)
Citations:
- 1: https://docs.rs/crate/filetime/latest/source/src/windows.rs
- 2: https://docs.rs/crate/filetime/latest/source/src/windows.rs
- 3: https://docs.rs/crate/filetime/latest/source/src/windows.rs
atime parameter is silently ignored on Windows; consider using the filetime crate for cross-platform support.
On non-Unix platforms, the function accepts an atime parameter but discards it without error, only setting mtime via set_modified(). The Unix implementation (using libc::utimensat) properly handles both timestamps, creating a behavioral inconsistency across platforms.
The write(true) mode is necessary for the underlying Windows API (SetFileTime requires FILE_WRITE_ATTRIBUTES); the filetime crate uses the same approach and provides a robust, standard way to handle both atime and mtime consistently across platforms, including proper Windows support.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utimes.rs` around lines 62 - 75, The non-Unix branch in src/utimes.rs
currently ignores the atime and only calls file.set_modified(mtime_sys); replace
that logic to use the filetime crate (e.g., FileTime::from_system_time and
filetime::set_file_times) to set both atime and mtime atomically on Windows;
open the file with OpenOptions::new().write(true) (to ensure
FILE_WRITE_ATTRIBUTES) and convert atime and mtime via to_system_time before
passing them to filetime::set_file_times, and remove the unused atime_sys
placeholder so the behavior matches the Unix utimensat implementation.
| #[cfg(unix)] | ||
| if let Some(mode) = opts.mode { | ||
| use std::os::unix::fs::PermissionsExt; | ||
| let _ = fs::set_permissions(path, fs::Permissions::from_mode(mode)); | ||
| } |
Silently discarding permission-setting errors.
let _ = fs::set_permissions(...) swallows any error when applying the requested mode. If the caller explicitly passes mode, they expect it to take effect. A failure here (e.g., permission denied) should be surfaced.
Proposed fix
#[cfg(unix)]
if let Some(mode) = opts.mode {
use std::os::unix::fs::PermissionsExt;
- let _ = fs::set_permissions(path, fs::Permissions::from_mode(mode));
+ fs::set_permissions(path, fs::Permissions::from_mode(mode)).map_err(|e| {
+ Error::from_reason(format!(
+ "EPERM: operation not permitted, chmod '{}'",
+ path.to_string_lossy()
+ ))
+ })?;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/write_file.rs` around lines 57 - 61, The code currently silences errors
when applying a requested Unix mode (the block checking opts.mode) by using `let
_ = fs::set_permissions(...)`; instead propagate or return the error so callers
see failures: replace the discard with a proper Result handling (e.g., call
`fs::set_permissions(path, fs::Permissions::from_mode(mode))?` or map the error
into the function's error type) and keep the existing `use
std::os::unix::fs::PermissionsExt;` and `path`/`opts.mode` identifiers; ensure
the surrounding function signature returns a compatible Result so the permission
error is not swallowed.
…ding copy, exists, mkdir, readFile, rm, stat, and writeFile
…h corresponding tests
…atch node:fs behavior
…e copying, force overwrite, and error handling; add tests and update documentation
…luding Date object support for timestamps, updated README, and additional tests for access and stat functions
… README documentation
fn ms_to_datetime(ms: f64) -> DateTime<Local> {
  let secs = (ms / 1000.0) as i64;
  let nsecs = ((ms % 1000.0) * 1_000_000.0) as u32;
  Local
    .timestamp_opt(secs, nsecs)
    .single()
    .unwrap_or_else(|| Local.timestamp_opt(0, 0).unwrap())
}
🟡 ms_to_datetime produces invalid nsecs for negative or fractional millisecond values
The ms_to_datetime function in types.rs can produce a negative nsecs value (cast to u32, wrapping around) when ms has a negative fractional part, causing incorrect Date objects for atime, mtime, ctime, and birthtime on Stats.
Root Cause
At src/types.rs:167, the nanosecond component is computed as:
let nsecs = ((ms % 1000.0) * 1_000_000.0) as u32;
When ms is negative (e.g., representing dates before the Unix epoch), ms % 1000.0 is also negative, so the intermediate f64 value will be negative. Casting a negative f64 to u32 saturates to 0 (since Rust 1.45; before that, such casts were undefined behavior). Even with saturation to 0, the fractional millisecond precision is lost.
For example, with ms = 1500.5, we get secs = 1, nsecs = ((500.5) * 1_000_000.0) as u32 = 500_500_000 — this is correct. But with ms = -500.0, we get secs = 0 (truncation toward zero), nsecs = ((-500.0) * 1_000_000.0) as u32 which saturates to 0, producing DateTime at epoch (0, 0) instead of 500ms before epoch.
Node.js uses Math.round(ms) and passes it directly to new Date(ms), which handles negative values correctly.
Impact: Stats date fields (atime, mtime, ctime, birthtime) may be incorrect for files with timestamps before the Unix epoch or when sub-second precision matters with negative remainders. In practice this is rare but can cause subtle mismatches with Node.js fs.Stats date values.
Before:
fn ms_to_datetime(ms: f64) -> DateTime<Local> {
  let secs = (ms / 1000.0) as i64;
  let nsecs = ((ms % 1000.0) * 1_000_000.0) as u32;
  Local
    .timestamp_opt(secs, nsecs)
    .single()
    .unwrap_or_else(|| Local.timestamp_opt(0, 0).unwrap())
}
After:
fn ms_to_datetime(ms: f64) -> DateTime<Local> {
  let total_ms = ms.round() as i64;
  let secs = total_ms.div_euclid(1000);
  let nsecs = (total_ms.rem_euclid(1000) * 1_000_000) as u32;
  Local
    .timestamp_opt(secs, nsecs)
    .single()
    .unwrap_or_else(|| Local.timestamp_opt(0, 0).unwrap())
}
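The rounding-plus-Euclidean-division approach can be checked in isolation. ms_to_parts below is an illustrative helper mirroring that decomposition, not the crate's code: rounding to whole milliseconds matches Node's Math.round(ms), and rem_euclid keeps the remainder non-negative even before the epoch.

```rust
// Decompose a millisecond timestamp into (secs, nsecs) with nsecs
// always in 0..1_000_000_000, including for negative inputs.
fn ms_to_parts(ms: f64) -> (i64, u32) {
    let total_ms = ms.round() as i64;
    let secs = total_ms.div_euclid(1000);
    let nsecs = (total_ms.rem_euclid(1000) * 1_000_000) as u32;
    (secs, nsecs)
}
```

Note how -500 ms correctly becomes (-1 s, 500_000_000 ns) rather than collapsing to the epoch.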
…d precision in statSync; enhance writeFile and appendFile with encoding options for hex and base64
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
README.md (1)
29-29: ⚠️ Potential issue | 🟡 Minor
Full-width colon inconsistency in legend.
Line 29 uses a full-width colon (U+FF1A) while all other legend items use an ASCII colon (U+003A).
-> - ✨:New feature from hyper-fs
+> - ✨: New feature from hyper-fs
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 29, Replace the full-width colon in the README legend entry "✨:New feature from hyper-fs" with an ASCII colon so it matches the other items (change "✨:New feature from hyper-fs" to "✨: New feature from hyper-fs"); update only that character in the README legend to ensure consistent punctuation across legend items.
🧹 Nitpick comments (31)
__test__/rename.spec.ts (1)
32-42: Async test doesn't verify file content after rename.
The other rename tests (e.g., line 24) assert readFileSync(dest, 'utf8') === 'content'. The async variant only checks existence. Add a content assertion for completeness.
💡 Suggested addition
t.false(existsSync(src))
t.true(existsSync(dest))
+ t.is(readFileSync(dest, 'utf8'), 'data')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/rename.spec.ts` around lines 32 - 42, The async rename test ("rename: async should rename a file") currently only checks existence; after calling await rename(src, dest) add an assertion that the destination file's contents equal the original string written ("data") by calling readFileSync(dest, 'utf8') and comparing to 'data' (similar to the other test on line 24). Locate this in the test block that uses tmpDir(), join(), writeFileSync(src, 'data'), and rename(src, dest), and insert the readFileSync(dest, 'utf8') === 'data' assertion immediately after the existsSync checks.__test__/truncate.spec.ts (1)
52-53: Redundant length assertion.
Line 53's t.is(hyperContent.length, nodeContent.length) is always true if line 52's t.is(hyperContent, nodeContent) passes, since string equality implies length equality.
🧹 Suggested cleanup
t.is(hyperContent, nodeContent)
- t.is(hyperContent.length, nodeContent.length)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/truncate.spec.ts` around lines 52 - 53, The assertion t.is(hyperContent.length, nodeContent.length) is redundant because t.is(hyperContent, nodeContent) already guarantees equal lengths; remove the redundant length check in __test__/truncate.spec.ts (the second assertion comparing hyperContent.length and nodeContent.length) and keep only the string equality assertion involving hyperContent and nodeContent to simplify the test.benchmark/mkdir.ts (1)
11-38: Extract benchmark/runGroup to a shared utility — these are duplicated verbatim in benchmark/copy_file.ts.
benchmark/utils.tsexport would eliminate the duplication and make the per-file scripts much shorter.♻️ Suggested extraction
Create
benchmark/utils.ts:export function benchmark(name: string, fn: () => void, iterations = 100): number { fn() // warmup const times: number[] = [] for (let i = 0; i < iterations; i++) { const start = process.hrtime.bigint() fn() const end = process.hrtime.bigint() times.push(Number(end - start) / 1_000_000) } return times.reduce((a, b) => a + b, 0) / times.length } export function runGroup(groupName: string, impls: { name: string; fn: () => void }[], iterations?: number) { console.log(`\n${groupName}`) const results = impls.map((impl) => ({ name: impl.name, time: benchmark(impl.name, impl.fn, iterations) })) const baseline = results[0] for (const res of results) { const ratio = res.time / baseline.time const diffStr = res === baseline ? '(baseline)' : `${ratio.toFixed(2)}x ${ratio > 1 ? '(slower)' : '(faster)'}` console.log(` ${res.name.padEnd(25)} ${res.time.toFixed(3)} ms ${diffStr}`) } }Then in
benchmark/mkdir.tsandbenchmark/copy_file.ts:+import { runGroup } from './utils.js' -function benchmark(...) { ... } -function runGroup(...) { ... }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmark/mkdir.ts` around lines 11 - 38, Extract the duplicated benchmark() and runGroup() implementations into a shared module (e.g., create benchmark/utils.ts) and export them; ensure benchmark(name, fn, iterations = 100) accepts an optional iterations parameter and still performs the warmup call, and runGroup(groupName, impls, iterations?) passes iterations through to benchmark. Replace the copies in benchmark/mkdir.ts and benchmark/copy_file.ts with imports from the new utils module and call runGroup/benchmark from there (keep the existing function names benchmark and runGroup so callers like runGroup(...) and benchmark(...) continue to work)..github/workflows/CI.yml (1)
114-114: Align upload-artifact and download-artifact versions.
actions/upload-artifact@v5 (Line 114) is paired with actions/download-artifact@v4 (Lines 155, 211, 255). v5 and v6 of upload-artifact share the same v4+ artifact backend, so this is likely functionally compatible today, but the version skew obscures intent and creates a maintenance risk if any v5-specific upload features are ever relied on. Both should be pinned to the same major version. Note that v6 is also now available, so this is a good opportunity to align everything at @v5 (or @v6 if the runner constraint is acceptable).
🔧 Proposed alignment fix
- uses: actions/upload-artifact@v5
+ uses: actions/upload-artifact@v5 # or upgrade both to `@v6`
# In test-macOS-windows-binding, test-linux-binding, and publish jobs:
- uses: actions/download-artifact@v4
+ uses: actions/download-artifact@v5
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/CI.yml at line 114, Align the artifact action versions so upload and download use the same major release: change all occurrences of actions/download-artifact@v4 (referenced at lines near the download steps) to actions/download-artifact@v5 to match actions/upload-artifact@v5, or alternatively pin both upload and download to `@v6` if you prefer the newer major—ensure the chosen version is supported by your runner, update every uses: entry for actions/download-artifact and any actions/upload-artifact occurrences so they match the same `@v5/`@v6 tag, and run the CI workflow to verify no compatibility issues.benchmark/exists.ts (2)
5-7: Hardcoded /tmp path is fragile and non-portable; CWD-relative paths assume project root.
- Line 7: /tmp/hyper-fs-bench-nonexistent-path-12345 is hardcoded, Unix-only, and would produce incorrect results if that path ever exists. Prefer path.join(os.tmpdir(), 'hyper-fs-bench-nonexistent-' + Date.now()).
- Lines 5–6 ('package.json', 'node_modules') and Line 36 ('src') are CWD-relative and silently break when the benchmark is run from any directory other than project root. Prefer path.join(import.meta.dirname ?? __dirname, '../', ...) or equivalent.
Verify each finding against the current code and only fix it if needed. In `@benchmark/exists.ts` around lines 5 - 7, Replace hardcoded and CWD-relative paths so the benchmark is portable: construct nonExistent using os.tmpdir() (e.g., path.join(os.tmpdir(), 'hyper-fs-bench-nonexistent-' + Date.now())) instead of the literal '/tmp/…', and resolve existingFile, existingDir (and any other CWD-relative refs like 'src') relative to the repository file location using import.meta.url/dirname or __dirname (e.g., path.join(import.meta.dirname ?? __dirname, '../', 'package.json')) so variables existingFile, existingDir, and any occurrences of 'src' are based on the file’s directory rather than the current working directory.
1-3: Inconsistent benchmarking framework across the benchmark suite.This file uses
mitata(bench/group/run), whilebenchmark/mkdir.tsandbenchmark/copy_file.tsuse a hand-rolled timing loop. The suite should standardize on one approach —mitatais the better choice since it's already a project dependency and handles statistical robustness.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmark/exists.ts` around lines 1 - 3, This file uses mitata's run/bench/group, but other benchmarks (benchmark/mkdir.ts and benchmark/copy_file.ts) use a hand-rolled timing loop; standardize by converting those files to mitata too: replace their manual timing loops with mitata's bench() and group() APIs, import run, bench, group from 'mitata' (same as this file), wrap each case in bench callbacks, and invoke run() at the end—remove the custom timing/for-loop logic so all benchmarks use the same mitata-based structure as exists.ts.Cargo.toml (1)
3-3: Authors list cleared.The
authorsfield was changed from the original author to an empty array. If this is intentional (e.g., moving to a community-maintained project), consider documenting attribution in aCONTRIBUTORSorAUTHORSfile instead.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Cargo.toml` at line 3, The Cargo.toml authors field was cleared (authors = []), removing package attribution; restore the original author entry or, if the project is intentionally moving to community ownership, document the change by either repopulating the authors field in Cargo.toml with the original maintainer(s) or add an AUTHORS or CONTRIBUTORS file in the repo and leave a comment in Cargo.toml referencing that file; update the authors field or add the new attribution file and commit both so attribution is preserved (look for the authors key in Cargo.toml to locate where to change).src/mkdtemp.rs (1)
9-57: Consider using thegetrandomcrate instead of rolling platform-specific RNG.The current implementation manually handles
/dev/urandomon Unix andBCryptGenRandomFFI on Windows with a custom fallback PRNG. Thegetrandomcrate provides a single cross-platform API for OS-level randomness, is widely audited, and would eliminate the platform-specific blocks and the fallback path entirely.Does the Rust getrandom crate support napi and is it commonly used for random temp dir names?🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mkdtemp.rs` around lines 9 - 57, Replace the platform-specific RNG logic in generate_random_suffix with a single call to the getrandom crate: remove the #[cfg(unix)] block that opens /dev/urandom, the #[cfg(windows)] BCryptGenRandom FFI and the #[cfg(not(any(unix, windows)))] fallback, and instead call getrandom::getrandom(&mut bytes) (handle the Result by falling back to a hard error or a secure fallback path you choose). Keep CHARS and the final mapping loop unchanged, and remove or stop calling fill_fallback; update imports to add getrandom and remove platform-specific imports/externs such as BCryptGenRandom. This centralizes randomness, eliminates conditional compilation, and simplifies generate_random_suffix.src/readdir.rs (1)
29-32:encodingoption is accepted but silently ignored.The new
encodingfield is documented and parsed, but the_encodingvariable is never used (Line 65). All paths return UTF-8 strings regardless. If'buffer'encoding is intentionally unsupported, consider returning an explicit error when a caller passesencoding: 'buffer'rather than silently ignoring it — this would match Node.js behavior more closely and avoid confusion.Also applies to: 63-65
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/readdir.rs` around lines 29 - 32, The pub encoding field is parsed into the local _encoding variable but never used, so callers asking for encoding: "buffer" silently get UTF-8 strings; update the readdir handling to honor _encoding: if _encoding == "buffer" return an explicit error (or implement returning raw bytes) instead of ignoring it, and treat any other value as "utf8" as documented; locate the code that declares pub encoding and the local _encoding variable in the readdir function and add a conditional branch that either returns a proper error for "buffer" or converts names to Buffer-like byte arrays when requested, leaving the default UTF-8 string path unchanged.src/cp.rs (1)
50-53: Non-Unixset_timestampsis a silent no-op.When
preserve_timestamps: trueis passed on Windows, this returnsOk(())without preserving anything. Consider either logging a warning or implementing the Windows equivalent usingSetFileTimevia thewindows-syscrate, or at minimum documenting this limitation in theCpOptionsdoc comment.__test__/rm.spec.ts (1)
255-282: Retry tests only exercise the happy (first-attempt-success) path.All three
maxRetriestests create a normally removable file/directory, so the operation succeeds on the first attempt and the retry loop is never actually exercised. These tests confirm the option doesn't break normal removal, but they don't validate that retries actually fire on transient failures.Consider adding a test that simulates a transient failure (e.g., a file locked by another handle on Windows, or a directory that temporarily cannot be removed) to actually cover the retry logic path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/rm.spec.ts` around lines 255 - 282, Update the tests so they simulate a transient failure instead of always succeeding on first attempt: for the sync test override/monkeypatch fs.rmSync (and for the async test fs.rm) to throw a transient error (set err.code to 'EBUSY' or 'EPERM') for the first N calls and then call through to the original implementation, then call rmSync/rm with maxRetries > N to assert the retry loop actually retries and eventually removes the target; restore the original fs functions after the test. Reference the existing test helpers and functions rmSync and rm, and the test names "sync: maxRetries should eventually succeed on transient failure" / "sync: maxRetries with recursive should work" / "async: maxRetries with recursive should work" to locate where to inject the monkeypatch and restore logic.src/types.rs (1)
1-1: `#![allow(dead_code)]` is module-scoped; consider narrowing.

This inner attribute suppresses dead-code warnings for the entire module. The only unused items appear to be the `S_IF*` constants (used only within this file's `impl` blocks, which the compiler should recognize). If the warning is just for a few constants, a targeted `#[allow(dead_code)]` on the constant block would be cleaner.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/types.rs` at line 1, The module-wide attribute #![allow(dead_code)] is too broad; narrow it by removing the crate/module-level attribute and applying #[allow(dead_code)] only to the unused S_IF* constants (or the const block that defines them) so the rest of the file continues to get dead-code warnings; locate the S_IF* constant declarations in src/types.rs and add the attribute directly to that const block or each constant, then delete the top-level #![allow(dead_code)].

src/symlink.rs (2)
13-18: TOCTOU race in EEXIST pre-check; also redundant on Unix.

The existence check on Line 13 races with the actual `symlink` syscall: another process can create or remove the path in between. On Unix, `symlink(2)` itself returns `EEXIST` when the target path exists, so you could drop this pre-check and handle `AlreadyExists` in the `map_err` block below (Lines 23–32), similar to how `NotFound` is already handled:

Suggested approach

```diff
 fn symlink_impl(target: String, path_str: String, symlink_type: Option<String>) -> Result<()> {
   let path = Path::new(&path_str);
   let target_path = Path::new(&target);
-
-  if path.exists() || path.symlink_metadata().is_ok() {
-    return Err(Error::from_reason(format!(
-      "EEXIST: file already exists, symlink '{}' -> '{}'",
-      target, path_str
-    )));
-  }

   #[cfg(unix)]
   {
     let _ = symlink_type; // unused on Unix
     std::os::unix::fs::symlink(target_path, path).map_err(|e| {
       if e.kind() == std::io::ErrorKind::NotFound {
         Error::from_reason(format!(
           "ENOENT: no such file or directory, symlink '{}' -> '{}'",
           target, path_str
         ))
+      } else if e.kind() == std::io::ErrorKind::AlreadyExists {
+        Error::from_reason(format!(
+          "EEXIST: file already exists, symlink '{}' -> '{}'",
+          target, path_str
+        ))
       } else {
         Error::from_reason(format!("{}, symlink '{}' -> '{}'", e, target, path_str))
       }
     })?;
   }
```

The same `AlreadyExists` arm should be added to the Windows branch as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/symlink.rs` around lines 13 - 18, Remove the TOCTOU pre-check that calls path.exists() || path.symlink_metadata().is_ok() and instead let the underlying symlink syscall fail atomically; handle the AlreadyExists case in the existing map_err/error match that follows (the same place where NotFound is handled — see the error-mapping block around lines 23–32) so that EEXIST is converted to Error::from_reason(...) rather than pre-checking, and add the corresponding AlreadyExists arm to the Windows-specific branch as well; update any log/message text to remain identical to the previous EEXIST message.
39-42: `"junction"` case uses `symlink_dir` (directory symbolic link) instead of a true NTFS junction point.

This implementation difference matters for Node.js API parity: `fs.symlink(target, path, 'junction')` on Node.js creates a true NTFS junction (using reparse-point tag `IO_REPARSE_TAG_MOUNT_POINT`), but the current code creates a directory symbolic link via `CreateSymbolicLinkW`. The semantics differ: junctions typically require no elevation and accept absolute paths, while `symlink_dir` requires `SeCreateSymbolicLinkPrivilege` (often admin/Developer Mode) and can handle relative/UNC paths. Since the README advertises `'junction'` with a checkmark suggesting Node.js parity, either implement true junctions (via the `junction` crate or direct `NtCreateFile`/`DeviceIoControl`), or document this as a semantic approximation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/symlink.rs` around lines 39 - 42, The "junction" match arm currently calls std::os::windows::fs::symlink_dir (via the "junction" => branch in src/symlink.rs), which creates a directory symbolic link rather than a true NTFS junction; replace this with a real junction implementation (for example call junction::create(target_path, path) from the junction crate or implement the equivalent CreateMountPoint reparse-point logic via DeviceIoControl), ensuring the "junction" branch creates IO_REPARSE_TAG_MOUNT_POINT reparse points and preserves the existing argument handling, or if you choose not to implement native junctions update the README and any exported API comments to clearly document that "junction" is a semantic approximation that currently creates a symlink_dir and not a true NTFS junction.

src/write_file.rs (1)
20-55: Custom base64 decoder duplicated across files.

Both `write_file.rs` (decode) and `read_file.rs` (encode) have hand-rolled base64 implementations. Consider extracting these into a shared `encoding` module or using the `base64` crate to reduce duplication and audit surface.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/write_file.rs` around lines 20 - 55, The custom base64 decoding in base64_decode is duplicated with a hand-rolled encoder in read_file.rs; extract a single implementation into a shared place or replace both with the standard base64 crate: move the logic out of write_file.rs's base64_decode (and the encoder in read_file.rs) into a new encoding module (e.g., pub fn decode_base64 and pub fn encode_base64) and update both files to call those functions, or add the base64 crate to Cargo.toml and replace base64_decode and the encoder with calls to base64::engine::general_purpose::{STANDARD, URL_SAFE}.decode/encode (and update imports) so there is no duplicated logic and both read_file.rs and write_file.rs reference the same implementation.

src/read_file.rs (2)
39-66: Custom base64 encoder; consider using the `base64` crate.

Rolling your own base64 is fragile and harder to audit. The `base64` crate is widely used in the Rust ecosystem, handles standard and URL-safe alphabets, and would reduce maintenance surface. If minimizing dependencies is a deliberate choice, a comment explaining that rationale would help future maintainers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/read_file.rs` around lines 39 - 66, The custom base64 implementation in base64_encode is fragile—replace it with the well-tested base64 crate: remove the manual encoding logic in base64_encode and call the crate API (e.g., use base64::engine::general_purpose::{STANDARD, URL_SAFE} or base64::encode_config(data, base64::STANDARD / base64::URL_SAFE) depending on your crate version) and preserve the url_safe boolean to choose the correct config; if you intentionally avoid adding a dependency, keep the existing base64_encode function but add a clear comment above base64_encode explaining why the dependency was avoided and link to tests that validate correctness.
97-98: Unknown flags silently fall back to read-only instead of erroring.

Node.js throws `ERR_INVALID_OPT_VALUE` for unrecognized `flag` values. Here, an invalid flag like `"z"` silently opens the file as read-only, which could mask caller mistakes. Consider returning an error for unrecognized flags.

Suggested change

```diff
-      _ => { open_opts.read(true); }
+      other => {
+        return Err(Error::from_reason(format!(
+          "ERR_INVALID_OPT_VALUE: invalid value for flag: '{}'", other
+        )));
+      }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/read_file.rs` around lines 97 - 98, The match arm that currently treats unknown flag values by falling back to open_opts.read(true) should instead return an error for unrecognized flags; update the flag-parsing code (the match that sets open_opts) so that the catch-all branch (currently "_ => { open_opts.read(true); }") returns a proper Err with a clear message (e.g., invalid flag / ERR_INVALID_OPT_VALUE) rather than mutating open_opts, ensuring callers receive a failure when an invalid flag is provided.

__test__/glob.spec.ts (1)
3-3: Minor: inconsistent `path` import style.

Other test files in this PR use `from 'node:path'` (the `node:` protocol prefix). This file uses `from 'path'`. Both work, but consistency across the test suite is preferable.

Suggested fix

```diff
-import { join } from 'path'
+import { join } from 'node:path'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/glob.spec.ts` at line 3, The import in this test uses the non-prefixed module specifier (import { join } from 'path'); update it to the node: protocol to match the rest of the suite by changing the module specifier to 'node:path' so the symbol join is imported from 'node:path' instead of 'path' for consistency with other tests.

__test__/chmod.spec.ts (1)
8-14: Consider extracting the shared `tmpFile`/`tmpDir` helper.

This helper is duplicated across multiple test files (`encoding.spec.ts`, `chmod.spec.ts`, `readlink.spec.ts`, etc.) with minor variations. A shared `__test__/helpers.ts` module would reduce duplication.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/chmod.spec.ts` around lines 8 - 14, Extract the duplicated tmpFile/tmpDir test helpers into a single __test__/helpers.ts module and update tests to import them: create exported functions tmpFile(name: string) and tmpDir() that encapsulate the logic currently in chmod.spec.ts (creating a unique temp directory with join(tmpdir(), ...), mkdirSync(...), writeFileSync(...)), replace the inline tmpFile implementations in chmod.spec.ts, encoding.spec.ts, readlink.spec.ts, etc. with imports from __test__/helpers.ts, and ensure signatures and behavior (file content, unique dir naming) match existing usages so tests remain unchanged except for the import.

__test__/readdir.spec.ts (1)
121-127: Silent skip when `hyperEntry` is not found may mask regressions.

If the hyper-fs readdir returns fewer entries than Node.js, this loop silently skips missing entries without failing. The preceding test (lines 104-111) covers name-set parity via `deepEqual`, so this is likely fine in practice, but an explicit assertion here would make the test self-sufficient.

Proposed improvement

```diff
   const hyperEntry = hyperEntries.find((e) => e.name === nodeEntry.name)
-  if (hyperEntry) {
+  if (!hyperEntry) {
+    t.fail(`Missing hyper-fs entry for ${nodeEntry.name}`)
+    continue
+  }
+  {
     t.is(hyperEntry.isFile(), nodeEntry.isFile(), `isFile mismatch for ${nodeEntry.name}`)
     t.is(hyperEntry.isDirectory(), nodeEntry.isDirectory(), `isDirectory mismatch for ${nodeEntry.name}`)
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/readdir.spec.ts` around lines 121 - 127, The loop over nodeEntries silently skips when hyperEntries lacks a corresponding entry—add an explicit assertion inside the for-loop to fail the test if hyperEntry is undefined (e.g., assert that hyperEntry exists for each nodeEntry.name) before comparing isFile()/isDirectory(); update the block that iterates nodeEntries and references nodeEntry, hyperEntries, and hyperEntry to first assert presence of hyperEntry (including nodeEntry.name in the failure message) then perform the existing isFile/isDirectory comparisons.

benchmark/cp.ts (1)
104-109: Ratio label is slightly misleading when `ratio` equals exactly `1.0`.

When `ratio` is exactly `1.0` (or very close due to floating point), the ternary `ratio > 1 ? '(slower)' : '(faster)'` will print `"1.00x (faster)"`. Consider adding an equality case or a tolerance band.

Suggested tweak

```diff
-  const diffStr = res === baseline ? '(baseline)' : `${ratio.toFixed(2)}x ${ratio > 1 ? '(slower)' : '(faster)'}`
+  const tag = ratio > 1.01 ? '(slower)' : ratio < 0.99 ? '(faster)' : '(~same)'
+  const diffStr = res === baseline ? '(baseline)' : `${ratio.toFixed(2)}x ${tag}`
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmark/cp.ts` around lines 104 - 109, The ratio label logic in the loop using baseline, results, ratio and diffStr can mis-label runs when ratio === 1.0 (or near due to floating point); change the diffStr construction to check for an equality/tolerance case first (e.g., const eps = 1e-2; if (Math.abs(ratio - 1) < eps) label '(same)') then fall back to ratio > 1 '? (slower)' : '(faster)'. Update the creation of diffStr (and its use when res === baseline) so equal/near-equal speeds are reported as '(same)' instead of '(faster)' or '(slower)'.

__test__/readlink.spec.ts (1)
36-38: Hardcoded `/tmp/` path may behave unexpectedly on non-Unix systems.

The ENOENT test uses `/tmp/no-such-link-...` which won't match the system temp directory on Windows. Consider using `join(tmpdir(), ...)` for consistency (even though this test would likely still pass on Windows since the path won't exist).

Suggested fix

```diff
 test('readlinkSync: should throw ENOENT on non-existent path', (t) => {
-  t.throws(() => readlinkSync('/tmp/no-such-link-' + Date.now()), { message: /ENOENT/ })
+  t.throws(() => readlinkSync(join(tmpdir(), 'no-such-link-' + Date.now())), { message: /ENOENT/ })
 })
```

Apply the same fix to the async variant on line 59.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/readlink.spec.ts` around lines 36 - 38, The test uses a hardcoded '/tmp/' path which is not portable; change the non-existent-path construction to use path.join(os.tmpdir(), `no-such-link-${Date.now()}`) (importing/using tmpdir from 'os' and join from 'path' if not already) in the test named "readlinkSync: should throw ENOENT on non-existent path" so the temp directory is platform-correct, and apply the identical change to the async ENOENT test (the async variant for readlink) so both tests use join(tmpdir(), ...) instead of '/tmp/'.

__test__/encoding.spec.ts (1)
8-16: Temp directories are never cleaned up.

Each test call to `tmpFile` creates a new directory under `os.tmpdir()` but there's no teardown to remove them. This is common for test suites (the OS cleans tmpdir periodically), but if you want deterministic cleanup, consider using AVA's `test.afterEach` or `test.after.always` hooks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/encoding.spec.ts` around lines 8 - 16, The tmpFile function creates temp directories but never records or removes them; modify tmpFile to record each created directory path (e.g., push dir into a module-scoped array) and add an AVA teardown hook (test.afterEach or test.after.always) to iterate that array and remove each directory (use fs.rmSync or fs.rmdirSync with recursive/force) so temp dirs are cleaned up after tests; refer to tmpFile to collect dirs and to the AVA hooks test.afterEach/test.after.always to perform the cleanup.

__test__/mkdir.spec.ts (2)
45-60: Same fragile cleanup pattern in async tests.

Lines 49, 57–59 also manually call `rmdirSync` per level. The same suggestion applies.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/mkdir.spec.ts` around lines 45 - 60, The async tests for mkdir (test cases using tmpPath, mkdir, existsSync) use fragile per-level cleanup via multiple rmdirSync calls; replace those manual level-by-level removals with a single recursive removal (e.g., use fs.rmSync or fs.rmdirSync with recursive:true/force as supported) to delete the entire dir tree in one call after the assertion, and update both the 'mkdir: async should create a directory' and 'mkdir: async recursive' tests to call that single recursive remover instead of multiple rmdirSync(join(...)) lines.
26-36: Manual recursive directory cleanup is fragile.

Manually calling `rmdirSync` for each nested level is error-prone: if the structure changes or a test fails before cleanup completes, directories leak. Use `nodeFs.rmSync(dir, { recursive: true })` as you already do in the dual-run tests (lines 74–75).

Proposed fix

```diff
   t.true(existsSync(nested))

-  // cleanup
-  rmdirSync(join(dir, 'a', 'b', 'c'))
-  rmdirSync(join(dir, 'a', 'b'))
-  rmdirSync(join(dir, 'a'))
-  rmdirSync(dir)
+  nodeFs.rmSync(dir, { recursive: true })
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/mkdir.spec.ts` around lines 26 - 36, The test "mkdirSync: recursive should create nested dirs" currently cleans up by calling rmdirSync on each nested level which is fragile; replace the manual rmdirSync calls with a single nodeFs.rmSync(dir, { recursive: true }) (or fs.rmSync) to recursively remove the top-level tmpPath('recursive') directory in one operation after asserting existsSync(nested); update the cleanup to use rmSync on the `dir` variable and remove the individual rmdirSync calls referencing join(dir, 'a', ...).

benchmark/write_file.ts (1)
16-46: Inconsistent benchmarking approach vs. `benchmark/read_file.ts`.

This file rolls its own micro-benchmark loop while `benchmark/read_file.ts` uses the `mitata` library (which provides warmup, statistical analysis, and formatted output). Consider aligning on one approach for consistency and more robust statistical reporting (median, percentiles, etc.).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmark/write_file.ts` around lines 16 - 46, The custom micro-benchmark loop in benchmark(name: string, fn: () => void) and the reporting in runGroup are inconsistent with benchmark/read_file.ts which uses the mitata library; replace the homegrown timing logic and result aggregation with mitata's benchmarking API (including warmup and statistical reporting) so you get median/percentiles and proper formatted output, update benchmark to invoke mitata.bench/runner for each impl and collect the median or mean from mitata's results, then adapt runGroup to consume mitata's result objects to compute the baseline ratio and print the same padded name, time and "(baseline)/(slower)/(faster)" annotations using mitata-provided statistics rather than manual hrtime arrays.

__test__/link.spec.ts (1)
7-11: Temp directories are never cleaned up.

Unlike some other test files in this PR (e.g., `mkdir.spec.ts`), this file never removes the directories created by `tmpDir()`. Consider adding cleanup via `test.after.always` or `rmSync` at the end of each test to avoid accumulating leftover directories in the OS temp folder across repeated runs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/link.spec.ts` around lines 7 - 11, The tmpDir() helper creates temp directories but never deletes them; modify tests in __test__/link.spec.ts to track created paths (e.g., have tmpDir push each created path into a local array) and add a cleanup hook such as test.after.always(() => { createdPaths.forEach(p => rmSync(p, { recursive: true, force: true })) }) or ensure each test calls rmSync on its temp dir at teardown; update references to tmpDir() and the test suite to use this tracked cleanup so temp directories are removed after test runs.

__test__/symlink.spec.ts (2)
88-99: Async test validates the result with the library under test (`readlinkSync`), obscuring failures.

Line 97 uses `readlinkSync` from `../index.js` to verify the outcome of the async `symlink` call. If `readlinkSync` itself has a defect, a real bug in `symlink` could slip through undetected. Use `nodeReadlinkSync` (already imported on line 4) for independence.

♻️ Suggested fix

```diff
-  const resolved = readlinkSync(link)
+  const resolved = nodeReadlinkSync(link, 'utf8')
   t.is(resolved, target)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/symlink.spec.ts` around lines 88 - 99, The test "symlink: async should create a symbolic link" is validating the async symlink using the library's own readlinkSync, which can mask failures; update the assertion to use the imported nodeReadlinkSync (instead of readlinkSync) to independently resolve the created link after calling symlink(target, link) and assert nodeReadlinkSync(link) === target so the test validates the OS-level result rather than the library implementation.
88-99: Missing async `EEXIST` error-path test.

The sync path has an explicit `EEXIST` test (lines 52–60), but the async `symlink` function has no equivalent. Consider adding a test that calls `symlink(target, existingPath)` and asserts rejection with `EEXIST`.

Would you like me to draft the async EEXIST test case?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/symlink.spec.ts` around lines 88 - 99, Add an async test that mirrors the sync EEXIST case: create a tmp dir, write a file at target and also create an "existing" path (e.g., writeFileSync(existingPath, '') or create a symlink there), then call the async symlink(target, existingPath) and assert the Promise rejects with an EEXIST (or ENOENT depending on platform) error; reference the existing test block name "symlink: async should create a symbolic link" and the exported function "symlink" to place the new test alongside it and use helpers like tmpDir(), writeFileSync(), existsSync(), and t.throwsAsync/t.rejects to assert the expected error.

__test__/access.spec.ts (1)
7-10: Import permission constants from `node:constants` instead of hardcoding them.

The values are numerically correct, but defining them manually creates a maintenance risk if you ever need to combine flags (e.g., `R_OK | W_OK`) or reference them by name in error messages. The canonical source is `node:constants` (or `node:fs`'s `constants`).

♻️ Suggested refactor

```diff
-const F_OK = 0
-const R_OK = 4
-const W_OK = 2
-const X_OK = 1
+import { constants as fsConstants } from 'node:fs'
+const { F_OK, R_OK, W_OK, X_OK } = fsConstants
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/access.spec.ts` around lines 7 - 10, Replace the hardcoded permission constants F_OK, R_OK, W_OK, X_OK with the canonical exports from Node (e.g., import { F_OK, R_OK, W_OK, X_OK } from 'node:constants' or import { constants as fsConstants } from 'node:fs' and use fsConstants.F_OK etc.); remove the manual const declarations and update any usages in this test (such as in access.spec.ts references to F_OK/R_OK/W_OK/X_OK) to use the imported symbols so flag combinations and named references come from the authoritative source.

index.js (1)
575-626: Optional: add a startup guard to catch missing native binding properties.

The pattern of self-assigning properties after `module.exports = nativeBinding` (line 574) works correctly when all properties exist in the native binding. However, if any property is missing from the Rust implementation, it silently becomes `undefined`, and consumers will get a confusing runtime error (`TypeError: X is not a function`) rather than a clear initialization error.

While verification confirms all 50 exports (Dirent, Stats, and 48 functions) are properly implemented in the Rust source, adding a startup guard provides defensive protection against future additions:

Suggested optional guard pattern

```diff
+const REQUIRED_EXPORTS = [
+  'access', 'accessSync', 'appendFile', 'appendFileSync',
+  'chmod', 'chmodSync', 'chown', 'chownSync',
+  'copyFile', 'copyFileSync', 'cp', 'cpSync',
+  'exists', 'existsSync', 'glob', 'globSync',
+  'link', 'linkSync', 'lstat', 'lstatSync',
+  'mkdir', 'mkdirSync', 'mkdtemp', 'mkdtempSync',
+  'readdir', 'readdirSync', 'readFile', 'readFileSync',
+  'readlink', 'readlinkSync', 'realpath', 'realpathSync',
+  'rename', 'renameSync', 'rm', 'rmdir', 'rmdirSync', 'rmSync',
+  'stat', 'statSync', 'symlink', 'symlinkSync',
+  'truncate', 'truncateSync', 'unlink', 'unlinkSync',
+  'utimes', 'utimesSync', 'writeFile', 'writeFileSync'
+]
+for (const name of REQUIRED_EXPORTS) {
+  if (typeof nativeBinding[name] !== 'function') {
+    throw new Error(`Native binding is missing required export: ${name}`)
+  }
+}
+
 module.exports.Dirent = nativeBinding.Dirent
 // ...
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@index.js` around lines 575 - 626, The export block assigns many symbols from nativeBinding (e.g., Dirent, Stats, access, readFile, writeFile, etc.) which will silently become undefined if the native binding misses any; add a startup guard immediately after module.exports = nativeBinding that iterates the required export names (Dirent, Stats and the listed functions like access, appendFile, readFile, writeFile, etc.) and verifies each property exists and is of the expected type (constructor/function), and if any are missing throw a clear initialization Error stating which native binding symbol(s) are absent so consumers fail fast with a helpful message.
```ts
test('accessSync: should throw ENOENT (not EACCES) for missing file', (t) => {
  const target = '/tmp/no-such-file-access-' + Date.now()
  t.throws(() => accessSync(target), { message: /ENOENT/ })
})
```
Hardcoded /tmp/ path breaks on Windows.
Lines 98 and 103 both construct paths starting with `/tmp/`, which is Unix-only. The two tests calling `accessSync(target)` will always throw a path-not-found OS error on Windows (not necessarily `ENOENT` formatted identically), while the intent is cross-platform parity testing. All other tests in this file correctly use `tmpdir()` for isolation.
🐛 Proposed fix

```diff
 test('accessSync: should throw ENOENT (not EACCES) for missing file', (t) => {
-  const target = '/tmp/no-such-file-access-' + Date.now()
+  const target = join(tmpdir(), 'no-such-file-access-' + Date.now())
   t.throws(() => accessSync(target), { message: /ENOENT/ })
 })

 test('dual-run: accessSync ENOENT error message starts with ENOENT like node:fs', (t) => {
-  const target = '/tmp/no-such-file-access-dual-' + Date.now()
+  const target = join(tmpdir(), 'no-such-file-access-dual-' + Date.now())
```

Also applies to: 102-117
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@__test__/access.spec.ts` around lines 97 - 100, The test "accessSync: should
throw ENOENT (not EACCES) for missing file" hardcodes a Unix-only path; update
the target path construction to use the system temp directory and path joining
utilities (use tmpdir()/os.tmpdir() and path.join) instead of '/tmp/' so the
missing-file test is cross-platform; locate the test block referencing
accessSync and the target variable and replace the literal
'/tmp/no-such-file-access-' + Date.now() with a tmpdir-based path (e.g.,
path.join(tmpdir(), 'no-such-file-access-' + Date.now())) and do the same for
the other test(s) in the same file that currently use '/tmp/'.
```ts
import test from 'ava'
import { cpSync, cp } from '../index.js'
import * as nodeFs from 'node:fs'
import { writeFileSync, readFileSync, existsSync, mkdirSync, symlinkSync, readdirSync } from 'node:fs'
```
Unused import: `symlinkSync`.

`symlinkSync` is imported from `'node:fs'` but never used in any test.
Proposed fix

```diff
-import { writeFileSync, readFileSync, existsSync, mkdirSync, symlinkSync, readdirSync } from 'node:fs'
+import { writeFileSync, readFileSync, existsSync, mkdirSync, readdirSync } from 'node:fs'
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-import { writeFileSync, readFileSync, existsSync, mkdirSync, symlinkSync, readdirSync } from 'node:fs'
+import { writeFileSync, readFileSync, existsSync, mkdirSync, readdirSync } from 'node:fs'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@__test__/cp.spec.ts` at line 4, Remove the unused import symbol symlinkSync
from the import statement that currently imports writeFileSync, readFileSync,
existsSync, mkdirSync, symlinkSync, readdirSync from 'node:fs'; update the
import to only include the actually used functions (e.g., writeFileSync,
readFileSync, existsSync, mkdirSync, readdirSync) so the linter/test suite no
longer flags an unused import.
| @@ -0,0 +1,50 @@ | |||
| import test from 'ava' | |||
| import { readFileSync, readFile, writeFileSync } from '../index.js' | |||
Unused import: `writeFileSync`.

`writeFileSync` is imported from `'../index.js'` but never used in this file.
Proposed fix

```diff
-import { readFileSync, readFile, writeFileSync } from '../index.js'
+import { readFileSync, readFile } from '../index.js'
```

📝 Committable suggestion
```diff
-import { readFileSync, readFile, writeFileSync } from '../index.js'
+import { readFileSync, readFile } from '../index.js'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@__test__/read_file.spec.ts` at line 2, Remove the unused import writeFileSync
from the import statement in this test file; update the line that currently
imports readFileSync, readFile, writeFileSync from '../index.js' to only import
the symbols actually used (readFileSync and readFile) so there are no unused
imports (remove writeFileSync).
```ts
test('readlinkSync: should read symbolic link target', (t) => {
  const dir = tmpDir()
  const target = join(dir, 'target.txt')
  const link = join(dir, 'link.txt')
  writeFileSync(target, 'hello')
  symlinkSync(target, link)

  const result = readlinkSync(link)
  t.is(result, target)
})

test('readlinkSync: should match node:fs readlinkSync', (t) => {
  const dir = tmpDir()
  const target = join(dir, 'target2.txt')
  const link = join(dir, 'link2.txt')
  writeFileSync(target, 'hello')
  symlinkSync(target, link)

  const nodeResult = nodeReadlinkSync(link, 'utf8')
  const hyperResult = readlinkSync(link)
  t.is(hyperResult, nodeResult)
})

test('readlinkSync: should throw ENOENT on non-existent path', (t) => {
  t.throws(() => readlinkSync('/tmp/no-such-link-' + Date.now()), { message: /ENOENT/ })
})

test('readlinkSync: should throw EINVAL on non-symlink', (t) => {
  const dir = tmpDir()
  const file = join(dir, 'regular.txt')
  writeFileSync(file, 'not a symlink')
  t.throws(() => readlinkSync(file), { message: /EINVAL/ })
})

test('readlink: async should read symbolic link', async (t) => {
  const dir = tmpDir()
  const target = join(dir, 'target3.txt')
  const link = join(dir, 'link3.txt')
  writeFileSync(target, 'hello')
  symlinkSync(target, link)

  const result = await readlink(link)
  t.is(result, target)
})
```
Missing Windows platform skip for symlink tests.
`symlinkSync` on Windows typically requires elevated privileges (or Developer Mode enabled). Other test files in this PR (e.g., `chmod.spec.ts`) guard with `process.platform === 'win32'` checks. These readlink tests create symlinks but have no such guard, which could cause failures on Windows CI.
Suggested guard (apply to each symlink-dependent test)

```diff
 test('readlinkSync: should read symbolic link target', (t) => {
+  if (process.platform === 'win32') {
+    t.pass('Skipping symlink test on Windows')
+    return
+  }
   const dir = tmpDir()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@__test__/readlink.spec.ts` around lines 13 - 56, These tests create symlinks
using symlinkSync and then call readlinkSync/readlink (tests named
'readlinkSync: should read symbolic link', 'readlinkSync: should match node:fs
readlinkSync', 'readlinkSync: should throw EINVAL on non-symlink', and
'readlink: async should read symbolic link') but have no Windows guard; add a
platform check to skip these symlink-dependent tests on Windows
(process.platform === 'win32') — either change the test declarations to
conditional skips or add an early skip at the top of each test body (use
t.skip()/return or test.skip for the named tests) so symlink creation is not
attempted on Windows CI.
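A reusable predicate keeps the guard consistent across test files. This is a sketch; the helper name is hypothetical and not part of this PR:

```typescript
// Hypothetical helper for test setup: symlink creation on Windows needs
// elevation or Developer Mode, so symlink-dependent tests skip there.
function shouldSkipSymlinkTests(platform: string = process.platform): boolean {
  return platform === 'win32'
}

console.log(shouldSkipSymlinkTests('win32')) // true
console.log(shouldSkipSymlinkTests('linux')) // false
```

Each test body can then open with `if (shouldSkipSymlinkTests()) { t.pass(); return }` instead of repeating the platform check inline.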
```ts
test('statSync: atime.getTime() should be close to atimeMs', (t) => {
  const s = statSync('./package.json')
  t.true(Math.abs(s.atime.getTime() - s.atimeMs) < 1000)
})
```
Tolerance of < 1000 (1 second) for atime.getTime() - atimeMs is too permissive and masks real precision bugs.
Both s.atime and s.atimeMs derive from the same stat result, so they should agree within 1–2 ms at most (floating-point nanosecond-to-millisecond conversion rounding). A tolerance of 1000 ms would silently pass even if the Rust implementation constructs atime from the integer-seconds component only, discarding up to 999 ms of sub-second precision — precisely the class of bug the sub-second test at line 140 uses < 2 to catch.
🐛 Proposed fix

```diff
-  t.true(Math.abs(s.atime.getTime() - s.atimeMs) < 1000)
+  t.true(Math.abs(s.atime.getTime() - s.atimeMs) < 2)
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
test('statSync: atime.getTime() should be close to atimeMs', (t) => {
  const s = statSync('./package.json')
  t.true(Math.abs(s.atime.getTime() - s.atimeMs) < 2)
})
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@__test__/stat.spec.ts` around lines 83 - 86, The test "statSync:
atime.getTime() should be close to atimeMs" is using an overly permissive
tolerance (< 1000 ms); change the assertion to require sub-second agreement
(e.g., < 2 ms) so that s.atime.getTime() and s.atimeMs — both from statSync —
must match within expected millisecond rounding; update the comparison in that
test to use a small tolerance (around 1–2 ms) and keep the rest of the test
intact.
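To see why a ~1 ms bound is the right order of magnitude, consider the rounding involved. This is a standalone sketch, not code from this PR:

```typescript
// A stat timestamp of whole seconds plus nanoseconds converts to a
// fractional millisecond value; a Date can only hold whole milliseconds.
const secs = 1700000000
const nanos = 123456789
const atimeMs = secs * 1000 + nanos / 1e6 // 1700000000123.456789
const atime = new Date(Math.floor(atimeMs)) // truncated to whole ms

// The two representations can differ only by the sub-millisecond remainder.
const diff = Math.abs(atime.getTime() - atimeMs)
console.log(diff < 2) // true
```

If the implementation instead built `atime` from the integer-seconds component alone, `diff` would be up to 999 ms, which a `< 1000` tolerance would never catch.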
```rust
#[cfg(unix)]
{
  use std::io::Read;
  if let Ok(mut f) = fs::File::open("/dev/urandom") {
    let _ = f.read_exact(&mut bytes);
  } else {
    fill_fallback(&mut bytes);
  }
}
```
read_exact result is silently discarded — failed reads produce predictable suffixes.
On Line 18, let _ = f.read_exact(&mut bytes) discards the Result. If the read fails (or reads fewer than 6 bytes), bytes stays all-zero, producing the suffix "AAAAAA" every time (0 % 62 == 0). This degrades uniqueness guarantees and could cause the retry loop to exhaust all 10 attempts.
Proposed fix

```diff
 if let Ok(mut f) = fs::File::open("/dev/urandom") {
-    let _ = f.read_exact(&mut bytes);
+    if f.read_exact(&mut bytes).is_err() {
+        fill_fallback(&mut bytes);
+    }
 } else {
     fill_fallback(&mut bytes);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/mkdtemp.rs` around lines 14 - 22, The code currently ignores the Result
from f.read_exact(&mut bytes) in the unix block of mkdtemp.rs, so failed/partial
reads leave bytes zeroed and produce predictable suffixes; update the block to
check the Result of read_exact on the opened File (symbol: std::fs::File::open
and method read_exact) and on error call fill_fallback(&mut bytes) (or otherwise
fill remaining bytes) instead of discarding the error; ensure any read failure
is handled the same as the existing fallback path so the generated suffix uses
nonzero entropy.
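The failure mode is easy to demonstrate: with the buffer left all-zero, every byte indexes the same alphabet position. A sketch assuming an `A`-first, 62-character alphabet, which matches the `0 % 62 == 0` remark above:

```typescript
// With zeroed entropy, b % 62 is 0 for every byte, so the "random"
// suffix collapses to a single repeated character.
const ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'

function suffixFrom(bytes: number[]): string {
  return bytes.map((b) => ALPHABET[b % 62]).join('')
}

console.log(suffixFrom([0, 0, 0, 0, 0, 0])) // "AAAAAA"
```

Every colliding call then races for the same directory name, so the retry loop can burn through all 10 attempts without ever trying a different path.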
```rust
fn remove_with_retry(path: &Path, opts: &RmOptions) -> Result<()> {
  let max_retries = opts.max_retries.unwrap_or(0) as usize;
  let retry_delay = opts.retry_delay.unwrap_or(100) as u64;

  let mut last_err = None;
  for attempt in 0..=max_retries {
    if attempt > 0 {
      std::thread::sleep(std::time::Duration::from_millis(retry_delay));
    }
    match remove_recursive(path, opts) {
      Ok(()) => return Ok(()),
      Err(e) => last_err = Some(e),
    }
  }
  Err(last_err.unwrap())
}
```
Retry logic deviates from documented Node.js semantics in two ways.

- **No selective retry**: The doc comment (Lines 12–14) states retries apply only to `EBUSY`, `EMFILE`, `ENFILE`, `ENOTEMPTY`, or `EPERM`, but this implementation retries on all errors. A permission-denied or I/O error unrelated to transient lock contention will be needlessly retried.
- **Fixed delay instead of linear backoff**: The doc comment says "linear backoff of `retryDelay` ms longer on each try," matching Node.js behavior (`retryDelay * attempt`). The current implementation uses a fixed sleep of `retryDelay` every retry.
Proposed fix

```diff
 fn remove_with_retry(path: &Path, opts: &RmOptions) -> Result<()> {
     let max_retries = opts.max_retries.unwrap_or(0) as usize;
     let retry_delay = opts.retry_delay.unwrap_or(100) as u64;
     let mut last_err = None;
     for attempt in 0..=max_retries {
         if attempt > 0 {
-            std::thread::sleep(std::time::Duration::from_millis(retry_delay));
+            std::thread::sleep(std::time::Duration::from_millis(retry_delay * attempt as u64));
         }
         match remove_recursive(path, opts) {
             Ok(()) => return Ok(()),
-            Err(e) => last_err = Some(e),
+            Err(e) => {
+                let msg = e.to_string();
+                let retryable = msg.contains("EBUSY")
+                    || msg.contains("EMFILE")
+                    || msg.contains("ENFILE")
+                    || msg.contains("ENOTEMPTY")
+                    || msg.contains("EPERM")
+                    || msg.contains("resource busy")
+                    || msg.contains("not empty");
+                if !retryable {
+                    return Err(e);
+                }
+                last_err = Some(e);
+            }
         }
     }
     Err(last_err.unwrap())
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/rm.rs` around lines 70 - 85, The retry loop in remove_with_retry
currently retries on all errors and uses a fixed delay; change it to only retry
when the returned error from remove_recursive matches the transient error kinds
(EBUSY, EMFILE, ENFILE, ENOTEMPTY, EPERM) and for other errors immediately
return Err(e). Also implement linear backoff by sleeping for
opts.retry_delay.unwrap_or(100) * attempt milliseconds (i.e.,
Duration::from_millis(retry_delay * attempt as u64)) on attempts > 0, and keep
using opts.max_retries for the attempt bound; reference remove_with_retry,
remove_recursive, RmOptions, opts.max_retries, and opts.retry_delay when making
these changes.
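The documented backoff schedule is simple to state in isolation. A sketch of the schedule only, assuming Node's `retryDelay * attempt` behavior; this is not the crate's code:

```typescript
// Linear backoff: the Nth retry waits N * retryDelay milliseconds,
// so maxRetries = 3, retryDelay = 100 yields waits of 100, 200, 300 ms.
function backoffDelays(maxRetries: number, retryDelay: number): number[] {
  const delays: number[] = []
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    delays.push(retryDelay * attempt)
  }
  return delays
}

console.log(backoffDelays(3, 100)) // [100, 200, 300]
```

A fixed delay, by contrast, gives contended handles no extra time to be released on later attempts, which is exactly when the extra time matters.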
```rust
use std::time::UNIX_EPOCH;
let atime_ms = meta
  .accessed()
  .ok()
  .and_then(|t| t.duration_since(UNIX_EPOCH).ok())
  .map(|d| d.as_secs_f64() * 1000.0)
  .unwrap_or(0.0);
let mtime_ms = meta
  .modified()
  .ok()
  .and_then(|t| t.duration_since(UNIX_EPOCH).ok())
  .map(|d| d.as_secs_f64() * 1000.0)
  .unwrap_or(0.0);
```
Pre-epoch atime/mtime/birthtime silently fall back to 0.
duration_since(UNIX_EPOCH) returns Err for timestamps before 1970, so .ok().unwrap_or(0.0) collapses them to epoch 0. This contrasts with ctime_ms (Line 27) which correctly handles negative values via MetadataExt::ctime(). Consider using the MetadataExt accessors (atime(), atime_nsec(), mtime(), mtime_nsec()) for atime_ms and mtime_ms as well, for consistency and correctness with pre-epoch files.
Suggested approach for atime_ms

```diff
-    let atime_ms = meta
-        .accessed()
-        .ok()
-        .and_then(|t| t.duration_since(UNIX_EPOCH).ok())
-        .map(|d| d.as_secs_f64() * 1000.0)
-        .unwrap_or(0.0);
+    let atime_ms = (meta.atime() as f64) * 1000.0 + (meta.atime_nsec() as f64) / 1_000_000.0;
```

Apply the same pattern for `mtime_ms`. For `birthtime_ms`, `MetadataExt` doesn't expose a dedicated accessor, so the `SystemTime` approach with a fallback is acceptable there.

📝 Committable suggestion

```rust
use std::time::UNIX_EPOCH;
let atime_ms = (meta.atime() as f64) * 1000.0 + (meta.atime_nsec() as f64) / 1_000_000.0;
let mtime_ms = meta
  .modified()
  .ok()
  .and_then(|t| t.duration_since(UNIX_EPOCH).ok())
  .map(|d| d.as_secs_f64() * 1000.0)
  .unwrap_or(0.0);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/stat.rs` around lines 14 - 26, The atime_ms and mtime_ms calculations
currently use SystemTime.duration_since(UNIX_EPOCH) which turns pre-1970
timestamps into 0.0; replace those with the libc-backed MetadataExt accessors to
preserve negative/pre-epoch times: compute atime_ms using MetadataExt::atime()
and MetadataExt::atime_nsec() (combine seconds and nanos into milliseconds,
keeping sign), and compute mtime_ms analogously with MetadataExt::mtime() and
MetadataExt::mtime_nsec(); leave birthtime_ms handling as-is (SystemTime +
fallback) and ensure you import/use std::os::unix::fs::MetadataExt to access
those methods.
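The arithmetic behind the suggestion is worth spelling out: combining signed seconds with nanoseconds keeps pre-epoch values negative instead of clamping them to zero. A standalone sketch of the conversion, not the crate's code:

```typescript
// Pre-1970 timestamps have negative whole seconds; this conversion
// preserves the sign, where duration_since(UNIX_EPOCH) returns Err
// and the .unwrap_or(0.0) path above collapses the value to 0.
function toMillis(secs: number, nsec: number): number {
  return secs * 1000 + nsec / 1e6
}

console.log(toMillis(-86400, 0)) // -86400000 (one day before the epoch)
console.log(toMillis(1700000000, 500000000)) // 1700000000500
```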
```rust
let meta = meta.map_err(|e| {
  if e.kind() == std::io::ErrorKind::PermissionDenied {
    Error::from_reason(format!(
      "EACCES: permission denied, stat '{}'",
      path.to_string_lossy()
    ))
  } else {
    Error::from_reason(format!(
      "ENOENT: no such file or directory, stat '{}'",
      path.to_string_lossy()
    ))
  }
})?;
```
All non-PermissionDenied errors are incorrectly mapped to ENOENT.
The else branch (Line 106–111) maps every error that isn't PermissionDenied to an ENOENT message, including errors like InvalidInput, NotADirectory, TimedOut, etc. This masks the real failure and makes debugging hard.
Suggested fix

```diff
 let meta = meta.map_err(|e| {
     if e.kind() == std::io::ErrorKind::PermissionDenied {
         Error::from_reason(format!(
             "EACCES: permission denied, stat '{}'",
             path.to_string_lossy()
         ))
-    } else {
+    } else if e.kind() == std::io::ErrorKind::NotFound {
         Error::from_reason(format!(
             "ENOENT: no such file or directory, stat '{}'",
             path.to_string_lossy()
         ))
+    } else {
+        Error::from_reason(format!(
+            "{}, stat '{}'",
+            e, path.to_string_lossy()
+        ))
     }
 })?;
```

📝 Committable suggestion

```rust
let meta = meta.map_err(|e| {
  if e.kind() == std::io::ErrorKind::PermissionDenied {
    Error::from_reason(format!(
      "EACCES: permission denied, stat '{}'",
      path.to_string_lossy()
    ))
  } else if e.kind() == std::io::ErrorKind::NotFound {
    Error::from_reason(format!(
      "ENOENT: no such file or directory, stat '{}'",
      path.to_string_lossy()
    ))
  } else {
    Error::from_reason(format!(
      "{}, stat '{}'",
      e, path.to_string_lossy()
    ))
  }
})?;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/stat.rs` around lines 100 - 112, The error mapping in the meta.map_err
closure incorrectly maps every non-PermissionDenied error to ENOENT; change the
closure in stat.rs (where meta is mapped) to match on e.kind() and handle at
least ErrorKind::PermissionDenied -> EACCES and ErrorKind::NotFound -> ENOENT,
and for all other kinds return the original error information (e.g., include
e.kind() or e.to_string() in Error::from_reason or propagate the underlying
io::Error) so that InvalidInput/NotADirectory/TimedOut etc. are not masked;
update the closure around the meta variable (the map_err for meta) to use a
match on e.kind() and construct appropriate Error::from_reason messages for each
case or forward the original error.
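For reference, the mapping being asked for has this shape when sketched in plain TypeScript. Illustrative only; the real mapping lives in the Rust layer, and the kind names mirror `std::io::ErrorKind`:

```typescript
// Map a low-level error kind to a Node-style message, falling through
// to the original error text instead of masking everything as ENOENT.
function mapStatError(kind: string, path: string): string {
  switch (kind) {
    case 'PermissionDenied':
      return `EACCES: permission denied, stat '${path}'`
    case 'NotFound':
      return `ENOENT: no such file or directory, stat '${path}'`
    default:
      return `${kind}, stat '${path}'`
  }
}

console.log(mapStatError('TimedOut', '/x')) // TimedOut, stat '/x'
```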
```rust
match flag {
  "w" => { open_opts.write(true).create(true).truncate(true); }
  "wx" | "xw" => { open_opts.write(true).create_new(true); }
  "a" => { open_opts.append(true).create(true); }
  "ax" | "xa" => { open_opts.append(true).create_new(true); }
  _ => { open_opts.write(true).create(true).truncate(true); }
}
```
Unknown write flags silently default to w semantics.
Same concern as in read_file.rs — an unrecognized flag value silently falls through to write + create + truncate. Consider erroring on invalid flags to avoid silent data loss (e.g., a caller intending append but passing a typo like "ap" would truncate instead).
Suggested change

```diff
-    _ => { open_opts.write(true).create(true).truncate(true); }
+    other => {
+        return Err(Error::from_reason(format!(
+            "ERR_INVALID_OPT_VALUE: invalid value for flag: '{}'", other
+        )));
+    }
```

📝 Committable suggestion

```rust
match flag {
  "w" => { open_opts.write(true).create(true).truncate(true); }
  "wx" | "xw" => { open_opts.write(true).create_new(true); }
  "a" => { open_opts.append(true).create(true); }
  "ax" | "xa" => { open_opts.append(true).create_new(true); }
  other => {
    return Err(Error::from_reason(format!(
      "ERR_INVALID_OPT_VALUE: invalid value for flag: '{}'", other
    )));
  }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/write_file.rs` around lines 105 - 111, The match on the `flag` variable
in write_file.rs currently uses the `_` arm to silently apply "w" semantics;
change this to treat unknown flags as an error instead of defaulting. Replace
the `_` arm so it returns an Err (or propagates a descriptive error) that
includes the invalid `flag` string, update the enclosing function (e.g.,
write_file) to use a Result return type if it doesn't already, and ensure
callers handle the error; keep the explicit arms for "w", "wx"/"xw", "a", and
"ax"/"xa" unchanged and reference `open_opts` as before.
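A caller-side sketch of the validation being proposed. Illustrative TypeScript; the flag set matches the arms in the Rust `match` above, and the exact error string format is an assumption:

```typescript
// Reject unknown write flags up front so a typo like "ap" fails loudly
// instead of silently truncating the target file.
const WRITE_FLAGS = new Set(['w', 'wx', 'xw', 'a', 'ax', 'xa'])

function assertWriteFlag(flag: string): void {
  if (!WRITE_FLAGS.has(flag)) {
    throw new TypeError(`ERR_INVALID_OPT_VALUE: invalid value for flag: '${flag}'`)
  }
}

assertWriteFlag('a') // ok: append semantics as intended
// assertWriteFlag('ap') would throw instead of truncating
```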
Actionable comments posted: 8
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
.github/workflows/CI.yml (1)
114-114:⚠️ Potential issue | 🟠 MajorAlign artifact actions to a consistent version:
upload-artifact@v5anddownload-artifact@v4create a mismatch.The
buildjob uploads withactions/upload-artifact@v5(line 114), while all three download steps useactions/download-artifact@v4(lines 155, 211, 255). Per the AI-generated summary, this PR downgradeddownload-artifactfrom v6 to v4, which is a regression.While cross-compatibility exists—v4, v5, and v6 all use the same backend architecture—the mismatch leaves your workflow on a significantly outdated downloader. Current stable versions are
upload-artifact@v6anddownload-artifact@v7.Align all artifact actions to a consistent pair. If upgrading to Node.js 24 (required by v6/v7) is not yet desired, revert to the
v4pair instead:Option A: upgrade to latest-compatible
- uses: actions/upload-artifact@v5 + uses: actions/upload-artifact@v6 # test-macOS-windows-binding job - uses: actions/download-artifact@v4 + uses: actions/download-artifact@v6 # test-linux-binding job - uses: actions/download-artifact@v4 + uses: actions/download-artifact@v6 # publish job - uses: actions/download-artifact@v4 + uses: actions/download-artifact@v6Note:
upload-artifact@v6anddownload-artifact@v6+require Node.js 24;v7of download-artifact requires GitHub Actions Runner >= 2.327.1.Option B: revert to consistent v4 pair
- uses: actions/upload-artifact@v5 + uses: actions/upload-artifact@v4Keep
download-artifact@v4as-is.Also applies to: 155-155, 211-211, 255-255
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/CI.yml at line 114, The CI workflow mixes artifact action versions—`uses: actions/upload-artifact@v5` and multiple `uses: actions/download-artifact@v4`—so make them consistent: either upgrade both upload and download to the current stable pair (e.g., change `actions/upload-artifact@v5` → `actions/upload-artifact@v6` and all `actions/download-artifact@v4` → `actions/download-artifact@v7` or `@v6`) ensuring runner/Node.js requirements are met, or revert the uploader to the v4 family so both uploader and downloader use `@v4`; update every occurrence of `actions/upload-artifact` and `actions/download-artifact` in the workflow to the chosen matching versions.README.md (1)
29-29:⚠️ Potential issue | 🟡 MinorFullwidth colon character (:) instead of ASCII colon (:).
Line 29 uses
✨:with a Unicode fullwidth colon (U+FF1A), while all other legend entries use standard ASCII colons. This is likely a copy-paste artifact.-> - ✨:New feature from hyper-fs +> - ✨: New feature from hyper-fs🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 29, Replace the Unicode fullwidth colon after the emoji in the README legend entry so it matches the ASCII colon used elsewhere; specifically update the string "✨:New feature from hyper-fs" to use a standard colon ("✨: New feature from hyper-fs") to remove the U+FF1A character and ensure consistent punctuation.
🟡 Minor comments (14)
CONTRIBUTING.md-11-11 (1)
11-11:⚠️ Potential issue | 🟡 MinorBroken TOC anchor —
#编写-rust-实现has no matching heading.No
## 编写 Rust 实现heading exists anywhere in the document, so this TOC link is a dead fragment. The nearest relevant section heading is### 第二步:创建 Rust 源文件(Line 135), whose GitHub-generated anchor is#第二步创建-rust-源文件.🔧 Proposed fix
-[编写 Rust 实现](`#编写-rust-实现`) +[编写 Rust 实现](`#第二步创建-rust-源文件`)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CONTRIBUTING.md` at line 11, The TOC entry "[编写 Rust 实现]" points to a nonexistent anchor `#编写-rust-实现`; update the TOC or headings so the fragment matches an actual heading. Either change the TOC link target to the correct GitHub-generated anchor for the existing section (e.g., replace `#编写-rust-实现` with `#第二步创建-rust-源文件`) or add a matching `## 编写 Rust 实现` heading to the document so the link resolves; ensure the link text and target remain consistent with the heading you choose.CONTRIBUTING.md-308-311 (1)
308-311:⚠️ Potential issue | 🟡 MinorIncorrect test template —
symlinkSyncdoes NOT throw ENOENT for a non-existent target.On Unix/Linux (and most platforms), creating a symlink to a non-existent target is valid — it produces a dangling symlink and
symlinkSyncsucceeds silently. ENOENT is only thrown when the parent directory of the link path (dest) doesn't exist. The current example will mislead contributors into writing assertions that never fail, giving false confidence.The error-handling test should instead target a missing parent directory for
dest, or a pre-existing link path (for EEXIST), matching actual Node.js behaviour:🔧 Proposed correction
-test('symlinkSync: should throw on non-existent target', (t) => { - // 测试错误处理 - t.throws(() => symlinkSync('/no/such/path', dest), { message: /ENOENT/ }) -}) +test('symlinkSync: should throw ENOENT when link parent dir does not exist', (t) => { + // 测试错误处理:链接路径的父目录不存在时应报 ENOENT + const missingParent = join(dir, 'no-such-dir', 'link') + t.throws(() => symlinkSync(target, missingParent), { message: /ENOENT/ }) +}) + +test('symlinkSync: should create a dangling symlink (target need not exist)', (t) => { + // Unix 允许创建指向不存在路径的悬挂符号链接 + const dest = join(dir, 'dangling-link') + t.notThrows(() => symlinkSync('/no/such/path', dest)) +})🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CONTRIBUTING.md` around lines 308 - 311, The test "symlinkSync: should throw on non-existent target" is wrong because symlinkSync('/no/such/path', dest) creates a dangling symlink instead of throwing; update the test around the symlinkSync call to instead trigger ENOENT by ensuring the parent directory of dest does not exist (e.g., use a dest path under a non-existent directory) and assert that symlinkSync throws with ENOENT; alternatively add a separate test that asserts EEXIST when dest already exists to cover the existing-link error case.__test__/chown.spec.ts-43-45 (1)
43-45:⚠️ Potential issue | 🟡 MinorMissing Windows skip guard on ENOENT tests.
All other tests in this file skip on Windows (lines 16–19, 26–29, 48–51), but these two ENOENT tests do not. On Windows,
/tmp/no-such-file-...is not a valid Win32 path andchownmay return an error other thanENOENT, causing the regex match to fail.🛡️ Proposed fix
test('chownSync: should throw on non-existent file', (t) => { + if (process.platform === 'win32') { + t.pass('Skipping chown test on Windows') + return + } t.throws(() => chownSync('/tmp/no-such-file-' + Date.now(), 0, 0), { message: /ENOENT/ }) })test('chown: async should throw on non-existent file', async (t) => { + if (process.platform === 'win32') { + t.pass('Skipping chown test on Windows') + return + } await t.throwsAsync(async () => await chown('/tmp/no-such-file-' + Date.now(), 0, 0), { message: /ENOENT/ }) })Also applies to: 57-59
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/chown.spec.ts` around lines 43 - 45, Two ENOENT tests for chownSync and chown that use a Unix-style /tmp path need to be skipped on Windows; add a Windows guard at the start of each test (the test labeled "chownSync: should throw on non-existent file" and the test for "chown: should throw on non-existent file") by checking process.platform === 'win32' and calling t.skip('skip on Windows') (or use test.skip when appropriate) so the tests don’t run on Win32 and avoid path/error mismatches.src/symlink.rs-39-42 (1)
39-42:⚠️ Potential issue | 🟡 MinorWindows
"junction"type falls back tosymlink_dir, which is not a true junction.
std::os::windows::fs::symlink_dircreates a directory symbolic link, not a junction point. Junctions and directory symlinks have different semantics on Windows (junctions don't require elevated privileges, have different reparse point tags, and behave differently across volumes). Node.js uses a separate code path (fs.constants.UV_FS_SYMLINK_JUNCTION) for true junction creation.If junction support isn't critical for now, consider either documenting this limitation or returning an error for
"junction"instead of silently creating a directory symlink.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/symlink.rs` around lines 39 - 42, The "junction" match arm currently calls std::os::windows::fs::symlink_dir (creating a directory symlink), which is incorrect for true Windows junctions; change the "junction" arm to return an explicit error instead of silently creating a dir symlink (e.g., return Err(std::io::Error::new(std::io::ErrorKind::Unsupported, "junction symlinks are not supported; use 'symlink_dir' or implement junction creation"))), or alternatively add a clear comment and documentation about the limitation; update the code around the "junction" branch (the match arm that now calls std::os::windows::fs::symlink_dir) to make this behavior explicit.src/read_file.rs-85-98 (1)
85-98:⚠️ Potential issue | 🟡 MinorUnknown flag silently falls back to read-only instead of erroring.
Line 97: An unrecognized
flagvalue (e.g., a typo like"rw") silently opens the file as read-only. Node.js throwsERR_INVALID_OPT_VALUEfor invalid flags. This could mask bugs in consumer code.Proposed fix
- _ => { open_opts.read(true); } + other => { + return Err(Error::from_reason(format!( + "ERR_INVALID_OPT_VALUE: The value \"{}\" is invalid for option \"flag\"", + other + ))); + }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/read_file.rs` around lines 85 - 98, The match on the variable `flag` currently falls through to a default read-only branch, silently accepting unknown values; instead update the match's `_` arm to return/propagate an error for invalid flags (e.g., create and return an Err with a clear message like "ERR_INVALID_OPT_VALUE: invalid flag: {flag}") rather than mutating `open_opts`, so callers can detect typos; locate the `flag` variable, the `match` block that configures `fs::OpenOptions` (working on `open_opts`) and change the function signature/flow to return a Result so the invalid-flag case returns an error instead of using the read-only fallback.__test__/utimes.spec.ts-43-45 (1)
43-45:⚠️ Potential issue | 🟡 MinorHardcoded
/tmppath breaks on Windows.Lines 44 and 60 use
'/tmp/no-such-file-...', which is not a valid path on Windows — the error may not beENOENT, causing these tests to fail. The rest of the helpers in this suite already usetmpdir()fromnode:os.🛠 Proposed fix
- t.throws(() => utimesSync('/tmp/no-such-file-' + Date.now(), 1000, 2000), { message: /ENOENT/ }) + t.throws(() => utimesSync(join(tmpdir(), 'no-such-file-' + Date.now()), 1000, 2000), { message: /ENOENT/ })- await t.throwsAsync(async () => await utimes('/tmp/no-such-file-' + Date.now(), 1000, 2000), { + await t.throwsAsync(async () => await utimes(join(tmpdir(), 'no-such-file-' + Date.now()), 1000, 2000), {Also applies to: 59-63
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/utimes.spec.ts` around lines 43 - 45, The tests for utimesSync (and the similar test at lines 59-63) use a hardcoded '/tmp/...' path which fails on Windows; change these to build the non-existent filename using node:os tmpdir() and node:path join (e.g., const nonExistent = path.join(os.tmpdir(), `no-such-file-${Date.now()}`)) and then call utimesSync(nonExistent, ...) so the test reliably throws ENOENT across platforms; update both occurrences (the test named "utimesSync: should throw on non-existent file" and the matching second test) to import/use os.tmpdir() and path.join instead of the hardcoded '/tmp' string.__test__/utimes.spec.ts-65-75 (1)
65-75:⚠️ Potential issue | 🟡 MinorMisleading test title — the API does not accept
Dateobjects.The test is titled "should accept Date objects as atime/mtime" but the implementation converts
Dateto Unix seconds manually (atime.getTime() / 1000) before callingutimesSync. The underlying Rust signature isutimes(path, atime: f64, mtime: f64)and does not acceptDateobjects. The test actually documents the caller-side conversion pattern, not a Date-accepting overload.Consider renaming to something like
'utimesSync: should accept epoch-second timestamps derived from Date objects'.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/utimes.spec.ts` around lines 65 - 75, Rename the misleading test title to accurately reflect behavior: change the test description string in the test with name 'utimesSync: should accept Date objects as atime/mtime' to something like 'utimesSync: should accept epoch-second timestamps derived from Date objects' (or similar), since the test converts Date to seconds before calling utimesSync; no API change is needed—only update the test title to reference utimesSync and the caller-side conversion from Date to epoch seconds.__test__/mkdtemp.spec.ts-27-29 (1)
27-29:⚠️ Potential issue | 🟡 MinorHardcoded
/tmppath breaks on Windows (same issue asutimes.spec.tsandtruncate.spec.ts).Lines 28 and 41 both use
'/tmp/no-such-parent-dir-999/prefix-', an invalid path on Windows that will produce a different error thanENOENT, causing these tests to fail on that platform.🛠 Proposed fix
+import { join } from 'node:path' - t.throws(() => mkdtempSync('/tmp/no-such-parent-dir-999/prefix-'), { message: /ENOENT/ }) + t.throws(() => mkdtempSync(join(tmpdir(), 'no-such-parent-dir-999', 'prefix-')), { message: /ENOENT/ })- await t.throwsAsync(async () => await mkdtemp('/tmp/no-such-parent-dir-999/prefix-'), { + await t.throwsAsync(async () => await mkdtemp(join(tmpdir(), 'no-such-parent-dir-999', 'prefix-')), {Also applies to: 40-44
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/mkdtemp.spec.ts` around lines 27 - 29, The test uses a hardcoded '/tmp/...' path which fails on Windows; change the non-existent-parent path to be platform-independent by constructing it with path.join(os.tmpdir(), 'no-such-parent-dir-999', 'prefix-') and import/require os and path at the top of the test file; update the two assertions in the "mkdtempSync: should throw ENOENT for non-existent parent" test (and the similar case around lines 40-44) to call mkdtempSync with that joined path so the test asserts ENOENT on all platforms.__test__/truncate.spec.ts-30-32 (1)
30-32:⚠️ Potential issue | 🟡 MinorHardcoded
/tmppath — same Windows compatibility issue as in other test files.🛠 Proposed fix
+import { join } from 'node:path' +import { tmpdir } from 'node:os' - t.throws(() => truncateSync('/tmp/no-such-file-' + Date.now()), { message: /ENOENT/ }) + t.throws(() => truncateSync(join(tmpdir(), 'no-such-file-' + Date.now())), { message: /ENOENT/ })🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/truncate.spec.ts` around lines 30 - 32, The test 'truncateSync: should throw on non-existent file' hardcodes a POSIX /tmp path which breaks on Windows; update the test to build the non-existent path using the OS temp directory (e.g., via os.tmpdir()) and path.join to create the filename used with truncateSync, keeping the same assertion that truncateSync('/tmp/...') throws an ENOENT-like error; change the literal '/tmp/no-such-file-' + Date.now() to a generated path using os.tmpdir() and path.join so the test is cross-platform.__test__/realpath.spec.ts-38-52 (1)
38-52:⚠️ Potential issue | 🟡 MinorTemp directories created by
tmpDir()are never cleaned up.Both symlink tests call
tmpDir(), write files and symlinks into the returned directory, and then exit without removing it. Each test run accumulates a stale directory pair under the system's temp path.🛡️ Proposed fix (apply the same pattern to both tests)
test('dual-run: realpathSync should resolve symlink to real path', (t) => { const dir = tmpDir() + t.teardown(() => nodeFs.rmSync(dir, { recursive: true, force: true })) const target = join(dir, 'real-target.txt')test('realpath: async dual-run should resolve symlink same as node:fs', async (t) => { const dir = tmpDir() + t.teardown(() => nodeFs.rmSync(dir, { recursive: true, force: true })) const target = join(dir, 'async-target.txt')Also applies to: 60-70
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/realpath.spec.ts` around lines 38 - 52, The tmpDir() created in the 'dual-run: realpathSync should resolve symlink to real path' test (and the similar test at 60-70) is never removed, leaving stale temp directories; fix by registering a teardown that removes the temp directory after the test (e.g. call t.teardown(() => nodeFs.rmSync(dir, { recursive: true, force: true })) or equivalent nodeFs.rmdirSync/removeSync), placing this after tmpDir() is created so the cleanup runs regardless of test outcome; apply the same pattern to the other symlink test(s) that call tmpDir().

__test__/mkdir.spec.ts-8-10 (1)
8-10: ⚠️ Potential issue | 🟡 Minor

`tmpPath` lacks a random component, risking path collision in parallel test execution. `Date.now()` alone is not sufficient when AVA runs tests concurrently. Compare with `tmpDir()` in `realpath.spec.ts`, which appends `Math.random().toString(36).slice(2)`.

♻️ Proposed fix
 function tmpPath(name: string): string {
-  return join(tmpdir(), `hyper-fs-test-mkdir-${Date.now()}-${name}`)
+  return join(tmpdir(), `hyper-fs-test-mkdir-${Date.now()}-${Math.random().toString(36).slice(2)}-${name}`)
 }
Verify each finding against the current code and only fix it if needed. In `@__test__/mkdir.spec.ts` around lines 8 - 10, tmpPath currently uses Date.now() only, which can collide under concurrent AVA runs; modify tmpPath to append a secure/random component (e.g., Math.random().toString(36).slice(2) or crypto.randomBytes) to the returned path so each invocation yields a unique directory name, updating the tmpPath function to include that random token alongside Date.now() for collision-resistant test dirs.

benchmark/read_file.ts-20-20 (1)
20-20: ⚠️ Potential issue | 🟡 Minor

`'package.json'` is CWD-relative and will fail if the benchmark is run from any directory other than the project root. Use a path anchored to the file location instead:
♻️ Proposed fix
-const realFile = 'package.json'
+import { fileURLToPath } from 'node:url'
+const realFile = path.join(path.dirname(fileURLToPath(import.meta.url)), '..', 'package.json')
Verify each finding against the current code and only fix it if needed. In `@benchmark/read_file.ts` at line 20, The constant realFile currently set to 'package.json' is CWD-relative and will break if the benchmark runs outside the repo root; update the assignment for realFile in benchmark/read_file.ts to build an absolute path anchored to the file's location (e.g., use __dirname or import.meta.url with path.join/path.resolve) so it always points to the package.json next to this source file; ensure any needed import of path/URL helpers is added and replace uses of realFile with the new absolute path.

__test__/mkdir.spec.ts-85-89 (1)
85-89: ⚠️ Potential issue | 🟡 Minor

`if (nodeResult !== undefined && hyperResult !== undefined)` silently skips path assertions when `hyperResult` is `null`. `mkdirSync` is typed as `string | null` in `index.d.ts` (NAPI-RS maps `undefined` to `null`). If `hyperResult` is `null`, the condition is `false` and the `endsWith` checks at lines 87–88 are never executed, making the test pass vacuously. Use a nullish check instead and assert the result explicitly.

♻️ Proposed fix
-t.is(typeof hyperResult, typeof nodeResult)
-if (nodeResult !== undefined && hyperResult !== undefined) {
-  t.true(nodeResult.endsWith('node-return'))
-  t.true(hyperResult.endsWith('hyper-return'))
-}
+t.truthy(nodeResult)
+t.truthy(hyperResult)
+t.true((nodeResult as string).endsWith('node-return'))
+t.true((hyperResult as string).endsWith('hyper-return'))
Verify each finding against the current code and only fix it if needed. In `@__test__/mkdir.spec.ts` around lines 85 - 89, The test currently skips path assertions when hyperResult is null because it uses a strict undefined check; change the guard to a nullish check so both null and undefined are handled (e.g., test that nodeResult != null and hyperResult != null), and add an explicit assertion that hyperResult is not null (and/or nodeResult is not null) before calling .endsWith to avoid vacuous passing; update the checks around the hyperResult/nodeResult variables (the assertions using nodeResult.endsWith and hyperResult.endsWith) to run only after these explicit non-null checks.

__test__/realpath.spec.ts-54-58 (1)
54-58: ⚠️ Potential issue | 🟡 Minor

Hardcoded `'src'` path makes the test CWD-dependent. `realpathSync('src')` and `nodeFs.realpathSync('src')` will both throw `ENOENT` if the test runner's working directory is not the project root (or if the `src` directory is ever renamed). Use a path that is guaranteed to exist, e.g. `'.'` or a `__dirname`-relative path.

♻️ Proposed fix
 test('dual-run: realpathSync should resolve relative path same as node:fs', (t) => {
-  const nodeResult = nodeFs.realpathSync('src')
-  const hyperResult = realpathSync('src')
+  const nodeResult = nodeFs.realpathSync('.')
+  const hyperResult = realpathSync('.')
   t.is(hyperResult, nodeResult)
 })
Verify each finding against the current code and only fix it if needed. In `@__test__/realpath.spec.ts` around lines 54 - 58, The test "dual-run: realpathSync should resolve relative path same as node:fs" uses a hardcoded 'src' which makes it CWD-dependent and may throw ENOENT; update the test to use a guaranteed-existing path instead (e.g. '.' or a path derived from __dirname) when calling nodeFs.realpathSync and realpathSync so both resolve the same existing location; modify the calls to realpathSync('src') and nodeFs.realpathSync('src') in that test to use the chosen guaranteed path (keeping the assertion t.is(hyperResult, nodeResult) intact).
🧹 Nitpick comments (21)
.gitignore (1)
130-137: LGTM — minor nit: translate the inline comment to English.

All additions are sensible: `/npm` restores the build-output ignore, `.cursor/` keeps Cursor AI editor state out of VCS, and `TODO.md` prevents a local scratchpad from being committed. The only nit is line 136:
-# 本地开发备忘,不进入版本控制
+# Local development notes, not tracked in version control
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.gitignore around lines 130 - 137, Replace the Chinese-only inline comment "本地开发备忘,不进入版本控制" ("Local development notes, not tracked in version control") in .gitignore with an English translation (e.g., "Local development notes — do not commit") so the repository comments are accessible to international contributors; leave the surrounding entries (/npm, .cursor/, TODO.md) unchanged and keep the playful "vibe coding 🤓" line as-is or translate it if you prefer consistency.

CONTRIBUTING.md (1)
69-104: Add a language identifier to the fenced code block (MD040).

The architecture tree block lacks a language specifier, flagged by markdownlint MD040. Use `text` or `plaintext` to suppress the warning.

🔧 Proposed fix
-```
+```text
 hyper-fs/
 ├── src/
Verify each finding against the current code and only fix it if needed. In `@CONTRIBUTING.md` around lines 69 - 104, The fenced architecture tree in CONTRIBUTING.md is missing a language identifier which triggers markdownlint MD040; edit the fenced code block that contains the hyper-fs directory tree and add a language specifier such as "text" or "plaintext" (e.g., change ``` to ```text) so the block is treated as plain text and the linter warning is suppressed.

__test__/rm.spec.ts (1)
255-262: Misleading test name — retry behavior is not exercised.
`'sync: maxRetries should eventually succeed on transient failure'` implies a simulated transient failure is injected, but the test removes a perfectly healthy file. This only verifies the options are accepted without error. Rename it to reflect what is actually being tested (e.g., `'sync: maxRetries/retryDelay options should be accepted without error'`).
Verify each finding against the current code and only fix it if needed. In `@__test__/rm.spec.ts` around lines 255 - 262, Rename the test titled 'sync: maxRetries should eventually succeed on transient failure' to a name that reflects the actual behavior being validated, e.g., 'sync: maxRetries/retryDelay options should be accepted without error', and ensure the test body referencing rmSync(testFile, { maxRetries: 3, retryDelay: 10 }) and assertions (existsSync) remain unchanged so it documents that options are accepted rather than exercising retry logic.

__test__/rename.spec.ts (1)
3-4: Redundant dual import from `node:fs`.

Lines 3 and 4 both import from the same `'node:fs'` module. These can be merged into a single statement.

♻️ Proposed refactor
-import * as nodeFs from 'node:fs'
-import { writeFileSync, existsSync, mkdirSync, readFileSync } from 'node:fs'
+import * as nodeFs from 'node:fs'
+const { writeFileSync, existsSync, mkdirSync, readFileSync } = nodeFs

Or simply merge into one import:

-import * as nodeFs from 'node:fs'
-import { writeFileSync, existsSync, mkdirSync, readFileSync } from 'node:fs'
+import * as nodeFs from 'node:fs'
+import { writeFileSync, existsSync, mkdirSync, readFileSync, renameSync as nodeRenameSync } from 'node:fs'
Verify each finding against the current code and only fix it if needed. In `@__test__/rename.spec.ts` around lines 3 - 4, Remove the redundant duplicate import from 'node:fs' by consolidating the two imports into one: keep a single import that includes the named functions writeFileSync, existsSync, mkdirSync, readFileSync and (if used) the namespace nodeFs; update any references to nodeFs or the named imports accordingly so all usages refer to the single consolidated import (symbols to check: nodeFs, writeFileSync, existsSync, mkdirSync, readFileSync).

benchmark/exists.ts (1)
5-7: Hardcoded `/tmp/` path is Windows-incompatible.

`nonExistent` on line 7 uses a Unix-specific prefix. On Windows this path resolves unexpectedly (still non-existent, but for the wrong reason), making the "non-existent path" benchmark unreliable in cross-platform scenarios. Consider using `os.tmpdir()` or a relative path that is guaranteed not to exist.

♻️ Suggested fix
+import { join } from 'node:path'
+import { tmpdir } from 'node:os'
 const existingFile = 'package.json'
 const existingDir = 'node_modules'
-const nonExistent = '/tmp/hyper-fs-bench-nonexistent-path-12345'
+const nonExistent = join(tmpdir(), 'hyper-fs-bench-nonexistent-path-12345')
Verify each finding against the current code and only fix it if needed. In `@benchmark/exists.ts` around lines 5 - 7, The nonExistent constant is hardcoded to a Unix /tmp path which fails cross-platform; update the benchmark to construct a platform-safe non-existent path (e.g., use os.tmpdir() and path.join or a relative path) instead of '/tmp/hyper-fs-bench-nonexistent-path-12345'. Modify the declaration of nonExistent to derive the temp directory via require('os').tmpdir() (and require('path') to join a unique filename) or pick a guaranteed-nonexistent relative path, keeping the existingFile and existingDir constants unchanged.

benchmark/mkdir.ts (2)
95-96: Cleanup may fail silently if an earlier benchmark throws.

If any `runGroup` call throws (e.g., permission error), the cleanup on line 96 is never reached. A `try/finally` wrapper around the benchmark runs would ensure the temp directory is always cleaned up.

Proposed fix
-console.log('Benchmarking mkdir')
+console.log('Benchmarking mkdir')
+try {
 let counter = 0
 // ... benchmark groups ...
-
-// Cleanup
-fs.rmSync(tmpDir, { recursive: true, force: true })
+} finally {
+  // Cleanup
+  fs.rmSync(tmpDir, { recursive: true, force: true })
+}
Verify each finding against the current code and only fix it if needed. In `@benchmark/mkdir.ts` around lines 95 - 96, The cleanup call using fs.rmSync(tmpDir, { recursive: true, force: true }) may be skipped if any runGroup invocation throws; wrap the benchmark execution(s) that call runGroup in a try/finally so that the fs.rmSync(tmpDir, ...) call is executed in the finally block, ensuring tmpDir is always removed even on error. Locate the scope that performs runGroup calls in benchmark/mkdir.ts and move the cleanup into a finally paired with the try that surrounds those runGroup invocations.
11-38: Consider using mitata for consistency with other benchmarks.

Other benchmark files (e.g., `stat.ts`, `readdir.ts`) use mitata's `bench`/`group`/`run` API. This file rolls its own harness with `benchmark()` and `runGroup()`. Using mitata uniformly would give consistent output formatting and statistical rigor (warm-up, outlier handling, etc.) across the benchmark suite.

Not blocking — the current harness works correctly for its purpose.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@benchmark/mkdir.ts` around lines 11 - 38, The file defines a custom harness via benchmark() and runGroup() instead of using mitata; replace the custom functions by using mitata's API: import group, bench, and run from 'mitata', convert each runGroup(...) into a group(...) that registers bench(...) entries for each implementation (using the impl.name and impl.fn), and then call run() to execute the suites; remove the benchmark() and runGroup() functions and any custom timing/printing logic so the file uses mitata for consistent formatting and statistics.

__test__/symlink.spec.ts (1)
8-12: Temp directories are not cleaned up after tests.
`tmpDir()` creates unique directories under `os.tmpdir()` but they are never removed. Over time (especially in CI), this accumulates leftover test artifacts. Consider adding an AVA `test.after.always` hook or collecting created dirs for cleanup.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/symlink.spec.ts` around lines 8 - 12, tmpDir() creates unique temp directories that are never removed; update the test file to track directories created by tmpDir (e.g., push each returned path into an array) and add an AVA cleanup hook (test.after.always) that iterates that array and removes each directory (rmSync or rimraf) to ensure tests don't leave artifacts; reference the tmpDir() helper and the AVA test.after.always hook when implementing the cleanup.__test__/readlink.spec.ts (1)
7-11: No cleanup of temporary directories after tests.Each test creates a temp directory but never removes it. Consider using AVA's
test.after.alwaysor a try/finally pattern to clean up, preventing gradual disk accumulation in CI environments.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/readlink.spec.ts` around lines 7 - 11, The tmpDir() helper creates temporary directories but never removes them; update tests to ensure cleanup by adding an AVA teardown or try/finally that deletes directories created by tmpDir() — e.g., track returned paths from tmpDir() in tests and call fs.rmSync(path, { recursive: true, force: true }) in a test.after.always hook or in finally blocks; reference the tmpDir() function to locate where paths are created and ensure removal happens after each test to avoid accumulating temp dirs in CI.

src/read_file.rs (1)
39-66: Hand-rolled base64 encoder — consider using the `base64` crate.

The implementation appears correct, but a well-audited crate like `base64` would reduce maintenance surface and handle edge cases. The same pattern applies to `base64_decode` in `write_file.rs`.
Verify each finding against the current code and only fix it if needed. In `@src/read_file.rs` around lines 39 - 66, Replace the custom base64 implementation with the well-maintained base64 crate: add the base64 crate to Cargo.toml, remove or stop using the hand-rolled base64_encode function in src/read_file.rs (and the corresponding base64_decode in src/write_file.rs), and call the crate's API instead (use the appropriate engine for standard vs URL-safe/padded behavior) where those functions were used; ensure you select STANDARD for normal base64 and the URL_SAFE or URL_SAFE_NO_PAD engine for url_safe semantics and adjust call sites to pass/receive Vec<u8>/String as the crate API requires.

src/types.rs (1)
1-1: `#![allow(dead_code)]` is a crate-level inner attribute in a non-root file.

This suppresses dead-code warnings for the whole module, which is fine for the `S_IF*` constants used only internally. Consider scoping it more narrowly with `#[allow(dead_code)]` on the constants block if you want to keep dead-code linting active for the rest of the module.
Verify each finding against the current code and only fix it if needed. In `@src/types.rs` at line 1, The crate-level inner attribute `#![allow(dead_code)]` at the top of the file should be removed and the dead-code allowance scoped to just the `S_IF*` constants: delete the `#![allow(dead_code)]` line and add `#[allow(dead_code)]` immediately above the `const` block (or individual constant declarations) that defines the `S_IF*` symbols so the rest of the module still gets normal dead-code linting.

src/cp.rs (1)
50-53: Non-Unix `set_timestamps` is a no-op — consider documenting this limitation.

Windows users requesting `preserveTimestamps: true` will silently get no timestamp preservation. Consider either implementing it via `SetFileTime` or returning an error/warning.
Verify each finding against the current code and only fix it if needed. In `@src/cp.rs` around lines 50 - 53, The non-Unix implementation of set_timestamps (fn set_timestamps(_src: &Path, _dest: &Path) -> std::io::Result<()>) is currently a silent no-op, which causes preserveTimestamps:true to be ignored on Windows; either implement timestamp preservation under cfg(windows) by using the Win32 SetFileTime API (or Rust crates that expose it) to copy creation/modified/access times from src to dest, or change this non-unix branch to return a clear Err or emit a warning when called so callers know the operation is unsupported; update the function body for set_timestamps (and any callers that expect Ok(())) accordingly to propagate or handle the error.

README.md (1)
253-258: Clarify `utimes` timestamp units.

Node.js `utimes` accepts seconds (not milliseconds) for numeric timestamps, and also accepts `Date` objects. The current docs list `atime: number` and `mtime: number` without specifying the unit, which could mislead users.
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 253 - 258, Update the README section describing Node.js utimes to explicitly state that numeric timestamps for atime and mtime are in seconds (not milliseconds) and that Date objects are also accepted; refer to the utimes function and the atime and mtime parameters and change the docs from "atime: number // ✅ mtime: number // ✅" to indicate "atime: number | Date (number in seconds) mtime: number | Date (number in seconds)" with a short note that Date instances are allowed.

src/mkdtemp.rs (1)
9-57: Consider using the `getrandom` crate instead of hand-rolled OS random.

This module manually opens `/dev/urandom` on Unix, calls `BCryptGenRandom` via FFI on Windows, and has a fallback mechanism — all to generate 6 random bytes. The `getrandom` crate provides a single cross-platform API that handles Linux (`getrandom` syscall or `/dev/urandom`), Windows 7/8 (`RtlGenRandom`), Windows 10+ (`ProcessPrng`/`RtlGenRandom`), and other Unix variants (macOS `getentropy`, FreeBSD `getrandom`, etc.). This eliminates the need for platform-specific `cfg` blocks and the unsafe FFI call, while providing proper error handling and extensive auditing from the Rust ecosystem.
Verify each finding against the current code and only fix it if needed. In `@src/mkdtemp.rs` around lines 9 - 57, Replace the manual platform-specific random code in generate_random_suffix with the cross-platform getrandom crate: add getrandom as a dependency, remove the #[cfg(unix)]/#[cfg(windows)]/#[cfg(not(...))] blocks and unsafe FFI, and call getrandom::getrandom(&mut bytes) to fill the 6-byte buffer (handle the Result by returning an error or falling back via fill_fallback(&mut bytes) if getrandom fails). Keep the CHARS constant and the mapping loop that converts bytes -> characters, and ensure any error handling follows the existing fill_fallback behavior or propagates an error from generate_random_suffix as appropriate.

src/link.rs (1)
7-35: LGTM — implementation is correct and consistent with the codebase pattern.

Minor optional: the intermediate `Path::new` wrappers on lines 8–9 are unnecessary since `std::fs::hard_link` accepts `AsRef<Path>` directly, and `&String` already satisfies that bound.

♻️ Optional simplification
 fn link_impl(existing_path: String, new_path: String) -> Result<()> {
-    let existing = Path::new(&existing_path);
-    let new = Path::new(&new_path);
-
-    fs::hard_link(existing, new).map_err(|e| {
+    fs::hard_link(&existing_path, &new_path).map_err(|e| {

The `use std::path::Path;` import can then be removed.
Verify each finding against the current code and only fix it if needed. In `@src/link.rs` around lines 7 - 35, The intermediate Path::new calls in link_impl are unnecessary: change fs::hard_link(existing, new) to call fs::hard_link(&existing_path, &new_path) (or pass existing_path/new_path directly as they implement AsRef<Path>), remove the local variables existing and new, and delete the now-unused use std::path::Path import if present; keep the same error mapping logic and Error::from_reason calls around the fs::hard_link invocation.

__test__/truncate.spec.ts (1)
52-53: Redundant assertion — line 53 is always implied by line 52.
`t.is(hyperContent, nodeContent)` already asserts string equality; if that passes, `hyperContent.length === nodeContent.length` is trivially true.

♻️ Simplification
 t.is(hyperContent, nodeContent)
-t.is(hyperContent.length, nodeContent.length)
Verify each finding against the current code and only fix it if needed. In `@__test__/truncate.spec.ts` around lines 52 - 53, Remove the redundant length assertion: since t.is(hyperContent, nodeContent) already verifies the strings are equal, delete the second assertion that compares hyperContent.length to nodeContent.length (the assertions reference the variables hyperContent and nodeContent in __test__/truncate.spec.ts).

index.js (1)
575-626: All individual `module.exports.X = nativeBinding.X` assignments after line 574 are no-ops.

After `module.exports = nativeBinding` (line 574), `module.exports` and `nativeBinding` are the same object reference. Every subsequent `module.exports.X = nativeBinding.X` is a self-assignment with no effect — all of `nativeBinding`'s properties are already accessible through `module.exports`.

If the intent is an explicit allowlist (only the listed symbols should be exported), replace line 574 with `module.exports = {}` so the individual assignments are the only things consumers see. If the intent is to expose everything, the individual assignments can be removed.

♻️ Explicit allowlist approach (if desired)
-module.exports = nativeBinding
+module.exports = {}
 module.exports.Dirent = nativeBinding.Dirent
 module.exports.Stats = nativeBinding.Stats
 // ... rest of individual assignments unchanged
Verify each finding against the current code and only fix it if needed. In `@index.js` around lines 575 - 626, The post-574 assignments are no-ops because module.exports was set to nativeBinding, so each module.exports.X = nativeBinding.X is a self-assignment; decide whether you want an explicit allowlist or to re-export everything: if you want an allowlist, change the existing module.exports = nativeBinding assignment to module.exports = {} and keep the individual assignments (so only the listed symbols like Dirent, Stats, access, readFile, writeFile, etc. are exported); if you want to expose nativeBinding wholesale, remove all the individual module.exports.X = nativeBinding.X lines and keep module.exports = nativeBinding.benchmark/write_file.ts (1)
16-29: `benchmark()` helper is duplicated across `benchmark/write_file.ts` and `benchmark/copy_file.ts`.

The function bodies are structurally identical. Extract it (and optionally `runGroup`) into a shared `benchmark/utils.ts` so changes (e.g. to iteration count, timing logic) only need to happen once.

♻️ Suggested extraction (benchmark/utils.ts)
+// benchmark/utils.ts
+export function benchmark(name: string, fn: () => void, iterations = 50): number {
+  fn() // warmup
+  const times: number[] = []
+  for (let i = 0; i < iterations; i++) {
+    const start = process.hrtime.bigint()
+    fn()
+    const end = process.hrtime.bigint()
+    times.push(Number(end - start) / 1_000_000)
+  }
+  return times.reduce((a, b) => a + b, 0) / times.length
+}
+
+export function runGroup(groupName: string, impls: { name: string; fn: () => void }[]) {
+  console.log(`\n${groupName}`)
+  const results: { name: string; time: number }[] = []
+  for (const impl of impls) {
+    results.push({ name: impl.name, time: benchmark(impl.name, impl.fn) })
+  }
+  const baseline = results[0]
+  for (const res of results) {
+    const isBaseline = res === baseline
+    const ratio = res.time / baseline.time
+    const diffStr = isBaseline
+      ? '(baseline)'
+      : `${ratio.toFixed(2)}x ${ratio >= 1 ? '(slower)' : '(faster)'}`
+    console.log(`  ${res.name.padEnd(25)} ${res.time.toFixed(3)} ms ${diffStr}`)
+  }
+}
Verify each finding against the current code and only fix it if needed. In `@benchmark/write_file.ts` around lines 16 - 29, The benchmark() helper is duplicated; extract it (and optionally runGroup) into a new shared module benchmark/utils.ts and import it from benchmark/write_file.ts and benchmark/copy_file.ts: move the benchmark(name: string, fn: () => void): number implementation into benchmark/utils.ts, export it (and runGroup if present), then replace the local function in both files with imports like import { benchmark } from './utils'; ensure you preserve the exact timing logic (process.hrtime.bigint and iterations) and any TypeScript signatures so consumers compile without changes.

benchmark/read_file.ts (1)
1-5: Inconsistent benchmarking infrastructure across `benchmark/*.ts` files.

`read_file.ts` uses mitata (`run`/`bench`/`group`), while `write_file.ts` and `copy_file.ts` implement their own timing loop. The two approaches produce different output formats and have different statistical properties (mitata handles warmup, statistical significance, etc. automatically). Consider standardising all benchmarks on one framework.
Verify each finding against the current code and only fix it if needed. In `@benchmark/read_file.ts` around lines 1 - 5, Summary: benchmarking is inconsistent — read_file.ts uses mitata (run/bench/group) while write_file.ts and copy_file.ts use manual timing loops — standardise on one approach. Fix: convert the manual timing loops in write_file.ts and copy_file.ts to use mitata's API (import run, bench, group), replace the custom warmup/timing logic with bench closures that call the target functions (e.g., the writeFile/copyFile helper names) and group related benches, and then call run() to execute; remove manual start/stop and statistics code so all three files use the same mitata output/behaviour. Ensure you preserve any setup/teardown (tmp file creation/removal) inside group or bench hooks so results remain comparable.

__test__/realpath.spec.ts (1)
4-5: Redundant `join` import — use the already-imported `path` namespace.

`path` is already imported as a namespace at line 4; the named `join` import at line 5 is unnecessary.

♻️ Proposed fix
 import * as path from 'node:path'
-import { join } from 'node:path'

Then replace every `join(…)` call with `path.join(…)`.
Verify each finding against the current code and only fix it if needed. In `@__test__/realpath.spec.ts` around lines 4 - 5, Remove the redundant named import "join" and use the existing "path" namespace: delete the "import { join } from 'node:path'" line and update all occurrences of join(...) to path.join(...), ensuring tests in realpath.spec.ts call path.join where they previously used join.

benchmark/cp.ts (1)
98-100: `sorted.slice(1, -1)` yields an empty array when `iterations ≤ 2`, making `avg` `NaN`.

With the hardcoded value of `12` this is safe, but it's a silent failure mode if `iterations` is ever lowered. A guard makes the intent explicit.

♻️ Proposed fix
 const sorted = [...times].sort((a, b) => a - b)
-const trimmed = sorted.slice(1, -1) // drop best and worst
-const avg = trimmed.reduce((a, b) => a + b, 0) / trimmed.length
+const trimmed = sorted.length > 2 ? sorted.slice(1, -1) : sorted // drop best and worst when possible
+const avg = trimmed.reduce((a, b) => a + b, 0) / trimmed.length
Verify each finding against the current code and only fix it if needed. In `@benchmark/cp.ts` around lines 98 - 100, The current trimming logic (const sorted = [...times]... const trimmed = sorted.slice(1, -1) const avg = .../trimmed.length) can produce an empty trimmed array when iterations ≤ 2 causing avg to be NaN; update the code to check trimmed.length (or iterations) and fall back to using the full sorted array (or times) when trimming would be empty — for example, compute avg from trimmed if trimmed.length > 0 else compute avg from sorted (or times) to avoid division by zero; refer to variables sorted, trimmed, avg, and times when making the change.
export declare function utimes(path: string, atime: number, mtime: number): Promise<void>
export declare function utimesSync(path: string, atime: number, mtime: number): void
🧩 Analysis chain
🏁 Script executed:
# First, check the current state of index.d.ts around lines 234-236
cat -n index.d.ts | sed -n '230,240p'

Repository: CoderSerio/hyper-fs
Length of output: 1700
🏁 Script executed:
# Check if index.js has utimes/utimesSync implementation
fd index.js | head -5
🏁 Script executed:
# Search for utimes implementation in the JS files
rg -n "utimes" --type=js -C 8
🏁 Script executed:
# Verify the Node.js API signature for utimes
web_search
Node.js fs.utimes API signature parameters atime mtime types
🌐 Web query:
Node.js fs.utimes function parameters atime mtime documentation
💡 Result:
fs.utimes(path, atime, mtime, callback) changes a file’s access time (atime) and modified time (mtime). [1]
Parameters (Node.js docs):
- `path`: `string | Buffer | URL` (a file URL must use the `file:` protocol). [1]
- `atime`: `number | string | Date` [1]
- `mtime`: `number | string | Date` [1]
- `callback(err)`: function called when done (only argument is a possible `Error`). [1]
How atime / mtime values are interpreted:
- You may pass:
- a number = Unix epoch time in seconds (not milliseconds), or
- a `Date`, or
"123456789.0". [1]
- If the value can’t be converted to a finite number (or is
NaN,Infinity,-Infinity), Node throws an error. [1] - Reminder:
Date.now()returns milliseconds, so you’d typically passDate.now() / 1000. [2]
Update utimes/utimesSync parameter types to accept string | number | Date to match Node.js fs.utimes API.
Node.js fs.utimes accepts atime and mtime as number | string | Date, but the current declarations accept only number. This creates a usability issue: Stats.atime and Stats.mtime return Date objects, so patterns like utimesSync(path, stat.atime, stat.mtime) fail type checking, forcing unnecessary conversions.
Proposed fix
-export declare function utimes(path: string, atime: number, mtime: number): Promise<void>
+export declare function utimes(path: string, atime: string | number | Date, mtime: string | number | Date): Promise<void>
-export declare function utimesSync(path: string, atime: number, mtime: number): void
+export declare function utimesSync(path: string, atime: string | number | Date, mtime: string | number | Date): void🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@index.d.ts` around lines 234 - 236, Update the TypeScript declarations for
utimes and utimesSync so the atime and mtime parameters accept string | number |
Date (matching Node.js fs.utimes) instead of only number; modify the signatures
of the exported functions utimes and utimesSync to use atime: string | number |
Date and mtime: string | number | Date so calls like utimesSync(path,
stat.atime, stat.mtime) type-check correctly.
module.exports.appendFile = nativeBinding.appendFile
module.exports.appendFileSync = nativeBinding.appendFileSync
🧩 Analysis chain
🏁 Script executed:
rg -n "appendFile|append_file" --type rust
🌐 Web query:
hyper-fs appendFile appendFileSync implementation Rust
💡 Result:
hyper-fs (the Rust crate) is a static-file service for the hyper HTTP library and does not provide appendFile/appendFileSync-style filesystem APIs. [5]
If you’re implementing Node-like fs.appendFile() / fs.appendFileSync() semantics in Rust (append to end, create if missing, like Node’s default flag: 'a') [1], use OpenOptions { append: true, create: true } in Rust. [2]
appendFileSync (sync Rust)
use std::{fs::OpenOptions, io::{self, Write}, path::Path};

pub fn append_file_sync(path: impl AsRef<Path>, data: &[u8]) -> io::Result<()> {
    let mut f = OpenOptions::new()
        .append(true) // append mode [2]
        .create(true) // create if missing [2]
        .open(path)?;
    f.write_all(data) // write full buffer [4]
}

appendFile (async Rust, Tokio)
use std::{io, path::Path};
use tokio::{fs::OpenOptions, io::AsyncWriteExt};

pub async fn append_file(path: impl AsRef<Path>, data: &[u8]) -> io::Result<()> {
    let mut f = OpenOptions::new()
        .append(true) // append mode [3]
        .create(true) // create if missing [3]
        .open(path)
        .await?;
    f.write_all(data).await // write full buffer [6]
}

Notes (matching Node behavior)
- Node’s `appendFile*` “append data … creating the file if it does not yet exist”; default flag is `'a'`. [1]
- Rust `append(true)` alone doesn’t create the file; pair with `create(true)`. [2]
Sources: Node.js fs.appendFile docs [1]; Rust std::fs::OpenOptions docs [2]; Tokio OpenOptions docs [3]; Rust Write::write_all docs [4]; hyper-fs crate docs [5]; Tokio AsyncWriteExt::write_all docs [6].
Remove appendFile and appendFileSync exports—they are not implemented in the native binding.
The Rust hyper-fs crate does not provide appendFile or appendFileSync APIs. Since append_file is absent from src/lib.rs, exporting them at lines 579-580 will expose undefined values. Any consumer calling require('hyper-fs').appendFile(...) will encounter TypeError: appendFile is not a function at runtime.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@index.js` around lines 579 - 580, The exports for appendFile and
appendFileSync are being exposed from nativeBinding although the Rust crate (see
absence of append_file in src/lib.rs) doesn't implement them; remove the two
lines that assign module.exports.appendFile = nativeBinding.appendFile and
module.exports.appendFileSync = nativeBinding.appendFileSync so you don't export
undefined functions, or guard them by only exporting if nativeBinding.appendFile
/ nativeBinding.appendFileSync are defined; update any tests or docs referencing
appendFile/appendFileSync accordingly.
  unsafe {
    let c_path = std::ffi::CString::new(dest.to_string_lossy().as_bytes()).map_err(|_| {
      std::io::Error::new(std::io::ErrorKind::InvalidInput, "invalid path")
    })?;
    let times = [
      libc::timespec {
        tv_sec: atime_secs,
        tv_nsec: atime_nsecs,
      },
      libc::timespec {
        tv_sec: mtime_secs,
        tv_nsec: mtime_nsecs,
      },
    ];
    libc::utimensat(libc::AT_FDCWD, c_path.as_ptr(), times.as_ptr(), 0);
  }
  Ok(())
}
utimensat return value is silently discarded — timestamps may fail without notice.
libc::utimensat returns -1 on error (e.g., permission denied), but the return value is ignored. Since the caller explicitly requested preserveTimestamps, failures should be surfaced.
Proposed fix
- libc::utimensat(libc::AT_FDCWD, c_path.as_ptr(), times.as_ptr(), 0);
+ let ret = libc::utimensat(libc::AT_FDCWD, c_path.as_ptr(), times.as_ptr(), 0);
+ if ret != 0 {
+ return Err(std::io::Error::last_os_error());
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
  unsafe {
    let c_path = std::ffi::CString::new(dest.to_string_lossy().as_bytes()).map_err(|_| {
      std::io::Error::new(std::io::ErrorKind::InvalidInput, "invalid path")
    })?;
    let times = [
      libc::timespec {
        tv_sec: atime_secs,
        tv_nsec: atime_nsecs,
      },
      libc::timespec {
        tv_sec: mtime_secs,
        tv_nsec: mtime_nsecs,
      },
    ];
    let ret = libc::utimensat(libc::AT_FDCWD, c_path.as_ptr(), times.as_ptr(), 0);
    if ret != 0 {
      return Err(std::io::Error::last_os_error());
    }
  }
  Ok(())
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/cp.rs` around lines 31 - 48, The call to libc::utimensat in the cp.rs
block currently ignores its return value so timestamp failures (when
preserveTimestamps is requested) are hidden; update the unsafe block containing
c_path, times, and libc::utimensat to check the return value and, on
non-zero/-1, immediately return an Err(std::io::Error::last_os_error()) from the
surrounding function (or propagate that error), so callers see permission/errno
failures when utimensat fails.
  #[cfg(unix)]
  {
    use std::io::Read;
    if let Ok(mut f) = fs::File::open("/dev/urandom") {
      let _ = f.read_exact(&mut bytes);
    } else {
      fill_fallback(&mut bytes);
    }
  }
Silent read_exact failure produces a deterministic suffix.
If read_exact fails after /dev/urandom is opened, the bytes array stays [0u8; 6], and generate_random_suffix returns "AAAAAA" on every call. All 10 retries in mkdtemp_impl would then attempt the same directory name, guaranteeing failure after the first successful creation.
Should fall back on error rather than silently continuing:
Proposed fix
#[cfg(unix)]
{
use std::io::Read;
if let Ok(mut f) = fs::File::open("/dev/urandom") {
- let _ = f.read_exact(&mut bytes);
+ if f.read_exact(&mut bytes).is_err() {
+ fill_fallback(&mut bytes);
+ }
} else {
fill_fallback(&mut bytes);
}
}📝 Committable suggestion
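To see why zeroed bytes collapse to "AAAAAA" on every call, here is a sketch assuming the suffix alphabet starts with 'A' at index 0, as the comment's failure mode implies (the alphabet and helper name are assumptions, not the crate's actual code):

```rust
// Assumed alphabet: 'A' at index 0, as implied by the "AAAAAA" failure mode.
const ALPHABET: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

// Map each random byte to one alphabet character; an all-zero input
// therefore maps to the same suffix every time.
fn suffix_from_bytes(bytes: &[u8; 6]) -> String {
    bytes
        .iter()
        .map(|&b| ALPHABET[(b as usize) % ALPHABET.len()] as char)
        .collect()
}
```

With `bytes` stuck at `[0u8; 6]`, every retry produces the identical directory name, so all 10 attempts race for the same path.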
  #[cfg(unix)]
  {
    use std::io::Read;
    if let Ok(mut f) = fs::File::open("/dev/urandom") {
      if f.read_exact(&mut bytes).is_err() {
        fill_fallback(&mut bytes);
      }
    } else {
      fill_fallback(&mut bytes);
    }
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/mkdtemp.rs` around lines 14 - 22, The code currently ignores errors from
f.read_exact(&mut bytes) which leaves bytes as zeros and yields a deterministic
"AAAAAA" suffix; modify the block in mkdtemp.rs so that if read_exact returns
Err(_) you call fill_fallback(&mut bytes) (or otherwise populate bytes) instead
of silently continuing—i.e., replace the ignored-result pattern around
std::io::Read::read_exact with explicit error handling that falls back to
fill_fallback when read_exact fails, so generate_random_suffix and mkdtemp_impl
will get non-deterministic bytes.
fn remove_with_retry(path: &Path, opts: &RmOptions) -> Result<()> {
  let max_retries = opts.max_retries.unwrap_or(0) as usize;
  let retry_delay = opts.retry_delay.unwrap_or(100) as u64;

  let mut last_err = None;
  for attempt in 0..=max_retries {
    if attempt > 0 {
      std::thread::sleep(std::time::Duration::from_millis(retry_delay));
    }
    match remove_recursive(path, opts) {
      Ok(()) => return Ok(()),
      Err(e) => last_err = Some(e),
    }
  }
  Err(last_err.unwrap())
}
Retry logic deviates from documented Node.js semantics in two ways.
1. All errors are retried, but the docstring (lines 12–14) and Node.js `rm` only retry on transient errors: `EBUSY`, `EMFILE`, `ENFILE`, `ENOTEMPTY`, `EPERM`. Retrying on, e.g., `EACCES` or `ENOENT` is wasteful and changes observable behavior.
2. Constant delay instead of linear backoff. The doc says "linear backoff of `retryDelay` ms longer on each try," meaning the delay should increase (`retryDelay * attempt`). Currently it sleeps a flat `retry_delay` every iteration.
Proposed fix
fn remove_with_retry(path: &Path, opts: &RmOptions) -> Result<()> {
let max_retries = opts.max_retries.unwrap_or(0) as usize;
let retry_delay = opts.retry_delay.unwrap_or(100) as u64;
let mut last_err = None;
for attempt in 0..=max_retries {
if attempt > 0 {
- std::thread::sleep(std::time::Duration::from_millis(retry_delay));
+ std::thread::sleep(std::time::Duration::from_millis(retry_delay * attempt as u64));
}
match remove_recursive(path, opts) {
Ok(()) => return Ok(()),
- Err(e) => last_err = Some(e),
+ Err(e) => {
+ let msg = e.status.to_string();
+ let is_retryable = msg.contains("EBUSY")
+ || msg.contains("EMFILE")
+ || msg.contains("ENFILE")
+ || msg.contains("ENOTEMPTY")
+ || msg.contains("EPERM")
+ || msg.contains("not empty");
+ if !is_retryable || attempt == max_retries {
+ return Err(e);
+ }
+ last_err = Some(e);
+ }
}
}
Err(last_err.unwrap())
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/rm.rs` around lines 70 - 85, The retry loop in remove_with_retry
currently retries on every error and uses a constant delay; change it to only
retry when the error kind matches transient errno set {EBUSY, EMFILE, ENFILE,
ENOTEMPTY, EPERM} returned by remove_recursive (inspect the Err value to check
its raw_os_error or io::ErrorKind mapping) and stop retrying for any other error
by returning it immediately; also implement a linear backoff using
opts.retry_delay (i.e., sleep for retry_delay * attempt milliseconds) before
each retry (use attempt as multiplier) while still honoring opts.max_retries to
bound attempts.
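The two points above condense into a small sketch — retry only the transient Node-style codes, and grow the delay linearly (names here are illustrative, not the PR's code):

```rust
// Transient error codes that Node's fs.rm retries, per its documentation.
const RETRYABLE: [&str; 5] = ["EBUSY", "EMFILE", "ENFILE", "ENOTEMPTY", "EPERM"];

fn is_retryable(code: &str) -> bool {
    RETRYABLE.contains(&code)
}

// Linear backoff: attempt 1 waits retry_delay ms, attempt 2 waits 2 * retry_delay, ...
fn backoff_ms(retry_delay: u64, attempt: u64) -> u64 {
    retry_delay * attempt
}
```

With the defaults above (`retry_delay = 100`), attempts wait 100 ms, 200 ms, 300 ms, and so on, while a hard failure like `EACCES` returns immediately instead of burning all retries.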
  let meta = meta.map_err(|e| {
    if e.kind() == std::io::ErrorKind::PermissionDenied {
      Error::from_reason(format!(
        "EACCES: permission denied, stat '{}'",
        path.to_string_lossy()
      ))
    } else {
      Error::from_reason(format!(
        "ENOENT: no such file or directory, stat '{}'",
        path.to_string_lossy()
      ))
    }
  })?;
Catch-all maps every non-PermissionDenied error to ENOENT, losing the real error.
Any std::io::Error that isn't PermissionDenied (e.g., ELOOP, ENOTDIR, EIO) is incorrectly reported as ENOENT. This misleads callers and makes debugging difficult.
Proposed fix
let meta = meta.map_err(|e| {
if e.kind() == std::io::ErrorKind::PermissionDenied {
Error::from_reason(format!(
"EACCES: permission denied, stat '{}'",
path.to_string_lossy()
))
- } else {
+ } else if e.kind() == std::io::ErrorKind::NotFound {
Error::from_reason(format!(
"ENOENT: no such file or directory, stat '{}'",
path.to_string_lossy()
))
+ } else {
+ Error::from_reason(format!("{}, stat '{}'", e, path.to_string_lossy()))
}
})?;📝 Committable suggestion
  let meta = meta.map_err(|e| {
    if e.kind() == std::io::ErrorKind::PermissionDenied {
      Error::from_reason(format!(
        "EACCES: permission denied, stat '{}'",
        path.to_string_lossy()
      ))
    } else if e.kind() == std::io::ErrorKind::NotFound {
      Error::from_reason(format!(
        "ENOENT: no such file or directory, stat '{}'",
        path.to_string_lossy()
      ))
    } else {
      Error::from_reason(format!("{}, stat '{}'", e, path.to_string_lossy()))
    }
  })?;
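The shape of the fix reduces to a small mapping sketch (illustrative, not the PR's code): translate only the kinds that have an unambiguous Node code, and let everything else fall through to the original OS message:

```rust
use std::io::ErrorKind;

// Map only the kinds with an unambiguous Node-style code; return None for
// everything else so the caller formats the original error instead of
// mislabeling it ENOENT.
fn node_code(kind: ErrorKind) -> Option<&'static str> {
    match kind {
        ErrorKind::PermissionDenied => Some("EACCES"),
        ErrorKind::NotFound => Some("ENOENT"),
        _ => None,
    }
}
```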
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/stat.rs` around lines 100 - 112, The current map_err on meta collapses
every non-PermissionDenied std::io::Error into an ENOENT message, losing the
original error kind; update the closure that maps errors (the meta.map_err(...)
call) to preserve and report the actual io::Error: check e.kind() and return
EACCES for PermissionDenied, otherwise include the original e (or its
kind/message) in the Error::from_reason string—e.g., build the error message
using e.kind() or e.to_string() alongside path.to_string_lossy() so callers see
the real error instead of always ENOENT.
…ith Node.js fs.globSync, including directory matching and file type checks
Actionable comments posted: 3
🧹 Nitpick comments (8)
__test__/stat.spec.ts (1)
7-11: Consider cleaning up temp directories after each test. Tests using `tmpDir()` (symlink tests, pre-epoch, sub-second) leave artifacts in the OS temp directory. Using `t.teardown(() => nodeFs.rmSync(dir, { recursive: true, force: true }))` inside each test (or returning the dir from a helper that registers cleanup) keeps CI tidy.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/stat.spec.ts` around lines 7 - 11, The tmpDir helper creates temporary directories but never removes them, leaving artifacts; modify the tests that call tmpDir (e.g., the symlink, pre-epoch, and sub-second tests) to register cleanup by calling t.teardown(() => nodeFs.rmSync(dir, { recursive: true, force: true })) after creating dir, or refactor tmpDir to return the dir and register the same teardown internally; reference the tmpDir helper and use nodeFs.rmSync in the teardown to remove the temp directory.

src/types.rs (2)
1-1: `#![allow(dead_code)]` suppresses warnings for the entire file. This blanket suppression may hide genuinely unused constants or methods as the codebase evolves. Consider scoping it to specific items (e.g., `#[allow(dead_code)]` on individual constants like the `S_IF*` block) rather than the whole file.

🤖 Prompt for AI Agents
#[allow(dead_code)]on individual constants like theS_IF*block) rather than the whole file.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/types.rs` at line 1, Remove the crate-level directive `#![allow(dead_code)]` and instead apply `#[allow(dead_code)]` directly to the specific unused items (for example the `S_IF*` constants block) so only those items suppress warnings; locate the `S_IF*` constant definitions in `types.rs` and add the attribute above that block (and to any other individual constants or small items that truly need it), then run a build/check to ensure no other unused items are being globally silenced.
54-58: Add#[deprecated]attribute for proper deprecation signaling.The code has a comment indicating this is deprecated, but lacks the
#[deprecated]attribute. napi-rs supports Rust's standard#[deprecated]on napi-exported methods. Add the attribute along with a doc comment to ensure both Rust users and TypeScript/IDE consumers receive deprecation warnings.Suggested fix
- // Deprecated alias + /// Deprecated: use `parentPath` instead. + #[deprecated(note = "Use `parentPath` instead")] #[napi(getter)] pub fn path(&self) -> String { self.parent_path.clone() }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/types.rs` around lines 54 - 58, The getter method path is marked as a deprecated alias but lacks Rust's #[deprecated] attribute and a doc comment; update the pub fn path(&self) -> String (the napi getter) to include a #[deprecated(note = "use `parent_path` instead")] attribute and add a /// Deprecated: use `parent_path` doc comment so both Rust and generated TypeScript/IDE consumers see the deprecation warning while keeping the #[napi(getter)] and returning self.parent_path.clone().src/glob.rs (3)
114-136:.lock().unwrap()will propagate a panic across threads if the mutex is poisoned.If any thread panics while holding the lock, all other threads calling
.unwrap()on the poisoned mutex will also panic. The code inside the lock is trivial (just apush), so poisoning is unlikely in practice, but using.lock().unwrap_or_else(|e| e.into_inner())or propagating a proper error would be more robust.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/glob.rs` around lines 114 - 136, The code uses .lock().unwrap() on result_dirents and result_strings, which will panic if the mutex is poisoned; replace both uses with a safe unwrap that recovers the inner lock (e.g., .lock().unwrap_or_else(|e| e.into_inner())) or handle the error explicitly with match to obtain the guard before pushing into Dirent or pushing relative_path_str; specifically update the block that constructs Dirent { name, parent_path, file_type } (referencing result_dirents, entry, get_file_type_id, Dirent) and the else branch that pushes into result_strings to use the non-panicking lock retrieval and then perform the push.
80-140: Parallel walk produces non-deterministic result ordering.
build_parallel().run()yields entries in arbitrary order. If consumers depend on sorted or stable output (e.g., snapshot tests or diffing), they'll see flaky behavior. Consider sorting the result vectors before returning, or document that ordering is unspecified.
86-90: Silently swallowing walk errors may hide permission/IO issues.Walk errors (e.g., permission denied, broken symlinks) are silently discarded. The caller gets an incomplete result set with no indication that entries were skipped. Consider collecting errors or at least logging them.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/glob.rs` around lines 86 - 90, In the closure created by Box::new(move |entry| { ... }) where you currently match entry and silently discard Err(_), change the Err arm to capture the error (e.g., Err(err)) and either log it (e.g., log::warn!("walk error: {}", err)) or append it to a shared/returned errors collection (move in a mutable Vec or send through a channel) before returning ignore::WalkState::Continue; ensure the log includes the error details and context so permission/IO/symlink failures are visible to the caller.__test__/write_file.spec.ts (2)
8-12: Consider adding temp dir cleanup after each test.
tmpFilecreates a new directory on every call but nothing ever removes it. Accumulated temp dirs from frequent test runs can clutter/tmpon dev machines and CI.♻️ Suggested cleanup pattern
+import { rmSync } from 'node:fs' +test.afterEach.always(t => { + // t.context.tmpDirs populated during each test + for (const dir of (t.context as any).tmpDirs ?? []) { + try { rmSync(dir, { recursive: true, force: true }) } catch {} + } +}) function tmpFile(name: string): string { - const dir = join(tmpdir(), `hyper-fs-test-write-${Date.now()}-${Math.random().toString(36).slice(2)}`) + const dir = join(tmpdir(), `hyper-fs-test-write-${Date.now()}-${Math.random().toString(36).slice(2)}`) mkdirSync(dir, { recursive: true }) + // register dir on test context for cleanup return join(dir, name) }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/write_file.spec.ts` around lines 8 - 12, tmpFile creates unique temp directories but never removes them; modify tmpFile (or wrap it) to record each created directory path (e.g., push into an in-file array) and add a test-suite teardown (afterEach or afterAll in __test__/write_file.spec.ts) that iterates the recorded paths and removes them recursively (using fs.rmSync or fs.promises.rm with { recursive: true, force: true }) to ensure temp dirs created by tmpFile are cleaned up after tests complete.
14-57: Add encoding tests forappendFileSync.The sync
writeFileSyncencoding tests already exist (lines 61–83 cover hex, base64, and latin1), butappendFileSyncwith encoding options is not tested. Add at least a hex and base64 encoding test forappendFileSyncto match the asyncappendFileencoding coverage (lines 86–101).🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__test__/write_file.spec.ts` around lines 14 - 57, Add two sync tests for appendFileSync that mirror the existing writeFileSync and async appendFile encoding coverage: create tmp files with tmpFile, write an initial value using writeFileSync, then call appendFileSync with encoding options 'hex' and 'base64' respectively (using appropriately encoded strings), read back with readFileSync({ encoding: 'utf8' }) and assert with t.is that the decoded content matches the expected concatenated string; place test names like "appendFileSync: should append hex encoding" and "appendFileSync: should append base64 encoding" and reuse the same helpers (tmpFile, readFileSync, writeFileSync, appendFileSync, t.is).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@__test__/stat.spec.ts`:
- Around line 88-92: The test title for the test using nodeFs.statSync and
statSync incorrectly claims to validate "atime/mtime" while the assertion only
compares mtime; update the test description string in the test (the one starting
with test('statSync: ...')) to explicitly state it verifies mtime Date values
only (e.g., "statSync: should match node:fs mtime Date value") so the title
matches the actual assertion on hyperStat.mtime.getTime() vs
nodeStat.mtime.getTime().
In `@src/write_file.rs`:
- Line 11: The "ascii" encoding arm currently masks UTF-8 bytes via s.bytes(),
producing incorrect byte sequences for non-ASCII characters; change the
Some("ascii") match arm to iterate s.chars() (like the "latin1" arm) and for
each char cast it to u32 or u16 then mask with 0x7f and collect into the output
bytes so each UTF-16 code unit is mapped to a single masked byte, matching
Node.js behavior; update the Some("ascii") => ... arm in src/write_file.rs
accordingly.
- Around line 57-60: The hex_decode function currently trims the input which
changes semantics; remove the initial trim so you operate on the raw input
string, then validate characters inline: iterate the bytes/chars of s and if any
character is not a valid hex digit return Err(Error::from_reason("Invalid hex
string".to_string())), and keep the existing odd-length check (s.len() % 2 != 0)
using the untrimmed s before decoding; update hex_decode accordingly to use the
original s for length/validation and decoding.
---
Duplicate comments:
In `@__test__/stat.spec.ts`:
- Around line 83-86: The test "statSync: atime.getTime() should be close to
atimeMs" uses an overly permissive tolerance of 1000ms; tighten it to a small
tolerance like 2ms to catch loss of sub-second precision. Update the assertion
in the test that calls statSync(...) and compares s.atime.getTime() and
s.atimeMs to use Math.abs(s.atime.getTime() - s.atimeMs) < 2 (or ≤ 1) instead of
< 1000 so s.atime and s.atimeMs must agree within expected floating-point
rounding; keep the same test name and use the existing s variable.
In `@src/write_file.rs`:
- Around line 105-111: The match on the variable `flag` currently falls through
to the `_` arm and silently applies "w" semantics; instead, change the default
arm to treat unknown flags as an error: remove the `_ => {
open_opts.write(true).create(true).truncate(true); }` fallback and return or
propagate a descriptive error (e.g., Err or a Result::Err) when `flag` doesn't
match "w", "wx", "xw", "a", "ax", or "xa". Update the surrounding function (the
code that constructs and uses `open_opts`) to return an appropriate Result/Error
type so callers receive a clear invalid-flag error rather than silently
truncating files.
- Around line 131-135: The unix-only block currently swallows errors from
fs::set_permissions (let _ = ...), so change it to handle and propagate failures
instead: replace the discard with a proper Result handling (e.g., call
fs::set_permissions(path, fs::Permissions::from_mode(mode))? or map the error to
a useful io::Error with context including path and mode) so failures surface to
the caller; keep the same cfg(unix) block and the same use of PermissionsExt and
opts.mode when implementing this change.
---
Nitpick comments:
In `@__test__/stat.spec.ts`:
- Around line 7-11: The tmpDir helper creates temporary directories but never
removes them, leaving artifacts; modify the tests that call tmpDir (e.g., the
symlink, pre-epoch, and sub-second tests) to register cleanup by calling
t.teardown(() => nodeFs.rmSync(dir, { recursive: true, force: true })) after
creating dir, or refactor tmpDir to return the dir and register the same
teardown internally; reference the tmpDir helper and use nodeFs.rmSync in the
teardown to remove the temp directory.
In `@__test__/write_file.spec.ts`:
- Around line 8-12: tmpFile creates unique temp directories but never removes
them; modify tmpFile (or wrap it) to record each created directory path (e.g.,
push into an in-file array) and add a test-suite teardown (afterEach or afterAll
in __test__/write_file.spec.ts) that iterates the recorded paths and removes
them recursively (using fs.rmSync or fs.promises.rm with { recursive: true,
force: true }) to ensure temp dirs created by tmpFile are cleaned up after tests
complete.
- Around line 14-57: Add two sync tests for appendFileSync that mirror the
existing writeFileSync and async appendFile encoding coverage: create tmp files
with tmpFile, write an initial value using writeFileSync, then call
appendFileSync with encoding options 'hex' and 'base64' respectively (using
appropriately encoded strings), read back with readFileSync({ encoding: 'utf8'
}) and assert with t.is that the decoded content matches the expected
concatenated string; place test names like "appendFileSync: should append hex
encoding" and "appendFileSync: should append base64 encoding" and reuse the same
helpers (tmpFile, readFileSync, writeFileSync, appendFileSync, t.is).
In `@src/glob.rs`:
- Around line 114-136: The code uses .lock().unwrap() on result_dirents and
result_strings, which will panic if the mutex is poisoned; replace both uses
with a safe unwrap that recovers the inner lock (e.g.,
.lock().unwrap_or_else(|e| e.into_inner())) or handle the error explicitly with
match to obtain the guard before pushing into Dirent or pushing
relative_path_str; specifically update the block that constructs Dirent { name,
parent_path, file_type } (referencing result_dirents, entry, get_file_type_id,
Dirent) and the else branch that pushes into result_strings to use the
non-panicking lock retrieval and then perform the push.
- Around line 86-90: In the closure created by Box::new(move |entry| { ... })
where you currently match entry and silently discard Err(_), change the Err arm
to capture the error (e.g., Err(err)) and either log it (e.g., log::warn!("walk
error: {}", err)) or append it to a shared/returned errors collection (move in a
mutable Vec or send through a channel) before returning
ignore::WalkState::Continue; ensure the log includes the error details and
context so permission/IO/symlink failures are visible to the caller.
In `@src/types.rs`:
- Line 1: Remove the crate-level directive `#![allow(dead_code)]` and instead
apply `#[allow(dead_code)]` directly to the specific unused items (for example
the `S_IF*` constants block) so only those items suppress warnings; locate the
`S_IF*` constant definitions in `types.rs` and add the attribute above that
block (and to any other individual constants or small items that truly need it),
then run a build/check to ensure no other unused items are being globally
silenced.
- Around line 54-58: The getter method path is marked as a deprecated alias but
lacks Rust's #[deprecated] attribute and a doc comment; update the pub fn
path(&self) -> String (the napi getter) to include a #[deprecated(note = "use
`parent_path` instead")] attribute and add a /// Deprecated: use `parent_path`
doc comment so both Rust and generated TypeScript/IDE consumers see the
deprecation warning while keeping the #[napi(getter)] and returning
self.parent_path.clone().
test('statSync: should match node:fs atime/mtime Date values', (t) => {
  const nodeStat = nodeFs.statSync('./package.json')
  const hyperStat = statSync('./package.json')
  t.is(hyperStat.mtime.getTime(), nodeStat.mtime.getTime())
})
Test title mentions atime but only mtime is asserted.
The test name says "should match node:fs atime/mtime Date values" but only checks mtime. Since accessing a file between the two statSync calls can update atime on some filesystems, omitting the atime check is reasonable — but the title should reflect that.
Suggested title fix
-test('statSync: should match node:fs atime/mtime Date values', (t) => {
+test('statSync: should match node:fs mtime Date value', (t) => {📝 Committable suggestion
test('statSync: should match node:fs mtime Date value', (t) => {
  const nodeStat = nodeFs.statSync('./package.json')
  const hyperStat = statSync('./package.json')
  t.is(hyperStat.mtime.getTime(), nodeStat.mtime.getTime())
})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@__test__/stat.spec.ts` around lines 88 - 92, The test title for the test
using nodeFs.statSync and statSync incorrectly claims to validate "atime/mtime"
while the assertion only compares mtime; update the test description string in
the test (the one starting with test('statSync: ...')) to explicitly state it
verifies mtime Date values only (e.g., "statSync: should match node:fs mtime
Date value") so the title matches the actual assertion on
hyperStat.mtime.getTime() vs nodeStat.mtime.getTime().
From `src/write_file.rs`:

```rust
fn encode_string(s: &str, encoding: Option<&str>) -> Result<Vec<u8>> {
  match encoding {
    None | Some("utf8" | "utf-8") => Ok(s.as_bytes().to_vec()),
    Some("ascii") => Ok(s.bytes().map(|b| b & 0x7f).collect()),
```
**`ascii` encoding produces different output than Node.js for non-ASCII characters.**

`s.bytes()` yields raw UTF-8 bytes; masking them produces a different byte count and different values compared to Node.js, which masks each UTF-16 code unit to 7 bits.

Example, `'é'` (U+00E9):

- Node.js `'ascii'`: code point `0xE9 & 0x7F = 0x69`, 1 byte
- This impl: UTF-8 bytes `[0xC3, 0xA9]` → `[0x43, 0x29]`, 2 bytes

Fix: iterate `chars()` and cast, matching the `latin1` arm:

🐛 Proposed fix

```diff
-    Some("ascii") => Ok(s.bytes().map(|b| b & 0x7f).collect()),
+    Some("ascii") => Ok(s.chars().map(|c| (c as u32 as u8) & 0x7f).collect()),
```
Verify each finding against the current code and only fix it if needed.
In `@src/write_file.rs` at line 11, The "ascii" encoding arm currently masks UTF-8
bytes via s.bytes(), producing incorrect byte sequences for non-ASCII
characters; change the Some("ascii") match arm to iterate s.chars() (like the
"latin1" arm) and for each char cast it to u32 or u16 then mask with 0x7f and
collect into the output bytes so each UTF-16 code unit is mapped to a single
masked byte, matching Node.js behavior; update the Some("ascii") => ... arm in
src/write_file.rs accordingly.
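The difference is easy to check in isolation. Below is a minimal standalone sketch of the suggested char-based arm; the `encode_ascii` helper name is hypothetical and not part of the crate, and the final assertion shows what masking the raw UTF-8 bytes (the current behavior) would produce instead:

```rust
// Hypothetical standalone version of the suggested `ascii` arm: mask each
// char's code point to 7 bits, mirroring Node.js masking UTF-16 code units.
fn encode_ascii(s: &str) -> Vec<u8> {
    s.chars().map(|c| (c as u32 as u8) & 0x7f).collect()
}

fn main() {
    // 'é' (U+00E9) becomes the single byte 0xE9 & 0x7F = 0x69.
    assert_eq!(encode_ascii("é"), vec![0x69]);
    // Plain ASCII passes through unchanged.
    assert_eq!(encode_ascii("abc"), b"abc".to_vec());
    // The current implementation masks the two raw UTF-8 bytes instead:
    let current: Vec<u8> = "é".bytes().map(|b| b & 0x7f).collect();
    assert_eq!(current, vec![0x43, 0x29]);
    println!("ok");
}
```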
From `src/write_file.rs`:

```rust
fn hex_decode(s: &str) -> Result<Vec<u8>> {
  let s = s.trim();
  if s.len() % 2 != 0 {
    return Err(Error::from_reason("Invalid hex string".to_string()));
```
**`hex_decode` trims whitespace, diverging from Node.js behavior.**

Node.js stops decoding at the first non-hex character, so `Buffer.from(' ff', 'hex')` returns an empty buffer. This implementation trims first, so `' ff'` decodes to `[0xFF]`.

🐛 Proposed fix: remove the trim and reject non-hex input inline

```diff
 fn hex_decode(s: &str) -> Result<Vec<u8>> {
-  let s = s.trim();
-  if s.len() % 2 != 0 {
-    return Err(Error::from_reason("Invalid hex string".to_string()));
-  }
+  if s.len() % 2 != 0 {
+    return Err(Error::from_reason("Invalid hex string: odd length".to_string()));
+  }
```
Verify each finding against the current code and only fix it if needed.
In `@src/write_file.rs` around lines 57 - 60, The hex_decode function currently
trims the input which changes semantics; remove the initial trim so you operate
on the raw input string, then validate characters inline: iterate the
bytes/chars of s and if any character is not a valid hex digit return
Err(Error::from_reason("Invalid hex string".to_string())), and keep the existing
odd-length check (s.len() % 2 != 0) using the untrimmed s before decoding;
update hex_decode accordingly to use the original s for length/validation and
decoding.
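A standalone sketch of the strict variant the diff proposes: no trimming, with odd-length input and non-hex characters rejected. This `hex_decode` uses a plain `String` error rather than the crate's napi `Error`, purely for illustration:

```rust
// Strict hex decoder per the review: no trimming; odd-length input and
// non-hex characters are errors instead of being silently accepted.
fn hex_decode(s: &str) -> Result<Vec<u8>, String> {
    if s.len() % 2 != 0 {
        return Err("Invalid hex string: odd length".to_string());
    }
    s.as_bytes()
        .chunks(2)
        .map(|pair| {
            let hi = (pair[0] as char)
                .to_digit(16)
                .ok_or_else(|| "Invalid hex string".to_string())?;
            let lo = (pair[1] as char)
                .to_digit(16)
                .ok_or_else(|| "Invalid hex string".to_string())?;
            Ok((hi as u8) << 4 | lo as u8)
        })
        .collect()
}

fn main() {
    assert_eq!(hex_decode("ff00"), Ok(vec![0xFF, 0x00]));
    // Whitespace is now rejected rather than silently trimmed away.
    assert!(hex_decode("  ff").is_err());
    // Odd-length input is rejected up front.
    assert!(hex_decode("fff").is_err());
    println!("ok");
}
```

Note that this strict version still differs from Node.js, which decodes up to the first invalid character instead of erroring; which semantics to match is a design choice for the crate.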
I'll open separate pull requests to address those potential issues. For now, I'll focus on building the core structure of the library. |
Type of Change
Description
TODO
Environment
Related Issues:
[List the issue numbers related to this issue, e.g. #123]
Checklist
Summary by CodeRabbit
New Features
Tests
Documentation
Chores