High-Performance Sharded Architecture & O(1) Algorithms (v2.0.0) #2

Merged
Q300Z merged 18 commits into main from dev on Feb 5, 2026

Conversation

@Q300Z Q300Z commented Feb 5, 2026

No description provided.

Q300Z added 16 commits February 5, 2026 00:54
BREAKING CHANGE: The public API has been completely redesigned.
- Rustycache now uses sharding for high concurrency.
- The library is now generic over the cache strategy (static dispatch).
- Tokio is now an optional dependency via the 'async' feature.
- All constructors have changed to support sharding and generic types.
@Q300Z Q300Z self-assigned this Feb 5, 2026
Copilot AI review requested due to automatic review settings February 5, 2026 00:45
@Q300Z Q300Z merged commit ef0bdb3 into main Feb 5, 2026
10 checks passed

Copilot AI left a comment

Pull request overview

This PR introduces a major architectural overhaul to RustyCache, transitioning from a simple trait-based cache to a high-performance sharded implementation with O(1) algorithms. The changes enable significant performance improvements through index-based arenas, static dispatch, and optimized concurrency primitives.

Changes:

  • Replaces Box<dyn CacheStrategy> with generic sharded architecture using static dispatch
  • Implements index-based arena (Vec + u32 indices) for LRU/FIFO to achieve O(1) operations
  • Adds optional async/sync modes via feature flags, enabling zero-dependency synchronous operation
  • Introduces batch-based LRU access tracking to reduce write lock contention
  • Migrates from standard library Mutex/HashMap to parking_lot and AHash for performance
  • Adds comprehensive benchmarks and examples for both sync and async modes
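
To make the described layout concrete, here is a minimal sketch of a cache that is sharded and generic over its strategy (static dispatch). The trait, struct, and method names are hypothetical stand-ins for illustration, not RustyCache's actual API:

```rust
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hash, Hasher};
use std::marker::PhantomData;

// Hypothetical per-shard strategy: each shard owns its own locks internally,
// so methods take &self and the outer type needs no global mutex.
trait Strategy<K, V> {
    fn put(&self, key: K, value: V);
    fn get(&self, key: &K) -> Option<V>;
}

// Generic over S instead of Box<dyn CacheStrategy>: calls are statically
// dispatched and can be inlined per strategy.
struct ShardedCache<K, V, S: Strategy<K, V>> {
    shards: Vec<S>,
    hasher: RandomState,
    _marker: PhantomData<(K, V)>,
}

impl<K: Hash, V, S: Strategy<K, V>> ShardedCache<K, V, S> {
    fn new(shard_count: usize, mut make_shard: impl FnMut() -> S) -> Self {
        Self {
            shards: (0..shard_count).map(|_| make_shard()).collect(),
            hasher: RandomState::new(),
            _marker: PhantomData,
        }
    }

    // Route each key to one shard by hash; only that shard's lock is touched.
    fn shard_for(&self, key: &K) -> &S {
        let mut h = self.hasher.build_hasher();
        key.hash(&mut h);
        &self.shards[(h.finish() as usize) % self.shards.len()]
    }

    fn put(&self, key: K, value: V) {
        self.shard_for(&key).put(key, value);
    }

    fn get(&self, key: &K) -> Option<V> {
        self.shard_for(key).get(key)
    }
}
```

A constructor like the PR's Rustycache::new(shards, factory) maps onto this shape directly: the factory closure builds one strategy instance per shard.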

Reviewed changes

Copilot reviewed 18 out of 19 changed files in this pull request and generated 14 comments.

| File | Description |
| --- | --- |
| Cargo.toml | Updated dependencies (parking_lot, ahash), added feature flags, benchmarks |
| src/rustycache.rs | Complete rewrite with sharding, generic type parameters, convenience constructors |
| src/strategy/mod.rs | Trait changes: &mut self → &self, added Borrow support, feature-gated async methods |
| src/strategy/lru.rs | Index-based arena implementation with batched access tracking |
| src/strategy/fifo.rs | Index-based arena implementation for O(1) FIFO operations |
| src/strategy/lfu.rs | Refactored to use RwLock and parking_lot primitives |
| tests/*.rs | Updated for new API, added sync tests, refactored helper functions |
| examples/*.rs | New sync and async examples demonstrating usage patterns |
| benches/*.rs | New contention and performance benchmarks |
| .github/workflows/*.yml | Enhanced CI with multi-feature testing and new release workflow |
| README.md | Updated with performance claims, architecture details, and usage examples |


    order: Arc::new(Mutex::new(VecDeque::new())),
    state: Arc::new(RwLock::new(LRUState {
        map: HashMap::default(),
        nodes: Vec::with_capacity(capacity),

Copilot AI Feb 5, 2026

The nodes vector is pre-allocated with capacity but can grow beyond that size (lines 161-167 in put). This means the u32 index could overflow if more than 2^32 insertions occur over the cache's lifetime (even with evictions). While unlikely in practice, consider adding a check, documenting the maximum supported number of lifetime insertions, or using usize instead of u32 for indices.
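
One way to address this is a checked conversion at the point where a new arena slot is handed out, so the cache fails loudly instead of silently wrapping. A minimal sketch (the helper and its placement are hypothetical, not the PR's actual code):

```rust
/// Hypothetical helper: allocate a slot in an index-based arena and refuse to
/// hand out an index that no longer fits in u32, rather than wrapping silently.
fn alloc_slot<T>(arena: &mut Vec<T>, item: T) -> u32 {
    let idx = u32::try_from(arena.len())
        .expect("arena exceeded u32::MAX slots; reuse freed indices or switch to usize");
    arena.push(item);
    idx
}

fn main() {
    let mut arena: Vec<&str> = Vec::with_capacity(16);
    assert_eq!(alloc_slot(&mut arena, "first entry"), 0);
    assert_eq!(alloc_slot(&mut arena, "second entry"), 1);
}
```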

Comment on lines +86 to 90
    if let Some(entry) = state.map.get_mut(&key) {
        entry.value = value;
        entry.expires_at = Utc::now() + chrono::Duration::from_std(self.ttl).unwrap();
        return;
    }

Copilot AI Feb 5, 2026

In the LFU put method, when updating an existing key (lines 86-89), the frequency is not reset or updated in freq_map. This means the entry keeps its old frequency count in the freq_map, but its position in the BTreeMap is not adjusted. This could lead to inconsistencies where an entry exists in state.map but its frequency tracking in freq_map is stale or incorrect. Consider either resetting the frequency to 1 (and updating freq_map accordingly) or documenting this as intended behavior.
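
If the intent is to treat an overwrite as a fresh insertion, the update path could reset the frequency and move the key between frequency buckets so both maps stay in sync. The field layout below (a map of entries plus a freq_map of frequency buckets) is an assumption about the LFU internals, shown only to illustrate the shape of such a fix:

```rust
use std::collections::{BTreeMap, HashMap, HashSet};

// Hypothetical LFU bookkeeping, simplified: entries plus frequency buckets.
struct LfuState {
    map: HashMap<String, (String, u64)>,      // key -> (value, frequency)
    freq_map: BTreeMap<u64, HashSet<String>>, // frequency -> keys at that frequency
}

impl LfuState {
    /// On overwrite, reset the entry's frequency to 1 and move it between
    /// buckets so map and freq_map stay consistent.
    fn put_existing(&mut self, key: &str, value: String) {
        if let Some((stored, freq)) = self.map.get_mut(key) {
            *stored = value;
            let old_freq = *freq;
            *freq = 1;
            if let Some(bucket) = self.freq_map.get_mut(&old_freq) {
                bucket.remove(key);
                if bucket.is_empty() {
                    self.freq_map.remove(&old_freq);
                }
            }
            self.freq_map.entry(1).or_default().insert(key.to_string());
        }
    }
}
```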

// Always flush batch on put to maintain order before potential eviction
self.apply_access_batch(&mut state);

let expires_at = Utc::now() + chrono::Duration::from_std(self.ttl).unwrap();

Copilot AI Feb 5, 2026

The code uses .unwrap() when converting std::time::Duration to chrono::Duration. This conversion can fail if the duration is too large (greater than ~292 years). While unlikely in practice for TTL values, this could cause a panic. Consider using expect() with a descriptive message, or documenting that TTL values must stay within a reasonable range.

Suggested change
let expires_at = Utc::now() + chrono::Duration::from_std(self.ttl).unwrap();
let expires_at = Utc::now()
    + chrono::Duration::from_std(self.ttl)
        .expect("TTL duration too large to convert to chrono::Duration (must be < ~292 years)");

// - 8 shards (high concurrency)
// - 1000 total capacity
// - 2 seconds TTL (short for demonstration)
// - 1 second cleaner interval

Copilot AI Feb 5, 2026

Similar to simple_sync.rs, the value type is inconsistent. Line 17 uses "Result A" (a &str), but the cache expects String values. This should be "Result A".to_string().

RustyCache is optimized for modern CPU architectures and high-throughput requirements:

## Usage
- **Index-based Arena (O(1))**: LRU and FIFO strategies use a `Vec`-based arena with `usize` indices for the doubly

Copilot AI Feb 5, 2026

The documentation states that the index-based arena uses usize indices (line 14), but the actual implementation uses u32 indices (as seen in src/strategy/lru.rs and fifo.rs). This documentation should be corrected to say u32 instead of usize.

Suggested change
- **Index-based Arena (O(1))**: LRU and FIFO strategies use a `Vec`-based arena with `usize` indices for the doubly
- **Index-based Arena (O(1))**: LRU and FIFO strategies use a `Vec`-based arena with `u32` indices for the doubly

Comment on lines +120 to +130

fn test_lfu_sync_constructor() {
    let cache = Rustycache::lfu_sync(1, 10, Duration::from_secs(60));

    cache.put("a".to_string(), "b".to_string());

    assert_eq!(cache.get(&"a".to_string()), Some("b".to_string()));
}

#[test]

Copilot AI Feb 5, 2026

Similar to lru_tests.rs, there are extra blank lines before the function definitions (lines 120 and 130). This is a minor style inconsistency.

Comment on lines +316 to +318
    V: Clone + Send + Sync + 'static,
{
    fn drop(&mut self) {

Copilot AI Feb 5, 2026

The Drop implementation only calls stop_cleaner() when the async feature is enabled. However, this means that in async mode, if multiple clones of the cache exist (since it implements Clone), dropping any one of them stops the background cleanup task for all of the remaining clones. This could lead to unexpected behavior where dropping a single handle disables cleanup for every other reference. Consider reference counting around the notify channel so only the last drop stops the cleaner, or documenting this behavior clearly.
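
One way to get "last handle wins" semantics is to count live clones and only signal the cleaner when the final one drops. The sketch below assumes the tokio-backed cleaner from the async feature and uses hypothetical field names; it illustrates the idea rather than the crate's actual internals:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use tokio::sync::Notify;

// Hypothetical cache handle: clones share the cleaner and a live-handle counter.
struct CacheHandle {
    live_handles: Arc<AtomicUsize>,
    stop_cleaner: Arc<Notify>,
}

impl CacheHandle {
    fn new() -> Self {
        Self {
            live_handles: Arc::new(AtomicUsize::new(1)),
            stop_cleaner: Arc::new(Notify::new()),
        }
    }
}

impl Clone for CacheHandle {
    fn clone(&self) -> Self {
        self.live_handles.fetch_add(1, Ordering::Relaxed);
        Self {
            live_handles: Arc::clone(&self.live_handles),
            stop_cleaner: Arc::clone(&self.stop_cleaner),
        }
    }
}

impl Drop for CacheHandle {
    fn drop(&mut self) {
        // Only the last surviving handle signals the background cleaner to stop,
        // so dropping one clone no longer disables cleanup for the others.
        if self.live_handles.fetch_sub(1, Ordering::AcqRel) == 1 {
            self.stop_cleaner.notify_waiters();
        }
    }
}
```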

if _start_cleaner {
    s.start_cleaner(interval);
}
Rustycache::new(1, move || s.clone())

Copilot AI Feb 5, 2026

The create_cache helper function creates a cache with only 1 shard (line 24), which defeats the purpose of testing the sharded architecture. While this might be intentional to simplify testing, it means the tests don't verify the behavior of the sharding mechanism. Consider testing with multiple shards (e.g., 2 or 4) to ensure the sharding logic works correctly, or adding a comment explaining why single-shard testing is sufficient.
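
For example, a companion test could run the same put/get flow through several shards so keys actually spread across them. The constructor arguments below assume the (shards, capacity, ttl) shape used by the sync constructors elsewhere in this PR, and the strategy choice is arbitrary; treat it as a sketch rather than a drop-in test:

```rust
use std::time::Duration;

#[test]
fn test_put_get_across_shards() {
    // 4 shards instead of 1, so the hashing/routing path is actually exercised.
    let cache = Rustycache::lfu_sync(4, 100, Duration::from_secs(60));

    for i in 0..50 {
        cache.put(format!("key{i}"), format!("value{i}"));
    }
    for i in 0..50 {
        assert_eq!(cache.get(&format!("key{i}")), Some(format!("value{i}")));
    }
}
```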

Comment on lines 103 to +114

sleep(Duration::from_secs(2)).await; // let entry expire and cleaner run
fn test_fifo_sync_constructor() {
let cache = Rustycache::fifo_sync(1, 10, Duration::from_secs(60));

assert_eq!(cache.get(&"k1".to_string()), None);
assert_eq!(cache.len(), 0);
}
cache.put("a".to_string(), "b".to_string());

#[tokio::test]
async fn test_stop_cleaner_does_not_panic() {
let cache = create_cache(2, 1, 1);
tokio::time::sleep(Duration::from_millis(100)).await;
cache.stop_cleaner(); // Just test stop logic without panic
tokio::time::sleep(Duration::from_millis(100)).await;
assert_eq!(cache.get(&"a".to_string()), Some("b".to_string()));
}

#[cfg(feature = "async")]
#[tokio::test]
async fn test_start_cleaner_does_not_panic() {
let cache = create_cache(2, 1, 1);
tokio::time::sleep(Duration::from_millis(100)).await;
cache.start_cleaner(Duration::from_secs(1)); // just ensure no panic
tokio::time::sleep(Duration::from_millis(100)).await;

Copilot AI Feb 5, 2026

Similar style inconsistencies with extra blank lines before function definitions at lines 103 and 114.

Comment on lines +46 to 57
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      - uses: swatinem/rust-cache@v2
      - name: Run tests
        run: cargo test ${{ matrix.feature_flags }} --verbose

  # Ensure benchmarks still compile
  check-benches:
    name: Check Benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

Copilot AI Feb 5, 2026

The CI workflow uses actions/checkout@v6 in some jobs (lines 18, 29) but actions/checkout@v4 in others (lines 46, 57). All jobs should use the same version; recommend standardizing on v6 since it is already used in the earlier jobs.
