Your AI Finally Remembers You
⚡ Created & Architected by Varun Pratap Bhardwaj ⚡
Solution Architect • Original Creator • 2026
Stop re-explaining your codebase every session. 100% local. Zero setup. Completely free.
superlocalmemory.com • Quick Start • Why This? • Features • Docs • Issues
Created by Varun Pratap Bhardwaj • 💖 Sponsor • 📜 Attribution Required
SuperLocalMemory now learns your patterns, adapts to your workflow, and personalizes recall — all 100% locally on your machine. No cloud. No LLM. Your behavioral data never leaves your device.
Your memory system evolves with you through three learning layers:
| Layer | What It Learns | How |
|---|---|---|
| Tech Preferences | "You prefer FastAPI over Django" (83% confidence) | Cross-project frequency analysis with Bayesian scoring |
| Project Context | Detects your active project from 4 signals | Path analysis, tags, profile, content clustering |
| Workflow Patterns | "You typically: docs → architecture → code → test" | Time-weighted sliding-window sequence mining |
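For intuition, here is a minimal Python sketch of how cross-project frequency counts could become a confidence score via a Beta-Bernoulli update. The function name, the uniform prior, and the counts are illustrative assumptions, not SuperLocalMemory's actual implementation.

```python
def preference_confidence(mentions_for: int, mentions_against: int,
                          prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Posterior mean of 'prefers A over B' under a Beta(1, 1) prior,
    given how often each option shows up across projects (sketch only)."""
    return (mentions_for + prior_a) / (mentions_for + mentions_against + prior_a + prior_b)

# Example: FastAPI chosen in 9 projects, Django in 1 -> ~83% confidence
print(f"{preference_confidence(9, 1):.0%}")
```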
Recall results get smarter over time — automatically:
- Phase 1 (Baseline): Standard search — same as v2.6
- Phase 2 (Rule-Based): After ~20 feedback signals — boosts results matching your preferences
- Phase 3 (ML Ranking): After ~200 signals — LightGBM LambdaRank re-ranks with 9 personalized features
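Once roughly 200 feedback signals exist, Phase 3 re-ranks candidates with LightGBM's LambdaRank objective. The snippet below is a toy sketch of that pattern using synthetic data and the LGBMRanker API; the 9-feature shape mirrors the description above, but nothing else here is the project's real training code.

```python
import numpy as np
from lightgbm import LGBMRanker  # pip install lightgbm

# Synthetic stand-in: ~200 feedback signals, 9 personalized features each.
rng = np.random.default_rng(0)
X = rng.random((200, 9))              # feature vectors for (query, memory) pairs
y = rng.integers(0, 3, size=200)      # graded relevance derived from feedback
groups = [10] * 20                    # 20 recall queries x 10 candidates each

ranker = LGBMRanker(objective="lambdarank", n_estimators=50)
ranker.fit(X, y, group=groups)

# Re-rank 5 candidate memories for a new recall query:
candidates = rng.random((5, 9))
order = np.argsort(-ranker.predict(candidates))
print("personalized order:", order.tolist())
```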
| Concern | SuperLocalMemory v2.7 | Cloud-Based Alternatives |
|---|---|---|
| Where is learning data? | ~/.claude-memory/learning.db on YOUR machine | Their servers, their terms |
| Who processes your behavior? | Local gradient boosting (no LLM, no GPU) | Cloud LLMs process your data |
| Right to erasure (GDPR Art. 17)? | slm learning reset — one command, instant | Submit a request, wait weeks |
| Data portability? | Copy the SQLite file | Vendor lock-in |
| Telemetry? | Zero. Absolutely none. | Usage analytics, behavior tracking |
Your learning data is stored separately from your memories. Delete learning.db and your memories are untouched. Delete memory.db and your learning patterns are untouched. Full data sovereignty.
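Because the learning and memory stores are separate SQLite files under ~/.claude-memory, erasure and portability are plain file operations. A conceptual sketch (the backup path is made up for the example):

```python
import shutil
from pathlib import Path

memory_dir = Path.home() / ".claude-memory"

# Erasure: removing learning.db wipes behavioral data only; memory.db is untouched.
learning_db = memory_dir / "learning.db"
if learning_db.exists():
    learning_db.unlink()

# Portability: your memories are one SQLite file you can copy anywhere.
backup_dir = Path.home() / "slm-backup"   # hypothetical destination
backup_dir.mkdir(exist_ok=True)
shutil.copy2(memory_dir / "memory.db", backup_dir / "memory.db")
```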
Every component is grounded in peer-reviewed research, adapted for local-first operation:
| Component | Research Basis |
|---|---|
| Two-stage retrieval pipeline | BM25 → re-ranker (eKNOW 2025) |
| Adaptive cold-start ranking | Hierarchical meta-learning (LREC 2024) |
| Time-weighted sequence mining | TSW-PrefixSpan (IEEE 2020) |
| Bayesian confidence scoring | MACLA (arXiv:2512.18950) |
| LightGBM LambdaRank | Pairwise ranking (Burges 2010, MO-LightGBM SIGIR 2025) |
| Privacy-preserving feedback | Zero-communication design — stronger than differential privacy (ADPMF, IPM 2024) |
| Tool | Purpose |
|---|---|
| memory_used | Tell the AI which recalled memories were useful — trains the ranking model |
| get_learned_patterns | See what the system has learned about your preferences |
| correct_pattern | Fix a wrong pattern — your correction overrides with maximum confidence |
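Under the hood, a signal like memory_used only needs a local append to learning.db. The table name and columns below are assumptions made for illustration; only the file location comes from this README.

```python
import sqlite3
import time
from pathlib import Path

LEARNING_DB = Path.home() / ".claude-memory" / "learning.db"

def record_useful(memory_ids: list[int], query: str) -> None:
    """Append ranking-feedback rows locally (hypothetical schema)."""
    con = sqlite3.connect(LEARNING_DB)
    con.execute("""CREATE TABLE IF NOT EXISTS feedback (
                       memory_id INTEGER, query TEXT, useful INTEGER, ts REAL)""")
    con.executemany("INSERT INTO feedback VALUES (?, ?, 1, ?)",
                    [(mid, query, time.time()) for mid in memory_ids])
    con.commit()
    con.close()

# Same idea as the CLI below: slm useful 42 87
record_useful([42, 87], query="auth bug")
```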
slm useful 42 87 # Mark memories as useful (ranking feedback)
slm patterns list # See learned tech preferences
slm learning status # Learning system diagnostics
slm learning reset # Delete all behavioral data (memories preserved)
slm engagement # Local engagement health metrics

Upgrade: npm install -g superlocalmemory@latest — Learning dependencies install automatically.
Learning System Guide → | Upgrade Guide → | Full Changelog
Previous: v2.6.5 — Interactive Knowledge Graph
- Fully interactive visualization with zoom, pan, click-to-explore (Cytoscape.js)
- 6 layout algorithms, smart cluster filtering, 10,000+ node performance
- Mobile & accessibility support: touch gestures, keyboard nav, screen reader
Previous: v2.6 — Security & Scale
SuperLocalMemory is now production-hardened with security, performance, and scale improvements:
- Trust Enforcement — Bayesian scoring actively protects your memory. Agents with trust below 0.3 are blocked from write/delete operations.
- Profile Isolation — Memory profiles fully sandboxed. Zero cross-profile data leakage.
- Rate Limiting — Protects against memory flooding from misbehaving agents.
- HNSW-Accelerated Graphs — Knowledge graph edge building uses HNSW index for faster construction at scale.
- Hybrid Search Engine — Combined semantic + FTS5 + graph retrieval for maximum accuracy.
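As a mental model for the hybrid engine, think of a weighted fusion of per-channel scores. The weights and the assumption that each score is pre-normalized to [0, 1] are illustrative, not the engine's actual formula.

```python
def hybrid_score(semantic: float, fts5: float, graph: float,
                 weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Blend semantic similarity, FTS5 keyword relevance, and graph proximity.
    Inputs are assumed normalized to [0, 1]; weights are placeholders."""
    w_sem, w_fts, w_graph = weights
    return w_sem * semantic + w_fts * fts5 + w_graph * graph

# A strong keyword match that is only a weak graph neighbor:
print(round(hybrid_score(semantic=0.62, fts5=0.91, graph=0.10), 3))  # 0.603
```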
v2.5 highlights (included): Real-time event stream, WAL-mode concurrent writes, agent tracking, memory provenance, 28 API endpoints.
Upgrade: npm install -g superlocalmemory@latest
Interactive Architecture Diagram | Architecture Doc | Full Changelog
Every time you start a new Claude session:
You: "Remember that authentication bug we fixed last week?"
Claude: "I don't have access to previous conversations..."
You: *sighs and explains everything again*
AI assistants forget everything between sessions. You waste time re-explaining your:
- Project architecture
- Coding preferences
- Previous decisions
- Debugging history
# Install in one command
npm install -g superlocalmemory
# Save a memory
superlocalmemoryv2:remember "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h"
# Later, in a new session...
superlocalmemoryv2:recall "auth bug"
# ✓ Found: "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h"

Your AI now remembers everything. Forever. Locally. For free.
npm install -g superlocalmemory

Or clone manually:
git clone https://github.com/varun369/SuperLocalMemoryV2.git && cd SuperLocalMemoryV2 && ./install.sh

Both methods auto-detect and configure 17+ IDEs and AI tools — Cursor, VS Code/Copilot, Codex, Claude, Windsurf, Gemini CLI, JetBrains, and more.
superlocalmemoryv2:status
# ✓ Database: OK (0 memories)
# ✓ Graph: Ready
# ✓ Patterns: Ready

That's it. No Docker. No API keys. No cloud accounts. No configuration.
# Start the interactive web UI
python3 ~/.claude-memory/ui_server.py
# Opens at http://localhost:8765
# Features: Timeline, search, interactive graph, statistics

| Scenario | Without Memory | With SuperLocalMemory |
|---|---|---|
| New Claude session | Re-explain entire project | recall "project context" → instant context |
| Debugging | "We tried X last week..." starts over | Knowledge graph shows related past fixes |
| Code preferences | "I prefer React..." every time | Pattern learning knows your style |
| Multi-project | Context constantly bleeds | Separate profiles per project |
Not another simple key-value store. SuperLocalMemory implements cutting-edge memory architecture backed by peer-reviewed research:
- PageIndex (Meta AI) → Hierarchical memory organization
- GraphRAG (Microsoft) → Knowledge graph with auto-clustering
- xMemory (Stanford) → Identity pattern learning
- A-RAG → Multi-level retrieval with context awareness
- LambdaRank (Burges 2010, MO-LightGBM SIGIR 2025) → Adaptive re-ranking (v2.7)
- TSW-PrefixSpan (IEEE 2020) → Time-weighted workflow pattern mining (v2.7)
- MACLA (arXiv:2512.18950) → Bayesian temporal confidence scoring (v2.7)
- FCS (LREC 2024) → Hierarchical cold-start mitigation (v2.7)
The only open-source implementation combining all eight approaches — entirely locally.
View Interactive Architecture Diagram — Click any layer for details, research references, and file paths.
┌─────────────────────────────────────────────────────────────┐
│ Layer 9: VISUALIZATION (v2.2+) │
│ Interactive dashboard: timeline, graph explorer, analytics │
├─────────────────────────────────────────────────────────────┤
│ Layer 8: HYBRID SEARCH (v2.2+) │
│ Combines: Semantic + FTS5 + Graph traversal │
├─────────────────────────────────────────────────────────────┤
│ Layer 7: UNIVERSAL ACCESS │
│ MCP + Skills + CLI (works everywhere) │
│ 17+ IDEs with single database │
├─────────────────────────────────────────────────────────────┤
│ Layer 6: MCP INTEGRATION │
│ Model Context Protocol: 12 tools, 6 resources, 2 prompts │
│ Auto-configured for Cursor, Windsurf, Claude │
├─────────────────────────────────────────────────────────────┤
│ Layer 5½: ADAPTIVE LEARNING (v2.7 — NEW) │
│ Three-layer learning: tech prefs + project context + flow │
│ LightGBM LambdaRank re-ranking (fully local, no cloud) │
│ Research: eKNOW 2025, MACLA, TSW-PrefixSpan, LREC 2024 │
├─────────────────────────────────────────────────────────────┤
│ Layer 5: SKILLS LAYER │
│ 7 universal slash-commands for AI assistants │
│ Compatible with Claude Code, Continue, Cody │
├─────────────────────────────────────────────────────────────┤
│ Layer 4: PATTERN LEARNING + MACLA │
│ Bayesian confidence scoring (arXiv:2512.18950) │
│ "You prefer React over Vue" (73% confidence) │
├─────────────────────────────────────────────────────────────┤
│ Layer 3: KNOWLEDGE GRAPH + HIERARCHICAL CLUSTERING │
│ Recursive Leiden algorithm: "Python" → "FastAPI" → "Auth" │
│ Community summaries with TF-IDF structured reports │
├─────────────────────────────────────────────────────────────┤
│ Layer 2: HIERARCHICAL INDEX │
│ Tree structure for fast navigation │
│ O(log n) lookups instead of O(n) scans │
├─────────────────────────────────────────────────────────────┤
│ Layer 1: RAW STORAGE │
│ SQLite + Full-text search + TF-IDF vectors │
│ Compression: 60-96% space savings │
└─────────────────────────────────────────────────────────────┘
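To make Layers 1 and 2 concrete, here is a self-contained example of SQLite FTS5 full-text storage, the kind of primitive the raw-storage layer builds on. The table and column names are invented for the example and are not SuperLocalMemory's actual schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for the on-disk store
con.execute("CREATE VIRTUAL TABLE memories USING fts5(content, tags)")
con.execute("INSERT INTO memories VALUES (?, ?)",
            ("Fixed auth bug - JWT tokens were expiring too fast, increased to 24h",
             "auth,jwt"))
con.commit()

# FTS5 MATCH performs indexed keyword search; bm25() is SQLite's built-in rank.
rows = con.execute(
    "SELECT content, bm25(memories) FROM memories WHERE memories MATCH ? "
    "ORDER BY bm25(memories)",
    ("auth bug",),
).fetchall()
for content, score in rows:
    print(round(score, 2), content)
```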
- Adaptive Learning System — Learns your tech preferences, workflow patterns, and project context. Personalizes recall ranking using local ML (LightGBM). Zero cloud dependency. New in v2.7
- Knowledge Graphs — Automatic relationship discovery. Interactive visualization with zoom, pan, click.
- Pattern Learning — Learns your coding preferences and style automatically.
- Multi-Profile Support — Isolated contexts for work, personal, clients. Zero context bleeding.
- Hybrid Search — Semantic + FTS5 + Graph retrieval combined for maximum accuracy.
- Visualization Dashboard — Web UI for timeline, search, graph exploration, analytics.
- Framework Integrations — Use with LangChain and LlamaIndex applications.
- Real-Time Events — Live notifications via SSE/WebSocket/Webhooks when memories change.
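For example, a small script could watch the event stream and react when memories change. The SSE endpoint path below is hypothetical (only the localhost:8765 dashboard port appears earlier in this README), so treat it as the shape of the integration rather than a documented API.

```python
import requests  # pip install requests

STREAM_URL = "http://localhost:8765/events"  # hypothetical SSE path; check the API docs

with requests.get(STREAM_URL, stream=True, timeout=60) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if line and line.startswith("data:"):
            print("memory event:", line[len("data:"):].strip())
```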
SuperLocalMemory V2 is the ONLY memory system that works across ALL your tools:
| Tool | Integration | How It Works |
|---|---|---|
| Claude Code | ✅ Skills + MCP | /superlocalmemoryv2:remember |
| Cursor | ✅ MCP + Skills | AI uses memory tools natively |
| Windsurf | ✅ MCP + Skills | Native memory access |
| Claude Desktop | ✅ MCP | Built-in support |
| OpenAI Codex | ✅ MCP + Skills | Auto-configured (TOML) |
| VS Code / Copilot | ✅ MCP + Skills | .vscode/mcp.json |
| Continue.dev | ✅ MCP + Skills | /slm-remember |
| Cody | ✅ Custom Commands | /slm-remember |
| Gemini CLI | ✅ MCP + Skills | Native MCP + skills |
| JetBrains IDEs | ✅ MCP | Via AI Assistant settings |
| Zed Editor | ✅ MCP | Native MCP tools |
| Aider | ✅ Smart Wrapper | aider-smart with context |
| Any Terminal | ✅ Universal CLI | slm remember "content" |
- MCP (Model Context Protocol) — Auto-configured for Cursor, Windsurf, Claude Desktop
  - AI assistants get natural access to your memory
  - No manual commands needed
  - "Remember that we use FastAPI" just works
- Skills & Commands — For Claude Code, Continue.dev, Cody
  - /superlocalmemoryv2:remember in Claude Code
  - /slm-remember in Continue.dev and Cody
  - Familiar slash command interface
- Universal CLI — Works in any terminal or script
  - slm remember "content" - Simple, clean syntax
  - slm recall "query" - Search from anywhere
  - aider-smart - Aider with auto-context injection
All three methods use the SAME local database. No data duplication, no conflicts.
Complete setup guide for all tools →
| Solution | Free Tier Limits | Paid Price | What's Missing |
|---|---|---|---|
| Mem0 | 10K memories, limited API | Usage-based | No pattern learning, not local |
| Zep | Limited credits | $50/month | Credit system, cloud-only |
| Supermemory | 1M tokens, 10K queries | $19-399/mo | Not local, no graphs |
| Personal.AI | ❌ No free tier | $33/month | Cloud-only, closed ecosystem |
| Letta/MemGPT | Self-hosted (complex) | TBD | Requires significant setup |
| SuperLocalMemory V2 | Unlimited | $0 forever | Nothing. |
| Feature | Mem0 | Zep | Khoj | Letta | SuperLocalMemory V2 |
|---|---|---|---|---|---|
| Works in Cursor | Cloud Only | ❌ | ❌ | ❌ | ✅ Local |
| Works in Windsurf | Cloud Only | ❌ | ❌ | ❌ | ✅ Local |
| Works in VS Code | 3rd Party | ❌ | Partial | ❌ | ✅ Native |
| Universal CLI | ❌ | ❌ | ❌ | ❌ | ✅ |
| Multi-Layer Architecture | ❌ | ❌ | ❌ | ❌ | ✅ |
| Pattern Learning | ❌ | ❌ | ❌ | ❌ | ✅ |
| Adaptive ML Ranking | Cloud LLM | ❌ | ❌ | ❌ | ✅ Local ML |
| Knowledge Graphs | ✅ | ✅ | ❌ | ❌ | ✅ |
| 100% Local | ❌ | ❌ | Partial | Partial | ✅ |
| GDPR by Design | ❌ | ❌ | ❌ | ❌ | ✅ |
| Zero Setup | ❌ | ❌ | ❌ | ❌ | ✅ |
| Completely Free | Limited | Limited | Partial | ✅ | ✅ |
SuperLocalMemory V2 is the ONLY solution that:
- ✅ Learns and adapts locally — no cloud LLM needed for personalization
- ✅ Works across 17+ IDEs and CLI tools
- ✅ Remains 100% local (no cloud dependencies)
- ✅ GDPR Article 17 compliant — one-command data erasure
- ✅ Completely free with unlimited memories
See full competitive analysis →
All numbers measured on real hardware (Apple M4 Pro, 24GB RAM). No estimates — real benchmarks.
| Database Size | Median Latency | P95 Latency |
|---|---|---|
| 100 memories | 10.6ms | 14.9ms |
| 500 memories | 65.2ms | 101.7ms |
| 1,000 memories | 124.3ms | 190.1ms |
For typical personal use (under 500 memories), search results return faster than you blink.
| Scenario | Writes/sec | Errors |
|---|---|---|
| 1 AI tool writing | 204/sec | 0 |
| 2 AI tools simultaneously | 220/sec | 0 |
| 5 AI tools simultaneously | 130/sec | 0 |
WAL mode + serialized write queue = zero "database is locked" errors, ever.
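Those two ingredients are standard SQLite techniques: WAL journaling so readers never block the writer, plus a single serialized write queue. A minimal sketch of the pattern (not the project's actual code):

```python
import queue
import sqlite3
import threading

db = sqlite3.connect("memory.db", check_same_thread=False)
db.execute("PRAGMA journal_mode=WAL")  # readers no longer block the writer
db.execute("CREATE TABLE IF NOT EXISTS memories (content TEXT)")

write_queue: queue.Queue = queue.Queue()

def writer_loop() -> None:
    """Single writer thread: every INSERT funnels through one queue,
    so concurrent AI tools never see 'database is locked'."""
    while True:
        sql, params = write_queue.get()
        db.execute(sql, params)
        db.commit()
        write_queue.task_done()

threading.Thread(target=writer_loop, daemon=True).start()

# Any number of tools/threads can enqueue writes safely:
write_queue.put(("INSERT INTO memories (content) VALUES (?)", ("Fixed auth bug",)))
write_queue.join()  # block until the queued write has been committed
```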
10,000 memories = 13.6 MB on disk (~1.4 KB per memory). Your entire AI memory history takes less space than a photo.
| Memories | Build Time |
|---|---|
| 100 | 0.28s |
| 1,000 | 10.6s |
Leiden clustering discovers 6-7 natural topic communities automatically.
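Those communities come from Leiden clustering over the memory graph (run recursively for the hierarchy). A tiny illustration with the python-igraph and leidenalg packages on toy data:

```python
import igraph as ig     # pip install python-igraph
import leidenalg as la  # pip install leidenalg

# Toy memory graph: nodes are memories, edges are discovered relationships.
edges = [(0, 1), (1, 2), (0, 2),   # a tight "auth" cluster
         (3, 4), (4, 5), (3, 5),   # a tight "frontend" cluster
         (2, 3)]                   # one weak cross-link
g = ig.Graph(edges=edges)

partition = la.find_partition(g, la.ModularityVertexPartition)
print("communities:", len(partition))        # expect 2 on this toy graph
print("membership:", partition.membership)   # e.g. [0, 0, 0, 1, 1, 1]
```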
# Memory Operations
superlocalmemoryv2:remember "content" --tags tag1,tag2 # Save memory
superlocalmemoryv2:recall "search query" # Search
superlocalmemoryv2:list # Recent memories
superlocalmemoryv2:status # System health
# Profile Management
superlocalmemoryv2:profile list # Show all profiles
superlocalmemoryv2:profile create <name> # New profile
superlocalmemoryv2:profile switch <name> # Switch context
# Knowledge Graph
python ~/.claude-memory/graph_engine.py build # Build graph
python ~/.claude-memory/graph_engine.py stats # View clusters
# Pattern Learning
python ~/.claude-memory/pattern_learner.py update # Learn patterns
python ~/.claude-memory/pattern_learner.py context 0.5 # Get identity
# Visualization Dashboard
python ~/.claude-memory/ui_server.py # Launch web UI

| Guide | Description |
|---|---|
| Quick Start | Get running in 5 minutes |
| Installation | Detailed setup instructions |
| Visualization Dashboard | Interactive web UI guide |
| Interactive Graph | Graph exploration guide (NEW v2.6.5) |
| Framework Integrations | LangChain & LlamaIndex setup |
| Knowledge Graph | How clustering works |
| Pattern Learning | Identity extraction |
| API Reference | Python API documentation |
We welcome contributions! See CONTRIBUTING.md for guidelines.
Areas for contribution:
- Additional pattern categories
- Performance optimizations
- Integration with more AI assistants
- Documentation improvements
If SuperLocalMemory saves you time, consider supporting its development:
- ⭐ Star this repo — helps others discover it
- 🐛 Report bugs — open an issue
- 💡 Suggest features — start a discussion
- ☕ Buy me a coffee — buymeacoffee.com/varunpratah
- 💸 PayPal — paypal.me/varunpratapbhardwaj
- 💖 Sponsor — GitHub Sponsors
MIT License — use freely, even commercially. Just include the license.
Varun Pratap Bhardwaj — Solution Architect
Building tools that make AI actually useful for developers.
100% local. 100% private. 100% yours.