[research] mem0 — persistent cross-run agent memory for ShellForge swarms #56

@jpleva91

Description

mem0 — Universal memory layer for AI agents

What it does: mem0 extracts structured facts from agent conversations and stores them in a local vector store (default: SQLite + local embeddings). On subsequent runs it retrieves the relevant memories and injects them into the prompt; the project reports ~90% token reduction versus full-context replay and ~26% higher accuracy than OpenAI's built-in memory on the LOCOMO benchmark. It supports Ollama as the LLM backend (no cloud required) and ships Python and TypeScript/Node.js SDKs. Memories can be scoped per-user, per-session, or per-agent.
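The add/search/inject flow can be illustrated with a toy sketch. This is a deliberate simplification, not the real mem0 SDK: real mem0 extracts facts with an LLM and ranks by vector similarity, while this stand-in scores by naive keyword overlap. The `ToyMemory` class and all names in it are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ToyMemory:
    """Toy stand-in for mem0's add/search pattern (keyword overlap, not embeddings)."""
    store: list = field(default_factory=list)

    def add(self, text: str, user_id: str) -> None:
        # Each memory is scoped by an id (mem0 also supports session/agent scopes).
        self.store.append({"text": text, "user_id": user_id})

    def search(self, query: str, user_id: str, limit: int = 3) -> list[str]:
        # Rank this scope's memories by word overlap with the query.
        q = set(query.lower().split())
        scored = [
            (len(q & set(m["text"].lower().split())), m["text"])
            for m in self.store
            if m["user_id"] == user_id
        ]
        return [text for score, text in sorted(scored, reverse=True)[:limit] if score > 0]

mem = ToyMemory()
mem.add("QA agent already triaged src/parser.py", user_id="qa-agent")
mem.add("Report agent summarized week 42", user_id="report-agent")
print(mem.search("which files were triaged", user_id="qa-agent"))
# → ['QA agent already triaged src/parser.py']
```

Note the scoping: the qa-agent query never sees the report-agent's memories, which is the property that lets each ShellForge agent accumulate its own context.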

Why it matters for ShellForge: ShellForge agents are currently stateless — each shellforge agent run starts cold. The QA agent re-discovers the same test gaps; the report agent forgets what it reported last week. mem0 would let the swarm accumulate institutional knowledge: the QA agent remembers which files it already triaged, the report agent builds on prior summaries, and new agents can bootstrap from stored context. With Ollama-native support, it fits the local-first constraint perfectly. The Dagu DAG workflow is a natural integration point — a mem0 search step before agent invocation and a mem0 add step after completion.
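The Dagu integration point described above might be sketched as a three-step DAG: recall before the agent runs, persist after it completes. The step layout is illustrative, and the mem0_recall.py / mem0_store.py wrapper scripts are hypothetical (assumed thin wrappers around the mem0 Python SDK), not part of ShellForge or mem0:

```yaml
# Hypothetical Dagu DAG sketch: memory recall -> agent run -> memory store.
steps:
  - name: recall
    command: python scripts/mem0_recall.py --agent qa --query "$TASK" --out /tmp/memories.json
  - name: run-agent
    command: shellforge agent run qa --context /tmp/memories.json
    depends:
      - recall
  - name: store
    command: python scripts/mem0_store.py --agent qa --transcript /tmp/qa_run.log
    depends:
      - run-agent
```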

GitHub: https://github.com/mem0ai/mem0 ⭐ 50,000+

License: Apache 2.0 ✅

Rough integration effort: Moderate — wrap shellforge agent invocations with mem0 add / search calls; add a memory: stanza to agents.yaml and agentguard.yaml; configure local Ollama as the mem0 LLM backend.
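One possible shape for the proposed memory: stanza in agents.yaml. Every key name here is illustrative, not an existing ShellForge or mem0 schema:

```yaml
# Hypothetical memory: stanza for agents.yaml (key names are assumptions).
agents:
  qa:
    memory:
      enabled: true
      scope: agent          # per-user | per-session | per-agent
      backend:
        llm: ollama         # local LLM for fact extraction, per the local-first constraint
        vector_store: sqlite
```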


Labels: P3 (low priority / nice to have), enhancement (new feature or request)
