## Summary

The MemOS 2.0.14 HTTP API (server mode, Docker) silently accepts requests with arbitrary `mem_cube_id` values but never creates the cube. The data is stored in Qdrant but is not retrievable via `/product/search`. This is surprising behavior for integrators building multi-tenant / multi-cube systems on top of MemOS server mode.
## Reproduction
```shell
# 1. Verify the cube does NOT exist
curl -s -X POST http://localhost:8004/product/exist_mem_cube_id \
  -H 'Content-Type: application/json' \
  -d '{"user_id":"user_alice","mem_cube_id":"my_custom_cube"}'
# → {"data": {"my_custom_cube": false}}

# 2. Add a memory referencing the non-existent cube
curl -s -X POST http://localhost:8004/product/add \
  -H 'Content-Type: application/json' \
  -d '{"messages":[{"role":"user","content":"I like blue"}],"user_id":"user_alice","mem_cube_id":"my_custom_cube"}'
# → 200 OK, response says cube_id="my_custom_cube"

# 3. Search for the supposedly-stored memory
curl -s -X POST http://localhost:8004/product/search \
  -H 'Content-Type: application/json' \
  -d '{"query":"color","user_id":"user_alice","mem_cube_id":"my_custom_cube","top_k":5}'
# → 200 OK, but text_mem: [], pref_mem: [], all empty
```
The data IS in Qdrant (verified via `/collections/{name}/points/scroll`), but the search engine apparently looks up the cube in a separate registry (the Neo4j tree structure) where the cube was never registered.
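The direct check against Qdrant can be sketched as follows (Qdrant's scroll API on its default port 6333; the collection name is deployment-specific, and the `memory` payload field here is illustrative, not necessarily MemOS's actual schema). Extracting the payload from a captured scroll response shows the text is present even though `/product/search` returns nothing:

```shell
# Live check (replace <collection> with the collection MemOS created):
#   curl -s -X POST http://localhost:6333/collections/<collection>/points/scroll \
#     -H 'Content-Type: application/json' \
#     -d '{"limit": 3, "with_payload": true}'
# Scroll responses nest points under result.points; pulling payload fields
# out of a captured reply confirms the data is there:
resp='{"result":{"points":[{"id":1,"payload":{"memory":"I like blue"}}]},"status":"ok"}'
echo "$resp" | grep -o '"memory":"[^"]*"'
# → "memory":"I like blue"
```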
## Root Cause Analysis
Investigation of `/app/src/memos/api/handlers/` shows:

- `AddHandler` writes to Qdrant and the Neo4j tree, but if the cube has not been registered, no tree node is created for the new memory.
- `SearchHandler` traverses the tree first, then resolves hits in Qdrant. An empty tree means empty search results, even though the embeddings exist.
- `MOSCore.create_cube_for_user(cube_name, owner_id, cube_id)` exists in `/app/src/memos/mem_os/core.py` but is not exposed via any HTTP endpoint. The 24 endpoints listed in `/openapi.json` include cube lookup (`/product/exist_mem_cube_id`) but no cube creation:
```
/product/add
/product/search
/product/get_all
/product/exist_mem_cube_id
/product/delete_memory
... (24 total, none for create_cube)
```
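The absence is easy to re-check against a live server with `curl -s http://localhost:8004/openapi.json`; the loop below scans the five routes shown above as a stand-in for the full dump:

```shell
# Scan known routes for anything cube-creation-related; nothing matches.
paths='/product/add /product/search /product/get_all /product/exist_mem_cube_id /product/delete_memory'
for p in $paths; do
  case "$p" in
    *create*) echo "cube creation route: $p" ;;
  esac
done
echo "scan done"
# → only "scan done" is printed; no create route exists
```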
## Impact

For users running MemOS as a remote server (the Docker compose pattern documented in the README), there is no way via HTTP to:

- Pre-create cubes for multi-tenant isolation
- Discover that a `mem_cube_id` doesn't exist before `/add` happily accepts it
- Get an explicit error when a search fails because of a missing cube
This silently breaks multi-cube architectures and is hard to debug because:

- `/add` returns 200 with the expected `cube_id` in the response
- Search returns 200 with empty results (it looks like nothing matched)
- A direct vector search against Qdrant returns the data correctly
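Until the server validates, the only defense is a client-side pre-flight against `/product/exist_mem_cube_id` before every `/add`. A sketch, with a captured reply standing in for the live call whose shape is shown in the reproduction above:

```shell
# Live call:
#   curl -s -X POST http://localhost:8004/product/exist_mem_cube_id \
#     -H 'Content-Type: application/json' \
#     -d '{"user_id":"user_alice","mem_cube_id":"my_custom_cube"}'
reply='{"data": {"my_custom_cube": false}}'
if echo "$reply" | grep -q '"my_custom_cube": true'; then
  echo "cube registered; safe to /add"
else
  echo "cube NOT registered; /add would store unreachable data" >&2
fi
```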
## Workarounds (what we did)

- Drop `mem_cube_id` in our integration and use `user_id` as the implicit cube key
- For tier-based isolation, suffix the `user_id` (e.g. `user_alice_legal_isolated`); this works because MemOS does isolate internally by `user_id`

Both work, but neither uses MemOS's intended multi-cube design.
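For reference, the suffix workaround is nothing more than string composition on the caller's side (the tier names and `_isolated` suffix are our convention, not a MemOS concept):

```shell
# Compose the tier-isolated user id that we pass as user_id instead of
# relying on mem_cube_id. The suffix scheme is ours, not part of MemOS.
user_id="user_alice"
tier="legal"
effective_user="${user_id}_${tier}_isolated"
echo "$effective_user"
# → user_alice_legal_isolated
```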
## Suggested API Additions

Three options, listed in increasing order of robustness:

### Option 1 (Minimum fix): Auto-create on `/add`

When `/product/add` receives a `mem_cube_id` for a cube that doesn't exist, auto-create it, reusing `MOSCore.create_cube_for_user(cube_name=mem_cube_id, owner_id=user_id, cube_id=mem_cube_id)`.
### Option 2 (Explicit): New endpoint `POST /product/create_cube`

```
POST /product/create_cube
body:     {"cube_id": "...", "owner_id": "...", "cube_name": "..."}
response: {"code": 200, "data": {"cube_id": "..."}}
```
### Option 3 (Robust): Validation + clear error

If a `mem_cube_id` for a non-existent cube is passed to `/add` or `/search`, return:

```json
{"code": 404, "message": "Cube 'my_custom_cube' does not exist. Create it via POST /product/create_cube or omit mem_cube_id to use the default cube."}
```
We recommend implementing both Option 2 (explicit creation API) and Option 3 (clear errors).
## Reference Integration

We have built a MemoryProvider plugin that integrates MemOS over HTTP in server mode. It currently uses the `user_id`-suffix workaround for tier-based isolation, with comments referencing this issue. Once the cube creation API lands, we plan to switch back to using `mem_cube_id` directly. The plugin includes:
- Multi-cube routing (planned; currently single-cube via `user_id`)
- Tier 3 audit logging
- Circuit breaker
- Background prefetch + sync
Happy to contribute the plugin to your `apps/` directory as `memos-remote-hermes-plugin` (a complement to the local-SQLite `memos-local-hermes-plugin`) if useful.
## Versions

- MemOS: 2.0.14 (built from `main`, 2026-05-11)
- Deployment: Docker compose (memos-server + memos-neo4j 5.26 + memos-qdrant 1.15)
- Embedding: SiliconFlow BAAI/bge-m3 (1024 dim)
- LLM: external (via local reverse proxy)
- Client: multi-machine server-mode deployment