Add MemMachine memory integration for NeMo Agent toolkit #1460
base: develop
Conversation
- Add MemMachine memory plugin integration
- Rename package from memmachine to nvidia_nat_memmachine to match NAT naming convention
- Support both hosted API and self-hosted server modes
- Make memmachine-server an optional dependency
- Add httpx as required dependency for hosted API support
- Update all references in notebook and documentation
- Update tests to reflect package rename and memory consolidation behavior
- Fix capitalization: 'NeMo Agent toolkit' (lowercase 't') in all text
- Fix pyproject.toml to use workspace source for nvidia-nat dependency
Check out this pull request on ReviewNB to see visual diffs and provide feedback on Jupyter Notebooks. Powered by ReviewNB.
Walkthrough
This PR introduces a new nvidia_nat_memmachine package that adds MemMachine memory integration to the NeMo Agent toolkit.
Changes
Sequence Diagram(s)
sequenceDiagram
participant Agent as Agent/Application
participant Builder as WorkflowBuilder
participant MemClient as memmachine_memory_client
participant Editor as MemMachineEditor
participant SDK as MemMachine SDK
participant Server as MemMachine Server
Agent->>Builder: register memory with config
Builder->>MemClient: instantiate with config
MemClient->>SDK: create MemMachineClient(base_url)
MemClient->>SDK: get_or_create_project(org_id, project_id)
SDK->>Server: API request
Server-->>SDK: project response
MemClient->>Editor: initialize with client
MemClient-->>Builder: yield memory_editor
Agent->>Editor: add_items(conversation/memory)
Editor->>Editor: extract metadata (user, session, agent)
Editor->>SDK: memory.add (episodic)
Editor->>SDK: memory.add (semantic)
SDK->>Server: store memories
Agent->>Editor: search(query, top_k)
Editor->>SDK: memory.search(query)
SDK->>Server: query memories
Server-->>SDK: episodic + semantic results
SDK-->>Editor: results
Editor->>Editor: reconstruct MemoryItem objects
Editor-->>Agent: list[MemoryItem]
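For readers skimming the diagram, a minimal end-to-end sketch of that flow is shown below. The config fields and editor methods (`add_items`, `search`) are taken from the tests in this PR, but the import paths are assumptions about the package layout, so treat this as an illustration rather than the final API.

import asyncio

from nat.builder.workflow_builder import WorkflowBuilder  # assumed import path
from nat.memory.models import MemoryItem  # assumed import path
from nat.plugins.memmachine.memory import MemMachineMemoryClientConfig  # assumed import path


async def demo() -> None:
    # Point the plugin at a self-hosted MemMachine server (default port used in this PR).
    config = MemMachineMemoryClientConfig(base_url="http://localhost:8080",
                                          org_id="demo_org",
                                          project_id="demo_project")

    async with WorkflowBuilder() as builder:
        await builder.add_memory_client("memmachine_memory", config)
        editor = await builder.get_memory_client("memmachine_memory")

        # Store one conversation turn; metadata carries the MemMachine session/agent IDs.
        await editor.add_items([
            MemoryItem(conversation=[{"role": "user", "content": "I prefer short answers."}],
                       user_id="user_123",
                       memory="User prefers short answers",
                       metadata={"session_id": "s1", "agent_id": "a1"})
        ])

        # Retrieve the most relevant memories for a follow-up query.
        for item in await editor.search(query="answer style preference", top_k=5, user_id="user_123"):
            print(item.memory)


if __name__ == "__main__":
    asyncio.run(demo())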
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed checks (3 passed)
Actionable comments posted: 16
🤖 Fix all issues with AI agents
In `@packages/nvidia_nat_memmachine/pyproject.toml`:
- Around line 16-25: Update package dependency lists: in
packages/nvidia_nat_all/pyproject.toml add "nvidia-nat-memmachine" (no version
constraint) to the dependencies array, and in
packages/nvidia_nat_memmachine/pyproject.toml change the memmachine-server entry
from "memmachine-server~=0.2.2" to "memmachine-server~=0.2.1" so the declared
dependency matches the published version; ensure the dependencies remain sorted
per the file comments.
In
`@packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memmachine_editor.py`:
- Around line 1-9: Add the standard SPDX Apache-2.0 license header to the top of
memmachine_editor.py (above all imports); ensure the exact SPDX short-form line
"SPDX-License-Identifier: Apache-2.0" is present and include any required
copyright owner/year line per project policy so the file containing imports like
asyncio, requests, MemoryType, MemoryEditor, and MemoryItem is properly
licensed.
- Around line 185-196: The add_memory() closure captures outer loop variables
(memory, memory_text, metadata, memory_types) by reference so all scheduled
tasks use the last values; fix by binding those values into the closure before
scheduling (e.g., define add_memory(memory=memory, memory_text=memory_text,
metadata=metadata, memory_types=memory_types): or use functools.partial to bind
arguments) so the memory.add(...) call inside add_memory uses the per-iteration
values when you call task = asyncio.to_thread(add_memory).
- Around line 299-304: The bare except in the episodic_by_conversation loop
around conv_episodes.sort hides real errors; replace it with a targeted
exception handler that catches likely errors (e.g., TypeError, AttributeError,
ValueError) and log the failure including conv_key and the exception before
continuing. Specifically, wrap the conv_episodes.sort(key=lambda e:
e.get("created_at") or e.get("timestamp") or "") call in an except (TypeError,
AttributeError, ValueError) as e block and emit a warning or error via the
module's logger (or logging.warning) referencing conv_key and e so issues aren't
silently swallowed.
In `@packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memory.py`:
- Around line 85-88: The except block in the project-creation fallback (inside
memory.py where the try/except around client/project creation is located)
currently swallows all exceptions with "pass"; replace that with a
logger.exception(...) call to record the full stack trace and context (e.g.,
"Failed to create project, falling back to client usage") so debugging info is
preserved; keep the existing fallback behavior (do not re-raise) but ensure the
exception is logged via logger.exception in the except Exception as e: handler.
In `@packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/register.py`:
- Line 1: Add the standard SPDX Apache-2.0 license header at the top of this
file (register.py) before any code — ensure the file begins with the SPDX
license identifier (e.g., a comment containing "SPDX-License-Identifier:
Apache-2.0") so the header appears above the existing import line "from . import
memory"; update only the file header area to include the required license
comment.
In `@packages/nvidia_nat_memmachine/tests/API_CALL_VERIFICATION.md`:
- Around line 1-5: Add the standard SPDX Apache-2.0 license header as the very
first line of the markdown file (API_CALL_VERIFICATION.md) by inserting
"SPDX-License-Identifier: Apache-2.0" at the top of the file so the file now
begins with that license identifier.
In `@packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py`:
- Around line 1-24: Add the missing SPDX Apache-2.0 license header at the top of
the file (above the docstring or immediately after the shebang) to satisfy
licensing requirements, and mark the integration test with pytest by importing
pytest and decorating the test function (e.g., test_add_and_retrieve) with
`@pytest.mark.integration` so external-service tests are properly classified;
update imports to include "import pytest" and ensure the decorator is applied to
the test function in the script.
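A minimal sketch of what that prompt asks for is shown below. The function name comes from this PR; the `asyncio` marker is added because the test is async (a point raised again later in this review), and the copyright year follows the reviewer's note about using the current year.

# SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
"""Manual add-and-retrieve check against a live MemMachine server."""
import pytest


@pytest.mark.integration  # classify as an external-service test so default runs skip it
@pytest.mark.asyncio
async def test_add_and_retrieve():
    ...  # body unchanged from the PR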
In `@packages/nvidia_nat_memmachine/tests/test_memmachine_api_calls.py`:
- Around line 1-2: Update the SPDX header years in the file's top comment lines:
change the copyright year range in the SPDX-FileCopyrightText line (currently
"2024-2025") to include 2026 (e.g., "2024-2026") and ensure the
SPDX-License-Identifier line remains unchanged; edit the two header comment
lines at the top of the file to reflect the new year range.
- Line 366: The assignment to results from calling editor_with_spy.search is
unused and should be removed to avoid lint warnings; simply call await
editor_with_spy.search(...) without assigning its return, or if you prefer
preserving a placeholder, assign to _ (underscore) instead—update the call site
where results is assigned in the test (the await editor_with_spy.search
invocation) to drop the unused variable.
- Line 50: Update the signature of get_calls to use explicit PEP 604 union
typing for the optional parameter: replace the parameter declaration
method_name: str = None with method_name: str | None, and ensure any related
type hints or usages in the get_calls function body or its callers remain valid
with the new annotation.
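A tiny before/after of that typing change (the signature is reconstructed from the prompt; keeping the default of `None` is assumed):

# Before: implicit optional, which type checkers flag
def get_calls(self, method_name: str = None):
    ...

# After: explicit PEP 604 optional
def get_calls(self, method_name: str | None = None):
    ...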
In `@packages/nvidia_nat_memmachine/tests/test_memmachine_integration.py`:
- Around line 173-184: The for-loop in test_memmachine_integration.py uses an
unused loop variable named "attempt" which triggers a lint warning; rename it to
"_attempt" in both occurrences (the loop that calls memory_client.search with
query="morning work allergy peanuts" and the similar loop around lines ~403-414)
so the intent is preserved and the linter is satisfied, keeping the loop logic
and the await asyncio.sleep(2) retry behavior unchanged.
- Around line 1-2: Update the SPDX header year range in the file's top comment
to include 2026: locate the SPDX lines at the top of the file (the
SPDX-FileCopyrightText and SPDX-License-Identifier header) and change the
copyright year range "2024-2025" to "2024-2026" so the notice accurately
reflects the file modification year.
- Around line 22-25: Update the module docstring in the
tests/test_memmachine_integration.py file to wrap code entities in backticks to
satisfy Vale and docstring rules: surround MemMachine server with `MemMachine`,
the environment variable name with `MEMMACHINE_BASE_URL`, and the test command
with backticks like `pytest tests/test_memmachine_integration.py -v`; ensure any
other inline code-like tokens in that docstring are also backticked.
In `@packages/nvidia_nat_memmachine/tests/test_memory.py`:
- Around line 1-2: Update the SPDX header year range in the file header comment
to include 2026; specifically modify the top two comment lines (the SPDX
copyright and SPDX-License-Identifier block) so the year range reads "2024-2026"
instead of "2024-2025" to reflect the file modification in 2026.
- Around line 155-159: The test test_memmachine_memory_client_config_validation
currently catches a broad Exception when constructing
MemMachineMemoryClientConfig; change the assertion to catch the specific
pydantic validation error by replacing pytest.raises(Exception) with
pytest.raises(ValidationError) and add the necessary import for ValidationError
from pydantic so the test asserts the concrete validation failure from
MemMachineMemoryClientConfig.
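A short sketch of the narrowed assertion requested above; the invalid argument is only illustrative, and the point is raising pydantic's concrete `ValidationError` rather than a bare `Exception`:

import pytest
from pydantic import ValidationError

from nat.plugins.memmachine.memory import MemMachineMemoryClientConfig  # import path assumed


def test_memmachine_memory_client_config_validation():
    # Passing a non-string base_url should fail pydantic validation
    # (assumes base_url is declared as a str field on the config model).
    with pytest.raises(ValidationError):
        MemMachineMemoryClientConfig(base_url=12345)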
🧹 Nitpick comments (7)
packages/nvidia_nat_memmachine/src/nat/meta/pypi.md (1)
20-24: Capitalize "Toolkit" in the heading. Per coding guidelines, titles/headings should use "Toolkit" (capital T) while body text uses "toolkit" (lowercase t). Also ensure the file ends with a single newline.
📝 Suggested fix
-# NVIDIA NeMo Agent toolkit Subpackage
+# NVIDIA NeMo Agent Toolkit Subpackage
 This is a subpackage for MemMachine memory integration in NeMo Agent toolkit.
 For more information about the NVIDIA NeMo Agent toolkit, please visit the [NeMo Agent toolkit GitHub Repo](https://github.com/NVIDIA/NeMo-Agent-Toolkit).
+
packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memory.py (3)
41-42: Add return type annotation and consider documenting the unused `builder` parameter. Per coding guidelines, public APIs require type hints on return values. The function should declare its return type. The `builder` parameter appears unused (flagged by static analysis) — if it's required by the `@register_memory` decorator contract, consider adding a comment or using a `_` prefix to indicate intentional non-use.
📝 Suggested fix
+from collections.abc import AsyncGenerator
+from typing import Any
+
 @register_memory(config_type=MemMachineMemoryClientConfig)
-async def memmachine_memory_client(config: MemMachineMemoryClientConfig, builder: Builder):
+async def memmachine_memory_client(
+    config: MemMachineMemoryClientConfig,
+    builder: Builder,  # Required by @register_memory contract, unused here
+) -> AsyncGenerator[Any, None]:
34-38: Clarify the distinction between `max_retries` and `num_retries` from `RetryMixin`. The class defines `max_retries` for SDK-level retries, while inheriting `num_retries` from `RetryMixin` (used at line 95 for `patch_with_retry`). This dual retry configuration may confuse users. Consider adding a note in the docstring explaining when each is used.
📝 Suggested docstring addition
 """
 Configuration for MemMachine memory client.
 Based on the MemMachine Python SDK as documented at:
 https://github.com/MemMachine/MemMachine/blob/main/docs/examples/python.mdx
 Note: This integration is for local/self-hosted MemMachine instances.
 LLM API keys (e.g., OpenAI) are configured in the MemMachine cfg.yml file,
 not in this client configuration.
+
+Retry behavior:
+- `max_retries`: SDK-level retries for individual MemMachine API calls.
+- `num_retries` (inherited from RetryMixin): Application-level retries with
+  status code filtering, applied to the memory editor wrapper.
 """
92-98: Redundant `isinstance` check — `config` always inherits from `RetryMixin`. Since `MemMachineMemoryClientConfig` inherits from `RetryMixin`, this check is always `True`. Consider removing the check for clarity, or add a comment if this is intentional defensive coding for future flexibility.
📝 Suggested simplification
-    if isinstance(config, RetryMixin):
-        memory_editor = patch_with_retry(
-            memory_editor,
-            retries=config.num_retries,
-            retry_codes=config.retry_on_status_codes,
-            retry_on_messages=config.retry_on_errors
-        )
+    memory_editor = patch_with_retry(
+        memory_editor,
+        retries=config.num_retries,
+        retry_codes=config.retry_on_status_codes,
+        retry_on_messages=config.retry_on_errors
+    )
packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memmachine_editor.py (1)
471-490: Unused variable `delete_semantic` and incomplete bulk deletion path. The variable `delete_semantic` is assigned but never used. If bulk deletion should respect this flag in the future, consider either implementing partial support or removing the variable to avoid confusion.
Proposed fix - remove unused variable
 project_id = kwargs.pop("project_id", None)
 org_id = kwargs.pop("org_id", None)
-delete_semantic = kwargs.pop("delete_semantic_memory", False)
 # Note: MemMachine SDK doesn't have a delete_all method
packages/nvidia_nat_memmachine/memmachine_memory_example.ipynb (1)
386-403: Consider using `httpx` for consistency with coding guidelines. The coding guidelines prefer `httpx` over `requests`. While `requests` works fine here for a simple health check, consider aligning with the codebase convention.
packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py (1)
179-183: Use `logger.exception()` when catching and logging exceptions without re-raising. Per coding guidelines, when catching and logging exceptions without re-raising, use `logger.exception()` to capture the full stack trace. The current approach using `traceback.print_exc()` works but doesn't follow the project's logging conventions.
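To make the convention concrete, here is a minimal sketch of the pattern being asked for; the failing helper is a stand-in for the real test body, not code from this PR.

import logging

logger = logging.getLogger(__name__)


def run_add_and_retrieve_checks() -> None:
    raise RuntimeError("simulated failure")  # stand-in for the real test logic


try:
    run_add_and_retrieve_checks()
except Exception:
    # The error is swallowed (not re-raised), so logger.exception records the full
    # stack trace through the configured handlers, unlike traceback.print_exc().
    logger.exception("Add-and-retrieve check failed")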
[project]
name = "nvidia-nat-memmachine"
dynamic = ["version"]
dependencies = [
  # Keep package version constraints as open as possible to avoid conflicts with other packages. Always define a minimum
  # version when adding a new package. If unsure, default to using `~=` instead of `==`. Does not apply to nvidia-nat packages.
  # Keep sorted!!!
  "nvidia-nat~=1.4",
  "memmachine-server~=0.2.2",
]
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if nvidia-nat-memmachine is listed in nvidia_nat_all dependencies
echo "=== Checking nvidia_nat_all/pyproject.toml ==="
if [ -f "packages/nvidia_nat_all/pyproject.toml" ]; then
echo "File found. Searching for nvidia-nat-memmachine:"
grep -n "nvidia-nat-memmachine" packages/nvidia_nat_all/pyproject.toml || echo "NOT FOUND"
echo ""
echo "Full dependencies section in nvidia_nat_all:"
sed -n '/dependencies = \[/,/\]/p' packages/nvidia_nat_all/pyproject.toml | head -30
else
echo "nvidia_nat_all/pyproject.toml not found"
fi
Repository: NVIDIA/NeMo-Agent-Toolkit
Length of output: 552
🌐 Web query:
memmachine-server latest version PyPI release
💡 Result:
The latest memmachine-server release on PyPI is version 0.2.1, uploaded Dec 13, 2025. [1][2]
Sources:
[1] PyPI project page — memmachine-server.
[2] PyPI user profile (sscargal) release listing.
Add nvidia-nat-memmachine to nvidia_nat_all dependencies and fix memmachine-server version.
The package is missing from packages/nvidia_nat_all/pyproject.toml. Additionally, memmachine-server~=0.2.2 is invalid—the latest available version is 0.2.1. When adding nvidia-nat-memmachine to the all-packages, it should be added without a version constraint since workspace dependencies are managed separately (e.g., "nvidia-nat-memmachine"). Update memmachine-server to ~=0.2.1.
🤖 Prompt for AI Agents
In `@packages/nvidia_nat_memmachine/pyproject.toml` around lines 16 - 25, Update
package dependency lists: in packages/nvidia_nat_all/pyproject.toml add
"nvidia-nat-memmachine" (no version constraint) to the dependencies array, and
in packages/nvidia_nat_memmachine/pyproject.toml change the memmachine-server
entry from "memmachine-server~=0.2.2" to "memmachine-server~=0.2.1" so the
declared dependency matches the published version; ensure the dependencies
remain sorted per the file comments.
To run these tests:
1. Start MemMachine server (and databases)
2. Set MEMMACHINE_BASE_URL environment variable (defaults to http://localhost:8080)
3. Run: pytest tests/test_memmachine_integration.py -v
Wrap code entities in backticks in the module docstring.
This avoids Vale false-positives and matches the docstring rules. As per coding guidelines, please update the formatting.
✍️ Proposed fix
-2. Set MEMMACHINE_BASE_URL environment variable (defaults to http://localhost:8080)
-3. Run: pytest tests/test_memmachine_integration.py -v
+2. Set `MEMMACHINE_BASE_URL` environment variable (defaults to `http://localhost:8080`).
+3. Run: `pytest tests/test_memmachine_integration.py -v`.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
To run these tests:
1. Start MemMachine server (and databases)
2. Set `MEMMACHINE_BASE_URL` environment variable (defaults to `http://localhost:8080`).
3. Run: `pytest tests/test_memmachine_integration.py -v`.
🤖 Prompt for AI Agents
In `@packages/nvidia_nat_memmachine/tests/test_memmachine_integration.py` around
lines 22 - 25, Update the module docstring in the
tests/test_memmachine_integration.py file to wrap code entities in backticks to
satisfy Vale and docstring rules: surround MemMachine server with `MemMachine`,
the environment variable name with `MEMMACHINE_BASE_URL`, and the test command
with backticks like `pytest tests/test_memmachine_integration.py -v`; ensure any
other inline code-like tokens in that docstring are also backticked.
willkill07
left a comment
Thank you for your contribution!
This needs some work, and I'm going to start with the low-hanging fruit and a quick review before addressing the files that GitHub hid by default due to size.
Overall, all comments by coderabbit and additional ones from me need to be addressed.
Additionally, copyright headers for new files should include the current year rather than 2024-2025
I will admit I don't know how many commits behind develop you are, but you should assume a base as being near or close to top of tree.
This is my own opinion here, but I also don't like the idea of mandating a local server. Is there not an API/microservice that can be used instead?
Finally, if you have any questions don't hesitate to ask!
# version when adding a new package. If unsure, default to using `~=` instead of `==`. Does not apply to nvidia-nat packages.
# Keep sorted!!!
"nvidia-nat~=1.4",
"memmachine-server~=0.2.2",
Is there not just a client version? It seems weird to force a locally running instance
Hey @willkill07, thanks for looking over this. We are working on a hosted REST API platform where users can go and just make an API key, but it hasn't been rolled out yet. Once that is ready, we'll make a new PR to give users the option to use that.
# Keep package version constraints as open as possible to avoid conflicts with other packages. Always define a minimum
# version when adding a new package. If unsure, default to using `~=` instead of `==`. Does not apply to nvidia-nat packages.
# Keep sorted!!!
"nvidia-nat~=1.4",
develop has all toolkit package versions at 1.5
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
Is this necessary?
@@ -0,0 +1,490 @@
import asyncio
Missing license
@@ -0,0 +1 @@
from . import memory
(No newline at end of file)
Missing license
@@ -0,0 +1,129 @@
# API Call Verification Tests
Let's not commit AI-generated documents that serve no meaningful purpose
@@ -0,0 +1,14 @@
# SPDX-FileCopyrightText: Copyright (c) 2024-2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This file shouldn't be necessary
@@ -0,0 +1,187 @@
#!/usr/bin/env python3
Missing license header
1. Start MemMachine server (and databases)
2. Set MEMMACHINE_BASE_URL environment variable (defaults to http://localhost:8080)
3. Run: pytest tests/test_memmachine_integration.py -v
It's a non-starter for us to need to manually start these. We have CI components and fixtures should be used to ensure tests will only run if services are running. See other memory examples.
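One common shape for the kind of gating fixture being referred to is sketched below. It is a generic pattern rather than the toolkit's actual fixture; the health endpoint and environment variable simply mirror the defaults used elsewhere in this PR.

import os

import httpx
import pytest


@pytest.fixture(scope="session")
def memmachine_base_url() -> str:
    """Skip MemMachine integration tests when no server is reachable."""
    base_url = os.getenv("MEMMACHINE_BASE_URL", "http://localhost:8080")
    try:
        # Assumes the server exposes a simple /health endpoint, as used later in this review.
        httpx.get(f"{base_url}/health", timeout=5.0).raise_for_status()
    except httpx.HTTPError:
        pytest.skip(f"MemMachine server not reachable at {base_url}")
    return base_url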
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/nvidia_nat_all/pyproject.toml (1)
23-33: Add `nvidia-nat-memmachine` to the workspace sources (and sync `uv.lock`). Since this dependency is a workspace package, it should be listed in `[tool.uv.sources]` to ensure local resolution. Also make sure `uv.lock` is updated to reflect the new dependency.
🧩 Proposed update
 [tool.uv.sources]
 nvidia-nat = { workspace = true }
 nvidia-nat-agno = { workspace = true }
 nvidia-nat-autogen = { workspace = true }
 nvidia-nat-crewai = { workspace = true }
 nvidia-nat-langchain = { workspace = true }
 nvidia-nat-llama-index = { workspace = true }
 nvidia-nat-mcp = { workspace = true }
 nvidia-nat-mem0ai = { workspace = true }
+nvidia-nat-memmachine = { workspace = true }
 nvidia-nat-mysql = { workspace = true }
Based on learnings.
🤖 Fix all issues with AI agents
In
`@packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memmachine_editor.py`:
- Around line 324-331: In the try/except around conv_episodes.sort in
memmachine_editor (the block catching TypeError/AttributeError/ValueError for
conv_key), replace the logger.warning call with logger.exception so the full
stack trace is preserved when the sort fails; keep the same descriptive message
referencing conv_key and that we are "continuing without sorting" but call
logger.exception(...) instead of logger.warning(...).
- Around line 29-45: The docstring for the MemMachineEditor class contains
several metadata keys and code entities that are not formatted as inline code;
update the MemMachineEditor docstring so all code identifiers (e.g., MemoryItem,
and metadata keys `session_id`, `agent_id`, `group_id`, `project_id`, `org_id`,
and the class name `MemMachineClient` reference) are wrapped in backticks to
satisfy Vale/coding guidelines and ensure clarity; keep the explanatory text
unchanged and only modify the docstring formatting around these identifiers.
- Around line 179-190: The closure add_memory currently captures the outer
variable memory by reference, causing deferred asyncio.to_thread tasks to all
use the last memory instance; fix this by binding memory into the closure
signature as a default parameter (e.g., add a parameter like memory=memory) so
each add_memory captures its own memory instance before scheduling; update the
add_memory definition that calls memory.add(...) to include memory as a default
arg consistent with the identical pattern used in the other branch.
In `@packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memory.py`:
- Around line 48-51: The function signature for memmachine_memory_client
currently defines an unused parameter named builder which ruff flags; silence it
by renaming builder to _builder (or alternatively append a per-argument lint
suppression like "# noqa: ARG001") in the memmachine_memory_client definition so
the parameter remains for the `@register_memory` contract but no lint error is
raised.
- Around line 55-62: The project incorrectly requires the server package while
code imports the client: update the dependency in pyproject.toml from
memmachine-server to memmachine-client, and update the ImportError text in
memory.py (the block that catches ImportError for "from memmachine import
MemMachineClient") to instruct installing memmachine-client (e.g., "pip install
memmachine-client") and adjust any package name references/links so they point
to the client package documentation; keep the rest of the error message
structure (include the caught exception e) intact.
In `@packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py`:
- Around line 199-202: Replace the logger.exception call in the except block
that captures "Exception as e" with logger.error(..., exc_info=True) so the
error message is logged with the traceback without duplicating stack traces on
re-raise; specifically, locate the except Exception as e block (where
logger.exception("Error during test execution") is called) and change it to log
the same message using logger.error("Error during test execution",
exc_info=True) before re-raising the exception.
In `@packages/nvidia_nat_memmachine/tests/test_memmachine_integration.py`:
- Around line 68-85: The two pytest fixture functions lack return type
annotations; update test_config_fixture to declare a return type of
MemMachineMemoryClientConfig (i.e., def test_config_fixture(...) ->
MemMachineMemoryClientConfig:) and update test_user_id_fixture to declare a
return type of str (def test_user_id_fixture(...) -> str:); ensure the
MemMachineMemoryClientConfig symbol is imported or referenced correctly in the
test module if not already.
- Around line 88-156: The async test function
test_add_and_retrieve_conversation_memory is missing the pytest-asyncio marker;
add `@pytest.mark.asyncio` above the function (place it directly after
`@pytest.mark.slow`) so pytest-asyncio runs the async test correctly.
- Around line 49-65: Change the broad except Exception in the MemMachineClient
health-check block to catch the specific requests exception type raised by the
client (e.g., requests.exceptions.RequestException or requests.RequestException)
so real errors aren't swallowed; import the exception from requests at top if
needed and keep the same behavior of raising RuntimeError when fail_missing is
true or calling pytest.skip otherwise while still preserving the existing
ImportError handling for the memmachine import and the use of
MemMachineClient.health_check().
♻️ Duplicate comments (2)
packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py (1)
46-209: Avoid running live MemMachine calls in the default test suite.
`test_add_and_retrieve` is still a discovered test and will run without the integration marker, even though it requires a live MemMachine instance. Make it a helper (non-test) or mark it as integration to keep it out of default runs.
✅ Suggested restructuring
-async def test_add_and_retrieve():
+async def _run_add_and_retrieve():
     """Test adding memories and retrieving them."""
 @@
 @pytest.mark.integration
 async def test_add_and_retrieve_integration():
     """Integration test for adding and retrieving memories."""
-    await test_add_and_retrieve()
+    await _run_add_and_retrieve()
 @@
 if __name__ == "__main__":
-    asyncio.run(test_add_and_retrieve())
+    asyncio.run(_run_add_and_retrieve())
As per coding guidelines.
packages/nvidia_nat_memmachine/tests/test_memmachine_integration.py (1)
16-24: Backtick code-like tokens in the module/fixture docstrings. The docstrings still leave code entities and the localhost URL unformatted, which can trigger Vale. Please wrap them in backticks for consistency. As per coding guidelines, please wrap code entities in backticks.
✍️ Proposed fix
-Integration tests for MemMachine memory integration.
+Integration tests for `MemMachine` memory integration.
 @@
-Set MEMMACHINE_BASE_URL environment variable to override default (http://localhost:8080).
+Set `MEMMACHINE_BASE_URL` environment variable to override default (`http://localhost:8080`).
Also applies to: 39-44
class MemMachineEditor(MemoryEditor):
    """
    Wrapper class that implements NAT interfaces for MemMachine Integrations.
    Uses the MemMachine Python SDK (MemMachineClient) as documented at:
    https://github.com/MemMachine/MemMachine/blob/main/docs/examples/python.mdx

    Supports both episodic and semantic memory through the unified SDK interface.

    User needs to add MemMachine SDK ids as metadata to the MemoryItem:
    - session_id
    - agent_id
    - group_id
    - project_id
    - org_id

    Group ID is optional. If not provided, the memory will be added to the 'default' group.
    """
Backtick code identifiers in the MemMachineEditor docstring.
Several code entities/metadata keys are unformatted; wrap them in backticks to avoid Vale issues. As per coding guidelines, please surround code entities in docstrings with backticks.
✍️ Proposed fix
-Wrapper class that implements NAT interfaces for MemMachine Integrations.
-Uses the MemMachine Python SDK (MemMachineClient) as documented at:
+Wrapper class that implements `nat` interfaces for `MemMachine` integrations.
+Uses the `MemMachine` Python SDK (`MemMachineClient`) as documented at:
@@
-User needs to add MemMachine SDK ids as metadata to the MemoryItem:
-- session_id
-- agent_id
-- group_id
-- project_id
-- org_id
+User needs to add `MemMachine` SDK IDs as metadata to the `MemoryItem`:
+- `session_id`
+- `agent_id`
+- `group_id`
+- `project_id`
+- `org_id`
@@
-Group ID is optional. If not provided, the memory will be added to the 'default' group.
+`group_id` is optional. If not provided, the memory will be added to the `default` group.🤖 Prompt for AI Agents
In
`@packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memmachine_editor.py`
around lines 29 - 45, The docstring for the MemMachineEditor class contains
several metadata keys and code entities that are not formatted as inline code;
update the MemMachineEditor docstring so all code identifiers (e.g., MemoryItem,
and metadata keys `session_id`, `agent_id`, `group_id`, `project_id`, `org_id`,
and the class name `MemMachineClient` reference) are wrapped in backticks to
satisfy Vale/coding guidelines and ensure clarity; keep the explanatory text
unchanged and only modify the docstring formatting around these identifiers.
# Capture variables in closure to avoid late binding issues
def add_memory(content=msg_content, role=msg_role, mem_types=memory_types, meta=metadata):
    # Use MemMachine SDK add() method
    # API: memory.add(content, role="user", metadata={}, memory_types=[...])
    # episode_type should be None (defaults to "message") or EpisodeType.MESSAGE
    memory.add(
        content=content,
        role=role,
        metadata=meta if meta else None,
        memory_types=mem_types,
        episode_type=None  # Use default (MESSAGE)
    )
🧩 Analysis chain
🏁 Script executed:
cat -n packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memmachine_editor.py | sed -n '160,210p'
Repository: NVIDIA/NeMo-Agent-Toolkit
Length of output: 3120
🏁 Script executed:
cat -n packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memmachine_editor.py | sed -n '140,165p'
Repository: NVIDIA/NeMo-Agent-Toolkit
Length of output: 1522
🏁 Script executed:
cat -n packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memmachine_editor.py | sed -n '120,160p'
Repository: NVIDIA/NeMo-Agent-Toolkit
Length of output: 2131
🏁 Script executed:
# Check if there are more places with similar pattern
rg "def add_memory" packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memmachine_editor.py -A 5Repository: NVIDIA/NeMo-Agent-Toolkit
Length of output: 721
Bind memory in the per-message closure to prevent data loss.
This closure captures memory by reference, causing all deferred tasks to use the final memory instance from the outer loop. Since asyncio.to_thread() executes the closure later, all messages—regardless of which item they belong to—will be added to the last memory instance, silently corrupting data.
Bind memory as a default argument so each closure captures the correct instance. Note: the identical pattern is correctly implemented in the elif branch (lines 203–208), suggesting this is an oversight.
🐛 Proposed fix
- def add_memory(content=msg_content, role=msg_role, mem_types=memory_types, meta=metadata):
+ def add_memory(
+ content=msg_content,
+ role=msg_role,
+ mem=memory,
+ mem_types=memory_types,
+ meta=metadata,
+ ):
# Use MemMachine SDK add() method
# API: memory.add(content, role="user", metadata={}, memory_types=[...])
# episode_type should be None (defaults to "message") or EpisodeType.MESSAGE
- memory.add(
+ mem.add(
content=content,
role=role,
metadata=meta if meta else None,
memory_types=mem_types,
episode_type=None # Use default (MESSAGE)
)🧰 Tools
🪛 Ruff (0.14.13)
184-184: Function definition does not bind loop variable memory
(B023)
🤖 Prompt for AI Agents
In
`@packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memmachine_editor.py`
around lines 179 - 190, The closure add_memory currently captures the outer
variable memory by reference, causing deferred asyncio.to_thread tasks to all
use the last memory instance; fix this by binding memory into the closure
signature as a default parameter (e.g., add a parameter like memory=memory) so
each add_memory captures its own memory instance before scheduling; update the
add_memory definition that calls memory.add(...) to include memory as a default
arg consistent with the identical pattern used in the other branch.
try:
    conv_episodes.sort(key=lambda e: e.get("created_at") or e.get("timestamp") or "")
except (TypeError, AttributeError, ValueError) as e:
    # Skip sorting if timestamps are missing or incompatible
    logger.warning(
        f"Failed to sort episodes for conversation '{conv_key}': {e}. "
        "Continuing without sorting."
    )
Use logger.exception when swallowing the sort error.
The exception isn’t re-raised; logging policy requires logger.exception to preserve stack traces. As per coding guidelines, log swallowed exceptions with logger.exception().
✍️ Proposed fix
- logger.warning(
+ logger.exception(
f"Failed to sort episodes for conversation '{conv_key}': {e}. "
"Continuing without sorting."
)🤖 Prompt for AI Agents
In
`@packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memmachine_editor.py`
around lines 324 - 331, In the try/except around conv_episodes.sort in
memmachine_editor (the block catching TypeError/AttributeError/ValueError for
conv_key), replace the logger.warning call with logger.exception so the full
stack trace is preserved when the sort fails; keep the same descriptive message
referencing conv_key and that we are "continuing without sorting" but call
logger.exception(...) instead of logger.warning(...).
async def memmachine_memory_client(
    config: MemMachineMemoryClientConfig,
    builder: Builder,  # Required by @register_memory contract, unused here
) -> AsyncGenerator[MemoryEditor, None]:
Silence the unused builder parameter to avoid lint failures.
builder is required by the decorator contract, but ruff will still flag it. Rename to _builder (or add # noqa: ARG001) to keep CI clean.
🛠️ Suggested change
async def memmachine_memory_client(
config: MemMachineMemoryClientConfig,
- builder: Builder, # Required by `@register_memory` contract, unused here
+ _builder: Builder, # Required by `@register_memory` contract
) -> AsyncGenerator[MemoryEditor, None]:
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
async def memmachine_memory_client(
    config: MemMachineMemoryClientConfig,
    _builder: Builder,  # Required by `@register_memory` contract
) -> AsyncGenerator[MemoryEditor, None]:
🧰 Tools
🪛 Ruff (0.14.13)
50-50: Unused function argument: builder
(ARG001)
🤖 Prompt for AI Agents
In `@packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memory.py` around
lines 48 - 51, The function signature for memmachine_memory_client currently
defines an unused parameter named builder which ruff flags; silence it by
renaming builder to _builder (or alternatively append a per-argument lint
suppression like "# noqa: ARG001") in the memmachine_memory_client definition so
the parameter remains for the `@register_memory` contract but no lint error is
raised.
    from memmachine import MemMachineClient
except ImportError as e:
    raise ImportError(
        f"Could not import MemMachineClient from memmachine package. "
        f"Error: {e}. "
        "Please ensure memmachine package is installed: pip install memmachine. "
        "See https://github.com/MemMachine/MemMachine/blob/main/docs/examples/python.mdx "
        "for installation instructions."
🧩 Analysis chain
🏁 Script executed:
find . -path "*/packages/nvidia_nat_memmachine/pyproject.toml" -exec cat {} \;
Repository: NVIDIA/NeMo-Agent-Toolkit
Length of output: 1678
🌐 Web query:
PyPI memmachine package MemMachineClient module
💡 Result:
MemMachineClient is part of the MemMachine Python client SDK distributed on PyPI as memmachine-client (latest: 0.2.4, released Jan 16, 2026). [1]
Install
pip install memmachine-client
The MemMachine install guide also distinguishes client (memmachine-client) vs server (memmachine-server). [2]
Import / module path
The class is exposed from the top-level memmachine module:
from memmachine import MemMachineClient
client = MemMachineClient(base_url="http://localhost:8080")
This is shown in the official Python SDK example. [3]
What it is
MemMachineClient is the main Python client for interacting with the MemMachine HTTP API (handles auth/requests, project and memory operations). [4]
Sources: [1] [2] [3] [4]
🌐 Web query:
memmachine-server PyPI package
💡 Result:
memmachine-server is the MemMachine Server PyPI package (“the complete MemMachine memory system server with episodic and profile memory”). The latest release shown on PyPI is v0.2.1 (Dec 13, 2025). [1]
Install:
pip install memmachine-server
MemMachine’s docs also note you can choose memmachine-client vs memmachine-server, depending on whether you want the REST client or the server. [2]
Fix the dependency and install instructions—memmachine-client is required, not memmachine-server.
The package declares memmachine-server as a dependency in pyproject.toml, but the code imports MemMachineClient from the memmachine-client package. These are two separate PyPI packages with different purposes (client vs. server). Update the dependency to memmachine-client and correct the error message to match:
Required changes
In pyproject.toml:
- "memmachine-server~=0.2.1",
+ "memmachine-client~=0.2",In memory.py error message:
- "Please ensure memmachine package is installed: pip install memmachine. "
+ "Please ensure memmachine-client package is installed: pip install memmachine-client. "🤖 Prompt for AI Agents
In `@packages/nvidia_nat_memmachine/src/nat/plugins/memmachine/memory.py` around
lines 55 - 62, The project incorrectly requires the server package while code
imports the client: update the dependency in pyproject.toml from
memmachine-server to memmachine-client, and update the ImportError text in
memory.py (the block that catches ImportError for "from memmachine import
MemMachineClient") to instruct installing memmachine-client (e.g., "pip install
memmachine-client") and adjust any package name references/links so they point
to the client package documentation; keep the rest of the error message
structure (include the caught exception e) intact.
except Exception as e:
    print(f"\n✗ Error: {e}")
    logger.exception("Error during test execution")
    raise
Use logger.error(..., exc_info=True) when re-raising.
logger.exception() already emits a stack trace, and re-raising will duplicate it. Prefer logger.error(..., exc_info=True) when re-raising.
🔧 Proposed change
- logger.exception("Error during test execution")
+ logger.error("Error during test execution", exc_info=True)
raiseAs per coding guidelines.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
except Exception as e:
    print(f"\n✗ Error: {e}")
    logger.error("Error during test execution", exc_info=True)
    raise
🤖 Prompt for AI Agents
In `@packages/nvidia_nat_memmachine/tests/test_add_and_retrieve.py` around lines
199 - 202, Replace the logger.exception call in the except block that captures
"Exception as e" with logger.error(..., exc_info=True) so the error message is
logged with the traceback without duplicating stack traces on re-raise;
specifically, locate the except Exception as e block (where
logger.exception("Error during test execution") is called) and change it to log
the same message using logger.error("Error during test execution",
exc_info=True) before re-raising the exception.
try:
    # Try to import and use MemMachineClient to check server availability
    from memmachine import MemMachineClient

    client = MemMachineClient(base_url=base_url, timeout=5.0)
    client.health_check(timeout=5.0)
    return base_url
except ImportError:
    reason = "memmachine package not installed. Install with: pip install memmachine"
    if fail_missing:
        raise RuntimeError(reason) from None
    pytest.skip(reason=reason)
except Exception:
    reason = f"Unable to connect to MemMachine server at {base_url}. Please ensure the server is running."
    if fail_missing:
        raise RuntimeError(reason) from None
    pytest.skip(reason=reason)
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, locate and read the test file
find . -type f -name "test_memmachine_integration.py" | head -5
Repository: NVIDIA/NeMo-Agent-Toolkit
Length of output: 138
🏁 Script executed:
#!/bin/bash
# Check the pyproject.toml for memmachine package to understand dependencies
find packages/nvidia_nat_memmachine -type f -name "pyproject.toml" -o -name "setup.py" | head -5
Repository: NVIDIA/NeMo-Agent-Toolkit
Length of output: 114
🏁 Script executed:
#!/bin/bash
# Search for how MemMachineClient is used elsewhere in the codebase
rg "MemMachineClient" --type py -A 3 -B 3 | head -50Repository: NVIDIA/NeMo-Agent-Toolkit
Length of output: 4691
🏁 Script executed:
#!/bin/bash
# Read the pyproject.toml to see memmachine dependency
cat packages/nvidia_nat_memmachine/pyproject.toml
Repository: NVIDIA/NeMo-Agent-Toolkit
Length of output: 1678
🏁 Script executed:
#!/bin/bash
# Read the full test file to get context
cat -n packages/nvidia_nat_memmachine/tests/test_memmachine_integration.py
Repository: NVIDIA/NeMo-Agent-Toolkit
Length of output: 20977
🌐 Web query:
memmachine-server MemMachineClient health_check exceptions
💡 Result:
MemMachineClient.health_check() is documented to raise requests.RequestException when the health check fails (i.e., any underlying HTTP/network error from requests, including connection errors, timeouts, and non-2xx responses if the client calls raise_for_status()). [1]
On the server side, the REST health endpoint is GET /api/v2/health (example: http://localhost:8080/api/v2/health) and returns 200 {} when healthy. [2] The quickstart also mentions a legacy/alternate health endpoint at http://localhost:8080/health for verifying a local install. [3]
Practical handling pattern:
import requests
from memmachine import MemMachineClient
client = MemMachineClient(api_key=..., base_url="http://localhost:8080", timeout=5, max_retries=0)
try:
status = client.health_check(timeout=2) # returns a dict on success
except requests.RequestException as e:
# Typical causes: server not running, wrong base_url, TLS/proxy issues, timeout, 4xx/5xx
raise RuntimeError(f"MemMachine health_check failed: {e}") from eSources: [1] [2] [3]
🌐 Web query:
memmachine-server Python package GitHub
💡 Result:
The memmachine-server Python package’s source code is in the MemMachine/MemMachine GitHub repository: https://github.com/MemMachine/MemMachine [2]. The PyPI release provenance for memmachine-server also points to this same repo (tagged releases) [1].
References: [1] [2]
Avoid catching a blind Exception in the health check path.
Catching all exceptions can hide real errors in the client initialization/health check. The MemMachineClient.health_check() method raises requests.RequestException for connection failures, timeouts, and HTTP errors. Catch that exception specifically instead.
🐛 Proposed fix
+import requests
+
try:
# Try to import and use MemMachineClient to check server availability
from memmachine import MemMachineClient
client = MemMachineClient(base_url=base_url, timeout=5.0)
client.health_check(timeout=5.0)
return base_url
except ImportError:
reason = "memmachine package not installed. Install with: pip install memmachine"
if fail_missing:
raise RuntimeError(reason) from None
pytest.skip(reason=reason)
-except Exception:
+except requests.RequestException as exc:
reason = f"Unable to connect to MemMachine server at {base_url}. Please ensure the server is running."
if fail_missing:
raise RuntimeError(reason) from None
     pytest.skip(reason=reason)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
# SPDX-FileCopyrightText: Copyright (c) 2024-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
"""MemMachine integration tests.

These tests verify integration with the `MemMachine` vector database.

Prerequisites:
1. Start `MemMachine` server: `docker run -p 8080:8080 memmachineteam/memmachine:latest`
2. Set `MEMMACHINE_BASE_URL` environment variable (defaults to `http://localhost:8080`).
3. Run: `pytest tests/test_memmachine_integration.py -v`.
"""
import asyncio
import logging
import os

import pytest
import requests
from memmachine import MemMachineClient

logger = logging.getLogger(__name__)


def get_memmachine_base_url(fail_missing: bool = False) -> str:
    """Get and validate the MemMachine server URL.

    Args:
        fail_missing: If True, raise an error when server is unavailable.
            If False, skip the test instead.

    Returns:
        The validated base URL for the MemMachine server.

    Raises:
        RuntimeError: If fail_missing is True and the server is unavailable.
    """
    base_url = os.getenv("MEMMACHINE_BASE_URL", "http://localhost:8080")
    try:
        # Try to import and use MemMachineClient to check server availability
        from memmachine import MemMachineClient

        client = MemMachineClient(base_url=base_url, timeout=5.0)
        client.health_check(timeout=5.0)
        return base_url
    except ImportError:
        reason = "memmachine package not installed. Install with: pip install memmachine"
        if fail_missing:
            raise RuntimeError(reason) from None
        pytest.skip(reason=reason)
    except requests.RequestException:
        reason = f"Unable to connect to MemMachine server at {base_url}. Please ensure the server is running."
        if fail_missing:
            raise RuntimeError(reason) from None
        pytest.skip(reason=reason)
🧰 Tools
🪛 Ruff (0.14.13)
55-55: Consider moving this statement to an else block
(TRY300)
61-61: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
In `@packages/nvidia_nat_memmachine/tests/test_memmachine_integration.py` around
lines 49 - 65, Change the broad except Exception in the MemMachineClient
health-check block to catch the specific requests exception type raised by the
client (e.g., requests.exceptions.RequestException or requests.RequestException)
so real errors aren't swallowed; import the exception from requests at top if
needed and keep the same behavior of raising RuntimeError when fail_missing is
true or calling pytest.skip otherwise while still preserving the existing
ImportError handling for the memmachine import and the use of
MemMachineClient.health_check().
| @pytest.fixture(name="test_config") | ||
| def test_config_fixture(memmachine_base_url: str): | ||
| """Create a test configuration.""" | ||
| # Use unique org/project IDs for each test run to avoid conflicts | ||
| test_id = str(uuid.uuid4())[:8] | ||
| return MemMachineMemoryClientConfig( | ||
| base_url=memmachine_base_url, | ||
| org_id=f"test_org_{test_id}", | ||
| project_id=f"test_project_{test_id}", | ||
| timeout=30, | ||
| max_retries=3 | ||
| ) | ||
|
|
||
|
|
||
| @pytest.fixture(name="test_user_id") | ||
| def test_user_id_fixture(): | ||
| """Generate a unique user ID for testing.""" | ||
| return f"test_user_{uuid.uuid4().hex[:8]}" |
Add return type hints to fixtures.
These fixtures return concrete types; annotate them to match the typing policy. As per coding guidelines, please add return type hints where the return value is not None.
✍️ Proposed fix
-def test_config_fixture(memmachine_base_url: str):
+def test_config_fixture(memmachine_base_url: str) -> MemMachineMemoryClientConfig:
@@
-def test_user_id_fixture():
+def test_user_id_fixture() -> str:
🤖 Prompt for AI Agents
In `@packages/nvidia_nat_memmachine/tests/test_memmachine_integration.py` around
lines 68 - 85, The two pytest fixture functions lack return type annotations;
update test_config_fixture to declare a return type of
MemMachineMemoryClientConfig (i.e., def test_config_fixture(...) ->
MemMachineMemoryClientConfig:) and update test_user_id_fixture to declare a
return type of str (def test_user_id_fixture(...) -> str:); ensure the
MemMachineMemoryClientConfig symbol is imported or referenced correctly in the
test module if not already.
@pytest.mark.integration
@pytest.mark.slow
async def test_add_and_retrieve_conversation_memory(
    test_config: MemMachineMemoryClientConfig,
    test_user_id: str
):
    """Test adding a conversation memory and retrieving it."""
    general_config = GeneralConfig()
    async with WorkflowBuilder(general_config=general_config) as builder:
        await builder.add_memory_client("memmachine_memory", test_config)
        memory_client = await builder.get_memory_client("memmachine_memory")

        # Create a test conversation memory
        conversation = [
            {"role": "user", "content": "I love pizza and Italian food."},
            {"role": "assistant", "content": "I'll remember that you love pizza and Italian food."},
        ]

        memory_item = MemoryItem(
            conversation=conversation,
            user_id=test_user_id,
            memory="User loves pizza",
            metadata={
                "session_id": "test_session_1",
                "agent_id": "test_agent_1",
                "test_id": "conversation_test"
            },
            tags=["food", "preference"]
        )

        # Add the memory
        await memory_client.add_items([memory_item])

        # Wait a moment for indexing (if needed)
        import asyncio
        await asyncio.sleep(1)

        # Retrieve the memory
        retrieved_memories = await memory_client.search(
            query="pizza Italian food",
            top_k=10,
            user_id=test_user_id,
            session_id="test_session_1",
            agent_id="test_agent_1"
        )

        # Verify we got results
        assert len(retrieved_memories) > 0, "Should retrieve at least one memory"

        # Check that our memory is in the results
        # Note: MemMachine may store conversation messages separately or process them,
        # so we check for the content/keywords rather than exact conversation structure
        found = False
        for mem in retrieved_memories:
            # Check if this is our memory by looking for the test_id in metadata
            if mem.metadata.get("test_id") == "conversation_test":
                found = True
                # MemMachine may return individual messages, not full conversations
                # So we check that the content is present (either in conversation or memory field)
                content = mem.memory or (str(mem.conversation) if mem.conversation else "")
                assert "pizza" in content.lower() or "italian" in content.lower(), \
                    f"Should contain pizza/italian content. Got: {content}"
                # Verify tags
                assert "food" in mem.tags or "preference" in mem.tags, \
                    f"Should have tags. Got: {mem.tags}"
                break

        assert found, f"Should find the memory we just added. Found {len(retrieved_memories)} memories with metadata: {[m.metadata.get('test_id') for m in retrieved_memories]}"
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check for pytest-asyncio configuration in pyproject.toml files
echo "=== Checking pyproject.toml files for pytest-asyncio config ==="
fd -e pyproject.toml | while read f; do
if grep -q "pytest-asyncio\|asyncio_mode" "$f" 2>/dev/null; then
echo "Found in: $f"
grep -A 5 -B 5 "pytest-asyncio\|asyncio_mode" "$f"
fi
done
echo -e "\n=== Checking for pytest.ini or setup.cfg ==="
fd -e pytest.ini -o -e setup.cfg | while read f; do
echo "Found: $f"
cat "$f"
done
echo -e "\n=== Checking conftest.py files for pytest-asyncio configuration ==="
fd -e conftest.py | while read f; do
echo "=== $f ==="
cat -n "$f"
done
echo -e "\n=== Checking for `@pytest.mark.asyncio` in test files ==="
rg -n "@pytest.mark.asyncio" -g 'test_*.py'Repository: NVIDIA/NeMo-Agent-Toolkit
Length of output: 19817
🏁 Script executed:
#!/bin/bash
# Check root pyproject.toml for pytest configuration
echo "=== Checking root pyproject.toml for pytest-asyncio config ==="
if [ -f "pyproject.toml" ]; then
head -100 pyproject.toml
echo ""
rg -A 10 "\[tool.pytest" pyproject.toml
fi
echo -e "\n=== Checking packages/nvidia_nat_memmachine/pyproject.toml ==="
if [ -f "packages/nvidia_nat_memmachine/pyproject.toml" ]; then
cat packages/nvidia_nat_memmachine/pyproject.toml
fi
Repository: NVIDIA/NeMo-Agent-Toolkit
Length of output: 5814
Add @pytest.mark.asyncio decorator to async test function.
The test function is async but is missing the required @pytest.mark.asyncio marker. The codebase uses pytest-asyncio for all async tests and does not have asyncio_mode=auto configured, so the marker is required. Add @pytest.mark.asyncio above the function definition (after @pytest.mark.slow).
🤖 Prompt for AI Agents
In `@packages/nvidia_nat_memmachine/tests/test_memmachine_integration.py` around
lines 88 - 156, The async test function
test_add_and_retrieve_conversation_memory is missing the pytest-asyncio marker;
add `@pytest.mark.asyncio` above the function (place it directly after
`@pytest.mark.slow`) so pytest-asyncio runs the async test correctly.
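For clarity, the resulting decorator stack would look roughly like this; the other markers and the signature already exist in the PR, and the body is elided:

import pytest


@pytest.mark.integration
@pytest.mark.slow
@pytest.mark.asyncio  # needed because the repo does not configure asyncio_mode=auto
async def test_add_and_retrieve_conversation_memory(test_config, test_user_id):
    ...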
Description
Add MemMachine as a memory plugin within NAT and also add a notebook example to demonstrate how to use MemMachine in NAT
By Submitting this PR I confirm:
Summary by CodeRabbit
New Features
Documentation