
Conversation


@cpdata cpdata commented Oct 15, 2025

Summary

  • add a provider-agnostic meshmind/llm_client.py with configuration helpers so extraction, embeddings, and rerank flows share a single OpenAI-compatible wrapper
  • refactor the MeshMind client, pipeline extraction, embedding encoder, and CLI ingest command to honour new LLM defaults and --llm-* overrides
  • document the LLM_* environment variables, update planning/backlog artifacts, and add targeted tests plus a cleanup plan for post-restriction shims
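As a rough illustration of the first bullet, an environment-driven, provider-agnostic configuration helper might look like the sketch below. The variable names (`LLM_API_KEY`, `LLM_BASE_URL`, etc.) and the `LLMSettings`/`settings_from_env` names are illustrative assumptions, not the PR's actual API:

```python
# Hedged sketch: resolving LLM_* environment variables into one shared
# settings object for extraction, embeddings, and rerank flows.
import os
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class LLMSettings:
    api_key: Optional[str]
    base_url: Optional[str]          # any OpenAI-compatible endpoint
    extraction_model: str
    embedding_model: str


def settings_from_env(env=os.environ) -> LLMSettings:
    # Fall back to defaults when an LLM_* variable is unset; model names
    # here are placeholders, not the repo's configured defaults.
    return LLMSettings(
        api_key=env.get("LLM_API_KEY"),
        base_url=env.get("LLM_BASE_URL"),
        extraction_model=env.get("LLM_EXTRACTION_MODEL", "gpt-4o-mini"),
        embedding_model=env.get("LLM_EMBEDDING_MODEL", "text-embedding-3-small"),
    )


cfg = settings_from_env({"LLM_API_KEY": "sk-test", "LLM_EXTRACTION_MODEL": "llama-3.1"})
```

CLI `--llm-*` flags could then layer per-invocation overrides on top of whatever `settings_from_env` resolved.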

Testing

  • pytest

https://chatgpt.com/codex/tasks/task_b_68ee1c9ee92c83218cbbebce8b0667b8


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines +189 to +203
```python
def with_overrides(
    self,
    *,
    models: Optional[Dict[str, Optional[str]]] = None,
    base_urls: Optional[Dict[str, Optional[str]]] = None,
    api_key: Optional[str] = None,
) -> "LLMClient":
    """Create a new client that applies the provided overrides."""

    return LLMClient(
        self.config.override(
            models=models, base_urls=base_urls, api_key=api_key
        ),
        client_kwargs=self._default_kwargs,
    )
```


**P1** `with_overrides` ignores new API keys and base URLs

When creating an override client, the method reuses `self._default_kwargs` from the original client. Those kwargs already contain the previously resolved `api_key` and `base_url`, and the constructor uses `setdefault`, so the values from the new `LLMConfig` are never applied. Calling `client.with_overrides(api_key="new")` or changing `base_urls` still produces a client authenticated against the original endpoint. This makes credential and endpoint overrides impossible and will surprise callers who rely on per-operation credentials.
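One way to address this, sketched below with simplified stand-ins for `LLMConfig`/`LLMClient` (not the PR's real classes): keep the caller-supplied extras separate from the resolved credentials, so that re-resolving from the new config lets its values win.

```python
# Minimal sketch of the suggested fix: resolve api_key/base_url from the
# config at construction time instead of baking them into shared kwargs.
from typing import Dict, Optional


class LLMConfig:
    def __init__(self, api_key: str, base_url: str):
        self.api_key = api_key
        self.base_url = base_url

    def override(self, *, api_key: Optional[str] = None,
                 base_url: Optional[str] = None) -> "LLMConfig":
        return LLMConfig(api_key or self.api_key, base_url or self.base_url)


class LLMClient:
    def __init__(self, config: LLMConfig,
                 client_kwargs: Optional[Dict] = None):
        self.config = config
        # Keep only caller-supplied extras; never cache resolved credentials.
        self._extra_kwargs = dict(client_kwargs or {})
        self._extra_kwargs.pop("api_key", None)
        self._extra_kwargs.pop("base_url", None)
        kwargs = dict(self._extra_kwargs)
        kwargs.setdefault("api_key", config.api_key)
        kwargs.setdefault("base_url", config.base_url)
        self._resolved_kwargs = kwargs

    def with_overrides(self, *, api_key: Optional[str] = None,
                       base_url: Optional[str] = None) -> "LLMClient":
        # Pass the *unresolved* extras so the new config's values apply.
        return LLMClient(
            self.config.override(api_key=api_key, base_url=base_url),
            client_kwargs=self._extra_kwargs,
        )


client = LLMClient(LLMConfig("old-key", "https://old.example"))
override = client.with_overrides(api_key="new-key")
# override now authenticates with "new-key" while keeping the old base URL
```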


Comment on lines +37 to +50
```python
self.embedding_model = config.model_for("embedding", fallback=settings.EMBEDDING_MODEL)

self._graph_driver: Optional[GraphDriver] = graph_driver
self._graph_driver_factory = graph_driver_factory
if self._graph_driver is None and self._graph_driver_factory is None:
    self._graph_driver_factory = make_graph_driver_factory()

self._memory_manager: Optional[MemoryManager] = (
    MemoryManager(self._graph_driver) if self._graph_driver else None
)
self.entity_registry = EntityRegistry
self.predicate_registry = PredicateRegistry
bootstrap_entities([Memory])
bootstrap_encoders()
```


**P1** Embedding-model overrides lack registered encoder

The CLI and MeshMind constructor allow overriding the embedding model via `llm_config` or `--embedding-model`, but `MeshMind.__init__` still calls `bootstrap_encoders()` with no arguments, which registers only `settings.EMBEDDING_MODEL`. The overridden model name in `self.embedding_model` is never added to `EncoderRegistry`, so `extract_memories` later raises `KeyError` for the new model. Passing the active embedding model to `bootstrap_encoders` (or registering it explicitly) is required for the advertised override to work.
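The shape of the suggested fix could look like the sketch below, where `EncoderRegistry` and `bootstrap_encoders` are simplified stand-ins for the repo's helpers and the placeholder encoder just stands in for a real embeddings call:

```python
# Hedged sketch: register the *active* embedding model (default or
# overridden) so later registry lookups don't raise KeyError.
from typing import Callable, Dict, List

DEFAULT_EMBEDDING_MODEL = "text-embedding-3-small"  # placeholder default


class EncoderRegistry:
    _encoders: Dict[str, Callable[[str], List[float]]] = {}

    @classmethod
    def register(cls, model: str, encoder: Callable[[str], List[float]]) -> None:
        cls._encoders[model] = encoder

    @classmethod
    def get(cls, model: str) -> Callable[[str], List[float]]:
        return cls._encoders[model]  # KeyError for unregistered models


def bootstrap_encoders(active_model: str = DEFAULT_EMBEDDING_MODEL) -> None:
    # Register both the default and the (possibly overridden) active model.
    for model in {DEFAULT_EMBEDDING_MODEL, active_model}:
        # Dummy encoder; a real one would call the embeddings API for `model`.
        EncoderRegistry.register(model, lambda text: [float(len(text))])


# In MeshMind.__init__, pass the resolved model instead of calling bare:
bootstrap_encoders(active_model="my-custom-embedder")
encoder = EncoderRegistry.get("my-custom-embedder")  # no KeyError now
```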

