Merged
7 changes: 7 additions & 0 deletions .dockerignore
@@ -117,3 +117,10 @@ deploy/terraform/
*.tfstate
*.tfstate.*
.terraform/

gcp_credentials.json
READMEP.md
Untitled-1.excalidraw
.favorites.json
.env
.env*
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,5 +1,5 @@
BRAINAPI2 ENGINE LICENSE
Version 1.0 — © 2025 Lumen Platforms Inc.
Version 1.0 — © 2026 Lumen Platforms Inc.

This software ("BrainAPI Engine" or "the Software") is provided under a combined
license consisting of:
16 changes: 11 additions & 5 deletions Makefile
@@ -35,6 +35,12 @@ stop-neo4j:
delete-neo4j-volumes:
docker compose -f src/lib/neo4j/docker-compose.yaml down -v --remove-orphans

build-neo4j-extension:
docker run --rm -v $(PWD)/src/lib/neo4j:/app -w /app maven:3.9-eclipse-temurin-17 mvn compile

package-neo4j-extension:
docker run --rm -v $(PWD)/src/lib/neo4j:/app -w /app maven:3.9-eclipse-temurin-17 mvn package -DskipTests

start-mongo:
docker compose -f src/lib/mongo/docker-compose.yaml up -d

@@ -50,7 +56,7 @@ start-api:
stop-api:
pkill -f uvicorn

DEBUG_ENVS := LANGCHAIN_DEBUG="true" LANGCHAIN_VERBOSE="true" DEBUG="true" ENV="development"
DEBUG_ENVS := LANGCHAIN_DEBUG="true" LANGCHAIN_VERBOSE="true" DEBUG="true"

start-all:
@if [ "$(filter debug,$(MAKECMDGOALS))" = "debug" ] || [ "$$DEBUG" = "true" ]; then \
@@ -60,16 +66,16 @@ start-all:
$(MAKE) start-redis DEBUG=true & \
$(MAKE) start-neo4j DEBUG=true & \
$(MAKE) start-mongo DEBUG=true & \
$(MAKE) start-api DEBUG=true & \
bash -c "export $(DEBUG_ENVS) && poetry run celery -A src.workers.app worker --loglevel=info --pool=threads --concurrency=10"; \
ENV="development" $(MAKE) start-api DEBUG=true & \
bash -c "export $(DEBUG_ENVS) ENV="development" && poetry run celery -A src.workers.app worker --loglevel=info --pool=threads --concurrency=10"; \
Comment on lines +69 to +70
Contributor

⚠️ Potential issue | 🟠 Major

Fix broken quoting in the debug celery command.

Line 70 currently terminates the bash -c string early at ENV="development", so the command will fail in debug mode.

🔧 Proposed fix
-		bash -c "export $(DEBUG_ENVS) ENV="development" && poetry run celery -A src.workers.app worker --loglevel=info --pool=threads --concurrency=10"; \
+		bash -c 'export $(DEBUG_ENVS) ENV="development" && poetry run celery -A src.workers.app worker --loglevel=info --pool=threads --concurrency=10'; \
🤖 Prompt for AI Agents
In `@Makefile` around lines 69 - 70, The bash -c argument is broken by unescaped
double quotes around ENV="development"; update the Makefile's debug celery
command so the entire command passed to bash -c is quoted properly (for example
wrap the whole command in single quotes) so that export $(DEBUG_ENVS)
ENV="development" && poetry run celery -A src.workers.app worker --loglevel=info
--pool=threads --concurrency=10 is treated as one argument; ensure you keep the
DEBUG_ENVS and ENV assignment intact and that the bash -c invocation (the string
after bash -c) no longer terminates early.
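The proposed single-quote fix can be sanity-checked outside Make (a minimal sketch; the outer `sh -c` stands in for Make's recipe shell, and the `echo` replaces the celery invocation):

```python
import subprocess

# Single-quoting the -c string, as the proposed fix does, keeps the inner
# double quotes intact for bash instead of letting the recipe shell pair
# them up with the outer quotes.
cmd = "bash -c 'export ENV=\"development\" && echo \"$ENV\"'"
out = subprocess.run(["sh", "-c", cmd], capture_output=True, text=True)
print(out.stdout.strip())  # development
```

With the whole `-c` argument in single quotes, every assignment and the `&&` chain reach bash as one string, exactly as the review suggests.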

else \
$(MAKE) start-milvus & \
$(MAKE) start-rabbitmq & \
$(MAKE) start-redis & \
$(MAKE) start-neo4j & \
$(MAKE) start-mongo & \
$(MAKE) start-api & \
poetry run celery -A src.workers.app worker --loglevel=info --pool=threads --concurrency=10; \
ENV="development" $(MAKE) start-api & \
ENV="development" poetry run celery -A src.workers.app worker --loglevel=info --pool=threads --concurrency=10; \
fi

debug:
2 changes: 1 addition & 1 deletion README.md
@@ -1,5 +1,5 @@
<p align="center">
<img src="https://img.shields.io/badge/version-2.1.4--dev-blue?style=for-the-badge" alt="Version"/>
<img src="https://img.shields.io/badge/version-2.3.0--dev-blue?style=for-the-badge" alt="Version"/>
<img src="https://img.shields.io/badge/python-3.11+-green?style=for-the-badge&logo=python&logoColor=white" alt="Python"/>
<img src="https://img.shields.io/badge/license-AGPLv3%20%2B%20Commons%20Clause-purple?style=for-the-badge" alt="License"/>
</p>
5 changes: 5 additions & 0 deletions plugins/readme.txt
@@ -0,0 +1,5 @@
Plugins can be used to extend the functionality of the BrainAPI.

To add a plugin, place it in the `plugins` directory and make sure the `brainapi.config.yaml` file is placed at the root of the `plugins` directory.

If you are using a remote plugin, add the `<plugin_name>.config.yaml` file directly to the `plugins` directory.
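A possible layout, inferred from the description above (the plugin names are made up for illustration):

```
plugins/
├── brainapi.config.yaml             # shared config, at the root of plugins/
├── my_local_plugin/                 # a local plugin, dropped into plugins/
│   └── ...
└── my_remote_plugin.config.yaml     # a remote plugin: only its config file
```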
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "brainapi2"
version = "2.1.4-dev"
version = "2.3.0-dev"
description = "Version 2.x.x of the BrainAPI memory layer."
authors = [
{name = "Christian",email = "alch.infoemail@gmail.com"}
Expand Down
63 changes: 42 additions & 21 deletions src/adapters/data.py
@@ -3,8 +3,8 @@
Created Date: Sunday October 19th 2025
Author: Christian Nonis <alch.infoemail@gmail.com>
-----
Last Modified: Saturday December 13th 2025
Modified By: the developer formerly known as Christian Nonis at <alch.infoemail@gmail.com>
Last Modified: Monday January 12th 2026 8:26:26 pm
Modified By: Christian Nonis <alch.infoemail@gmail.com>
-----
"""

@@ -157,62 +157,83 @@ def get_observations_list(

def get_observation_labels(self, brain_id: str = "default") -> list[str]:
"""
Retrieve all unique observation labels for the specified brain.
Get all unique observation labels for the specified brain.

Parameters:
brain_id (str): Identifier of the brain to query labels from.
brain_id (str): Identifier of the brain to query.

Returns:
list[str]: All unique labels present in observations for the specified brain.
list[str]: A list of unique observation label strings for the specified brain.
"""
return self.data.get_observation_labels(brain_id=brain_id)

def get_changelog_by_id(
self, id: str, brain_id: str = "default"
) -> KGChanges:

def get_changelog_by_id(self, id: str, brain_id: str = "default") -> KGChanges:
"""
Retrieve a changelog entry by its identifier.

Parameters:
id (str): Identifier of the changelog entry to retrieve.
brain_id (str): Brain namespace key to query; defaults to "default".

Returns:
KGChanges: The changelog entry matching the given `id`.
"""
return self.data.get_changelog_by_id(id=id, brain_id=brain_id)

def get_changelogs_list(
self,
brain_id: str = "default",
limit: int = 10,
skip: int = 0,
self,
brain_id: str = "default",
limit: int = 10,
skip: int = 0,
types: list[str] = None,
query_text: str = None
query_text: str = None,
) -> list[KGChanges]:
"""
Retrieve a paginated list of knowledge-graph changelogs for a brain.

Parameters:
brain_id (str): Identifier of the brain to query.
limit (int): Maximum number of changelogs to return.
skip (int): Number of changelogs to skip (offset).
types (list[str] | None): If provided, restrict results to these changelog types.
query_text (str | None): If provided, filter changelogs by matching text.

Returns:
list[KGChanges]: Changelogs matching the filters and pagination parameters.
"""
return self.data.get_changelogs_list(brain_id=brain_id, limit=limit, skip=skip, types=types, query_text=query_text)
return self.data.get_changelogs_list(
brain_id=brain_id,
limit=limit,
skip=skip,
types=types,
query_text=query_text,
)
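The filter-and-page semantics documented above can be sketched in isolation (the filter-before-paginate order and the sample data are assumptions for illustration, not the adapter's actual implementation):

```python
def list_changelogs(items, limit=10, skip=0, types=None, query_text=None):
    """Reference sketch: apply optional filters first, then skip/limit paging."""
    if types is not None:
        items = [i for i in items if i["type"] in types]
    if query_text is not None:
        items = [i for i in items if query_text in i["text"]]
    return items[skip : skip + limit]

logs = [
    {"id": n, "type": "merge" if n % 2 else "add", "text": f"change {n}"}
    for n in range(6)
]
# "merge" entries are ids 1, 3, 5; skip the first, keep at most two.
page = list_changelogs(logs, limit=2, skip=1, types=["merge"])
print([entry["id"] for entry in page])  # [3, 5]
```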

def get_changelog_types(self, brain_id: str = "default") -> list[str]:
"""
Retrieve distinct changelog types for a brain.

Parameters:
brain_id (str): Identifier of the brain to query; defaults to "default".

Returns:
list[str]: List of changelog type names.
"""
return self.data.get_changelog_types(brain_id=brain_id)

def update_structured_data(
self, structured_data: StructuredData, brain_id: str = "default"
) -> StructuredData:
"""
Update an existing structured data entry.

Parameters:
structured_data (StructuredData): The structured data object with updated information.
brain_id (str): Identifier of the brain context for the update.

Returns:
StructuredData: The updated structured data object.
"""
return self.data.update_structured_data(
structured_data=structured_data, brain_id=brain_id
)
71 changes: 64 additions & 7 deletions src/adapters/embeddings.py
@@ -3,14 +3,21 @@
Created Date: Sunday October 19th 2025
Author: Christian Nonis <alch.infoemail@gmail.com>
-----
Last Modified: Sunday October 19th 2025 9:00:59 am
Modified By: the developer formerly known as Christian Nonis at <alch.infoemail@gmail.com>
Last Modified: Thursday January 29th 2026 8:43:59 pm
Modified By: Christian Nonis <alch.infoemail@gmail.com>
-----
"""

import uuid
from tenacity import (
retry,
stop_after_attempt,
wait_exponential,
retry_if_exception_type,
)
from src.adapters.interfaces.embeddings import EmbeddingsClient, VectorStoreClient
from src.constants.embeddings import Vector
from src.lib.embeddings.client import EmbeddingError


class EmbeddingsAdapter:
@@ -23,19 +30,49 @@ def add_client(self, client: EmbeddingsClient) -> None:
"""
self.embeddings = client

@retry(
stop=stop_after_attempt(5),
wait=wait_exponential(multiplier=1, min=2, max=30),
retry=retry_if_exception_type(EmbeddingError),
reraise=True,  # re-raise EmbeddingError itself, not tenacity's RetryError, so callers can catch it
)
def _embed_text_with_retry(self, text: str) -> list[float]:
return self.embeddings.embed_text(text)

@retry(
stop=stop_after_attempt(5),
wait=wait_exponential(multiplier=1, min=2, max=30),
retry=retry_if_exception_type(EmbeddingError),
reraise=True,  # re-raise EmbeddingError itself, not tenacity's RetryError, so callers can catch it
)
def _embed_texts_with_retry(self, texts: list[str]) -> list[list[float]]:
return self.embeddings.embed_texts(texts)

def embed_text(self, text: str) -> Vector:
"""
Embed a text and return a vector.
"""

try:
embeddings = self.embeddings.embed_text(text)
embeddings = self._embed_text_with_retry(text)
return Vector(id=str(uuid.uuid4()), embeddings=embeddings, metadata={})
except EmbeddingError as e:
print(f"Embedding failed in adapter, returning empty vector: {e}")
return Vector(id=str(uuid.uuid4()), embeddings=[], metadata={})

def embed_texts(self, texts: list[str]) -> list[Vector]:
"""
Embed a list of texts and return a list of vectors.
"""
try:
embeddings_list = self._embed_texts_with_retry(texts)
return [
Vector(id=str(uuid.uuid4()), embeddings=embeddings, metadata={})
for embeddings in embeddings_list
]
except EmbeddingError as e:
print(f"Embedding failed in adapter, returning empty vectors: {e}")
return [
Vector(id=str(uuid.uuid4()), embeddings=[], metadata={}) for _ in texts
]


class VectorStoreAdapter:
def __init__(self):
@@ -83,14 +120,34 @@ def search_similar_by_ids(
store: str,
min_similarity: float,
limit: int = 10,
) -> list[Vector]:
) -> dict[str, list[Vector]]:
"""
Search similar vectors by their IDs.
Finds vectors similar to the provided vector IDs within a specified store and brain.

Parameters:
vector_ids (list[str]): IDs of the source vectors to find similarities for.
brain_id (str): Identifier of the brain/namespace to search within.
store (str): Name of the vector store to query.
min_similarity (float): Minimum similarity threshold for returned vectors (inclusive).
limit (int): Maximum number of similar vectors to return per source ID.

Returns:
dict[str, list[Vector]]: Mapping from each source vector ID to a list of similar Vectors.
Each list contains at most `limit` items, includes only vectors with similarity >= `min_similarity`,
and is ordered by descending similarity.
"""
return self.vector_store.search_similar_by_ids(
vector_ids, brain_id, store, min_similarity, limit
)
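The return contract in the docstring (inclusive threshold, per-ID cap, descending similarity) can be expressed as a small reference filter (the candidate data here is invented for illustration):

```python
def pick_similar(candidates, min_similarity, limit=10):
    """Keep candidates with similarity >= threshold, best first, at most `limit`."""
    kept = [c for c in candidates if c["similarity"] >= min_similarity]
    kept.sort(key=lambda c: c["similarity"], reverse=True)
    return kept[:limit]

raw = {
    "vec-a": [
        {"id": "n1", "similarity": 0.40},
        {"id": "n2", "similarity": 0.91},
        {"id": "n3", "similarity": 0.75},
    ],
}
# One result list per source vector ID, mirroring the dict[str, list[Vector]] shape.
result = {src: pick_similar(c, min_similarity=0.5, limit=2) for src, c in raw.items()}
print([c["id"] for c in result["vec-a"]])  # ['n2', 'n3']
```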

def remove_vectors(
self, ids: list[str], store: str, brain_id: str = "default"
) -> None:
"""
Remove vectors from the vector store.
"""
return self.vector_store.remove_vectors(ids, store, brain_id)


_embeddings_adapter = EmbeddingsAdapter()
_vector_store_adapter = VectorStoreAdapter()